Six distinct perspectives attack your code simultaneously, each with a different standard of success. Disagreements between them are signal, not noise.
Not a checklist: genuinely distinct perspectives that will sometimes disagree with each other.
"Would you be comfortable being woken up at 3am to deal with this in production?"
One of the most useful single-question heuristics in engineering. Almost nobody codifies it. The Good Friend applies it ruthlessly.
The question cuts across engineering, architecture, and operational reality.
```bash
# Review a file
advocate review src/main.py

# Review a whole project
advocate review ./my-project/

# Just the hardest personas
advocate review src/auth.py -p red_team -p good_friend

# From stdin
echo "We plan to store tokens in localStorage" | advocate review --stdin

# HTML report
advocate review src/ --html report.html

# Different provider
advocate review src/ --provider openai --model gpt-4o
```
All six personas run simultaneously. Full review in ~30 seconds. Use `--sequential` to save costs.
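The flags compose, so a cheaper pass is one flag change away. A minimal sketch combining single-persona selection (`-p`, shown in the examples above) with `--sequential`:

```bash
# Cost-conscious run: a single persona, executed sequentially
# rather than in parallel. Both flags appear in the examples above.
advocate review src/main.py -p good_friend --sequential
```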
When personas disagree, the tension itself is the finding. Sage says "simplify" while SME says "necessary"? That's worth examining.
Works with Claude, OpenAI, or Gemini, with transmogrifier integration for prompt optimization across models.
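Switching providers is a flag change per run. A hedged example targeting Gemini: the provider key `gemini` is assumed (only `openai` appears in the examples above), and the model name is illustrative.

```bash
# Assumed provider key and illustrative model name; only
# `--provider openai --model gpt-4o` is confirmed above.
advocate review src/ --provider gemini --model gemini-1.5-pro
```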
Advocate reviewed itself. Found 30 issues. Fixed them. Re-reviewed. The Good Friend returned zero findings.