Advocate

Six distinct perspectives attack your code simultaneously, each with a different standard of success. Disagreements between them are signal, not noise.

```bash
# Review any code in 30 seconds
pip install 'advocate[anthropic]'
advocate review ./src/
```

The Six Personas

Not a checklist: six genuinely distinct perspectives that will sometimes disagree with each other.

Red Team

It's vulnerable; harden.
Success: the thing survives assault.
Security, injection, exploitation, data corruption, race conditions

Adversarial

It's wrong; defend.
Success: the argument holds under direct challenge.
Wrong assumptions, edge cases, failure modes, backward compatibility

Sage

It's complicated; simplify.
Success: a smart person can explain it simply.
Design, concept, blast radius

User

It's unintuitive; clarify.
Success: someone unfamiliar can navigate it without a guide.
Design, concept, edge cases

Subject Matter Expert

Peer-review.
Success: a peer would sign off on it.
Wrong assumptions, backward compatibility, design, concept

Good Friend

The harsh truth you need to hear.
Success: you'd rather know now than later.
Financial risk, the 3am test, blast radius, failure modes

The 3am Test

"Would you be comfortable being woken up at 3am to deal with this in production?"

One of the most useful single-question heuristics in engineering. Almost nobody codifies it. The Good Friend applies it ruthlessly.

14 Dimensions

Engineering, architecture, and operational reality.

Security: Auth, authz, privilege escalation
Injection: SQL, XSS, command, template
Exploitation: Race conditions, TOCTOU, replay
Data Corruption: Truncation, encoding, integrity
Edge Cases: Boundaries, empty, Unicode, time
Race Conditions: Concurrency, inconsistent state
Failure Modes: Dependencies, network, disk
Wrong Assumptions: Implicit, unvalidated beliefs
Backward Compat: Breaking users, data, APIs
Blast Radius: How much breaks if this fails?
Financial Risk: Hidden costs, lock-in, scaling
The 3am Test: Pager fatigue, human cost
Design: Over-engineering, wrong patterns
Concept: Is the approach fundamentally sound?

Usage

```bash
# Review a file
advocate review src/main.py

# Review a whole project
advocate review ./my-project/

# Just the hardest personas
advocate review src/auth.py -p red_team -p good_friend

# From stdin
echo "We plan to store tokens in localStorage" | advocate review --stdin

# HTML report
advocate review src/ --html report.html

# Different provider
advocate review src/ --provider openai --model gpt-4o
```

Built for Real Work

Parallel by Default

All six personas run simultaneously. Full review in ~30 seconds. Use --sequential to save costs.
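The fan-out pattern behind this is simple: one review call per persona, all in flight at once, so total latency is roughly the slowest single call rather than the sum of six. A minimal sketch of that pattern (the `review_with_persona` function is a hypothetical stand-in, not Advocate's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

PERSONAS = ["red_team", "adversarial", "sage", "user", "sme", "good_friend"]

def review_with_persona(persona: str, code: str) -> dict:
    # Placeholder: a real implementation would call an LLM provider here.
    return {"persona": persona, "findings": []}

def parallel_review(code: str) -> list[dict]:
    # Run all six personas concurrently; gather results in submission order.
    with ThreadPoolExecutor(max_workers=len(PERSONAS)) as pool:
        futures = [pool.submit(review_with_persona, p, code) for p in PERSONAS]
        return [f.result() for f in futures]
```

Running the personas sequentially instead (one call after another) is the cost-saving trade-off that `--sequential` makes: same findings, roughly six times the wall-clock time.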

Disagreement Detection

When personas disagree, the tension itself is the finding. Sage says "simplify" while SME says "necessary"? That's worth examining.
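Conceptually, detecting a tension is just comparing verdicts per dimension across personas. The sketch below is illustrative only; the verdict labels and data shapes are assumptions, not Advocate's actual output format:

```python
def find_disagreements(findings: dict[str, dict[str, str]]) -> list[str]:
    """findings maps persona -> {dimension: verdict}; return tension summaries."""
    tensions = []
    dimensions = {d for verdicts in findings.values() for d in verdicts}
    for dim in sorted(dimensions):
        verdicts = {p: v[dim] for p, v in findings.items() if dim in v}
        if len(set(verdicts.values())) > 1:  # personas reached different verdicts
            tensions.append(f"{dim}: " + ", ".join(
                f"{p} says {v!r}" for p, v in sorted(verdicts.items())))
    return tensions

# Sage and the SME split on design; that split surfaces as a finding.
example = {
    "sage": {"design": "simplify"},
    "sme": {"design": "necessary"},
    "red_team": {"security": "harden"},
}
```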

Multi-Provider

Claude, OpenAI, or Gemini. With transmogrifier integration for prompt optimization across models.

Self-Reviewed

Advocate reviewed itself, found 30 issues, and fixed them. On re-review, Good Friend returned zero findings.