What if every task you throw at your AI came back reviewed by a full team — not just one voice?
OPC (One Person Company) assembles the right specialists for each task on the fly. Security audit? It brings in a security engineer. New feature? A PM, a designer, and a first-time user show up. Need a custom expert? Drop a markdown file — your team grows instantly.
One slash command. The whole product lifecycle. `/opc review`, `/opc analyze`, `/opc brainstorm`.

But is it actually better than just asking Claude directly?
The Honest Test
I ran both approaches on the same codebase — OPC’s own repo:
Single Claude prompt (“review these files for issues”): 14 findings. Variable shadowing, DRY violations, missing exit codes, edge cases. Thorough, precise, code-focused.
OPC (3 agents: new-user, security, devops): 9 findings. Fewer code bugs. But it caught 5 things Claude completely missed. Among them:
- A new user would run `opc review` in their terminal (not Claude Code) and get confused: nothing hints that it’s a skill, not a CLI command
- The install symlink command in the README assumes you’re in the parent directory; muscle memory says to `cd` into the repo first, which breaks it silently
- The postinstall failure message doesn’t tell you what a failure looks like
- The Claude Code link goes to a marketing page, not install docs
These aren’t code bugs. They’re perspective bugs — issues you only find when you think like a specific person.
What I Actually Built
OPC isn’t magic. Under the hood it’s:
- 11 markdown files — each defines a specialist role with expertise areas and anti-patterns (“don’t flag missing auth on local tools”)
- Parallel Claude calls — 2-5 agents run simultaneously, each with a different system prompt
- A coordinator pass — verifies facts, deduplicates, dismisses false positives
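For a sense of scale, a role file can be as small as this hypothetical sketch (headings and wording are illustrative, not copied from the repo):

```markdown
# Security Engineer

## Expertise
- Secrets handling, injection, unsafe file permissions

## Anti-patterns (do NOT flag)
- Missing auth on local-only CLI tools
- "Use HTTPS" advice for localhost-only traffic
```

The anti-patterns section is what keeps the agent from emitting generic checklist noise.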
The agents don’t talk to each other. There’s no “collaboration.” The coordinator is the same Claude instance reading all outputs. I’m not going to pretend this is some breakthrough in multi-agent systems.
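The fan-out/fan-in shape is simple enough to sketch. This is a minimal illustration, not OPC’s actual internals: `runAgent` is a stand-in for a model call, and the role prompts are invented.

```javascript
// Hypothetical sketch of parallel per-role review calls.
// Each role gets its own system prompt; no agent talks to another.
const roles = {
  security: "You are a security engineer. Do NOT flag missing auth on local tools.",
  "new-user": "You are a first-time user. Do NOT demand hand-holding for dev tools.",
};

// Stand-in for a model call; a real version would hit the Claude API.
async function runAgent(role, systemPrompt, task) {
  return { role, findings: [`${role}: reviewed "${task}"`] };
}

async function review(task) {
  // Fan out: one independent call per role, run simultaneously.
  const outputs = await Promise.all(
    Object.entries(roles).map(([role, prompt]) => runAgent(role, prompt, task))
  );
  // Fan in: a single coordinator pass merges and deduplicates the findings.
  const merged = outputs.flatMap((o) => o.findings);
  return [...new Set(merged)];
}
```

The "coordination" is just this merge step plus one more model pass over the combined output.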
What it IS: a structured way to get multiple review perspectives without writing the prompt every time. `/opc review` vs. typing “review from security, new user, and devops perspectives”: the former is 11 characters, the latter is a paragraph you’ll never write consistently.
The Parts That Actually Work Well
Anti-patterns per role. Each role file says what NOT to flag. The security agent won’t flag “no auth” on a local CLI tool. The new-user agent won’t suggest hand-holding for a developer tool. This is the single most impactful design choice — it prevents the generic checklist problem that kills most AI review tools.
Verification gate. The coordinator doesn’t just merge agent outputs. It has explicit checks: “Does this finding have a file:line reference? Does the severity match the actual impact? Did the agent actually read the files in scope?” This catches lazy agent outputs.
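The gate amounts to a filter over findings. A toy version, with illustrative rules rather than OPC’s actual checks:

```javascript
// Sketch of a verification gate (rules are assumptions, not OPC's code):
// reject findings that lack a file:line anchor or carry an unknown severity.
const FILE_LINE = /\S+\.(js|ts|md):\d+/;

function verify(findings) {
  const levels = ["info", "low", "medium", "high"];
  return findings.filter((f) => {
    if (!FILE_LINE.test(f.location ?? "")) return false; // no file:line → drop
    return levels.includes(f.severity); // made-up severity → drop
  });
}
```

Cheap checks like these are enough to discard most lazy or hallucinated agent output before the coordinator merges anything.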
JSON reports. Every review saves structured JSON to `~/.opc/reports/`. You can track findings over time, compare reviews, or browse them in a web viewer (`npx @touchskyer/opc-viewer`).
Try It
```shell
npm install -g @touchskyer/opc
# Then in Claude Code:
/opc review
```
Zero dependencies. Just markdown files. Works in 30 seconds.
GitHub — star it if you find a bug OPC catches that Claude alone wouldn’t.