Code review checklist - coordinates specialist reviewers for thorough analysis
Read .opencode/context-log.md first for issue context and build history.
Extract issue IDs from the branch name or PR, then fetch details:

- `git branch --show-current` — parse for ENG-123, PROJ-456, #123, gh-123
- `gh pr view --json body,title`
- `team-context_get_issue`, GitHub → `gh issue view`

Use the context to verify: requirements alignment, acceptance criteria, scope creep, project goals.
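The ID extraction above can be sketched as a grep over the branch name. The branch value here is a hypothetical example; normally it comes from `git branch --show-current`:

```shell
# Pull issue IDs (ENG-123, PROJ-456, #123, gh-123) out of a branch name.
# Hypothetical branch name for illustration; normally:
#   branch=$(git branch --show-current)
branch="eng-123-fix-login-flow"
ids=$(printf '%s\n' "$branch" | grep -oiE '[A-Z]+-[0-9]+|#[0-9]+|gh-[0-9]+')
echo "$ids"
```

The same pattern works on the PR title or body from `gh pr view`.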
After getting the diff, read the entire modified file(s) for full context.
Read project rules: Check for and read AGENTS.md, .opencode/AGENTS.md, CONVENTIONS.md, .github/copilot-instructions.md in the repo root. These contain project-specific coding standards, architectural decisions, and review expectations. Include relevant rules in the payload to all sub-agents (not just maintainability) — security rules go to security, testing expectations go to correctness, etc.
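Gathering those rule files can be sketched as a small helper. The `collect_rules` name is made up for illustration; only files that actually exist are included:

```shell
# Concatenate whichever project-rule files exist, each under a heading,
# so the result can be pasted into the sub-agent payload.
collect_rules() {
  for f in "$@"; do
    # Skip files the repo does not have; never fail on a missing one.
    [ -f "$f" ] && { printf '## %s\n' "$f"; cat "$f"; }
  done
  return 0
}

# usage:
# collect_rules AGENTS.md .opencode/AGENTS.md CONVENTIONS.md .github/copilot-instructions.md
```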
Before any LLM analysis, run available project linters/checkers on modified files. Detect which tools are available and run them:
Check the project root for config files to detect tooling:

- Gemfile / .rubocop.yml → `bundle exec rubocop --format json <files>`
- package.json → check for eslint/biome scripts → `npx eslint --format json <files>` or `npx biome check <files>`
- tsconfig.json → `npx tsc --noEmit --pretty 2>&1` (type errors only)
- Brakeman (Rails) → `bundle exec brakeman --only-files <files> --format json -q`
- pyproject.toml / setup.cfg → `ruff check <files> --output-format json` or `mypy <files>`
- .golangci.yml → `golangci-lint run <files>`

Only run tools that are already configured in the project — never install new tools.
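The detection step can be sketched as a config-to-command map. The `lint_cmd` helper and the exact flag sets are illustrative; use whatever the project's own configs dictate:

```shell
# Map a config file found in the repo root to the lint command to run.
# Echoes nothing for unknown configs, so callers can skip them.
lint_cmd() {
  case "$1" in
    .rubocop.yml)   echo "bundle exec rubocop --format json" ;;
    tsconfig.json)  echo "npx tsc --noEmit" ;;
    pyproject.toml) echo "ruff check --output-format json" ;;
    .golangci.yml)  echo "golangci-lint run" ;;
    *)              echo "" ;;
  esac
}

# usage: only run what is configured, never install anything new
for cfg in .rubocop.yml tsconfig.json pyproject.toml .golangci.yml; do
  [ -f "$cfg" ] && echo "would run: $(lint_cmd "$cfg")" || :
done
```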
Collect output. Include the raw linter findings in the payload to sub-agents under a ## Static Analysis Findings section.
If no linters are configured or all pass clean, note "No static analysis findings" and move on.
The LLM's job is to find semantic issues that tools can't catch — logic errors, missing edge cases, architectural problems. Let tools handle syntax, style, and known vulnerability patterns.
Count the diff size (lines changed). Then choose a path:
**Small diff:** Run the checklist below yourself. Do NOT spawn subagents — the overhead isn't worth it.
Use the issue context you fetched above when evaluating correctness and scope — check that the diff actually satisfies the requirements and acceptance criteria.
Checklist (scan all):
Agentic exploration (even for small diffs): Before flagging any issue, use grep and read to follow references. If the diff calls a function, read that function. If it changes a type, find all callers. Do not flag "unused code" or "missing error handling" without verifying against the actual codebase.
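Counting the diff size for the path decision above can be sketched with `--numstat`. The 200-line threshold is an assumption for illustration; the skill does not fix a number:

```shell
# Sum added + removed lines from `git diff --numstat` output.
# Binary files report "-", which awk treats as 0 in numeric context.
count_changed() {
  awk '{added += $1; removed += $2} END {print added + removed + 0}'
}

# usage (in a real repo; 200 is an assumed threshold):
#   changed=$(git diff --numstat main...HEAD | count_changed)
#   [ "$changed" -lt 200 ]  # small diff: review directly; else spawn specialists
```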
**Large diff:** Prepare a base payload containing:
Prepare extended context:

- Issue details from `team-context_get_issue` / `gh issue view` — not the placeholder.

Then spawn four Task calls in parallel (all in a single message), all with `subagent_type="expert"`. Each prompt instructs the expert to load a specific review skill. Tailor each prompt:
Task(subagent_type="expert", prompt="Load the `review-security` skill and follow its instructions.\n\n<base payload>\n\n## Project Rules\n<AGENTS.md / CONVENTIONS.md content>\n\n## Static Analysis Findings\n<linter output>")
Task(subagent_type="expert", prompt="Load the `review-correctness` skill and follow its instructions.\n\n<base payload>\n\n## Project Rules\n<AGENTS.md / CONVENTIONS.md content>\n\n## Issue Context\n<issue details, requirements, acceptance criteria>\n\n## Static Analysis Findings\n<linter output>")
Task(subagent_type="expert", prompt="Load the `review-performance` skill and follow its instructions.\n\n<base payload>\n\n## Project Rules\n<AGENTS.md / CONVENTIONS.md content>\n\n## Static Analysis Findings\n<linter output>")
Task(subagent_type="expert", prompt="Load the `review-maintainability` skill and follow its instructions.\n\n<base payload>\n\n## Project Rules\n<AGENTS.md / CONVENTIONS.md content>\n\n## Issue Context\n<issue details>\n\n## Static Analysis Findings\n<linter output>")

Each specialist returns a JSON array of findings.
After all specialists return:
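One way to sketch the merge step, assuming each finding has been flattened to a `file:line<TAB>message` line — that format and the `merge_findings` helper are illustrative, since the specialists actually return JSON arrays:

```shell
# Sort findings by file and line, then keep one finding per file:line key
# so overlapping reports from different specialists collapse into one.
merge_findings() {
  sort -t: -k1,1 -k2,2n | awk -F'\t' '!seen[$1]++'
}
```

In practice the JSON arrays would be parsed first; this only shows the dedup-by-location idea.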
Regardless of path, always trace the callback/job chain:
Be terse. Developers can read code — don't explain what the diff does.
## Verdict: [APPROVE | CHANGES REQUESTED | COMMENT]
[One sentence why, if not obvious]
## Blockers
- **file.rb:10** - [2-5 word issue]. [1 sentence context if needed]
```suggestion
# concrete replacement code
```

**Rules:**
- Skip sections with no items (don't say "None")
- Max 1-2 sentences per item. No filler.
- **Always include a `suggestion` code block** with the concrete fix, unless the fix requires architectural changes that can't be expressed as a snippet
- Use "I" statements, frame as questions not directives
**For PRs:** output the PR URL as a clickable link at the very top (before the TL;DR), then add the TL;DR. If issue context was found, add a Requirements Check section after the verdict.