Reviews optimization patches using a 5-persona peer review system. Requires unanimous approval backed by benchmarks.
Overall: 66
Quality (does it follow best practices?): 58%
Impact: Pending (no eval scenarios have been run)
Status: Passed (no known issues)
Optimize this skill with Tessl:

npx tessl skill review --optimize ./.claude/skills/lading-optimize-review/SKILL.md

Quality
Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a unique review mechanism (5-persona peer review with unanimous approval and benchmarks) which makes it distinctive, but it lacks explicit trigger guidance ('Use when...') and natural user-facing keywords. The specificity of what actions are performed and what outputs are produced could be improved.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user wants a thorough review of performance optimization patches, code optimizations, or wants multi-perspective feedback on proposed changes.'
Include natural trigger terms users would say, such as 'code review', 'performance patch', 'optimization review', 'review my changes', or 'perf improvements'.
Expand the 'what' to clarify outputs, e.g., 'Reviews optimization patches using a 5-persona peer review system, providing approval/rejection decisions, benchmark comparisons, and actionable feedback.'
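Combining the suggestions above, the revised frontmatter description might read as follows (a sketch; the surrounding keys and exact wording are illustrative, not the skill's actual frontmatter):

```yaml
---
name: lading-optimize-review
description: >
  Reviews optimization patches using a 5-persona peer review system,
  providing approval/rejection decisions, benchmark comparisons, and
  actionable feedback. Use when the user wants a thorough review of
  performance optimization patches, code optimizations, or
  multi-perspective feedback on proposed changes.
---
```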
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (optimization patches) and describes the mechanism (5-persona peer review system, unanimous approval, benchmarks), but doesn't list specific concrete actions beyond 'reviews'. What kinds of optimization patches? What does the review produce? | 2 / 3 |
| Completeness | Describes what it does (reviews optimization patches via peer review) but has no explicit 'Use when...' clause or equivalent trigger guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also somewhat thin, placing this at 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'optimization patches', 'peer review', and 'benchmarks', but misses natural user phrases like 'review my code changes', 'performance optimization', 'code review', or 'patch review'. Users are unlikely to say '5-persona peer review system'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The '5-persona peer review system' with 'unanimous approval backed by benchmarks' is a very distinctive mechanism that is unlikely to conflict with other code review or optimization skills. It carves out a clear niche. | 3 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, highly actionable skill that provides a rigorous multi-persona review framework with concrete commands, thresholds, and checklists. Its main strength is the clear workflow with explicit validation gates and rejection criteria at every phase. Minor weaknesses include some redundancy (duplicated outcomes table) and the length of inline content that could benefit from splitting persona details into referenced files.
Suggestions
Remove the duplicated outcomes table in Phase 4 — reference the one at the top or keep only one instance.
Consider moving the detailed persona checklists (Phase 2) into a referenced file like `assets/personas.md` to reduce the main skill's token footprint.
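One way to apply the second suggestion is to leave a short pointer in the skill body and move the checklists into the referenced file (the snippet below is a sketch; the heading and wording are illustrative, not the skill's actual content):

```markdown
## Phase 2: Persona Review

For each persona, work through its checklist in [assets/personas.md](assets/personas.md).
If any persona's checklist fails, REJECT the patch and record the reason.
```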
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient and avoids explaining concepts Claude already knows, but there's some redundancy — the outcomes table appears twice identically, and some sections like 'NO EXCEPTIONS' are somewhat verbose for what they convey. The arguments table and report ID generation are appropriately detailed since they're project-specific. | 2 / 3 |
| Actionability | Provides fully executable bash commands with exact flags, specific file paths, concrete thresholds (>=5% time, >=10% mem, >=20% allocs), detailed checklists for each persona, and clear template references. The workflow is copy-paste ready with specific argument substitution patterns. | 3 / 3 |
| Workflow Clarity | The 5-phase workflow is clearly sequenced with explicit validation checkpoints: baseline verification before proceeding, statistical requirements as gates, each persona has a checklist with pass/fail criteria, Kani failure has a documented fallback path, and rejection conditions are explicit throughout. The feedback loop of 'if duplicate found -> REJECT' and 'if bug found -> REJECT' provides clear error recovery. | 3 / 3 |
| Progressive Disclosure | The skill references external assets (template files, db.yaml) appropriately, but the body itself is quite long (~150 lines of substantive content) with all persona checklists inline. The persona checklists could potentially be split into a referenced file. However, no bundle files were provided to verify the referenced templates exist, and the references are one level deep, which is good. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
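The concrete thresholds noted in the Actionability row can be read as a simple gate function. The sketch below is hypothetical (the skill's actual logic, including whether the gates combine with AND or OR, is not shown in this report); it only illustrates the kind of check the thresholds imply:

```python
def passes_benchmark_gates(time_delta_pct: float,
                           mem_delta_pct: float,
                           allocs_delta_pct: float) -> bool:
    """Return True if a patch clears at least one improvement threshold.

    Thresholds mirror the report: >=5% time, >=10% memory, >=20% allocations.
    """
    return (time_delta_pct >= 5.0
            or mem_delta_pct >= 10.0
            or allocs_delta_pct >= 20.0)

assert passes_benchmark_gates(6.0, 0.0, 0.0)      # clears the time gate
assert not passes_benchmark_gates(3.0, 2.0, 5.0)  # clears no gate
```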
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure (10 / 11 passed)
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
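If the offending key cannot simply be removed, the warning suggests relocating it under `metadata`. A minimal before/after sketch, with a hypothetical key name:

```yaml
# Before: an unrecognized top-level key triggers frontmatter_unknown_keys
name: lading-optimize-review
custom_key: value

# After: the same key nested under metadata
name: lading-optimize-review
metadata:
  custom_key: value
```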