Review Dojo code for best practices, common mistakes, security issues, and optimization opportunities. Use when auditing models, systems, tests, or preparing for deployment.
Overall: 77

- Quality: 66% (Does it follow best practices?)
- Impact: 100% (1.29x average score across 3 eval scenarios)
- Evals: Passed, no known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/dojo-review/SKILL.md`

Quality
Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is reasonably well-structured with both a 'what' and 'when' clause, and the Dojo-specific framing provides some distinctiveness. However, the capabilities listed are high-level categories rather than concrete actions, and the trigger terms could be more comprehensive with Dojo-specific terminology and common user phrasings for code review tasks.
Suggestions

- Add more specific, concrete actions like 'validate model field definitions, check system configurations, verify test coverage patterns' instead of generic categories like 'best practices' and 'common mistakes'.
- Include more Dojo-specific trigger terms and concepts (e.g., specific Dojo components, patterns, or file types) to improve matching precision and reduce overlap with generic code review skills.
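Taken together, these suggestions point toward a more concrete frontmatter description. A hypothetical rewrite is sketched below; the specific checks and trigger terms are illustrative assumptions, not taken from the skill itself:

```yaml
# Hypothetical SKILL.md frontmatter -- wording is illustrative only
name: dojo-review
description: >
  Review Dojo (Cairo) code: validate model field definitions, check
  system and world configurations, flag unsafe storage access, and
  verify test coverage patterns. Use when auditing models, systems,
  or tests, running a code audit or code-quality pass, or preparing
  a Dojo project for deployment.
```

A description shaped like this keeps both the 'what' and 'when' clauses while swapping generic categories for named actions and adding audit/quality phrasings users actually type.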
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Dojo code review) and lists categories of review (best practices, common mistakes, security issues, optimization opportunities), but these are fairly high-level categories rather than concrete actions like 'check for SQL injection' or 'validate model field types'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (review Dojo code for best practices, common mistakes, security issues, optimization opportunities) and 'when' (use when auditing models, systems, tests, or preparing for deployment) with explicit trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Includes relevant terms like 'code review', 'best practices', 'security issues', 'models', 'tests', and 'deployment', and 'Dojo' is a specific framework term that helps. Missing common variations users might say, like 'code audit', 'lint', 'code quality', or specific Dojo concepts that would improve matching. | 2 / 3 |
| Distinctiveness / Conflict Risk | The 'Dojo' qualifier helps distinguish it from generic code review skills, but terms like 'best practices', 'security issues', and 'optimization' are very common across many code review or linting skills. 'Auditing models, systems, tests' could overlap with general code review or testing skills. | 2 / 3 |
| Total | 9 / 12 | Passed |
Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable code review skill with excellent concrete examples showing common Dojo mistakes and their fixes. Its main weaknesses are verbosity (the checklist duplicates the review categories, and introductory sections add little value) and a lack of structured workflow with validation checkpoints for the review process itself. The content would benefit from trimming redundancy and adding explicit verification steps.
Suggestions

- Remove the 'When to Use This Skill' and 'What This Skill Does' sections; they restate what's obvious and waste tokens.
- Consolidate the Review Checklist with the Review Categories sections to eliminate duplication, or move the detailed code examples to a separate EXAMPLES.md file.
- Add explicit validation steps to the workflow, e.g., 'After fixing issues, run `sozo build` to verify compilation, then re-review the specific section to confirm the fix is correct.'
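The validation suggestion above could be wired into a small post-fix script. This is a sketch only: it assumes the Dojo `sozo` toolchain is installed and on PATH, and that the project builds and tests with the standard `sozo build` and `sozo test` commands.

```shell
#!/bin/sh
# Sketch: re-verify a Dojo project after applying review fixes.
set -e          # stop at the first failing step
sozo build      # confirm the fixed code still compiles
sozo test       # confirm the test suite still passes
echo "review fixes verified"
```

Running this after each batch of fixes gives the review workflow the explicit checkpoint the evaluation says it lacks.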
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly long, with many code examples that are useful but somewhat repetitive in pattern (❌ bad / ✅ good). Some sections, like 'When to Use This Skill' and 'What This Skill Does', add little value since they restate what's obvious from the title and context. The checklist at the end partially duplicates the review categories above it. | 2 / 3 |
| Actionability | The skill provides concrete, executable Cairo code examples for every review category, with clear before/after patterns showing exactly what to look for and how to fix issues. The checklists and anti-patterns are specific and directly usable. | 3 / 3 |
| Workflow Clarity | The 'Next Steps' section provides a sequence but lacks explicit validation checkpoints. There is no feedback loop for the review process itself: no guidance on how to verify fixes were correctly applied or how to prioritize issues found. For a code review skill involving potentially destructive changes, the workflow could be more structured. | 2 / 3 |
| Progressive Disclosure | The content is mostly inline in one large file, when some sections (like the detailed code examples for each review category) could be split into separate reference files. The 'Related Skills' section at the end provides good cross-references, but the main body is a long monolithic document that could benefit from better separation. | 2 / 3 |
| Total | 9 / 12 | Passed |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 checks passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| Total | 10 / 11 | Passed |
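The single warning concerns the `allowed-tools` frontmatter field, which conventionally lists the built-in tool names a skill may invoke. A typical value might look like the fragment below; whether these particular tools are the right set for this skill is an assumption:

```yaml
# Hypothetical fix for the allowed_tools_field warning --
# tool names shown are common built-ins, not taken from the skill
allowed-tools: Read, Grep, Glob, Bash
```

If the skill genuinely needs a nonstandard tool, keeping the unusual name and accepting the warning is also a valid choice.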