# dojo-review: skill review report

> Review Dojo code for best practices, common mistakes, security issues, and optimization opportunities. Use when auditing models, systems, tests, or preparing for deployment.

- Overall score: 77
- Quality: 66% (does it follow best practices?)
- Impact: 100%; 1.29x average score across 3 eval scenarios
- Validation: Passed; no known issues

Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/dojo-review/SKILL.md`

## Quality
### Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has a solid structure with both 'what' and 'when' clauses clearly stated, which is its strongest aspect. However, the capabilities listed are high-level categories rather than concrete actions, and the trigger terms could be more specific to the Dojo framework to reduce overlap with generic code review skills.
#### Suggestions

- Add more specific concrete actions like 'validate model field definitions, check system configurations, verify test coverage patterns' instead of broad categories like 'best practices' and 'common mistakes'.
- Include more Dojo-specific trigger terms and natural language variations users might use, such as specific Dojo concepts, file types, or common Dojo components to improve matching precision.
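Both suggestions could be folded into the skill's frontmatter. A hypothetical revision is sketched below; the concrete actions and trigger terms are illustrative, not taken from the skill itself:

```yaml
# Hypothetical revised frontmatter for skills/dojo-review/SKILL.md.
# The wording below is an illustration of "concrete actions + Dojo-specific
# trigger terms", not the skill's actual description.
name: dojo-review
description: >
  Review Dojo (Cairo) code: validate model field definitions and #[key]
  usage, check system and world configurations, audit for security issues,
  flag storage and gas optimizations, and verify test coverage patterns.
  Use when auditing Dojo models, systems, or tests, reviewing a sozo
  project, or preparing a world contract for deployment.
```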
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Dojo code review) and lists categories of review (best practices, common mistakes, security issues, optimization opportunities), but these are fairly high-level categories rather than concrete specific actions like 'check for SQL injection' or 'validate model field types'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (review Dojo code for best practices, common mistakes, security issues, optimization) and 'when' (Use when auditing models, systems, tests, or preparing for deployment), with an explicit 'Use when...' clause. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'models', 'systems', 'tests', 'deployment', 'security issues', and 'Dojo', but misses common natural variations users might say such as 'code review', 'lint', 'audit', 'refactor', or specific Dojo framework terms that would help with matching. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Dojo' provides some specificity to a particular framework, but terms like 'best practices', 'security issues', and 'optimization opportunities' are generic enough to overlap with general code review skills. 'Models, systems, tests' are also quite broad. | 2 / 3 |
| **Total** | | **9 / 12 (Passed)** |
### Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable code review skill with excellent concrete examples covering models, systems, security, gas optimization, and testing. Its main weaknesses are verbosity (the content could be more concise by splitting detailed examples into reference files) and the lack of explicit validation/feedback loops in the review workflow. The checklist is a strong practical tool but partially duplicates the narrative sections above it.
#### Suggestions

- Remove the 'When to Use This Skill' and 'What This Skill Does' sections; Claude can infer these from context, saving roughly 15 lines of tokens.
- Move the detailed code examples for each review category into a separate REVIEW_EXAMPLES.md file, keeping only the checklist and brief descriptions in the main skill.
- Add an explicit validation step in the workflow, e.g., 'After fixing issues, run `sozo build` to verify compilation, then re-review to confirm fixes don't introduce new issues.'
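The third suggestion could be expressed as a small script in the skill's workflow. The sketch below assumes the Dojo CLI `sozo` is on PATH; the dry-run fallback exists only so the sketch is self-contained:

```shell
# Hypothetical post-fix validation loop for a Dojo review workflow.
# Assumes the Dojo CLI `sozo` is installed; falls back to a dry-run
# echo when it is not, so the sketch can run anywhere.
run() {
  if command -v sozo >/dev/null 2>&1; then
    sozo "$@"
  else
    echo "would run: sozo $*"
  fi
}
run build   # verify the fixed code still compiles
run test    # confirm fixes do not break existing tests
# then re-review the diff to check that fixes introduced no new issues
```

The point of the gate is the re-entry step: a fix that compiles but changes behavior should be caught by the second review pass, not by deployment.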
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly long with many code examples that are useful but somewhat repetitive in pattern (❌ bad / ✅ good). Some sections like 'When to Use This Skill' and 'What This Skill Does' add little value since Claude can infer these. The review checklist partially duplicates the review categories above it. | 2 / 3 |
| Actionability | The skill provides fully concrete, executable Cairo code examples for every review category. Each anti-pattern and fix is shown with specific, copy-paste-ready code. The checklists give precise items to verify. | 3 / 3 |
| Workflow Clarity | The 'Next Steps' section provides a sequence but lacks explicit validation checkpoints. There's no feedback loop for the review process itself (e.g., re-validate after fixes). The review categories are clear but presented as parallel checks rather than a sequenced workflow with verification gates. | 2 / 3 |
| Progressive Disclosure | The skill references related skills at the end (dojo-model, dojo-system, etc.) which is good, but the main content is a monolithic document with extensive inline code examples that could be split into separate reference files. The review checklist and detailed examples could live in separate files with the SKILL.md serving as a concise overview. | 2 / 3 |
| **Total** | | **9 / 12 (Passed)** |
### Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Skill structure checks: 10 / 11 passed.
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| **Total** | | **10 / 11 (Passed)** |
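The `allowed_tools_field` warning usually means a name in the frontmatter does not match a tool identifier the target agent recognizes. A hypothetical corrected field is shown below; the tool names are common Claude Code tool identifiers and should be adjusted to whatever the skill actually needs:

```yaml
# Hypothetical 'allowed-tools' field using standard tool identifiers;
# replace with the tools this skill actually invokes.
allowed-tools: Read, Grep, Glob, Bash
```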