Review Dojo code for best practices, common mistakes, security issues, and optimization opportunities. Use when auditing models, systems, tests, or preparing for deployment.
Overall score: 77
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:

`npx tessl skill review --optimize ./path/to/skill`

Evaluation — 100%
↑ 1.29x agent success when using this skill
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description has good structure with an explicit 'Use when' clause that clearly communicates both purpose and triggers. However, it relies on somewhat generic review categories rather than Dojo-specific concrete actions, and the trigger terms could be more comprehensive to capture natural user language variations.
Suggestions

- Add Dojo-specific concrete actions like 'validate model definitions', 'check system configurations', or 'review Dojo-specific patterns' to increase specificity.
- Include more natural trigger term variations such as 'code review', 'audit code', 'check for bugs', or 'review my Dojo app' to improve discoverability.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Dojo code) and lists categories of review (best practices, common mistakes, security issues, optimization opportunities), but these are somewhat generic review categories rather than concrete specific actions like 'check for SQL injection' or 'validate model relationships'. | 2 / 3 |
| Completeness | Clearly answers both what (review Dojo code for best practices, mistakes, security, optimization) and when (auditing models, systems, tests, or preparing for deployment) with an explicit 'Use when' clause. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'models', 'systems', 'tests', 'deployment', but missing natural variations users might say like 'code review', 'audit', 'check my code', 'review PR', or Dojo-specific terminology that would help distinguish this skill. | 2 / 3 |
| Distinctiveness / Conflict Risk | The 'Dojo' framework specification provides some distinctiveness, but terms like 'code review', 'best practices', and 'security issues' are generic enough to potentially conflict with other code review or security-focused skills. | 2 / 3 |
| Total | | 9 / 12 Passed |
Implementation — 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid code review skill with excellent actionability through concrete Cairo code examples showing correct and incorrect patterns. The main weaknesses are verbosity from repetitive example patterns and lack of a clear review workflow with validation checkpoints. The content would benefit from being split into a concise overview with detailed patterns in separate reference files.
Suggestions

- Add a clear sequential review workflow at the top (e.g., '1. Run static checks, 2. Review models, 3. Review systems, 4. Document findings, 5. Verify fixes') with explicit validation steps.
- Move detailed code examples for each category into separate reference files (e.g., MODEL_PATTERNS.md, SECURITY_PATTERNS.md) and keep only one or two key examples inline.
- Remove the checklist section, as it duplicates the review categories, or consolidate it into a single quick-reference checklist file.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some redundancy in code examples (showing both ❌ and ✅ patterns repeatedly), and the checklist section largely duplicates information already covered in the review categories. Some sections could be tightened. | 2 / 3 |
| Actionability | Excellent concrete guidance with executable Cairo code examples throughout. Each review category provides specific, copy-paste-ready code showing both incorrect and correct patterns. The checklist provides clear, actionable items. | 3 / 3 |
| Workflow Clarity | The skill describes what to check but lacks a clear sequential workflow for conducting a review. The 'Next Steps' section provides a sequence, but there is no validation checkpoint or feedback loop for the review process itself (e.g., 'verify fix, re-check'). | 2 / 3 |
| Progressive Disclosure | Content is well-organized with clear sections, but the skill is quite long (~300 lines) and could benefit from splitting detailed patterns into separate reference files. The 'Related Skills' section provides good navigation, but inline content is heavy. | 2 / 3 |
| Total | | 9 / 12 Passed |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| Total | | 10 / 11 Passed |
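The `allowed_tools_field` warning concerns the skill's frontmatter. As a minimal sketch, assuming the skill follows the common SKILL.md frontmatter format, the field might look like this; the tool names shown are illustrative placeholders, not taken from the skill itself:

```yaml
---
name: dojo-code-review
description: >
  Review Dojo code for best practices, common mistakes, security issues,
  and optimization opportunities. Use when auditing models, systems,
  tests, or preparing for deployment.
# The validation warning fires when this list contains a tool name the
# validator does not recognize; keeping entries to well-known tool
# identifiers clears it.
allowed-tools: Read, Grep, Glob
---
```

Renaming or removing the unrecognized entry flagged by the validator should turn the Warning into a pass and lift the validation score.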