You are a documentation expert specializing in creating comprehensive, maintainable documentation from code. Generate API docs, architecture diagrams, user guides, and technical references using AI...
Install with Tessl CLI
npx tessl i github:sickn33/antigravity-awesome-skills --skill code-documentation-doc-generate59
Quality
37%
Does it follow best practices?
Impact
100%
1.01x average score across 3 eval scenarios
Optimize this skill with Tessl
npx tessl skill review --optimize ./skills/code-documentation-doc-generate/SKILL.md

Discovery
32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description provides a reasonable overview of documentation capabilities but suffers from several issues: it uses second person voice ('You are'), lacks explicit trigger guidance for when to use the skill, and trails off incompletely with 'using AI...'. The description would benefit from concrete trigger terms and a clear 'Use when...' clause to help Claude select it appropriately.
Suggestions
- Add an explicit 'Use when...' clause with trigger terms like 'document my code', 'generate README', 'API documentation', 'explain this codebase'
- Rewrite in third person voice (e.g., 'Generates comprehensive documentation from code including API docs, architecture diagrams...') instead of 'You are a documentation expert'
- Complete the truncated description and add common user phrases like 'README', 'docstrings', 'code documentation', '.md files'
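Taken together, the suggestions above might yield frontmatter along these lines. This is a hypothetical sketch of improved metadata, not the skill's actual description:

```yaml
---
name: code-documentation-doc-generate
description: >
  Generates comprehensive, maintainable documentation from code, including
  API docs, README files, docstrings, architecture diagrams, user guides,
  and technical references. Use when the user asks to "document my code",
  "generate a README", "write API documentation", or "explain this codebase".
---
```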
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (documentation) and lists some actions (API docs, architecture diagrams, user guides, technical references), but uses vague language like 'comprehensive, maintainable' and the trailing 'using AI...' is unclear and incomplete. | 2 / 3 |
| Completeness | Describes what it does (generate various documentation types) but completely lacks any 'Use when...' clause or explicit trigger guidance for when Claude should select this skill. The description also uses second person voice ('You are'), which violates the rubric guidelines. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'API docs', 'architecture diagrams', 'user guides', 'technical references', but misses common variations users might say like 'README', 'docstrings', 'code comments', 'documentation generation', or file extensions. | 2 / 3 |
| Distinctiveness / Conflict Risk | Focuses on documentation from code, which provides some specificity, but 'documentation expert' is broad and could overlap with general writing skills or code explanation skills. The scope of 'API docs, architecture diagrams, user guides' spans multiple potentially distinct use cases. | 2 / 3 |
| Total | 7 / 12 | Passed |
Implementation
42%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill has good structure and appropriate progressive disclosure, but suffers from a lack of actionable, concrete guidance. The instructions describe what to do at a high level without providing executable code, specific commands, or concrete examples that Claude could immediately use. The workflow is present but lacks validation checkpoints.
Suggestions
- Add concrete, executable examples for at least one documentation type (e.g., a Python script using pydoc or sphinx-apidoc to generate API docs)
- Replace abstract instructions like 'Extract information from code' with specific commands or code snippets showing how to do this
- Add explicit validation steps (e.g., 'Run `doc8 docs/` to lint, fix errors before committing')
- Include a concrete example of expected output format (e.g., sample generated markdown structure)
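As an illustration of the first suggestion above, a minimal script like the following could extract public function signatures and docstrings from a module and emit a markdown API reference. This is a sketch using only the standard library; the module name `json` stands in for whatever project module is being documented:

```python
import importlib
import inspect


def module_api_markdown(module_name: str) -> str:
    """Render a simple markdown API reference for one module."""
    module = importlib.import_module(module_name)
    lines = [f"# `{module_name}` API reference", ""]
    for name, obj in inspect.getmembers(module, inspect.isfunction):
        # Skip private helpers and functions re-exported from elsewhere
        if name.startswith("_") or obj.__module__ != module_name:
            continue
        signature = inspect.signature(obj)
        doc = inspect.getdoc(obj) or "*No docstring.*"
        lines += [f"## `{name}{signature}`", "", doc, ""]
    return "\n".join(lines)


if __name__ == "__main__":
    # 'json' is a placeholder for the module being documented
    print(module_api_markdown("json"))
```

A snippet like this gives the agent something executable to adapt, rather than the abstract instruction 'Extract information from code'.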
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some unnecessary framing ('You are a documentation expert...') and context that Claude already understands. The 'Use this skill when' and 'Do not use this skill when' sections add value but could be more concise. | 2 / 3 |
| Actionability | The instructions are vague and abstract ('Identify required doc types', 'Extract information from code') with no concrete code, commands, or executable examples. There's no specific guidance on how to actually generate documentation, just high-level descriptions of what to do. | 1 / 3 |
| Workflow Clarity | Steps are listed in a logical sequence but lack validation checkpoints. The instruction to 'validate accuracy' is mentioned but not explained. No feedback loops for error recovery, and the workflow is too abstract to follow precisely. | 2 / 3 |
| Progressive Disclosure | Good structure with clear sections and a single-level reference to 'resources/implementation-playbook.md' for detailed examples. The skill appropriately keeps the overview concise while pointing to external resources for depth. | 3 / 3 |
| Total | 8 / 12 | Passed |
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
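The single failing check above concerns stray frontmatter keys. A quick way to surface them locally is a script along these lines. This is a sketch: the allowed-key set is an assumption about the skill spec and may differ from what Tessl actually validates:

```python
import re

# Keys assumed to be allowed at the top level of SKILL.md frontmatter
# (illustrative; consult the actual skill spec)
ALLOWED_KEYS = {"name", "description", "metadata"}


def unknown_frontmatter_keys(skill_md: str) -> set[str]:
    """Return top-level frontmatter keys not in the allowed set."""
    match = re.match(r"^---\n(.*?)\n---", skill_md, re.DOTALL)
    if not match:
        return set()
    keys = {
        line.split(":", 1)[0].strip()
        for line in match.group(1).splitlines()
        if ":" in line and not line.startswith((" ", "\t", "#"))
    }
    return keys - ALLOWED_KEYS


example = "---\nname: doc-generate\nauthor: sickn33\n---\nBody text."
print(unknown_frontmatter_keys(example))  # flags 'author' as unknown
```

Unknown keys reported this way can then be deleted or nested under `metadata`, as the warning suggests.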
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.