You are a documentation expert specializing in creating comprehensive, maintainable documentation from code. Generate API docs, architecture diagrams, user guides, and technical references using AI-powered analysis and industry best practices.
Quality: 27%
Evals: Pending (no eval scenarios have been run)
Issues: Passed (no known issues)
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/code-documentation-doc-generate/SKILL.md`

Quality
Discovery
32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear domain (code documentation) and lists several output types, but relies on buzzwords ('AI-powered analysis', 'industry best practices') and uses second-person framing ('You are'). It critically lacks any 'Use when...' clause, making it difficult for Claude to know when to select this skill over others. The trigger terms are decent but miss common user vocabulary for documentation tasks.
Suggestions
Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks to document code, generate API references, create READMEs, or produce architecture documentation.'
Replace vague phrases like 'AI-powered analysis and industry best practices' with concrete actions such as 'parses source code to extract function signatures, class hierarchies, and module dependencies'.
Switch from second-person voice ('You are a documentation expert') to third-person voice describing capabilities ('Generates API docs, architecture diagrams...') and add common user terms like 'README', 'docstrings', 'swagger', 'code comments'.
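Taken together, these suggestions point toward frontmatter along the following lines (a sketch only; the skill name is inferred from the CLI path above, and the exact wording and trigger terms are illustrative):

```yaml
---
name: code-documentation-doc-generate
description: >
  Generates API references, architecture diagrams, user guides, and technical
  references by parsing source code to extract function signatures, class
  hierarchies, and module dependencies. Use when the user asks to document
  code, write docstrings or JSDoc comments, generate an OpenAPI/Swagger
  reference, create a README, or produce architecture documentation.
---
```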
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (documentation from code) and lists some outputs (API docs, architecture diagrams, user guides, technical references), but uses vague qualifiers like 'AI-powered analysis' and 'industry best practices', which are buzzwords rather than concrete actions. | 2 / 3 |
| Completeness | Describes what it does (generate various documentation types) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing 'Use when' caps completeness at 2, and the 'when' is entirely absent, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'API docs', 'architecture diagrams', 'user guides', 'technical references', and 'documentation', but misses common user variations like 'README', 'docstrings', 'JSDoc', 'swagger', 'openapi', or 'code comments'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on documentation generation from code is somewhat specific, but terms like 'user guides' and 'technical references' are broad enough to overlap with general writing or technical writing skills. The lack of explicit triggers increases conflict risk. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is essentially a high-level outline that delegates all concrete guidance to an external resource. The instructions are too abstract and vague to be actionable on their own — they describe what to do conceptually but never show how. The skill would benefit significantly from concrete examples, specific tool recommendations, executable code snippets, and explicit validation steps.
Suggestions
Add concrete, executable examples for at least one documentation type (e.g., generating API docs with a specific tool like pydoc, Sphinx, or TypeDoc) so Claude has actionable guidance without needing the external resource.
Include specific tool names and commands in the instructions (e.g., 'Run `npx typedoc --entryPointStrategy expand ./src` to generate TypeScript API docs') instead of abstract directives like 'Extract information from code'.
Add explicit validation steps with concrete commands (e.g., 'Validate generated docs: `markdownlint docs/` and check for broken links with `markdown-link-check docs/**/*.md`').
Provide at least one concrete output example showing what a generated documentation artifact looks like, so Claude understands the expected format and quality.
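For example, the skill body could pair each step with a runnable command, roughly like this (the tool choices, such as TypeDoc and markdownlint, are illustrative, not mandated by the skill):

```markdown
## Workflow: TypeScript API docs

1. Generate: `npx typedoc --entryPointStrategy expand ./src --out docs/api`
2. Lint the output: `markdownlint docs/`
3. Check links: `markdown-link-check docs/**/*.md`
4. Spot-check one generated page against the source to confirm every
   exported symbol is documented.
```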
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill has some unnecessary filler (e.g., the 'Context' section restates what's already obvious, and 'Use this skill when' / 'Do not use this skill when' sections add moderate value but are somewhat verbose). The instructions themselves are lean but vague. | 2 / 3 |
| Actionability | The instructions are entirely abstract and vague — 'Extract information from code, configs, and comments' and 'Generate docs with consistent terminology' provide no concrete commands, code examples, tool names, or executable steps. There is nothing copy-paste ready or specific enough for Claude to act on. | 1 / 3 |
| Workflow Clarity | The instructions list high-level steps but lack any sequencing detail, validation checkpoints, or feedback loops. Steps like 'Add automation (linting, CI) and validate accuracy' are hand-wavy with no concrete validation mechanism described. | 1 / 3 |
| Progressive Disclosure | There is a reference to `resources/implementation-playbook.md` for detailed examples, which is good one-level-deep disclosure. However, the main content is too thin — it delegates almost all substance to the external file without providing a useful quick-start or concrete overview in the skill itself. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| `frontmatter_unknown_keys` | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
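The `frontmatter_unknown_keys` warning typically means a top-level frontmatter key sits outside the documented schema; nesting such keys under `metadata` usually clears it. A sketch (the key names shown are hypothetical):

```yaml
---
name: code-documentation-doc-generate
description: Generates API docs, architecture diagrams, user guides, ...
metadata:
  author: example-team   # hypothetical key, moved down from the top level
  version: "1.0"         # hypothetical key, moved down from the top level
---
```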