# Code Documentation Analyzer: Skill Review

> Auto-activating skill for Technical Documentation. Triggers on: code documentation analyzer, code documentation analyzer. Part of the Technical Documentation skill category.
**Overall: 88% (Passed).** 1.00x average score across 3 eval scenarios. No known issues.
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./planned-skills/generated/17-technical-docs/code-documentation-analyzer/SKILL.md`

## Quality
### Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a placeholder with no substantive content. It names a broad category ('Code Documentation Analyzer' / 'Technical Documentation') but provides zero concrete actions, no natural trigger terms, and no guidance on when the skill should be selected. It would be nearly impossible for Claude to correctly choose this skill from a pool of alternatives.
#### Suggestions

- Add specific concrete actions the skill performs, e.g., 'Analyzes code documentation coverage, identifies undocumented functions, generates docstrings and API reference docs.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to document code, add docstrings, generate API docs, check documentation coverage, or mentions README generation.'
- Remove the duplicate trigger term and replace with diverse natural keywords users would actually say, such as 'docstrings', 'API docs', 'code comments', 'README', 'JSDoc', 'documentation coverage'.
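Taken together, the suggestions above might yield frontmatter along these lines. This is a sketch only: the field names follow the common SKILL.md frontmatter convention (`name`, `description`) and the wording is assembled from the example phrases in the suggestions, not taken from the skill itself.

```yaml
---
name: code-documentation-analyzer
description: >
  Analyzes code documentation coverage, identifies undocumented functions,
  and generates docstrings and API reference docs. Use when the user asks to
  document code, add docstrings, generate API docs, check documentation
  coverage, or mentions README or JSDoc generation.
---
```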
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names a domain ('Code Documentation' / 'Technical Documentation') but lists no concrete actions whatsoever. There is no mention of what the skill actually does—no verbs like 'analyze', 'generate', 'extract', or 'review' with specific objects. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' and 'when should Claude use it'. There is no explanation of capabilities and no explicit 'Use when...' clause or equivalent trigger guidance. | 1 / 3 |
| Trigger Term Quality | The only trigger terms listed are 'code documentation analyzer' repeated twice. These are not natural phrases a user would say; users are more likely to say 'document my code', 'add docstrings', 'generate API docs', etc. No common variations or natural keywords are provided. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is extremely generic—'Technical Documentation' and 'Code Documentation' could overlap with any number of documentation-related skills. There are no distinct triggers or specific capabilities that would differentiate it from other documentation or code analysis skills. | 1 / 3 |
| **Total** | | 4 / 12 Passed |
### Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is an empty template with no actual content. It repeatedly names the skill ('code documentation analyzer') without ever explaining what it does, how to do it, or providing any actionable guidance. It reads as auto-generated boilerplate with zero instructional value.
#### Suggestions

- Replace the generic 'Capabilities' section with concrete, executable steps for analyzing code documentation (e.g., specific commands to extract docstrings, check coverage, or generate reports).
- Add at least one complete, copy-paste-ready code example showing how to perform a code documentation analysis task (e.g., using a tool like pydoc, Sphinx, or a custom script).
- Define a clear workflow with sequenced steps and validation checkpoints, such as: 1) scan codebase for undocumented functions, 2) generate coverage report, 3) validate output format.
- Remove all filler text that restates the skill name without adding information, and replace 'Example Triggers' with actual input/output examples demonstrating the skill in action.
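As a concrete starting point for the "scan codebase for undocumented functions" step, here is a minimal sketch (not part of the skill itself) that uses Python's standard `ast` module to list functions and classes lacking docstrings:

```python
import ast

# Inline sample source for demonstration; in practice this would be
# read from files in the codebase being analyzed.
SOURCE = '''
def documented():
    """Has a docstring."""
    return 1

def undocumented():
    return 2
'''

def find_undocumented(source: str) -> list[str]:
    """Return names of functions/classes that lack a docstring."""
    tree = ast.parse(source)
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing

print(find_undocumented(SOURCE))  # -> ['undocumented']
```

A coverage report then follows directly: documented count divided by total definitions, which gives the skill a measurable output to validate against.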
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is entirely filler and boilerplate. It explains nothing Claude doesn't already know, repeats the phrase 'code documentation analyzer' excessively, and provides zero substantive information about how to actually analyze code documentation. | 1 / 3 |
| Actionability | There are no concrete steps, commands, code examples, or executable guidance whatsoever. Every section is vague and abstract — 'Provides step-by-step guidance' without actually providing any steps. | 1 / 3 |
| Workflow Clarity | No workflow is defined at all. There are no steps, no sequence, no validation checkpoints — just generic claims about capabilities without any actual process description. | 1 / 3 |
| Progressive Disclosure | The content is a flat, shallow document with no references to detailed materials, no linked resources, and no meaningful structure beyond generic placeholder headings. There is nothing to progressively disclose because there is no substantive content. | 1 / 3 |
| **Total** | | 4 / 12 Passed |
### Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 checks passed. Warnings:
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | 9 / 11 Passed |
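Both warnings are frontmatter-level and can typically be fixed by editing SKILL.md's frontmatter directly. The keys below are hypothetical illustrations (the report does not show the actual offending values):

```yaml
# Before: hypothetical frontmatter that would trigger both warnings
allowed-tools: [Read, MyCustomTool]   # 'MyCustomTool' is an unusual tool name
category: technical-docs              # unknown top-level key

# After: restrict to recognized tools and move extra keys under 'metadata'
allowed-tools: [Read]
metadata:
  category: technical-docs
```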