
code-documentation-doc-generate

You are a documentation expert specializing in creating comprehensive, maintainable documentation from code. Generate API docs, architecture diagrams, user guides, and technical references using AI-powered analysis and industry best practices.


Quality: 27% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/code-documentation-doc-generate/SKILL.md

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear domain (code documentation) and lists several output types, but relies on buzzwords ('AI-powered analysis', 'industry best practices') and uses second-person framing ('You are a documentation expert') which is inappropriate for a skill description. The complete absence of a 'Use when...' clause significantly weakens its utility for skill selection, and the trigger terms, while present, lack the breadth needed for reliable matching.

Suggestions

Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks to generate documentation, create API references, document codebases, write READMEs, or produce architecture diagrams from source code.'

Replace vague buzzwords like 'AI-powered analysis' and 'industry best practices' with concrete actions such as 'parses source code to extract function signatures, class hierarchies, and module dependencies'.

Rewrite in third person voice (e.g., 'Generates API docs, architecture diagrams...' instead of 'You are a documentation expert') and add common user-facing trigger terms like 'README', 'docstrings', 'Swagger/OpenAPI', '.md files'.
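Applied together, the suggestions above might produce frontmatter along these lines (a sketch only, assuming a SKILL.md with `name` and `description` frontmatter keys; the exact schema and wording are illustrative, not the skill's actual content):

```yaml
---
name: code-documentation-doc-generate
description: >
  Generates API docs, architecture diagrams, user guides, and technical
  references by parsing source code to extract function signatures, class
  hierarchies, and module dependencies. Use when the user asks to generate
  documentation, create API references, document a codebase, write a README,
  add docstrings, produce Swagger/OpenAPI references, or update .md files.
---
```

Note how this version is third person, leads with concrete actions, and embeds a "Use when..." clause carrying the trigger terms the review flags as missing.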

Dimension / Reasoning / Score

Specificity

Names the domain (documentation from code) and lists some outputs (API docs, architecture diagrams, user guides, technical references), but uses vague qualifiers like 'AI-powered analysis' and 'industry best practices' which are buzzwords rather than concrete actions.

2 / 3

Completeness

Describes what it does (generate various documentation types) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and the 'when' is entirely absent, warranting a score of 1.

1 / 3

Trigger Term Quality

Includes some relevant keywords like 'API docs', 'architecture diagrams', 'user guides', 'technical references', and 'documentation', but misses common user variations like 'README', 'docstrings', 'JSDoc', 'swagger', or 'code comments'. The terms are reasonable but not comprehensive.

2 / 3

Distinctiveness / Conflict Risk

The focus on documentation generation from code is somewhat specific, but terms like 'user guides' and 'technical references' could overlap with general writing or technical writing skills. The lack of explicit file types or trigger conditions reduces distinctiveness.

2 / 3

Total: 7 / 12

Passed

Implementation: 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like a high-level project brief than actionable instructions for Claude. It lacks any concrete code, commands, tool configurations, or specific examples, relying entirely on abstract directives. The heavy delegation to an external playbook without sufficient standalone content makes the skill body itself minimally useful.

Suggestions

Add concrete, executable examples for at least one documentation type (e.g., generating API docs with a specific tool like pydoc, typedoc, or Sphinx, with actual commands and config snippets).

Replace vague instructions like 'Extract information from code' with specific steps, e.g., 'Run `typedoc --entryPointStrategy expand ./src` to generate API reference from TypeScript source.'

Add validation checkpoints to the workflow, such as 'Run `markdownlint docs/` to check formatting' or 'Verify all public functions have docstrings with `interrogate -v .`'.

Include at least a minimal concrete example of expected output (e.g., a sample generated doc structure or a documentation plan template) directly in the skill body rather than deferring everything to the external playbook.
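As one concrete illustration of the kind of validation checkpoint the suggestions above call for, a minimal docstring-coverage check can be built with Python's standard `ast` module (a sketch in the spirit of `interrogate`; the function name and sample source are hypothetical, not part of the skill):

```python
import ast


def public_functions_missing_docstrings(source: str) -> list[str]:
    """Return names of public functions/classes in `source` without docstrings."""
    tree = ast.parse(source)
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if node.name.startswith("_"):
                continue  # skip private definitions
            if ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing


sample = '''
def documented():
    """Has a docstring."""

def undocumented():
    pass
'''

print(public_functions_missing_docstrings(sample))  # ['undocumented']
```

A workflow step like "fail the docs build if this list is non-empty" would give the skill the explicit pass/fail criterion the Workflow Clarity dimension finds missing.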

Dimension / Reasoning / Score

Conciseness

The skill includes some unnecessary sections like 'Context' that restates the description, and the 'Use this skill when' / 'Do not use this skill when' sections are somewhat verbose. However, it's not egregiously padded—most sections are reasonably brief.

2 / 3

Actionability

The instructions are entirely abstract and vague—'Identify required doc types,' 'Extract information from code,' 'Generate docs with consistent terminology' are descriptions of goals, not concrete executable steps. There are no code examples, specific commands, tool configurations, or copy-paste ready snippets.

1 / 3

Workflow Clarity

The instructions list high-level phases without any concrete sequencing, validation checkpoints, or feedback loops. Steps like 'Add automation (linting, CI) and validate accuracy' are vague and lack any specifics on how to validate or what constitutes passing validation.

1 / 3

Progressive Disclosure

There is a reference to `resources/implementation-playbook.md` for detailed examples, which is good one-level-deep disclosure. However, the main content is too thin to serve as a useful overview—it delegates almost all substance to the external file without providing enough actionable content in the skill itself.

2 / 3

Total: 6 / 12

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11

Passed

Repository: sickn33/antigravity-awesome-skills (Reviewed)

