
update-docs

Syncs documentation from source-of-truth files like package.json and .env.example


Quality: 27%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Security (by Snyk): Passed
No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./.claude/skills/update-docs/SKILL.md

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description conveys a reasonable concept but lacks the detail needed for reliable skill selection. It omits explicit trigger guidance ('Use when...'), doesn't enumerate the concrete actions performed, and could benefit from more natural user-facing keywords. The description is too terse to confidently distinguish it from other documentation-related skills.

Suggestions

- Add a 'Use when...' clause with explicit triggers, e.g., 'Use when documentation is out of sync with source files, or when the user mentions updating README, syncing docs, or documentation drift.'
- List specific concrete actions such as 'Updates README sections, regenerates environment variable docs, validates that documented dependencies match package.json.'
- Include more natural trigger terms users might say, such as 'README', 'update docs', 'outdated documentation', 'env vars', or 'dependency list'.
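Combined, the suggestions above might produce a frontmatter description along these lines. This is a hypothetical sketch, not the skill's actual metadata; the field names follow the common SKILL.md frontmatter convention, and the wording is illustrative:

```yaml
---
name: update-docs
description: >
  Syncs documentation from source-of-truth files. Updates README sections,
  regenerates environment variable docs from .env.example, and validates
  that documented dependencies and scripts match package.json. Use when
  documentation is out of sync with source files, or when the user mentions
  updating the README, syncing docs, outdated documentation, env vars, or
  documentation drift.
---
```

A description like this names the concrete actions, carries a 'Use when...' clause, and includes the natural trigger terms users are likely to type.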

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (documentation syncing) and gives examples of source files (package.json, .env.example), but doesn't list the specific actions performed (e.g., updating README sections, generating docs, validating consistency). | 2 / 3 |
| Completeness | Describes what it does (syncs documentation from source-of-truth files) but has no 'Use when...' clause or equivalent explicit trigger guidance, which per the rubric caps completeness at 2; the 'what' itself is also somewhat thin, placing this at 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'documentation', 'package.json', '.env.example', and 'syncs', but misses common variations users might say such as 'README', 'docs out of date', 'update docs', 'documentation drift', or 'sync README'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The concept of syncing docs from source-of-truth files is somewhat specific, but 'documentation' is broad enough to overlap with general documentation generation or editing skills. The mention of specific files like package.json and .env.example helps but isn't sufficient for a clear niche. | 2 / 3 |

Total: 7 / 12 (Passed)

Implementation: 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a high-level task outline rather than actionable instructions. It lacks concrete code examples for parsing source files, templates for generated documentation, validation steps, and any error handling guidance. The brevity is somewhat appropriate but comes at the cost of actionability—Claude would need to make many assumptions about output format, content depth, and edge cases.

Suggestions

- Add concrete code or commands for parsing package.json scripts and .env.example (e.g., using jq or a Python/Node script with expected output format)
- Provide templates or example snippets showing what the generated CONTRIB.md and RUNBOOK.md sections should look like
- Add validation steps: verify generated docs render correctly, check that all scripts/env vars from source files are accounted for, and confirm no broken links
- Specify how to handle edge cases: missing source files, empty scripts section, .env.example with no comments, or docs directory not existing

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Reasonably brief and doesn't over-explain concepts Claude knows, but some items like 'Include descriptions from comments' and 'Document purpose and format' are vague filler rather than precise instructions. Could be tighter. | 2 / 3 |
| Actionability | No executable code, no concrete commands, no example outputs, no templates for the generated docs. The skill describes what to do at a high level but provides no specific guidance on how to parse package.json, what the output format should look like, or how to identify obsolete docs programmatically. | 1 / 3 |
| Workflow Clarity | Steps are listed but lack validation checkpoints, error handling, or feedback loops. There's no verification that generated docs are correct, no handling of missing files (e.g., what if .env.example doesn't exist), and no explicit sequencing dependencies between steps. For a multi-step documentation generation process, this is insufficient. | 1 / 3 |
| Progressive Disclosure | The content is short and organized with numbered steps, which is fine for a simple skill. However, it references generating two separate doc files (CONTRIB.md and RUNBOOK.md) with substantial content but provides no templates or references to example files that would help guide generation. | 2 / 3 |

Total: 6 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: sc30gsw/claude-code-customes (Reviewed)
