
audit-content

Comprehensive content quality and maintenance assessment. Evaluates documentation quality, relevance, maintenance needs, and provides actionable recommendations.


Quality: 22% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/audit-content/SKILL.md`

Quality

Discovery: 9%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is too abstract and buzzword-heavy, reading more like a marketing tagline than a functional skill description. It lacks concrete actions, natural trigger terms users would say, explicit 'when to use' guidance, and sufficient specificity to distinguish it from other documentation or content-related skills.

Suggestions

Add a 'Use when...' clause with specific trigger scenarios, e.g., 'Use when the user asks to audit documentation, check for outdated content, review doc quality, or assess whether docs need updating.'

Replace vague terms like 'comprehensive content quality assessment' with concrete actions, e.g., 'Checks documentation for broken links, outdated references, readability issues, and missing sections. Scores content freshness and flags pages needing updates.'

Include natural user-facing keywords like 'doc review', 'stale docs', 'documentation audit', 'content review', 'outdated pages' to improve trigger term coverage.
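Taken together, the suggestions above might yield a description like the following sketch. The frontmatter field names here are an assumption about Tessl's SKILL.md schema, not confirmed:

```yaml
# Hypothetical SKILL.md frontmatter; field names are assumed, not confirmed.
name: audit-content
description: >
  Checks documentation for broken links, outdated references, readability
  issues, and missing sections; scores content freshness and flags pages
  needing updates. Use when the user asks to audit documentation, review
  doc quality, find stale or outdated content, or assess whether docs
  need updating.
```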

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (content/documentation quality) and some actions (evaluates quality, relevance, maintenance needs, provides recommendations), but the actions are fairly high-level and not concrete enough to clearly distinguish specific capabilities like "checks for broken links, identifies outdated sections, scores readability". | 2 / 3 |
| Completeness | While it partially addresses "what" (evaluates documentation quality, relevance, maintenance needs), there is no "Use when..." clause or equivalent explicit trigger guidance. Per the rubric, a missing "Use when..." clause caps completeness at 2, and the "what" is also vague enough that this falls to 1. | 1 / 3 |
| Trigger Term Quality | The description lacks natural keywords a user would actually say. Terms like "content quality", "maintenance assessment", and "actionable recommendations" are abstract and buzzwordy rather than natural trigger terms. Missing terms like "review docs", "outdated documentation", "doc audit", and "stale content". | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is very generic: "content quality assessment" and "documentation quality" could overlap with many skills related to writing, editing, documentation generation, linting, or code review. There are no distinct triggers to differentiate it from other documentation or content-related skills. | 1 / 3 |
| Total | | 5 / 12 |

Passed

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like a project specification or requirements document than an actionable skill for Claude. It describes what a content audit should accomplish but provides no concrete implementation: no code, no specific commands, no example output formats, and no tool usage patterns. The workflow is logically sequenced but lacks the executable specificity needed for Claude to reliably perform the task.

Suggestions

Add concrete, executable code or command examples for each workflow step (e.g., how to scan files, check modification dates, validate links using specific tools or shell commands).

Provide a concrete example of the CONTENT_AUDIT_REPORT output format with sample data so Claude knows exactly what to produce.

Add a validation/verification step before applying fixes in FIX_MODE, including a feedback loop (preview changes → confirm → apply → verify).

Remove explanatory parentheticals that Claude already understands (e.g., '(e.g., plain language)', '(e.g., > 6 months old)') to improve token efficiency.
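One way to make the first suggestion concrete is a small script in the skill body. The file filter, staleness threshold, and finding format below are illustrative assumptions, not part of the skill as written:

```python
# Illustrative sketch of one audit step: flag stale files and obviously
# broken relative markdown links. Paths, thresholds, and the finding
# format are assumptions for demonstration.
import os
import re
import time

STALE_DAYS = 180  # the "> 6 months old" threshold from the skill's own example


def audit_path(root):
    """Walk ROOT and return (path, kind, detail) findings."""
    findings = []
    now = time.time()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".md"):
                continue
            path = os.path.join(dirpath, name)
            # Staleness check based on filesystem modification time.
            age_days = (now - os.path.getmtime(path)) / 86400
            if age_days > STALE_DAYS:
                findings.append((path, "stale", f"{age_days:.0f} days old"))
            with open(path, encoding="utf-8") as f:
                text = f.read()
            # Flag relative markdown links whose target file does not exist.
            for target in re.findall(r"\]\((?!https?://|#)([^)#]+)", text):
                if not os.path.exists(os.path.join(dirpath, target)):
                    findings.append((path, "broken-link", target))
    return findings
```

In FIX_MODE, findings like these would feed the suggested preview → confirm → apply → verify loop rather than being fixed blindly.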

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is reasonably structured but includes some unnecessary elaboration. Parenthetical examples like "(e.g., plain language)" and explanations of what each quality metric means add little value for Claude. The inputs section is clean, but the workflow steps could be tighter. | 2 / 3 |
| Actionability | The skill is entirely abstract and descriptive: there are no concrete commands, code snippets, or executable examples. Instructions like "Scan the target PATH to list all content assets" and "Evaluate content against quality metrics" describe what to do without showing how. No specific tools, scripts, or implementation patterns are provided. | 1 / 3 |
| Workflow Clarity | Steps are listed in a logical sequence (inventory → assessment → reporting), but there are no validation checkpoints or feedback loops. For a content audit that could involve batch operations and automated fixes (FIX_MODE), there is no verify-before-applying step, which caps this at 2. | 2 / 3 |
| Progressive Disclosure | The content is organized into clear sections (Inputs, Workflow steps, Outputs, Quick Reference), which is good. However, the quality assessment criteria and severity definitions could be split into reference files rather than kept inline. There are no references to external files for detailed guidance. | 2 / 3 |
| Total | | 7 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11

Passed
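As a sketch of the suggested fix for that warning, unknown keys can be dropped or nested under a metadata block; the recognized key set and the example keys below are assumptions:

```yaml
# Hypothetical fix: keep only recognized keys at the top level and move
# anything else under metadata. The recognized set is assumed here.
name: audit-content
description: Audits documentation quality and freshness.
metadata:
  author: dandye          # example of a key moved out of the top level
  tags: [documentation]   # another illustrative relocated key
```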

Repository: dandye/ai-runbooks (Reviewed)
