
agent-readiness-audit

Audit a documentation site for agent-friendliness: discovery, markdown delivery, crawlability, semantic structure, machine-readable surfaces, and content legibility. Use when asked to assess docs.docker.com or any docs site for AI/agent readiness, produce a scored report, compare with external scanners, or generate a remediation list. Triggers on: "audit docs for agent readiness", "how agent-friendly is docs.docker.com", "score our docs for AI agents", "review llms.txt / markdown / crawlability", "create an agent-readiness remediation plan".

93

Quality: 92%. Does it follow best practices?

Impact: No eval scenarios have been run.

Security (by Snyk): Advisory. Suggest reviewing before use.


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines a specific niche (agent-friendliness auditing of documentation sites), lists concrete capabilities, provides explicit 'Use when' guidance, and includes natural trigger phrases. It uses proper third-person voice throughout and is concise yet comprehensive. The description would allow Claude to confidently select this skill from a large pool without ambiguity.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: audit for discovery, markdown delivery, crawlability, semantic structure, machine-readable surfaces, and content legibility; produce scored reports; compare with external scanners; generate remediation lists. | 3 / 3 |
| Completeness | Clearly answers both 'what' (audit documentation sites for agent-friendliness across six dimensions, produce scored reports, compare with scanners, generate remediation lists) and 'when' (explicit 'Use when' clause plus specific trigger phrases). | 3 / 3 |
| Trigger Term Quality | Includes excellent natural trigger phrases users would actually say, such as 'audit docs for agent readiness', 'score our docs for AI agents', 'review llms.txt / markdown / crawlability', and 'create an agent-readiness remediation plan'. Good coverage of variations, including specific site references. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche: auditing documentation sites specifically for AI/agent readiness is a very specific domain unlikely to conflict with general documentation, general auditing, or general AI skills. The trigger terms are unique and well-scoped. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-crafted, professional skill that provides a clear 9-step workflow for auditing documentation sites for agent-friendliness. Its strengths are strong actionability with concrete commands and checks, excellent workflow sequencing with validation built in, and good progressive disclosure to supporting files. The main weakness is moderate verbosity—some conditional explanations and Docker-specific context could be trimmed to improve token efficiency.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is generally well written and avoids explaining basic concepts, but it is somewhat verbose for what it conveys. Several sections include hedging and conditional explanations (e.g., the Docker-specific MCP paragraph, repeated reminders about docs-only vs. app hosts) that could be tightened. However, it largely respects Claude's intelligence and doesn't over-explain fundamentals. | 2 / 3 |
| Actionability | The skill provides concrete, executable guidance throughout: specific bash commands with arguments, exact file paths to check (/llms.txt, /robots.txt, /sitemap.xml), specific sampling criteria (at least 12 pages, named page types), explicit fetch checks to perform, and references to a bundled script and rubric. The instructions are specific enough to act on immediately. | 3 / 3 |
| Workflow Clarity | The 9-step workflow is clearly sequenced, with logical progression from scoping → gathering signals → sampling → checking → scoring → comparing → remediating → reporting. Validation is embedded throughout (e.g., 'score only what you verified', 'trust the live fetch' over scanner disagreements, foundational caps preventing inflated scores). The priority-based remediation list (P0/P1/P2) provides a clear feedback mechanism. | 3 / 3 |
| Progressive Disclosure | The skill appropriately references external files for detailed content: the rubric is in references/rubric.md, the report template is in references/report-template.md, and the baseline probe script is in scripts/baseline-probes.sh. These are one-level-deep, clearly signaled references. The main SKILL.md stays at the right level of abstraction as an orchestration guide. | 3 / 3 |
| Total | | 11 / 12 |

Passed
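The fetch checks scored above can be sketched as a minimal probe. This is an illustration only, not the bundled scripts/baseline-probes.sh: the function names, timeout, and return shape are invented here, and only the three well-known paths come from the report.

```python
# Hypothetical sketch of a baseline probe: fetch a few well-known
# machine-readable paths on a docs host and record the HTTP status per path.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

# Paths named in the report; any docs host can be substituted.
PROBE_PATHS = ["/llms.txt", "/robots.txt", "/sitemap.xml"]

def probe_urls(host: str) -> list[str]:
    """Build the absolute probe URLs for a docs host."""
    return [f"https://{host}{path}" for path in PROBE_PATHS]

def probe(host: str, timeout: float = 10.0) -> dict[str, int]:
    """Map each probe path to its HTTP status code (0 if unreachable)."""
    results: dict[str, int] = {}
    for path, url in zip(PROBE_PATHS, probe_urls(host)):
        try:
            with urlopen(url, timeout=timeout) as resp:
                results[path] = resp.status
        except HTTPError as err:
            results[path] = err.code  # e.g. 404 when the surface is missing
        except URLError:
            results[path] = 0
    return results
```

Status codes alone are only the first signal; the skill's rubric also weighs what each surface actually contains once fetched.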

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed
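The frontmatter_unknown_keys warning above can be illustrated with a small check: parse the '---'-delimited frontmatter of a SKILL.md and flag keys outside an allow-list. The allow-list below is an assumption for illustration, not the actual spec's key set.

```python
# Illustrative frontmatter check; the allowed-key set is assumed, not
# taken from the skill spec.
ALLOWED_KEYS = {"name", "description"}  # assumption: real spec may allow more

def frontmatter_keys(text: str) -> set[str]:
    """Extract top-level keys from a '---'-delimited frontmatter block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return set()  # no frontmatter present
    keys: set[str] = set()
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter
        # top-level keys are unindented "key: value" lines
        if ":" in line and not line.startswith((" ", "\t")):
            keys.add(line.split(":", 1)[0].strip())
    return keys

def unknown_keys(text: str) -> set[str]:
    """Keys present in the frontmatter but not in the allow-list."""
    return frontmatter_keys(text) - ALLOWED_KEYS
```

A document whose frontmatter carries an extra key (say, author) would produce a non-empty result here, mirroring the Warning in the table above.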

Repository: docker/docs (Reviewed)

