
audit-agents-skills

Comprehensive quality audit for Claude Code agents, skills, and commands with comparative analysis

Install with Tessl CLI

npx tessl i github:FlorianBruniaux/claude-code-ultimate-guide --skill audit-agents-skills

54 · 2.08x

Quality: 33%. Does it follow best practices?

Impact: 98% (2.08x). Average score across 3 eval scenarios.

Optimize this skill with Tessl

npx tessl skill review --optimize ./examples/skills/audit-agents-skills/SKILL.md
Review

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies its domain (Claude Code quality auditing) but remains too high-level and abstract. It lacks concrete actions and the natural trigger terms users would say, and, critically, it is missing any 'Use when...' guidance that would help Claude know when to select this skill over others.

Suggestions

- Add a 'Use when...' clause with explicit triggers like 'Use when the user asks to review, validate, or audit their skills, agents, or commands'
- List specific concrete actions such as 'validates YAML frontmatter, checks description quality, identifies missing fields, compares against best practices'
- Include natural user phrases as trigger terms: 'review my skill', 'check my agent', 'validate command', 'skill quality', 'audit my configuration'
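Putting those suggestions together, a revised frontmatter might read as follows. This is an illustrative sketch, not the skill's actual metadata; the field names follow the usual SKILL.md frontmatter convention and the wording is hypothetical:

```yaml
---
name: audit-agents-skills
description: >
  Audits Claude Code agents, skills, and commands: validates YAML frontmatter,
  checks description quality, identifies missing fields, and compares against
  best practices. Use when the user asks to review, validate, audit, or lint
  their skills, agents, or commands (e.g. 'review my skill', 'check my agent',
  'validate command', 'audit my configuration').
---
```

A description in this shape names concrete actions, carries an explicit 'Use when...' clause, and embeds the phrases users actually type.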

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain ('quality audit for Claude Code agents, skills, and commands') and mentions 'comparative analysis' as an action, but lacks concrete specific actions like 'validates syntax', 'checks for errors', or 'generates reports'. | 2 / 3 |
| Completeness | Describes what it does at a high level but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'quality audit', 'agents', 'skills', 'commands', but misses natural user phrases like 'review my skill', 'check quality', 'validate', or 'lint'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The phrase 'Claude Code agents, skills, and commands' provides some specificity to this ecosystem, but 'quality audit' and 'comparative analysis' are generic enough to potentially overlap with other review/analysis skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive but severely over-engineered for a SKILL.md file. It explains concepts Claude already knows (YAML parsing, regex, Jaccard similarity), includes unnecessary industry context, and embeds content that belongs in referenced files. The core audit workflow is sound but buried under excessive documentation.

Suggestions

- Reduce content by 70%+ by moving the Industry Context, CI/CD Integration, Maintenance, and Detection Patterns sections to separate reference files
- Remove explanations of basic concepts (YAML parsing, regex, token counting): Claude already knows these
- Make the workflow actionable by providing the actual scoring/criteria.yaml content, or a complete inline version
- Add explicit validation checkpoints between phases (e.g. 'Verify all files parsed successfully before scoring')
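To make the third suggestion concrete, an inline version of the scoring criteria could look something like this. The structure below is purely illustrative; the skill's actual scoring/criteria.yaml is not provided, and these keys and thresholds are assumptions:

```yaml
# Hypothetical inline scoring criteria (the shipped scoring/criteria.yaml is not shown)
discovery:
  specificity:     {max: 3, check: "Names concrete actions, not just the domain"}
  completeness:    {max: 3, check: "Includes an explicit 'Use when...' clause"}
  trigger_terms:   {max: 3, check: "Contains phrases users actually type"}
  distinctiveness: {max: 3, check: "Unlikely to collide with other skills"}
pass_threshold: 7  # out of 12
```

Inlining even a minimal table like this would let the audit run without depending on files the skill does not ship.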

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at ~500+ lines, with extensive explanations Claude doesn't need (what YAML is, how regex works, industry report summaries). The 'Industry Context' section alone is unnecessary padding. Much of this content could live in referenced files. | 1 / 3 |
| Actionability | Contains some concrete code snippets (Python detection patterns, YAML examples), but much is pseudocode or conceptual. The actual audit workflow relies on external files (scoring/criteria.yaml) that aren't provided, making it incomplete. | 2 / 3 |
| Workflow Clarity | Five phases are clearly numbered in a logical sequence, but validation checkpoints are weak: there are no explicit 'verify before proceeding' steps between phases. The workflow describes what happens but lacks error-recovery guidance. | 2 / 3 |
| Progressive Disclosure | References external files (scoring/criteria.yaml, examples/) appropriately, but the SKILL.md itself is a monolithic wall of text. Content like the 'Industry Context', 'CI/CD Integration', and 'Maintenance' sections should be in separate files. | 2 / 3 |
| Total | | 7 / 12 (Passed) |

Validation: 72%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 8 / 11 checks passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (548 lines); consider splitting into references/ and linking | Warning |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 8 / 11 (Passed) |
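The warnings above correspond to simple mechanical checks. A minimal sketch of how two of them (skill_md_line_count and frontmatter_unknown_keys) might be implemented follows; the function name, the recognized-key set, and the 500-line threshold are assumptions for illustration, not Tessl's actual implementation:

```python
import re

# Keys a (hypothetical) SKILL.md spec recognizes in frontmatter.
KNOWN_KEYS = {"name", "description", "allowed-tools", "license"}
MAX_LINES = 500  # assumed threshold for the line-count warning

def audit_skill_md(text: str) -> list[str]:
    """Return a list of warning strings for a SKILL.md document."""
    warnings = []

    lines = text.splitlines()
    if len(lines) > MAX_LINES:
        warnings.append(
            f"SKILL.md is long ({len(lines)} lines); consider splitting into references/"
        )

    # Naive frontmatter scan: the block between the first pair of '---' fences.
    match = re.match(r"\A---\n(.*?)\n---", text, re.DOTALL)
    if match:
        for line in match.group(1).splitlines():
            key = line.split(":", 1)[0].strip()
            if key and not line.startswith((" ", "#")) and key not in KNOWN_KEYS:
                warnings.append(f"Unknown frontmatter key: {key!r}")

    return warnings
```

A real validator would parse the frontmatter with a YAML library rather than splitting on colons, but the sketch shows why these checks are cheap enough to run on every skill.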


