
jbvc/debugger

Debugging specialist for errors, test failures, and unexpected behavior. Use proactively when encountering any issues.

Quality: 46% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)


Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is too vague and broad to be effective for skill selection. It identifies a domain (debugging) but fails to list concrete actions, lacks sufficient trigger terms, and its overly generic scope ('any issues') would cause frequent conflicts with other skills. It needs specific capabilities and narrower, more natural trigger terms.

Suggestions

Add concrete actions the skill performs, e.g., 'Analyzes stack traces, diagnoses test failures, traces root causes of bugs, inspects error logs, and suggests fixes.'

Narrow and enrich the trigger clause with natural user terms, e.g., 'Use when the user mentions bugs, crashes, exceptions, stack traces, error messages, broken tests, or says something is not working.'

Replace 'any issues' with specific scenarios to reduce conflict risk, e.g., 'Use when code produces unexpected output, tests fail, or runtime errors occur.'
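Taken together, these suggestions could produce a description like the following. This is an illustrative sketch of SKILL.md frontmatter, not the skill's actual metadata; the `name`/`description` fields follow the common agent-skill frontmatter convention, and the wording simply combines the example phrases above:

```yaml
---
name: debugger
description: >-
  Analyzes stack traces, diagnoses test failures, traces root causes of
  bugs, inspects error logs, and suggests fixes. Use when the user mentions
  bugs, crashes, exceptions, stack traces, error messages, or broken tests,
  or when code produces unexpected output, tests fail, or runtime errors
  occur.
---
```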

Scores by dimension

Specificity (1 / 3)

The description uses vague language like 'errors, test failures, and unexpected behavior' without listing any concrete actions. It says what domain it covers but not what it actually does (e.g., analyze stack traces, set breakpoints, inspect variables, bisect commits).

Completeness (2 / 3)

The 'what' is weak (just 'debugging specialist') and the 'when' clause exists ('Use proactively when encountering any issues') but is extremely broad and vague rather than providing explicit, useful triggers. The 'when' is so generic it barely qualifies.

Trigger Term Quality (2 / 3)

Includes some relevant keywords like 'errors', 'test failures', and 'debugging' that users might naturally mention, but misses common variations like 'bug', 'crash', 'exception', 'stack trace', 'broken', 'not working', 'failing'.

Distinctiveness / Conflict Risk (1 / 3)

'Errors, test failures, and unexpected behavior' is extremely broad and would overlap with virtually any coding, testing, or development skill. 'Any issues' makes the trigger scope so wide it could conflict with many other skills.

Total: 6 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a generic, abstract debugging guide that provides high-level process steps but lacks any concrete, actionable content—no code examples, no specific commands, no tool usage patterns. The workflow has a reasonable sequence but misses validation checkpoints and feedback loops. The content reads more like a description of what a debugger does rather than instructions that would meaningfully augment Claude's capabilities.

Suggestions

Add concrete, executable examples: show specific debugging commands (e.g., using a debugger, reading stack traces, adding logging statements) with real code snippets rather than abstract descriptions.

Include a feedback loop in the workflow: after 'Form and test hypotheses', add explicit steps for what to do when a hypothesis is disproven, and add a validation checkpoint before declaring the fix complete.

Remove the generic 'Use this skill when' / 'Do not use this skill when' boilerplate that just restates the skill name without adding value, and cut meta-instructions like 'Clarify goals, constraints, and required inputs' that Claude already knows to do.

Describe what 'resources/implementation-playbook.md' contains so Claude knows when to reference it, e.g., 'For language-specific debugging patterns and common error catalogs, see [implementation-playbook.md](resources/implementation-playbook.md).'
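To illustrate the kind of concrete, executable content the Actionability critique calls for, an abstract step like 'Add strategic debug logging' could be replaced with a snippet such as this. The `parse_price` function is a made-up example for demonstration, not part of the skill under review:

```python
import logging
import traceback

# Log the function name alongside each message so entries can be traced
# back to their source without a debugger attached.
logging.basicConfig(level=logging.DEBUG,
                    format="%(levelname)s %(funcName)s: %(message)s")
log = logging.getLogger(__name__)

def parse_price(raw: str) -> float:
    log.debug("raw input: %r", raw)  # capture the exact failing input
    try:
        return float(raw.strip().lstrip("$"))
    except ValueError:
        # Record the full traceback before re-raising, so the failure
        # is diagnosable from the logs alone.
        log.error("could not parse %r\n%s", raw, traceback.format_exc())
        raise
```

Pairing each step with a snippet like this turns the skill from a description of what a debugger does into instructions an agent can execute directly.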

Scores by dimension

Conciseness (2 / 3)

The content has some unnecessary filler (e.g., generic 'Use this skill when' / 'Do not use this skill when' sections that just repeat the word 'debugger', and vague meta-instructions like 'Clarify goals, constraints, and required inputs'). However, the core debugging steps are reasonably concise.

Actionability (1 / 3)

The skill provides only abstract, high-level guidance ('Analyze error messages and logs', 'Form and test hypotheses', 'Add strategic debug logging') with no concrete code examples, specific commands, or executable steps. Everything describes rather than instructs.

Workflow Clarity (2 / 3)

There is a numbered sequence (capture error, identify repro steps, isolate failure, implement fix, verify), which provides some structure. However, there are no explicit validation checkpoints, no feedback loops for when a hypothesis is wrong, and no concrete verification steps.

Progressive Disclosure (2 / 3)

There is a reference to 'resources/implementation-playbook.md' for detailed examples, which is good. However, the main content itself is poorly organized with redundant sections, and the reference is vaguely signaled ('If detailed examples are required') without describing what the playbook contains.

Total: 7 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria results

metadata_version: Warning ('metadata.version' is missing)

Total: 10 / 11 (Passed)
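The one warning above can be cleared by declaring a version in the skill's frontmatter. The exact schema is defined by the validator, but the check name 'metadata.version' suggests a nested key along these lines (an assumption, shown as illustrative YAML):

```yaml
---
name: debugger
metadata:
  version: 1.0.0
---
```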

Reviewed
