
debugging-strategies

Master systematic debugging techniques, profiling tools, and root cause analysis to efficiently track down bugs across any codebase or technology stack. Use when investigating bugs, performance issues, or unexpected behavior.

Overall score: 70 (0.98x)

Quality: 47% (Does it follow best practices?)

Impact: 84% (0.98x), average score across 6 eval scenarios

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/developer-essentials/skills/debugging-strategies/SKILL.md

Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has good structural completeness with an explicit 'Use when' clause, but suffers from moderate vagueness in its capability descriptions—listing categories of techniques rather than concrete actions. The trigger terms cover the basics but miss many natural user phrasings, and the 'any codebase or technology stack' framing reduces distinctiveness.

Suggestions

Replace abstract categories with specific concrete actions, e.g., 'Analyze stack traces, set breakpoints, profile memory and CPU usage, inspect logs, bisect commits to isolate regressions'.

Expand trigger terms in the 'Use when' clause to include natural user language like 'error', 'crash', 'exception', 'slow', 'not working', 'broken', 'memory leak'.

Dimension scores

Specificity: 2 / 3
Names the domain (debugging) and some actions ('debugging techniques, profiling tools, root cause analysis'), but these are still fairly abstract categories rather than concrete, specific actions like 'set breakpoints, analyze stack traces, inspect memory usage'.

Completeness: 3 / 3
Clearly answers both 'what' (systematic debugging techniques, profiling tools, and root cause analysis to track down bugs) and 'when', with an explicit 'Use when investigating bugs, performance issues, or unexpected behavior' clause.

Trigger Term Quality: 2 / 3
Includes some relevant keywords like 'bugs', 'performance issues', 'unexpected behavior', and 'debugging', but misses common natural variations users might say, such as 'error', 'crash', 'stack trace', 'slow', 'broken', 'not working', 'exception'.

Distinctiveness / Conflict Risk: 2 / 3
While debugging is a recognizable niche, the phrase 'across any codebase or technology stack' is very broad and could overlap with general coding-assistance, code-review, or performance-optimization skills. The scope is too wide to be clearly distinct.

Total: 9 / 12 (Passed)

Implementation

27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like a generic debugging tutorial or textbook chapter than a focused skill for Claude. It is extremely verbose, explaining well-known concepts (scientific method, rubber duck debugging, 'take breaks') that waste context window tokens. While it contains some useful executable code snippets, the majority of content is abstract checklists and advice that Claude already knows, and the monolithic structure with no external references makes it poorly suited as a SKILL.md.

Suggestions

Remove all generic debugging advice Claude already knows (scientific method, rubber duck debugging, 'read error messages,' 'take breaks,' common mistakes) — these waste tokens without adding value.

Split language-specific debugging tools (Python, Go, JS/TS) into separate reference files and link to them from a concise overview in SKILL.md.

Replace the abstract markdown checklists (rendered as code blocks) with concrete, executable examples showing actual debugging sessions with specific inputs and expected outputs.

Add explicit validation checkpoints to the debugging workflow, e.g., 'Confirm reproduction is consistent before proceeding to Phase 2' with concrete verification criteria.
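To illustrate the kind of concrete, executable example the suggestions call for: a checklist item like 'narrow down the failing input' could instead be shown as a runnable snippet with a specific input and expected output. The sketch below is hypothetical; the helper name and the zero-triggers-the-bug predicate are invented for illustration, not taken from the skill.

```python
def shrink_failing_input(items, fails):
    """Bisect a failing input down to a smaller reproducer.

    `fails` is a predicate returning True when the bug still triggers.
    """
    while len(items) > 1:
        mid = len(items) // 2
        left, right = items[:mid], items[mid:]
        if fails(left):
            items = left
        elif fails(right):
            items = right
        else:
            break  # the bug needs elements from both halves
    return items

# Hypothetical bug: the code under test crashes whenever 0 appears.
data = [3, 7, 0, 12, 5]
minimal = shrink_failing_input(data, lambda xs: 0 in xs)
print(minimal)  # prints: [0]
```

The point is not this particular helper but the shape of the guidance: a specific input, a specific command to run, and a stated expected output an agent can verify.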

Dimension scores

Conciseness: 1 / 3
Extremely verbose and padded with content Claude already knows: it explains basic concepts like the scientific method, rubber-duck debugging, 'read error messages', and 'take breaks'. The debugging-mindset section, common mistakes, and best practices are all generic advice that wastes tokens, and much of the content is markdown-within-markdown (checklists rendered as code blocks), which adds unnecessary formatting overhead.

Actionability: 2 / 3
Contains some executable code examples (pdb, Chrome DevTools, git bisect, VS Code launch.json) that are concrete and useful. However, much of the skill is abstract checklists and markdown tables rather than executable guidance, and the 'Debugging Patterns by Issue Type' sections are entirely descriptive bullet points with no concrete code or commands.
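The concrete examples the reviewer credits need not be large. As a hypothetical sketch of the same style, the standard-library `traceback` module can capture failure context non-interactively; the `buggy_divide` function here is invented purely for illustration.

```python
import traceback

def buggy_divide(a, b):
    return a / b  # crashes when b == 0

try:
    buggy_divide(10, 0)
except ZeroDivisionError:
    # The last line of the formatted traceback names the exception;
    # the full text also shows the failing line and the call chain,
    # the first artifacts to inspect when gathering information.
    last_line = traceback.format_exc().splitlines()[-1]
    print(last_line)  # prints: ZeroDivisionError: division by zero
```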

Workflow Clarity: 2 / 3
The four-phase process (Reproduce → Gather Info → Hypothesize → Test) provides a clear sequence, and git bisect has explicit steps. However, there are no validation checkpoints or feedback loops for the overall debugging process; the phases are described abstractly, without concrete verification steps to confirm progress between them.
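A validation checkpoint of the kind the reviewer asks for could be as small as the sketch below, which gates entry to Phase 2 on the failure reproducing consistently. The repro command and run count are assumptions made for the example, not anything from the skill.

```python
import subprocess
import sys

def reproduction_is_consistent(cmd, runs=5):
    """Phase 1 checkpoint: the failure must reproduce on every run
    before moving on to information gathering."""
    failures = sum(
        subprocess.run(cmd, capture_output=True).returncode != 0
        for _ in range(runs)
    )
    return failures == runs

# Hypothetical failing invocation; substitute the real repro command.
repro = [sys.executable, "-c", "raise SystemExit(1)"]
if not reproduction_is_consistent(repro):
    raise RuntimeError("Flaky repro: stabilize it before Phase 2")
```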

Progressive Disclosure: 1 / 3
This is a monolithic wall of text with no references to external files. At roughly 400+ lines covering multiple languages (JS/TS, Python, Go), debugging patterns, tools, and techniques, much of the content should be split into separate reference files. Everything is inlined, with no navigation structure or cross-references.

Total: 6 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

Criterion: skill_md_line_count
Result: Warning. SKILL.md is long (528 lines); consider splitting content into references/ and linking to it.

Total: 10 / 11 (Passed)

Repository: wshobson/agents (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.