debugging-strategies

Master systematic debugging techniques, profiling tools, and root cause analysis to efficiently track down bugs across any codebase or technology stack. Use when investigating bugs, performance issues, or unexpected behavior.

Overall score: 70 (0.98x multiplier)

Quality: 47% (Does it follow best practices?)

Impact: 84% (0.98x)

Average score across 6 eval scenarios

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/developer-essentials/skills/debugging-strategies/SKILL.md

Quality

Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has a solid structure with an explicit 'Use when' clause, which is its strongest aspect. However, it leans on abstract category names ('systematic debugging techniques', 'profiling tools') rather than listing concrete actions, and the trigger terms could be expanded to cover more natural user language. The broad scope ('any codebase or technology stack') weakens its distinctiveness.

Suggestions

Replace abstract categories with concrete actions, e.g., 'Analyze stack traces, set breakpoints, inspect logs, profile CPU/memory usage, and perform root cause analysis'.

Expand trigger terms in the 'Use when' clause to include common user phrases like 'error', 'crash', 'not working', 'slow response', 'exception', 'stack trace', or 'memory leak'.
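Applying both suggestions, a sharpened description might look like the sketch below. This is illustrative only: the frontmatter field names are assumed to follow the usual SKILL.md convention, and the wording is an invented example, not the maintainer's actual text.

```yaml
---
name: debugging-strategies
description: >
  Analyze stack traces, set breakpoints, inspect logs, profile CPU and
  memory usage, run git bisect, and perform root cause analysis. Use when
  code throws an error or exception, crashes, hangs, responds slowly,
  leaks memory, or is otherwise broken or not working as expected.
---
```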

Dimension | Reasoning | Score

Specificity

Names the domain (debugging) and some actions ('debugging techniques, profiling tools, root cause analysis'), but these are still fairly abstract categories rather than concrete specific actions like 'set breakpoints, analyze stack traces, inspect memory usage'.

2 / 3

Completeness

Clearly answers both 'what' (systematic debugging techniques, profiling tools, root cause analysis to track down bugs) and 'when' with an explicit 'Use when investigating bugs, performance issues, or unexpected behavior' clause.

3 / 3

Trigger Term Quality

Includes some relevant keywords like 'bugs', 'performance issues', 'unexpected behavior', and 'debugging', but misses common natural variations users might say such as 'error', 'crash', 'stack trace', 'slow', 'broken', 'not working', 'exception', or 'log analysis'.

2 / 3

Distinctiveness Conflict Risk

While debugging is a recognizable niche, the phrase 'across any codebase or technology stack' is very broad and could overlap with general coding assistance skills, code review skills, or performance optimization skills. The triggers 'bugs' and 'performance issues' could conflict with more specialized skills.

2 / 3

Total: 9 / 12 (Passed)

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like a generic debugging tutorial or blog post than a focused, actionable skill for Claude. It is excessively verbose, explaining well-known concepts (scientific method, rubber duck debugging, 'take breaks') that waste context window tokens. While it contains some useful code snippets, the majority of content is abstract checklists and motivational advice that Claude already knows, and the monolithic structure with no external references makes it poorly organized for practical use.

Suggestions

Cut at least 60% of the content by removing generic advice Claude already knows (scientific method, rubber duck debugging, 'read error messages,' 'take breaks,' common mistakes list) and focus only on concrete, executable techniques.

Split language-specific debugging sections (Python, Go, TypeScript) into separate reference files and link to them from a concise overview in SKILL.md.

Replace the markdown-in-code-block checklists with actual actionable commands or scripts—e.g., instead of a 'Reproduction Checklist' in a code fence, provide a concrete template or script that automates reproduction steps.
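As a hedged illustration of that suggestion, a reproduction checklist could become a script like the sketch below. Every name here (`buggy_function`, the empty-list failing input, the `ZeroDivisionError`) is a hypothetical placeholder, not content from the skill itself; the point is that reproduction becomes executable rather than a checklist in a code fence.

```python
"""Minimal reproduction script sketch (all names are hypothetical).

Swap in the real function under investigation and the smallest input
known to trigger the bug; the script then reproduces deterministically
instead of asking the reader to tick boxes.
"""

def buggy_function(items):
    # Hypothetical stand-in for the code path under investigation.
    return sum(items) / len(items)

def reproduce():
    """Return True if the bug reproduces on the minimal failing input."""
    failing_input = []  # smallest input known to trigger the bug
    try:
        buggy_function(failing_input)
    except ZeroDivisionError:
        return True  # bug reproduced deterministically
    return False
```

A script like this can also double as the regression test once the fix lands.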

Add explicit validation checkpoints and feedback loops to the debugging workflow, such as 'After adding logging, verify the log output contains X before proceeding to hypothesis formation.'
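One way to express such a checkpoint as code rather than prose is sketched below. The logger name and the expected log marker are invented for illustration; the pattern is simply "capture the instrumented output and assert on it before forming hypotheses."

```python
"""Sketch of an explicit validation checkpoint between debugging phases.

After instrumenting code with logging, verify the expected log line
actually appeared before moving on to hypothesis formation.
"""
import io
import logging

def run_instrumented_code(logger):
    # Placeholder for the code path under investigation, now instrumented.
    logger.debug("entering cache lookup, key=%s", "user:42")

def checkpoint_logging_works():
    stream = io.StringIO()
    logger = logging.getLogger("debug-session")
    logger.setLevel(logging.DEBUG)
    handler = logging.StreamHandler(stream)
    logger.addHandler(handler)
    try:
        run_instrumented_code(logger)
    finally:
        logger.removeHandler(handler)
    output = stream.getvalue()
    # Checkpoint: do not proceed to hypotheses until the log contains
    # the marker expected from the instrumented path.
    assert "cache lookup" in output, "instrumentation is not emitting logs"
    return output
```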

Dimension | Reasoning | Score

Conciseness

Extremely verbose and padded with content Claude already knows. Explains basic concepts like the scientific method, rubber duck debugging, 'read error messages,' and 'take breaks.' The debugging mindset section, common mistakes, and best practices are all generic advice that wastes tokens. Much of the content is markdown-within-markdown (checklists rendered as code blocks) adding unnecessary overhead.

1 / 3

Actionability

Contains some executable code examples (pdb, Chrome DevTools, git bisect, VS Code launch.json), but much of the skill is abstract checklists and markdown tables rather than concrete, copy-paste-ready commands. Many sections describe strategies in prose rather than providing specific executable guidance. The code examples that exist are reasonable but surrounded by vague advisory content.

2 / 3

Workflow Clarity

The four-phase debugging process (Reproduce → Gather Info → Hypothesize → Test) provides a clear sequence, but each phase is filled with generic checklists rather than explicit validation checkpoints or feedback loops. There's no concrete 'if this fails, do that' recovery path. For a skill involving systematic investigation, the lack of explicit verification steps between phases caps this at 2.

2 / 3
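The missing "if this fails, do that" structure could be made explicit with something like the following sketch. The phase names follow the skill's own Reproduce → Gather Info → Hypothesize → Test sequence, but the callable-based API is invented here for illustration.

```python
"""Sketch of the four-phase debugging loop with explicit verification
and recovery paths, rather than checklists between phases."""

def debug_session(reproduce, gather_info, hypothesize, test_fix, max_rounds=5):
    # The four arguments are caller-supplied callables (hypothetical API).
    if not reproduce():
        # Recovery path: without a reliable reproduction, nothing
        # downstream is trustworthy, so stop rather than guess.
        return "cannot-reproduce"
    for _ in range(max_rounds):
        evidence = gather_info()
        hypothesis = hypothesize(evidence)
        if hypothesis is None:
            # Recovery path: no hypothesis yet, gather more information.
            continue
        if test_fix(hypothesis):
            # Verification checkpoint: fix confirmed against reproduction.
            return "fixed"
        # A failed test falsifies the hypothesis; loop back with evidence.
    return "unresolved"
```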

Progressive Disclosure

This is a monolithic wall of text with no references to external files. At 300+ lines covering multiple languages, multiple debugging techniques, and multiple issue patterns, much of this content should be split into separate reference files (e.g., language-specific debugging guides, pattern-specific guides). Everything is inlined with no navigation structure beyond headers.

1 / 3

Total: 6 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

skill_md_line_count

SKILL.md is long (528 lines); consider splitting into references/ and linking

Warning

Total: 10 / 11 (Passed)

Repository: wshobson/agents (Reviewed)
