
debugging-strategies

Master systematic debugging techniques, profiling tools, and root cause analysis to efficiently track down bugs across any codebase or technology stack. Use when investigating bugs, performance issues, or unexpected behavior.

Score: 70 (0.98x)

Quality: 47% (Does it follow best practices?)

Impact: 84% (0.98x)

Average score across 6 eval scenarios

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/developer-essentials/skills/debugging-strategies/SKILL.md

Quality

Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has a solid structure with an explicit 'Use when' clause, which is its strongest aspect. However, it relies on abstract category names ('systematic debugging techniques', 'profiling tools') rather than listing concrete actions, and the trigger terms could be expanded to cover more natural user language. The overly broad scope ('any codebase or technology stack') weakens its distinctiveness.

Suggestions

- Replace abstract categories with concrete actions, e.g., 'Analyze stack traces, set breakpoints, inspect logs, profile CPU/memory usage, and perform root cause analysis'.
- Expand trigger terms in the 'Use when' clause to include common user phrases like 'error', 'crash', 'not working', 'slow performance', 'exception', 'stack trace', or 'log analysis'.

Scores by dimension:

Specificity (2 / 3)

Names the domain (debugging) and some actions ('debugging techniques, profiling tools, root cause analysis'), but these are still fairly abstract categories rather than concrete, specific actions like 'set breakpoints, analyze stack traces, inspect memory usage'.
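The contrast can be made tangible: one of the concrete actions named above, inspecting memory usage, takes only a few lines with Python's stdlib tracemalloc. This is an illustrative sketch, not code drawn from the skill under review:

```python
import tracemalloc

tracemalloc.start()

# Hypothetical workload: ~1 MB of small allocations to give the
# snapshot a clear allocation hotspot to report.
data = [bytes(1000) for _ in range(1000)]

snapshot = tracemalloc.take_snapshot()
top = snapshot.statistics("lineno")[:3]  # top allocation sites by source line

for stat in top:
    print(stat)  # file:line, total size, allocation count, average size

tracemalloc.stop()
```

A description listing actions like this one is both easier for an agent to match against a user request and easier to verify against the skill body.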

Completeness (3 / 3)

Clearly answers both 'what' (systematic debugging techniques, profiling tools, root cause analysis to track down bugs) and 'when' with an explicit 'Use when investigating bugs, performance issues, or unexpected behavior' clause.

Trigger Term Quality (2 / 3)

Includes some natural keywords like 'bugs', 'performance issues', 'unexpected behavior', and 'debugging', but misses common variations users would say such as 'error', 'crash', 'stack trace', 'slow', 'broken', 'not working', 'exception', or 'log analysis'.

Distinctiveness / Conflict Risk (2 / 3)

The phrase 'across any codebase or technology stack' is extremely broad and could overlap with general coding assistance skills. While debugging is a recognizable niche, this breadth reduces the description's distinctiveness.

Total: 9 / 12 (Passed)

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads like a generic debugging tutorial or textbook chapter rather than a focused, token-efficient skill for Claude. It is heavily padded with advice Claude already knows (read error messages, take breaks, rubber duck debugging) and lacks the tight, actionable structure needed. The few executable code examples are diluted by extensive prose checklists and motivational guidance.

Suggestions

- Cut at least 60% of the content by removing generic debugging advice Claude already knows (scientific method, rubber duck debugging, 'read error messages,' mindset tips) and focus only on concrete tool usage and commands.
- Split language-specific debugging sections (Python, JS/TS, Go) into separate referenced files to improve progressive disclosure and reduce the monolithic structure.
- Replace the markdown-in-markdown checklists (Phase 1-4) with concrete, executable examples showing actual debugging sessions with specific commands and expected outputs.
- Add explicit validation/feedback loops to workflows, e.g., 'Run the profiler → if hotspot is in function X, apply fix Y → re-profile to confirm improvement > Z%.'
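As an illustration of the last suggestion, a 'profile → fix → re-profile' validation loop might look like the following sketch. Here slow_sum and fast_sum are hypothetical stand-ins for a real hotspot and its fix, and cProfile stands in for whatever profiler the skill recommends:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Hypothetical hotspot: quadratic-time accumulation.
    total = 0
    for i in range(n):
        total += sum(range(i))
    return total

def profile(func, *args):
    """Run func under cProfile; return (result, total profiled seconds)."""
    profiler = cProfile.Profile()
    result = profiler.runcall(func, *args)
    stats = pstats.Stats(profiler, stream=io.StringIO())
    return result, stats.total_tt

# 1. Profile to locate the hotspot.
before_result, before_time = profile(slow_sum, 2000)

# 2. Apply the fix: replace the inner loop with the closed form
#    sum(range(i)) == i * (i - 1) // 2.
def fast_sum(n):
    return sum(i * (i - 1) // 2 for i in range(n))

# 3. Re-profile to confirm behavior is preserved and time improved.
after_result, after_time = profile(fast_sum, 2000)

assert before_result == after_result  # same answer, faster path
print(f"before={before_time:.4f}s after={after_time:.4f}s")
```

The closing assertion and the before/after comparison are exactly the validation checkpoint the reviewer says the skill's workflows lack.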

Scores by dimension:

Conciseness (1 / 3)

Extremely verbose and padded with content Claude already knows. Explains basic concepts like 'the scientific method,' 'rubber duck debugging,' and 'read error messages.' The debugging mindset section, common mistakes, and best practices are all generic advice that wastes tokens. Much of the content is markdown-within-markdown checklists that restate obvious debugging wisdom rather than providing novel, actionable guidance.

Actionability (2 / 3)

Contains some executable code examples (pdb, Chrome DevTools, git bisect, VS Code launch.json), but much of the skill is abstract checklists and markdown tables rather than concrete, copy-paste-ready commands. The 'Debugging Patterns by Issue Type' sections are entirely descriptive bullet points with no executable code. Many code snippets are illustrative rather than directly usable in a real debugging session.
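A sketch of what 'copy-paste-ready' could mean here: pdb itself is interactive, so this example uses the stdlib traceback module to locate a failure programmatically instead; parse_config is a hypothetical buggy helper, not code from the skill:

```python
import traceback

def parse_config(raw):
    # Hypothetical buggy helper: assumes every line contains '='.
    return dict(line.split("=", 1) for line in raw.splitlines())

try:
    parse_config("host=localhost\nbad line\n")
except ValueError as exc:
    tb = traceback.TracebackException.from_exception(exc)
    trace_text = "".join(tb.format())  # full traceback as text
    innermost = tb.stack[-1]           # frame where the error was raised
    print(f"failed in {innermost.name}() at line {innermost.lineno}")
```

A snippet like this runs as-is, names the failing frame, and gives the agent something concrete to act on, which is the bar the reviewer is setting for the skill's examples.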

Workflow Clarity (2 / 3)

The four-phase process (Reproduce → Gather Info → Hypothesize → Test) provides a clear sequence, but each phase is filled with generic checklists rather than explicit validation checkpoints or feedback loops. There's no 'if this fails, do that' recovery guidance. The git bisect section is the only workflow with clear step-by-step commands and a completion criterion.
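That completion criterion is worth spelling out: git bisect is a binary search over history that terminates when the first bad commit is isolated. The loop it automates can be sketched in Python; the commit list and badness predicate below are hypothetical:

```python
def bisect_first_bad(commits, is_bad):
    """Return the index of the first bad commit, assuming the history
    flips from good to bad exactly once (the invariant git bisect
    relies on: a known-good start and a known-bad end)."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # first bad commit is at mid or earlier
        else:
            lo = mid + 1      # first bad commit is after mid
    return lo

# Hypothetical history: the regression landed at commit index 6.
history = [f"c{i}" for i in range(10)]
first_bad = bisect_first_bad(history, lambda c: int(c[1:]) >= 6)
print(first_bad)  # → 6
```

The explicit loop condition and return value are the 'completion criterion' the review praises; the other workflow phases lack an equivalent stopping rule.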

Progressive Disclosure (1 / 3)

Monolithic wall of text with no references to external files or bundle resources. Everything is inlined in a single massive document covering multiple languages, multiple debugging techniques, and multiple issue types. Content would benefit enormously from splitting language-specific debugging, advanced techniques, and checklists into separate referenced files.

Total: 6 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

skill_md_line_count (Warning): SKILL.md is long (528 lines); consider splitting into references/ and linking.

Total: 10 / 11 (Passed)

Repository: wshobson/agents (Reviewed)

