Master systematic debugging techniques, profiling tools, and root cause analysis to efficiently track down bugs across any codebase or technology stack. Use when investigating bugs, performance issues, or unexpected behavior.
Install with Tessl CLI
npx tessl i github:wshobson/agents --skill debugging-strategies
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill

Agent success when using this skill
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is well structured, with an explicit 'Use when...' clause that clearly defines trigger conditions. However, it relies on abstract category terms ('systematic debugging techniques', 'profiling tools') rather than concrete actions, and the broad scope ('any codebase or technology stack') reduces distinctiveness. The trigger terms cover the basics but miss many natural phrases users reach for when they encounter a bug.
Suggestions
Replace abstract categories with specific concrete actions (e.g., 'analyze stack traces, set breakpoints, inspect logs, profile memory and CPU usage, bisect commits')
Expand trigger terms to include common user phrases like 'crash', 'error', 'exception', 'slow', 'not working', 'broken', 'memory leak'
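Taken together, the two suggestions could produce frontmatter along these lines. This is a hypothetical sketch of a revised description, not the skill's actual SKILL.md; the wording is illustrative only:

```yaml
# Hypothetical revision of the SKILL.md frontmatter (illustrative wording).
name: debugging-strategies
description: >
  Analyze stack traces, set breakpoints, inspect logs, profile CPU and
  memory usage, and bisect commits to find root causes of bugs. Use when
  code crashes, throws an error or exception, is slow, leaks memory, or
  is otherwise broken or not working as expected.
```

The rewritten description trades category nouns for concrete verbs and folds in the everyday trigger words ('crash', 'error', 'slow', 'broken') that users actually type.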
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (debugging) and mentions some actions ('debugging techniques, profiling tools, root cause analysis'), but these are categories rather than concrete, specific actions like 'set breakpoints, analyze stack traces, inspect memory usage'. | 2 / 3 |
| Completeness | Clearly answers both what ('systematic debugging techniques, profiling tools, root cause analysis') and when ('Use when investigating bugs, performance issues, or unexpected behavior') with an explicit 'Use when...' clause. | 3 / 3 |
| Trigger Term Quality | Includes some natural keywords ('bugs', 'performance issues', 'unexpected behavior', 'debugging') but misses common variations users might say, like 'crash', 'error', 'slow', 'broken', 'not working', 'exception', 'stack trace'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The phrase 'across any codebase or technology stack' is very broad and could overlap with general coding/development skills. 'Performance issues' could conflict with optimization-focused skills. | 2 / 3 |
| Total | | 9 / 12 — Passed |
Implementation — 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides excellent, actionable code examples across multiple languages and covers debugging comprehensively. However, it is verbose where it explains concepts Claude already knows (the scientific method, rubber duck debugging, basic mindset principles), and it would benefit from moving language-specific details into reference files. A workflow structure exists but lacks explicit validation checkpoints and feedback loops.
Suggestions
Remove or drastically condense the 'Core Principles' and 'Debugging Mindset' sections; Claude already understands the scientific method and basic debugging philosophy
Move language-specific debugging sections (JavaScript/TypeScript, Python, Go) to separate reference files and keep only a brief overview with links in the main skill
Add explicit validation checkpoints to the debugging phases, such as 'Confirm reproduction is consistent before proceeding' and 'Document hypothesis before testing'
Promote the 'Quick Debugging Checklist' into a prominent, actionable first-response section rather than burying it at the end
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains significant verbosity explaining concepts Claude already knows (scientific method, rubber duck debugging, basic debugging mindset). While the code examples are useful, sections like 'Debugging Mindset' and 'Core Principles' add little value for an AI that understands these concepts. | 2 / 3 |
| Actionability | The skill provides extensive executable code examples across multiple languages (TypeScript, Python, Go), specific tool configurations (VS Code launch.json), and concrete commands (git bisect). The examples are copy-paste ready and cover real debugging scenarios. | 3 / 3 |
| Workflow Clarity | While the four-phase debugging process is outlined (Reproduce, Gather Info, Hypothesize, Test), the phases are presented as checklists rather than actionable workflows with validation checkpoints. Missing explicit feedback loops for when hypotheses fail or when to escalate. | 2 / 3 |
| Progressive Disclosure | References to external files are listed at the end (references/debugging-tools-guide.md, etc.), but the main content is a monolithic 400+ line document. Much of the language-specific debugging content could be split into separate reference files, with SKILL.md providing a concise overview. | 2 / 3 |
| Total | | 9 / 12 — Passed |
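As an illustration of the 'copy-paste ready' commands the review credits, here is a minimal, self-contained sketch of the `git bisect run` workflow. It builds a throwaway repository with a known-bad commit and lets bisect locate it automatically; the repository contents and the file name `app.txt` are invented for the demo:

```shell
#!/bin/sh
# Sketch: automate root-cause search with `git bisect run` in a throwaway repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
G="git -c user.email=dev@example.com -c user.name=dev"

echo "ok" > app.txt
git add app.txt
$G commit -q -m "good: app works"
echo "still ok" >> app.txt
$G commit -qam "good: harmless change"
echo "bug" > app.txt                 # overwrites the file: the regression
$G commit -qam "bad: introduces bug"
$G commit -q --allow-empty -m "unrelated: later work"

# Endpoints: HEAD is known bad, HEAD~3 is known good. The run command's
# exit code classifies each probed commit (0 = good, 1-124 = bad).
git bisect start HEAD HEAD~3
result=$(git bisect run sh -c 'grep -q "ok" app.txt')
git bisect reset >/dev/null
echo "$result" | grep "is the first bad commit"
```

Because the test command fully automates the good/bad judgment, the search runs without manual checkouts, which is what makes the command worth quoting verbatim in a skill.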
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (537 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 — Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.