Systematically debug code issues using proven methodologies. Use when encountering errors, unexpected behavior, or performance problems. Handles error analysis, root cause identification, debugging strategies, and fix verification.
Install with Tessl CLI
npx tessl i github:supercent-io/skills-template --skill debugging82
Quality
79%
Does it follow best practices?
Impact
78%
1.09x
Average score across 3 eval scenarios
Optimize this skill with Tessl
npx tessl skill review --optimize ./.agent-skills/debugging/SKILL.md
Discovery
82%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description that clearly communicates both purpose and trigger conditions. The 'Use when...' clause with specific scenarios (errors, unexpected behavior, performance problems) is well constructed. However, the capability descriptions lean toward abstract categories rather than concrete actions, and the debugging domain may overlap with general coding-assistance skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (debugging) and lists some actions ('error analysis, root cause identification, debugging strategies, fix verification'), but these are abstract categories rather than concrete actions such as 'set breakpoints' or 'analyze stack traces'. | 2 / 3 |
| Completeness | Clearly answers both what ('Systematically debug code issues using proven methodologies... error analysis, root cause identification, debugging strategies, fix verification') and when ('Use when encountering errors, unexpected behavior, or performance problems'). | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'errors', 'unexpected behavior', 'performance problems', 'debug'. These are the terms users naturally reach for when their code misbehaves. | 3 / 3 |
| Distinctiveness / Conflict Risk | While debugging is a specific domain, 'code issues' and 'errors' could overlap with general coding-assistance or error-handling skills. The description doesn't strongly differentiate itself from other code-related skills. | 2 / 3 |
| Total |  | 10 / 12 (Passed) |
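To illustrate the kind of concrete action the Specificity row asks for ('analyze stack traces'), here is a minimal hedged sketch in Python; the `parse_price` helper and its deliberate bug are hypothetical, not part of the reviewed skill:

```python
import sys
import traceback

def parse_price(raw):
    # Hypothetical helper with a deliberate bug: assumes input is numeric
    return float(raw)

try:
    parse_price("N/A")
except ValueError:
    # traceback.extract_tb returns structured frames; the last frame
    # is where the exception was actually raised
    frames = traceback.extract_tb(sys.exc_info()[2])
    culprit = frames[-1]
    print(f"failed in {culprit.name}() at line {culprit.lineno}")
```

Naming the innermost frame like this is exactly the sort of copy-paste-ready action a sharper description could advertise.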
Implementation
77%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable debugging skill with clear workflows and executable examples. Its main weakness is length: it tries to be comprehensive rather than lean, including content Claude already knows (basic debugging wisdom, tool tables). The structure is good but would benefit from splitting the detailed examples into separate reference files.
Suggestions
- Move the 'Debugging Tools' table and 'Best practices' list to a separate REFERENCE.md file, as these are lookup content rather than core workflow.
- Trim explanatory text that states obvious debugging principles (e.g., 'Error messages usually point to the issue').
- Consider moving the detailed examples (TypeError, race condition, memory leak) to an EXAMPLES.md file, with brief summaries in the main skill.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some content Claude already knows (common bug patterns table, basic debugging concepts like 'read the error'). The references section and some explanatory text could be trimmed. | 2 / 3 |
| Actionability | Provides fully executable code examples throughout, including specific bash commands, Python debugging patterns, and concrete before/after fix examples. The reproduction steps and fix checklist are copy-paste ready. | 3 / 3 |
| Workflow Clarity | Clear six-step sequential process with explicit validation (Step 6: Verify and Prevent). Includes feedback loops through the fix checklist and regression testing. The binary-search debugging approach provides a clear methodology for isolation. | 3 / 3 |
| Progressive Disclosure | Content is well structured with clear sections, but the skill is monolithic (~200 lines) with detailed examples that could be split into separate files. The references section points to external resources, but there is no internal file organization for advanced topics. | 2 / 3 |
| Total |  | 10 / 12 (Passed) |
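The binary-search debugging approach credited under Workflow Clarity can be sketched as a small Python helper. The `commits` list and `bad` predicate below are made-up stand-ins for a real bisect target (e.g. commit ids fed to a test run), not taken from the reviewed skill:

```python
def first_failing(items, fails):
    """Binary-search the earliest item for which fails() is True,
    assuming failures are monotonic (once broken, stays broken)."""
    lo, hi = 0, len(items) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if fails(items[mid]):
            hi = mid        # failure at mid: the culprit is mid or earlier
        else:
            lo = mid + 1    # success at mid: the culprit is later
    return items[lo]

commits = list(range(1, 101))       # pretend commit ids 1..100
bad = lambda c: c >= 73             # regression introduced at commit 73
print(first_failing(commits, bad))  # → 73
```

Each probe halves the suspect range, so 100 commits need at most seven test runs; this is the same strategy `git bisect` automates over a real history.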
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total |  | 10 / 11 (Passed) |