
jbvc/debugging-strategies

Master systematic debugging techniques, profiling tools, and root cause analysis to efficiently track down bugs across any codebase or technology stack. Use when investigating bugs, performance issues, or unexpected behavior.


Quality

55%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security (by Snyk)

Passed

No known issues


Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has good structural completeness with an explicit 'Use when' clause and covers the debugging domain adequately. However, it leans toward abstract category names rather than concrete actions, and its broad scope ('any codebase or technology stack') reduces distinctiveness. The trigger terms are reasonable but miss many common natural phrases users would employ when encountering bugs.

Suggestions

Replace abstract categories with concrete actions, e.g., 'Analyze stack traces, set breakpoints, profile CPU/memory usage, inspect logs, and isolate failing code paths'.

Expand trigger terms in the 'Use when' clause to include natural user phrases like 'error', 'crash', 'not working', 'slow performance', 'exception', 'stack trace'.

Dimension | Reasoning | Score

Specificity

Names the domain (debugging) and some actions ('debugging techniques, profiling tools, root cause analysis'), but these are still fairly abstract categories rather than concrete specific actions like 'set breakpoints, analyze stack traces, inspect memory usage'.

2 / 3

Completeness

Clearly answers both 'what' (systematic debugging techniques, profiling tools, root cause analysis to track down bugs) and 'when' with an explicit 'Use when investigating bugs, performance issues, or unexpected behavior' clause.

3 / 3

Trigger Term Quality

Includes some relevant keywords like 'bugs', 'performance issues', 'unexpected behavior', 'debugging', and 'profiling', but misses common natural variations users might say such as 'error', 'crash', 'stack trace', 'slow', 'broken', 'not working', 'exception'.

2 / 3

Distinctiveness / Conflict Risk

The phrase 'across any codebase or technology stack' is extremely broad and could overlap with general coding assistance, code review, or performance optimization skills. While 'debugging' and 'root cause analysis' provide some distinctiveness, the scope is too wide to clearly carve out a niche.

2 / 3

Total

9 / 12

Passed

Implementation

22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a placeholder that provides abstract debugging advice without any concrete, actionable guidance. It lacks specific tools, commands, code examples, or detailed techniques — everything is deferred to a referenced playbook without adequate summary. The content reads more like a table of contents for a debugging philosophy than a skill Claude can execute.

Suggestions

Add concrete, executable examples for at least 2-3 debugging techniques (e.g., specific profiling commands like `py-spy`, `perf`, or `console.time`; specific log analysis patterns; binary search with git bisect).

Replace the abstract instruction steps with a concrete workflow that includes validation checkpoints, e.g., 'After reproducing, confirm the issue is deterministic by running 3 times' and 'After applying fix, verify the original reproduction case passes.'

Remove the marketing opening line and trim the 'Use/Do not use' sections to save tokens, redirecting that space toward actionable content.

Provide a brief summary of what's in `resources/implementation-playbook.md` (e.g., section names and what each covers) so Claude knows when to consult it.
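As a concrete illustration of the kind of executable guidance these suggestions call for, here is a minimal sketch, using only Python's standard library, of a deterministic-reproduction checkpoint and a quick CPU profile. The function `flaky_sort` is hypothetical, standing in for whatever code is under investigation:

```python
import cProfile
import io
import pstats

def flaky_sort(items):
    # Hypothetical function under investigation.
    return sorted(items)

def is_deterministic(repro, runs=3):
    """Run the reproduction case several times and confirm
    it produces the same result each time (the 'run 3 times'
    checkpoint suggested above)."""
    results = [repro() for _ in range(runs)]
    return all(r == results[0] for r in results)

def profile(fn, *args):
    """Capture a cProfile report for one call, sorted by
    cumulative time, keeping the top 5 entries."""
    profiler = cProfile.Profile()
    profiler.enable()
    fn(*args)
    profiler.disable()
    out = io.StringIO()
    pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
    return out.getvalue()

print(is_deterministic(lambda: flaky_sort([3, 1, 2])))  # True
report = profile(flaky_sort, list(range(10_000)))
```

Embedding even a short snippet like this in the skill would give an agent something copy-paste ready rather than an abstract instruction to "profile the code."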

Dimension | Reasoning | Score

Conciseness

The opening line ('Transform debugging from frustrating guesswork into systematic problem-solving...') is marketing fluff that wastes tokens. The 'Use this skill when' and 'Do not use this skill when' sections are somewhat useful but border on obvious. The instructions themselves are lean but very high-level.

2 / 3

Actionability

The instructions are entirely abstract and vague — 'Form hypotheses and design controlled experiments' and 'Narrow scope with binary search and targeted instrumentation' provide no concrete commands, code examples, tool names, or specific techniques. There is nothing executable or copy-paste ready.

1 / 3

Workflow Clarity

While there is a loose sequence (reproduce → hypothesize → narrow → document → verify), the steps are too abstract to be actionable. There are no validation checkpoints, no feedback loops for when hypotheses are wrong, and no concrete guidance on what 'verify the fix' means in practice.

1 / 3

Progressive Disclosure

The skill references `resources/implementation-playbook.md` for detailed patterns, which is a reasonable one-level-deep reference. However, the overview itself is so thin that it's unclear what the playbook contains or when to use specific sections of it. The reference could be better signaled with descriptions of what's inside.

2 / 3

Total

6 / 12

Passed
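The "binary search" narrowing criticized above as non-actionable can be made concrete in a few lines. This sketch (with a toy monotone predicate standing in for a real test against a commit range) mirrors what `git bisect` does when locating the first bad commit:

```python
def bisect_first_bad(is_bad, lo, hi):
    """Binary search for the smallest index where is_bad(i) is True,
    assuming the predicate is monotone: once bad, always bad.
    This is the narrowing step behind `git bisect`."""
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(mid):
            hi = mid      # The regression is at mid or earlier.
        else:
            lo = mid + 1  # Still good here; look later.
    return lo

# Toy example: the "regression" was introduced at version 7.
print(bisect_first_bad(lambda v: v >= 7, 0, 20))  # 7
```

Pairing the abstract step with a runnable reference like this is exactly the gap the Actionability score identifies.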

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Reviewed