Troubleshoot systematically using the Scientific Method. Use when debugging crashes, tracing errors, diagnosing unexpected behavior, or investigating exceptions.
Quality: 77% (does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Status: Passed (no known issues)

To optimize this skill with Tessl, run `npx tessl skill review --optimize ./.github/skills/common/common-debugging/SKILL.md`.

Quality
Discovery: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description with a clear 'Use when' clause and good trigger terms that developers would naturally use. Its main weakness is that the 'what' portion is somewhat abstract—it describes the approach (Scientific Method) but doesn't enumerate the specific concrete actions or steps the skill teaches. The distinctiveness could also be improved by clarifying how this differs from other debugging-related skills.
Suggestions
- Add specific concrete actions to the 'what' portion, e.g., 'Formulate hypotheses, isolate variables, reproduce issues, analyze stack traces, and verify fixes systematically using the Scientific Method.'
- Improve distinctiveness by clarifying the scope or differentiator, e.g., 'for complex or hard-to-reproduce issues where ad-hoc debugging has failed', to distinguish it from simpler error-fixing skills.
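As a sketch, the suggested wording would land in the skill's frontmatter roughly like this (the schema shown is assumed from the SKILL.md path in this review, not verified against the spec):

```yaml
---
name: common-debugging
description: >
  Formulate hypotheses, isolate variables, reproduce issues, analyze stack
  traces, and verify fixes systematically using the Scientific Method. Use
  when debugging crashes, tracing errors, or investigating exceptions where
  ad-hoc debugging has failed.
---
```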
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (troubleshooting/debugging) and mentions the approach (Scientific Method), but doesn't list specific concrete actions like 'analyze stack traces', 'reproduce issues', 'isolate variables', etc. The actions listed (debugging crashes, tracing errors) are more like trigger scenarios than concrete capabilities. | 2 / 3 |
| Completeness | Clearly answers both 'what' (troubleshoot systematically using the Scientific Method) and 'when' (explicit 'Use when' clause covering debugging crashes, tracing errors, diagnosing unexpected behavior, investigating exceptions). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'debugging', 'crashes', 'tracing errors', 'unexpected behavior', 'exceptions'. These are terms developers naturally use when seeking help with troubleshooting problems. | 3 / 3 |
| Distinctiveness / Conflict Risk | The debugging/troubleshooting domain is fairly broad and could overlap with language-specific debugging skills or error-handling skills. However, the 'Scientific Method' framing provides some distinctiveness as a methodology-focused skill rather than a tool-specific one. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, concise debugging methodology skill that effectively communicates the scientific method approach and common anti-patterns. Its main weakness is the lack of concrete, executable examples (e.g., actual debugging commands, tool-specific workflows) and missing feedback loops in the workflow (what happens when a hypothesis is disproven). The content reads more as a philosophical guide than an operational playbook.
Suggestions
- Add a concrete debugging example showing the scientific method applied to a real scenario (e.g., a stack trace → hypothesis → specific debugging commands → fix → verification test run).
- Add an explicit feedback loop to the scientific method: if the experiment disproves the hypothesis, return to OBSERVE/HYPOTHESIZE with the new data gathered from the failed experiment.
- Include specific tool usage guidance, e.g., debugger commands, logging patterns, or `git bisect` for binary search, to make the best-practices section more actionable.
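The feedback loop the review asks for can be sketched as code. Below is a minimal, self-contained illustration of the observe → hypothesize → experiment cycle in which a disproven hypothesis is fed back into the observation log rather than discarded; the buggy function and the hypotheses are invented purely for illustration and are not part of the skill itself.

```python
def average(xs):
    # The "bug under investigation": crashes when xs is empty.
    return sum(xs) / len(xs)

def debug_loop(failing_input, hypotheses):
    """Iterate hypothesize -> experiment until a hypothesis survives."""
    observations = []                          # OBSERVE: accumulated evidence
    for name, predicate, fix in hypotheses:    # HYPOTHESIZE, one at a time
        try:                                   # EXPERIMENT: reproduce the failure
            average(failing_input)
            error = None
        except Exception as exc:
            error = exc
        if predicate(error):                   # experiment confirms the hypothesis
            return name, fix                   # a VERIFY step (rerun tests) would follow
        # Feedback loop: the failed experiment is itself new evidence.
        observations.append((name, repr(error)))
    return None, observations                  # nothing survived; escalate with the log

hypotheses = [
    ("input contains strings", lambda e: isinstance(e, TypeError),
     "coerce elements to float"),
    ("input is empty", lambda e: isinstance(e, ZeroDivisionError),
     "guard against len(xs) == 0"),
]

cause, fix = debug_loop([], hypotheses)
print(cause, "->", fix)  # → input is empty -> guard against len(xs) == 0
```

The key design point is the `observations.append(...)` line: a failed experiment narrows the search space, so it returns to the loop as data instead of being thrown away.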
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient. Every section adds value without explaining concepts Claude already knows. No padding or unnecessary context—just actionable principles and techniques. | 3 / 3 |
| Actionability | The guidance is concrete in terms of methodology (scientific method steps, anti-patterns, techniques like binary search and minimal repro), but lacks executable code examples or specific commands. It describes approaches rather than providing copy-paste-ready debugging workflows with actual tool usage. | 2 / 3 |
| Workflow Clarity | The 5-step scientific method provides a clear sequence, but lacks explicit validation checkpoints and feedback loops. For example, there's no guidance on what to do if the hypothesis is wrong (loop back to step 1/2), and the verify step doesn't specify how to verify (e.g., run the test suite, check logs again). | 2 / 3 |
| Progressive Disclosure | For a skill of this size (~30 lines), the content is well organized into clear sections with a single-level reference to a bug report template. The structure is easy to scan and navigate. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| metadata_field | 'metadata' should map string keys to string values | Warning |
| Total | | 9 / 11 Passed |
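Both warnings could plausibly be resolved with a metadata block along these lines (a sketch only; field names other than `metadata.version` are assumptions based on the warning text, not taken from the Tessl spec):

```yaml
---
name: common-debugging
metadata:
  version: "1.0.0"        # addresses the missing 'metadata.version' warning
  owner: "platform-team"  # string keys mapped to string values, per the second warning
---
```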