Debug complex issues using competing hypotheses with parallel investigation, evidence collection, and root cause arbitration. Use this skill when debugging bugs with multiple potential causes, performing root cause analysis, or organizing parallel investigation workflows.
Does it follow best practices?

- Impact: — (no eval scenarios have been run)
- Validation: Passed, no known issues
Optimize this skill with Tessl: `npx tessl skill review --optimize ./plugins/agent-teams/skills/parallel-debugging/SKILL.md`

## Quality
## Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly articulates a specific debugging methodology (competing hypotheses with parallel investigation), provides concrete actions, and includes an explicit 'Use this skill when' clause with natural trigger terms. It is well-differentiated from generic debugging skills by emphasizing its structured, multi-hypothesis approach.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'competing hypotheses', 'parallel investigation', 'evidence collection', and 'root cause arbitration'. These describe a clear methodology with distinct steps. | 3 / 3 |
| Completeness | Clearly answers both what ('Debug complex issues using competing hypotheses with parallel investigation, evidence collection, and root cause arbitration') and when ('when debugging bugs with multiple potential causes, performing root cause analysis, or organizing parallel investigation workflows') with an explicit 'Use this skill when' clause. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms users would say: 'debugging', 'bugs', 'multiple potential causes', 'root cause analysis', 'parallel investigation'. These cover common ways users describe complex debugging scenarios. | 3 / 3 |
| Distinctiveness / Conflict Risk | The focus on competing hypotheses, parallel investigation, and root cause arbitration creates a distinct niche that differentiates it from general debugging or simple troubleshooting skills. The methodology-specific language makes it unlikely to conflict with basic debugging skills. | 3 / 3 |
| Total | | 12 / 12 — Passed |
## Implementation — 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a solid conceptual framework for structured debugging using competing hypotheses, with good categorization of failure modes and a clear arbitration protocol. However, it falls short on actionability — it describes a methodology rather than providing concrete, executable steps for Claude to follow, and the 'parallel agent investigation' aspect mentioned in the title and description is never operationalized. The content would benefit from concrete tool usage instructions and a clearer end-to-end workflow.
### Suggestions

- Add concrete instructions for how to actually conduct parallel investigation — e.g., how to use subagents/tools, what each investigator should do, and how results are collected and reported back.
- Include a concrete end-to-end example showing the full workflow from bug report through hypothesis generation, investigation, evidence collection, and root cause determination.
- Trim the failure mode categories to just the category names and 1–2 non-obvious examples per category — Claude already understands common bug types like off-by-one errors and null pointer issues.
- Add specific tool usage guidance (e.g., grep/ripgrep commands for evidence collection, git log/blame for correlational evidence) to make the skill more actionable.
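To make the first suggestion concrete, the parallel-investigation loop could be sketched roughly as follows. This is an illustration only, not part of the reviewed skill: the hypothesis list, the `investigate` function, and the confidence scores are all hypothetical placeholders. In practice, each investigator would run real evidence-collection commands (ripgrep searches, `git log`/`git blame`) against the codebase instead of returning a prior.

```python
# Sketch: investigate competing hypotheses in parallel, then arbitrate
# by confidence. All names and scores below are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def investigate(hypothesis):
    # Placeholder investigator: a real one would gather evidence for this
    # hypothesis (code searches, commit history) and derive a confidence
    # score from what it finds, rather than echoing a prior.
    evidence = f"searched code paths related to: {hypothesis['area']}"
    return {"hypothesis": hypothesis["name"],
            "evidence": evidence,
            "confidence": hypothesis["prior"]}

hypotheses = [
    {"name": "race condition in cache refresh", "area": "cache", "prior": 0.4},
    {"name": "stale config after deploy", "area": "config", "prior": 0.7},
    {"name": "off-by-one in pagination", "area": "pagination", "prior": 0.2},
]

# Run every investigator concurrently, collect all results, then pick
# the best-supported hypothesis as the candidate root cause.
with ThreadPoolExecutor(max_workers=len(hypotheses)) as pool:
    results = list(pool.map(investigate, hypotheses))

root_cause = max(results, key=lambda r: r["confidence"])
print(root_cause["hypothesis"])  # → stale config after deploy
```

The point of the sketch is the shape the review asks for: explicit hypothesis records, one investigator per hypothesis, a collection step, and a single arbitration step over the gathered evidence.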
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably well-organized but includes some content that could be tightened. The six failure mode categories with bullet-pointed examples are useful reference material but border on explaining things Claude already knows (e.g., 'Off-by-one errors in loops or array access'). The evidence types table and confidence levels add value but could be more compact. | 2 / 3 |
| Actionability | The skill provides a structured framework with clear categories, tables, and a checklist, but lacks concrete executable examples. There are no actual commands, code snippets for running investigations, or specific tool usage instructions. The citation format example is good but the overall guidance is more of a conceptual framework than step-by-step executable instructions. | 2 / 3 |
| Workflow Clarity | The Result Arbitration Protocol provides a clear 4-step sequence with a validation checklist at the end, which is good. However, the overall debugging workflow (from hypothesis generation through parallel investigation to arbitration) is not clearly sequenced as a cohesive process. The 'parallel agent investigation' mentioned in the intro is never concretely described — how to spawn agents, coordinate them, or collect their results is missing. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers and sections, but it is a monolithic document with no references to supporting files. Given the depth of content (hypothesis categories, evidence standards, arbitration protocol), some of this could be split into referenced files. However, with no bundle files provided, the single-file approach is acceptable though not optimal. | 2 / 3 |
| Total | | 8 / 12 — Passed |
## Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 checks passed.
### Validation for skill structure

| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 — Passed |