Use when asked to debug, fix a bug, investigate an error, or do root cause analysis, and when users report errors, stack traces, unexpected behavior, or say something stopped working.
Score: 73

Quality: 66%. Does it follow best practices?

Impact: —. No eval scenarios have been run.

Advisory: Suggest reviewing before use.
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./openclaw/skills/gstack-openclaw-investigate/SKILL.md`

Quality
Discovery: 54%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at trigger term coverage, with natural user phrases for debugging scenarios, but it critically lacks any description of what the skill actually does—its capabilities, methods, or outputs. It reads entirely as a 'Use when...' clause without the preceding capability statement, making it incomplete as a skill description.
Suggestions
- Add a capability statement before the 'Use when' clause describing concrete actions, e.g., 'Analyzes error messages and stack traces, traces code paths to identify root causes, and suggests targeted fixes.'
- Specify the scope or approach of the debugging skill (e.g., what languages, what types of bugs, what outputs it produces) to improve specificity and distinctiveness.
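As a sketch only, the two suggestions above could be combined into a revised description. This assumes the standard SKILL.md YAML frontmatter with `name` and `description` fields; the exact wording is for the maintainer to decide:

```yaml
---
name: gstack-openclaw-investigate
# Capability statement first, then the existing trigger clause:
description: >
  Analyzes error messages and stack traces, traces code paths to identify
  root causes, and produces a structured debug report with targeted fixes.
  Use when asked to debug, fix a bug, investigate an error, or do root
  cause analysis, and when users report errors, stack traces, unexpected
  behavior, or say something stopped working.
---
```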
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description lists no concrete actions or capabilities—it only describes when to use the skill ('debug, fix a bug, investigate an error') but never says what the skill actually does (e.g., 'analyzes stack traces, sets breakpoints, traces code paths'). The verbs used are trigger conditions, not capability descriptions. | 1 / 3 |
| Completeness | The 'when' clause is explicit and well-articulated with multiple trigger scenarios. However, the 'what does this do' part is essentially absent—there is no description of the skill's capabilities, methodology, or outputs. It only tells Claude when to select it, not what it provides. | 2 / 3 |
| Trigger Term Quality | Excellent coverage of natural user terms: 'debug', 'fix a bug', 'investigate an error', 'root cause analysis', 'errors', 'stack traces', 'unexpected behavior', 'stopped working'. These are all phrases users would naturally say when they need debugging help. | 3 / 3 |
| Distinctiveness / Conflict Risk | The debugging/error investigation domain is fairly specific, and the trigger terms are well-targeted. However, without specifying what kind of debugging (e.g., language, framework, approach), it could overlap with language-specific debugging skills or general code review skills. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, well-structured debugging skill with excellent workflow clarity and actionability. The phased approach with explicit validation checkpoints, the 3-strike escalation rule, and the structured debug report template are particularly effective. Minor weaknesses include some motivational/explanatory text that Claude doesn't need and a monolithic structure that could benefit from splitting detailed reference material into separate files.
Suggestions
- Trim motivational explanations like 'Fixing symptoms creates whack-a-mole debugging' and 'there is no "for now"' — Claude understands why root cause analysis matters.
- Consider extracting the Pattern Analysis section into a separate PATTERNS.md reference file, keeping only a brief checklist in the main skill with a link to the detailed patterns.
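One possible shape for that split, kept deliberately small (the section name, checklist items, and PATTERNS.md filename here are illustrative, not quoted from the reviewed skill):

```markdown
## Pattern Analysis

Before proposing a fix, check the failure against common patterns:

- [ ] Recent changes near the failure site (git log, git blame)
- [ ] Similar failures elsewhere in the codebase
- [ ] Known failure patterns for this stack

See [PATTERNS.md](PATTERNS.md) for detailed examples of each pattern.
```

Keeping only the checklist in the main skill preserves the workflow while moving the long reference material behind a link, which is the progressive-disclosure structure the review recommends.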
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient and well-structured, but includes some unnecessary elaboration (e.g., 'Fixing symptoms creates whack-a-mole debugging' is motivational rather than instructional, and some red flag explanations are slightly verbose). Most content earns its place, but could be tightened by ~15-20%. | 2 / 3 |
| Actionability | Provides concrete, executable guidance throughout: specific git commands, clear step-by-step processes, explicit output formats (debug report template), specific thresholds (3-strike rule, >5 files), and actionable patterns to check. The structured debug report format is copy-paste ready. | 3 / 3 |
| Workflow Clarity | Excellent multi-phase workflow with clear sequencing (Phases 1-5), explicit validation checkpoints (reproduce before fixing, verify hypothesis before implementing, fresh verification after fix, run full test suite), and a well-defined feedback loop (3-strike rule with escalation options). Error recovery paths are clearly specified. | 3 / 3 |
| Progressive Disclosure | The content is well-organized with clear phases and sections, but it's a fairly long monolithic document (~150 lines) with no references to external files for detailed content like pattern analysis examples or debug report templates. The pattern analysis section and debug report template could be split out for better progressive disclosure. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure: no warnings or errors.