Advanced debugging system with Serena MCP integration for intelligent codebase analysis and error resolution
Quality: 36% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Validation: Passed (No known issues)
Optimize this skill with Tessl: `npx tessl skill review --optimize ./.claude/skills/debug-error/SKILL.md`

Quality
Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description relies heavily on buzzwords ('advanced', 'intelligent') without specifying concrete actions or when the skill should be triggered. It names a specific integration (Serena MCP) which adds some distinctiveness, but overall lacks the specificity and completeness needed for Claude to reliably select this skill from a pool of alternatives.
Suggestions
Replace vague phrases like 'intelligent codebase analysis' with specific actions such as 'traces error origins across files, analyzes stack traces, identifies root causes of exceptions'.
Add an explicit 'Use when...' clause with natural trigger terms like 'Use when the user encounters a bug, runtime error, stack trace, or needs help debugging code using Serena MCP tools'.
Remove marketing-style adjectives ('advanced', 'intelligent') and replace with concrete capability descriptions that differentiate this from generic debugging assistance.
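Applying these suggestions, the SKILL.md frontmatter might read as follows (a sketch only; the exact wording and the `debug-error` name are illustrative, taken from the skill path above):

```yaml
---
name: debug-error
description: >-
  Traces error origins across files using Serena MCP symbol search,
  analyzes stack traces, and identifies root causes of exceptions.
  Use when the user encounters a bug, runtime error, stack trace, or
  failing test and wants to debug code with Serena MCP tools.
---
```

Note how this version leads with concrete actions, names the integration, and ends with an explicit 'Use when...' clause containing natural trigger terms.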
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, buzzword-heavy language like 'advanced debugging system', 'intelligent codebase analysis', and 'error resolution' without listing any concrete actions. No specific capabilities are enumerated. | 1 / 3 |
| Completeness | The description only vaguely addresses 'what' (debugging and error resolution) and completely lacks a 'when' clause or explicit trigger guidance. The absence of a 'Use when...' clause caps this at 2, and the weak 'what' brings it to 1. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords like 'debugging', 'codebase analysis', 'error resolution', and 'Serena MCP' that users might mention. However, it lacks common variations and natural terms users would say, like 'fix bug', 'stack trace', 'exception', 'breakpoint', etc. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'Serena MCP integration' provides some distinctiveness as a specific tool reference, but 'debugging' and 'error resolution' are very broad terms that could overlap with general coding assistance or other debugging-related skills. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a structured debugging framework with good Serena MCP tool references, but it mixes generic debugging knowledge Claude already possesses with the genuinely useful tool-specific guidance. The workflow lacks the validation checkpoints and feedback loops critical for code modification operations, and the absence of concrete tool invocation examples reduces actionability.
Suggestions
Add concrete tool invocation examples showing actual parameters, e.g., `mcp__serena__search_for_pattern` with a real pattern string and expected output format
Add explicit validation/feedback loops in the workflow: after step 7 (Solution Implementation), include a 'verify fix didn't break anything' checkpoint with rollback guidance if it fails
Remove generic debugging advice Claude already knows (stack trace reading, common cause categories) and focus tokens on Serena-specific patterns and tool chaining strategies
Add a concrete end-to-end example showing a real error → tool calls → resolution flow to make the workflow actionable
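As a sketch of the first suggestion, a tool invocation example in the skill might look like the following. The argument names (`substring_pattern`, context-line options) are assumptions based on common Serena MCP versions and should be checked against the installed server's tool schema; the pattern string is purely illustrative:

```json
{
  "tool": "mcp__serena__search_for_pattern",
  "arguments": {
    "substring_pattern": "UserService\\.getProfile",
    "restrict_search_to_code_files": true,
    "context_lines_before": 2,
    "context_lines_after": 2
  }
}
```

Showing one such call with real parameter names, plus a line describing the expected match output, gives the agent a template to imitate rather than a tool name to guess at.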
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill has some unnecessary verbosity: the options table is extensive but reasonable; however, the workflow steps include generic debugging advice Claude already knows (e.g., 'read stack trace bottom to top', 'consider common causes: null references, type mismatches'). The best practices section restates what's already in the workflow. | 2 / 3 |
| Actionability | The skill names specific Serena MCP tools, which is good, but lacks executable examples of actual tool invocations with parameters. Steps like 'Create minimal test case' and 'Consider common causes' are vague rather than concrete. No example shows a real debugging session or tool call syntax. | 2 / 3 |
| Workflow Clarity | The 8-step workflow is clearly sequenced, but lacks validation checkpoints and feedback loops. There's no explicit 'if fix doesn't work, go back to step X' guidance, and no verification step between hypothesis and implementation. For a debugging workflow involving code modifications (destructive operations via replace_symbol_body), missing validation caps this at 2. | 2 / 3 |
| Progressive Disclosure | The content is reasonably structured with clear sections (options, tool priorities, workflow, best practices), but everything is inline in one file. The workflow section is lengthy and could benefit from linking to separate detailed guides for complex scenarios. No references to external files for advanced usage patterns. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 11 / 11 checks passed. No warnings or errors.