Collect CodeRabbit debug evidence for support tickets and troubleshooting. Use when encountering persistent issues, preparing support tickets, or collecting diagnostic information for CodeRabbit problems. Trigger with phrases like "coderabbit debug", "coderabbit support bundle", "coderabbit diagnostic", "coderabbit not working evidence".
Score: 85

Quality: 83% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Advisory: Suggest reviewing before use
Quality
Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured skill description with strong completeness and distinctiveness. It clearly defines when to use the skill and provides excellent trigger terms. The main weakness is that the 'what' portion could be more specific about the concrete actions performed (e.g., collecting logs, checking configurations, gathering repo metadata).
Suggestions
Add more specific concrete actions to improve specificity, e.g., 'Collects logs, configuration files, webhook status, and error messages from CodeRabbit integrations for support tickets and troubleshooting.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (CodeRabbit debug/support) and mentions actions like 'collect debug evidence' and 'preparing support tickets', but doesn't list multiple specific concrete actions (e.g., what evidence is collected, what files are gathered, what logs are checked). | 2 / 3 |
| Completeness | Clearly answers both 'what' (collect CodeRabbit debug evidence for support tickets and troubleshooting) and 'when' (encountering persistent issues, preparing support tickets, collecting diagnostic information) with explicit trigger phrases. | 3 / 3 |
| Trigger Term Quality | Excellent trigger term coverage with explicit phrases: 'coderabbit debug', 'coderabbit support bundle', 'coderabbit diagnostic', 'coderabbit not working evidence'. These are natural terms a user would say when encountering CodeRabbit issues. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive — the skill is narrowly scoped to CodeRabbit-specific debugging and support evidence collection. The trigger terms are all CodeRabbit-prefixed, making conflicts with other skills very unlikely. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted diagnostic skill with highly actionable, executable bash scripts and a clear multi-step workflow. Its main weakness is length—the inline scripts make it token-heavy, and the compile step partially duplicates earlier steps. The error handling table and resources section add practical value.
Suggestions
Consider extracting the longer bash scripts (Steps 1-3, 6) into a referenced shell script file, keeping SKILL.md as a concise overview of the diagnostic workflow.
Remove redundancy between Steps 1-3 and Step 6's bundle compilation by having the bundle script call or incorporate the earlier checks rather than re-implementing them.
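The second suggestion can be sketched as a thin wrapper: the bundle step runs the earlier checks once and captures their output, rather than re-implementing them. A minimal illustration, assuming the earlier steps were extracted into hypothetical scripts named `scripts/check-install.sh`, `scripts/check-config.sh`, and `scripts/check-history.sh` (those names are not from the skill itself):

```shell
#!/usr/bin/env bash
# Hypothetical bundle compiler: reuses the earlier diagnostic steps
# instead of duplicating their logic (script names are illustrative).
set -euo pipefail

BUNDLE_DIR="coderabbit-support-bundle-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BUNDLE_DIR"

# Run each existing check once and capture its output into the bundle.
for step in check-install check-config check-history; do
  ./"scripts/${step}.sh" > "${BUNDLE_DIR}/${step}.log" 2>&1 ||
    echo "step ${step} exited non-zero; see ${step}.log" >> "${BUNDLE_DIR}/warnings.txt"
done

tar -czf "${BUNDLE_DIR}.tar.gz" "$BUNDLE_DIR"
echo "Support bundle written to ${BUNDLE_DIR}.tar.gz"
```

With this shape, fixing a check in one script automatically fixes the bundle, which addresses the redundancy noted above.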
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient but has some redundancy—Step 6 re-collects information already gathered in Steps 1-3, and the inline Python config validator is somewhat verbose. The markdown comments explaining webhook issues and configuration discrepancies are useful but could be tighter. | 2 / 3 |
| Actionability | Every step provides concrete, executable bash scripts or specific instructions. The code is copy-paste ready with proper error handling (set -euo pipefail), parameterized variables, and specific gh API calls with jq filters. | 3 / 3 |
| Workflow Clarity | The 6-step sequence is logically ordered from basic checks (installation) through validation, history review, and finally bundle compilation. Step 2 includes YAML validation with error feedback, Step 4 provides a comparison checkpoint, and the error handling table covers common failure modes with solutions. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections, but it's quite long (~150 lines of substantive content) and could benefit from splitting detailed scripts into separate files. The single reference to 'coderabbit-common-errors' at the end is good but the main body could be more of an overview with scripts linked externally. | 2 / 3 |
| Total | | 10 / 12 Passed |
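For readers unfamiliar with the style the Actionability row credits, here is a minimal sketch of a `gh` + `jq` check of the kind described. It is not taken from the skill itself: the helper name is hypothetical, and matching the reviewer login against "coderabbit" is an assumption about the bot's account name.

```shell
# Hypothetical helper: count CodeRabbit reviews on a pull request
# via the GitHub CLI. The "coderabbit" login match is an assumption;
# adjust the filter to your installation's bot account.
coderabbit_review_count() {
  local repo="$1" pr="$2"
  gh api "repos/${repo}/pulls/${pr}/reviews" \
    --jq '[.[] | select(.user.login | test("coderabbit"; "i"))] | length'
}
```

Invoked as `coderabbit_review_count owner/repo 123`, it prints the number of CodeRabbit reviews on the PR; a count of 0 on a PR that should have been reviewed is exactly the kind of evidence a support bundle wants.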
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
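Both warnings point at the SKILL.md frontmatter. A minimal sketch of a cleaned-up header, assuming the host agent recognizes standard tool names such as `Bash`, `Read`, and `Grep`, and that the spec accepts a `metadata` map for extra keys (field values below are hypothetical):

```yaml
---
name: coderabbit-debug-evidence
description: Collect CodeRabbit debug evidence for support tickets and troubleshooting.
# Use only tool names the host agent recognizes; unusual names
# trigger the allowed_tools_field warning above.
allowed-tools: Bash, Read, Grep
# Unknown top-level keys belong under metadata instead, per the
# frontmatter_unknown_keys warning.
metadata:
  author: example
---
```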