Tune CodeRabbit review configuration: learnings, code guidelines, and noise reduction. Use when fine-tuning review quality, training CodeRabbit with team preferences, adding code guidelines, or reducing false positives. Trigger with phrases like "coderabbit tune reviews", "coderabbit learnings", "coderabbit guidelines", "reduce coderabbit noise", "coderabbit false positives".
- Quality: 75% (Does it follow best practices?)
- Impact: —
- Evals: No eval scenarios have been run
- Validation: Passed
- No known issues
Optimize this skill with Tessl: `npx tessl skill review --optimize ./plugins/saas-packs/coderabbit-pack/skills/coderabbit-core-workflow-b/SKILL.md`

Quality
Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that excels across all dimensions. It clearly specifies what the skill does (tune CodeRabbit review configuration), when to use it (fine-tuning review quality, training with team preferences, reducing false positives), and provides explicit trigger phrases. The CodeRabbit-specific terminology makes it highly distinctive and unlikely to conflict with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: tuning configuration, managing learnings, adding code guidelines, and noise reduction. These are distinct, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (tune CodeRabbit review configuration: learnings, code guidelines, noise reduction) and 'when' (explicit 'Use when' clause plus 'Trigger with phrases like' section with concrete examples). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms including 'coderabbit tune reviews', 'coderabbit learnings', 'coderabbit guidelines', 'reduce coderabbit noise', 'coderabbit false positives'. These are phrases users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive due to the specific 'CodeRabbit' product name and the focus on review tuning/configuration. The trigger terms are all prefixed with 'coderabbit', making conflicts with other skills very unlikely. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Implementation — 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a reasonably well-structured skill that covers CodeRabbit tuning comprehensively with concrete YAML examples and a monitoring script. Its main weaknesses are the lack of validation checkpoints after configuration changes, some sections that are more descriptive commentary than actionable instructions (especially Step 5), and moderate verbosity that could be trimmed. The error handling table is a nice touch.
Suggestions
- Add a validation step after configuration changes — e.g., 'Open a test PR or re-trigger a review with `@coderabbitai review` to verify your config changes take effect'
- Convert Step 5 (A/B Test Review Profiles) from commented YAML into concrete, actionable instructions with specific commands or dashboard checks to measure the metrics mentioned
- Trim the Overview, Prerequisites, and Output sections, which largely restate information Claude can infer from context or that is already covered in the steps themselves
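To make the first two suggestions concrete, here is a minimal sketch of the kind of `.coderabbit.yaml` fragment they point toward. The key names (`reviews`, `profile`, `path_instructions`) follow CodeRabbit's published configuration schema, but the path pattern and instruction text are hypothetical examples, not taken from the skill under review:

```yaml
# Hypothetical .coderabbit.yaml fragment — paths and instruction
# text are illustrative, not from the skill being reviewed.
reviews:
  profile: chill              # "chill" vs. "assertive" is the lever an A/B test would flip
  path_instructions:
    - path: "src/**/*.ts"     # example glob; adjust to the repository's layout
      instructions: "Flag any use of `any`; prefer explicit types."
```

After committing a change like this, the suggested validation checkpoint can be as simple as commenting `@coderabbitai review` on an open pull request and confirming the new instructions are reflected in the resulting review.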
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes some unnecessary context (e.g., the overview explaining what CodeRabbit does, the prerequisites section, the 'Output' summary section restating what was already covered). The YAML comments listing auto-detected files are useful but slightly verbose. Overall mostly efficient but could be tightened. | 2 / 3 |
| Actionability | Provides concrete YAML configuration examples and a bash script, which is good. However, several sections rely on commented-out YAML as pseudo-instructions rather than executable config (Step 5 is entirely comments describing a process, not actionable config). The learnings section (Step 2) uses markdown comments to describe conversational interactions rather than providing concrete steps. The bash script in Step 6 is executable but requires manual variable substitution. | 2 / 3 |
| Workflow Clarity | Steps are clearly sequenced and logically ordered from configuration through monitoring. However, there are no validation checkpoints — after configuring guidelines, learnings, or tone, there's no step to verify the configuration is valid or working. The A/B testing step (Step 5) is entirely aspirational commentary with no concrete validation. For a workflow that modifies review configuration, a validation step (e.g., trigger a test review, verify config syntax) is missing. | 2 / 3 |
| Progressive Disclosure | The skill references other workflows (coderabbit-core-workflow-a, coderabbit-common-errors) and external resources, which is good. However, there are no bundle files to offload detailed content to, and the skill itself is fairly long (~150 lines of content). The error handling table and resources section are well-organized, but the inline YAML examples could benefit from being split into referenced files for a cleaner overview. | 2 / 3 |
| Total | | 8 / 12 — Passed |
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 — Passed |
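Clearing the two warnings above means restricting `allowed-tools` to recognized tool names and moving unknown frontmatter keys under `metadata`. A hypothetical after-the-fix sketch — the report does not name the actual offending keys, so every key shown here is illustrative:

```yaml
# Hypothetical SKILL.md frontmatter after addressing both warnings.
# The actual offending keys are not listed in this report.
---
name: coderabbit-core-workflow-b
description: Tune CodeRabbit review configuration (learnings, guidelines, noise reduction).
allowed-tools: Read, Edit, Bash   # only recognized tool names remain
metadata:
  pack: coderabbit-pack           # unknown keys moved here instead of top level
---
```

Re-running `npx tessl skill review` after an edit like this would confirm whether the two warnings are resolved.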