Tune CodeRabbit review configuration: learnings, code guidelines, and noise reduction. Use when fine-tuning review quality, training CodeRabbit with team preferences, adding code guidelines, or reducing false positives. Trigger with phrases like "coderabbit tune reviews", "coderabbit learnings", "coderabbit guidelines", "reduce coderabbit noise", "coderabbit false positives".
After initial CodeRabbit setup (see coderabbit-core-workflow-a), this skill covers tuning review quality through learnings, code guidelines, tone customization, and noise reduction. CodeRabbit improves over time by learning from your team's feedback patterns and custom rules.

CodeRabbit automatically detects coding rules from standard config files in your repo. It also reads AI agent configuration files for additional context.
```text
# Files CodeRabbit auto-detects for coding rules:
# - .eslintrc.* / eslint.config.*    (JavaScript/TypeScript rules)
# - .prettierrc / prettier.config.*  (Formatting rules)
# - biome.json / biome.jsonc         (Biome linter rules)
# - .cursorrules                     (Cursor AI rules)
# - CLAUDE.md                        (Claude Code instructions)
# - .editorconfig                    (Editor settings)
# - .rubocop.yml                     (Ruby style)
# - ruff.toml / pyproject.toml       (Python rules)
```
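To see which of these CodeRabbit would pick up in your repo, a quick shell check works (a minimal sketch; run it from the repository root and adjust the file list to taste):

```bash
# List the auto-detectable config files present in this repo.
# Unmatched globs stay literal, so the -e test simply fails for them.
for f in .eslintrc* eslint.config.* .prettierrc prettier.config.* \
         biome.json biome.jsonc .cursorrules CLAUDE.md .editorconfig \
         .rubocop.yml ruff.toml pyproject.toml; do
  [ -e "$f" ] && echo "auto-detectable: $f"
done
```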
To add custom guidelines, create docs/CODING_STANDARDS.md with your team's rules, then reference it in .coderabbit.yaml:

```yaml
# .coderabbit.yaml - custom code guidelines
knowledge_base:
  code_guidelines:
    enabled: true  # scan the guideline files below (on by default)
    filePatterns:
      - "docs/CODING_STANDARDS.md"
      - "docs/SECURITY_POLICY.md"
      - "team/code-style.txt"
```
Learnings are enabled by default. CodeRabbit learns from your team's review interactions:

```text
# When CodeRabbit gives feedback you disagree with, reply:
"We intentionally use default exports in this project for Next.js pages.
Please don't flag default exports in files under src/pages/."
# CodeRabbit remembers this preference for future reviews.

# When you want to reinforce a pattern, reply positively:
"Good catch! We always want to flag missing error boundaries in React components."

# View current learnings in the CodeRabbit dashboard:
# app.coderabbit.ai > Organization > Learnings
```
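Replying in the GitHub UI is the usual way to seed a learning. If you prefer the CLI, a reply to a specific review-comment thread can be posted with `gh api` (a sketch; `PR_NUMBER` and `COMMENT_ID` are placeholders for the thread you're answering, and whether a given reply produces a learning is up to CodeRabbit):

```bash
# Post a reply on an existing CodeRabbit review-comment thread.
ORG="your-org"; REPO="your-repo"
PR_NUMBER=123       # placeholder PR number
COMMENT_ID=456789   # placeholder review comment ID
gh api "repos/$ORG/$REPO/pulls/$PR_NUMBER/comments/$COMMENT_ID/replies" \
  -f body="We intentionally use default exports for Next.js pages.
Please don't flag default exports under src/pages/."
```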
```yaml
# .coderabbit.yaml - tone configuration
tone_instructions: |
  Be concise and direct. Skip pleasantries.
  Use bullet points for multiple suggestions.
  Include code examples for non-obvious fixes.
  Rate severity as: Critical > Warning > Suggestion > Nitpick.

# Review profiles control comment volume:
reviews:
  profile: "chill"        # Default; fewer comments, only significant issues
  # profile: "assertive"  # More feedback, including nitpicks

# Fun tone options (if your team appreciates them):
# tone_instructions: "Review like a wise but slightly sarcastic senior engineer."
# tone_instructions: "You must talk like a pirate. Arr!"
```
```yaml
# .coderabbit.yaml - noise reduction strategies
reviews:
  # Skip paths that generate noise
  path_filters:
    - "!**/*.lock"
    - "!**/*.snap"
    - "!**/*.generated.*"
    - "!**/migrations/*.sql"  # DB migrations are reviewed manually
    - "!**/__mocks__/**"
    - "!**/fixtures/**"
    - "!**/testdata/**"

  # Give context to prevent misguided comments
  path_instructions:
    - path: "src/legacy/**"
      instructions: |
        This is legacy code being incrementally migrated.
        Only flag security issues and bugs. Do NOT suggest refactoring.
        Do NOT comment on naming conventions or code style.
    - path: "src/generated/**"
      instructions: |
        This code is auto-generated by protobuf/GraphQL codegen.
        Only review if there are manual modifications (check git blame).
        Skip style and structure comments entirely.
    - path: "scripts/**"
      instructions: |
        These are one-off scripts. Do not enforce production code standards.
        Only flag: security issues, destructive operations without confirmation,
        and missing error handling on file/network operations.

  # Skip PRs from automated tools
  auto_review:
    ignore_title_keywords:
      - "chore: bump"
      - "chore(deps)"
      - "Bump version"
      - "auto-generated"
```
Try different profiles to find the right signal-to-noise ratio:

```text
# Week 1-2: Run "assertive"
#   - Track: comments per PR, acceptance rate, developer satisfaction
#
# Week 3-4: Switch to "chill"
#   - Compare the same metrics
#
# Decision framework:
# - Acceptance rate < 30%? → Profile too aggressive, switch to chill
# - Acceptance rate > 70%? → Reviews are valued, keep current profile
# - Developers ignoring reviews? → Too many nitpicks, switch to chill
# - Security issues slipping through? → Switch to assertive
```
```bash
#!/usr/bin/env bash
# Count CodeRabbit review comments on recently closed PRs.
# (This counts inline review comments only, not issue comments or summaries.)
set -euo pipefail

ORG="your-org"
REPO="your-repo"

echo "=== CodeRabbit Review Effectiveness ==="
for PR in $(gh api "repos/$ORG/$REPO/pulls?state=closed&per_page=20" --jq '.[].number'); do
  TOTAL=$(gh api "repos/$ORG/$REPO/pulls/$PR/comments" \
    --jq '[.[] | select(.user.login=="coderabbitai[bot]")] | length' 2>/dev/null || echo 0)
  if [ "${TOTAL:-0}" -gt 0 ]; then
    echo "PR #$PR: $TOTAL CodeRabbit comments"
  fi
done
```

| Issue | Cause | Solution |
|---|---|---|
| Reviews ignore custom rules | Guidelines file not referenced | Add the file path to `knowledge_base.code_guidelines.filePatterns` |
| Learnings not sticking | Learning saved at the wrong scope (organization vs. repository) | Check the learning's scope in the dashboard |
| Too few comments | Profile set to "chill" | Switch to "assertive" for more thorough reviews |
| Same issue flagged repeatedly | Learning not created | Reply to the comment explicitly stating the preference |
| Tone instructions ignored | YAML formatting issue | Ensure `tone_instructions` is a valid YAML string |
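For the last row in particular, a quick local parse catches YAML mistakes that can make CodeRabbit ignore your settings (a minimal sketch; assumes Python 3 with PyYAML installed):

```bash
# Fail fast on YAML syntax errors in .coderabbit.yaml.
python3 -c "import yaml; yaml.safe_load(open('.coderabbit.yaml')); print('.coderabbit.yaml parses OK')"
```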
For common errors and troubleshooting, see coderabbit-common-errors.