
coderabbit-performance-tuning

Optimize CodeRabbit review speed, relevance, and signal-to-noise ratio. Use when reviews take too long, contain too many irrelevant comments, or when teams are experiencing review fatigue. Trigger with phrases like "coderabbit performance", "optimize coderabbit", "coderabbit slow", "coderabbit noise", "coderabbit too many comments", "coderabbit relevance".

80

Quality: 77% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)


Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured description with excellent trigger terms and clear 'when to use' guidance, making it strong on completeness and distinctiveness. Its main weakness is that the capabilities are described at the outcome level ('optimize speed, relevance, signal-to-noise ratio') rather than listing specific concrete actions the skill performs to achieve those outcomes.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Adjusts review rules, configures path-based filters, tunes comment severity thresholds, and sets up ignore patterns to optimize CodeRabbit review speed, relevance, and signal-to-noise ratio.'
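A `.coderabbit.yaml` along the lines this suggestion describes might look like the sketch below. The key names (`reviews.profile`, `reviews.path_filters`, `reviews.path_instructions`) follow CodeRabbit's documented configuration schema, but the specific values are illustrative; verify field names against the current CodeRabbit docs before committing.

```yaml
# .coderabbit.yaml -- illustrative tuning sketch, not a recommended baseline
reviews:
  profile: chill                # lower-noise comment profile (vs. "assertive")
  path_filters:
    - "!dist/**"                # skip generated bundles
    - "!**/*.lock"              # skip lockfiles
  path_instructions:
    - path: "src/**/*.ts"
      instructions: "Focus on correctness and error handling; skip style nits."
```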

Dimension / Reasoning / Score:

Specificity: 2 / 3
The description names the domain (CodeRabbit review optimization) and mentions some goals (speed, relevance, signal-to-noise ratio), but doesn't list specific concrete actions like 'adjust review rules', 'configure ignore patterns', or 'tune comment thresholds'. The actions remain at the level of abstract outcomes rather than concrete steps.

Completeness: 3 / 3
The description clearly answers both 'what' (optimize CodeRabbit review speed, relevance, and signal-to-noise ratio) and 'when' (reviews take too long, contain too many irrelevant comments, review fatigue), with explicit trigger phrases provided. Both dimensions are well-covered.

Trigger Term Quality: 3 / 3
The description explicitly lists natural trigger phrases like 'coderabbit performance', 'optimize coderabbit', 'coderabbit slow', 'coderabbit noise', 'coderabbit too many comments', and 'coderabbit relevance'. These are terms users would naturally say when experiencing these issues, providing good coverage of common variations.

Distinctiveness / Conflict Risk: 3 / 3
The description is highly specific to CodeRabbit optimization, a distinct niche tool. The trigger terms all include 'coderabbit', which makes it very unlikely to conflict with other skills like general code review or performance optimization skills.

Total: 11 / 12 (Passed)

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, highly actionable skill with excellent concrete examples and configurations that are immediately usable. Its main weaknesses are the length (could delegate reference-style content like exhaustive filter lists to separate files) and the lack of explicit validation/iteration loops between tuning steps and measurement. The workflow would benefit from a clearer 'change → measure → adjust' feedback cycle.

Suggestions

Add an explicit feedback loop after Step 6: 'If average comments > 10, return to Step 2 and adjust profile; if comments are irrelevant, return to Step 3 and refine path instructions.'
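The suggested feedback loop could be expressed as a small triage helper that maps measured metrics back to the step to revisit. This is a sketch under stated assumptions: the thresholds (10 comments, 30% irrelevant) and the step names mirror the suggestion above, but `next_tuning_step` is a hypothetical helper, not part of the skill.

```python
def next_tuning_step(avg_comments: float, irrelevant_ratio: float) -> str:
    """Map review metrics to the tuning step to revisit.

    Thresholds are illustrative; tune them for your team.
    """
    if avg_comments > 10:
        return "Step 2: adjust review profile"
    if irrelevant_ratio > 0.3:
        return "Step 3: refine path instructions"
    return "Configuration looks healthy; keep measuring"


# Noisy reviews point back at the profile first.
print(next_tuning_step(avg_comments=14.2, irrelevant_ratio=0.1))
# prints: Step 2: adjust review profile
```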

Move the exhaustive path_filters list and detailed path_instructions examples into a separate reference file (e.g., CODERABBIT_FILTERS.md) and link to it from the main skill.

Add a validation checkpoint after Steps 2-4: 'Run a test PR or re-review a recent PR to verify the configuration change produces the expected effect.'
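One way to script that checkpoint: CodeRabbit re-reviews a PR when someone comments `@coderabbitai review` on it, so a manually dispatched workflow could trigger a fresh review after a configuration change. The workflow below is a hypothetical sketch (the file name, workflow name, and input are assumptions), using the GitHub CLI that ships on hosted runners.

```yaml
# .github/workflows/coderabbit-rereview.yml -- illustrative sketch
name: Re-review after config change
on:
  workflow_dispatch:
    inputs:
      pr_number:
        description: "PR to re-review"
        required: true
jobs:
  rereview:
    runs-on: ubuntu-latest
    steps:
      - name: Ask CodeRabbit for a fresh review
        env:
          GH_TOKEN: ${{ github.token }}
        run: >
          gh pr comment ${{ inputs.pr_number }}
          --body "@coderabbitai review"
          --repo ${{ github.repository }}
```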

Dimension / Reasoning / Score:

Conciseness: 2 / 3
The skill is reasonably efficient but includes some unnecessary framing (e.g., the Overview paragraph restates what the table already shows, the Prerequisites section is somewhat obvious, and the markdown table in Step 1 is wrapped in a markdown code block unnecessarily). The performance factors table is useful, but the content could be tightened overall.

Actionability: 3 / 3
Excellent actionability throughout: every step includes concrete, copy-paste-ready YAML configurations, bash scripts, and specific GitHub Actions workflows. The path instructions examples are realistic and immediately usable, and the measurement script is fully executable.

Workflow Clarity: 2 / 3
Steps are clearly numbered and sequenced, but there are no explicit validation checkpoints or feedback loops between steps. For example, after changing the profile or adding path instructions, there's no 'verify the change took effect' step; the measurement script in Step 6 is the only validation, and it's not tied back to iterating on earlier steps if metrics are poor.

Progressive Disclosure: 2 / 3
The content is well-structured with clear sections, but it's quite long (~180 lines of substantive content) with all detail inline. The path_instructions examples and path_filters lists could be split into reference files. The 'Next Steps' reference to 'coderabbit-core-workflow-b' is good, but the main body could benefit from more delegation to supplementary files.

Total: 9 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria / Description / Result:

allowed_tools_field: Warning
'allowed-tools' contains unusual tool name(s)

frontmatter_unknown_keys: Warning
Unknown frontmatter key(s) found; consider removing or moving to metadata

Total: 9 / 11 (Passed)

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

