
coderabbit-observability

Monitor CodeRabbit review effectiveness with metrics, dashboards, and alerts. Use when tracking review coverage, measuring comment acceptance rates, or building dashboards for CodeRabbit adoption across your organization. Trigger with phrases like "coderabbit monitoring", "coderabbit metrics", "coderabbit observability", "monitor coderabbit", "coderabbit alerts", "coderabbit dashboard".

Overall: 84

Quality: 82%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Security (by Snyk): Advisory
Suggest reviewing before use.


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that clearly defines its scope around CodeRabbit review monitoring, provides explicit trigger guidance with both a 'Use when' clause and enumerated trigger phrases, and occupies a distinct niche. The description is concise yet comprehensive, using third-person voice and listing concrete capabilities without unnecessary verbosity.

Dimension scores

Specificity: 3 / 3
Lists multiple concrete actions: monitoring review effectiveness, tracking review coverage, measuring comment acceptance rates, building dashboards, and setting up alerts. These are specific, actionable capabilities.

Completeness: 3 / 3
Clearly answers both 'what' (monitor CodeRabbit review effectiveness with metrics, dashboards, and alerts) and 'when' (an explicit 'Use when...' clause plus a 'Trigger with phrases like...' section providing concrete trigger guidance).

Trigger Term Quality: 3 / 3
Excellent coverage of natural trigger terms, including 'coderabbit monitoring', 'coderabbit metrics', 'coderabbit observability', 'coderabbit alerts', and 'coderabbit dashboard', plus contextual phrases like 'review coverage' and 'comment acceptance rates'.

Distinctiveness / Conflict Risk: 3 / 3
Highly distinctive: focuses specifically on CodeRabbit monitoring and observability, a very narrow niche. The combination of 'CodeRabbit' plus 'monitoring/metrics/dashboards' makes it extremely unlikely to conflict with other skills.

Total: 12 / 12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides highly actionable, executable scripts and workflows for monitoring CodeRabbit effectiveness, which is its strongest quality. However, it's somewhat verbose for a skill file — the inline scripts are lengthy and could be referenced externally, and the workflow lacks explicit validation checkpoints between steps. The error handling table is a nice touch but doesn't compensate for missing in-workflow verification steps.

Suggestions

Add validation checkpoints after each step, e.g., 'Run the coverage script on a known repo and verify output shows non-zero PR counts before proceeding' — this is especially important for the GitHub Actions deployments.
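A checkpoint like this can be sketched as a small shell guard. The coverage script path and its `--count` flag below are hypothetical (they are not taken from the skill); the only assumption is that the step yields a PR count that can be tested before the workflow proceeds.

```shell
# Hypothetical checkpoint: fail fast if the coverage step reports no PRs.
# The invocation commented at the bottom is illustrative, not from the skill.

check_pr_count() {
  case "$1" in
    ''|*[!0-9]*|0)
      echo "checkpoint failed: expected a non-zero PR count, got '${1:-<empty>}'" >&2
      return 1
      ;;
    *)
      echo "checkpoint ok: $1 PRs covered"
      return 0
      ;;
  esac
}

# count=$(./scripts/coderabbit-coverage.sh --count)   # hypothetical invocation
# check_pr_count "$count" || exit 1
```

Wiring a guard like this between steps gives the agent a concrete go/no-go signal instead of silently continuing with empty data.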

Extract the bash scripts and YAML workflows into separate referenced files (e.g., scripts/coderabbit-coverage.sh, .github/workflows/coderabbit-metrics.yml) and keep SKILL.md as a concise overview with links.

Remove Step 5's blank markdown template — it's not actionable and adds little value since the GitHub Actions workflow in Step 3 already generates the summary programmatically.

Dimension scores

Conciseness: 2 / 3
The skill is fairly long, with some unnecessary elements such as the markdown dashboard template (Step 5), which is just a blank table, and the 'Output' section, which merely restates what the steps already cover. The metrics table and error-handling table add value, but the overall content could be tightened.

Actionability: 3 / 3
The skill provides fully executable bash scripts, complete GitHub Actions workflow YAML files, and specific API calls with proper error handling. Scripts are copy-paste ready with parameterized inputs and clear usage patterns.

Workflow Clarity: 2 / 3
Steps are clearly sequenced from measuring coverage through building dashboards and alerts. However, there are no validation checkpoints between steps: no verification that scripts produce expected output, no feedback loops for when API calls fail or return unexpected data, and no guidance on confirming that the GitHub Actions workflows are working correctly after deployment.

Progressive Disclosure: 2 / 3
The content is mostly inline in a single file with over 200 lines of code. The bash scripts and YAML workflows could be referenced as separate files. There is a brief reference to 'coderabbit-incident-runbook' at the end, but the main content would benefit from splitting the executable scripts into referenced files, with SKILL.md serving as an overview.

Total: 9 / 12 (Passed)
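One way to add the feedback loop the Workflow Clarity note asks for is a small retry wrapper around each API call. This is a sketch under the assumption that the skill's scripts shell out to something like curl; the attempt count, backoff, and the commented endpoint are illustrative, not values from the skill.

```shell
# Sketch of a retry loop for flaky API calls; attempt count and fixed
# one-second backoff are illustrative defaults.

retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "attempt $i/$attempts failed" >&2
    if [ "$i" -lt "$attempts" ]; then
      sleep 1   # real scripts often use exponential backoff instead
    fi
    i=$((i + 1))
  done
  return 1
}

# retry 3 curl -fsS "https://api.github.com/repos/OWNER/REPO/pulls"  # hypothetical call
```

Wrapping each metrics fetch this way turns a transient API failure into a logged, bounded retry rather than silently missing data in the dashboard.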

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed

Validation for skill structure

Checks

allowed_tools_field: Warning
'allowed-tools' contains unusual tool name(s).

frontmatter_unknown_keys: Warning
Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 9 / 11 (Passed)
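A warning like frontmatter_unknown_keys can be reproduced locally with a small script before publishing. The allow-list below is an assumption for illustration only; the exact keys the skill spec permits are not shown on this page and should be taken from the spec itself.

```shell
# Flag top-level frontmatter keys outside an assumed allow-list.
# The allow-list is illustrative; consult the skill spec for the real one.
allowed="name description allowed-tools license metadata"

frontmatter_keys() {
  # print top-level keys between the first pair of '---' delimiters
  awk '/^---$/ { n++; next } n == 1 && /^[A-Za-z0-9_-]+:/ { sub(/:.*/, ""); print }' "$1"
}

check_frontmatter() {
  file=$1; status=0
  for key in $(frontmatter_keys "$file"); do
    case " $allowed " in
      *" $key "*) ;;  # known key
      *) echo "unknown frontmatter key: $key"; status=1 ;;
    esac
  done
  return $status
}

# check_frontmatter SKILL.md
```

Running a check like this in CI would catch both warnings flagged above before the registry's own validation does.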

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
