
coderabbit-observability

Monitor CodeRabbit review effectiveness with metrics, dashboards, and alerts. Use when tracking review coverage, measuring comment acceptance rates, or building dashboards for CodeRabbit adoption across your organization. Trigger with phrases like "coderabbit monitoring", "coderabbit metrics", "coderabbit observability", "monitor coderabbit", "coderabbit alerts", "coderabbit dashboard".

84

Quality: 82%. Does it follow best practices?
Impact: Pending. No eval scenarios have been run.
Security (by Snyk): Advisory. Suggest reviewing before use.


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that hits all the key criteria. It provides specific capabilities, explicit trigger guidance with both 'Use when' and 'Trigger with phrases' clauses, and is clearly scoped to a distinct niche (CodeRabbit monitoring). The description uses proper third-person voice throughout.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple concrete actions: monitoring review effectiveness, tracking review coverage, measuring comment acceptance rates, and building dashboards. These are specific, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (monitor CodeRabbit review effectiveness with metrics, dashboards, and alerts) and 'when' (explicit 'Use when' clause with specific scenarios plus a 'Trigger with phrases' section). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms, including 'coderabbit monitoring', 'coderabbit metrics', 'coderabbit observability', 'coderabbit alerts', and 'coderabbit dashboard'. These are terms users would naturally use when seeking this functionality. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive: scoped specifically to CodeRabbit monitoring and observability. The combination of 'CodeRabbit' + 'monitoring/metrics/dashboards' creates a clear niche unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill with executable scripts and complete workflow definitions for monitoring CodeRabbit. Its main weaknesses are the lack of validation checkpoints within the workflow steps and the monolithic structure that could benefit from splitting scripts into separate bundle files. The metrics table and error handling table add good value, though some content is slightly redundant.

Suggestions

- Add validation checkpoints within the workflow, such as verifying `gh auth status` before running scripts and checking API response codes before processing results.
- Extract the bash scripts and GitHub Actions YAML files into separate bundle files, referencing them from SKILL.md to improve progressive disclosure and reduce inline bulk.
- Remove the blank dashboard template in Step 5 (or make it a referenced file), since it is not executable and mostly restates the metrics table from the top.
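The first suggestion amounts to a small preflight helper run before any metrics script. A minimal sketch, assuming bash and the standard `gh auth status` exit-code behavior; the function name and messages are illustrative, not part of the skill:

```shell
# Illustrative preflight helper: fail fast if the GitHub CLI is missing
# or unauthenticated, before any metrics scripts make API calls.
require_gh_auth() {
  if ! command -v gh >/dev/null 2>&1; then
    echo "error: gh CLI is not installed" >&2
    return 1
  fi
  if ! gh auth status >/dev/null 2>&1; then
    echo "error: gh is not authenticated; run 'gh auth login' first" >&2
    return 1
  fi
}
```

Calling this at the top of each workflow step (and relying on `set -e` or checking its return code) provides the validation checkpoint the review asks for.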

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is reasonably efficient but includes some unnecessary elements: the markdown dashboard template in Step 5 is essentially a blank table that doesn't add much value, the 'Healthy ranges' echo statements duplicate the metrics table, and the Output section restates what the steps already cover. The metrics table and error handling table are useful and dense, though. | 2 / 3 |
| Actionability | The skill provides fully executable bash scripts, complete GitHub Actions workflow YAML files, and concrete API calls using `gh`. Scripts include proper argument handling, error handling with `set -euo pipefail`, and are copy-paste ready with minimal modification needed. | 3 / 3 |
| Workflow Clarity | Steps are clearly sequenced from measuring coverage through building dashboards and alerts. However, there are no validation checkpoints between steps: no verification that the gh CLI is authenticated, no check that API responses are valid before processing, and no feedback loop for when scripts produce unexpected results. The error handling table at the end partially compensates but is disconnected from the workflow steps. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and a logical progression, but it's quite long (~200 lines of code/content) with everything inline. The bash scripts and GitHub Actions workflows could be referenced as separate files. The single reference to 'coderabbit-incident-runbook' in Next Steps is good, but the skill would benefit from splitting the scripts into bundle files. | 2 / 3 |
| Total | | 9 / 12 |

Passed
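As a concrete illustration of the copy-paste style the Actionability row credits, here is a hypothetical coverage helper built on real REST endpoints (`GET /repos/{owner}/{repo}/pulls` and `GET /repos/{owner}/{repo}/pulls/{number}/reviews`); the function name and the assumption that the bot's review login matches "coderabbitai" are illustrative, not taken from the skill itself:

```shell
# Hypothetical helper: count recently closed PRs that received a
# CodeRabbit review, using `gh api` against the GitHub REST API.
coderabbit_coverage() {
  local repo="${1:?usage: coderabbit_coverage <owner/repo> [pr-count]}"
  local limit="${2:-50}"
  local reviewed=0 n
  for n in $(gh api "repos/$repo/pulls?state=closed&per_page=$limit" --jq '.[].number'); do
    # Assumption: the bot's review author login contains "coderabbitai".
    if gh api "repos/$repo/pulls/$n/reviews" --jq '.[].user.login' | grep -q coderabbitai; then
      reviewed=$((reviewed + 1))
    fi
  done
  echo "$reviewed"
}
```

For example, `coderabbit_coverage my-org/my-repo 50` prints how many of the last 50 closed PRs carry a CodeRabbit review, which can then be compared against the total to get a coverage percentage.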

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 |

Passed
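Both warnings point at the SKILL.md frontmatter. A hypothetical sketch of the kind of change they suggest; the key names and values below are illustrative, and the exact allowed tool names depend on the skill spec the validator enforces:

```yaml
---
name: coderabbit-observability
description: Monitor CodeRabbit review effectiveness with metrics, dashboards, and alerts.
# allowed_tools_field warning: restrict 'allowed-tools' to tool names the
# validator recognizes rather than ad-hoc entries.
allowed-tools: Bash, Read
# frontmatter_unknown_keys warning: move unrecognized top-level keys
# (e.g. a custom 'category' field) under 'metadata' instead.
metadata:
  category: observability
---
```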

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
