Automate Sentry tasks via Rube MCP (Composio): manage issues/events, configure alerts, track releases, monitor projects and teams. Always search tools first for current schemas.
Overall score: 75
Quality: 65% — does it follow best practices?
Impact: 94% (1.77x average score across 3 eval scenarios)
Status: Advisory — suggest reviewing before use

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/all-skills/skills/sentry-automation/SKILL.md`

Quality
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is reasonably strong with specific capabilities and a clear domain niche (Sentry via Rube MCP/Composio). Its main weaknesses are the lack of an explicit 'Use when...' clause and missing natural trigger terms that users would commonly associate with error monitoring and tracking. The operational instruction about searching tools first is a nice touch but doesn't compensate for the missing trigger guidance.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Sentry error tracking, monitoring exceptions, managing alerts, or working with Composio/Rube MCP for Sentry.'
Include natural user-facing trigger terms like 'error tracking', 'error monitoring', 'crash reports', 'exceptions', and 'bug tracking' to improve discoverability.
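Applying both suggestions, the skill's frontmatter description might read like this (a sketch only — the exact frontmatter keys depend on the skill format in use):

```yaml
---
name: sentry-automation
description: >
  Automate Sentry tasks via Rube MCP (Composio): manage issues/events,
  configure alerts, track releases, monitor projects and teams.
  Use when the user asks about Sentry error tracking, error monitoring,
  crash reports, exceptions, bug tracking, or managing Sentry alerts.
  Always search tools first for current schemas.
---
```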
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: manage issues/events, configure alerts, track releases, monitor projects and teams. Also includes the operational guidance to search tools first for current schemas. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' (automate Sentry tasks via Rube MCP with specific capabilities listed), but lacks an explicit 'Use when...' clause. The 'when' is only implied by the domain terms. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'Sentry', 'issues', 'events', 'alerts', 'releases', 'projects', 'teams', and 'Composio/Rube MCP'. However, it misses common user variations like 'error tracking', 'error monitoring', 'bug tracking', 'crash reports', or 'exceptions' that users would naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive due to the specific mention of 'Sentry' and 'Rube MCP (Composio)', which clearly carve out a unique niche. Unlikely to conflict with other skills unless there are multiple Sentry-related skills. | 3 / 3 |
| Total | | 10 / 12 — Passed |
Implementation — 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured API integration skill with clear workflow sequences and good labeling of required vs optional steps. Its main weaknesses are repetitive pitfall information across sections (hurting conciseness), lack of concrete executable examples for tool invocations (hurting actionability), and all content being in a single file despite its length. The workflow clarity is the strongest dimension with explicit prerequisites and sequencing.
Suggestions
Add at least one concrete, copy-paste ready tool invocation example (e.g., a full RUBE_SEARCH_TOOLS call or a CREATE_PROJECT_RULE_FOR_ALERTS call with actual conditions/actions JSON structure) to improve actionability.
Consolidate the repeated pitfalls about slug vs display name into a single 'ID Resolution' section and remove duplicates from individual workflows to improve conciseness.
Provide an example JSON schema for alert rule conditions/actions/filters, since the skill explicitly notes these use 'specific JSON schemas' but gives no examples.
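As an illustration of the third suggestion, a Sentry issue alert rule payload generally follows the shape below. This is a sketch based on Sentry's public REST API for project rules; the specific condition/action registry IDs and the parameter names the Rube MCP tool expects are assumptions that should be verified against the live schema returned by the tool search.

```python
import json

# Hypothetical alert rule payload; condition/action IDs follow Sentry's
# documented rule registry, but confirm them against the current schema
# before use.
alert_rule = {
    "name": "Notify on new issues",
    "actionMatch": "all",   # all conditions must match
    "filterMatch": "all",   # all filters must match
    "frequency": 30,        # minutes between repeat notifications per issue
    "conditions": [
        {"id": "sentry.rules.conditions.first_seen_event.FirstSeenEventCondition"}
    ],
    "filters": [
        {"id": "sentry.rules.filters.level.LevelFilter", "match": "gte", "level": "40"}
    ],
    "actions": [
        {"id": "sentry.rules.actions.notify_event.NotifyEventAction"}
    ],
}

print(json.dumps(alert_rule, indent=2))
```

Embedding one such example in the skill would give agents a concrete template to adapt rather than guessing the JSON structure from prose.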
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably well-organized but contains significant repetition—pitfalls about org slugs vs display names are repeated across nearly every workflow section, and the 'Known Pitfalls' section at the end duplicates information already stated in individual workflows. The quick reference table also largely restates what's already covered. Some trimming would improve token efficiency. | 2 / 3 |
| Actionability | Tool names and parameter lists are concrete and specific, which is good. However, there are no executable code examples or copy-paste ready tool invocations with actual parameter structures. The alert rule creation section mentions 'specific JSON schemas' for conditions/actions/filters but doesn't provide any examples, and the ID resolution patterns use pseudocode rather than actual tool call examples. | 2 / 3 |
| Workflow Clarity | Each workflow has a clear 'when to use' trigger, a numbered tool sequence with explicit labels ([Required], [Optional], [Prerequisite], [Alternative]), key parameters, and pitfalls. The setup section includes a verification step before proceeding. The sequences are well-ordered with prerequisite steps clearly marked. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and a useful quick reference table, but it's a monolithic document with no bundle files to offload detailed content. The alert rule configuration details, search query syntax reference, and the full quick reference table could be split into separate files. For a skill of this length (~200+ lines), some content separation would improve navigability. | 2 / 3 |
| Total | | 9 / 12 — Passed |
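To make the actionability point concrete: an MCP tool invocation is essentially a tool name plus an arguments object, and a copy-paste ready example could be as small as the sketch below. The `query` argument name for `RUBE_SEARCH_TOOLS` is an assumption here — the real parameter names should be taken from the tool's advertised schema.

```python
# Hypothetical MCP tool call; the "query" argument name is an assumption,
# so fetch the real schema from the server's tool listing first.
tool_call = {
    "name": "RUBE_SEARCH_TOOLS",
    "arguments": {"query": "sentry create alert rule"},
}

def render_call(call: dict) -> str:
    """Format a tool call for logging/inspection."""
    args = ", ".join(f"{k}={v!r}" for k, v in call["arguments"].items())
    return f'{call["name"]}({args})'

print(render_call(tool_call))
# → RUBE_SEARCH_TOOLS(query='sentry create alert rule')
```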
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure (10 / 11 checks passed):
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 — Passed |
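To clear the remaining warning, unknown top-level frontmatter keys can typically be nested under a `metadata` block, along these lines (a sketch; the key names shown are illustrative, not taken from the skill):

```yaml
---
name: sentry-automation
description: Automate Sentry tasks via Rube MCP (Composio)
metadata:
  # previously unknown top-level keys moved here
  toolkit: composio
  owner: platform-team
---
```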