Automate Abstract tasks via Rube MCP (Composio). Always search tools first for current schemas.
Score: 60 (42%) — Does it follow best practices?
Impact: 88% — 2.37x average score across 3 eval scenarios
Advisory: Suggest reviewing before use
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./composio-skills/abstract-automation/SKILL.md`

Quality
Discovery
0% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description fails on all dimensions. It uses vague, abstract language without specifying concrete capabilities, lacks natural trigger terms users would say, provides no 'when to use' guidance, and is too generic to distinguish from other automation skills. The mention of specific tools (Rube MCP, Composio) without explaining what they do provides no useful selection criteria.
Suggestions
Replace 'Abstract tasks' with specific concrete actions this skill performs (e.g., 'Create calendar events, send emails, manage CRM contacts').
Add a 'Use when...' clause with natural trigger terms users would say (e.g., 'Use when the user asks to automate workflows, connect apps, or integrate services').
Explain what Rube MCP/Composio actually enables in user-facing terms rather than just naming the tools.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'Automate Abstract tasks' without specifying what concrete actions are performed. 'Abstract tasks' is meaningless without context, and 'search tools first' is procedural guidance rather than capability description. | 1 / 3 |
| Completeness | The 'what' is extremely vague ('Automate Abstract tasks') and there is no 'when' clause or explicit trigger guidance. The instruction to 'search tools first' is operational advice, not usage context. | 1 / 3 |
| Trigger Term Quality | Contains technical jargon ('Rube MCP', 'Composio') that users would not naturally say. 'Abstract tasks' is not a natural search term. No common user-facing keywords are present. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Automate Abstract tasks' is so generic it could conflict with virtually any automation-related skill. The only distinguishing element is the tool names (Rube MCP, Composio), which don't clarify the skill's unique purpose. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation
85% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill that efficiently guides Claude through Abstract automation via Rube MCP. The workflow is clear with proper validation checkpoints, and the content respects token budget. The main weakness is that tool call examples use placeholder notation rather than fully concrete examples, which slightly reduces immediate actionability.
Suggestions
Replace pseudocode argument placeholders with at least one concrete, complete example showing actual field names and values for a common Abstract operation
Add a brief example showing what a successful RUBE_SEARCH_TOOLS response looks like so Claude knows what to extract from it
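To make the two suggestions above concrete, a replacement example might look like the sketch below. `RUBE_SEARCH_TOOLS` is named in the review itself, but the tool slug, schema fields, and argument values are hypothetical stand-ins, not the real Composio schema:

```python
# Hypothetical, fully concrete Rube MCP tool-call payloads, replacing
# pseudocode placeholders like `/* schema-compliant args */`.
# The slug and field names below are illustrative assumptions only.
search_call = {
    "tool": "RUBE_SEARCH_TOOLS",  # named in the skill under review
    "arguments": {"query": "create a project in Abstract"},
}

# What a successful search response might (hypothetically) reduce to,
# so the agent knows which fields to extract:
search_response = [
    {"slug": "ABSTRACT_CREATE_PROJECT", "input_schema": {"name": "string"}},
]

# The execute call reuses the discovered slug and satisfies its schema.
execute_call = {
    "tool": search_response[0]["slug"],
    "arguments": {"name": "Design handoff"},
}

print(execute_call)
```

A concrete pair like this is copy-paste ready in a way the placeholder notation is not, which is what the Actionability dimension penalized.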
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, avoiding explanations of what Abstract or Composio are. Every section provides actionable information without padding or unnecessary context. | 3 / 3 |
| Actionability | Provides concrete tool call patterns with parameter examples, but uses pseudocode-style notation rather than fully executable code. The argument placeholders like '/* schema-compliant args */' reduce copy-paste readiness. | 2 / 3 |
| Workflow Clarity | Clear 3-step workflow with explicit sequencing (discover → check connection → execute). Includes validation checkpoint for connection status and explicit guidance to confirm ACTIVE status before proceeding. | 3 / 3 |
| Progressive Disclosure | Well-structured with clear sections progressing from prerequisites to setup to workflow to pitfalls. External reference to toolkit docs is one level deep. Quick reference table provides efficient navigation for common operations. | 3 / 3 |
| Total | | 11 / 12 Passed |
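The discover → check connection → execute sequence credited under Workflow Clarity can be sketched as follows. This is a minimal sketch, assuming a `call_tool(name, args)` helper that forwards requests to the Rube MCP server; the connection-management tool name and the `status` field are assumptions for illustration:

```python
# Minimal sketch of the reviewed 3-step workflow, assuming a
# call_tool(name, args) helper backed by an MCP client.

def run_abstract_task(call_tool, query: str):
    # Step 1: discover — always search tools first for current schemas.
    tools = call_tool("RUBE_SEARCH_TOOLS", {"query": query})

    # Step 2: validation checkpoint — confirm the connection is ACTIVE
    # before executing. (Tool name and status field are assumptions.)
    conn = call_tool("RUBE_MANAGE_CONNECTIONS", {"toolkit": "abstract"})
    if conn.get("status") != "ACTIVE":
        raise RuntimeError("Abstract connection is not ACTIVE; connect first")

    # Step 3: execute the discovered tool with schema-compliant arguments.
    tool = tools[0]
    return call_tool(tool["slug"], tool.get("example_args", {}))
```

The explicit ACTIVE check is the "validation checkpoint" the review highlights: execution never proceeds on a missing or pending connection.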
Validation
90% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |