Automate Apiverve tasks via Rube MCP (Composio). Always search tools first for current schemas.
Overall score: 67

Quality: 53%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Advisory: Suggest reviewing before use.

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./composio-skills/apiverve-automation/SKILL.md`

Quality
Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too vague and lacks the specificity needed for effective skill selection. It fails to explain what specific tasks can be automated, what Apiverve actually does, or when Claude should choose this skill. The technical product names provide minimal distinctiveness but do not help users understand the skill's purpose.
Suggestions
- Add specific concrete actions that can be performed (e.g., 'Query API data, validate endpoints, monitor API health') instead of the generic 'automate tasks'
- Include an explicit 'Use when...' clause with natural trigger terms users would say (e.g., 'Use when the user mentions Apiverve, API verification, or needs to interact with Apiverve services')
- Briefly explain what Apiverve is or does, to help Claude understand the domain and distinguish this skill from other API-related skills
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'Automate Apiverve tasks' without specifying what concrete actions can be performed. No specific capabilities are listed beyond generic 'automation'. | 1 / 3 |
| Completeness | The 'what' is extremely vague ('Automate Apiverve tasks') and there is no explicit 'when' clause or trigger guidance. The instruction to 'search tools first' is operational guidance, not usage triggers. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords ('Apiverve', 'Rube MCP', 'Composio', 'tools', 'schemas'), but these are technical/product names rather than natural terms users would say. Missing common variations or user-friendly trigger terms. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific product names (Apiverve, Rube MCP, Composio) provide some distinctiveness, but 'automate tasks' is generic enough to potentially conflict with other automation-related skills. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill that efficiently teaches the Rube MCP workflow pattern for Apiverve automation. The content is appropriately concise and has clear workflow sequencing with validation steps. The main weakness is that actionability could be improved with a concrete, real-world Apiverve example rather than generic placeholders.
Suggestions
- Add one concrete Apiverve example showing a real use case (e.g., a specific API verification task) with actual tool slugs and arguments that would be returned from RUBE_SEARCH_TOOLS
- Include an example of what a successful RUBE_SEARCH_TOOLS response looks like, so users know what to expect and how to extract the tool_slug and schema
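To make the second suggestion concrete, the snippet below sketches what such an example *might* look like. The response shape and the `APIVERVE_DNS_LOOKUP` slug are hypothetical stand-ins, not the actual Rube MCP schema; the point is only to show the kind of slug-plus-schema extraction the skill could document:

```python
# Assumed, simplified shape of a RUBE_SEARCH_TOOLS-style result.
# Field names and the tool slug below are illustrative guesses.
search_result = {
    "tools": [
        {
            "tool_slug": "APIVERVE_DNS_LOOKUP",  # hypothetical slug
            "description": "Look up DNS records for a domain",
            "input_schema": {
                "type": "object",
                "properties": {"domain": {"type": "string"}},
                "required": ["domain"],
            },
        }
    ]
}

def pick_tool(result, keyword):
    """Return (tool_slug, input_schema) for the first tool whose
    slug contains the keyword, or None if nothing matches."""
    for tool in result.get("tools", []):
        if keyword in tool["tool_slug"]:
            return tool["tool_slug"], tool["input_schema"]
    return None

slug, schema = pick_tool(search_result, "DNS")
```

An example like this in the skill would let an agent see, at a glance, which fields to read before building the execute call.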
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, avoiding explanations of what Apiverve or Composio are. Every section provides actionable information without padding or unnecessary context. | 3 / 3 |
| Actionability | Provides concrete tool call patterns with specific parameters, but the examples are pseudo-code style rather than fully executable. The actual Apiverve operations are abstracted behind 'your specific task' placeholders without concrete examples of real Apiverve use cases. | 2 / 3 |
| Workflow Clarity | Clear 3-step workflow with explicit sequencing (discover → check connection → execute). Includes a validation checkpoint for connection status ('Confirm connection status shows ACTIVE before running any workflows'), and the known pitfalls section addresses error prevention. | 3 / 3 |
| Progressive Disclosure | Well-structured with clear sections progressing from prerequisites to setup to workflow to pitfalls. A quick-reference table provides at-a-glance navigation. The external link to the toolkit docs is one level deep and clearly signaled. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
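The three-step workflow credited above (discover → check connection → execute) can be sketched as plain control flow. Here `call_mcp_tool` is a hypothetical stand-in for whatever MCP client function the agent uses, and the response shapes are assumptions; only the tool names and the ACTIVE check come from the skill itself:

```python
def run_apiverve_task(call_mcp_tool, task_description, arguments):
    """Sketch of the discover -> check connection -> execute workflow.
    `call_mcp_tool` is a hypothetical client: (tool_name, args) -> dict."""
    # 1. Discover: never hardcode slugs; fetch the current schema first.
    found = call_mcp_tool("RUBE_SEARCH_TOOLS", {"query": task_description})
    tool = found["tools"][0]  # assumed response shape

    # 2. Check connection: confirm status shows ACTIVE before executing.
    conn = call_mcp_tool("RUBE_MANAGE_CONNECTIONS", {"toolkit": "apiverve"})
    if conn.get("status") != "ACTIVE":
        raise RuntimeError("Apiverve connection is not ACTIVE; connect first")

    # 3. Execute with the discovered slug and validated arguments.
    return call_mcp_tool(
        "RUBE_MULTI_EXECUTE_TOOL",
        {"tool_slug": tool["tool_slug"], "arguments": arguments},
    )
```

Folding the skill's own example into this shape would close the gap the Actionability row flags: the placeholders become real slugs and arguments without lengthening the document much.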
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |