Skill description under review: "Automate Apilio tasks via Rube MCP (Composio). Always search tools first for current schemas."
Quality: 67 (53%) — does it follow best practices?
Impact: Pending — no eval scenarios have been run.
Advisory: Suggest reviewing before use.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./composio-skills/apilio-automation/SKILL.md`

## Quality
### Discovery — 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too vague and lacks the essential components for effective skill selection. It fails to specify what Apilio tasks can be automated, provides no concrete actions, and completely omits guidance on when Claude should select this skill. The technical product names provide minimal distinctiveness but don't help users understand the skill's purpose.
**Suggestions**

- Add specific concrete actions that can be performed (e.g., 'Create triggers, manage variables, send notifications, configure logic blocks').
- Add a 'Use when...' clause with natural trigger terms users might say (e.g., 'Use when the user mentions Apilio, home automation logic, smart home rules, or IoT workflows').
- Briefly explain what Apilio is for users unfamiliar with the product, to improve trigger-term coverage.
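Concretely, a revised description following these suggestions might read as sketched below. The specific capabilities listed (variables, conditions, logicblocks) are illustrative assumptions about the skill's toolset, not confirmed by this review:

```yaml
description: >
  Automate Apilio smart-home automation via Rube MCP (Composio): create and
  update variables, evaluate conditions, and trigger logicblocks. Use when
  the user mentions Apilio, home automation rules, smart home logic, or IoT
  workflows. Always search tools first for current schemas.
```

A description shaped like this covers both the concrete actions (Specificity) and the 'Use when...' trigger terms (Completeness) that the dimensions below penalize.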
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'Automate Apilio tasks' without specifying what concrete actions can be performed. No specific capabilities are listed beyond generic 'tasks'. | 1 / 3 |
| Completeness | The 'what' is extremely vague ('Automate Apilio tasks') and there is no 'Use when...' clause or explicit trigger guidance. The instruction to 'search tools first' is implementation guidance, not usage triggers. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords ('Apilio', 'Rube MCP', 'Composio'), but these are technical/product names rather than natural terms users would say. Missing common variations or user-facing trigger terms. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific product names (Apilio, Rube MCP, Composio) provide some distinctiveness, but 'automate tasks' is generic enough to potentially conflict with other automation skills. | 2 / 3 |
| **Total** | | 6 / 12 — Passed |
### Implementation — 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill that efficiently guides Claude through Apilio automation via Rube MCP. The workflow is clear with proper validation checkpoints, and the Known Pitfalls section adds valuable guardrails. The main weakness is that tool call examples use placeholder syntax rather than fully concrete examples, though this may be intentional given the dynamic nature of tool schemas.
**Suggestions**

- Consider adding one complete, concrete example showing a real Apilio operation end-to-end with actual argument values (even if noted as illustrative).
- The Step 3 example could show a real tool_slug and arguments structure to make it more immediately actionable.
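As an illustration of what such a concrete example could look like, the sketch below shows a plausible Step 3 tool-call payload together with the Step 2 connection checkpoint. The tool slug `APILIO_UPDATE_VARIABLE` and its argument names are hypothetical placeholders, not taken from a real schema — in practice they would come from the tool search in Step 1:

```python
# Illustrative Rube MCP tool-call payload. The slug and argument names are
# hypothetical; real values must come from the tool-search results (Step 1).
tool_call = {
    "tool_slug": "APILIO_UPDATE_VARIABLE",
    "arguments": {
        "variable_id": "front_door_open",  # from the user's Apilio account
        "value": "true",
    },
}

def ready_to_execute(connection_status: str) -> bool:
    """Mirror the Step 2 checkpoint: only proceed when the connection
    status reported by the MCP server is ACTIVE."""
    return connection_status == "ACTIVE"

print(ready_to_execute("ACTIVE"))  # True: safe to send tool_call
```

Even a clearly labelled illustrative payload like this would make the workflow copy-adaptable rather than requiring inference from `/* schema-compliant args */` placeholders.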
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, avoiding unnecessary explanations of what Apilio or MCP are. Every section serves a purpose and assumes Claude's competence with APIs and tool-execution patterns. | 3 / 3 |
| Actionability | Provides concrete tool-call patterns with specific parameters, but uses pseudo-code style rather than fully executable examples. Argument placeholders like '/* schema-compliant args from search results */' require inference rather than being copy-paste ready. | 2 / 3 |
| Workflow Clarity | Clear 3-step workflow with an explicit validation checkpoint (Step 2: Check Connection shows ACTIVE before proceeding). The Known Pitfalls section provides error-recovery guidance and the sequence is unambiguous. | 3 / 3 |
| Progressive Disclosure | Well structured, with clear sections progressing from prerequisites to setup to workflow to pitfalls. The external reference to the Composio docs is one level deep and clearly signaled. A quick-reference table aids navigation. | 3 / 3 |
| **Total** | | 11 / 12 — Passed |
### Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored. 10 / 11 checks passed.
**Validation for skill structure**

| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | 10 / 11 — Passed |
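One way to clear the `frontmatter_unknown_keys` warning, assuming the skill spec accepts a `metadata` block as the warning text suggests, is to nest the unrecognized keys there. The key names below are illustrative, not taken from the actual SKILL.md:

```yaml
---
name: apilio-automation
description: Automate Apilio tasks via Rube MCP (Composio).
metadata:
  vendor: composio   # formerly a top-level unknown key (illustrative)
---
```

Keeping only spec-defined keys at the top level would bring Validation to 11 / 11.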