Automate Appdrag tasks via Rube MCP (Composio). Always search tools first for current schemas.
Quality: 53% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Advisory: Suggest reviewing before use
Optimize this skill with Tessl: `npx tessl skill review --optimize ./composio-skills/appdrag-automation/SKILL.md`

Quality
Discovery: 22%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too vague to effectively guide skill selection. It names the integration target (Appdrag via Rube MCP/Composio) but fails to explain what specific tasks can be automated or when this skill should be chosen over others. The operational instruction about searching tools first doesn't help with skill selection.
Suggestions:
- Add specific, concrete actions that can be performed (e.g., 'Create apps, manage databases, deploy cloud functions, handle API integrations')
- Add an explicit 'Use when...' clause with natural trigger terms users might say (e.g., 'Use when the user mentions Appdrag, no-code app building, or cloud backend automation')
- Clarify what Appdrag is for users unfamiliar with it (e.g., 'no-code platform') to improve trigger matching
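Taken together, these suggestions point toward a fuller frontmatter description. The rewrite below is an illustrative sketch only — the capability list and trigger phrasing are assembled from the suggestions above, not taken from the skill itself:

```yaml
# SKILL.md frontmatter -- illustrative sketch, not the skill's actual metadata
name: appdrag-automation
description: >
  Automate Appdrag (a no-code app-building platform) via Rube MCP (Composio):
  create apps, manage databases, deploy cloud functions, and handle API
  integrations. Use when the user mentions Appdrag, no-code app building,
  or cloud backend automation. Always search tools first for current schemas.
```

A description at this level of detail gives the agent both concrete capabilities to match against and natural trigger terms, addressing all three suggestions at once.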
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'Automate Appdrag tasks' without specifying what concrete actions can be performed. No specific capabilities are listed beyond generic 'tasks'. | 1 / 3 |
| Completeness | The 'what' is extremely vague ('Automate Appdrag tasks') and there is no explicit 'when' clause or trigger guidance. The instruction to 'search tools first' is operational guidance, not usage triggers. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords ('Appdrag', 'Rube MCP', 'Composio'), but these are technical/product names rather than natural terms users would say. Missing common variations or task-oriented trigger words. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific product names (Appdrag, Rube MCP, Composio) provide some distinctiveness, but 'automate tasks' is generic enough to potentially conflict with other automation skills. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Implementation: 85%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill that efficiently teaches Appdrag automation through Rube MCP. Its strengths are clear workflow sequencing, good organization, and appropriate brevity. The main weakness is that its examples use placeholder syntax rather than fully concrete, executable patterns with realistic argument values.
Suggestions:
- Replace placeholder comments like '/* schema-compliant args from search results */' with a concrete example showing actual Appdrag-specific arguments (even if noting they may vary)
- Add one complete end-to-end example showing a specific Appdrag task from search through execution, with realistic parameter values
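To make the first suggestion concrete: instead of a placeholder comment, the skill could spell out a full tool-call payload. The tool slug and argument names below are hypothetical stand-ins (in practice they come from the schema returned by the tool-search step), but the shape shows what "realistic parameter values" means:

```python
# Illustrative sketch only: the tool slug and every argument name here are
# hypothetical placeholders; real values come from the tool-search schema.
tool_call = {
    "tool_slug": "APPDRAG_CREATE_RECORD",    # hypothetical slug
    "arguments": {                           # args spelled out, not a comment
        "project_id": "my-appdrag-project",  # hypothetical parameter
        "table": "customers",
        "fields": {"name": "Ada Lovelace", "email": "ada@example.com"},
    },
}

# An example at this level of concreteness is copy-paste adaptable: the
# reader swaps in real values instead of interpreting a placeholder comment.
print(sorted(tool_call["arguments"]))
```

A skill example shaped like this satisfies both suggestions at once: it is end-to-end readable and every parameter carries a realistic value.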
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, avoiding explanations of basic concepts. Every section serves a purpose, with no padding or unnecessary context about what Appdrag or Composio are. | 3 / 3 |
| Actionability | Provides concrete tool call patterns with specific parameter structures, but uses pseudocode-style examples rather than fully executable code. Argument placeholders like '/* schema-compliant args from search results */' reduce copy-paste readiness. | 2 / 3 |
| Workflow Clarity | Clear 3-step workflow with explicit sequence (discover → check connection → execute). Includes a validation checkpoint for connection status ('Confirm connection status shows ACTIVE before running any workflows'), and the known-pitfalls section addresses error prevention. | 3 / 3 |
| Progressive Disclosure | Well-organized, with clear sections progressing from prerequisites to setup to workflow to pitfalls. The external reference to toolkit docs is one level deep and clearly signaled. A quick-reference table provides efficient navigation for common operations. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
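The three-step workflow the table credits (discover → check connection → execute) can be sketched as a guarded sequence. This is a minimal sketch under stated assumptions: the function names are hypothetical stand-ins for the actual Rube MCP tool calls, and the data they return is invented for illustration:

```python
# Minimal sketch of the discover -> check connection -> execute sequence.
# All function names and return values are hypothetical stand-ins for the
# real Rube MCP tool calls; only the control flow is the point.

def search_tools(query):
    # Stand-in for tool discovery; returns current tool schemas.
    return [{"slug": "APPDRAG_EXAMPLE_TOOL", "required": ["project_id"]}]

def check_connection(toolkit):
    # Stand-in for the connection-status check.
    return "ACTIVE"

def execute(tool, arguments):
    # Stand-in for the execution call.
    return {"tool": tool["slug"], "arguments": arguments, "ok": True}

def run_workflow(query, arguments):
    tools = search_tools(query)                    # 1. discover schemas first
    if check_connection("appdrag") != "ACTIVE":
        # Validation checkpoint: never execute against an inactive connection.
        raise RuntimeError("Appdrag connection is not ACTIVE")
    return execute(tools[0], arguments)            # 2-3. gate passed, execute

result = run_workflow("create record", {"project_id": "demo"})
print(result["ok"])
```

The connection check sits between discovery and execution precisely so that a schema lookup can never be followed by a call against a dead connection — the checkpoint the Workflow Clarity row singles out.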
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation: 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |