Skill description under review:

> Automate Async Interview tasks via Rube MCP (Composio). Always search tools first for current schemas.
Overall score: 67
Quality: 53% — does it follow best practices?
Impact: Pending — no eval scenarios have been run
Advisory — suggest reviewing before use
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./composio-skills/async-interview-automation/SKILL.md
```

Quality
Discovery — 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too vague and procedural to effectively guide skill selection. It names specific tools (Rube MCP, Composio) but fails to explain what concrete actions the skill performs or when Claude should select it. The instruction about searching tools is implementation guidance rather than a capability description.
Suggestions:

- Add a 'Use when...' clause specifying trigger scenarios, e.g., 'Use when the user mentions Async Interview, candidate scheduling, interview automation, or Composio integration'.
- Replace 'Automate Async Interview tasks' with specific concrete actions, such as 'Schedule candidate interviews, send interview invitations, track interview completion status, retrieve candidate responses'.
- Move the procedural instruction ('Always search tools first') to the skill body; the description should focus on capabilities and triggers.
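Applied together, these suggestions might produce frontmatter like the following. This is a sketch only: the key names assume the standard SKILL.md frontmatter convention, and the wording is illustrative rather than a confirmed rewrite.

```yaml
---
name: async-interview-automation
description: >
  Schedule candidate interviews, send interview invitations, track interview
  completion status, and retrieve candidate responses via Rube MCP (Composio).
  Use when the user mentions Async Interview, candidate scheduling, interview
  automation, or Composio integration.
---
```

Note that the procedural rule ("always search tools first for current schemas") now lives in the skill body, not the description.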
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'Automate Async Interview tasks' without specifying what concrete actions are performed. 'Search tools first for current schemas' is a procedural instruction, not a capability description. | 1 / 3 |
| Completeness | The 'what' is extremely vague ('Automate Async Interview tasks') and there is no 'when' clause or explicit trigger guidance. The instruction about searching tools is procedural, not a usage trigger. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords ('Async Interview', 'Rube MCP', 'Composio'), but these are technical, product-specific terms. Missing natural user-language variations for the tasks users might request. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific product names (Rube MCP, Composio, Async Interview) provide some distinctiveness, but 'automate tasks' is generic enough to potentially conflict with other automation skills. | 2 / 3 |
| Total | | 6 / 12 Passed |
Implementation — 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill that efficiently guides Claude through Async Interview automation via Rube MCP. The workflow is clear with proper validation checkpoints, and the content is appropriately concise. The main weakness is that the tool call examples use placeholder syntax rather than fully executable examples with realistic argument values.
Suggestions:

- Provide at least one complete, realistic example with actual argument values (e.g., a specific Async Interview use case like 'create interview' or 'list candidates') to make the guidance more actionable.
- Consider adding an example response from RUBE_SEARCH_TOOLS showing what the returned schema looks like, so Claude knows what to expect and how to use it.
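As a sketch of what such a worked example might look like: the request/response shapes below are illustrative assumptions, not a confirmed Rube MCP schema. Only the RUBE_SEARCH_TOOLS name comes from the skill itself; the returned tool name and its input schema are hypothetical.

```json
{
  "request": {
    "tool": "RUBE_SEARCH_TOOLS",
    "arguments": { "query": "create async interview for a candidate" }
  },
  "example_response_excerpt": {
    "tools": [
      {
        "name": "ASYNC_INTERVIEW_CREATE_INTERVIEW",
        "input_schema": {
          "candidate_email": "string",
          "job_id": "string"
        }
      }
    ]
  }
}
```

Even a trimmed excerpt like this gives Claude a concrete target: which tool to execute next and which arguments the live schema is likely to require.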
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, avoiding unnecessary explanations of what Async Interview or Composio are. Every section serves a purpose and assumes Claude understands the concepts. | 3 / 3 |
| Actionability | Provides concrete tool call patterns with parameter examples, but uses pseudo-code style rather than fully executable code. The arguments are placeholders rather than complete, copy-paste-ready examples. | 2 / 3 |
| Workflow Clarity | Clear three-step workflow with explicit sequence (discover → check connection → execute). The setup section includes a validation checkpoint (confirm ACTIVE status before proceeding), and the known-pitfalls section provides error-prevention guidance. | 3 / 3 |
| Progressive Disclosure | Well organized, with clear sections progressing from prerequisites to setup to workflow to pitfalls. The external reference to toolkit docs is one level deep and clearly signaled. A quick-reference table aids navigation. | 3 / 3 |
| Total | | 11 / 12 Passed |
Validation — 90% (10 / 11 checks passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure:
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |