Automate Aivoov tasks via Rube MCP (Composio). Always search tools first for current schemas.
Install with the Tessl CLI: `npx tessl i github:haniakrim21/everything-claude-code --skill aivoov-automation62`
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score: `npx tessl skill review --optimize ./path/to/skill`
Discovery — 22%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too vague to effectively guide skill selection. It names specific products but fails to explain what Aivoov actually does, what tasks can be automated, or when Claude should select this skill. Users unfamiliar with Aivoov would have no idea if this skill applies to their needs.
Suggestions
- Add specific concrete actions that can be performed (e.g., 'Generate text-to-speech audio, manage voice projects, convert scripts to audio files').
- Add a 'Use when...' clause with natural trigger terms users would say (e.g., 'Use when the user mentions Aivoov, text-to-speech, voice synthesis, or audio generation').
- Briefly explain what Aivoov is for users who may not recognize the product name.
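The suggestions above can be combined into a revised skill description. The following frontmatter is a sketch, not the skill's actual metadata — the listed capabilities are taken from the reviewer's own examples and would need to be checked against the real Aivoov toolkit:

```yaml
---
name: aivoov-automation
description: >
  Automate Aivoov text-to-speech workflows via Rube MCP (Composio):
  generate audio from text, manage voice projects, and convert scripts
  to audio files. Use when the user mentions Aivoov, text-to-speech,
  voice synthesis, or audio generation. Always search tools first for
  current schemas.
---
```

Leading with concrete actions and closing with a 'Use when...' clause addresses both the Specificity and Completeness gaps flagged below.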
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'Automate Aivoov tasks' without specifying what concrete actions can be performed. No specific capabilities are listed beyond generic 'tasks'. | 1 / 3 |
| Completeness | The 'what' is extremely vague ('Automate Aivoov tasks') and there is no 'Use when...' clause or explicit trigger guidance. The instruction to 'search tools first' is implementation guidance, not usage triggers. | 1 / 3 |
| Trigger Term Quality | Includes 'Aivoov', 'Rube MCP', and 'Composio' as specific product names that users might mention, but lacks natural action-oriented keywords users would say when needing this skill. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific product names (Aivoov, Rube MCP, Composio) provide some distinctiveness, but the generic 'automate tasks' framing could overlap with other automation skills. | 2 / 3 |
| Total | | 6 / 12 — Passed |
Implementation — 85%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill that efficiently teaches Aivoov automation via Rube MCP. It excels at workflow clarity with explicit validation steps and comprehensive pitfall documentation. The main weakness is that tool call examples use a pseudo-code format rather than showing actual executable syntax for a specific MCP client implementation.
Suggestions
- Show at least one fully executable example with real MCP client syntax (e.g., an actual JSON-RPC call or SDK usage) rather than pseudo-code notation.
- Include an example of handling a failed connection check and the recovery flow.
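For the first suggestion, an executable example could be as simple as the raw JSON-RPC message an MCP client sends. The shape below follows the MCP `tools/call` method; the tool name and arguments are illustrative and would need to match the names returned by the actual tool search:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "RUBE_SEARCH_TOOLS",
    "arguments": { "query": "aivoov text to speech" }
  }
}
```

Showing the wire format once lets readers map the skill's pseudo-code notation onto whatever MCP client they use.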
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, avoiding explanations of basic concepts. Every section serves a purpose with no padding or unnecessary context about what Aivoov or Composio are. | 3 / 3 |
| Actionability | Provides concrete tool call patterns with specific parameters, but uses pseudo-code style rather than fully executable examples. The tool calls show structure but aren't copy-paste ready for any specific MCP client. | 2 / 3 |
| Workflow Clarity | Clear 3-step workflow with explicit sequencing (discover → check connection → execute). Includes validation checkpoint for connection status ('Confirm connection status shows ACTIVE before running any workflows') and known pitfalls section addresses error prevention. | 3 / 3 |
| Progressive Disclosure | Well-organized with clear sections progressing from prerequisites to setup to workflow to pitfalls. External reference to toolkit docs is one level deep and clearly signaled. Quick reference table provides efficient navigation for common operations. | 3 / 3 |
| Total | | 11 / 12 — Passed |
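The three-step workflow the review praises (discover → check connection → execute) can be sketched in plain Python. The function bodies below are stand-ins for MCP tool calls, and the tool names are placeholders, not the real Rube MCP or Aivoov identifiers; the point is the control flow, including the failed-connection recovery path the suggestions ask for:

```python
def discover_tools(query: str) -> list[str]:
    """Stand-in for a tool-search call (step 1)."""
    catalog = {"aivoov": ["AIVOOV_LIST_VOICES", "AIVOOV_TEXT_TO_SPEECH"]}
    return catalog.get(query, [])

def check_connection(toolkit: str) -> str:
    """Stand-in for a connection-status check; must return ACTIVE."""
    return "ACTIVE"  # a real check would query the MCP server

def execute(tool: str, **arguments) -> dict:
    """Stand-in for the actual tool invocation (step 3)."""
    return {"tool": tool, "arguments": arguments, "status": "ok"}

def run_workflow(query: str, text: str) -> dict:
    tools = discover_tools(query)                # step 1: discover
    if not tools:
        raise LookupError(f"no tools found for {query!r}")
    if check_connection(query) != "ACTIVE":      # step 2: validate connection
        raise RuntimeError("connect the toolkit before running workflows")
    return execute(tools[-1], text=text)         # step 3: execute

result = run_workflow("aivoov", "Hello, world")
print(result["status"])
```

Raising on a non-ACTIVE connection before any execution mirrors the skill's checkpoint 'Confirm connection status shows ACTIVE before running any workflows'.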
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 — Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.