Automate Agenty tasks via Rube MCP (Composio). Always search tools first for current schemas.
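The description implies a two-step pattern: query Rube MCP for current tool schemas before executing anything. A minimal sketch of that pattern as plain data, assuming illustrative tool names (`RUBE_SEARCH_TOOLS`, `RUBE_MULTI_EXECUTE_TOOL`) and arguments — check your MCP server for the real slugs and schemas:

```python
# Sketch of the "search tools first" pattern. Tool names and argument
# fields here are illustrative; always read the schemas returned by the
# search step before building the execute arguments.

def build_calls(task: str, tool_slug: str, args: dict) -> list[dict]:
    """Return the ordered MCP calls for one automation task."""
    return [
        # Step 1: discover current tool schemas for the task.
        {"tool": "RUBE_SEARCH_TOOLS", "arguments": {"query": task}},
        # Step 2: execute with schema-compliant arguments.
        {"tool": "RUBE_MULTI_EXECUTE_TOOL",
         "arguments": {"tool_slug": tool_slug, "args": args}},
    ]

calls = build_calls("create Agenty workflow", "AGENTY_CREATE_WORKFLOW",
                    {"name": "daily-report"})
print([c["tool"] for c in calls])
```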
Install with Tessl CLI
npx tessl i github:haniakrim21/everything-claude-code --skill agenty-automation67
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill

Evaluation — 97%
↑ 4.40× agent success when using this skill
Discovery — 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too terse and relies heavily on technical terminology that users wouldn't naturally use. It fails to explain what specific automations are possible and completely lacks a 'Use when...' clause to guide skill selection. The description reads more like an internal implementation note than a user-facing skill description.
Suggestions
Add a 'Use when...' clause with natural trigger terms like 'automate workflow', 'Agenty automation', 'run Composio tasks', or specific task types this skill handles.
Expand the capabilities section to list 2-3 concrete actions users can perform (e.g., 'Create automated workflows, trigger Agenty actions, manage task sequences').
Replace or supplement technical jargon with user-friendly language describing the outcomes users would request.
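One way to apply these suggestions is a rewritten frontmatter description with concrete actions and a 'Use when...' clause. The wording below is illustrative, not the skill's actual frontmatter:

```yaml
---
name: agenty-automation
description: >
  Automate Agenty tasks (create workflows, trigger actions, manage task
  sequences) through Rube MCP (Composio). Use when the user asks to
  "automate a workflow", "run an Agenty task", or connect Agenty to
  other tools. Always search Rube tools first for current schemas.
---
```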
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Agenty tasks via Rube MCP/Composio) and one action (search tools for schemas), but lacks comprehensive concrete actions describing what automation capabilities are available. | 2 / 3 |
| Completeness | Partially addresses 'what' (automate Agenty tasks) but provides no 'when' clause or explicit trigger guidance. The instruction to 'search tools first' is implementation guidance, not a usage trigger. | 1 / 3 |
| Trigger Term Quality | Uses technical jargon ('Rube MCP', 'Composio', 'schemas') that users would not naturally say. Missing natural trigger terms like 'automate', 'workflow', or specific task types users might request. | 1 / 3 |
| Distinctiveness / Conflict Risk | The specific mention of 'Agenty', 'Rube MCP', and 'Composio' provides some distinctiveness, but 'automate tasks' is generic enough to potentially conflict with other automation-related skills. | 2 / 3 |
| Total | | 6 / 12 — Passed |
Implementation — 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill that efficiently guides Claude through Agenty automation via Rube MCP. The workflow is clear with appropriate validation checkpoints, and the content respects token budget. The main weakness is that code examples use placeholder syntax rather than fully executable patterns, which slightly reduces immediate actionability.
Suggestions
Replace placeholder comments like '/* schema-compliant args from search results */' with a concrete example showing actual field names and values for a common Agenty operation
Add one complete end-to-end example showing a real Agenty task from search through execution with actual tool slugs and arguments
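A hypothetical version of what such an end-to-end example could look like, following the skill's discover → check connection → execute workflow. The tool slugs, toolkit name, and argument fields below are placeholders, not real Agenty/Composio identifiers — the real names come from the schemas returned by the search step:

```python
# Hypothetical end-to-end sequence: discover -> check connection -> execute.
# Every slug and field is an illustrative placeholder.

steps = [
    # 1. Discover tools and current schemas for the task.
    {"tool": "RUBE_SEARCH_TOOLS",
     "arguments": {"query": "run an Agenty task"}},
    # 2. Confirm the toolkit connection status is ACTIVE before executing.
    {"tool": "RUBE_MANAGE_CONNECTIONS",
     "arguments": {"toolkit": "agenty"}},
    # 3. Execute with schema-compliant arguments taken from step 1.
    {"tool": "RUBE_MULTI_EXECUTE_TOOL",
     "arguments": {"tool_slug": "AGENTY_RUN_TASK",
                   "args": {"task_id": "example-task", "wait": True}}},
]

for step in steps:
    print(step["tool"])
```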
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, avoiding explanations of concepts Claude already knows. Every section serves a purpose with no padding or unnecessary context about what Agenty or Composio are. | 3 / 3 |
| Actionability | Provides concrete tool call patterns with specific parameters, but uses pseudo-code style rather than fully executable examples. Argument placeholders like '/* schema-compliant args from search results */' reduce copy-paste readiness. | 2 / 3 |
| Workflow Clarity | Clear 3-step workflow with explicit sequencing (discover → check connection → execute). Includes a validation checkpoint for connection status ('Confirm connection status shows ACTIVE before running any workflows') and error recovery guidance. | 3 / 3 |
| Progressive Disclosure | Well-organized with clear sections progressing from prerequisites to setup to workflow to pitfalls. The external reference to toolkit docs is one level deep and clearly signaled. A quick reference table provides efficient navigation. | 3 / 3 |
| Total | | 11 / 12 — Passed |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.