
ai-ml-api-automation

Automate AI ML API tasks via Rube MCP (Composio). Always search tools first for current schemas.

Install with Tessl CLI

npx tessl i github:ComposioHQ/awesome-claude-skills --skill ai-ml-api-automation

Score: 60

Does it follow best practices?

Validation for skill structure


Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is too vague to effectively guide skill selection. It fails to specify what concrete actions can be performed, lacks natural trigger terms users would say, and provides no explicit guidance on when Claude should select this skill. The mention of specific tools (Rube MCP, Composio) provides minimal distinctiveness but doesn't compensate for the lack of actionable detail.

Suggestions

- List specific concrete actions this skill performs (e.g., 'Call OpenAI endpoints, manage ML model deployments, query inference APIs').
- Add a 'Use when...' clause with natural trigger terms users would say (e.g., 'Use when the user asks about calling AI APIs, integrating with ML services, or mentions specific providers like OpenAI, Anthropic, or HuggingFace').
- Clarify what 'Rube MCP' and 'Composio' are and what capabilities they enable, since users may not know these tool names.
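Taken together, these suggestions point toward a sharper description. A sketch of what revised SKILL.md frontmatter might look like (the wording and field values below are an illustrative rewrite, not the skill's actual metadata):

```yaml
# Illustrative frontmatter only; the description text is a hypothetical rewrite.
name: ai-ml-api-automation
description: >-
  Call AI/ML provider APIs (e.g. OpenAI, Anthropic, HuggingFace) via Rube MCP,
  Composio's tool-calling gateway: run inference, manage model deployments,
  and query endpoint schemas. Use when the user asks to call an AI API,
  integrate an ML service, or mentions a specific provider by name.
  Always search tools first for current schemas.
```

This keeps the original operational note ('search tools first') while adding concrete actions and a 'Use when...' clause.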

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague language like 'Automate AI ML API tasks' without listing concrete actions. No specific capabilities are enumerated: what tasks? What APIs? What automation actions? | 1 / 3 |
| Completeness | The 'what' is extremely vague ('Automate AI ML API tasks') and there is no 'when' clause. The instruction to 'search tools first' is operational guidance, not a trigger condition for when to use this skill. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords ('AI', 'ML', 'API', 'Composio', 'Rube MCP'), but these are technical jargon. Missing natural user terms: users likely wouldn't say 'Rube MCP' or know to mention 'Composio'. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'Rube MCP' and 'Composio' provide some distinctiveness as specific tool names, but 'AI ML API tasks' is broad enough to potentially conflict with other AI/ML-related skills. | 2 / 3 |

Total: 6 / 12 (Passed)

Implementation: 70%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a solid workflow structure for AI ML API automation with clear sequencing and validation checkpoints. The main weaknesses are moderate redundancy between sections and the use of pseudocode-style examples rather than fully executable code with realistic parameter values. The Known Pitfalls section adds genuine value by documenting common failure modes.

Suggestions

- Remove the standalone 'Tool Discovery' section, since it duplicates Step 1 of the Core Workflow Pattern.
- Provide at least one complete, realistic example with actual tool slugs and arguments that would work for a common AI ML API task.
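The second suggestion calls for a complete, realistic example. As a hedged illustration of the skill's discover → check connection → execute workflow, the sketch below uses stub functions and made-up tool slugs in place of real Rube MCP calls (none of these names are actual Composio APIs):

```python
# Hypothetical sketch of the 3-step workflow the review describes:
# discover tools, verify the connection is ACTIVE, then execute.
# Function names and tool slugs are illustrative stand-ins, not real Rube MCP APIs.

def search_tools(query):
    # Stand-in for a tool-discovery call; returns candidate tool slugs.
    catalog = {"openai": ["OPENAI_CHAT_COMPLETION"], "huggingface": ["HF_RUN_INFERENCE"]}
    return catalog.get(query, [])

def check_connection(toolkit):
    # Stand-in for a connection-status check.
    statuses = {"openai": "ACTIVE", "huggingface": "INACTIVE"}
    return statuses.get(toolkit, "NOT_CONNECTED")

def execute(slug, arguments):
    # Stand-in for the actual tool-execution call.
    return {"tool": slug, "arguments": arguments, "ok": True}

def run_task(toolkit, query, arguments):
    tools = search_tools(query)          # Step 1: discover current schemas
    if not tools:
        return {"error": f"no tools found for {query!r}"}
    status = check_connection(toolkit)   # Step 2: validation checkpoint
    if status != "ACTIVE":
        return {"error": f"connection is {status}, not ACTIVE"}
    return execute(tools[0], arguments)  # Step 3: execute with concrete args

result = run_task("openai", "openai", {"model": "gpt-4o-mini", "prompt": "Hi"})
```

The property the review credits under Workflow Clarity is the validation checkpoint: execution is refused unless the connection status is ACTIVE.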

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is reasonably efficient but includes some redundancy: the 'Tool Discovery' section repeats information that appears again in Step 1 of the 'Core Workflow Pattern', and some explanations could be tighter. | 2 / 3 |
| Actionability | Provides concrete tool-call patterns with parameter examples, but uses pseudocode-style notation rather than actual executable code. The arguments shown are placeholders rather than real working examples. | 2 / 3 |
| Workflow Clarity | Clear 3-step workflow with explicit sequencing (discover → check connection → execute). Includes a validation checkpoint (verify ACTIVE status before executing), and the Known Pitfalls section provides error-recovery guidance. | 3 / 3 |
| Progressive Disclosure | Well structured, with clear sections progressing from prerequisites to setup to workflow to pitfalls. The external reference to toolkit docs is one level deep and clearly signaled. A quick-reference table provides efficient navigation. | 3 / 3 |

Total: 10 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 passed

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving them to `metadata` | Warning |
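The warning above flags top-level frontmatter keys outside the spec's known set. A minimal sketch of the fix, assuming the skill spec tolerates arbitrary keys nested under `metadata` (the `category` key is a made-up example):

```yaml
# Before: a hypothetical unknown top-level key triggers frontmatter_unknown_keys
---
name: ai-ml-api-automation
description: Automate AI ML API tasks via Rube MCP (Composio).
category: automation

# After: the unknown key is nested under metadata
---
name: ai-ml-api-automation
description: Automate AI ML API tasks via Rube MCP (Composio).
metadata:
  category: automation
```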

Total: 10 / 11 (Passed)
