Automate Agentql tasks via Rube MCP (Composio). Always search tools first for current schemas.
Install with Tessl CLI
```shell
npx tessl i github:haniakrim21/everything-claude-code --skill agentql-automation63
```
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

```shell
npx tessl skill review --optimize ./path/to/skill
```

Evaluation — 94%
↑ 2.13× agent success when using this skill
Discovery — 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is severely lacking in specificity and user-facing guidance. It relies on technical product names without explaining what the skill actually does or when it should be used. The instruction to 'search tools first' is operational guidance rather than a description of capabilities.
Suggestions
Replace 'Automate Agentql tasks' with specific concrete actions (e.g., 'Scrape web data, interact with web elements, extract structured content from websites').
Add a 'Use when...' clause with natural trigger terms users would say (e.g., 'Use when the user needs web scraping, browser automation, or extracting data from web pages').
Remove operational instructions ('Always search tools first') from the description and focus on capability and trigger information.
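Putting these suggestions together, a stronger description might look like the following sketch. The frontmatter wording is illustrative only, not the skill's actual metadata:

```yaml
# Hypothetical revised frontmatter -- wording is illustrative, not verified
name: agentql-automation
description: >
  Scrape web data, interact with web elements, and extract structured
  content from websites using AgentQL via Rube MCP (Composio).
  Use when the user needs web scraping, browser automation, or
  extracting data from web pages.
```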
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'Automate Agentql tasks' without specifying what concrete actions are performed. No specific capabilities are listed beyond the abstract concept of automation. | 1 / 3 |
| Completeness | The 'what' is extremely vague ('Automate Agentql tasks') and there is no 'Use when...' clause or equivalent guidance on when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Uses technical jargon ('Agentql', 'Rube MCP', 'Composio') that users are unlikely to naturally say. Missing common user-facing terms that would trigger this skill appropriately. | 1 / 3 |
| Distinctiveness / Conflict Risk | The specific tool names (AgentQL, Rube MCP, Composio) provide some distinctiveness, but without clear use cases, it's unclear when this should be chosen over other automation-related skills. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Implementation — 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill that efficiently guides Claude through AgentQL automation via Rube MCP. The workflow is clear with proper validation checkpoints, and the content respects the token budget. The main weakness is that tool call examples use placeholder syntax rather than fully executable examples with realistic argument values.
Suggestions
Provide at least one complete, realistic example with actual argument values (e.g., a specific Agentql query use case) rather than placeholder syntax like `{/* schema-compliant args */}`
Add a concrete example showing what a successful `RUBE_SEARCH_TOOLS` response looks like, so Claude knows what to expect and how to parse the returned schemas
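To illustrate what a fully worked example could look like, here is a sketch of a tool call with realistic argument values in place of `{/* schema-compliant args */}`. The tool slug, field names, and query are assumptions for illustration, not a verified Rube MCP schema; the live schema returned by `RUBE_SEARCH_TOOLS` is authoritative:

```json
{
  "tool": "AGENTQL_QUERY_DATA",
  "arguments": {
    "url": "https://news.ycombinator.com",
    "query": "{ posts[] { title link points } }"
  }
}
```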
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, avoiding explanations of concepts Claude already knows. Every section serves a purpose with no padding or unnecessary context about what AgentQL or Composio are. | 3 / 3 |
| Actionability | Provides concrete tool call patterns with field names, but examples are pseudocode-like rather than fully executable. The argument structures show placeholders rather than complete, copy-paste ready examples with realistic values. | 2 / 3 |
| Workflow Clarity | Clear 3-step workflow with explicit validation checkpoint (Step 2: Check Connection shows ACTIVE before proceeding). The Known Pitfalls section provides error recovery guidance and the sequence is unambiguous. | 3 / 3 |
| Progressive Disclosure | Well-structured with clear sections progressing from prerequisites to setup to workflow to pitfalls. External reference to toolkit docs is one level deep and clearly signaled. Quick reference table aids navigation. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
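To clear the `frontmatter_unknown_keys` warning, unrecognized top-level keys can usually be moved under a `metadata` block, as the check itself suggests. A sketch, where `author` stands in for whatever hypothetical unknown key was flagged:

```yaml
# Before: an unknown top-level key triggers the warning
# author: haniakrim21

# After: the key is nested under metadata
metadata:
  author: haniakrim21
```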
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.