Automate Airtable tasks via Rube MCP (Composio): records, bases, tables, fields, views. Always search tools first for current schemas.
Does it follow best practices?

Impact: Pending. No eval scenarios have been run.
Advisory: Suggest reviewing before use.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/antigravity-airtable-automation/SKILL.md`

Quality
Discovery
57%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear niche (Airtable automation via a specific MCP tool) which makes it distinctive, but it lacks specificity about what concrete actions it performs and omits an explicit 'Use when...' clause. The entity list (records, bases, tables, fields, views) hints at scope but doesn't convey actionable capabilities.
Suggestions
Add specific concrete actions like 'Create, update, delete, and query Airtable records; manage bases, tables, fields, and views' instead of the vague 'automate tasks'.
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Airtable, wants to manage Airtable records, or mentions Composio/Rube MCP for database automation.'
Include natural user trigger terms like 'database', 'spreadsheet', 'create record', 'update row', or 'list tables' that users might actually say when needing this skill.
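Folding the three suggestions above into the skill's frontmatter might look like the sketch below. The skill name is taken from the npx path shown on this page; the description text is illustrative, not the skill's actual frontmatter:

```yaml
---
name: antigravity-airtable-automation
description: >
  Create, update, delete, and query Airtable records; manage bases, tables,
  fields, and views via Rube MCP (Composio). Use when the user asks about
  Airtable, wants to create a record or update a row in an Airtable base,
  or mentions a database or spreadsheet managed in Airtable.
---
```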
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Airtable) and lists some entities (records, bases, tables, fields, views) but doesn't describe specific concrete actions beyond the vague 'automate tasks'. What actions? Create, update, delete, query? | 2 / 3 |
| Completeness | Partially answers 'what' (automate Airtable tasks) but lacks an explicit 'Use when...' clause. The instruction to 'always search tools first' is operational guidance rather than a trigger condition. Per rubric, missing 'Use when' caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Includes 'Airtable', 'records', 'bases', 'tables', 'fields', and 'views', which are relevant keywords, but misses natural user phrases like 'spreadsheet', 'database', 'create record', 'update row', or 'Composio'. The mention of 'Rube MCP' is technical jargon users wouldn't naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'Airtable' and 'Rube MCP (Composio)' creates a clear, distinct niche that is unlikely to conflict with other skills. Airtable is a specific enough platform to avoid overlap. | 3 / 3 |
| Total | | 9 / 12 Passed |
Implementation
50%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid reference-style skill that covers Airtable operations comprehensively with good structure, useful pitfall callouts, and clear tool sequences. Its main weaknesses are the lack of executable example invocations (actual MCP call payloads), missing validation/feedback loops in workflows involving destructive or batch operations, and some redundancy between sections that inflates token usage.
Suggestions
Add at least one fully executable example showing a complete MCP tool call with all parameters filled in (e.g., a RUBE_SEARCH_TOOLS call followed by AIRTABLE_CREATE_RECORD with a concrete fields payload).
Add explicit validation steps after destructive/batch operations—e.g., after CREATE_RECORDS, call LIST_RECORDS to verify; after DELETE, confirm the record is gone.
Consolidate the duplicate pitfall/limit information (batch limits appear in both workflow pitfalls and the Known Pitfalls section) and remove the generic 'When to Use' and 'Limitations' boilerplate to save tokens.
Consider splitting the formula syntax reference and ID format reference into separate linked files to reduce the main skill's length and improve progressive disclosure.
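The first two suggestions can be sketched concretely. The tool names (RUBE_SEARCH_TOOLS, AIRTABLE_CREATE_RECORD, AIRTABLE_LIST_RECORDS) come from the suggestions above; the transport function, base ID, table name, and field values below are hypothetical placeholders, not the skill's actual API:

```python
# Sketch of an "executable example" with a create-then-verify sequence.
# call_tool() stands in for whatever MCP client invocation the agent uses.
def call_tool(name, arguments):
    """Hypothetical placeholder for the MCP tool-invocation transport."""
    raise NotImplementedError

# Step 1: discover the current tool schema before calling anything.
search_payload = {
    "name": "RUBE_SEARCH_TOOLS",
    "arguments": {"query": "airtable create record"},
}

# Step 2: create a record with a fully concrete fields payload.
create_payload = {
    "name": "AIRTABLE_CREATE_RECORD",
    "arguments": {
        "baseId": "appXXXXXXXXXXXXXX",   # hypothetical base ID
        "tableIdOrName": "Tasks",        # hypothetical table name
        "fields": {"Name": "Follow up with vendor", "Status": "Todo"},
    },
}

# Step 3: verify the write landed, as the second suggestion recommends.
verify_payload = {
    "name": "AIRTABLE_LIST_RECORDS",
    "arguments": {
        "baseId": "appXXXXXXXXXXXXXX",
        "tableIdOrName": "Tasks",
        "filterByFormula": "{Name} = 'Follow up with vendor'",
    },
}
```

A copy-paste-ready trio like this, dropped into the skill, would satisfy the Actionability gap the table below describes.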
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some redundancy: the 'Known Pitfalls' section repeats batch limits already mentioned in workflow pitfalls, the quick reference table largely duplicates the workflow tool sequences, and the boilerplate 'When to Use' and 'Limitations' sections add little value. Some tightening is possible. | 2 / 3 |
| Actionability | The skill provides concrete tool names, parameter names, formula syntax examples, and error codes, which is good. However, there are no executable code/command examples showing actual MCP tool calls with full parameter payloads; everything remains at the level of listing tool names and parameter descriptions rather than showing copy-paste-ready invocations. | 2 / 3 |
| Workflow Clarity | Multi-step workflows are clearly sequenced with labeled steps and prerequisite/optional annotations. However, there are no explicit validation checkpoints or feedback loops; e.g., after creating records, there's no step to verify success, and batch chunking for >10 records lacks a retry/verify loop. For operations like delete, missing validation caps this at 2. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and a quick reference table, but it's a long monolithic document (~150 lines of dense content) with no references to external files for detailed topics like formula syntax or advanced patterns. Some content (formula syntax, ID formats) could be split out for better navigation. | 2 / 3 |
| Total | | 8 / 12 Passed |
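The chunk-then-verify loop the Workflow Clarity row calls for can be sketched in a few lines. The batch size of 10 matches the batch limit discussed in this review; `send_batch` and `count_records` are hypothetical stand-ins for the skill's actual MCP calls:

```python
# Minimal sketch of batch chunking with a post-write verification checkpoint.
def chunked(items, size=10):
    """Yield successive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def create_with_verification(records, send_batch, count_records):
    """Write records in batches of 10, then verify the expected count landed."""
    before = count_records()
    for batch in chunked(records, 10):   # batch writes capped at 10 records
        send_batch(batch)
    created = count_records() - before
    if created != len(records):
        raise RuntimeError(f"expected {len(records)} new records, saw {created}")
    return created
```

The same shape (capture state, mutate, re-check) applies to deletes: count or list before, delete, confirm the record is gone.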
Validation
90%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
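The warning's own suggested fix, nesting unrecognized keys under `metadata`, can be sketched as a frontmatter edit. The key names under `metadata` here are purely illustrative, not the skill's actual keys:

```yaml
---
name: antigravity-airtable-automation
description: Automate Airtable tasks via Rube MCP (Composio).
# Unknown top-level keys trip the frontmatter_unknown_keys check;
# moving them under metadata clears the warning.
metadata:
  author: example-author
  version: "1.0"
---
```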