Automate Airtable tasks via Rube MCP (Composio): records, bases, tables, fields, views. Always search tools first for current schemas.
Does it follow best practices?

Impact: Pending. No eval scenarios have been run.
Advisory: Suggest reviewing before use.
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./skills/antigravity-airtable-automation/SKILL.md
```

Quality
Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies the target platform (Airtable) and the integration mechanism (Rube MCP/Composio), but lacks specific concrete actions and has no explicit 'Use when...' trigger clause. The entity list (records, bases, tables, fields, views) is helpful but insufficient without action verbs describing what can be done with them.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to create, read, update, or delete Airtable records, manage bases, or configure table schemas.'
Replace 'automate tasks' with specific concrete actions like 'Create, update, delete, and query records; manage bases and tables; configure fields and views.'
Include natural trigger terms users might say, such as 'Airtable database', 'add a row', 'look up record', 'list tables', or 'sync Airtable data'.
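Taken together, these suggestions point toward a description like the following sketch. The frontmatter layout is assumed from typical SKILL.md conventions, and the wording is only one possible phrasing:

```yaml
---
name: antigravity-airtable-automation
description: >
  Create, update, delete, and query Airtable records; manage bases and
  tables; configure fields and views via Rube MCP (Composio). Use when
  the user asks to add a row, look up a record, list tables, update an
  Airtable database, or sync Airtable data. Always search tools first
  for current schemas.
---
```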
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Airtable) and lists some entities (records, bases, tables, fields, views) but doesn't describe specific concrete actions beyond the vague 'automate tasks'. What actions? Create, update, delete, query? | 2 / 3 |
| Completeness | Provides a partial 'what' (automate Airtable tasks) but has no explicit 'when' clause — no 'Use when...' or equivalent trigger guidance. The instruction to 'search tools first' is operational guidance, not a trigger condition. Per rubric, missing 'Use when' caps completeness at 2, and the 'what' is also weak, so this scores 1. | 1 / 3 |
| Trigger Term Quality | Includes 'Airtable', 'records', 'bases', 'tables', 'fields', 'views', which are relevant keywords, but misses natural user phrases like 'spreadsheet', 'database', 'create record', 'update row', or common task-oriented terms users would actually say. | 2 / 3 |
| Distinctiveness / Conflict Risk | Mentioning 'Airtable' and 'Rube MCP (Composio)' provides some distinctiveness, but 'automate tasks' with generic entity names like 'tables', 'fields', 'views' could overlap with other database or spreadsheet skills. | 2 / 3 |
| Total | | 7 / 12 Passed |
Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a competent Airtable automation skill that covers the key operations, parameters, and pitfalls well. Its main weaknesses are the lack of executable tool call examples (showing actual JSON payloads), missing validation/feedback loops for destructive and batch operations, and some content redundancy that inflates the token footprint. The 'When to Use' section adds no value.
Suggestions
Add at least one concrete, copy-paste-ready example of an actual MCP tool invocation with full parameter JSON (e.g., a RUBE_SEARCH_TOOLS call followed by AIRTABLE_CREATE_RECORD with a real payload structure).
Add explicit validation checkpoints for batch and destructive workflows — e.g., 'After chunked CREATE_RECORDS, verify count with LIST_RECORDS before proceeding' and 'Before DELETE, confirm record IDs match expected set'.
Remove the redundant 'When to Use' section and consolidate duplicate pitfall information (batch limits, field name sensitivity) into a single location to reduce token usage.
Consider moving the formula syntax reference and quick reference table to separate linked files to improve progressive disclosure.
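As a sketch of what such an example could look like, the payloads below are expressed as Python dicts. The tool slugs come from the suggestions above, but the parameter keys and ID formats are illustrative assumptions rather than confirmed Composio schemas — which is exactly why the skill tells agents to search tools first:

```python
# Illustrative payload structures only. Field names and argument keys are
# assumptions; always run RUBE_SEARCH_TOOLS first to get the current schema.

# Step 1: discover the current Airtable tool schemas.
search_call = {
    "tool": "RUBE_SEARCH_TOOLS",
    "arguments": {"query": "airtable create record"},
}

# Step 2: create a record using the schema returned above.
create_call = {
    "tool": "AIRTABLE_CREATE_RECORD",
    "arguments": {
        "base_id": "appXXXXXXXXXXXXXX",   # hypothetical base ID
        "table_id": "tblXXXXXXXXXXXXXX",  # hypothetical table ID
        "fields": {
            "Name": "Acme Corp",          # field names are case-sensitive
            "Status": "Active",
        },
    },
}
```

Even a schematic example like this gives an agent a concrete shape to fill in, rather than leaving it to guess the nesting of `arguments` and `fields`.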
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient but includes some redundancy — pitfalls are repeated across sections (e.g., field name case sensitivity and batch limits appear in both the workflow sections and a dedicated 'Known Pitfalls' section), and the quick reference table largely duplicates information already covered in the workflows. The 'When to Use' section at the end is vacuous. | 2 / 3 |
| Actionability | The skill provides concrete tool names, parameter names, and formula syntax examples, which is good. However, there are no executable code/command examples showing actual MCP tool invocations with real parameter structures (e.g., JSON payloads). The guidance is specific but stops short of copy-paste-ready examples of actual tool calls. | 2 / 3 |
| Workflow Clarity | Multi-step workflows are clearly sequenced with labeled steps and prerequisite/optional annotations. However, there are no explicit validation checkpoints or feedback loops — for batch operations (chunking 10 records at a time) and destructive operations (delete), there's no verify-before-proceeding or error-recovery guidance, which caps this at 2 per the rubric. | 2 / 3 |
| Progressive Disclosure | The content is well structured with clear headers and sections, but it's a monolithic document (~150+ lines) with no references to external files for detailed content like formula syntax or the full quick reference table. The formula syntax section and quick reference table could be split out to keep the main skill leaner. | 2 / 3 |
| Total | | 8 / 12 Passed |
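The chunk-then-verify pattern the rubric asks for could be sketched as follows. The 10-record batch limit comes from the skill itself; `create_records` and `list_records` are hypothetical callables standing in for the actual MCP tool invocations:

```python
def chunked(records, size=10):
    """Yield batches of at most `size` records (Airtable's batch limit)."""
    for i in range(0, len(records), size):
        yield records[i:i + size]


def create_with_verification(records, create_records, list_records):
    """Create records in chunks, then verify the count before proceeding.

    `create_records` and `list_records` are hypothetical wrappers around
    the real MCP tool calls.
    """
    created_ids = []
    for batch in chunked(records):
        created_ids.extend(create_records(batch))
    # Validation checkpoint: confirm every record landed before moving on.
    listed = list_records()
    missing = [rid for rid in created_ids if rid not in listed]
    if missing:
        raise RuntimeError(f"Verification failed: {len(missing)} records missing")
    return created_ids
```

The same verify-before-proceeding shape applies to deletes: fetch the record IDs, confirm they match the expected set, and only then issue the destructive call.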
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |