Automate Airtable tasks via Rube MCP (Composio): records, bases, tables, fields, views. Always search tools first for current schemas.
## Quality: 45%

Does it follow best practices?

- **Impact:** Pending; no eval scenarios have been run.
- **Advisory:** Suggest reviewing before use.

Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/airtable-automation/SKILL.md`
### Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear niche (Airtable via Composio/Rube MCP) which makes it distinctive, but it lacks specificity about what concrete actions it performs and completely omits a 'Use when...' clause. The listed nouns (records, bases, tables, fields, views) hint at scope but don't describe actionable capabilities.
#### Suggestions

- Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user asks to create, read, update, or delete Airtable records, manage bases, or configure table schemas.'
- Replace 'Automate Airtable tasks' with specific actions like 'Create, update, delete, and query Airtable records; manage bases, tables, fields, and views.'
- Include natural user terms like 'database', 'spreadsheet', 'add row', 'lookup record' that users might say when they need Airtable functionality.
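Applying these suggestions, the skill's frontmatter might read roughly as follows. This is an illustrative sketch, not the skill's actual text, and it assumes the conventional SKILL.md `name`/`description` fields:

```yaml
---
name: airtable-automation
description: >
  Create, update, delete, and query Airtable records; manage bases, tables,
  fields, and views via Rube MCP (Composio). Use when the user asks to add a
  row, look up a record, update a spreadsheet, or work with an Airtable
  database. Always search tools first for current schemas.
---
```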
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Airtable) and lists some entities (records, bases, tables, fields, views) but doesn't describe specific concrete actions beyond the vague 'automate tasks'. What actions? Create, update, delete, query? | 2 / 3 |
| Completeness | Describes a vague 'what' (automate Airtable tasks) but has no explicit 'when' clause — no 'Use when...' or equivalent trigger guidance. The instruction to 'always search tools first' is operational guidance, not a trigger condition. Per rubric, missing 'Use when...' caps completeness at 2, and the 'what' is also weak, so this scores 1. | 1 / 3 |
| Trigger Term Quality | Includes 'Airtable', 'records', 'bases', 'tables', 'fields', 'views', which are relevant keywords, but misses natural user phrases like 'spreadsheet', 'database', 'create record', 'update row', or 'Composio'. The term 'Rube MCP' is technical jargon unlikely to be used by users. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'Airtable' and 'Rube MCP (Composio)' creates a very specific niche that is unlikely to conflict with other skills. Airtable is a distinct product, and the tooling context further narrows it. | 3 / 3 |
| **Total** | | **8 / 12 (Passed)** |
### Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a competent reference-style skill that covers Airtable operations comprehensively with good structure, useful pitfall callouts, and clear tool sequences. Its main weaknesses are the lack of executable examples (actual MCP call payloads), missing validation/feedback loops for destructive and batch operations, and some redundancy between sections that inflates token usage without adding proportional value.
Suggestions
Add at least one complete, copy-paste-ready MCP call example (e.g., a full RUBE_SEARCH_TOOLS invocation followed by AIRTABLE_CREATE_RECORD with actual JSON payload) to improve actionability.
Add explicit validation checkpoints for batch and destructive operations—e.g., 'Verify record count before bulk delete' or 'Confirm chunk was created successfully before sending next batch'.
Remove the Quick Reference table or the detailed parameter listings in Core Workflows to eliminate redundancy—keep one authoritative location for tool-to-parameter mappings.
Extract the formula syntax reference and ID format details into a separate reference file to improve progressive disclosure and reduce main skill length.
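As a sketch of the batch-validation suggestion, chunking records into Airtable's 10-record request limit and confirming each chunk before sending the next could look like this. The `send_batch` callable is hypothetical, standing in for whatever MCP create call the skill actually wires up:

```python
def chunk_records(records, batch_size=10):
    """Split records into batches of at most batch_size,
    matching Airtable's 10-record-per-request limit."""
    return [records[i:i + batch_size]
            for i in range(0, len(records), batch_size)]

def create_in_batches(records, send_batch, batch_size=10):
    """Send records chunk by chunk, verifying each chunk was created
    successfully before proceeding to the next one."""
    created = []
    for batch in chunk_records(records, batch_size):
        result = send_batch(batch)  # hypothetical: returns created records
        if len(result) != len(batch):
            raise RuntimeError(
                f"batch partially failed: sent {len(batch)}, "
                f"created {len(result)}")
        created.extend(result)
    return created
```

The checkpoint inside the loop is the point the review is making: a partial failure stops the run before later chunks pile more bad state onto Airtable.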
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some redundancy — the Quick Reference table largely duplicates information already covered in the Core Workflows sections, and the Known Pitfalls section repeats ID format info and batch limits already mentioned inline. The boilerplate 'When to Use' and 'Limitations' sections add little value. | 2 / 3 |
| Actionability | The skill provides concrete tool names, parameter names, formula syntax examples, and error codes, which is useful. However, there are no executable or copy-paste-ready MCP call examples showing actual JSON payloads or invocation patterns — guidance stays at the level of listing tool names and parameters rather than showing complete invocations. | 2 / 3 |
| Workflow Clarity | Multi-step workflows are clearly sequenced with labeled steps and prerequisite/optional annotations. However, there are no explicit validation checkpoints or feedback loops — for batch operations (chunking to the 10-record limit) and destructive deletes, there's no verify-before-proceeding or error-recovery guidance, which caps this at 2. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and a useful table, but it's a long monolithic document (~150+ lines) with no references to external files for detailed content like formula syntax or advanced patterns. The formula syntax section and full quick reference table could be split out to keep the main skill leaner. | 2 / 3 |
| **Total** | | **8 / 12 (Passed)** |
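For reference, the kind of copy-paste example the Actionability row asks for might look like the sketch below. The tool name comes from the review itself, but the argument names are assumptions and should be confirmed via RUBE_SEARCH_TOOLS before use, exactly as the skill's own guidance recommends:

```json
{
  "tool": "AIRTABLE_CREATE_RECORD",
  "arguments": {
    "baseId": "appXXXXXXXXXXXXXX",
    "tableIdOrName": "Tasks",
    "fields": {
      "Name": "Follow up with vendor",
      "Status": "Todo"
    }
  }
}
```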
## Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

### Validation for skill structure: 10 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | 10 / 11 Passed | |