Automate Airtable tasks via Rube MCP (Composio): records, bases, tables, fields, views. Always search tools first for current schemas.
Install with Tessl CLI
npx tessl i github:Lingjie-chen/MT5 --skill airtable-automation69
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
Score: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear niche (Airtable automation via specific tooling) but lacks concrete action verbs and explicit trigger guidance. It tells Claude what domain to work in but not when to select this skill or what specific operations it enables.
Suggestions
Add an explicit 'Use when...' clause with trigger phrases like 'when the user mentions Airtable, wants to manage database records, or references Airtable bases/tables'
Replace the noun list with concrete actions: 'Create, read, update, and delete Airtable records; manage bases, tables, fields, and views'
Include common user language variations like 'database', 'spreadsheet-database', or 'no-code database' to improve trigger term coverage
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Airtable) and lists general categories (records, bases, tables, fields, views) but doesn't describe concrete actions like 'create records', 'query tables', or 'update fields'. | 2 / 3 |
| Completeness | Describes what it works with (Airtable components) but lacks an explicit 'Use when...' clause. The instruction to 'search tools first' is operational guidance, not a trigger condition. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'Airtable', 'records', 'bases', 'tables', 'fields', 'views' but misses common user phrases like 'spreadsheet', 'database', 'add row', or 'lookup'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Clearly specific to Airtable via Rube MCP (Composio), which creates a distinct niche unlikely to conflict with other database or spreadsheet skills. | 3 / 3 |
| Total | | 8 / 12 Passed |
Implementation
Score: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill with excellent workflow clarity and conciseness. The tool sequences, pitfalls, and quick reference table provide valuable guidance. However, the skill would benefit from concrete executable examples showing actual tool call payloads rather than just listing parameters.
Suggestions
Add 1-2 concrete examples showing complete tool call payloads (e.g., a full AIRTABLE_CREATE_RECORD call with actual fields object)
Include an example filterByFormula query with the expected LIST_RECORDS response structure
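The two suggestions above can be sketched as illustrative payloads. Everything here is hypothetical: the tool names (AIRTABLE_CREATE_RECORD, AIRTABLE_LIST_RECORDS), the base ID, and all field names are placeholders, not taken from the skill's actual schema or from Rube MCP's real parameter shapes.

```python
# Hypothetical AIRTABLE_CREATE_RECORD payload. Base/table IDs and
# field names are illustrative only; check the live tool schema first.
create_record_payload = {
    "baseId": "appXXXXXXXXXXXXXX",   # placeholder Airtable base ID
    "tableIdOrName": "Tasks",        # table name or table ID
    "fields": {                      # column name -> cell value
        "Name": "Draft Q3 report",
        "Status": "Todo",
        "Due": "2024-09-30",
    },
}

# Hypothetical AIRTABLE_LIST_RECORDS payload. filterByFormula uses
# Airtable's formula syntax: string values are quoted, and field
# names go in curly braces.
list_records_payload = {
    "baseId": "appXXXXXXXXXXXXXX",
    "tableIdOrName": "Tasks",
    "filterByFormula": "AND({Status} = 'Todo', IS_AFTER({Due}, TODAY()))",
    "maxRecords": 50,
}

# Sketch of the response shape a LIST_RECORDS call typically returns:
# a list of records, each with an id, createdTime, and a fields object.
example_response = {
    "records": [
        {
            "id": "recXXXXXXXXXXXXXX",
            "createdTime": "2024-09-01T12:00:00.000Z",
            "fields": {"Name": "Draft Q3 report", "Status": "Todo"},
        }
    ],
}
```

Embedding one or two such payloads in the skill would give agents copy-adaptable examples rather than a bare parameter list.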
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, avoiding explanations of what Airtable is or how APIs work. Every section provides actionable information without padding, and the quick reference table is an excellent token-efficient summary. | 3 / 3 |
| Actionability | While tool sequences and parameters are clearly listed, there are no executable code examples showing actual tool calls with real payloads. The guidance is specific but stops short of copy-paste-ready examples with concrete input/output. | 2 / 3 |
| Workflow Clarity | Multi-step workflows are clearly sequenced with labeled steps (Prerequisite, Required, Optional). The setup section includes explicit validation checkpoints (verify connection, confirm ACTIVE status before proceeding). Pitfalls sections provide error recovery guidance. | 3 / 3 |
| Progressive Disclosure | Content is well-organized with clear sections and a quick reference table, but everything is in one file. The external toolkit docs link is provided, but detailed formula syntax and pagination patterns could be split into separate reference files for better navigation. | 2 / 3 |
| Total | | 10 / 12 Passed |
Validation
Score: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.