Automate Basecamp project management, to-dos, messages, people, and to-do list organization via Rube MCP (Composio). Always search tools first for current schemas.
Score: 55
Quality: 33% (Does it follow best practices?)
Impact: 94% (1.22x average score across 3 eval scenarios)
Advisory: suggest reviewing before use
Quality
Discovery — 40%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear niche (Basecamp via Rube MCP/Composio) which makes it distinctive, but it lacks specificity in the concrete actions it can perform and entirely omits a 'Use when...' clause. The operational instruction about searching tools first is useful for Claude's behavior but doesn't help with skill selection.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Basecamp projects, to-dos, messages, scheduling, or team management.'
List more specific concrete actions, e.g., 'Create and manage to-dos, post messages and comments, organize to-do lists, add/remove people from projects, check project status.'
Include common user-facing trigger terms like 'tasks', 'Basecamp project', 'campfire', 'check-ins', or 'schedule' to improve keyword coverage.
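For illustration, a description rewritten along the lines of these suggestions might look like the following frontmatter sketch (the wording is a proposal, not the skill's actual file):

```yaml
description: >
  Automate Basecamp project management via Rube MCP (Composio): create and
  manage to-dos, post messages and comments, organize to-do lists, and
  add or remove people from projects. Use when the user asks about
  Basecamp projects, tasks, to-dos, messages, campfire, check-ins,
  schedules, or team management.
```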
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Basecamp project management) and lists some actions/entities (to-dos, messages, people, to-do list organization), but doesn't describe specific concrete actions like 'create to-dos', 'send messages', or 'assign people to projects'. | 2 / 3 |
| Completeness | Describes what it does (automate Basecamp project management tasks) but has no explicit 'Use when...' clause or equivalent trigger guidance. The instruction to 'always search tools first' is operational guidance for Claude, not a trigger condition. Per rubric rules, missing 'Use when' caps completeness at 2, and the 'when' is so weak it warrants a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'Basecamp', 'to-dos', 'messages', 'people', and 'project management' which users might naturally say. However, it's missing common variations like 'tasks', 'comments', 'campfire', 'schedule', or 'Basecamp 3/4'. The mention of 'Rube MCP (Composio)' is technical jargon unlikely to be used by end users. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Basecamp' and 'Rube MCP (Composio)' makes this highly distinctive. It's unlikely to conflict with other skills since Basecamp is a specific product with a clear niche. | 3 / 3 |
| Total | | 8 / 12 Passed |
Implementation — 27%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is comprehensive in coverage but severely over-engineered for its purpose. It suffers from extensive redundancy (pitfalls repeated in each workflow section AND a dedicated section, alternative tools listed everywhere, a quick reference table duplicating workflow content) and lacks concrete executable examples. The content would be far more effective at half the length with better progressive disclosure to separate reference material from core workflows.
Suggestions
Reduce redundancy by consolidating pitfalls into a single section and removing the quick reference table (or moving it to a separate REFERENCE.md file), cutting the document roughly in half.
Add concrete tool invocation examples showing actual parameter values, e.g., a complete RUBE_SEARCH_TOOLS call followed by a BASECAMP_POST_BUCKETS_TODOSETS_TODOLISTS call with realistic sample data.
Split detailed parameter documentation and the quick reference table into a separate REFERENCE.md file, keeping SKILL.md as a concise overview with links.
Add explicit validation/error-recovery steps to workflows, e.g., 'If BASECAMP_GET_BUCKETS_TODOSETS returns empty, verify the project has a to-do set enabled in its dock array.'
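To make the second suggestion concrete, here is a minimal sketch of the kind of two-step invocation example the skill could include. The tool names come from the review itself; the argument names and ID values are illustrative assumptions, not the verified Rube MCP / Composio schema (which should be fetched via `RUBE_SEARCH_TOOLS` at runtime).

```python
# Hypothetical sketch: discover the current tool schema, then create a
# to-do list. Argument names and IDs are illustrative assumptions only.

search_call = {
    "tool": "RUBE_SEARCH_TOOLS",
    "arguments": {"query": "basecamp create todo list"},
}

create_todolist_call = {
    "tool": "BASECAMP_POST_BUCKETS_TODOSETS_TODOLISTS",
    "arguments": {
        "bucket_id": 2085958499,         # project ID (sample value)
        "todoset_id": 9007199254741045,  # from the project's dock array
        "name": "Launch checklist",
        "description": "<div>Tasks for the <strong>v2 launch</strong></div>",
    },
}

# A skill example would show both calls in sequence, schema lookup first.
for call in (search_call, create_todolist_call):
    print(call["tool"], "->", sorted(call["arguments"]))
```

Even a single worked example like this, with realistic payloads, would ground the abstract parameter lists the review flags as the document's main gap.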
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~250+ lines, with significant redundancy. It lists alternative tools multiple times (e.g., 'Alternative' and 'Fallback' tools that do the same thing), repeats the same pitfalls across sections and again in a dedicated 'Known Pitfalls' section, and includes a massive quick reference table that largely duplicates the workflow sections. Claude doesn't need this level of hand-holding for API tool usage patterns. | 1 / 3 |
| Actionability | The skill provides specific tool names, parameter lists, and clear sequencing, which is good. However, there are no executable code examples or concrete request/response snippets — everything is described abstractly with parameter names and types rather than showing actual tool invocations with example payloads. The HTML formatting example is a nice touch but is the only concrete example in the entire document. | 2 / 3 |
| Workflow Clarity | Multi-step workflows are clearly sequenced with numbered steps and labeled as [Prerequisite], [Required], [Optional], etc. However, there are no explicit validation checkpoints or error recovery feedback loops. The setup section has a basic verification flow, but the core workflows lack 'if this fails, do X' guidance beyond the single message creation fallback note. | 2 / 3 |
| Progressive Disclosure | The entire skill is a monolithic wall of text with no references to external files. All detailed parameter lists, pitfalls, and the full quick reference table are inlined. The content would benefit greatly from splitting parameter references, the quick reference table, and detailed pitfalls into separate files, keeping SKILL.md as a concise overview with links. | 1 / 3 |
| Total | | 6 / 12 Passed |
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
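To clear the frontmatter_unknown_keys warning, unrecognized top-level keys can be removed or nested under metadata, as the check itself suggests. A sketch, with an assumed custom key for illustration:

```yaml
---
name: basecamp-automation
description: Automate Basecamp projects, to-dos, messages, and people via Rube MCP (Composio).
# Before: a custom top-level key the validator doesn't recognize
# category: project-management
# After: moved under metadata so the structure check passes
metadata:
  category: project-management   # illustrative key name
---
```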