Diagnose and fix common Apollo.io API errors. Use when encountering Apollo API errors, debugging integration issues, or troubleshooting failed requests. Trigger with phrases like "apollo error", "apollo api error", "debug apollo", "apollo 401", "apollo 429", "apollo troubleshoot".
- Overall score: 80
- Best practices ("Does it follow best practices?"): 77%
- Impact: Pending (no eval scenarios have been run)
- Issues: Passed (no known issues)
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./plugins/saas-packs/apollo-pack/skills/apollo-common-errors/SKILL.md
```

Quality
Discovery
Score: 89%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured skill description with strong trigger terms, explicit 'when' guidance, and a clearly defined niche. Its main weakness is that the 'what' portion is somewhat generic — it could benefit from listing specific concrete actions beyond 'diagnose and fix' to help Claude understand the full scope of the skill's capabilities.
Suggestions
Add specific concrete actions to the 'what' clause, e.g., 'Diagnose and fix common Apollo.io API errors including authentication failures (401), rate limiting (429), malformed requests, and pagination issues.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (Apollo.io API errors) and some actions ('diagnose and fix'), but doesn't list specific concrete actions like parsing error codes, retrying rate-limited requests, or refreshing auth tokens. | 2 / 3 |
| Completeness | Clearly answers both 'what' (diagnose and fix common Apollo.io API errors) and 'when' (encountering Apollo API errors, debugging integration issues, troubleshooting failed requests) with explicit trigger phrases. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms including 'apollo error', 'apollo api error', 'debug apollo', specific HTTP status codes like '401' and '429', and 'apollo troubleshoot'. These are terms users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive — targets a specific product (Apollo.io) and a specific problem domain (API errors/debugging). Very unlikely to conflict with other skills due to the narrow, well-defined niche. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
Score: 64%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, highly actionable troubleshooting guide with excellent concrete code examples and specific Apollo API details. Its main weaknesses are moderate verbosity (the full middleware example and step-by-step framing add length without proportional value) and a slightly artificial sequential workflow for what is essentially a reference document. The error reference table at the end is a strong addition that could serve as the primary content with detailed examples split into a bundle file.
Suggestions
Consider restructuring as a reference-first document: lead with the error reference table, then provide code snippets per error code, rather than forcing a sequential Step 1-6 workflow that users won't follow linearly.
Move the comprehensive error middleware (Step 6) into a separate bundle file (e.g., error-middleware.ts) and reference it from the main skill to reduce token cost.
Remove the Prerequisites section — Claude doesn't need to be told about Node.js/Python version requirements in a troubleshooting context.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient with good code examples, but includes some unnecessary verbosity — the comprehensive error middleware in Step 6 is quite long and could be trimmed, and the step-by-step structure adds overhead for what is essentially a reference/lookup skill. The overview and prerequisites sections add minimal value for Claude. | 2 / 3 |
| Actionability | Excellent actionability throughout — every error category has executable TypeScript code, specific cURL commands for diagnostics, concrete endpoint lists, exact header names, and specific rate limit numbers. The code is copy-paste ready and includes real Apollo API URLs and patterns. | 3 / 3 |
| Workflow Clarity | The numbered steps provide a logical sequence for diagnosing errors, but this is fundamentally a reference/lookup skill rather than a sequential workflow. The steps are somewhat artificially sequenced — in practice you'd jump directly to the relevant error code. There's a good retry loop for 429 but no explicit validation checkpoint pattern (e.g., 'confirm fix worked before proceeding'). | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and a summary reference table, but it's quite long (~180 lines of content) and could benefit from splitting detailed code examples into a separate file. The reference to 'apollo-debug-bundle' at the end is good but there are no bundle files to support it. The error reference table is a nice summary but duplicates information already covered in detail above. | 2 / 3 |
| Total | | 9 / 12 Passed |
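The review above credits the skill's retry loop for 429 responses. As a minimal sketch of that general pattern (the shape only — the actual Apollo endpoints, auth headers, and rate limits from the skill are not reproduced here, and the `attempt` abstraction is an assumption of this sketch):

```typescript
// Sketch of a retry-on-429 helper with exponential backoff.
// `attempt` abstracts the actual Apollo request; endpoint URLs,
// auth headers, and exact limits are deliberately left out.

type AttemptResult<T> = { status: number; data?: T; retryAfterMs?: number };

async function withRetry<T>(
  attempt: () => Promise<AttemptResult<T>>,
  maxRetries = 3,
): Promise<T> {
  for (let tries = 0; ; tries++) {
    const res = await attempt();
    if (res.status < 400) return res.data as T;
    if (res.status !== 429 || tries >= maxRetries) {
      throw new Error(`Request failed with status ${res.status}`);
    }
    // Honor a Retry-After hint when present; otherwise back off
    // exponentially (1s, 2s, 4s, ...) with a small random jitter.
    const delayMs =
      res.retryAfterMs ?? Math.min(1000 * 2 ** tries, 30_000) + Math.random() * 250;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```

A real implementation would parse the `Retry-After` response header into `retryAfterMs` and surface non-retryable statuses (401, 403, 422) with their response bodies for diagnosis, per the error reference table the review describes.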
Validation
Score: 81%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
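Both warnings concern the skill's frontmatter. As a hedged illustration only (the actual SKILL.md is not shown in this review, so the values below are hypothetical; only `name`, `description`, and `allowed-tools` are standard keys), resolving them would mean restricting `allowed-tools` to recognized tool names and removing or relocating unknown top-level keys:

```yaml
---
name: apollo-common-errors
description: Diagnose and fix common Apollo.io API errors. ...
# Keep only recognized tool names here; unusual entries trigger
# the allowed_tools_field warning.
allowed-tools: Read, Grep, Bash
# Any custom top-level key triggers frontmatter_unknown_keys;
# the validator suggests moving such data under metadata instead.
---
```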