Diagnose and fix common Apollo.io API errors. Use when encountering Apollo API errors, debugging integration issues, or troubleshooting failed requests. Trigger with phrases like "apollo error", "apollo api error", "debug apollo", "apollo 401", "apollo 429", "apollo troubleshoot".
- Score: 80
- Does it follow best practices? 77%
- Impact: Pending (no eval scenarios have been run)
- Status: Passed (no known issues)
Optimize this skill with Tessl: `npx tessl skill review --optimize ./plugins/saas-packs/apollo-pack/skills/apollo-common-errors/SKILL.md`

## Quality
### Discovery (89%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured skill description with strong trigger terms and clear 'when' guidance. Its main weakness is that the 'what' portion is somewhat general — it says 'diagnose and fix common errors' without specifying which types of errors or what concrete actions it takes. The explicit trigger phrases and Apollo-specific focus make it highly distinctive and easy to match.
Suggestions:

- Add more specific capabilities like 'resolve authentication (401) errors, handle rate limiting (429), debug malformed request payloads, troubleshoot webhook failures' to improve specificity.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Apollo.io API errors) and some actions (diagnose, fix), but doesn't list specific concrete actions like 'resolve authentication failures, handle rate limiting, parse error responses'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (diagnose and fix Apollo.io API errors) and 'when' (explicit 'Use when' clause with trigger scenarios plus a 'Trigger with phrases' section listing specific terms). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms including specific error codes ('apollo 401', 'apollo 429'), common phrases ('apollo error', 'debug apollo', 'apollo troubleshoot'), and variations users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive — targets a specific third-party API (Apollo.io) with specific error scenarios. Very unlikely to conflict with other skills unless there's another Apollo-specific skill. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
### Implementation (64%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable troubleshooting guide with excellent Apollo-specific details like master vs. standard key distinctions, exact rate limits per endpoint category, and executable diagnostic code. Its main weaknesses are length (the full error middleware could be externalized) and the lack of an explicit diagnostic decision tree or feedback loop that guides Claude through a structured debugging process rather than presenting a catalog of fixes.
Suggestions:

- Add a brief diagnostic decision tree or flowchart at the top (e.g., 'Got error? → Check status code → Follow corresponding section') to improve workflow clarity.
- Move the comprehensive error middleware (Step 6) to a separate referenced file to reduce inline content length and improve progressive disclosure.
- Trim explanatory comments that Claude can infer (e.g., '// Most common cause: missing x-api-key header') to improve conciseness.
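The suggested decision tree can be as small as a status-code dispatch. A minimal sketch in TypeScript: only the 401 and 429 cases come from the skill's own triggers, and the section labels and remaining status codes are illustrative assumptions, not the skill's actual headings.

```typescript
// Hypothetical status-code dispatch for the suggested decision tree.
// Only 401 and 429 appear in the skill's trigger phrases; the other
// branches and section labels are illustrative.
function apolloErrorSection(status: number): string {
  if (status === 401) return "Authentication: verify the x-api-key header and that the key is active";
  if (status === 403) return "Permissions: check key type (master vs. standard) and plan restrictions";
  if (status === 422) return "Payload: validate the request body against the endpoint's schema";
  if (status === 429) return "Rate limiting: back off and respect the per-endpoint limits";
  if (status >= 500) return "Apollo-side failure: retry with exponential backoff";
  return "Unlisted status: inspect the response body and headers";
}
```

Placed at the top of the skill, a dispatcher like this gives the agent a checkpoint ("which section am I in?") before it reaches the catalog of fixes.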
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient with useful Apollo-specific details (key types, rate limits, endpoint requirements), but the comprehensive error middleware in Step 6 is verbose and could be trimmed. Some explanatory comments are unnecessary for Claude. | 2 / 3 |
| Actionability | Provides fully executable TypeScript code, specific cURL commands for diagnostics, concrete endpoint URLs, exact header names, and specific rate limit numbers. Everything is copy-paste ready with real API paths. | 3 / 3 |
| Workflow Clarity | Steps are clearly numbered and sequenced from identification through handling each error type, but there's no explicit validation/feedback loop for the overall debugging process. The steps are more of a reference catalog than a true diagnostic workflow with checkpoints. | 2 / 3 |
| Progressive Disclosure | The error reference table and resources section are well-organized, and there's a reference to 'apollo-debug-bundle' for next steps. However, the inline content is quite long (~180 lines of code) and the rate limit table, error middleware, and detailed code examples could be split into referenced files. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
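For the 429 handling the table credits, the externalized middleware could center on a small retry helper. A sketch of that shape, where the delay schedule, retry cap, and response type are assumptions rather than Apollo-documented values:

```typescript
// Illustrative exponential backoff for 429 responses, in the spirit of the
// error middleware the review describes. Delays and retry cap are assumed.
async function withBackoff<T>(
  call: () => Promise<{ status: number; body?: T }>,
  maxRetries = 3,
  baseDelayMs = 250,
): Promise<{ status: number; body?: T }> {
  for (let attempt = 0; ; attempt++) {
    const res = await call();
    // Only retry rate-limit responses, up to maxRetries extra attempts.
    if (res.status !== 429 || attempt >= maxRetries) return res;
    const delayMs = baseDelayMs * 2 ** attempt; // 250ms, 500ms, 1s, ...
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```

Keeping the retry logic in one generic helper like this, rather than inlined per endpoint, is one way to act on the "move the middleware to a referenced file" suggestion above.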
### Validation (81%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure — 9 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
Revision: 70e9fa4