Collect Apollo.io debug evidence for support. Use when preparing support tickets, documenting issues, or gathering diagnostic information for Apollo problems. Trigger with phrases like "apollo debug", "apollo support bundle", "collect apollo diagnostics", "apollo troubleshooting info".
Quality: 77% (does it follow best practices?)
Evals: Pending; no eval scenarios have been run.
Status: Passed; no known issues.
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./plugins/saas-packs/apollo-pack/skills/apollo-debug-bundle/SKILL.md
```

Quality
Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description with excellent trigger terms and completeness, clearly specifying both when and what. Its main weakness is the lack of specific concrete actions—it says 'collect debug evidence' but doesn't enumerate what specific artifacts or steps are involved. The narrow Apollo.io focus makes it highly distinctive.
Suggestions
Add specific concrete actions such as 'capture logs, export configurations, gather error messages, compile system info' to improve specificity beyond the general 'collect evidence'.
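One way to apply this suggestion is to fold the enumerated actions directly into the frontmatter description. A hedged sketch (the exact field layout depends on the skill's actual frontmatter, and the wording here is illustrative):

```yaml
description: >
  Collect Apollo.io debug evidence for support: capture request/response logs,
  export configuration, record error messages, and compile system info into a
  support bundle. Use when preparing support tickets, documenting issues, or
  gathering diagnostic information. Trigger with phrases like "apollo debug",
  "apollo support bundle", "collect apollo diagnostics",
  "apollo troubleshooting info".
```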
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Apollo.io debug evidence) and the general action (collect), but doesn't list specific concrete actions like 'capture logs', 'export configuration', 'screenshot error states', etc. The description stays at the high level of 'collect evidence' without detailing what that entails. | 2 / 3 |
| Completeness | Clearly answers both 'what' (collect Apollo.io debug evidence for support) and 'when' (preparing support tickets, documenting issues, gathering diagnostic information) with explicit trigger phrases provided. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms: 'apollo debug', 'apollo support bundle', 'collect apollo diagnostics', 'apollo troubleshooting info', plus contextual terms like 'support tickets', 'documenting issues', 'diagnostic information'. These cover natural variations a user would say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly specific to Apollo.io debug/support context with distinct trigger phrases. Unlikely to conflict with other skills due to the narrow niche of Apollo diagnostics collection. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a comprehensive, actionable Apollo.io debug bundle collector with real executable TypeScript and useful curl shortcuts. Its main weaknesses are verbosity — the full implementation could be a referenced script rather than inline — and the lack of explicit validation checkpoints in the workflow (e.g., verifying the API key is set before running tests). The error handling table and quick CLI examples are strong additions.
Suggestions
Move the full TypeScript implementation to a referenced script file (e.g., `src/scripts/debug-bundle.ts`) and keep only a brief usage example and key decision points in the SKILL.md body.
Add an explicit prerequisite check step (e.g., 'Verify APOLLO_API_KEY is set and non-empty before proceeding') as a validation checkpoint early in the workflow.
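The prerequisite checkpoint suggested above can be sketched in the skill's own TypeScript. `assertApiKey` is an illustrative name, not part of the skill's actual code:

```typescript
// Hypothetical prerequisite checkpoint: fail fast if the Apollo API key is
// missing or blank, before any network calls are attempted.
function assertApiKey(): string {
  const key = process.env.APOLLO_API_KEY;
  if (key === undefined || key.trim() === "") {
    throw new Error(
      "APOLLO_API_KEY is not set; export it before collecting the debug bundle."
    );
  }
  return key;
}
```

Running this as the first step turns a confusing downstream 401 into an immediate, self-explanatory failure.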
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly long with a full TypeScript implementation spread across 4 steps. While the code is useful, the `collectDebugBundle` function in Step 1 returns an incomplete bundle (doesn't call connectivity/endpoint tests itself), and some structure like the `## Current State` shell commands and `## Output` bullet list restate what the code already shows. The error handling table and curl examples add value though. | 2 / 3 |
| Actionability | Provides fully executable TypeScript code with concrete API endpoints, specific headers, and real Apollo.io URLs. The curl one-liners in the Examples section are copy-paste ready for quick diagnostics. The error handling table maps specific symptoms to concrete next steps. | 3 / 3 |
| Workflow Clarity | The 4-step sequence is clear and logically ordered (create bundle → test connectivity → test endpoints → output). However, there are no explicit validation checkpoints or feedback loops — no step verifies that prerequisites are met (e.g., checking APOLLO_API_KEY is set before proceeding), and no guidance on what to do if the script itself fails to compile or run. For a diagnostic tool, this is acceptable but not exemplary. | 2 / 3 |
| Progressive Disclosure | The skill is a single monolithic file with ~150+ lines of inline code. The code could be referenced as a separate script file rather than embedded in full. The 'Next Steps' and 'Resources' sections provide some navigation, but the bulk of the content is inline rather than appropriately split for a skill overview. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
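The incomplete-bundle issue noted under Conciseness can be fixed by having the top-level collector invoke each test itself and merge the results. A minimal sketch, assuming illustrative names (`testConnectivity`, `testEndpoints` are placeholders, not the skill's actual functions):

```typescript
// Hypothetical orchestration: create bundle → test connectivity → test
// endpoints → output, all driven from one entry point so the returned
// bundle is always complete.
interface DebugBundle {
  timestamp: string;
  connectivity: { reachable: boolean };
  endpoints: Record<string, number>;
}

async function testConnectivity(): Promise<{ reachable: boolean }> {
  // Placeholder: the real skill would ping an Apollo.io endpoint here.
  return { reachable: true };
}

async function testEndpoints(): Promise<Record<string, number>> {
  // Placeholder: the real skill would hit each API endpoint and record status codes.
  return {};
}

async function collectDebugBundle(): Promise<DebugBundle> {
  return {
    timestamp: new Date().toISOString(),
    connectivity: await testConnectivity(), // step 2 runs as part of step 1
    endpoints: await testEndpoints(),       // step 3 likewise
  };
}
```

With this shape, callers get one function that is guaranteed to run every diagnostic step, instead of a partial bundle that relies on later steps being invoked separately.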
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored. Result: 9 / 11 passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.