Collect Customer.io debug evidence for support tickets. Use when creating support requests, investigating delivery failures, or documenting integration issues. Trigger: "customer.io debug", "customer.io support ticket", "collect customer.io logs", "customer.io diagnostics".
Evals: Pending (no eval scenarios have been run).
Issues: no known issues.
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./plugins/saas-packs/customerio-pack/skills/customerio-debug-bundle/SKILL.md`

Quality
Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured skill description with strong completeness and distinctiveness. It clearly identifies the domain (Customer.io), the purpose (debug evidence collection for support), and provides explicit trigger terms. The main weakness is that the specific actions/capabilities could be more concrete—listing exactly what debug evidence is collected would strengthen specificity.
Suggestions
Add more concrete actions to improve specificity, e.g., 'Captures API logs, exports delivery event timelines, documents webhook configurations, and compiles error traces for Customer.io support tickets.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Customer.io debug evidence) and some actions (collect, investigate, document), but doesn't list specific concrete actions like 'capture API response codes, export delivery logs, screenshot webhook configurations'. The actions remain somewhat high-level. | 2 / 3 |
| Completeness | Clearly answers both 'what' (collect Customer.io debug evidence for support tickets) and 'when' (creating support requests, investigating delivery failures, documenting integration issues), with explicit trigger terms listed separately. | 3 / 3 |
| Trigger Term Quality | Includes explicit trigger terms that users would naturally say: 'customer.io debug', 'customer.io support ticket', 'collect customer.io logs', 'customer.io diagnostics'. Also includes natural phrases like 'delivery failures' and 'integration issues' that users would mention. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly specific to Customer.io debug/support workflows. The combination of 'Customer.io' with 'debug evidence', 'support tickets', and 'diagnostics' creates a very clear niche that is unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable skill with executable code for each diagnostic step and a useful error handling table. Its main weaknesses are verbosity (the full scripts could be referenced rather than inlined) and missing validation checkpoints between steps that would help Claude know when to stop or adjust the diagnostic process. The debug checklist at the end is a good touch but would be more effective if integrated into the workflow.
Suggestions
Move the full script contents to referenced files (e.g., scripts/debug-api.sh, scripts/debug-user.ts) and keep only usage examples and key output expectations in the SKILL.md
Add explicit validation checkpoints between steps, e.g., 'If Track API returns non-200, stop and check credentials before proceeding to Step 2'
Remove the overview paragraph since the step titles and checklist already convey the scope
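The validation-checkpoint suggestion can be sketched as a small gate function that the skill's bash steps would call between stages. This is a minimal sketch, not the skill's actual code; the env variable names (`CUSTOMERIO_SITE_ID`, `CUSTOMERIO_API_KEY`) and the region endpoint in the commented-out curl line are assumptions:

```shell
# Hypothetical checkpoint gate: stop the diagnostic run when a step
# returns a non-200 status instead of proceeding to the next step.
check_step() {
  # $1 = step number, $2 = HTTP status from the previous command
  step="$1"; status="$2"
  if [ "$status" -ne 200 ]; then
    echo "Step $step failed (HTTP $status): check credentials before proceeding"
    return 1
  fi
  echo "Step $step passed (HTTP $status)"
}

# Example gate between Step 1 and Step 2. The curl line is commented out
# because it needs real credentials (endpoint and env vars are assumptions):
# status=$(curl -s -o /dev/null -w '%{http_code}' \
#   -u "$CUSTOMERIO_SITE_ID:$CUSTOMERIO_API_KEY" \
#   https://track.customer.io/api/v1/accounts/region)
check_step 1 200
check_step 2 401 || echo "stopping: do not proceed to Step 3"
```

Wiring this into each step would also integrate the error handling table into the workflow rather than leaving it as a separate reference.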
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly long with some unnecessary elements — the overview paragraph restates what the steps already show, comments in code are sometimes redundant, and the 'Current State' section with shell commands is a nice touch but the overall content could be tightened. The support report template includes placeholder steps-to-reproduce that Claude would know to include. | 2 / 3 |
| Actionability | All steps contain fully executable bash scripts and TypeScript code with proper imports, error handling, and real API endpoints. The commands are copy-paste ready with concrete curl calls, SDK usage patterns, and specific environment variable names. | 3 / 3 |
| Workflow Clarity | Steps are clearly sequenced (1-5) and the debug checklist provides a good summary, but there are no explicit validation checkpoints between steps or feedback loops for error recovery. For a diagnostic/debug bundle collection process, there's no guidance on what to do if Step 1 fails before proceeding to Step 2, and the error handling table is separate from the workflow rather than integrated. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and a logical flow, but it's quite long (~180 lines of substantive content) and could benefit from splitting the individual scripts into separate files with the SKILL.md serving as an overview. The inline TypeScript files are already named as separate scripts but are fully embedded. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation: 81% (9 / 11 passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | `allowed-tools` contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 (Passed) |
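Both warnings concern the SKILL.md frontmatter. A minimal sketch of a cleaned-up frontmatter, assuming the common skill-spec keys `name`, `description`, and `allowed-tools`; the tool names and `metadata` key shown are illustrative, not taken from the skill itself:

```yaml
---
name: customerio-debug-bundle
description: Collect Customer.io debug evidence for support tickets. ...
# Keep only tool names the agent runtime recognizes (illustrative list):
allowed-tools: Bash, Read, Write
# Keys the spec doesn't define should move under metadata or be removed:
metadata:
  pack: customerio-pack
---
```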