Set up Apollo.io monitoring and observability. Use when implementing logging, metrics, tracing, and alerting for Apollo integrations. Trigger with phrases like "apollo monitoring", "apollo metrics", "apollo observability", "apollo logging", "apollo alerts".
## Quality
### Discovery (89%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid skill description that clearly defines its scope around Apollo.io monitoring and observability. It excels in completeness with explicit 'Use when' and 'Trigger with' clauses, and has good trigger term coverage. The main weakness is that the specific capabilities could be more concrete—listing actual actions like configuring specific monitoring tools or setting up dashboards rather than broad categories.
#### Suggestions

- Add more concrete actions beyond high-level categories, e.g., 'Configure Prometheus exporters, set up Grafana dashboards, define alert rules, implement distributed tracing with OpenTelemetry for Apollo integrations.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (Apollo.io monitoring/observability) and lists some actions (logging, metrics, tracing, alerting), but these are high-level categories rather than multiple specific concrete actions like 'configure dashboards, set up alert thresholds, integrate with Prometheus'. | 2 / 3 |
| Completeness | The description clearly answers both 'what' (set up Apollo.io monitoring and observability) and 'when' (explicit 'Use when' clause for implementing logging, metrics, tracing, alerting, plus explicit trigger phrases). Both components are present and explicit. | 3 / 3 |
| Trigger Term Quality | The description explicitly lists natural trigger phrases users would say: 'apollo monitoring', 'apollo metrics', 'apollo observability', 'apollo logging', 'apollo alerts'. These cover the main variations a user would naturally use when requesting this functionality. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description is clearly scoped to Apollo.io monitoring/observability specifically, with distinct trigger terms that combine 'apollo' with monitoring-specific terms. This is unlikely to conflict with general monitoring skills or other Apollo skills focused on different aspects. | 3 / 3 |
| **Total** | | **11 / 12 (Passed)** |
### Implementation (64%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a highly actionable skill with production-quality, executable TypeScript and Prometheus alerting rules covering metrics, logging, tracing, and alerting for Apollo.io integrations. Its main weaknesses are the lack of validation/verification checkpoints between steps and the monolithic structure that could benefit from splitting detailed implementations into separate files. Some minor verbosity in the overview and repeated interceptor patterns could be tightened.
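The summary above credits the skill with production-quality Prometheus alerting rules. For context, this is the general shape such a rule takes; this sketch is illustrative only, and the metric name, threshold, and labels are assumptions rather than the skill's actual rules:

```yaml
groups:
  - name: apollo-integration
    rules:
      - alert: ApolloHighErrorRate
        # Hypothetical metric name and threshold; the skill's rules may differ.
        expr: |
          sum(rate(apollo_requests_total{status="error"}[5m]))
            / sum(rate(apollo_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "Apollo.io API error rate above 5% for 10 minutes"
```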
#### Suggestions

- Add explicit validation checkpoints after key steps, e.g., 'Run `curl localhost:9090/metrics` and verify `apollo_requests_total` appears' after wiring up interceptors.
- Split the detailed code implementations (metrics, logging, tracing, alerting) into separate referenced files and keep SKILL.md as a concise overview with quick-start wiring code.
- Remove the overview paragraph that restates what the steps already demonstrate; the step titles and Output section already serve this purpose.
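The first suggestion above can be sketched as a runnable checkpoint. The port and metric name come from the suggestion itself; the payload here is simulated so the check runs without a live server:

```shell
# In a live setup, the checkpoint would be:
#   curl -s localhost:9090/metrics | grep '^apollo_requests_total'
# Simulated /metrics payload so the grep itself is demonstrable:
metrics_payload='# TYPE apollo_requests_total counter
apollo_requests_total{endpoint="/v1/people/search",status="200"} 42'

if printf '%s\n' "$metrics_payload" | grep -q '^apollo_requests_total'; then
  echo "checkpoint passed: apollo_requests_total is exposed"
else
  echo "checkpoint failed: metric missing" >&2
fi
```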
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient with executable code, but the overview section restates what the steps already show, and some inline comments are unnecessary. The skill is quite long (~200 lines) and could be tightened; e.g., the tracing interceptor pattern is very similar to the metrics interceptor and could be consolidated or abbreviated. | 2 / 3 |
| Actionability | Every step provides fully executable TypeScript or YAML code with correct imports, proper types, and realistic configurations. The code is copy-paste ready with real library APIs (prom-client, pino, OpenTelemetry, express) and Apollo-specific endpoint paths. | 3 / 3 |
| Workflow Clarity | Steps are clearly numbered and sequenced, but there are no validation checkpoints: no step verifies that metrics are actually being collected, that the interceptors are wired correctly, or that alerts fire as expected. For an observability setup involving multiple interconnected components, explicit verification steps (e.g., 'curl /metrics and confirm counters appear') would be expected. | 2 / 3 |
| Progressive Disclosure | The skill is a monolithic document with all code inline. Given its length and the distinct concerns (metrics, logging, tracing, alerting), splitting detailed implementations into separate referenced files while keeping SKILL.md as an overview would improve navigation. The references to 'apollo-incident-runbook' and 'apollo-cost-tuning' are good signals, but the main content itself is not well layered. | 2 / 3 |
| **Total** | | **9 / 12 (Passed)** |
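The table above refers to a metrics interceptor built on prom-client. As a rough illustration of that pattern, here is a dependency-free sketch: a hand-rolled counter standing in for prom-client's `Counter`, wrapped around an Apollo API call. All names are hypothetical, and real code would use prom-client and async HTTP calls:

```typescript
// Hypothetical, dependency-free sketch of the metrics-interceptor pattern.
// A real implementation would use prom-client's Counter and registry.

type Labels = Record<string, string>;

// Minimal Prometheus-style counter with text exposition output.
class Counter {
  private values = new Map<string, number>();
  constructor(private name: string, private help: string) {}

  inc(labels: Labels): void {
    const key = JSON.stringify(labels);
    this.values.set(key, (this.values.get(key) ?? 0) + 1);
  }

  // Render what a /metrics endpoint would serve for this counter.
  expose(): string {
    const lines = [
      `# HELP ${this.name} ${this.help}`,
      `# TYPE ${this.name} counter`,
    ];
    this.values.forEach((value, key) => {
      const labels: Labels = JSON.parse(key);
      const pairs = Object.entries(labels)
        .map(([k, v]) => `${k}="${v}"`)
        .join(",");
      lines.push(`${this.name}{${pairs}} ${value}`);
    });
    return lines.join("\n");
  }
}

const apolloRequestsTotal = new Counter(
  "apollo_requests_total",
  "Apollo.io API requests by endpoint and status",
);

// Interceptor: wraps an Apollo API call and records its outcome.
// Synchronous here for brevity; real calls would be async.
function withMetrics<T>(endpoint: string, call: () => T): T {
  try {
    const result = call();
    apolloRequestsTotal.inc({ endpoint, status: "ok" });
    return result;
  } catch (err) {
    apolloRequestsTotal.inc({ endpoint, status: "error" });
    throw err;
  }
}

// Usage: record one simulated call, then print the exposition text.
withMetrics("/v1/people/search", () => ({ people: [] }));
const exposition = apolloRequestsTotal.expose();
console.log(exposition);
```

A validation checkpoint like the one suggested above would then just grep this exposition text for `apollo_requests_total`.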
### Validation (81%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

#### Validation for skill structure (9 / 11 passed)
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **9 / 11 (Passed)** |