Set up Apollo.io monitoring and observability. Use when implementing logging, metrics, tracing, and alerting for Apollo integrations. Trigger with phrases like "apollo monitoring", "apollo metrics", "apollo observability", "apollo logging", "apollo alerts".
Install with Tessl CLI:

```shell
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill apollo-observability84
```
Quality: 77% (Does it follow best practices?)
Impact: 100% (1.44x average score across 3 eval scenarios)
Optimize this skill with Tessl
```shell
npx tessl skill review --optimize ./plugins/saas-packs/apollo-pack/skills/apollo-observability/SKILL.md
```

Discovery
89%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured skill description with explicit trigger terms and clear 'Use when' guidance. The main weakness is that the capabilities described are somewhat general (logging, metrics, tracing, alerting) rather than listing specific concrete actions. The description effectively distinguishes itself through Apollo-specific terminology.
Suggestions
Add more specific concrete actions like 'configure Apollo Studio metrics', 'set up distributed tracing with OpenTelemetry', or 'create alerting rules for GraphQL errors' to improve specificity.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Apollo.io monitoring/observability) and lists general actions (logging, metrics, tracing, alerting), but doesn't describe concrete specific actions like 'configure Prometheus exporters' or 'set up Grafana dashboards'. | 2 / 3 |
| Completeness | Clearly answers both what ('Set up Apollo.io monitoring and observability' with logging, metrics, tracing, alerting) and when ('Use when implementing...', 'Trigger with phrases like...'). Has explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Explicitly lists natural trigger phrases users would say: 'apollo monitoring', 'apollo metrics', 'apollo observability', 'apollo logging', 'apollo alerts'. Good coverage of common variations. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very specific niche combining Apollo.io with monitoring/observability. The 'apollo' prefix on all trigger terms makes it unlikely to conflict with generic monitoring skills or other Apollo-unrelated skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a comprehensive observability skill with excellent, production-ready code examples covering metrics, logging, tracing, and alerting. The main weaknesses are the lack of a clear implementation workflow (what order to set things up, how to verify each component works) and the monolithic structure that could benefit from splitting detailed configurations into referenced files.
Suggestions
Add a 'Setup Workflow' section at the top with numbered steps: 1) Install dependencies, 2) Configure metrics, 3) Verify metrics endpoint, 4) Set up logging, etc., with validation checkpoints
Move the Grafana dashboard JSON and Prometheus alert rules to separate referenced files (e.g., 'See [grafana-dashboard.json](./grafana-dashboard.json)') to improve progressive disclosure
Add a verification section showing how to confirm each observability component is working (e.g., curl commands to test /metrics endpoint, sample log output to expect)
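The verification suggestion above can be sketched in code: a small check that parses a Prometheus exposition payload (the text a `/metrics` endpoint returns) and reports which expected metric families are missing. This is a minimal sketch, and the `apollo_*` metric names are hypothetical, not taken from the reviewed skill.

```python
# Sketch: verify that expected metric families appear in a
# Prometheus exposition payload fetched from a /metrics endpoint.
# The metric names below are hypothetical examples.

EXPECTED = {"apollo_requests_total", "apollo_errors_total"}

def missing_metrics(exposition_text: str, expected=EXPECTED) -> set:
    """Return the expected metric names that have no sample in the payload."""
    present = set()
    for line in exposition_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            # '# HELP ...' / '# TYPE ...' comment lines are skipped;
            # only actual sample lines count as evidence.
            continue
        # A sample line looks like: name{labels} value [timestamp]
        name = line.split("{", 1)[0].split(" ", 1)[0]
        present.add(name)
    return expected - present
```

A verification step in the skill could then fetch the endpoint (e.g. with `curl` or `urllib`) and fail setup if `missing_metrics` returns a non-empty set.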
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is comprehensive but includes some verbose sections. The code examples are well-structured but could be more condensed: some boilerplate (like full metric definitions) could be summarized with key examples rather than exhaustive listings. | 2 / 3 |
| Actionability | Excellent actionability with fully executable TypeScript code, complete Prometheus alert rules in YAML, and a ready-to-use Grafana dashboard JSON. All code is copy-paste ready with proper imports and complete implementations. | 3 / 3 |
| Workflow Clarity | The skill presents components (metrics, logging, tracing, alerting) but lacks a clear setup sequence or integration workflow. There's no explicit order for implementing these components or validation steps to verify the observability stack is working correctly. | 2 / 3 |
| Progressive Disclosure | Content is well-organized with clear sections, but everything is inline in one large file. The Grafana dashboard JSON and Prometheus rules could be referenced as separate files. The 'Next Steps' reference to apollo-incident-runbook is good but more cross-references would help. | 2 / 3 |
| Total | | 9 / 12 Passed |
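The Actionability row above credits the skill's "complete Prometheus alert rules in YAML". For context, this is what a minimal rule of that kind looks like; the metric names and threshold here are illustrative assumptions, not taken from the reviewed skill.

```yaml
# Sketch of a Prometheus alerting rule; metric names and the 5%
# threshold are illustrative, not from the reviewed skill.
groups:
  - name: apollo-alerts
    rules:
      - alert: ApolloHighErrorRate
        expr: rate(apollo_errors_total[5m]) / rate(apollo_requests_total[5m]) > 0.05
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Apollo error rate above 5% for 10 minutes"
```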
Validation
81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |