
# apollo-observability

Set up Apollo.io monitoring and observability. Use when implementing logging, metrics, tracing, and alerting for Apollo integrations. Trigger with phrases like "apollo monitoring", "apollo metrics", "apollo observability", "apollo logging", "apollo alerts".

Score: 80

- Quality: 77% (Does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./plugins/saas-packs/apollo-pack/skills/apollo-observability/SKILL.md
```

## Quality

### Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description that clearly identifies its niche (Apollo.io monitoring/observability) and provides explicit trigger guidance. Its main weakness is that the capability descriptions are somewhat high-level—listing categories like 'logging, metrics, tracing' rather than specific concrete actions. The explicit trigger phrases and clear 'Use when' clause make it effective for skill selection.

#### Suggestions

Add more specific concrete actions beyond category names, e.g., 'configure request-level logging, set up latency/error-rate metrics, implement distributed tracing with OpenTelemetry, create alerting rules for Apollo gateway health'.
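To make the suggested wording concrete, here is a minimal sketch of what "latency/error-rate metrics" for Apollo.io calls could look like. The counters are hand-rolled so the sketch is self-contained; a real integration would likely use prom-client's `Counter` and `Histogram`, and the metric names (`apolloRequestsTotal`, `apolloErrorsTotal`) are illustrative, not taken from the skill.

```typescript
// Hand-rolled labeled counter, standing in for a prom-client Counter.
class LabeledCounter {
  private counts = new Map<string, number>();
  inc(label: string, by = 1): void {
    this.counts.set(label, (this.counts.get(label) ?? 0) + by);
  }
  get(label: string): number {
    return this.counts.get(label) ?? 0;
  }
}

// Hypothetical metric names for Apollo.io request tracking.
const apolloRequestsTotal = new LabeledCounter();
const apolloErrorsTotal = new LabeledCounter();

// Record one completed Apollo.io API call, labeled by endpoint.
function recordApolloCall(endpoint: string, ok: boolean): void {
  apolloRequestsTotal.inc(endpoint);
  if (!ok) apolloErrorsTotal.inc(endpoint);
}

// Derive the per-endpoint error rate from the two counters.
function errorRate(endpoint: string): number {
  const total = apolloRequestsTotal.get(endpoint);
  return total === 0 ? 0 : apolloErrorsTotal.get(endpoint) / total;
}
```

An error-rate function like this is what an alerting rule would ultimately threshold on.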

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (Apollo.io monitoring/observability) and lists some actions (logging, metrics, tracing, alerting), but these are fairly high-level categories rather than multiple specific concrete actions like 'configure Prometheus exporters, set up distributed tracing, create Grafana dashboards'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (set up Apollo.io monitoring and observability) and 'when' (implementing logging, metrics, tracing, alerting for Apollo integrations), with explicit trigger phrases provided in a dedicated clause. | 3 / 3 |
| Trigger Term Quality | Explicitly lists natural trigger phrases users would say: 'apollo monitoring', 'apollo metrics', 'apollo observability', 'apollo logging', 'apollo alerts'. These cover the main variations a user would naturally use when requesting this functionality. | 3 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'Apollo.io' with 'monitoring/observability' creates a clear niche. The specific trigger terms all include 'apollo', which makes it unlikely to conflict with generic monitoring skills or other Apollo-related skills. | 3 / 3 |

**Total: 11 / 12 (Passed)**

### Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides highly actionable, executable code for Apollo.io observability covering metrics, logging, tracing, and alerting. However, it's overly long for a SKILL.md—the full implementations should be in referenced files with the main skill showing key patterns and wiring. It also lacks validation checkpoints to verify each step works before proceeding.

#### Suggestions

Add validation checkpoints after key steps (e.g., 'Verify metrics collection: curl localhost:9090/metrics | grep apollo_requests_total' after Step 6)
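The checkpoint above can be packaged as a small shell helper so each step of the skill ends with a verifiable command. This is a sketch that assumes the endpoint and metric name given in the suggestion (`localhost:9090/metrics`, `apollo_requests_total`); both are illustrative.

```shell
# Returns 0 when the named metric appears in scraped /metrics text.
metric_present() {
  # $1: scraped metrics text, $2: metric name
  printf '%s\n' "$1" | grep -q "^$2"
}

# Checkpoint: scrape the (assumed) endpoint and fail loudly if the
# metric is missing, so the agent stops before the next step.
verify_metrics() {
  scrape=$(curl -s localhost:9090/metrics) || return 1
  metric_present "$scrape" apollo_requests_total || {
    echo "apollo_requests_total not found; check metrics wiring" >&2
    return 1
  }
}
```

Running `verify_metrics` after the metrics step gives the pass/fail signal the review says is missing.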

Move full code implementations to referenced files (e.g., See [metrics.ts](src/observability/metrics.ts)) and keep only the key patterns and wiring overview in SKILL.md

Add a quick-start section showing the minimal wiring to get all components connected, before diving into individual implementations
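As an illustration of what that quick-start wiring might contain, here is a sketch of a single plugin connecting logging and latency metrics. The hook names mirror Apollo Server's plugin API (`requestDidStart` / `willSendResponse`), but the interfaces are declared locally so the sketch stands alone; in a real project they would come from `@apollo/server`.

```typescript
// Locally declared stand-ins for Apollo Server's plugin hook shapes.
interface RequestContext { operationName?: string }
interface RequestListener { willSendResponse(): void }
interface ObservabilityPlugin {
  requestDidStart(ctx: RequestContext): RequestListener;
}

// Shared latency store; a real setup would feed a histogram instead.
const latenciesMs: number[] = [];

// One plugin wires together request logging and latency recording.
const observabilityPlugin: ObservabilityPlugin = {
  requestDidStart(ctx) {
    const start = Date.now();
    console.log(`apollo request: ${ctx.operationName ?? "anonymous"}`);
    return {
      willSendResponse() {
        // Record end-to-end latency when the response is sent.
        latenciesMs.push(Date.now() - start);
      },
    };
  },
};
```

A quick-start would then show passing this plugin to the server constructor and pointing the alert rules at the recorded metrics, leaving the full implementations to referenced files.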

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is mostly efficient with executable code, but it's quite long (~200 lines of code) for what could be more modular. Some inline comments are helpful, but the sheer volume of code in a single SKILL.md is borderline excessive; much of this could be referenced files. | 2 / 3 |
| Actionability | Fully executable TypeScript code with proper imports, complete metric definitions, working interceptors, Prometheus alert rules in valid YAML, and a ready-to-use metrics endpoint. Every step is copy-paste ready. | 3 / 3 |
| Workflow Clarity | Steps are clearly numbered and sequenced (metrics → interceptors → logging → tracing → alerts → endpoint), but there are no validation checkpoints: no step to verify metrics are being collected, no test commands, no way to confirm the setup works before proceeding to the next step. | 2 / 3 |
| Progressive Disclosure | The skill is a monolithic wall of code that would benefit from splitting detailed implementations into separate files. References to external resources and next steps exist, but the ~200 lines of inline code should be in referenced files, with only key patterns shown in the SKILL.md overview. | 2 / 3 |

**Total: 9 / 12 (Passed)**

### Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

**Total: 9 / 11 (Passed)**

## Repository

jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

