Implement comprehensive observability for Gamma integrations. Use when setting up monitoring, logging, tracing, or building dashboards for Gamma API usage. Trigger with phrases like "gamma monitoring", "gamma logging", "gamma metrics", "gamma observability", "gamma dashboard".
Optimize this skill with Tessl: `npx tessl skill review --optimize ./plugins/saas-packs/gamma-pack/skills/gamma-observability/SKILL.md`

Quality
Discovery — 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid skill description that clearly identifies its niche (Gamma observability), provides explicit 'Use when' guidance, and lists specific trigger phrases. Its main weakness is that the capability description stays at a somewhat high level—listing categories of observability (monitoring, logging, tracing, dashboards) rather than drilling into specific concrete actions within each category.
Suggestions
- Add more specific concrete actions beyond category names, e.g., 'configure structured logging for Gamma API calls, set up distributed tracing with correlation IDs, create metric dashboards for API latency and error rates, define alerting rules for Gamma integration failures'.
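Folded into the skill's frontmatter, the suggested rewrite might look like the following sketch (the action list is illustrative and should mirror what the skill's code actually covers):

```yaml
description: >
  Implement observability for Gamma integrations: configure structured
  logging for Gamma API calls, set up distributed tracing with correlation
  IDs, create metric dashboards for API latency and error rates, and define
  alerting rules for Gamma integration failures. Use when setting up
  monitoring, logging, tracing, or building dashboards for Gamma API usage.
  Trigger with phrases like "gamma monitoring", "gamma logging",
  "gamma metrics", "gamma observability", "gamma dashboard".
```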
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (observability for Gamma integrations) and mentions some actions (monitoring, logging, tracing, building dashboards), but these are fairly high-level categories rather than multiple specific concrete actions like 'configure alert thresholds, set up distributed tracing spans, create Grafana dashboards'. | 2 / 3 |
| Completeness | The description clearly answers both 'what' (implement comprehensive observability for Gamma integrations) and 'when' (explicitly states 'Use when setting up monitoring, logging, tracing, or building dashboards' and provides trigger phrases), meeting the criteria for explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | The description explicitly lists natural trigger phrases users would say: 'gamma monitoring', 'gamma logging', 'gamma metrics', 'gamma observability', 'gamma dashboard'. It also includes related terms like 'tracing' and 'Gamma API usage', providing good coverage of natural terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'Gamma' as a specific platform/API and 'observability' as the domain creates a clear niche. The trigger terms are all prefixed with 'gamma', making it unlikely to conflict with generic monitoring skills or other platform-specific skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation — 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides highly actionable, executable code covering multiple observability concerns (metrics, logging, health checks, alerting) for Gamma integrations. However, it suffers from being a monolithic document with all code inline rather than split into referenced files, and it lacks validation/verification steps to confirm the observability pipeline is working correctly. The threshold tables and error handling sections are useful practical additions.
Suggestions
- Split the large code blocks into separate bundle files (e.g., gamma-metrics.ts, logger.ts, health.ts, metrics-endpoint.ts, alerting-rules.yml) and reference them from the SKILL.md overview to improve progressive disclosure.
- Add a validation step after setup (e.g., 'Verify metrics are being scraped: curl localhost:9090/api/v1/targets and confirm the gamma-health target is UP') to improve workflow clarity.
- Add a brief end-to-end verification workflow: trigger a test generation, confirm logs appear, check the metrics endpoint, verify the Prometheus scrape. This would close the feedback-loop gap.
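The scrape-verification suggestion above can also be closed out programmatically. A minimal sketch, assuming Prometheus's standard `/api/v1/targets` response shape and a hypothetical `gamma-health` scrape-job name:

```typescript
// Shape of the relevant parts of Prometheus's GET /api/v1/targets response.
interface ActiveTarget {
  labels: { job: string };
  health: string; // "up" | "down" | "unknown"
}

interface TargetsResponse {
  status: string;
  data: { activeTargets: ActiveTarget[] };
}

// True if at least one active target for the given scrape job is healthy.
function isJobUp(body: TargetsResponse, job: string): boolean {
  return body.data.activeTargets.some(
    (t) => t.labels.job === job && t.health === "up",
  );
}

// In a real check you would fetch("http://localhost:9090/api/v1/targets")
// and pass the parsed JSON in; here a canned response stands in for it.
const sample: TargetsResponse = {
  status: "success",
  data: {
    activeTargets: [
      { labels: { job: "gamma-health" }, health: "up" },
      { labels: { job: "node" }, health: "down" },
    ],
  },
};

console.log(isJobUp(sample, "gamma-health")); // true
```

A check like this could run as a post-setup step so that a misconfigured scrape fails loudly instead of silently producing empty dashboards.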
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly long with substantial inline code that could be split into referenced files. Some explanatory text is unnecessary (e.g., the overview paragraph explaining why observability is built around call patterns), but overall it's reasonably efficient without excessive padding. | 2 / 3 |
| Actionability | Provides fully executable TypeScript code for an instrumented client, structured logging, health check endpoint, Prometheus metrics endpoint, and alerting rules in YAML. All code is copy-paste ready with concrete examples and specific metric names. | 3 / 3 |
| Workflow Clarity | Steps are clearly numbered and sequenced (instrumented client → logging → health check → metrics → alerting), but there are no validation checkpoints or feedback loops. There's no step to verify that metrics are actually being scraped correctly, that the health check works, or how to test the full pipeline end-to-end. | 2 / 3 |
| Progressive Disclosure | All content is monolithically inline with no bundle files to offload the substantial code blocks. The ~200 lines of code for metrics, logging, health checks, Prometheus endpoints, and alerting rules should be split into referenced files, with the SKILL.md serving as an overview. References to other skills exist but no supporting bundle structure. | 1 / 3 |
| Total | | 8 / 12 Passed |
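For context, the instrumented-client pattern the table describes can be sketched in a few dozen dependency-free lines: count Gamma API calls by endpoint and outcome, then render the result in Prometheus text exposition format. The metric name `gamma_api_requests_total` is illustrative, not taken from the skill under review:

```typescript
// Tiny counter metric rendered in Prometheus text exposition format.
class Counter {
  private values = new Map<string, number>();

  constructor(public name: string, public help: string) {}

  inc(labels: Record<string, string>, by = 1): void {
    const key = JSON.stringify(labels);
    this.values.set(key, (this.values.get(key) ?? 0) + by);
  }

  render(): string {
    const lines = [
      `# HELP ${this.name} ${this.help}`,
      `# TYPE ${this.name} counter`,
    ];
    for (const [key, value] of this.values) {
      const labels = Object.entries(JSON.parse(key) as Record<string, string>)
        .map(([k, v]) => `${k}="${v}"`)
        .join(",");
      lines.push(`${this.name}{${labels}} ${value}`);
    }
    return lines.join("\n");
  }
}

const requests = new Counter(
  "gamma_api_requests_total",
  "Gamma API requests by endpoint and status",
);

// Wrap any Gamma API call so every success and failure is counted.
async function instrumented<T>(
  endpoint: string,
  call: () => Promise<T>,
): Promise<T> {
  try {
    const result = await call();
    requests.inc({ endpoint, status: "ok" });
    return result;
  } catch (err) {
    requests.inc({ endpoint, status: "error" });
    throw err;
  }
}
```

A fuller version would also observe request duration in a histogram and serve `render()`'s output from a `/metrics` endpoint, which is roughly what the reviewed skill does inline.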
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
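Both warnings point at frontmatter hygiene. A hedged sketch of the shape the fix might take (tool names and the `pack` key are illustrative; match them to the actual SKILL.md and the skill spec):

```yaml
---
name: gamma-observability
description: Implement comprehensive observability for Gamma integrations.
# Keep allowed-tools to names the validator recognizes.
allowed-tools: Read, Write, Bash
# Move keys the spec does not define under metadata rather than top level.
metadata:
  pack: gamma-pack
---
```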