
metrics

System metrics, telemetry, and performance monitoring

65

Quality

51%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./src/skills/bundled/metrics/SKILL.md

Quality

Discovery

22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is too terse and abstract to effectively guide skill selection. It names a domain (system monitoring) but fails to specify concrete actions Claude can perform or when this skill should be triggered. The lack of actionable verbs and explicit usage guidance makes it difficult to distinguish from related skills.

Suggestions

Add concrete actions describing what the skill does, e.g., 'Collects CPU/memory usage, analyzes latency patterns, configures alerting thresholds, creates monitoring dashboards'

Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user asks about server health, resource utilization, performance bottlenecks, or setting up monitoring'

Include natural trigger terms users would say, such as 'CPU', 'memory', 'disk space', 'server load', 'Prometheus', 'Grafana', 'observability'
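Taken together, these suggestions point toward frontmatter along the following lines. This is an illustrative sketch, not the skill's actual description:

```yaml
---
name: metrics
description: >
  Collects CPU, memory, and disk usage, analyzes latency patterns,
  configures alerting thresholds, and creates monitoring dashboards.
  Use when the user asks about server health, resource utilization,
  performance bottlenecks, observability, or setting up monitoring
  with tools like Prometheus or Grafana.
---
```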

Dimension / Reasoning / Score

Specificity

The description uses vague, abstract language ('metrics', 'telemetry', 'monitoring') without listing any concrete actions. It doesn't specify what Claude actually does: there are no verbs describing capabilities such as 'collect', 'analyze', 'alert', or 'visualize'.

1 / 3

Completeness

The description only vaguely addresses 'what' (system metrics/monitoring domain) and completely lacks any 'when' guidance. There is no 'Use when...' clause or explicit trigger guidance.

1 / 3

Trigger Term Quality

Contains some relevant domain keywords ('metrics', 'telemetry', 'performance monitoring') that users might mention, but misses common variations like 'CPU usage', 'memory', 'latency', 'dashboards', 'alerts', 'observability', or specific tool names.

2 / 3

Distinctiveness Conflict Risk

The terms are somewhat specific to the monitoring/observability domain, but 'performance' and 'metrics' are broad enough to potentially conflict with database performance skills, application profiling skills, or analytics skills.

2 / 3

Total

6 / 12

Passed

Implementation

79%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a comprehensive API reference that excels at providing concrete, executable code examples with excellent token efficiency. The main weaknesses are the lack of explicit setup/validation workflows and the monolithic structure, which would benefit from progressive disclosure into separate files for detailed API documentation.

Suggestions

Add a quick-start workflow section showing the complete setup sequence: install -> configure -> validate configuration -> start collection -> verify metrics are being collected

Include error handling patterns and validation steps, especially for configuration errors and connection failures to storage/Prometheus

Consider splitting detailed API examples into a separate REFERENCE.md file, keeping SKILL.md focused on quick-start and common use cases
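The suggested quick-start sequence (configure -> validate -> start collection -> verify) could be sketched as follows. The `MetricsCollector` API below is hypothetical, written only to illustrate the validate-before-start pattern, and is not the skill's actual interface:

```typescript
// Hypothetical sketch of the suggested setup workflow. All names
// (CollectorConfig, MetricsCollector, validateConfig) are illustrative.

interface CollectorConfig {
  intervalMs: number;
  endpoint: string; // e.g. a Prometheus remote-write or pushgateway URL
}

// Validation step: surface configuration errors before starting collection.
function validateConfig(config: CollectorConfig): string[] {
  const errors: string[] = [];
  if (config.intervalMs <= 0) errors.push("intervalMs must be positive");
  if (!config.endpoint.startsWith("http")) {
    errors.push("endpoint must be an http(s) URL");
  }
  return errors;
}

class MetricsCollector {
  private samples: number[] = [];
  constructor(private config: CollectorConfig) {}

  collect(value: number): void {
    this.samples.push(value);
  }

  // Verify step: confirm metrics are actually being recorded.
  isHealthy(): boolean {
    return this.samples.length > 0;
  }
}

// 1. Configure
const config: CollectorConfig = { intervalMs: 5000, endpoint: "http://localhost:9091" };

// 2. Validate, failing fast on configuration errors
const errors = validateConfig(config);
if (errors.length > 0) throw new Error(`Invalid config: ${errors.join(", ")}`);

// 3. Start collection
const collector = new MetricsCollector(config);
collector.collect(0.42); // e.g. a CPU load sample

// 4. Verify metrics are flowing before trusting the pipeline
console.log(collector.isHealthy() ? "metrics flowing" : "no metrics collected");
```

The key design point the review is asking for is step 2: checking the configuration and surfacing errors before collection starts, rather than failing silently at runtime.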

Dimension / Reasoning / Score

Conciseness

The content is lean and efficient, presenting API references and code examples without unnecessary explanation. It assumes Claude understands TypeScript, metrics concepts, and system monitoring without over-explaining basics.

3 / 3

Actionability

Provides fully executable TypeScript code examples that are copy-paste ready. Every API method is demonstrated with concrete usage patterns, configuration options, and output formatting.

3 / 3

Workflow Clarity

While individual API calls are clear, there's no explicit workflow for setting up a complete monitoring system. Missing validation steps for configuration and no error handling patterns shown for when metrics collection fails.

2 / 3

Progressive Disclosure

Content is well-organized with clear sections and tables, but it's a monolithic reference document. Could benefit from splitting detailed API examples into separate files while keeping SKILL.md as a quick-start overview.

2 / 3

Total

10 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
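The report does not show which frontmatter key triggered the warning. A fix would look like the following sketch, where `author` stands in as a hypothetical offending key moved under `metadata` (which the validator suggests as the allowed home for arbitrary keys):

```yaml
---
name: metrics
description: System metrics, telemetry, and performance monitoring
metadata:
  author: alsk1992   # hypothetical: previously a top-level key
---
```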

Total

10 / 11

Passed

Repository
alsk1992/CloddsBot
Reviewed


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.