
metrics

System metrics, telemetry, and performance monitoring


Quality: 24% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./src/skills/bundled/metrics/SKILL.md

Quality

Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is too vague and noun-heavy, lacking any concrete actions or explicit trigger guidance. It names a broad domain but fails to explain what the skill actually does or when Claude should select it. Without verbs, a 'Use when...' clause, or specific trigger terms, it would be difficult for Claude to reliably choose this skill from a large pool.

Suggestions

- Add concrete actions using verbs, e.g., 'Collects system metrics, configures telemetry pipelines, sets up performance dashboards and alerts.'

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about CPU usage, memory monitoring, latency tracking, observability, or setting up monitoring tools like Prometheus or Grafana.'

- Include natural trigger-term variations users would say, such as 'CPU', 'memory', 'disk usage', 'dashboards', 'alerts', 'observability', 'APM', 'Prometheus', 'Grafana'.
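Taken together, these suggestions point toward frontmatter along the following lines. This is an illustrative sketch, not the skill's actual SKILL.md; the wording is assembled from the examples above:

```yaml
---
name: metrics
description: >
  Collects system metrics, configures telemetry pipelines, and sets up
  performance dashboards and alerts. Use when the user asks about CPU usage,
  memory, disk usage, latency tracking, observability, APM, or monitoring
  tools such as Prometheus or Grafana.
---
```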

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description names a domain (system metrics, telemetry, performance monitoring) but lists no concrete actions. There are no verbs describing what the skill actually does; it is just a list of abstract nouns. | 1 / 3 |
| Completeness | The description weakly addresses 'what' (only naming a domain without concrete actions) and completely omits 'when': there is no 'Use when...' clause or equivalent trigger guidance. | 1 / 3 |
| Trigger Term Quality | Terms like 'metrics', 'telemetry', and 'performance monitoring' are somewhat relevant keywords a user might use, but common variations like 'CPU usage', 'memory', 'dashboards', 'alerts', 'latency', 'observability', or 'APM' are missing. | 2 / 3 |
| Distinctiveness / Conflict Risk | The terms 'system metrics', 'telemetry', and 'performance monitoring' provide some specificity to a monitoring domain, but could easily overlap with skills related to logging, infrastructure management, or DevOps tooling. | 2 / 3 |
| Total | | 6 / 12 |

Passed

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is an exhaustive API reference that suffers from extreme verbosity—most of the content is repetitive console.log statements that demonstrate field access patterns Claude could easily infer. The content would benefit greatly from being condensed to interface/type definitions with brief usage snippets, with detailed examples moved to separate reference files. The lack of workflow guidance (setup → collect → query → alert) and error handling patterns limits its practical utility.

Suggestions

- Replace verbose console.log examples with TypeScript interface definitions (e.g., `interface SystemMetrics { cpuUsage: number; memoryUsed: number; ... }`) and one brief usage example, cutting token count by ~60%.

- Split into SKILL.md (overview + quick start + best practices) and separate reference files (API_REFERENCE.md, CHAT_COMMANDS.md) with clear navigation links.

- Add a workflow section showing the typical setup-to-monitoring sequence: initialize → start collection → verify data flowing → set alerts → generate reports, with validation at each step.

- Remove or drastically condense the chat commands section; listing commands without explaining the system they run in provides limited actionable value.
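As a sketch of the first suggestion, the field-by-field console.log examples could collapse into a type definition plus one usage line. Everything below is hypothetical: the 'clodds/metrics' API is replaced with a stand-in collector, and the field names are illustrative rather than taken from the skill.

```typescript
// Hypothetical condensed shape of the metrics payload; the real
// 'clodds/metrics' types (if the library exists) may differ.
interface SystemMetrics {
  cpuUsage: number;     // percent, 0-100
  memoryUsed: number;   // bytes
  memoryTotal: number;  // bytes
  diskUsage: number;    // percent, 0-100
}

// Stand-in collector: a real skill would import and call the library here.
function collectMetrics(): SystemMetrics {
  return {
    cpuUsage: 12.5,
    memoryUsed: 4 * 1024 ** 3,
    memoryTotal: 16 * 1024 ** 3,
    diskUsage: 61.2,
  };
}

// One brief usage example replaces dozens of per-field console.log lines.
const m = collectMetrics();
const memPct = ((m.memoryUsed / m.memoryTotal) * 100).toFixed(1);
console.log(`cpu ${m.cpuUsage}% | mem ${memPct}% | disk ${m.diskUsage}%`);
```

An agent reading the interface can infer every field-access pattern the verbose examples spelled out, which is what makes the ~60% token cut plausible.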

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose, with extensive console.log boilerplate that adds no instructional value. The repetitive pattern of logging every field inflates token count massively. Much of this could be condensed to interface definitions and brief usage examples. | 1 / 3 |
| Actionability | Provides concrete TypeScript code examples with specific API calls, but the code appears to be for a hypothetical 'clodds/metrics' library that may not exist, making it not truly executable. The chat commands section also lists commands without explaining the underlying system. | 2 / 3 |
| Workflow Clarity | The skill is structured as an API reference rather than a workflow, with sections logically organized by metric type. However, there is no guidance on sequencing (e.g., start collection before querying), no validation steps, and no error-handling patterns for when metrics collection fails. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with ~250 lines of inline API reference that should be split into separate files. There are no references to external documents, and the entire API surface is dumped into a single file with no layering. | 1 / 3 |
| Total | | 6 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |
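For context, this warning fires when a key the spec does not recognize sits at the top level of the frontmatter. The offending key is not named in the report, so the key below is purely hypothetical; moving it under `metadata`, as the message suggests, is what clears the warning:

```yaml
# Before: 'author' (hypothetical key) at the top level triggers the warning
---
name: metrics
description: System metrics, telemetry, and performance monitoring
author: alsk1992
---

# After: the unrecognized key is nested under 'metadata'
---
name: metrics
description: System metrics, telemetry, and performance monitoring
metadata:
  author: alsk1992
---
```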

Passed

Repository: alsk1992/CloddsBot (Reviewed)

