Collect comprehensive infrastructure performance metrics across compute, storage, network, containers, load balancers, and databases. Use when monitoring system performance or troubleshooting infrastructure issues. Trigger with phrases like "collect infrastructure metrics", "monitor server performance", or "track system resources".
Does it follow best practices?
Impact

- Pending — no eval scenarios have been run
- Passed — no known issues
Optimize this skill with Tessl:

npx tessl skill review --optimize ./plugins/performance/infrastructure-metrics-collector/skills/collecting-infrastructure-metrics/SKILL.md

Quality
Discovery — 92%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that clearly communicates its purpose, lists specific infrastructure domains, and provides explicit trigger guidance. Its main weakness is the broad scope covering many infrastructure areas, which could create overlap with more specialized monitoring skills. The inclusion of both a 'Use when' clause and explicit trigger phrases is a strong pattern.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific infrastructure domains: compute, storage, network, containers, load balancers, and databases. The description names concrete areas of metric collection rather than using vague language. | 3 / 3 |
| Completeness | Clearly answers both 'what' (collect comprehensive infrastructure performance metrics across specific domains) and 'when' (an explicit 'Use when' clause for monitoring/troubleshooting, plus explicit trigger phrases). | 3 / 3 |
| Trigger Term Quality | Includes natural trigger phrases like 'collect infrastructure metrics', 'monitor server performance', and 'track system resources', plus domain terms like 'compute', 'storage', 'network', 'containers', 'load balancers', and 'databases' that users would naturally mention. | 3 / 3 |
| Distinctiveness / Conflict Risk | While it specifies infrastructure metrics collection, terms like 'monitor server performance' and 'track system resources' could overlap with other monitoring or observability skills. The broad scope (compute, storage, network, containers, load balancers, databases) increases potential conflict with more specialized monitoring skills. | 2 / 3 |
| Total | | 11 / 12 — Passed |
Implementation — 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content is almost entirely descriptive and abstract, reading more like a feature overview or marketing document than actionable instructions for Claude. It lacks any concrete code, configuration examples, CLI commands, or executable guidance. Every section describes what should happen in general terms without showing how to actually do it.
Suggestions
- Replace the abstract 'Instructions' section with concrete, executable steps including actual CLI commands (e.g., `prometheus --config.file=prometheus.yml`) and example configuration files (e.g., a complete prometheus.yml scrape config).
- Add copy-paste-ready configuration snippets for each supported tool (Prometheus, Datadog, CloudWatch) with real metric names, endpoints, and scrape intervals.
- Remove the 'Overview', 'How It Works', 'When to Use This Skill', 'Integration', and 'Best Practices' sections entirely; they explain concepts Claude already knows and consume tokens without adding actionable value.
- Add explicit validation checkpoints (e.g., 'Run `curl localhost:9090/api/v1/targets` to verify Prometheus is scraping targets successfully') and error recovery loops for each step in the workflow.
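As an illustration of the first two suggestions, a minimal prometheus.yml scrape configuration might look like the sketch below. The job names, ports, and targets are hypothetical placeholders, not values taken from the skill under review:

```yaml
# Minimal Prometheus scrape configuration (hypothetical targets)
global:
  scrape_interval: 15s          # how often Prometheus scrapes each target

scrape_configs:
  - job_name: "node"            # host-level compute/storage/network metrics
    static_configs:
      - targets: ["localhost:9100"]   # node_exporter default port
  - job_name: "cadvisor"        # container metrics
    static_configs:
      - targets: ["localhost:8080"]
```

A snippet at this level of concreteness, paired with the `curl localhost:9090/api/v1/targets` checkpoint, is the kind of copy-paste-ready content the Actionability dimension is asking for.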
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive explanation of concepts Claude already knows. Sections like 'Overview', 'How It Works', 'When to Use This Skill', 'Integration', and 'Best Practices' are padded with generic descriptions that add no actionable value. The entire file reads like a product brochure rather than executable instructions. | 1 / 3 |
| Actionability | No concrete code, commands, configuration snippets, or executable examples anywhere. The 'Examples' section describes what the skill 'will do' in abstract terms rather than showing actual Prometheus configs, Datadog agent YAML, CloudWatch CLI commands, or any copy-paste-ready content. The 'Instructions' section is a vague 6-step list with no specifics. | 1 / 3 |
| Workflow Clarity | The workflow steps are entirely abstract ('Configure agent with target endpoints and metric types') with no concrete commands, no validation checkpoints, and no feedback loops. For a multi-step process involving agent installation and configuration across multiple infrastructure layers, there are no verification steps or error recovery sequences. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files for detailed configurations, examples, or per-tool guides. All sections sit at the same shallow level of abstraction with no depth anywhere. The 'Resources' section lists documentation names without links or file references. | 1 / 3 |
| Total | | 4 / 12 — Passed |
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 — Passed |
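Both warnings concern the SKILL.md frontmatter. A hedged sketch of a trimmed frontmatter block is shown below; the exact set of recognized keys and valid tool names depends on the target skill spec, so both are assumptions here:

```yaml
---
# Keep only keys the skill spec recognizes; move anything else to metadata.
name: collecting-infrastructure-metrics
description: Collect comprehensive infrastructure performance metrics across
  compute, storage, network, containers, load balancers, and databases. Use when
  monitoring system performance or troubleshooting infrastructure issues.
allowed-tools: Bash, Read, Write   # illustrative; use only tool names the spec accepts
---
```

Removing unknown keys and restricting `allowed-tools` to recognized names should clear both warnings and bring the validation score to 11 / 11.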