Collect comprehensive infrastructure performance metrics across compute, storage, network, containers, load balancers, and databases. Use when monitoring system performance or troubleshooting infrastructure issues. Trigger with phrases like "collect infrastructure metrics", "monitor server performance", or "track system resources".
Install with Tessl CLI
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill collecting-infrastructure-metrics60
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that excels across all dimensions. It names specific infrastructure domains, includes natural trigger phrases users would actually say, clearly separates the 'what' and 'when' clauses, and establishes a distinct niche for infrastructure monitoring. The description uses proper third-person voice throughout.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete domains ('compute, storage, network, containers, load balancers, and databases'), providing comprehensive coverage of infrastructure components and clearly describing the action of collecting performance metrics. | 3 / 3 |
| Completeness | Clearly answers both what ('Collect comprehensive infrastructure performance metrics across compute, storage, network, containers, load balancers, and databases') and when ('Use when monitoring system performance or troubleshooting infrastructure issues'), with explicit trigger phrases. | 3 / 3 |
| Trigger Term Quality | Includes excellent natural trigger phrases users would say ('collect infrastructure metrics', 'monitor server performance', 'track system resources'), plus domain terms like 'compute', 'storage', 'network', and 'containers' that users naturally mention. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on infrastructure metrics collection, with distinct triggers around monitoring and performance tracking. The specific mention of infrastructure components (containers, load balancers, databases) creates a well-defined scope unlikely to conflict with general coding or document skills. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Implementation — 20%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a product description or README than actionable guidance for Claude. It extensively describes what the skill does and when to use it, but provides no concrete code, configuration examples, or executable commands. The content assumes Claude needs to be told what infrastructure monitoring is, rather than shown how to configure specific tools.
Suggestions
Replace abstract descriptions with executable configuration examples (e.g., actual Prometheus scrape configs, Datadog agent YAML, CloudWatch metric filters)
Remove meta-sections like 'How It Works', 'When to Use This Skill', and 'Overview' - these explain concepts Claude already understands
Add concrete code snippets for each agent type showing actual installation commands and configuration file contents
Include validation steps with specific commands to verify metrics are being collected (e.g., 'curl localhost:9090/metrics' for Prometheus)
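To illustrate the kind of executable content the suggestions above call for, here is a minimal sketch of a Prometheus scrape configuration for a node_exporter target. The job name, scrape interval, and target address are placeholder assumptions, not values taken from the skill itself:

```yaml
# prometheus.yml — minimal sketch (job name and target are hypothetical)
scrape_configs:
  - job_name: "node"          # label applied to all metrics from this job
    scrape_interval: 15s      # how often Prometheus pulls metrics
    static_configs:
      - targets: ["localhost:9100"]   # default node_exporter port
```

After reloading Prometheus with this config, collection can be verified with the command the suggestions mention, e.g. `curl localhost:9090/metrics`, or by checking the target's status on the Prometheus Targets page.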
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive explanation of concepts Claude already knows. Sections like 'How It Works', 'When to Use This Skill', and 'Overview' are redundant meta-descriptions that add no actionable value. The content explains what the skill does rather than how to do it. | 1 / 3 |
| Actionability | No executable code, commands, or concrete configuration examples. The 'Instructions' section lists abstract steps like 'Configure agent with target endpoints' without showing actual configuration syntax, commands, or file formats. Examples describe what the skill 'will do' rather than showing how. | 1 / 3 |
| Workflow Clarity | Steps are listed in the Instructions section in a logical sequence, and Error Handling provides troubleshooting guidance. However, there are no validation checkpoints, no feedback loops for verifying successful configuration, and no explicit verification steps between stages. | 2 / 3 |
| Progressive Disclosure | Content is organized into sections but everything is inline in one file. The Resources section mentions external documentation but does not link to project-specific detailed guides. Content that could be split out (agent-specific configs, dashboard templates) is described abstractly rather than referenced. | 2 / 3 |
| Total | | 6 / 12 Passed |
Validation — 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 13 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 13 / 16 Passed | |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.