Skill description under review (quoted verbatim, flaws included, since the review below critiques it):

> Execute this skill enables AI assistant to analyze capacity requirements and plan for future growth. it uses the capacity-planning-analyzer plugin to assess current utilization, forecast growth trends, and recommend scaling strategies. use this skill when the u... Use when analyzing code or data. Trigger with phrases like 'analyze', 'review', or 'examine'.
Overall score: 13%
Evals: pending (no eval scenarios have been run)
Validation: passed
Known issues: none
Optimize this skill with Tessl:

npx tessl skill review --optimize ./plugins/performance/capacity-planning-analyzer/skills/analyzing-capacity-planning/SKILL.md

Quality

Discovery: 27%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description suffers from internal contradictions—the body describes capacity planning but the trigger clause references generic code/data analysis. The text is truncated, uses first/third person inconsistently ('enables AI assistant'), and the trigger terms are far too generic to distinguish this skill from others. The capacity planning specifics are promising but are undermined by poor trigger guidance and sloppy formatting.
Suggestions
Replace the generic trigger clause with capacity-planning-specific triggers: 'Use when the user asks about capacity planning, resource utilization, scaling strategies, growth forecasting, or infrastructure sizing.'
Remove the truncated text and 'Execute this skill enables AI assistant' preamble; rewrite in clean third person (e.g., 'Analyzes capacity requirements and plans for future growth...').
Add natural trigger terms users would actually say, such as 'capacity', 'scaling', 'utilization', 'growth forecast', 'infrastructure planning', 'resource allocation'.
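Taken together, the suggestions above might produce frontmatter along these lines. This is an illustrative sketch only; the field names follow common SKILL.md conventions and the wording is hypothetical, not the skill's actual metadata:

```yaml
---
name: analyzing-capacity-planning
description: >
  Analyzes capacity requirements and plans for future growth using the
  capacity-planning-analyzer plugin: assesses current utilization, forecasts
  growth trends, and recommends scaling strategies. Use when the user asks
  about capacity planning, resource utilization, scaling, growth forecasting,
  infrastructure sizing, or resource allocation.
---
```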
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names a domain (capacity planning) and some actions (analyze capacity requirements, assess current utilization, forecast growth trends, recommend scaling strategies), but the description is muddled by starting with 'Execute this skill enables AI assistant', and the truncated text ('the u...') undermines clarity. | 2 / 3 |
| Completeness | It attempts to answer both 'what' (capacity planning analysis) and 'when' ('Use when analyzing code or data'), but the 'when' clause is contradictory: 'analyzing code or data' doesn't match the capacity planning domain. The description is also truncated ('the u...'). | 2 / 3 |
| Trigger Term Quality | The trigger terms provided ('analyze', 'review', 'examine') are extremely generic and would match virtually any analytical task. They are not natural keywords specific to capacity planning (e.g., 'capacity', 'scaling', 'utilization', 'growth forecast'). | 1 / 3 |
| Distinctiveness / Conflict Risk | The trigger clause 'Use when analyzing code or data' with generic triggers like 'analyze', 'review', 'examine' would conflict with virtually any analytical skill. The capacity planning specificity in the body is undermined by the overly broad trigger guidance. | 1 / 3 |
| Total | | 6 / 12 (Passed) |
Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is almost entirely generic boilerplate with no actionable content. It never shows how to invoke the 'capacity-planning-analyzer' plugin, provides no code examples, no configuration, no sample input/output, and no concrete commands. Every section reads like a template placeholder rather than a real skill definition.
Suggestions
Add concrete plugin invocation syntax showing exactly how to call the capacity-planning-analyzer with specific parameters (e.g., `capacity_planning_analyze(metrics=['cpu', 'memory'], period='30d')`).
Replace the abstract examples with real input/output pairs showing actual data formats, such as a sample JSON input with utilization metrics and the expected structured output with recommendations.
Remove all generic filler sections (Prerequisites, Instructions, Output, Error Handling, Resources) that contain no specific information, or populate them with real details about the plugin's requirements and error codes.
Add a concrete workflow with validation steps, e.g., how to verify data quality before analysis, how to validate forecast accuracy, and what to do when the plugin returns unexpected results.
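The kind of concrete input/output pair the suggestions above call for might look like the following. This is a hedged sketch only: the real capacity-planning-analyzer plugin API is not documented in the skill, so every name and parameter here (`analyze_capacity`, `growth_rate`, `horizon_months`, the JSON shape) is a hypothetical stand-in for what the skill should actually specify:

```python
import json

def analyze_capacity(metrics: dict, growth_rate: float, horizon_months: int,
                     threshold: float = 0.8) -> dict:
    """Flag any resource whose projected utilization exceeds `threshold`.

    Projects each resource's current utilization forward with simple
    compound monthly growth; a real analyzer would use historical trends.
    """
    recommendations = []
    for resource, utilization in metrics.items():
        projected = utilization * (1 + growth_rate) ** horizon_months
        if projected > threshold:
            recommendations.append({
                "resource": resource,
                "current": utilization,
                "projected": round(projected, 2),
                "action": "scale up before capacity is exceeded",
            })
    return {"horizon_months": horizon_months,
            "recommendations": recommendations}

# Sample input: current utilization as a fraction of capacity.
sample_input = {"cpu": 0.62, "memory": 0.71, "disk": 0.40}
result = analyze_capacity(sample_input, growth_rate=0.05, horizon_months=6)
print(json.dumps(result, indent=2))
```

Pairing a sample like this with the expected structured output would let an agent verify the plugin's results instead of trusting abstract descriptions.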
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive explanation of concepts Claude already knows. The 'Overview' section restates the title, 'How It Works' describes obvious steps at a high level, 'When to Use' repeats the description, and sections like 'Best Practices', 'Integration', 'Prerequisites', 'Instructions', 'Output', 'Error Handling', and 'Resources' are all generic filler with no specific content. Nearly every token is wasted. | 1 / 3 |
| Actionability | There is no concrete code, no executable commands, no specific API calls, no configuration examples, and no actual plugin invocation syntax. The entire skill describes what it will do in abstract terms ('The skill will analyze...') without ever showing how. The 'Instructions' section is completely generic ('Invoke this skill when the trigger conditions are met'). | 1 / 3 |
| Workflow Clarity | While numbered steps exist, they are entirely abstract descriptions of what happens rather than actionable workflows. There are no validation checkpoints, no error recovery loops, no specific commands to run, and no concrete sequence a user or Claude could follow. The 'Instructions' section is four generic bullet points with no real workflow. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files, no links to detailed documentation, and no meaningful structure beyond generic section headers. All sections contain shallow, repetitive content rather than being organized for progressive depth. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
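Both warnings are typically resolved in the frontmatter itself. A hedged sketch follows; the tool names and metadata keys are assumptions, since the skill's actual frontmatter is not shown in this report:

```yaml
---
name: analyzing-capacity-planning
description: Analyzes capacity requirements and recommends scaling strategies.
# allowed_tools_field: keep only tool names the agent platform recognizes.
allowed-tools: Read, Grep, Bash
# frontmatter_unknown_keys: move unrecognized top-level keys under `metadata`
# rather than leaving them where the validator flags them as unknown.
metadata:
  category: performance
  plugin: capacity-planning-analyzer
---
```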
Revision: 4dee593