Skill description (as published): "Execute this skill enables AI assistant to analyze capacity requirements and plan for future growth. it uses the capacity-planning-analyzer plugin to assess current utilization, forecast growth trends, and recommend scaling strategies. use this skill when the u... Use when analyzing code or data. Trigger with phrases like 'analyze', 'review', or 'examine'."
Install with Tessl CLI:

```shell
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill analyzing-capacity-planning32
```
Quality: 17% (does it follow best practices?)

Impact: Pending (no eval scenarios have been run)
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./plugins/performance/capacity-planning-analyzer/skills/analyzing-capacity-planning/SKILL.md
```

Discovery: 27%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description suffers from internal contradictions - it describes capacity planning capabilities but then provides generic triggers for 'analyzing code or data'. The description is truncated, uses awkward meta-language ('Execute this skill enables AI assistant'), and the trigger terms are far too generic to distinguish this from other analytical skills.
Suggestions
Replace generic triggers ('analyze', 'review', 'examine') with domain-specific terms like 'capacity planning', 'scaling strategy', 'utilization analysis', 'growth forecast', 'infrastructure sizing'
Rewrite the 'Use when' clause to match the actual capability: 'Use when planning infrastructure capacity, forecasting resource needs, or evaluating scaling strategies'
Remove meta-language ('Execute this skill enables AI assistant') and use direct third-person voice: 'Analyzes capacity requirements and plans for future growth...'
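Taken together, these suggestions might produce frontmatter along the following lines. This is a hypothetical sketch, not the actual SKILL.md content; the exact name and trigger phrasing are illustrative:

```yaml
---
name: analyzing-capacity-planning
description: >
  Analyzes capacity requirements and plans for future growth using the
  capacity-planning-analyzer plugin: assesses current utilization, forecasts
  growth trends, and recommends scaling strategies. Use when planning
  infrastructure capacity, forecasting resource needs, or evaluating scaling
  strategies. Trigger with phrases like 'capacity planning', 'scaling
  strategy', 'utilization analysis', or 'growth forecast'.
---
```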
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (capacity planning) and some actions (analyze capacity requirements, assess utilization, forecast growth, recommend scaling strategies), but the description is muddled with meta-language ('Execute this skill enables AI assistant') and gets cut off mid-sentence. | 2 / 3 |
| Completeness | Has a 'what' (capacity planning analysis) and includes a 'Use when' clause, but the 'when' is contradictory and unhelpful - it says 'analyzing code or data', which doesn't match the capacity planning focus. The description is also truncated ('when the u...'). | 2 / 3 |
| Trigger Term Quality | The trigger terms provided ('analyze', 'review', 'examine') are extremely generic and would match almost any analytical task. Missing domain-specific terms like 'capacity', 'scaling', 'utilization', 'growth forecast', or 'infrastructure planning' that users would naturally say. | 1 / 3 |
| Distinctiveness / Conflict Risk | The generic triggers ('analyze', 'review', 'examine') and broad 'Use when analyzing code or data' would cause this skill to conflict with virtually any analysis-related skill. The capacity planning specificity is undermined by the overly broad trigger guidance. | 1 / 3 |
| Total | | 6 / 12 Passed |
Implementation: 7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a verbose template with no actionable content. It describes what a capacity planning analyzer might do conceptually but provides zero concrete implementation details, no actual commands or code to invoke the plugin, and generic placeholder sections. The content explains concepts Claude already understands while failing to provide the specific guidance needed to actually perform capacity planning analysis.
Suggestions
Replace abstract descriptions with concrete plugin invocation commands (e.g., actual CLI commands or API calls for the capacity-planning-analyzer plugin)
Add executable code examples showing how to analyze specific metrics, with sample input/output formats
Remove generic boilerplate sections (Prerequisites, Instructions, Output, Error Handling, Resources) that contain no specific information
Consolidate the Overview, How It Works, and When to Use sections into a single concise paragraph, then focus on actionable implementation details
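As one illustration of the kind of concrete, executable example the Actionability suggestion calls for, the skill could ship a small utilization-forecast snippet like the one below. The function name, threshold, and sample metrics are all made up for illustration; this is not part of the actual capacity-planning-analyzer plugin:

```python
# Sketch: forecast resource utilization from historical samples and report
# when a capacity threshold will be crossed. All sample data is illustrative.

def forecast_exhaustion(samples, threshold=80.0):
    """Fit a simple linear trend to (month, utilization%) samples and
    return the first month at which utilization crosses the threshold,
    or None if utilization is flat or shrinking."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    # Least-squares slope and intercept of the linear trend.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / \
            sum((x - mean_x) ** 2 for x, _ in samples)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # no growth trend, so no projected exhaustion
    return (threshold - intercept) / slope

# Monthly CPU utilization samples: (month index, percent used)
history = [(1, 40.0), (2, 46.0), (3, 52.0), (4, 58.0)]
month = forecast_exhaustion(history)
print(f"Utilization reaches 80% around month {month:.1f}")
# prints: Utilization reaches 80% around month 7.7
```

Pairing each such snippet with its sample input and expected output is what turns the abstract 'The skill will analyze...' prose into guidance an agent can actually execute.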
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive explanation of concepts Claude already knows. Sections like 'Overview', 'How It Works', and 'When to Use' repeat similar information. Generic boilerplate sections (Prerequisites, Instructions, Output, Error Handling, Resources) add no actionable value. | 1 / 3 |
| Actionability | No concrete code, commands, or executable examples. The skill describes what it will do abstractly ('The skill will analyze...') but provides no actual implementation details, API calls, or specific commands to invoke the capacity-planning-analyzer plugin. | 1 / 3 |
| Workflow Clarity | Steps are vague and lack specificity. No validation checkpoints, no concrete sequence of operations, and no feedback loops. The 'Instructions' section is generic placeholder text with no actual workflow guidance. | 1 / 3 |
| Progressive Disclosure | Content is organized into sections with headers, but it's a monolithic document with no references to external files. The structure exists but contains too much redundant content that could be trimmed rather than split. | 2 / 3 |
| Total | | 5 / 12 Passed |
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 passed
| Criteria | Description | Result |
|---|---|---|
allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
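Both warnings are typically resolved in the frontmatter itself: restrict 'allowed-tools' to standard tool names, and move unrecognized top-level keys under 'metadata' as the warning suggests. A hypothetical sketch (the flagged keys aren't listed in this report, so the key names below are illustrative):

```yaml
---
name: analyzing-capacity-planning
description: ...
allowed-tools: Read, Grep, Bash   # standard tool names only
metadata:
  author: jeremylongshore          # formerly an unknown top-level key
---
```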
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.