Maps architectural components in a codebase and measures their size to identify what should be extracted first. Use when asking "how big is each module?", "what components do I have?", "which service is too large?", "analyze codebase structure", "size my monolith", or planning where to start decomposing. Do NOT use for runtime performance sizing or infrastructure capacity planning.
Quality: 63% — does it follow best practices?
Impact: Pending — no eval scenarios have been run.
Issues: Passed — no known issues.
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./packages/skills-catalog/skills/(architecture)/component-identification-sizing/SKILL.md
```

Quality
Discovery
Score: 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that hits all the marks. It provides specific concrete actions, includes a rich set of natural trigger phrases users would actually say, explicitly addresses both what and when, and includes negative triggers to prevent misuse. The 'Do NOT use' clause is a particularly strong addition for distinctiveness.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Maps architectural components', 'measures their size', 'identify what should be extracted first'. These are clear, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (maps architectural components and measures size to identify extraction priorities) and 'when' (explicit 'Use when' clause with multiple trigger phrases, plus a 'Do NOT use' exclusion clause). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms: 'how big is each module', 'what components do I have', 'which service is too large', 'analyze codebase structure', 'size my monolith', 'decomposing'. These are phrases users would naturally say. The negative triggers ('Do NOT use for runtime performance sizing or infrastructure capacity planning') further improve precision. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche (codebase architectural analysis and sizing for decomposition). The explicit 'Do NOT use' clause for runtime performance and infrastructure capacity planning actively prevents conflicts with related but different skills. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation
Score: 27%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is comprehensive in coverage but severely over-engineered for a SKILL.md file. It explains concepts Claude already understands (standard deviation, what components are, how to count statements), repeats threshold information multiple times, and includes extensive inline content that should be split into reference files. The core value—identifying leaf-node components and sizing them with statement counts and statistical analysis—is buried under excessive verbosity.
Suggestions
- Cut the content by at least 60%: remove explanations of basic concepts (what a component is, how std dev works, what statements are per language), the 'Quick Start' prompt examples that just describe what the skill does, and deduplicate threshold information that appears in 3+ places.
- Extract the output format templates, fitness function code, and language-specific implementation notes into separate reference files (e.g., OUTPUT_FORMATS.md, FITNESS_FUNCTIONS.md) and link to them from a concise overview.
- Add explicit validation checkpoints in the workflow, such as 'Verify component count matches expected directory structure before proceeding to sizing' and 'Sanity-check total statement count against known codebase size'.
- Replace the descriptive 'Usage Examples' (which just list what the skill will do) with a single concrete worked example showing actual input directory structure and resulting output table.
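The review's call for executable tooling, rather than pseudocode-level counting rules, could be satisfied by something like the following minimal sketch. The directory-per-component convention, file extensions, and comment heuristics here are illustrative assumptions, not the reviewed skill's actual rules:

```python
import os
from collections import defaultdict

def count_statements(path):
    # Crude statement proxy: non-blank lines that are not pure comments.
    count = 0
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith(("#", "//")):
                count += 1
    return count

def size_components(root, exts=(".py", ".ts", ".java")):
    # Assumption: each top-level directory under `root` is one leaf component.
    sizes = defaultdict(int)
    for dirpath, _dirs, files in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        component = rel.split(os.sep)[0] if rel != "." else "(root)"
        for name in files:
            if name.endswith(exts):
                sizes[component] += count_statements(os.path.join(dirpath, name))
    return dict(sizes)
```

A real implementation would likely delegate counting to a proper tool and parameterize the component-detection convention; the point is that a runnable script is more actionable than prose descriptions of counting rules.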
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~300+ lines. It explains basic concepts Claude already knows (what a component is, how standard deviation works, what statements are in different languages), repeats information across sections (thresholds mentioned 3+ times), and includes extensive output format templates that could be much more concise. The 'Usage Examples' section describes what the skill will do rather than providing actionable content. | 1 / 3 |
| Actionability | The skill provides some concrete guidance like the fitness function code examples and the output table formats, but much of the content is descriptive rather than instructive ('The skill will: 1. Map directory structures...'). The statement counting rules are vague pseudocode-level descriptions rather than executable tools. There are no actual commands or scripts to run for the analysis itself. | 2 / 3 |
| Workflow Clarity | The three-phase process (Identify, Calculate, Assess) is clearly sequenced, and the analysis checklist provides verification steps. However, there are no explicit validation checkpoints or feedback loops for error recovery—e.g., what happens if component identification is ambiguous, or if statement counts seem wrong. The workflow describes what to do but lacks 'verify before proceeding' gates. | 2 / 3 |
| Progressive Disclosure | The entire skill is a monolithic wall of text with no references to external files. Content that could be split out (detailed output format templates, language-specific statement counting rules, fitness function code, implementation notes per language) is all inline, making the document very long. The 'Next Steps' section references other patterns but the skill itself has no progressive disclosure structure. | 1 / 3 |
| Total | | 6 / 12 (Passed) |
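The fitness function code the table refers to is not reproduced in this review. As a hedged illustration only, a size fitness function built on the mean-plus-standard-deviation idea the review mentions might look like this (the threshold multiplier `k` and the flagging rule are illustrative assumptions):

```python
from statistics import mean, stdev

def oversized_components(sizes, k=1.0):
    # `sizes` maps component name -> statement count.
    # Flag components more than k standard deviations above the mean,
    # largest first. With fewer than 2 components, stdev is undefined.
    counts = list(sizes.values())
    if len(counts) < 2:
        return []
    threshold = mean(counts) + k * stdev(counts)
    return sorted(
        (name for name, n in sizes.items() if n > threshold),
        key=lambda name: -sizes[name],
    )
```

For example, `oversized_components({"auth": 120, "billing": 900, "search": 150})` flags only `billing`, since 900 exceeds the mean (390) by more than one sample standard deviation (about 442).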
Validation
Score: 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed.
Validation for skill structure: no warnings or errors.
Version: 81e7e0d