This skill should be used when the user says "competitive analysis", "gap analysis", "competitive gap", "stress competitive", "compare competitors", "feature comparison", "competitive stress test", "market comparison", "competitor analysis", or wants to stress-test a product concept by conducting deep competitive gap analysis with feature comparison, gap identification, and positioning assessment. Produces a competitive report with a feature matrix, per-competitor analysis, and recommended concept updates.
Advisory: no eval scenarios have been run; suggest reviewing before use.
## Quality

### Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that excels across all dimensions. It provides an extensive list of explicit trigger terms, clearly describes what the skill does and what outputs it produces, and occupies a distinct niche. The only minor weakness is that it is somewhat front-loaded with trigger terms, which makes it slightly less readable, but functionally it serves its purpose very well.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'stress-test a product concept', 'conducting deep competitive gap analysis', 'feature comparison', 'gap identification', 'positioning assessment', and specifies outputs like 'feature matrix', 'per-competitor analysis', and 'recommended concept updates'. | 3 / 3 |
| Completeness | Clearly answers both 'what' (stress-test a product concept via competitive gap analysis with feature comparison, gap identification, positioning assessment, producing a competitive report) and 'when' (explicit trigger phrases listed at the start with 'This skill should be used when...'). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms including 'competitive analysis', 'gap analysis', 'competitive gap', 'compare competitors', 'feature comparison', 'competitor analysis', 'market comparison', and 'competitive stress test'. These are terms users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche focused specifically on competitive gap analysis and stress-testing product concepts against competitors. The combination of competitive analysis and product concept stress testing creates a clear, unique identity unlikely to conflict with general analysis or product skills. | 3 / 3 |
| **Total** | | **12 / 12 (Passed)** |
### Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted, highly actionable skill with excellent workflow clarity and robust error handling. The multi-step process is clearly sequenced with appropriate validation checkpoints and fallback cascades. Minor weaknesses include some verbosity in agent invocation templates and the inability to verify referenced bundle files, though the overall structure demonstrates strong progressive disclosure intent.
#### Suggestions

- Trim the agent invocation prompt template in Step 3 — Claude can construct appropriate prompts from concise instructions rather than needing the full delimited format spelled out.
- Consider moving the detailed error handling scenarios to a reference file if the bundle supports it, keeping only a summary in the main skill body.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably well-structured but includes some verbose sections that could be tightened. The data availability table and fallback cascade are useful, but the agent invocation prompt template in Step 3 is overly detailed, with formatting instructions Claude doesn't need. Some sections, like the constraints, repeat information already implied by the workflow. | 2 / 3 |
| Actionability | The skill provides highly concrete, step-by-step guidance with specific file paths, exact agent invocation parameters, precise output formatting, and clear decision trees for fallback scenarios. Each step specifies exactly what to read, extract, pass, and write, with specific file locations and structured prompts. | 3 / 3 |
| Workflow Clarity | The 7-step workflow is clearly sequenced with explicit prerequisites, validation checkpoints (data availability checks, fallback cascades), error handling with retry logic and user prompts, and a clear final output summary. The error handling section provides specific recovery paths for multiple failure modes. | 3 / 3 |
| Progressive Disclosure | The skill references external files (gap-analysis-framework.md, competitive-report-template.md) with clear paths, which is good progressive disclosure design. However, no bundle files were provided to verify these references exist, and the SKILL.md itself is quite long (~150 lines of substantive content), with some sections that could potentially be moved to reference files (e.g., the detailed agent prompt template in Step 3). | 2 / 3 |
| **Total** | | **10 / 12 (Passed)** |
### Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

#### Validation for skill structure: 10 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **10 / 11 (Passed)** |
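To illustrate the `frontmatter_unknown_keys` warning, here is a hypothetical sketch (the key names `author` and `version`, and the skill name, are assumed for illustration and not taken from the actual skill): unrecognized top-level frontmatter keys can be nested under `metadata` to satisfy the validator.

```yaml
# Before (hypothetical): `author` and `version` are unknown top-level keys
name: competitive-gap-analysis
description: Stress-test a product concept via competitive gap analysis.
author: jane-doe
version: 1.2.0
---
# After: unknown keys moved under metadata, which the validator suggests
name: competitive-gap-analysis
description: Stress-test a product concept via competitive gap analysis.
metadata:
  author: jane-doe
  version: 1.2.0
```

The warning is advisory rather than fatal here, which is why the skill still passes 10 of 11 checks.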