Detect, suggest, and evaluate GoF design patterns in TypeScript/JavaScript codebases. Use when refactoring code, applying singleton/factory/observer/strategy patterns, reviewing pattern quality, or finding stack-native alternatives for React, Angular, NestJS, and Vue.
- Quality: 67% (Does it follow best practices?)
- Impact: Pending (No eval scenarios have been run)
- Passed (No known issues)
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./examples/skills/design-patterns/SKILL.md`

Quality
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly communicates specific capabilities (detect, suggest, evaluate design patterns), names concrete patterns and frameworks, and includes an explicit 'Use when' clause with natural trigger terms. It is concise, uses third-person voice, and occupies a distinct niche that minimizes conflict with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Detect, suggest, and evaluate GoF design patterns' along with specific pattern names (singleton/factory/observer/strategy) and specific frameworks (React, Angular, NestJS, Vue). Also mentions 'reviewing pattern quality' and 'finding stack-native alternatives'. | 3 / 3 |
| Completeness | Clearly answers both 'what' (detect, suggest, and evaluate GoF design patterns in TS/JS codebases) and 'when' (explicit 'Use when' clause covering refactoring, applying specific patterns, reviewing pattern quality, or finding stack-native alternatives). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'design patterns', 'refactoring', 'singleton', 'factory', 'observer', 'strategy', 'TypeScript', 'JavaScript', 'React', 'Angular', 'NestJS', 'Vue', and 'GoF'. These cover a wide range of terms a developer would naturally use when seeking pattern guidance. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche: GoF design patterns specifically in TypeScript/JavaScript with named frameworks. Unlikely to conflict with general code review, refactoring, or framework-specific skills because the focus on design pattern detection/evaluation is very specific. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is comprehensive in scope but severely over-specified inline, violating token efficiency by including extensive output templates, exhaustive pattern/smell enumerations, and detailed scoring rubrics that Claude already knows or that belong in referenced files. The workflows are described at a conceptual level without concrete tool-use sequences or validation checkpoints. The reference file structure is well-designed but underutilized — most of that content should live in those files rather than being duplicated in the main SKILL.md.
Suggestions
- Move the full JSON/Markdown output examples, code smell lists, detection heuristics, and scoring guidelines into the referenced files (signatures/*.yaml, checklists/*.md) and keep only a brief summary or single compact example in SKILL.md.
- Replace high-level workflow descriptions with concrete tool-use sequences specifying which tools to call (e.g., Bash for grep/glob, file reads for validation) and add explicit validation checkpoints (e.g., 'If package.json not found, report error and stop').
- Remove explanations of concepts Claude already knows (GoF pattern definitions, what SOLID principles are, what code smells are) and focus only on project-specific conventions and decision rules.
- Add a concise 'Quick Start' section at the top showing one complete end-to-end example of a single mode (e.g., detection) with actual tool invocations rather than CLI-style pseudo-commands.
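To make the second suggestion concrete, a detect-mode step with an explicit validation checkpoint could be sketched roughly as follows. This is illustrative only: the function name, the grep signatures, and the `src/` layout are assumptions, not taken from the skill under review.

```shell
# Sketch: one detect-mode tool sequence with a validation checkpoint.
detect_singletons() {
  src_dir="${1:-src}"
  # Checkpoint: stack detection needs package.json; report error and stop.
  if [ ! -f package.json ]; then
    echo "error: package.json not found; cannot detect stack" >&2
    return 1
  fi
  # Heuristic signature search: classic singleton markers in .ts files.
  # Falls through to an explicit "no matches" message instead of silence.
  grep -rn -E 'private constructor|static getInstance' "$src_dir" \
    --include='*.ts' || echo "no singleton signatures found in $src_dir"
}
```

The point is less the specific grep pattern than the shape: every step names the tool it uses and states what happens when its precondition fails, which is what the reviewed workflows currently leave implicit.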
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~400+ lines. Includes extensive JSON output examples, full markdown report templates, detailed tables, and exhaustive enumeration of all 23 GoF patterns, code smells, detection heuristics, and scoring guidelines. Much of this (what GoF patterns are, how to detect them, scoring rubrics) is knowledge Claude already possesses. The output format examples alone consume hundreds of tokens that could be summarized or referenced externally. | 1 / 3 |
| Actionability | Provides concrete output format examples and some detection heuristics (grep patterns, file naming conventions), but the actual execution guidance is pseudocode-level (numbered workflow steps like 'Stack Detection → Pattern Search → Classification'). The CLI-style invocations (`/design-patterns detect src/`) suggest a slash command interface but don't specify how to actually implement the analysis using available tools (Bash, file reading, etc.). The skill describes what to do conceptually but lacks executable tool-use sequences. | 2 / 3 |
| Workflow Clarity | Each mode has a numbered workflow, but these are high-level descriptions rather than actionable step sequences with validation checkpoints. There are no explicit validation or error-handling steps (e.g., what happens if package.json is missing, if no patterns are found, or if confidence is ambiguous). The adaptation logic pseudocode (IF/ELSE) provides some decision structure but lacks feedback loops. | 2 / 3 |
| Progressive Disclosure | References external files well (signatures/*.yaml, checklists/*.md, reference/*.md) and has a clear Reference Files section. However, the SKILL.md itself is monolithic — the full output format examples, the complete code smell list, the entire adaptation table, and detailed scoring guidelines are all inline when they should be in the referenced files. The skill would benefit greatly from moving the bulk content to reference files and keeping only the overview here. | 2 / 3 |
| Total | | 7 / 12 Passed |
Validation: 72% (8 / 11 Passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (567 lines); consider splitting into references/ and linking | Warning |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 8 / 11 Passed | |