
design-patterns

Analyze codebase for GoF design patterns - detection, suggestions, evaluation with stack-aware adaptations

Install with Tessl CLI

npx tessl i github:FlorianBruniaux/claude-code-ultimate-guide --skill design-patterns

Overall score: 71 (1.29x)

Quality: 58% (Does it follow best practices?)

Impact: 100%, 1.29x (Average score across 3 eval scenarios)

Optimize this skill with Tessl

npx tessl skill review --optimize ./examples/skills/design-patterns/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear domain (GoF design patterns) but lacks the explicit trigger guidance needed for Claude to know when to select this skill. The actions listed are somewhat abstract, and the description would benefit from concrete examples of what 'detection' and 'suggestions' entail, plus natural language triggers users would actually use.

Suggestions

- Add a 'Use when...' clause with trigger terms like 'design pattern', 'refactor to pattern', 'singleton', 'factory', 'observer', 'architecture review'
- Replace abstract terms like 'stack-aware adaptations' with concrete actions such as 'recommends Java-specific implementations' or 'adapts patterns for Python idioms'
- Include common user phrases like 'improve code structure', 'apply design patterns', or 'identify anti-patterns'
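Taken together, the suggestions might look something like the sketch below in the skill's frontmatter. The field names (`name`, `description`) follow the common SKILL.md convention; the wording is illustrative, not the maintainer's actual description:

```yaml
# Hypothetical rewrite of the frontmatter description (illustrative only)
name: design-patterns
description: >
  Analyze a codebase for GoF design patterns: detect existing patterns,
  suggest pattern-based refactorings (e.g. Singleton, Factory, Observer),
  and adapt recommendations to the stack (Java-specific implementations,
  Python idioms). Use when the user asks to "apply design patterns",
  "refactor to a pattern", "review the architecture", or
  "identify anti-patterns".
```

A description shaped this way names concrete actions and embeds the natural-language triggers the review asks for.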

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (GoF design patterns) and lists some actions (detection, suggestions, evaluation), but these are somewhat abstract. 'Stack-aware adaptations' is vague and doesn't explain what concrete actions are performed. | 2 / 3 |
| Completeness | Describes what it does (analyze for patterns) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing explicit trigger guidance caps this at 2, but the 'what' is also weak, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Includes 'GoF design patterns', 'codebase', and 'detection', which are relevant, but misses common variations users might say, like 'refactor', 'architecture review', 'singleton', 'factory pattern', or specific pattern names. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'GoF design patterns' provides some specificity that distinguishes it from general code analysis skills, but 'analyze codebase' is generic enough to potentially conflict with other code review or architecture skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |

Implementation

85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, highly actionable skill with excellent workflow clarity and appropriate progressive disclosure to reference files. The main weakness is verbosity: the full output-format examples and detailed tables could be trimmed or moved to reference files, as the skill currently spends significant tokens on example JSON/Markdown that could be summarized.

Suggestions

- Move the full JSON and Markdown output examples to a separate `examples/output-formats.md` file and reference it, keeping only abbreviated schemas in the main skill
- Condense the stack adaptation table into the referenced `signatures/stack-patterns.yaml` file rather than duplicating it inline
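Applied to the skill, the first suggestion might reduce the output-format section to something like the following sketch. The file name comes from the suggestion itself; the field names (`pattern`, `location`, `confidence`) are assumed for illustration, not taken from the skill:

```markdown
## Output formats

Results are emitted as JSON (objects with `pattern`, `location`, and
`confidence` fields) or as a Markdown report. Full example payloads
live in `examples/output-formats.md`; only this abbreviated schema is
kept in SKILL.md.
```

Keeping the schema summary inline while linking out the full examples preserves actionability without the token cost.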

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is comprehensive but verbose at ~500 lines. While the content is relevant, there's significant repetition (e.g., example invocations repeated, output formats shown in full detail that could be referenced). Some sections like the adaptation table and full JSON examples could be condensed or moved to reference files. | 2 / 3 |
| Actionability | Excellent actionability with concrete detection rules, specific grep patterns, executable code examples, and detailed JSON/Markdown output schemas. The workflow steps are specific and the example invocations show exact command syntax. | 3 / 3 |
| Workflow Clarity | Each operating mode has a clear numbered workflow with explicit phases. The methodology section breaks down each phase with specific steps, detection rules, and decision logic. The IF/ELSE adaptation logic provides clear decision trees for suggestions. | 3 / 3 |
| Progressive Disclosure | Well-structured with clear sections for each mode, methodology phases, and output formats. References 8 external files (patterns-index.yaml, detection-rules.yaml, etc.) for detailed content, keeping the main skill as an overview with navigation to deeper materials. | 3 / 3 |
| Total | | 11 / 12 (Passed) |

Validation

72%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 8 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (566 lines); consider splitting into references/ and linking | Warning |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 8 / 11 (Passed) |
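The two frontmatter warnings could be addressed together. A hedged sketch follows: the tool names shown are standard Claude Code tools chosen for illustration, and the keys moved under `metadata` are placeholders, since the skill's actual frontmatter values are not shown in this report:

```yaml
# Illustrative only; the skill's real keys and tools are unknown here
name: design-patterns
description: Analyze codebase for GoF design patterns
allowed-tools: Read, Grep, Glob   # restrict to standard tool names
metadata:
  author: FlorianBruniaux         # relocate unknown top-level keys here
```

The line-count warning is separate and is covered by the implementation suggestions above (splitting content into references/).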
