# Skill review: shape (`./hope/skills/shape/SKILL.md`)

> Bridge WHAT (intent) to HOW (implementation). Use when spec is clear but approach is not. Triggers on "shape this", "how should I build", "implementation approach".

Overall score: 64

- Quality: 54% (does it follow best practices?)
- Impact: 72% (1.53x average score across 3 eval scenarios)

Advisory: suggest reviewing before use.

To optimize this skill with Tessl, run:

```
npx tessl skill review --optimize ./hope/skills/shape/SKILL.md
```

## Quality
### Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description relies heavily on abstract meta-language ('Bridge WHAT to HOW') without specifying concrete actions, outputs, or domain. While it includes trigger terms and a 'Use when' clause, the vagueness of the capability description and the generic nature of the triggers make it poorly suited for disambiguation among multiple skills.
Suggestions:

- Replace the abstract 'Bridge WHAT to HOW' with specific concrete actions, e.g., 'Generates implementation plans, architecture diagrams, and technical design documents from product specifications'
- Add domain-specific trigger terms and narrow the scope, e.g., 'Use when the user has a feature spec or requirements document and needs a technical design, system architecture, or step-by-step implementation plan'
- Clarify what distinguishes this skill from general coding or planning skills by specifying the input format (e.g., specs, PRDs) and output format (e.g., implementation roadmap, architecture proposal)
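Combining these suggestions, a revised frontmatter for the skill might look like the following sketch. All wording is illustrative, not the one required phrasing; the `name`/`description` field layout is assumed to match the skill's existing frontmatter.

```yaml
# Hypothetical revision of the SKILL.md frontmatter; wording is illustrative.
name: shape
description: >
  Generates implementation plans, architecture proposals, and step-by-step
  technical designs from product specs, PRDs, or requirements documents.
  Use when the spec is clear but the user needs a technical design, system
  architecture, or implementation roadmap. Triggers on "shape this",
  "how should I build", "implementation approach", "architecture",
  "design", "plan how to build".
```

Note how the revision names concrete inputs (specs, PRDs), concrete outputs (plans, architecture proposals), and a broader set of natural trigger phrases, addressing all three suggestions at once.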
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses abstract language like 'Bridge WHAT (intent) to HOW (implementation)' without listing any concrete actions. There are no specific capabilities described: no mention of what kind of output is produced, what domain it applies to, or what concrete steps it performs. | 1 / 3 |
| Completeness | It has a 'Use when' clause and trigger terms, which addresses the 'when' question. However, the 'what' is extremely vague: 'Bridge WHAT to HOW' doesn't clearly explain what the skill actually does or produces. The 'what' is too abstract to be considered clearly answered. | 2 / 3 |
| Trigger Term Quality | It includes some trigger phrases ('shape this', 'how should I build', 'implementation approach') which are somewhat natural, but the coverage is narrow and the terms are fairly generic. Users might say many other things when needing implementation guidance (e.g., 'architecture', 'design', 'plan how to build'). | 2 / 3 |
| Distinctiveness / Conflict Risk | The description is extremely generic and could overlap with almost any planning, architecture, design, or coding skill. 'Bridge intent to implementation' could apply to countless different skills, and trigger terms like 'how should I build' are broad enough to conflict with many other skills. | 1 / 3 |
| Total |  | 6 / 12 (Passed) |
### Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-crafted, highly actionable skill with excellent workflow clarity and strong validation checkpoints. Its main weakness is that it packs a lot of content into a single file without leveraging progressive disclosure: the reasoning toolkit, audit checklists, and presentation rules could be split into referenced files. Some sections could be tightened for conciseness without losing clarity.
Suggestions:

- Extract the reasoning toolkit (Step 3, item 2) and audit checklists (Steps 1 and 5) into separate referenced files to improve progressive disclosure and reduce the main file's token footprint.
- Tighten the Presentation section: the monospace formatting rules could be condensed into a brief example rather than described in multiple bullet points.
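A minimal sketch of that split, assuming hypothetical file names (`audit-checklists.md`, `reasoning-toolkit.md`, `presentation.md`) placed alongside SKILL.md. The main file keeps only the workflow skeleton and links out to the heavy detail:

```markdown
<!-- SKILL.md after the split; section names follow the skill's existing
     steps, and the referenced file names are illustrative. -->

## Step 1: Extract
Run the entry audit in [audit-checklists.md](audit-checklists.md).

## Step 3: Shape loop
Apply the techniques in [reasoning-toolkit.md](reasoning-toolkit.md);
techniques stay internal, only conclusions surface.

## Presentation
Follow the condensed example in [presentation.md](presentation.md).
```

The agent then loads each referenced file only when it reaches that step, which is the progressive-disclosure behavior the suggestion is asking for.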
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly detailed and well-structured, but includes some verbose explanations that could be tightened. For example, the reasoning toolkit in Step 3 lists many techniques with parenthetical guidance that adds length. Some instructions like 'Techniques are internal' and the philosophical framing ('every library not written is ~1000 bugs avoided') add flavor but consume tokens. However, it largely avoids explaining things Claude already knows. | 2 / 3 |
| Actionability | The skill provides highly concrete, executable guidance: specific AskUserQuestion usage patterns, exact Skill tool invocation syntax with args, detailed formatting rules (ALL CAPS headers, ~50 char lines, ASCII tables), checklists for audits, and clear decision criteria (Type 1 vs Type 2, high-stakes vs exploratory). Each step has specific actions to take, not vague descriptions. | 3 / 3 |
| Workflow Clarity | The 5-step workflow (Extract → Scope → Shape loop → Integrate → Emit) is clearly sequenced with explicit validation checkpoints (entry audit, exit audit), feedback loops (Explore further → regenerate, Revisit a dimension → return to shape loop), and clear branching logic (collapse to Step 1 → Step 5 for simple tasks). The pre-mortem and cross-dimension tension check serve as integration validation. | 3 / 3 |
| Progressive Disclosure | The skill is a single monolithic file with substantial content (~150+ lines). While internally well-organized with clear headers and steps, it doesn't reference any external files for detailed content like the reasoning toolkit, audit checklists, or presentation formatting rules, all of which could be split out. The consult skill is referenced via the Skill tool, which is good, but the main file carries a lot of inline detail. | 2 / 3 |
| Total |  | 10 / 12 (Passed) |
### Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 checks passed. Validation for skill structure reported no warnings or errors.