Transform vague prompts into precise, well-structured specifications using EARS (Easy Approach to Requirements Syntax) methodology. This skill should be used when users provide loose requirements, ambiguous feature descriptions, or need to enhance prompts for AI-generated code, products, or documents. Triggers include requests to "optimize my prompt", "improve this requirement", "make this more specific", or when raw requirements lack detail and structure.
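The EARS patterns the skill applies can be illustrated with a small classifier. This is an illustrative sketch, not part of the skill itself: the pattern names and keywords follow the published EARS notation, and `classify_ears` is a hypothetical helper.

```python
import re

# The five core EARS requirement patterns, checked most-specific first.
# The regexes are deliberately loose; real validation would be stricter.
EARS_PATTERNS = [
    ("unwanted behaviour", re.compile(r"^If .+, then the .+ shall .+", re.I)),
    ("event-driven",       re.compile(r"^When .+, the .+ shall .+", re.I)),
    ("state-driven",       re.compile(r"^While .+, the .+ shall .+", re.I)),
    ("optional feature",   re.compile(r"^Where .+, the .+ shall .+", re.I)),
    ("ubiquitous",         re.compile(r"^The .+ shall .+", re.I)),
]

def classify_ears(requirement: str) -> str:
    """Return the EARS pattern a requirement matches, or 'non-EARS'."""
    for name, pattern in EARS_PATTERNS:
        if pattern.match(requirement.strip()):
            return name
    return "non-EARS"

print(classify_ears("When the user taps Save, the app shall persist the note."))
# event-driven
print(classify_ears("Remind me about stuff sometimes"))
# non-EARS
```

A vague prompt like the second example is exactly what the skill rewrites into one of the structured forms above.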
Install with Tessl CLI:

npx tessl i github:daymade/claude-code-skills --skill prompt-optimizer

Overall score: 85%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:

npx tessl skill review --optimize ./path/to/skill
Discovery: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description with strong trigger term coverage and good completeness, explicitly stating both what the skill does and when to use it. The main weaknesses are moderate specificity (could list more concrete actions) and some potential overlap with general writing/editing or prompt engineering skills. The EARS methodology mention helps differentiate it but may not be familiar to all users.
Suggestions

- Add 2-3 more specific concrete actions, such as 'generate EARS-formatted requirements', 'convert user stories to structured specs', or 'validate requirement completeness'.
- Strengthen distinctiveness by emphasizing the technical/software-requirements focus to differentiate from general prompt-improvement skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (EARS methodology, requirements) and the general action (transform vague prompts into specifications), but doesn't list multiple concrete actions like 'parse requirements', 'generate EARS templates', or 'validate syntax'. | 2 / 3 |
| Completeness | Clearly answers both what (transform vague prompts into precise specifications using EARS) and when (explicit triggers listed, including 'optimize my prompt', 'improve this requirement', and contextual conditions like 'raw requirements lack detail'). | 3 / 3 |
| Trigger Term Quality | Includes excellent natural trigger phrases users would say: 'optimize my prompt', 'improve this requirement', 'make this more specific', plus contextual triggers like 'loose requirements' and 'ambiguous feature descriptions'. | 3 / 3 |
| Distinctiveness / Conflict Risk | The EARS methodology provides some distinctiveness, but phrases like 'improve this requirement' and 'make this more specific' could overlap with general writing-improvement or editing skills. The prompt-optimization angle could conflict with other prompt-engineering skills. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill with strong actionability and excellent progressive disclosure. The six-step workflow provides clear, sequenced guidance with concrete examples and checklists. Minor verbosity in the overview and some explanatory sections could be tightened, but overall the content is effective and well-organized.
Suggestions

- Trim the overview section: remove the methodology-attribution paragraph and condense the four-layer process description into a more compact format.
- Remove explanatory phrases like 'Examples must be realistic, specific, varied, and testable'; Claude already understands example-quality requirements.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient but includes some unnecessary explanation (e.g., the methodology-attribution paragraph, some verbose descriptions). The overview could be tighter, and some sections like 'Quality criteria' repeat concepts Claude would understand. | 2 / 3 |
| Actionability | Provides concrete, executable guidance with specific EARS patterns, transformation checklists, real examples (the reminder-app transformation), and a complete output template. The six-step workflow gives clear, actionable instructions with specific criteria. | 3 / 3 |
| Workflow Clarity | The six-step workflow is clearly sequenced with explicit phases, checklists, and decision points. Each step has clear inputs/outputs, and the transformation checklist provides validation criteria. The process is well-structured for a methodology-based skill. | 3 / 3 |
| Progressive Disclosure | Excellent structure with a clear overview, well-organized main content, and clearly signaled one-level-deep references to four specific reference files. The 'When to load references' section provides explicit guidance on when to access additional materials. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure — 13 / 16 Passed
| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| Total | | 13 / 16 Passed |
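The three warnings above are typically cleared in the skill's frontmatter. A minimal sketch, assuming the skill follows the common SKILL.md frontmatter convention; the field names here are illustrative, so check the Tessl spec for the exact schema:

```yaml
---
name: prompt-optimizer
description: >
  Transform vague prompts into precise EARS-formatted specifications.
  Use when a user asks to "optimize my prompt" or "improve this
  requirement", or provides loose, ambiguous requirements.
license: MIT        # clears the license_field warning
metadata:           # a dictionary, addressing the metadata_version warning
  version: 1.0.0
---
```

The `Use when...` phrasing in the description also satisfies the description_trigger_hint check.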