tessl i github:softaworks/agent-toolkit --skill requirements-clarity

Clarify ambiguous requirements through focused dialogue before implementation. Use when requirements are unclear, features are complex (>2 days), or involve cross-team coordination. Ask two core questions - Why? (YAGNI check) and Simpler? (KISS check) - to ensure clarity before coding.
Validation
88%

| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| Total | | 14 / 16 Passed |
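Both warnings point at the skill's frontmatter rather than its body. A minimal sketch of a fix, assuming the registry reads YAML frontmatter and expects `metadata` to be a mapping and a top-level `license` key (the field names come from the warnings above; the values and exact schema are illustrative, not confirmed):

```markdown
---
name: requirements-clarity
description: Clarify ambiguous requirements through focused dialogue before implementation.
license: MIT           # hypothetical value; use the repository's actual license
metadata:              # a mapping, per the metadata_version warning
  version: 1.0.0       # hypothetical version string
---
```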
Implementation
27%

This skill is well-intentioned but severely over-engineered for its purpose. The core concept (clarify requirements before implementation) is buried under excessive templating, redundant explanations, and a complex scoring system that adds cognitive overhead without clear implementation guidance. The content would benefit from aggressive trimming to ~50 lines focusing on the two core questions mentioned in the description (Why? and Simpler?) with the PRD template moved to a separate reference file.
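A rough sketch of that trimmed shape, keeping only the two questions and linking out to the template (structure and wording are illustrative; `PRD_TEMPLATE.md` is the hypothetical file proposed in the first suggestion below):

```markdown
# Requirements Clarity

Before implementing, ask two questions:

1. **Why?** Is this needed now, or is it speculative? (YAGNI)
2. **Simpler?** Is there a smaller change that satisfies the same need? (KISS)

Ask focused follow-ups until both are answered, then restate the agreed requirement.
For complex features (>2 days) or cross-team work, fill in [PRD_TEMPLATE.md](PRD_TEMPLATE.md).
```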
Suggestions
- Extract the PRD template to a separate file (e.g., PRD_TEMPLATE.md) and reference it with a single link, reducing the main skill by ~80 lines.
- Remove the detailed scoring rubric explanation; instead, provide a simple checklist of 5-7 key clarity indicators Claude can assess.
- Delete the sections explaining what vague requirements look like; Claude already knows this. Focus only on the clarification process.
- Add a concrete example showing a vague requirement transformed into 2-3 clarifying questions, demonstrating the expected interaction pattern (a sketch follows this list).
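For that last point, a sketch of the kind of inline example the skill could include (the requirement and questions are invented for illustration):

```markdown
**Vague requirement:** "Add export functionality to the dashboard."

**Clarifying questions:**
1. Why is this needed now, and what decision or workflow is blocked without it? (Why? / YAGNI)
2. Would a CSV download of the current view be enough, or do you need scheduled PDF reports? (Simpler? / KISS)
3. Who consumes the export, and does it involve another team's data or systems?
```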
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~300 lines with significant redundancy. The scoring rubric is repeated conceptually multiple times, the PRD template is overly detailed for a skill file, and many sections explain obvious concepts (e.g., what constitutes vague requirements) that Claude already understands. | 1 / 3 |
| Actionability | Provides concrete structure and templates, but lacks executable code examples. The process is described procedurally but relies on markdown templates rather than actual implementation. The scoring system is defined but how to actually calculate scores programmatically is unclear. | 2 / 3 |
| Workflow Clarity | Steps are clearly numbered (1-4) with a logical sequence, but validation is weak. The only checkpoint is the 90-point threshold, with no guidance on handling edge cases like users who won't answer questions or conflicting requirements. Missing feedback loops for error recovery. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. The entire PRD template (~80 lines) is inline when it should be a separate reference file. The DO/DON'T section, behavioral guidelines, and success criteria could all be condensed or externalized. | 1 / 3 |
| Total | | 6 / 12 Passed |
Activation
85%

This is a well-structured description that clearly defines both purpose and trigger conditions. The methodology (YAGNI/KISS questions) adds specificity, and the explicit complexity threshold (>2 days) helps with selection decisions. The main weakness is trigger term coverage - it relies on somewhat technical terminology that users might not naturally use.
Suggestions
- Add more natural trigger terms users might say, such as 'spec', 'scope creep', 'planning', 'what should I build', or 'design discussion'.
- Consider replacing or supplementing the YAGNI/KISS acronyms with plain-language equivalents that users would more naturally use when seeking this help (a sketch follows this list).
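One possible rewrite of the skill's `description` along these lines (the wording is illustrative, not the skill's current text):

```markdown
---
description: >-
  Clarify ambiguous requirements through focused dialogue before implementation.
  Use when requirements are unclear, a spec or scope is still being debated,
  a feature is complex (>2 days), planning involves cross-team coordination,
  or you are deciding what to build. Asks two core questions before coding:
  "Why is this needed?" (YAGNI) and "Is there a simpler way?" (KISS).
---
```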
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists specific concrete actions: 'Clarify ambiguous requirements through focused dialogue', 'Ask two core questions - Why? (YAGNI check) and Simpler? (KISS check)'. Describes a clear methodology with named techniques. | 3 / 3 |
| Completeness | Clearly answers both what ('Clarify ambiguous requirements through focused dialogue') and when ('Use when requirements are unclear, features are complex (>2 days), or involve cross-team coordination') with explicit trigger conditions. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'requirements', 'unclear', 'complex', 'cross-team coordination', but uses technical jargon (YAGNI, KISS) that users may not naturally say. Missing common variations like 'spec', 'scope', 'planning', 'design discussion'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused on pre-implementation requirement clarification with specific triggers (unclear requirements, >2 days complexity, cross-team). Unlikely to conflict with coding, documentation, or other skills due to its distinct 'before coding' positioning. | 3 / 3 |
| Total | | 11 / 12 Passed |