tessl install https://github.com/softaworks/agent-toolkit --skill requirements-clarity
Clarify ambiguous requirements through focused dialogue before implementation. Use when requirements are unclear, features are complex (>2 days), or involve cross-team coordination. Ask two core questions - Why? (YAGNI check) and Simpler? (KISS check) - to ensure clarity before coding.
Average Score: 59%
Content: 27%
Description: 85%

Generated Validations

Total score: 14/16

| Criterion | Result |
|---|---|
| skill_md_line_count | SKILL.md line count is 325 (<= 500) |
| frontmatter_valid | YAML frontmatter is valid |
| name_field | 'name' field is valid: 'requirements-clarity' |
| description_field | 'description' field is valid (286 chars) |
| description_voice | 'description' uses third person voice |
| description_trigger_hint | Description includes an explicit trigger hint |
| compatibility_field | 'compatibility' field not present (optional) |
| allowed_tools_field | 'allowed-tools' field not present (optional) |
| metadata_version | 'metadata' field is not a dictionary |
| metadata_field | 'metadata' field not present (optional) |
| license_field | 'license' field is missing |
| frontmatter_unknown_keys | No unknown frontmatter keys found |
| body_present | SKILL.md body is present |
| body_examples | Examples detected (code fence or 'Example' wording) |
| body_output_format | Output/return/format terms detected |
| body_steps | Step-by-step structure detected (ordered list) |
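Judging by their wording, the two failed checks are likely metadata_version and license_field. A minimal sketch of frontmatter additions that might satisfy them; the license value and the metadata keys are assumptions, since neither appears in the report:

```markdown
---
name: requirements-clarity
description: Clarify ambiguous requirements through focused dialogue before implementation. ...
license: MIT        # assumption: the actual license is not stated in the report
metadata:           # assumption: the validator appears to expect a dictionary here
  version: "1.0"
---
```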
Content
Suggestions: 4
Total score: 6/12

| Dimension | Assessment | Score |
|---|---|---|
| conciseness | Extremely verbose at ~300 lines with significant redundancy. The scoring rubric is repeated conceptually multiple times, the PRD template is overly detailed for a skill file, and many sections explain obvious concepts (e.g., what constitutes vague requirements) that Claude already understands. | 1/3 |
| actionability | Provides concrete structure and templates, but lacks executable code examples. The process is described procedurally but relies on markdown templates rather than actual implementation. The scoring system is defined, but how to actually calculate scores programmatically is unclear. | 2/3 |
| workflow_clarity | Steps are clearly numbered (1-4) with a logical sequence, but validation is weak. The only checkpoint is the 90-point threshold, with no guidance on handling edge cases like users who won't answer questions or conflicting requirements. Missing feedback loops for error recovery. | 2/3 |
| progressive_disclosure | Monolithic wall of text with no references to external files. The entire PRD template (~80 lines) is inline when it should be a separate reference file. The DO/DON'T section, behavioral guidelines, and success criteria could all be condensed or externalized. | 1/3 |
Suggestions

1. Extract the PRD template to a separate file (e.g., PRD_TEMPLATE.md) and reference it with a single link, reducing the main skill by ~80 lines.
2. Remove the detailed scoring rubric explanation; instead, provide a simple checklist of 5-7 key clarity indicators Claude can assess.
3. Delete sections explaining what vague requirements look like; Claude already knows this. Focus only on the clarification process.
4. Add a concrete example showing a vague requirement transformed into 2-3 clarifying questions, demonstrating the expected interaction pattern (a sketch of such an exchange follows this list).
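As an illustration of suggestion 4, here is a sketch of the kind of exchange the skill could show; the feature request and questions are invented for this example, not drawn from the skill:

```markdown
**Vague requirement:** "Add an export feature to the dashboard."

**Clarifying questions:**
1. Why? (YAGNI check) - Which decision or workflow does the export unblock, and does anyone need it this iteration?
2. Simpler? (KISS check) - Would a CSV download of the current table satisfy the need, or are scheduled or PDF exports genuinely required?
```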
Overall Assessment
This skill is well-intentioned but severely over-engineered for its purpose. The core concept (clarify requirements before implementation) is buried under excessive templating, redundant explanations, and a complex scoring system that adds cognitive overhead without clear implementation guidance. The content would benefit from aggressive trimming to ~50 lines focusing on the two core questions mentioned in the description (Why? and Simpler?) with the PRD template moved to a separate reference file.
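If the template is externalized as suggested, the trimmed skill could reference it with a single line; the file name and heading below are illustrative only:

```markdown
## Write the PRD

Once the requirement scores above the 90-point clarity threshold, fill in
[PRD_TEMPLATE.md](PRD_TEMPLATE.md) instead of an inline template.
```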
Description
Suggestions: 2
Total score: 11/12

| Dimension | Assessment | Score |
|---|---|---|
| specificity | Lists specific concrete actions: 'Clarify ambiguous requirements through focused dialogue', 'Ask two core questions - Why? (YAGNI check) and Simpler? (KISS check)'. Describes a clear methodology with named techniques. | 3/3 |
| completeness | Clearly answers both what ('Clarify ambiguous requirements through focused dialogue') and when ('Use when requirements are unclear, features are complex (>2 days), or involve cross-team coordination') with explicit trigger conditions. | 3/3 |
| trigger_term_quality | Includes some relevant terms like 'requirements', 'unclear', 'complex', 'cross-team coordination', but uses technical jargon (YAGNI, KISS) that users may not naturally say. Missing common variations like 'spec', 'scope', 'planning', 'design discussion'. | 2/3 |
| distinctiveness_conflict_risk | Clear niche focused on pre-implementation requirement clarification with specific triggers (unclear requirements, >2 days complexity, cross-team). Unlikely to conflict with coding, documentation, or other skills due to its distinct 'before coding' positioning. | 3/3 |
Suggestions

1. Add more natural trigger terms users might say, such as 'spec', 'scope creep', 'planning', 'what should I build', or 'design discussion'.
2. Consider replacing or supplementing the YAGNI/KISS acronyms with plain-language equivalents that users would more naturally use when seeking this help (a possible revision is sketched below).
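One way the two suggestions could be combined into a revised description; this wording is an illustration, not the published field:

```markdown
---
description: >-
  Clarify ambiguous requirements through focused dialogue before implementation.
  Use when a spec or scope is unclear, during planning or design discussions,
  when a feature is complex (>2 days), or when work needs cross-team
  coordination. Asks two core questions - why is this needed, and is there a
  simpler way - to ensure clarity before coding.
---
```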
Overall Assessment
This is a well-structured description that clearly defines both purpose and trigger conditions. The methodology (YAGNI/KISS questions) adds specificity, and the explicit complexity threshold (>2 days) helps with selection decisions. The main weakness is trigger term coverage - it relies on somewhat technical terminology that users might not naturally use.