Run a structured discovery session to build an Allium specification through conversation. Use when the user wants to create a new spec from scratch, elicit or gather requirements, capture domain behaviour, specify a feature or system, define what a system should do, or is describing functionality and needs help shaping it into a specification.
Overall score: 83
Quality: 79% (Does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Validation: Passed (no known issues)
Optimize this skill with Tessl:

npx tessl skill review --optimize ./.claude/skills/allium/skills/elicit/SKILL.md

Quality
Discovery: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description with strong completeness and trigger term coverage. The explicit 'Use when' clause with multiple natural trigger phrases is well done. The main weaknesses are moderate specificity (it doesn't detail what concrete outputs or steps the discovery session involves) and some overlap risk with general requirements/specification skills due to the breadth of trigger terms.
Suggestions
Add 2-3 specific concrete actions or outputs beyond 'build a specification' (e.g., 'produces structured domain models, acceptance criteria, and behavioral scenarios')
Consider adding a brief negative boundary to reduce conflict risk (e.g., 'Not for editing existing Allium specs or running validations against them')
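Taken together, the two suggestions could yield a description along these lines. This is an illustrative sketch of SKILL.md frontmatter, not the skill's actual file; the concrete outputs named here (domain model, happy-path behaviour, edge-case scenarios) and the negative boundary are assumptions drawn from the suggestions above:

```yaml
# Illustrative SKILL.md frontmatter sketch; wording is hypothetical
name: elicit
description: >
  Run a structured discovery session to build an Allium specification
  through conversation, producing a scoped domain model, happy-path
  behaviour, and edge-case scenarios in Allium syntax. Use when the
  user wants to create a new spec from scratch, gather requirements,
  capture domain behaviour, or shape described functionality into a
  specification. Not for editing existing Allium specs or running
  validations against them.
```

Keeping the Allium-specific framing in the first sentence while adding the "Not for" boundary preserves the strong trigger coverage and narrows the conflict risk noted above.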
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain ('Allium specification') and a key action ('structured discovery session'), but doesn't list multiple concrete actions beyond 'build a specification through conversation.' It lacks specifics about what the session produces or what steps are involved. | 2 / 3 |
| Completeness | Clearly answers both 'what' (run a structured discovery session to build an Allium specification through conversation) and 'when' (explicit 'Use when' clause with multiple trigger scenarios covering creating specs, gathering requirements, capturing behaviour, etc.). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms: 'create a new spec', 'elicit or gather requirements', 'capture domain behaviour', 'specify a feature or system', 'define what a system should do', 'describing functionality', 'shaping it into a specification'. These are terms users would naturally use when they need this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | The 'Allium specification' term provides some distinctiveness, but terms like 'gather requirements', 'specify a feature', and 'define what a system should do' could overlap with general requirements engineering or documentation skills. The Allium-specific framing helps but the broad trigger terms increase conflict risk. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong elicitation skill with excellent actionability and workflow clarity. The four-phase methodology with concrete questions, redirect tables, and trap identification gives Claude a thorough playbook for conducting specification sessions. The main weakness is moderate verbosity: the abstraction tests overlap conceptually, and some filler sentences add no value for Claude. The content could also benefit from splitting some reference material into separate files.
Suggestions
Trim overlapping abstraction tests (Why test, Could it be different, Template vs Instance) into a single concise decision framework or table to reduce redundancy
Consider extracting the 'Common elicitation traps' and 'Elicitation principles' sections into a separate reference file to keep the main skill leaner and improve progressive disclosure
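The first suggestion, collapsing the three abstraction tests into one decision framework, could take a shape like the table below. The row contents are assumptions inferred only from the test names; the skill's actual definitions of each test may differ:

```markdown
| Test (consolidated)     | Question to ask                                   | Action if yes               | Action if no                  |
|-------------------------|---------------------------------------------------|-----------------------------|-------------------------------|
| Why test                | Is there an underlying intent behind this rule?   | Capture the intent          | Ask the user before specifying|
| Could-it-be-different   | Could this value legitimately vary per deployment?| Model it as a parameter     | Fix it in the spec            |
| Template vs Instance    | Is this one instance of a repeating shape?        | Specify the template        | Specify the concrete instance |
```

A single table like this would replace three prose sections, directly addressing the conciseness score above.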
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is well-structured and most content earns its place, but there's some verbosity in sections like the abstraction tests (Why test, Could it be different test, Template vs Instance test) that overlap conceptually. The tables and examples are helpful but could be tightened. Some explanations like 'The hardest part of specification is choosing what to include' are filler Claude doesn't need. | 2 / 3 |
| Actionability | The skill provides highly concrete, actionable guidance: specific questions to ask at each phase, exact Allium syntax examples, redirect tables for common situations, concrete examples of good vs bad questioning, and clear session timing structure. The elicitation methodology phases give Claude a precise playbook to follow. | 3 / 3 |
| Workflow Clarity | The four-phase elicitation methodology (Scope → Happy Path → Edge Cases → Refinement) is clearly sequenced with explicit outputs for each phase, specific 'watch for' checkpoints, and a concrete session structure with time allocations. The 'After elicitation' section provides clear handoff guidance to other agents. Each phase has validation through defined outputs. | 3 / 3 |
| Progressive Disclosure | The skill references two external files (language-reference.md and library-spec-signals.md), which is appropriate, but the main body is quite long (~300+ lines), with sections like the abstraction tests and common traps that could potentially be split into reference files. The structure within the file is well-organized with clear headers, but the monolithic nature works against quick scanning. No bundle files were provided to verify reference accuracy. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation: 11 / 11 passed
Validation for skill structure: no warnings or errors.
Revision: 0f36ad4