Take a list of ideas, features, or initiatives and quickly prioritize them using an effective framework. Use when you have too many things and need to decide what to do first.
Eval scenarios: Pending (no eval scenarios have been run)
Known issues: Passed (no known issues)

To optimize this skill with Tessl, run:

`npx tessl skill review --optimize ./product-skills/skills/prioritize/SKILL.md`

Quality: 72%
Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has good structural completeness with both a 'what' and 'when' clause, but lacks specificity in the concrete actions performed and the frameworks used. The trigger terms cover some natural language but miss important variations that users commonly employ when seeking prioritization help. The description would benefit from naming specific frameworks and adding more distinctive trigger terms.
Suggestions:

- Add specific concrete actions and framework names, e.g., 'Score and rank items using frameworks like RICE, MoSCoW, or impact-vs-effort matrices, and output a prioritized list.'
- Expand trigger terms to include natural variations: 'backlog grooming', 'roadmap planning', 'rank features', 'what to build next', 'impact vs effort'.
- Use third person voice consistently and make the 'when' clause more specific, e.g., 'Use when the user needs to rank a backlog, compare feature priorities, or decide which initiatives to pursue first.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (prioritization) and some inputs ('ideas, features, or initiatives') but doesn't specify concrete actions like scoring, ranking, creating matrices, or naming specific frameworks (e.g., RICE, MoSCoW, ICE). | 2 / 3 |
| Completeness | Clearly answers both 'what' (take a list and prioritize using a framework) and 'when' ('Use when you have too many things and need to decide what to do first'), with an explicit trigger clause. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'prioritize', 'ideas', 'features', 'initiatives', but misses common natural variations users might say such as 'backlog', 'roadmap', 'rank', 'RICE', 'MoSCoW', 'impact vs effort', 'what to work on next'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The description is somewhat specific to prioritization but the vague phrasing 'too many things' and 'decide what to do first' could overlap with general decision-making, planning, or project management skills. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable prioritization skill with a clear multi-step workflow and good validation checkpoints. Its main weakness is moderate verbosity — the RICE framework explanation and prompt template could be more concise given Claude's existing knowledge. The skill would also benefit from bundle files to support alternative frameworks mentioned in the tips.
Suggestions:

- Trim the RICE scoring definitions to just the scale values (e.g., 'Impact: 3/2/1/0.5/0.25') since Claude already understands what Reach, Impact, Confidence, and Effort mean conceptually.
- Extract alternative framework templates (ICE, Impact/Effort, Opportunity Score) into separate bundle files and reference them from the Tips section for better progressive disclosure.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is moderately efficient but includes some unnecessary verbosity: the intro paragraph restates what the description already says, the prompt template includes lengthy inline explanations of RICE scoring that Claude already knows, and the placeholder syntax is overly verbose. Could be tightened significantly. | 2 / 3 |
| Actionability | The skill provides a fully concrete, step-by-step process with a specific scoring framework (RICE), an explicit table format for output, clear scoring scales, and a structured recommendation format. The prompt template is copy-paste ready and the rules section provides specific guidance for edge cases. | 3 / 3 |
| Workflow Clarity | The four-step process is clearly sequenced (Clarify → Score → Sanity Check → Recommend), with the sanity check serving as an explicit validation checkpoint that catches framework failures, dependency issues, and quick wins. The rules section adds important guardrails about when to override the math. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections, but everything is inline in a single file. The RICE scoring definitions and the full prompt template could be separated into reference files, especially since the skill suggests alternative frameworks (ICE, Impact/Effort) that could each have their own templates. No bundle files exist to offload detail. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
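The RICE framework that the skill's Score step relies on reduces to a single formula: score = (Reach × Impact × Confidence) / Effort, ranked descending. A minimal sketch of that computation, with item names and sample values invented purely for illustration (they are not from the skill under review):

```python
from dataclasses import dataclass


@dataclass
class Item:
    name: str
    reach: float       # people or events affected per period
    impact: float      # scale values: 3 / 2 / 1 / 0.5 / 0.25
    confidence: float  # 1.0 (high) / 0.8 (medium) / 0.5 (low)
    effort: float      # person-months

    @property
    def rice(self) -> float:
        # RICE score = (Reach * Impact * Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort


# Hypothetical backlog items for illustration only
items = [
    Item("SSO login", reach=500, impact=2, confidence=0.8, effort=3),
    Item("Dark mode", reach=2000, impact=0.5, confidence=1.0, effort=1),
]

# Rank descending by score, as the skill's output table does
for item in sorted(items, key=lambda i: i.rice, reverse=True):
    print(f"{item.name}: {item.rice:.0f}")
```

A sanity-check pass (the skill's third step) would then review this ranking for dependencies and quick wins before recommending, rather than trusting the arithmetic alone.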
Validation: 90% (10 / 11 checks passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure:
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to `metadata` | Warning |
| Total | | 10 / 11 (Passed) |
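The `frontmatter_unknown_keys` warning can usually be cleared by nesting nonstandard keys under `metadata`, as the check itself suggests. A hypothetical sketch (the key `category` and its value are invented for illustration; the actual offending keys are not shown in this report):

```yaml
# Before: `category` is not a recognized top-level frontmatter key
name: prioritize
description: Take a list of ideas, features, or initiatives and quickly prioritize them...
category: product

# After: unknown keys moved under `metadata`
name: prioritize
description: Take a list of ideas, features, or initiatives and quickly prioritize them...
metadata:
  category: product
```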