Unified issue discovery and creation. Create issues from GitHub/text, discover issues via multi-perspective analysis, or prompt-driven iterative exploration. Triggers on "issue:new", "issue:discover", "issue:discover-by-prompt", "create issue", "discover issues", "find issues".
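The description's trigger list implies a small routing layer. As a hypothetical sketch (route names and matching strategy are illustrative assumptions, not the skill's actual implementation), the mapping from trigger terms to actions might look like:

```javascript
// Illustrative trigger routing; not the skill's real API.
// More specific triggers are checked first so "issue:discover-by-prompt"
// is not swallowed by the shorter "issue:discover" prefix.
const ROUTES = [
  { triggers: ["issue:discover-by-prompt"], action: "discover-by-prompt" },
  { triggers: ["issue:discover", "discover issues", "find issues"], action: "discover" },
  { triggers: ["issue:new", "create issue"], action: "create" },
];

function routeRequest(text) {
  const lower = text.toLowerCase();
  for (const { triggers, action } of ROUTES) {
    if (triggers.some((t) => lower.includes(t))) return action;
  }
  return null; // no trigger matched; the skill should not activate
}
```

Ordering the routes from most to least specific is what keeps the overlapping `issue:discover*` commands unambiguous, which is one reason the trigger set scores well below.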
Score: 79

Quality: 75%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Advisory
Suggest reviewing before use

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./.codex/skills/issue-discover/SKILL.md`
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description that concisely covers what the skill does (issue creation and discovery through multiple methods), when to use it (explicit trigger terms), and uses third-person voice throughout. The explicit listing of both command-style and natural-language triggers is particularly effective for skill selection. A minor improvement would be to mention the output format or the scope of 'multi-perspective analysis'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: creating issues from GitHub/text, discovering issues via multi-perspective analysis, and prompt-driven iterative exploration. These are distinct, concrete capabilities. | 3 / 3 |
| Completeness | Clearly answers 'what' (create issues from GitHub/text, discover issues via multi-perspective analysis, prompt-driven exploration) and 'when' with explicit triggers (the 'Triggers on' clause lists specific trigger terms). | 3 / 3 |
| Trigger Term Quality | Includes both natural-language triggers ('create issue', 'discover issues', 'find issues') and command-style triggers ('issue:new', 'issue:discover', 'issue:discover-by-prompt'). Good coverage of terms users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | The combination of issue discovery and creation with specific trigger commands like 'issue:new' and 'issue:discover' creates a clear niche. The mention of multi-perspective analysis and prompt-driven exploration further distinguishes it from generic issue-tracking skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill is highly actionable with concrete code examples, CLI commands, and clear routing logic, but suffers significantly from verbosity. Content that should be in separate reference files (subagent API, auto-detection logic, detailed flow diagrams) is inlined, making the SKILL.md a monolithic document. The workflow lacks explicit validation checkpoints despite involving multi-phase operations with subagent lifecycle management.
Suggestions

- Move the Subagent API Reference section and the auto-detection JavaScript logic to separate reference files (e.g., references/subagent-api.md, references/auto-detect.md) and link to them from SKILL.md.
- Remove redundant flow descriptions: the architecture diagram, execution flow, and data flow sections all describe the same routing logic in different formats; consolidate to one.
- Add explicit validation checkpoints after phase execution (e.g., verify the issue was created successfully before offering next steps; verify the discoveries file exists before offering export).
- Trim explanatory text that Claude can infer, e.g. the 'Key Design Principles' section and comments like '// BLOCKS (wait for user response)' repeated multiple times.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~300+ lines. Contains extensive ASCII diagrams, redundant flow descriptions (the execution flow is described three different ways), auto-detection logic as full JavaScript that Claude could infer, and a detailed subagent API reference that likely belongs in a separate reference file. Much of this repeats information or explains things Claude already knows. | 1 / 3 |
| Actionability | Provides fully concrete, executable code examples throughout: CLI commands with exact flags, JavaScript API calls with complete signatures, specific routing logic, and a comprehensive usage section with real command examples. The action selection, subagent lifecycle, and data access patterns are all copy-paste ready. | 3 / 3 |
| Workflow Clarity | The multi-step workflow is described with clear sequencing and phase routing, but validation checkpoints are largely missing. There is no explicit validation after issue creation, no verification that phase documents loaded correctly, and error handling is a simple table rather than integrated feedback loops. The 'DO NOT STOP' rule and lifecycle management are good but lack verify-then-proceed patterns. | 2 / 3 |
| Progressive Disclosure | Good use of phase documents (phases/01-04) with a clear reference table and 'load when' conditions. However, the SKILL.md itself is monolithic: the subagent API reference, auto-detection logic, post-phase next-steps code, and detailed data flow diagrams should be in separate reference files rather than inline. The architecture overview and execution flow are described redundantly. | 2 / 3 |
| Total | | 8 / 12 Passed |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| Total | | 10 / 11 Passed |
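The single warning concerns the `allowed-tools` frontmatter field. As a hedged illustration only (the exact tool names accepted depend on the agent runtime; `Bash`, `Read`, `Write`, and `Grep` are common Claude Code tool names assumed here, and the description is abbreviated), a frontmatter that would typically avoid this warning looks like:

```yaml
---
name: issue-discover
description: Unified issue discovery and creation. Triggers on "issue:new", "issue:discover", "create issue".
allowed-tools: Bash, Read, Write, Grep
---
```

Checking the warned field against the skill spec's canonical tool list is usually enough to clear the remaining validation failure.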