Unified issue discovery and creation. Create issues from GitHub/text, discover issues via multi-perspective analysis, or prompt-driven iterative exploration. Triggers on "issue:new", "issue:discover", "issue:discover-by-prompt", "create issue", "discover issues", "find issues".
Score: 73 (67%)

Evals: Pending. No eval scenarios have been run.

Impact: Advisory. Suggest reviewing before use.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./.codex/skills/issue-discover/SKILL.md`

Quality
Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that concisely covers what the skill does (issue creation and discovery through multiple methods), when to use it (explicit trigger terms), and uses appropriate third-person voice. The description is well-structured, specific, and provides clear differentiation from other potential skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: creating issues from GitHub/text, discovering issues via multi-perspective analysis, and prompt-driven iterative exploration. These are distinct, concrete capabilities. | 3 / 3 |
| Completeness | Clearly answers 'what' (create issues from GitHub/text, discover issues via multi-perspective analysis, prompt-driven exploration) and 'when' with explicit trigger terms listed after 'Triggers on'. The trigger guidance is explicit and comprehensive. | 3 / 3 |
| Trigger Term Quality | Includes both natural language triggers ('create issue', 'discover issues', 'find issues') and command-style triggers ('issue:new', 'issue:discover', 'issue:discover-by-prompt'). Good coverage of terms users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description carves out a clear niche around issue discovery and creation with specific trigger commands. The combination of GitHub issues, multi-perspective analysis, and prompt-driven exploration is distinctive and unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation — 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill attempts to be a comprehensive orchestrator document but suffers from significant verbosity and redundancy — the same flow is described via ASCII diagrams, prose, tables, and code blocks multiple times. The progressive disclosure principle is stated but not practiced in the SKILL.md itself, which inlines extensive API references and pseudocode that should be in supporting files. The actionability is moderate, with useful CLI command tables and flag references, but core execution logic is deferred to unverifiable phase documents.
Suggestions

- Reduce redundancy by removing duplicate flow descriptions: keep either the ASCII diagram or the execution-flow section, not both, and consolidate the data-flow section into the main execution flow.
- Move the subagent API reference (spawn_agent, wait_agent, followup_task, close_agent) to a separate reference file such as `subagent-api.md` and link to it, practicing the progressive disclosure the skill preaches.
- Remove the JavaScript detectAction function; describe the auto-detection rules as a simple priority list (about 5 lines) instead of 20+ lines of illustrative code that Claude doesn't need.
- Add explicit validation checkpoints: after issue creation, verify with `ccw issue status <id>`; after discovery, verify that the findings count is non-zero before proceeding to the post-phase.
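The priority-list rewrite suggested above could be as small as this sketch. The trigger strings come from the skill's own description; the function name, shape, and fallback value are assumptions, since the original detectAction is not reproduced in this review:

```javascript
// Hypothetical priority-list version of the skill's auto-detection.
// Rules are checked top-down; the first match wins.
function detectAction(input) {
  if (/^issue:new\b/.test(input) || /create issue/i.test(input)) return "new";
  if (/^issue:discover-by-prompt\b/.test(input)) return "discover-by-prompt";
  if (/^issue:discover\b/.test(input) || /(discover|find) issues/i.test(input)) return "discover";
  return "ask-user"; // no rule matched: ask the user to clarify
}
```

Note the ordering: `issue:discover-by-prompt` must be tested before `issue:discover`, or the shorter command prefix would shadow the longer one.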
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~300+ lines. Contains extensive ASCII diagrams, redundant flow descriptions (the execution flow is described 3 different ways), JavaScript pseudocode for auto-detection logic that Claude could infer, and a detailed subagent API reference that likely belongs in a separate file. The architecture overview, data flow, and execution flow sections are largely redundant with each other. | 1 / 3 |
| Actionability | Provides concrete CLI commands, JavaScript code snippets for action selection and subagent usage, and a clear flag reference. However, much of the code is pseudocode/illustrative rather than truly executable (e.g., the detectAction function, the request_user_input examples), and the actual phase execution logic is deferred entirely to external phase documents which aren't provided. | 2 / 3 |
| Workflow Clarity | The multi-step workflow is described with clear routing logic and phase references, and there's an error handling table. However, validation checkpoints are largely missing: there's no explicit verification after issue creation, no feedback loop for discovery quality, and the error handling is superficial ('report error, suggest manual intervention'). The 'DO NOT STOP' rule conflicts with the blocking request_user_input calls, creating ambiguity. | 2 / 3 |
| Progressive Disclosure | Good phase document references in a clear table (phases/01-04), and the 'Single Phase Load' principle is sound. However, the SKILL.md itself is monolithic: the subagent API reference, auto-detection logic, and detailed post-phase JavaScript examples should be in separate reference files. No bundle files were provided, so the referenced phase documents can't be verified. | 2 / 3 |
| Total | | 7 / 12 Passed |
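The missing discovery checkpoint flagged under Workflow Clarity could be a one-function gate between discovery and the post-phase. The findings array shape here is an assumption, not the skill's actual schema:

```javascript
// Hypothetical guard: refuse to enter the post-phase when discovery
// returned nothing, so empty runs fail loudly instead of silently.
function assertFindings(findings) {
  if (!Array.isArray(findings) || findings.length === 0) {
    throw new Error("discovery produced no findings; aborting post-phase");
  }
  return findings;
}
```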
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| Total | | 10 / 11 Passed |
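The single warning concerns the `allowed-tools` field in the skill's frontmatter. A conventional entry lists only recognized tool names, along these lines (the tool list below is illustrative, not the skill's actual frontmatter):

```yaml
---
name: issue-discover
description: Unified issue discovery and creation. ...
allowed-tools: Read, Grep, Bash
---
```

Replacing or removing any unrecognized name in that list would be the expected way to clear the warning.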
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.