Fetch GitHub issues, spawn sub-agents to implement fixes and open PRs, then monitor and address PR review comments. Usage: `/gh-issues [owner/repo] [--label bug] [--limit 5] [--milestone v1.0] [--assignee @me] [--fork user/repo] [--watch] [--interval 5] [--reviews-only] [--cron] [--dry-run] [--model glm-5] [--notify-channel -1002381931352]`
Overall score: 79

- Quality: 72% (does it follow best practices?)
- Impact: 100% (2.00x average score across 3 eval scenarios)
- Risky: do not use without reviewing

Optimize this skill with Tessl: `npx tessl skill review --optimize ./openclaw/skills/gh-issues/SKILL.md`

Quality
Discovery: 67%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description effectively communicates specific capabilities around GitHub issue automation and PR workflows, with good distinctiveness. However, it relies heavily on CLI flag documentation rather than natural language, and lacks an explicit 'Use when...' clause that would help Claude know when to select this skill.
Suggestions

- Add an explicit 'Use when...' clause with natural trigger phrases like 'Use when the user wants to automate GitHub issue fixes, batch process bugs, or monitor pull request reviews'
- Include natural language variations of key terms: 'pull requests' alongside 'PRs', 'bug fixes', 'automated code fixes', 'GitHub automation'
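The two suggestions above could be combined in the skill's frontmatter along these lines (a sketch only; the field names follow the common SKILL.md convention, and the exact wording is illustrative, not taken from the skill):

```yaml
---
name: gh-issues
description: >
  Fetch GitHub issues, spawn sub-agents to implement fixes and open
  pull requests (PRs), then monitor and address PR review comments.
  Use when the user wants to automate GitHub issue fixes, batch
  process bugs or bug fixes, or monitor pull request reviews.
---
```

Keeping the flag reference in the skill body rather than the description would let the description stay in natural language while preserving the CLI detail for agents that load the full skill.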
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Fetch GitHub issues', 'spawn sub-agents to implement fixes', 'open PRs', 'monitor and address PR review comments'. These are clear, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers 'what' (fetch issues, implement fixes, open PRs, monitor reviews), but lacks an explicit 'Use when...' clause. The 'when' is only implied through the command syntax rather than stated explicitly. | 2 / 3 |
| Trigger Term Quality | Contains relevant keywords like 'GitHub issues', 'PRs', 'review comments', but the description is dominated by CLI flags rather than natural language terms users would say. Missing variations like 'pull requests', 'bug fixes', 'code review'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Very specific niche combining GitHub issues, automated PR creation via sub-agents, and review monitoring. The '/gh-issues' command and detailed flags make it clearly distinguishable from generic GitHub or code skills. | 3 / 3 |
| Total | | 10 / 12 Passed |
Implementation: 77%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a highly sophisticated orchestration skill with excellent actionability and workflow clarity. The multi-phase structure with explicit validation checkpoints, claim-based deduplication, and comprehensive error handling demonstrates expert-level design. However, the document's length and lack of progressive disclosure to external files makes it harder to navigate, and some redundancy (repeated token setup, similar sub-agent configurations) could be consolidated.
Suggestions

- Extract the sub-agent task prompts (fix agent and review handler) into separate reference files (e.g., FIX_AGENT_PROMPT.md, REVIEW_AGENT_PROMPT.md) and reference them from the main skill
- Consolidate the repeated GH_TOKEN resolution logic into a single 'Token Setup' section referenced by other phases instead of duplicating the instructions
- Consider moving the detailed flag table and API endpoint reference to an appendix or separate REFERENCE.md file, keeping only essential flags inline
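The token-consolidation suggestion might look something like the helper below. This is a hypothetical sketch, not the skill's actual logic: the resolution order (explicit `GH_TOKEN` first, then the `gh` CLI's cached credential) is an assumption, and `resolve_gh_token` is a name invented here.

```shell
# Hypothetical single "Token Setup" helper that each phase would call
# instead of repeating the resolution instructions inline.
resolve_gh_token() {
  if [ -n "${GH_TOKEN:-}" ]; then
    # An explicitly exported token wins.
    printf '%s\n' "$GH_TOKEN"
  elif command -v gh >/dev/null 2>&1; then
    # Fall back to the gh CLI's stored credential, if any.
    gh auth token 2>/dev/null
  else
    # No token source available; let the caller fail loudly.
    return 1
  fi
}
```

A phase would then use `GH_TOKEN="$(resolve_gh_token)" || exit 1` rather than carrying its own copy of the setup steps.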
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is comprehensive but includes some redundant explanations (e.g., repeated GH_TOKEN setup instructions across multiple sections, verbose flag tables). Some sections could be tightened, though most content is necessary for the complex orchestration task. | 2 / 3 |
| Actionability | Excellent actionability with fully executable curl commands, complete git commands, specific API endpoints, and copy-paste ready sub-agent prompts. Every step includes concrete, runnable code rather than pseudocode or vague descriptions. | 3 / 3 |
| Workflow Clarity | Outstanding workflow clarity with 6 clearly numbered phases, explicit validation checkpoints (pre-flight checks in Phase 4), error handling paths, and feedback loops (confidence check, test-fix-retry). The claim-based tracking and cursor system for cron mode show sophisticated state management. | 3 / 3 |
| Progressive Disclosure | The skill is a monolithic ~600-line document with no references to external files. While internally well-organized with clear headers, the sub-agent prompts and detailed API examples could be split into separate reference files to improve scannability. | 2 / 3 |
| Total | | 10 / 12 Passed |
Validation: 63% (7 / 11 Passed)
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (866 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| metadata_field | 'metadata' should map string keys to string values | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 7 / 11 Passed |
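The three metadata warnings point in the same direction: frontmatter along these lines would likely clear them (a sketch; apart from `metadata.version`, which the `metadata_version` check names, the keys shown are assumptions about the validator's schema):

```yaml
---
name: gh-issues
description: Fetch GitHub issues, spawn sub-agents to implement fixes and open PRs, then monitor and address PR review comments.
metadata:
  version: "1.0.0"   # string value, as metadata_field requires
---
```

Any keys currently flagged by `frontmatter_unknown_keys` would move under `metadata` as string-to-string entries rather than sitting at the top level.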