Fetch GitHub issues, spawn sub-agents to implement fixes and open PRs, then monitor and address PR review comments. Usage: /gh-issues [owner/repo] [--label bug] [--limit 5] [--milestone v1.0] [--assignee @me] [--fork user/repo] [--watch] [--interval 5] [--reviews-only] [--cron] [--dry-run] [--model glm-5] [--notify-channel -1002381931352]
Score: 72
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill

Evaluation — 100%
↑ 2.00x Agent success when using this skill
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description effectively communicates specific capabilities around GitHub issue automation and PR workflows, with good distinctiveness. However, it's overly focused on CLI syntax rather than natural language triggers, and lacks an explicit 'Use when...' clause that would help Claude know when to select this skill.
Suggestions
Add a 'Use when...' clause with natural trigger terms like 'when the user wants to automate GitHub issue fixes', 'batch process bugs', or 'auto-respond to PR reviews'
Include natural language variations alongside CLI flags, such as 'pull requests', 'bug fixes', 'automated code review responses', 'GitHub automation'
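Putting both suggestions together, the skill's frontmatter description might be extended along these lines (the wording below is illustrative, not the skill's actual metadata):

```yaml
description: >
  Fetch GitHub issues, spawn sub-agents to implement fixes and open pull
  requests, then monitor and address PR review comments. Use when the user
  wants to automate GitHub issue fixes, batch-process bugs, or
  auto-respond to PR review comments.
```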
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Fetch GitHub issues', 'spawn sub-agents to implement fixes', 'open PRs', 'monitor and address PR review comments'. These are clear, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers 'what' (fetch issues, implement fixes, open PRs, monitor reviews), but lacks an explicit 'Use when...' clause. The CLI usage syntax implies when to use it but doesn't provide explicit trigger guidance for Claude's skill selection. | 2 / 3 |
| Trigger Term Quality | Contains relevant keywords like 'GitHub issues', 'PRs', 'PR review comments', but the description is dominated by CLI flags rather than natural language terms users would say. Missing variations like 'pull requests', 'bug fixes', 'code review'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with specific GitHub workflow focus, sub-agent spawning, and PR monitoring. The combination of issue fetching, automated fixes, and review monitoring creates a clear niche unlikely to conflict with other skills. | 3 / 3 |
| Total | | 10 / 12 Passed |
Implementation — 55%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is highly actionable and has excellent workflow clarity, with proper validation checkpoints and error handling. However, it severely violates conciseness principles: the content is extremely verbose, with repeated code blocks (the GH_TOKEN setup appears multiple times), and the massive inline sub-agent prompts should be extracted to separate files. The lack of progressive disclosure makes the skill difficult to navigate and wastes significant context window space.
Suggestions
Extract the sub-agent task prompts (fix agent and review handler) into separate files like `FIX_AGENT_PROMPT.md` and `REVIEW_AGENT_PROMPT.md`, referencing them from the main skill
Consolidate the repeated GH_TOKEN resolution logic into a single 'Token Setup' section referenced by all phases instead of duplicating it 5+ times
Add a quick-start section at the top showing the most common usage pattern before diving into the 6-phase breakdown
Create a separate REFERENCE.md for the detailed flag documentation table and API endpoint specifications
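As a sketch of the consolidation suggested above, the duplicated token logic could collapse into one helper that every phase sources. The fallback to `gh auth token` is an assumption about the environment, not the skill's documented resolution order:

```shell
# resolve_gh_token: shared token resolution, replacing the 5+ inline
# GH_TOKEN blocks scattered across the skill's phases.
resolve_gh_token() {
  # Prefer an explicitly exported token.
  if [ -n "${GH_TOKEN:-}" ]; then
    printf '%s\n' "$GH_TOKEN"
    return 0
  fi
  # Otherwise fall back to the gh CLI's stored credentials, if any.
  gh auth token 2>/dev/null
}
```

Each phase would then run `GH_TOKEN="$(resolve_gh_token)"` instead of repeating the resolution inline.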
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~700+ lines. Contains excessive repetition (GH_TOKEN setup appears 5+ times), redundant explanations, and over-specified details that Claude could infer. The sub-agent prompts alone are massive when they could reference shared instructions. | 1 / 3 |
| Actionability | Highly actionable with complete, executable curl commands, git operations, and JSON payloads. Every step includes copy-paste ready code with proper variable substitution patterns. The sub-agent prompts are fully specified. | 3 / 3 |
| Workflow Clarity | Excellent multi-phase workflow with explicit sequencing (6 phases), clear validation checkpoints (pre-flight checks, confidence scoring), error handling paths, and feedback loops (retry logic, watch mode polling). Destructive operations have proper guards. | 3 / 3 |
| Progressive Disclosure | Monolithic wall of text with no external file references. The sub-agent prompts (~100+ lines each) should be in separate files. No navigation aids, no quick-start section; everything is inline, creating a massive single document. | 1 / 3 |
| Total | | 8 / 12 Passed |
Validation — 63%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 7 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (866 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| metadata_field | 'metadata' should map string keys to string values | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 7 / 11 Passed |
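A minimal frontmatter sketch that would address the three metadata warnings above; the `name` value and version string are placeholders, and only `metadata.version` is named by the validator:

```yaml
---
name: gh-issues
description: Fetch GitHub issues, spawn sub-agents to implement fixes and open PRs, then monitor and address PR review comments.
metadata:
  version: "1.0.0"   # string-to-string map, as the validator expects
---
```

Any frontmatter keys the validator flags as unknown would move under `metadata` as string values.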
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.