
gh-issues

Fetch GitHub issues, spawn sub-agents to implement fixes and open PRs, then monitor and address PR review comments. Usage: /gh-issues [owner/repo] [--label bug] [--limit 5] [--milestone v1.0] [--assignee @me] [--fork user/repo] [--watch] [--interval 5] [--reviews-only] [--cron] [--dry-run] [--model glm-5] [--notify-channel -1002381931352]

Score: 72 (2.00x)

Quality: 61% (Does it follow best practices?)

Impact: 100% (2.00x), average score across 3 eval scenarios

Security (by Snyk): Risky. Do not use without reviewing.


Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description effectively communicates a specific, multi-step GitHub automation workflow with concrete actions. However, it lacks an explicit 'Use when...' clause, and the bulk of the description is consumed by CLI flag documentation rather than natural trigger terms that would help Claude select this skill appropriately. The CLI syntax, while useful for execution, doesn't aid in skill selection.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user wants to automatically fix GitHub issues, create pull requests from issues, or monitor and respond to PR review feedback.'

Replace or supplement the CLI flag listing with natural language trigger terms like 'pull requests', 'bug fixes', 'code review comments', 'automated issue resolution', 'GitHub automation'.

Dimension | Reasoning | Score

Specificity

Lists multiple specific concrete actions: fetch GitHub issues, spawn sub-agents to implement fixes, open PRs, monitor PR review comments, and address review comments. These are clear, actionable capabilities.

3 / 3

Completeness

The 'what' is well covered (fetch issues, implement fixes, open PRs, monitor reviews). However, there is no explicit 'Use when...' clause or equivalent trigger guidance — the 'Usage:' line describes CLI syntax, not when Claude should select this skill.

2 / 3

Trigger Term Quality

Contains good keywords like 'GitHub issues', 'PRs', 'PR review comments', 'fixes', but the description is heavily dominated by CLI flag documentation rather than natural language terms users would say. Missing natural phrases like 'bug fixes', 'pull requests', 'code review'.

2 / 3

Distinctiveness / Conflict Risk

Very distinct niche: automated GitHub issue triage, fix implementation via sub-agents, and PR lifecycle management. The specific workflow (issues → fixes → PRs → review monitoring) is unlikely to conflict with other skills.

3 / 3

Total: 10 / 12 (Passed)

Implementation

55%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is impressively thorough and highly actionable, with excellent workflow clarity including validation checkpoints, error handling, and multiple operational modes (cron, watch, dry-run, fork). However, it suffers severely from verbosity and lack of progressive disclosure — the ~600+ line monolithic file repeats patterns (especially GH_TOKEN resolution and curl headers), includes massive inline sub-agent prompts that could be templated externally, and would consume an enormous portion of the context window. The content would benefit greatly from splitting into a concise orchestrator overview with referenced template files.

Suggestions

Extract the two sub-agent prompt templates (fix agent and review agent) into separate referenced files (e.g., FIX_AGENT_PROMPT.md and REVIEW_AGENT_PROMPT.md) to dramatically reduce the main file size

Consolidate the GH_TOKEN resolution logic into a single referenced section or utility, rather than repeating it 3+ times across the main body and both sub-agent prompts

Remove explanations Claude can infer, such as how to parse HTTPS vs SSH git URLs, what milestone numbers vs titles are, and basic curl/git patterns — instead just show the commands

Move the detailed Phase 6 review analysis (actionability criteria, Greptile parsing, comment classification) into a separate REVIEW_ANALYSIS.md reference file
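The GH_TOKEN consolidation suggested above could live in one small shared helper. A minimal sketch, assuming the skill relies on a GH_TOKEN environment variable with the gh CLI as fallback (the helper name resolve_gh_token is illustrative, not from the skill):

```shell
# Hypothetical shared helper: resolve a GitHub token once, fail fast if absent.
resolve_gh_token() {
  # Prefer an explicit environment variable, then fall back to the gh CLI's stored credential.
  if [ -n "${GH_TOKEN:-}" ]; then
    printf '%s\n' "$GH_TOKEN"
  elif command -v gh >/dev/null 2>&1 && gh auth token >/dev/null 2>&1; then
    gh auth token
  else
    echo "error: no GitHub token found (set GH_TOKEN or run 'gh auth login')" >&2
    return 1
  fi
}
```

The main body and both sub-agent prompts could then reference this one helper instead of restating the resolution logic.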

Dimension | Reasoning | Score

Conciseness

This skill is extremely verbose at ~600+ lines. While the complexity of the task justifies significant length, there is substantial redundancy — GH_TOKEN resolution is repeated 3+ times across the main body and sub-agent prompts, the same curl patterns are shown repeatedly, and explanations that Claude could infer (like extracting owner/repo from git URLs) are spelled out in excessive detail. The sub-agent prompts alone are massive and could reference shared instructions rather than duplicating them.

1 / 3

Actionability

The skill provides fully executable curl commands, git commands, and specific API endpoints with exact URL patterns. Every step includes concrete, copy-paste-ready code examples with proper headers, query parameters, and error handling patterns. The sub-agent prompts are complete task specifications.

3 / 3
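The kind of "copy-paste-ready" call the review praises presumably resembles this GitHub REST request (a sketch: OWNER/REPO is a placeholder, and the bug label and limit of 5 mirror the skill's --label and --limit flags):

```shell
# Build the Phase-1 issues query from the skill's flags (--label bug --limit 5).
owner_repo="OWNER/REPO"   # placeholder; the skill derives this from its [owner/repo] argument
url="https://api.github.com/repos/${owner_repo}/issues?state=open&labels=bug&per_page=5"

# Fetch with the standard GitHub REST v3 headers (token assumed in GH_TOKEN):
# curl -sS -H "Authorization: Bearer $GH_TOKEN" \
#      -H "Accept: application/vnd.github+json" "$url"
echo "$url"
```

Note that the GitHub issues endpoint also returns pull requests; a real fetch would filter out entries carrying a `pull_request` key.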

Workflow Clarity

The 6-phase workflow is clearly sequenced with explicit validation checkpoints throughout — Phase 4 has 7 pre-flight checks including dirty tree detection, remote access verification, token validation, existing PR checks, branch existence checks, and claim-based deduplication. Error handling and feedback loops are well-defined (e.g., confidence checks in sub-agents, test retry logic, watch mode polling).

3 / 3
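Two of the pre-flight checks described above can be sketched as a small shell function (the function name, branch name, and messages are illustrative; the skill's own checks may differ):

```shell
# Hypothetical pre-flight gate covering two of the Phase 4 checks.
preflight_checks() {
  branch="$1"
  # Dirty-tree detection: refuse to start with uncommitted or untracked changes.
  if [ -n "$(git status --porcelain 2>/dev/null)" ]; then
    echo "pre-flight: working tree is dirty; commit or stash first" >&2
    return 1
  fi
  # Branch-existence check: skip the issue if a fix branch already exists locally.
  if git rev-parse --verify --quiet "refs/heads/$branch" >/dev/null; then
    echo "pre-flight: branch $branch already exists; skipping" >&2
    return 2
  fi
  return 0
}
```

A real implementation would add the review's remaining checks (remote access, token validity, existing-PR lookup, claim-based deduplication) behind the same gate.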

Progressive Disclosure

This is a monolithic wall of text with no references to external files. The sub-agent prompt templates alone are hundreds of lines that could be in separate template files. The review handler sub-agent prompt, the main fix sub-agent prompt, argument parsing tables, and Phase 6's multi-step review analysis could all be split into referenced documents. Everything is inline in one massive file.

1 / 3

Total: 8 / 12 (Passed)

Validation

63%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 7 / 11 Passed

Validation for skill structure

Criteria | Description | Result

skill_md_line_count | SKILL.md is long (866 lines); consider splitting into references/ and linking | Warning

metadata_version | 'metadata.version' is missing | Warning

metadata_field | 'metadata' should map string keys to string values | Warning

frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning

Total: 7 / 11 (Passed)
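Taken together, the three metadata warnings suggest a frontmatter shape roughly like the following (field names inferred from the validator messages; the version and author values are illustrative, not from the skill):

```yaml
---
name: gh-issues
description: >-
  Fetch GitHub issues, spawn sub-agents to implement fixes and open PRs,
  then monitor and address PR review comments.
metadata:            # metadata_field: string keys mapped to string values
  version: "1.0.0"   # metadata_version: add an explicit version
  author: "openclaw" # illustrative; move unknown top-level keys under metadata
---
```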

Repository: trpc-group/trpc-agent-go (Reviewed)

