
review-pr

Analyze a GitHub pull request including diff, comments, related issues, and local code context

Install with Tessl CLI

`npx tessl i github:dlt-hub/dlt --skill review-pr`

Does it follow best practices?



Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear domain (GitHub pull requests) and lists relevant components to analyze, but it lacks explicit trigger guidance ('Use when...'), which is critical for skill selection. The verb 'analyze' is too general, and common user terms like 'PR' and 'code review' are missing.

Suggestions

- Add a 'Use when...' clause with explicit triggers like 'Use when the user asks to review a PR, analyze pull request changes, or needs help with code review on GitHub'
- Include common term variations: 'PR', 'code review', 'merge request', 'review changes'
- Replace generic 'Analyze' with specific actions like 'Review code changes, summarize discussion threads, identify related issues, and provide feedback on pull requests'
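Applied to this skill's frontmatter, the suggestions above might look like the following sketch (the description shown is illustrative; the skill's actual frontmatter is not reproduced on this page):

```yaml
# Hypothetical SKILL.md frontmatter incorporating the suggestions above
name: review-pr
description: >
  Review code changes, summarize discussion threads, identify related
  issues, and provide feedback on GitHub pull requests. Use when the
  user asks to review a PR, analyze pull request or merge request
  changes, or needs help with code review on GitHub.
```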

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (GitHub pull request) and lists several actions (analyze diff, comments, related issues, local code context), but uses the general verb 'analyze' rather than listing multiple specific, concrete actions like 'review changes', 'summarize feedback', 'identify conflicts'. | 2 / 3 |
| Completeness | Describes what the skill does (analyze PR components) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'GitHub', 'pull request', 'diff', 'comments', 'issues' that users might say, but misses common variations like 'PR', 'code review', 'merge request', or 'review changes'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Fairly specific to GitHub PRs, which creates some distinctiveness, but 'analyze' and 'code context' could overlap with general code review or GitHub issue skills without clearer boundaries. | 2 / 3 |

Total: 7 / 12 (Passed)

Implementation

77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-crafted, highly actionable skill for PR review with excellent workflow clarity and concrete executable commands. The main weakness is moderate verbosity in explanatory text and a somewhat monolithic structure that could benefit from splitting detailed subsections (especially the extensive test coverage section) into separate reference documents.

Suggestions

- Trim explanatory phrases like 'Referred to as X below' and 'If absent, there are no special instructions'; Claude can infer these
- Consider extracting the detailed test coverage evaluation (7a-7e) into a separate TEST_REVIEW.md reference file to improve progressive disclosure
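One way to apply the second suggestion, assuming a conventional skill layout (the file and directory names below are illustrative):

```
review-pr/
├── SKILL.md          # main workflow, linking out for details
└── TEST_REVIEW.md    # detailed test coverage evaluation (steps 7a-7e)
```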

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is reasonably efficient but includes some redundant explanations (e.g., explaining what reviewer instructions are, verbose verification steps). The step-by-step format is appropriate for the complexity but could be tightened in places. | 2 / 3 |
| Actionability | Provides fully executable commands throughout (gh pr view, gh pr diff, python scripts, git commands). Each step has concrete, copy-paste-ready commands with specific flags and JSON fields to extract. | 3 / 3 |
| Workflow Clarity | Excellent multi-step workflow with clear sequencing, explicit validation checkpoints (verify cwd, check CI status), conditional logic (if pyproject.toml is touched), and parallel execution guidance. Includes error handling (stop with an error if the cwd cannot be set). | 3 / 3 |
| Progressive Disclosure | Content is well structured with clear sections, but it's a monolithic document that could benefit from splitting detailed sub-processes (like test coverage evaluation) into separate reference files. References to external skills (/create-worktree) and files (@CLAUDE.md) are appropriately one level deep. | 2 / 3 |

Total: 10 / 12 (Passed)
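The checkpoint-and-conditional pattern praised under Workflow Clarity can be sketched as follows (the paths and checks here are invented for illustration, not the skill's actual steps):

```shell
# Sketch of explicit validation checkpoints and conditional logic,
# in the spirit of the skill's workflow (all specifics are hypothetical)
set -eu

# Checkpoint: stop with an error if the working directory cannot be set
worktree=$(mktemp -d)
cd "$worktree" || { echo "error: cannot set cwd" >&2; exit 1; }

# Conditional step: only review dependency changes if pyproject.toml was touched
touch pyproject.toml   # stand-in for "the diff touches pyproject.toml"
if [ -f pyproject.toml ]; then
  echo "pyproject.toml touched: review dependency changes"
fi
```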

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11 (Passed)
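The frontmatter_unknown_keys warning is typically resolved by moving nonstandard keys under metadata, as the check itself suggests. A hypothetical before/after (the offending key names are not shown on this page, so `author` here is invented):

```yaml
# Before: nonstandard key at the top level triggers the warning
name: review-pr
author: someone            # unknown top-level key

# After: nonstandard keys moved under metadata
name: review-pr
metadata:
  author: someone
```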

Reviewed
