Fetch a Jira issue and propose an implementation plan based on codebase analysis
Overall score: 56%
Does it follow best practices?

Evals: Pending (Advisory). No eval scenarios have been run; suggest reviewing before use.
Optimize this skill with Tessl:

npx tessl skill review --optimize ./.claude/skills/run-jira/SKILL.md

Quality
Discovery: 40%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear and distinct capability—fetching Jira issues and proposing implementation plans—but lacks explicit trigger guidance ('Use when...') and misses common user-facing keywords like 'ticket', 'story', or 'task'. It would benefit from expanded trigger terms and an explicit 'when to use' clause to help Claude reliably select this skill.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to implement a Jira ticket, plan work for a Jira issue, or analyze the codebase for a specific task.'
Include common trigger term variations such as 'ticket', 'story', 'task', 'bug', 'JIRA', and 'plan implementation' to improve matching against natural user language.
Expand on what the implementation plan entails (e.g., 'identifies relevant files, proposes code changes, and outlines steps') to increase specificity.
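Applied together, the three suggestions might yield a frontmatter description along these lines (a hypothetical sketch; the skill name and exact wording are assumptions, not the skill's actual metadata):

```yaml
# Illustrative SKILL.md frontmatter incorporating the suggestions above.
name: run-jira
description: >
  Fetch a Jira issue (ticket, story, task, or bug) and propose an
  implementation plan that identifies relevant files, proposes code
  changes, and outlines steps, based on codebase analysis.
  Use when the user asks to implement a Jira ticket, plan work for
  a Jira issue, or analyze the codebase for a specific task.
```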
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Jira) and two actions (fetch a Jira issue, propose an implementation plan based on codebase analysis), but doesn't elaborate on what the implementation plan includes or what kind of codebase analysis is performed. | 2 / 3 |
| Completeness | Describes what the skill does but has no explicit 'Use when...' clause or equivalent trigger guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'when' is entirely absent (not even implied beyond the what), this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes 'Jira issue' and 'implementation plan' which are relevant keywords, but misses common variations like 'ticket', 'JIRA', 'story', 'task', 'bug', 'sprint', or 'plan implementation' that users might naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of Jira issue fetching and codebase-based implementation planning is a fairly distinct niche that is unlikely to conflict with other skills. The Jira-specific trigger makes it clearly distinguishable. | 3 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation: 72%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, actionable skill that clearly guides Claude through fetching a Jira issue and proposing an implementation plan. Its main strengths are specific tool names/parameters and a logical multi-step workflow. Weaknesses include some unnecessary explanatory text and missing validation checkpoints between steps, particularly around verifying that linked resources were successfully retrieved.
Suggestions
Remove the explanatory sentence in Step 3 ('This step is critical — linked resources often contain...') as it explains rationale Claude doesn't need.
Add a brief validation checkpoint after Step 3, e.g., 'If no linked resources could be fetched, note this limitation and proceed with available context only.'
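Taken together, the two suggestions imply a Step 3 along these lines (a hypothetical sketch; the original step's exact wording and heading are not shown in this report, so both are assumed here):

```markdown
## Step 3: Fetch linked resources

<!-- Hypothetical rewrite: the rationale sentence is removed and a
     validation checkpoint is appended, per the suggestions above. -->
Fetch each linked resource whose URL matches the patterns above.
If no linked resources could be fetched, note this limitation and
proceed with available context only.
```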
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Mostly efficient but includes some unnecessary explanation. Phrases like 'This step is critical — linked resources often contain the root cause analysis, timelines, and technical details that the Jira description alone does not capture' explain rationale Claude doesn't need. The step 2 summary template is somewhat verbose but provides useful structure. | 2 / 3 |
| Actionability | Provides specific MCP tool names, exact parameter names and values (cloudId, fields array), concrete tool names for each step (Glob, Grep, Read, Task with subagent_type=Explore, EnterPlanMode), and specific URL patterns to match for linked resources. This is highly actionable and copy-paste ready. | 3 / 3 |
| Workflow Clarity | Steps are clearly sequenced and logically ordered, with a good early exit condition (issue not found). However, there are no validation checkpoints between steps 3-5 — for example, no verification that linked resources were successfully fetched before proceeding to codebase analysis, and no feedback loop if codebase exploration doesn't yield relevant results. | 2 / 3 |
| Progressive Disclosure | For a skill of this size (~50 lines) with no need for external references, the content is well-organized into clear sequential sections. Each step has a distinct purpose and the structure is easy to scan. No bundle files are needed for this workflow-oriented skill. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure (10 / 11 passed)
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
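The frontmatter_unknown_keys warning can typically be resolved by nesting custom keys under metadata, as the warning itself suggests. A minimal sketch (the `owner` key is a made-up example, not taken from this skill's actual frontmatter):

```yaml
# Before: an unrecognized top-level key triggers the warning.
# owner: platform-team

# After: move custom keys under metadata, following the warning's
# suggestion to relocate unknown keys there.
metadata:
  owner: platform-team
```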
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.