**CRITICAL**: Use for ALL CVE discovery and listing. DO NOT call get_cves directly. Use when: "show critical CVEs", "CVEs on hostname X", "remediatable vulnerabilities", "impact of CVE-X", risk assessment. NOT for remediation (use `/remediation`). System-level: FIRST reply = pagination prompt (Step -1). Parsing: references/01-cve-response-parser.py.
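The parser referenced above is not reproduced here. As a hedged illustration of the kind of post-processing such a parser might perform, a minimal sketch follows; the response shape and field names are assumptions for illustration, not taken from `references/01-cve-response-parser.py`:

```python
import json

# Hypothetical response shape -- the real parsing logic lives in
# references/01-cve-response-parser.py; this sketch only illustrates
# the kind of filtering the skill's description implies.
SAMPLE_RESPONSE = json.dumps({
    "data": [
        {"id": "CVE-2024-0001", "severity": "Critical", "remediable": True},
        {"id": "CVE-2024-0002", "severity": "Low", "remediable": False},
        {"id": "CVE-2024-0003", "severity": "Critical", "remediable": False},
    ]
})

def filter_cves(raw: str, severity: str, remediable_only: bool = False) -> list[str]:
    """Return CVE IDs matching a severity (and optionally remediability)."""
    cves = json.loads(raw)["data"]
    return [
        c["id"]
        for c in cves
        if c["severity"] == severity and (c["remediable"] or not remediable_only)
    ]

print(filter_cves(SAMPLE_RESPONSE, "Critical"))
print(filter_cves(SAMPLE_RESPONSE, "Critical", remediable_only=True))
```

A real parser would also need to honor the pagination parameters negotiated in the Step -1 prompt before filtering.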
70 · 62%

**Does it follow best practices?**

- Impact: Pending (no eval scenarios have been run)
- Passed: no known issues
Optimize this skill with Tessl: `npx tessl skill review --optimize ./rh-sre/skills/cve-impact/SKILL.md`

## Quality

### Discovery: 89%

*Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.*
This description is functional and well-structured for skill selection, with strong trigger terms and clear boundaries. Its main weakness is a lack of concrete actions: it tells Claude when to use the skill more than what the skill actually does. The system-level implementation details (pagination prompt, parser reference) are internal instructions that don't help with skill selection and add noise.
**Suggestions**

- Add more specific capability descriptions beyond "discovery and listing", e.g. "Searches, filters, and lists CVEs by hostname, severity, and remediability status. Provides risk assessment and impact details for specific CVEs."
- Remove or separate the system-level implementation details (the Step -1 pagination prompt, the parser reference) from the description field; they don't aid skill selection and reduce clarity.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (CVE discovery and listing) and mentions some actions like discovery, listing, risk assessment, but doesn't list multiple concrete actions in detail. The description focuses more on routing/usage instructions than specific capabilities. | 2 / 3 |
| Completeness | Clearly answers both 'what' (CVE discovery and listing) and 'when' with an explicit 'Use when:' clause listing specific trigger phrases. Also includes a 'NOT for' exclusion clause which adds clarity on boundaries. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms users would actually say: 'show critical CVEs', 'CVEs on hostname X', 'remediatable vulnerabilities', 'impact of CVE-X', 'risk assessment'. These cover multiple realistic user phrasings. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very clearly scoped to CVE discovery/listing with an explicit exclusion ('NOT for remediation, use /remediation'), making it highly distinguishable from related skills. The boundary between this skill and the remediation skill is explicitly drawn. | 3 / 3 |
| **Total** | | **11 / 12 (Passed)** |
### Implementation: 35%

*Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.*
This skill has a well-structured workflow with important safety gates (HITL pagination prompts, MCP validation) and good reference file organization. However, it suffers severely from redundancy—the same information (tool parameters, HITL prompts, tool listings) is repeated 3-4 times throughout the document, making it extremely token-inefficient. Steps 6 and 7 are essentially empty, and the document consultation requirements before every step add verbosity without proportional value.
**Suggestions**

- Eliminate redundancy: consolidate the MCP tool parameters into a single section (Prerequisites, Dependencies, or Tools Reference, not all three) and reference it from the workflow steps.
- Remove the duplicate HITL pagination prompts: define them once in Step -1 and reference that section from Step 1's flow selection, or move them entirely into the flow files.
- Flesh out Steps 6 and 7 with concrete guidance, or remove them and reference an external file, rather than leaving them as empty stubs.
- Remove the Best Practices section (generic advice Claude already knows) and the explanatory text about when to use this skill vs. remediation; this routing logic belongs in a parent skill or in the description metadata.
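One way the consolidation in the first two suggestions could look is sketched below. The section layout, tool parameters, and step names are illustrative assumptions (only `get_cves` appears in the skill's description), not taken from the actual SKILL.md:

```markdown
## Tools Reference  <!-- single source of truth; workflow steps link here -->

| Tool | Key parameters | Returns |
|---|---|---|
| `get_cves` | severity, hostname, limit, offset | Paginated CVE list |

## Workflow

### Step -1: Pagination prompt (HITL)
Ask the user for a page size once, here. Later steps reference this
section instead of restating the prompt.

### Step 2: Fetch CVEs
Call `get_cves` with the parameters from the Tools Reference table.
```

The point of the pattern is that each parameter and prompt has exactly one home, so edits to the skill cannot drift out of sync across duplicated copies.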
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with massive redundancy. The HITL pagination prompt appears three times nearly verbatim (Step -1, then again in Step 1 for flows 02 and 03). MCP tool parameters and dependencies are listed in Prerequisites, then again in Steps 2-3, then again in Dependencies, then again in Tools Reference: four repetitions. The Best Practices section states obvious things Claude already knows. | 1 / 3 |
| Actionability | Provides specific MCP tool names, parameters, and expected output formats, which is good. However, much of the actual execution logic is deferred to external flow files (01-account-cves.md, 02-system-all-cves.md, etc.) rather than being directly executable. The parser invocation commands are concrete and copy-paste ready, but Steps 6 and 7 are essentially empty stubs with no actionable content. | 2 / 3 |
| Workflow Clarity | The workflow has clear step numbering and the HITL gate is well-defined with explicit wait-for-user instructions. However, Steps 6 and 7 are skeletal with no validation or detail. The mandatory document consultation before every step adds process overhead, but the validation checkpoints (MCP validation, HITL confirmation) are good. Missing feedback loops for error recovery within steps. | 2 / 3 |
| Progressive Disclosure | Good use of reference files for flows, output templates, examples, and error handling. However, the main SKILL.md is far too long, with duplicated content that should have been consolidated or moved to references. The Dependencies and Tools Reference sections at the bottom largely duplicate the Prerequisites section. The reference table is well-organized, but the inline content bloat undermines the progressive disclosure pattern. | 2 / 3 |
| **Total** | | **7 / 12 (Passed)** |
### Validation: 100%

*Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.*

**11 / 11 checks passed** (validation for skill structure). No warnings or errors.