
pt-report-creation

Creates penetration test deliverables for executive and technical audiences, including prioritized findings and remediation plans. Use when drafting, structuring, or finalizing pen test reports from collected evidence.

Overall score: 80

Quality: 70%
Does it follow best practices?

Impact: 98% (1.24x)
Average score across 3 eval scenarios

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/pt-report-creation/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description that clearly defines a specific niche (pen test report writing), includes explicit 'Use when' guidance, and uses natural trigger terms. The main weakness is that the capability listing could be more granular—specifying concrete actions like generating executive summaries, formatting vulnerability tables, or mapping findings to frameworks would strengthen specificity.

Suggestions

Expand the capability list with more concrete actions, e.g., 'generate executive summaries, format vulnerability tables, map findings to CVSS scores, and draft remediation timelines'.

Dimension / Reasoning / Score

Specificity

Names the domain (penetration test deliverables) and some actions (drafting, structuring, finalizing), mentions audiences and components (prioritized findings, remediation plans), but doesn't list multiple granular concrete actions like 'generate executive summary, format vulnerability tables, calculate risk scores'.

2 / 3

Completeness

Clearly answers both what ('Creates penetration test deliverables for executive and technical audiences, including prioritized findings and remediation plans') and when ('Use when drafting, structuring, or finalizing pen test reports from collected evidence') with explicit trigger guidance.

3 / 3

Trigger Term Quality

Includes strong natural trigger terms users would say: 'penetration test', 'pen test reports', 'findings', 'remediation plans', 'executive', 'technical audiences', 'evidence', 'deliverables'. Good coverage of how users naturally refer to this task.

3 / 3

Distinctiveness / Conflict Risk

Penetration testing report writing is a very specific niche. The combination of 'pen test', 'findings', 'remediation plans', and 'collected evidence' makes this highly distinctive and unlikely to conflict with other skills like general report writing or security analysis.

3 / 3

Total: 11 / 12

Passed

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a competent but somewhat generic pen test report skill. Its main strength is the concrete report template, which gives Claude a usable structure. Its weaknesses are the lack of concrete examples (e.g., a filled-in finding showing proper severity justification and evidence formatting), the absence of feedback loops in the QA process, and missed opportunities for progressive disclosure to supplementary materials like severity scales or example reports.

Suggestions

Add a concrete example of a fully filled-in finding (e.g., an SQL injection finding with severity justification, evidence screenshots description, reproduction steps, and specific remediation) so Claude knows the expected quality bar.

Add a feedback loop to the QA step: explicitly state 'If QA finds unsupported claims or missing evidence, return to step 3 and revise before finalizing.'

Include or reference a severity rating framework (e.g., CVSS-based or custom risk matrix) so findings are consistently rated rather than leaving severity assignment ambiguous.

Move the full report template to a separate TEMPLATE.md file and reference it from the main skill, keeping the SKILL.md focused on workflow and quality criteria.
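The severity-framework suggestion above can be illustrated with a minimal sketch. It assumes the CVSS v3.1 qualitative rating scale; the function name `cvss_severity` is illustrative, and a real skill might also weigh environmental or business context rather than the base score alone:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"CVSS base score must be between 0.0 and 10.0, got {score}")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

Embedding a deterministic mapping like this in the skill (or a referenced reference file) would keep severity labels consistent across findings instead of leaving each rating to ad-hoc judgment.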

Dimension / Reasoning / Score

Conciseness

The content is reasonably efficient but includes some unnecessary framing (e.g., the 'Objectives' section restates what's obvious from the title and description). The workflow steps could be tighter, though nothing is egregiously verbose.

2 / 3

Actionability

The report template is a concrete artifact Claude can use, which is good. However, the workflow steps are descriptive rather than executable—they tell Claude what to do at a high level ('consolidate outputs', 'deduplicate related findings') without showing concrete examples of how findings should actually be written, what good vs bad severity ratings look like, or example filled-in findings.

2 / 3

Workflow Clarity

The 5-step workflow is clearly sequenced and the final QA pass serves as a validation checkpoint. However, there are no feedback loops—if the QA pass finds issues, there's no explicit 'fix and re-check' cycle. For a report creation process where errors in evidence traceability or unsupported claims could be significant, the validation is mentioned but not structured as an iterative checkpoint.

2 / 3

Progressive Disclosure

The content is organized into logical sections (workflow, template, quality checks) which is good. However, the template is fairly long and could be split into a separate reference file. There are no references to external files for things like severity rating scales, example findings, or methodology details that would benefit from progressive disclosure.

2 / 3

Total: 8 / 12

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: santosomar/ethical-hacking-agent-skills (Reviewed)

