
owasp-top10-2025-cwe-reviewer

Review real software repositories for likely security issues using the local OWASP Top 10:2025 category set, official OWASP-mapped CWE lists, and canonical MITRE CWE records. Use when auditing source code, configuration, IaC, pipelines, dependencies, auth flows, crypto, logging, or error handling, and when the goal is evidence-based findings mapped to both CWE and OWASP 2025 with confidence levels and coverage gaps.

93

Quality

92%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Risky

Do not use without reviewing

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly articulates what the skill does (security review using OWASP Top 10:2025 and CWE frameworks), when to use it (comprehensive 'Use when' clause with many specific trigger scenarios), and what distinguishes it (evidence-based findings with confidence levels and coverage gaps). It uses proper third-person voice, includes rich natural trigger terms, and carves out a distinct niche that would be easy for Claude to differentiate from other skills.

Dimension | Reasoning | Score

Specificity

Lists multiple specific concrete actions: reviewing repositories for security issues, using OWASP Top 10:2025 categories, OWASP-mapped CWE lists, and MITRE CWE records. Specifies concrete outputs like evidence-based findings mapped to CWE and OWASP 2025 with confidence levels and coverage gaps.

3 / 3

Completeness

Clearly answers both 'what' (review repositories for security issues using OWASP Top 10:2025, CWE lists, and MITRE CWE records) and 'when' (explicit 'Use when' clause covering auditing source code, configuration, IaC, pipelines, dependencies, auth flows, crypto, logging, error handling, and when the goal is evidence-based findings).

3 / 3

Trigger Term Quality

Excellent coverage of natural terms users would say: 'security issues', 'auditing source code', 'configuration', 'IaC', 'pipelines', 'dependencies', 'auth flows', 'crypto', 'logging', 'error handling', 'OWASP', 'CWE'. These are terms a developer or security professional would naturally use when requesting a security audit.

3 / 3

Distinctiveness & Conflict Risk

Highly distinctive with a clear niche: OWASP Top 10:2025-based security auditing with CWE mapping, confidence levels, and coverage gaps. The specificity of the methodology (OWASP 2025, MITRE CWE records) and output format (evidence-based findings with confidence levels) makes it very unlikely to conflict with generic code review or other security skills.

3 / 3

Total: 12 / 12 (Passed)

Implementation

85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-crafted, highly actionable security review skill with excellent workflow structure and appropriate progressive disclosure to supporting knowledge files. Its main weakness is moderate verbosity—some sections (attack-surface inventory, category review focus, precision guardrails) could be tightened without losing clarity, especially since they partially overlap with referenced files like detection_playbook.md. Overall, it's a strong skill that provides Claude with clear, defensible guidance for a complex task.

Dimension | Reasoning | Score

Conciseness

The skill is thorough and mostly earns its length given the complexity of security auditing, but some sections are verbose—e.g., the attack-surface inventory in Step 2 is an exhaustive enumeration that could be more compact, and the Category Review Focus section largely restates what the detection_playbook.md already covers. Some guardrails in Step 5 are repetitive with the status rules in Step 3.

2 / 3

Actionability

The skill provides highly concrete, executable guidance: specific file paths to consult, exact status labels and their definitions, precise severity/confidence scales, a complete output template with field-level structure, and explicit decision rules for when to emit or suppress findings. Every step tells Claude exactly what to do rather than describing concepts abstractly.

3 / 3
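The output contract itself is not reproduced on this page, so as a rough illustration only, a finding of the kind the review describes (CWE and OWASP 2025 mapping, status label, severity, confidence, and evidence) might look like the following Python sketch. Every field name here is an assumption, not the skill's actual finding.schema.json.

```python
# Hypothetical finding record; field names are illustrative only and do not
# come from the skill's finding.schema.json (which is not shown on this page).
REQUIRED_FIELDS = {"cwe", "owasp_2025", "status", "severity", "confidence", "evidence"}

def validate_finding(finding: dict) -> list:
    """Return the sorted list of missing required fields (empty if valid)."""
    return sorted(REQUIRED_FIELDS - finding.keys())

example = {
    "cwe": "CWE-89",                 # canonical MITRE CWE identifier
    "owasp_2025": "A03: Injection",  # assumed OWASP Top 10:2025 label
    "status": "likely",              # assumed status label
    "severity": "high",
    "confidence": "medium",
    "evidence": "app/db.py:42 builds SQL via f-string from request input",
}

print(validate_finding(example))  # [] -> all required fields present
```

A field-presence check like this is the simplest form of the "explicit decision rules for when to emit or suppress findings" the review credits; the real skill's rules are richer.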

Workflow Clarity

The six-step workflow is clearly sequenced with logical dependencies (detect context → build inventory → generate candidates → map to OWASP → enforce precision → prioritize). Validation checkpoints are embedded throughout: Step 5 acts as a comprehensive validation gate with explicit pass/fail criteria, and the status rules in Step 3 provide a feedback loop for evidence quality. The output contract enforces traceability back through the workflow.

3 / 3
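The six-step sequence described above can be sketched as a simple pipeline. The function names follow the review text; their bodies are placeholders, not the skill's actual instructions.

```python
# Illustrative sketch of the reviewed six-step workflow; step names follow the
# review's summary (detect context -> inventory -> candidates -> map -> gate ->
# prioritize), implementations are placeholders.
def detect_context(repo):      return {"repo": repo, "stack": "unknown"}
def build_inventory(ctx):      return {**ctx, "surfaces": ["auth", "config"]}
def generate_candidates(inv):  return [{"surface": s} for s in inv["surfaces"]]
def map_to_owasp(cands):       return [{**c, "owasp": "TBD"} for c in cands]
def enforce_precision(cands):  return [c for c in cands if c.get("owasp")]  # validation gate
def prioritize(cands):         return sorted(cands, key=lambda c: c["surface"])

def review(repo):
    ctx = detect_context(repo)
    inv = build_inventory(ctx)
    findings = generate_candidates(inv)
    findings = map_to_owasp(findings)
    findings = enforce_precision(findings)  # explicit pass/fail checkpoint (Step 5)
    return prioritize(findings)

print(len(review("example/repo")))  # 2 candidate findings survive the gate
```

The point of the sketch is the dependency ordering: each step consumes the previous step's output, which is why the review calls the sequencing "logical" and the gate a checkpoint rather than a parallel task.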

Progressive Disclosure

The skill cleanly references external files for detailed content (detection_playbook.md for checklists, finding.schema.json for JSON validation, knowledge/*.json for data) while keeping the SKILL.md as a self-contained workflow overview. References are one level deep and clearly signaled with explicit file paths and conditions for when to use each resource.

3 / 3

Total: 11 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

metadata_version

'metadata.version' is missing

Warning
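This warning is typically cleared by declaring a version in the skill's frontmatter. As a hedged sketch only (the actual SKILL.md is not shown here, and the exact placement of the field depends on the skill spec):

```yaml
---
name: owasp-top10-2025-cwe-reviewer
version: 1.0.0   # assumed placement; the spec may nest this under a metadata key
---
```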

Total: 10 / 11 (Passed)

Repository
NikitaSkripchenko/owasp-top10-cwe-reviewer-skill
Reviewed

