Standardize usage of Dynamic Application Security Testing (DAST) tools (ZAP, Nuclei, Nikto) and custom AI-driven curl probes for adversarial system testing. Use when advising on or running dynamic security scans on local/staging environments.
Skill score: 67

Quality: 61%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Risky: Do not use without reviewing.

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./.github/skills/common/common-dast-tooling/SKILL.md`

Quality
Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly identifies its domain (DAST), lists specific tools (ZAP, Nuclei, Nikto), and provides an explicit 'Use when' trigger clause scoped to local/staging environments. It uses proper third-person voice and contains rich trigger terms that users would naturally use when requesting this type of work.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions and tools: DAST tools (ZAP, Nuclei, Nikto), custom AI-driven curl probes, and adversarial system testing. These are concrete, named capabilities rather than vague abstractions. | 3 / 3 |
| Completeness | Clearly answers both what ('Standardize usage of DAST tools and custom AI-driven curl probes for adversarial system testing') and when ('Use when advising on or running dynamic security scans on local/staging environments') with an explicit 'Use when' clause. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'DAST', 'ZAP', 'Nuclei', 'Nikto', 'security scans', 'dynamic security', 'curl probes', 'adversarial testing', 'staging environments'. Good coverage of both acronyms and full terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche: specifically about dynamic application security testing with named tools, scoped to local/staging environments. Unlikely to conflict with static analysis, general security, or other testing skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides a reasonable high-level overview of DAST tooling categories and important safety constraints (no production scanning, capped scans), but critically lacks executable examples — no actual commands for Nuclei, ZAP, Nikto, or curl are shown. The workflow is not sequenced, and the content reads more like a policy document than an actionable skill that Claude can follow step-by-step.
Suggestions
- Add concrete, copy-paste-ready command examples for each tool (e.g., `nuclei -u http://localhost:8080 -t cves/ -max-duration 10m`, `zap-cli quick-scan --self-contained http://staging.local`).
- Define a clear numbered workflow, e.g.: 1) Configure target, 2) Run automated scan with caps, 3) Review findings, 4) Run targeted curl probes for gaps, 5) Validate/triage results.
- Include at least one complete curl probe example with expected output interpretation, rather than just describing probe categories.
- Add a validation/verification step after scanning (e.g., 'Confirm scan completed without errors by checking exit code and reviewing summary output before proceeding to analysis').
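The last two suggestions could be sketched together as a single probe-and-verify snippet. This is a minimal illustration, not content from the skill under review: the target URL, port, and query parameter are hypothetical, and it should only ever be pointed at a local/staging instance.

```shell
#!/bin/sh
# Minimal sketch of a targeted curl probe plus a validation step.
# The target URL and parameter are hypothetical; point this at your
# own local/staging app only, never production.

probe_status() {
  # Print the HTTP status for a GET probe; curl prints 000 if no response.
  curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$1"
}

classify() {
  # Interpret the status code before moving on to analysis.
  case "$1" in
    5??) echo "FAIL: server error, input may be unhandled" ;;
    000) echo "ERROR: no response, confirm the target is running" ;;
    *)   echo "OK: received status $1" ;;
  esac
}

# Probe a search endpoint with a crude URL-encoded script payload.
status=$(probe_status "http://localhost:8080/search?q=%3Cscript%3E")
classify "$status"
```

The explicit classify step is the point: the agent checks the outcome of each probe before moving to analysis, rather than firing probes blindly.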
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Mostly efficient with good use of tables and bullet points, but some lines are slightly verbose (e.g., 'Best for fast, template-based CVE/misconfiguration scanning' could be tighter). The scoring impact table, while useful, adds bulk without being directly actionable for Claude. | 2 / 3 |
| Actionability | No executable commands or concrete code examples are provided. Tool names are listed, but actual command-line invocations are deferred entirely to a reference file. The curl probing section describes categories of probes but gives no actual curl commands. This is descriptive rather than instructive. | 1 / 3 |
| Workflow Clarity | There is no clear multi-step workflow or sequenced process for running a DAST scan. Steps like 'run scan → review findings → validate → remediate' are absent. For operations involving security scanning (potentially destructive/disruptive), the lack of any validation checkpoints or feedback loops is a significant gap. | 1 / 3 |
| Progressive Disclosure | There is a reference to an implementation guide and an external OWASP link, which is good structure. However, the main file lacks enough actionable quick-start content to stand on its own; it defers too much to the reference file without providing even a minimal executable example inline. | 2 / 3 |
| Total | | 6 / 12 Passed |
Validation
81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| metadata_field | 'metadata' should map string keys to string values | Warning |
| Total | | 9 / 11 Passed |