Static application security testing for code-level vulnerabilities. Use when the user says "SAST scan", "find SQL injection", "check for XSS", "static analysis", "endor sast", "code security scan", or wants to find injection flaws, hardcoded credentials, and insecure patterns in source code. Do NOT use for dependency vulnerabilities (/endor-sca), secrets scanning (/endor-secrets), or viewing pre-computed AI SAST findings (/endor-ai-sast).
Score: 86 (82%)

Does it follow best practices?

Impact: Pending (no eval scenarios have been run)

Risky: do not use without reviewing
Quality
Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that hits all the marks. It provides specific capabilities, comprehensive natural trigger terms, explicit 'Use when' and 'Do NOT use' clauses, and clear differentiation from related skills. The negative boundary-setting with references to sibling skills is a particularly strong feature for disambiguation in a multi-skill environment.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions and vulnerability types: 'SQL injection', 'XSS', 'injection flaws', 'hardcoded credentials', 'insecure patterns in source code'. Clearly describes what the skill does with concrete examples. | 3 / 3 |
| Completeness | Clearly answers both 'what' (static application security testing for code-level vulnerabilities: finding injection flaws, hardcoded credentials, insecure patterns) and 'when' (explicit 'Use when...' clause with multiple trigger phrases). Also includes explicit 'Do NOT use' guidance for disambiguation. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'SAST scan', 'find SQL injection', 'check for XSS', 'static analysis', 'endor sast', 'code security scan'. These are realistic phrases a user would naturally type. | 3 / 3 |
| Distinctiveness / Conflict Risk | Exceptionally distinctive — explicitly differentiates itself from related skills (/endor-sca for dependency vulnerabilities, /endor-secrets for secrets scanning, /endor-ai-sast for AI SAST findings). This negative boundary-setting makes conflict risk very low. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
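The vulnerability types credited above are concrete enough to illustrate. A minimal sketch, not taken from the skill itself, of the SQL injection pattern a SAST scan would flag, next to the parameterized fix:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Flagged by SAST: user input concatenated directly into the SQL string
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchone()

def find_user_safe(conn, username):
    # Secure pattern: parameterized query; the driver handles escaping
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchone()
```

With an input like `"x' OR '1'='1"`, the unsafe version matches every row while the safe version matches none, which is exactly the class of flaw static analysis looks for.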
Implementation
64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable SAST scanning skill with concrete tool invocations, CLI fallbacks, and a well-defined output template. Its main weaknesses are the lack of validation checkpoints within the workflow steps and some inline content (vulnerability table, secure patterns) that could be externalized for better progressive disclosure. The content is mostly efficient but includes some reference material that Claude likely already knows.
Suggestions
- Add explicit validation checkpoints between workflow steps, e.g., 'Verify scan returned results before proceeding to Step 2' and 'If get_resource fails, retry or skip the finding with a note.'
- Move the vulnerability categories table and the language-specific secure patterns into separate reference files, linking to them from the main skill for progressive disclosure.
- Trim the AI false positive reduction explanation: Claude doesn't need the marketing context about Code Pro; just state the flag and note that it requires a Code Pro license.
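The checkpoint suggestion above can be sketched as a small guard around each per-finding fetch. Everything here is illustrative: `get_resource` stands in for the skill's lookup call, and the retry-then-skip policy is an assumption, not the skill's documented behavior.

```python
def fetch_finding_details(get_resource, findings, retries=1):
    """Fetch details for each finding; retry on failure, then skip with a note."""
    details, skipped = [], []
    for finding in findings:
        for attempt in range(retries + 1):
            try:
                details.append(get_resource(finding))
                break
            except Exception:
                if attempt == retries:
                    # Recorded in the report instead of aborting the whole scan
                    skipped.append(finding)
    return details, skipped
```

A report step can then list `skipped` explicitly, so a single failed lookup degrades the output rather than halting the workflow.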
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The vulnerability categories table and language-specific secure patterns are useful reference material, but the vulnerability table is somewhat redundant given Claude's existing security knowledge. The AI false positive reduction section could be tighter. Overall mostly efficient, with some areas that could be trimmed. | 2 / 3 |
| Actionability | Provides concrete MCP tool invocations with specific parameters, executable CLI commands with exact flags, a detailed output template with placeholders, and specific language-level secure coding patterns. The guidance is copy-paste ready and leaves little ambiguity about what to do. | 3 / 3 |
| Workflow Clarity | The 4-step workflow is clearly sequenced and logical, but lacks explicit validation checkpoints or error recovery between steps. For example, there's no guidance on what to do if Step 2 (get_resource) fails for a finding, or how to verify the scan completed successfully before proceeding. The error handling table at the end partially addresses this but is disconnected from the workflow steps. | 2 / 3 |
| Progressive Disclosure | The content is reasonably well-structured with clear sections, and there's one reference to an external file (references/data-sources.md). However, the language-specific secure patterns and vulnerability categories table could be split into separate reference files to keep the main skill leaner. The output template is quite long inline. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
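The 'language-level secure coding patterns' noted under Actionability can be illustrated with the hardcoded-credential case. A minimal sketch, with a hypothetical `SERVICE_API_KEY` variable name, of the pattern a SAST scan flags and its environment-based fix:

```python
import os

API_KEY = "example-hardcoded-key"  # Flagged by SAST: credential embedded in source

def load_api_key():
    # Secure pattern: read the credential from the environment at runtime
    key = os.environ.get("SERVICE_API_KEY")
    if key is None:
        raise RuntimeError("SERVICE_API_KEY is not set")
    return key
```

The fix moves the secret out of version control entirely; the scanner then has nothing to flag, and rotation no longer requires a code change.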
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
Version: 344e7ff