Infrastructure as Code security scanning for Terraform, Kubernetes, CloudFormation, and Azure ARM. Detects misconfigurations, security risks, and compliance violations before deployment.

Use when:

- User asks to scan Terraform files or modules
- User mentions "infrastructure security" or "IaC scan"
- User is working with Kubernetes manifests
- User asks about CloudFormation or ARM template security
- Agent is generating or modifying infrastructure code
Overall score: 85

Quality: 81% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Validation: Passed (No known issues)
Quality
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly defines its scope, lists concrete capabilities, and provides explicit trigger conditions. It covers multiple specific technologies while maintaining a focused niche in IaC security scanning. The 'Use when' clause with bullet points is particularly effective at guiding skill selection.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple concrete actions ('security scanning', 'detects misconfigurations, security risks, and compliance violations') and names specific technologies (Terraform, Kubernetes, CloudFormation, Azure ARM). These are concrete, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (IaC security scanning, detecting misconfigurations/security risks/compliance violations) and 'when' with an explicit 'Use when:' clause listing five specific trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Includes excellent natural trigger terms users would actually say: 'Terraform files', 'infrastructure security', 'IaC scan', 'Kubernetes manifests', 'CloudFormation', 'ARM template security', and even covers the agent-initiated case of generating infrastructure code. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: IaC security scanning for specific platforms. The combination of security scanning + infrastructure-as-code + named platforms (Terraform, Kubernetes, CloudFormation, ARM) makes it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill with a clear five-phase workflow and good verification steps, but it is overly long, carrying inline remediation examples that Claude likely already knows. The core scanning instructions use pseudocode rather than executable tool invocations, which reduces actionability. Moving platform-specific remediation examples to separate files and tightening the scan invocation syntax would significantly improve this skill.
Suggestions
Make scan invocations executable by using actual tool call syntax or CLI commands instead of pseudocode like 'Run snyk_iac_scan with: - path: <directory>'
Move platform-specific remediation examples (Terraform, K8s, CloudFormation fixes) to separate reference files (e.g., TERRAFORM_FIXES.md, K8S_FIXES.md) and link from the main skill
Remove the Discovery phase's file-type identification details — Claude already knows what Terraform and Kubernetes files look like
Trim the supported IaC formats table; Claude knows common file extensions and can infer format from content
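To illustrate the first suggestion, the pseudocode invocation could be replaced with real CLI calls. This is a sketch only: it assumes the standard Snyk CLI is installed and authenticated, and `./infrastructure` is a placeholder path, not a path from the skill.

```shell
# Scan all IaC files under the target directory,
# ignoring findings below medium severity
snyk iac test ./infrastructure --severity-threshold=medium

# Emit machine-readable output for the Analyze Results phase
snyk iac test ./infrastructure --json > scan-results.json
```

Concrete commands like these let the agent run the scan verbatim instead of guessing how to translate 'Run snyk_iac_scan with: - path: <directory>' into a tool call.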
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably well-structured but includes some content Claude already knows (e.g., the supported IaC formats table with file extensions, the discovery phase explaining what Terraform/K8s files look like). The remediation examples are extensive and could be trimmed or moved to a separate reference file, as many are standard security patterns Claude would know. | 2 / 3 |
| Actionability | The scan commands reference a tool called 'snyk_iac_scan' but use pseudocode-style invocations ('Run snyk_iac_scan with: - path: <directory>') rather than actual executable commands or proper tool call syntax. The remediation code examples are concrete and copy-paste ready, but the core scanning workflow lacks executable specificity. | 2 / 3 |
| Workflow Clarity | The five-phase workflow (Discovery → Execute Scan → Analyze Results → Remediation → Verification) is clearly sequenced with explicit goals per phase. Phase 5 includes a re-scan verification step with before/after comparison, creating a proper feedback loop for confirming fixes are effective. | 3 / 3 |
| Progressive Disclosure | The content is well-organized with clear sections and headers, but it's monolithic — all remediation examples for Terraform, Kubernetes, and CloudFormation are inline rather than split into separate reference files. At ~250+ lines, the remediation examples and format-specific details would benefit from being in linked files. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
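The Phase 5 before/after comparison praised above could be sketched as follows. This is a minimal sketch: the finding records with an `id` field are a hypothetical shape for the scan output, not the skill's actual schema.

```python
def compare_scans(before, after):
    """Diff two scan result lists by finding ID.

    Returns which findings were fixed by the remediation
    and which new findings the changes introduced.
    """
    before_ids = {finding["id"] for finding in before}
    after_ids = {finding["id"] for finding in after}
    return {
        "fixed": sorted(before_ids - after_ids),
        "introduced": sorted(after_ids - before_ids),
    }


# Hypothetical finding records; a real scanner's JSON will differ.
before = [
    {"id": "SNYK-CC-TF-1", "severity": "high"},
    {"id": "SNYK-CC-K8S-4", "severity": "medium"},
]
after = [
    {"id": "SNYK-CC-K8S-4", "severity": "medium"},
]

result = compare_scans(before, after)
# result["fixed"] == ["SNYK-CC-TF-1"]; result["introduced"] == []
```

A diff like this closes the feedback loop: the agent can report exactly which findings its remediation resolved and confirm it introduced no new ones before finishing.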
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 11 / 11 checks passed. No warnings or errors.