Auditing Terraform infrastructure-as-code for security misconfigurations using Checkov, tfsec, Terrascan, and OPA/Rego policies to detect overly permissive IAM policies, public resource exposure, missing encryption, and insecure defaults before cloud deployment.
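For illustration, the class of issue these scanners flag can be sketched with a toy check. This is a simplified stand-in, not how Checkov, tfsec, or Terrascan actually work (they use full HCL parsing and policy engines rather than regexes):

```python
import re

# Toy patterns for two of the misconfigurations named above: a public
# S3 bucket ACL and a wildcard IAM action. Illustrative only.
CHECKS = {
    "public S3 ACL": re.compile(r'acl\s*=\s*"public-read(-write)?"'),
    "wildcard IAM action": re.compile(r'"Action"\s*:\s*"\*"'),
}

def scan(hcl_text: str) -> list[str]:
    """Return the names of checks that match the given Terraform source."""
    return [name for name, pat in CHECKS.items() if pat.search(hcl_text)]

snippet = '''
resource "aws_s3_bucket_acl" "logs" {
  bucket = aws_s3_bucket.logs.id
  acl    = "public-read"
}
'''
print(scan(snippet))  # prints ['public S3 ACL']
```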
Overall score: 69
Quality: 62% (Does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Advisory: Suggest reviewing before use
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/auditing-terraform-infrastructure-for-security/SKILL.md`

Quality
Discovery
82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong, highly specific description that clearly names concrete tools and security issues it addresses, making it very distinctive among skills. Its main weakness is the lack of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill. The domain terminology is excellent and naturally matches what users in this space would say.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to scan Terraform code for security issues, run Checkov/tfsec/Terrascan, review IaC for compliance, or check cloud infrastructure configurations before deployment.'
- Consider adding common file extensions or patterns as triggers, such as '.tf files', 'terraform plan', or 'HCL', to catch additional natural user phrasing.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: auditing Terraform IaC, detecting overly permissive IAM policies, public resource exposure, missing encryption, and insecure defaults. Also names specific tools: Checkov, tfsec, Terrascan, and OPA/Rego policies. | 3 / 3 |
| Completeness | The 'what' is thoroughly covered with specific tools and detection targets, but there is no explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied by the phrase 'before cloud deployment'. Per rubric guidelines, missing explicit trigger guidance caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'Terraform', 'security', 'Checkov', 'tfsec', 'Terrascan', 'OPA', 'Rego', 'IAM policies', 'encryption', 'cloud deployment', 'infrastructure-as-code'. These are terms a user working in this domain would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: Terraform-specific security auditing using named tools (Checkov, tfsec, Terrascan, OPA/Rego). This is unlikely to conflict with general coding skills, generic security skills, or other IaC tools. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill excels at actionability with concrete, executable commands, real Rego policies, and a complete CI/CD pipeline example. However, it is significantly over-engineered for a SKILL.md file—it includes glossary definitions Claude doesn't need, redundant tool descriptions, and all content inlined in one massive file. The workflow lacks validation checkpoints between steps, which is important for a security auditing process.
Suggestions
- Remove the 'Key Concepts' table entirely; Claude already knows what IaC, shift-left security, and Terraform plans are. Remove or drastically trim the 'Tools & Systems' section, since tool descriptions are already evident from usage in the workflow.
- Extract the OPA/Rego policies into a separate POLICIES.md file, the CI/CD YAML into a CI_CD.md file, and the output format into an OUTPUT_FORMAT.md file, with clear one-level-deep references from the main skill.
- Add explicit validation checkpoints: after each scan step, include guidance on reviewing results, triaging findings by severity, and deciding whether to proceed or fix issues before moving to the next tool.
- Trim the 'Common Scenarios' section; it largely restates the workflow steps and adds little new information beyond the pitfalls paragraph, which could be a brief 'Gotchas' note in the main workflow.
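The validation-checkpoint suggestion can be sketched as a small gate over scanner output. The JSON shape here loosely mirrors the `failed_checks` list in Checkov's `--output json` report, but the presence of a `severity` field is an assumption (it can be null in the open-source tool):

```python
# Sketch of a post-scan checkpoint: triage findings by severity and
# decide whether to proceed to the next tool or stop and fix first.
from collections import Counter

BLOCKING = {"CRITICAL", "HIGH"}

def triage(failed_checks: list[dict]) -> tuple[Counter, bool]:
    """Count findings per severity; return (counts, ok_to_proceed)."""
    counts = Counter((c.get("severity") or "UNKNOWN") for c in failed_checks)
    ok = not any(sev in BLOCKING for sev in counts)
    return counts, ok

# Hypothetical findings; the check IDs are real Checkov S3 checks.
findings = [
    {"check_id": "CKV_AWS_20", "severity": "HIGH"},    # public S3 read ACL
    {"check_id": "CKV_AWS_19", "severity": "MEDIUM"},  # missing encryption
]
counts, ok = triage(findings)
print(counts, "proceed" if ok else "fix before next step")
```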
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~250+ lines. It includes a 'Key Concepts' table explaining things Claude already knows (what IaC is, what 'shift left security' means, what a Terraform Plan is), a 'Tools & Systems' section that largely repeats information from the workflow steps, and a lengthy 'Common Scenarios' section that restates the workflow. The 'When to Use' and 'Do not use' sections are also unnecessarily detailed. | 1 / 3 |
| Actionability | The skill provides fully executable bash commands, complete Rego policy files, and a ready-to-use GitHub Actions YAML workflow. Every step includes copy-paste ready commands with real flags and options, and the OPA policies are concrete and functional. | 3 / 3 |
| Workflow Clarity | The six steps are clearly sequenced and logically ordered (scan → custom policies → CI/CD → state audit). However, there are no validation checkpoints or feedback loops between steps: no guidance on what to do when a scan fails, no 'verify results before proceeding' gates, and no error recovery steps. For a security auditing workflow involving potentially destructive pipeline blocking, this is a gap. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files. The OPA policies, CI/CD YAML, output format template, and common scenarios could all be split into separate referenced files. Everything is inlined in a single massive document, making it expensive to load for simple use cases. | 1 / 3 |
| Total | | 7 / 12 Passed |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
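The one warning above amounts to a lint over the frontmatter's top-level keys. A minimal sketch, assuming a dict of parsed frontmatter; the `ALLOWED` set here is illustrative, not the actual skill-spec schema:

```python
# Sketch of a frontmatter_unknown_keys check: flag top-level keys
# outside an allowed set. ALLOWED is an assumption, not the real spec.
ALLOWED = {"name", "description", "metadata"}

def unknown_keys(frontmatter: dict) -> list[str]:
    """Return unexpected top-level frontmatter keys, sorted."""
    return sorted(k for k in frontmatter if k not in ALLOWED)

fm = {"name": "auditing-terraform", "description": "…", "license": "MIT"}
print(unknown_keys(fm))  # prints ['license']
```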