Rosetta planning, coding, and reviewing skill for IaC implementation (Terraform, Polumi, CloudFormation, ARM, Bicep, Crossplane, CDK, Helm, Kustomize, etc). MUST use when implementing features, fixing bugs, or making code changes to any IaC.
Quality: 51%. Does it follow best practices?
Impact: No eval scenarios have been run.
Passed: No known issues.

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./instructions/r2/core/skills/coding-iac/SKILL.md`

Quality
Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid skill description with strong trigger term coverage and clear 'when to use' guidance. The main weakness is that the capability description ('planning, coding, and reviewing') is somewhat generic and could benefit from more specific concrete actions. There is also a typo ('Polumi' should be 'Pulumi') and the term 'Rosetta' is unexplained, which could cause confusion.
Suggestions

- Replace generic actions 'planning, coding, and reviewing' with more specific capabilities like 'generate modules, validate configurations, refactor resource definitions, review security posture'
- Fix the typo 'Polumi' to 'Pulumi' and clarify what 'Rosetta' refers to (if it's a methodology or framework name, briefly explain it)
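One possible rewrite applying both suggestions (the wording below is illustrative, not prescribed; the note about 'Rosetta' is a placeholder to be filled in by the skill's maintainer):

```yaml
# TODO: briefly explain what "Rosetta" refers to (methodology? framework?)
description: >
  Rosetta IaC skill for generating modules, validating configurations,
  refactoring resource definitions, and reviewing security posture across
  Terraform, Pulumi, CloudFormation, ARM, Bicep, Crossplane, CDK, Helm,
  and Kustomize. MUST use when implementing features, fixing bugs, or
  making code changes to any IaC.
```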
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (IaC) and lists specific tools (Terraform, Pulumi, CloudFormation, etc.), and mentions actions like 'planning, coding, and reviewing,' but doesn't describe concrete specific actions (e.g., 'generate modules,' 'validate configurations,' 'refactor resource definitions'). | 2 / 3 |
| Completeness | Clearly answers both 'what' (planning, coding, and reviewing for IaC implementation across multiple tools) and 'when' ('MUST use when implementing features, fixing bugs, or making code changes to any IaC'), with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: Terraform, Pulumi, CloudFormation, ARM, Bicep, Crossplane, CDK, Helm, Kustomize, IaC, plus action terms like 'implementing features,' 'fixing bugs,' and 'code changes.' These are terms users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche focused on Infrastructure as Code tools. The extensive list of specific IaC technologies (Terraform, Pulumi, CloudFormation, ARM, Bicep, Crossplane, CDK, Helm, Kustomize) makes it very unlikely to conflict with non-IaC skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation: 12%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill attempts to be comprehensive but fails on execution: it is extremely verbose yet lacks any concrete, executable examples. The heavy reliance on ALL CAPS warnings and repeated CRITICAL markers creates noise rather than clarity. The content reads more like an organizational policy document than an actionable skill file, with abstract checklists replacing the specific commands, code snippets, and tool invocations that would make it useful.
Suggestions

- Replace abstract instructions with concrete, executable examples: show actual CLI commands (e.g., `checkov -d .`, `tfsec .`, `terraform validate`), actual module usage snippets, and actual template references rather than just listing tool names.
- Eliminate redundant emphasis and repeated points: consolidate all CRITICAL rules into a single, concise 'Constraints' section instead of scattering them across planning, coding, and review sections.
- Extract the detailed review checklist, error handling catalog, and tool configurations into separate referenced files (e.g., REVIEW_CHECKLIST.md, ERROR_HANDLING.md) and keep SKILL.md as a concise overview with clear links.
- Add at least one end-to-end worked example showing the full workflow from request analysis through code generation, validation, and review report output.
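The first suggestion can be illustrated with a minimal sketch of such a validation pipeline. The tool names come from the suggestion above; the `run_if_present` helper and the module path `.` are illustrative assumptions, and a real CI job would fail hard on a missing tool or a non-zero scan result rather than skip:

```shell
#!/usr/bin/env sh
# Minimal validation pipeline sketch for a Terraform module directory.
# run_if_present skips scanners that are not installed so the sketch stays
# runnable anywhere; replace it with direct calls in CI.
run_if_present() {
  if command -v "$1" >/dev/null 2>&1; then
    "$@"
  else
    echo "skip: $1 not installed"
  fi
}

run_if_present terraform fmt -check -recursive  # formatting check
run_if_present terraform validate               # syntax and internal consistency
run_if_present tfsec .                          # security scan, engine 1
run_if_present checkov -d .                     # security scan, engine 2
```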
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with excessive use of ALL CAPS, bold CRITICAL warnings, and redundant emphasis. Many points are repeated across sections (e.g., 'check existing infrastructure' appears multiple times). Contains filler like 'Etc.' entries, explanations of obvious concepts, and motivational threats ('COST OF SKIPPING: SECURITY INCIDENT WITH CIO, CISO, AND MULTIMILLION FINES!') that waste tokens without adding actionable value. | 1 / 3 |
| Actionability | Despite its length, the skill contains zero executable code examples, no concrete commands, no specific CLI invocations, and no template snippets. Instructions are abstract ('Check resource name availability', 'Use formatting, linking, multi-engine security scanning') rather than showing exactly how to do these things. Lists of tool names are provided but no actual usage commands or configurations. | 1 / 3 |
| Workflow Clarity | There is a discernible multi-phase workflow (planning → coding → review → documentation → error handling → self-healing), and the self-healing loop has explicit retry conditions and stop conditions. However, the planning section has inconsistent numbering (Step 3 appears without Steps 1-2 being labeled), validation checkpoints within the coding phase are vague ('run security and validation tools'), and the review section lists tools without showing how they integrate into a sequenced pipeline. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files despite being complex enough to warrant them. The review checklist, error handling catalog, and tool configurations could each be separate reference files. There are references to a 'load-context skill' and 'Rosetta prep steps' that are never linked or explained, making navigation impossible. | 1 / 3 |
| Total | | 5 / 12 Passed |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
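A sketch of the suggested fix, assuming the skill file uses YAML frontmatter and that `owner` stands in for whatever unknown key actually triggered the warning:

```yaml
---
name: coding-iac          # hypothetical skill name
# Before: an unknown top-level key like this triggers the warning.
# owner: platform-team
# After: nested under metadata, as the warning suggests.
metadata:
  owner: platform-team
---
```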