Best practices for using Pulumi Automation API to programmatically orchestrate infrastructure operations. Covers multi-stack orchestration, embedding Pulumi in applications, architecture choices, and common patterns.
Install with Tessl CLI
npx tessl i github:pulumi/agent-skills --skill pulumi-automation-api
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies the technology domain (Pulumi Automation API) but reads more like a topic outline than actionable skill guidance. It lacks explicit trigger conditions for when Claude should use this skill and doesn't include enough natural user keywords. The description would benefit from concrete actions and clear 'Use when...' guidance.
Suggestions
Add an explicit 'Use when...' clause with trigger scenarios like 'Use when the user asks about programmatic infrastructure deployment, Pulumi SDK usage, or automating stack operations'
Include natural trigger terms users would say: 'IaC', 'infrastructure as code', 'Pulumi SDK', 'programmatic deployment', 'stack automation'
Replace vague phrases like 'architecture choices' and 'common patterns' with specific actions like 'create and destroy stacks programmatically', 'embed infrastructure provisioning in CI/CD pipelines', 'manage multiple stacks from code'
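As a sketch of what these suggestions might look like in practice, here is a hypothetical revised frontmatter; the wording and field layout are illustrative, not the skill's actual file:

```yaml
# Hypothetical revised frontmatter (illustrative only)
name: pulumi-automation-api
description: >-
  Create, deploy, and destroy Pulumi stacks programmatically with the
  Automation API (Pulumi SDK). Use when the user asks about programmatic
  infrastructure deployment, infrastructure as code (IaC) automation,
  embedding provisioning in CI/CD pipelines, or managing multiple
  stacks from code.
```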
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Pulumi Automation API) and mentions some actions like 'multi-stack orchestration' and 'embedding Pulumi in applications', but lacks concrete specific actions like 'create stacks', 'deploy resources', or 'manage state'. | 2 / 3 |
| Completeness | Describes what the skill covers but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing explicit trigger guidance caps this at 2, but the 'what' is also weak, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Includes 'Pulumi Automation API' and 'infrastructure operations', which are relevant, but misses common user terms like 'IaC', 'infrastructure as code', 'programmatic deployment', 'stack management', or 'Pulumi SDK'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Specific to the Pulumi Automation API, which provides some distinction, but 'infrastructure operations' and 'architecture choices' are generic enough to potentially overlap with other IaC or DevOps skills. | 2 / 3 |
| Total | | 7 / 12 Passed |
Implementation — 72%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill with excellent actionability: its concrete, executable code examples cover the main Automation API use cases. The content could be more concise by trimming explanatory prose that Claude doesn't need, and workflow clarity would benefit from explicit validation steps in the deployment patterns, especially for multi-stack orchestration, which involves potentially destructive operations.
Suggestions
Remove or significantly trim the 'What is Automation API' section - Claude understands what programmatic API access means
Add validation checkpoints to the multi-stack orchestration pattern, such as checking stack.outputs() or verifying resource state before proceeding to dependent stacks
Add a rollback or recovery pattern showing what to do when a mid-sequence stack deployment fails
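A minimal sketch of the checkpoint-and-rollback pattern these suggestions describe. The `deploy` and `destroy` callbacks are hypothetical stand-ins for `stack.up()` plus `stack.outputs()` and `stack.destroy()` from `@pulumi/pulumi/automation`; the stack names and validation rule in any usage are illustrative.

```typescript
// Sketch: deploy stacks in dependency order, validate outputs after each,
// and destroy already-deployed stacks in reverse order on failure.
type StackOps = {
  deploy: (name: string) => Promise<Record<string, unknown>>; // stand-in for stack.up() + stack.outputs()
  destroy: (name: string) => Promise<void>;                   // stand-in for stack.destroy()
};

// Reverse-order rollback targets after `deployed` stacks have been created.
export function rollbackOrder(deployed: string[]): string[] {
  return [...deployed].reverse();
}

export async function deployAll(
  stacks: string[],
  ops: StackOps,
  validate: (name: string, outputs: Record<string, unknown>) => boolean,
): Promise<{ ok: boolean; deployed: string[] }> {
  const deployed: string[] = [];
  for (const name of stacks) {
    try {
      const outputs = await ops.deploy(name);
      deployed.push(name); // resources now exist, so include this stack in any rollback
      // Checkpoint: verify outputs before proceeding to dependent stacks.
      if (!validate(name, outputs)) {
        throw new Error(`validation failed for ${name}`);
      }
    } catch (err) {
      // Roll back everything deployed so far, newest first.
      for (const victim of rollbackOrder(deployed)) {
        await ops.destroy(victim);
      }
      return { ok: false, deployed: [] };
    }
  }
  return { ok: true, deployed };
}
```

Validating `stack.outputs()` before moving on is what keeps a bad middle stack from taking down its dependents; the reverse-order teardown mirrors the usual multi-stack destroy sequence.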
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient but includes some unnecessary explanation, such as the 'What is Automation API' section explaining concepts Claude likely knows. The 'When NOT to Use' section is brief but could be trimmed, and some prose could be tightened. | 2 / 3 |
| Actionability | Provides fully executable TypeScript code examples throughout, including multi-stack orchestration, configuration passing, output reading, error handling, and parallel operations. All examples are copy-paste ready with proper imports. | 3 / 3 |
| Workflow Clarity | Multi-stack deployment and destroy sequences are clearly shown with correct ordering. However, there are no explicit validation checkpoints or feedback loops for error recovery: the error handling section catches errors but doesn't show a validate-fix-retry pattern for deployment failures. | 2 / 3 |
| Progressive Disclosure | Well organized, with clear sections progressing from overview to specific patterns. References to related skills and external documentation are clearly signaled at the end. The quick reference table provides excellent navigation for common scenarios. | 3 / 3 |
| Total | | 10 / 12 Passed |
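One possible shape for the validate-fix-retry loop flagged under Workflow Clarity, sketched with a hypothetical `attemptUpdate` callback standing in for `() => stack.up()`. Classifying which errors are transient is left to the caller; Pulumi's concurrent-update lock is one case commonly worth retrying.

```typescript
// Sketch: retry a stack update a bounded number of times, but only for
// errors the caller classifies as transient (e.g. a concurrent-update lock).
export async function updateWithRetry<T>(
  attemptUpdate: () => Promise<T>,          // stand-in for () => stack.up()
  isTransient: (err: unknown) => boolean,   // e.g. err instanceof ConcurrentUpdateError
  maxAttempts = 3,
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await attemptUpdate();
    } catch (err) {
      lastErr = err;
      if (!isTransient(err)) throw err; // permanent failure: surface immediately
    }
  }
  throw lastErr; // transient failure persisted past maxAttempts
}
```

Permanent failures (bad config, failed resource creation) propagate on the first attempt, so the retry budget is spent only on contention-style errors.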
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.