deploying-airflow

Deploy Airflow DAGs and projects. Use when the user wants to deploy code, push DAGs, set up CI/CD, deploy to production, or asks about deployment strategies for Airflow.

82

Quality: 77% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Risky. Do not use without reviewing.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/deploying-airflow/SKILL.md
SKILL.md
Quality
Evals
Security

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description that clearly communicates its purpose and when to use it. The 'Use when...' clause provides good trigger coverage for deployment-related queries in the Airflow context. The main weakness is that the 'what' portion could be more specific about the concrete actions or methods involved in deployment (e.g., specific CI/CD tools, deployment targets).

Specificity (2 / 3): Names the domain (Airflow DAGs/projects) and mentions deployment-related actions (deploy code, push DAGs, set up CI/CD, deploy to production), but the actions are somewhat generic and not deeply specific about concrete capabilities like 'configure GitHub Actions pipelines' or 'sync DAG folders to S3'.

Completeness (3 / 3): Clearly answers both 'what' (deploy Airflow DAGs and projects) and 'when' with an explicit 'Use when...' clause listing multiple trigger scenarios, including deploying code, pushing DAGs, setting up CI/CD, and asking about deployment strategies.

Trigger Term Quality (3 / 3): Includes strong natural trigger terms users would actually say: 'deploy code', 'push DAGs', 'CI/CD', 'deploy to production', 'deployment strategies', 'Airflow'. Good coverage of common variations a user might use when asking about Airflow deployment.

Distinctiveness / Conflict Risk (3 / 3): The combination of 'Airflow' + 'DAGs' + 'deployment' creates a clear niche that is unlikely to conflict with generic deployment skills or generic Airflow skills (like DAG authoring). The triggers are well-scoped to deployment-specific tasks.

Total: 11 / 12. Passed.

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable deployment guide covering three deployment paths (Astro, Docker Compose, Kubernetes) with executable commands and complete configuration examples. Its main weaknesses are the lack of validation/verification steps after deployments and the monolithic structure that inlines lengthy YAML configurations rather than splitting them into referenced files. The content is slightly verbose in places but generally earns its space with deployment-specific details.
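For orientation, the three paths roughly map onto command lines like the following. This is a sketch, not the skill's actual content: the CLI names (astro, docker compose, helm) are real tools, but the release name, namespace, and chart reference are illustrative assumptions.

```shell
#!/usr/bin/env bash
# Illustrative dispatcher over the three deployment paths the review mentions.
# The CLIs are real; the specific arguments are assumed defaults, not taken
# from the skill under review.
deploy_cmd() {
  case "$1" in
    astro)      echo "astro deploy" ;;                 # Astronomer platform
    compose)    echo "docker compose up -d" ;;         # local Docker Compose
    kubernetes) echo "helm upgrade --install airflow apache-airflow/airflow -n airflow" ;;
    *)          echo "unknown deployment target: $1" >&2; return 1 ;;
  esac
}

deploy_cmd compose   # prints: docker compose up -d
```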

Suggestions

- Add explicit validation steps after each deployment path (e.g., 'Verify: curl http://localhost:8080/health should return healthy', or 'kubectl get pods -n airflow: all pods should be Running').
- Move the full docker-compose.yaml and values.yaml into separate referenced files (e.g., DOCKER_COMPOSE.md, HELM_VALUES.md) and keep only minimal/key snippets inline.
- Add a troubleshooting or error recovery section for common deployment failures (e.g., image build failures, pod CrashLoopBackOff, database migration issues).
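The first suggestion could be sketched roughly like this. The localhost port, the airflow namespace, and the shape of the /health payload are assumptions about a typical Airflow setup, not details from the skill itself; the live curl and kubectl calls are left as comments because they require a running deployment.

```shell
#!/usr/bin/env bash
# Hypothetical post-deploy verification sketch; adjust host/namespace to your setup.
set -euo pipefail

# Airflow's webserver /health endpoint reports per-component JSON, e.g.
# {"metadatabase": {"status": "healthy"}, "scheduler": {"status": "healthy"}}.
health_ok() {
  # Succeed only if a "healthy" status is present and no component is unhealthy.
  echo "$1" | grep -q '"status": *"healthy"' && ! echo "$1" | grep -q '"unhealthy"'
}

# In a live deployment you would fetch the payload and check pods like this:
#   payload=$(curl -fsS http://localhost:8080/health)
#   kubectl get pods -n airflow --field-selector=status.phase!=Running
payload='{"metadatabase": {"status": "healthy"}, "scheduler": {"status": "healthy"}}'
if health_ok "$payload"; then
  echo "airflow healthy"
else
  echo "airflow unhealthy" >&2
  exit 1
fi
```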

Conciseness (2 / 3): The skill is fairly comprehensive but includes some unnecessary verbosity, e.g. explaining what each Airflow service does (scheduler, triggerer, etc.), which Claude already knows, and the lengthy docker-compose.yaml and values.yaml could be trimmed. However, most content is relevant deployment-specific configuration that Claude wouldn't inherently know.

Actionability (3 / 3): The skill provides fully executable, copy-paste-ready commands and configuration files throughout: a complete docker-compose.yaml, Dockerfile, helm commands, kubectl commands, and astro CLI commands, with clear context for when to use each.

Workflow Clarity (2 / 3): While individual commands and configurations are clear, there are no explicit validation checkpoints or feedback loops. For example, after deploying via Helm or Docker Compose there is no 'verify the deployment succeeded' step, no error recovery guidance, and the package installation workflow (rebuild Docker) lacks validation that packages installed correctly.

Progressive Disclosure (2 / 3): The content is well-structured with clear sections and tables, but it is quite long (~250 lines of substantive content), with detailed YAML configurations inline that could be split into separate reference files. The 'Related Skills' section at the end is good, but the main body could benefit from being an overview that points to detailed deployment guides per platform.

Total: 9 / 12. Passed.

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: astronomer/agents (Reviewed)
