Workflow and best practices for writing Apache Airflow DAGs. Use when the user wants to create a new DAG, write pipeline code, or asks about DAG patterns and conventions. For testing and debugging DAGs, see the testing-dags skill.
Summary

- Overall score: 89 (87%)
- Impact: Pending; no eval scenarios have been run yet
- Quality (does it follow best practices?): Passed, no known issues
Discovery (89%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid skill description that clearly identifies its domain (Airflow DAG writing), provides explicit trigger guidance via a 'Use when' clause, and helpfully distinguishes itself from a related testing skill. The main weakness is that the 'what' portion could be more specific about the concrete actions and capabilities covered, such as defining operators, setting schedules, or configuring task dependencies.
Suggestions

- Add more specific concrete actions to the capability description, e.g., 'Guides creation of Airflow DAGs including defining operators, setting schedules, configuring task dependencies, and structuring pipeline code.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Apache Airflow DAGs) and mentions some actions ('create a new DAG', 'write pipeline code', 'DAG patterns and conventions'), but doesn't list specific concrete actions like defining operators, setting schedules, configuring dependencies, or handling retries. | 2 / 3 |
| Completeness | Clearly answers both 'what' (workflow and best practices for writing Airflow DAGs) and 'when' (explicit 'Use when' clause covering creating DAGs, writing pipeline code, or asking about patterns). Also helpfully delineates scope by pointing to a separate testing skill. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms: 'Airflow', 'DAG', 'DAGs', 'pipeline code', 'DAG patterns', 'conventions', 'create a new DAG'. These are terms users would naturally use when asking about Airflow DAG authoring. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly scoped to Apache Airflow DAG authoring with an explicit boundary drawn against the testing-dags skill. The combination of 'Airflow', 'DAG', and 'pipeline code' creates a distinct niche unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
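The table above notes that configuring task dependencies is part of DAG authoring. Airflow rejects task graphs that contain cycles; the check it performs can be sketched in plain Python as a depth-first search for back edges. This is a hypothetical simplification for illustration, not Airflow's actual implementation, and the task names are invented:

```python
# Minimal sketch of the acyclic check a scheduler performs before
# accepting a task graph. Names are illustrative, not Airflow's API.
def has_cycle(deps):
    """deps maps task_id -> list of downstream task_ids."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / in progress / done
    color = {t: WHITE for t in deps}

    def visit(node):
        color[node] = GRAY
        for nxt in deps.get(node, []):
            if color.get(nxt, WHITE) == GRAY:
                return True  # back edge found: the graph has a cycle
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[t] == WHITE and visit(t) for t in deps)

# extract >> transform >> load is a valid DAG...
print(has_cycle({"extract": ["transform"], "transform": ["load"], "load": []}))  # False
# ...but adding load >> extract creates a cycle
print(has_cycle({"extract": ["transform"], "transform": ["load"], "load": ["extract"]}))  # True
```

The same three-color idea is the standard way to detect cycles in any directed graph, which is why "DAG" appears in the tool's name: only acyclic graphs have a well-defined execution order.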
Implementation (85%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill that provides a clear, phased workflow for DAG authoring with strong validation checkpoints and good progressive disclosure to related skills. The actionability is excellent, with concrete CLI commands throughout. The main weakness is moderate verbosity: the ASCII diagram and some thin phases could be tightened to save tokens.
Suggestions

- Replace the ASCII workflow diagram with a compact numbered list (e.g., '1. Discover → 2. Plan → 3. Implement → 4. Validate → 5. Test → 6. Iterate') to save ~15 lines of tokens.
- Consider merging the thin Plan and Implement phases into briefer inline guidance, as they contain minimal unique actionable content compared to the Discovery and Validate phases.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient but includes some unnecessary elements: the large ASCII workflow diagram could be replaced with a simple numbered list, and some phases (Plan, Implement) are thin on unique content while still taking up space. The discovery example questions are a nice touch but slightly verbose. | 2 / 3 |
| Actionability | Provides concrete, executable CLI commands throughout (`af dags errors`, `af dags get`, `af runs trigger-wait`), clear tables mapping commands to purposes, and specific file patterns to glob. Each phase has actionable steps rather than vague descriptions. | 3 / 3 |
| Workflow Clarity | The six-phase workflow is clearly sequenced with explicit validation checkpoints in Phase 4 (check errors → verify DAG exists → check warnings → explore structure). Phase 6 provides a clear feedback loop: fix → check errors → re-validate → re-test. The 'if your file appears → fix and retry' pattern is a good error recovery loop. | 3 / 3 |
| Progressive Disclosure | Excellent progressive disclosure: the skill provides a clear overview with well-signaled references to the testing-dags skill, reference/best-practices.md, and related skills. Testing is appropriately delegated rather than duplicated. All references are one level deep and clearly labeled. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
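The fix → check → re-validate loop praised in the table above can be sketched as a short command sequence. The command names are the ones quoted in the review; the DAG id and exact arguments are assumptions, so treat this as pseudocode rather than a verified invocation:

```
af dags errors                    # 1. any import or parse errors across DAG files?
af dags get my_pipeline           # 2. confirm the DAG registered (dag id hypothetical)
af runs trigger-wait my_pipeline  # 3. trigger a run and wait for its result
# on failure: edit the DAG file, then repeat from step 1
```

Keeping the loop this tight is what makes the 'if your file appears → fix and retry' recovery pattern cheap to follow.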
Validation (90%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them under `metadata` | Warning |
| Total | | 10 / 11 Passed |
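The `frontmatter_unknown_keys` warning above means the skill's frontmatter contains top-level keys the spec does not recognize. A sketch of the fix, assuming a spec that allows `name`, `description`, and a free-form `metadata` map; the skill name and the offending key shown here are hypothetical:

```yaml
---
name: writing-dags
description: Workflow and best practices for writing Apache Airflow DAGs.
metadata:
  maintainer: data-platform-team  # formerly an unrecognized top-level key
---
```

Moving unrecognized keys under `metadata` (or deleting them) would clear the warning and bring the validation score to 11 / 11.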