Automate ML workflows with Airflow, Kubeflow, MLflow. Use for reproducible pipelines, retraining schedules, MLOps, or encountering task failures, dependency errors, experiment tracking issues.
Install with Tessl CLI
npx tessl i github:secondsky/claude-skills --skill ml-pipeline-automation
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid skill description with excellent trigger-term coverage and completeness. The main weakness is the somewhat vague action verb 'automate'; the description would benefit from listing more specific capabilities, such as creating DAGs, configuring retraining schedules, or debugging pipeline failures. Overall, it provides enough detail for Claude to correctly select this skill for ML orchestration tasks.
Suggestions
Replace 'Automate ML workflows' with more specific actions like 'Create DAGs, configure retraining schedules, debug pipeline failures, set up experiment tracking'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (ML workflows) and specific tools (Airflow, Kubeflow, MLflow), but the actions are somewhat vague: 'automate' is broad and doesn't list concrete actions like 'create DAGs', 'configure pipelines', or 'set up experiment tracking'. | 2 / 3 |
| Completeness | Clearly answers both what ('Automate ML workflows with Airflow, Kubeflow, MLflow') and when ('Use for reproducible pipelines, retraining schedules, MLOps, or encountering task failures, dependency errors, experiment tracking issues') with explicit trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Good coverage of natural terms users would say: 'ML workflows', 'Airflow', 'Kubeflow', 'MLflow', 'pipelines', 'retraining', 'MLOps', 'task failures', 'dependency errors', 'experiment tracking'. These are terms practitioners naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused on ML pipeline orchestration tools. The specific tool names (Airflow, Kubeflow, MLflow) and problem types (task failures, dependency errors) create distinct triggers unlikely to conflict with general coding or data science skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, well-structured skill with excellent actionability and clear workflow guidance. The main weakness is some verbosity in explaining concepts Claude already knows (ML pipeline stages, 'When to Use This Skill' section). The code examples are production-ready and the Known Issues section provides valuable, specific troubleshooting patterns.
Suggestions
Remove or significantly condense the 'When to Use This Skill' section - Claude can infer appropriate use cases from the content itself
Remove the 'Core Concepts > Pipeline Stages' section as these are basic ML concepts Claude already understands
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains some unnecessary sections, such as 'When to Use This Skill', that Claude can infer, and the 'Core Concepts' pipeline stages section explains basic ML workflow concepts Claude already knows. However, the code examples are generally lean and the comparison table is efficient. | 2 / 3 |
| Actionability | Excellent executable code throughout: the Quick Start provides copy-paste-ready commands, all code examples are complete and runnable Python with proper imports, and specific commands like 'airflow dags trigger' are provided rather than vague instructions. | 3 / 3 |
| Workflow Clarity | The Quick Start has clear numbered steps, the Known Issues section provides explicit problem-solution patterns with validation (e.g., 'Always validate XCom pulls'), and the document includes proper sequencing with task dependencies shown via >> operators. | 3 / 3 |
| Progressive Disclosure | Well-structured with a Quick Start for immediate use, followed by progressively detailed sections. The 'When to Load References' section clearly signals one-level-deep references to specific files (airflow-patterns.md, kubeflow-mlflow.md, pipeline-monitoring.md) with clear descriptions of when to use each. | 3 / 3 |
| Total | | 11 / 12 Passed |
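The 'Always validate XCom pulls' pattern praised above can be sketched as a small guard helper. This is an illustrative sketch, not code from the skill itself; `validate_xcom` is a hypothetical name, and it assumes the standard Airflow behavior that `ti.xcom_pull()` returns `None` when the upstream task pushed nothing:

```python
def validate_xcom(value, upstream_task_id):
    """Guard against silent None results from an XCom pull.

    Airflow's xcom_pull returns None when the upstream task pushed
    nothing (it failed, was skipped, or never returned a value), so
    failing fast here surfaces the real problem instead of letting a
    None propagate into later pipeline steps.
    """
    if value is None:
        raise ValueError(
            f"XCom pull from task {upstream_task_id!r} returned None; "
            "check that the upstream task succeeded and pushed a value."
        )
    return value

# Inside a task, a pull would then be wrapped like:
#   model_path = validate_xcom(ti.xcom_pull(task_ids="train"), "train")
```

The helper itself is plain Python, so it can be unit-tested without a running Airflow scheduler.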
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
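The warning's own suggestion, moving unknown keys under metadata, could look like the following sketch. The report does not name the offending key, so `maintainer` here is purely hypothetical, and the exact frontmatter schema Tessl accepts is an assumption based on the warning text:

```yaml
# Before: a hypothetical `maintainer` key at the top level
# triggers frontmatter_unknown_keys
name: ml-pipeline-automation
description: Automate ML workflows with Airflow, Kubeflow, MLflow.
maintainer: secondsky

---
# After: the unrecognized key is nested under `metadata`
name: ml-pipeline-automation
description: Automate ML workflows with Airflow, Kubeflow, MLflow.
metadata:
  maintainer: secondsky
```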
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.