`tessl i github:jeffallan/claude-skills --skill ml-pipeline`

Use when building ML pipelines, orchestrating training workflows, automating model lifecycle, implementing feature stores, or managing experiment tracking systems.
Validation
75%

| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| body_examples | No examples detected (no code fences and no 'Example' wording) | Warning |
| Total | 12 / 16 Passed | |
Implementation
42%

This skill has good structural organization with clear progressive disclosure through reference files, but critically lacks actionable content. It reads more like a role description than executable guidance: there are no code examples, no concrete commands, and no actual templates despite promising them. The MUST DO/MUST NOT DO constraints are useful but would benefit from concrete examples of violations and correct implementations.
Suggestions
- Add concrete, executable code examples for at least one complete pipeline (e.g., a minimal Kubeflow pipeline definition with MLflow tracking)
- Replace the 'Output Templates' description with actual templates showing expected code structure and configuration formats
- Add validation checkpoints to the Core Workflow (e.g., 'Validate data schema before proceeding to training', 'Verify experiment logged successfully')
- Remove or significantly condense the 'Role Definition', 'When to Use', and 'Knowledge Reference' sections, as they add little actionable value
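To make the first and third suggestions concrete, here is a minimal sketch of what "validation checkpoints between stages" could look like. The stage functions, schema, and model shape below are hypothetical stand-ins; a real pipeline would delegate these steps to Kubeflow components with MLflow tracking rather than plain functions.

```python
# Hypothetical sketch: a pipeline with explicit validation checkpoints
# between stages. Stage bodies are illustrative stand-ins only.

EXPECTED_SCHEMA = {"feature_a": float, "feature_b": float, "label": int}

def validate_schema(rows, schema):
    """Checkpoint: fail fast before training if data drifts from the schema."""
    for row in rows:
        if set(row) != set(schema):
            raise ValueError(f"schema mismatch: {sorted(row)}")
        for col, typ in schema.items():
            if not isinstance(row[col], typ):
                raise TypeError(f"{col} should be {typ.__name__}")

def ingest():
    # Stand-in for a real data-extraction step.
    return [{"feature_a": 1.0, "feature_b": 2.5, "label": 1}]

def train(rows):
    # Stand-in for a real training step; returns a trivial "model".
    mean_a = sum(r["feature_a"] for r in rows) / len(rows)
    return {"threshold": mean_a}

def run_pipeline():
    rows = ingest()
    validate_schema(rows, EXPECTED_SCHEMA)  # checkpoint 1: data schema
    model = train(rows)
    assert "threshold" in model             # checkpoint 2: training produced output
    return model
```

The point is structural: each stage boundary carries an explicit check, so a schema drift or an empty training result stops the pipeline instead of propagating downstream.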
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient but includes some unnecessary sections like 'Role Definition' that restates what Claude already knows, and 'Knowledge Reference', which is just a list of technologies Claude is familiar with. The 'When to Use This Skill' section largely duplicates the description. | 2 / 3 |
| Actionability | The skill provides no concrete code examples, commands, or executable guidance. It describes what to do at a high level ('Implement feature engineering', 'Configure distributed training') but never shows how. The 'Output Templates' section lists what to provide but gives no actual templates or examples. | 1 / 3 |
| Workflow Clarity | The 'Core Workflow' provides a clear 5-step sequence, but lacks validation checkpoints and feedback loops. For ML pipelines involving potentially destructive batch operations and complex multi-step processes, the absence of explicit validation steps between stages is a gap. | 2 / 3 |
| Progressive Disclosure | The reference table provides clear, well-organized one-level-deep references to detailed guidance files. Topics are clearly signaled with 'Load When' conditions, making navigation straightforward and appropriate for the skill's complexity. | 3 / 3 |
| Total | | 8 / 12 |
Activation
48%

This description is inverted from the typical pattern: it only provides trigger conditions without explaining what the skill actually does. While the trigger terms are relevant and natural for MLOps practitioners, the complete absence of capability descriptions makes it impossible to understand what actions this skill enables. The description needs a 'what' component before the 'when' clause.
Suggestions
- Add a capability statement before the 'Use when' clause describing concrete actions (e.g., 'Configures Airflow DAGs, sets up MLflow experiment tracking, implements feature store schemas, and automates model deployment pipelines.')
- Include specific tool names if applicable (e.g., Kubeflow, MLflow, Feast, DVC) to increase distinctiveness and help users recognize when this skill applies
- Make actions more concrete - instead of 'orchestrating training workflows', specify 'create training pipeline configurations' or 'define model training schedules'
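As an illustration of the first suggestion, a revised description might pair a capability statement with the existing trigger clause. The frontmatter shape and tool names below are drawn from the suggestions themselves and are assumptions about the skill's actual scope, not its current content:

```yaml
# Hypothetical revision: "what" statement first, "when" clause second.
description: >
  Configures Airflow DAGs, sets up MLflow experiment tracking, implements
  feature store schemas, and automates model deployment pipelines. Use when
  building ML pipelines, orchestrating training workflows, automating model
  lifecycle, implementing feature stores, or managing experiment tracking.
```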
| Dimension | Reasoning | Score |
|---|---|---|
Specificity | Names the domain (ML pipelines) and lists several actions (orchestrating, automating, implementing, managing), but these are somewhat abstract rather than concrete specific actions like 'create DAG configurations' or 'set up MLflow tracking'. | 2 / 3 |
Completeness | Only provides 'when' guidance but completely lacks the 'what does this do' component. The description is entirely a 'Use when...' clause without explaining what capabilities or actions the skill provides. | 1 / 3 |
Trigger Term Quality | Good coverage of natural terms users would say: 'ML pipelines', 'training workflows', 'model lifecycle', 'feature stores', 'experiment tracking' are all terms practitioners commonly use when discussing MLOps tasks. | 3 / 3 |
Distinctiveness Conflict Risk | The MLOps domain is fairly specific, but terms like 'ML pipelines' and 'training workflows' could overlap with general ML/data science skills. The lack of specific tools or concrete actions reduces distinctiveness. | 2 / 3 |
| Total | | 8 / 12 |
Reviewed