Build automated machine learning pipelines with feature engineering, model selection, and hyperparameter tuning. Use when automating ML workflows from data preparation through model deployment. Trigger with phrases like "build automl pipeline", "automate ml workflow", or "create automated training pipeline".
Install with Tessl CLI
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill building-automl-pipelines

Overall score: 61%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery: 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that excels across all dimensions. It provides specific capabilities, explicit 'Use when' guidance, natural trigger phrases, and a clear niche that distinguishes it from general ML or data processing skills. The description follows best practices by using third person voice and providing concrete, actionable trigger terms.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'feature engineering, model selection, and hyperparameter tuning' along with the pipeline scope 'from data preparation through model deployment'. Uses third person voice correctly. | 3 / 3 |
| Completeness | Clearly answers both what ('Build automated machine learning pipelines with feature engineering, model selection, and hyperparameter tuning') and when ('Use when automating ML workflows...') with explicit trigger phrases provided. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'build automl pipeline', 'automate ml workflow', 'create automated training pipeline', plus domain terms like 'ML workflows' and 'model deployment'. These are realistic phrases users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on automated ML pipelines and AutoML workflows. The specific trigger phrases like 'automl pipeline' and 'automated training pipeline' distinguish it from general ML or data science skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation: 22%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content is essentially a skeleton that defers all substantive guidance to external reference files. It lacks any executable code examples, has poorly structured workflow steps (duplicate numbered lists), and provides no concrete implementation details. The content describes what an AutoML pipeline should do rather than instructing how to build one.
Suggestions
- Add a concrete, executable quick-start code example showing a minimal AutoML pipeline (e.g., using PyCaret or auto-sklearn) that can be copy-pasted and run.
- Fix the workflow structure: merge the two numbered lists into a single coherent sequence with explicit validation checkpoints (e.g., 'Verify data quality checks pass before proceeding').
- Include at least one inline example with specific code rather than deferring everything to external files.
- Add validation and verification steps, such as checking model performance thresholds before deployment and enforcing data quality gates.
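The first and last suggestions can be illustrated with a minimal sketch. This uses scikit-learn rather than a dedicated AutoML library (an assumption, since the skill does not mandate one), combining model selection, hyperparameter tuning, and a pre-deployment performance gate in a single search:

```python
# Minimal AutoML-style pipeline sketch (illustrative; the skill's own
# reference files may prescribe a different library or structure).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model selection + hyperparameter tuning in one search: each grid entry
# swaps out the final "model" step and tunes its own parameters.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
param_grid = [
    {"model": [LogisticRegression(max_iter=1000)], "model__C": [0.1, 1.0, 10.0]},
    {"model": [RandomForestClassifier(random_state=0)], "model__n_estimators": [50, 100]},
]
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X_train, y_train)

# Validation gate before "deployment": require a minimum held-out accuracy.
test_score = search.score(X_test, y_test)
assert test_score >= 0.80, f"model below deployment threshold: {test_score:.2f}"
print(f"best model: {type(search.best_params_['model']).__name__}, "
      f"test accuracy {test_score:.2f}")
```

The assertion at the end is the kind of explicit checkpoint the review asks for: the pipeline fails loudly instead of silently promoting an underperforming model.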
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content has some unnecessary padding (a verbose prerequisites list, generic resource descriptions) but isn't excessively verbose. The numbered steps are reasonably concise but could be tighter. | 2 / 3 |
| Actionability | No executable code is provided despite this being a coding skill. Steps are abstract ('Initialize AutoML pipeline with configuration') without concrete implementation. The actual guidance is deferred to external files. | 1 / 3 |
| Workflow Clarity | Steps are listed but poorly organized (two separate numbered lists both starting at 1), with no validation checkpoints and no feedback loops for error recovery. Missing critical validation steps for ML pipelines, such as data validation results or model performance thresholds. | 1 / 3 |
| Progressive Disclosure | References external files appropriately (implementation.md, errors.md, examples.md), but the main content is too sparse: it is essentially a table of contents pointing elsewhere. The overview should contain more actionable quick-start content. | 2 / 3 |
| Total | | 6 / 12 Passed |
Validation: 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 13 / 16 Passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 13 / 16 Passed |
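The three warnings point toward a frontmatter cleanup along these lines. This is a hypothetical sketch: the field values are illustrative, and the exact allowed key set is defined by the skill spec, not by this review.

```yaml
---
name: building-automl-pipelines
description: Build automated machine learning pipelines with feature engineering, model selection, and hyperparameter tuning.
# allowed_tools_field: restrict to recognized tool names
allowed-tools: Read, Write, Bash
# metadata_version: make 'metadata' a dictionary, not a scalar
metadata:
  version: "1.0.0"
# frontmatter_unknown_keys: move any custom top-level keys under 'metadata'
---
```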
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.