
building-automl-pipelines

Build automated machine learning pipelines with feature engineering, model selection, and hyperparameter tuning. Use when automating ML workflows from data preparation through model deployment. Trigger with phrases like "build automl pipeline", "automate ml workflow", or "create automated training pipeline".

61

Quality

53%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/ai-ml/automl-pipeline-builder/skills/building-automl-pipelines/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly articulates specific capabilities (feature engineering, model selection, hyperparameter tuning), provides explicit 'Use when' guidance, and includes natural trigger phrases. It is well-scoped to the AutoML pipeline niche, making it easily distinguishable from related but different ML skills.

Dimension scores

Specificity (3/3): Lists multiple specific, concrete actions: 'feature engineering', 'model selection', 'hyperparameter tuning', and covers the pipeline from 'data preparation through model deployment'.

Completeness (3/3): Clearly answers both what ('Build automated machine learning pipelines with feature engineering, model selection, and hyperparameter tuning') and when ('Use when automating ML workflows from data preparation through model deployment') with explicit trigger phrases.

Trigger Term Quality (3/3): Includes natural trigger phrases users would say: 'build automl pipeline', 'automate ml workflow', 'create automated training pipeline', plus domain terms like 'feature engineering', 'hyperparameter tuning', and 'ML workflows'.

Distinctiveness / Conflict Risk (3/3): Clearly scoped to automated ML pipelines specifically, with distinct triggers like 'automl pipeline' and 'automated training pipeline' that are unlikely to conflict with general data science or individual model training skills.

Total: 12 / 12 (Passed)

Implementation

7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a high-level outline with no executable content. It lists abstract planning steps, explains concepts Claude already knows (what Auto-sklearn and TPOT are), and defers all actual implementation to reference files that don't exist. The skill fails to provide any concrete, copy-paste-ready code or specific commands for building an AutoML pipeline.

Suggestions

Add a complete, executable Python code example showing a minimal AutoML pipeline (e.g., using PyCaret or Auto-sklearn) from data loading through model export.

Replace the abstract planning steps with a concrete numbered workflow including specific commands, validation checkpoints (e.g., 'verify data shape after preprocessing'), and error recovery steps.

Remove the prerequisites section and resources descriptions—Claude already knows what these libraries are. Instead, use that space for a concrete quick-start example.

Either provide the referenced bundle files (implementation.md, errors.md, examples.md) or inline the essential content so the skill is self-contained and actionable.
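To make the first suggestion concrete, here is a minimal sketch of the kind of end-to-end example the skill could include. It uses scikit-learn rather than the PyCaret or Auto-sklearn libraries the review names, so the snippet stays self-contained and runnable; the dataset, parameter grid, and output file name are all illustrative.

```python
# Minimal automated-training sketch: data loading -> preprocessing ->
# hyperparameter tuning -> validation checkpoint -> model export.
# scikit-learn stands in here for a dedicated AutoML library.
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# 1. Load data and hold out a test set.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# 2. Feature engineering + model in one pipeline, so preprocessing
#    is fitted only on training folds during cross-validation.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# 3. Hyperparameter tuning; the grid search doubles as simple
#    model selection across candidate configurations.
search = GridSearchCV(
    pipeline,
    param_grid={"clf__C": [0.1, 1.0, 10.0]},
    cv=5,
    scoring="accuracy",
)
search.fit(X_train, y_train)

# 4. Validation checkpoint on held-out data, then export.
test_accuracy = search.score(X_test, y_test)
joblib.dump(search.best_estimator_, "automl_model.joblib")
```

A real AutoML library would also search across model families and feature transforms, but even a sketch like this gives the agent copy-paste-ready structure instead of abstract planning steps.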

Dimension scores

Conciseness (1/3): The skill is verbose, with unnecessary prerequisites that Claude already knows (understanding problem types, knowledge of evaluation metrics). The instructions are padded with planning steps that don't add actionable value, and the resources section explains what each library is rather than providing concrete usage.

Actionability (1/3): There is no executable code anywhere in the skill. The instructions are abstract planning steps ('Identify problem type', 'Define evaluation metrics') rather than concrete commands or code. All implementation details are deferred to referenced files that don't exist in the bundle.

Workflow Clarity (1/3): The numbered steps are vague planning activities without clear sequencing; the numbering even restarts mid-list (two separate sequences starting at 1). There are no validation checkpoints, no feedback loops, and step 3 of the second sequence ends abruptly with 'Initialize AutoML pipeline with configuration' before deferring everything to a reference file.

Progressive Disclosure (2/3): The skill attempts progressive disclosure by referencing implementation.md, errors.md, and examples.md, but none of these files exist in the bundle. The main file itself contains too little actionable content: it's essentially an empty shell pointing to missing references, with the overview content being too thin to serve as a useful standalone guide.

Total: 5 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Flagged criteria

allowed_tools_field (Warning): 'allowed-tools' contains unusual tool name(s).

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 9 / 11 (Passed)
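Both warnings point at the skill's frontmatter. A hedged sketch of what a fix might look like, assuming the standard SKILL.md frontmatter layout; the tool names and the key moved under metadata are placeholders, not values taken from the actual skill:

```yaml
---
name: building-automl-pipelines
description: Build automated machine learning pipelines with feature
  engineering, model selection, and hyperparameter tuning.
# Keep only tool names the agent platform actually recognizes.
allowed-tools: Read, Write, Bash
# Unrecognized top-level keys can live under metadata instead of
# sitting at the top level, where validators flag them.
metadata:
  category: ai-ml
---
```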

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

