
# training-machine-learning-models

```
tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill training-machine-learning-models
```

Build train machine learning models with automated workflows. Analyzes datasets, selects model types (classification, regression), configures parameters, trains with cross-validation, and saves model artifacts. Use when asked to "train model" or "evalua... Trigger with relevant phrases based on skill purpose.

**Overall: 43%**


## Validation (81%)
| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

**Total: 13 / 16 passed**

## Implementation (7%)

This skill is essentially a placeholder template with no actionable content. It describes what an ML training skill would do conceptually but provides zero executable guidance: no code, no specific libraries, no actual training commands, no model persistence code. The content is padded with generic boilerplate sections and explanations of ML concepts Claude already understands.

**Suggestions**

- Replace the abstract descriptions with actual executable Python code showing how to load data, train models (e.g., using scikit-learn), and save artifacts (e.g., using joblib or pickle)
- Remove all generic boilerplate sections (Error Handling, Resources, Prerequisites, Integration) that contain no specific information
- Add concrete code examples for classification and regression with specific libraries, parameters, and validation steps
- Include actual validation checkpoints like 'verify data shape', 'check for missing values', 'validate model metrics before saving'
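As a sketch of what the first and last suggestions could look like in the skill body, the snippet below combines loading, validation checkpoints, cross-validation, and artifact persistence. The library choices (scikit-learn, joblib), the dataset, the model, and the 0.8 accuracy threshold are all illustrative assumptions, not part of the skill under review:

```python
# Illustrative sketch: load data, run validation checkpoints, train with
# cross-validation, and persist the model artifact only if metrics pass.
import joblib
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)  # stand-in dataset for the example

# Validation checkpoint: verify data shape and check for missing values.
assert X.ndim == 2 and len(X) == len(y), "feature/label length mismatch"
assert not np.isnan(X).any(), "dataset contains missing values"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=100, random_state=42)

# Train with 5-fold cross-validation before committing to a final fit.
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
model.fit(X_train, y_train)

# Validation checkpoint: check model metrics before saving the artifact.
test_accuracy = model.score(X_test, y_test)
assert test_accuracy > 0.8, f"accuracy too low to persist: {test_accuracy:.2f}"

joblib.dump(model, "model.pkl")
```

A regression variant would follow the same shape with a regressor (e.g., `RandomForestRegressor`) and a metric such as R² in place of accuracy.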

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose with extensive explanation of concepts Claude already knows (what classification/regression are, what cross-validation is). The 'Overview', 'How It Works', and 'When to Use' sections are redundant and padded. Generic boilerplate sections like 'Error Handling', 'Resources', and 'Prerequisites' add no value. | 1 / 3 |
| Actionability | No executable code, no specific commands, no concrete implementation details. The entire skill describes what will happen abstractly ('The skill will...') rather than providing actual instructions Claude can follow. No library names, no code snippets, no parameter configurations. | 1 / 3 |
| Workflow Clarity | Steps are vague descriptions ('analyze', 'select', 'train') without any concrete validation checkpoints. No actual workflow for how to perform these operations, no error recovery, no verification steps. The 'Instructions' section is completely generic placeholder text. | 1 / 3 |
| Progressive Disclosure | Content has section headers providing some structure, but it's a monolithic document with no references to external files. The content that exists is all inline despite being mostly filler that could be removed entirely rather than split out. | 2 / 3 |

**Total: 5 / 12 (Passed)**

## Activation (68%)

The description has strong specificity in describing ML training capabilities and is distinctive within its domain. However, it appears truncated and contains placeholder text ('Trigger with relevant phrases based on skill purpose') which undermines the trigger term quality and completeness. The incomplete 'Use when' clause significantly weakens the description's utility for skill selection.

**Suggestions**

- Complete the truncated 'Use when' clause with full trigger phrases like 'train model', 'evaluate model', 'build classifier', 'machine learning', 'ML pipeline'
- Remove the placeholder text 'Trigger with relevant phrases based on skill purpose' and replace with actual natural language triggers users would say
- Add file type triggers if applicable (e.g., 'when working with .csv datasets', 'scikit-learn', 'model.pkl')
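Taken together, the three suggestions could yield a frontmatter description along these lines. This is a hypothetical completed version; the exact trigger phrases are illustrative, not drawn from the skill itself:

```yaml
# Hypothetical completed frontmatter (wording and triggers are assumptions)
description: >
  Train machine learning models with automated workflows: analyzes datasets,
  selects model types (classification, regression), configures parameters,
  trains with cross-validation, and saves model artifacts. Use when asked to
  "train model", "evaluate model", "build classifier", or "set up an ML
  pipeline", or when working with .csv datasets, scikit-learn, or model.pkl
  artifacts.
```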

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: 'Analyzes datasets, selects model types (classification, regression), configures parameters, trains with cross-validation, and saves model artifacts.' | 3 / 3 |
| Completeness | The 'what' is well-covered with specific ML workflow actions, but the 'when' clause is incomplete/truncated ('evalua...') and ends with unhelpful placeholder text rather than explicit trigger guidance. | 2 / 3 |
| Trigger Term Quality | Includes some natural keywords like 'train model' and 'evalua...' (truncated), but the description is cut off and ends with generic placeholder text 'Trigger with relevant phrases based on skill purpose' instead of actual trigger terms. | 2 / 3 |
| Distinctiveness Conflict Risk | Clear ML training niche with distinct terminology (cross-validation, model artifacts, classification/regression) that would not conflict with general data processing or other skills. | 3 / 3 |

**Total: 10 / 12 (Passed)**

