
train-test-splitter

Train Test Splitter - Auto-activating skill for ML Training. Triggers on: train test splitter, train test splitter Part of the ML Training skill category.

34 · 1.04x

Quality: 3% (Does it follow best practices?)

Impact: 89% (1.04x). Average score across 3 eval scenarios.

Security by Snyk: Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/train-test-splitter/SKILL.md

Quality

Discovery

7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is extremely thin and appears auto-generated, providing almost no useful information beyond the skill's name. It lacks concrete actions, meaningful trigger terms, and any explicit 'when to use' guidance, so it would be very difficult for Claude to reliably select this skill from a pool of ML-related skills.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Splits datasets into training and testing subsets with configurable ratios, supports stratified splitting, and handles CSV/DataFrame inputs.'

Add a 'Use when...' clause with natural trigger terms like 'split data into train and test', 'holdout set', 'validation split', 'train/test ratio', 'partition dataset'.

Remove the duplicate trigger term and expand with varied natural language phrases users would actually say when needing this functionality.
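Taken together, these suggestions imply a fuller frontmatter. A sketch of what a revised SKILL.md header might look like (field names follow common Agent Skills conventions; the description text and tool list here are illustrative, not the maintainer's actual values):

```yaml
---
name: train-test-splitter
description: >-
  Splits datasets into training and testing subsets with configurable
  ratios, supports stratified splitting, and handles CSV/DataFrame
  inputs. Use when the user asks to split data into train and test
  sets, create a holdout set, make a validation split, or partition
  a dataset.
allowed-tools: Read, Write, Bash
---
```

A description like this gives an agent both the "what" (concrete actions) and the "when" (natural trigger phrases) that the current one lacks.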

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description only names the skill ('Train Test Splitter') and mentions 'ML Training' as a category, but does not describe any concrete actions like splitting datasets, specifying ratios, stratified sampling, or handling data formats. | 1 / 3 |
| Completeness | The description fails to clearly answer 'what does this do' beyond the name itself, and the 'when' clause is essentially just restating the skill name as a trigger rather than providing meaningful guidance on when to activate. | 1 / 3 |
| Trigger Term Quality | The trigger terms listed are just 'train test splitter' repeated twice. It misses natural variations users would say like 'split data', 'train/test split', 'holdout set', 'validation split', 'split dataset', or 'cross-validation'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'train test splitter' is somewhat specific to a particular ML task, which gives it some distinctiveness, but the lack of detail about what it actually does and the generic 'ML Training' category could cause overlap with other ML-related skills. | 2 / 3 |
| Total | | 5 / 12 |

Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is an empty template/placeholder with no actual instructional content. It repeatedly references 'train test splitter' without ever explaining how to perform a train/test split, providing code examples (e.g., sklearn's train_test_split), or describing any workflow. The entire content could be replaced by a single line and would convey the same amount of information.

Suggestions

Add executable code examples using sklearn.model_selection.train_test_split with common parameters (test_size, random_state, stratify) and pandas DataFrames.

Include concrete guidance on stratification for imbalanced datasets, time-series splits, and choosing appropriate split ratios with brief examples.

Remove all boilerplate sections (Purpose, When to Use, Capabilities, Example Triggers) that describe the skill meta-information rather than teaching how to do train/test splitting.

Add a validation step showing how to verify split proportions and class distributions after splitting.
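To make the first and last suggestions concrete, here is a minimal sketch of the kind of executable example the skill could include: a stratified split via sklearn.model_selection.train_test_split followed by a validation step, plus a chronological split for time-series data. The toy dataset is illustrative only.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy imbalanced dataset: 90 samples of class 0, 10 of class 1.
X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 90 + [1] * 10)

# Stratified 80/20 split; random_state makes it reproducible,
# stratify=y preserves the class ratio in both subsets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Validation step: verify split proportions and class distributions.
print(len(X_train), len(X_test))   # 80 20
print((y_train == 1).mean())       # 0.1
print((y_test == 1).mean())        # 0.1

# Time-series data should be split chronologically, not randomly:
split = int(len(X) * 0.8)
X_tr_ts, X_te_ts = X[:split], X[split:]
```

Checking class proportions after the split, as above, is exactly the kind of validation checkpoint the review asks for; for heavier time-series workflows, sklearn's TimeSeriesSplit covers rolling-window evaluation.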

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is entirely filler and boilerplate. It explains nothing Claude doesn't already know, repeats 'train test splitter' excessively, and provides zero actual technical content about splitting data into train/test sets. | 1 / 3 |
| Actionability | There is no concrete code, no executable commands, no specific examples of how to perform a train/test split. The content is entirely abstract descriptions like 'Provides step-by-step guidance' without actually providing any guidance. | 1 / 3 |
| Workflow Clarity | No workflow steps are defined at all. There is no sequence of operations, no validation checkpoints, and no actual process described for splitting data into train and test sets. | 1 / 3 |
| Progressive Disclosure | The content has section headers but they contain no substantive information. There are no references to detailed materials, no examples, and no structured content to navigate. It's a template with no real content filled in. | 1 / 3 |
| Total | | 4 / 12 |

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 |

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills
Reviewed

