Train Test Splitter - Auto-activating skill for ML Training. Triggers on: train test splitter, train test splitter. Part of the ML Training skill category.
Install with Tessl CLI
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill train-test-splitter37
Quality
7%
Does it follow best practices?
Impact
89%
1.04x
Average score across 3 eval scenarios
Optimize this skill with Tessl
npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/train-test-splitter/SKILL.md
Quality
Discovery
7%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is severely underdeveloped, essentially just restating the skill name without explaining capabilities or providing meaningful trigger guidance. It lacks any concrete actions (e.g., 'splits datasets into training and test sets with configurable ratios') and has redundant, narrow trigger terms that miss common user phrasings.
Suggestions
Add specific actions the skill performs, e.g., 'Splits datasets into training, validation, and test sets with configurable ratios. Supports stratified splitting for classification tasks and random seed control for reproducibility.'
Include a 'Use when...' clause with natural trigger terms: 'Use when the user mentions splitting data, creating train/test sets, holdout validation, data partitioning, or preparing datasets for ML training.'
Remove the duplicate trigger term and expand to include variations like 'split data', 'training set', 'test set', 'validation split', 'sklearn train_test_split', 'data partition'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description only names the skill ('Train Test Splitter') without describing any concrete actions. It doesn't explain what the skill actually does: no mention of splitting datasets, ratios, stratification, or any specific capabilities. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond the name itself, and the 'when' guidance is essentially just repeating the skill name as a trigger. There's no explicit 'Use when...' clause or meaningful trigger guidance. | 1 / 3 |
| Trigger Term Quality | The trigger terms listed are redundant ('train test splitter, train test splitter', duplicated) and overly narrow. Missing natural variations users would say like 'split data', 'training set', 'test set', 'validation split', 'holdout set', or 'data partitioning'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'train test splitter' is fairly specific to ML data splitting, which provides some distinctiveness. However, the vague 'ML Training skill category' could overlap with other ML-related skills, and the lack of specific use cases increases conflict risk. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Implementation
7%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a hollow template with no actual content. It describes what a train/test splitter skill would do but provides zero actionable guidance, code examples, or specific instructions. The entire content could be replaced with a single code snippet showing sklearn's train_test_split() and be infinitely more useful.
Suggestions
Add executable code examples showing train_test_split usage with sklearn, including stratification and random_state parameters
Include specific guidance on choosing split ratios (e.g., 80/20, 70/15/15 for train/val/test) with rationale for different dataset sizes
Provide concrete examples of common pitfalls like data leakage, time-series splitting requirements, and stratification for imbalanced datasets
Remove all generic boilerplate sections ('Purpose', 'Capabilities', 'Example Triggers') and replace with actual technical content
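A minimal sketch of the first two suggestions, assuming scikit-learn is installed; the toy dataset, the 80/20 ratio, and the 70/15/15 three-way split are illustrative choices, not part of the reviewed skill:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)    # 50 samples, 2 features
y = np.array([0] * 40 + [1] * 10)    # imbalanced labels (80/20)

# Stratified 80/20 split: preserves the class ratio in both sets,
# with random_state fixed for reproducibility.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# 70/15/15 train/val/test: split twice, stratifying each time.
# The second test_size is 0.15 / 0.85 so that 15% of the *original*
# data ends up in the validation set.
X_tmp, X_test2, y_tmp, y_test2 = train_test_split(
    X, y, test_size=0.15, stratify=y, random_state=42
)
X_tr, X_val, y_tr, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.15 / 0.85, stratify=y_tmp, random_state=42
)
```

Note that none of this applies to time-ordered data: shuffling leaks future observations into the training set, so time series should be split chronologically (e.g. with sklearn's `TimeSeriesSplit`) instead.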
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is padded with generic boilerplate that explains nothing specific about train/test splitting. Phrases like 'provides automated assistance' and 'follows industry best practices' are filler that Claude already knows. | 1 / 3 |
| Actionability | No concrete code, commands, or specific guidance is provided. The skill describes what it does abstractly ('provides step-by-step guidance') but never actually provides any guidance, examples, or executable content. | 1 / 3 |
| Workflow Clarity | No workflow, steps, or process is defined. The skill claims to provide 'step-by-step guidance' but contains zero actual steps for performing train/test splitting. | 1 / 3 |
| Progressive Disclosure | The content is organized into clear sections with headers, but there are no references to detailed materials, no links to examples, and the sections themselves contain no substantive content to disclose. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Validation
81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 (Passed) |
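Both warnings point at the SKILL.md frontmatter. A hedged sketch of what a cleaner frontmatter might look like; the exact field names and recognized tool names depend on the target agent's skill spec and are assumptions here:

```yaml
---
name: train-test-splitter
description: >-
  Splits datasets into training, validation, and test sets with configurable
  ratios. Supports stratified splitting for classification and random-seed
  control for reproducibility. Use when the user mentions splitting data,
  train/test sets, holdout validation, or data partitioning.
# Keep allowed-tools to recognized tool names only; unknown names
# trigger the allowed_tools_field warning.
allowed-tools: Bash, Read, Write
---
```

Any extra keys beyond the spec's known set should be deleted or moved under a `metadata` block to clear the frontmatter_unknown_keys warning.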
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.