Data Normalization Tool - Auto-activating skill for ML Training. Triggers on: data normalization tool, data normalization tool. Part of the ML Training skill category.
Impact: 99% (1.00x average score across 3 eval scenarios). Passed. No known issues.
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/data-normalization-tool/SKILL.md`

Quality
Discovery
7%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is extremely weak across all dimensions. It reads as an auto-generated template with no substantive content—just a label repeated as trigger terms with no concrete actions, no natural user keywords, and no explicit guidance on when to activate. It would be nearly indistinguishable from other data processing skills in a large skill library.
Suggestions
Add specific concrete actions the skill performs, e.g., 'Applies min-max scaling, z-score standardization, log transforms, and one-hot encoding to prepare datasets for ML training.'
Add a 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to normalize, scale, or standardize data, mentions feature scaling, min-max normalization, z-score, or needs to preprocess numerical features for machine learning.'
Remove the redundant duplicate trigger term and replace with varied natural language phrases users would actually say, such as 'scale features', 'standardize columns', 'normalize dataset', 'preprocessing for training'.
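Combining these suggestions, an improved frontmatter description might read as follows (a sketch only; the wording is illustrative, not the skill's actual content):

```yaml
---
name: data-normalization-tool
description: >
  Applies min-max scaling, z-score standardization, log transforms, and
  one-hot encoding to prepare datasets for ML training. Use when the user
  asks to normalize, scale, or standardize data, mentions feature scaling,
  min-max normalization, or z-scores, or needs to preprocess numerical
  features for machine learning.
---
```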
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names a domain ('data normalization') but describes no concrete actions. There are no specific capabilities listed such as scaling, standardizing, encoding, or transforming data. 'Data Normalization Tool' is essentially just a label, not a description of what it does. | 1 / 3 |
| Completeness | The description fails to clearly answer 'what does this do' beyond the name itself, and the 'when' guidance is limited to a redundant trigger phrase. There is no explicit 'Use when...' clause with meaningful trigger scenarios. | 1 / 3 |
| Trigger Term Quality | The trigger terms listed are just 'data normalization tool' repeated twice. There are no natural keyword variations a user might say, such as 'normalize features', 'scale data', 'standardize columns', 'min-max scaling', 'z-score', or 'feature scaling'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'data normalization' is somewhat specific to a particular ML preprocessing step, which provides some distinctiveness. However, it could overlap with general data preprocessing or feature engineering skills, and the lack of specificity makes boundaries unclear. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Implementation
0%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is an empty template with no actual instructional content. It repeatedly references 'data normalization tool' without ever defining what data normalization entails, providing any executable code, or offering any concrete guidance. It fails on every dimension of the rubric.
Suggestions
Add concrete, executable code examples for common normalization techniques (e.g., min-max scaling with sklearn's MinMaxScaler, z-score normalization with StandardScaler, robust scaling).
Define a clear workflow: load data → inspect distributions → choose normalization method → apply → validate output ranges → save/export, with explicit validation checkpoints.
Remove all meta-description sections ('When to Use', 'Example Triggers', 'Capabilities') that describe the skill abstractly and replace with actual technical content.
Include specific guidance on when to use different normalization approaches (e.g., min-max for bounded features, z-score for Gaussian-distributed data, log transforms for skewed data) with concrete examples.
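A minimal sketch of what that concrete content could look like, following the suggested workflow (inspect → choose method → apply → validate). It uses pure-stdlib stand-ins for sklearn's `MinMaxScaler` and `StandardScaler`; the sample data is illustrative:

```python
import math
import statistics

def min_max_scale(values):
    """Rescale to [0, 1]; suited to bounded features."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0  # guard against constant columns
    return [(v - lo) / span for v in values]

def z_score(values):
    """Standardize to mean 0, stdev 1; suited to roughly Gaussian features."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values) or 1.0
    return [(v - mu) / sigma for v in values]

def log_transform(values):
    """log1p for right-skewed, non-negative features."""
    return [math.log1p(v) for v in values]

# Apply, then validate output ranges before saving:
ages = [22, 35, 58, 41, 29]
scaled = min_max_scale(ages)
assert min(scaled) == 0.0 and max(scaled) == 1.0
```

In a real SKILL.md the agent would likely be pointed at sklearn's scalers instead, but even a stdlib example like this gives the agent executable steps and a validation checkpoint rather than abstract description.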
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is entirely filler and meta-description. It explains what the skill does in abstract terms without providing any actual technical content. Every section restates the same vague idea ('data normalization tool') without adding substance. | 1 / 3 |
| Actionability | There is zero concrete guidance: no code, no commands, no specific techniques, no examples of data normalization methods (min-max, z-score, etc.). The content only describes what the skill would do rather than actually doing it. | 1 / 3 |
| Workflow Clarity | No workflow is defined. The skill claims to provide 'step-by-step guidance' but contains no steps, no sequence, and no validation checkpoints whatsoever. | 1 / 3 |
| Progressive Disclosure | The content is a flat, monolithic block of vague descriptions with no references to detailed materials, no links to examples or API references, and no meaningful structural organization beyond boilerplate headings. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Validation
81%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 (Passed) |
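Both warnings concern frontmatter hygiene. The report does not show the offending keys, but a cleanup along these lines would typically address them (key and tool names below are hypothetical; `allowed-tools` entries must match tools the host agent actually exposes):

```yaml
---
name: data-normalization-tool
description: Applies min-max scaling, z-score standardization, and log transforms to prepare datasets for ML training.
# Keep only tool names the target agent recognizes:
allowed-tools: Read, Write, Bash
# Unknown top-level keys (e.g. a custom `category`) move under `metadata`:
metadata:
  category: ml-training
---
```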