Hyperparameter Tuner - Auto-activating skill for ML Training. Triggers on: hyperparameter tuner, hyperparameter tuner Part of the ML Training skill category.
Overall score: 32
Quality: 3% — Does it follow best practices?
Impact: 77% — 0.88x average score across 3 eval scenarios
Passed — no known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/hyperparameter-tuner/SKILL.md`

Quality
Discovery — 7%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is severely underdeveloped: it essentially restates the skill name without explaining capabilities or providing usage guidance. It lacks concrete actions, repeats its trigger term, and has no 'Use when' clause. As written, it would fail to help Claude distinguish this skill from other ML-related skills in a large skill library.
Suggestions
Add specific actions the skill performs, e.g., 'Performs grid search, random search, and Bayesian optimization to find optimal model hyperparameters'
Add a 'Use when...' clause with natural trigger terms like 'tune hyperparameters', 'optimize learning rate', 'grid search', 'find best parameters', 'HPO', 'model tuning'
Remove the duplicate trigger term and expand with variations users would naturally say when needing hyperparameter optimization
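Taken together, the suggestions above might produce frontmatter along these lines. This is an illustrative sketch, not the skill's actual metadata; the wording and trigger phrases are assumptions drawn from the suggestions:

```yaml
# Hypothetical SKILL.md frontmatter illustrating the suggestions above
name: hyperparameter-tuner
description: >
  Performs grid search, random search, and Bayesian optimization to find
  optimal model hyperparameters, then validates and applies the best
  configuration. Use when the user asks to tune hyperparameters, optimize
  the learning rate, run a grid search, find the best parameters, or
  mentions HPO or model tuning.
```

Note how the rewrite leads with concrete verbs ('performs', 'validates') and ends with a 'Use when' clause listing the natural phrasings a user would actually type.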
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description only names the domain ('Hyperparameter Tuner', 'ML Training') but provides no concrete actions. It doesn't explain what the skill actually does: no verbs describing capabilities such as 'optimizes', 'searches', or 'tunes parameters'. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond the name, and has no 'Use when...' clause or equivalent guidance for when Claude should select this skill. Both the what and the when are very weak. | 1 / 3 |
| Trigger Term Quality | The trigger terms are redundant ('hyperparameter tuner' listed twice) and miss natural variations users would say, such as 'tune hyperparameters', 'grid search', 'learning rate', 'model optimization', 'parameter search', or 'HPO'. | 1 / 3 |
| Distinctiveness / Conflict Risk | While 'hyperparameter tuner' is a specific ML concept that wouldn't conflict with most skills, the lack of detail means it could overlap with other ML-related skills. The category mention 'ML Training' is somewhat distinctive but not enough. | 2 / 3 |
| Total | | 5 / 12 Passed |
Implementation — 0%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is essentially a placeholder template with no actual hyperparameter tuning content. It describes capabilities it doesn't demonstrate and provides zero actionable guidance: no code examples, no specific tuning strategies, no library recommendations, and no workflows. The entire content should be replaced with actual hyperparameter tuning instructions.
Suggestions
Add executable code examples showing hyperparameter tuning with common libraries (e.g., Optuna, Ray Tune, sklearn GridSearchCV)
Define a clear workflow: 1) Define search space, 2) Choose search strategy, 3) Run trials, 4) Validate best params, 5) Retrain with optimal config
Remove all generic boilerplate ('provides automated assistance', 'follows best practices') and replace with specific tuning techniques and when to use each
Include concrete examples of search spaces, objective functions, and early stopping criteria
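As a concrete illustration of the workflow those suggestions describe, here is a minimal, library-free random-search sketch. In practice the skill should lean on Optuna, Ray Tune, or sklearn's GridSearchCV; the objective function below is a stand-in for a real train-and-validate loop, and all names and values are illustrative assumptions:

```python
import math
import random

# 1) Define the search space: log-uniform lr, categorical batch size, int depth.
SEARCH_SPACE = {
    "lr": lambda: 10 ** random.uniform(-5, -1),
    "batch_size": lambda: random.choice([16, 32, 64, 128]),
    "depth": lambda: random.randint(2, 8),
}

def sample_params():
    """Draw one candidate configuration from the search space."""
    return {name: draw() for name, draw in SEARCH_SPACE.items()}

def objective(params):
    """Stand-in for 'train model, return validation score' (higher is better).

    This toy surface peaks near lr=1e-3 and depth=5; a real skill would
    fit a model here and return a held-out metric.
    """
    lr_term = -abs(math.log10(params["lr"]) + 3)
    depth_term = -abs(params["depth"] - 5)
    return lr_term + depth_term

def random_search(n_trials=50, patience=15, seed=0):
    """2) Choose a strategy (random search), 3) run trials with early stopping."""
    random.seed(seed)
    best_params, best_score = None, float("-inf")
    since_improvement = 0
    for _ in range(n_trials):
        params = sample_params()
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
            since_improvement = 0
        else:
            since_improvement += 1
        # Simple early-stopping criterion: halt after `patience` stale trials.
        if since_improvement >= patience:
            break
    return best_params, best_score

best_params, best_score = random_search()
print(best_params, best_score)
```

Steps 4 and 5 of the workflow would follow from here: re-evaluate `best_params` on a held-out validation set, then retrain the final model with that configuration.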
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is padded with generic boilerplate that explains nothing Claude doesn't already know. Phrases like 'provides automated assistance' and 'follows industry best practices' are meaningless filler with no actual hyperparameter tuning information. | 1 / 3 |
| Actionability | No concrete code, commands, or specific guidance is provided. The skill describes what it claims to do but never actually shows how to tune hyperparameters: no examples of grid search, random search, Bayesian optimization, or any tuning library usage. | 1 / 3 |
| Workflow Clarity | No workflow is defined. The skill mentions 'step-by-step guidance' but provides none. There are no actual steps, no validation checkpoints, and no process for hyperparameter tuning tasks. | 1 / 3 |
| Progressive Disclosure | The content is a flat, uninformative structure with no references to detailed materials, no links to examples, and no organization beyond generic section headers that contain no useful content. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation — 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |