
tuning-hyperparameters

Optimize machine learning model hyperparameters using grid search, random search, or Bayesian optimization. Finds best parameter configurations to maximize performance. Use when asked to "tune hyperparameters" or "optimize model". Trigger with relevant phrases based on skill purpose.
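As a rough illustration of the methods the description names, a random-search sketch with scikit-learn might look like this (the estimator, dataset, and parameter ranges here are illustrative assumptions, not part of the skill):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

# Illustrative search space; real ranges depend on the model and data.
param_distributions = {
    "n_estimators": [100, 200, 500],
    "max_depth": [5, 10, 20, None],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=8,      # sample 8 configurations at random
    cv=3,          # 3-fold cross-validation per configuration
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

Grid search exhausts the same space instead of sampling it, and Bayesian optimization (e.g. via Optuna) chooses each new configuration based on previous results.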


Quality

38%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/ai-ml/hyperparameter-tuner/skills/tuning-hyperparameters/SKILL.md

Quality

Discovery

77%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has strong specificity with concrete methods listed and includes an explicit 'Use when' clause, which is good. However, the final sentence 'Trigger with relevant phrases based on skill purpose' is meaningless filler that adds no information. The trigger terms could be more comprehensive, and 'optimize model' is broad enough to risk conflicts with other ML-related skills.

Suggestions

Remove the vague filler sentence 'Trigger with relevant phrases based on skill purpose' and replace it with additional natural trigger terms like 'hyperparameter tuning', 'parameter sweep', 'model selection', 'GridSearchCV', 'best parameters'.

Make the 'Use when' clause more distinctive by specifying the context more precisely, e.g., 'Use when asked to tune hyperparameters, run parameter sweeps, or find optimal model configurations—not for model architecture changes or feature engineering.'

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: 'Optimize machine learning model hyperparameters using grid search, random search, or Bayesian optimization' and 'Finds best parameter configurations to maximize performance.' These are concrete methods and outcomes.

3 / 3

Completeness

Clearly answers both 'what' (optimize hyperparameters using grid search, random search, or Bayesian optimization to maximize performance) and 'when' (Use when asked to 'tune hyperparameters' or 'optimize model'). The explicit 'Use when...' clause is present with trigger phrases.

3 / 3

Trigger Term Quality

Includes some natural keywords like 'tune hyperparameters', 'optimize model', and method names (grid search, random search, Bayesian optimization), but the final sentence 'Trigger with relevant phrases based on skill purpose' is vague filler that adds no value. Missing common variations like 'hyperparameter tuning', 'model selection', 'parameter sweep', 'cross-validation'.

2 / 3

Distinctiveness / Conflict Risk

The phrase 'optimize model' is somewhat generic and could overlap with skills related to model architecture optimization, feature engineering, or general ML workflows. The specific mention of hyperparameter methods (grid search, random search, Bayesian optimization) helps, but 'optimize model' as a trigger is broad enough to cause conflicts.

2 / 3

Total: 10 / 12 (Passed)

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a template filled with generic placeholder text. It contains no executable code, no concrete commands, no specific library usage patterns, and no actionable guidance. Nearly every section describes what the skill does in abstract terms rather than providing Claude with the specific instructions needed to actually perform hyperparameter tuning.

Suggestions

Replace the abstract 'How It Works' and 'Examples' sections with concrete, executable Python code snippets showing GridSearchCV, RandomizedSearchCV, and Optuna usage patterns with real parameter grids.

Remove the generic boilerplate sections (Overview, When to Use, Integration, Prerequisites, Instructions, Output, Error Handling, Resources) that explain nothing Claude doesn't already know, and replace with lean, actionable content.

Add a concrete workflow with validation steps, e.g.: define search space → run search with cross-validation → verify results aren't overfit by checking test set performance → report best params.

Include specific code examples with common hyperparameter ranges for popular models (e.g., RandomForest n_estimators=[100,200,500], max_depth=[5,10,20,None]) so Claude can immediately generate working tuning code.
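A sketch of what the suggested workflow and parameter grid might look like in practice (the dataset is an illustrative stand-in; the grid values follow the suggestion above):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# 1. Hold out a test set so tuning results can be sanity-checked later.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# 2. Define the search space (ranges from the suggestion above).
param_grid = {
    "n_estimators": [100, 200, 500],
    "max_depth": [5, 10, 20, None],
}

# 3. Run the search with cross-validation on the training set only.
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X_train, y_train)

# 4. Check the held-out test set to verify the result is not overfit
#    to the cross-validation folds.
cv_score = search.best_score_
test_score = search.score(X_test, y_test)

# 5. Report best parameters alongside both scores.
print(search.best_params_, round(cv_score, 3), round(test_score, 3))
```

A large gap between `cv_score` and `test_score` would suggest the search overfit the validation folds, which is exactly the check the workflow suggestion asks for.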

Dimension / Reasoning / Score

Conciseness

Extremely verbose with extensive explanation of concepts Claude already knows. The 'Overview', 'How It Works', 'When to Use', 'Integration', 'Prerequisites', 'Instructions', 'Output', 'Error Handling', and 'Resources' sections are almost entirely filler that explains nothing Claude doesn't already understand. The content reads like a marketing document rather than an actionable skill.

1 / 3

Actionability

No executable code anywhere in the skill. Examples describe what the skill 'will do' in abstract terms rather than providing concrete, copy-paste-ready code snippets. The 'Instructions' section is completely generic ('Invoke this skill when the trigger conditions are met') with zero specificity.

1 / 3

Workflow Clarity

The 'How It Works' section lists abstract phases (Analyzing Requirements, Generating Code, etc.) but provides no concrete steps, commands, or validation checkpoints. The examples describe outcomes without showing the actual workflow. No feedback loops or error recovery steps are defined.

1 / 3

Progressive Disclosure

The content is a monolithic wall of text with no references to external files and no bundle files to support it. Sections like 'Resources' mention 'Project documentation' and 'Related skills and commands' without any actual links or paths. The structure is flat and uninformative.

1 / 3

Total: 4 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria / Description / Result

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 9 / 11 (Passed)

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
