
tuning-hyperparameters

tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill tuning-hyperparameters

Optimize machine learning model hyperparameters using grid search, random search, or Bayesian optimization. Finds best parameter configurations to maximize performance. Use when asked to "tune hyperparameters" or "optimize model". Trigger with relevant phrases based on skill purpose.

Overall: 45%

Validation: 81%
Criteria | Description | Result
allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning
metadata_version | 'metadata' field is not a dictionary | Warning
frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning

Total: 13 / 16 (Passed)

Implementation: 0%

This skill content is a template-style placeholder that provides no actionable guidance for hyperparameter tuning. It explains concepts Claude already understands, lacks any executable code examples, and contains generic boilerplate sections ('Prerequisites', 'Instructions', 'Error Handling') that add no value. The skill fails to deliver on its promise of helping with hyperparameter optimization.

Suggestions

- Replace the verbose explanations with executable Python code examples showing grid search with scikit-learn's GridSearchCV and Bayesian optimization with Optuna
- Remove generic template sections (Prerequisites, Instructions, Error Handling, Integration) and replace with specific parameter configurations and search space definitions
- Add concrete code snippets for each search strategy (grid, random, Bayesian) that can be directly adapted for different models
- Include validation steps such as checking for data leakage in cross-validation and verifying search completion before reporting results
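As a concrete starting point for the first three suggestions, a minimal scikit-learn sketch covering the grid and random strategies; the dataset, estimator, and parameter ranges are illustrative, not from the skill itself:

```python
# Illustrative sketch: grid search and randomized search with scikit-learn.
# The dataset, estimator, and parameter ranges are placeholder choices.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Grid search: exhaustively evaluates every parameter combination
# with cross-validation and keeps the best-scoring configuration.
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
grid = GridSearchCV(SVC(), param_grid, cv=3, scoring="accuracy")
grid.fit(X, y)
print("grid best:", grid.best_params_, grid.best_score_)

# Random search: samples a fixed budget of configurations, which
# scales better when the search space is large.
param_dist = {"C": [0.01, 0.1, 1, 10, 100], "kernel": ["linear", "rbf"]}
rand = RandomizedSearchCV(SVC(), param_dist, n_iter=5, cv=3,
                          scoring="accuracy", random_state=0)
rand.fit(X, y)
print("random best:", rand.best_params_, rand.best_score_)
```

Swapping in a different estimator only requires changing the parameter dictionary keys to that estimator's constructor arguments.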

Dimension | Reasoning | Score
Conciseness | Extremely verbose with extensive explanations of concepts Claude already knows (what hyperparameter tuning is, how ML works). The 'Overview', 'How It Works', 'When to Use', and 'Integration' sections are largely redundant padding that don't add actionable value. | 1 / 3
Actionability | No executable code provided despite being a code-generation skill. Examples describe what the skill 'will do' rather than showing actual implementation. The 'Instructions' section is completely generic ('Invoke this skill when trigger conditions are met') with no concrete guidance. | 1 / 3
Workflow Clarity | The 'How It Works' section describes abstract steps without any concrete implementation details. No validation checkpoints, no error recovery workflows, and no actual sequence of commands or code to execute. | 1 / 3
Progressive Disclosure | Monolithic wall of text with no references to external files. Content is poorly organized with redundant sections (Overview, How It Works, When to Use all overlap). The 'Resources' section mentions documentation but provides no actual links. | 1 / 3

Total: 4 / 12 (Passed)

Activation: 85%

This is a solid skill description with clear capabilities and explicit trigger guidance. The main weakness is the final sentence 'Trigger with relevant phrases based on skill purpose' which is meaningless filler that should be replaced with additional natural trigger terms. The description would benefit from more keyword variations users might naturally say.

Suggestions

- Remove the vague filler sentence 'Trigger with relevant phrases based on skill purpose' and replace with additional natural trigger terms like 'parameter tuning', 'model optimization', 'hyperparameter search', 'GridSearchCV', 'Optuna'
- Add file type or framework mentions if applicable (e.g., 'scikit-learn models', 'PyTorch', 'TensorFlow') to improve trigger term coverage

Dimension | Reasoning | Score
Specificity | Lists multiple specific concrete actions: 'grid search, random search, or Bayesian optimization' and 'Finds best parameter configurations to maximize performance'. These are concrete, actionable capabilities. | 3 / 3
Completeness | Clearly answers both what (optimize hyperparameters using specific methods) and when (explicit 'Use when asked to...' clause with trigger phrases). The 'Use when' clause is present and explicit. | 3 / 3
Trigger Term Quality | Includes some natural keywords like 'tune hyperparameters' and 'optimize model', but the final sentence 'Trigger with relevant phrases based on skill purpose' is vague filler that adds no value. Missing common variations like 'parameter tuning', 'model optimization', 'hyperparameter search', or 'ML tuning'. | 2 / 3
Distinctiveness / Conflict Risk | Clear niche focused specifically on hyperparameter optimization with distinct ML-specific triggers. Unlikely to conflict with general coding or data processing skills due to specific terminology like 'hyperparameters', 'grid search', 'Bayesian optimization'. | 3 / 3

Total: 11 / 12 (Passed)
