
learning-rate-scheduler

Learning Rate Scheduler - Auto-activating skill for ML Training. Triggers on: learning rate scheduler, learning rate scheduler Part of the ML Training skill category.

Score: 35 (1.02x)

Quality: 3% (Does it follow best practices?)

Impact: 93% (1.02x). Average score across 3 eval scenarios.

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/learning-rate-scheduler/SKILL.md

Quality

Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is essentially a title and category label with no substantive content. It fails to describe what the skill does, lists no concrete actions, and repeats the same trigger term twice without covering natural user language variations. It would be nearly useless for Claude to differentiate this skill from other ML-related skills in a large skill library.

Suggestions

- Add concrete actions the skill performs, e.g., 'Implements and configures learning rate schedulers such as cosine annealing, step decay, warmup, and ReduceLROnPlateau for PyTorch and TensorFlow training loops.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about lr schedules, learning rate warmup, cosine annealing, step decay, cyclical learning rates, or adjusting learning rates during training.'
- Remove the duplicate trigger term and expand coverage to include abbreviations and synonyms users naturally use (e.g., 'lr schedule', 'lr decay', 'learning rate policy').
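Applied together, these suggestions might produce frontmatter along the following lines. This is an illustrative sketch, not the skill's actual metadata; the scheduler names and trigger terms are drawn from the suggestions above:

```yaml
name: learning-rate-scheduler
description: >
  Implements and configures learning rate schedulers such as cosine annealing,
  step decay, warmup, and ReduceLROnPlateau for PyTorch and TensorFlow training
  loops. Use when the user asks about lr schedules, lr decay, learning rate
  warmup, cosine annealing, step decay, cyclical learning rates, or adjusting
  the learning rate during training.
```

A description in this shape states concrete actions, includes an explicit 'Use when...' clause, and covers the abbreviations and synonyms users naturally reach for.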

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description names the domain ('ML Training') and the concept ('Learning Rate Scheduler') but does not describe any concrete actions. There are no verbs indicating what the skill actually does—no 'configures', 'adjusts', 'implements', or similar. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond naming the topic, and the 'when' clause is essentially just the skill name repeated. There is no explicit 'Use when...' guidance with meaningful triggers. | 1 / 3 |
| Trigger Term Quality | The only trigger term listed is 'learning rate scheduler' repeated twice. It misses natural variations users might say such as 'lr schedule', 'warmup', 'cosine annealing', 'step decay', 'learning rate decay', 'reduce lr on plateau', etc. | 1 / 3 |
| Distinctiveness Conflict Risk | The term 'learning rate scheduler' is fairly specific to a niche within ML, so it is somewhat distinguishable from other skills. However, the lack of concrete actions and the generic 'ML Training' category label could cause overlap with other ML training-related skills. | 2 / 3 |
| Total | | 5 / 12 (Passed) |

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is an empty shell—a template with placeholder text that provides no actual learning rate scheduler knowledge. It contains no executable code, no specific scheduler types (StepLR, CosineAnnealingLR, ReduceLROnPlateau, etc.), no configuration examples, and no workflow guidance. It is entirely non-functional as a skill.
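To illustrate the kind of concrete content the review found missing, here is a framework-free sketch of plateau-based decay. PyTorch's ReduceLROnPlateau is a real torch.optim.lr_scheduler class, but this simplified function is only a hypothetical illustration of the underlying idea, not anything the skill actually contains:

```python
def reduce_on_plateau(lr, loss_history, patience=3, factor=0.5, min_lr=1e-6):
    """Multiply lr by `factor` when the loss has not improved for `patience` epochs.

    A simplified, framework-free version of the plateau-detection logic
    behind PyTorch's ReduceLROnPlateau scheduler.
    """
    if len(loss_history) <= patience:
        return lr  # not enough history yet to judge a plateau
    recent_best = min(loss_history[-patience:])
    earlier_best = min(loss_history[:-patience])
    if recent_best >= earlier_best:
        # No improvement inside the patience window: decay, but never below min_lr.
        return max(min_lr, lr * factor)
    return lr

# Loss stalled at 0.8 for the last three epochs, so the lr is halved.
new_lr = reduce_on_plateau(0.1, [1.0, 0.9, 0.8, 0.8, 0.8, 0.8])
```

A skill with substance would pair a snippet like this with the real PyTorch API call and typical hyperparameter choices (patience, factor, cooldown).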

Suggestions

- Add concrete, executable code examples for common learning rate schedulers (e.g., PyTorch's StepLR, CosineAnnealingLR, OneCycleLR) with typical hyperparameter values and usage patterns.
- Include a decision workflow: when to use which scheduler based on training scenario (e.g., fine-tuning vs. training from scratch, short vs. long training runs).
- Provide a practical example showing scheduler integration into a training loop with validation checkpoints (e.g., monitoring loss to decide whether the scheduler is working).
- Remove all meta-description sections (Purpose, When to Use, Capabilities, Example Triggers) and replace them with actual technical content—the current content teaches Claude nothing it doesn't already know.
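As a sketch of what the first and third suggestions could look like in practice, the following framework-free schedule combines linear warmup with cosine annealing. The PyTorch classes named above (StepLR, CosineAnnealingLR, OneCycleLR) are real torch.optim.lr_scheduler APIs; this standalone function is a hypothetical illustration of the same idea:

```python
import math

def lr_at_step(step, total_steps, base_lr=1e-3, warmup_steps=100, min_lr=1e-5):
    """Linear warmup to base_lr, then cosine annealing down to min_lr."""
    if step < warmup_steps:
        # Ramp linearly from base_lr/warmup_steps up to base_lr.
        return base_lr * (step + 1) / warmup_steps
    # Cosine decay from base_lr to min_lr over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

# In a training loop, this schedule would drive the optimizer's lr each step.
schedule = [lr_at_step(s, total_steps=1000) for s in range(1000)]
```

Content like this could then show how to wire the same schedule into a real optimizer, e.g., via torch.optim.lr_scheduler.LambdaLR or by updating optimizer.param_groups directly, with a validation checkpoint to confirm the loss responds as expected.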

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is entirely filler and meta-description. It explains what the skill does in abstract terms without providing any actual technical content. Every section describes rather than instructs, wasting tokens on information that adds no value. | 1 / 3 |
| Actionability | There is zero concrete guidance—no code examples, no specific scheduler implementations (e.g., StepLR, CosineAnnealing, OneCycleLR), no commands, no configurations. The skill describes itself rather than teaching Claude how to do anything. | 1 / 3 |
| Workflow Clarity | No workflow is defined at all. There are no steps, no sequences, no validation checkpoints. The 'step-by-step guidance' mentioned in Capabilities is never actually provided. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic block of vague descriptions with no references to detailed materials, no links to examples or API references, and no meaningful structural organization beyond boilerplate headings. | 1 / 3 |
| Total | | 4 / 12 (Passed) |

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 checks passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 (Passed) |

Repository
jeremylongshore/claude-code-plugins-plus-skills
Reviewed
