
mixed-precision-trainer

Mixed Precision Trainer - Auto-activating skill for ML Training. Triggers on: mixed precision trainer, mixed precision trainer Part of the ML Training skill category.

36

Quality: 3%
Does it follow best practices?

Impact: 96% (1.07x)
Average score across 3 eval scenarios

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/mixed-precision-trainer/SKILL.md

Quality

Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is essentially a placeholder that names the skill and its category but provides no substantive information about what it does or when it should be used. It lacks concrete actions, meaningful trigger terms, and explicit usage guidance, making it nearly useless for skill selection among multiple options.

Suggestions

Add concrete actions describing what the skill does, e.g., 'Configures and implements mixed precision training with FP16/BF16 computation, manages loss scaling, and optimizes GPU memory usage for faster model training.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user mentions mixed precision, FP16, BF16, half precision, AMP, automatic mixed precision, loss scaling, or wants to speed up training with reduced precision.'

Remove the duplicate trigger term and expand the trigger list to include common variations and related concepts users would naturally mention.
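Taken together, these suggestions point toward frontmatter along the following lines. This is a hypothetical sketch: the skill name comes from this page, but the description text and trigger list are illustrative examples, not the maintainer's actual wording.

```yaml
name: mixed-precision-trainer
description: >
  Configures and implements mixed precision training with FP16/BF16
  computation, manages loss scaling, and optimizes GPU memory usage for
  faster model training. Use when the user mentions mixed precision,
  FP16, BF16, half precision, AMP, automatic mixed precision, loss
  scaling, or wants to speed up training with reduced precision.
```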

Specificity (1 / 3): The description names the domain ('ML Training') and the concept ('Mixed Precision Trainer') but does not describe any concrete actions. There are no verbs indicating what the skill actually does—no 'trains models', 'configures precision settings', 'converts FP32 to FP16', etc.

Completeness (1 / 3): The description fails to answer 'what does this do' beyond naming itself, and the 'when' clause is essentially just the skill name repeated. There is no explicit 'Use when...' guidance with meaningful triggers.

Trigger Term Quality (1 / 3): The only trigger terms listed are 'mixed precision trainer' repeated twice. There are no natural variations a user might say such as 'FP16', 'half precision', 'AMP', 'automatic mixed precision', 'mixed precision training', 'loss scaling', or 'gradient scaling'.

Distinctiveness / Conflict Risk (2 / 3): The term 'mixed precision' is somewhat specific to a niche within ML training, which provides some distinctiveness. However, the lack of concrete actions and the generic 'ML Training' category label could cause overlap with other ML training skills.

Total: 5 / 12 (Passed)

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is an empty shell with no substantive content. It consists entirely of meta-descriptions about what the skill would do without actually providing any technical guidance on mixed precision training. There is no code, no concrete instructions, no workflow, and no references to useful resources.

Suggestions

Add concrete, executable code examples for mixed precision training using PyTorch AMP (torch.cuda.amp.autocast, GradScaler) and/or TensorFlow mixed_precision API.

Define a clear multi-step workflow: e.g., 1) Configure mixed precision policy, 2) Wrap model/optimizer, 3) Handle loss scaling, 4) Validate numerical stability—with explicit validation checkpoints.

Remove all meta-description sections (Purpose, When to Use, Capabilities, Example Triggers) and replace with actionable technical content covering common pitfalls like NaN losses, ops that don't support FP16, and gradient scaling strategies.

Add references to advanced topics in separate files (e.g., debugging NaN issues, custom autocast contexts, benchmarking FP16 vs FP32 performance) for progressive disclosure.
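The kind of concrete, executable guidance the review asks for might look like the following minimal PyTorch AMP training step. This is a sketch, not the skill's actual content: the model, tensor shapes, and hyperparameters are illustrative, and the numbered comments mirror the four-step workflow suggested above.

```python
# Minimal mixed precision training step with PyTorch AMP.
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"  # FP16 autocast requires a GPU

model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)  # loss scaling for FP16

inputs = torch.randn(32, 128, device=device)
targets = torch.randint(0, 10, (32,), device=device)

for step in range(3):
    optimizer.zero_grad(set_to_none=True)
    # 1) Forward pass under autocast: eligible ops run in reduced precision
    with torch.autocast(device_type=device, enabled=use_amp):
        loss = nn.functional.cross_entropy(model(inputs), targets)
    # 2) Scale the loss before backward to avoid FP16 gradient underflow
    scaler.scale(loss).backward()
    # 3) Unscale and step; the step is skipped if gradients contain inf/NaN
    scaler.step(optimizer)
    # 4) Adjust the loss scale for the next iteration
    scaler.update()
```

On a CPU-only machine the same loop runs with AMP disabled, which makes it easy to smoke-test before moving to a GPU.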

Conciseness (1 / 3): The content is entirely filler and meta-description. It explains what the skill does in abstract terms without providing any actual technical content. Every section describes rather than instructs, wasting tokens on information that adds no value.

Actionability (1 / 3): There is zero concrete guidance—no code, no commands, no specific techniques for mixed precision training. The skill contains no executable examples, no API references, no library usage (e.g., torch.cuda.amp, tf.keras.mixed_precision), and no actionable steps whatsoever.

Workflow Clarity (1 / 3): No workflow is defined. There are no steps, no sequence, no validation checkpoints. The skill merely claims it 'provides step-by-step guidance' without actually providing any.

Progressive Disclosure (1 / 3): The content is a monolithic block of vague descriptions with no references to detailed materials, no links to examples or advanced guides, and no meaningful structural organization beyond boilerplate headings.

Total: 4 / 12 (Passed)
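The review repeatedly flags NaN losses and non-finite gradients as the pitfalls such a skill should cover. One hedged sketch of what that guidance could include is a small stability check, assuming PyTorch; the function name and error messages are illustrative, not part of any existing API.

```python
import math

import torch


def check_numerical_stability(loss, model):
    """Raise early when mixed precision training goes unstable."""
    if not math.isfinite(loss.item()):
        raise RuntimeError(
            "Loss is NaN/inf: lower the learning rate or the initial loss scale"
        )
    for name, param in model.named_parameters():
        if param.grad is not None and not torch.isfinite(param.grad).all():
            raise RuntimeError(f"Non-finite gradient in {name}")

# Usage sketch: call after backward() — and after scaler.unscale_(optimizer)
# if you use a GradScaler, so gradients are inspected in FP32 units.
```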

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure:

allowed_tools_field (Warning): 'allowed-tools' contains unusual tool name(s)
frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata

Total: 9 / 11 (Passed)

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
