Skill description: "Optimize deep learning models using Adam, SGD, and learning rate scheduling to improve accuracy and reduce training time. Use when asked to 'optimize deep learning model' or 'improve model performance'. Trigger with phrases like 'optimize', 'performance', or 'speed up'."
Overall score: 38%

Impact: Pending (no eval scenarios have been run).
Validation: Passed (no known issues).

To optimize this skill with Tessl:

npx tessl skill review --optimize ./plugins/ai-ml/deep-learning-optimizer/skills/optimizing-deep-learning-models/SKILL.md

Quality: does it follow best practices?

Discovery: 77%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description does a good job of specifying concrete techniques (Adam, SGD, learning rate scheduling) and includes explicit 'Use when' guidance with trigger phrases. However, the trigger terms are overly generic ('optimize', 'performance', 'speed up') which creates conflict risk with non-deep-learning optimization skills, and it misses natural user phrases specific to the deep learning training domain.
Suggestions
Add more domain-specific trigger terms users would naturally say, such as 'learning rate', 'training loss', 'convergence', 'hyperparameter tuning', 'optimizer', 'epochs', or 'loss not decreasing'.
Make trigger terms more distinctive by scoping them explicitly to deep learning, e.g., 'Use when asked about deep learning optimizer selection, learning rate schedules, or training convergence issues' to reduce conflict with general optimization skills.
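Combining these two suggestions, the frontmatter description might read along these lines (an illustrative sketch only; the exact wording and trigger list are assumptions, not a prescribed form):

```yaml
description: >
  Optimize deep learning model training with Adam, SGD with momentum, and
  learning rate schedules (e.g., cosine annealing, ReduceLROnPlateau).
  Use when asked about deep learning optimizer selection, learning rate
  schedules, hyperparameter tuning, or training convergence issues, or
  when phrases like 'loss not decreasing' or 'speed up training' appear.
```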
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: optimizing deep learning models using Adam, SGD, and learning rate scheduling, with clear goals of improving accuracy and reducing training time. | 3 / 3 |
| Completeness | Clearly answers both 'what' (optimize deep learning models using Adam, SGD, learning rate scheduling) and 'when' (an explicit 'Use when' clause and trigger phrases are provided). | 3 / 3 |
| Trigger Term Quality | Includes some relevant trigger terms like 'optimize', 'performance', 'speed up', and 'deep learning model', but misses common variations users might say, such as 'training loop', 'hyperparameter tuning', 'convergence', 'loss not decreasing', 'learning rate', or 'optimizer'. The listed triggers are also somewhat generic and could apply to non-deep-learning optimization. | 2 / 3 |
| Distinctiveness / Conflict Risk | The trigger terms 'optimize', 'performance', and 'speed up' are quite generic and could easily conflict with skills for code optimization, database optimization, or general performance tuning. The 'deep learning' qualifier helps, but the broad triggers weaken distinctiveness. | 2 / 3 |
| Total | | 10 / 12 Passed |
Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is almost entirely boilerplate and abstract description with zero actionable content. Despite being about deep learning optimization (Adam, SGD, learning rate scheduling), it contains no code examples, no specific configurations, no concrete parameter recommendations, and no executable guidance. The content reads like a template that was never filled in with actual technical substance.
Suggestions
Add concrete, executable code examples showing PyTorch/TensorFlow optimizer setup (e.g., Adam with specific hyperparameters, SGD with momentum, learning rate schedulers like CosineAnnealingLR or ReduceLROnPlateau).
Remove all boilerplate sections that provide no value: 'Overview', 'When to Use This Skill', 'Integration', 'Prerequisites', 'Instructions', 'Output', 'Error Handling', and 'Resources' are all filler as currently written.
Replace the abstract 'How It Works' and 'Examples' sections with a concrete workflow: e.g., 1) Profile current training loop, 2) Apply specific optimizer config with code, 3) Validate by comparing loss curves, 4) Iterate if metrics don't improve.
Add specific, actionable guidance such as recommended learning rates for common architectures, when to use Adam vs SGD, concrete regularization code snippets, and batch size heuristics.
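These implementation suggestions can be illustrated with a minimal PyTorch sketch (the stand-in model, the specific hyperparameter values, and the placeholder validation loss are illustrative assumptions, not content taken from the skill itself):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for the real model

# Adam: a common default; lr=3e-4 is a widely used starting heuristic.
adam = torch.optim.Adam(model.parameters(), lr=3e-4, weight_decay=1e-5)

# SGD with momentum: often preferred for CNNs when the schedule is tuned.
sgd = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

# CosineAnnealingLR decays the lr smoothly toward zero over T_max epochs.
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(sgd, T_max=100)

# ReduceLROnPlateau cuts the lr when a monitored metric stops improving.
plateau = torch.optim.lr_scheduler.ReduceLROnPlateau(
    adam, mode="min", factor=0.1, patience=10
)

for epoch in range(100):
    # ... forward pass, loss.backward(), optimizer.step(), zero_grad() ...
    cosine.step()                 # epoch-level step for the cosine schedule
    val_loss = 1.0 / (epoch + 1)  # placeholder for a real validation loss
    plateau.step(val_loss)        # plateau scheduler watches the metric

print("final SGD lr:", sgd.param_groups[0]["lr"])
```

Validating by comparing loss curves, as the workflow suggestion describes, would then amount to logging `val_loss` per epoch before and after a change and checking that the new configuration converges faster or to a lower loss.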
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive padding. Sections like 'Overview', 'When to Use This Skill', 'Integration', 'Prerequisites', 'Instructions', 'Output', 'Error Handling', and 'Resources' are all filler that explain nothing Claude doesn't already know. The 'How It Works' section describes abstract steps rather than providing actionable content. The entire file could be reduced to ~20% of its size without losing any useful information. | 1 / 3 |
| Actionability | No concrete code, no executable examples, no specific commands, no actual optimizer configurations, no learning rate schedules, no code snippets for Adam/SGD setup. The examples describe what 'the skill will do' in abstract terms rather than showing how to do it. For a skill about deep learning optimization, the complete absence of PyTorch/TensorFlow code is a critical failure. | 1 / 3 |
| Workflow Clarity | The 'How It Works' steps are entirely abstract ('Analyze Model', 'Identify Optimizations', 'Apply Optimizations') with no concrete details about what to actually do at each step. No validation checkpoints, no feedback loops, no specific metrics to check. The 'Instructions' section is completely generic boilerplate ('Invoke this skill when trigger conditions are met'). | 1 / 3 |
| Progressive Disclosure | No bundle files are provided, yet the content is a monolithic wall of text with many sections that are either empty boilerplate or redundant. There are no references to external files, and the content that is present is poorly organized: generic sections like 'Resources' point to nothing specific ('Project documentation', 'Related skills and commands'). | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 checks passed.
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |