`tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill optimizing-deep-learning-models`

Optimize deep learning models using Adam, SGD, and learning rate scheduling to improve accuracy and reduce training time. Use when asked to "optimize deep learning model" or "improve model performance". Trigger with phrases like 'optimize', 'performance', or 'speed up'.
Validation
81%

| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 13 / 16 Passed | |
Implementation
7%

This skill content is largely boilerplate with no actionable guidance. It describes what optimization is conceptually but provides zero executable code, specific commands, or concrete examples. The content explains things Claude already knows (what Adam/SGD are, what regularization does) while failing to provide the actual implementation details that would make this skill useful.
Suggestions
Replace abstract descriptions with executable Python code showing actual optimizer configuration (e.g., `torch.optim.Adam(model.parameters(), lr=0.001)` with learning rate scheduler setup)
Add concrete before/after examples with specific code snippets showing how to implement each optimization technique mentioned
Remove generic boilerplate sections (Prerequisites, Error Handling, Resources, Integration) that provide no skill-specific value
Include a validation workflow showing how to measure and compare model performance before and after optimization (e.g., tracking loss curves, accuracy metrics)
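The first and last suggestions above can be combined into one executable example. The following is a minimal sketch only, assuming PyTorch; the toy linear-regression task is hypothetical and all hyperparameter values (`lr=0.1`, `step_size=10`, `gamma=0.1`) are illustrative, not recommendations:

```python
import torch

torch.manual_seed(0)

# Hypothetical toy regression task: recover a known linear mapping.
model = torch.nn.Linear(4, 1)
X = torch.randn(64, 4)
y = X @ torch.ones(4, 1)

# Adam optimizer with a step-decay learning rate schedule.
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
loss_fn = torch.nn.MSELoss()

# Track the loss curve so before/after performance can be compared.
losses = []
for epoch in range(30):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    scheduler.step()  # decay the learning rate by 10x every 10 epochs
    losses.append(loss.item())

print(f"initial loss {losses[0]:.4f} -> final loss {losses[-1]:.4f}")
print(f"final learning rate: {scheduler.get_last_lr()[0]:.1e}")
```

Recording `losses` (and, on a real task, held-out accuracy) before and after each change is the kind of validation workflow the last suggestion asks for: it turns "optimize the model" into a measurable comparison.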
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with redundant sections (Overview repeated twice), explains concepts Claude already knows (what optimizers are, basic ML concepts), and includes generic boilerplate sections (Prerequisites, Error Handling, Resources) that add no value. | 1 / 3 |
| Actionability | No executable code, no specific commands, no concrete examples. Everything is abstract description ('Apply techniques', 'Generate optimized code') without showing actual implementation. The examples describe what the skill 'will do' rather than providing copy-paste ready code. | 1 / 3 |
| Workflow Clarity | The 4-step 'How It Works' is vague and lacks any validation checkpoints. No concrete sequence for actually optimizing a model, no feedback loops, and no verification steps. Steps like 'Apply Optimizations' give no actionable guidance. | 1 / 3 |
| Progressive Disclosure | Content is organized into sections with headers, but it's a monolithic document with no references to external files. The structure exists but contains too much filler content that could be removed rather than split into separate files. | 2 / 3 |
| Total | | 5 / 12 Passed |
Activation
77%

This description has strong specificity with concrete techniques and outcomes, and includes explicit 'Use when' guidance making it complete. However, the trigger terms are somewhat generic ('optimize', 'performance', 'speed up'), which could cause conflicts with non-ML optimization skills, and the keyword coverage misses common ML training terminology users might naturally use.
Suggestions
Add more domain-specific trigger terms users would naturally say, such as 'training', 'neural network', 'hyperparameter', 'convergence', 'loss function', or 'epochs'
Make triggers more distinctive by emphasizing ML-specific context, e.g., 'Use when optimizing neural network training, tuning hyperparameters, or improving model convergence' to reduce conflict with general optimization skills
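Putting both suggestions together, the skill's description might read something like the following. This is a hypothetical sketch of SKILL.md frontmatter, not the skill's actual metadata:

```yaml
---
name: optimizing-deep-learning-models
description: >
  Optimize neural network training using Adam, SGD, and learning rate
  scheduling to improve accuracy and reduce training time. Use when
  optimizing neural network training, tuning hyperparameters, or improving
  model convergence. Triggers: 'training', 'neural network', 'hyperparameter',
  'convergence', 'loss function', 'epochs'.
---
```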
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Optimize deep learning models using Adam, SGD, and learning rate scheduling to improve accuracy and reduce training time.' Names specific techniques (Adam, SGD, learning rate scheduling) and concrete outcomes (improve accuracy, reduce training time). | 3 / 3 |
| Completeness | Clearly answers both what ('Optimize deep learning models using Adam, SGD, and learning rate scheduling') and when ('Use when asked to optimize deep learning model or improve model performance') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes some relevant keywords ('optimize', 'performance', 'speed up', 'deep learning model') but missing common variations users might say like 'training', 'neural network', 'hyperparameter tuning', 'convergence', 'loss', or 'epochs'. | 2 / 3 |
| Distinctiveness Conflict Risk | While 'deep learning' and specific optimizers (Adam, SGD) provide some distinction, generic triggers like 'optimize', 'performance', and 'speed up' could easily conflict with other optimization-related skills (code optimization, database optimization, etc.). | 2 / 3 |
| Total | | 10 / 12 Passed |
Reviewed