Distributed Training Setup
Auto-activating skill for ML Training. Triggers on: distributed training setup, distributed training setup. Part of the ML Training skill category.
Install with Tessl CLI
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill distributed-training-setup

Overall score: 19%
Does it follow best practices?
Activation
Score: 7%

This description is severely underdeveloped, essentially just restating the skill name without providing any meaningful information about capabilities or usage triggers. It fails to help Claude distinguish when to select this skill, as it lacks concrete actions, natural user language, and explicit guidance on when to activate.
Suggestions
- Add specific concrete actions the skill performs, e.g., 'Configures multi-node training environments, sets up data parallelism, troubleshoots GPU communication issues'
- Include a 'Use when...' clause with natural trigger terms like 'multi-GPU training', 'scale training across nodes', 'horovod setup', 'PyTorch DDP', 'training cluster' (a sample rewrite follows this list)
- Remove the redundant trigger term and expand with variations users would naturally say when needing distributed training help
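As one illustration of these points, a description along the following lines (wording invented here, not taken from the skill itself) would give Claude concrete capabilities and natural trigger terms:

```yaml
# Hypothetical SKILL.md frontmatter description (a sketch, not the skill's actual text)
description: >
  Configures multi-node training environments: sets up PyTorch DDP or Horovod,
  selects NCCL/Gloo communication backends, shards data across workers, and
  troubleshoots GPU communication issues. Use when the user mentions multi-GPU
  training, scaling training across nodes, PyTorch DDP, Horovod setup, or a
  training cluster.
```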
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description only mentions 'Distributed Training Setup' without describing any concrete actions. It doesn't explain what the skill actually does - no verbs describing capabilities like 'configures', 'deploys', 'monitors', etc. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond the title, and the 'when' guidance is just a circular reference to the skill name. There's no explicit 'Use when...' clause with meaningful trigger scenarios. | 1 / 3 |
| Trigger Term Quality | The trigger terms are redundant ('distributed training setup' listed twice) and overly narrow. Missing natural variations users might say like 'multi-GPU', 'parallel training', 'horovod', 'PyTorch distributed', 'cluster training', etc. | 1 / 3 |
| Distinctiveness / Conflict Risk | While 'distributed training' is a specific domain within ML, the lack of detail about what aspects it covers (setup vs monitoring vs debugging) could cause overlap with other ML training skills. The 'ML Training skill category' mention suggests potential conflicts. | 2 / 3 |
| Total | | 5 / 12 |
Implementation
Score: 0%

This skill is essentially a placeholder with no substantive content. It describes what a distributed training skill should do without providing any actual guidance, code examples, or concrete instructions. The content would be useless for actually helping someone set up distributed training.
Suggestions
- Add executable code examples for common distributed training frameworks (PyTorch DDP, Horovod, TensorFlow MirroredStrategy) with copy-paste-ready configurations (a minimal DDP sketch follows this list)
- Define a clear workflow with validation checkpoints: environment setup → cluster configuration → communication backend selection → data sharding strategy → launch commands → verification steps
- Remove all generic boilerplate ('provides automated assistance', 'follows best practices') and replace with specific technical guidance
- Add references to separate files for advanced topics like multi-node setup, fault tolerance, and performance tuning
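To make the first suggestion concrete, here is a minimal PyTorch DDP sketch of the kind the skill could ship. The model, data, and launch parameters below are placeholder assumptions, not content from the reviewed skill, and a single-node `torchrun` launch is assumed:

```python
# Minimal PyTorch DDP sketch (illustrative placeholder model and data).
# Launch (assumed single node, 2 GPUs): torchrun --nproc_per_node=2 train_ddp.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main() -> None:
    # torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK in the environment
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 1).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    # Placeholder data; a real setup would shard a dataset with DistributedSampler
    x = torch.randn(32, 10, device=f"cuda:{local_rank}")
    y = torch.randn(32, 1, device=f"cuda:{local_rank}")

    for _ in range(10):  # toy training loop
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()  # DDP all-reduces gradients across ranks here
        opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Each process drives one GPU; gradient synchronization happens inside `backward()`, so the loop itself looks identical to single-GPU training.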
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is padded with generic boilerplate that provides no actual information about distributed training. Phrases like 'provides automated assistance' and 'follows industry best practices' are meaningless filler that Claude already understands. | 1 / 3 |
| Actionability | No concrete code, commands, or specific guidance is provided. The skill describes what it does in abstract terms but never shows how to actually set up distributed training - no PyTorch DDP examples, no Horovod configs, no actual implementation details. | 1 / 3 |
| Workflow Clarity | No workflow is defined. 'Provides step-by-step guidance' is claimed but no actual steps are given. Distributed training setup involves complex multi-step processes (cluster config, communication backends, data sharding) that are completely absent (a verification sketch follows this table). | 1 / 3 |
| Progressive Disclosure | The content is a monolithic block of vague descriptions with no structure for actual learning. No references to detailed documentation, no links to examples, and no organization of content by complexity or use case. | 1 / 3 |
| Total | | 4 / 12 |
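The missing 'verification steps' could be as simple as a collective-communication smoke test. The following sketch is our suggestion, not something the reviewed skill contains; it assumes an already-initialized process group on CUDA devices:

```python
# Hypothetical verification step: every rank contributes its rank id to an
# all-reduce, and the sum must equal 0 + 1 + ... + (world_size - 1) on all ranks.
import torch
import torch.distributed as dist


def verify_collectives() -> None:
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    # One GPU per rank is assumed; round-robin over devices if oversubscribed
    device = f"cuda:{rank % torch.cuda.device_count()}"
    t = torch.tensor([float(rank)], device=device)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    expected = world_size * (world_size - 1) / 2
    assert t.item() == expected, f"rank {rank}: got {t.item()}, expected {expected}"
    print(f"rank {rank}/{world_size}: collective communication OK")
```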
Validation
Score: 69% (11 / 16 checks passed)
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| body_steps | No step-by-step structure detected (no ordered list); consider adding a simple workflow | Warning |
| Total | | 11 / 16 passed |
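Most of these warnings concern frontmatter shape. As a sketch only (the validator's exact schema is inferred from the warning names, not documented here), frontmatter along these lines would likely clear the allowed_tools_field, metadata_version, and frontmatter_unknown_keys checks:

```yaml
---
name: distributed-training-setup
# An explicit trigger hint, per the description_trigger_hint warning
description: >
  Configures multi-GPU and multi-node training. Use when the user mentions
  PyTorch DDP, Horovod, or scaling training across nodes.
allowed-tools: Read, Write, Bash  # stick to standard tool names
metadata:  # a dictionary, per the metadata_version warning
  version: "1.0.0"
  category: ml-training
---
```

Adding a numbered workflow to the body (1. environment setup, 2. backend selection, and so on) would also address the body_steps warning.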