
distributed-training-setup

Distributed Training Setup - Auto-activating skill for ML Training. Triggers on: distributed training setup, distributed training setup Part of the ML Training skill category.

Quality: 3% (Does it follow best practices?)

Impact: 93%, 0.97x (Average score across 3 eval scenarios)

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/distributed-training-setup/SKILL.md

Quality

Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is essentially a placeholder that provides almost no useful information for skill selection. It repeats the skill name as its own trigger term and fails to describe any concrete capabilities or use cases. The description would be nearly useless for Claude to distinguish this skill from other ML-related skills.

Suggestions

- Add specific concrete actions like 'Configure multi-GPU training, set up data parallelism, optimize distributed communication, troubleshoot cluster synchronization issues'
- Include diverse natural trigger terms users would say: 'multi-node', 'parallel training', 'GPU cluster', 'Horovod', 'DeepSpeed', 'scale training across machines'
- Add an explicit 'Use when...' clause, for example: 'Use when the user needs to train models across multiple GPUs or machines, mentions scaling training, or asks about frameworks like Horovod, DeepSpeed, or PyTorch DDP'. A sample rewrite combining these points follows below.
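Putting the three suggestions together, a rewritten description might read as follows. The wording is a hypothetical sketch, not the maintainer's:

'Configure distributed and multi-GPU model training: set up data parallelism, tune distributed communication, and troubleshoot cluster synchronization. Use when the user needs to train models across multiple GPUs, nodes, or machines, mentions scaling or parallel training, or asks about Horovod, DeepSpeed, or PyTorch DDP.'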

Dimension scores

Specificity (1 / 3): The description only names the domain 'Distributed Training Setup' without listing any concrete actions. There are no specific capabilities mentioned like 'configure multi-GPU training', 'set up data parallelism', or 'optimize cluster communication'.

Completeness (1 / 3): The description fails to answer 'what does this do' beyond the title, and the 'when' guidance is just a circular restatement of the skill name. There is no explicit 'Use when...' clause with meaningful trigger guidance.

Trigger Term Quality (1 / 3): The trigger terms are redundant ('distributed training setup' repeated twice) and overly narrow, missing natural variations users might say like 'multi-node training', 'parallel training', 'GPU cluster', 'Horovod', 'DeepSpeed', or 'scale training'.

Distinctiveness / Conflict Risk (2 / 3): While 'distributed training' is a specific ML subdomain, the lack of detail means it could overlap with general ML training skills. The mention of the 'ML Training skill category' suggests potential conflicts within that category.

Total: 5 / 12. Passed.

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a placeholder with no substantive content. It describes meta-information about what the skill claims to do but provides absolutely no actual guidance on distributed training setup - no code examples, no configuration templates, no framework-specific instructions, and no workflow for setting up multi-GPU or multi-node training.

Suggestions

- Add concrete, executable code examples for at least one framework (e.g., a PyTorch DistributedDataParallel setup with a torchrun launch command); a minimal sketch of what this could look like appears after this list.
- Include a clear workflow with steps: environment setup → configuration → launch command → validation/debugging
- Provide specific configuration examples (e.g., NCCL environment variables, world size, rank assignment)
- Remove all generic boilerplate ('provides automated assistance', 'follows best practices') and replace it with actual technical content
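To ground the first suggestion, here is a minimal sketch of the kind of example the skill could ship: a single-file PyTorch DistributedDataParallel (DDP) training script launched with torchrun. The linear model and random tensors are placeholders standing in for real training code, not content from the skill itself.

```python
# Minimal PyTorch DDP sketch (illustrative placeholder).
# Launch on one machine with 4 GPUs:
#   torchrun --nproc_per_node=4 train_ddp.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment;
    # init_process_group reads them to join the process group.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and data; a real skill would document its own.
    model = torch.nn.Linear(32, 1).to(local_rank)
    model = DDP(model, device_ids=[local_rank])
    dataset = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
    # DistributedSampler shards the dataset so each rank sees a distinct slice.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # vary the shuffle across epochs
        for x, y in loader:
            x, y = x.to(local_rank), y.to(local_rank)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()  # DDP all-reduces gradients here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

This also touches the third suggestion: torchrun derives world size and rank assignment from its --nproc_per_node and --nnodes flags, and NCCL communication can be inspected by exporting NCCL_DEBUG=INFO before launching.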

Dimension scores

Conciseness (1 / 3): The content is padded with generic boilerplate that provides no actual information about distributed training. Phrases like 'provides automated assistance' and 'follows industry best practices' are meaningless filler that Claude already understands.

Actionability (1 / 3): There is zero concrete guidance: no code, no commands, no specific steps, no configuration examples. The skill describes what it claims to do rather than actually instructing how to do distributed training setup.

Workflow Clarity (1 / 3): No workflow is provided whatsoever. For a complex topic like distributed training (which involves multiple nodes, configuration, and synchronization), there are no steps, no validation checkpoints, and no sequence of operations.

Progressive Disclosure (1 / 3): The content is a flat, uninformative structure with no references to detailed materials, no links to framework-specific guides (PyTorch DDP, TensorFlow distributed strategies), and no organization of content by complexity or use case.

Total: 4 / 12. Passed.

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 checks passed

Validation for skill structure

Criteria results

allowed_tools_field: 'allowed-tools' contains unusual tool name(s). Result: Warning

frontmatter_unknown_keys: Unknown frontmatter key(s) found; consider removing or moving to metadata. Result: Warning

Total: 9 / 11. Passed.

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
