
distributed-training-setup

Distributed Training Setup - Auto-activating skill for ML Training. Triggers on: distributed training setup. Part of the ML Training skill category.


Quality: 3% (Does it follow best practices?)
Impact: 93% · 0.97x (average score across 3 eval scenarios)

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/distributed-training-setup/SKILL.md

Evaluation results

Scaling a PyTorch Image Classifier to Multiple GPUs (PyTorch DDP best practices): 93% (-7%)

| Criteria | Without context | With context |
| --- | --- | --- |
| Process group init | 100% | 100% |
| Backend selection | 100% | 100% |
| DDP model wrapping | 100% | 100% |
| Rank/world_size from env | 100% | 62% |
| Distributed sampler | 100% | 100% |
| Rank-0 only output | 100% | 100% |
| Process group cleanup | 100% | 75% |
| torchrun compatible | 100% | 100% |
| Step-by-step launch guide | 100% | 66% |
| Validation output present | 100% | 100% |
| No large file downloads | 100% | 100% |
| Device placement | 100% | 100% |

Without context: $0.4211 · 3m 38s · 21 turns · 21 in / 7,434 out tokens

With context: $0.6023 · 3m 45s · 33 turns · 65 in / 7,718 out tokens
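
The DDP criteria above amount to a small, well-known piece of boilerplate. As a rough illustration (not the evaluated skill's actual output), here is a minimal torchrun-compatible sketch that touches each criterion; the linear model, synthetic dataset, and hyperparameters are placeholders:

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # torchrun sets these environment variables for every worker process.
    rank = int(os.environ["RANK"])
    local_rank = int(os.environ["LOCAL_RANK"])
    world_size = int(os.environ["WORLD_SIZE"])

    # Backend selection: NCCL for GPUs, gloo as a CPU fallback.
    use_cuda = torch.cuda.is_available()
    backend = "nccl" if use_cuda else "gloo"
    dist.init_process_group(backend=backend, rank=rank, world_size=world_size)

    device = torch.device(f"cuda:{local_rank}" if use_cuda else "cpu")
    if use_cuda:
        torch.cuda.set_device(local_rank)

    # Placeholder classifier and synthetic data (no large file downloads).
    model = torch.nn.Linear(32, 10).to(device)
    model = DDP(model, device_ids=[local_rank] if use_cuda else None)

    dataset = TensorDataset(torch.randn(1024, 32), torch.randint(0, 10, (1024,)))
    sampler = DistributedSampler(dataset)  # shards the data across ranks
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        if rank == 0:  # rank-0-only output
            print(f"epoch {epoch} done")

    dist.destroy_process_group()  # process group cleanup


if __name__ == "__main__":
    main()
```

Launched with something like `torchrun --nproc_per_node=2 train.py` (the file name is assumed). Note the criteria that dipped with context (rank/world_size from env, cleanup, launch guide) are exactly these setup and teardown steps.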

Multi-GPU Training Acceleration for a TensorFlow NLP Model (TensorFlow distributed strategy): 87% (+1%)

| Criteria | Without context | With context |
| --- | --- | --- |
| Distribution strategy used | 100% | 100% |
| Model inside strategy scope | 100% | 100% |
| Global batch size scaling | 100% | 80% |
| Config file present | 75% | 100% |
| Config consistency | 50% | 100% |
| No large downloads | 100% | 100% |
| Validation output present | 100% | 100% |
| Step-by-step structure | 100% | 100% |
| Production error handling | 100% | 62% |
| Pip-installable deps | 100% | 100% |
| Checkpoint or save | 0% | 0% |

Without context: $0.4816 · 2m 26s · 27 turns · 26 in / 7,288 out tokens

With context: $0.6956 · 3m · 37 turns · 328 in / 9,188 out tokens
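
For reference, a minimal sketch of the pattern this scenario checks for: tf.distribute.MirroredStrategy, model construction inside the strategy scope, and a global batch size scaled by the replica count. The model, data shapes, and file name are illustrative, and the final save call is included because "Checkpoint or save" scored 0% in both runs:

```python
import tensorflow as tf

# All replicas on one machine; MirroredStrategy handles gradient syncing.
strategy = tf.distribute.MirroredStrategy()
num_replicas = strategy.num_replicas_in_sync

# Scale the global batch size by the replica count.
per_replica_batch = 64
global_batch = per_replica_batch * num_replicas

# Synthetic data, so nothing large is downloaded.
x = tf.random.normal((2048, 128))
y = tf.random.uniform((2048,), maxval=2, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(2048).batch(global_batch)

# Model creation and compilation must happen inside the strategy scope.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

model.fit(dataset, epochs=2)
model.save("model.keras")  # the save step the eval found missing in both runs
```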

Setting Up a Reproducible Distributed ML Training Pipeline (Full ML pipeline with experiment tracking): 100%

| Criteria | Without context | With context |
| --- | --- | --- |
| Data prep module | 100% | 100% |
| Distributed training used | 100% | 100% |
| Imports data_prep | 100% | 100% |
| Hyperparameter config | 100% | 100% |
| Config loaded in training | 100% | 100% |
| Experiment tracking | 100% | 100% |
| Experiment log written | 100% | 100% |
| Reproducibility info | 100% | 100% |
| Validation run captured | 100% | 100% |
| Synthetic data only | 100% | 100% |
| Step-by-step structure | 100% | 100% |
| Large file cleanup | 100% | 100% |

Without context: $0.7654 · 5m 31s · 36 turns · 36 in / 11,818 out tokens

With context: $0.5856 · 3m 49s · 31 turns · 103 in / 8,026 out tokens
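
A minimal sketch of the reproducibility plumbing these criteria describe: loading a hyperparameter config, seeding, and writing an experiment log with version info. All file names, config keys, and the metrics dict below are hypothetical placeholders, not the skill's actual layout:

```python
import json
import platform
import random
import time

import torch


def load_config(path="config.json"):
    # Hypothetical hyperparameter config, e.g. {"seed": 42, "lr": 0.01, "epochs": 2}.
    with open(path) as f:
        return json.load(f)


def set_seed(seed):
    # Seed both Python and the framework so runs are reproducible.
    random.seed(seed)
    torch.manual_seed(seed)


def write_experiment_log(config, metrics, path="experiment_log.json"):
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "config": config,
        "metrics": metrics,
        # Reproducibility info: library and interpreter versions.
        "torch_version": torch.__version__,
        "python_version": platform.python_version(),
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2)


config = load_config()
set_seed(config["seed"])
# ... distributed training would run here, producing real metrics ...
write_experiment_log(config, metrics={"final_loss": None})
```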

Repository: jeremylongshore/claude-code-plugins-plus-skills
Agent: Claude Code
Model: Claude Sonnet 4.6


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.