
distributed-training-setup

Distributed Training Setup - Auto-activating skill for ML Training. Triggers on: distributed training setup, distributed training setup Part of the ML Training skill category.

35 (0.97x)

Quality: 3% (Does it follow best practices?)

Impact: 93% (0.97x, average score across 3 eval scenarios)

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/distributed-training-setup/SKILL.md

Quality

Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is extremely thin and template-like, essentially restating the skill name without providing any concrete capabilities, use cases, or meaningful trigger terms. It fails to help Claude distinguish when to select this skill, and the duplicated trigger term suggests auto-generated boilerplate rather than a thoughtfully crafted description.

Suggestions

Add specific concrete actions such as 'Configures multi-GPU and multi-node training, sets up data/model parallelism, integrates with frameworks like Horovod, DeepSpeed, and PyTorch DDP'.

Add an explicit 'Use when...' clause with natural trigger terms like 'multi-GPU', 'scale training', 'data parallel', 'model parallel', 'distributed computing', 'multi-node', 'training across GPUs'.

Remove the duplicated trigger term and replace with diverse, natural keywords users would actually use when requesting help with distributed training.
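Taken together, these suggestions imply a frontmatter description along the following lines. This is an illustrative sketch, not the skill's actual metadata: the name and description keys follow the common SKILL.md frontmatter convention, and the wording is assumed rather than taken from the repository.

```yaml
---
name: distributed-training-setup
description: >
  Configures multi-GPU and multi-node ML training: sets up data and model
  parallelism and integrates with frameworks such as PyTorch DDP, DeepSpeed,
  and Horovod. Use when a user mentions multi-GPU, multi-node, data parallel,
  model parallel, scaling training, or training across GPUs.
---
```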

Dimension scores

Specificity (1 / 3): The description only names the domain ('distributed training setup' and 'ML Training') but lists no concrete actions. There are no specific capabilities like 'configure multi-GPU training', 'set up data parallelism', or 'configure Horovod/DeepSpeed'.

Completeness (1 / 3): The 'what' is essentially absent beyond the title, and the 'when' is only a redundant trigger phrase with no explicit 'Use when...' clause or meaningful guidance on when to select this skill.

Trigger Term Quality (1 / 3): The trigger terms are just 'distributed training setup' repeated twice. Missing are natural variations users would say, like 'multi-GPU', 'data parallel', 'model parallel', 'Horovod', 'DeepSpeed', 'multi-node training', 'scaling training', etc.

Distinctiveness / Conflict Risk (2 / 3): The phrase 'distributed training' is somewhat specific to a niche within ML, which provides some distinctiveness. However, the lack of concrete actions or detailed triggers means it could overlap with other ML training skills.

Total: 5 / 12 (Passed)

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a placeholder template with no actual instructional content. It contains no executable code, no concrete steps, no framework-specific guidance (despite tagging PyTorch, TensorFlow, sklearn), and no references to supplementary materials. It would provide Claude with zero useful information for helping users set up distributed training.

Suggestions

Add concrete, executable code examples for at least one framework (e.g., PyTorch DDP setup with torchrun launcher, model wrapping, and gradient synchronization).

Define a clear multi-step workflow with validation checkpoints: environment setup → launch configuration → model wrapping → training loop → verification that distributed training is functioning correctly.

Remove all meta-description sections ('Purpose', 'When to Use', 'Capabilities', 'Example Triggers') and replace with actionable technical content covering common distributed training patterns (DDP, FSDP, DeepSpeed, Horovod).

Add references to separate files for advanced topics (e.g., multi-node setup, mixed precision with distributed training, fault tolerance) to provide progressive disclosure.
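As a concrete illustration of the first suggestion above, a minimal PyTorch DDP setup might look like the sketch below. This is a hypothetical example, not content from the skill itself; it assumes PyTorch is installed and uses the CPU `gloo` backend with a single process so it can be dry-run without GPUs. Under `torchrun`, the rank, world size, and rendezvous variables would be supplied by the launcher instead of the defaults used here.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def setup_process_group(rank: int = 0, world_size: int = 1) -> None:
    # torchrun normally sets these env vars; the defaults allow a
    # single-process dry run on CPU (launch for real with e.g.
    # `torchrun --nproc_per_node=4 train.py`).
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group(backend="gloo", rank=rank, world_size=world_size)


def train_step() -> float:
    model = torch.nn.Linear(10, 1)
    # DDP wraps the model; gradient all-reduce happens during backward().
    ddp_model = DDP(model)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    inputs = torch.randn(8, 10)
    targets = torch.randn(8, 1)
    loss = torch.nn.functional.mse_loss(ddp_model(inputs), targets)
    loss.backward()   # gradients are synchronized across ranks here
    optimizer.step()
    return loss.item()
```

A verification step like the review asks for could be as simple as checking `dist.is_initialized()` and that a training step completes on every rank before starting the full run.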

Dimension scores

Conciseness (1 / 3): The content is entirely filler and meta-description. It explains what the skill does in abstract terms without providing any actual technical content. Every section restates the same vague idea ('distributed training setup') without adding substance.

Actionability (1 / 3): There is zero concrete guidance: no code, no commands, no specific configurations, no framework-specific instructions. Phrases like 'Provides step-by-step guidance' and 'Generates production-ready code' describe capabilities without demonstrating them.

Workflow Clarity (1 / 3): No workflow is defined at all. Distributed training setup is inherently multi-step (environment config, launcher setup, model wrapping, gradient sync, validation) but none of these steps are mentioned, let alone sequenced with checkpoints.

Progressive Disclosure (1 / 3): There are no references to external files, no structured breakdown of topics, and no navigation aids. The content is a flat, shallow description with no depth or pointers to deeper material.

Total: 4 / 12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation for skill structure

Criteria results

allowed_tools_field (Warning): 'allowed-tools' contains unusual tool name(s).

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 9 / 11 (Passed)
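To clear the two warnings, the frontmatter would be trimmed to recognized keys only. The before/after below is a hypothetical sketch: the offending key names and tool name are invented for illustration, and it assumes (per the warning text) that the spec accepts a metadata block for extra keys.

```yaml
# Before (triggers both warnings): an unknown top-level key and an
# unrecognized tool name
# category: ml-training
# allowed-tools: SuperTool

# After: unknown keys moved under metadata, tools limited to standard names
name: distributed-training-setup
allowed-tools: Read, Write, Bash
metadata:
  category: ml-training
```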

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

