Wandb Experiment Logger - Auto-activating skill for ML Training. Triggers on: wandb experiment logger, wandb experiment logger. Part of the ML Training skill category.
Install with Tessl CLI
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill wandb-experiment-logger

Overall score: 19%
Does it follow best practices?
Activation
7%

This description is severely underdeveloped, essentially serving as a placeholder rather than a functional skill description. It names the tool and category but provides no information about what actions the skill performs or when Claude should select it. The repeated trigger term suggests auto-generated content without human refinement.
Suggestions
Add specific concrete actions the skill performs, e.g., 'Logs training metrics, tracks hyperparameters, saves model artifacts, and visualizes experiment results using Weights & Biases.'
Include a 'Use when...' clause with natural trigger terms like 'track experiments', 'log training metrics', 'weights and biases', 'wandb', 'ML experiment tracking', or 'compare training runs'.
Expand trigger terms to include common variations: 'wandb', 'weights and biases', 'W&B', 'experiment tracking', 'training logs', 'ML metrics'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description only names the tool ('Wandb Experiment Logger') and category ('ML Training') without describing any concrete actions. No specific capabilities like 'log metrics', 'track experiments', or 'visualize training runs' are mentioned. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond naming itself, and provides no 'when should Claude use it' guidance. The 'Triggers on' field is redundant and doesn't constitute proper usage guidance. | 1 / 3 |
| Trigger Term Quality | The trigger terms listed are just 'wandb experiment logger' repeated twice. Missing natural variations users would say like 'weights and biases', 'log training', 'track experiments', 'ML metrics', or 'training runs'. | 1 / 3 |
| Distinctiveness / Conflict Risk | While 'wandb' is a specific tool name that provides some distinctiveness, the lack of concrete actions means it could overlap with other ML logging or experiment tracking skills. The category mention 'ML Training' is too broad. | 2 / 3 |
| Total | | 5 / 12 Passed |
Implementation
0%

This skill is a placeholder template with no actual wandb experiment logging content. It contains only generic boilerplate text that could apply to any skill, with zero actionable guidance, code examples, or wandb-specific information. The skill fails to teach Claude anything about wandb logging.
Suggestions
Add executable Python code showing wandb.init(), wandb.log(), and wandb.finish() with concrete examples (see the sketch after this list for the kind of snippet intended)
Include a clear workflow: 1) Initialize run with config, 2) Log metrics during training, 3) Log artifacts, 4) Finish run with validation
Remove all generic template text (Purpose, When to Use, Capabilities sections) and replace with actual wandb patterns and best practices
Add specific examples for common use cases: logging training loss, saving model checkpoints, comparing runs, hyperparameter sweeps
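To illustrate the kind of snippet these suggestions call for, here is a minimal sketch of the init → log → artifact → finish workflow using the standard wandb Python API. The project name, hyperparameters, file path, and the random placeholder loss are illustrative assumptions, not content taken from the skill itself.

```python
import random
import wandb

# 1) Initialize a run with the experiment configuration.
#    Project name and hyperparameters are placeholders.
run = wandb.init(
    project="my-ml-project",
    config={"learning_rate": 1e-3, "epochs": 5, "batch_size": 32},
)

# 2) Log metrics during training. The random loss stands in for
#    whatever training loop the user already has.
for epoch in range(run.config["epochs"]):
    train_loss = random.random()  # placeholder for a real loss value
    wandb.log({"epoch": epoch, "train/loss": train_loss})

# 3) Log artifacts such as model checkpoints (path is a placeholder;
#    the file must exist before add_file is called).
artifact = wandb.Artifact("model-checkpoint", type="model")
artifact.add_file("checkpoint.pt")
run.log_artifact(artifact)

# 4) Finish the run so metrics and artifacts are flushed and the run closes.
run.finish()
```

Equally short snippets could cover the other suggested use cases, such as comparing runs or launching hyperparameter sweeps with wandb.sweep and wandb.agent, without growing the skill much beyond a single page.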
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is entirely filler text with no actual wandb-specific information. It explains generic concepts Claude already knows (what triggers are, what capabilities mean) without providing any concrete wandb logging guidance. | 1 / 3 |
| Actionability | No executable code, no wandb API examples, no concrete commands. The content describes rather than instructs; phrases like 'Provides step-by-step guidance' appear without any actual steps. | 1 / 3 |
| Workflow Clarity | No workflow is defined. There are no steps for setting up wandb, initializing experiments, logging metrics, or any validation checkpoints. The skill promises guidance but delivers none. | 1 / 3 |
| Progressive Disclosure | No structure beyond generic headings. No references to detailed documentation, no links to wandb API docs or examples, and no organization of content by complexity or use case. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation
69%

Validation — 11 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| body_steps | No step-by-step structure detected (no ordered list); consider adding a simple workflow | Warning |
| Total | 11 / 16 Passed | |
Reviewed
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.