
tensorboard-visualizer

Tensorboard Visualizer - Auto-activating skill for ML Training. Triggers on: tensorboard visualizer, tensorboard visualizer Part of the ML Training skill category.

Quality: 3% (Does it follow best practices?)
Impact: 100%, 1.00x (average score across 3 eval scenarios)
Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/tensorboard-visualizer/SKILL.md

Quality

Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is extremely weak across all dimensions. It essentially just names the skill and its category without describing any concrete capabilities, use cases, or natural trigger terms. The trigger terms are the skill name repeated verbatim, providing no useful matching surface for user queries.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Launches TensorBoard servers, visualizes training metrics (loss, accuracy, learning rate), displays model graphs, and compares experiment runs.'

Add a 'Use when...' clause with natural trigger terms like 'Use when the user wants to visualize training logs, plot loss curves, monitor training progress, view TensorBoard dashboards, or compare ML experiment metrics.'

Remove the duplicate trigger term 'tensorboard visualizer' and expand with varied natural phrases users might say, such as 'training visualization', 'view training metrics', 'loss plot', 'TensorBoard', 'training curves'.

Specificity (1 / 3): The description names 'Tensorboard Visualizer' and 'ML Training' but does not describe any concrete actions. There are no verbs indicating what the skill actually does (e.g., launch tensorboard, visualize training metrics, plot loss curves).

Completeness (1 / 3): The description fails to answer 'what does this do' beyond naming itself, and the 'when' clause is essentially just the skill name repeated. There is no explicit 'Use when...' guidance with meaningful triggers.

Trigger Term Quality (1 / 3): The only trigger terms listed are 'tensorboard visualizer' repeated twice. It misses natural user phrases like 'training logs', 'loss curves', 'visualize metrics', 'tensorboard', 'training progress', 'learning rate plot', etc.

Distinctiveness / Conflict Risk (2 / 3): The mention of 'Tensorboard' provides some specificity to a particular tool, which helps distinguish it from generic visualization or ML skills. However, the lack of concrete actions means it could still overlap with other ML training or visualization skills.

Total: 5 / 12 (Passed)

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is an empty template with no actual instructional content. It contains only meta-descriptions of what the skill would do without providing any concrete guidance on TensorBoard visualization—no code examples, no commands, no workflows, and no references to supplementary materials. It fails on every dimension of the rubric.

Suggestions

Add concrete, executable code examples showing how to set up TensorBoard logging (e.g., `SummaryWriter` in PyTorch or `tf.summary` in TensorFlow) with specific API calls.

Define a clear workflow: 1) Add logging to training loop, 2) Launch TensorBoard, 3) Verify visualizations appear correctly—with explicit commands for each step.

Remove all meta-description sections ('When to Use', 'Capabilities', 'Example Triggers') and replace with actual technical content such as common visualization patterns (scalars, histograms, images, embeddings).

Add references to advanced topics (custom plugins, remote TensorBoard, comparing experiments) as separate linked files if needed.
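To make the first two suggestions concrete, here is a minimal sketch of the kind of executable example the skill could include, assuming PyTorch's `torch.utils.tensorboard.SummaryWriter` (the log directory, tag name, and dummy loss values are illustrative, not part of the skill under review):

```python
from torch.utils.tensorboard import SummaryWriter

# Step 1: add logging to the training loop.
writer = SummaryWriter(log_dir="runs/experiment-1")  # illustrative log directory
for step in range(100):
    loss = 1.0 / (step + 1)  # placeholder for a real training loss
    writer.add_scalar("train/loss", loss, step)  # scalar curve shown in TensorBoard
writer.close()  # flush event files to disk

# Step 2: launch TensorBoard from a shell:
#   tensorboard --logdir runs
# Step 3: open http://localhost:6006 and verify the "train/loss" curve appears.
```

The same `writer` object also covers the visualization patterns the review mentions (`add_histogram`, `add_image`, `add_embedding`), so a single logging setup can serve as the skill's running example.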

Conciseness (1 / 3): The content is entirely filler and meta-description. It explains what the skill does in abstract terms without providing any actual technical content. Every section restates the same vague idea ('tensorboard visualizer') without adding substance.

Actionability (1 / 3): There is zero concrete guidance—no code, no commands, no TensorBoard API usage, no configuration examples. The skill describes rather than instructs, offering nothing executable or copy-paste ready.

Workflow Clarity (1 / 3): No workflow, steps, or process is defined. The skill claims to provide 'step-by-step guidance' but contains none. There are no validation checkpoints or sequenced instructions.

Progressive Disclosure (1 / 3): The content is a monolithic block of vague descriptions with no references to detailed materials, no links to examples or API docs, and no meaningful structural organization beyond boilerplate headings.

Total: 4 / 12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 Passed

Validation for skill structure

allowed_tools_field (Warning): 'allowed-tools' contains unusual tool name(s)

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata

Total: 9 / 11 (Passed)

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
