
tensorboard-visualizer

Tensorboard Visualizer - Auto-activating skill for ML Training. Triggers on: tensorboard visualizer, tensorboard visualizer Part of the ML Training skill category.

36

Quality: 3% (Does it follow best practices?)

Impact: 100% (1.00x average score across 3 eval scenarios)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/tensorboard-visualizer/SKILL.md

Quality

Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is severely underdeveloped, essentially serving as a placeholder rather than a functional skill description. It lacks concrete actions, meaningful trigger terms, and any guidance on when to use the skill. The only redeeming quality is that 'Tensorboard' is a recognizable specific tool name.

Suggestions

Add specific capabilities: 'Visualize training metrics, plot loss/accuracy curves, compare experiment runs, analyze model convergence, display embedding projections.'

Add a 'Use when...' clause: 'Use when the user mentions TensorBoard, training visualization, loss curves, training metrics, model training progress, or wants to analyze ML experiment logs.'

Expand trigger terms to include natural variations: 'tensorboard', 'training logs', 'loss plots', 'training metrics', 'experiment tracking', 'model training visualization'.
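Taken together, these suggestions might produce a frontmatter description along these lines (a hypothetical sketch only; the exact field names and wording should follow the SKILL.md spec):

```yaml
description: >
  Visualize ML training with TensorBoard: plot loss/accuracy curves,
  compare experiment runs, analyze model convergence, and display
  embedding projections. Use when the user mentions TensorBoard,
  training visualization, loss curves, training metrics, or wants to
  analyze ML experiment logs.
```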

Specificity (1 / 3)

The description only names the tool ('Tensorboard Visualizer') and category ('ML Training') without describing any concrete actions. No specific capabilities like 'visualize training metrics', 'plot loss curves', or 'compare model runs' are mentioned.

Completeness (1 / 3)

The description fails to answer 'what does this do' beyond naming itself, and has no 'Use when...' clause or equivalent guidance for when Claude should select this skill. Both components are essentially missing.

Trigger Term Quality (1 / 3)

The trigger terms listed are just 'tensorboard visualizer' repeated twice. Missing natural variations users would say like 'tensorboard', 'training logs', 'loss curves', 'metrics visualization', 'model training graphs', or 'TensorBoard'.

Distinctiveness / Conflict Risk (2 / 3)

While 'Tensorboard' is a specific tool name that provides some distinctiveness, the vague 'ML Training' category and lack of specific triggers could cause overlap with other ML visualization or training monitoring skills.

Total: 5 / 12

Passed

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a placeholder with no actionable content. It describes what a TensorBoard visualizer skill would do but provides zero actual guidance, code examples, or workflows for using TensorBoard. The entire content could be replaced with a single sentence and would convey the same (minimal) information.

Suggestions

Add executable code examples showing how to launch TensorBoard and log data (e.g., `from torch.utils.tensorboard import SummaryWriter`)

Include specific workflows for common tasks: logging scalars, images, histograms, and model graphs with validation steps

Remove generic boilerplate sections ('Capabilities', 'Example Triggers') and replace with concrete, copy-paste-ready code snippets

Add references to advanced topics like custom plugins, remote TensorBoard, or comparing multiple runs in separate documentation files
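A minimal sketch of the kind of executable example the suggestions above call for, assuming PyTorch is installed; the log directory name `runs/demo` and the decaying loss values are arbitrary choices for illustration, not part of the original skill:

```python
# Log a scalar series to TensorBoard with PyTorch's SummaryWriter.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/demo")

# Log a placeholder training loss at each step.
for step in range(100):
    loss = 1.0 / (step + 1)  # stand-in value, not a real model
    writer.add_scalar("train/loss", loss, step)

writer.flush()
writer.close()

# View the run with:
#   tensorboard --logdir runs/demo
```

A snippet like this gives the agent something copy-paste-ready to adapt, and validation is simple: the event file appears under the log directory as soon as the writer is created.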

Conciseness (1 / 3)

The content is padded with generic boilerplate that explains nothing specific about TensorBoard. Phrases like 'provides automated assistance' and 'follows industry best practices' are filler that Claude already understands.

Actionability (1 / 3)

No concrete code, commands, or specific guidance is provided. The skill describes what it does abstractly ('provides step-by-step guidance') but never actually provides any guidance, examples, or executable instructions for using TensorBoard.

Workflow Clarity (1 / 3)

No workflow, steps, or process is defined. The content only describes trigger conditions and vague capabilities without any actual sequence of actions for visualizing data in TensorBoard.

Progressive Disclosure (1 / 3)

The content is a flat, uninformative structure with no references to detailed documentation, examples, or related files. There's nothing to disclose progressively because there's no substantive content.

Total: 4 / 12

Passed

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 checks passed

Validation for skill structure

allowed_tools_field (Warning)

'allowed-tools' contains unusual tool name(s)

frontmatter_unknown_keys (Warning)

Unknown frontmatter key(s) found; consider removing or moving to metadata

Total: 9 / 11

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.