
tensorflow-model-trainer

Tensorflow Model Trainer - Auto-activating skill for ML Training. Triggers on: tensorflow model trainer, tensorflow model trainer Part of the ML Training skill category.

36 · 1.56x

Quality: 3% (Does it follow best practices?)

Impact: 100% (1.56x), average score across 3 eval scenarios

Security by Snyk: Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/tensorflow-model-trainer/SKILL.md

Quality

Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is extremely weak across all dimensions. It reads as an auto-generated stub with no concrete actions, duplicated trigger terms, and no explicit guidance on when Claude should select this skill. It would be nearly useless for skill selection in a multi-skill environment.

Suggestions

- Add specific concrete actions the skill performs, e.g., 'Builds, compiles, and trains TensorFlow/Keras models, configures layers, tunes hyperparameters, and evaluates model performance.'

- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to train a neural network, build a TensorFlow model, use Keras, fine-tune a deep learning model, or work with .h5/.pb model files.'

- Remove the duplicated trigger term and replace it with diverse natural keywords users would actually say, such as 'deep learning', 'neural network', 'keras', 'model training', 'epochs', 'loss function'.
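Taken together, the suggestions above could produce frontmatter along these lines. This is a hypothetical sketch, not the skill's actual metadata; it assumes the common SKILL.md convention of `name` and `description` frontmatter fields:

```yaml
---
name: tensorflow-model-trainer
description: >
  Builds, compiles, and trains TensorFlow/Keras models: configures layers,
  tunes hyperparameters, and evaluates model performance. Use when the user
  asks to train a neural network, build a TensorFlow model, use Keras,
  fine-tune a deep learning model, or work with .h5/.pb model files.
---
```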

Dimension scores

Specificity: 1 / 3
The description provides no concrete actions. It only names itself ('Tensorflow Model Trainer') and its category ('ML Training') without describing what it actually does—no mention of specific capabilities like training models, tuning hyperparameters, loading datasets, etc.

Completeness: 1 / 3
The description fails to answer both 'what does this do' and 'when should Claude use it'. There is no 'Use when...' clause and no meaningful explanation of capabilities—only a category label and a duplicated trigger term.

Trigger Term Quality: 1 / 3
The trigger terms are just the skill name repeated twice ('tensorflow model trainer, tensorflow model trainer'). There are no natural user keywords like 'train a model', 'TensorFlow', 'neural network', 'deep learning', 'fit model', '.h5', 'keras', etc.

Distinctiveness Conflict Risk: 2 / 3
The mention of 'Tensorflow' and 'ML Training' provides some domain specificity that distinguishes it from non-ML skills, but the lack of concrete actions or detailed triggers means it could easily overlap with other ML-related skills (e.g., PyTorch training, general ML skills).

Total: 5 / 12 (Passed)

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is an empty template with no substantive content. It contains only generic boilerplate descriptions that repeat the skill name without providing any actual instructions, code examples, or workflows for TensorFlow model training. It fails on every dimension because it teaches Claude nothing actionable.

Suggestions

- Add concrete, executable TensorFlow code examples covering common training workflows (e.g., data loading with tf.data, model definition with tf.keras, training loop with model.fit, and saving/loading checkpoints).

- Define a clear multi-step workflow with validation checkpoints, such as: data preparation → model architecture → compilation → training → evaluation → export, with specific commands and verification steps at each stage.

- Remove all generic boilerplate ('This skill provides automated assistance...', 'Provides step-by-step guidance...') and replace it with actual technical content like hyperparameter tuning strategies, experiment tracking setup, and common pitfalls.

- Add references to separate files for advanced topics (e.g., distributed training, custom training loops, TensorBoard integration) to enable progressive disclosure.
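As a concrete illustration of the first two suggestions, a minimal end-to-end workflow might look like the sketch below. It uses only documented tf.data and tf.keras APIs; the synthetic dataset, layer sizes, epoch count, and output filename are placeholder choices, not values from the skill itself:

```python
import numpy as np
import tensorflow as tf

# 1. Data preparation: a tiny synthetic binary-classification dataset.
X = np.random.rand(100, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")

# 2. Input pipeline with tf.data: shuffle and batch.
ds = tf.data.Dataset.from_tensor_slices((X, y)).shuffle(100).batch(16)

# 3. Model architecture with tf.keras.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# 4. Compilation: optimizer, loss, and metrics.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# 5. Training with model.fit.
history = model.fit(ds, epochs=2, verbose=0)

# 6. Evaluation and export.
loss, acc = model.evaluate(ds, verbose=0)
model.save("model.keras")
```

Each numbered step corresponds to one stage of the suggested data preparation → architecture → compilation → training → evaluation → export workflow, giving a natural place to insert verification checkpoints between stages.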

Dimension scores

Conciseness: 1 / 3
The content is entirely filler and boilerplate. It explains nothing Claude doesn't already know, repeats 'tensorflow model trainer' excessively, and provides zero actual technical content or instructions.

Actionability: 1 / 3
There is no concrete code, no executable commands, no specific examples, and no actual guidance on how to train a TensorFlow model. Every section is vague and abstract.

Workflow Clarity: 1 / 3
No workflow is defined at all. There are no steps, no sequences, no validation checkpoints—just generic claims about providing 'step-by-step guidance' without any actual steps.

Progressive Disclosure: 1 / 3
The content is a flat, monolithic block of generic text with no references to detailed materials, no links to examples or advanced guides, and no meaningful structural organization.

Total: 4 / 12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

allowed_tools_field: 'allowed-tools' contains unusual tool name(s). Warning

frontmatter_unknown_keys: Unknown frontmatter key(s) found; consider removing or moving to metadata. Warning

Total: 9 / 11 (Passed)

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.