
tensorflow-model-trainer

Tensorflow Model Trainer - Auto-activating skill for ML Training. Triggers on: tensorflow model trainer, tensorflow model trainer Part of the ML Training skill category.

Quality: 3%. Does it follow best practices?

Impact: 100% (1.56x). Average score across 3 eval scenarios.

Security by Snyk: Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/tensorflow-model-trainer/SKILL.md

Quality

Discovery

7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is extremely weak—it reads as an auto-generated template with no substantive content. It lacks concrete actions, meaningful trigger terms, and any explicit guidance on when Claude should select this skill. The repeated trigger term and boilerplate category mention provide almost no useful information for skill selection.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Builds, trains, and evaluates TensorFlow/Keras neural network models, configures layers, tunes hyperparameters, and exports saved models.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to train a neural network, build a TensorFlow model, use Keras, fine-tune a deep learning model, or work with .h5/.pb model files.'

Remove the redundant duplicate trigger term and replace with diverse natural keywords users would actually say, such as 'deep learning', 'neural network', 'keras', 'model training', 'epochs', 'loss function'.
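Taken together, the suggestions above could yield a description like the following sketch. This is hypothetical frontmatter, not the skill's actual content; the field names assume the common SKILL.md YAML convention, and the capability list should be adjusted to match what the skill really does:

```yaml
name: tensorflow-model-trainer
description: >
  Builds, trains, and evaluates TensorFlow/Keras neural network models:
  configures layers, tunes hyperparameters, and exports saved models.
  Use when the user asks to train a neural network, build a TensorFlow
  model, use Keras, fine-tune a deep learning model, or work with
  .h5/.pb model files. Keywords: deep learning, neural network, keras,
  model training, epochs, loss function.
```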

Dimension | Reasoning | Score

Specificity

The description names the domain ('ML Training', 'Tensorflow') but provides no concrete actions. There is no mention of what the skill actually does—no specific capabilities like 'trains models', 'tunes hyperparameters', 'loads datasets', etc.

1 / 3

Completeness

The description fails to answer both 'what does this do' and 'when should Claude use it'. There is no explanation of capabilities and no explicit 'Use when...' clause—only a redundant trigger phrase.

1 / 3

Trigger Term Quality

The only trigger terms listed are 'tensorflow model trainer' repeated twice. There are no natural user keywords like 'train a model', 'TensorFlow', 'deep learning', 'neural network', 'fit model', '.h5', 'keras', etc.

1 / 3

Distinctiveness Conflict Risk

The mention of 'Tensorflow' provides some specificity that distinguishes it from generic ML skills, but 'ML Training' is broad and could overlap with PyTorch, scikit-learn, or other training-related skills. Without concrete actions, the distinction is weak.

2 / 3

Total: 5 / 12 (Passed)

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is an empty template with no substantive content. It contains only generic boilerplate descriptions that repeat the skill name without providing any actual instructions, code examples, workflows, or technical guidance for TensorFlow model training. It fails on every dimension of the rubric.

Suggestions

Add concrete, executable TensorFlow code examples covering common training workflows (e.g., data loading with tf.data, model definition with tf.keras, training loop with model.fit, and checkpointing).

Define a clear multi-step workflow with validation checkpoints, such as: data preparation → model architecture → compilation → training → evaluation → export, with specific commands and verification steps at each stage.

Remove all generic boilerplate ('This skill provides automated assistance...', 'Provides step-by-step guidance...') and replace with actual technical content that Claude doesn't already know, such as project-specific conventions, preferred hyperparameter ranges, or experiment tracking patterns.

Add references to separate files for advanced topics like hyperparameter tuning strategies, distributed training setup, and experiment tracking integration (e.g., with TensorBoard or MLflow).
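As a minimal sketch of the concrete workflow these suggestions call for (assuming a standard TensorFlow 2.x install; the synthetic dataset, layer sizes, and file names are illustrative, not part of the skill under review):

```python
import numpy as np
import tensorflow as tf

# Illustrative synthetic data; a real skill would load project data here.
x = np.random.rand(256, 8).astype("float32")
y = (x.sum(axis=1) > 4.0).astype("float32")

# Data preparation: wrap arrays in a tf.data pipeline with shuffling and batching.
ds = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(256).batch(32)

# Model architecture: a small Keras classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Compilation: optimizer, loss, and metrics.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Training with a checkpoint saved after each epoch.
ckpt = tf.keras.callbacks.ModelCheckpoint("ckpt.weights.h5",
                                          save_weights_only=True)
model.fit(ds, epochs=2, callbacks=[ckpt], verbose=0)

# Evaluation and export.
loss, acc = model.evaluate(ds, verbose=0)
model.save("model.keras")
```

Each stage above maps onto the suggested workflow (data preparation, architecture, compilation, training, evaluation, export) and gives the agent something executable to adapt rather than boilerplate claims of guidance.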

Dimension | Reasoning | Score

Conciseness

The content is entirely filler and boilerplate. It explains nothing Claude doesn't already know, repeats 'tensorflow model trainer' excessively, and provides zero actual technical content or instructions.

1 / 3

Actionability

There is no concrete code, no executable commands, no specific examples, and no actual guidance on how to train a TensorFlow model. Every section is vague and abstract.

1 / 3

Workflow Clarity

No workflow is defined at all. There are no steps, no sequences, no validation checkpoints—just generic claims about providing 'step-by-step guidance' without actually providing any.

1 / 3

Progressive Disclosure

The content is a flat, monolithic block of generic text with no structure pointing to detailed resources, no references to external files, and no meaningful organization of content.

1 / 3

Total: 4 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria | Description | Result

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 9 / 11 (Passed)

Repository
jeremylongshore/claude-code-plugins-plus-skills
Reviewed


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.