
agent-data-ml-model

Agent skill for data-ml-model - invoke with $agent-data-ml-model

39

Quality: 7%. Does it follow best practices?

Impact: 93% (1.16x). Average score across 3 eval scenarios.

Security by Snyk: Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-data-ml-model/SKILL.md

Quality

Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an extremely weak description that provides essentially no useful information for skill selection. It reads as a placeholder or auto-generated stub, containing only the skill's internal name and invocation syntax. It fails on every dimension by not describing any capabilities, triggers, or use cases.

Suggestions

Replace the entire description with concrete actions the skill performs, e.g., 'Trains, evaluates, and deploys machine learning models on structured datasets. Supports regression, classification, and clustering tasks.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to train a model, make predictions, evaluate ML performance, or work with scikit-learn, TensorFlow, or similar frameworks.'

Remove the invocation syntax ('invoke with $agent-data-ml-model') from the description, as this is operational metadata that does not help Claude decide when to select the skill.
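Taken together, these suggestions could yield frontmatter along the following lines. This is a hypothetical sketch assembled from the review's own example phrasing, not the maintainer's actual wording:

```yaml
# Hypothetical SKILL.md frontmatter illustrating the suggested description style
name: agent-data-ml-model
description: >
  Trains, evaluates, and deploys machine learning models on structured
  datasets. Supports regression, classification, and clustering tasks.
  Use when the user asks to train a model, make predictions, evaluate ML
  performance, or work with scikit-learn, TensorFlow, or similar frameworks.
```

Note that the invocation syntax is gone entirely: the description now carries only the capabilities and trigger terms an agent needs for selection.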

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for data-ml-model' is entirely vague and abstract, providing no information about what the skill actually does. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. The description only states it's an 'agent skill' and how to invoke it, which is metadata, not functional description. | 1 / 3 |
| Trigger Term Quality | There are no natural keywords a user would say. 'data-ml-model' is a hyphenated internal identifier, not a natural language term. Users would say things like 'train a model', 'machine learning', 'predict', etc. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so vague that 'data-ml-model' could overlap with any data processing, machine learning, modeling, or analytics skill. There are no distinct triggers to differentiate it. | 1 / 3 |
| Total | | 4 / 12 |

Passed

Implementation: 14%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is dominated by extensive YAML frontmatter configuration that consumes most of the token budget, while the actual instructional body content is generic and describes ML concepts Claude already knows well. The workflow lacks validation checkpoints and concrete guidance, reading more like a textbook table of contents than an actionable skill. The single code example is the only concrete element but is too generic to provide meaningful value.

Suggestions

Remove or drastically reduce the YAML frontmatter and replace generic ML concept descriptions with project-specific patterns, conventions, or constraints that Claude wouldn't already know.

Add explicit validation checkpoints to the workflow (e.g., 'Verify no data leakage: assert X_test not in training set', 'Check model metrics exceed baseline before proceeding to deployment').

Replace abstract bullet points like 'Handle missing values' with concrete, executable code examples showing the specific approach preferred in this project.

Add progressive disclosure by splitting detailed topics (e.g., deployment steps, evaluation procedures) into referenced files and keeping SKILL.md as a concise overview with clear navigation links.
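As a concrete illustration of the validation-checkpoint suggestion, a leakage gate between the split and training phases might look like the sketch below. This assumes samples are identified by row index; `check_no_leakage` is a hypothetical helper, not code from the skill itself:

```python
def check_no_leakage(train_idx, test_idx):
    """Fail fast if any test sample also appears in the training split."""
    overlap = set(train_idx) & set(test_idx)
    assert not overlap, f"data leakage: indices {sorted(overlap)} in both splits"
    return True

# Disjoint splits pass the gate; overlapping splits raise immediately,
# which is the behavior a validation checkpoint should have.
check_no_leakage(train_idx=[0, 1, 2, 3, 4, 5], test_idx=[6, 7])
```

A checkpoint like this costs a few lines but turns a silent evaluation error into a hard stop before training proceeds.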

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The vast majority of the file is YAML frontmatter configuration that is not actionable skill content. The body content itself lists generic ML concepts Claude already knows (what EDA is, what feature scaling is, what cross-validation is). The bullet lists of responsibilities and workflow steps are textbook-level descriptions that add no novel information. | 1 / 3 |
| Actionability | There is one concrete code example showing a sklearn pipeline pattern, which is somewhat useful but generic. The rest of the content is abstract descriptions ('Handle missing values', 'Feature selection', 'Monitoring setup') without specific commands, concrete examples, or executable guidance for particular scenarios. | 2 / 3 |
| Workflow Clarity | The workflow lists 5 high-level phases but lacks specific sequencing details, validation checkpoints, or error recovery steps. For a complex ML pipeline involving model training and deployment (potentially destructive/batch operations), there are no validation gates, no feedback loops, and no explicit verification steps between phases. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic document with no references to external files for detailed topics like deployment, evaluation metrics, or specific model architectures. The YAML frontmatter dominates the file, and the body content mixes overview-level and detail-level information without clear navigation or separation. | 1 / 3 |
| Total | | 5 / 12 |

Passed
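The progressive-disclosure critique could be addressed with a SKILL.md that stays at overview level and links out to detail files. The file names below are illustrative, not taken from the repository:

```markdown
# agent-data-ml-model

Concise overview of the five workflow phases and when to use this skill.

- Deployment steps: see [deployment.md](deployment.md)
- Evaluation metrics and baselines: see [evaluation.md](evaluation.md)
- Model architecture notes: see [architectures.md](architectures.md)
```

This keeps the token cost of the entry point low while still letting an agent navigate to the detail it needs.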

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: ruvnet/ruflo (Reviewed)

