# Skill Review: agent-data-ml-model

Skill description under review: "Agent skill for data-ml-model - invoke with $agent-data-ml-model"
Does it follow best practices?
## Impact — 93%

- Average score across 3 eval scenarios: 1.16x
- Status: Passed, no known issues
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./.agents/skills/agent-data-ml-model/SKILL.md`

## Quality
### Discovery — 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an extremely weak description that provides essentially no useful information for skill selection. It reads as a placeholder or auto-generated stub, containing only the skill's internal name and invocation syntax. It fails on every dimension by not describing any capabilities, triggers, or use cases.
**Suggestions**

- Replace the entire description with concrete actions the skill performs, e.g., 'Trains, evaluates, and deploys machine learning models on structured datasets. Supports regression, classification, and clustering tasks.'
- Add an explicit 'Use when...' clause with natural trigger terms like 'train a model', 'machine learning', 'predict', 'ML pipeline', 'model accuracy', 'dataset', 'feature engineering'.
- Remove the invocation syntax ('invoke with $agent-data-ml-model') from the description; it wastes space that should be used for capability and trigger information.
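Putting the first two suggestions together, an improved frontmatter description might look like the sketch below. The wording is illustrative, not taken from the skill file:

```yaml
---
name: data-ml-model
description: >
  Trains, evaluates, and deploys machine learning models on structured
  datasets. Supports regression, classification, and clustering tasks.
  Use when the user asks to 'train a model' or mentions 'machine learning',
  'predict', 'ML pipeline', 'model accuracy', or 'feature engineering'.
---
```

Note that every sentence either names a concrete capability or supplies a natural trigger phrase; none of it is invocation metadata.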
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for data-ml-model' is entirely vague and abstract, providing no information about what the skill actually does. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. The description only states that it is an 'agent skill' and how to invoke it, which is metadata, not a functional description. | 1 / 3 |
| Trigger Term Quality | There are no natural keywords a user would say. 'data-ml-model' is a hyphenated internal identifier, not a natural-language term. Users would say things like 'train a model', 'machine learning', 'predict', etc. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so generic that 'data-ml-model' could overlap with any data processing, machine learning, or modeling skill. No distinct triggers or boundaries are defined. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
### Implementation — 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is dominated by an excessively large YAML frontmatter block full of configuration metadata that doesn't provide actionable guidance. The body content reads like a generic ML textbook outline, listing concepts Claude already knows without providing project-specific, concrete, or novel instructions. The single code example is partially useful but uses a placeholder class, and the workflow lacks validation checkpoints critical for ML pipelines.
**Suggestions**

- Remove or drastically reduce the YAML frontmatter to only essential metadata; move configuration details to a separate config file if needed.
- Replace generic ML concept lists ('Handle missing values', 'Feature scaling') with project-specific patterns, concrete code snippets, or links to detailed reference files.
- Add explicit validation checkpoints to the workflow (e.g., 'Verify no data leakage before training', 'Check that model performance exceeds the baseline threshold before proceeding to deployment').
- Split detailed content (evaluation metrics, deployment steps, preprocessing patterns) into separate referenced files to improve progressive disclosure.
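A validation checkpoint of the kind suggested above can be as small as a guard function between training and deployment. The function name, metric, and margin below are hypothetical, not taken from the skill file:

```python
# Hypothetical deployment gate: the pipeline proceeds to deployment only
# if the candidate model clearly beats the baseline. The 0.02 margin is
# an illustrative default, not a value from the reviewed skill.

def passes_deployment_gate(model_score: float,
                           baseline_score: float,
                           min_margin: float = 0.02) -> bool:
    """Return True only if the model beats the baseline by at least
    `min_margin`; otherwise the workflow should stop before deployment."""
    return model_score >= baseline_score + min_margin


if __name__ == "__main__":
    print(passes_deployment_gate(0.91, 0.85))  # clears the margin: True
    print(passes_deployment_gate(0.84, 0.85))  # below baseline: False
```

An explicit gate like this turns a silent regression into a hard stop, which is exactly the feedback loop the workflow-clarity critique below says is missing.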
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The vast majority of the file is YAML frontmatter with metadata that Claude already knows or that serves no actionable purpose (triggers, hooks, examples, integration configs). The body explains basic ML concepts (what EDA is, what feature scaling is) that Claude already knows well. The workflow is essentially a textbook table of contents. | 1 / 3 |
| Actionability | There is one concrete, executable code example showing a sklearn pipeline pattern. However, the rest of the content is high-level bullet points ('Handle missing values', 'Feature selection') without specific techniques, commands, or concrete guidance. The code example uses a placeholder 'ModelClass()' rather than a real class. | 2 / 3 |
| Workflow Clarity | The 5-step ML workflow is listed in a logical sequence, but there are no validation checkpoints, error-recovery steps, or feedback loops. For a complex multi-step process involving model training and deployment, the absence of explicit validation steps (e.g., checking data quality before training, validating model performance thresholds before deployment) is a significant gap. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic file with an enormous YAML frontmatter block followed by a flat body. There are no references to external files for detailed topics like deployment, evaluation metrics, or advanced techniques. Everything is crammed into one file without clear navigation or layered structure. | 1 / 3 |
| Total | | 6 / 12 (Passed) |
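On the actionability point: the review faults the skill's one code example for using a placeholder 'ModelClass()'. A concrete version of that sklearn pipeline pattern might look like the sketch below; the choice of LogisticRegression and the step names are assumptions, since the original example is not reproduced in this report:

```python
# Minimal sketch of the sklearn pipeline pattern with the 'ModelClass()'
# placeholder replaced by a concrete estimator (LogisticRegression here
# is illustrative; any estimator with fit/predict would slot in).
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression


def build_pipeline() -> Pipeline:
    """Bundle preprocessing and the model so the same transforms are
    applied at fit time and predict time (no train/serve skew)."""
    return Pipeline([
        ("impute", SimpleImputer(strategy="median")),   # handle missing values
        ("scale", StandardScaler()),                    # feature scaling
        ("model", LogisticRegression(max_iter=1000)),   # concrete estimator
    ])


if __name__ == "__main__":
    X = [[0.1, 2.0], [1.3, 0.4], [0.2, 1.9], [1.1, 0.5]]
    y = [0, 1, 0, 1]
    pipe = build_pipeline()
    pipe.fit(X, y)
    print(pipe.predict([[0.15, 1.95], [1.2, 0.45]]))
```

Naming a real estimator (rather than a placeholder) is what makes the example copy-paste runnable, which is the bar the actionability dimension is scoring against.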
### Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 checks passed. Validation for skill structure: no warnings or errors.
322b2ae