
building-classification-models

Build and evaluate classification models for supervised learning tasks with labeled data. Use when requesting "build a classifier", "create classification model", or "train classifier". Trigger with relevant phrases based on skill purpose.

45

Quality

33%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/ai-ml/classification-model-builder/skills/building-classification-models/SKILL.md`

Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description covers the basics with an explicit 'Use when' clause and identifies the classification domain, but it lacks depth in specific capabilities and natural trigger term coverage. The final sentence ('Trigger with relevant phrases based on skill purpose') is meaningless filler that adds no discriminative value and weakens the overall quality.

Suggestions

List more specific concrete actions such as 'perform feature selection, tune hyperparameters, generate confusion matrices, compute accuracy/precision/recall metrics, support algorithms like logistic regression, random forest, and SVM'.

Expand trigger terms with natural user phrases like 'predict categories', 'classify data', 'categorize items', 'label prediction', 'binary classification', 'multi-class classification'.

Remove the vague filler sentence 'Trigger with relevant phrases based on skill purpose' and replace it with actual distinguishing details that separate this skill from general ML or regression skills.
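Taken together, these suggestions point toward a frontmatter description along these lines (a sketch only; the capability and algorithm lists are illustrative, not taken from the skill itself):

```yaml
---
name: building-classification-models
description: >
  Build and evaluate classification models for supervised learning with
  labeled data: train/test splitting, feature selection, hyperparameter
  tuning, and evaluation via accuracy, precision, recall, and confusion
  matrices, using algorithms such as logistic regression, random forest,
  and SVM. Use when asked to "build a classifier", "train a classifier",
  "classify data", "categorize items", "predict categories", or for
  binary or multi-class classification. Not for regression or
  unsupervised tasks.
---
```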

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (classification models, supervised learning) and a general action (build and evaluate), but does not list multiple specific concrete actions like feature engineering, hyperparameter tuning, cross-validation, confusion matrix generation, etc. | 2 / 3 |
| Completeness | Explicitly answers both 'what' (build and evaluate classification models for supervised learning tasks with labeled data) and 'when' (with a 'Use when' clause listing specific trigger phrases). Both components are present and explicit. | 3 / 3 |
| Trigger Term Quality | Includes some relevant trigger phrases like 'build a classifier', 'create classification model', 'train classifier', but misses many natural variations users might say such as 'predict categories', 'label prediction', 'logistic regression', 'random forest', 'classify data', 'categorize'. The final sentence 'Trigger with relevant phrases based on skill purpose' is vague filler that adds no value. | 2 / 3 |
| Distinctiveness / Conflict Risk | Somewhat specific to classification but could overlap with general ML/data science skills, regression modeling skills, or broader 'build a model' requests. The distinction between classification and other supervised learning tasks (e.g., regression) is mentioned but not strongly reinforced with unique triggers. | 2 / 3 |
| Total | | 9 / 12 |

Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a template-like placeholder with no substantive content. It contains no executable code, no specific library recommendations, no concrete workflows, and no actionable guidance for building classification models. Nearly every section consists of generic filler text that could apply to any skill, and it repeatedly references a nonexistent 'classification-model-builder plugin'.

Suggestions

Replace the abstract descriptions with concrete, executable Python code examples showing a complete classification pipeline (e.g., using scikit-learn with train/test split, model fitting, and evaluation metrics).

Remove all sections that explain concepts Claude already knows (what classification is, what data quality means, generic error handling) and focus only on project-specific conventions or non-obvious implementation details.

Add a concrete workflow with validation checkpoints, e.g., data validation step, cross-validation results check, and a specific sequence of commands/code blocks to follow.

Either remove references to the nonexistent 'classification-model-builder plugin' or provide actual integration code and bundle files that support it.
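The first suggestion can be made concrete. Assuming scikit-learn is available, a minimal end-to-end pipeline of the kind the review asks for might look like the following (the dataset and algorithm choices are illustrative, not prescribed by the skill):

```python
# Minimal classification pipeline: split, fit, evaluate.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

# Load a labeled dataset (features X, class labels y).
X, y = load_iris(return_X_y=True)

# Hold out 20% for evaluation; stratify to preserve class balance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Fit a baseline model.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out split.
y_pred = model.predict(X_test)
print(f"Accuracy: {accuracy_score(y_test, y_pred):.3f}")
print(classification_report(y_test, y_pred))
```

A skill that embedded a block like this, plus project-specific conventions around it, would address the actionability and workflow-clarity findings below.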

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose with extensive padding. Explains concepts Claude already knows (what classification is, what data quality means, what hyperparameter tuning is). Sections like 'How It Works', 'When to Use This Skill', 'Overview', 'Best Practices', 'Integration', 'Prerequisites', 'Instructions', 'Output', 'Error Handling', and 'Resources' are all filler with no actionable content. References a nonexistent 'classification-model-builder plugin' repeatedly. | 1 / 3 |
| Actionability | Contains zero executable code, no concrete commands, no specific library recommendations, no actual implementation guidance. The examples describe what 'the skill will' do in abstract terms rather than providing any concrete code or steps. Instructions like 'Invoke this skill when the trigger conditions are met' and 'Provide necessary context and parameters' are completely vague. | 1 / 3 |
| Workflow Clarity | No clear, actionable workflow is defined. The 'How It Works' section describes an abstract 3-step process with no specifics. The 'Instructions' section is a generic 4-step placeholder that could apply to literally anything. No validation checkpoints, no error recovery loops, no concrete sequencing of actual ML pipeline steps. | 1 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files and no bundle files provided. Content is poorly organized with many redundant sections (Overview, How It Works, When to Use, Examples, Best Practices, Integration, Prerequisites, Instructions, Output, Error Handling, Resources) that mostly contain filler. The 'Resources' section lists 'Project documentation' and 'Related skills and commands' with no actual links. | 1 / 3 |
| Total | | 4 / 12 |

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 |

Passed
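Both warnings typically clear with small frontmatter changes. A hedged sketch (the actual offending keys are not shown on this page, so the field values and the `category` key below are hypothetical):

```yaml
---
name: building-classification-models
description: ...
allowed-tools: Read, Write, Bash  # use only standard tool names
metadata:
  category: ai-ml  # move unrecognized top-level keys under metadata
---
```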

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

