tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill building-classification-models

Build and evaluate classification models for supervised learning tasks with labeled data. Use when requesting "build a classifier", "create classification model", or "train classifier". Trigger with relevant phrases based on skill purpose.
Validation
81%

| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 13 / 16 Passed |
Implementation
7%

This skill's content is almost entirely abstract description with no actionable guidance. It explains what a classification model builder would do conceptually but provides no executable code, no specific library recommendations, no concrete workflows, and no implementation details Claude could actually use to build a classifier.
Suggestions
- Replace abstract descriptions with executable Python code examples using specific libraries (e.g., scikit-learn) showing actual model training, evaluation, and prediction workflows; a sketch of such a workflow follows this list.
- Add concrete validation steps with specific commands/code for checking data quality, model performance thresholds, and error handling (see the second sketch below).
- Remove sections that explain concepts Claude already knows (Overview, 'How It Works' descriptions) and replace them with copy-paste-ready code snippets.
- Provide specific metric thresholds, algorithm selection criteria, and hyperparameter tuning examples rather than generic best practices.
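
As a concrete illustration of the first suggestion, here is a minimal sketch of the kind of train/evaluate/predict workflow the skill could ship. It assumes scikit-learn and a pandas DataFrame loaded from a hypothetical data.csv with a "label" column; these names are placeholders, not taken from the skill under review.

```python
# Minimal sketch of a train/evaluate/predict workflow with scikit-learn.
# "data.csv" and the "label" column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

df = pd.read_csv("data.csv")
X = df.drop(columns=["label"])
y = df["label"]

# Stratified split preserves class proportions; random_state makes runs reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out split with a confusion matrix and per-class metrics.
y_pred = model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```

Even a snippet this short gives Claude a concrete starting point: a named library, a named estimator, and a named evaluation protocol.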
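For the validation-checkpoint and tuning suggestions, a second sketch, continuing from the variables above; the 0.80 accuracy gate and the parameter grid are illustrative placeholders, not thresholds the skill actually specifies.

```python
# Continues the sketch above (reuses X, y, X_train, y_train, model).
# The 0.80 accuracy gate and the parameter grid are illustrative placeholders.
from sklearn.model_selection import GridSearchCV, cross_val_score

# Data-quality checkpoints: fail fast instead of training on bad data.
assert not X.isna().any().any(), "features contain missing values; impute or drop first"
assert y.nunique() >= 2, "classification needs at least two classes"

# Cross-validated baseline gated by an explicit performance threshold.
scores = cross_val_score(model, X_train, y_train, cv=5, scoring="accuracy")
if scores.mean() < 0.80:
    raise ValueError(f"mean CV accuracy {scores.mean():.3f} is below the 0.80 threshold")

# Hyperparameter tuning over a small, illustrative grid.
grid = GridSearchCV(
    model,
    param_grid={"n_estimators": [100, 200, 400], "max_depth": [None, 10, 20]},
    cv=5,
    scoring="f1_macro",
)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)
```

The explicit threshold check is the kind of feedback loop the Workflow Clarity critique below asks for: a failed gate produces an actionable error rather than silently proceeding.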
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive padding explaining concepts Claude already knows (what classification is, how models work). Sections like 'How It Works', 'Overview', and generic 'Instructions' add no actionable value and waste tokens. | 1 / 3 |
| Actionability | No executable code, no concrete commands, no specific algorithms or libraries mentioned. Examples describe what 'the skill will do' abstractly rather than providing actual implementation guidance Claude can follow. | 1 / 3 |
| Workflow Clarity | Steps are vague abstractions ('analyze', 'generate code') with no concrete sequence. No validation checkpoints, no specific commands, and no feedback loops for error recovery in what should be a multi-step ML workflow. | 1 / 3 |
| Progressive Disclosure | Section headers provide some structure, but everything is inline in one monolithic file. References to the 'classification-model-builder plugin' and 'project documentation' are vague, with no actual links or file paths. | 2 / 3 |
| Total | | 5 / 12 |
Activation
67%

The description adequately covers the what and the when, earning good marks for completeness. However, it lacks specific concrete actions beyond 'build and evaluate', and the trigger terms are limited, ending in a meaningless filler phrase. The description would benefit from more specific capabilities and natural user-language variations.
Suggestions
- Replace the vague 'Trigger with relevant phrases based on skill purpose' with actual trigger terms like 'predict categories', 'binary classification', 'multi-class prediction', or specific algorithm names.
- Add specific concrete actions such as 'train decision trees, evaluate with confusion matrices, perform hyperparameter tuning, compare model accuracy'.
- Include common user phrases and file types like 'categorize data', 'predict labels', 'sklearn classifier', or 'classification accuracy'; one possible rewrite of the description follows this list.
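
Drawing the three suggestions together, one possible rewrite of the skill's frontmatter description is sketched below. The wording is illustrative only, not the maintainer's actual fix.

```yaml
# Illustrative rewrite only; phrasing is not from the skill's maintainer.
description: >-
  Build and evaluate classification models for supervised learning on labeled
  data: train scikit-learn classifiers (logistic regression, random forest,
  gradient boosting), evaluate with confusion matrices and cross-validation,
  and tune hyperparameters. Use when the user asks to "build a classifier",
  "train a classification model", "predict categories", "predict labels", or
  mentions "binary classification" or "multi-class prediction".
```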
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (classification models, supervised learning) and the general action (build and evaluate), but lacks specific concrete actions like 'train decision trees, evaluate with confusion matrices, perform cross-validation'. | 2 / 3 |
| Completeness | Explicitly answers both what (build and evaluate classification models for supervised learning) and when (use when requesting specific phrases). Has a clear 'Use when...' clause with trigger examples. | 3 / 3 |
| Trigger Term Quality | Includes some relevant keywords ('build a classifier', 'create classification model', 'train classifier'), but the final sentence, 'Trigger with relevant phrases based on skill purpose', is vague filler that adds no value. Missing common variations like 'predict categories', 'label data', 'random forest', 'logistic regression'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Somewhat specific to classification but could overlap with general ML, regression modeling, or data science skills. The 'supervised learning' qualifier helps, but 'labeled data' is broad. | 2 / 3 |
| Total | | 9 / 12 |