`tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill engineering-features-for-machine-learning`

> Execute create, select, and transform features to improve machine learning model performance. Handles feature scaling, encoding, and importance analysis. Use when asked to "engineer features" or "select features". Trigger with relevant phrases based on skill purpose.
## Validation

Score: 81%

| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 13 / 16 Passed | |
## Implementation

Score: 13%

This skill content is largely boilerplate with minimal actionable guidance. It describes what feature engineering is and what the skill conceptually does, but provides no executable code, specific commands, or concrete implementation details. The content is padded with generic sections that add no value and could be removed entirely.
### Suggestions

- Replace abstract descriptions with executable Python code examples showing actual feature-engineering-toolkit usage (e.g., specific function calls, import statements, complete working snippets)
- Remove generic boilerplate sections (Prerequisites, Instructions, Output, Error Handling, Resources) that contain only placeholder text
- Add concrete validation steps showing how to verify feature engineering results (e.g., checking feature distributions, correlation analysis commands)
- Consolidate the duplicated overview content and remove explanations of basic ML concepts Claude already understands
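To make the first and third suggestions concrete, here is a minimal sketch of the kind of executable snippet the skill could include — scaling, one-hot encoding, and a validation check in one pass. It assumes pandas and numpy are available; the DataFrame and column names are illustrative, not taken from the skill itself.

```python
import numpy as np
import pandas as pd

# Toy dataset with numeric and categorical columns (hypothetical example data).
df = pd.DataFrame({
    "income": [30_000.0, 55_000.0, 72_000.0, 41_000.0],
    "age": [22, 35, 58, 29],
    "city": ["NYC", "SF", "NYC", "LA"],
})

# Scaling: standardize numeric columns to zero mean, unit variance.
numeric = ["income", "age"]
scaled = (df[numeric] - df[numeric].mean()) / df[numeric].std(ddof=0)

# Encoding: one-hot encode the categorical column.
encoded = pd.get_dummies(df["city"], prefix="city", dtype=float)

features = pd.concat([scaled, encoded], axis=1)

# Validation: confirm the scaling worked, then inspect feature correlations.
assert np.allclose(features[numeric].mean(), 0.0)
assert np.allclose(features[numeric].std(ddof=0), 1.0)
print(features.corr().round(2))
```

A snippet like this, paired with the assertion-style checks, would turn the skill's abstract "will scale and encode features" claims into guidance an agent can actually execute and verify.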
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with redundant explanations (overview repeated twice), generic boilerplate sections (Prerequisites, Instructions, Output, Error Handling, Resources) that add no value, and explains concepts Claude already knows like what feature engineering is. | 1 / 3 |
| Actionability | No executable code provided despite claiming to generate Python code. Examples describe what the skill 'will do' abstractly rather than showing actual code, commands, or concrete implementation details. | 1 / 3 |
| Workflow Clarity | The 'How It Works' section provides a basic 4-step sequence, but lacks validation checkpoints, error recovery steps, or concrete verification methods. The workflow is conceptual rather than actionable. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. Generic placeholder sections (Resources, Prerequisites) provide no actual links or navigation. Content that could be split (examples, best practices) is all inline without clear organization. | 1 / 3 |
| Total | | 5 / 12 Passed |
## Activation

Score: 67%

The description adequately covers the what and when requirements with explicit trigger guidance, earning good marks for completeness. However, it suffers from some vague language ('Trigger with relevant phrases based on skill purpose' is meaningless filler) and could benefit from more specific concrete actions and natural trigger term variations that users would actually say.
### Suggestions

- Remove the vague filler phrase 'Trigger with relevant phrases based on skill purpose' and replace with specific trigger terms like 'normalize data', 'one-hot encoding', 'feature extraction', 'preprocessing for ML'.
- Add more specific concrete actions such as 'create polynomial features', 'handle missing values', 'generate interaction terms', 'perform PCA' to improve specificity.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (feature engineering/ML) and lists some actions (create, select, transform, scaling, encoding, importance analysis), but the actions are somewhat generic ML terms rather than highly specific concrete operations. | 2 / 3 |
| Completeness | Clearly answers both what (create, select, transform features, scaling, encoding, importance analysis) and when (explicit 'Use when asked to...' clause with trigger phrases), meeting the requirement for explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes some natural keywords like 'engineer features' and 'select features', but the final sentence 'Trigger with relevant phrases based on skill purpose' is vague filler that adds no value. Missing common variations like 'feature extraction', 'preprocessing', 'one-hot encoding', 'normalize data'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Reasonably specific to feature engineering within ML, but could overlap with general data preprocessing or ML model training skills. Terms like 'scaling' and 'encoding' are used in broader data processing contexts. | 2 / 3 |
| Total | | 9 / 12 Passed |
Reviewed