Analyze datasets by running clustering algorithms (K-means, DBSCAN, hierarchical) to identify data groups. Use when requesting "run clustering", "cluster analysis", or "group data points". Trigger with relevant phrases based on skill purpose.
Install with Tessl CLI
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill running-clustering-algorithms51
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Validation for skill structure
Discovery — 85%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a reasonably strong skill description that clearly identifies its purpose (clustering analysis) with specific algorithm names and explicit trigger guidance. The main weakness is the final sentence 'Trigger with relevant phrases based on skill purpose' which is meaningless filler that adds no value and could be replaced with additional natural trigger terms.
Suggestions
Remove the vague final sentence 'Trigger with relevant phrases based on skill purpose' and replace with additional natural trigger terms like 'segmentation', 'find groups', 'unsupervised learning', or 'pattern discovery'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'running clustering algorithms' with explicit algorithm types (K-means, DBSCAN, hierarchical) and the purpose 'identify data groups'. | 3 / 3 |
| Completeness | Clearly answers both what (analyze datasets with clustering algorithms to identify groups) and when (explicit 'Use when' clause with trigger phrases). The structure follows the expected pattern despite the weak final sentence. | 3 / 3 |
| Trigger Term Quality | Includes some natural keywords like 'run clustering', 'cluster analysis', and 'group data points', but the final sentence 'Trigger with relevant phrases based on skill purpose' is vague filler that doesn't add actual trigger terms. Missing common variations like 'segmentation', 'find patterns', or 'unsupervised learning'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on clustering algorithms with distinct triggers. Unlikely to conflict with other data analysis skills due to the specific algorithm names and clustering-focused terminology. | 3 / 3 |
| Total | 11 / 12 | Passed |
Implementation — 12%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content is a template-style document that describes what clustering is and what the skill would do, rather than providing actionable instructions Claude can execute. It lacks any executable code examples, specific library usage patterns, or concrete parameter recommendations. The content is padded with generic sections that provide no unique value.
Suggestions
Replace the abstract 'How It Works' and 'Examples' sections with actual executable Python code snippets showing K-means, DBSCAN, and hierarchical clustering implementations using scikit-learn
Remove generic filler sections (Prerequisites, Instructions, Output, Error Handling, Resources) that contain no specific information
Add concrete parameter guidance with specific values (e.g., 'For customer segmentation with ~1000 records, start with n_clusters=5 and use elbow method to optimize')
Include validation steps showing how to evaluate clustering quality with actual code for silhouette score calculation and visualization
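As an illustration of the kind of executable content the suggestions above call for, here is a minimal sketch of the three named algorithms using scikit-learn. The synthetic dataset, parameter values, and variable names are illustrative assumptions, not taken from the skill itself:

```python
from sklearn.cluster import KMeans, DBSCAN, AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler

# Synthetic blobs stand in for the user's dataset; scaling first
# matters because DBSCAN's eps is distance-based.
X, _ = make_blobs(n_samples=300, centers=4, random_state=42)
X = StandardScaler().fit_transform(X)

# K-means: requires choosing n_clusters up front.
kmeans_labels = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X)

# DBSCAN: density-based, infers the number of clusters; -1 marks noise points.
dbscan_labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)

# Hierarchical (agglomerative) clustering with Ward linkage.
hier_labels = AgglomerativeClustering(n_clusters=4, linkage="ward").fit_predict(X)
```

Snippets at this level of concreteness would give an agent something to adapt directly rather than a description of what the skill "will do".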
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with unnecessary explanations of concepts Claude already knows (what clustering is, how ML libraries work). Sections like 'How It Works', 'Integration', 'Prerequisites', 'Instructions', 'Output', 'Error Handling', and 'Resources' are generic filler that add no actionable value. | 1 / 3 |
| Actionability | No executable code provided despite being a code-generation skill. Examples describe what 'the skill will do' abstractly rather than showing actual Python code, commands, or copy-paste-ready snippets. The 'Best Practices' section gives vague advice without concrete implementation. | 1 / 3 |
| Workflow Clarity | The 'How It Works' section lists steps in sequence, but lacks validation checkpoints, error recovery steps, or concrete verification methods. No feedback loops for when clustering fails or produces poor results. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. Generic sections ('Resources', 'Integration') mention concepts without linking to actual documentation. Content that could be split out (algorithm-specific guides, code templates) is either missing or vaguely described inline. | 1 / 3 |
| Total | 5 / 12 | Passed |
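The missing validation checkpoint flagged under Workflow Clarity could look like the following sketch: sweep candidate cluster counts, score each with the silhouette coefficient, and flag weak structure instead of silently continuing. The k range and the 0.25 cutoff are illustrative assumptions:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)

# Score each candidate k; higher silhouette means better-separated clusters.
scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)

# Checkpoint: surface poor results rather than proceeding blindly.
if scores[best_k] < 0.25:
    print("Warning: weak cluster structure; revisit features or algorithm choice.")
```

A feedback loop of this shape gives the agent a concrete way to verify its output and recover when clustering produces poor results.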
Validation — 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 13 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
metadata_version | 'metadata' field is not a dictionary | Warning |
frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 13 / 16 | Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.