AI and machine learning workflow covering LLM application development, RAG implementation, agent architecture, ML pipelines, and AI-powered features.
Score: 57

Quality: 37% (Does it follow best practices?)
Impact: 94% (1.08x average score across 3 eval scenarios). Passed, no known issues.

Optimize this skill with Tessl:
npx tessl skill review --optimize ./skills/ai-ml/SKILL.md

Quality
Discovery
32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear domain (AI/ML) and lists relevant sub-areas, but lacks concrete action verbs and explicit trigger guidance. It reads more like a category label than an actionable skill description, making it difficult for Claude to know precisely when to select this skill over others.
Suggestions
- Add a 'Use when...' clause with explicit triggers, e.g. 'Use when building chatbots, implementing semantic search, creating AI agents, or integrating LLM APIs'
- Replace category labels with concrete actions: instead of 'RAG implementation', use 'Build retrieval-augmented generation systems with vector databases and embeddings'
- Include common user terms and variations: 'chatbot', 'embeddings', 'vector search', 'prompt engineering', 'OpenAI', 'Claude API', 'fine-tuning'
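Taken together, these suggestions imply a description along the following lines. This is an illustrative sketch only, using the common SKILL.md frontmatter convention; it is not the skill's actual frontmatter:

```yaml
---
name: ai-ml
description: >
  Build AI and ML features: integrate LLM APIs (OpenAI, Claude), build
  retrieval-augmented generation systems with vector databases and embeddings,
  design agent architectures and tool use, and set up ML pipelines.
  Use when building chatbots, implementing semantic search, creating AI
  agents, doing prompt engineering or fine-tuning, or adding AI-powered
  features to an application.
---
```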
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (AI/ML) and lists several areas (LLM development, RAG, agent architecture, ML pipelines, AI features), but these are high-level categories rather than concrete actions. No specific verbs describe what actions are performed. | 2 / 3 |
| Completeness | Describes the 'what' at a high level (AI/ML workflow covering various areas) but lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Includes relevant technical terms like 'LLM', 'RAG', 'agent architecture', and 'ML pipelines' that users might mention, but is missing common variations like 'chatbot', 'embeddings', 'vector database', 'prompt engineering', or 'fine-tuning'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The AI/ML focus provides some distinction, but 'AI-powered features' and 'ML pipelines' are broad enough to potentially overlap with general coding skills or data processing skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill functions as a workflow orchestrator/index rather than a standalone skill, which explains its abstract nature. While it provides good structure and navigation to other skills, it lacks any concrete, executable guidance of its own. The document would benefit from at least one concrete example per phase showing actual implementation rather than just delegation.
Suggestions
- Add at least one concrete code example per major phase (e.g., an actual LLM API call, vector database query, or agent tool definition) to make the skill actionable on its own
- Include validation checkpoints within phases, e.g. 'Verify RAG retrieval accuracy exceeds 80% before proceeding to Phase 4'
- Replace vague actions like 'Choose appropriate models' with decision criteria or a quick-reference table (e.g. 'GPT-4 for complex reasoning, Claude for long context, Gemini for multimodal')
- Consolidate the repetitive phase structure; consider a table format for Skills/Actions to reduce token usage while preserving information
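As an illustration of the first suggestion, the RAG phase could carry a minimal, self-contained retrieval sketch like the one below. This is a toy example with hand-made 3-d "embeddings" standing in for model output; a real implementation would call an embedding model and a vector database, but the ranking logic is the same:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, documents, top_k=2):
    """Rank (doc_id, embedding) pairs by similarity to the query vector."""
    scored = [(doc_id, cosine_similarity(query_vec, emb))
              for doc_id, emb in documents]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy corpus; in practice these vectors come from an embedding model.
docs = [
    ("rag-intro", [0.9, 0.1, 0.0]),
    ("agents",    [0.1, 0.9, 0.2]),
    ("pipelines", [0.0, 0.2, 0.9]),
]
results = retrieve([1.0, 0.0, 0.1], docs, top_k=2)
print(results[0][0])  # prints "rag-intro", the closest document
```

A concrete, runnable snippet like this is what turns a phase from a delegation stub into something an agent can execute and verify.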
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is moderately efficient but repeats the same structure across phases (the Skills to Invoke / Actions / Copy-Paste Prompts pattern appears 7 times). The checklists add value, but the document could be tightened significantly. | 2 / 3 |
| Actionability | The content is almost entirely abstract references to other skills, with vague action items like 'Define AI use cases' and 'Choose appropriate models'. No concrete code, commands, or executable examples are provided, just delegation prompts. | 1 / 3 |
| Workflow Clarity | Phases are clearly sequenced and the checklist provides validation points, but there are no feedback loops, error recovery steps, or explicit validation checkpoints within the phases themselves. The 'Actions' are high-level descriptions without verification steps. | 2 / 3 |
| Progressive Disclosure | The document is well structured as an orchestration overview, with clear references to specific skills via @ mentions. Content is appropriately split into phases with clear navigation, and references are one level deep to specific skill files. | 3 / 3 |
| Total | | 8 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
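The single warning is typically resolved by moving non-spec keys under a `metadata` block. A sketch of the fix, assuming the flagged key were something like `category` (the report does not name the actual key):

```yaml
# Before: 'category' is not a recognized top-level frontmatter key
name: ai-ml
category: workflows

# After: unknown keys live under 'metadata'
name: ai-ml
metadata:
  category: workflows
```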