Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt engineering that scales, AI UX that users trust, and cost optimization that doesn't bankrupt you. Use when: keywords, file_patterns, code_patterns.
Impact: Pending (no eval scenarios have been run)
Status: Passed (no known issues)
To optimize this skill with Tessl, run:

`npx tessl skill review --optimize ./.agent/skills/ai-product/SKILL.md`

Quality
Discovery: 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description has significant structural problems: the 'Use when' clause contains placeholder text instead of actual triggers, making it non-functional for skill selection. The opening sentence is marketing fluff rather than an actionable description, and the capabilities listed are topic areas rather than concrete actions Claude can perform.
Suggestions

- Replace the broken 'Use when: keywords, file_patterns, code_patterns' with actual trigger terms, e.g. 'Use when building AI features, integrating LLMs, implementing RAG, designing AI chatbots, or optimizing API costs'.
- Remove the marketing opener ('Every product will be AI-powered...') and replace it with concrete actions, e.g. 'Designs LLM integration architectures, implements RAG pipelines, writes production-ready prompts'.
- Add specific file patterns or code patterns users might mention, such as 'OpenAI API', 'vector databases', 'embedding models', 'langchain', '.prompt files'.
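Applying the first two suggestions, the skill's frontmatter might look like the sketch below. The capability phrasing is assembled from the suggestions above and is illustrative, not a drop-in fix:

```yaml
name: ai-product
description: >
  Designs LLM integration architectures, implements RAG pipelines, and
  writes production-ready prompts. Use when building AI features,
  integrating LLMs, implementing RAG, designing AI chatbots, or
  optimizing API costs. Trigger terms: OpenAI API, vector databases,
  embedding models, langchain, .prompt files.
```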
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (AI/LLM integration) and lists several areas (RAG architecture, prompt engineering, AI UX, cost optimization), but these are high-level categories rather than concrete actions. No specific verbs describe what the skill actually does. | 2 / 3 |
| Completeness | The 'what' is vague (topics rather than actions), and the 'when' is completely broken: the placeholder text 'Use when: keywords, file_patterns, code_patterns' is non-functional guidance. | 1 / 3 |
| Trigger Term Quality | The 'Use when' clause literally contains placeholder text ('keywords, file_patterns, code_patterns') instead of actual trigger terms. Terms like 'LLM', 'RAG', and 'prompt engineering' appear in the body, but the trigger section is broken. | 1 / 3 |
| Distinctiveness / Conflict Risk | The AI/LLM focus provides some distinctiveness, but terms like 'cost optimization' and 'UX' are generic enough to overlap with other skills. The broken trigger section makes conflict assessment difficult. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Implementation: 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill functions as a bare index page rather than a functional skill. It has a verbose persona section but provides zero actionable content: no code examples, no workflow guidance, and no context for when to use each sub-skill. The progressive disclosure structure is present but undermined by the complete absence of overview content.
Suggestions

- Add a quick-start section with at least one concrete, executable code example demonstrating a core AI product pattern (e.g., structured output with Pydantic).
- Include a brief one-sentence description for each sub-skill link so readers know what they'll find without clicking through.
- Add a decision tree or workflow showing when to apply each pattern (e.g., 'For user-facing features → streaming; for data extraction → structured output').
- Remove or significantly trim the persona paragraph; the principles can be stated in 1-2 lines.
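The structured-output suggestion above can be sketched without any SDK: treat the model's reply as untrusted JSON and validate it into a typed object before use. This is a minimal, dependency-free sketch of the pattern (the review suggests Pydantic; a stdlib dataclass stands in here), and `parse_reply`, `Extraction`, and its field names are hypothetical:

```python
import json
from dataclasses import dataclass, fields


@dataclass
class Extraction:
    # Hypothetical schema for a data-extraction feature.
    name: str
    sentiment: str


def parse_reply(raw: str) -> Extraction:
    """Validate an LLM's JSON reply into a typed object, rejecting drift."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    allowed = {f.name for f in fields(Extraction)}
    unknown = set(data) - allowed
    if unknown:
        raise ValueError(f"unexpected keys from model: {unknown}")
    return Extraction(**data)


# A well-formed reply parses; a drifting one fails loudly, not silently.
ok = parse_reply('{"name": "Acme", "sentiment": "positive"}')
```

The point of the pattern is the loud failure: a schema mismatch surfaces at the integration boundary instead of propagating bad data into the product.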
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The persona description is somewhat verbose and could be trimmed; Claude doesn't need to be told about debugging at 3am or shipping to millions. However, the overall structure is lean, with just links to sub-skills. | 2 / 3 |
| Actionability | The skill provides no concrete code, commands, or executable guidance; it is entirely a table of contents pointing to other files, with no actionable content in the main skill itself. | 1 / 3 |
| Workflow Clarity | No workflow, sequence, or process is described. The skill is purely organizational, with no guidance on when to use which sub-skill or how they relate to each other in a development workflow. | 1 / 3 |
| Progressive Disclosure | The skill does use one-level-deep references to sub-skills appropriately, but lacks any quick-start content or overview of what each sub-skill covers; it is just a list of links without context or navigation guidance. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 10 / 11 checks passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
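The single warning above flags keys in the SKILL.md frontmatter that the spec doesn't recognize. A check of that shape can be sketched in a few lines; the allowed-key set and the minimal `key: value` parser here are assumptions for illustration, not Tessl's actual implementation:

```python
# Sketch: flag frontmatter keys outside an assumed allow-list.
ALLOWED_KEYS = {"name", "description", "metadata"}  # assumed, not the real spec


def unknown_frontmatter_keys(skill_md: str) -> set[str]:
    """Return top-level frontmatter keys that are not in the allow-list."""
    lines = skill_md.splitlines()
    if not lines or lines[0].strip() != "---":
        return set()  # no frontmatter block at all
    keys = set()
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the block
            break
        if ":" in line and not line.startswith((" ", "\t")):
            keys.add(line.split(":", 1)[0].strip())
    return keys - ALLOWED_KEYS


doc = "---\nname: ai-product\nauthor: someone\n---\n# AI Product\n"
bad = unknown_frontmatter_keys(doc)  # → {'author'}
```

Moving such a key under a recognized container (e.g. a `metadata` map, as the warning suggests) would clear the check under these assumptions.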