Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt engineering that scales, AI UX that users trust, and cost optimization that doesn't bankrupt you. Use when: keywords, file_patterns, code_patterns.
Overall score: 35

- Quality: 19% (does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Passed: no known issues
Optimize this skill with Tessl: `npx tessl skill review --optimize ./.agent/skills/ai-product/SKILL.md`

Quality
Discovery
17%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description suffers from a broken 'Use when' clause containing placeholder text instead of actual triggers, which severely undermines its utility for skill selection. The opening sentence is marketing fluff rather than functional description, and while it mentions relevant AI topics, it lacks concrete action verbs describing what the skill actually does.
Suggestions
- Replace the placeholder 'Use when: keywords, file_patterns, code_patterns' with actual trigger terms like 'Use when building LLM-powered features, implementing RAG systems, designing AI chatbots, or optimizing API costs'
- Remove the marketing opener and replace it with concrete actions: 'Implements LLM integration patterns, designs RAG pipelines, engineers production prompts, builds trustworthy AI interfaces'
- Add natural user keywords: 'OpenAI API', 'embeddings', 'vector database', 'token costs', 'hallucination', 'context window'
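Putting the three suggestions together, a repaired description block might look like the following. This is only a sketch: it assumes the skill's SKILL.md uses YAML frontmatter with `name` and `description` fields, which the report does not show directly.

```yaml
# Hypothetical SKILL.md frontmatter -- field names assumed, not confirmed by this report
---
name: ai-product
description: >
  Implements LLM integration patterns, designs RAG pipelines, engineers
  production prompts, and builds trustworthy AI interfaces. Use when building
  LLM-powered features, implementing RAG systems, designing AI chatbots, or
  optimizing API costs. Keywords: OpenAI API, embeddings, vector database,
  token costs, hallucination, context window.
---
```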
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (AI/LLM integration) and lists several areas (RAG architecture, prompt engineering, AI UX, cost optimization), but these are high-level categories rather than concrete actions. No specific verbs describe what the skill actually does. | 2 / 3 |
| Completeness | The 'what' is vague (covers topics but doesn't specify actions), and the 'when' is completely broken, with placeholder text instead of actual triggers. The description fails to answer either question adequately. | 1 / 3 |
| Trigger Term Quality | The 'Use when' clause contains placeholder text ('keywords, file_patterns, code_patterns') instead of actual trigger terms. Terms like 'LLM', 'RAG', and 'prompt engineering' appear in the body, but the trigger section is broken. | 1 / 3 |
| Distinctiveness / Conflict Risk | The AI/LLM domain is somewhat specific, and terms like 'RAG architecture' and 'prompt engineering' provide some distinctiveness, but the broad scope ('every product will be AI-powered') and broken triggers create overlap risk. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Implementation
22%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is essentially a table of contents with a verbose persona preamble. It lacks any actionable content, concrete guidance, or workflow structure in the main file. The sub-skill organization shows promise but the main skill provides no value on its own - it should either include quick-start content or explain how the modules relate.
Suggestions
- Add a quick-start section with at least one concrete, executable code example (e.g., a basic structured-output pattern)
- Include a brief decision tree or workflow explaining when to use each sub-skill module
- Remove or drastically shorten the persona description; focus on actionable guidance instead
- Add a 'Common patterns' section with copy-paste-ready code snippets for the most frequent use cases
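As an illustration of the first suggestion, here is one minimal sketch of a structured-output pattern: ask the model for JSON, then validate the reply before trusting it downstream. The model call is stubbed with a literal string, and the names `Sentiment` and `parse_llm_reply` are hypothetical, not taken from the reviewed skill.

```python
import json
from dataclasses import dataclass

@dataclass
class Sentiment:
    label: str         # one of: positive, negative, neutral
    confidence: float  # 0.0 to 1.0

ALLOWED_LABELS = {"positive", "negative", "neutral"}

def parse_llm_reply(raw: str) -> Sentiment:
    """Validate an LLM's JSON reply before using it downstream."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    result = Sentiment(label=str(data["label"]),
                       confidence=float(data["confidence"]))
    if result.label not in ALLOWED_LABELS:
        raise ValueError(f"unexpected label: {result.label!r}")
    if not 0.0 <= result.confidence <= 1.0:
        raise ValueError(f"confidence out of range: {result.confidence}")
    return result

# Stubbed reply; in production this string would come from the LLM API.
reply = '{"label": "positive", "confidence": 0.92}'
print(parse_llm_reply(reply))
```

The point of the pattern is that the schema lives in your code, not in the model: malformed or out-of-range replies fail loudly at the boundary instead of corrupting downstream state.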
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The persona description is somewhat verbose and could be trimmed; Claude doesn't need to be told about debugging at 3am or shipping to millions. However, the overall structure is lean, with just links to sub-skills. | 2 / 3 |
| Actionability | No concrete code, commands, or executable guidance is provided. The skill is entirely abstract, consisting only of a persona description and links to other files, with no actionable content in the main skill itself. | 1 / 3 |
| Workflow Clarity | No workflow, sequence, or process is defined. The skill provides no guidance on when to use which sub-skill, how the sub-skills relate to each other, or what order to approach AI product development tasks in. | 1 / 3 |
| Progressive Disclosure | References to sub-skills are one level deep and clearly organized with good/bad pattern indicators (✓/❌). However, there is no overview content explaining when to use each module or how they fit together; it is just a list of links. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |
| Total | 10 / 11 Passed | |
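To clear the `frontmatter_unknown_keys` warning, non-standard keys can be nested under a `metadata` block instead of sitting at the top level. A sketch, assuming `keywords` is the offending key (the report does not name which key triggered the warning):

```yaml
# Before: unknown top-level key (hypothetical example)
# keywords: [llm, rag]

# After: moved under metadata, which the validator treats as free-form
metadata:
  keywords: [llm, rag]
```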