Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt ...
Install with Tessl CLI
npx tessl i github:sickn33/antigravity-awesome-skills --skill ai-product48
Quality: 27%
Does it follow best practices?

Impact: 79%
1.51× average score across 3 eval scenarios
Optimize this skill with Tessl
npx tessl skill review --optimize ./skills/ai-product/SKILL.md

Discovery
32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description leads with marketing rhetoric ('The question is whether you'll build it right or ship a demo that falls apart') rather than functional capability statements. While it mentions relevant technical domains (LLM integration, RAG, prompts), it lacks explicit trigger guidance and appears truncated. The tone is inappropriate for a skill description that should help Claude select the right tool.
Suggestions
Remove the marketing/opinion opening and start with concrete actions in third person (e.g., 'Implements LLM integration patterns, designs RAG pipelines, engineers prompts for production systems')
Add an explicit 'Use when...' clause with natural trigger terms like 'building AI features', 'adding LLM to app', 'vector search', 'embeddings', 'chatbot integration'
Complete the truncated description to fully enumerate the specific capabilities covered by this skill
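Putting those suggestions together, a rewritten description might look like the following frontmatter sketch. All wording here is illustrative, not the skill's actual content:

```yaml
# Hypothetical SKILL.md frontmatter -- example wording only
name: ai-product
description: >
  Implements LLM integration patterns, designs RAG pipelines, and engineers
  prompts for production systems. Covers structured output validation,
  streaming responses, prompt versioning, and vector search. Use when
  building AI features, adding an LLM to an app, setting up embeddings or
  a vector database, or integrating a chatbot.
```

A description in this shape leads with concrete third-person capabilities and ends with a "Use when..." clause containing natural trigger terms, addressing all three suggestions at once.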
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (AI/LLM integration) and mentions some specific areas like 'RAG architecture' and 'prompt', but the description is truncated and doesn't list comprehensive concrete actions. The opening is more marketing fluff than capability description. | 2 / 3 |
| Completeness | The description partially addresses 'what' (LLM integration patterns, RAG architecture) but is truncated and provides no 'Use when...' clause or explicit trigger guidance. The opening sentence is opinion/marketing rather than functional description. | 1 / 3 |
| Trigger Term Quality | Contains some relevant technical keywords like 'AI-powered', 'LLM integration', 'RAG architecture', and 'prompt', but these are more technical jargon than natural user language. Missing common variations users might say, like 'chatbot', 'embeddings', 'vector database', 'AI app'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'LLM integration patterns' and 'RAG architecture' provides some specificity, but 'AI-powered' is extremely generic and could overlap with many AI-related skills. The truncation makes it harder to assess full distinctiveness. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like an outline or skeleton rather than actionable guidance. It identifies important AI product development concerns (validation, streaming, prompt versioning) but fails to provide any executable code or concrete implementation details. The Sharp Edges table is particularly problematic as it promises solutions but delivers only empty comment placeholders.
Suggestions
Add complete, executable code examples for each pattern (e.g., actual function calling implementation with schema validation, streaming response handling code)
Fill in the Sharp Edges solutions with real code snippets instead of comment placeholders
Add a step-by-step workflow for implementing a basic LLM feature with validation checkpoints (e.g., 1. Define schema → 2. Implement call → 3. Validate response → 4. Handle errors)
Remove the persona paragraph - it adds no actionable value and wastes tokens
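As a sketch of what the suggested schema → call → validate → handle-errors workflow could look like, here is a minimal, self-contained example. The model call is a stub and the schema fields are stand-ins, not code taken from the skill itself:

```python
import json
from dataclasses import dataclass

# 1. Define the schema the model must return (illustrative fields).
REQUIRED_FIELDS = {"summary": str, "sentiment": str}

@dataclass
class ExtractionResult:
    summary: str
    sentiment: str

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call that was asked for JSON output."""
    return '{"summary": "Shipping was delayed.", "sentiment": "negative"}'

def validate(raw: str) -> ExtractionResult:
    # 3. Validate the response: parse, then check fields and types.
    data = json.loads(raw)  # raises ValueError on malformed JSON
    for field, ftype in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"bad or missing field: {field}")
    return ExtractionResult(**{k: data[k] for k in REQUIRED_FIELDS})

def extract(prompt: str, retries: int = 2) -> ExtractionResult:
    # 2 + 4. Implement the call, retrying on validation failure
    # and surfacing the error once retries are exhausted.
    for attempt in range(retries + 1):
        try:
            return validate(call_model(prompt))
        except ValueError:
            if attempt == retries:
                raise
    raise RuntimeError("unreachable")

result = extract("Summarize this support ticket ...")
print(result.sentiment)  # -> negative
```

Even this skeleton shows the kind of executable content the Sharp Edges placeholders (`# Always validate output:`) should be replaced with: a concrete parse, a type check per field, and an explicit retry/error path.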
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The persona introduction is unnecessary fluff that Claude doesn't need. The patterns and anti-patterns sections are reasonably concise, but the 'Sharp Edges' table has incomplete solutions (just comments like '# Always validate output:' with no actual code). | 2 / 3 |
| Actionability | Critically lacking executable guidance. Patterns mention concepts like 'function calling or JSON mode' without code examples. The Sharp Edges table promises solutions but only shows comment placeholders ('# Always validate output:') with no actual implementation code. | 1 / 3 |
| Workflow Clarity | No clear workflows or sequences are provided. The skill lists concepts and warnings but never shows how to implement an AI feature step by step. No validation checkpoints or feedback loops for the complex operations mentioned. | 1 / 3 |
| Progressive Disclosure | Content is organized into logical sections (Patterns, Anti-Patterns, Sharp Edges), which provides some structure. However, there are no references to detailed documentation, and the Sharp Edges table appears truncated/incomplete rather than properly linking to detailed guides. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| `frontmatter_unknown_keys` | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
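Resolving the `frontmatter_unknown_keys` warning usually means nesting any non-standard keys under `metadata`, as the check suggests. The key name below is a made-up example, not one taken from this skill:

```yaml
# Before: unrecognized top-level key triggers the warning
# maintainer: sickn33

# After: non-standard keys moved under metadata
metadata:
  maintainer: sickn33
```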