Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt ...
Overall score: 48

- Quality: 27%. Does it follow best practices?
- Impact: 79%. 1.51× average score across 3 eval scenarios.
- Passed: no known issues.
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/antigravity-ai-product/SKILL.md`

Quality
Discovery
32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description leads with marketing rhetoric ('The question is whether you'll build it right or ship a demo that falls apart') rather than functional capability description. While it mentions relevant technical domains (LLM integration, RAG, prompts), it lacks explicit trigger guidance and appears truncated, significantly limiting its usefulness for skill selection.
Suggestions
Remove the marketing/opinion opening and replace with direct capability statements in third person (e.g., 'Implements LLM integration patterns, designs RAG pipelines, engineers production-ready prompts')
Add an explicit 'Use when...' clause with natural trigger terms like 'building AI features', 'adding LLM to app', 'chatbot', 'embeddings', 'vector search', 'prompt engineering'
Complete the truncated description to fully enumerate the specific capabilities covered by this skill
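Taken together, the suggestions above imply frontmatter along these lines (a hypothetical rewrite for illustration, not the skill's actual description):

```yaml
# Hypothetical SKILL.md frontmatter -- illustrative only
name: antigravity-ai-product
description: >
  Implements LLM integration patterns, designs RAG pipelines, and engineers
  production-ready prompts. Use when building AI features, adding an LLM to
  an app, creating a chatbot, working with embeddings or vector search, or
  doing prompt engineering.
```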
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (AI/LLM integration) and mentions some areas like 'RAG architecture, prompt...', but the description is truncated and doesn't list comprehensive concrete actions. The opening is more marketing fluff than capability description. | 2 / 3 |
| Completeness | The description partially addresses 'what' (LLM integration patterns, RAG architecture) but is truncated and has no explicit 'Use when...' clause or trigger guidance. The opening sentence is opinion/marketing rather than functional description. | 1 / 3 |
| Trigger Term Quality | Contains some relevant technical keywords like 'AI-powered', 'LLM integration', 'RAG architecture', and 'prompt' that users might mention, but is missing common variations and natural phrases users would say (e.g., 'chatbot', 'embeddings', 'vector database', 'AI features'). | 2 / 3 |
| Distinctiveness / Conflict Risk | The AI/LLM focus provides some distinctiveness, but 'AI-powered' is extremely broad and could overlap with many AI-related skills. The truncated description makes it harder to assess full conflict potential. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like an outline or skeleton rather than actionable guidance. It identifies important AI product development concerns but fails to provide the concrete code examples, specific implementations, or executable workflows that would make it useful. The Sharp Edges table is particularly problematic as it promises solutions but delivers only empty comment placeholders.
Suggestions
Add complete, executable code examples for each pattern (e.g., actual Pydantic schema validation, streaming implementation with a specific SDK)
Fill in the Sharp Edges solutions with real code snippets instead of comment placeholders, or remove the incomplete table
Include a concrete workflow for at least one common task (e.g., 'Adding LLM feature to existing endpoint' with numbered steps and validation checkpoints)
Remove the persona paragraph - it adds tokens without actionable value
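As an illustration of the first suggestion, a minimal structured-output validation sketch using Pydantic (the schema fields and the helper name are hypothetical, not taken from the skill):

```python
from typing import Optional

from pydantic import BaseModel, ValidationError


class ProductSummary(BaseModel):
    """Schema the LLM's JSON output must conform to (hypothetical fields)."""
    title: str
    sentiment: str
    confidence: float


def parse_llm_output(raw: str) -> Optional[ProductSummary]:
    """Validate raw LLM output against the schema instead of trusting it."""
    try:
        return ProductSummary.model_validate_json(raw)
    except ValidationError:
        # In production: retry with a repair prompt, or log and fall back.
        return None


good = parse_llm_output('{"title": "Widget", "sentiment": "positive", "confidence": 0.9}')
bad = parse_llm_output('{"title": "Widget"}')  # missing fields -> None
```

Skeleton guidance like `# Always validate output:` becomes actionable only once it is backed by a runnable snippet of this shape.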
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is relatively lean and avoids explaining basic concepts Claude knows, but the persona introduction is unnecessary padding. The Sharp Edges table has incomplete solutions (just comments like '# Always validate output:' without actual code). | 2 / 3 |
| Actionability | The skill describes patterns and anti-patterns but provides no executable code, concrete examples, or copy-paste-ready implementations. The Sharp Edges table promises solutions but only shows comment placeholders without actual code. | 1 / 3 |
| Workflow Clarity | No clear multi-step workflows are defined. The patterns section lists concepts without sequencing or validation checkpoints. There's no guidance on how to actually implement structured output validation, streaming, or prompt versioning. | 1 / 3 |
| Progressive Disclosure | The content is organized into logical sections (Patterns, Anti-Patterns, Sharp Edges), but there are no references to detailed documentation. The Sharp Edges table appears truncated, with incomplete solutions that should either be filled in or linked to separate files. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
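For the streaming pattern the review asks for, a minimal sketch of the consumer side, with a stub generator standing in for an SDK's token stream (a real implementation would iterate over the provider's stream object and write each chunk to the HTTP response):

```python
from typing import Iterator


def fake_token_stream() -> Iterator[str]:
    """Stub standing in for an SDK stream (e.g., chunks from a chat API)."""
    for chunk in ["Hello", ", ", "world", "!"]:
        yield chunk


def stream_response(chunks: Iterator[str]) -> str:
    """Forward chunks to the client as they arrive; return the full text."""
    parts = []
    for chunk in chunks:
        # In a real endpoint this would be flushed to the client (e.g., SSE).
        parts.append(chunk)
    return "".join(parts)


full_text = stream_response(fake_token_stream())  # "Hello, world!"
```

Separating the stream source from the consumer like this keeps the pattern testable without a live API key, which is exactly what a skill's code example needs to be.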
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |