Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt engineering that scales, AI UX that users trust, and cost optimization that doesn't bankrupt you. Use when: keywords, file_patterns, code_patterns.
Skill review summary:

- Does it follow best practices? 19%
- Impact: 80%
- Average score across 3 eval scenarios: 1.37x (Passed)
- No known issues

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./skills/ai-product/SKILL.md`

RAG structured output and async patterns

| Criterion | Baseline | With skill |
|---|---|---|
| Structured output format | 0% | 70% |
| No raw text parsing | 100% | 100% |
| Input sanitization | 30% | 100% |
| Token estimation | 0% | 75% |
| Context window management | 50% | 100% |
| Async LLM call | 0% | 100% |
| Output validation | 60% | 100% |
| Retry or fallback | 100% | 100% |
| Structured response fields | 100% | 100% |
| Prompt as named variable | 100% | 100% |
| No unconditional context dump | 87% | 100% |
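The criteria in this scenario compose naturally into one pipeline: a named prompt, token-budgeted context, an async call, structured (JSON) output with validation, and retry with a safe fallback. A minimal sketch follows; `call_llm`, the ~4-chars-per-token heuristic, and the prompt text are placeholder assumptions, not a specific provider SDK.

```python
import asyncio
import json

# Hypothetical stand-in for a real async LLM client; any provider SDK slots in here.
async def call_llm(prompt: str) -> str:
    return json.dumps({"answer": "42", "sources": ["doc-1"]})

# Prompt kept as a named variable, not an inline f-string.
RAG_ANSWER_PROMPT = (
    "Answer the question using ONLY the context below.\n"
    "Respond as JSON with keys 'answer' and 'sources'.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return len(text) // 4

def fit_context(chunks: list[str], budget_tokens: int) -> str:
    # Add retrieved chunks only while they fit the token budget,
    # instead of dumping all retrieved context unconditionally.
    kept, used = [], 0
    for chunk in chunks:
        cost = estimate_tokens(chunk)
        if used + cost > budget_tokens:
            break
        kept.append(chunk)
        used += cost
    return "\n---\n".join(kept)

async def answer(question: str, chunks: list[str]) -> dict:
    prompt = RAG_ANSWER_PROMPT.format(
        context=fit_context(chunks, budget_tokens=3000),
        question=question,
    )
    for _attempt in range(3):  # retry before falling back
        raw = await call_llm(prompt)
        try:
            data = json.loads(raw)  # structured output, no raw-text scraping
            if isinstance(data.get("answer"), str) and isinstance(data.get("sources"), list):
                return data  # validated structured response fields
        except json.JSONDecodeError:
            continue
    return {"answer": "Sorry, I couldn't produce a reliable answer.", "sources": []}

result = asyncio.run(answer("What is the meaning of life?", ["The answer is 42."] * 10))
```

The fallback dict keeps the same response shape as the happy path, so callers never need to branch on failure.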
Prompt versioning, streaming, and cost tracking

| Criterion | Baseline | With skill |
|---|---|---|
| Prompts in separate module | 100% | 100% |
| Named versioned prompts | 100% | 100% |
| Streaming enabled | 0% | 100% |
| Incremental chunk processing | 0% | 100% |
| Token usage recorded | 100% | 80% |
| Cost logged per request | 100% | 100% |
| Output validated | 20% | 100% |
| Factual content addressed | 0% | 0% |
| Structured output | 100% | 100% |
| Async LLM calls | 0% | 0% |
| API failure handling | 0% | 37% |
| README covers prompt management | 100% | 100% |
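A minimal sketch of the versioning, streaming, and cost-tracking criteria above: a named, versioned prompt (kept in its own module in a real project), incremental chunk processing, and per-request token and cost accounting. The `stream_llm` generator, the ~4-chars-per-token heuristic, and the price constant are all placeholder assumptions, not a real SDK.

```python
import dataclasses

# Named, versioned prompt; in a real project this lives in its own prompts module.
@dataclasses.dataclass(frozen=True)
class Prompt:
    name: str
    version: str
    template: str

SUMMARIZE_V2 = Prompt(
    name="summarize",
    version="2.0",
    template="Summarize the following text in one sentence:\n{text}",
)

# Hypothetical streaming client: yields completion chunks as they arrive.
def stream_llm(prompt: str):
    for chunk in ["The quick ", "brown fox ", "jumps."]:
        yield chunk

PRICE_PER_1K_TOKENS = 0.002  # assumed rate; check your provider's pricing

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough ~4 chars/token heuristic

def run(prompt: Prompt, **values) -> dict:
    rendered = prompt.template.format(**values)
    pieces, completion_tokens = [], 0
    for chunk in stream_llm(rendered):   # streaming enabled
        pieces.append(chunk)             # process each chunk incrementally
        completion_tokens += estimate_tokens(chunk)
    total = estimate_tokens(rendered) + completion_tokens
    return {
        "text": "".join(pieces),
        "prompt": f"{prompt.name}@{prompt.version}",  # record which prompt version ran
        "tokens": total,                              # token usage recorded
        "cost_usd": round(total / 1000 * PRICE_PER_1K_TOKENS, 6),  # cost per request
    }

result = run(SUMMARIZE_V2, text="A fox jumped over a dog.")
```

Tagging every response with `name@version` is what makes a prompt change auditable when quality or cost shifts later.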
Input sanitization and output safety validation

| Criterion | Baseline | With skill |
|---|---|---|
| Input sanitization present | 60% | 100% |
| Injection pattern handling | 100% | 100% |
| Structured output parsing | 40% | 0% |
| Output schema validation | 20% | 0% |
| Factual/compliance claim handling | 0% | 50% |
| Safe fallback on validation failure | 100% | 100% |
| Retry on API failure | 100% | 100% |
| Graceful degradation | 100% | 100% |
| Async call pattern | 0% | 0% |
| Multi-layer defence | 100% | 100% |
| No production shortcuts | 100% | 100% |