Builds AI-native products using OpenAI's development philosophy and modern AI UX patterns. Use when integrating AI features, designing for model improvements, implementing evals as product specs, or creating AI-first experiences. Based on Kevin Weil (OpenAI CPO) on building for future models, hybrid approaches, and cost optimization.
Claude uses this skill when:

- Integrating AI features
- Designing for model improvements
- Implementing evals as product specs
- Creating AI-first experiences
The Exponential Improvement Mindset:
"The AI model you're using today is the worst AI model you will ever use for the rest of your life. What computers can do changes every two months."
Core Principle: Design for the capability you want, not the model's current limitations.
How to Apply:
DON'T:
- "AI can't do X, so we won't support it"
- Build fallbacks that limit model capabilities
- Design UI that assumes current limitations
DO:
- Build interfaces that scale with model improvements
- Design for the capability you want, not current reality
- Test with future models in mind
- Make it easy to swap/upgrade models

Example:
Feature: "AI code review"
❌ Current-Model Thinking:
- "Models can't catch logic bugs, only style"
- Limit to linting and formatting
- Don't even try complex reasoning
✅ Future-Model Thinking:
- Design for full logic review capability
- Start with style, but UI supports deeper analysis
- As models improve, feature gets better automatically
- Progressive: Basic → Advanced → Expert review

Test Cases = Product Requirements:
"At OpenAI, evals are the product spec. If you can define what good looks like in test cases, you've defined the product."
The Approach:
Traditional PM:
Requirement: "Search should return relevant results"

AI-Native PM:
```javascript
// Eval as Product Spec
const searchEvals = [
  {
    query: "best PM frameworks",
    expectedResults: ["RICE", "LNO", "Jobs-to-be-Done"],
    quality: "all3InTop5",
  },
  {
    query: "how to prioritize features",
    expectedResults: ["Shreyas Doshi", "Marty Cagan"],
    quality: "relevantInTop3",
  },
  {
    query: "shiip prodcut", // intentional typo
    correctAs: "ship product",
    quality: "handleTypos",
  },
];
```

How to Write Evals:
1. Define Success Cases:
- Input: [specific user query/action]
- Expected: [what good output looks like]
- Quality bar: [how to measure success]
2. Define Failure Cases:
- Input: [edge case, adversarial, error]
- Expected: [graceful handling]
- Quality bar: [minimum acceptable]
3. Make Evals Runnable:
- Automated tests
- Run on every model change
- Track quality over time

Example:
```javascript
// Product Requirement as Eval
describe("AI Recommendations", () => {
  test("cold start: new user gets popular items", async () => {
    const newUser = { signupDate: today, interactions: [] };
    const recs = await getRecommendations(newUser);
    expect(recs).toIncludePopularItems();
    expect(recs.length).toBeGreaterThan(5);
  });

  test("personalized: returning user gets relevant items", async () => {
    const user = { interests: ["PM", "AI", "startups"] };
    const recs = await getRecommendations(user);
    expect(recs).toMatchInterests(user.interests);
    expect(recs).toHaveDiversity(); // Not all same topic
  });

  test("quality bar: recommendations >70% click rate", async () => {
    const users = await getTestUsers(100);
    const clickRate = await measureClickRate(users);
    expect(clickRate).toBeGreaterThan(0.7);
  });
});
```

AI + Traditional Code:
"Don't make everything AI. Use AI where it shines, traditional code where it's reliable."
When to Use AI: understanding intent and ambiguous input, pattern matching and NLP, creative generation, and problems that improve with more data (recommendations, personalization).

When to Use Traditional Code: deterministic logic (math, validation, access control) and any path that must behave the same way every time.
Hybrid Patterns:
Pattern 1: AI for Intent, Code for Execution
```javascript
// Hybrid: AI understands, code executes
async function processUserQuery(query) {
  // AI: Understand intent
  const intent = await ai.classify(query, {
    types: ["search", "create", "update", "delete"],
  });

  // Traditional: Execute deterministically
  switch (intent.type) {
    case "search": return search(intent.params);
    case "create": return create(intent.params);
    // ... reliable code paths
  }
}
```

Pattern 2: AI with Rule-Based Fallbacks
```javascript
// Hybrid: AI primary, rules backup
async function moderateContent(content) {
  // Fast rules-based check first
  if (containsProfanity(content)) return "reject";
  if (content.length > 10000) return "reject";

  // AI for nuanced cases
  const aiModeration = await ai.moderate(content);

  // Hybrid decision
  if (aiModeration.confidence > 0.9) {
    return aiModeration.decision;
  } else {
    return "human_review"; // Uncertain → human
  }
}
```

Pattern 3: AI + Ranking/Filtering
```javascript
// Hybrid: AI generates, code filters
async function generateRecommendations(user) {
  // AI: Generate candidates
  const candidates = await ai.recommend(user, { count: 50 });

  // Code: Apply business rules
  const filtered = candidates
    .filter(item => item.inStock)
    .filter(item => item.price <= user.budget)
    .filter(item => !user.previouslyPurchased(item));

  // Code: Apply ranking logic
  return filtered
    .sort((a, b) => scoringFunction(a, b))
    .slice(0, 10);
}
```

Streaming:
```javascript
// Show results as they arrive
for await (const chunk of ai.stream(prompt)) {
  updateUI(chunk); // Immediate feedback
}
```

Progressive Disclosure:

[AI working...] → [Preview...] → [Full results]

Retry and Refinement:
User: "Find PM articles"
AI: [shows results]
User: "More about prioritization"
AI: [refines results]

Confidence Indicators:
```javascript
if (result.confidence > 0.9) {
  show(result); // High confidence
} else if (result.confidence > 0.5) {
  show(result, { disclaimer: "AI-generated, verify" });
} else {
  show("I'm not confident. Try rephrasing?");
}
```

Cost-Aware Patterns:
```javascript
// Progressive cost
if (simpleQuery) {
  return await smallModel(query); // Fast, cheap
} else {
  return await largeModel(query); // Slow, expensive
}
```

```
FEATURE DECISION
│
├─ Deterministic logic needed? ────YES──→ TRADITIONAL CODE
│  (math, validation, access)
│            NO ↓
│
├─ Pattern matching / NLP? ────────YES──→ AI (with fallbacks)
│  (understanding intent, ambiguity)
│            NO ↓
│
├─ Creative generation? ───────────YES──→ AI (with human oversight)
│  (writing, images, ideas)
│            NO ↓
│
├─ Improves with more data? ───────YES──→ AI + ML
│  (recommendations, personalization)
│            NO ↓
│
└─ Use TRADITIONAL CODE
   (More reliable for this use case)
```

# AI Feature: [Name]
## What It Does
User goal: [describe job to be done]
AI capability: [what AI makes possible]
## Evals (Product Spec)
### Success Cases
```javascript
test("handles typical user query", async () => {
  const input = "[example]";
  const output = await aiFeature(input);
  expect(output).toMatch("[expected]");
});

test("handles edge case", async () => {
  // Define edge cases as tests
});
```

### Template 2: AI Cost Optimization
```markdown
# AI Feature: [Name]
## Cost Structure
- Model: [GPT-4, Claude, etc.]
- Cost per call: [$X]
- Expected volume: [X calls/day]
- Monthly cost: [estimate]
## Optimization Strategies
### 1. Caching
- [ ] Cache common queries
- [ ] Cache user context
- [ ] Expiry: [duration]
### 2. Model Routing
- [ ] Simple queries → small model
- [ ] Complex queries → large model
- [ ] Threshold: [define]
### 3. Batching
- [ ] Group similar requests
- [ ] Process in batches
- [ ] Update frequency: [timing]
### 4. Prompt Optimization
- [ ] Minimize token count
- [ ] Reusable system prompts
- [ ] Structured outputs (JSON)
### 5. Hybrid Approaches
- [ ] Rules-based preprocessing
- [ ] AI only when needed
- [ ] Fallback to deterministic
```

# Feature: [Name]
## UX Patterns
### Streaming Response
```javascript
// Show results as they arrive
for await (const chunk of stream) {
  appendToUI(chunk);
}
```

## Quick Reference Card
### 🤖 AI Product Checklist
**Before Building:**
- [ ] Evals written (test cases = product spec)
- [ ] Hybrid approach defined (AI + traditional code)
- [ ] Model improvement plan (design for future capabilities)
- [ ] Cost estimate (per call, monthly)
- [ ] Quality bar defined (accuracy, latency, cost)
**During Build:**
- [ ] Implementing streaming (for responsiveness)
- [ ] Adding confidence indicators
- [ ] Building retry/refinement flows
- [ ] Caching common queries
- [ ] Fallbacks for failures
**Before Ship:**
- [ ] Evals passing (quality bar met)
- [ ] Cost within budget
- [ ] Error states handled
- [ ] Model swappable (not locked to one provider)
- [ ] Monitoring in place
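The "model swappable" item above can be sketched as a thin provider registry: feature code calls one function and never names a vendor. The provider entries below are local stubs (the names and return strings are illustrative, not a real SDK); production entries would wrap each vendor's client.

```javascript
// Sketch: keep the model behind a registry so swapping is a config change
const providers = {
  small: async (prompt) => `small-model answer to: ${prompt}`,
  large: async (prompt) => `large-model answer to: ${prompt}`,
};

let activeModel = "small";

function setModel(name) {
  if (!(name in providers)) throw new Error(`unknown model: ${name}`);
  activeModel = name;
}

async function complete(prompt) {
  // Feature code only ever calls complete()
  return providers[activeModel](prompt);
}
```

Upgrading to a better model then becomes a one-line change, which keeps the "design for future models" principle cheap to act on.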
---
## Real-World Examples
### Example 1: OpenAI's ChatGPT Memory
**Challenge:** Users want persistent context
**AI-Native Approach:**
- Built for models that would improve memory
- Started simple, designed for sophisticated future
- Evals: "Remembers facts across sessions"
- Hybrid: Explicit memory + AI interpretation
**Result:** Feature improves as models improve
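One way to sketch an "explicit memory + AI interpretation" hybrid (an illustration, not OpenAI's actual implementation): storage is plain, inspectable code, and `extractFact` is a stub standing in for the model call that decides what, if anything, a message asks to remember.

```javascript
// Hybrid memory: deterministic storage (code) + interpretation (AI stub)
const memory = new Map();

async function extractFact(message) {
  // Stub: a real version would ask the model. Here, a trivial rule.
  const m = message.match(/^remember (?:that )?(.+)$/i);
  return m ? m[1] : null;
}

async function onMessage(userId, message) {
  const fact = await extractFact(message); // AI: interpretation
  if (fact) {
    const facts = memory.get(userId) ?? [];
    facts.push(fact);
    memory.set(userId, facts); // Code: explicit, inspectable storage
  }
  return memory.get(userId) ?? [];
}
```

Because storage is explicit, users can view and delete memories, and the interpretation step improves for free as models improve.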
---
### Example 2: AI Search Implementation
**Challenge:** Traditional search missing intent
**Hybrid Approach:**
```javascript
async function search(query) {
  // Traditional: Exact matches (fast, cheap)
  const exactMatches = await traditionalSearch(query);
  if (exactMatches.length > 10) return exactMatches;

  // AI: Semantic search (smart, expensive)
  const semanticResults = await aiSearch(query);

  // Hybrid: Combine and rank
  return dedupe([...exactMatches, ...semanticResults]);
}
```

---

### Example 3: Cost Optimization

**Challenge:** AI costs too high

**Solution:** Cache common queries, batch similar requests, and route simple queries to smaller models.
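Caching is usually the first cost lever. A minimal sketch, where `callModel` is a stub standing in for an expensive model API call:

```javascript
// Query caching: identical (normalized) queries skip the model entirely
const cache = new Map();
let modelCalls = 0;

async function callModel(query) {
  modelCalls += 1; // stand-in for a billed API call
  return `answer: ${query}`;
}

async function cachedQuery(query) {
  const key = query.trim().toLowerCase(); // fold trivial variants together
  if (cache.has(key)) return cache.get(key);
  const result = await callModel(query);
  cache.set(key, result);
  return result;
}
```

A production version would add an expiry, as in the caching checklist of the cost-optimization template above.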
Common Mistakes:

- **Problem:** Using AI where traditional code is better. **Fix:** Use a hybrid approach: AI where it shines, code where it's reliable.
- **Problem:** "Models can't do X, so we won't support it." **Fix:** Build for future capabilities; leave room to grow.
- **Problem:** Subjective quality with no measurement. **Fix:** Treat evals as product specs; define "good" in test cases.
- **Problem:** Expensive AI calls without optimization. **Fix:** Cache, batch, and route to smaller models.
Kevin Weil:
"If you're building and the product is right on the edge of what's possible, keep going. In two months, there's going to be a better model."
On Evals:
"At OpenAI, we write evals as product specs. If you can define good output in test cases, you've defined the product."
On Model Improvements:
"The AI model you're using today is the worst AI model you will ever use for the rest of your life."