Optimize Ideogram API performance with caching, batching, and connection pooling. Use when experiencing slow API responses, implementing caching strategies, or optimizing request throughput for Ideogram integrations. Trigger with phrases like "ideogram performance", "optimize ideogram", "ideogram latency", "ideogram caching", "ideogram slow", "ideogram batch".
Install with the Tessl CLI:
`npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill ideogram-performance-tuning`
Optimize Ideogram API performance with caching, batching, and connection pooling.
| Operation | P50 | P95 | P99 |
|---|---|---|---|
| Read | 50ms | 150ms | 300ms |
| Write | 100ms | 250ms | 500ms |
| List | 75ms | 200ms | 400ms |
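As a minimal sketch of how these figures might be used, the snippet below treats them as per-operation latency budgets and warns when a measured call exceeds its P95 target. `LATENCY_TARGETS_MS` and `checkLatency` are illustrative names, not part of any Ideogram SDK; the numbers simply mirror the table above.

```typescript
// Hypothetical per-operation latency budgets (milliseconds), mirroring the table above.
const LATENCY_TARGETS_MS = {
  read:  { p50: 50,  p95: 150, p99: 300 },
  write: { p50: 100, p95: 250, p99: 500 },
  list:  { p50: 75,  p95: 200, p99: 400 },
} as const;

type OperationKind = keyof typeof LATENCY_TARGETS_MS;

// Warn when a measured duration exceeds the P95 budget for its operation type.
function checkLatency(kind: OperationKind, durationMs: number): void {
  const target = LATENCY_TARGETS_MS[kind];
  if (durationMs > target.p95) {
    console.warn(`${kind} took ${durationMs.toFixed(1)}ms (P95 target: ${target.p95}ms)`);
  }
}
```

For example, `checkLatency('read', 180)` logs a warning because 180ms is past the 150ms P95 read target.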
In-memory response caching with lru-cache:

```typescript
import { LRUCache } from 'lru-cache';
const cache = new LRUCache<string, any>({
max: 1000,
ttl: 60000, // 1 minute
updateAgeOnGet: true,
});
async function cachedIdeogramRequest<T>(
key: string,
fetcher: () => Promise<T>,
ttl?: number
): Promise<T> {
const cached = cache.get(key);
if (cached) return cached as T;
const result = await fetcher();
cache.set(key, result, { ttl });
return result;
}
```

```typescript
import Redis from 'ioredis';
const redis = new Redis(process.env.REDIS_URL!);
async function cachedWithRedis<T>(
key: string,
fetcher: () => Promise<T>,
ttlSeconds = 60
): Promise<T> {
const cached = await redis.get(key);
if (cached) return JSON.parse(cached);
const result = await fetcher();
await redis.setex(key, ttlSeconds, JSON.stringify(result));
return result;
}
```

```typescript
import DataLoader from 'dataloader';
const ideogramLoader = new DataLoader<string, any>(
async (ids) => {
// Batch fetch from Ideogram
const results = await ideogramClient.batchGet(ids);
return ids.map(id => results.find(r => r.id === id) || null);
},
{
maxBatchSize: 100,
batchScheduleFn: callback => setTimeout(callback, 10),
}
);
// Usage - automatically batched
const [item1, item2, item3] = await Promise.all([
ideogramLoader.load('id-1'),
ideogramLoader.load('id-2'),
ideogramLoader.load('id-3'),
]);
```

```typescript
import { Agent } from 'https';
// Keep-alive connection pooling
const agent = new Agent({
keepAlive: true,
maxSockets: 10,
maxFreeSockets: 5,
timeout: 30000,
});
const client = new IdeogramClient({
apiKey: process.env.IDEOGRAM_API_KEY!,
httpAgent: agent,
});
```

```typescript
async function* paginatedIdeogramList<T>(
fetcher: (cursor?: string) => Promise<{ data: T[]; nextCursor?: string }>
): AsyncGenerator<T> {
let cursor: string | undefined;
do {
const { data, nextCursor } = await fetcher(cursor);
for (const item of data) {
yield item;
}
cursor = nextCursor;
} while (cursor);
}
// Usage
for await (const item of paginatedIdeogramList(cursor =>
ideogramClient.list({ cursor, limit: 100 })
)) {
await process(item);
}
```

```typescript
async function measuredIdeogramCall<T>(
operation: string,
fn: () => Promise<T>
): Promise<T> {
const start = performance.now();
try {
const result = await fn();
const duration = performance.now() - start;
console.log({ operation, duration, status: 'success' });
return result;
} catch (error) {
const duration = performance.now() - start;
console.error({ operation, duration, status: 'error', error });
throw error;
}
}
```

1. Measure current latency for critical Ideogram operations.
2. Add response caching for frequently accessed data.
3. Use DataLoader or similar for automatic request batching.
4. Configure connection pooling with keep-alive.
| Issue | Cause | Solution |
|---|---|---|
| Cache miss storm | TTL expired | Use stale-while-revalidate (sketch below) |
| Batch timeout | Too many items | Reduce batch size |
| Connection exhausted | No pooling | Configure max sockets |
| Memory pressure | Cache too large | Set max cache entries |
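The cache miss storm row suggests stale-while-revalidate, which the examples above do not show. Below is a minimal sketch of that pattern using lru-cache; `staleWhileRevalidate`, `swrCache`, and the `refreshing` set are illustrative names (not part of any Ideogram SDK), while `allowStale` and `noDeleteOnStaleGet` are standard lru-cache options.

```typescript
import { LRUCache } from 'lru-cache';

const swrCache = new LRUCache<string, any>({
  max: 1000,
  ttl: 60_000,              // entries are "fresh" for 1 minute
  allowStale: true,         // expired entries can still be read...
  noDeleteOnStaleGet: true, // ...and stay in the cache until overwritten or evicted
});

// Keys currently being refreshed, so only one background fetch runs per key.
const refreshing = new Set<string>();

async function staleWhileRevalidate<T>(
  key: string,
  fetcher: () => Promise<T>
): Promise<T> {
  const value = swrCache.get(key, { allowStale: true }) as T | undefined;

  if (value !== undefined) {
    // has() returns false once the entry is past its TTL, so this branch
    // kicks off a single background refresh while the stale value is served.
    if (!swrCache.has(key) && !refreshing.has(key)) {
      refreshing.add(key);
      fetcher()
        .then(fresh => swrCache.set(key, fresh))
        .catch(() => { /* keep serving stale if the refresh fails */ })
        .finally(() => refreshing.delete(key));
    }
    return value;
  }

  // Nothing cached at all: fall back to a blocking fetch.
  const fresh = await fetcher();
  swrCache.set(key, fresh);
  return fresh;
}
```

When a hot key expires, callers keep getting an immediate (stale) response while one background fetch refreshes the entry, so the expiry does not translate into a burst of simultaneous Ideogram requests.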
The `cachedIdeogramRequest` and `measuredIdeogramCall` helpers above compose into a single wrapper:

```typescript
const withPerformance = <T>(name: string, fn: () => Promise<T>) =>
measuredIdeogramCall(name, () =>
cachedIdeogramRequest(`cache:${name}`, fn)
);
```

For cost optimization, see ideogram-cost-tuning.