Compressed caveman-style prose for AI coding agents: cuts roughly 65% of output tokens while keeping full technical accuracy.
Eval summary: 96% average score across 38 eval scenarios (1.00x); "Does it follow best practices?": 100%, passed, no known issues.
{
"context": "Tests whether the response provides a sound Redis caching design for a read-heavy, infrequently-updated catalog.",
"type": "weighted_checklist",
"checklist": [
{
"name": "Proposes cache key structure",
"description": "Defines a sensible cache key pattern that accounts for pagination, filters, or query parameters",
"max_score": 12
},
{
"name": "Sets appropriate TTL",
"description": "Suggests a TTL that balances freshness vs performance given the few-times-per-day update frequency",
"max_score": 10
},
{
"name": "Describes invalidation strategy",
"description": "Explains how to invalidate cache when products are updated (event-driven, key deletion, versioned keys, or pattern-based)",
"max_score": 15
},
{
"name": "Addresses cache stampede or thundering herd",
"description": "Mentions the risk of many concurrent requests hitting the DB when cache expires and suggests mitigation (locking, stale-while-revalidate, jittered TTL)",
"max_score": 10
},
{
"name": "No incorrect information",
"description": "Response does not contain factually wrong statements about Redis or caching patterns",
"max_score": 10
}
]
}
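The checklist above maps onto a concrete caching pattern. Below is a minimal Python sketch of one design that would satisfy each item: a versioned key scheme that accounts for pagination and filters, a jittered TTL sized for a catalog that updates only a few times per day, one-call invalidation via a version bump, and a lock-based stampede guard. The `FakeRedis` class, the key names (`catalog:version`, `catalog:v{n}:...`), and the in-process `threading.Lock` (a stand-in for a distributed lock, e.g. Redis `SET NX`) are all illustrative assumptions, not part of the eval itself.

```python
import hashlib
import json
import random
import threading
import time

class FakeRedis:
    """Minimal in-memory stand-in for a Redis client (illustration only)."""
    def __init__(self):
        self._data = {}  # key -> (value, expires_at or None)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and time.time() >= expires_at:
            del self._data[key]  # lazy expiry, like Redis on read
            return None
        return value

    def set(self, key, value, ex=None):
        expires_at = time.time() + ex if ex is not None else None
        self._data[key] = (value, expires_at)

    def incr(self, key):
        current = int(self.get(key) or 0) + 1
        self.set(key, str(current))
        return current

r = FakeRedis()
_rebuild_lock = threading.Lock()  # stand-in for a distributed lock (e.g. SET NX)

BASE_TTL = 600  # seconds; generous, since the catalog changes only a few times a day

def cache_key(page, filters):
    """Versioned key: bumping the version invalidates every derived key at once."""
    version = r.get("catalog:version") or "0"
    # Hash the filters so arbitrary query parameters yield a stable, bounded key.
    filter_hash = hashlib.sha1(
        json.dumps(filters, sort_keys=True).encode()
    ).hexdigest()[:12]
    return f"catalog:v{version}:page:{page}:f:{filter_hash}"

def jittered_ttl():
    """Spread expirations so many keys do not expire in the same instant."""
    return BASE_TTL + random.randint(0, 60)

def get_products(page, filters, load_from_db):
    """Read-through cache with a naive stampede guard around the DB rebuild."""
    key = cache_key(page, filters)
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    with _rebuild_lock:
        # Re-check after acquiring the lock: another caller may have rebuilt it.
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        rows = load_from_db(page, filters)
        r.set(key, json.dumps(rows), ex=jittered_ttl())
        return rows

def invalidate_catalog():
    """Called from the product-update path: one INCR invalidates all catalog keys."""
    r.incr("catalog:version")
```

Versioned keys trade a little memory (stale-version entries linger until their TTL) for invalidation that is O(1) regardless of how many page/filter combinations are cached, avoiding `KEYS`/`SCAN` pattern deletes on the hot path.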