PostgreSQL patterns for Python with psycopg and asyncpg — connection pooling, parameterized queries, bulk ingestion, transactions, and server-side cursors.
Does it follow best practices? — 99% average score across 5 eval scenarios · 1.15x impact · Passed · No known issues
{
  "context": "Tests whether the agent uses asyncpg connection pool with proper lifecycle, $1/$2 parameterized queries, bulk insert for event ingestion, async context managers for connections, server-side cursors for large result sets, and transactions for multi-step operations. The task describes business requirements without naming these patterns.",
  "type": "weighted_checklist",
  "checklist": [
    {
      "name": "asyncpg pool with create_pool",
      "description": "Database setup uses 'await asyncpg.create_pool()' with min_size and max_size parameters, not individual connections per request or psycopg",
      "max_score": 14
    },
    {
      "name": "Pool lifecycle management",
      "description": "Pool is created in a startup handler (FastAPI lifespan or on_event) and closed in a shutdown handler -- not created at module import time",
      "max_score": 10
    },
    {
      "name": "Async context managers for connections",
      "description": "All database operations use 'async with pool.acquire() as conn:' context manager -- no bare acquire() calls without guaranteed release",
      "max_score": 12
    },
    {
      "name": "Bulk insert for event ingestion",
      "description": "Event batch ingestion uses copy_records_to_table or executemany for efficient bulk insert -- NOT individual INSERT statements in a loop",
      "max_score": 14
    },
    {
      "name": "$1/$2 parameterized queries",
      "description": "All queries use $1, $2, $N numbered placeholders with positional args -- no f-strings, .format(), string concatenation, or %s placeholders",
      "max_score": 12
    },
    {
      "name": "Transaction for daily summary generation",
      "description": "Daily summary generation wraps the read + upsert operations in 'async with conn.transaction():' for atomicity",
      "max_score": 10
    },
    {
      "name": "Server-side cursor for user activity",
      "description": "The user activity report function uses a cursor or streaming pattern (conn.cursor with async iteration or batched fetching) for large result sets -- not fetchall() which loads everything into memory",
      "max_score": 10
    },
    {
      "name": "DATABASE_URL from environment",
      "description": "Database connection string is read from an environment variable (os.getenv or os.environ), not hardcoded",
      "max_score": 8
    },
    {
      "name": "command_timeout on pool",
      "description": "Pool is configured with command_timeout or equivalent timeout to prevent queries from hanging indefinitely",
      "max_score": 5
    },
    {
      "name": "No SQL injection vectors",
      "description": "No dynamic SQL built with f-strings or string concatenation anywhere in the codebase -- all user inputs go through parameterized queries",
      "max_score": 5
    }
  ]
}
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
skills
postgresql-python-best-practices