PostgreSQL patterns for Python with psycopg and asyncpg — connection pooling,
{
  "context": "Tests whether the agent uses psycopg_pool.ConnectionPool with proper config, %s parameterized queries, context managers for connections, transactions for atomic batch inserts and archival, COPY or executemany for bulk operations, and proper shutdown cleanup. The task describes business needs without naming patterns.",
  "type": "weighted_checklist",
  "checklist": [
    {
      "name": "psycopg_pool ConnectionPool",
      "description": "Database setup uses psycopg_pool.ConnectionPool with min_size, max_size, and timeout parameters -- not creating individual psycopg.connect() calls per request",
      "max_score": 14
    },
    {
      "name": "Context managers for connections",
      "description": "All database access uses 'with pool.connection() as conn:' or equivalent context manager pattern -- connections are never manually acquired without guaranteed return to pool",
      "max_score": 12
    },
    {
      "name": "%s parameterized queries",
      "description": "All queries use %s placeholders with tuple args for psycopg 3 -- no f-strings, .format(), or string concatenation for SQL values",
      "max_score": 12
    },
    {
      "name": "Transaction for batch event logging",
      "description": "The log_events function wraps the batch insert in 'with conn.transaction():' so all events in a batch either succeed or fail atomically",
      "max_score": 12
    },
    {
      "name": "Bulk insert for events",
      "description": "Batch event insertion uses executemany or COPY (cursor.copy()) for efficient bulk loading -- not individual INSERT statements in a loop",
      "max_score": 10
    },
    {
      "name": "Transaction for archive operation",
      "description": "The archive function uses a transaction to ensure events are inserted into archive AND deleted from events atomically -- no window where data exists in both or neither table",
      "max_score": 10
    },
    {
      "name": "Pool shutdown cleanup",
      "description": "Pool is closed on application shutdown using atexit.register(pool.close) or a Flask teardown/shutdown hook",
      "max_score": 8
    },
    {
      "name": "DATABASE_URL from environment",
      "description": "Connection string comes from os.getenv('DATABASE_URL') or os.environ, not hardcoded credentials",
      "max_score": 7
    },
    {
      "name": "Dict row factory",
      "description": "Query functions use psycopg.rows.dict_row for readable dict results instead of tuple indexing",
      "max_score": 7
    },
    {
      "name": "Batched archival to avoid long locks",
      "description": "Archive function processes events in batches (e.g., 1000-5000 at a time) rather than moving millions of rows in a single transaction that holds locks for too long",
      "max_score": 8
    }
  ]
}