Load data from databases to dataframes, the fastest way.
Score: 86
Quality: pending (does it follow best practices?)
Impact: 86% (1.04x average score across 10 eval scenarios)
Build a database performance benchmark tool that efficiently loads and compares large datasets from a database to measure memory usage and execution time.
```python
@generates
def load_dataset(connection_string: str, query: str) -> tuple:
    """
    Load a dataset from a database and track memory usage.

    Args:
        connection_string: Database connection string
        query: SQL query to execute

    Returns:
        A tuple of (dataframe, peak_memory_mb) where:
        - dataframe: The loaded data as a pandas DataFrame
        - peak_memory_mb: Peak memory usage in megabytes during loading
    """
    pass
```
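A runnable sketch of how `load_dataset` could be filled in. It uses Python's `tracemalloc` to approximate peak memory and `pandas.read_sql` over `sqlite3` as the loading backend; in an optimized version, connectorx's `cx.read_sql(connection_string, query)` would take pandas' place. For this sketch the connection string is treated as a SQLite file path, and the demo table `t` and its schema are hypothetical, added only so the example runs end to end.

```python
import os
import sqlite3
import tempfile
import tracemalloc

import pandas as pd


def load_dataset(connection_string: str, query: str) -> tuple:
    """Load a query result into a DataFrame, reporting peak memory in MB."""
    tracemalloc.start()
    with sqlite3.connect(connection_string) as conn:
        # connectorx equivalent: df = cx.read_sql(connection_string, query)
        df = pd.read_sql(query, conn)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return df, peak / (1024 * 1024)


# Build a small throwaway SQLite database so the sketch is self-contained.
fd, db_path = tempfile.mkstemp(suffix=".db")
os.close(fd)
with sqlite3.connect(db_path) as conn:
    conn.execute("CREATE TABLE t (id INTEGER, val REAL)")
    conn.executemany("INSERT INTO t VALUES (?, ?)",
                     [(i, i * 0.5) for i in range(1000)])

df, peak_mb = load_dataset(db_path, "SELECT * FROM t")
```

Note that `tracemalloc` only sees allocations made through Python's allocator, so the figure understates memory held by native database drivers.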
```python
def compare_performance(connection_string: str, query: str) -> dict:
    """
    Compare execution time between standard and optimized data loading approaches.

    Args:
        connection_string: Database connection string
        query: SQL query to execute (should return at least 100K rows for meaningful comparison)

    Returns:
        A dictionary containing:
        {
            'standard_time_seconds': float,
            'optimized_time_seconds': float,
            'speedup_factor': float  # standard_time / optimized_time
        }
    """
    pass
```

Provides high-performance database-to-dataframe loading with memory optimization.
Provides standard dataframe operations and pandas.read_sql for comparison baseline.
Provides memory usage tracking capabilities.
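The `compare_performance` contract above can be sketched under the same assumptions: `pandas.read_sql` over `sqlite3` as the standard path, and connectorx's `cx.read_sql` as the optimized path when the library is installed (the sketch falls back to re-running the baseline otherwise, so the speedup figure is only meaningful with connectorx present). The connection string is again treated as a SQLite file path, and the demo table `t` is hypothetical.

```python
import os
import sqlite3
import tempfile
import time

import pandas as pd

try:
    import connectorx as cx  # optimized loader; optional for this sketch
except ImportError:
    cx = None


def compare_performance(connection_string: str, query: str) -> dict:
    """Time pandas.read_sql against connectorx for the same query."""
    start = time.perf_counter()
    with sqlite3.connect(connection_string) as conn:
        pd.read_sql(query, conn)  # standard pandas baseline
    standard = time.perf_counter() - start

    start = time.perf_counter()
    if cx is not None:
        # connectorx takes a connection URI, e.g. sqlite:///abs/path.db
        cx.read_sql(f"sqlite://{connection_string}", query)
    else:
        with sqlite3.connect(connection_string) as conn:
            pd.read_sql(query, conn)  # fallback: re-run the baseline
    optimized = time.perf_counter() - start

    return {
        "standard_time_seconds": standard,
        "optimized_time_seconds": optimized,
        "speedup_factor": standard / optimized,
    }


# Small throwaway database; a real benchmark would use 100K+ rows.
fd, db_path = tempfile.mkstemp(suffix=".db")
os.close(fd)
with sqlite3.connect(db_path) as conn:
    conn.execute("CREATE TABLE t (id INTEGER, val REAL)")
    conn.executemany("INSERT INTO t VALUES (?, ?)",
                     [(i, i * 0.5) for i in range(5000)])

result = compare_performance(db_path, "SELECT * FROM t")
```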
Install with Tessl CLI
npx tessl i tessl/pypi-connectorx
evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
scenario-6
scenario-7
scenario-8
scenario-9
scenario-10