tessl/pypi-aws-lambda-powertools

Workspace: tessl
Visibility: Public
Describes: pypipkg:pypi/aws-lambda-powertools@3.19.x

tessl install tessl/pypi-aws-lambda-powertools@3.19.0

Comprehensive developer toolkit implementing serverless best practices for AWS Lambda functions in Python

| Metric        | Value | Description                                                          |
| ------------- | ----- | -------------------------------------------------------------------- |
| Agent success | 89%   | Agent success rate when using this tile                               |
| Improvement   | 1.22x | Agent success rate improvement when using this tile versus baseline   |
| Baseline      | 73%   | Agent success rate without this tile                                  |

evals/scenario-10/task.md

Large CSV Data Processor

A Lambda function that processes large CSV files stored in S3 by reading specific rows without loading the entire file into memory.

Capabilities

Process large CSV files from S3 efficiently

Your Lambda function receives events containing S3 bucket and key information for CSV files that may be larger than the available Lambda memory. The function must read and process these files efficiently.

  • Given an S3 object path "s3://my-bucket/data.csv.gz" containing a gzipped CSV with headers and 1000 rows, the function reads row 500 and returns its data as a dictionary @test
  • Given an S3 object "s3://my-bucket/small.csv" with 10 rows, seeking to row 5, reading it, then seeking back to row 2 and reading it returns the correct data for both rows @test
  • Given an S3 object "s3://my-bucket/data.csv.gz" (gzipped CSV), the function opens it as a stream, decompresses it on-the-fly, and processes it as CSV without downloading the entire file @test
  • Given a CSV file with columns "id", "name", "value", reading the first 3 data rows (after header) returns a list of 3 dictionaries with the correct keys and values @test
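The pattern these tests exercise maps onto the Powertools streaming utility. A minimal sketch, assuming the bucket, key, and target row from the first scenario above; this is illustrative, not the generated implementation:

```python
from aws_lambda_powertools.utilities.streaming import S3Object
from aws_lambda_powertools.utilities.streaming.transformations import (
    CsvTransform,
    GzipTransform,
)

# Open the object as a stream; nothing is downloaded up front.
s3 = S3Object(bucket="my-bucket", key="data.csv.gz")

# Decompress on the fly, then parse as CSV; CsvTransform yields one
# dict per data row, keyed by the header.
rows = s3.transform([GzipTransform(), CsvTransform()])

for index, row in enumerate(rows):
    if index == 500:  # target row from the first scenario
        print(row)
        break
```

Iteration stops as soon as the target row is reached, so memory use stays flat regardless of file size.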

Implementation

The Lambda handler should accept events in the following format:

```
{
    "bucket": "my-bucket",
    "key": "path/to/file.csv.gz",
    "row_index": 100  # optional, which row to read (0-indexed after header)
}
```

Your handler should:

  1. Open the S3 object as a seekable stream
  2. Apply appropriate transformations (decompression, CSV parsing)
  3. Read the requested row(s) efficiently
  4. Return the data as a dictionary or list of dictionaries
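A sketch of a handler wiring these four steps together. The gzip detection by key suffix, the `row_index` default of 0, and the out-of-range error are illustrative choices, not part of the spec:

```python
from aws_lambda_powertools.utilities.streaming import S3Object
from aws_lambda_powertools.utilities.streaming.transformations import (
    CsvTransform,
    GzipTransform,
)


def lambda_handler(event: dict, context) -> dict:
    # 1. Open the S3 object as a seekable stream.
    s3 = S3Object(bucket=event["bucket"], key=event["key"])

    # 2. Apply transformations; decompression must run before CSV parsing.
    transforms = [CsvTransform()]
    if event["key"].endswith(".gz"):
        transforms.insert(0, GzipTransform())
    rows = s3.transform(transforms)

    # 3. Iterate only as far as the requested row.
    target = event.get("row_index", 0)
    for index, row in enumerate(rows):
        if index == target:
            # 4. CsvTransform yields each row as a dict keyed by header.
            return row

    raise IndexError(f"row_index {target} out of range")
```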

@generates

API

```python
def lambda_handler(event: dict, context) -> dict:
    """
    Process CSV files from S3 efficiently using streaming.

    Args:
        event: Lambda event containing 'bucket', 'key', and optional 'row_index'
        context: Lambda context object

    Returns:
        Dictionary containing the processed row data
    """
    pass
```
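Once implemented, the handler can be exercised locally with a scenario event; passing `None` for the context is a common shortcut outside Lambda:

```python
event = {"bucket": "my-bucket", "key": "data.csv.gz", "row_index": 500}
result = lambda_handler(event, None)
```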

Dependencies { .dependencies }

aws-lambda-powertools { .dependency }

Provides S3 streaming capabilities with seekable IO and transformation support.
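The seekable IO is what makes the seek-back scenario above workable. A rough sketch of the raw, untransformed stream, assuming the standard Python I/O semantics the Powertools docs describe for `S3Object`:

```python
import io

from aws_lambda_powertools.utilities.streaming import S3Object

s3 = S3Object(bucket="my-bucket", key="small.csv")

header = s3.readline()      # consume the header row
mark = s3.tell()            # remember the byte offset of the first data row
s3.readline()               # read ahead one row
s3.seek(mark, io.SEEK_SET)  # rewind; the next read repeats that row
row = s3.readline()
```

Note that `seek` works in bytes, not rows, so row-level seeking (row 5, then back to row 2) would mean recording `tell()` offsets as rows are read and seeking back to a recorded offset.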