tessl install tessl/pypi-kedro@1.1.0

Kedro helps you build production-ready data and analytics pipelines.
Agent Success: 98% (agent success rate when using this tile)
Improvement: 1.32x (agent success rate improvement when using this tile compared to baseline)
Baseline: 74% (agent success rate without this tile)
A data processing system that executes computational tasks in parallel while safely sharing intermediate data between worker processes.
Build a data processing pipeline with three sequential nodes, then execute it using parallel processing with memory sharing:
@generates
def double_values(data: list) -> list:
    """
    Double each value in the input list.

    Args:
        data: List of numeric values

    Returns:
        List with each value doubled
    """
    pass


def filter_values(data: list) -> list:
    """
    Filter values greater than 5.

    Args:
        data: List of numeric values

    Returns:
        List containing only values > 5
    """
    pass


def calculate_stats(data: list) -> dict:
    """
    Calculate sum and mean of values.

    Args:
        data: List of numeric values

    Returns:
        Dictionary with 'sum' and 'mean' keys
    """
    pass


def create_pipeline():
    """
    Create a pipeline with three nodes for transformation, filtering, and aggregation.

    Returns:
        A pipeline object configured for the data processing workflow
    """
    pass


def run_pipeline(input_data: list) -> dict:
    """
    Execute the pipeline in parallel mode with shared memory datasets.

    Args:
        input_data: List of numeric values to process

    Returns:
        Dictionary with summary statistics containing 'sum' and 'mean'
    """
    pass

Provides pipeline construction and parallel execution capabilities.
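One way these stubs could be filled in with Kedro's own building blocks is sketched below. It assumes a recent Kedro release (0.19+) where node, pipeline, DataCatalog, MemoryDataset, and ParallelRunner are importable as shown; the dataset names (input_data, doubled, filtered, summary_stats) and node names are illustrative choices, not part of the tile's contract, and the exact contents returned by runner.run() can vary between Kedro versions.

from kedro.io import DataCatalog, MemoryDataset
from kedro.pipeline import node, pipeline
from kedro.runner import ParallelRunner


def double_values(data: list) -> list:
    # Double each value in the input list.
    return [x * 2 for x in data]


def filter_values(data: list) -> list:
    # Keep only values greater than 5.
    return [x for x in data if x > 5]


def calculate_stats(data: list) -> dict:
    # Compute sum and mean of the remaining values.
    total = sum(data)
    return {"sum": total, "mean": total / len(data) if data else 0}


def create_pipeline():
    # Three sequential nodes: transform -> filter -> aggregate.
    return pipeline(
        [
            node(double_values, inputs="input_data", outputs="doubled", name="double"),
            node(filter_values, inputs="doubled", outputs="filtered", name="filter"),
            node(calculate_stats, inputs="filtered", outputs="summary_stats", name="stats"),
        ]
    )


def run_pipeline(input_data: list) -> dict:
    # Register only the pipeline input; intermediate and final outputs are left
    # unregistered so ParallelRunner can back them with shared-memory datasets.
    catalog = DataCatalog({"input_data": MemoryDataset(data=input_data)})
    result = ParallelRunner().run(create_pipeline(), catalog)
    return result["summary_stats"]


if __name__ == "__main__":
    # Worker processes re-import this module, so guard the entry point;
    # node functions must live at module level to be picklable.
    print(run_pipeline([1, 2, 3, 4, 5]))  # e.g. {'sum': 24, 'mean': 8.0}

Leaving the intermediate datasets unregistered is deliberate: ParallelRunner creates multiprocessing-safe datasets for unregistered outputs, whereas nodes writing to externally created in-memory datasets are not supported when running across processes.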