Kedro helps you build production-ready data and analytics pipelines
Build a data processing application that executes a multi-step data transformation pipeline in sequential order.
Your application should implement the pipeline through the following functions:
@generates
def create_pipeline():
    """
    Create and return a data processing pipeline.

    Returns:
        A pipeline object containing all processing nodes
    """
    pass
def run_pipeline(pipeline, catalog, runner):
    """
    Execute the pipeline using the provided runner and catalog.

    Args:
        pipeline: The pipeline to execute
        catalog: Data catalog for loading and saving datasets
        runner: The runner to use for execution

    Returns:
        Execution results or output data
    """
    pass
def create_catalog():
    """
    Create and return a data catalog with all required datasets.

    Returns:
        A catalog object containing dataset definitions
    """
    pass

Provides data pipeline framework and execution capabilities.
Install with Tessl CLI
npx tessl i tessl/pypi-kedro