Python client library for Modal, a serverless cloud computing platform that enables developers to run Python code in the cloud with on-demand access to compute resources.
A serverless pipeline scheduler that executes machine learning workloads with appropriate compute resources based on task requirements.
Build a pipeline scheduler that can run different types of ML tasks (data preprocessing, model training, and inference) on cloud infrastructure. Each task type requires different compute resources:
- Data preprocessing: CPU-optimized resources
- Model training: GPU resources
- Inference: lightweight resources
The scheduler should configure appropriate compute resources for each task type and execute a simple pipeline that runs all three tasks in sequence.
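One way to organize the per-task resource requirements described above is a simple lookup table. This is a minimal sketch; the task-type keys and the specific CPU, memory, and GPU values are illustrative assumptions, not part of the spec:

```python
# Illustrative resource map for the three task types.
# The numeric values and the "T4" GPU type are assumptions for the sketch.
RESOURCES = {
    "preprocess": {"cpu": 4.0, "memory": 8192, "gpu": None},   # CPU-optimized
    "train":      {"cpu": 2.0, "memory": 16384, "gpu": "T4"},  # GPU-backed
    "inference":  {"cpu": 1.0, "memory": 2048, "gpu": None},   # lightweight
}

def resources_for(task_type: str) -> dict:
    """Look up the compute configuration for a given task type."""
    return RESOURCES[task_type]
```

On Modal, a map like this could feed the arguments of each function's configuration, so that each stage of the pipeline is declared with the resources its task type needs.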
Your implementation must:
Each task function must accept a task_name parameter and return a completion message.
def preprocess_data(task_name: str) -> str:
    """
    Runs data preprocessing task with CPU-optimized resources.

    Args:
        task_name: Name of the preprocessing task

    Returns:
        Completion message
    """
    pass

def train_model(task_name: str) -> str:
    """
    Runs model training task with GPU resources.

    Args:
        task_name: Name of the training task

    Returns:
        Completion message
    """
    pass

def run_inference(task_name: str) -> str:
    """
    Runs inference task with lightweight resources.

    Args:
        task_name: Name of the inference task

    Returns:
        Completion message
    """
    pass

def run_pipeline() -> None:
    """
    Orchestrates the complete ML pipeline by running preprocessing,
    training, and inference tasks in sequence.
    """
    pass

Provides serverless cloud compute with configurable resources.
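The stubs above could be filled in along these lines. This is a sketch, not the reference solution: the completion-message wording, the example task names, and the Modal decorator arguments mentioned in the comments are all assumptions:

```python
def preprocess_data(task_name: str) -> str:
    # On Modal this would be decorated, e.g. @app.function(cpu=4.0);
    # the decorator arguments here are assumed values, not spec requirements.
    return f"Preprocessing task '{task_name}' completed"

def train_model(task_name: str) -> str:
    # e.g. @app.function(gpu="T4") for GPU-backed training (assumed GPU type)
    return f"Training task '{task_name}' completed"

def run_inference(task_name: str) -> str:
    # e.g. @app.function(cpu=1.0, memory=2048) for lightweight inference
    return f"Inference task '{task_name}' completed"

def run_pipeline() -> None:
    # Run the three stages in sequence, printing each completion message.
    # The task names are illustrative placeholders.
    for step, name in [(preprocess_data, "clean-dataset"),
                       (train_model, "train-classifier"),
                       (run_inference, "batch-predict")]:
        print(step(name))
```

When deployed on Modal, each stage would typically be invoked with `.remote()` so it runs in the cloud with the resources declared on its decorator.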
Install with Tessl CLI
npx tessl i tessl/pypi-modal
docs
evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
scenario-6
scenario-7
scenario-8
scenario-9
scenario-10