tessl/pypi-parsl

Parallel scripting library for executing workflows across diverse computing resources


Launchers

Parsl launchers are wrappers that modify user-submitted commands to work with specific execution environments and resource managers. They handle the details of launching worker processes across nodes and cores on different HPC systems and computing platforms.

Capabilities

SimpleLauncher

Basic launcher that returns commands unchanged, suitable for single-node applications and MPI applications where the provider handles job allocation.

class SimpleLauncher:
    def __init__(self, debug=True):
        """
        Basic launcher with no command modification.
        
        Parameters:
        - debug: Enable debug logging in generated scripts (default: True)
        
        Limitations:
        - Only supports single node per block (warns if nodes_per_block > 1)
        """

Usage Example:

from parsl.launchers import SimpleLauncher
from parsl.providers import LocalProvider

provider = LocalProvider(
    launcher=SimpleLauncher(debug=True)
)
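Conceptually, a parsl launcher is a callable that receives the worker command together with the tasks-per-node and nodes-per-block counts and returns a shell string. The stand-in below is an illustrative sketch of SimpleLauncher's pass-through behavior, not the real parsl class:

```python
# Illustrative stand-in, not parsl's implementation: launchers are callables
# taking (command, tasks_per_node, nodes_per_block) and returning a shell string.
class PassThroughLauncher:
    def __call__(self, command, tasks_per_node, nodes_per_block):
        # SimpleLauncher-style behavior: return the command unchanged
        return command

launcher = PassThroughLauncher()
print(launcher("process_worker_pool.py", 1, 1))
# → process_worker_pool.py
```

Because the command is untouched, any parallelism must come from elsewhere, such as the application's own MPI launch.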

SingleNodeLauncher

Launches multiple parallel command invocations on a single node using bash job control, ideal for multi-core single-node systems.

class SingleNodeLauncher:
    def __init__(self, debug=True, fail_on_any=False):
        """
        Single-node parallel launcher using bash job control.
        
        Parameters:
        - debug: Enable debug logging (default: True)
        - fail_on_any: If True, fail if any worker fails; if False, fail only if all workers fail
        
        Features:
        - Uses bash background processes and wait
        - Sets CORES environment variable
        - Sophisticated error handling
        """

Usage Example:

from parsl.launchers import SingleNodeLauncher
from parsl.providers import LocalProvider, AWSProvider

# Local multi-core execution
local_provider = LocalProvider(
    launcher=SingleNodeLauncher(fail_on_any=True)
)

# AWS single-instance execution
aws_provider = AWSProvider(
    launcher=SingleNodeLauncher(debug=True)
)
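The bash job-control pattern SingleNodeLauncher relies on can be sketched as follows. This is an illustrative reconstruction of the generated script, not parsl's exact output:

```python
# Illustrative reconstruction (not parsl's exact generated script): N copies
# of the command run as bash background jobs, then `wait` joins them all.
def single_node_wrap(command, tasks_per_node):
    lines = [f"export CORES={tasks_per_node}"]                # advertised core count
    lines += [f"{command} &" for _ in range(tasks_per_node)]  # background workers
    lines.append("wait")                                      # block until all exit
    return "\n".join(lines)

print(single_node_wrap("worker.sh", 2))
```

Running this prints a script that sets CORES=2, starts two background copies of worker.sh, and waits for both.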

SrunLauncher

Uses SLURM's srun to launch workers across allocated nodes, the most common launcher for SLURM-based HPC systems.

class SrunLauncher:
    def __init__(self, debug=True, overrides=''):
        """
        SLURM srun launcher for multi-node execution.
        
        Parameters:
        - debug: Enable debug logging (default: True)
        - overrides: Additional arguments passed to srun command
        
        Features:
        - Uses SLURM environment variables
        - Single srun call with --ntasks for all workers
        - Integrates with SLURM job allocation
        """

Usage Example:

from parsl.launchers import SrunLauncher
from parsl.providers import SlurmProvider

slurm_provider = SlurmProvider(
    partition='compute',
    launcher=SrunLauncher(
        overrides='--constraint=haswell --qos=premium'
    ),
    nodes_per_block=2,
    walltime='01:00:00'
)
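The transformation SrunLauncher applies can be sketched as a single srun call whose --ntasks covers every worker in the block. This reconstruction is illustrative, not parsl's exact generated command:

```python
# Illustrative reconstruction (not parsl's exact output): one srun call
# launches tasks_per_node * nodes_per_block workers across the allocation.
def srun_wrap(command, tasks_per_node, nodes_per_block, overrides=""):
    ntasks = tasks_per_node * nodes_per_block
    parts = ["srun", f"--ntasks={ntasks}"]
    if overrides:
        parts.append(overrides)   # extra srun arguments pass through verbatim
    parts.append(command)
    return " ".join(parts)

print(srun_wrap("worker.sh", 2, 2, overrides="--qos=premium"))
# → srun --ntasks=4 --qos=premium worker.sh
```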

SrunMPILauncher

Specialized launcher for MPI applications using multiple independent srun calls, providing isolated MPI environments for each worker block.

class SrunMPILauncher:
    def __init__(self, debug=True, overrides=''):
        """
        SLURM srun launcher optimized for MPI applications.
        
        Parameters:
        - debug: Enable debug logging (default: True)
        - overrides: Additional arguments passed to srun command
        
        Features:
        - Independent srun calls for MPI environment setup
        - Handles complex node/task distributions
        - Uses --exclusive flag when appropriate
        """

Usage Example:

from parsl.launchers import SrunMPILauncher
from parsl.providers import SlurmProvider

mpi_provider = SlurmProvider(
    partition='mpi',
    launcher=SrunMPILauncher(
        overrides='--exclusive --ntasks-per-node=16'
    ),
    nodes_per_block=4,
    walltime='02:00:00'
)

AprunLauncher

Cray-specific launcher using aprun for Cray supercomputing systems with ALPS (Application Level Placement Scheduler).

class AprunLauncher:
    def __init__(self, debug=True, overrides=''):
        """
        Cray aprun launcher for Cray systems.
        
        Parameters:
        - debug: Enable debug logging (default: True)
        - overrides: Additional arguments passed to aprun command
        
        Features:
        - Uses aprun -n for total tasks and -N for tasks per node
        - Single aprun call for all workers
        - Cray ALPS integration
        """

Usage Example:

from parsl.launchers import AprunLauncher
from parsl.providers import TorqueProvider

cray_provider = TorqueProvider(
    launcher=AprunLauncher(
        overrides='-cc depth'
    ),
    nodes_per_block=2,
    walltime='01:00:00'
)

JsrunLauncher

IBM-specific launcher using jsrun for IBM Power systems like Summit and Sierra.

class JsrunLauncher:
    def __init__(self, debug=True, overrides=''):
        """
        IBM jsrun launcher for IBM Power systems.
        
        Parameters:
        - debug: Enable debug logging (default: True)
        - overrides: Additional arguments passed to jsrun command
        
        Features:
        - Uses jsrun -n for total tasks and -r for tasks per node
        - Designed for IBM Power systems
        - LSF integration
        """

Usage Example:

from parsl.launchers import JsrunLauncher
from parsl.providers import LSFProvider

summit_provider = LSFProvider(
    queue='batch',
    launcher=JsrunLauncher(
        overrides='-g 1 --smpiargs="none"'
    ),
    nodes_per_block=2,
    walltime='01:00:00'
)

MpiExecLauncher

Uses mpiexec to launch workers across nodes, suitable for Intel MPI and MPICH environments with hostfile support.

class MpiExecLauncher:
    def __init__(self, debug=True, bind_cmd='--bind-to', overrides=''):
        """
        MPI launcher using mpiexec with hostfile support.
        
        Parameters:
        - debug: Enable debug logging (default: True)
        - bind_cmd: CPU binding argument name (default: '--bind-to')
        - overrides: Additional arguments passed to mpiexec
        
        Features:
        - Uses hostfile from $PBS_NODEFILE or localhost
        - Supports CPU binding configuration
        - Works with Intel MPI and MPICH
        """

Usage Example:

from parsl.launchers import MpiExecLauncher
from parsl.providers import PBSProProvider

pbs_provider = PBSProProvider(
    queue='regular',
    launcher=MpiExecLauncher(
        bind_cmd='--bind-to',
        overrides='--depth=4 --cc=depth'
    ),
    nodes_per_block=4,
    walltime='02:00:00'
)
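The shape of the command MpiExecLauncher builds can be sketched as below: mpiexec takes its node list from a hostfile (e.g. $PBS_NODEFILE) and applies the configured binding. The exact flags parsl emits may differ; this is an illustrative reconstruction:

```python
# Illustrative reconstruction (not parsl's exact output): a single mpiexec
# call distributes workers over the nodes listed in the hostfile.
def mpiexec_wrap(command, tasks_per_node, nodes_per_block,
                 bind_cmd="--bind-to", overrides=""):
    ntasks = tasks_per_node * nodes_per_block
    parts = ["mpiexec", bind_cmd, "none", f"-n {ntasks}",
             "-hostfile $PBS_NODEFILE"]
    if overrides:
        parts.append(overrides)   # extra mpiexec arguments pass through verbatim
    parts.append(command)
    return " ".join(parts)

print(mpiexec_wrap("worker.sh", 4, 2))
# → mpiexec --bind-to none -n 8 -hostfile $PBS_NODEFILE worker.sh
```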

MpiRunLauncher

Uses OpenMPI's mpirun to launch workers, providing simpler setup compared to MpiExecLauncher.

class MpiRunLauncher:
    def __init__(self, debug=True, bash_location='/bin/bash', overrides=''):
        """
        OpenMPI mpirun launcher.
        
        Parameters:
        - debug: Enable debug logging (default: True)
        - bash_location: Path to bash executable (default: '/bin/bash')
        - overrides: Additional arguments passed to mpirun
        
        Features:
        - OpenMPI-style mpirun launcher
        - Direct process count specification
        - Simpler than MpiExecLauncher
        """

Usage Example:

from parsl.launchers import MpiRunLauncher
from parsl.providers import LocalProvider

openmpi_provider = LocalProvider(
    launcher=MpiRunLauncher(
        overrides='--oversubscribe'
    ),
    init_blocks=1,
    max_blocks=2
)

GnuParallelLauncher

Uses GNU Parallel with SSH to distribute workers across nodes, suitable for heterogeneous clusters with SSH access.

class GnuParallelLauncher:
    def __init__(self, debug=True):
        """
        GNU Parallel launcher with SSH distribution.
        
        Parameters:
        - debug: Enable debug logging (default: True)
        
        Prerequisites:
        - GNU Parallel installed
        - Passwordless SSH between nodes
        - $PBS_NODEFILE environment variable
        
        Features:
        - SSH-based node distribution
        - Parallel execution with job logging
        - Works with PBS-based systems
        """

Usage Example:

from parsl.launchers import GnuParallelLauncher
from parsl.providers import TorqueProvider

parallel_provider = TorqueProvider(
    queue='parallel',
    launcher=GnuParallelLauncher(debug=True),
    nodes_per_block=4,
    walltime='01:00:00'
)

WrappedLauncher

Flexible launcher that wraps commands with arbitrary prefix commands, useful for containerization, profiling, or environment setup.

class WrappedLauncher:
    def __init__(self, prepend, debug=True):
        """
        Flexible command wrapper launcher.
        
        Parameters:
        - prepend: Command to prepend before the user command
        - debug: Enable debug logging (default: True)
        
        Features:
        - Arbitrary command prefixing
        - Useful for containers, profiling, environment setup
        - Ignores multi-node/multi-task configurations
        """

Usage Example:

from parsl.launchers import WrappedLauncher
from parsl.providers import LocalProvider

# Container execution
container_provider = LocalProvider(
    launcher=WrappedLauncher('docker run --rm myimage')  # no -it: batch jobs have no TTY
)

# Profiling execution  
profile_provider = LocalProvider(
    launcher=WrappedLauncher('time')
)

# Environment setup
env_provider = LocalProvider(
    launcher=WrappedLauncher('source activate myenv &&')
)
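WrappedLauncher's behavior amounts to placing the configured prefix in front of whatever command it is given. A minimal stand-in, not the parsl implementation:

```python
# Illustrative stand-in (not parsl's implementation): the prepend string is
# joined in front of the user command to form the final shell line.
class PrefixLauncher:
    def __init__(self, prepend):
        self.prepend = prepend

    def __call__(self, command, tasks_per_node, nodes_per_block):
        # tasks/nodes are ignored, mirroring WrappedLauncher's single-task focus
        return f"{self.prepend} {command}"

launcher = PrefixLauncher("docker run --rm myimage")
print(launcher("worker.sh", 1, 1))
# → docker run --rm myimage worker.sh
```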

Launcher Selection Guide

By System Type

  • SLURM systems: SrunLauncher for general workloads, SrunMPILauncher for concurrent MPI applications
  • Cray systems: AprunLauncher with appropriate overrides
  • IBM Power systems: JsrunLauncher for Summit/Sierra-class systems
  • PBS/Torque systems: MpiExecLauncher or GnuParallelLauncher
  • Local/cloud systems: SingleNodeLauncher for multi-core workloads, SimpleLauncher for single-process workloads

By Workload Type

  • Single-node parallel: SingleNodeLauncher
  • Multi-node parallel: system-appropriate launcher (SrunLauncher, AprunLauncher, etc.)
  • MPI applications: SimpleLauncher (when the MPI launch is handled separately) or SrunMPILauncher
  • Containerized apps: WrappedLauncher with container commands
  • Special requirements: WrappedLauncher with custom commands

Error Handling

class BadLauncher(Exception):
    """Raised when inappropriate launcher types are provided."""

Providers validate their launcher configuration and raise BadLauncher when given an incompatible or invalid launcher.
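The kind of validation involved can be sketched as below. BadLauncher's exact check lives in parsl's provider code; the helper shown here is illustrative:

```python
class BadLauncher(Exception):
    """Raised when an inappropriate launcher type is provided."""

def check_launcher(launcher):
    # Illustrative check (not parsl's source): launchers must be callable
    if not callable(launcher):
        raise BadLauncher(f"{launcher!r} is not a valid launcher")
    return launcher

try:
    check_launcher("srun")   # a bare string, not a launcher object
except BadLauncher as exc:
    print("rejected:", exc)
# → rejected: 'srun' is not a valid launcher
```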

Common Parameters

Most launchers support these common parameters:

  • debug: Enable verbose logging in generated scripts
  • overrides: Additional command-line arguments for system launchers
  • Multi-node awareness: Launchers handle tasks_per_node and nodes_per_block parameters appropriately

Integration with Providers

Launchers work with execution providers to handle the complete job submission and execution pipeline:

from parsl.config import Config
from parsl.executors import HighThroughputExecutor
from parsl.providers import SlurmProvider
from parsl.launchers import SrunLauncher

config = Config(
    executors=[
        HighThroughputExecutor(
            provider=SlurmProvider(
                partition='compute',
                launcher=SrunLauncher(overrides='--constraint=haswell'),
                nodes_per_block=2,
                walltime='01:00:00'
            )
        )
    ]
)

This creates a complete execution pipeline: Config → Executor → Provider → Launcher → Worker processes.

Install with Tessl CLI

npx tessl i tessl/pypi-parsl
