
# Launchers

Parsl launchers are wrappers that modify user-submitted commands to work with specific execution environments and resource managers. They handle the details of launching worker processes across nodes and cores on different HPC systems and computing platforms.
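
Conceptually, a launcher is a callable that receives the user command together with the block geometry (`tasks_per_node`, `nodes_per_block`) and returns the command string that the provider will actually submit. A minimal sketch of that idea (illustrative toy classes mimicking `SimpleLauncher` and `WrappedLauncher`, not the real Parsl implementations):

```python
# Illustrative sketch of the launcher idea: a callable that wraps a command.
# These toy classes mimic SimpleLauncher and WrappedLauncher; they are NOT
# the actual Parsl classes.
class PassthroughLauncher:
    """Return the command unchanged (like SimpleLauncher)."""
    def __call__(self, command: str, tasks_per_node: int, nodes_per_block: int) -> str:
        return command

class PrefixLauncher:
    """Prepend a fixed prefix, e.g. a container runtime (like WrappedLauncher)."""
    def __init__(self, prepend: str):
        self.prepend = prepend

    def __call__(self, command: str, tasks_per_node: int, nodes_per_block: int) -> str:
        return f"{self.prepend} {command}"

launch = PrefixLauncher("docker run --rm myimage")
print(launch("worker --cores 4", tasks_per_node=1, nodes_per_block=1))
# → docker run --rm myimage worker --cores 4
```

The real launchers follow the same shape but emit shell fragments appropriate to their resource manager: an `srun`, `aprun`, or `jsrun` invocation, bash job control, and so on.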

## Capabilities

### SimpleLauncher

Basic launcher that returns commands unchanged, suitable for single-node applications and for MPI applications where the provider handles job allocation.

```python { .api }
class SimpleLauncher:
    def __init__(self, debug=True):
        """
        Basic launcher with no command modification.

        Parameters:
        - debug: Enable debug logging in generated scripts (default: True)

        Limitations:
        - Only supports a single node per block (warns if nodes_per_block > 1)
        """
```

**Usage Example:**

```python
from parsl.launchers import SimpleLauncher
from parsl.providers import LocalProvider

provider = LocalProvider(
    launcher=SimpleLauncher(debug=True)
)
```

### SingleNodeLauncher

Launches multiple parallel command invocations on a single node using bash job control, ideal for multi-core single-node systems.

```python { .api }
class SingleNodeLauncher:
    def __init__(self, debug=True, fail_on_any=False):
        """
        Single-node parallel launcher using bash job control.

        Parameters:
        - debug: Enable debug logging (default: True)
        - fail_on_any: If True, fail if any worker fails; if False, fail only if all workers fail

        Features:
        - Uses bash background processes and wait
        - Sets the CORES environment variable
        - Configurable failure semantics via fail_on_any
        """
```

**Usage Example:**

```python
from parsl.launchers import SingleNodeLauncher
from parsl.providers import LocalProvider, AWSProvider

# Local multi-core execution
local_provider = LocalProvider(
    launcher=SingleNodeLauncher(fail_on_any=True)
)

# AWS single-instance execution
aws_provider = AWSProvider(
    launcher=SingleNodeLauncher(debug=True)
)
```

### SrunLauncher

Uses SLURM's `srun` to launch workers across allocated nodes; this is the most common launcher for SLURM-based HPC systems.

```python { .api }
class SrunLauncher:
    def __init__(self, debug=True, overrides=''):
        """
        SLURM srun launcher for multi-node execution.

        Parameters:
        - debug: Enable debug logging (default: True)
        - overrides: Additional arguments passed to the srun command

        Features:
        - Uses SLURM environment variables
        - Single srun call with --ntasks for all workers
        - Integrates with the SLURM job allocation
        """
```

**Usage Example:**

```python
from parsl.launchers import SrunLauncher
from parsl.providers import SlurmProvider

slurm_provider = SlurmProvider(
    partition='compute',
    launcher=SrunLauncher(
        overrides='--constraint=haswell --qos=premium'
    ),
    nodes_per_block=2,
    walltime='01:00:00'
)
```

### SrunMPILauncher

Specialized launcher for MPI applications using multiple independent `srun` calls, providing an isolated MPI environment for each worker block.

```python { .api }
class SrunMPILauncher:
    def __init__(self, debug=True, overrides=''):
        """
        SLURM srun launcher optimized for MPI applications.

        Parameters:
        - debug: Enable debug logging (default: True)
        - overrides: Additional arguments passed to the srun command

        Features:
        - Independent srun calls for MPI environment setup
        - Handles complex node/task distributions
        - Uses the --exclusive flag when appropriate
        """
```

**Usage Example:**

```python
from parsl.launchers import SrunMPILauncher
from parsl.providers import SlurmProvider

mpi_provider = SlurmProvider(
    partition='mpi',
    launcher=SrunMPILauncher(
        overrides='--exclusive --ntasks-per-node=16'
    ),
    nodes_per_block=4,
    walltime='02:00:00'
)
```

### AprunLauncher

Cray-specific launcher using `aprun` for Cray supercomputing systems with ALPS (Application Level Placement Scheduler).

```python { .api }
class AprunLauncher:
    def __init__(self, debug=True, overrides=''):
        """
        Cray aprun launcher for Cray systems.

        Parameters:
        - debug: Enable debug logging (default: True)
        - overrides: Additional arguments passed to the aprun command

        Features:
        - Uses aprun -n for total tasks and -N for tasks per node
        - Single aprun call for all workers
        - Cray ALPS integration
        """
```

**Usage Example:**

```python
from parsl.launchers import AprunLauncher
from parsl.providers import TorqueProvider

cray_provider = TorqueProvider(
    launcher=AprunLauncher(
        overrides='-cc depth'
    ),
    nodes_per_block=2,
    walltime='01:00:00'
)
```

### JsrunLauncher

IBM-specific launcher using `jsrun` for IBM Power systems like Summit and Sierra.

```python { .api }
class JsrunLauncher:
    def __init__(self, debug=True, overrides=''):
        """
        IBM jsrun launcher for IBM Power systems.

        Parameters:
        - debug: Enable debug logging (default: True)
        - overrides: Additional arguments passed to the jsrun command

        Features:
        - Uses jsrun -n for total tasks and -r for tasks per node
        - Designed for IBM Power systems
        - LSF integration
        """
```

**Usage Example:**

```python
from parsl.launchers import JsrunLauncher
from parsl.providers import LSFProvider

summit_provider = LSFProvider(
    queue='batch',
    launcher=JsrunLauncher(
        overrides='-g 1 --smpiargs="none"'
    ),
    nodes_per_block=2,
    walltime='01:00:00'
)
```

### MpiExecLauncher

Uses `mpiexec` to launch workers across nodes, suitable for Intel MPI and MPICH environments with hostfile support.

```python { .api }
class MpiExecLauncher:
    def __init__(self, debug=True, bind_cmd='--bind-to', overrides=''):
        """
        MPI launcher using mpiexec with hostfile support.

        Parameters:
        - debug: Enable debug logging (default: True)
        - bind_cmd: CPU binding argument name (default: '--bind-to')
        - overrides: Additional arguments passed to mpiexec

        Features:
        - Uses hostfile from $PBS_NODEFILE or localhost
        - Supports CPU binding configuration
        - Works with Intel MPI and MPICH
        """
```

**Usage Example:**

```python
from parsl.launchers import MpiExecLauncher
from parsl.providers import PBSProProvider

pbs_provider = PBSProProvider(
    queue='regular',
    launcher=MpiExecLauncher(
        bind_cmd='--bind-to',
        overrides='--depth=4 --cc=depth'
    ),
    nodes_per_block=4,
    walltime='02:00:00'
)
```

### MpiRunLauncher

Uses Open MPI's `mpirun` to launch workers, providing a simpler setup than `MpiExecLauncher`.

```python { .api }
class MpiRunLauncher:
    def __init__(self, debug=True, bash_location='/bin/bash', overrides=''):
        """
        Open MPI mpirun launcher.

        Parameters:
        - debug: Enable debug logging (default: True)
        - bash_location: Path to bash executable (default: '/bin/bash')
        - overrides: Additional arguments passed to mpirun

        Features:
        - Open MPI-style mpirun launcher
        - Direct process count specification
        - Simpler than MpiExecLauncher
        """
```

**Usage Example:**

```python
from parsl.launchers import MpiRunLauncher
from parsl.providers import LocalProvider

openmpi_provider = LocalProvider(
    launcher=MpiRunLauncher(
        overrides='--oversubscribe'
    ),
    init_blocks=1,
    max_blocks=2
)
```

### GnuParallelLauncher

Uses GNU Parallel with SSH to distribute workers across nodes, suitable for heterogeneous clusters with SSH access.

```python { .api }
class GnuParallelLauncher:
    def __init__(self, debug=True):
        """
        GNU Parallel launcher with SSH distribution.

        Parameters:
        - debug: Enable debug logging (default: True)

        Prerequisites:
        - GNU Parallel installed
        - Passwordless SSH between nodes
        - $PBS_NODEFILE environment variable

        Features:
        - SSH-based node distribution
        - Parallel execution with job logging
        - Works with PBS-based systems
        """
```

**Usage Example:**

```python
from parsl.launchers import GnuParallelLauncher
from parsl.providers import TorqueProvider

parallel_provider = TorqueProvider(
    queue='parallel',
    launcher=GnuParallelLauncher(debug=True),
    nodes_per_block=4,
    walltime='01:00:00'
)
```

### WrappedLauncher

Flexible launcher that wraps commands with an arbitrary prefix command, useful for containerization, profiling, or environment setup.

```python { .api }
class WrappedLauncher:
    def __init__(self, prepend, debug=True):
        """
        Flexible command wrapper launcher.

        Parameters:
        - prepend: Command to prepend before the user command
        - debug: Enable debug logging (default: True)

        Features:
        - Arbitrary command prefixing
        - Useful for containers, profiling, environment setup
        - Ignores multi-node/multi-task configurations
        """
```

**Usage Example:**

```python
from parsl.launchers import WrappedLauncher
from parsl.providers import LocalProvider

# Container execution
container_provider = LocalProvider(
    launcher=WrappedLauncher('docker run --rm myimage')
)

# Profiling execution
profile_provider = LocalProvider(
    launcher=WrappedLauncher('time')
)

# Environment setup
env_provider = LocalProvider(
    launcher=WrappedLauncher('source activate myenv &&')
)
```

## Launcher Selection Guide

### By System Type

**SLURM Systems**: Use `SrunLauncher` for general workloads, `SrunMPILauncher` for concurrent MPI applications

**Cray Systems**: Use `AprunLauncher` with appropriate overrides

**IBM Power Systems**: Use `JsrunLauncher` for Summit/Sierra-class systems

**PBS/Torque Systems**: Use `MpiExecLauncher` or `GnuParallelLauncher`

**Local/Cloud Systems**: Use `SingleNodeLauncher` for multi-core workloads or `SimpleLauncher` for single-process workloads

### By Workload Type

**Single-Node Parallel**: `SingleNodeLauncher`

**Multi-Node Parallel**: System-appropriate launcher (`SrunLauncher`, `AprunLauncher`, etc.)

**MPI Applications**: `SimpleLauncher` (if the MPI launch is handled separately) or `SrunMPILauncher`

**Containerized Apps**: `WrappedLauncher` with container commands

**Special Requirements**: `WrappedLauncher` with custom commands
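
The guidance above amounts to a lookup from system type to launcher class. A site configuration helper could encode it directly; a hypothetical sketch (the mapping mirrors the guide, and `pick_launcher` is not part of Parsl):

```python
# Hypothetical helper mirroring the selection guide above; not a Parsl API.
LAUNCHER_BY_SYSTEM = {
    "slurm": "SrunLauncher",
    "slurm-mpi": "SrunMPILauncher",
    "cray": "AprunLauncher",
    "ibm-power": "JsrunLauncher",
    "pbs": "MpiExecLauncher",
    "local": "SingleNodeLauncher",
}

def pick_launcher(system: str) -> str:
    """Return the launcher class name recommended for a system type."""
    try:
        return LAUNCHER_BY_SYSTEM[system]
    except KeyError:
        raise ValueError(f"no launcher recommendation for system type {system!r}")

print(pick_launcher("ibm-power"))  # → JsrunLauncher
```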


## Error Handling

```python { .api }
class BadLauncher(Exception):
    """Raised when inappropriate launcher types are provided."""
```

All launchers validate their configuration and raise `BadLauncher` for incompatible settings.
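
The validation pattern itself is straightforward; a self-contained sketch (illustrative only — the stand-in classes below are not Parsl's, and the real check lives inside the library):

```python
# Self-contained sketch of the validation pattern. The classes here stand in
# for Parsl's own Launcher base class and BadLauncher exception.
class BadLauncher(Exception):
    """Raised when an inappropriate launcher type is provided."""

class Launcher:
    """Stand-in base class for launchers."""
    def __call__(self, command, tasks_per_node, nodes_per_block):
        raise NotImplementedError

def validate_launcher(obj):
    """Reject configuration values that are not Launcher instances."""
    if not isinstance(obj, Launcher):
        raise BadLauncher(f"{obj!r} is not a Launcher")
    return obj

try:
    validate_launcher("srun")  # a plain string, not a Launcher instance
except BadLauncher as exc:
    print(f"configuration rejected: {exc}")
```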


## Common Parameters

Most launchers support these common parameters:

- **debug**: Enable verbose logging in generated scripts
- **overrides**: Additional command-line arguments passed to the underlying system launcher
- **Multi-node awareness**: Launchers apply the `tasks_per_node` and `nodes_per_block` settings appropriately

## Integration with Providers

Launchers work with execution providers to handle the complete job submission and execution pipeline:

```python
from parsl.config import Config
from parsl.executors import HighThroughputExecutor
from parsl.providers import SlurmProvider
from parsl.launchers import SrunLauncher

config = Config(
    executors=[
        HighThroughputExecutor(
            provider=SlurmProvider(
                partition='compute',
                launcher=SrunLauncher(overrides='--constraint=haswell'),
                nodes_per_block=2,
                walltime='01:00:00'
            )
        )
    ]
)
```

This creates a complete execution pipeline: Config → Executor → Provider → Launcher → Worker processes.