# Command Line Interface and Tools

PySD provides command-line tools for model translation, batch execution, benchmarking, and file processing, enabling automated workflows and integration with larger data analysis pipelines.

## Capabilities

### Command Line Interface

Main CLI entry point for PySD operations including model translation and batch simulation.

```python { .api }
def main(args):
    """
    Main CLI entry point.

    Parameters:
    - args: list - Command line arguments

    Available commands:
    - translate: Convert Vensim/XMILE models to Python
    - run: Execute model simulations
    - help: Display command help

    Examples:
    - pysd translate model.mdl
    - pysd run model.py --final-time 100
    - pysd --help
    """
```

#### Usage Examples

Command line model translation:
```bash
# Translate a Vensim model
pysd translate population_model.mdl

# Translate with options
pysd translate large_model.mdl --split-views --encoding utf-8

# Translate an XMILE model
pysd translate stella_model.xml
```

Command line simulation:
```bash
# Run a translated model
pysd run population_model.py

# Run with custom parameters
pysd run model.py --final-time 50 --time-step 0.25

# Run with a parameter file
pysd run model.py --params-file config.json
```
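
The layout of `config.json` is not specified in this section; a minimal sketch, assuming the file simply maps model parameter names to override values (the parameter names below are illustrative, not part of any particular model):

```json
{
  "growth_rate": 0.03,
  "initial_population": 1000
}
```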

### Internal CLI Functions

Lower-level functions used by the CLI interface.

```python { .api }
def load(model_file, data_files, missing_values, split_views, **kwargs):
    """
    CLI model loading function.

    Parameters:
    - model_file: str - Path to model file (.mdl, .xml, or .py)
    - data_files: list - External data files
    - missing_values: str - Missing value handling strategy
    - split_views: bool - Whether to split views for large models
    - **kwargs: Additional loading options

    Returns:
    Model: Loaded PySD model object
    """

def create_configuration(model, options):
    """
    Create run configuration from CLI options.

    Parameters:
    - model: Model - PySD model object
    - options: dict - CLI configuration options

    Returns:
    dict: Simulation configuration parameters
    """
```
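
To illustrate the role of `create_configuration`, the sketch below shows how CLI flags such as `--final-time` can be parsed and mapped onto run-configuration keys. This is a standalone illustration using `argparse`; the flag names follow the examples in this section, but the real PySD parser may differ in detail.

```python
import argparse

# Illustrative sketch: map CLI-style flags onto simulation
# configuration keys, mirroring what create_configuration produces.
parser = argparse.ArgumentParser(prog="pysd")
parser.add_argument("--final-time", type=float, default=100.0)
parser.add_argument("--time-step", type=float, default=1.0)

args = parser.parse_args(["--final-time", "50", "--time-step", "0.25"])

# argparse converts --final-time to the attribute args.final_time
config = {"final_time": args.final_time, "time_step": args.time_step}
```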

### Benchmarking and Testing Tools

Tools for model validation, performance testing, and output comparison.

```python { .api }
def runner(model_file, canonical_file=None, data_files=None,
           missing_values="warning", split_views=False, **kwargs):
    """
    Run model and compare with canonical output.

    Parameters:
    - model_file: str - Path to model file
    - canonical_file: str or None - Reference output file for comparison
    - data_files: list or None - External data files
    - missing_values: str - Missing value handling
    - split_views: bool - Split model views
    - **kwargs: Additional run parameters (final_time, time_step, etc.)

    Returns:
    pandas.DataFrame: Simulation results

    Raises:
    AssertionError: If output doesn't match canonical within tolerance
    """

def assert_frames_close(actual, expected, rtol=1e-3, atol=1e-6,
                        check_names=True, check_dtype=False):
    """
    Compare simulation outputs with specified tolerance.

    Parameters:
    - actual: pandas.DataFrame - Actual simulation results
    - expected: pandas.DataFrame - Expected reference results
    - rtol: float - Relative tolerance for numerical comparison
    - atol: float - Absolute tolerance for numerical comparison
    - check_names: bool - Whether to check column names match
    - check_dtype: bool - Whether to check data types match

    Raises:
    AssertionError: If DataFrames don't match within tolerance
    """

def assert_allclose(x, y, rtol=1e-5, atol=1e-5):
    """
    Compare arrays with tolerance.

    Parameters:
    - x: array-like - First array
    - y: array-like - Second array
    - rtol: float - Relative tolerance
    - atol: float - Absolute tolerance

    Raises:
    AssertionError: If arrays don't match within tolerance
    """
```

#### Usage Examples

```python
from pysd.tools.benchmarking import runner, assert_frames_close
import pandas as pd

# Run model and compare with reference
results = runner(
    'test_model.mdl',
    canonical_file='reference_output.csv',
    final_time=50,
    time_step=0.25
)

# Manual comparison of results
actual_results = pd.read_csv('actual_output.csv')
expected_results = pd.read_csv('expected_output.csv')

assert_frames_close(
    actual_results,
    expected_results,
    rtol=1e-3,  # 0.1% relative tolerance
    atol=1e-6   # Absolute tolerance
)
```

### NetCDF File Processing

Tools for processing netCDF simulation outputs and converting between formats.

```python { .api }
class NCFile:
    """
    NetCDF file processing class.

    Handles reading, processing, and converting PySD simulation outputs
    stored in netCDF format. Supports various output formats and
    data extraction operations.

    Methods:
    - __init__(filename, parallel=False) - Initialize with netCDF file
    - to_text_file(outfile="result.tab", sep="\t") - Convert to text format
    - get_varnames() - Get list of variable names in file
    - get_coords() - Get coordinate information
    - get_data(varname) - Extract specific variable data
    - close() - Close file and release resources
    """
```

#### Usage Examples

```python
from pysd.tools.ncfiles import NCFile

# Open netCDF simulation output
nc_file = NCFile('simulation_results.nc')

# Convert to tab-delimited text file
nc_file.to_text_file('results.tab', sep='\t')

# Convert to CSV format
nc_file.to_text_file('results.csv', sep=',')

# Extract specific variable data
population_data = nc_file.get_data('Population')
time_coords = nc_file.get_coords()['time']

# Get available variables
variables = nc_file.get_varnames()
print(f"Available variables: {variables}")

# Clean up
nc_file.close()
```

### Automated Testing Workflows

Integration with testing frameworks for model validation.

```python
from pysd.tools.benchmarking import runner

def test_population_model():
    """Test population model against known output."""
    results = runner(
        'population_model.mdl',
        canonical_file='population_reference.csv',
        final_time=100
    )

    # Additional custom checks
    final_population = results['Population'].iloc[-1]
    assert 9000 < final_population < 11000, "Population should stabilize around 10000"

def test_economic_model_sensitivity():
    """Test model sensitivity to parameter changes."""
    base_results = runner('economic_model.mdl', final_time=50)

    # Test with a different parameter value
    modified_results = runner(
        'economic_model.mdl',
        final_time=50,
        params={'growth_rate': 0.05}
    )

    # Compare outcomes
    base_gdp = base_results['GDP'].iloc[-1]
    modified_gdp = modified_results['GDP'].iloc[-1]

    assert modified_gdp > base_gdp, "Higher growth rate should increase GDP"
```

### Batch Processing

Scripts and utilities for processing multiple models or parameter sets.

```python
import shutil
from pathlib import Path

import pysd

def batch_translate_models(input_dir, output_dir):
    """Translate all models in a directory."""
    input_path = Path(input_dir)
    output_path = Path(output_dir)
    output_path.mkdir(exist_ok=True)

    for model_file in input_path.glob('*.mdl'):
        try:
            print(f"Translating {model_file.name}...")
            pysd.read_vensim(str(model_file))
            # read_vensim writes the translated .py next to the source
            # file; move it into the output directory
            translated = model_file.with_suffix('.py')
            shutil.move(str(translated), str(output_path / translated.name))
            print(f"Successfully translated {model_file.name}")
        except Exception as e:
            print(f"Error translating {model_file.name}: {e}")

def batch_run_scenarios(model_file, scenarios, output_dir):
    """Run model with multiple parameter scenarios."""
    model = pysd.load(model_file)
    output_path = Path(output_dir)
    output_path.mkdir(exist_ok=True)

    for i, scenario in enumerate(scenarios):
        print(f"Running scenario {i+1}/{len(scenarios)}...")
        results = model.run(params=scenario)
        results.to_csv(output_path / f'scenario_{i+1}.csv')
        model.reload()  # Reset for next scenario
```
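
Scenario lists for a function like `batch_run_scenarios` can be generated programmatically rather than written by hand. A small sketch building a full parameter grid with `itertools.product` (the helper name and parameter names are our own, for illustration):

```python
from itertools import product

def make_scenarios(param_grid):
    """Expand a dict of parameter-value lists into one dict per combination."""
    names = list(param_grid)
    return [dict(zip(names, values))
            for values in product(*(param_grid[n] for n in names))]

# Illustrative parameter names; substitute your model's variables
scenarios = make_scenarios({
    'growth_rate': [0.02, 0.05],
    'initial_population': [1000, 5000],
})
print(len(scenarios))  # 4 combinations
```

Each element of `scenarios` can then be passed directly as the `params` argument of `model.run`.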

### Integration with External Tools

PySD CLI tools integrate with external data analysis and workflow tools:

#### Make/Build Systems
```makefile
# Makefile for automated model processing
translate: model.mdl
	pysd translate model.mdl

run: model.py
	pysd run model.py --final-time 100 --output results.csv

test: model.py reference.csv
	python -m pytest test_model.py
```

#### Shell Scripts
```bash
#!/bin/bash
# Batch process multiple models
for model in models/*.mdl; do
    echo "Processing $model..."
    pysd translate "$model"
    base=$(basename "$model" .mdl)
    pysd run "models/${base}.py" --output "results/${base}_results.csv"
done
```

#### Python Scripts
```python
#!/usr/bin/env python
"""Automated model validation pipeline."""

import sys
from pathlib import Path
from pysd.tools.benchmarking import runner

def validate_model(model_path, reference_path):
    """Validate a single model against its reference output."""
    try:
        runner(str(model_path), canonical_file=str(reference_path))
        print(f"✓ {model_path.name} passed validation")
        return True
    except AssertionError as e:
        print(f"✗ {model_path.name} failed validation: {e}")
        return False

if __name__ == "__main__":
    models_dir = Path("models")
    references_dir = Path("references")

    success_count = 0
    total_count = 0

    for model_file in models_dir.glob("*.mdl"):
        reference_file = references_dir / f"{model_file.stem}_reference.csv"
        if reference_file.exists():
            total_count += 1
            if validate_model(model_file, reference_file):
                success_count += 1

    print(f"\nValidation Results: {success_count}/{total_count} models passed")
    sys.exit(0 if success_count == total_count else 1)
```

### Error Handling and Diagnostics

CLI tools provide comprehensive error reporting:

- **Translation errors**: Syntax issues in source models
- **Runtime errors**: Parameter or data issues during simulation
- **Comparison errors**: Output validation failures
- **File errors**: Missing or corrupted data files

```python
import logging

from pysd.tools.benchmarking import runner

# Configure logging for detailed diagnostics
logging.basicConfig(level=logging.DEBUG)

try:
    results = runner('problematic_model.mdl', canonical_file='reference.csv')
except Exception as e:
    logging.error(f"Model execution failed: {e}")
    # Additional diagnostic information is available in the logs
```

### Performance Monitoring

Tools for monitoring and optimizing model performance:

```python
import time
from pysd.tools.benchmarking import runner

def benchmark_model(model_file, iterations=10):
    """Benchmark model execution time."""
    times = []

    for _ in range(iterations):
        # perf_counter is preferred over time.time for interval timing
        start_time = time.perf_counter()
        runner(model_file, final_time=100)
        end_time = time.perf_counter()
        times.append(end_time - start_time)

    avg_time = sum(times) / len(times)
    print(f"Average execution time: {avg_time:.2f} seconds")
    print(f"Min: {min(times):.2f}s, Max: {max(times):.2f}s")

    return times
```
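
The timing list returned by `benchmark_model` can be summarized with the standard library alone. A small helper (the function name is our own) computing the mean and sample standard deviation:

```python
import statistics

def summarize_times(times):
    """Return mean and sample standard deviation of timing runs."""
    mean = statistics.mean(times)
    # stdev requires at least two data points
    stdev = statistics.stdev(times) if len(times) > 1 else 0.0
    return mean, stdev

mean, stdev = summarize_times([1.0, 1.2, 0.8, 1.0])
print(f"{mean:.2f}s ± {stdev:.2f}s")
```

Reporting the spread alongside the average helps distinguish genuinely slow models from noisy measurement environments.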