# Utility Functions

PySD's utils module provides utility functions for data manipulation, file handling, coordinate processing, and model management. These functions support the internal workings of PySD models and offer convenient helpers for model developers.

## Capabilities

### Array and Coordinate Manipulation

Functions for working with xarray DataArrays, coordinate systems, and data reshaping.
```python { .api }
def xrsplit(array):
    """
    Split an array into a list of all its components.

    Parameters:
    - array: xarray.DataArray - Array to split

    Returns:
    list: List of 0-dimensional xarray.DataArrays with coordinates
    """

def rearrange(data, dims, coords):
    """
    Returns an xarray.DataArray object with the given coords and dims.

    Parameters:
    - data: float or xarray.DataArray - Input data to rearrange
    - dims: list - Ordered list of the dimensions
    - coords: dict - Dictionary of dimension names as keys with their values

    Returns:
    xarray.DataArray: Rearranged data array
    """

def compute_shape(coords, reshape_len=None, py_name=""):
    """
    Computes the shape implied by a coords dictionary.

    Parameters:
    - coords: dict - Ordered dictionary of dimension names as keys with their values
    - reshape_len: int or None - Number of dimensions of the output shape
    - py_name: str - Name to print if an error is raised

    Returns:
    list: Shape of the ordered dictionary or desired table/vector
    """
```

### Model Data and File Management

Functions for loading model components, handling file I/O, and managing model data structures.
```python { .api }
def load_model_data(root, model_name):
    """
    Used for models split into several files. Loads the subscripts and
    modules dictionaries.

    Parameters:
    - root: pathlib.Path - Path to the model file
    - model_name: str - Name of the model without the file extension

    Returns:
    tuple: (subscripts dict, modules dict)
    """

def load_modules(module_name, module_content, work_dir, submodules):
    """
    Used to load model modules from the main model file when split_views=True.

    Parameters:
    - module_name: str - Name of the module to load
    - module_content: dict or list - Content of the module
    - work_dir: pathlib.Path - Path to the module file
    - submodules: list - List updated at every recursive iteration

    Returns:
    str: String representation of the modules/submodules to execute
    """

def load_outputs(file_name, transpose=False, columns=None, encoding=None):
    """
    Load an outputs file in CSV or tab-delimited format.

    Parameters:
    - file_name: str or pathlib.Path - Output file to read (must be .csv or .tab)
    - transpose: bool - If True, reads a transposed outputs file (default False)
    - columns: list or None - List of column names to load (default None for all)
    - encoding: str or None - Encoding used to read the file (default None)

    Returns:
    pandas.DataFrame: DataFrame with the output values
    """
```

### String and Name Processing

Functions for handling model element names, case-insensitive searches, and return column processing.
```python { .api }
def get_return_elements(return_columns, namespace):
    """
    Takes a list of return elements in Vensim's format, Varname[Sub1, Sub2],
    and returns the model elements and their addresses.

    Parameters:
    - return_columns: list - List of return column strings
    - namespace: dict - Model namespace dictionary

    Returns:
    tuple: (capture_elements list, return_addresses dict)
    """

def get_key_and_value_by_insensitive_key_or_value(key, dict):
    """
    Search for a key or value in a dictionary, ignoring case.

    Parameters:
    - key: str - Key or value to look for in the dictionary
    - dict: dict - Dictionary to search in

    Returns:
    tuple: (real_key, real_value) or (None, None) if not found
    """
```
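
The case-insensitive lookup can be illustrated with a minimal, self-contained sketch. `insensitive_lookup` and the sample `namespace` below are hypothetical stand-ins for the documented behaviour of `get_key_and_value_by_insensitive_key_or_value`, not PySD's actual implementation:

```python
# Illustrative sketch of the documented behaviour; PySD's actual
# implementation may differ.
def insensitive_lookup(key, mapping):
    """Return (real_key, real_value) matching key or value, ignoring case."""
    lowered = key.lower()
    for real_key, real_value in mapping.items():
        if real_key.lower() == lowered or str(real_value).lower() == lowered:
            return real_key, real_value
    return None, None

namespace = {"Birth Rate": "birth_rate", "Population": "population"}
print(insensitive_lookup("birth rate", namespace))  # ('Birth Rate', 'birth_rate')
print(insensitive_lookup("POPULATION", namespace))  # ('Population', 'population')
print(insensitive_lookup("missing", namespace))     # (None, None)
```

This lets callers pass `return_columns` in whatever capitalization the original model used.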

### System and Time Utilities

Functions for system operations, time handling, and file encoding detection.
```python { .api }
def get_current_computer_time():
    """
    Returns the current machine time. Needed to mock machine time in tests.

    Returns:
    datetime.datetime: Current machine time
    """

def detect_encoding(filename):
    """
    Detects the encoding of a file.

    Parameters:
    - filename: str - Name of the file whose encoding should be detected

    Returns:
    str: The encoding of the file
    """

def print_objects_format(object_set, text):
    """
    Return a printable version of the variables in object_set with the
    header text.

    Parameters:
    - object_set: set - Set of objects to format
    - text: str - Header text

    Returns:
    str: Formatted string representation
    """
```
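
As a rough illustration of what a `print_objects_format`-style helper does, here is a hypothetical stand-in, `format_objects`; the exact layout PySD produces may differ:

```python
# Hypothetical sketch of a print_objects_format-style helper; the exact
# layout produced by PySD may differ.
def format_objects(object_set, text):
    """Return a printable listing of object_set under the given header."""
    lines = [f"{text} ({len(object_set)}):"]
    # Sort for deterministic output, since sets are unordered.
    lines += [f"\t{name}" for name in sorted(object_set)]
    return "\n".join(lines)

print(format_objects({"population", "birth_rate"}, "Selected variables"))
```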

### Utility Classes

Helper classes for dependency tracking, progress monitoring, and dimension management.
```python { .api }
class Dependencies:
    """
    Representation of variable dependencies.

    Attributes:
    - c_vars: set - Set of all selected model variables
    - d_deps: dict - Dictionary of dependencies needed to run vars and modules
    - s_deps: set - Set of stateful objects to update when integrating
    """

class ProgressBar:
    """
    Progress bar for integration.

    Methods:
    - __init__(max_value=None): Initialize the progress bar
    - update(): Update the progress bar
    - finish(): Finish the progress bar
    """

class UniqueDims:
    """
    Helper class to create unique dimension names for data_vars that share a
    dimension name but have different coords in xarray Datasets.

    Methods:
    - __init__(original_dim_name): Initialize with the original dimension name
    - name_new_dim(dim_name, coords): Return a new or existing dimension name
    - is_new(coords): Check whether coords is already in the unique_dims list
    """
```
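
The core idea behind `UniqueDims` can be sketched in plain Python: hand out one dimension name per distinct coordinate list and reuse it when the same coords reappear. The class name, numbering scheme, and internals below are assumptions for illustration, not PySD's actual code:

```python
# Illustrative sketch of the UniqueDims idea; names and numbering scheme
# here are assumptions, not PySD's actual output.
class UniqueDimsSketch:
    def __init__(self, original_dim_name):
        self.original = original_dim_name
        self._seen = {}   # maps coords (as a tuple) -> assigned dimension name
        self._count = 0

    def name_new_dim(self, dim_name, coords):
        # dim_name is kept only to mirror the documented signature.
        key = tuple(coords)
        if key not in self._seen:
            self._count += 1
            self._seen[key] = f"{self.original}_{self._count}"
        return self._seen[key]

ud = UniqueDimsSketch("region")
print(ud.name_new_dim("region", ["north", "south"]))  # region_1
print(ud.name_new_dim("region", ["east", "west"]))    # region_2
print(ud.name_new_dim("region", ["north", "south"]))  # region_1 (reused)
```

This matters because an xarray Dataset cannot hold two data_vars that use the same dimension name with different coordinate values.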

#### Usage Examples

Working with coordinates and arrays:
```python
from pysd import utils
import xarray as xr

# Split an array into its scalar components
data = xr.DataArray([1, 2, 3], dims=['x'], coords={'x': ['a', 'b', 'c']})
components = utils.xrsplit(data)

# Broadcast a scalar value over new dimensions
dims = ['time', 'variable']
coords = {'time': [0, 1, 2], 'variable': ['x', 'y']}
reshaped = utils.rearrange(1.0, dims, coords)

# Compute the shape implied by the coordinates
shape = utils.compute_shape(coords)
```

Loading model outputs:
```python
# Load a CSV output file
outputs = utils.load_outputs('model_results.csv')

# Load specific columns with a given encoding
selected_outputs = utils.load_outputs(
    'results.csv',
    columns=['Time', 'Population', 'Birth Rate'],
    encoding='utf-8'
)

# Load a transposed output file
transposed = utils.load_outputs('results.tab', transpose=True)
```

## Types
```python { .api }
# Utility type definitions
Dependencies: type  # Class with c_vars, d_deps and s_deps attributes
ProgressBar: type   # Progress-monitoring utility class
UniqueDims: type    # Dimension-name management utility class
```