# Utility Functions and Advanced Components


f90nml exposes low-level utility functions and specialized classes for advanced users who need fine-grained control over parsing, type conversion, and array indexing. These components form the foundation of its Fortran namelist processing.


## Capabilities


### Type Conversion Functions


Functions for converting between Fortran and Python data types, handling the nuances of Fortran syntax and representation.


```python { .api }
def pyfloat(v_str):
    """
    Convert string representation of Fortran floating point to Python float.

    Handles Fortran-specific exponential notation including 'D' and 'E' formats,
    and implicit exponent signs.

    Args:
        v_str: str - String representation of Fortran floating point number

    Returns:
        float: Equivalent Python floating point value

    Raises:
        ValueError: If string cannot be converted to float
    """

def pycomplex(v_str):
    """
    Convert string representation of Fortran complex number to Python complex.

    Parses Fortran complex format: (real_part, imaginary_part)

    Args:
        v_str: str - String in format "(x.x, y.y)" representing complex number

    Returns:
        complex: Equivalent Python complex number

    Raises:
        ValueError: If string is not in valid complex format
    """

def pybool(v_str, strict_logical=True):
    """
    Convert string representation of Fortran logical to Python boolean.

    Supports various Fortran logical formats: .true./.false., .t./.f., true/false, t/f

    Args:
        v_str: str - String representation of Fortran logical value
        strict_logical: bool - If True, requires Fortran logical format (default: True)

    Returns:
        bool: Equivalent Python boolean value

    Raises:
        ValueError: If string is not a valid logical constant
    """

def pystr(v_str):
    """
    Convert string representation of Fortran string to Python string.

    Handles quoted strings and escaped quote characters within strings.

    Args:
        v_str: str - Fortran string with or without delimiters

    Returns:
        str: Processed Python string with escaped quotes resolved
    """
```


**Usage Examples:**


```python
import f90nml.fpy as fpy

# Convert Fortran floating point formats
float_val = fpy.pyfloat('1.23E+4')  # Scientific notation
float_val = fpy.pyfloat('1.23D-4')  # Fortran double precision notation
float_val = fpy.pyfloat('1.23+4')   # Implicit exponent sign

# Convert Fortran complex numbers
complex_val = fpy.pycomplex('(1.5, -2.3)')  # Standard complex format

# Convert Fortran logical values
bool_val = fpy.pybool('.true.')  # Standard Fortran format
bool_val = fpy.pybool('.t.')     # Abbreviated format
bool_val = fpy.pybool('true', strict_logical=False)  # Relaxed parsing

# Convert Fortran strings
str_val = fpy.pystr("'Hello World'")     # Single-quoted string
str_val = fpy.pystr('"He said ""Hi"""')  # Escaped quotes
```


### Multidimensional Array Indexing


Iterator class for traversing multidimensional arrays using Fortran's column-major indexing convention.


```python { .api }
class FIndex:
    """
    Column-major multidimensional index iterator.

    Provides iteration over multidimensional array indices following
    Fortran's column-major storage convention, essential for correctly
    handling complex array assignments in namelists.
    """

    def __init__(self, bounds, first=None):
        """
        Initialize index iterator with dimension bounds.

        Args:
            bounds: list of tuples - [(start, end, step), ...] for each dimension
            first: int, optional - Global starting index override
        """

    def __iter__(self):
        """Return iterator object."""

    def __next__(self):
        """
        Get next index tuple in column-major order.

        Returns:
            tuple: Index coordinates for current position

        Raises:
            StopIteration: When all indices have been traversed
        """
```


**Usage Examples:**


```python
from f90nml.findex import FIndex

# Create iterator for 2D array with bounds (1:3, 1:2)
bounds = [(1, 4, 1), (1, 3, 1)]  # (start, end+1, step)
idx_iter = FIndex(bounds)

# Iterate through indices in column-major order
for indices in idx_iter:
    print(f"Array index: {indices}")
# Output: (1, 1), (2, 1), (3, 1), (1, 2), (2, 2), (3, 2)

# Handle arrays with custom starting indices
bounds = [(10, 13, 1), (5, 8, 1)]  # Array(10:12, 5:7)
idx_iter = FIndex(bounds)
for indices in idx_iter:
    print(f"Custom index: {indices}")
# Output: (10, 5), (11, 5), (12, 5), (10, 6), (11, 6), (12, 6), (10, 7), (11, 7), (12, 7)

# Handle step sizes for strided access
bounds = [(1, 10, 2), (1, 6, 2)]  # Every other element
idx_iter = FIndex(bounds)
for indices in idx_iter:
    print(f"Strided index: {indices}")
# Output: (1, 1), (3, 1), (5, 1), (7, 1), (9, 1), (1, 3), (3, 3), (5, 3), (7, 3), (9, 3), (1, 5), ...
```


### Low-Level Tokenization


Tokenizer class for lexical analysis of Fortran namelist syntax, providing fine-grained control over parsing behavior.


```python { .api }
class Tokenizer:
    """
    Fortran namelist tokenizer for lexical analysis.

    Provides low-level tokenization of Fortran namelist source code,
    handling string parsing, comment recognition, and punctuation
    according to Fortran language rules.
    """

    def __init__(self):
        """Initialize tokenizer with default configuration."""

    def parse(self, line):
        """
        Tokenize a line of Fortran namelist source.

        Args:
            line: str - Line of Fortran source code to tokenize

        Returns:
            list: List of token strings from the input line
        """
```


**Usage Examples:**


```python
from f90nml.tokenizer import Tokenizer

# Create tokenizer
tokenizer = Tokenizer()

# Tokenize namelist lines
line = "&config_nml input='data.nc' steps=100 /"
tokens = tokenizer.parse(line)
print(tokens)
# Output: ['&', 'config_nml', 'input', '=', "'data.nc'", 'steps', '=', '100', '/']

# Handle complex expressions
line = "matrix(1:3, 2:4) = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0"
tokens = tokenizer.parse(line)
print(tokens)
# Output: ['matrix', '(', '1', ':', '3', ',', '2', ':', '4', ')', '=', '1.0', ',', '2.0', ...]

# Configure comment tokens
tokenizer.comment_tokens = '!#'  # Support both ! and # comments
line = "value = 42 # This is a comment"
tokens = tokenizer.parse(line)
print(tokens)
# Output: ['value', '=', '42', ' # This is a comment']
```


## Advanced Integration Patterns


These utilities are primarily used internally by f90nml but can be leveraged for advanced use cases:


```python
import f90nml
from f90nml.fpy import pyfloat
from f90nml.findex import FIndex
from f90nml.tokenizer import Tokenizer

# Custom parser with specialized type conversion
parser = f90nml.Parser()
parser.strict_logical = False  # Enable relaxed boolean parsing

# Direct access to type conversion for validation
# (user_input is an application-supplied string)
try:
    value = pyfloat(user_input)
    print(f"Valid float: {value}")
except ValueError:
    print("Invalid floating point format")

# Array iteration for custom processing
bounds = [(1, 4, 1), (1, 3, 1)]
for idx in FIndex(bounds):
    # Process array element at index idx (process_element is application-defined)
    process_element(idx)

# Custom tokenization for preprocessing
tokenizer = Tokenizer()
tokenizer.comment_tokens = '!#%'  # Custom comment characters
for line in source_lines:  # source_lines is an application-supplied iterable of lines
    tokens = tokenizer.parse(line)
    # Process tokens before parsing (preprocess is application-defined)
    processed_tokens = preprocess(tokens)
```


## Integration with Core Functionality


These utilities integrate seamlessly with the main f90nml functions:

- **Type conversion functions** are automatically used by `read()` and `reads()` during parsing
- **FIndex** is used internally for multidimensional array assignments and access
- **Tokenizer** performs the lexical analysis phase of all parsing operations
- **Parser configuration** can be customized to use different type conversion behaviors (see the sketch below)
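
As a sketch of the last point above, the snippet below shows how a `Parser` setting reaches the type conversion layer during parsing: with `strict_logical=False`, a bare `t` in the input is converted by `pybool()` to a boolean instead of being kept as the string `'t'`. The namelist text and the group/variable names are invented for illustration, and the `Parser.reads()` string-parsing method is assumed to be available (recent f90nml releases).

```python
import f90nml

# Hypothetical namelist fragment using a bare 't' instead of '.true.'
src = "&physics use_gravity = t /"

parser = f90nml.Parser()
parser.strict_logical = False  # relaxed logical parsing via pybool()
nml = parser.reads(src)

print(nml['physics']['use_gravity'])  # expected: True under relaxed parsing
```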


For most users, the high-level functions (`read()`, `write()`, `patch()`) provide sufficient functionality. These utilities are valuable for:

- Building custom parsing tools
- Implementing domain-specific namelist processors
- Debugging parsing issues (see the sketch below)
- Extending f90nml functionality
- Integration with other Fortran processing tools
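
For the debugging use case above, one minimal sketch is a helper that prints the token stream of every line in a troublesome namelist file, which makes it easy to see where the lexer produces something unexpected. The helper and the file name are illustrative only, not part of f90nml.

```python
from f90nml.tokenizer import Tokenizer

def dump_tokens(path):
    """Print the token stream of each line in a namelist file (debugging aid)."""
    tokenizer = Tokenizer()
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            # Strip the trailing newline so it does not appear in the token list
            tokens = tokenizer.parse(line.rstrip('\n'))
            print(f"{lineno:4d}: {tokens}")

# Hypothetical usage:
# dump_tokens('problem_case.nml')
```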