# Toolz

A comprehensive Python library providing list processing tools and functional utilities. Toolz implements functional programming patterns inspired by languages like Clojure, offering three main modules: itertoolz for operations on iterables, functoolz for higher-order functions, and dicttoolz for dictionary operations.

## Package Information

- **Package Name**: toolz
- **Language**: Python
- **Installation**: `pip install toolz`
- **Python Requirements**: >=3.8

## Core Imports

```python
import toolz
```

Module-specific imports:

```python
from toolz import groupby, map, filter, compose, merge
from toolz.itertoolz import unique, take, partition
from toolz.functoolz import curry, pipe, memoize
from toolz.dicttoolz import assoc, get_in, valmap
```

Curried import (all functions automatically curried):

```python
import toolz.curried as toolz
```

## Basic Usage

```python
import toolz
from toolz import groupby, pipe, curry, assoc

# Group data by a key function
names = ['Alice', 'Bob', 'Charlie', 'Dan', 'Edith', 'Frank']
grouped = groupby(len, names)
# {3: ['Bob', 'Dan'], 5: ['Alice', 'Edith', 'Frank'], 7: ['Charlie']}

# Function composition with pipe
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
result = pipe(
    data,
    lambda x: filter(lambda n: n % 2 == 0, x),  # even numbers
    lambda x: map(lambda n: n * 2, x),          # double them
    list                                        # convert to list
)
# [4, 8, 12, 16, 20]

# Dictionary operations
person = {'name': 'Alice', 'age': 30}
updated = assoc(person, 'city', 'New York')
# {'name': 'Alice', 'age': 30, 'city': 'New York'}

# Curried functions for partial application
from toolz.curried import map, filter
double = map(lambda x: x * 2)
evens = filter(lambda x: x % 2 == 0)

pipeline = toolz.compose(list, double, evens)
result = pipeline([1, 2, 3, 4, 5, 6])
# [4, 8, 12]
```

## Architecture

Toolz follows functional programming principles with three core design patterns:

- **Immutable Operations**: Functions return new data structures without modifying inputs
- **Composability**: Functions work seamlessly together in data processing pipelines
- **Lazy Evaluation**: Many functions return iterators for memory efficiency

The library is organized into logical modules:

- **itertoolz**: Iterator/sequence operations (filtering, grouping, partitioning)
- **functoolz**: Function composition and higher-order utilities
- **dicttoolz**: Dictionary manipulation and nested access
- **recipes**: Higher-level compositions of core functions
- **curried**: Automatic partial application versions of all functions

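The lazy-evaluation point can be seen directly: because many itertoolz functions return iterators, they can consume unbounded streams. A minimal sketch:

```python
import itertools

from toolz import take, unique

# unique and take both return lazy iterators, so they can
# consume an infinite generator without exhausting memory
evens = (n * 2 for n in itertools.count())
first_five = list(take(5, unique(evens)))
# [0, 2, 4, 6, 8]
```
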
## Capabilities

### Iterator Operations

Comprehensive sequence processing including filtering, grouping, partitioning, and transformation operations. These functions work with any iterable and form the backbone of data processing pipelines.

```python { .api }
def groupby(key, seq): ...
def unique(seq, key=None): ...
def take(n, seq): ...
def partition(n, seq, pad=no_pad): ...
def frequencies(seq): ...
def merge_sorted(*seqs, **kwargs): ...
```

[Iterator Operations](./itertoolz.md)

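A short sketch of a few of these operations in action:

```python
from toolz import frequencies, partition, take, unique

# take: first n elements of any iterable (lazy)
first_three = list(take(3, range(100)))
# [0, 1, 2]

# unique: drop duplicates, preserving first-seen order
deduped = list(unique([1, 2, 1, 3, 2]))
# [1, 2, 3]

# partition: chunk a sequence into tuples of length n
chunks = list(partition(2, [1, 2, 3, 4, 5, 6]))
# [(1, 2), (3, 4), (5, 6)]

# frequencies: count occurrences of each element
counts = frequencies('abracadabra')
# {'a': 5, 'b': 2, 'r': 2, 'c': 1, 'd': 1}
```
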
### Function Composition

Higher-order functions for composing, currying, and transforming functions. Enables elegant functional programming patterns and pipeline creation.

```python { .api }
def compose(*funcs): ...
def pipe(data, *funcs): ...
def curry(*args, **kwargs): ...
def memoize(func, cache=None, key=None): ...
def thread_first(val, *forms): ...
def juxt(*funcs): ...
```

[Function Composition](./functoolz.md)

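A brief sketch of how these combinators relate to each other:

```python
from operator import add, mul

from toolz import compose, juxt, memoize, pipe, thread_first

inc = lambda x: x + 1
double = lambda x: x * 2

# compose applies right-to-left; pipe threads a value left-to-right
composed = compose(double, inc)(3)  # double(inc(3)) -> 8
piped = pipe(3, inc, double)        # same result: 8

# juxt calls several functions with the same arguments
stats = juxt(min, max, sum)([3, 1, 4])
# (1, 4, 8)

# thread_first threads a value through (func, *args) forms
threaded = thread_first(1, (add, 4), (mul, 2))
# (1 + 4) * 2 -> 10

# memoize caches results of pure functions
@memoize
def square(x):
    return x * x

cached = square(9)  # 81; repeated calls hit the cache
```
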
### Dictionary Operations

Immutable dictionary manipulation including merging, filtering, mapping, and nested access operations. All operations return new dictionaries without modifying inputs.

```python { .api }
def merge(*dicts, **kwargs): ...
def assoc(d, key, value, factory=dict): ...
def get_in(keys, coll, default=None, no_default=False): ...
def valmap(func, d, factory=dict): ...
def keyfilter(predicate, d, factory=dict): ...
def update_in(d, keys, func, default=None, factory=dict): ...
```

[Dictionary Operations](./dicttoolz.md)

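A short sketch of the dictionary helpers above:

```python
from toolz import get_in, keyfilter, merge, update_in, valmap

# merge: later dicts win on key conflicts
merged = merge({'a': 1, 'b': 2}, {'b': 3, 'c': 4})
# {'a': 1, 'b': 3, 'c': 4}

# get_in: safe nested access with an optional default
user = {'profile': {'address': {'city': 'Oslo'}}}
city = get_in(['profile', 'address', 'city'], user)        # 'Oslo'
phone = get_in(['profile', 'phone'], user, default='n/a')  # 'n/a'

# update_in: apply a function at a nested key, returning a new dict
bumped = update_in({'counts': {'a': 1}}, ['counts', 'a'], lambda n: n + 1)
# {'counts': {'a': 2}}

# valmap / keyfilter: transform values, keep matching keys
prices = {'apple': 1.0, 'banana': 0.5}
doubled = valmap(lambda p: p * 2, prices)                 # {'apple': 2.0, 'banana': 1.0}
a_items = keyfilter(lambda k: k.startswith('a'), prices)  # {'apple': 1.0}
```
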
### Curried Functions

All toolz functions are available in curried form for automatic partial application, enabling a more concise functional style and easier function composition.

```python { .api }
import toolz.curried as toolz
# All functions automatically support partial application
```

[Curried Functions](./curried.md)

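A small sketch of the curried style — calling a function with fewer arguments than it needs returns a partially applied function:

```python
from toolz.curried import get, map, pipe, valfilter

records = [{'name': 'Ada', 'age': 36}, {'name': 'Bob', 'age': 17}]

# get('name') is a partial application waiting for a record
names = pipe(records, map(get('name')), list)
# ['Ada', 'Bob']

# valfilter applied to just a predicate returns a function over dicts
adults = valfilter(lambda age: age >= 18)
kept = adults({'Ada': 36, 'Bob': 17})
# {'Ada': 36}
```
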
### Recipe Functions

Higher-level compositions built from core toolz functions, providing common functional programming patterns.

```python { .api }
def countby(key, seq):
    """
    Count elements of a collection by a key function.

    Parameters:
    - key: function to compute grouping key, or attribute name
    - seq: iterable sequence to count

    Returns:
    Dictionary mapping keys to occurrence counts
    """

def partitionby(func, seq):
    """
    Partition a sequence according to a function.

    Partition seq into a sequence of tuples such that, when traversing seq,
    every time the output of func changes a new tuple is started.

    Parameters:
    - func: function that determines partition boundaries
    - seq: iterable sequence to partition

    Returns:
    Iterator of tuples representing consecutive groups
    """
```

**`countby(key, seq)`** - Count elements by key function, combining groupby with counting:

```python
from toolz import countby

# Count word lengths
words = ['apple', 'banana', 'cherry', 'date']
counts = countby(len, words)
# {5: 1, 6: 2, 4: 1}

# Count by type
data = [1, 'a', 2.5, 'b', 3, 'c']
type_counts = countby(type, data)
# {<class 'int'>: 2, <class 'str'>: 3, <class 'float'>: 1}
```

**`partitionby(func, seq)`** - Partition sequence into groups where function returns same value:

191

192

```python

193

from toolz import partitionby

194

195

# Partition by boolean condition

196

numbers = [1, 3, 5, 2, 4, 6, 7, 9]

197

groups = list(partitionby(lambda x: x % 2 == 0, numbers))

198

# [[1, 3, 5], [2, 4, 6], [7, 9]]

199

200

# Partition by first letter

201

words = ['apple', 'apricot', 'banana', 'blueberry', 'cherry']

202

groups = list(partitionby(lambda w: w[0], words))

203

# [['apple', 'apricot'], ['banana', 'blueberry'], ['cherry']]

204

```

205

206

### Sandbox Functions

Experimental and specialized utility functions for advanced use cases including hash key utilities, parallel processing, and additional sequence operations.

```python { .api }
class EqualityHashKey:
    """Create hash key using equality comparisons for unhashable types."""
    def __init__(self, key, item): ...

def unzip(seq):
    """Inverse of zip - unpack sequence of tuples into separate sequences."""

def fold(binop, seq, default=no_default, map=map, chunksize=128, combine=None):
    """Reduce without guarantee of ordered reduction for parallel processing."""
```

[Sandbox Functions](./sandbox.md)

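A minimal sketch of `unzip` and `fold`; note that sandbox APIs are experimental and may change between releases:

```python
from operator import add

from toolz.sandbox import fold, unzip

# unzip: split an iterable of tuples into per-position iterators
nums, letters = unzip([(1, 'a'), (2, 'b'), (3, 'c')])
nums, letters = list(nums), list(letters)
# [1, 2, 3], ['a', 'b', 'c']

# fold: an unordered reduce; the default map can be swapped for a
# parallel map (e.g. multiprocessing.Pool.map) on large sequences
total = fold(add, range(10), 0)
# 45
```
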
## Types

```python { .api }
# Sentinel values for default parameters
no_default = '__no__default__'
no_pad = '__no__pad__'

class curry:
    """Curry a callable for partial application."""
    def __init__(self, *args, **kwargs): ...
    def bind(self, *args, **kwargs): ...
    def call(self, *args, **kwargs): ...

class Compose:
    """Function composition class for multiple function pipeline."""
    def __init__(self, funcs): ...
    def __call__(self, *args, **kwargs): ...

class InstanceProperty:
    """Property that returns different value when accessed on class vs instance."""
    def __init__(self, fget=None, fset=None, fdel=None, doc=None, classval=None): ...

class juxt:
    """Create function that calls several functions with same arguments."""
    def __init__(self, *funcs): ...
    def __call__(self, *args, **kwargs): ...

class excepts:
    """Create function with functional try/except block."""
    def __init__(self, exc, func, handler=return_none): ...
    def __call__(self, *args, **kwargs): ...
```

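Of these, `excepts` is perhaps the least obvious; a small sketch of its behavior:

```python
from toolz import excepts

# excepts builds a function with a built-in try/except:
# if func raises exc, the handler is called with the exception
safe_int = excepts(ValueError, int, handler=lambda e: None)

ok = safe_int('42')     # 42
bad = safe_int('oops')  # None, since int('oops') raises ValueError
```
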
## Common Patterns

### Data Processing Pipelines

```python
from toolz import pipe, filter, map, groupby, valmap

data = [
    {'name': 'Alice', 'age': 25, 'dept': 'engineering'},
    {'name': 'Bob', 'age': 30, 'dept': 'engineering'},
    {'name': 'Carol', 'age': 35, 'dept': 'marketing'},
    {'name': 'Dave', 'age': 40, 'dept': 'marketing'}
]

# Process and analyze the data
result = pipe(
    data,
    lambda x: filter(lambda p: p['age'] >= 30, x),  # adults 30+
    lambda x: groupby(lambda p: p['dept'], x),      # group by dept
    lambda x: valmap(len, x)                        # count per dept
)
# {'engineering': 1, 'marketing': 2}
```

### Functional Composition

```python
from toolz import compose, curry
from toolz.curried import map, filter

# Create reusable pipeline components
@curry
def multiply_by(factor, x):
    return x * factor

double = multiply_by(2)
is_even = lambda x: x % 2 == 0

# Compose into pipeline
process_evens = compose(
    list,
    map(double),
    filter(is_even)
)

result = process_evens([1, 2, 3, 4, 5, 6])
# [4, 8, 12]
```