
# Sandbox Functions

Experimental and specialized utility functions for advanced use cases. These functions provide additional functionality beyond the core toolz capabilities.

## Package Access

```python
from toolz.sandbox import EqualityHashKey, unzip, fold
```

## Capabilities

### Hash Key Utilities

Utilities for creating hash keys from normally unhashable types.

```python { .api }
class EqualityHashKey:
    """
    Create a hash key that uses equality comparisons between items.

    This may be used to create hash keys for otherwise unhashable types.
    Adding N EqualityHashKey items to a hash container may require O(N**2)
    operations. A suitable key function such as tuple or frozenset is
    usually preferred if possible.
    """

    def __init__(self, key, item):
        """
        Create an equality-based hash key.

        Parameters:
        - key: function or index that returns a hashable object used to
          distinguish unequal items
        - item: the item to create a hash key for
        """
```
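As the docstring warns, adding N `EqualityHashKey` items to a hash container can cost O(N**2) equality comparisons. When the unhashable items can be converted to a hashable form, an ordinary key function avoids that cost entirely. A minimal sketch of the preferred alternative, using `toolz.unique`:

```python
from toolz import unique

# Lists are unhashable, but when their elements are hashable,
# tuple() yields a properly hashable key without EqualityHashKey's
# O(N**2) worst case.
data = [[1, 2], [1, 2], [3]]
deduped = list(unique(data, key=tuple))
# [[1, 2], [3]]
```

`frozenset` works the same way when element order should not matter.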

### Sequence Utilities

Additional sequence manipulation functions.

```python { .api }
def unzip(seq):
    """
    Inverse of zip: unpack a sequence of tuples into separate sequences.

    Parameters:
    - seq: sequence of tuples to unpack

    Returns:
    Tuple of lazy iterators, one for each position in the input tuples
    """
```
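Note that `unzip` returns lazy iterators rather than concrete tuples, so each result must be materialized (e.g. with `list` or `tuple`) before it can be indexed or reused. A quick sketch:

```python
from toolz.sandbox import unzip

letters, numbers = unzip([('a', 1), ('b', 2)])
# Each result is a lazy iterator; materialize it to inspect the values
assert list(letters) == ['a', 'b']
assert list(numbers) == [1, 2]
```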

### Parallel Processing

Functions for parallel and distributed computation.

```python { .api }
def fold(binop, seq, default=no_default, map=map, chunksize=128, combine=None):
    """
    Reduce without guarantee of ordered reduction.

    Enables parallel reduction by chunking the sequence and distributing
    work across multiple map operations.

    Parameters:
    - binop: associative binary operator for reduction
    - seq: sequence to be aggregated
    - default: identity element (optional)
    - map: map implementation to use (can be parallel)
    - chunksize: number of elements per chunk
    - combine: binary operator to combine intermediate results
      (defaults to binop)

    Returns:
    Result of folding binop across the sequence
    """
```

## Usage Examples

### Creating Hash Keys for Unhashable Types

```python
from toolz.sandbox import EqualityHashKey
from toolz import unique, curry

# Create hash keys for lists and other unhashable types
EqualityHashDefault = curry(EqualityHashKey, None)
EqualityHashLen = curry(EqualityHashKey, len)

# Use with collections that need hashable items
data = [[], (), [1], [1], [2]]
unique_items = list(unique(data, key=EqualityHashDefault))
# [[], (), [1], [2]]

# Hashing by len limits hash collisions to items of the same length;
# duplicates are still decided by item equality, so the result is the
# same but computed with fewer comparisons
unique_by_len = list(unique(data, key=EqualityHashLen))
# [[], (), [1], [2]]
```

### Sequence Unzipping

```python
from toolz.sandbox import unzip

# Unpack pairs into separate sequences (unzip returns lazy iterators)
pairs = [(1, 'a'), (2, 'b'), (3, 'c')]
numbers, letters = unzip(pairs)
# tuple(numbers) -> (1, 2, 3)
# tuple(letters) -> ('a', 'b', 'c')

# Unpack triplets
triplets = [(1, 'a', True), (2, 'b', False), (3, 'c', True)]
nums, chars, flags = unzip(triplets)
# tuple(nums)  -> (1, 2, 3)
# tuple(chars) -> ('a', 'b', 'c')
# tuple(flags) -> (True, False, True)
```

### Parallel Reduction

```python
from toolz.sandbox import fold
from operator import add
import multiprocessing

# Sequential reduction
numbers = list(range(1000))
total = fold(add, numbers, 0)
# 499500

# Parallel reduction using multiprocessing
# (run under an `if __name__ == "__main__":` guard on platforms
# that spawn worker processes)
pool = multiprocessing.Pool()
parallel_total = fold(add, numbers, 0, map=pool.map, chunksize=50)
pool.close()
pool.join()
# 499500 (same result, computed in parallel)

# fold also handles non-numeric data, e.g. merging a sequence of dicts
from toolz import merge
dicts = [{'a': 1}, {'b': 2}, {'a': 3, 'c': 4}]
combined = fold(merge, dicts, {})
# {'a': 3, 'b': 2, 'c': 4}
```
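The `combine` parameter matters when the chunk-level reduction produces a different kind of value than `binop` could merge on its own. A small sketch, counting elements: each chunk is reduced with an increment step (which ignores element values entirely), and the per-chunk counts are then merged with plain addition:

```python
from operator import add
from toolz.sandbox import fold

def count_step(acc, _):
    # per-chunk step: ignore the element, increment the running count
    return acc + 1

# Each chunk of 100 elements reduces to a partial count of 100;
# combine=add then sums the partial counts from all chunks.
n = fold(count_step, range(1000), 0, chunksize=100, combine=add)
# 1000
```

Without `combine`, fold would try to merge the partial counts with `count_step` itself, which would miscount them as single elements.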