tessl/pypi-toolz

List processing tools and functional utilities

  • Workspace: tessl
  • Visibility: Public
  • Describes: pkg:pypi/toolz@1.0.x

To install, run

npx @tessl/cli install tessl/pypi-toolz@1.0.0

Toolz

A comprehensive Python library providing list processing tools and functional utilities. Toolz implements functional programming patterns inspired by languages like Clojure, offering three main modules: itertoolz for operations on iterables, functoolz for higher-order functions, and dicttoolz for dictionary operations.

Package Information

  • Package Name: toolz
  • Language: Python
  • Installation: pip install toolz
  • Python Requirements: >=3.8

Core Imports

import toolz

Module-specific imports:

from toolz import groupby, map, filter, compose, merge
from toolz.itertoolz import unique, take, partition
from toolz.functoolz import curry, pipe, memoize  
from toolz.dicttoolz import assoc, get_in, valmap

Curried import (all functions automatically curried):

import toolz.curried as toolz

Basic Usage

import toolz
from toolz import groupby, pipe, curry, assoc

# Group data by a key function
names = ['Alice', 'Bob', 'Charlie', 'Dan', 'Edith', 'Frank']
grouped = groupby(len, names)
# {3: ['Bob', 'Dan'], 5: ['Alice', 'Edith', 'Frank'], 7: ['Charlie']}

# Function composition with pipe
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
result = pipe(
    data,
    lambda x: filter(lambda n: n % 2 == 0, x),  # even numbers
    lambda x: map(lambda n: n * 2, x),          # double them
    list                                        # convert to list
)
# [4, 8, 12, 16, 20]

# Dictionary operations
person = {'name': 'Alice', 'age': 30}
updated = assoc(person, 'city', 'New York')
# {'name': 'Alice', 'age': 30, 'city': 'New York'}

# Curried functions for partial application
from toolz.curried import map, filter
double = map(lambda x: x * 2)
evens = filter(lambda x: x % 2 == 0)

pipeline = toolz.compose(list, double, evens)
result = pipeline([1, 2, 3, 4, 5, 6])
# [4, 8, 12]

Architecture

Toolz follows functional programming principles with three core design patterns:

  • Immutable Operations: Functions return new data structures without modifying inputs
  • Composability: Functions work seamlessly together in data processing pipelines
  • Lazy Evaluation: Many functions return iterators for memory efficiency

The library is organized into logical modules:

  • itertoolz: Iterator/sequence operations (filtering, grouping, partitioning)
  • functoolz: Function composition and higher-order utilities
  • dicttoolz: Dictionary manipulation and nested access
  • recipes: Higher-level compositions of core functions
  • curried: Automatic partial application versions of all functions
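The first and third design points can be seen in a few lines. This sketch (using only functions the library documents) shows that sequence functions return lazy iterators and leave their inputs untouched:

```python
from toolz import take, unique

# Lazy evaluation: unique returns an iterator; nothing is computed yet
data = [1, 2, 2, 3, 3, 3, 4]
lazy = unique(data)

# Forcing the first three distinct values
result = list(take(3, lazy))
# result == [1, 2, 3]

# Immutability: the input list is unmodified
# data is still [1, 2, 2, 3, 3, 3, 4]
```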

Capabilities

Iterator Operations

Comprehensive sequence processing including filtering, grouping, partitioning, and transformation operations. These functions work with any iterable and form the backbone of data processing pipelines.

def groupby(key, seq): ...
def unique(seq, key=None): ...
def take(n, seq): ...
def partition(n, seq, pad=no_pad): ...
def frequencies(seq): ...
def merge_sorted(*seqs, **kwargs): ...
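A brief tour of the signatures above, using small literal inputs for illustration:

```python
from toolz import frequencies, merge_sorted, partition, take, unique

letters = ['a', 'b', 'a', 'c', 'b', 'a']

counts = frequencies(letters)          # {'a': 3, 'b': 2, 'c': 1}
distinct = list(unique(letters))       # ['a', 'b', 'c'] (first-seen order)
first_two = list(take(2, letters))     # ['a', 'b']

# partition yields non-overlapping tuples of length n
chunks = list(partition(2, [1, 2, 3, 4, 5, 6]))
# [(1, 2), (3, 4), (5, 6)]

# merge_sorted interleaves already-sorted sequences
merged = list(merge_sorted([1, 3, 5], [2, 4, 6]))
# [1, 2, 3, 4, 5, 6]
```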

Function Composition

Higher-order functions for composing, currying, and transforming functions. Enables elegant functional programming patterns and pipeline creation.

def compose(*funcs): ...
def pipe(data, *funcs): ...
def curry(*args, **kwargs): ...
def memoize(func, cache=None, key=None): ...
def thread_first(val, *forms): ...
def juxt(*funcs): ...
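The less familiar of these deserve a quick demonstration. The following sketch shows `thread_first` threading a value through forms as the first argument, `juxt` fanning one input out to several functions, and `memoize` caching results:

```python
from operator import add, mul
from toolz import juxt, memoize, thread_first

# thread_first(5, (add, 1), (mul, 2)) evaluates mul(add(5, 1), 2)
threaded = thread_first(5, (add, 1), (mul, 2))
# 12

# juxt calls several functions with the same arguments, returning a tuple
minmax = juxt(min, max)
extremes = minmax([3, 1, 4, 1, 5])
# (1, 5)

# memoize caches results, turning naive recursion into linear time
@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

answer = fib(30)
# 832040
```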

Dictionary Operations

Immutable dictionary manipulation including merging, filtering, mapping, and nested access operations. All operations return new dictionaries without modifying inputs.

def merge(*dicts, **kwargs): ...
def assoc(d, key, value, factory=dict): ...
def get_in(keys, coll, default=None, no_default=False): ...
def valmap(func, d, factory=dict): ...
def keyfilter(predicate, d, factory=dict): ...
def update_in(d, keys, func, default=None, factory=dict): ...
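For illustration, a configuration-style example exercising most of the signatures above; each call returns a new dictionary:

```python
from toolz import get_in, keyfilter, merge, update_in

defaults = {'host': 'localhost', 'port': 8080, 'debug': False}

# Later dicts win on key collisions
config = merge(defaults, {'port': 9090})
# {'host': 'localhost', 'port': 9090, 'debug': False}

# Safe nested access without chained .get() calls
nested = {'user': {'profile': {'name': 'Alice'}}}
name = get_in(['user', 'profile', 'name'], nested)          # 'Alice'
missing = get_in(['user', 'settings'], nested, default={})  # {}

# Apply a function to a value at a nested path
bumped = update_in({'counts': {'a': 1}}, ['counts', 'a'], lambda n: n + 1)
# {'counts': {'a': 2}}

# Keep only keys matching a predicate
p_only = keyfilter(lambda k: k.startswith('p'), config)
# {'port': 9090}
```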

Curried Functions

All toolz functions available in curried form for automatic partial application. Enables more concise functional programming style and easier function composition.

import toolz.curried as toolz
# All functions automatically support partial application
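A short sketch of the curried style: each function takes its "setup" arguments first and returns a partial that waits for the data, which makes `pipe` stages read declaratively:

```python
from toolz.curried import get, groupby, pipe, valmap

people = [('Alice', 25), ('Bob', 30), ('Carol', 30)]

counts_by_age = pipe(
    people,
    groupby(get(1)),   # group tuples by the age at index 1
    valmap(len),       # count members of each group
)
# {25: 1, 30: 2}
```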

Recipe Functions

Higher-level compositions built from core toolz functions, providing common functional programming patterns.

def countby(key, seq):
    """
    Count elements of a collection by a key function.
    
    Parameters:
    - key: function to compute grouping key, or attribute name
    - seq: iterable sequence to count
    
    Returns:
    Dictionary mapping keys to occurrence counts
    """

def partitionby(func, seq):
    """
    Partition a sequence according to a function.
    
    Partition seq into a sequence of tuples such that, when traversing seq,
    every time the output of func changes a new tuple is started.
    
    Parameters:
    - func: function that determines partition boundaries
    - seq: iterable sequence to partition
    
    Returns:
    Iterator of tuples representing consecutive groups
    """

countby(key, seq) - Count elements by key function, combining groupby with counting:

from toolz import countby

# Count word lengths
words = ['apple', 'banana', 'cherry', 'date']
counts = countby(len, words)  
# {5: 1, 6: 2, 4: 1}

# Count by type
data = [1, 'a', 2.5, 'b', 3, 'c']
type_counts = countby(type, data)
# {<class 'int'>: 2, <class 'str'>: 3, <class 'float'>: 1}

partitionby(func, seq) - Partition sequence into groups where function returns same value:

from toolz import partitionby

# Partition by boolean condition  
numbers = [1, 3, 5, 2, 4, 6, 7, 9]
groups = list(partitionby(lambda x: x % 2 == 0, numbers))
# [(1, 3, 5), (2, 4, 6), (7, 9)]

# Partition by first letter
words = ['apple', 'apricot', 'banana', 'blueberry', 'cherry']  
groups = list(partitionby(lambda w: w[0], words))
# [('apple', 'apricot'), ('banana', 'blueberry'), ('cherry',)]

Sandbox Functions

Experimental and specialized utility functions for advanced use cases including hash key utilities, parallel processing, and additional sequence operations.

class EqualityHashKey:
    """Create hash key using equality comparisons for unhashable types."""
    def __init__(self, key, item): ...

def unzip(seq):
    """Inverse of zip - unpack sequence of tuples into separate sequences."""

def fold(binop, seq, default=no_default, map=map, chunksize=128, combine=None):
    """Reduce without guarantee of ordered reduction for parallel processing."""
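A quick sketch of the two functions (both importable from `toolz.sandbox`); note that `fold` requires an associative binary operator, since chunks may be reduced in any order:

```python
from operator import add
from toolz.sandbox import fold, unzip

# unzip splits a sequence of tuples into per-column iterators
pairs = [(1, 'a'), (2, 'b'), (3, 'c')]
nums, chars = unzip(pairs)
nums, chars = list(nums), list(chars)
# [1, 2, 3], ['a', 'b', 'c']

# fold reduces with an associative binop; suitable for parallel map backends
total = fold(add, range(100), 0)
# 4950
```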

Types

# Sentinel values for default parameters
no_default = '__no__default__'
no_pad = '__no__pad__'

# Core callable classes from functoolz
class curry:
    """Curry a callable for partial application."""
    def __init__(self, *args, **kwargs): ...
    def bind(self, *args, **kwargs): ...
    def call(self, *args, **kwargs): ...

class Compose:
    """Function composition class for multiple function pipeline."""
    def __init__(self, funcs): ...
    def __call__(self, *args, **kwargs): ...

class InstanceProperty:
    """Property that returns different value when accessed on class vs instance."""
    def __init__(self, fget=None, fset=None, fdel=None, doc=None, classval=None): ...

class juxt:
    """Create function that calls several functions with same arguments."""
    def __init__(self, *funcs): ...
    def __call__(self, *args, **kwargs): ...

class excepts:
    """Create function with functional try/except block."""
    def __init__(self, exc, func, handler=return_none): ...
    def __call__(self, *args, **kwargs): ...
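Of these, `excepts` is the least self-explanatory; for illustration, it wraps a call that may raise into one that returns a fallback instead:

```python
from toolz import excepts

# The handler receives the caught exception and supplies the fallback value
safe_int = excepts(ValueError, int, handler=lambda exc: 0)

ok = safe_int('42')        # 42
fallback = safe_int('x')   # 0
```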

Common Patterns

Data Processing Pipelines

from toolz import pipe, filter, map, groupby, valmap

data = [
    {'name': 'Alice', 'age': 25, 'dept': 'engineering'},
    {'name': 'Bob', 'age': 30, 'dept': 'engineering'}, 
    {'name': 'Carol', 'age': 35, 'dept': 'marketing'},
    {'name': 'Dave', 'age': 40, 'dept': 'marketing'}
]

# Process and analyze the data
result = pipe(
    data,
    lambda x: filter(lambda p: p['age'] >= 30, x),    # adults 30+
    lambda x: groupby(lambda p: p['dept'], x),        # group by dept
    lambda x: valmap(len, x)                          # count per dept
)
# {'engineering': 1, 'marketing': 2}

Functional Composition

from toolz import compose, curry
from toolz.curried import map, filter

# Create reusable pipeline components
@curry
def multiply_by(factor, x):
    return x * factor

double = multiply_by(2)
is_even = lambda x: x % 2 == 0

# Compose into pipeline
process_evens = compose(
    list,
    map(double),
    filter(is_even)
)

result = process_evens([1, 2, 3, 4, 5, 6])
# [4, 8, 12]