tessl/pypi-keras

Multi-backend deep learning framework that provides a unified, high-level API for building and training neural networks across JAX, TensorFlow, PyTorch, and OpenVINO backends.

docs/activations.md

Activation Functions

Activation functions define the output of neural network layers and introduce non-linearity to enable learning complex patterns. Keras provides a comprehensive set of activation functions for various use cases.

Capabilities

Standard Activation Functions

Core activation functions commonly used in neural networks for introducing non-linearity and controlling gradient flow.

def relu(x, negative_slope=0.0, max_value=None, threshold=0.0):
    """
    Rectified Linear Unit activation function.
    
    Parameters:
    - x: Input tensor
    - negative_slope: Slope for values below threshold (default: 0.0)
    - max_value: Maximum value for saturation (default: None)
    - threshold: Threshold value below which values are damped (default: 0.0)
    
    Returns:
    Tensor with same shape and dtype as input
    """

def sigmoid(x):
    """
    Sigmoid activation function: 1 / (1 + exp(-x)).
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with values between 0 and 1
    """

def tanh(x):
    """
    Hyperbolic tangent activation function: (exp(x) - exp(-x)) / (exp(x) + exp(-x)).
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with values between -1 and 1
    """

def softmax(x, axis=-1):
    """
    Softmax activation function that normalizes the input into a probability distribution along the given axis.
    
    Parameters:
    - x: Input tensor
    - axis: Axis along which to apply softmax (default: -1)
    
    Returns:
    Tensor with values summing to 1 along specified axis
    """

def linear(x):
    """
    Linear activation function (identity function): returns input unchanged.
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Input tensor unchanged
    """

Advanced Activation Functions

Modern activation functions that provide improved gradient properties and performance characteristics.

def gelu(x, approximate=False):
    """
    Gaussian Error Linear Unit activation function.
    
    Parameters:
    - x: Input tensor
    - approximate: Whether to use approximation (default: False)
    
    Returns:
    Tensor with GELU activation applied
    """

def silu(x):
    """
    Swish/SiLU activation function: x * sigmoid(x).
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with SiLU activation applied
    """

def swish(x):
    """
    Alias for silu activation function.
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with Swish activation applied
    """

def mish(x):
    """
    Mish activation function: x * tanh(softplus(x)).
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with Mish activation applied
    """

def selu(x):
    """
    Scaled Exponential Linear Unit activation function.
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with SELU activation applied
    """

def elu(x, alpha=1.0):
    """
    Exponential Linear Unit activation function.
    
    Parameters:
    - x: Input tensor
    - alpha: Scale factor for negative inputs (default: 1.0)
    
    Returns:
    Tensor with ELU activation applied
    """

def leaky_relu(x, negative_slope=0.01):
    """
    Leaky ReLU activation function with small negative slope.
    
    Parameters:
    - x: Input tensor
    - negative_slope: Slope for negative values (default: 0.01)
    
    Returns:
    Tensor with Leaky ReLU activation applied
    """

Specialized Activation Functions

Specialized activation functions for specific use cases and architectures.

def softplus(x):
    """
    Softplus activation function: log(1 + exp(x)).
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with Softplus activation applied
    """

def softsign(x):
    """
    Softsign activation function: x / (1 + |x|).
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with Softsign activation applied
    """

def exponential(x):
    """
    Exponential activation function: exp(x).
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with exponential activation applied
    """

def hard_sigmoid(x):
    """
    Hard sigmoid activation function (piecewise linear approximation).
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with hard sigmoid activation applied
    """

def hard_silu(x):
    """
    Hard SiLU activation function (computationally efficient approximation).
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with hard SiLU activation applied
    """

def hard_swish(x):
    """
    Alias for hard_silu activation function.
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with hard Swish activation applied
    """

def hard_tanh(x):
    """
    Hard tanh activation function (piecewise linear approximation).
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with hard tanh activation applied
    """

def relu6(x):
    """
    ReLU activation capped at 6: min(max(x, 0), 6).
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with ReLU6 activation applied
    """

Shrinkage Functions

Shrinkage functions that apply thresholding operations for sparse representations.

def hard_shrink(x, lambd=0.5):
    """
    Hard shrinkage function that zeros values with magnitude at or below the threshold, leaving the rest unchanged.
    
    Parameters:
    - x: Input tensor
    - lambd: Threshold value (default: 0.5)
    
    Returns:
    Tensor with hard shrinkage applied
    """

def soft_shrink(x, lambd=0.5):
    """
    Soft shrinkage function that zeros values within the threshold and shrinks the remaining values toward zero by lambd.
    
    Parameters:
    - x: Input tensor
    - lambd: Threshold value (default: 0.5)
    
    Returns:
    Tensor with soft shrinkage applied
    """

def tanh_shrink(x):
    """
    Tanh shrinkage function: x - tanh(x).
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with tanh shrinkage applied
    """

def threshold(x, value=0):
    """
    Threshold activation function that zeros values below the threshold.
    
    Parameters:
    - x: Input tensor
    - value: Threshold below which values are set to zero (default: 0)
    
    Returns:
    Tensor with thresholding applied
    """

Sparse Activation Functions

Specialized activation functions for sparse representations and attention mechanisms.

def sparsemax(x, axis=-1):
    """
    Sparsemax activation function that produces sparse probability distributions.
    
    Parameters:
    - x: Input tensor
    - axis: Axis along which to apply sparsemax (default: -1)
    
    Returns:
    Tensor with sparse probability distribution
    """

def sparse_plus(x):
    """
    Sparse plus activation function for sparse representations.
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with sparse plus activation applied
    """

def sparse_sigmoid(x):
    """
    Sparse sigmoid activation function.
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with sparse sigmoid activation applied
    """

def squareplus(x, b=4):
    """
    Squareplus activation function: (x + sqrt(x^2 + b)) / 2.
    
    Parameters:
    - x: Input tensor
    - b: Smoothness parameter (default: 4)
    
    Returns:
    Tensor with squareplus activation applied
    """

Advanced Functions

Additional specialized activation functions for specific neural network architectures.

def glu(x, axis=-1):
    """
    Gated Linear Unit activation function.
    
    Parameters:
    - x: Input tensor
    - axis: Axis to split for gating (default: -1)
    
    Returns:
    Tensor with GLU activation applied
    """

def celu(x, alpha=1.0):
    """
    Continuously differentiable exponential linear unit.
    
    Parameters:
    - x: Input tensor
    - alpha: Scale parameter (default: 1.0)
    
    Returns:
    Tensor with CELU activation applied
    """

def log_sigmoid(x):
    """
    Logarithm of sigmoid function: log(sigmoid(x)).
    
    Parameters:
    - x: Input tensor
    
    Returns:
    Tensor with log-sigmoid activation applied
    """

def log_softmax(x, axis=-1):
    """
    Logarithm of softmax function: log(softmax(x)).
    
    Parameters:
    - x: Input tensor
    - axis: Axis along which to apply log-softmax (default: -1)
    
    Returns:
    Tensor with log-softmax activation applied
    """

Utility Functions

Helper functions for activation function management and serialization.

def serialize(activation):
    """
    Serialize an activation function to a string or config dict.
    
    Parameters:
    - activation: Activation function to serialize
    
    Returns:
    String identifier or config dictionary
    """

def deserialize(config, custom_objects=None):
    """
    Deserialize an activation function from a string or config dict.
    
    Parameters:
    - config: String identifier or config dictionary
    - custom_objects: Optional dict mapping names to custom objects
    
    Returns:
    Activation function
    """

def get(identifier):
    """
    Retrieve an activation function by string identifier.
    
    Parameters:
    - identifier: String name of activation function
    
    Returns:
    Activation function
    """

Usage Examples

import keras
from keras import activations

# Use activation functions directly
x = keras.ops.array([-2.0, -1.0, 0.0, 1.0, 2.0])

# Apply different activations
relu_output = activations.relu(x)
sigmoid_output = activations.sigmoid(x)
gelu_output = activations.gelu(x)

# Use in layer definitions
model = keras.Sequential([
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(32, activation='gelu'),
    keras.layers.Dense(10, activation='softmax')
])

# Or use activation functions directly
model = keras.Sequential([
    keras.layers.Dense(64, activation=activations.relu),
    keras.layers.Dense(32, activation=activations.gelu),
    keras.layers.Dense(10, activation=activations.softmax)
])

Install with Tessl CLI

npx tessl i tessl/pypi-keras
