tessl/pypi-keras-hub

Pretrained models for Keras with multi-framework compatibility.

  • Workspace: tessl
  • Visibility: Public
  • Describes: pkg:pypi/keras-hub@0.22.x (PyPI)

To install, run

npx @tessl/cli install tessl/pypi-keras-hub@0.22.0


Keras Hub

Keras Hub is a comprehensive pretrained modeling library providing Keras 3 implementations of popular model architectures for text, image, and audio data. It offers state-of-the-art models, including BERT, ResNet, BART, BLOOM, DeBERTa, DistilBERT, GPT-2, Llama, Mistral, OPT, RoBERTa, Whisper, and XLM-RoBERTa, with pretrained checkpoints available on Kaggle Models. The library runs on JAX, TensorFlow, and PyTorch backends and supports fine-tuning on GPUs and TPUs with built-in PEFT techniques.

Package Information

  • Package Name: keras-hub
  • Language: Python
  • Installation: pip install keras-hub (for NLP models: pip install keras-hub[nlp])
  • License: Apache-2.0
  • Documentation: https://keras.io/keras_hub/

Core Imports

import keras_hub

Common patterns for importing specific components:

# Models - most commonly loaded with from_preset()
from keras_hub.models import BertTextClassifier, GPT2CausalLM
from keras_hub.models import ImageClassifier

# Tokenizers  
from keras_hub.tokenizers import BertTokenizer

# Layers
from keras_hub.layers import TransformerEncoder

# Metrics
from keras_hub.metrics import Bleu

# Utils
from keras_hub.utils import upload_preset

Basic Usage

import keras_hub

# Load a pretrained model for text classification
classifier = keras_hub.models.BertTextClassifier.from_preset("bert_base_en")

# Classify text
result = classifier.predict(["This is a great movie!", "I didn't like this film."])
print(result)

# Load a causal language model for text generation
generator = keras_hub.models.GPT2CausalLM.from_preset("gpt2_base_en")

# Generate text
generated = generator.generate("The weather today is", max_length=50)
print(generated)

# Use tokenizers directly
tokenizer = keras_hub.tokenizers.BertTokenizer.from_preset("bert_base_en")
tokens = tokenizer(["Hello world!", "How are you?"])
print(tokens)

Architecture

Keras Hub is organized around several key architectural patterns:

  • Backbones: Core model architectures without task-specific heads (e.g., BertBackbone, GPT2Backbone)
  • Task Models: Complete models with task-specific heads (e.g., BertTextClassifier, GPT2CausalLM)
  • Preprocessors: Handle data preprocessing for specific models and tasks
  • Tokenizers: Convert text to tokens for model input
  • Layers: Reusable neural network components for building custom models
  • Samplers: Text generation strategies for controlling output

The library follows consistent naming patterns: {Architecture}{Task} for task models, {Architecture}Backbone for backbones, and {Architecture}{Task}Preprocessor for preprocessors.
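
The example below sketches how these pieces relate: the same preset ("bert_base_en", also used in Basic Usage) can be loaded as a complete task model or as its individual parts, with the preprocessor class name following the naming pattern above.

import keras_hub

# Task model: backbone + task head + bundled preprocessor in one object.
classifier = keras_hub.models.BertTextClassifier.from_preset(
    "bert_base_en",
    num_classes=2,
)

# Backbone only: the bare architecture, useful under a custom head.
backbone = keras_hub.models.BertBackbone.from_preset("bert_base_en")

# Preprocessor and tokenizer can also be loaded on their own.
preprocessor = keras_hub.models.BertTextClassifierPreprocessor.from_preset("bert_base_en")
tokenizer = keras_hub.tokenizers.BertTokenizer.from_preset("bert_base_en")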

Capabilities

Text Models

Complete implementations of transformer models for natural language processing tasks including classification, masked language modeling, causal language modeling, and sequence-to-sequence tasks.

# Base classes
class CausalLM: ...
class MaskedLM: ...
class Seq2SeqLM: ...
class TextClassifier: ...

# Example architectures
class BertTextClassifier: ...
class GPT2CausalLM: ...
class BartSeq2SeqLM: ...
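
A minimal fine-tuning sketch: task models ship with a preprocessor and a default compilation, so they can train directly on raw strings. The preset name and toy data are illustrative.

import keras_hub

features = ["The movie was great!", "Terrible plot and acting."]
labels = [1, 0]

# "bert_base_en_uncased" is an assumed preset name; any BERT preset works.
classifier = keras_hub.models.BertTextClassifier.from_preset(
    "bert_base_en_uncased",
    num_classes=2,
)
classifier.fit(x=features, y=labels, batch_size=2)
predictions = classifier.predict(["A wonderful film."])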

Full reference: docs/text-models.md

Image Models

Vision models for image classification, object detection, and image segmentation tasks with popular architectures like ResNet, Vision Transformer, and EfficientNet.

# Base classes  
class ImageClassifier: ...
class ObjectDetector: ...
class ImageSegmenter: ...

# Example architectures
class ResNetImageClassifier: ...
class ViTImageClassifier: ...
class RetinaNetObjectDetector: ...
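
A small sketch of image classification from a preset. The preset name "resnet_50_imagenet" and the random input batch are assumptions for illustration.

import numpy as np
import keras_hub

# A fake batch of two 224x224 RGB images.
images = np.random.uniform(0, 255, size=(2, 224, 224, 3)).astype("float32")

# The ImageClassifier base class routes the preset to the right architecture.
classifier = keras_hub.models.ImageClassifier.from_preset("resnet_50_imagenet")
scores = classifier.predict(images)  # per-image scores over the ImageNet classes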

Full reference: docs/image-models.md

Audio Models

Audio processing models for speech recognition and audio-to-text conversion.

class WhisperBackbone: ...
class MoonshineAudioToText: ...
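
A minimal sketch of loading an audio backbone from a preset. The preset name "whisper_tiny_en" is an assumption; check the published presets for exact identifiers.

import keras_hub

# Encoder-decoder speech backbone; pair it with matching audio preprocessing.
backbone = keras_hub.models.WhisperBackbone.from_preset("whisper_tiny_en")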

Full reference: docs/audio-models.md

Multimodal Models

Models that process multiple modalities like text and images together for advanced AI capabilities.

class CLIPBackbone: ...
class PaliGemmaCausalLM: ...
class SigLIPBackbone: ...
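
A hedged sketch of vision-language captioning with PaliGemma. The preset name and the dict-based input format are assumptions; consult the PaliGemma documentation for exact prompts and shapes.

import numpy as np
import keras_hub

# A fake 224x224 RGB image standing in for real pixel data.
image = np.random.uniform(0, 255, size=(224, 224, 3)).astype("float32")

pali_gemma = keras_hub.models.PaliGemmaCausalLM.from_preset("pali_gemma_3b_mix_224")
caption = pali_gemma.generate({"images": image, "prompts": "describe en\n"})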

Full reference: docs/multimodal-models.md

Generative Models

Advanced generative models for text-to-image synthesis and image manipulation.

class StableDiffusion3TextToImage: ...
class FluxTextToImage: ...
class StableDiffusion3Inpaint: ...
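
A sketch of text-to-image generation; "stable_diffusion_3_medium" is an assumed preset name, and generation is compute-heavy, so expect to run it on a GPU or TPU.

import keras_hub

text_to_image = keras_hub.models.StableDiffusion3TextToImage.from_preset(
    "stable_diffusion_3_medium"
)
images = text_to_image.generate("a photograph of an astronaut riding a horse")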

Full reference: docs/generative-models.md

Tokenizers

Text tokenization utilities supporting various algorithms including byte-pair encoding, WordPiece, and SentencePiece.

class Tokenizer: ...
class BytePairTokenizer: ...
class WordPieceTokenizer: ...
class SentencePieceTokenizer: ...
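
Tokenizers can be loaded from presets (as in Basic Usage) or built from scratch. The sketch below constructs a WordPiece tokenizer from a tiny in-memory vocabulary chosen purely for illustration.

import keras_hub

vocab = ["[UNK]", "the", "qu", "##ick", "br", "##own", "fox", "."]
tokenizer = keras_hub.tokenizers.WordPieceTokenizer(
    vocabulary=vocab,
    lowercase=True,
    sequence_length=10,
)
token_ids = tokenizer("The quick brown fox.")
text = tokenizer.detokenize(token_ids)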

Full reference: docs/tokenizers.md

Layers and Components

Reusable neural network layers and components for building custom models or extending existing architectures.

class TransformerEncoder: ...
class TransformerDecoder: ...
class CachedMultiHeadAttention: ...
class PositionEmbedding: ...
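
A sketch of composing these layers into a small custom text encoder with the Keras functional API; the vocabulary size, sequence length, and dimensions are arbitrary.

import keras
import keras_hub

inputs = keras.Input(shape=(128,), dtype="int32")
# Token embedding plus learned position embedding.
x = keras.layers.Embedding(input_dim=30000, output_dim=64)(inputs)
x = x + keras_hub.layers.PositionEmbedding(sequence_length=128)(x)
# One Transformer encoder block.
x = keras_hub.layers.TransformerEncoder(intermediate_dim=256, num_heads=4)(x)
# Classify from the first token's representation.
outputs = keras.layers.Dense(2, activation="softmax")(x[:, 0, :])
model = keras.Model(inputs, outputs)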

Full reference: docs/layers-components.md

Text Generation Sampling

Sampling strategies for controlling text generation behavior in language models.

class Sampler: ...
class GreedySampler: ...
class TopKSampler: ...
class BeamSampler: ...
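
A sketch of switching the sampler on a causal language model: samplers are attached at compile time, either as instances or by string shorthand.

import keras_hub

causal_lm = keras_hub.models.GPT2CausalLM.from_preset("gpt2_base_en")

# Attach a top-k sampler, then generate.
causal_lm.compile(sampler=keras_hub.samplers.TopKSampler(k=10))
output = causal_lm.generate("The weather today is", max_length=30)

# String shorthands such as "greedy" or "beam" are also accepted.
causal_lm.compile(sampler="greedy")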

Full reference: docs/text-generation-sampling.md

Evaluation Metrics

Metrics for evaluating model performance on various tasks including text generation and classification.

class Bleu: ...
class RougeL: ...
class Perplexity: ...
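
A standalone sketch of the Perplexity metric on random token ids and logits; the shapes are illustrative (batch of 2, sequence length 5, vocabulary of 10).

import numpy as np
import keras_hub

y_true = np.random.randint(0, 10, size=(2, 5))   # target token ids
y_pred = np.random.uniform(size=(2, 5, 10))      # per-token logits

perplexity = keras_hub.metrics.Perplexity(from_logits=True)
perplexity.update_state(y_true, y_pred)
print(perplexity.result())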

Full reference: docs/evaluation-metrics.md

Utilities and Helpers

Utility functions for dataset processing, model hub integration, and common operations.

def upload_preset(uri: str, preset: str) -> None: ...
def imagenet_id_to_name(class_id: int) -> str: ...
def coco_id_to_name(class_id: int) -> str: ...
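
A sketch of saving a fine-tuned task model as a local preset and uploading it; the Kaggle URI is a placeholder for your own handle and model name.

import keras_hub

classifier = keras_hub.models.BertTextClassifier.from_preset("bert_base_en", num_classes=2)
# ... fine-tune the classifier ...

# Save the model (and its preprocessor) as a local preset directory.
classifier.save_to_preset("./my_bert_preset")

# Upload the preset to a model hub (placeholder URI).
keras_hub.utils.upload_preset("kaggle://my_user/bert/keras/my_bert_preset", "./my_bert_preset")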

Full reference: docs/utilities-helpers.md

Version Information

__version__: str = "0.22.1"

def version() -> str:
    """Return the current version string."""
    ...