tessl/pypi-onnx

Open Neural Network Exchange for AI model interoperability and machine learning frameworks

  • Workspace: tessl
  • Visibility: Public
  • Describes: pkg:pypi/onnx@1.15.x (PyPI)

To install, run

npx @tessl/cli install tessl/pypi-onnx@1.15.0


ONNX

Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open-source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model together with built-in operator definitions and standard data types, enabling interoperability between frameworks and streamlining the path from research to production.

Package Information

  • Package Name: onnx
  • Language: Python
  • Installation: pip install onnx

Core Imports

import onnx

Import specific components:

from onnx import (
    ModelProto, GraphProto, NodeProto, TensorProto,
    load_model, save_model, helper, checker
)

Version and Constants

ONNX provides version information and important constants for working with different IR versions and feature stability levels.

__version__  # Package version string
IR_VERSION  # Current ONNX IR version
IR_VERSION_2017_10_10  # Historical IR version: 1
IR_VERSION_2017_10_30  # Historical IR version: 2
IR_VERSION_2017_11_3   # Historical IR version: 3
IR_VERSION_2019_1_22   # Historical IR version: 4
IR_VERSION_2019_3_18   # Historical IR version: 5
IR_VERSION_2019_9_19   # Historical IR version: 6
IR_VERSION_2020_5_8    # Historical IR version: 7
IR_VERSION_2021_7_30   # Historical IR version: 8

ONNX_ML                # Boolean indicating ONNX ML support
EXPERIMENTAL           # Status constant for experimental features
STABLE                 # Status constant for stable features

# External data management functions
def convert_model_to_external_data(
    model: ModelProto,
    all_tensors_to_one_file: bool = True,
    location: str | None = None,
    size_threshold: int = 1024,
    convert_attribute: bool = False,
) -> None

def load_external_data_for_model(model: ModelProto, base_dir: str) -> None

def write_external_data_tensors(model: ModelProto, base_dir: str) -> ModelProto

Usage example:

import onnx
print(f"ONNX version: {onnx.__version__}")
print(f"Current IR version: {onnx.IR_VERSION}")
print(f"ONNX ML supported: {onnx.ONNX_ML}")

Basic Usage

import onnx
from onnx import helper, TensorProto

# Load an existing ONNX model
model = onnx.load_model("path/to/model.onnx")

# Check if the model is valid
onnx.checker.check_model(model)

# Create a simple computation graph
# Define input/output value info
X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [3, 2])
Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [3, 2])

# Create a node (operation)
node_def = helper.make_node(
    'Relu',  # operator type
    ['X'],   # inputs
    ['Y'],   # outputs
)

# Create the graph
graph_def = helper.make_graph(
    [node_def],        # nodes
    'test-model',      # name
    [X],              # inputs
    [Y],              # outputs
)

# Create the model
model_def = helper.make_model(graph_def, producer_name='onnx-example')

# Save the model
onnx.save_model(model_def, "relu_model.onnx")

Architecture

ONNX models follow a hierarchical structure based on protocol buffers:

  • ModelProto: Top-level container with metadata, IR version, and computation graph
  • GraphProto: Computation graph containing nodes, inputs, outputs, and initializers
  • NodeProto: Individual operations with operator type, inputs, outputs, and attributes
  • ValueInfoProto: Type and shape information for graph inputs/outputs
  • TensorProto: Tensor data representation with type, shape, and values
  • AttributeProto: Node parameters and configuration options

This design enables framework interoperability by providing a standard representation for neural networks and machine learning models, supporting conversion between frameworks like PyTorch, TensorFlow, scikit-learn, and deployment runtimes.

Capabilities

Model I/O Operations

Core functions for loading and saving ONNX models from various sources including files, streams, and binary data, with support for external data storage and multiple serialization formats.

Model I/O

Model Construction

Helper functions for programmatically creating ONNX models, graphs, nodes, tensors, and type definitions with proper protocol buffer structure and validation.

Model Construction

Model Validation

Comprehensive validation functions to verify ONNX model correctness, including graph structure, node compatibility, type consistency, and operator definitions.

Model Validation

Shape Inference

Automatic shape and type inference for model graphs, enabling optimization and validation of tensor shapes throughout the computation graph.

Shape Inference

NumPy Integration

Bidirectional conversion between ONNX tensors and NumPy arrays, supporting all ONNX data types including specialized formats like bfloat16 and float8 variants.

NumPy Integration

Model Composition

Functions for merging and composing multiple ONNX models or graphs, enabling modular model construction and complex pipeline creation.

Model Composition

Operator Definitions

Access to ONNX operator schemas, type definitions, and version compatibility information for all supported operators across different domains.

Operator Definitions

Text Parsing and Printing

Convert between ONNX protocol buffer representations and human-readable text formats for debugging, serialization, and model inspection.

Text Processing

Version Conversion

Convert ONNX models between different IR versions and operator set versions to maintain compatibility across framework versions.

Version Conversion

Model Hub Integration

Access to the ONNX Model Zoo for downloading pre-trained models, including model metadata and test data for validation.

Model Hub

Backend Integration

Abstract interfaces for implementing ONNX model execution backends, enabling custom runtime integration and testing frameworks.

Backend Integration

Reference Implementation

Complete reference implementation of ONNX operators for testing, validation, and educational purposes.

Reference Implementation

Protocol Buffer Types

class ModelProto:
    """ONNX model representation with computation graph and metadata"""

class GraphProto:
    """Computation graph with nodes, inputs, outputs, and initializers"""

class NodeProto:
    """Individual operation with operator type, inputs, and outputs"""

class TensorProto:
    """Tensor data with type, shape, and values"""

class ValueInfoProto:
    """Type and shape information for graph values"""

class AttributeProto:
    """Node attributes and parameters"""

class FunctionProto:
    """User-defined function representation"""

class TypeProto:
    """Type system definitions for tensors, sequences, maps, and optionals"""

class OperatorSetIdProto:
    """Operator set identifier with domain and version"""

class OperatorProto:
    """Operator definition protocol buffer"""

class OperatorSetProto:
    """Collection of operators for a specific opset version"""

class OperatorStatus:
    """Enumeration for operator status (experimental, stable)"""

class StringStringEntryProto:
    """Key-value string pairs for metadata"""

class TensorAnnotation:
    """Tensor annotation for quantization and optimization hints"""

class TrainingInfoProto:
    """Training-specific information for model execution"""

class Version:
    """Version information protocol buffer"""

class SparseTensorProto:
    """Sparse tensor representation with values, indices, and shape"""

class MapProto:
    """Map container for key-value pairs"""

class SequenceProto:
    """Sequence container for ordered collections"""

class OptionalProto:
    """Optional container that may or may not contain a value"""