IREE Python Compiler API - MLIR-based end-to-end compiler for Machine Learning models
Console scripts providing command-line access to compilation, optimization, and import functionality. These tools serve as the primary interface for batch processing, CI/CD integration, and interactive development workflows.
The iree-base-compiler package provides five main command-line tools installed as console scripts.
Primary compiler tool for converting MLIR to IREE VM modules.
iree-compile input.mlir -o output.vmfb [OPTIONS]
# Core options:
--iree-hal-target-backends=BACKEND # Target backend (llvm-cpu, cuda, vulkan-spirv, etc.)
--iree-input-type=TYPE # Input type (auto, stablehlo, tosa, torch, etc.)
--iree-vm-bytecode-module-output-format=FORMAT # Output format (flatbuffer-binary, etc.)
-o, --output=FILE # Output file path
# Optimization options:
--iree-opt-level=LEVEL                 # Optimization level (O0-O3)
--iree-flow-enable-fusing # Enable operation fusion
--iree-llvmcpu-target-cpu=CPU # Target CPU architecture
# Debug options:
--mlir-print-debuginfo # Include debug information
--mlir-print-ir-after-all # Print IR after each pass
--mlir-print-op-on-diagnostic          # Print operations on diagnostics

Entry Point: iree.compiler.tools.scripts.iree_compile.__main__:main
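Because these are ordinary command-line flags, driver scripts can assemble an invocation programmatically. A minimal sketch (the helper name, file names, and flag values are illustrative; `iree-compile` must be on PATH before the commented-out `subprocess.run` call would work):

```python
import subprocess

def build_iree_compile_cmd(
    input_file: str,
    output_file: str,
    backend: str = "llvm-cpu",
    input_type: str = "auto",
) -> list:
    """Assemble an iree-compile invocation from the core options above."""
    return [
        "iree-compile",
        input_file,
        f"--iree-hal-target-backends={backend}",
        f"--iree-input-type={input_type}",
        "-o", output_file,
    ]

cmd = build_iree_compile_cmd("input.mlir", "output.vmfb", backend="cuda")
# subprocess.run(cmd, check=True)  # uncomment once iree-compile is installed
```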
MLIR optimization tool for applying transformation passes.
iree-opt input.mlir [OPTIONS]
# Pass options:
--pass-pipeline=PIPELINE # Specify pass pipeline
--print-ir-after-all # Print IR after each pass
--print-ir-before-all # Print IR before each pass
--verify-each # Verify IR after each pass
# Common passes:
--iree-flow-transformation-pipeline # Flow dialect transformations
--iree-stream-transformation-pipeline # Stream dialect transformations
--canonicalize # Canonicalization pass
# Output options:
-o, --output=FILE # Output file path
--emit=FORMAT                          # Output format

Entry Point: iree.compiler.tools.scripts.iree_opt.__main__:main
ONNX model import tool for converting ONNX models to IREE-compatible MLIR.
iree-import-onnx model.onnx -o model.mlir [OPTIONS]
# Import options:
-o, --output=FILE # Output MLIR file
--upgrade # Upgrade ONNX model to latest version
--input-name=NAME # Input tensor name
--output-name=NAME                     # Output tensor name

Entry Point: iree.compiler.tools.import_onnx.__main__:_cli_main
MLIR IR manipulation and analysis tool.
iree-ir-tool [COMMAND] input.mlir [OPTIONS]
# Commands:
print # Print IR in various formats
verify # Verify IR validity
transform # Apply transformations
analyze # Analyze IR structure
# Options:
--output-format=FORMAT # Output format (text, bytecode)
--print-generic # Use generic operation format
--print-debuginfo                      # Include debug information

Entry Point: iree.compiler.tools.ir_tool.__main__:_cli_main
Build system command-line interface for complex compilation workflows.
iree-build [ENTRYPOINT] [OPTIONS]
# Build commands:
list # List available build entrypoints
run ENTRYPOINT # Run specific build entrypoint
clean # Clean build artifacts
# Common options:
--build-dir=DIR # Build directory path
--verbose # Verbose output
--parallel=N # Parallel execution limit
--config=FILE # Configuration file
# Entrypoint-specific options vary by entrypoint

Entry Point: iree.build.__main__:main
Base functions for implementing command-line tool functionality.
def find_tool(tool_name: str) -> str:
    """
    Find the path to a compiler tool executable.

    Parameters:
    - tool_name: Name of the tool to find

    Returns:
    str: Full path to tool executable

    Raises:
    FileNotFoundError: If tool is not found
    """
def invoke_immediate(
    command_line: List[str],
    immediate_input: Optional[bytes] = None,
) -> bytes:
    """
    Invoke a command-line tool and return its output.

    Parameters:
    - command_line: Command and arguments to execute
    - immediate_input: Optional stdin input

    Returns:
    bytes: Tool output

    Raises:
    CompilerToolError: If tool execution fails
    """
def build_compile_command_line(
    input_file: str,
    temp_file_saver: TempFileSaver,
    options: CompilerOptions,
) -> List[str]:
    """
    Build the command line for the iree-compile tool.

    Parameters:
    - input_file: Input file path
    - temp_file_saver: Temporary file manager
    - options: Compilation options

    Returns:
    List[str]: Command line arguments
    """

Functions for locating and managing compiler tool binaries.
def get_tool_path(tool_name: str) -> str:
    """
    Get the full path to a compiler tool.

    Parameters:
    - tool_name: Tool name

    Returns:
    str: Full path to tool executable
    """

def verify_tool_availability(tool_name: str) -> bool:
    """
    Check if a compiler tool is available.

    Parameters:
    - tool_name: Tool name to check

    Returns:
    bool: True if tool is available
    """
class ToolEnvironment:
    """
    Manager for compiler tool environment and paths.

    Handles tool discovery, path management, and environment
    setup for compiler tool execution.
    """
    def __init__(self, tool_search_paths: Optional[List[str]] = None): ...
    def find_tool(self, tool_name: str) -> str: ...
    def setup_environment(self) -> Dict[str, str]: ...

Exception classes and error handling for command-line tool failures.
class CompilerToolError(Exception):
    """
    Exception raised when a compiler tool fails.

    Provides access to tool output, error messages, and exit codes
    for debugging compilation failures.
    """
    def __init__(
        self,
        tool_name: str,
        command_line: List[str],
        exit_code: int,
        stdout: Optional[bytes] = None,
        stderr: Optional[bytes] = None,
    ): ...

    @property
    def tool_name(self) -> str: ...
    @property
    def exit_code(self) -> int: ...
    @property
    def stdout_text(self) -> str: ...
    @property
    def stderr_text(self) -> str: ...

# Compile MLIR to VM module for CPU
iree-compile model.mlir \
--iree-hal-target-backends=llvm-cpu \
--iree-input-type=auto \
-o model.vmfb
# Compile for GPU with optimization
iree-compile model.mlir \
--iree-hal-target-backends=cuda \
--iree-opt-level=O3 \
--iree-llvmgpu-enable-prefetch \
-o model_gpu.vmfb

# Import ONNX model to MLIR
iree-import-onnx model.onnx \
--output-names=output \
--optimize \
-o model.mlir
# Optimize MLIR representation
iree-opt model.mlir \
--iree-flow-transformation-pipeline \
--canonicalize \
-o model_opt.mlir
# Compile optimized MLIR
iree-compile model_opt.mlir \
--iree-hal-target-backends=vulkan-spirv \
-o model_vulkan.vmfb

# Build for multiple targets using build system
iree-build compile_multi_target \
--model-path=./model.onnx \
--targets=llvm-cpu,cuda,vulkan-spirv \
--optimize \
--output-dir=./outputs
# Custom build with configuration
iree-build custom_build \
--config=build_config.yaml \
--build-dir=./build \
--parallel=4 \
--verbose

# Debug compilation with IR dumps
iree-compile model.mlir \
--iree-hal-target-backends=llvm-cpu \
--mlir-print-ir-after-all \
--mlir-print-debuginfo \
-o model_debug.vmfb
# Analyze IR structure
iree-ir-tool analyze model.mlir \
--show-stats \
--show-dialects \
--verify
# Compare two MLIR files
iree-ir-tool diff model1.mlir model2.mlir \
--ignore-debug-info

from iree.compiler.tools.binaries import CompilerToolError, find_tool, invoke_immediate
# Find compiler tool
iree_compile = find_tool("iree-compile")
# Build command line
command = [
iree_compile,
"model.mlir",
"--iree-hal-target-backends=llvm-cpu",
"-o", "model.vmfb"
]
# Execute compilation
try:
    output = invoke_immediate(command)
    print("Compilation successful")
except CompilerToolError as e:
    print(f"Compilation failed: {e}")
    print(f"Exit code: {e.exit_code}")
    print(f"Error output: {e.stderr_text}")

import subprocess
from pathlib import Path
def batch_compile_models(input_dir: str, output_dir: str, target: str = "llvm-cpu"):
    """Batch compile all MLIR files in a directory."""
    input_path = Path(input_dir)
    output_path = Path(output_dir)
    output_path.mkdir(exist_ok=True)
    for mlir_file in input_path.glob("*.mlir"):
        output_file = output_path / f"{mlir_file.stem}.vmfb"
        cmd = [
            "iree-compile",
            str(mlir_file),
            f"--iree-hal-target-backends={target}",
            "-o", str(output_file),
        ]
        try:
            subprocess.run(cmd, check=True, capture_output=True, text=True)
            print(f"✓ Compiled {mlir_file.name}")
        except subprocess.CalledProcessError as e:
            print(f"✗ Failed to compile {mlir_file.name}: {e.stderr}")

# Usage
batch_compile_models("./models", "./compiled", "cuda")

Install with Tessl CLI
npx tessl i tessl/pypi-iree-base-compiler