A Python wrapper for OpenCL that enables GPU and parallel computing, with comprehensive array operations and mathematical functions.
A utility for managing and coordinating multiple GPU computations with proper event-based synchronization and profiling.
Execute a sequence of GPU operations with dependency tracking using event-based synchronization.
Execute multiple independent GPU operations concurrently and synchronize their completion.
Coordinate GPU work with external processes using user events.
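Both the sequential and batch patterns above reduce to PyOpenCL's `wait_for` event lists. As a minimal sketch of the chaining logic, the snippet below uses a stubbed `enqueue` standing in for `cl.enqueue_nd_range_kernel` (the stub, the `chain` helper, and the kernel names are illustrative, not part of the package):

```python
# Sketch of the event-chaining pattern behind sequential execution.
# `enqueue` is a stand-in for pyopencl's cl.enqueue_nd_range_kernel:
# it records what each stage waited on and returns a new event object.
launched = []  # (kernel name, list of events waited on)

class Event:
    pass

def enqueue(kernel, wait_for=None):
    evt = Event()
    launched.append((kernel, list(wait_for or [])))
    return evt

def chain(kernels):
    """Enqueue each kernel so it waits on the previous kernel's event."""
    events, prev = [], None
    for k in kernels:
        evt = enqueue(k, wait_for=[prev] if prev is not None else None)
        events.append(evt)
        prev = evt
    return events

events = chain(["scale", "offset", "square"])
# stage 0 waits on nothing; each later stage waits on its predecessor
```

With real PyOpenCL objects the same loop applies unchanged. Independent (batch) operations instead pass `wait_for=None` for every enqueue and finish with `cl.wait_for_events(events)`.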
@generates
```python
class PipelineManager:
    """Manages GPU computation pipelines with event-based synchronization."""

    def __init__(self, context, queue):
        """
        Initialize the pipeline manager.

        Args:
            context: PyOpenCL context
            queue: PyOpenCL command queue (must support profiling)
        """
        pass

    def execute_sequential(self, kernels, buffers):
        """
        Execute kernels sequentially with event-based dependencies.

        Args:
            kernels: List of (kernel, global_size, local_size) tuples
            buffers: List of buffer arguments for the kernels

        Returns:
            dict: Timing information with keys 'stage_0', 'stage_1', 'stage_2'
                (one per kernel), each giving that kernel's execution time
                in seconds
        """
        pass

    def execute_batch(self, operations):
        """
        Execute multiple independent operations concurrently and wait for all.

        Args:
            operations: List of (kernel, global_size, local_size, args) tuples

        Returns:
            list: Results from all operations after all complete
        """
        pass

    def execute_with_user_event(self, kernel, global_size, local_size, args):
        """
        Execute a kernel that depends on a user event.

        Args:
            kernel: PyOpenCL kernel to execute
            global_size: Global work size
            local_size: Local work size
            args: Kernel arguments

        Returns:
            tuple: (user_event, result_event), where user_event must be
                completed manually to trigger execution
        """
        pass
```

Provides GPU computing support with event synchronization capabilities.
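OpenCL profiling counters (`event.profile.start` / `event.profile.end`) are reported in nanoseconds, so the timing dict that `execute_sequential` returns implies a nanoseconds-to-seconds conversion. A minimal sketch of that bookkeeping, using plain numbers in place of real profiling events (the `stage_timings` helper is illustrative, not package API):

```python
def stage_timings(profile_ns):
    """Build a 'stage_i' timing dict from (start_ns, end_ns) pairs,
    converting OpenCL's nanosecond profiling counters to seconds."""
    return {
        f"stage_{i}": (end - start) * 1e-9
        for i, (start, end) in enumerate(profile_ns)
    }

# With real events the pairs would come from
# (evt.profile.start, evt.profile.end) on a profiling-enabled queue.
timings = stage_timings([(0, 1_500_000), (1_500_000, 4_000_000), (4_000_000, 4_250_000)])
```

Profiling only works when the queue is created with `cl.command_queue_properties.PROFILING_ENABLE`, which is why the constructor requires a profiling-capable queue.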
@satisfied-by
Install with the Tessl CLI:
`npx tessl i tessl/pypi-pyopencldocs`
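The `(user_event, result_event)` pair returned by `execute_with_user_event` follows PyOpenCL's standard user-event pattern: the kernel is enqueued with `wait_for=[user_event]` and only runs once the host completes that event. A pure-Python model of the gate (the class and names below are illustrative, not PyOpenCL API):

```python
COMPLETE = 0  # stands in for cl.command_execution_status.COMPLETE

class UserEventModel:
    """Models a cl.UserEvent: queued work waits until set_status(COMPLETE)."""
    def __init__(self):
        self._waiters = []
        self.completed = False

    def on_complete(self, fn):
        # In OpenCL terms: work enqueued with wait_for=[user_event]
        if self.completed:
            fn()
        else:
            self._waiters.append(fn)

    def set_status(self, status):
        if status == COMPLETE:
            self.completed = True
            for fn in self._waiters:
                fn()
            self._waiters.clear()

ran = []
gate = UserEventModel()
gate.on_complete(lambda: ran.append("kernel"))
before = list(ran)         # still empty: host has not released the gate
gate.set_status(COMPLETE)  # external process signals readiness
```

In real code the equivalent sequence would be `user_evt = cl.UserEvent(context)`, then `cl.enqueue_nd_range_kernel(queue, kernel, global_size, local_size, wait_for=[user_evt])`, and finally `user_evt.set_status(cl.command_execution_status.COMPLETE)` once the external condition is met.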
evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
scenario-6
scenario-7
scenario-8
scenario-9
scenario-10