
Workspace: tessl
Visibility: Public
Describes: pypipkg:pypi/pyopencl@2025.2.x

tile.json

tessl/pypi-pyopencl

tessl install tessl/pypi-pyopencl@2025.2.0

Python wrapper for OpenCL enabling GPU and parallel computing with comprehensive array operations and mathematical functions

Agent Success: 86% (agent success rate when using this tile)

Improvement: 1.28x (improvement over the baseline success rate)

Baseline: 67% (agent success rate without this tile)

evals/scenario-9/task.md

GPU Task Pipeline Manager

A utility for managing and coordinating multiple GPU computations with proper event-based synchronization and profiling.

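The timing capabilities below rely on OpenCL event profiling, so the command queue handed to the manager has to be created with profiling enabled. A minimal setup sketch using the standard PyOpenCL API (the ctx and queue names are illustrative):

import pyopencl as cl

# Pick a platform/device interactively or via the PYOPENCL_CTX env var.
ctx = cl.create_some_context()

# Profiling must be enabled so kernel events expose start/end timestamps
# through event.profile.
queue = cl.CommandQueue(
    ctx, properties=cl.command_queue_properties.PROFILING_ENABLE
)
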
Capabilities

Pipeline Execution

Execute a sequence of GPU operations with dependency tracking using event-based synchronization.

  • Given three kernel functions (init_kernel, process_kernel, finalize_kernel), the pipeline executes them in sequence on the GPU and returns execution times for each stage @test

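One way this could look with PyOpenCL events, assuming the profiling-enabled queue from the setup sketch above (kernel and buffer names are placeholders): each stage is enqueued with wait_for pointing at the previous stage's event, and per-stage times are read from the event profiling counters.

# Sketch: chain three kernels through event dependencies.
evt0 = init_kernel(queue, global_size, local_size, buf)
evt1 = process_kernel(queue, global_size, local_size, buf, wait_for=[evt0])
evt2 = finalize_kernel(queue, global_size, local_size, buf, wait_for=[evt1])
evt2.wait()  # chained, so this also waits for evt0 and evt1

# profile.start / profile.end are device timestamps in nanoseconds.
timings = {
    f"stage_{i}": 1e-9 * (e.profile.end - e.profile.start)
    for i, e in enumerate([evt0, evt1, evt2])
}
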
Batch Processing with Wait Lists

Execute multiple independent GPU operations concurrently and synchronize their completion.

  • Given a list of 5 independent computation kernels, execute them concurrently and wait for all to complete before returning results @test

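A possible shape for this, assuming import pyopencl as cl and an operations list as described in the API below. Note that real overlap needs an out-of-order queue (or one queue per operation); an in-order queue still serializes the work even though the wait-list pattern stays the same.

# Sketch: enqueue independent kernels, then block on all of their events.
events = [
    knl(queue, gsize, lsize, *args)
    for knl, gsize, lsize, args in operations
]

# Block the host until every enqueued operation has completed;
# results can then be read back, e.g. with cl.enqueue_copy.
cl.wait_for_events(events)
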
External Event Synchronization

Coordinate GPU work with external processes using user events.

  • Create a computation that waits for a user event, then execute it by manually completing the user event from the host @test

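A minimal sketch with PyOpenCL user events (kernel, sizes, and args are placeholders): the kernel is enqueued with a wait list containing the user event, so it only starts once the host marks that event complete.

# Sketch: gate a kernel on a host-controlled user event.
user_event = cl.UserEvent(ctx)
result_event = kernel(queue, global_size, local_size, *args,
                      wait_for=[user_event])

# Later, from the host, release the gated work and wait for it.
user_event.set_status(cl.command_execution_status.COMPLETE)
result_event.wait()
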
Implementation

@generates

API

class PipelineManager:
    """Manages GPU computation pipelines with event-based synchronization."""

    def __init__(self, context, queue):
        """
        Initialize the pipeline manager.

        Args:
            context: PyOpenCL context
            queue: PyOpenCL command queue (must support profiling)
        """
        pass

    def execute_sequential(self, kernels, buffers):
        """
        Execute kernels sequentially with event-based dependencies.

        Args:
            kernels: List of (kernel, global_size, local_size) tuples
            buffers: List of buffer arguments for the kernels

        Returns:
            dict: Timing information with keys 'stage_0', 'stage_1', 'stage_2'
                  containing execution time in seconds for each kernel
        """
        pass

    def execute_batch(self, operations):
        """
        Execute multiple independent operations concurrently and wait for all.

        Args:
            operations: List of (kernel, global_size, local_size, args) tuples

        Returns:
            list: Results from all operations after all complete
        """
        pass

    def execute_with_user_event(self, kernel, global_size, local_size, args):
        """
        Execute a kernel that depends on a user event.

        Args:
            kernel: PyOpenCL kernel to execute
            global_size: Global work size
            local_size: Local work size
            args: Kernel arguments

        Returns:
            tuple: (user_event, result_event) where user_event must be
                   completed manually to trigger execution
        """
        pass

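A hypothetical usage sketch of the interface above; kernel and buffer names are placeholders, and the actual implementation is left to generation.

manager = PipelineManager(ctx, queue)

# Sequential pipeline: three (kernel, global_size, local_size) tuples.
timings = manager.execute_sequential(
    [(init_kernel, (n,), None),
     (process_kernel, (n,), None),
     (finalize_kernel, (n,), None)],
    [buf],
)
print(timings["stage_0"], timings["stage_1"], timings["stage_2"])

# Gated execution: the kernel runs only after the user event completes.
user_event, result_event = manager.execute_with_user_event(
    process_kernel, (n,), None, [buf]
)
user_event.set_status(cl.command_execution_status.COMPLETE)
result_event.wait()
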
Dependencies { .dependencies }

pyopencl { .dependency }

Provides GPU computing support with event synchronization capabilities.

@satisfied-by