Python wrapper for the NVIDIA CUDA parallel computation API, with object cleanup, automatic error checking, and convenient abstractions.
Build a context-manager utility for GPU computation that handles device context setup, runs GPU operations, and performs proper synchronization and cleanup.
Create a GPUContextManager class that sets up a CUDA context on the selected device on entry, releases it on exit, and provides helpers for synchronizing the context and querying device information, as outlined in the skeleton below.
@generates

class GPUContextManager:
    """
    Context manager for GPU operations with synchronization support.
    """

    def __init__(self, device_id=0):
        """
        Initialize the GPU context manager.

        Args:
            device_id: The GPU device ID to use (default: 0)
        """
        pass

    def __enter__(self):
        """
        Enter the context and set up the GPU context.

        Returns:
            self
        """
        pass

    def __exit__(self, exc_type, exc_val, exc_tb):
        """
        Exit the context and clean up GPU resources.
        """
        pass

    def synchronize(self):
        """
        Synchronize the current context to ensure all GPU operations complete.
        """
        pass

    def get_device_info(self):
        """
        Get information about the current device.

        Returns:
            dict: Dictionary with 'name' and 'total_memory' keys
        """
        pass
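For reference, here is a minimal sketch of how such a class could be implemented on top of PyCUDA's driver API. The class and method signatures come from the skeleton above; the method bodies are illustrative assumptions, not the generated implementation.

```python
# Illustrative sketch only: the bodies below are one possible PyCUDA-based
# implementation, assumed for demonstration rather than taken from the spec.
import pycuda.driver as cuda


class GPUContextManager:
    """Context manager for GPU operations with synchronization support."""

    def __init__(self, device_id=0):
        self.device_id = device_id
        self.device = None
        self.context = None

    def __enter__(self):
        cuda.init()                                 # initialize the CUDA driver API
        self.device = cuda.Device(self.device_id)   # select the requested GPU
        self.context = self.device.make_context()   # create and push a context
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if self.context is not None:
            self.context.pop()      # remove the context from this thread's stack
            self.context.detach()   # release the context's resources
            self.context = None
        return False                # do not suppress exceptions

    def synchronize(self):
        cuda.Context.synchronize()  # block until all queued GPU work completes

    def get_device_info(self):
        return {
            "name": self.device.name(),
            "total_memory": self.device.total_memory(),  # in bytes
        }
```

Under the same assumptions, usage could look like the following; the gpuarray arithmetic is purely illustrative.

```python
import numpy as np
import pycuda.gpuarray as gpuarray

with GPUContextManager(device_id=0) as gpu:
    info = gpu.get_device_info()
    print(info["name"], info["total_memory"])

    a = gpuarray.to_gpu(np.arange(16, dtype=np.float32))  # host -> device copy
    b = a * 2                   # element-wise multiply queued on the GPU
    gpu.synchronize()           # wait for all queued GPU work to finish
    result = b.get()            # device -> host copy of the result
```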
Provides GPU computing capabilities with context management and synchronization.
@satisfied-by
tessl i tessl/pypi-pycuda@2025.1.0
docs
evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
scenario-6
scenario-7
scenario-8
scenario-9
scenario-10