
tessl/pypi-cudf-cu12

GPU DataFrame library for loading, joining, aggregating, filtering, and otherwise manipulating data


docs/pandas-compatibility.md

Pandas Compatibility

cuDF provides pandas compatibility through cudf.pandas, which enables automatic GPU acceleration for existing pandas code. Operations run on the GPU when beneficial and fall back transparently to CPU pandas when an operation is unsupported.

Import Statements

# Pandas acceleration mode
import cudf.pandas
cudf.pandas.install()  # Enable automatic acceleration

# Profiling utilities
from cudf.pandas import Profiler

# IPython integration  
%load_ext cudf.pandas  # In Jupyter/IPython

# Proxy utilities
from cudf.pandas import (
    as_proxy_object, is_proxy_object, is_proxy_instance
)

Acceleration Mode

A drop-in replacement system that automatically accelerates pandas operations on the GPU when beneficial.

def install() -> None:
    """
    Enable cuDF pandas accelerator mode for automatic GPU acceleration
    
    Installs cuDF as a pandas accelerator that intercepts pandas operations
    and routes them to GPU when possible. Provides transparent fallback
    to CPU pandas for unsupported operations.
    
    After installation, existing pandas code automatically benefits from
    GPU acceleration without modification. Operations that cannot be 
    accelerated fall back to pandas seamlessly.
    
    Features:
        - Automatic GPU acceleration for supported operations
        - Transparent fallback to CPU pandas for unsupported operations  
        - Zero code changes required for existing pandas workflows
        - Maintains pandas API compatibility and behavior
        - Intelligent routing based on data size and operation type
        
    Examples:
        # Enable acceleration globally
        import cudf.pandas
        cudf.pandas.install()
        
        # Now pandas operations automatically use GPU when beneficial
        import pandas as pd
        df = pd.DataFrame({'x': range(1000000), 'y': range(1000000)})
        result = df.groupby('x').sum()  # Automatically uses GPU
        
        # Fallback for unsupported operations
        result = df.some_unsupported_operation()  # Uses CPU pandas
        
        # Works with existing pandas code unchanged
        df.to_csv('output.csv')  # GPU-accelerated I/O when possible
    """

Performance Profiling

Tools for analyzing pandas code to identify GPU acceleration opportunities.

class Profiler:
    """
    Performance profiler for pandas acceleration opportunities
    
    Analyzes pandas operations to identify performance bottlenecks and
    acceleration potential. Provides insights into which operations
    benefit from GPU acceleration and performance improvements achieved.
    
    Attributes:
        results: dict containing profiling results and statistics
        
    Methods:
        start(): Begin profiling pandas operations
        stop(): End profiling and collect results
        print_stats(): Display profiling statistics  
        get_results(): Return detailed profiling data
        
    Examples:
        # Basic profiling workflow
        import cudf.pandas
        cudf.pandas.install()
        
        profiler = cudf.pandas.Profiler()
        profiler.start()
        
        # Run pandas operations to profile
        import pandas as pd
        df = pd.DataFrame({'A': range(10000), 'B': range(10000)})
        result1 = df.groupby('A').sum()
        result2 = df.merge(df, on='A')
        result3 = df.sort_values('B')
        
        profiler.stop()
        profiler.print_stats()
        
        # Get detailed results
        stats = profiler.get_results()
        print(f"GPU accelerated operations: {stats['gpu_ops']}")
        print(f"CPU fallback operations: {stats['cpu_ops']}")
        print(f"Total speedup: {stats['speedup']:.2f}x")
    """
    
    def start(self) -> None:
        """
        Begin profiling pandas operations
        
        Starts collecting performance metrics for pandas operations
        including execution time, memory usage, and routing decisions.
        """
    
    def stop(self) -> None:
        """
        End profiling and collect results
        
        Stops profiling and computes final statistics including
        performance improvements and operation categorization.
        """
    
    def print_stats(self) -> None:
        """
        Display profiling statistics in readable format
        
        Prints summary of profiled operations including:
        - Total operations analyzed
        - GPU vs CPU operation breakdown  
        - Performance improvements achieved
        - Memory usage patterns
        - Recommendations for optimization
        """
    
    def get_results(self) -> dict:
        """
        Return detailed profiling data as dictionary
        
        Returns:
            dict: Comprehensive profiling results containing:
                - operation_times: Execution times for each operation
                - routing_decisions: GPU vs CPU routing for operations
                - memory_usage: Memory consumption patterns
                - speedups: Performance improvements achieved
                - recommendations: Optimization suggestions
        """

IPython Integration

Magic commands and extensions for Jupyter notebook integration.

def load_ipython_extension(ipython) -> None:
    """
    Load cuDF pandas IPython extension for notebook integration
    
    Provides magic commands and enhanced display formatting for
    cuDF pandas operations in Jupyter notebooks and IPython.
    
    Magic Commands Available:
        %%cudf_pandas_profile: Profile cell operations for acceleration opportunities
        %cudf_pandas_status: Show current acceleration status and statistics
        %cudf_pandas_fallback: Display recent fallback operations and reasons
        
    Parameters:
        ipython: IPython.InteractiveShell
            IPython shell instance to extend
            
    Examples:
        # In Jupyter notebook
        %load_ext cudf.pandas
        
        # Profile a cell's operations  
        %%cudf_pandas_profile
        import pandas as pd
        df = pd.DataFrame({'A': range(10000)})
        result = df.groupby('A').count()
        
        # Check acceleration status
        %cudf_pandas_status
        
        # See fallback operations
        %cudf_pandas_fallback
    """

Proxy Object System

Utilities for working with the proxy object system that enables transparent acceleration.
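The dispatch idea behind proxying can be illustrated with a minimal pure-Python sketch that needs no GPU. `FastBackend` and `SlowBackend` are hypothetical stand-ins for cuDF and CPU pandas; the real proxy machinery in cudf.pandas is considerably more involved.

```python
class FastBackend:
    """Hypothetical accelerated backend: supports only some operations."""
    def __init__(self, data):
        self.data = data

    def total(self):
        return sum(self.data)

class SlowBackend:
    """Hypothetical fallback backend: supports everything."""
    def __init__(self, data):
        self.data = data

    def total(self):
        return sum(self.data)

    def head(self, n):
        return self.data[:n]

class Proxy:
    """Route each attribute access to the fast backend, falling back on failure."""
    def __init__(self, data):
        self._fast = FastBackend(data)
        self._slow = SlowBackend(data)

    def __getattr__(self, name):
        try:
            return getattr(self._fast, name)   # try the accelerated path first
        except AttributeError:
            return getattr(self._slow, name)   # transparent fallback

p = Proxy([1, 2, 3, 4])
print(p.total())    # served by the fast backend -> 10
print(p.head(2))    # not on the fast backend; falls back -> [1, 2]
```

Because the caller only ever sees the proxy, the routing decision is invisible, which is what lets existing pandas code run unmodified.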

def as_proxy_object(obj, typ=None) -> object:
    """
    Wrap object as proxy for pandas acceleration
    
    Creates proxy object that intercepts method calls and routes them
    to appropriate backend (GPU cuDF or CPU pandas). Used internally
    by the acceleration system.
    
    Parameters:
        obj: Any
            Object to wrap as proxy (typically cuDF object)
        typ: type, optional
            Target proxy type (typically pandas type)
            
    Returns:
        object: Proxy object that behaves like pandas but uses cuDF backend
        
    Examples:
        # Typically used internally, but can be used explicitly
        import cudf
        cudf_df = cudf.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
        
        # Create proxy that behaves like pandas DataFrame
        proxy_df = cudf.pandas.as_proxy_object(cudf_df)
        
        # Proxy behaves like pandas but uses cuDF backend
        result = proxy_df.sum()  # Uses cuDF implementation
        type(result).__name__  # Shows 'Series' (pandas-like interface)
    """

def is_proxy_object(obj) -> bool:
    """
    Check if object is a proxy object for pandas acceleration
    
    Determines whether an object is part of the cuDF pandas proxy system,
    meaning it routes operations between cuDF and pandas backends.
    
    Parameters:
        obj: Any
            Object to check for proxy status
            
    Returns:
        bool: True if object is proxy object, False otherwise
        
    Examples:
        import cudf.pandas
        cudf.pandas.install()
        import pandas as pd
        
        # Create DataFrame (automatically proxied after install)
        df = pd.DataFrame({'A': [1, 2, 3]})
        
        # Check if it's a proxy
        is_proxy = cudf.pandas.is_proxy_object(df)  # True
        
        # Regular Python objects are not proxies
        regular_list = [1, 2, 3]
        is_proxy = cudf.pandas.is_proxy_object(regular_list)  # False
        
        # Native cuDF objects are not proxies  
        import cudf
        cudf_df = cudf.DataFrame({'A': [1, 2, 3]})
        is_proxy = cudf.pandas.is_proxy_object(cudf_df)  # False
    """

def is_proxy_instance(obj, typ) -> bool:
    """
    Check if object is instance of proxy class for given type
    
    More specific check that verifies an object is a proxy instance
    of a particular pandas type (DataFrame, Series, etc.).
    
    Parameters:
        obj: Any
            Object to check
        typ: type
            Type to check proxy instance against (e.g., pd.DataFrame)
            
    Returns:
        bool: True if object is proxy instance of specified type
        
    Examples:
        import cudf.pandas
        cudf.pandas.install()
        import pandas as pd
        
        # Create proxied objects
        df = pd.DataFrame({'A': [1, 2, 3]})
        series = pd.Series([1, 2, 3])
        
        # Check specific proxy types
        is_df_proxy = cudf.pandas.is_proxy_instance(df, pd.DataFrame)  # True
        is_series_proxy = cudf.pandas.is_proxy_instance(series, pd.Series)  # True
        
        # Cross-type checks return False
        is_df_as_series = cudf.pandas.is_proxy_instance(df, pd.Series)  # False
        
        # Non-proxy objects return False
        regular_dict = {'A': [1, 2, 3]}
        is_dict_proxy = cudf.pandas.is_proxy_instance(regular_dict, pd.DataFrame)  # False
    """

Acceleration Behavior

Automatic Routing

The cuDF pandas system intelligently routes operations based on several factors:

# Operations automatically routed to GPU when beneficial
import cudf.pandas
cudf.pandas.install()
import pandas as pd

# Large dataset operations -> GPU acceleration
large_df = pd.DataFrame({'x': range(1000000), 'y': range(1000000)})
result = large_df.groupby('x').sum()  # Uses cuDF GPU acceleration

# Small dataset operations -> CPU pandas (lower overhead)
small_df = pd.DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})
result = small_df.sum()  # Uses CPU pandas

# Supported operations -> GPU when data size warrants it
gpu_result = large_df.merge(large_df, on='x')  # GPU acceleration

# Unsupported operations -> automatic fallback to pandas
fallback_result = large_df.some_pandas_only_method()  # CPU fallback

Performance Thresholds

# The system considers multiple factors for routing decisions:

# 1. Data size thresholds
small_data = pd.Series(range(100))      # -> CPU pandas  
large_data = pd.Series(range(100000))   # -> GPU cuDF

# 2. Operation complexity
simple_op = df['col'].sum()             # -> GPU for large data
complex_op = df.apply(custom_function)  # -> CPU fallback

# 3. Memory availability  
# GPU operations require sufficient GPU memory
# Automatic fallback if GPU memory insufficient

# 4. Operation support
supported_ops = ['groupby', 'merge', 'concat', 'sort_values']  # -> GPU
unsupported_ops = ['some_pandas_specific_method']              # -> CPU
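The threshold logic listed above can be mocked as a pure-Python routing function. The numbers and the operation set here are purely illustrative, not real cudf.pandas constants; the actual heuristics are internal to the library.

```python
# Hypothetical routing decision combining the four factors listed above.
GPU_SUPPORTED = {"groupby", "merge", "concat", "sort_values"}
MIN_GPU_ROWS = 10_000  # illustrative threshold, not a real cudf constant

def choose_backend(operation, n_rows, gpu_memory_ok=True):
    """Return 'gpu' when the op is supported, the data is large enough,
    and GPU memory is available; otherwise fall back to 'cpu'."""
    if operation not in GPU_SUPPORTED:
        return "cpu"          # unsupported op -> pandas fallback
    if n_rows < MIN_GPU_ROWS:
        return "cpu"          # small data -> CPU overhead is lower
    if not gpu_memory_ok:
        return "cpu"          # insufficient GPU memory -> automatic fallback
    return "gpu"

print(choose_backend("groupby", 1_000_000))   # -> gpu
print(choose_backend("apply", 1_000_000))     # -> cpu (unsupported)
print(choose_backend("merge", 100))           # -> cpu (too small)
```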

Configuration Options

# NOTE: the parameters below are illustrative only -- cudf.pandas.install()
# currently takes no such arguments, and cudf.pandas.configure() is a
# hypothetical API sketched here to show what configuration might cover.
import cudf.pandas

# Hypothetical: install with custom thresholds
cudf.pandas.install(
    min_data_size=10000,      # Minimum rows for GPU acceleration
    memory_fraction=0.8,      # Max GPU memory fraction to use
    fallback_warnings=True    # Warn on fallback operations
)

# Hypothetical: disable acceleration for specific operations
cudf.pandas.configure(
    disable_operations=['apply', 'applymap'],  # Force CPU for these
    enable_profiling=True,     # Enable automatic profiling
    cache_conversions=True     # Cache pandas<->cuDF conversions
)

Common Usage Patterns

Drop-in Acceleration

# Existing pandas code - no changes needed
import cudf.pandas
cudf.pandas.install()

# Now all pandas imports automatically use acceleration
import pandas as pd
import numpy as np

# Large-scale data processing (automatically accelerated)
df = pd.read_csv('large_dataset.csv')  # GPU-accelerated I/O
df_grouped = df.groupby('category').agg({
    'sales': 'sum',
    'quantity': 'mean'
})  # GPU-accelerated groupby

# Join operations
df_merged = df.merge(df_grouped, on='category')  # GPU-accelerated merge

# Output operations  
df_merged.to_parquet('output.parquet')  # GPU-accelerated I/O

Performance Analysis

# Profile existing pandas workflows
import cudf.pandas
cudf.pandas.install()

profiler = cudf.pandas.Profiler()
profiler.start()

# Run existing pandas pipeline
import pandas as pd
df = pd.read_csv('data.csv')
processed = (df
    .fillna(0)
    .groupby('category')
    .agg({'value': ['sum', 'mean', 'std']})
    .reset_index()
)
processed.to_csv('results.csv')

profiler.stop()
stats = profiler.get_results()

print(f"GPU accelerated operations: {stats['gpu_ops']}")
print(f"CPU fallback operations: {stats['cpu_ops']}")
print(f"Overall speedup: {stats['speedup']:.2f}x")

Gradual Migration

# Hybrid approach - mix cuDF and pandas as needed
import cudf
import pandas as pd
import cudf.pandas

# Explicit cuDF for known GPU-beneficial operations
cudf_df = cudf.read_parquet('large_data.parquet')  # Explicit GPU
processed_cudf = cudf_df.groupby('key').sum()

# Convert to pandas for unsupported operations
pandas_df = processed_cudf.to_pandas()
result = pandas_df.some_pandas_only_operation()

# Convert back for further GPU processing
final_cudf = cudf.from_pandas(result)
final_result = final_cudf.sort_values('column')

Compatibility Matrix

Fully Supported Operations

  • I/O: read_csv, read_parquet, to_csv, to_parquet
  • Groupby: Standard aggregations (sum, mean, count, min, max)
  • Joins: merge, concat, join
  • Sorting: sort_values, sort_index
  • Filtering: Boolean indexing, query
  • Reshaping: pivot_table, melt, stack, unstack
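Several of the fully supported operations can be exercised with ordinary pandas code; after `cudf.pandas.install()` the identical script would run GPU-accelerated. It is shown here with plain pandas so it runs anywhere:

```python
import pandas as pd

df = pd.DataFrame({
    "category": ["a", "b", "a", "b"],
    "sales": [10, 20, 30, 40],
})

# Groupby aggregation -- fully supported for acceleration
totals = df.groupby("category", as_index=False)["sales"].sum()

# Join -- merge the per-category totals back onto the original rows
joined = df.merge(totals, on="category", suffixes=("", "_total"))

# Sorting
ordered = joined.sort_values("sales", ascending=False)

# Boolean filtering
big = ordered[ordered["sales"] > 15]
print(big[["category", "sales"]].to_dict("records"))
```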

Partial Support (Selective Acceleration)

  • String Operations: Common string methods with GPU acceleration
  • DateTime Operations: Basic datetime arithmetic and formatting
  • Statistical Operations: Standard statistical functions
  • Window Operations: Rolling and expanding windows

Fallback Operations (CPU Only)

  • Custom Functions: User-defined functions in apply, map
  • Advanced String Operations: Complex regex and advanced text processing
  • Specialized Statistical Methods: Advanced statistical functions
  • Plot Operations: Matplotlib integration (uses CPU data)

Performance Benefits

Typical Speedups

  • Large Groupby Operations: 10-100x faster than pandas
  • I/O Operations: 2-20x faster for Parquet, CSV reading/writing
  • Join Operations: 5-50x faster for large table joins
  • Sorting: 3-30x faster for large datasets
  • Aggregations: 10-100x faster for numerical aggregations

Memory Efficiency

  • Columnar Storage: More memory-efficient data representation
  • GPU Memory Management: Automatic memory optimization
  • Reduced Copying: Fewer data copies between operations
  • Memory Pools: Efficient memory allocation and reuse

Best Practices

  • Let the System Decide: Trust automatic routing for most operations
  • Profile Regularly: Use Profiler to identify optimization opportunities
  • Monitor Fallbacks: Check for unexpected CPU fallbacks that might indicate issues
  • Batch Operations: Combine operations to maximize GPU efficiency
  • Memory Awareness: Consider GPU memory limits for very large datasets
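The "Batch Operations" advice amounts to preferring vectorized, whole-column expressions over per-row Python callbacks: a column-wise operation is a single accelerable step, whereas `apply` with a Python lambda forces a per-row loop and a CPU fallback. A plain-pandas sketch of the two styles:

```python
import pandas as pd

df = pd.DataFrame({"x": range(6), "y": [1, 2, 3, 4, 5, 6]})

# Per-row apply forces a Python-level loop -- always a CPU fallback path
slow = df.apply(lambda row: row["x"] * 2 + row["y"], axis=1)

# Equivalent vectorized expression -- one accelerable operation per column
fast = df["x"] * 2 + df["y"]

assert slow.equals(fast)  # same result, very different execution path
```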

Install with Tessl CLI

npx tessl i tessl/pypi-cudf-cu12
