Python library for monitoring and managing AMD GPUs and CPUs with programmatic hardware metrics access
Functions for configuring and querying GPU partitioning modes, including compute partitions, memory partitions, and accelerator partition profiles. These functions enable dividing GPU resources into multiple isolated partitions for workload isolation, resource allocation, and multi-tenancy scenarios.
GPU partitioning allows a single physical GPU to be divided into multiple logical partitions, each with dedicated compute resources and memory. This enables workload isolation, guaranteed resource allocation, and multi-tenancy.
These features are typically used in virtualization, cloud computing, and HPC environments to provide isolation and guaranteed resources to different workloads or tenants.
Get the current compute partition configuration.
def amdsmi_get_gpu_compute_partition(
processor_handle: processor_handle
) -> str:
"""
Get the current GPU compute partition configuration.
Returns a string describing the current compute partition mode, which determines
how the GPU's compute resources are divided. Compute partitioning splits the GPU's
execution resources (XCDs, Accelerator Complex Dies) into isolated partitions.
Parameters:
- processor_handle: Handle for the target GPU device
Returns:
- str: Current compute partition mode as a string (e.g., "SPX", "DPX", "TPX", "QPX", "CPX")
Raises:
- AmdSmiParameterException: If processor_handle is invalid
- AmdSmiLibraryException: On query failure or if feature not supported
Example:
```python
import amdsmi
amdsmi.amdsmi_init()
device = amdsmi.amdsmi_get_processor_handles()[0]
# Query current compute partition
partition = amdsmi.amdsmi_get_gpu_compute_partition(device)
print(f"Current compute partition: {partition}")
amdsmi.amdsmi_shut_down()
```
"""
Configure the GPU compute partition mode.
def amdsmi_set_gpu_compute_partition(
processor_handle: processor_handle,
compute_partition: AmdSmiComputePartitionType
) -> None:
"""
Set the GPU compute partition configuration.
Configures how the GPU's compute resources are partitioned. This setting typically
requires a GPU reset to take effect and may require elevated privileges.
Compute partitioning allows dividing the GPU's execution resources into isolated
logical partitions, enabling resource allocation and isolation for different
workloads or virtual machines.
Parameters:
- processor_handle: Handle for the target GPU device
- compute_partition (AmdSmiComputePartitionType): Partition mode to set:
- SPX: Single Partition (all resources in one partition)
- DPX: Dual Partition (resources split into 2 partitions)
- TPX: Triple Partition (resources split into 3 partitions)
- QPX: Quad Partition (resources split into 4 partitions)
- CPX: Core Partitioned (finest granularity; each XCD becomes its own partition)
Raises:
- AmdSmiParameterException: If processor_handle or compute_partition is invalid
- AmdSmiLibraryException: On configuration failure, insufficient permissions,
or if feature not supported
Notes:
- This operation typically requires root/administrator privileges
- A GPU reset may be required for changes to take effect
- Not all partition modes are supported on all GPU models
- Check available modes with amdsmi_get_gpu_accelerator_partition_profile_config()
Example:
```python
import amdsmi
from amdsmi import AmdSmiComputePartitionType
amdsmi.amdsmi_init()
device = amdsmi.amdsmi_get_processor_handles()[0]
# Set to dual partition mode
try:
    amdsmi.amdsmi_set_gpu_compute_partition(
        device, AmdSmiComputePartitionType.DPX
    )
    print("Compute partition set to DPX (Dual Partition)")
    print("Note: GPU reset may be required for changes to take effect")
except Exception as e:
    print(f"Failed to set compute partition: {e}")
amdsmi.amdsmi_shut_down()
```
"""
Get the current memory partition configuration.
def amdsmi_get_gpu_memory_partition(
processor_handle: processor_handle
) -> str:
"""
Get the current GPU memory partition configuration.
Returns a string describing the current memory partition mode (NPS mode).
Memory partitioning configures how the GPU's memory is divided into NUMA
(Non-Uniform Memory Access) domains.
Parameters:
- processor_handle: Handle for the target GPU device
Returns:
- str: Current memory partition mode as a string (e.g., "NPS1", "NPS2", "NPS4", "NPS8")
Raises:
- AmdSmiParameterException: If processor_handle is invalid
- AmdSmiLibraryException: On query failure or if feature not supported
Example:
```python
import amdsmi
amdsmi.amdsmi_init()
device = amdsmi.amdsmi_get_processor_handles()[0]
# Query current memory partition
mem_partition = amdsmi.amdsmi_get_gpu_memory_partition(device)
print(f"Current memory partition: {mem_partition}")
amdsmi.amdsmi_shut_down()
```
"""
Configure the GPU memory partition mode.
def amdsmi_set_gpu_memory_partition(
processor_handle: processor_handle,
memory_partition: AmdSmiMemoryPartitionType
) -> None:
"""
Set the GPU memory partition configuration.
Configures the GPU's memory partition mode (NPS mode). This determines how
the GPU's memory is divided into NUMA domains. Different NPS modes provide
different trade-offs between memory locality and flexibility.
Parameters:
- processor_handle: Handle for the target GPU device
- memory_partition (AmdSmiMemoryPartitionType): Partition mode to set:
- NPS1: Single NUMA domain (all memory unified)
- NPS2: Two NUMA domains (memory split in 2)
- NPS4: Four NUMA domains (memory split in 4)
- NPS8: Eight NUMA domains (memory split in 8)
Raises:
- AmdSmiParameterException: If processor_handle or memory_partition is invalid
- AmdSmiLibraryException: On configuration failure, insufficient permissions,
or if feature not supported
Notes:
- This operation typically requires root/administrator privileges
- A GPU reset may be required for changes to take effect
- Not all NPS modes are supported on all GPU models
- Higher NPS modes provide finer memory domain granularity
- Check available modes with amdsmi_get_gpu_memory_partition_config()
Example:
```python
import amdsmi
from amdsmi import AmdSmiMemoryPartitionType
amdsmi.amdsmi_init()
device = amdsmi.amdsmi_get_processor_handles()[0]
# Set to NPS4 mode (4 NUMA domains)
try:
    amdsmi.amdsmi_set_gpu_memory_partition(
        device, AmdSmiMemoryPartitionType.NPS4
    )
    print("Memory partition set to NPS4")
    print("Note: GPU reset may be required for changes to take effect")
except Exception as e:
    print(f"Failed to set memory partition: {e}")
amdsmi.amdsmi_shut_down()
```
"""
Alternative function to configure memory partition mode.
def amdsmi_set_gpu_memory_partition_mode(
processor_handle: processor_handle,
memory_partition: AmdSmiMemoryPartitionType
) -> None:
"""
Set the GPU memory partition mode.
This function is equivalent to amdsmi_set_gpu_memory_partition() and configures
the GPU's memory partition mode (NPS mode).
Parameters:
- processor_handle: Handle for the target GPU device
- memory_partition (AmdSmiMemoryPartitionType): Partition mode to set:
- NPS1: Single NUMA domain
- NPS2: Two NUMA domains
- NPS4: Four NUMA domains
- NPS8: Eight NUMA domains
Raises:
- AmdSmiParameterException: If processor_handle or memory_partition is invalid
- AmdSmiLibraryException: On configuration failure or if feature not supported
Notes:
- This function has the same behavior as amdsmi_set_gpu_memory_partition()
- Requires elevated privileges and may need GPU reset
Example:
```python
import amdsmi
from amdsmi import AmdSmiMemoryPartitionType
amdsmi.amdsmi_init()
device = amdsmi.amdsmi_get_processor_handles()[0]
# Set memory partition mode
amdsmi.amdsmi_set_gpu_memory_partition_mode(
device, AmdSmiMemoryPartitionType.NPS2
)
amdsmi.amdsmi_shut_down()
```
"""
Get detailed memory partition configuration including supported modes.
def amdsmi_get_gpu_memory_partition_config(
processor_handle: processor_handle
) -> Dict[str, Any]:
"""
Get detailed GPU memory partition configuration.
Returns comprehensive information about memory partitioning capabilities,
including supported NPS modes, current mode, and NUMA configuration.
Parameters:
- processor_handle: Handle for the target GPU device
Returns:
- dict: Memory partition configuration containing:
- partition_caps (List[str]): List of supported NPS modes (e.g., ["NPS1", "NPS2", "NPS4"])
Returns ["N/A"] if no modes are supported
- mp_mode (str): Current memory partition mode (e.g., "NPS1", "NPS4")
Returns "N/A" if mode is unknown
- num_numa_ranges (str): Number of NUMA ranges (currently returns "N/A")
- numa_range (str): NUMA range details (currently returns "N/A")
Raises:
- AmdSmiParameterException: If processor_handle is invalid
- AmdSmiLibraryException: On query failure
Example:
```python
import amdsmi
amdsmi.amdsmi_init()
device = amdsmi.amdsmi_get_processor_handles()[0]
# Get memory partition configuration
config = amdsmi.amdsmi_get_gpu_memory_partition_config(device)
print("Memory Partition Configuration:")
print(f" Current mode: {config['mp_mode']}")
print(f" Supported modes: {', '.join(config['partition_caps'])}")
# Check if specific mode is supported
if "NPS4" in config['partition_caps']:
    print(" NPS4 mode is supported")
amdsmi.amdsmi_shut_down()
```
"""
Get the current accelerator partition profile and resource allocation.
def amdsmi_get_gpu_accelerator_partition_profile(
processor_handle: processor_handle
) -> Dict[str, Any]:
"""
Get the current GPU accelerator partition profile.
Returns information about the current accelerator partition profile, including
partition IDs, profile type, number of partitions, and resource allocation.
Accelerator partitions define how specialized hardware resources (encoders,
decoders, DMA engines, JPEG engines, XCC units) are allocated across partitions.
Parameters:
- processor_handle: Handle for the target GPU device
Returns:
- dict: Accelerator partition profile containing:
- partition_id (List[int]): List of partition IDs for this device
- partition_profile (dict): Profile details with:
- profile_type (str): Profile type (SPX, DPX, TPX, QPX, CPX) or "N/A"
- num_partitions (int or str): Number of partitions or "N/A"
- profile_index (int or str): Profile index or "N/A"
- memory_caps (List[str] or str): Supported memory modes or "N/A"
- num_resources (int or str): Number of resource types or "N/A"
- resources (List[dict] or str): Resource allocation details or "N/A"
Returns "N/A" values if feature is not supported on the device.
Raises:
- AmdSmiParameterException: If processor_handle is invalid
- AmdSmiLibraryException: On query failure (except NOT_SUPPORTED)
Example:
```python
import amdsmi
amdsmi.amdsmi_init()
device = amdsmi.amdsmi_get_processor_handles()[0]
# Get accelerator partition profile
profile = amdsmi.amdsmi_get_gpu_accelerator_partition_profile(device)
print("Accelerator Partition Profile:")
print(f" Partition IDs: {profile['partition_id']}")
partition_info = profile['partition_profile']
if partition_info['profile_type'] != "N/A":
    print(f" Profile type: {partition_info['profile_type']}")
    print(f" Number of partitions: {partition_info['num_partitions']}")
    print(f" Profile index: {partition_info['profile_index']}")
    print(f" Memory capabilities: {partition_info['memory_caps']}")
    if partition_info['resources'] != "N/A":
        print(f" Resources ({partition_info['num_resources']}):")
        for res in partition_info['resources']:
            print(f" - {res}")
else:
    print(" Accelerator partitioning not supported")
amdsmi.amdsmi_shut_down()
```
"""
Configure the accelerator partition profile.
def amdsmi_set_gpu_accelerator_partition_profile(
processor_handle: processor_handle,
profile_index: int
) -> None:
"""
Set the GPU accelerator partition profile.
Configures the accelerator partition profile by profile index. The profile
determines how accelerator resources (encoders, decoders, DMA engines, etc.)
are allocated across partitions.
Parameters:
- processor_handle: Handle for the target GPU device
- profile_index (int): Profile index to set (0-based)
Valid indices depend on available profiles from
amdsmi_get_gpu_accelerator_partition_profile_config()
Raises:
- AmdSmiParameterException: If processor_handle or profile_index is invalid
- AmdSmiLibraryException: On configuration failure, insufficient permissions,
or if feature not supported
Notes:
- This operation typically requires root/administrator privileges
- A GPU reset may be required for changes to take effect
- Use amdsmi_get_gpu_accelerator_partition_profile_config() to discover
available profile indices and their resource allocations
- Profile index validity depends on GPU model and capabilities
Example:
```python
import amdsmi
amdsmi.amdsmi_init()
device = amdsmi.amdsmi_get_processor_handles()[0]
# Get available profiles
config = amdsmi.amdsmi_get_gpu_accelerator_partition_profile_config(device)
# Display available profiles
print("Available accelerator partition profiles:")
for profile in config['profiles']:
    print(f" Profile {profile['profile_index']}: "
          f"{profile['profile_type']} - "
          f"{profile['num_partitions']} partitions")
# Set to first available profile
if config['num_profiles'] > 0:
    profile_idx = config['profiles'][0]['profile_index']
    try:
        amdsmi.amdsmi_set_gpu_accelerator_partition_profile(
            device, profile_idx
        )
        print(f"Accelerator partition profile set to index {profile_idx}")
        print("Note: GPU reset may be required")
    except Exception as e:
        print(f"Failed to set profile: {e}")
amdsmi.amdsmi_shut_down()
```
"""
Get detailed configuration of all available accelerator partition profiles.
def amdsmi_get_gpu_accelerator_partition_profile_config(
processor_handle: processor_handle
) -> Dict[str, Any]:
"""
Get detailed GPU accelerator partition profile configuration.
Returns comprehensive information about all available accelerator partition
profiles, including resource allocation details for each profile.
This function provides a catalog of supported partition configurations,
showing how different accelerator resources (XCC, encoders, decoders, DMA,
JPEG engines) are allocated in each profile.
Parameters:
- processor_handle: Handle for the target GPU device
Returns:
- dict: Accelerator partition configuration containing:
- num_profiles (int): Number of available partition profiles
- num_resource_profiles (int): Number of resource profile entries
- profiles (List[dict]): List of available profiles, each containing:
- profile_type (str): Profile type (SPX, DPX, TPX, QPX, CPX)
- num_partitions (int): Number of partitions in this profile
- profile_index (int): Profile index for use with set functions
- memory_caps (List[str]): Supported memory modes (NPS1, NPS2, NPS4, NPS8)
- num_resources (int): Number of resource types allocated
- resources (List[dict]): Resource allocation details, each containing:
- profile_index (int): Profile this resource belongs to
- resource_type (str): Type (XCC, ENCODER, DECODER, DMA, JPEG)
- partition_resource (int): Resource count per partition
- num_partitions_share_resource (int): Partitions sharing this resource
Raises:
- AmdSmiParameterException: If processor_handle is invalid
- AmdSmiLibraryException: On query failure
Example:
```python
import amdsmi
amdsmi.amdsmi_init()
device = amdsmi.amdsmi_get_processor_handles()[0]
# Get accelerator partition configuration
config = amdsmi.amdsmi_get_gpu_accelerator_partition_profile_config(device)
print(f"Number of profiles available: {config['num_profiles']}")
print()
# Display each profile
for profile in config['profiles']:
    print(f"Profile {profile['profile_index']}: {profile['profile_type']}")
    print(f" Partitions: {profile['num_partitions']}")
    print(f" Memory capabilities: {', '.join(profile['memory_caps'])}")
    print(f" Resources ({profile['num_resources']}):")
    for res in profile['resources']:
        print(f" {res['resource_type']}:")
        print(f" Per partition: {res['partition_resource']}")
        print(f" Shared across: {res['num_partitions_share_resource']} partitions")
    print()
amdsmi.amdsmi_shut_down()
```
"""
Check current partition configuration across all partition types:
import amdsmi

amdsmi.amdsmi_init()
try:
    devices = amdsmi.amdsmi_get_processor_handles()
    for i, device in enumerate(devices):
        print(f"\n=== GPU {i} Partition Configuration ===")
        # Compute partition
        try:
            compute_part = amdsmi.amdsmi_get_gpu_compute_partition(device)
            print(f"Compute Partition: {compute_part}")
        except Exception as e:
            print(f"Compute Partition: Not available ({e})")
        # Memory partition
        try:
            memory_part = amdsmi.amdsmi_get_gpu_memory_partition(device)
            print(f"Memory Partition: {memory_part}")
        except Exception as e:
            print(f"Memory Partition: Not available ({e})")
        # Memory partition config
        try:
            mem_config = amdsmi.amdsmi_get_gpu_memory_partition_config(device)
            print(f" Current mode: {mem_config['mp_mode']}")
            print(f" Supported: {', '.join(mem_config['partition_caps'])}")
        except Exception as e:
            print(f"Memory partition config: Not available ({e})")
        # Accelerator partition profile
        try:
            acc_profile = amdsmi.amdsmi_get_gpu_accelerator_partition_profile(device)
            part_info = acc_profile['partition_profile']
            if part_info['profile_type'] != "N/A":
                print(f"Accelerator Profile: {part_info['profile_type']}")
                print(f" Partitions: {part_info['num_partitions']}")
                print(f" Profile index: {part_info['profile_index']}")
            else:
                print("Accelerator Profile: Not supported")
        except Exception as e:
            print(f"Accelerator Profile: Not available ({e})")
finally:
    amdsmi.amdsmi_shut_down()
Set up a GPU in dual partition mode (DPX):
import amdsmi
from amdsmi import AmdSmiComputePartitionType, AmdSmiMemoryPartitionType

def configure_dual_partition(device):
    """Configure GPU for dual partition mode."""
    print("Configuring GPU for dual partition mode...")
    # Set compute partition to DPX
    try:
        amdsmi.amdsmi_set_gpu_compute_partition(
            device, AmdSmiComputePartitionType.DPX
        )
        print(" Compute partition set to DPX (Dual Partition)")
    except Exception as e:
        print(f" Failed to set compute partition: {e}")
        return False
    # Set memory partition to NPS2
    try:
        amdsmi.amdsmi_set_gpu_memory_partition(
            device, AmdSmiMemoryPartitionType.NPS2
        )
        print(" Memory partition set to NPS2 (2 NUMA domains)")
    except Exception as e:
        print(f" Failed to set memory partition: {e}")
        return False
    print("\nConfiguration complete. GPU reset may be required.")
    print("Please reset the GPU or reboot for changes to take effect.")
    return True

amdsmi.amdsmi_init()
try:
    device = amdsmi.amdsmi_get_processor_handles()[0]
    # Show current configuration
    print("Current configuration:")
    try:
        compute = amdsmi.amdsmi_get_gpu_compute_partition(device)
        memory = amdsmi.amdsmi_get_gpu_memory_partition(device)
        print(f" Compute: {compute}")
        print(f" Memory: {memory}")
    except Exception:
        print(" Unable to query current configuration")
    print()
    # Configure dual partition
    success = configure_dual_partition(device)
    if success:
        # Verify (may not reflect changes until reset)
        print("\nAttempting to verify (changes may require reset):")
        try:
            compute = amdsmi.amdsmi_get_gpu_compute_partition(device)
            memory = amdsmi.amdsmi_get_gpu_memory_partition(device)
            print(f" Compute: {compute}")
            print(f" Memory: {memory}")
        except Exception:
            print(" Verification pending GPU reset")
finally:
    amdsmi.amdsmi_shut_down()
Discover and display all available partition profiles:
import amdsmi

def explore_partition_profiles(device):
    """Explore all available partition profiles for a GPU."""
    print("=== Memory Partition Configuration ===\n")
    # Memory partition capabilities
    try:
        mem_config = amdsmi.amdsmi_get_gpu_memory_partition_config(device)
        print(f"Current memory mode: {mem_config['mp_mode']}")
        print(f"Supported NPS modes: {', '.join(mem_config['partition_caps'])}")
        print()
    except Exception as e:
        print(f"Memory partition info not available: {e}\n")
    print("=== Accelerator Partition Profiles ===\n")
    # Accelerator partition profiles
    try:
        acc_config = amdsmi.amdsmi_get_gpu_accelerator_partition_profile_config(device)
        if acc_config['num_profiles'] == 0:
            print("No accelerator partition profiles available")
            return
        print(f"Total profiles: {acc_config['num_profiles']}\n")
        for profile in acc_config['profiles']:
            print(f"Profile {profile['profile_index']}: {profile['profile_type']}")
            print(f" Number of partitions: {profile['num_partitions']}")
            print(f" Memory capabilities: {', '.join(profile['memory_caps'])}")
            print(f" Resource types: {profile['num_resources']}")
            if profile['resources']:
                print(" Resource allocation:")
                for res in profile['resources']:
                    print(f" - {res['resource_type']}:")
                    print(f" Per partition: {res['partition_resource']}")
                    print(f" Shared across: {res['num_partitions_share_resource']} partitions")
            print()
    except Exception as e:
        print(f"Accelerator partition info not available: {e}")

amdsmi.amdsmi_init()
try:
    devices = amdsmi.amdsmi_get_processor_handles()
    for i, device in enumerate(devices):
        print(f"\n{'='*60}")
        print(f"GPU {i} Partition Profiles")
        print(f"{'='*60}\n")
        explore_partition_profiles(device)
finally:
    amdsmi.amdsmi_shut_down()
Intelligently select and apply the best partition profile:
import amdsmi
from amdsmi import AmdSmiMemoryPartitionType

def select_optimal_profile(device, target_partitions=2, prefer_nps_mode="NPS2"):
    """
    Select and apply optimal partition profile.
    Args:
        device: GPU device handle
        target_partitions: Desired number of partitions (2, 3, 4, etc.)
        prefer_nps_mode: Preferred NPS mode (NPS1, NPS2, NPS4, NPS8)
    """
    print(f"Selecting optimal profile for {target_partitions} partitions...")
    # Get available profiles
    try:
        config = amdsmi.amdsmi_get_gpu_accelerator_partition_profile_config(device)
    except Exception as e:
        print(f"Unable to get profile configuration: {e}")
        return False
    if config['num_profiles'] == 0:
        print("No partition profiles available")
        return False
    # Find matching profiles
    matching_profiles = [
        p for p in config['profiles']
        if p['num_partitions'] == target_partitions
    ]
    if not matching_profiles:
        print(f"No profiles found with {target_partitions} partitions")
        print("Available partition counts:")
        for p in config['profiles']:
            print(f" - {p['num_partitions']} partitions ({p['profile_type']})")
        return False
    # Select profile with preferred NPS support
    selected_profile = None
    for profile in matching_profiles:
        if prefer_nps_mode in profile['memory_caps']:
            selected_profile = profile
            break
    # Fallback to first matching profile
    if not selected_profile:
        selected_profile = matching_profiles[0]
    print(f"\nSelected profile: {selected_profile['profile_type']}")
    print(f" Profile index: {selected_profile['profile_index']}")
    print(f" Partitions: {selected_profile['num_partitions']}")
    print(f" Memory capabilities: {', '.join(selected_profile['memory_caps'])}")
    # Apply accelerator partition profile
    try:
        amdsmi.amdsmi_set_gpu_accelerator_partition_profile(
            device, selected_profile['profile_index']
        )
        print("\nAccelerator partition profile applied successfully")
    except Exception as e:
        print(f"\nFailed to apply profile: {e}")
        return False
    # Set memory partition if supported
    if prefer_nps_mode in selected_profile['memory_caps']:
        try:
            nps_map = {
                "NPS1": AmdSmiMemoryPartitionType.NPS1,
                "NPS2": AmdSmiMemoryPartitionType.NPS2,
                "NPS4": AmdSmiMemoryPartitionType.NPS4,
                "NPS8": AmdSmiMemoryPartitionType.NPS8,
            }
            if prefer_nps_mode in nps_map:
                amdsmi.amdsmi_set_gpu_memory_partition(
                    device, nps_map[prefer_nps_mode]
                )
                print(f"Memory partition set to {prefer_nps_mode}")
        except Exception as e:
            print(f"Failed to set memory partition: {e}")
    print("\nConfiguration complete. GPU reset required for changes to take effect.")
    return True

amdsmi.amdsmi_init()
try:
    device = amdsmi.amdsmi_get_processor_handles()[0]
    # Current configuration
    print("Current configuration:")
    try:
        acc_profile = amdsmi.amdsmi_get_gpu_accelerator_partition_profile(device)
        part_info = acc_profile['partition_profile']
        print(f" Profile: {part_info['profile_type']}")
        print(f" Partitions: {part_info['num_partitions']}")
    except Exception:
        print(" Unable to query current configuration")
    print()
    # Select and apply optimal profile for dual partition
    select_optimal_profile(
        device,
        target_partitions=2,
        prefer_nps_mode="NPS2"
    )
finally:
    amdsmi.amdsmi_shut_down()
Monitor how resources are allocated across partitions:
import amdsmi

def analyze_resource_allocation(device):
    """Analyze and display resource allocation across partitions."""
    try:
        config = amdsmi.amdsmi_get_gpu_accelerator_partition_profile_config(device)
    except Exception as e:
        print(f"Unable to get configuration: {e}")
        return
    if config['num_profiles'] == 0:
        print("No partition profiles available")
        return
    # Get current profile
    try:
        current = amdsmi.amdsmi_get_gpu_accelerator_partition_profile(device)
        current_profile = current['partition_profile']
        current_idx = current_profile.get('profile_index', None)
    except Exception:
        current_idx = None
    print("=== Resource Allocation Analysis ===\n")
    for profile in config['profiles']:
        is_current = (profile['profile_index'] == current_idx)
        marker = " [CURRENT]" if is_current else ""
        print(f"Profile {profile['profile_index']}: "
              f"{profile['profile_type']}{marker}")
        print(f" Partitions: {profile['num_partitions']}")
        if not profile['resources']:
            print(" No resource information available")
            continue
        # Group resources by type
        resource_types = {}
        for res in profile['resources']:
            res_type = res['resource_type']
            if res_type not in resource_types:
                resource_types[res_type] = []
            resource_types[res_type].append(res)
        # Display allocation
        print(" Resource allocation:")
        for res_type, resources in resource_types.items():
            total_resources = sum(r['partition_resource'] for r in resources)
            sharing_info = resources[0]['num_partitions_share_resource']
            print(f" {res_type}:")
            print(f" Total: {total_resources}")
            print(f" Per partition: {resources[0]['partition_resource']}")
            if sharing_info > 1:
                print(f" Sharing: {sharing_info} partitions share each resource")
            else:
                print(" Sharing: Exclusive (no sharing)")
        print()

amdsmi.amdsmi_init()
try:
    devices = amdsmi.amdsmi_get_processor_handles()
    for i, device in enumerate(devices):
        print(f"\n{'='*60}")
        print(f"GPU {i} Resource Allocation")
        print(f"{'='*60}\n")
        analyze_resource_allocation(device)
finally:
    amdsmi.amdsmi_shut_down()
Compute partition mode enumeration:
class AmdSmiComputePartitionType(IntEnum):
    """
    GPU compute partition types.
    Defines how the GPU's compute resources (XCDs) are partitioned.
    """
    SPX = ...      # Single Partition - All resources unified
    DPX = ...      # Dual Partition - Split into 2 partitions
    TPX = ...      # Triple Partition - Split into 3 partitions
    QPX = ...      # Quad Partition - Split into 4 partitions
    CPX = ...      # Core Partitioned - One partition per XCD (finest granularity)
    INVALID = ...  # Invalid partition type
Memory partition (NPS) mode enumeration:
class AmdSmiMemoryPartitionType(IntEnum):
    """
    GPU memory partition types (NPS - NUMA Per Socket modes).
    Defines how GPU memory is divided into NUMA domains.
    """
    NPS1 = ...     # 1 NUMA domain - Unified memory
    NPS2 = ...     # 2 NUMA domains - Memory split in 2
    NPS4 = ...     # 4 NUMA domains - Memory split in 4
    NPS8 = ...     # 8 NUMA domains - Memory split in 8
    UNKNOWN = ...  # Unknown partition mode
Accelerator partition profile type enumeration:
class AmdSmiAcceleratorPartitionType(IntEnum):
    """
    GPU accelerator partition profile types.
    Defines partition profiles for accelerator resources.
    """
    SPX = ...      # Single Partition
    DPX = ...      # Dual Partition
    TPX = ...      # Triple Partition
    QPX = ...      # Quad Partition
    CPX = ...      # Core Partitioned (one partition per XCD)
    INVALID = ...  # Invalid partition type
Accelerator resource type enumeration:
class AmdSmiAcceleratorPartitionResourceType(IntEnum):
    """
    Types of accelerator resources that can be partitioned.
    Identifies specific hardware resources allocated in partition profiles.
    """
    XCC = ...      # XCC compute units
    ENCODER = ...  # Video encoder engines
    DECODER = ...  # Video decoder engines
    DMA = ...      # DMA (Direct Memory Access) engines
    JPEG = ...     # JPEG encode/decode engines
    MAX = ...      # Maximum value marker
Compute partitioning divides the GPU's execution resources into isolated logical partitions, each operating independently with its own dedicated execution resources.
Common modes: SPX (single partition), DPX (dual), TPX (triple), QPX (quad), and CPX (one partition per XCD).
Use cases: Virtual machine GPU assignment, workload isolation, multi-tenant environments.
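As a rough illustration of what these mode strings imply, a small helper (hypothetical, not part of amdsmi) can map a mode returned by `amdsmi_get_gpu_compute_partition()` to the number of partitions it creates. The SPX/DPX/TPX/QPX counts follow the descriptions above; the CPX count is one per XCD and therefore varies by GPU model, so `num_xcds` here is an assumed example value:

```python
# Hypothetical helper: map a compute partition mode string to its
# partition count. CPX yields one partition per XCD, so its count is
# model-dependent; num_xcds is an assumed example value.
def compute_partition_count(mode: str, num_xcds: int = 8) -> int:
    counts = {"SPX": 1, "DPX": 2, "TPX": 3, "QPX": 4, "CPX": num_xcds}
    try:
        return counts[mode.upper()]
    except KeyError:
        raise ValueError(f"Unknown compute partition mode: {mode!r}")

print(compute_partition_count("DPX"))  # → 2
```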
Memory partitioning (NUMA Per Socket) divides GPU memory into separate NUMA domains, each providing localized memory access for the partitions mapped to it.
NPS modes: NPS1 (unified memory), NPS2 (two domains), NPS4 (four domains), and NPS8 (eight domains).
Higher NPS numbers provide better memory locality for partitioned workloads but may reduce performance for single large workloads due to reduced memory unification.
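One way to act on this trade-off is to pick the largest supported NPS mode that is no finer than the number of compute partitions in use. A minimal sketch of that heuristic, assuming the list-of-strings `partition_caps` shape documented for `amdsmi_get_gpu_memory_partition_config()` (the selection rule itself is an illustrative assumption, not library behavior):

```python
# Sketch: choose the largest NPS mode that is supported and no finer than
# the compute partition count. partition_caps follows the documented
# list-of-strings shape, e.g. ["NPS1", "NPS2", "NPS4"].
def choose_nps_mode(partition_caps, num_partitions):
    best = None
    for cap in partition_caps:
        if not cap.startswith("NPS"):
            continue  # skip "N/A" and other placeholders
        domains = int(cap[3:])
        if domains <= num_partitions and (best is None or domains > best):
            best = domains
    return f"NPS{best}" if best is not None else None

print(choose_nps_mode(["NPS1", "NPS2", "NPS4"], 2))  # → NPS2
```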
Accelerator partitioning assigns specialized hardware engines (XCC compute units, video encoders and decoders, DMA engines, and JPEG engines) to partitions.
Each partition profile defines how many of each resource type is allocated per partition and whether resources are exclusive or shared.
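Given the `resources` schema documented for `amdsmi_get_gpu_accelerator_partition_profile_config()`, total engine counts per resource type can be derived from the per-partition counts. A sketch, assuming shared resources are counted once across their sharing partitions; the sample profile data is illustrative, not taken from real hardware:

```python
# Sketch: derive total engines per resource type from a profile dict shaped
# like the documented config entries. Assumption: a resource shared by N
# partitions is counted once across those N partitions.
def total_resources(profile):
    totals = {}
    n = profile['num_partitions']
    for res in profile['resources']:
        share = res['num_partitions_share_resource']
        totals[res['resource_type']] = (
            n * res['partition_resource'] // max(share, 1)
        )
    return totals

# Illustrative DPX-style profile (example values only)
profile = {
    'num_partitions': 2,
    'resources': [
        {'resource_type': 'XCC', 'partition_resource': 4,
         'num_partitions_share_resource': 1},
        {'resource_type': 'DECODER', 'partition_resource': 1,
         'num_partitions_share_resource': 2},
    ],
}
print(total_resources(profile))  # → {'XCC': 8, 'DECODER': 1}
```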
Partition a single physical GPU into multiple virtual GPUs for cloud instances:
# Assign dedicated resources to 4 VMs
# Profile: QPX (Quad Partition)
# Memory: NPS4 (4 memory domains)
# Each VM gets isolated compute and memory
Provide guaranteed resources to different research groups:
# Profile: DPX (Dual Partition)
# Memory: NPS2 (2 memory domains)
# Two research groups with dedicated GPU resources
Allocate video encoding/decoding resources across partitions:
# Profile with multiple encoders per partition
# Each partition handles independent video streams
# Resource sharing optimized for encoder/decoder utilization
Optimize memory locality for large-scale computations:
# Memory: NPS4 or NPS8
# Each compute partition accesses local memory domain
# Reduced memory latency for NUMA-aware applications
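All four scenarios above reduce to the same two steps: apply a compute/accelerator partition mode, then a matching NPS mode, then reset the GPU. A minimal sketch of that flow, written with injected setter callables so it can be exercised without hardware; with real devices the setters would wrap `amdsmi.amdsmi_set_gpu_compute_partition` and `amdsmi.amdsmi_set_gpu_memory_partition`, and a GPU reset is still required afterwards:

```python
# Sketch: apply a (compute mode, NPS mode) pair through injected setters,
# stopping at the first failure and reporting which step failed. The
# function itself is hypothetical glue, not an amdsmi API.
def apply_partition_layout(set_compute, set_memory,
                           compute_mode="QPX", nps_mode="NPS4"):
    applied = []
    for name, setter, mode in (("compute", set_compute, compute_mode),
                               ("memory", set_memory, nps_mode)):
        try:
            setter(mode)
        except Exception as exc:
            return applied, f"failed to set {name} partition: {exc}"
        applied.append((name, mode))
    return applied, None

# Dry run with recording stubs instead of real amdsmi setters
calls = []
applied, err = apply_partition_layout(calls.append, calls.append)
print(applied, err)  # → [('compute', 'QPX'), ('memory', 'NPS4')] None
```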