tessl/pypi-modelscope

ModelScope brings the notion of Model-as-a-Service to life with unified interfaces for state-of-the-art machine learning models.


Model Interface

The Model interface provides direct access to pre-trained models with fine-grained control over the inference process. This lower-level interface allows for custom workflows, advanced model manipulation, and integration into custom pipelines.

Capabilities

Base Model Class

Abstract base class providing the foundation for all ModelScope models.

class Model:
    """
    Base model interface for all ModelScope models.
    """
    
    @classmethod
    def from_pretrained(
        cls,
        model_name_or_path: str,
        revision: Optional[str] = DEFAULT_MODEL_REVISION,
        cfg_dict: Optional[Config] = None,
        device: Optional[str] = None,
        trust_remote_code: Optional[bool] = False,
        **kwargs
    ):
        """
        Load a pre-trained model from ModelScope Hub or local path.
        
        Parameters:
        - model_name_or_path: Model identifier on ModelScope Hub or local directory path
        - revision: Model revision/version to load (default: DEFAULT_MODEL_REVISION)
        - cfg_dict: Configuration dictionary for the model
        - device: Target device ('cpu', 'cuda', 'gpu')
        - trust_remote_code: Whether to trust and execute remote code in the model
        - **kwargs: Additional model-specific parameters
        
        Returns:
        Initialized model instance
        """
    
    def forward(self, inputs):
        """
        Run forward pass through the model.
        
        Parameters:
        - inputs: Model inputs (format depends on model type)
        
        Returns:
        Model outputs
        """
    
    def __call__(self, inputs):
        """
        Callable interface for model inference.
        
        Parameters:
        - inputs: Model inputs
        
        Returns:
        Model outputs
        """
    
    def postprocess(self, inputs):
        """
        Post-process model outputs into user-friendly format.
        
        Parameters:
        - inputs: Raw model outputs
        
        Returns:
        Processed outputs
        """
    
    def to(self, device: str):
        """
        Move model to specified device.
        
        Parameters:
        - device: Target device ('cpu', 'cuda', 'gpu')
        
        Returns:
        Self (for method chaining)
        """
    
    def eval(self):
        """
        Set model to evaluation mode.
        
        Returns:
        Self (for method chaining)
        """
    
    def train(self):
        """
        Set model to training mode.
        
        Returns:
        Self (for method chaining)
        """

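The methods above compose in a predictable way: calling the model typically runs `forward` and then `postprocess`. The toy class below is purely illustrative (not ModelScope code, and its names are hypothetical) but sketches that contract:

```python
from typing import Any, Dict


class ToyModel:
    """Minimal stand-in (not ModelScope code) illustrating the Model contract."""

    def forward(self, inputs: str) -> Dict[str, Any]:
        # Raw "inference": score each whitespace token by its length
        return {"logits": [len(tok) for tok in inputs.split()]}

    def postprocess(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        # Convert raw outputs into a user-friendly result
        return {"max_token_length": max(inputs["logits"])}

    def __call__(self, inputs: str) -> Dict[str, Any]:
        # A common pattern: __call__ chains forward, then postprocess
        return self.postprocess(self.forward(inputs))


result = ToyModel()("hello modelscope world")
print(result)  # {'max_token_length': 10}
```

Real ModelScope models follow the same shape, with framework tensors in place of the toy lists.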
PyTorch Model Class

PyTorch-specific model implementation with additional PyTorch features.

class TorchModel(Model):
    """
    PyTorch-specific model implementation extending base Model class.
    """
    
    def __init__(self, model_dir: str, device: Optional[str] = None, **kwargs):
        """
        Initialize PyTorch model.
        
        Parameters:
        - model_dir: Directory containing model files
        - device: Target device for model
        - **kwargs: Additional PyTorch-specific parameters
        """
    
    def save_pretrained(self, save_directory: str):
        """
        Save model to local directory.
        
        Parameters:
        - save_directory: Directory to save model files
        """
    
    def load_state_dict(self, state_dict: dict):
        """
        Load model weights from state dictionary.
        
        Parameters:
        - state_dict: PyTorch state dictionary
        """
    
    def state_dict(self) -> dict:
        """
        Get model state dictionary.
        
        Returns:
        PyTorch state dictionary containing model weights
        """

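The `state_dict`/`load_state_dict` pair defines a save-and-restore contract. The plain-Python sketch below (an illustrative stand-in; real ModelScope models hold PyTorch tensors here) shows the roundtrip:

```python
import copy


class ToyTorchModel:
    """Plain-Python stand-in for the state_dict()/load_state_dict() contract."""

    def __init__(self):
        self.weights = {"linear.weight": [0.1, 0.2], "linear.bias": [0.0]}

    def state_dict(self) -> dict:
        # Return a copy so callers can serialize it safely
        return copy.deepcopy(self.weights)

    def load_state_dict(self, state_dict: dict) -> None:
        # Restore weights from a previously saved state dictionary
        self.weights = copy.deepcopy(state_dict)


source = ToyTorchModel()
source.weights["linear.bias"] = [0.5]   # pretend we fine-tuned
checkpoint = source.state_dict()        # "save"

target = ToyTorchModel()
target.load_state_dict(checkpoint)      # "load"
print(target.weights["linear.bias"])  # [0.5]
```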
Model Builder

Factory functions for creating models dynamically based on configuration.

def build_model(cfg: dict, default_args: Optional[dict] = None) -> Model:
    """
    Build model from configuration dictionary.
    
    Parameters:
    - cfg: Configuration dictionary specifying model architecture and parameters
    - default_args: Default arguments to merge with configuration
    
    Returns:
    Initialized model instance
    """

# Model registries for different components
MODELS: dict  # Registry of available model architectures
BACKBONES: dict  # Registry of backbone networks
HEADS: dict  # Registry of model heads
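A minimal sketch of how a type-keyed registry and `build_model` are assumed to interact: the registry maps a `type` string to a class, and `build_model` instantiates that class with the remaining configuration keys. The `register_module` decorator and `TinyModel` class here are hypothetical stand-ins, not ModelScope internals:

```python
# Toy registry keyed by class name (illustrative only)
MODELS: dict = {}


def register_module(name=None):
    """Decorator registering a class under its name (or an explicit one)."""
    def wrap(cls):
        MODELS[name or cls.__name__] = cls
        return cls
    return wrap


def build_model(cfg: dict, default_args=None):
    """Look up cfg['type'] in the registry and instantiate it with the rest."""
    args = dict(default_args or {})
    args.update({k: v for k, v in cfg.items() if k != "type"})
    return MODELS[cfg["type"]](**args)


@register_module()
class TinyModel:
    def __init__(self, hidden_size: int = 64):
        self.hidden_size = hidden_size


model = build_model({"type": "TinyModel", "hidden_size": 128})
print(model.hidden_size)  # 128
```

This type-dispatch pattern is what lets a single configuration dictionary describe any registered architecture.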

Model Heads

Specialized model heads for different tasks and architectures.

class Head:
    """
    Base class for model heads (task-specific output layers).
    """
    
    def __init__(self, **kwargs):
        """Initialize model head with task-specific parameters."""
    
    def forward(self, features):
        """
        Process backbone features through the head.
        
        Parameters:
        - features: Feature tensors from backbone network
        
        Returns:
        Task-specific outputs
        """

class TorchHead(Head):
    """
    PyTorch-specific model head implementation.
    """
    pass
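A head consumes backbone features and produces task-specific outputs. The toy head below is illustrative only (a real head would apply learned layers such as a linear projection and softmax), but it shows the shape of that contract:

```python
class ToyClassificationHead:
    """Illustrative head (not ModelScope code): maps feature scores to a label."""

    def __init__(self, labels):
        self.labels = labels

    def forward(self, features):
        # Pick the label whose feature score is highest (a stand-in for
        # the linear + softmax layers a trained head would apply)
        best = max(range(len(features)), key=lambda i: features[i])
        return {"label": self.labels[best], "score": features[best]}


head = ToyClassificationHead(["negative", "positive"])
print(head.forward([0.2, 0.8]))  # {'label': 'positive', 'score': 0.8}
```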

Usage Examples

Basic Model Loading and Inference

from modelscope import Model

# Load model from ModelScope Hub
model = Model.from_pretrained('damo/nlp_structbert_sentence-similarity_chinese')

# Set to evaluation mode
model.eval()

# Run inference
inputs = "这是一个测试文本"
outputs = model(inputs)
print(outputs)

# Move to GPU if available
import torch
if torch.cuda.is_available():
    model.to('cuda')

Custom Model Configuration

from modelscope import Model

# Load model with custom configuration
model = Model.from_pretrained(
    'model_name',
    device='cuda',
    torch_dtype='float16',  # Use half precision
    trust_remote_code=True,  # Allow custom model code
    revision='v1.0.0'       # Specific model version
)

# Custom preprocessing
def preprocess_inputs(text):
    # Custom preprocessing logic (placeholder: normalize whitespace)
    return ' '.join(text.split())

# Run inference with preprocessing
raw_input = "原始文本"
processed_input = preprocess_inputs(raw_input)
output = model(processed_input)

Model Fine-tuning Setup

from modelscope import TorchModel

# Load model for fine-tuning
model = TorchModel.from_pretrained('base_model_name')

# Set to training mode
model.train()

# Access model parameters for optimizer
parameters = model.parameters()

# Example training loop setup
import torch.optim as optim
optimizer = optim.Adam(parameters, lr=1e-5)

# Save fine-tuned model
model.save_pretrained('./fine_tuned_model')

Building Custom Models

from modelscope import build_model, MODELS

# Define model configuration
model_config = {
    'type': 'BertModel',
    'vocab_size': 30000,
    'hidden_size': 768,
    'num_hidden_layers': 12,
    'num_attention_heads': 12
}

# Build model from configuration
model = build_model(model_config)

# Register custom model architecture
@MODELS.register_module()
class CustomModel(Model):
    def __init__(self, **kwargs):
        super().__init__()
        # Custom model implementation
    
    def forward(self, inputs):
        # Custom forward pass producing the model's outputs
        outputs = {}  # replace with real computation
        return outputs

Multi-GPU and Distributed Inference

from modelscope import TorchModel
import torch

# Load model
model = TorchModel.from_pretrained('large_model_name')

# Data parallel across multiple GPUs
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)

# Move to GPU
model.to('cuda')

# Batched inference without gradient tracking
model.eval()
batch_inputs = ...  # your batched inputs here
with torch.no_grad():
    outputs = model(batch_inputs)

Model Inspection and Analysis

from modelscope import Model

# Load model
model = Model.from_pretrained('model_name')

# Inspect model architecture
print(f"Model type: {type(model)}")
print(f"Model parameters: {sum(p.numel() for p in model.parameters())}")

# Get model configuration
config = model.config
print(f"Model config: {config}")

# Access specific model components
if hasattr(model, 'backbone'):
    print(f"Backbone: {model.backbone}")
if hasattr(model, 'head'):
    print(f"Head: {model.head}")

Model Export and Conversion

from modelscope import TorchModel
import torch

# Load PyTorch model
model = TorchModel.from_pretrained('model_name')

# Export to ONNX
model.eval()
example_input = torch.randn(1, 3, 224, 224)  # placeholder; match your model's input shape
torch.onnx.export(
    model,
    example_input,
    'model.onnx',
    export_params=True,
    opset_version=11
)

# Save model state for later loading
torch.save(model.state_dict(), 'model_weights.pth')

# Load weights later
new_model = TorchModel.from_pretrained('model_name')
new_model.load_state_dict(torch.load('model_weights.pth'))

Conditional Model Loading

from modelscope import Model
from modelscope.utils.import_utils import is_torch_available

# Conditional loading based on available frameworks
if is_torch_available():
    from modelscope import TorchModel
    model = TorchModel.from_pretrained('pytorch_model')
else:
    # Fallback to base model or alternative implementation
    model = Model.from_pretrained('base_model')

# Handle different model formats
try:
    model = Model.from_pretrained('model_name', format='pytorch')
except Exception:
    try:
        model = Model.from_pretrained('model_name', format='tensorflow')
    except Exception:
        model = Model.from_pretrained('model_name')  # Default format

Model Caching and Optimization

from modelscope import Model
import torch

# Enable model caching for faster loading
model = Model.from_pretrained(
    'model_name',
    cache_dir='./model_cache',
    local_files_only=False  # Allow downloading if not cached
)

# Load with optimization flags
model = Model.from_pretrained(
    'model_name',
    torch_dtype='float16',      # Half precision
    low_cpu_mem_usage=True,     # Optimize CPU memory usage
    device_map='auto'           # Automatic device mapping
)

# Compile model for better performance (PyTorch 2.0+)
if hasattr(torch, 'compile'):
    model = torch.compile(model)
