
tessl/pypi-vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Overall score: 69%

Evaluation: 69%

Agent success when using this tile: 1.33x


evals/scenario-3/task.md

LLM Memory Configuration Manager

Build a utility that configures and initializes large language models with custom memory settings for different deployment scenarios.

Capabilities

Memory utilization configuration

  • Initialize a model with 70% GPU memory utilization @test
  • Initialize a model with default memory settings (no explicit configuration) @test

Swap space configuration

  • Configure a model with 4GB of CPU swap space for overflow handling @test
  • Configure a model with both custom GPU memory (80%) and swap space (2GB) @test
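The four scenarios above correspond to different combinations of the `gpu_memory_utilization` and `swap_space` keyword arguments that vLLM's `LLM` constructor accepts. A quick illustrative mapping (no GPU or model loading involved; the scenario names are just labels):

```python
# Map each capability scenario to the constructor kwargs it implies.
# Omitting a key means vLLM's own default is used for that setting.
scenarios = {
    "70% GPU memory": {"gpu_memory_utilization": 0.7},
    "default memory settings": {},
    "4GB swap space": {"swap_space": 4},
    "80% GPU memory + 2GB swap": {"gpu_memory_utilization": 0.8, "swap_space": 2},
}

for name, kwargs in scenarios.items():
    print(name, "->", kwargs)
```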

Implementation

@generates

Create a module that provides functionality to initialize LLM instances with different memory configurations for resource-constrained environments.

API

def create_llm_with_memory_config(
    model_name: str,
    gpu_memory_utilization: float | None = None,
    swap_space: int | None = None,
):
    """
    Initialize an LLM instance with specified memory configuration.

    Args:
        model_name: Name or path of the model to load
        gpu_memory_utilization: Fraction of GPU memory to use (0.0 to 1.0)
        swap_space: CPU swap space in GB for memory overflow

    Returns:
        Initialized LLM instance with the specified memory settings
    """
    pass

Dependencies { .dependencies }

vllm { .dependency }

Provides high-throughput LLM inference with memory management capabilities.

Install with Tessl CLI

npx tessl i tessl/pypi-vllm

tile.json