
tessl/pypi-vllm

tessl install tessl/pypi-vllm@0.10.0

A high-throughput and memory-efficient inference and serving engine for LLMs

Agent Success: 69% (agent success rate when using this tile)

Improvement: 1.33x (agent success rate improvement when using this tile compared to baseline)

Baseline: 52% (agent success rate without this tile)

evals/scenario-10/rubric.json

{
  "context": "This criteria evaluates how well the engineer uses vLLM's custom attention mechanism capabilities to implement a benchmarking tool. The focus is specifically on proper usage of attention backend configuration, LLM initialization with backend parameters, and execution of inference with different attention implementations.",
  "type": "weighted_checklist",
  "checklist": [
    {
      "name": "LLM Class Import",
      "description": "Correctly imports the LLM class from vllm package",
      "max_score": 5
    },
    {
      "name": "SamplingParams Import",
      "description": "Correctly imports SamplingParams class from vllm package for controlling generation behavior",
      "max_score": 5
    },
    {
      "name": "LLM Initialization",
      "description": "Properly initializes LLM instances with the model parameter in the __init__ or run_with_backend methods",
      "max_score": 15
    },
    {
      "name": "Attention Backend Configuration",
      "description": "Correctly passes the attention_backend parameter when initializing the LLM class (e.g., LLM(model=..., attention_backend=...))",
      "max_score": 25
    },
    {
      "name": "Default Backend Handling",
      "description": "Properly handles the case when attention_backend is None, allowing vLLM to use its default backend",
      "max_score": 10
    },
    {
      "name": "Text Generation",
      "description": "Uses the LLM.generate() method to perform text generation with the configured backend",
      "max_score": 15
    },
    {
      "name": "SamplingParams Usage",
      "description": "Creates and uses SamplingParams objects to control generation parameters like max_tokens and temperature",
      "max_score": 15
    },
    {
      "name": "Output Extraction",
      "description": "Correctly extracts generated text from RequestOutput objects returned by LLM.generate() (accessing outputs[0].text or similar)",
      "max_score": 10
    }
  ]
}
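The checklist above maps onto a small, concrete usage pattern. The sketch below illustrates it under this scenario's assumptions: the `attention_backend` keyword on `LLM(...)` is the interface the rubric describes, not a verified stock vLLM parameter (in stock vLLM the backend is more commonly selected via the `VLLM_ATTENTION_BACKEND` environment variable), and `facebook/opt-125m` is only an example model name. The import guard lets the sketch be read on machines without vLLM installed.

```python
try:
    # LLM Class Import / SamplingParams Import (rubric items 1-2)
    from vllm import LLM, SamplingParams
except ImportError:  # vLLM not installed; keep the sketch importable anyway
    LLM = SamplingParams = None


def build_llm_kwargs(model: str, attention_backend=None) -> dict:
    """Build LLM(...) kwargs, omitting attention_backend when it is None
    so vLLM falls back to its default backend (rubric: Default Backend
    Handling). The attention_backend kwarg is assumed per this scenario."""
    kwargs = {"model": model}
    if attention_backend is not None:
        kwargs["attention_backend"] = attention_backend
    return kwargs


def run_with_backend(prompts, model="facebook/opt-125m",
                     attention_backend=None, max_tokens=64, temperature=0.0):
    """Run generation with a given attention backend and return the texts."""
    if LLM is None:
        raise RuntimeError("vLLM is not installed in this environment")
    # LLM Initialization + Attention Backend Configuration
    llm = LLM(**build_llm_kwargs(model, attention_backend))
    # SamplingParams Usage: control max_tokens and temperature
    params = SamplingParams(max_tokens=max_tokens, temperature=temperature)
    # Text Generation with the configured backend
    outputs = llm.generate(prompts, params)
    # Output Extraction: each RequestOutput holds candidate completions;
    # take the first completion's text from each
    return [out.outputs[0].text for out in outputs]
```

A benchmarking tool would call `run_with_backend` once per backend name (and once with `None` for the default) over the same prompt set, timing each pass.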

Version: 0.10.0
Workspace: tessl
Visibility: Public
Describes: pkg:pypi/vllm@0.10.x
tile.json