tessl install tessl/pypi-vllm@0.10.0

A high-throughput and memory-efficient inference and serving engine for LLMs
Agent Success
Agent success rate when using this tile
69%
Improvement
Agent success rate improvement when using this tile compared to baseline
1.33x
Baseline
Agent success rate without this tile
52%
{
  "context": "This evaluation assesses how well the engineer uses vLLM's text generation capabilities to implement a story generator. The focus is on proper usage of the LLM class, the generate() method, and SamplingParams configuration.",
  "type": "weighted_checklist",
  "checklist": [
    {
      "name": "LLM initialization",
      "description": "Uses vLLM's LLM class to initialize the model in the __init__ method, properly storing it as an instance variable. The model_name parameter should be passed to the LLM constructor.",
      "max_score": 20
    },
    {
      "name": "generate() method usage",
      "description": "Uses the LLM.generate() method to generate text from prompts. The method should accept prompts as input and return RequestOutput objects containing the generated text.",
      "max_score": 25
    },
    {
      "name": "SamplingParams configuration",
      "description": "Creates and uses SamplingParams objects to configure generation behavior. Must properly set the max_tokens, temperature, and n parameters based on the method arguments.",
      "max_score": 30
    },
    {
      "name": "Output extraction",
      "description": "Correctly extracts generated text from RequestOutput objects by accessing the outputs attribute and reading the text from the CompletionOutput objects. Returns a list of strings as specified in the API.",
      "max_score": 15
    },
    {
      "name": "Error handling",
      "description": "Implements proper validation to raise ValueError when the prompt is empty, as specified in the API documentation.",
      "max_score": 10
    }
  ]
}
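A solution satisfying this checklist might look like the sketch below. The class name `StoryGenerator`, the method name `generate_stories`, and its parameter defaults are hypothetical (the task's exact API is not shown here); the vLLM calls themselves — `LLM(model=...)`, `LLM.generate()`, `SamplingParams(n=, temperature=, max_tokens=)`, and the `RequestOutput.outputs` / `CompletionOutput.text` attributes — are the real library surface the checklist items refer to. The vllm imports are deferred into the methods so the class definition itself is lightweight.

```python
class StoryGenerator:
    """Illustrative story generator built on vLLM (names are assumptions)."""

    def __init__(self, model_name: str):
        # "LLM initialization": construct the engine once and keep it
        # as an instance variable. Import deferred: loading vllm pulls
        # in heavy GPU dependencies.
        from vllm import LLM
        self.llm = LLM(model=model_name)

    def generate_stories(
        self,
        prompt: str,
        n: int = 1,
        max_tokens: int = 256,
        temperature: float = 0.8,
    ) -> list[str]:
        # "Error handling": validate before touching the engine.
        if not prompt:
            raise ValueError("prompt must be a non-empty string")

        from vllm import SamplingParams

        # "SamplingParams configuration": map method arguments onto
        # the sampling parameters.
        params = SamplingParams(n=n, temperature=temperature, max_tokens=max_tokens)

        # "generate() method usage": returns one RequestOutput per prompt.
        outputs = self.llm.generate([prompt], params)

        # "Output extraction": each RequestOutput holds n CompletionOutput
        # objects; collect their .text fields into a list of strings.
        return [completion.text for completion in outputs[0].outputs]
```

Usage would be `StoryGenerator("facebook/opt-125m").generate_stories("Once upon a time", n=3)`, which returns three completions for the single prompt. Note that `n` controls completions per prompt via `SamplingParams`, not repeated `generate()` calls.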