A high-throughput and memory-efficient inference and serving engine for LLMs
Overall score: 69% (Evaluation: 69%)
↑ 1.33x agent success when using this tile
{
"name": "tessl/pypi-vllm",
"version": "0.10.0",
"docs": "docs/index.md",
"describes": "pkg:pypi/vllm@0.10.2",
"summary": "A high-throughput and memory-efficient inference and serving engine for LLMs"
}

Install with Tessl CLI:

npx tessl i tessl/pypi-vllm