Cost optimization patterns for LLM API usage — model routing by task complexity, budget tracking, retry logic, and prompt caching.
{
  "name": "jbvc/cost-aware-llm-pipeline",
  "version": "0.1.0",
  "private": false,
  "summary": "Cost optimization patterns for LLM API usage — model routing by task complexity, budget tracking, retry logic, and prompt caching.",
  "skills": {
    "cost-aware-llm-pipeline": {
      "path": "SKILL.md"
    }
  }
}
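Two of the patterns the summary names, routing by task complexity and budget tracking, can be sketched in a few lines. Everything below is illustrative: the model names, per-token prices, and the word-count complexity heuristic are assumptions for the sketch, not part of the packaged skill.

```python
# Minimal sketch of cost-aware model routing with budget tracking.
# Model names, prices, and the complexity heuristic are illustrative
# assumptions, not taken from the skill package itself.

PRICE_PER_1K_TOKENS = {"small-model": 0.0002, "large-model": 0.01}  # hypothetical rates (USD)


def route_by_complexity(prompt: str) -> str:
    """Crude heuristic: long or multi-step prompts go to the larger model."""
    if len(prompt.split()) > 50 or "step by step" in prompt:
        return "large-model"
    return "small-model"


class BudgetTracker:
    """Tracks cumulative spend and refuses calls that would exceed the budget."""

    def __init__(self, budget_usd: float):
        self.budget_usd = budget_usd
        self.spent_usd = 0.0

    def charge(self, model: str, tokens: int) -> bool:
        cost = PRICE_PER_1K_TOKENS[model] * tokens / 1000
        if self.spent_usd + cost > self.budget_usd:
            return False  # over budget: caller should downgrade the model or stop
        self.spent_usd += cost
        return True


tracker = BudgetTracker(budget_usd=1.00)
model = route_by_complexity("Summarize this paragraph.")  # short prompt -> "small-model"
allowed = tracker.charge(model, tokens=500)
```

A real pipeline would layer the remaining patterns on top: wrap the API call in retry logic with exponential backoff, and consult a prompt cache before charging the tracker at all.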