Auto-configure Ollama when the user needs local LLM deployment, a free AI alternative, or wants to eliminate hosted API costs. Trigger phrases: "install ollama", "local AI", "free LLM", "self-hosted AI", "replace OpenAI", "no API costs".
Auto-configure Ollama for local LLM deployment, eliminating hosted API costs and enabling offline AI inference. This skill handles system assessment, model selection based on available hardware (RAM, GPU), installation across macOS/Linux/Docker, and integration with Python, Node.js, and REST API clients.
Workflow:

1. Check prerequisites: `brew` (macOS), `curl` (Linux), or `docker` (containerized); a GPU is optional (`nvidia-smi` to verify).
2. Assess the system with `uname -s`, `free -h` (Linux) or `vm_stat` (macOS), and `nvidia-smi` (if a GPU is present).
3. Install and start Ollama:
   - macOS: `brew install ollama && brew services start ollama`
   - Linux: `curl -fsSL https://ollama.com/install.sh | sh && sudo systemctl start ollama`
   - Docker: `docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`
4. Pull a model: `ollama pull llama3.2`
5. Verify by listing models (`ollama list`), running a test prompt (`ollama run llama3.2 "Say hello"`), and querying the API: `curl http://localhost:11434/api/tags`
6. Integrate with a client (Python `ollama`, Node.js `ollama`, or raw HTTP).

See ${CLAUDE_SKILL_DIR}/references/skill-workflow.md for the detailed workflow with code snippets.
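The system-assessment step can be sketched as a small helper that reads total RAM and maps it to a model tier. This is an illustrative sketch, not part of the skill: the `total_ram_gb` and `choose_model` names and the RAM thresholds are assumptions of mine; the model tags are real Ollama tags, but you should tune the cutoffs to your hardware.

```python
import re
import subprocess
import sys

def total_ram_gb() -> float:
    """Return total system RAM in GB (Linux via /proc/meminfo, macOS via sysctl)."""
    if sys.platform == "darwin":
        out = subprocess.check_output(["sysctl", "-n", "hw.memsize"], text=True)
        return int(out.strip()) / 1024**3
    with open("/proc/meminfo") as f:
        kb = int(re.search(r"MemTotal:\s+(\d+)", f.read()).group(1))
    return kb / 1024**2

def choose_model(ram_gb: float) -> str:
    """Pick a model tag that plausibly fits in RAM (illustrative thresholds)."""
    if ram_gb < 8:
        return "llama3.2:1b"
    if ram_gb < 16:
        return "llama3.2:3b"
    if ram_gb < 32:
        return "llama3.1:8b"
    return "codellama:13b"

print(choose_model(total_ram_gb()))
```

In a setup script you would feed the chosen tag straight to `ollama pull`; the tiering keeps quantized weights comfortably under available memory, matching the troubleshooting advice to drop to a smaller model when inference runs out of RAM.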
The API listens on `http://localhost:11434` by default.

| Error | Cause | Solution |
|---|---|---|
| `ollama: command not found` | Installation incomplete or PATH not updated | Re-run install script; restart shell session; verify `/usr/local/bin/ollama` exists |
| Model pull fails with timeout | Network connectivity issue or Ollama registry unreachable | Check internet connection; retry with `ollama pull --insecure` behind corporate proxy |
| Out of memory during inference | Model size exceeds available RAM | Switch to a smaller quantized model (e.g., 7B instead of 13B); close memory-intensive applications |
| GPU not detected | CUDA drivers missing or incompatible version | Install CUDA toolkit >= 11.8; verify with `nvidia-smi`; restart the Ollama service after driver install |
| Port 11434 already in use | Another service occupying the default Ollama port | Stop the conflicting service, or set the `OLLAMA_HOST=0.0.0.0:11435` environment variable |
See ${CLAUDE_SKILL_DIR}/references/errors.md for additional error scenarios.
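For the port-conflict row above, a setup script can check programmatically whether anything is already listening before starting the service. This is a sketch; the `port_in_use` helper is my own, and only the default port 11434 comes from the document:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

if port_in_use(11434):
    print("Port 11434 is taken: stop the conflicting service "
          "or set OLLAMA_HOST=0.0.0.0:11435 before starting Ollama.")
```

Note that a `True` result may simply mean Ollama itself is already running; querying `http://localhost:11434/api/tags` distinguishes a healthy Ollama instance from an unrelated service.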
Scenario 1: Developer Workstation Setup -- Install Ollama on a macOS M2 machine with 16 GB RAM. Pull codellama:13b for code generation tasks. Integrate with a Python FastAPI application using the ollama Python package. Expected throughput: 30-50 tokens/second on Apple Silicon.
Scenario 2: Air-Gapped Server Deployment -- Install Ollama on an offline Ubuntu server via pre-downloaded binary. Transfer model weights via USB. Configure as a systemd service with auto-restart. Serve llama3.2:3b via REST API for internal team use.
Scenario 3: Docker-Based CI Pipeline -- Run Ollama in a Docker container as part of a CI/CD pipeline for automated code review. Pull mistral:7b, expose the API on port 11434, and integrate with a Node.js test harness that sends code diffs for analysis.
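The raw-HTTP integration that underlies all three scenarios reduces to posting JSON to the local API. The sketch below targets Ollama's public `/api/generate` endpoint with its documented `model`, `prompt`, and `stream` fields; the `build_generate_request` and `generate` helper names are assumptions of mine, and `generate` requires a running server with the model pulled, so it is shown but not invoked:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming POST request for the local Ollama generate API."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def generate(model: str, prompt: str) -> str:
    """Send the request and return the completion text (needs a live server)."""
    with urllib.request.urlopen(build_generate_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama serve` running and the model pulled):
# print(generate("llama3.2", "Say hello"))
```

A Node.js test harness like the one in Scenario 3 would send the same payload to the same endpoint; only the HTTP client changes.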