Skill description under review:

> Configure auto-configure Ollama when user needs local LLM deployment, free AI alternatives, or wants to eliminate hosted API costs. Trigger phrases: "install ollama", "local AI", "free LLM", "self-hosted AI", "replace OpenAI", "no API costs". Use when appropriate context detected. Trigger with relevant phrases based on skill purpose.
Overall score: 70
Best practices ("Does it follow best practices?"): 64%
Impact: Pending (no eval scenarios have been run)
Advisory: suggest reviewing before use

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./plugins/ai-ml/ollama-local-ai/skills/ollama-setup/SKILL.md`

## Quality

### Discovery (72%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has strong trigger term coverage and targets a clear niche (Ollama local LLM setup), making it distinctive. However, it suffers from a redundant and vague 'Use when' clause ('Use when appropriate context detected') that adds no real value, and the capability description is somewhat shallow—it says 'configure auto-configure Ollama' without detailing specific actions. The phrasing 'configure auto-configure' also appears to be a grammatical error.
Suggestions:

- Replace the vague 'Use when appropriate context detected. Trigger with relevant phrases based on skill purpose.' with a concrete clause like 'Use when the user wants to set up Ollama, run models locally, or migrate away from hosted API providers.'
- Expand the capability description with specific concrete actions, e.g., 'Installs and configures Ollama, downloads and manages local models, sets up API endpoints, and configures environment variables for local LLM inference.'
- Fix the grammatical issue 'Configure auto-configure Ollama' to something clear like 'Configures Ollama for local LLM deployment.'
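Taken together, those suggestions could produce a frontmatter description along these lines. This is a hypothetical sketch (the skill name is inferred from the file path; the exact wording is up to the skill author):

```yaml
---
name: ollama-setup
description: >
  Configures Ollama for local LLM deployment: installs and configures Ollama,
  downloads and manages local models, sets up API endpoints, and configures
  environment variables for local LLM inference. Trigger phrases: "install
  ollama", "local AI", "free LLM", "self-hosted AI", "replace OpenAI",
  "no API costs". Use when the user wants to set up Ollama, run models
  locally, or migrate away from hosted API providers.
---
```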
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (Ollama, local LLM deployment) and a general action ('configure auto-configure Ollama'), but doesn't list multiple specific concrete actions like installing, configuring models, setting up endpoints, or testing connections. | 2 / 3 |
| Completeness | The 'what' is partially addressed (configure Ollama for local LLM deployment), and while trigger phrases are listed, the 'when' clause is weak and generic ('Use when appropriate context detected. Trigger with relevant phrases based on skill purpose') rather than providing explicit, meaningful guidance on when to select this skill. | 2 / 3 |
| Trigger Term Quality | Includes a good set of natural trigger phrases users would actually say: 'install ollama', 'local AI', 'free LLM', 'self-hosted AI', 'replace OpenAI', 'no API costs'. These cover multiple natural variations of how users would express this need. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description targets a very specific niche, Ollama configuration for local LLM deployment, with distinct trigger terms that are unlikely to conflict with other skills. Terms like 'install ollama' and 'replace OpenAI' are highly specific. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
### Implementation (57%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a reasonably well-structured skill that covers the Ollama setup workflow comprehensively with good progressive disclosure and error handling. Its main weaknesses are the lack of executable integration code (deferred entirely to a reference file) and somewhat verbose scenario descriptions that describe rather than demonstrate. The workflow would benefit from inline feedback loops and concrete code examples for the integration steps.
Suggestions:

- Add executable Python/Node.js integration code snippets directly in the Instructions or Examples section rather than deferring all code to the referenced workflow file
- Convert the prose-based Examples section into concrete input/output pairs with actual commands and expected terminal output
- Add explicit feedback loops in the workflow (e.g., 'If step 5 fails, check error table below and retry' or 'If model pull exceeds available disk, select a smaller model from step 2')
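As a concrete instance of the first suggestion, an inline integration snippet could target Ollama's default local REST endpoint (`http://localhost:11434/api/generate`). This is a minimal sketch using only the standard library; the model name is a placeholder and the default port is assumed:

```python
import json
import urllib.request

# Default endpoint for a locally running Ollama server (assumes the default port).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming /api/generate request for the local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

def generate(model: str, prompt: str) -> str:
    """Send the prompt to Ollama and return the generated text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]
```

A snippet like this, placed directly in the skill's Instructions section, would replace the vague 'using the appropriate client library' phrasing with copy-paste ready code.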
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes some unnecessary verbosity: scenario descriptions are prose-heavy rather than concise, the overview restates what the description already covers, and some sections (like the prerequisites listing obvious package managers) could be tightened. However, it's not egregiously padded and most content is relevant. | 2 / 3 |
| Actionability | The skill provides concrete commands for installation and verification (steps 3-6), but lacks executable code snippets for the integration step (step 7): it just says 'using the appropriate client library' without showing actual code. The examples section describes scenarios in prose rather than providing copy-paste ready code. The actual code snippets are deferred to a referenced file. | 2 / 3 |
| Workflow Clarity | Steps are clearly sequenced (1-10) with a logical progression from detection through installation to validation. Step 5 includes verification, which is good. However, there's no explicit feedback loop for error recovery within the workflow itself (e.g., 'if verification fails, do X'), and steps 8-10 are vague without concrete validation checkpoints. The error handling table helps but is separate from the workflow. | 2 / 3 |
| Progressive Disclosure | The skill is well-structured with clear sections (Overview, Prerequisites, Instructions, Output, Error Handling, Examples, Resources). It appropriately references external files for detailed workflow code snippets and additional error scenarios, keeping the main file as an overview with one-level-deep references. | 3 / 3 |
| Total | | 9 / 12 (Passed) |
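The feedback loop the review asks for (pick a smaller model when disk space is short, retry a failed pull) can be sketched as follows. The `ollama pull` CLI command is real; the model size table and the selection logic are illustrative assumptions, not part of the reviewed skill:

```python
import shutil
import subprocess

# Rough, illustrative download sizes in bytes; real sizes vary by model build.
MODEL_SIZES = {
    "llama3:70b": 40 * 2**30,
    "llama3:8b": 5 * 2**30,
    "phi3:mini": 2 * 2**30,
}

def pick_model(free_bytes, preferred):
    """Return the first preferred model whose estimated size fits on disk."""
    for name in preferred:
        size = MODEL_SIZES.get(name)
        if size is not None and size <= free_bytes:
            return name
    return None

def pull_with_fallback(preferred):
    """Feedback loop: pick a model that fits, then retry the pull once on failure."""
    model = pick_model(shutil.disk_usage("/").free, preferred)
    if model is None:
        raise RuntimeError("no preferred model fits on the available disk")
    for _attempt in range(2):  # one retry before giving up
        if subprocess.run(["ollama", "pull", model]).returncode == 0:
            return model
    raise RuntimeError("failed to pull " + model + " after retrying")
```

Embedding a checkpoint like this at the model-pull step would give the workflow the explicit error-recovery path it currently lacks.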
### Validation (81%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure (9 / 11 checks passed):
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
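Clearing both warnings means pruning unrecognized entries from `allowed-tools` and moving unknown top-level keys under `metadata`. The key and tool names below are hypothetical, since the report doesn't show the offending frontmatter:

```yaml
# Before: an unusual tool name and an unknown top-level key (both hypothetical)
allowed-tools: [Bash, WebBrowse]   # 'WebBrowse' is not a recognized tool name
category: ai-ml                    # unknown top-level key

# After: recognized tools only; extra keys moved under 'metadata'
allowed-tools: [Bash]
metadata:
  category: ai-ml
```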
Revision: 3e83543
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.