
ollama-setup

Configure auto-configure Ollama when user needs local LLM deployment, free AI alternatives, or wants to eliminate hosted API costs. Trigger phrases: "install ollama", "local AI", "free LLM", "self-hosted AI", "replace OpenAI", "no API costs". Use when appropriate context detected. Trigger with relevant phrases based on skill purpose.


Quality

61%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/ai-ml/ollama-local-ai/skills/ollama-setup/SKILL.md

Quality

Discovery

72%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has strong trigger terms and a distinctive niche (Ollama/local LLM setup), but suffers from an awkward opening ('Configure auto-configure Ollama'), a lack of specific concrete actions, and a meaningless filler 'when' clause ('Use when appropriate context detected'). Replacing the generic guidance with actual use-case descriptions and listing specific capabilities would significantly improve it.

Suggestions

Replace the vague 'Use when appropriate context detected. Trigger with relevant phrases based on skill purpose.' with a concrete 'Use when' clause, e.g., 'Use when the user wants to set up Ollama, run models locally, or migrate away from hosted API providers.'

List specific concrete actions the skill performs, e.g., 'Installs Ollama, downloads and configures local models, sets up API endpoints, and verifies deployment.'

Fix the awkward 'Configure auto-configure Ollama' phrasing to something clear like 'Configures Ollama for local LLM deployment.'

Dimension / Reasoning / Score

Specificity

It names the domain (Ollama, local LLM deployment) and a general action ('configure auto-configure Ollama'), but doesn't list multiple specific concrete actions like installing, configuring models, setting up endpoints, etc. The phrase 'configure auto-configure' is also awkward and unclear.

2 / 3

Completeness

It partially answers 'what' (configure Ollama for local LLM deployment) and has trigger phrases, but the 'when' clause is weak and generic ('Use when appropriate context detected. Trigger with relevant phrases based on skill purpose') — this is filler rather than explicit guidance. The trigger phrases themselves help but the explicit 'when' statement is essentially meaningless.

2 / 3

Trigger Term Quality

Includes a good set of natural trigger phrases users would actually say: 'install ollama', 'local AI', 'free LLM', 'self-hosted AI', 'replace OpenAI', 'no API costs'. These cover multiple natural variations of how users might express this need.

3 / 3

Distinctiveness / Conflict Risk

The description targets a clear niche — Ollama configuration for local LLM deployment — with distinct trigger terms like 'install ollama', 'self-hosted AI', and 'replace OpenAI' that are unlikely to conflict with other skills.

3 / 3

Total: 10 / 12

Passed

Implementation

50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a solid structural foundation for Ollama setup with good coverage of platforms, hardware requirements, and error scenarios. Its main weaknesses are the lack of executable integration code snippets inline (deferred to missing bundle files), vague later workflow steps (7-10), and some verbosity in the overview and scenarios sections. The error handling table is a strength but would benefit from being part of the workflow's feedback loops rather than a separate section.

Suggestions

Add inline executable code snippets for Python, Node.js, and cURL integration (step 7) rather than deferring entirely to a missing reference file — at minimum include a quick-start example for each
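A minimal inline Python quick-start along these lines could look like the sketch below. It assumes Ollama's default local endpoint (`http://localhost:11434`) and its `/api/generate` route; the model name is a placeholder.

```python
import json
import urllib.request

# Ollama's default local API endpoint (assumed default install)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model, prompt):
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model, prompt, timeout=60):
    """Send a non-streaming generate request and return the response text."""
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server with the model pulled):
# print(generate("llama3.2", "Say hello in one word."))
```

Equivalent Node.js and cURL one-liners against the same endpoint would round out the suggestion.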

Make steps 7-10 concrete with specific commands: e.g., for GPU acceleration show the exact environment variables or Docker flags, for validation show a specific benchmark command with expected output
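As an illustration of what "exact environment variables or Docker flags" could mean here, a hedged config fragment (the `--gpus=all` Docker flag and the `OLLAMA_HOST`/`OLLAMA_MODELS` variables come from Ollama's published docs; the model name is a placeholder):

```shell
# GPU acceleration via Docker (NVIDIA): pass the GPUs through explicitly
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# Native install: relocate model storage and bind the API address
export OLLAMA_MODELS=/data/ollama/models
export OLLAMA_HOST=0.0.0.0:11434

# Validation: --verbose prints throughput stats (look for "eval rate: ... tokens/s")
ollama run llama3.2 --verbose "Reply with the single word: ready"
```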

Add feedback loops within the workflow: after step 5, explicitly state 'If test prompt fails, check error table below and retry' rather than separating error handling into a disconnected section
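The feedback loop this suggestion describes could be sketched generically. The `run_test_prompt` and `diagnose` names below are hypothetical, standing in for step 5's test command and a lookup into the error table:

```python
def run_with_retry(run_test_prompt, diagnose, max_attempts=3):
    """Run the step-5 test prompt; on failure, consult the error table and retry."""
    errors = []
    for attempt in range(1, max_attempts + 1):
        ok, detail = run_test_prompt()
        if ok:
            return {"status": "passed", "attempts": attempt}
        # e.g. diagnose maps "connection refused" -> "start the Ollama server"
        errors.append(diagnose(detail))
    return {"status": "failed", "attempts": max_attempts, "errors": errors}
```

Inlining even this much structure ties error recovery to the step where failures actually surface, instead of leaving the error table as a disconnected appendix.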

Provide the referenced bundle files (skill-workflow.md, errors.md) or inline the critical content — currently the skill depends on files that don't exist

Dimension / Reasoning / Score

Conciseness

The skill is reasonably well-structured but includes some unnecessary verbosity. The overview paragraph restates what the title and description already convey. The scenarios section describes expected outcomes narratively rather than concisely. The prerequisites section is useful but could be tighter. Some content like 'eliminating hosted API costs and enabling offline AI inference' is marketing-speak rather than actionable instruction.

2 / 3

Actionability

The skill provides concrete commands for installation (brew, curl, docker) and verification steps, which is good. However, it lacks executable code snippets for the integration step (Python, Node.js, cURL examples are promised but not shown inline — they're deferred to a referenced file). The scenarios are descriptive rather than providing copy-paste ready code. Steps 7-10 are vague directives without concrete commands.

2 / 3

Workflow Clarity

The 10-step workflow is clearly sequenced with a logical progression from detection through installation to validation. Steps 1-6 have explicit commands and verification checkpoints. However, steps 7-10 are vague ('Configure integration', 'Set up GPU acceleration', 'Configure model persistence', 'Validate end-to-end') without concrete commands or validation criteria. There's no explicit feedback loop for error recovery within the workflow itself — errors are handled in a separate table rather than inline.

2 / 3

Progressive Disclosure

The skill references two external files (`skill-workflow.md` and `errors.md`) for detailed content, which is good progressive disclosure design. However, no bundle files are provided, meaning these references are broken/unverifiable. The main file itself is somewhat long with inline content (error table, scenarios) that could arguably be in reference files, while the integration code that should be inline is deferred to a missing reference file.

2 / 3

Total: 8 / 12

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria / Description / Result

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 9 / 11

Passed

Repository
jeremylongshore/claude-code-plugins-plus-skills
Reviewed

