
ollama-setup

tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill ollama-setup

Configure auto-configure Ollama when user needs local LLM deployment, free AI alternatives, or wants to eliminate hosted API costs. Trigger phrases: "install ollama", "local AI", "free LLM", "self-hosted AI", "replace OpenAI", "no API costs". Use when appropriate context detected. Trigger with relevant phrases based on skill purpose.

Overall: 47%


Validation: 81%
| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 13 / 16 Passed

Implementation: 22%

This skill is a generic template with no Ollama-specific content whatsoever. Despite being titled 'Ollama Setup', it contains no installation commands, configuration guidance, model management instructions, or troubleshooting steps. The skill fails to deliver on its stated purpose of helping users with local LLM deployment.

Suggestions

- Add concrete installation commands for different platforms (e.g., `curl -fsSL https://ollama.com/install.sh | sh` for Linux/macOS, plus the Windows installer path)
- Include executable examples for pulling and running models (e.g., `ollama pull llama2`, `ollama run llama2 'Hello'`)
- Add a validation workflow: install -> verify with `ollama --version` -> pull model -> test with a simple prompt -> troubleshoot on failure
- Replace generic instructions with Ollama-specific guidance: API endpoint configuration, model selection criteria, memory requirements, and common integration patterns
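Taken together, these suggestions amount to a small setup script. A minimal sketch follows; the install URL and the `llama2` model name come from the suggestions above, and port 11434 is Ollama's default API port, so adjust for your platform and preferred model:

```shell
#!/usr/bin/env sh
# Sketch of the suggested install -> verify -> pull -> test workflow.

have_cmd() { command -v "$1" >/dev/null 2>&1; }

setup_ollama() {
  # 1. Install (Linux/macOS; Windows uses the installer from ollama.com)
  if ! have_cmd ollama; then
    curl -fsSL https://ollama.com/install.sh | sh
  fi

  # 2. Verify the binary is reachable before going further
  if ! ollama --version; then
    echo "ollama not on PATH; installation failed" >&2
    return 1
  fi

  # 3. Pull a model and smoke-test it with a simple prompt
  ollama pull llama2
  ollama run llama2 'Hello'

  # 4. Confirm the local API endpoint responds (default port 11434)
  curl -s http://localhost:11434/api/tags
}

# Only run when invoked with --run, so the file is safe to source:
if [ "${1:-}" = "--run" ]; then
  setup_ollama
fi
```

Each step gates the next, which gives the error-recovery checkpoints the review asks for: a failed install is caught at step 2 rather than surfacing as a confusing model-pull error later.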

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is brief but entirely generic: it doesn't waste tokens on explanations Claude knows, but the tokens it does use provide no Ollama-specific value. It's efficiently empty rather than efficiently informative. | 2 / 3 |
| Actionability | The skill provides zero concrete guidance for Ollama setup: no installation commands, no configuration examples, no model pulling syntax. Instructions like 'Invoke this skill when trigger conditions are met' are completely abstract and non-executable. | 1 / 3 |
| Workflow Clarity | The 4-step workflow is generic boilerplate that could apply to any skill. There are no Ollama-specific steps, no validation checkpoints for installation success, and no error recovery guidance for common setup issues. | 1 / 3 |
| Progressive Disclosure | References to external files (errors.md, examples.md) are present and one-level deep, but the main content is so sparse that there's nothing meaningful to disclose progressively. The structure exists but serves no purpose without actual content. | 2 / 3 |

Total: 6 / 12 Passed

Activation: 55%

The description has strong trigger term coverage but suffers from vague capability specification and redundant filler text. The awkward phrasing 'Configure auto-configure Ollama' is confusing, and the ending sentences add no meaningful information. The skill would benefit from clearly listing specific actions it performs.

Suggestions

- Replace 'Configure auto-configure Ollama' with specific actions like 'Install Ollama, configure models, set up local inference endpoints, and manage model downloads'
- Remove the redundant ending 'Use when appropriate context detected. Trigger with relevant phrases based on skill purpose'; it adds no value
- Add concrete deliverables or outcomes, such as 'generates configuration files, troubleshoots installation issues, optimizes model settings'
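Applying these suggestions, the frontmatter description might read something like the following. This is illustrative wording only, assembled from the actions, deliverables, and trigger phrases already quoted in this review:

```yaml
description: >
  Install Ollama, configure models, set up local inference endpoints, and
  manage model downloads for local LLM deployment. Generates configuration
  files, troubleshoots installation issues, and optimizes model settings.
  Trigger phrases: "install ollama", "local AI", "free LLM",
  "self-hosted AI", "replace OpenAI", "no API costs".
```

This keeps the strong trigger-term coverage the review credits while dropping the filler sentences it flags.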

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague language like 'Configure auto-configure Ollama' (grammatically awkward) and 'local LLM deployment' without listing concrete actions. It doesn't specify what configuration steps, setup tasks, or specific capabilities are provided. | 1 / 3 |
| Completeness | The 'when' is addressed with explicit trigger phrases, but the 'what' is weak: it says 'Configure auto-configure Ollama' without explaining what configuration actually happens. The ending 'Use when appropriate context detected. Trigger with relevant phrases based on skill purpose' is redundant filler that adds no value. | 2 / 3 |
| Trigger Term Quality | Includes good coverage of natural trigger phrases users would say: 'install ollama', 'local AI', 'free LLM', 'self-hosted AI', 'replace OpenAI', 'no API costs'. These are realistic terms users would naturally use when seeking this functionality. | 3 / 3 |
| Distinctiveness / Conflict Risk | The Ollama-specific focus and trigger phrases like 'install ollama' provide some distinctiveness, but broader terms like 'local AI' and 'free LLM' could overlap with other local AI/ML skills. The niche is somewhat clear but not fully distinct. | 2 / 3 |

Total: 8 / 12 Passed

Reviewed

