CtrlK
BlogDocsLog inGet started
Tessl Logo

llm-integration

Expert skill for integrating local Large Language Models using llama.cpp and Ollama. Covers secure model loading, inference optimization, prompt handling, and protection against LLM-specific vulnerabilities including prompt injection, model theft, and denial of service attacks.

79

Quality

76%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

SecuritybySnyk

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/llm-integration/SKILL.md
SKILL.md
Quality
Evals
Security

Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description demonstrates strong specificity with concrete actions and named tools, and occupies a distinct niche. However, it lacks an explicit 'Use when...' clause which limits its effectiveness for skill selection, and could benefit from more natural trigger terms that users would actually say when needing this skill.

Suggestions

Add a 'Use when...' clause with explicit triggers like 'Use when setting up local LLM inference, configuring Ollama or llama.cpp, or securing self-hosted AI models'

Include more natural user terms like 'run models locally', 'self-hosted AI', 'gguf files', 'quantized models', 'local inference'

DimensionReasoningScore

Specificity

Lists multiple specific concrete actions: 'model loading', 'inference optimization', 'prompt handling', and protection against specific vulnerabilities ('prompt injection, model theft, denial of service attacks'). Names specific tools (llama.cpp, Ollama).

3 / 3

Completeness

Clearly answers 'what' (integrating local LLMs with specific tools and security concerns), but lacks an explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied.

2 / 3

Trigger Term Quality

Includes some good technical terms users might say ('llama.cpp', 'Ollama', 'LLM', 'prompt injection'), but missing common variations like 'local AI', 'run models locally', 'self-hosted LLM', 'gguf', 'quantized models'.

2 / 3

Distinctiveness Conflict Risk

Clear niche focusing specifically on local LLM integration with named tools (llama.cpp, Ollama) and security concerns. Unlikely to conflict with general coding skills or cloud-based AI integration skills.

3 / 3

Total

10

/

12

Passed

Implementation

85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured skill with excellent actionability and progressive disclosure. The code examples are executable and security-focused, with clear workflows and validation checkpoints. The main weakness is moderate verbosity in introductory sections that explain concepts Claude already understands, though the technical content itself is appropriately dense.

Suggestions

Trim Section 1 Overview and Section 2 Core Principles - remove explanatory text about what prompt injection is or why TDD matters; Claude knows these concepts

Condense Section 3 Core Responsibilities into a brief bullet list rather than prose explanations of security principles

DimensionReasoningScore

Conciseness

The skill contains some unnecessary verbosity in the overview and principles sections that Claude already knows (e.g., explaining what prompt injection is, general TDD principles). However, the code examples are reasonably efficient and the tables are well-condensed.

2 / 3

Actionability

Provides fully executable Python code examples with proper imports, concrete configuration classes, and copy-paste ready implementations. The patterns include specific version numbers, exact commands, and working code snippets.

3 / 3

Workflow Clarity

The TDD workflow in Section 6 provides clear sequencing with explicit steps (write test → implement → refactor → verify). The pre-deployment checklist provides validation checkpoints, and security patterns include explicit verification steps like checksum validation.

3 / 3

Progressive Disclosure

Excellent structure with clear overview in SKILL.md and well-signaled one-level-deep references to `references/advanced-patterns.md`, `references/security-examples.md`, and `references/threat-model.md`. Each section appropriately points to detailed materials without nesting.

3 / 3

Total

11

/

12

Passed

Validation

68%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation11 / 16 Passed

Validation for skill structure

CriteriaDescriptionResult

skill_md_line_count

SKILL.md is long (609 lines); consider splitting into references/ and linking

Warning

description_trigger_hint

Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...')

Warning

metadata_version

'metadata' field is not a dictionary

Warning

license_field

'license' field is missing

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total

11

/

16

Passed

Repository
martinholovsky/claude-skills-generator
Reviewed

Table of Contents

Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.