
prompt-engineering

Use this skill when you writing commands, hooks, skills for Agent, or prompts for sub agents or any other LLM interaction, including optimizing prompts, improving LLM outputs, or designing production prompt templates.


Quality: 38% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/customaize-agent/skills/prompt-engineering/SKILL.md

Quality

Discovery: 50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a relevant domain (prompt engineering and LLM interaction design) and includes a 'Use when' clause, which is positive. However, it focuses almost entirely on when to use the skill without clearly stating what the skill actually does or delivers. The language is also in second person ('Use this skill when you writing') which is grammatically awkward and uses an inappropriate voice.

Suggestions

Add a clear 'what it does' statement before the 'Use when' clause, e.g., 'Provides best practices and templates for crafting effective prompts, system instructions, and agent configurations.'

Expand trigger terms to include natural phrases like 'prompt engineering', 'system prompt', 'few-shot', 'instruction design', 'prompt optimization'.

Rewrite in third person voice (e.g., 'Guides the creation of commands, hooks, and prompts for agents and LLM interactions') and fix the grammatical error ('when you writing' → 'when writing').
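
Taken together, these suggestions might produce front matter along these lines. This is an illustrative sketch, not the reviewed skill's actual metadata; the field names assume conventional SKILL.md YAML front matter:

```yaml
# Hypothetical rewrite of the skill's description (illustrative only)
name: prompt-engineering
description: >
  Provides best practices, patterns, and templates for prompt engineering:
  crafting system prompts, few-shot examples, and instruction designs for
  agents, sub-agents, and other LLM interactions. Use when writing commands,
  hooks, or skills for an agent, optimizing prompts, improving LLM outputs,
  or designing production prompt templates.
```

Note how the first sentence states what the skill delivers in third person, while the second carries the 'Use when' triggers with natural-phrase keywords.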

Dimension scores

Specificity (2 / 3)
The description names the domain (LLM interaction, prompt engineering) and lists some actions like 'writing commands, hooks, skills for Agent' and 'optimizing prompts, improving LLM outputs, designing production prompt templates,' but these are somewhat vague and not deeply concrete actions.

Completeness (2 / 3)
It has a 'Use when' clause that covers the 'when' aspect, but the 'what does this do' part is weak — it describes when to use it but never clearly states what the skill actually does or produces (e.g., does it generate prompts? review them? provide best practices?).

Trigger Term Quality (2 / 3)
Includes some relevant keywords like 'prompts', 'sub agents', 'LLM', 'hooks', 'skills', 'prompt templates', but misses common natural variations users might say such as 'prompt engineering', 'system prompt', 'few-shot examples', 'chain of thought', or 'instruction tuning'.

Distinctiveness / Conflict Risk (2 / 3)
The scope is somewhat specific to prompt/LLM interaction authoring, but the broad phrasing 'any other LLM interaction' and overlap with general coding skills (writing commands, hooks) could cause conflicts with other skills.

Total: 8 / 12 (Passed)

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill suffers from severe verbosity, explaining well-known concepts (context windows, few-shot learning, chain-of-thought, persuasion psychology) that Claude already understands deeply. It combines three distinct topics into one monolithic file without any progressive disclosure or external references. While it contains some useful concrete examples, the signal-to-noise ratio is low—the actionable, novel content could likely be condensed to under 100 lines.

Suggestions

Remove explanations of concepts Claude already knows (what a context window is, what few-shot learning is, basic persuasion psychology) and focus only on project-specific patterns and novel guidance.

Split into multiple files: a concise SKILL.md overview with references to separate files for prompt engineering patterns, agent prompting practices, and persuasion principles.

Replace descriptive sections ('What it is', 'How it works') with terse pattern templates that are directly copy-paste usable when writing prompts.

Add a concrete workflow with validation steps: e.g., 'Write prompt → Test on 3 diverse inputs → Check for consistency → Iterate if needed → Document final version with rationale.'
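
One way to realize the splitting suggestion is a layout like the following sketch. The file names under references/ are illustrative, not prescribed by the review:

```
plugins/customaize-agent/skills/prompt-engineering/
├── SKILL.md                      # concise overview linking to the files below
└── references/
    ├── prompt-patterns.md        # prompt engineering patterns
    ├── agent-prompting.md        # agent prompting best practices
    └── persuasion-principles.md  # persuasion principles
```

With this structure, SKILL.md stays short enough to load eagerly, and the agent pulls in a reference file only when the task calls for it.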

Dimension scores

Conciseness (1 / 3)
This skill is extremely verbose at ~400+ lines. It explains concepts Claude already knows well (what a context window is, what few-shot learning is, what chain-of-thought prompting is, basic persuasion psychology). The entire 'Context Window' section explains what a context window is to an LLM. Much of the content is general prompt engineering knowledge that Claude was trained on, not novel project-specific guidance.

Actionability (2 / 3)
The skill provides some concrete examples (Python templates, markdown prompt examples) but much of the content is descriptive rather than instructive — it explains concepts and best practices rather than giving specific executable steps. The examples are illustrative rather than copy-paste ready for a specific task.

Workflow Clarity (2 / 3)
The 'Progressive Disclosure' pattern and 'Prompt Optimization' sections show a sequence (Level 1→4, Version 1→3), but there are no validation checkpoints or feedback loops for the prompt engineering process itself. The skill lacks a clear workflow for when/how to apply these techniques in practice — it reads more like a reference than a step-by-step guide.

Progressive Disclosure (1 / 3)
This is a monolithic wall of text with three major sections (Prompt Engineering Patterns, Agent Prompting Best Practices, Persuasion Principles) all inlined in a single file. There are no references to supporting files, and content that could easily be split (e.g., the persuasion principles section, the integration patterns) is all crammed into one document exceeding 400 lines.

Total: 6 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

Criteria results

skill_md_line_count: Warning
SKILL.md is long (560 lines); consider splitting into references/ and linking.

Total: 10 / 11 (Passed)

Repository: NeoLabHQ/context-engineering-kit (reviewed)

