
prompt-optimization

Applies prompt repetition to improve accuracy for non-reasoning LLMs

55

Quality

45%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security (by Snyk)

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./agent-skills/prompt-optimization/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a narrow technique (prompt repetition for non-reasoning LLMs), which gives it some distinctiveness, but it lacks concrete actions, natural trigger terms users would say, and, critically, a 'Use when...' clause. It reads more like a brief label than a functional skill description that would help Claude select it appropriately from a large skill library.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user is crafting prompts for non-reasoning models and wants to improve reliability through strategic repetition of key instructions.'

List specific concrete actions the skill performs, such as 'identifies critical instructions in a prompt, strategically duplicates them, and restructures prompt layout to reinforce key directives.'

Include natural trigger terms users might say, such as 'prompt engineering', 'repeat instructions', 'non-CoT models', 'improve prompt reliability', or specific model families like 'GPT-3.5', 'Claude Instant'.
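The three suggestions above could combine into frontmatter along these lines. This is an illustrative sketch, not the skill's actual frontmatter; the wording is an assumption built from the suggestions:

```yaml
---
name: prompt-optimization
description: >
  Applies prompt repetition to improve accuracy for non-reasoning LLMs.
  Identifies critical instructions in a prompt, strategically duplicates
  them, and restructures the prompt layout to reinforce key directives.
  Use when the user is doing prompt engineering for non-reasoning models
  and wants to improve prompt reliability by repeating instructions.
---
```

A description like this names concrete actions, embeds natural trigger terms ('prompt engineering', 'repeat instructions', 'prompt reliability'), and closes with an explicit 'Use when...' clause.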

Dimension / Reasoning / Score

Specificity

Names the domain (prompt repetition for LLMs) and a general action (improve accuracy), but doesn't list specific concrete actions like 'duplicates key instructions', 'restructures prompts', or 'inserts repeated emphasis phrases'.

2 / 3

Completeness

Describes what it does at a high level but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing 'Use when' caps completeness at 2, and the 'what' is also weak, so this scores a 1.

1 / 3

Trigger Term Quality

Includes some relevant terms like 'prompt repetition', 'accuracy', and 'non-reasoning LLMs', but misses common user-facing variations like 'prompt engineering', 'repeat instructions', 'prompt optimization', or specific model names users might reference.

2 / 3

Distinctiveness Conflict Risk

The mention of 'prompt repetition' and 'non-reasoning LLMs' provides some specificity that distinguishes it from general prompt engineering skills, but it could still overlap with broader prompt optimization or LLM tuning skills.

2 / 3

Total

7 / 12

Passed

Implementation

57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is reasonably well-organized with good progressive disclosure and clear structure, but suffers from redundancy between sections and lacks true actionability: it describes an automatic system without providing implementation code or verification steps. The 'Agent Instructions' section largely duplicates 'When to Activate', and the core mechanism is shown only as a conceptual example rather than executable code.

Suggestions

Remove the 'Agent Instructions' section or merge it with 'When to Activate' to eliminate redundancy

Add a verification step showing how to confirm prompt repetition is active and improving accuracy (e.g., checking metrics files or comparing outputs)

Provide executable implementation code rather than just a conceptual before/after string concatenation example
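The third suggestion could be satisfied by something as small as the sketch below. The function and parameter names are hypothetical, since the skill itself ships no implementation code; this only illustrates what an executable version of the before/after concatenation example might look like:

```python
def repeat_prompt(prompt: str, n: int = 2, separator: str = "\n\n") -> str:
    """Concatenate n copies of a prompt so a non-reasoning model
    re-reads the key instructions during prefill (prompt repetition).

    Names here are illustrative, not part of the skill being reviewed.
    """
    if n < 1:
        raise ValueError("n must be at least 1")
    return separator.join([prompt] * n)


# The repeated string is sent to the model in place of the original prompt:
optimized = repeat_prompt("Extract all dates in YYYY-MM-DD format.", n=2)
```

A verification step (suggestion 2) would then compare accuracy metrics between runs using the original prompt and runs using the repeated one.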

Dimension / Reasoning / Score

Conciseness

The skill includes some unnecessary explanation (e.g., 'The repeated prompt enables bidirectional attention within the parallelizable prefill stage' and the 'Agent Instructions' section which restates activation criteria). The performance table and configuration sections are reasonably tight, but overall there's redundancy between 'When to Activate' and 'Agent Instructions'.

2 / 3

Actionability

The configuration examples with environment variables are concrete and copy-paste ready, but the core mechanism ('your prompt will be automatically repeated') is passive: there's no executable code showing how to implement the repetition, just a conceptual before/after. The skill describes what happens rather than providing implementation code.

2 / 3

Workflow Clarity

The skill describes when it activates and configuration options, but lacks any validation or verification steps. There's no guidance on how to confirm the optimization is working, no feedback loop for checking accuracy improvements, and no troubleshooting steps if results don't improve.

2 / 3

Progressive Disclosure

The content is well-structured with clear sections, appropriate use of tables, and a single-level reference to 'references/prompt-repetition.md' for full documentation. The overview is concise with details appropriately organized into distinct sections.

3 / 3

Total

9 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
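The warning above usually means the frontmatter carries top-level keys outside the spec's recognized set. A sketch of the fix, with the offending key name purely hypothetical since the report does not say which key triggered the warning:

```yaml
---
name: prompt-optimization
description: Applies prompt repetition to improve accuracy for non-reasoning LLMs
metadata:
  author: asklokesh   # hypothetical example: formerly an unknown top-level key
---
```

Moving unrecognized keys under `metadata` (or deleting them) should clear the warning and bring validation to 11 / 11.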

Total

10 / 11

Passed

Repository: asklokesh/loki-mode (Reviewed)

