
prompt-repetition

A prompt repetition technique for improving LLM accuracy. Achieves significant performance gains in 47 of 70 benchmarks (67%). Automatically applied on lightweight models (haiku, flash, mini).
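The technique itself is simple to illustrate. The snippet below is a minimal sketch only, assuming a plain string prompt; the function name, repetition count, and gating logic are illustrative, not the skill's actual implementation.

```python
# Minimal sketch of the idea, not the skill's actual code: for lightweight
# models the prompt is repeated so later copies can attend to the full text
# of earlier ones. The default count and gating are assumptions.
LIGHTWEIGHT_MODELS = {"haiku", "flash", "mini"}

def repeat_prompt(prompt: str, model: str, repetitions: int = 2) -> str:
    """Return the prompt repeated for lightweight models, unchanged otherwise."""
    if model not in LIGHTWEIGHT_MODELS or repetitions < 2:
        return prompt
    return "\n\n".join([prompt] * repetitions)
```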

Install with Tessl CLI

npx tessl i github:supercent-io/skills-template --skill prompt-repetition

65 · 1.56x

Quality: 47% (Does it follow best practices?)

Impact: 97% · 1.56x (Average score across 3 eval scenarios)

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agent-skills/prompt-repetition/SKILL.md

Discovery

17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description reads more like a technical specification or research finding than a skill description for Claude's skill selection. It lacks user-facing trigger terms and completely omits guidance on when to use it. The benchmark statistics, while impressive, don't help Claude determine when to select this skill over others.

Suggestions

Add an explicit 'Use when...' clause describing scenarios where this skill should be selected (e.g., 'Use when working with smaller models and accuracy is critical').

Replace technical jargon with natural user language - what problem does this solve that a user would describe in plain terms?

Clarify what concrete actions this skill performs from the user's perspective, not just the underlying technique.

Dimension / Reasoning / Score

Specificity

Names the domain (prompt repetition technique for LLM accuracy) and mentions a specific action (automatically applied on lightweight models), but doesn't list concrete user-facing actions or capabilities beyond the technique itself.

2 / 3

Completeness

Describes what it is (a prompt repetition technique) but completely lacks a 'Use when...' clause or any explicit guidance on when Claude should select this skill. The automatic application note doesn't help users know when to invoke it.

1 / 3

Trigger Term Quality

Uses technical jargon like 'prompt repetition technique', 'LLM accuracy', and 'benchmarks' that users wouldn't naturally say. Missing natural trigger terms like 'improve responses', 'better answers', or 'accuracy'.

1 / 3

Distinctiveness Conflict Risk

The focus on 'prompt repetition' and specific model names (haiku, flash, mini) provides some distinctiveness, but 'improving LLM accuracy' is broad enough to potentially conflict with other optimization or quality-focused skills.

2 / 3

Total: 6 / 12

Passed

Implementation

77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a comprehensive, actionable skill with excellent executable code and clear workflows. The main weakness is verbosity - the document explains concepts Claude knows (causal attention mechanics) and repeats information across sections (model lists, repetition counts). The content would benefit from splitting detailed implementation and research sections into separate files.

Suggestions

Remove or significantly condense the 'How It Works' section explaining causal attention limitations - Claude understands this concept

Move the production-ready implementation code to a separate IMPLEMENTATION.md file and reference it

Consolidate the repeated model lists and repetition count tables into a single Quick Reference section

Move research results and cost-accuracy analysis to a separate RESEARCH.md file for those who need the justification

Dimension / Reasoning / Score

Conciseness

The skill contains valuable information but is verbose in places, explaining concepts like causal attention that Claude likely understands. The extensive tables and repeated information (e.g., model lists appear multiple times) could be consolidated.

2 / 3

Actionability

Provides fully executable Python code for the transformer class, clear examples with before/after prompts, and specific implementation guidance for skill_loader.py integration. Code is copy-paste ready.

3 / 3

Workflow Clarity

Clear 4-step application procedure with explicit decision points (verify model, determine repetitions, check tokens, transform); a sketch of this procedure follows the table below. Includes validation via an A/B testing method and explicit rules for when NOT to apply.

3 / 3

Progressive Disclosure

Content is well-structured with clear sections, but the document is monolithic at ~400 lines. The production implementation code, research details, and multi-agent integration could be split into separate reference files.

2 / 3

Total: 10 / 12

Passed
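The Workflow Clarity reasoning above describes a four-step application procedure. The sketch below shows what such a procedure might look like under stated assumptions; the function names, per-model counts, token heuristic, and budget are illustrative, not the skill's actual code or defaults.

```python
# Illustrative sketch of the four-step procedure described under Workflow
# Clarity: verify model, determine repetitions, check tokens, transform.
# All names, thresholds, and per-model counts here are assumptions.
LIGHTWEIGHT_MODELS = {"haiku", "flash", "mini"}
REPETITIONS_BY_MODEL = {"haiku": 2, "flash": 2, "mini": 2}
MAX_PROMPT_TOKENS = 8_000  # assumed budget, not the skill's actual limit


def estimate_tokens(text: str) -> int:
    # Rough heuristic; a real implementation would use the model's tokenizer.
    return len(text) // 4


def apply_prompt_repetition(prompt: str, model: str) -> str:
    # Step 1: verify the model is one the technique targets.
    if model not in LIGHTWEIGHT_MODELS:
        return prompt
    # Step 2: determine the repetition count for this model.
    repetitions = REPETITIONS_BY_MODEL.get(model, 2)
    # Step 3: check that the repeated prompt fits the token budget.
    if estimate_tokens(prompt) * repetitions > MAX_PROMPT_TOKENS:
        return prompt
    # Step 4: transform the prompt by repeating it.
    return "\n\n".join([prompt] * repetitions)
```

Validation, as the review notes, would come from A/B testing responses with and without the transform rather than from the transform itself.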

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

skill_md_line_count

SKILL.md is long (544 lines); consider splitting into references/ and linking

Warning

Total: 10 / 11

Passed
