A prompt-repetition technique for improving LLM accuracy, achieving significant performance gains on 67% (47/70) of benchmarks. Automatically applied to lightweight models (haiku, flash, mini).
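The core idea can be sketched in a few lines: repeat the user's prompt when the target is a lightweight model, and leave prompts for larger models untouched. This is a minimal illustrative sketch, assuming a substring-based model check; the names `LIGHTWEIGHT_MODELS` and `repeat_prompt` are hypothetical, not the skill's actual API.

```python
# Assumed target list based on the description above (haiku, flash, mini).
LIGHTWEIGHT_MODELS = {"haiku", "flash", "mini"}

def repeat_prompt(prompt: str, model: str, times: int = 2) -> str:
    """Repeat the prompt `times` times for lightweight models only."""
    if not any(name in model.lower() for name in LIGHTWEIGHT_MODELS):
        return prompt  # larger models are left untouched
    # Join repetitions with a newline separator.
    return "\n".join([prompt] * times)
```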
- **Does it follow best practices?** 97% — Passed, no known issues
- **Impact:** 1.56x average score across 3 eval scenarios
Optimize this skill with Tessl: `npx tessl skill review --optimize ./.agent-skills/prompt-repetition/SKILL.md`

## Model-aware prompt transformer
| Criterion | Baseline | With skill |
| --- | --- | --- |
| Target model list | 30% | 100% |
| Reasoning model exclusion | 100% | 100% |
| CoT pattern detection | 100% | 100% |
| CoT skips repetition | 100% | 100% |
| Default 2x repetition | 100% | 100% |
| Applied marker added | 0% | 100% |
| Marker prevents re-application | 0% | 100% |
| Newline separator | 0% | 100% |
| 80% context ratio | 0% | 100% |
| Context overflow reduction | 0% | 100% |
| Token estimation method | 0% | 100% |
| No padding substitution | 100% | 100% |
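The criteria above describe a single transform: skip reasoning models and chain-of-thought prompts, repeat 2x by default with a newline separator, add a marker so the transform is never applied twice, and reduce the repetition count when the result would exceed 80% of the context window. A minimal sketch of that logic, assuming a characters-per-token heuristic for token estimation — the marker text, context limit, and function names are illustrative:

```python
MARKER = "<!-- prompt-repetition-applied -->"  # assumed marker text
CONTEXT_LIMIT = 8192                           # assumed context window (tokens)
CONTEXT_RATIO = 0.8                            # the "80% context ratio" criterion

def estimate_tokens(text: str) -> int:
    # Rough estimate: ~4 characters per token (a common heuristic).
    return len(text) // 4

def transform(prompt: str, is_reasoning_model: bool, times: int = 2) -> str:
    # Reasoning models and already-marked prompts are skipped.
    if is_reasoning_model or MARKER in prompt:
        return prompt
    # Chain-of-thought prompts are not repeated.
    if "step by step" in prompt.lower():
        return prompt
    # Reduce the repetition count rather than overflow 80% of the context;
    # no padding tokens are substituted for the dropped repetitions.
    while times > 1 and estimate_tokens(prompt) * times > CONTEXT_LIMIT * CONTEXT_RATIO:
        times -= 1
    # Prepend the marker, then repeat with a newline separator.
    return MARKER + "\n" + "\n".join([prompt] * times)
```

Prepending the marker means a second pass over the same prompt is a no-op, which is what the "Marker prevents re-application" criterion checks.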
## Position/index repetition count logic

| Criterion | Baseline | With skill |
| --- | --- | --- |
| Position pattern triggers 3x | 100% | 100% |
| Position keyword set | 80% | 100% |
| MCQ uses 2x | 100% | 100% |
| Applied marker present | 37% | 100% |
| Newline separator used | 100% | 100% |
| Target model only | 50% | 100% |
| CoT not repeated | 100% | 100% |
| No padding used | 100% | 100% |
| Duplicate prevention check | 0% | 100% |
| Demo output readable | 100% | 50% |
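The count-selection logic this scenario tests can be sketched as follows: position/index-style questions get 3x repetition, while everything else (including multiple-choice questions) gets the default 2x. The keyword set here is an assumption for illustration; the skill's actual list may differ.

```python
# Assumed keyword set for detecting position/index-style questions.
POSITION_KEYWORDS = {"position", "index", "nth", "offset"}

def repetition_count(prompt: str) -> int:
    """Pick the repetition count based on the prompt's question type."""
    words = prompt.lower().split()
    if any(kw in words for kw in POSITION_KEYWORDS):
        return 3  # position pattern triggers 3x
    return 2      # default, also used for MCQ prompts
```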
## Multi-agent duplicate prevention

| Criterion | Baseline | With skill |
| --- | --- | --- |
| Marker-based skip | 58% | 100% |
| Marker added on transform | 60% | 100% |
| Second agent skips re-application | 100% | 100% |
| wrap_llm_call pattern | 0% | 100% |
| Lightweight model auto-applied | 100% | 100% |
| Non-lightweight model skipped | 75% | 100% |
| x-prompt-repetition-applied header/metadata | 30% | 100% |
| Pipeline simulation output | 100% | 100% |
| CoT agent skipped | 0% | 62% |
| No duplicate repetition in output | 100% | 100% |
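Putting the multi-agent criteria together: a `wrap_llm_call`-style decorator applies repetition once for lightweight models, records an `x-prompt-repetition-applied` flag in shared metadata, and later agents in the pipeline see the flag and skip re-application. This is a sketch under those assumptions — the decorator signature and metadata shape are illustrative, not the skill's actual interface.

```python
HEADER = "x-prompt-repetition-applied"
LIGHTWEIGHT_MODELS = {"haiku", "flash", "mini"}  # assumed target list

def wrap_llm_call(llm_call):
    """Wrap an LLM call so each prompt is repeated at most once per pipeline."""
    def wrapped(prompt: str, model: str, metadata: dict):
        if metadata.get(HEADER):
            # A previous agent already applied repetition; pass through.
            return llm_call(prompt, model, metadata)
        if model in LIGHTWEIGHT_MODELS:
            # Auto-apply 2x repetition for lightweight models and flag it
            # so downstream agents skip re-application.
            prompt = prompt + "\n" + prompt
            metadata[HEADER] = "true"
        return llm_call(prompt, model, metadata)
    return wrapped
```

With shared metadata, a second wrapped agent receiving the same pipeline state sees the flag and leaves the prompt alone, so the final output contains no duplicate repetition.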
c033769
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.