Maximize context window efficiency, reduce latency, and prevent lost-in-the-middle issues through strategic masking and compaction. Use when token budgets are tight, tool outputs flood the context, conversations drift from intent, or latency spikes from cache misses. (triggers: *.log, chat-history.json, reduce tokens, optimize context, summarize history, clear output)
Manage the Attention Budget. Treat context as a scarce resource.
Problem: Large tool outputs (logs, JSON lists) flood context and degrade reasoning. Solution: Replace raw output with semantic summaries after consumption.
See references/masking.md for masking patterns and implementation examples.
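A minimal sketch of the masking idea, assuming a simple list-of-dicts message history; the `mask_tool_output` helper, the `role`/`content` message shape, and the threshold value are illustrative assumptions, not a real API.

```python
# Hypothetical output masking: once a tool result has been consumed,
# replace the raw payload in history with a short semantic summary.
MASK_THRESHOLD = 500  # characters; tune to your token budget


def mask_tool_output(history, index, summary):
    """Replace a large tool message with a compact summary placeholder."""
    msg = history[index]
    if msg["role"] == "tool" and len(msg["content"]) > MASK_THRESHOLD:
        history[index] = {
            "role": "tool",
            "content": f"[masked: {len(msg['content'])} chars] {summary}",
        }
    return history


history = [
    {"role": "user", "content": "List failing tests"},
    {"role": "tool", "content": "x" * 10_000},  # stands in for a raw log dump
]
masked = mask_tool_output(history, 1, "3 of 212 tests failed, all in auth module")
```

The summary preserves the semantic result (what failed, where) while dropping the raw bytes the model no longer needs.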
Problem: Long conversations drift from original intent. Solution: Recursive summarization that preserves State over Dialogue.
See references/compaction.md for compaction algorithms and the compacted state format.
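One way to sketch "State over Dialogue": fold everything but the most recent turns into a single state message, keeping recent turns verbatim. The `compact` function, the `keep_recent` cutoff, and the placeholder summary text are assumptions for illustration; a real implementation would plug in an LLM-backed `summarize` callable.

```python
# Hedged sketch of history compaction: collapse old turns into one
# structured state message so the conversation's intent survives
# even as the dialogue itself is discarded.
def compact(history, keep_recent=4, summarize=None):
    """Collapse all but the last `keep_recent` turns into one state message."""
    if len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    # `summarize` would normally be an LLM call that extracts goals,
    # decisions, and open items from the elided turns.
    summary = summarize(old) if summarize else f"{len(old)} earlier turns elided"
    state_msg = {"role": "system", "content": f"[compacted state] {summary}"}
    return [state_msg] + recent


history = [{"role": "user", "content": f"turn {i}"} for i in range(10)]
compacted = compact(history, keep_recent=4)
```

Applied recursively, each compaction pass re-summarizes the previous state message together with the turns that have aged out since.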
Problem: Latency spikes when a changed prompt prefix invalidates the pre-fill cache. Goal: Maximize pre-fill cache hits by keeping the prompt prefix stable and append-only.
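A sketch of prefix-stable prompt assembly, assuming a provider whose pre-fill (KV) cache matches on byte-identical prefixes; the `build_prompt` helper and the system text are illustrative, not a specific vendor's API.

```python
# Stable content first, volatile content last: each call extends the
# previous prompt rather than rewriting it, so cached prefix tokens
# can be reused across calls.
STABLE_SYSTEM = "You are a coding agent. Tools: read_file, run_tests."


def build_prompt(history, dynamic_context):
    parts = [STABLE_SYSTEM]                       # identical bytes every call
    parts.extend(m["content"] for m in history)   # append-only history
    parts.append(dynamic_context)                 # timestamps, retrieved docs, etc.
    return "\n".join(parts)


p1 = build_prompt([{"content": "turn 1"}], "time=12:00")
p2 = build_prompt([{"content": "turn 1"}, {"content": "turn 2"}], "time=12:01")
# p2 shares p1's entire prefix up to the dynamic suffix
```

Putting a timestamp or retrieved document at the top of the prompt would break this sharing on every call, which is exactly the cache-miss latency spike the trigger list warns about.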