Master context engineering for AI agent systems. Use when designing agent architectures, debugging context failures, optimizing token usage, implementing memory systems, building multi-agent coordination, evaluating agent performance, or developing LLM-powered pipelines. Covers context fundamentals, degradation patterns, optimization techniques, compression strategies, memory architectures, multi-agent patterns, evaluation, tool design, and project development.
Context engineering curates the smallest high-signal token set for LLM tasks. The goal: maximize reasoning quality while minimizing token usage.
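As a minimal sketch of that idea (all names and the rough 4-characters-per-token estimate are illustrative assumptions, not part of this skill), budget-constrained context curation might look like:

```python
# Minimal sketch of budget-constrained context curation.
# All names are illustrative; token counts are rough estimates.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def curate_context(snippets: list[tuple[float, str]], budget: int) -> list[str]:
    """Pick the highest-signal snippets that fit within a token budget.

    snippets: (relevance_score, text) pairs; budget: max tokens allowed.
    """
    selected, used = [], 0
    # Greedy: take the most relevant snippets first, skip what won't fit.
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost <= budget:
            selected.append(text)
            used += cost
    return selected

snippets = [
    (0.9, "User asked to refactor the auth module."),
    (0.2, "Unrelated changelog entry from last year."),
    (0.7, "Current file: auth.py, function login()."),
]
print(curate_context(snippets, budget=20))
```

In practice the scoring and token counting would come from a retriever and a real tokenizer; the point is that curation is a selection problem under a hard budget, not simple truncation.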
When multiple skills are active:
For detailed guidance, see:
- references/fundamentals.md - Context anatomy, attention mechanics
- references/degradation.md - Debugging failures, lost-in-middle, poisoning
- references/optimization.md - Compaction, masking, caching, partitioning
- references/compression.md - Long sessions, summarization strategies
- references/memory.md - Cross-session persistence, knowledge graphs
- references/multi-agent.md - Coordination patterns, context isolation
- references/evaluation.md - Testing agents, LLM-as-Judge, metrics
- references/tool-design.md - Tool consolidation, description engineering