context-fundamentals

This skill should be used when the user asks to "understand context", "explain context windows", "design agent architecture", "debug context issues", "optimize context usage", or discusses context components, attention mechanics, progressive disclosure, or context budgeting. Provides foundational understanding of context engineering for AI agent systems.


Quality: 53% (does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/context-fundamentals/SKILL.md
Quality

Discovery

72%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has strong trigger term coverage and explicit 'when to use' guidance, but falls short on specificity of capabilities — it only vaguely states it 'provides foundational understanding' without listing concrete actions. The domain is somewhat niche (context engineering for AI agents) but some terms like 'agent architecture' could cause overlap with other skills.

Suggestions

Replace 'Provides foundational understanding of context engineering' with specific concrete actions like 'Explains context window mechanics, diagrams token budget allocation, debugs context overflow issues, and designs progressive disclosure strategies for AI agent systems'.

Narrow potentially overlapping terms like 'design agent architecture' by qualifying them, e.g., 'design agent context architecture' to reduce conflict risk with general agent-building skills.
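Applied to the skill's frontmatter, the two suggestions above might yield something like the following. This is illustrative only; the exact field names and final wording are assumptions, composed from the suggested phrasing in this review:

```markdown
---
name: context-fundamentals
description: >
  Explains context window mechanics, diagrams token budget allocation,
  debugs context overflow issues, and designs progressive disclosure
  strategies for AI agent systems. Use when the user asks to "understand
  context", "explain context windows", "design agent context architecture",
  "debug context issues", or "optimize context usage".
---
```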

Dimension / Reasoning / Score

Specificity

The description says it 'provides foundational understanding of context engineering for AI agent systems' which is vague and abstract. It does not list concrete actions like 'explains attention mechanics', 'diagrams context window allocation', or 'debugs token budget issues'. The 'what' is essentially 'provides understanding' which is not a concrete action.

1 / 3

Completeness

The description explicitly answers both 'what' (provides foundational understanding of context engineering for AI agent systems) and 'when' (with a clear 'Use when' equivalent listing specific trigger phrases). The 'when' is strong and explicit, though the 'what' is somewhat weak in specificity.

3 / 3

Trigger Term Quality

The description includes many natural trigger terms users would say: 'understand context', 'explain context windows', 'design agent architecture', 'debug context issues', 'optimize context usage', 'attention mechanics', 'progressive disclosure', 'context budgeting'. These cover a good range of natural phrases.

3 / 3

Distinctiveness Conflict Risk

Terms like 'agent architecture' and 'context' are fairly broad and could overlap with skills about building AI agents, prompt engineering, or general AI architecture. However, the specific combination of 'context engineering', 'context windows', 'context budgeting', and 'attention mechanics' does carve out a somewhat distinct niche.

2 / 3

Total: 9 / 12

Passed

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like an educational article for human engineers than an actionable skill for Claude. It extensively explains concepts Claude already understands (attention mechanics, transformer architecture, tokenization) while providing relatively few concrete, executable instructions. The content would benefit enormously from aggressive trimming of explanatory material and replacement with specific, actionable procedures and decision trees.

Suggestions

Cut 60-70% of the explanatory content — remove sections on attention mechanics, position encoding, and other concepts Claude already knows. Focus only on non-obvious thresholds, specific techniques, and actionable decision criteria.

Replace descriptive paragraphs with concrete decision trees or checklists (e.g., 'When context exceeds 70% utilization: 1. Identify lowest-signal components using X criteria, 2. Apply compaction by doing Y, 3. Verify by checking Z').

Add executable code examples for key operations like token counting, context budget monitoring, and history compaction — the current examples are illustrative comments, not actionable implementations.

Practice progressive disclosure within the skill itself: move detailed topics (anatomy of context, attention mechanics) into referenced sub-files and keep only the actionable summary in the main SKILL.md.
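To make the "concrete decision trees" and "executable code examples" suggestions tangible, here is a minimal sketch of a context budget monitor with a compaction trigger. The 70% threshold comes from this review; the ~4-characters-per-token estimate is a common English-prose heuristic, not a figure from the skill, and all names here are hypothetical:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters-per-token heuristic."""
    return len(text) // 4


def context_utilization(components: dict, window: int = 200_000) -> float:
    """Fraction of the context window consumed by all named components.

    `components` maps component names (system prompt, history, tool
    results, ...) to their raw text; `window` is the assumed token limit.
    """
    used = sum(estimate_tokens(text) for text in components.values())
    return used / window


def should_compact(components: dict, threshold: float = 0.70) -> bool:
    """Trigger compaction once utilization crosses the 70% threshold."""
    return context_utilization(components) >= threshold
```

A real implementation would swap `estimate_tokens` for the model provider's tokenizer; the point is that the skill could ship checks of this shape rather than prose descriptions.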

Dimension / Reasoning / Score

Conciseness

This skill is extremely verbose (over 1,800 words) and explains many concepts Claude already knows well — attention mechanics, transformer architecture, position encoding, token estimation heuristics, and general prompt engineering principles. Much of this is foundational AI knowledge that doesn't need to be taught to Claude. The content reads more like a tutorial for human engineers than actionable instructions for an AI agent.

1 / 3

Actionability

The skill provides some concrete guidance (e.g., the system prompt XML structure example, the 60-70% capacity threshold, compaction triggers at 70-80%), but most content is descriptive rather than instructive. There are no executable code snippets or copy-paste-ready commands — the examples are illustrative markdown/comments rather than actionable implementations Claude could directly use.

2 / 3

Workflow Clarity

The skill describes processes like progressive disclosure and context budgeting but lacks explicit step-by-step workflows with validation checkpoints. The 'When to Activate' section lists triggers, and the guidelines provide a numbered list, but there's no clear 'do X, then validate Y, then proceed to Z' workflow for actually performing context engineering tasks.

2 / 3

Progressive Disclosure

The skill references external files (context-components.md, related skills) and has a References section with clear navigation signals. However, the main body is monolithic — the detailed topics on attention mechanics, position encoding, and context quality could easily be split into separate reference files. The irony is that a skill about progressive disclosure doesn't practice it well itself.

2 / 3

Total: 7 / 12

Passed
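History compaction, the third operation the Actionability suggestion asks for as executable code, could be sketched as follows. The `summarize` stub and the `keep_recent` parameter are hypothetical illustrations, not part of the reviewed skill:

```python
def summarize(turns: list) -> str:
    """Placeholder summarizer; a real agent would call a model here."""
    return f"[summary of {len(turns)} earlier turns]"


def compact_history(turns: list, keep_recent: int = 5) -> list:
    """Collapse all but the most recent turns into one summary entry."""
    if len(turns) <= keep_recent:
        return turns
    older, recent = turns[:-keep_recent], turns[-keep_recent:]
    return [summarize(older)] + recent
```

Paired with a utilization check, this gives the skill a concrete "when over threshold, compact, then verify" workflow instead of a descriptive paragraph.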

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: muratcankoylan/Agent-Skills-for-Context-Engineering (Reviewed)
