
context-degradation

This skill should be used when the user asks to "diagnose context problems", "fix lost-in-middle issues", "debug agent failures", "understand context poisoning", or mentions context degradation, attention patterns, context clash, context confusion, or agent performance degradation. Provides patterns for recognizing and mitigating context failures.

Install with Tessl CLI

npx tessl i github:muratcankoylan/Agent-Skills-for-Context-Engineering --skill context-degradation

Overall score: 75%

Does it follow best practices?

Validation for skill structure

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description excels at trigger term coverage and completeness, providing extensive natural language triggers and clear guidance on when to use the skill. The main weakness is the vague capability statement: "provides patterns for recognizing and mitigating" doesn't specify what concrete actions the skill enables. The description structure is inverted (triggers before capabilities), which is unusual but functional.

Suggestions

Replace 'Provides patterns for recognizing and mitigating context failures' with specific actions like 'Analyzes context windows for attention degradation, identifies poisoned segments, restructures prompts to avoid lost-in-middle effects'

Consider restructuring to lead with capabilities before the 'Use when' clause for better readability

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (context problems/agent failures) and mentions 'patterns for recognizing and mitigating context failures' but lacks specific concrete actions like 'analyze attention patterns', 'restructure prompts', or 'identify poisoned context segments'. | 2 / 3 |
| Completeness | Explicitly answers both what ('Provides patterns for recognizing and mitigating context failures') and when ('should be used when the user asks to...' with extensive trigger list). Has clear 'Use when' equivalent structure. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'diagnose context problems', 'fix lost-in-middle issues', 'debug agent failures', 'context poisoning', 'context degradation', 'attention patterns', 'context clash', 'agent performance degradation'. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche focused specifically on context-related failures in agents. Terms like 'lost-in-middle', 'context poisoning', 'attention patterns' are specific enough to avoid conflicts with general debugging or agent skills. | 3 / 3 |
| Total | | 11 / 12 (Passed) |

Implementation: 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides comprehensive coverage of context degradation patterns with good conceptual depth and organization. However, it leans heavily toward explanation over actionable guidance, lacking executable code examples and explicit step-by-step workflows with validation checkpoints. The content would benefit from being more concise and providing concrete, copy-paste-ready diagnostic and mitigation procedures.

Suggestions

- Add executable code examples for detecting degradation (e.g., a Python function to measure context length and track performance metrics)
- Create explicit step-by-step diagnostic workflows with validation checkpoints (e.g., 'Step 1: Check context length -> Step 2: If >X tokens, run compaction -> Step 3: Validate output quality')
- Trim explanatory content about attention mechanisms and model internals that Claude already understands, focusing instead on actionable patterns
- Replace the illustrative YAML/markdown examples with concrete, executable diagnostic or mitigation code snippets
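A minimal sketch of what the suggested diagnostic workflow could look like. All names here (`MAX_CONTEXT_TOKENS`, `estimate_tokens`, `compact`, `diagnose`) are hypothetical stand-ins, not part of the skill or any real API; the token heuristic and compaction strategy are illustrative only.

```python
# Hypothetical sketch of a context-degradation diagnostic workflow.
# Threshold, token estimator, and compaction are illustrative stand-ins.

MAX_CONTEXT_TOKENS = 100_000  # assumed budget; tune per model


def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return len(text) // 4


def compact(context: str, keep_ratio: float = 0.5) -> str:
    # Placeholder compaction: keep the head and tail, drop the middle,
    # since mid-context content is most prone to lost-in-the-middle loss.
    keep = int(len(context) * keep_ratio / 2)
    return context[:keep] + "\n...[compacted]...\n" + context[-keep:]


def diagnose(context: str) -> str:
    # Step 1: check context length.
    if estimate_tokens(context) <= MAX_CONTEXT_TOKENS:
        return context
    # Step 2: over budget, run compaction.
    compacted = compact(context)
    # Step 3: validate the compacted context is actually smaller.
    assert estimate_tokens(compacted) < estimate_tokens(context)
    return compacted
```

In a real skill, `compact` would delegate to the agent's summarization or pruning mechanism, and the validation step would check output quality, not just size.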

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill contains substantial useful information but includes unnecessary explanations of concepts Claude likely knows (e.g., explaining what attention mechanisms are, how models allocate attention). Some sections could be tightened significantly, particularly the introductory paragraphs and the 'Empirical Evidence' subsections. | 2 / 3 |
| Actionability | The skill provides conceptual guidance and some practical patterns (four-bucket approach, architectural patterns) but lacks executable code examples. The two YAML/markdown examples are illustrative but not copy-paste actionable. Most guidance is descriptive rather than instructive with concrete commands or code. | 2 / 3 |
| Workflow Clarity | The skill describes patterns and strategies but lacks clear step-by-step workflows with validation checkpoints. The 'Guidelines' section lists recommendations but doesn't sequence them into actionable workflows. Detection and recovery processes are described conceptually without explicit validation steps. | 2 / 3 |
| Progressive Disclosure | The skill is well-organized with clear section headers, a logical progression from concepts to practical guidance, and appropriate references to related skills and external resources. The 'Integration' and 'References' sections provide clear one-level-deep navigation to related content. | 3 / 3 |
| Total | | 9 / 12 (Passed) |

Validation: 87%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 14 / 16 Passed

| Criteria | Description | Result |
| --- | --- | --- |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| Total | | 14 / 16 (Passed) |
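The two warnings above point at the skill's frontmatter. A sketch of what a corrected SKILL.md frontmatter might look like; the exact schema and field values are assumptions inferred from the warnings, not confirmed against the Tessl spec:

```yaml
---
name: context-degradation
description: ...
# 'license' was missing; any valid license identifier would clear the warning
license: MIT
# 'metadata' must be a dictionary (mapping), not a bare scalar
metadata:
  version: "1.0.0"
---
```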

