This skill should be used when the user asks to "offload context to files", "implement dynamic context discovery", "use filesystem for agent memory", "reduce context window bloat", or mentions file-based context management, tool output persistence, agent scratch pads, or just-in-time context loading.
Install with Tessl CLI:

`npx tessl i github:muratcankoylan/Agent-Skills-for-Context-Engineering --skill filesystem-context`

Overall score: 66%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:

`npx tessl skill review --optimize ./path/to/skill`
Discovery: 37%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially all trigger terms with zero capability explanation. While it excels at listing when to use the skill, it completely fails to explain what the skill actually does. A user or Claude would know when to select it but have no idea what actions it performs.
Suggestions
Add concrete actions at the beginning describing what the skill does (e.g., 'Writes tool outputs and intermediate results to files, creates structured scratch pads, and loads context on-demand from the filesystem.')
Restructure to lead with capabilities, then follow with 'Use when...' clause containing the existing trigger terms
Use third person voice to describe specific operations (e.g., 'Persists conversation context to files', 'Retrieves stored context when needed')
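Applied to this skill, the three suggestions might combine into a description like the following; the wording is illustrative, not the maintainer's actual text:

```yaml
description: >
  Writes tool outputs and intermediate results to files, creates structured
  scratch pads, and loads stored context on demand from the filesystem.
  Use when the user asks to "offload context to files", "reduce context
  window bloat", or mentions agent scratch pads, tool output persistence,
  or just-in-time context loading.
```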
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions, only trigger phrases. It never explains what the skill actually does, using vague concepts like 'dynamic context discovery' and 'agent memory' without specifying concrete capabilities. | 1 / 3 |
| Completeness | The description addresses only 'when' (trigger conditions) and omits 'what': there is no explanation of the actions or capabilities this skill provides. This is the inverse of the typical problem. | 1 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'offload context to files', 'reduce context window bloat', 'agent scratch pads', 'just-in-time context loading'. These are specific phrases a user working with context management would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | The trigger terms are fairly specific to context/memory management, but without knowing what the skill actually does, it's unclear how it would be distinguished from other file-handling or memory-related skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a comprehensive skill with strong actionability through concrete code examples and good progressive disclosure. However, it suffers from verbosity in conceptual explanations that Claude doesn't need, and lacks explicit validation/verification steps for potentially risky operations like self-modification and file cleanup.
Suggestions
Trim the 'Core Concepts' and 'Static vs Dynamic Context Trade-off' sections significantly; Claude already understands these trade-offs, and the current explanation adds roughly 500 tokens of context it doesn't need.
Add explicit validation steps to Pattern 6 (Self-Modification): specify which guardrails to implement before allowing agents to modify their own instructions.
Add a cleanup/validation workflow for scratch files with explicit steps: when to clean, how to verify files aren't needed, and error recovery if cleanup removes needed content.
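The cleanup suggestion can be made concrete with a small sketch. Everything here is an assumption rather than part of the reviewed skill: the `scratch/` layout, the name-based `is_referenced` check, and the archive-instead-of-delete recovery step:

```python
import shutil
from pathlib import Path

SCRATCH_DIR = Path("scratch")           # hypothetical scratch-pad directory
ARCHIVE_DIR = Path("scratch/.archive")  # recovery area instead of hard deletion

def is_referenced(path: Path, notes: str) -> bool:
    """Verify a scratch file is still mentioned in the agent's working notes."""
    return path.name in notes

def cleanup_scratch(notes: str) -> list[str]:
    """Archive (rather than delete) unreferenced scratch files.

    Moving files to an archive directory keeps cleanup recoverable: if a
    file turns out to be needed, it can simply be moved back.
    """
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    archived = []
    for path in SCRATCH_DIR.glob("*.md"):
        if not is_referenced(path, notes):
            shutil.move(str(path), str(ARCHIVE_DIR / path.name))
            archived.append(path.name)
    return archived
```

The key design choice is that "cleanup" never destroys data outright, which directly addresses the error-recovery concern raised above.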
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains useful information but is verbose in places, explaining concepts like 'static vs dynamic context' at length when Claude likely understands these trade-offs. Some sections could be tightened significantly while preserving clarity. | 2 / 3 |
| Actionability | Provides concrete, executable code examples throughout (Python implementations, YAML structures, bash commands, directory layouts). The patterns are copy-paste ready with clear implementation guidance. | 3 / 3 |
| Workflow Clarity | While the patterns are well explained individually, the skill lacks explicit validation checkpoints and feedback loops. For example, the self-modification pattern mentions 'careful guardrails' but doesn't specify what validation steps to take before writing to instruction files. | 2 / 3 |
| Progressive Disclosure | Well structured, with clear sections, a 'When to Activate' trigger section, and appropriate references to related skills and external resources. The internal reference to implementation-patterns.md provides one-level-deep navigation for detailed content. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Validation: 87%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 14 / 16 passed
| Criteria | Description | Result |
|---|---|---|
| metadata_version | The 'metadata' field is not a dictionary | Warning |
| license_field | The 'license' field is missing | Warning |
| Total | 14 / 16 Passed | |
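Both warnings are frontmatter fixes. Assuming the skill follows the common SKILL.md frontmatter convention, a compliant header might look roughly like this (all field values are placeholders, not taken from the actual skill):

```yaml
---
name: filesystem-context
description: Offloads agent context to files for just-in-time loading.
license: MIT          # placeholder; resolves the missing 'license' warning
metadata:             # a dictionary, resolving the 'metadata' warning
  version: "1.0.0"
---
```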
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.