Investigate compromised Docker containers by analyzing images, layers, volumes, logs, and runtime artifacts to identify malicious activity and evidence.
Overall score: 78

Quality — 73%
Does it follow best practices?

Impact — Pending
No eval scenarios have been run.

Advisory: suggest reviewing before use.
Discovery — 82%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description with excellent specificity and distinctiveness in the Docker forensics domain. It lists concrete artifacts to analyze and clearly communicates its purpose. The main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill over others.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user mentions Docker forensics, compromised containers, container incident response, or needs to investigate suspicious Docker activity.'
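As a hedged sketch of that suggestion, the skill's frontmatter description could fold in the clause like this (field layout is assumed from typical SKILL.md conventions, not taken from the actual file; the skill name comes from the review's own path):

```yaml
---
name: analyzing-docker-container-forensics
description: >
  Investigate compromised Docker containers by analyzing images, layers,
  volumes, logs, and runtime artifacts to identify malicious activity and
  evidence. Use when the user mentions Docker forensics, compromised
  containers, container incident response, or needs to investigate
  suspicious Docker activity.
---
```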
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: analyzing images, layers, volumes, logs, and runtime artifacts, with clear goals of identifying malicious activity and evidence. | 3 / 3 |
| Completeness | Clearly answers 'what' (investigate compromised Docker containers by analyzing various artifacts) but lacks an explicit 'Use when...' clause specifying when Claude should select this skill. The 'when' is only implied by the nature of the task. | 2 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms users would say: 'compromised Docker containers', 'images', 'layers', 'volumes', 'logs', 'runtime artifacts', 'malicious activity', 'evidence'. These cover forensic/incident response vocabulary well. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche combining Docker forensics with incident response. The combination of 'compromised Docker containers' with specific forensic artifacts (layers, volumes, runtime artifacts) makes it very unlikely to conflict with general Docker or general security skills. | 3 / 3 |
| Total | | 11 / 12 — Passed |
Implementation — 64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides highly actionable, executable forensic investigation commands and scripts for Docker container analysis. Its main weaknesses are verbosity (explanatory tables, scenario descriptions that don't add actionable value) and lack of validation checkpoints between workflow steps. The content would benefit from splitting reference material into separate files and adding explicit verification steps.
Suggestions
- Add explicit validation checkpoints between steps (e.g., verify evidence hashes after export, confirm container export tar integrity before extraction, validate that docker commit succeeded before proceeding to layer analysis).
- Move the 'Key Concepts' and 'Tools & Systems' tables to a separate REFERENCE.md file — Claude already knows what overlay2 and volume mounts are.
- Remove or significantly condense the 'Common Scenarios' section — the descriptions are narrative rather than actionable and repeat guidance already covered in the workflow steps.
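As a hedged illustration of the first suggestion, a minimal checkpoint helper the skill could add between the export and analysis steps — assuming the evidence export is a tar file whose SHA-256 was recorded at acquisition time (function and file names here are hypothetical, not from the skill itself):

```python
import hashlib
import tarfile
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large container exports don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def checkpoint_export(export_path: str, recorded_sha256: str) -> bool:
    """Validation checkpoint: confirm the exported evidence is intact before analysis.

    Passes only if the file exists, its current hash matches the hash recorded
    at acquisition time, and the archive still opens as a valid tar file.
    """
    path = Path(export_path)
    if not path.is_file():
        return False  # export never completed or was moved
    if sha256_of(path) != recorded_sha256:
        return False  # evidence changed since acquisition
    if not tarfile.is_tarfile(path):
        return False  # export is corrupt or truncated
    return True
```

A workflow would call `checkpoint_export()` immediately after `docker export` and abort (rather than proceed to extraction) on a `False` result, giving the feedback loop the review says is missing.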
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is quite long with some sections that could be trimmed. The 'Key Concepts' table explains things Claude already knows (e.g., what image layers, overlay2, and volume mounts are). The 'Common Scenarios' section is descriptive rather than actionable. However, the code blocks themselves are efficient and purposeful. | 2 / 3 |
| Actionability | The skill provides fully executable bash commands and Python scripts throughout. Every step includes copy-paste ready commands with specific file paths, flags, and output redirection. The Python analysis scripts are complete and functional, not pseudocode. | 3 / 3 |
| Workflow Clarity | The 5-step workflow is clearly sequenced and logically ordered (preserve → analyze layers → host artifacts → filesystem changes → scan/report). However, there are no explicit validation checkpoints or feedback loops between steps — for example, no verification that evidence hashing succeeded, no check that container export completed correctly before proceeding to analysis, and no error recovery guidance for destructive operations like docker commit. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files for detailed content. The Key Concepts table, Tools & Systems table, and Common Scenarios sections could be split into separate reference files. The skill is well-structured with clear headers but everything is inline, making it very long for a SKILL.md overview. | 2 / 3 |
| Total | | 9 / 12 — Passed |
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure — 10 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
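The single warning concerns unknown frontmatter keys. A hedged sketch of the fix, using a hypothetical offending key (the actual key names would come from the skill's own frontmatter):

```yaml
# Before: an unrecognized top-level key triggers frontmatter_unknown_keys
---
name: analyzing-docker-container-forensics
description: ...
author: jane-doe        # hypothetical key, unknown to the spec
---

# After: nest it under metadata, or remove it entirely
---
name: analyzing-docker-container-forensics
description: ...
metadata:
  author: jane-doe
---
```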