
analyzing-docker-container-forensics

Investigate compromised Docker containers by analyzing images, layers, volumes, logs, and runtime artifacts to identify malicious activity and evidence.


Quality: 73%

Does it follow best practices?

Impact: Pending

No eval scenarios have been run.

Security by Snyk

Advisory

Reviewing before use is suggested.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/analyzing-docker-container-forensics/SKILL.md

Quality

Discovery: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description with excellent specificity and distinctiveness in the Docker container forensics domain. It names concrete artifacts and actions clearly. The main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill over others.

Suggestions

Add a 'Use when...' clause such as 'Use when the user mentions Docker forensics, compromised containers, container incident response, or needs to investigate suspicious Docker activity.'
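For illustration, the suggested clause could be folded into the SKILL.md frontmatter along these lines. This is a sketch, not the skill's actual frontmatter:

```yaml
name: analyzing-docker-container-forensics
description: >
  Investigate compromised Docker containers by analyzing images, layers,
  volumes, logs, and runtime artifacts to identify malicious activity and
  evidence. Use when the user mentions Docker forensics, compromised
  containers, container incident response, or needs to investigate
  suspicious Docker activity.
```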

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: analyzing images, layers, volumes, logs, and runtime artifacts, with clear goals of identifying malicious activity and evidence.

3 / 3

Completeness

Clearly answers 'what' (investigate compromised Docker containers by analyzing various artifacts) but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which caps this at 2 per the rubric.

2 / 3

Trigger Term Quality

Includes strong natural trigger terms users would say: 'compromised', 'Docker containers', 'images', 'layers', 'volumes', 'logs', 'malicious activity', 'evidence'. These cover the forensics/incident response domain well.

3 / 3

Distinctiveness Conflict Risk

Highly distinctive niche combining Docker forensics with incident response. The combination of 'compromised Docker containers' with specific artifact types (layers, volumes, runtime artifacts) makes it very unlikely to conflict with general Docker or general security skills.

3 / 3

Total: 11 / 12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a highly actionable Docker forensics skill with excellent executable code examples and a logical multi-step workflow. Its main weaknesses are verbosity (redundant concept tables, scenario descriptions that restate workflow content) and lack of validation checkpoints in what is inherently a forensic evidence-handling process where verification is critical. The content would benefit from trimming explanatory tables and adding explicit verification steps.

Suggestions

Add explicit validation checkpoints after critical steps — e.g., verify evidence hashes, confirm tar export integrity, validate that docker export/commit succeeded before proceeding to analysis steps.

Remove or significantly trim the 'Key Concepts' and 'Tools & Systems' tables — these explain things Claude already knows and the tools are already demonstrated in the workflow.

Split detailed content (Common Scenarios, full Python analysis scripts) into separate referenced files to keep SKILL.md as a concise overview with navigation links.

Add error recovery guidance for common failure modes (e.g., container already removed, Docker daemon not running, insufficient permissions).
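As a sketch of what such a validation checkpoint could look like, the snippet below hashes an exported evidence tarball (e.g. produced by `docker export <id> -o evidence.tar`) and confirms its integrity before analysis proceeds. The function and file names are hypothetical, not taken from the skill itself:

```shell
# Hypothetical checkpoint: verify an evidence tarball before any analysis step.
verify_evidence() {
  tarball="$1"
  # 1. Record a SHA-256 hash so later tampering is detectable.
  sha256sum "$tarball" > "$tarball.sha256" || return 1
  # 2. Confirm the archive is structurally readable end to end.
  tar -tf "$tarball" > /dev/null || return 1
  # 3. Re-check the recorded hash as an immediate sanity check.
  sha256sum -c "$tarball.sha256" > /dev/null || return 1
  echo "verified: $tarball"
}
```

A workflow step would then call `verify_evidence evidence.tar` and abort on a non-zero exit status rather than continuing with possibly corrupt evidence.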

Dimension / Reasoning / Score

Conciseness

The skill is fairly comprehensive but includes some unnecessary elements: the 'Key Concepts' table explains things Claude already knows (what overlay2 is, what volume mounts are), the 'Tools & Systems' table is largely redundant given the tools are demonstrated in the workflow, and the 'Common Scenarios' section is verbose prose that doesn't add actionable guidance beyond what's already in the workflow steps.

2 / 3

Actionability

The skill provides fully executable, copy-paste ready bash commands and Python scripts throughout all workflow steps. Commands include specific flags, output paths, and real tool invocations. The Python analysis scripts are complete and functional, not pseudocode.

3 / 3

Workflow Clarity

The 5-step workflow is clearly sequenced and logically ordered (preserve → analyze layers → host artifacts → filesystem changes → scan/report). However, there are no explicit validation checkpoints or feedback loops — for instance, no verification that evidence hashing succeeded, no check that container export completed correctly, and no error recovery guidance for failed commands in this forensically sensitive context.

2 / 3

Progressive Disclosure

The content is a monolithic document with no references to external files for detailed content. The Key Concepts table, Tools & Systems table, Common Scenarios section, and Output Format could be split into separate reference files. The document is quite long (~200+ lines of substantive content) and would benefit from a concise overview with links to detailed guides.

2 / 3

Total: 9 / 12 (Passed)
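The missing error recovery called out above could take the form of a pre-flight guard run before any forensic Docker command. The sketch below covers the failure modes the review names (daemon not running, insufficient permissions, container already removed); the function name is hypothetical:

```shell
# Hypothetical pre-flight guard for forensic Docker commands.
preflight() {
  cid="$1"
  # Daemon unreachable covers both "not running" and permission problems.
  if ! docker info > /dev/null 2>&1; then
    echo "error: cannot reach the Docker daemon (not running, or insufficient permissions?)" >&2
    return 1
  fi
  # The container may already have been removed by the attacker or an operator.
  if ! docker inspect "$cid" > /dev/null 2>&1; then
    echo "error: container '$cid' not found; it may already have been removed" >&2
    echo "hint: check 'docker ps -a' and any previously saved exports or committed images" >&2
    return 2
  fi
}
```

Each workflow step could begin with `preflight "$CONTAINER_ID" || exit` so failures surface with actionable hints instead of cryptic downstream errors.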

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys: Unknown frontmatter key(s) found; consider removing or moving to metadata (Warning)
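Assuming the skill spec's optional `metadata` map, the warning could be resolved by moving any non-standard keys under it. The `category` key below is a hypothetical example of such an unknown key:

```yaml
name: analyzing-docker-container-forensics
description: Investigate compromised Docker containers by analyzing images, layers, volumes, logs, and runtime artifacts to identify malicious activity and evidence.
metadata:
  # Unrecognized top-level keys can live here instead
  category: incident-response
```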

Total: 10 / 11 (Passed)

Repository: mukul975/Anthropic-Cybersecurity-Skills (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.