Perform comprehensive forensic analysis of disk images using Autopsy to recover files, examine artifacts, and build investigation timelines.
- Impact: Pending (no eval scenarios have been run)
- Advisory: suggest reviewing before use
To optimize this skill with Tessl, run:
`npx tessl skill review --optimize ./skills/analyzing-disk-image-with-autopsy/SKILL.md`

## Quality
### Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is strong in specificity and distinctiveness, clearly identifying a niche forensic analysis skill using Autopsy. Its main weaknesses are the absence of an explicit 'Use when...' clause and limited coverage of natural trigger term variations that users might employ when requesting forensic analysis help.
#### Suggestions

- Add an explicit 'Use when...' clause, e.g., "Use when the user needs digital forensics, disk image analysis, evidence examination, or mentions Autopsy."
- Expand trigger terms to include common variations such as 'digital forensics', 'deleted file recovery', 'evidence analysis', '.E01', '.dd', 'file carving', or 'incident response'.
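A revised frontmatter description incorporating both suggestions might look like the following sketch. The field names assume a standard SKILL.md frontmatter layout, and the exact wording is illustrative, not the skill's actual content:

```yaml
---
name: analyzing-disk-image-with-autopsy
description: >
  Perform comprehensive forensic analysis of disk images using Autopsy to
  recover deleted files, examine artifacts, and build investigation timelines.
  Use when the user needs digital forensics, evidence analysis, file carving,
  incident response, or mentions Autopsy, .E01, or .dd disk images.
---
```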
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions ('recover files', 'examine artifacts', and 'build investigation timelines'), all within the clearly named domain of forensic analysis using Autopsy on disk images. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' with specific actions, but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which caps this dimension at 2 per the rubric guidelines. | 2 / 3 |
| Trigger Term Quality | Includes good terms like 'forensic analysis', 'disk images', 'Autopsy', 'recover files', and 'investigation timelines', but misses common user variations like 'digital forensics', '.dd', '.E01', 'evidence', 'deleted files', 'file carving', or 'incident response'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'forensic analysis', 'disk images', and 'Autopsy' creates a very clear niche that is unlikely to conflict with other skills; this is a highly specialized domain with distinct terminology. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
### Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides highly actionable, concrete forensic analysis guidance with executable commands and specific GUI instructions, which is its primary strength. However, it is excessively verbose, includes concept explanations Claude doesn't need, and dumps everything into a single monolithic document without progressive disclosure. The workflow lacks explicit validation checkpoints critical for forensic operations where evidence integrity is paramount.
#### Suggestions

- Remove the Key Concepts and Tools tables entirely; Claude already knows what MFT, file carving, and these CLI tools are. This alone would cut ~30 lines.
- Add explicit validation checkpoints: verify the image hash before and after analysis, validate recovered files, and include a chain-of-custody verification step between major workflow stages.
- Split Common Scenarios and the detailed ingest module descriptions into separate reference files (e.g., SCENARIOS.md, INGEST_MODULES.md) and link to them from the main skill.
- Trim Prerequisites to only non-obvious items (e.g., hash databases, the disk space multiplier) and remove entries like 'Java Runtime Environment' and RAM recommendations.
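The hash-verification checkpoint suggested above could be sketched as a small shell step. The image name is hypothetical, and a stand-in file is created here so the sketch runs end-to-end without a real disk image; `sha256sum` (coreutils) is assumed available:

```shell
#!/bin/sh
# Sketch of a before/after hash-verification checkpoint around an analysis pass.
# "evidence.dd" is a hypothetical image name; a stand-in file is created so
# this runs without a real disk image.
IMAGE="evidence.dd"
printf 'demo evidence bytes' > "$IMAGE"

# 1. Record the baseline hash before any analysis.
BASELINE=$(sha256sum "$IMAGE" | awk '{print $1}')

# 2. ... run the (read-only) analysis steps here ...

# 3. Re-hash afterwards and fail loudly if the evidence changed.
POST=$(sha256sum "$IMAGE" | awk '{print $1}')
if [ "$BASELINE" = "$POST" ]; then
    echo "integrity OK: $BASELINE"
else
    echo "EVIDENCE MODIFIED: $BASELINE != $POST" >&2
    exit 1
fi
```

The same pattern generalizes to each recovered file: hash at extraction time, record the value in the case notes, and re-verify before reporting.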
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~200+ lines. Explains basic concepts Claude already knows (what MFT is, what file carving is, what NTFS is). The Key Concepts and Tools tables are largely unnecessary padding. Prerequisites list system requirements Claude doesn't need to be told. The Common Scenarios section describes workflows at a high level without adding actionable value beyond what the main workflow already covers. | 1 / 3 |
| Actionability | Provides fully executable CLI commands (fls, icat, mmls, mactime, tsk_recover) with concrete paths, flags, and expected output examples. The GUI steps are specific, with exact menu paths and field values. Regex patterns for keyword searches are copy-paste ready. | 3 / 3 |
| Workflow Clarity | Steps are clearly sequenced from case creation through analysis to reporting, but there are no explicit validation checkpoints or error recovery feedback loops. For forensic operations (which are sensitive and can be destructive to evidence integrity), there should be hash verification steps after recovery, validation of extracted files, and explicit chain-of-custody verification points. The Data Source Integrity module is mentioned but not enforced as a checkpoint. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with everything inline. The Key Concepts table, Tools table, Common Scenarios, and Output Format sections could all be in separate reference files. No references to external files for advanced topics. The content is ~200 lines that could be a concise overview with links to detailed guides for each step. | 1 / 3 |
| Total | | 7 / 12 (Passed) |
### Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 checks passed.
#### Validation for skill structure

| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 passed | |
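The `frontmatter_unknown_keys` warning can typically be cleared by nesting nonstandard keys under `metadata`, as the check itself suggests. A sketch, assuming a hypothetical `category` key triggered the warning:

```yaml
---
name: analyzing-disk-image-with-autopsy
description: ...
# before: a top-level key the validator does not recognize
# category: forensics
# after: moved under metadata, per the validator's suggestion
metadata:
  category: forensics
---
```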