Parse Windows Prefetch files using the windowsprefetch Python library to reconstruct application execution history, detect renamed or masquerading binaries, and identify suspicious program execution patterns.
Does it follow best practices?

Impact: Pending — no eval scenarios have been run.
Status: Passed — no known issues.
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/analyzing-windows-prefetch-with-python/SKILL.md`

Quality
Discovery
82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong, specific description that clearly identifies its forensic analysis niche and lists concrete capabilities. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill. The domain-specific terminology is well-chosen and naturally aligns with what forensic analysts would request.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Windows Prefetch analysis, .pf files, application execution timelines, or forensic investigation of program execution artifacts.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Parse Windows Prefetch files', 'reconstruct application execution history', 'detect renamed or masquerading binaries', and 'identify suspicious program execution patterns'. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' with specific capabilities, but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which caps this dimension at 2 per the rubric guidelines. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords a forensic analyst would use: 'Windows Prefetch', 'prefetch files', 'application execution history', 'masquerading binaries', 'suspicious program execution', and the specific library name 'windowsprefetch'. These are terms users in DFIR would naturally mention. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche: Windows Prefetch file analysis is a very specific forensic domain. The mention of the specific Python library, Prefetch files, and forensic use cases like detecting masquerading binaries makes it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 — Passed |
Implementation
7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is essentially a description of what a prefetch analysis tool would do, rather than instructions for how to build or use one. It contains no executable Python code despite being explicitly about using the windowsprefetch Python library. The extensive example output creates an illusion of completeness while the actual instructional content is vague and abstract.
Suggestions
Add concrete, executable Python code using the windowsprefetch library for each step (e.g., `import windowsprefetch; pf = windowsprefetch.Prefetch('file.pf'); print(pf.executableName, pf.runCount, pf.lastRunTime)`)
Replace the vague 4-step workflow with specific code blocks showing how to parse files, extract fields, detect suspicious patterns (with actual comparison logic), and build timelines
Remove the generic 'When to Use' section and trim the overview to remove explanations of what Prefetch files are - Claude already knows this
Reduce the example output to a compact representative sample and instead invest those tokens in actual implementation code with validation steps
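The detection and timeline suggestions above can be sketched concretely. In the sketch below, each record mirrors the attributes the review proposes reading from `windowsprefetch.Prefetch` (`executableName`, `runCount`, `lastRunTime` — verify these names against the library before relying on them); the watch-list of tool names is an illustrative assumption, not part of the skill.

```python
from datetime import datetime

# In real use, each record would come from the library, roughly:
#   pf = windowsprefetch.Prefetch(path)
#   record = {"name": pf.executableName, "runs": pf.runCount, "last_run": pf.lastRunTime}
# (attribute names as suggested in this review; confirm against the library's API)

# Illustrative watch-list of tools commonly abused in intrusions (assumption; extend as needed)
SUSPICIOUS_NAMES = {"PSEXEC.EXE", "MIMIKATZ.EXE", "PROCDUMP.EXE"}

def flag_suspicious(records):
    """Return records whose executable name matches the watch-list (case-insensitive)."""
    return [r for r in records if r["name"].upper() in SUSPICIOUS_NAMES]

def build_timeline(records):
    """Order records by last-run timestamp to reconstruct execution history."""
    return sorted(records, key=lambda r: r["last_run"])

records = [
    {"name": "svchost.exe", "runs": 3, "last_run": datetime(2024, 5, 2, 9, 15)},
    {"name": "PsExec.exe", "runs": 1, "last_run": datetime(2024, 5, 2, 9, 10)},
]
print([r["name"] for r in flag_suspicious(records)])  # ['PsExec.exe']
print([r["name"] for r in build_timeline(records)])   # ['PsExec.exe', 'svchost.exe']
```

The same record shape extends naturally to masquerade detection (comparing an executable's name against the path it ran from), which is the comparison logic the suggestions ask the skill to spell out.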
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The 'When to Use' section is generic boilerplate that adds no value. The overview explains what Prefetch files are (Claude already knows this). The steps are vague placeholders without any actual code. The massive example output consumes significant tokens while providing no executable guidance. | 1 / 3 |
| Actionability | There is zero executable code in this skill despite being about Python-based analysis. Steps like 'Extract executable name, run count, last execution timestamps' and 'Flag known attack tools' describe what to do abstractly without any concrete implementation using the windowsprefetch library. The example output references a script (prefetch_analyzer.py) that is never shown or defined. | 1 / 3 |
| Workflow Clarity | The four steps are vague descriptions without concrete commands, validation checkpoints, or error handling. There is no guidance on what to do if parsing fails, how to handle corrupted prefetch files, or how to validate results. The workflow reads more like a table of contents than actionable steps. | 1 / 3 |
| Progressive Disclosure | The content has some structural organization with clear sections (Overview, Steps, Expected Output, Example Output), but the example output is excessively long and inline. There are no references to external files for detailed content like detection rule lists or an API reference for the windowsprefetch library. | 2 / 3 |
| Total | | 5 / 12 Passed |
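The error-handling gap flagged under Workflow Clarity could be closed with a wrapper that collects per-file failures instead of aborting on the first corrupted .pf file. The sketch below is hypothetical: `parse` stands in for a callable such as `windowsprefetch.Prefetch`, and the stub parser exists only so the example is self-contained.

```python
def parse_all(paths, parse):
    """Parse each prefetch file, collecting failures instead of raising.

    `parse` is any callable turning a path into a parsed record
    (e.g. windowsprefetch.Prefetch). Returns (records, errors).
    """
    records, errors = [], []
    for path in paths:
        try:
            records.append(parse(path))
        except Exception as exc:  # corrupted or truncated .pf files
            errors.append((path, str(exc)))
    return records, errors

# Demonstration with a stub parser that rejects one "corrupted" file
def stub_parse(path):
    if path.endswith("BAD.pf"):
        raise ValueError("truncated prefetch header")
    return {"path": path}

records, errors = parse_all(["CMD.EXE-1111.pf", "BAD.pf"], stub_parse)
print(len(records), len(errors))  # 1 1
```

Reporting the `errors` list alongside the parsed records gives the analyst the validation checkpoint the review asks for.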
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |