Extract and analyze Windows Registry hives to uncover user activity, installed software, autostart entries, and evidence of system compromise.
Overall: 63

Quality: 55% (does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Risk: Risky (do not use without reviewing)
Optimize this skill with Tessl
```shell
npx tessl skill review --optimize ./skills/analyzing-windows-registry-for-artifacts/SKILL.md
```

Quality
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is strong in specificity and distinctiveness, clearly identifying a niche forensic analysis capability around Windows Registry hives. Its main weaknesses are the lack of an explicit 'Use when...' clause and missing some natural trigger term variations that users in digital forensics might use.
Suggestions

- Add a 'Use when...' clause, e.g., 'Use when the user needs to examine Windows Registry artifacts, investigate system compromise, or perform forensic analysis of registry hives.'
- Include additional trigger terms and file references users might mention, such as 'NTUSER.DAT', 'SAM', 'SYSTEM hive', 'SOFTWARE hive', 'registry forensics', 'ShimCache', 'AmCache', or 'MRU lists'.
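A revised frontmatter description incorporating both suggestions might look like the following sketch (the exact wording and the `name` key are illustrative, inferred from the skill's directory name, not prescribed by the review):

```yaml
---
name: analyzing-windows-registry-for-artifacts
description: >
  Extract and analyze Windows Registry hives (SYSTEM, SOFTWARE, SAM, NTUSER.DAT)
  to uncover user activity, installed software, autostart entries, and evidence
  of system compromise. Use when the user needs registry forensics, wants to
  investigate ShimCache, AmCache, or MRU lists, or asks to parse registry hives
  from a disk image.
---
```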
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Extract and analyze Windows Registry hives' with specific outputs including 'user activity, installed software, autostart entries, and evidence of system compromise.' | 3 / 3 |
| Completeness | Clearly answers 'what does this do' (extract and analyze registry hives for specific artifacts), but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which caps this at 2 per the rubric. | 2 / 3 |
| Trigger Term Quality | Includes good domain-specific terms like 'Windows Registry hives', 'autostart entries', 'installed software', and 'system compromise', but misses common user variations like 'registry forensics', 'NTUSER.DAT', 'SAM', 'SYSTEM hive', 'regedit', or 'registry analysis'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Very clear niche targeting Windows Registry hive analysis specifically; unlikely to conflict with other skills due to the highly specific domain of registry forensics and the distinct trigger terms like 'Registry hives' and 'autostart entries'. | 3 / 3 |
| Total | | 10 / 12 Passed |
Implementation — 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill excels at actionability with concrete, executable commands and Python scripts for registry forensics. However, it is severely bloated—explaining concepts Claude already knows, including redundant reference tables inline, and listing near-identical commands that could be condensed. The lack of validation checkpoints for forensic operations (where data integrity is critical) and the monolithic structure significantly reduce its effectiveness as a skill file.
Suggestions

- Remove the 'Key Concepts' and 'Tools & Systems' tables or move them to a separate REFERENCE.md file—Claude already knows what registry hives and MRU lists are.
- Condense repetitive RegRipper commands into a single loop or table of plugin names rather than listing each individually with near-identical syntax.
- Add validation checkpoints: verify hive integrity after extraction (e.g., check file sizes, validate hive headers), confirm RegRipper output is non-empty, and include error handling for corrupted hives using transaction logs.
- Move Common Scenarios and Output Format to separate files (SCENARIOS.md, OUTPUT.md) and reference them from the main skill to reduce token footprint.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~200+ lines with extensive repetitive command blocks. The 'Key Concepts' table explains things Claude already knows (what a registry hive is, what MRU means, what transaction logs are). The 'When to Use' and 'Prerequisites' sections add padding. Many commands are near-duplicates differing only in plugin name, which could be condensed into a table or loop. | 1 / 3 |
| Actionability | The skill provides fully executable bash commands and Python scripts that are copy-paste ready. Specific file paths, tool invocations, and Python code with proper imports and struct unpacking for UserAssist parsing are all concrete and directly usable. | 3 / 3 |
| Workflow Clarity | Steps are clearly numbered and sequenced (extract → analyze → persistence → user activity → system info), but there are no validation checkpoints or feedback loops. No verification that hive extraction succeeded, no integrity checks after parsing, and no error recovery guidance for corrupted or dirty hives despite mentioning transaction logs. | 2 / 3 |
| Progressive Disclosure | The entire skill is a monolithic wall of content with no references to external files. The Key Concepts table, Tools & Systems table, four Common Scenarios, and detailed Output Format could all be split into separate reference files. Everything is inline, making the skill unnecessarily long for the context window. | 1 / 3 |
| Total | | 7 / 12 Passed |
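The "single loop" condensation suggested for the repetitive RegRipper commands is straightforward in shell. RegRipper's `rip.pl` takes a hive with `-r` and a plugin with `-p`; the hive path and plugin names below are examples only (run `rip.pl -l` for the full plugin list):

```shell
# Hive path is an example; point it at your extracted hive.
HIVE=./evidence/NTUSER.DAT
plugins="userassist recentdocs run typedpaths"
for plugin in ${plugins}; do
    # Dry run: print each command; pipe the output to `sh` (or drop
    # the echo) to actually invoke RegRipper.
    echo "rip.pl -r ${HIVE} -p ${plugin}"
done
```

Keeping the plugin list in one variable also makes it easy to redirect each plugin's output to a per-plugin file and check that it is non-empty, addressing the validation suggestion above.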
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |