Analyzes intrusion activity against the Lockheed Martin Cyber Kill Chain framework to identify which phases an adversary has completed, where defenses succeeded or failed, and what controls would have interrupted the attack at earlier phases. Use when conducting post-incident analysis, building prevention-focused security controls, or mapping detection gaps to kill chain phases. Activates for requests involving kill chain analysis, intrusion kill chain, attack phase mapping, or Lockheed Martin kill chain framework.
Score: 84

- Quality: 81% (Does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Status: Passed (no known issues)
Quality
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly articulates specific capabilities, provides explicit trigger guidance with both 'Use when' and 'Activates for' clauses, and occupies a well-defined niche around the Lockheed Martin Cyber Kill Chain framework. It uses proper third-person voice throughout and includes natural keywords that users in cybersecurity would actually use. The description is concise yet comprehensive.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: identifies which phases an adversary has completed, where defenses succeeded or failed, and what controls would have interrupted the attack at earlier phases. These are clear, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (analyzes intrusion activity against the Cyber Kill Chain framework to identify phases, defense gaps, and controls) and 'when' (explicit 'Use when' clause for post-incident analysis, building security controls, and mapping detection gaps, plus an 'Activates for' clause with trigger terms). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'kill chain analysis', 'intrusion kill chain', 'attack phase mapping', 'Lockheed Martin kill chain framework', 'post-incident analysis', 'detection gaps'. Good coverage of variations, including the formal name and common shorthand. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: specifically the Lockheed Martin Cyber Kill Chain framework. The combination of framework-specific terminology and explicit trigger terms makes it very unlikely to conflict with other security or analysis skills. | 3 / 3 |
| **Total** | | 12 / 12 Passed |
Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured instructional skill with a clear 5-step workflow and useful concrete examples like the phase matrix template. Its main weaknesses are moderate verbosity (explaining concepts Claude already knows, like kill chain phase definitions and basic security terms) and lack of truly executable artifacts (no detection queries, no template files, no code). The workflow clarity is strong, but actionability and conciseness could be improved.
Suggestions
- Add concrete, executable examples such as Splunk/EQL detection queries for specific kill chain phases, or a structured JSON/YAML template for the kill chain analysis report output.
- Trim the phase indicator lists and Key Concepts table: Claude already knows what phishing, beaconing, and scheduled tasks are. Focus on non-obvious indicators and decision criteria instead.
- Extract detailed content (full ATT&CK-to-kill-chain mappings, COA examples per phase, report templates) into referenced supplementary files to improve progressive disclosure.
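As a sketch of the first suggestion, a structured report output might look like the following. This is a hypothetical schema: the class and field names (`PhaseFinding`, `KillChainReport`, `recommended_controls`, etc.) are illustrative assumptions, not defined by the skill or the Lockheed Martin framework.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical schema for a machine-readable kill chain analysis report.
# Field names are assumptions chosen for illustration.
@dataclass
class PhaseFinding:
    phase: str                  # e.g. "Delivery"
    completed: bool             # did the adversary complete this phase?
    detected: bool              # did existing controls detect the activity?
    attack_techniques: list = field(default_factory=list)   # ATT&CK IDs
    recommended_controls: list = field(default_factory=list)

@dataclass
class KillChainReport:
    incident_id: str
    findings: list = field(default_factory=list)

report = KillChainReport(
    incident_id="IR-2024-001",
    findings=[
        PhaseFinding(
            phase="Delivery",
            completed=True,
            detected=False,
            attack_techniques=["T1566.001"],  # spearphishing attachment
            recommended_controls=["attachment detonation sandbox"],
        ),
    ],
)

# asdict() recursively converts nested dataclasses, so the report
# serializes straight to JSON.
print(json.dumps(asdict(report), indent=2))
```

A template like this would give agents a copy-paste-ready output contract instead of a purely descriptive report structure.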
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient but includes some content Claude would already know (e.g., basic definitions of kill chain phases, what beaconing is, what phishing emails are). The Key Concepts table and phase indicator lists add bulk that an experienced analyst model wouldn't need. However, the structure is reasonably tight and not egregiously padded. | 2 / 3 |
| Actionability | The skill provides a clear multi-step process with concrete examples (the phase matrix template, COA categories, report structure) but lacks executable code, commands, or copy-paste-ready artifacts. The guidance is specific enough to follow but remains at the instructional/descriptive level rather than providing concrete query examples, detection rule snippets, or template files. | 2 / 3 |
| Workflow Clarity | The 5-step workflow is clearly sequenced, with logical progression from mapping actions → identifying detection points → ATT&CK enrichment → COA development → reporting. The phase matrix example in Step 2 serves as an explicit validation checkpoint showing what completed vs. detected vs. not achieved looks like, and Steps 3-4 build iteratively on prior steps. | 3 / 3 |
| Progressive Disclosure | The content is well structured with clear sections and headers, but it is monolithic: all content is inline, with no references to external files for detailed material such as ATT&CK mappings, report templates, or example analyses. The Tools & Systems section mentions external tools but does not link to supplementary skill files. For a skill of this length (~120 lines), some content (e.g., the full phase indicator lists and the Key Concepts table) could be split out. | 2 / 3 |
| **Total** | | 9 / 12 Passed |
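The ATT&CK enrichment step in the workflow above could be made directly executable along these lines. The tactic-to-phase table is an approximate, analyst-dependent crosswalk (the two frameworks do not align one-to-one), so treat it as a starting point rather than a canonical mapping.

```python
# Approximate mapping of MITRE ATT&CK Enterprise tactics to Lockheed Martin
# kill chain phases. The correspondence is loose and analyst-dependent.
TACTIC_TO_PHASE = {
    "Reconnaissance": "Reconnaissance",
    "Resource Development": "Weaponization",
    "Initial Access": "Delivery",
    "Execution": "Exploitation",
    "Persistence": "Installation",
    "Command and Control": "Command & Control",
    "Exfiltration": "Actions on Objectives",
    "Impact": "Actions on Objectives",
}

def phases_for_tactics(tactics):
    """Return kill chain phases implied by observed ATT&CK tactics,
    in first-seen order and without duplicates."""
    seen = []
    for tactic in tactics:
        phase = TACTIC_TO_PHASE.get(tactic)
        if phase and phase not in seen:
            seen.append(phase)
    return seen

print(phases_for_tactics(["Initial Access", "Execution", "Exfiltration"]))
# prints ['Delivery', 'Exploitation', 'Actions on Objectives']
```

Shipping even a small table like this as a supplementary file would address both the actionability and progressive-disclosure gaps noted above.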
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |
| **Total** | 10 / 11 Passed | |
Version: 888bbe4