Conduct proactive, hypothesis-driven threat hunting. Use when performing advanced hunting based on threat intelligence, TTPs, or anomalies. For Tier 3 analysts or dedicated threat hunters. Supports iterative search, pivoting, and comprehensive documentation.
Overall score: 74
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 75%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description has strong completeness with explicit 'Use when' guidance and good distinctiveness by targeting a specific analyst tier and methodology. However, it could benefit from more concrete action verbs and broader trigger term coverage to help users naturally discover this skill when they need advanced threat hunting capabilities.
Suggestions
Add more specific concrete actions like 'analyze logs for lateral movement', 'correlate indicators of compromise', 'map findings to MITRE ATT&CK framework'
Expand trigger terms to include common variations users might say: 'IOC', 'indicators of compromise', 'MITRE ATT&CK', 'adversary behavior', 'hunt for threats'
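Applying both suggestions, the skill's frontmatter description might read as follows (a hypothetical sketch; the skill name and exact wording are assumptions, not taken from the skill itself):

```yaml
---
name: threat-hunting   # assumed name for illustration
description: >-
  Conduct proactive, hypothesis-driven threat hunting: analyze logs for
  lateral movement, correlate indicators of compromise (IOCs), and map
  findings to the MITRE ATT&CK framework. Use when hunting for threats
  based on threat intelligence, TTPs, adversary behavior, or anomalies.
  For Tier 3 analysts or dedicated threat hunters. Supports iterative
  search, pivoting, and comprehensive documentation.
---
```

This keeps the existing "Use when" guidance and tier targeting while folding in the concrete action verbs and the extra trigger terms ('IOC', 'MITRE ATT&CK', 'adversary behavior', 'hunt for threats').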
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (threat hunting) and mentions some actions like 'iterative search, pivoting, and comprehensive documentation', but lacks concrete specific actions like 'analyze network logs', 'correlate IOCs', or 'map to MITRE ATT&CK framework'. | 2 / 3 |
| Completeness | Clearly answers both what ('conduct proactive, hypothesis-driven threat hunting', 'supports iterative search, pivoting, documentation') and when ('Use when performing advanced hunting based on threat intelligence, TTPs, or anomalies') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes relevant terms like 'threat hunting', 'TTPs', 'threat intelligence', and 'anomalies', but misses common user variations like 'hunt for threats', 'IOC', 'indicators of compromise', 'MITRE', 'adversary behavior', or 'suspicious activity'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Clearly distinguishes itself with specific niche targeting 'Tier 3 analysts or dedicated threat hunters' and a 'hypothesis-driven' approach, making it unlikely to conflict with general security monitoring or incident response skills. | 3 / 3 |
| **Total** | Passed | **10 / 12** |
Implementation — 64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid threat hunting skill with excellent actionability: the MCP tool calls and UDM query examples are concrete and executable. The workflow is logically structured but lacks explicit validation checkpoints before critical actions like escalation. The content could be more concise by removing explanatory text Claude already knows, and better organized through progressive disclosure that separates reference materials.
Suggestions
Add explicit validation checkpoints before Step 8 escalation (e.g., 'Verify findings with secondary data source before confirming threat')
Remove explanatory questions in Step 4 ('Does this match our hypothesis?') - Claude knows how to analyze; focus on specific validation criteria instead
Split example hunt queries and hypothesis templates into a separate HUNT_REFERENCE.md file and link to it
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some unnecessary structure, like verbose input descriptions and template sections that could be tightened. The workflow steps contain some explanatory text that Claude would already understand (e.g., 'Key questions at each iteration'). | 2 / 3 |
| Actionability | Provides concrete, executable MCP tool calls with proper syntax, specific UDM query examples that are copy-paste ready, and clear command patterns. The example hunt queries section gives real, usable query templates. | 3 / 3 |
| Workflow Clarity | Steps are clearly sequenced (1-8) with logical progression, but lack explicit validation checkpoints. The 'Hunt Loop' in Step 4 describes iteration but doesn't specify when to stop or how to validate findings before escalation. Missing verification steps before escalating to Incident Response. | 2 / 3 |
| Progressive Disclosure | Content is well-organized with clear sections and a logical flow from inputs to outputs. However, it's a monolithic document that could benefit from splitting detailed query examples or hypothesis templates into separate reference files. References to '/document-in-case' and '/generate-report' are mentioned but not linked. | 2 / 3 |
| **Total** | Passed | **9 / 12** |
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Result: 10 / 11 checks passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | 10 / 11 Passed |
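One way to clear the `frontmatter_unknown_keys` warning is to move the unrecognized keys under `metadata`, as the check suggests. A minimal sketch (the offending key names `tier` and `data-sources` are hypothetical examples, not taken from the skill):

```yaml
# Before — hypothetical unrecognized top-level keys trigger the warning
---
name: threat-hunting
description: Conduct proactive, hypothesis-driven threat hunting. ...
tier: 3
data-sources: [chronicle, edr]
---

# After — unknown keys nested under `metadata`
---
name: threat-hunting
description: Conduct proactive, hypothesis-driven threat hunting. ...
metadata:
  tier: 3
  data-sources: [chronicle, edr]
---
```

Re-running `npx tessl skill review` after the change should confirm whether the check now passes.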
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.