
hunt-threat

Conduct proactive, hypothesis-driven threat hunting. Use when performing advanced hunting based on threat intelligence, TTPs, or anomalies. For Tier 3 analysts or dedicated threat hunters. Supports iterative search, pivoting, and comprehensive documentation.

72

Quality

66%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/hunt-threat/SKILL.md

Quality

Discovery

82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a reasonably well-structured description that clearly identifies its domain and includes an explicit 'Use when' clause with relevant trigger terms for cybersecurity professionals. Its main weaknesses are the lack of highly specific concrete actions (what exactly does it do beyond 'search' and 'pivot'?) and potential overlap with adjacent cybersecurity skills. Adding more concrete deliverables and sharper boundary conditions would strengthen it.

Suggestions

Add more specific concrete actions such as 'build hunting hypotheses from MITRE ATT&CK TTPs, query log sources, correlate indicators of compromise, and generate hunt reports'

Sharpen distinctiveness by explicitly stating what this skill does NOT cover, e.g., 'Not for initial alert triage or automated detection rule creation'
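Putting both suggestions together, a sharpened frontmatter description might look like the following sketch (the field layout follows the common SKILL.md convention; the exact wording is illustrative, not the maintainer's):

```yaml
---
name: hunt-threat
description: >
  Conduct proactive, hypothesis-driven threat hunting: build hunting
  hypotheses from MITRE ATT&CK TTPs, query log sources, correlate
  indicators of compromise, and generate hunt reports. Use when
  performing advanced hunting based on threat intelligence, TTPs, or
  anomalies. For Tier 3 analysts or dedicated threat hunters. Not for
  initial alert triage or automated detection rule creation.
---
```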

Specificity — 2 / 3

The description names the domain (threat hunting) and mentions some actions like 'iterative search, pivoting, and comprehensive documentation,' but these are somewhat generic within the cybersecurity domain. It lacks concrete specific actions like 'query SIEM logs,' 'map to MITRE ATT&CK framework,' or 'generate IOC reports.'

Completeness — 3 / 3

Clearly answers both what ('conduct proactive, hypothesis-driven threat hunting, supports iterative search, pivoting, and comprehensive documentation') and when ('when performing advanced hunting based on threat intelligence, TTPs, or anomalies'). The 'Use when' clause is explicit.

Trigger Term Quality — 3 / 3

Good coverage of natural terms a threat hunter would use: 'threat hunting,' 'threat intelligence,' 'TTPs,' 'anomalies,' 'pivoting,' 'hypothesis-driven,' and 'Tier 3 analysts.' These are terms users in this domain would naturally mention.

Distinctiveness / Conflict Risk — 2 / 3

While it specifies 'threat hunting' and 'Tier 3 analysts,' it could overlap with other cybersecurity skills like incident response, threat intelligence analysis, or SIEM querying skills. The mention of TTPs and anomalies could trigger for detection engineering or alert triage skills as well.

Total: 10 / 12

Passed

Implementation

50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a competent threat hunting skill with a clear workflow structure, concrete tool references, and useful example queries. Its main weaknesses are the lack of validation checkpoints in a complex iterative process, some pseudocode placeholders instead of fully executable examples, and a monolithic structure that could benefit from splitting reference material into separate files. The content is moderately concise but includes some sections that could be tightened.

Suggestions

Add explicit validation checkpoints: verify query results are non-empty before proceeding, validate IOC format before enrichment, and confirm case documentation before escalation.

Replace pseudocode placeholders like `gti-mcp.get_..._report(identifier=LEAD)` with concrete, copy-paste-ready examples using realistic sample values.

Split the example hunt queries and hypothesis templates into a separate HUNT_REFERENCE.md file, keeping SKILL.md as a concise workflow overview with links.

Add error handling guidance for common failure modes (e.g., query timeouts, empty result sets, API rate limits) to support the iterative hunt loop.
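The error-handling and validation-checkpoint suggestions above can be sketched as a small wrapper around whatever query tool the hunt loop uses. This is a minimal sketch, assuming the query callable raises `TimeoutError` on timeout and returns a list of result rows; `execute` is a placeholder, not the skill's actual API:

```python
import time

def run_hunt_query(execute, query, max_retries=3, backoff_s=5):
    """Run a hunt query with retry/backoff and an empty-result checkpoint.

    `execute` is a hypothetical stand-in for the hunt's query tool
    (e.g. a SIEM search call). It is assumed to raise TimeoutError on
    timeout and return a list of result rows.
    """
    for attempt in range(1, max_retries + 1):
        try:
            results = execute(query)
        except TimeoutError:
            if attempt == max_retries:
                raise  # exhausted retries: surface the failure to the analyst
            time.sleep(backoff_s * attempt)  # back off before retrying
            continue
        if not results:
            # Empty result set is a valid hunt outcome, but flag it so the
            # analyst records a negative finding instead of silently pivoting.
            print(f"No results for query: {query!r} — record as negative finding")
        return results
```

The same pattern extends naturally to rate-limit responses (catch the relevant exception and back off) and to a pre-escalation checkpoint that refuses to proceed until case documentation exists.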

Conciseness — 2 / 3

The skill is moderately efficient but includes some unnecessary verbosity. The 'Key questions at each iteration' section and the detailed input examples add bulk that a Tier 3 analyst (or Claude acting as one) wouldn't need. The hypothesis templates section is somewhat redundant given the examples already provided in inputs. However, most content is relevant and not explaining basic concepts.

Actionability — 2 / 3

The skill provides concrete tool calls (gti-mcp, secops-mcp, bigquery) and example UDM queries, which is good. However, many commands use placeholder/pseudocode patterns like `gti-mcp.get_..._report(identifier=LEAD)` and `bigquery.execute-query(query="Complex analytical query")` rather than fully executable examples. The hunt loop in Step 4 is procedural guidance rather than concrete executable steps.

Workflow Clarity — 2 / 3

The 8-step workflow is clearly sequenced and the iterative hunt loop in Step 4 is well-structured. However, there are no explicit validation checkpoints — no verification that queries return valid results, no checks before escalation, and no error handling for failed queries or API calls. For a complex multi-step process involving iterative searching and potential incident escalation, the lack of validation gates is a notable gap.

Progressive Disclosure — 2 / 3

The content is well-organized with clear section headers and a logical flow from inputs through workflow to outputs. However, it's a long monolithic document (~150 lines of substantive content) that could benefit from splitting detailed query examples, hypothesis templates, and the enrichment reference into separate files. References to `/document-in-case` and `/generate-report` are mentioned but not linked to documentation.

Total: 8 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

frontmatter_unknown_keys — Warning

Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11

Passed
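One way to clear the `frontmatter_unknown_keys` warning is to nest nonstandard keys under a `metadata` block, as the warning itself suggests. A sketch (the `author` and `version` keys are hypothetical examples; the exact allowed key set depends on the skill spec):

```yaml
# Before: unknown top-level keys trigger the warning
name: hunt-threat
author: example        # unknown key
version: "1.0"         # unknown key

# After: nonstandard keys nested under metadata
name: hunt-threat
metadata:
  author: example
  version: "1.0"
```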

Repository
dandye/ai-runbooks
Reviewed

