Build an automated system to track adversary infrastructure using passive DNS, certificate transparency, WHOIS data, and IP enrichment to map and monitor threat actor command-and-control networks.
Score: 72

Quality: 66% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Risk: Risky (Do not use without reviewing)
Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/building-adversary-infrastructure-tracking-system/SKILL.md

Quality
Discovery
82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at specificity and distinctiveness, clearly naming concrete data sources and techniques in the threat intelligence domain. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill. The domain-specific terminology serves as strong natural trigger terms for the intended audience.
Suggestions

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about tracking threat actor infrastructure, C2 detection, adversary network mapping, or intelligence gathering on malicious domains and IPs.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions and data sources: passive DNS, certificate transparency, WHOIS data, IP enrichment, and the goal of mapping/monitoring threat actor C2 networks. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' (build an automated tracking system using specific data sources), but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which caps this at 2 per the rubric. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords a threat intelligence analyst would use: 'adversary infrastructure', 'passive DNS', 'certificate transparency', 'WHOIS', 'IP enrichment', 'command-and-control', 'threat actor'. These are highly domain-specific and naturally used terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | A highly distinctive niche combining adversary infrastructure tracking with specific intelligence sources (passive DNS, CT logs, WHOIS, IP enrichment). Very unlikely to conflict with other skills due to the specialized threat intelligence domain. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides genuinely useful, executable code for building an adversary infrastructure tracking system with real API integrations and graph analysis. However, it is significantly bloated with explanatory content Claude doesn't need (key concepts, generic 'when to use' section), and the workflow lacks validation checkpoints and error handling that would be critical for a system making many external API calls. The code quality is good but the surrounding content needs trimming.
Suggestions

- Remove the 'Key Concepts' section entirely—Claude already understands passive DNS, infrastructure pivoting, and adversary patterns.
- Replace the generic 'When to Use' section with specific trigger conditions (e.g., 'Given a seed domain/IP from a threat report, expand to map the full C2 infrastructure').
- Add explicit validation steps between workflow stages: verify API responses, handle rate limits, validate graph integrity before analysis, and check for false positives in reverse IP lookups.
- Consider splitting the three main classes into referenced files (e.g., tracker.py, graph.py, monitor.py), with only usage examples inline in the SKILL.md.
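The response-validation and false-positive checks suggested above can be sketched as a pure helper that runs between the discovery and graph-building stages. This is a minimal sketch, not the skill's actual code: the field names `rrname`, `rdata`, and `domains_on_ip`, and the threshold of 500 domains per IP, are assumptions about a passive-DNS provider's schema.

```python
def validate_pdns_records(records, max_shared_ips=500):
    """Filter passive-DNS records before they enter the infrastructure graph.

    Drops malformed records and records whose IP hosts an implausibly large
    number of domains (shared hosting / CDN), which make weak pivot points
    and are a common source of false positives in reverse IP lookups.
    Field names are hypothetical; adapt them to your provider's responses.
    """
    valid, dropped = [], []
    for rec in records:
        if not rec.get("rrname") or not rec.get("rdata"):
            dropped.append(rec)  # malformed: missing domain or answer data
        elif rec.get("domains_on_ip", 0) > max_shared_ips:
            dropped.append(rec)  # shared hosting / CDN: weak pivot, likely noise
        else:
            valid.append(rec)
    return valid, dropped
```

Keeping this step pure (no network calls) makes it easy to unit-test and to tune the shared-hosting threshold without re-querying the API.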
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is excessively verbose. The 'Key Concepts' section explains passive DNS, infrastructure pivoting, and adversary patterns—concepts Claude already knows well. The 'When to Use' section is generic boilerplate. The Overview paragraph restates what the title already conveys. Much of this could be cut to focus on the actual implementation code. | 1 / 3 |
| Actionability | The code is concrete, executable, and copy-paste ready, with real API endpoints, proper request handling, and complete class implementations. It covers the full pipeline from passive DNS lookup through graph building to monitoring, with specific library usage and API calls. | 3 / 3 |
| Workflow Clarity | The three steps are logically sequenced (discover → graph → monitor), but there are no validation checkpoints between steps and no error handling for API failures, rate limiting, or bad data. The 'Validation Criteria' section is a checklist of expected outcomes rather than actionable verification steps integrated into the workflow. Feedback loops are missing for a system that involves external API calls and data quality issues. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic document with all code inline. The references section links to external resources, but the graph analysis, monitoring, and API configuration are not split into separate files. For a skill this long (roughly 250+ lines of code), the detailed implementations could be referenced rather than fully inlined. | 2 / 3 |
| Total | | 8 / 12 Passed |
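As a sketch of the validation checkpoint the Workflow Clarity row calls for, the following checks an infrastructure graph before pivot analysis. It assumes the tracker builds a networkx graph whose nodes carry a 'type' attribute and whose edges record a 'source' attribute; both names are hypothetical, not taken from the skill itself.

```python
import networkx as nx

def check_graph_integrity(graph):
    """Return a list of problems found in the infrastructure graph.

    Run this between the graph-building and analysis stages so that bad
    API data surfaces before it skews pivot analysis. The 'type' and
    'source' attribute names are assumptions about the tracker's schema.
    """
    problems = []
    for node, attrs in graph.nodes(data=True):
        if attrs.get("type") not in ("domain", "ip"):
            problems.append(f"node {node!r} has a missing or invalid 'type'")
    for u, v, attrs in graph.edges(data=True):
        if "source" not in attrs:
            problems.append(f"edge {u!r} -> {v!r} records no data source")
    isolated = list(nx.isolates(graph))
    if isolated:
        problems.append(f"{len(isolated)} isolated node(s), e.g. {isolated[:5]}")
    return problems
```

An empty return value means the graph is safe to analyze; anything else should halt the pipeline rather than feed doubtful data into monitoring.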
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |