
osint-recon

Gather and correlate open source intelligence from public sources for authorized investigations, threat intelligence, and attack surface assessment. Use when the user mentions 'OSINT,' 'open source intelligence,' 'digital footprint,' 'public records,' 'threat intelligence,' 'investigate a domain,' or needs to research a target using publicly available data.

Overall score: 60

Quality: 70%
Does it follow best practices?

Impact: No eval scenarios have been run

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/osint-recon/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description with excellent trigger term coverage and a clear 'Use when' clause that makes it highly selectable. The main weakness is that the capability description could be more specific — listing concrete actions like 'enumerate domains, harvest emails, map social media accounts' rather than higher-level phrases like 'gather and correlate.' Overall, it performs well across all dimensions.

Suggestions

Replace or supplement the high-level 'gather and correlate' phrasing with specific concrete actions such as 'enumerate subdomains, harvest email addresses, analyze WHOIS records, map social media profiles' to improve specificity.

Specificity (2 / 3)
The description names the domain (OSINT) and some actions ('gather and correlate,' 'investigations,' 'threat intelligence,' 'attack surface assessment'), but the actions are high-level rather than concrete operations like 'enumerate subdomains, harvest email addresses, map social media profiles.'

Completeness (3 / 3)
Clearly answers both 'what' (gather and correlate open source intelligence for investigations, threat intelligence, and attack surface assessment) and 'when' (an explicit 'Use when...' clause with multiple trigger scenarios).

Trigger Term Quality (3 / 3)
Excellent coverage of natural trigger terms: 'OSINT,' 'open source intelligence,' 'digital footprint,' 'public records,' 'threat intelligence,' 'investigate a domain,' and 'publicly available data' are all terms users would naturally use when requesting this type of work.

Distinctiveness / Conflict Risk (3 / 3)
OSINT is a well-defined niche with distinct terminology. Trigger terms like 'OSINT,' 'digital footprint,' 'attack surface assessment,' and 'investigate a domain' are specific enough to avoid conflicts with general security or research skills.

Total: 11 / 12 (Passed)

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a competent OSINT skill that covers the domain broadly and includes important ethical guardrails and a useful output template. However, it leans toward being a reference catalog of sources rather than a tightly actionable workflow — many sections list what to check without providing executable commands or concrete examples. The workflow would benefit from explicit sequencing, validation checkpoints between collection and analysis, and splitting detailed technique references into separate files.

Suggestions

Add executable commands or concrete examples to the Organization OSINT, Email/Username OSINT, and Threat Intelligence sections instead of just listing source names (e.g., actual API calls to VirusTotal, specific searchsploit commands).

Introduce explicit workflow sequencing with numbered steps and validation checkpoints, e.g., 'Step 1: Passive DNS collection → Step 2: Validate findings against a second source → Step 3: Correlate across domains → Step 4: Draft report'.

Split detailed technique references (Google dorking syntax, tool-specific commands, source catalogs) into separate bundle files and reference them from the main SKILL.md to improve progressive disclosure.

Tighten the Organization OSINT and Analysis sections by removing guidance Claude can infer (e.g., 'cross-reference findings across multiple sources') and replacing with specific correlation techniques or tool commands.
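As a sketch of what the suggested sequencing and concrete commands could look like in practice, the snippet below pulls candidate subdomains from crt.sh's public certificate-transparency endpoint and validates each against live DNS before promoting it to analysis. The target domain, timeout, and the `name_value` field layout of crt.sh's JSON output are assumptions for illustration, not details taken from the reviewed skill.

```python
import json
import socket
import urllib.request

def extract_names(crtsh_json: str) -> set:
    """Pull unique DNS names out of crt.sh JSON output (assumed schema:
    a list of entries each carrying a newline-separated 'name_value')."""
    names = set()
    for entry in json.loads(crtsh_json):
        for name in entry.get("name_value", "").splitlines():
            cleaned = name.strip().lstrip("*.").lower()
            if cleaned:
                names.add(cleaned)
    return names

def resolves(hostname: str) -> bool:
    """Validation checkpoint: confirm a candidate against a second
    source (live DNS) before it enters the analysis phase."""
    try:
        socket.gethostbyname(hostname)
        return True
    except OSError:
        return False

def passive_subdomain_sweep(target: str) -> list:
    """Run the sequenced workflow against an authorized target."""
    # Step 1: passive collection from certificate-transparency logs
    url = f"https://crt.sh/?q=%25.{target}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        raw = resp.read().decode()
    # Step 2: extract candidate subdomains
    candidates = extract_names(raw)
    # Step 3: validate each candidate against a second source
    validated = sorted(n for n in candidates if resolves(n))
    # Step 4: hand only validated findings to correlation/reporting
    return validated
```

Only validated names leave the collection phase, which is the kind of explicit checkpoint the review asks for between collection and analysis.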

Conciseness (2 / 3)
Generally efficient but includes some unnecessary framing (e.g., the Organization OSINT bullet list is mostly things Claude already knows how to look up, and the Ethics Check section, while important, is somewhat verbose). The document could be tightened in several places, but it doesn't egregiously over-explain.

Actionability (2 / 3)
Provides some executable commands (whois, dig, crt.sh curl, exiftool, Google dorks), but many sections are just bullet-point lists of sources without concrete commands or code. The Email/Username OSINT and Organization OSINT sections describe rather than instruct, and the Threat Intelligence section is entirely source-listing without actionable steps.

Workflow Clarity (2 / 3)
The overall flow (ethics check → collection → analysis → output) is logical, and the output format template is a strong checkpoint. However, there is no explicit sequencing of the collection phase, no validation steps between stages, and no feedback loop for verifying findings before moving to analysis. For an investigation workflow involving potentially sensitive data aggregation, the lack of explicit verification checkpoints is a gap.

Progressive Disclosure (2 / 3)
The content is reasonably well structured with clear section headers, but it is monolithic: the full output template, all collection techniques, and the analysis guidance are inline. With no bundle files, there is no offloading of detailed reference material (e.g., a separate Google dorking cheatsheet or tool reference). The References section at the end lists external resources but doesn't integrate them as navigable extensions.

Total: 8 / 12 (Passed)
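To illustrate closing the Actionability gap called out above for the Threat Intelligence section (source names without commands), here is a minimal sketch of a domain lookup against VirusTotal's v3 REST API. The endpoint path, `x-apikey` header, and `last_analysis_stats` field reflect my understanding of the public v3 API; verify them against current VirusTotal documentation before relying on them, and treat the summarizer as illustrative.

```python
import json
import urllib.request

# Assumed VirusTotal v3 domain-report endpoint
VT_DOMAIN_URL = "https://www.virustotal.com/api/v3/domains/{}"

def vt_domain_report(domain: str, api_key: str) -> dict:
    """Fetch a domain report; v3 authenticates via the x-apikey header."""
    req = urllib.request.Request(
        VT_DOMAIN_URL.format(domain),
        headers={"x-apikey": api_key},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode())

def summarize(report: dict) -> str:
    """Reduce a report to the detection tallies an analyst checks first."""
    stats = report["data"]["attributes"]["last_analysis_stats"]
    return "malicious={} suspicious={}".format(
        stats.get("malicious", 0), stats.get("suspicious", 0)
    )

# Usage (needs a real key, e.g. from an environment variable):
#   summarize(vt_domain_report("example.com", api_key))
```

A skill section built around calls like this gives the agent something to execute rather than a list of sources to remember.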

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

allowed_tools_field (Warning)
'allowed-tools' contains unusual tool name(s)

Total: 10 / 11 (Passed)

Repository: briiirussell/cybersecurity-skills (Reviewed)

