URLScan.io is a free service for scanning and analyzing suspicious URLs. It captures screenshots, DOM content, HTTP transactions, JavaScript behavior, and network connections of web pages in an isolat
Overall: 42% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Risk: Risky (Do not use without reviewing)
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./skills/analyzing-malicious-url-with-urlscan/SKILL.md
```

Quality
Discovery: 50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description starts strong with specific capabilities of URLScan.io but is critically truncated, cutting off mid-sentence. It lacks a 'Use when...' clause entirely, which significantly hurts completeness. The specificity of the tool name and security-focused actions provide good distinctiveness, but the incomplete text undermines its effectiveness for skill selection.
Suggestions
Complete the truncated description - it cuts off at 'in an isolat' which appears to be 'in an isolated environment'.
Add an explicit 'Use when...' clause with trigger terms like 'suspicious URL', 'phishing link', 'malicious website', 'URL safety check', 'analyze a link'.
Include common user-facing variations such as 'check if a URL is safe', 'scan a link', 'website security analysis' to improve trigger term coverage.
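As a sketch of what the completed frontmatter description might look like (hypothetical wording, incorporating the "isolated environment" ending the review infers and the suggested trigger terms):

```yaml
description: >
  Scan and analyze suspicious URLs with URLScan.io: captures screenshots,
  DOM content, HTTP transactions, JavaScript behavior, and network
  connections in an isolated environment. Use when asked to check if a URL
  is safe, scan a link, investigate a phishing or malicious website, or
  perform a URL safety check.
```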
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: scanning URLs, capturing screenshots, DOM content, HTTP transactions, JavaScript behavior, and network connections. These are detailed, concrete capabilities. | 3 / 3 |
| Completeness | The description covers 'what' (scanning and analyzing URLs with various captures) but is truncated and completely lacks a 'when' clause or explicit trigger guidance. Per rubric, a missing 'Use when...' clause caps completeness at 2, and the truncation makes it even weaker. | 1 / 3 |
| Trigger Term Quality | Includes some natural keywords like 'suspicious URLs', 'scanning', 'screenshots', 'HTTP transactions', but the description appears truncated and is missing common user-facing terms like 'malicious URL', 'phishing', 'website safety check', or 'URL analysis'. | 2 / 3 |
| Distinctiveness / Conflict Risk | URLScan.io is a very specific tool/service with a clear niche in URL security analysis. The mention of the specific service name and security-focused capabilities makes it highly distinguishable from other skills. | 3 / 3 |
| Total | | 9 / 12 Passed |
Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a general knowledge article about URLScan.io than an actionable skill for Claude. It spends too many tokens on concepts Claude already knows (phishing indicators, what URLScan does) while lacking the executable code and validation steps that would make it truly useful. The workflow is a reasonable outline but needs concrete implementation and error handling.
Suggestions
Remove the 'Key Concepts' and 'Phishing URL Red Flags' sections entirely—Claude already knows these—and replace with a concise, executable Python script that submits a URL, polls for results, and extracts IOCs.
Add explicit validation checkpoints: check scan status before proceeding to analysis (e.g., poll the result UUID endpoint until status is 'done'), and verify IOC extraction produced non-empty results.
Replace the generic 'When to Use' boilerplate with a single-line trigger condition, and provide a complete working Python example rather than referencing an undefined 'scripts/process.py'.
Either inline the process.py script content or provide a clear file reference with expected location and usage syntax.
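The workflow the suggestions call for (submit a URL, poll until the scan is done, extract IOCs) can be sketched roughly as below. This is a minimal illustration assuming URLScan.io's documented v1 endpoints (`/api/v1/scan/` and `/api/v1/result/<uuid>/`, where a 404 means the result is not ready yet); the exact IOC field names under `lists` should be verified against a real result payload.

```python
import json
import time
import urllib.error
import urllib.request

API_BASE = "https://urlscan.io/api/v1"


def submit_scan(url, api_key, visibility="public"):
    """Submit a URL for scanning; returns the scan UUID."""
    req = urllib.request.Request(
        f"{API_BASE}/scan/",
        data=json.dumps({"url": url, "visibility": visibility}).encode(),
        headers={"API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["uuid"]


def poll_result(uuid, timeout=120, interval=5):
    """Poll the result endpoint until the scan finishes (404 = not ready)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(
                f"{API_BASE}/result/{uuid}/", timeout=30
            ) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code != 404:  # anything other than "not ready" is a real error
                raise
        time.sleep(interval)
    raise TimeoutError(f"Scan {uuid} did not complete within {timeout}s")


def extract_iocs(result):
    """Pull basic IOCs from a result payload; empty lists if keys are absent."""
    lists = result.get("lists", {})
    return {
        "domains": lists.get("domains", []),
        "ips": lists.get("ips", []),
        "urls": lists.get("urls", []),
        "hashes": lists.get("hashes", []),
    }
```

Note the validation checkpoints the review asks for: `poll_result` verifies completion before analysis proceeds, and a caller can check that `extract_iocs` returned non-empty lists before reporting findings.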
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Significant verbosity explaining concepts Claude already knows (what URLScan.io is, what phishing red flags look like, HTTP protocols, etc.). The 'Key Concepts' section is largely general knowledge that doesn't need to be spelled out. The 'When to Use' section is generic boilerplate that adds no value. | 1 / 3 |
| Actionability | The API submission example provides a concrete endpoint and payload, but there's no executable Python code despite listing Python as a prerequisite. Step 4 references 'scripts/process.py' without showing its contents or usage. The workflow steps are mostly descriptive checklists rather than executable commands. | 2 / 3 |
| Workflow Clarity | Steps are listed in a logical sequence, but there are no validation checkpoints or feedback loops. No guidance on what to do if the scan fails, times out, or returns unexpected results. No explicit verification that the scan completed before proceeding to analysis. | 2 / 3 |
| Progressive Disclosure | Content is somewhat structured with headers, but everything is inline in one file, including lengthy concept explanations that could be separated. References to 'scripts/process.py' are mentioned but not linked or explained. The Tools & Resources section is a flat list of external URLs without clear navigation to deeper skill-specific materials. | 2 / 3 |
| Total | | 7 / 12 Passed |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
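The single warning can typically be cleared by nesting non-spec keys under `metadata`, as the message suggests. A hypothetical before/after (the actual offending key names would need to be read from the skill's frontmatter):

```yaml
# Before: unrecognized top-level key
name: analyzing-malicious-url-with-urlscan
author: example-maintainer

# After: unknown key moved under metadata
name: analyzing-malicious-url-with-urlscan
metadata:
  author: example-maintainer
```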