Builds an automated malware submission and analysis pipeline that collects suspicious files from endpoints and email gateways, submits them to sandbox environments and multi-engine scanners, and generates verdicts with IOCs for SIEM integration. Use when SOC teams need to scale malware analysis beyond manual sandbox submissions for high-volume alert triage.
Overall score: 76
Quality: 71% (does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Advisory: Suggest reviewing before use
Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/building-automated-malware-submission-pipeline/SKILL.md

Quality
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly articulates specific capabilities (file collection, sandbox submission, verdict generation with IOCs), includes relevant domain terminology that security professionals would naturally use, and provides an explicit 'Use when' clause targeting a well-defined use case. The description is concise yet comprehensive, with minimal fluff and clear distinctiveness from other potential skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: collects suspicious files from endpoints and email gateways, submits to sandbox environments and multi-engine scanners, generates verdicts with IOCs for SIEM integration. These are detailed, concrete capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (builds automated malware submission and analysis pipeline with specific capabilities) and 'when' (explicit 'Use when SOC teams need to scale malware analysis beyond manual sandbox submissions for high-volume alert triage'). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords that SOC analysts would use: 'malware', 'sandbox', 'IOCs', 'SIEM', 'alert triage', 'malware analysis', 'endpoints', 'email gateways', 'multi-engine scanners'. Good coverage of domain-specific terms users would naturally mention. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly specific niche combining malware analysis pipeline, sandbox submission, IOC generation, and SIEM integration. This is unlikely to conflict with other skills due to its very targeted security operations focus. | 3 / 3 |
| Total | 12 / 12 Passed | |
Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill excels at actionability with comprehensive, executable Python code covering the full malware analysis pipeline. However, it is severely over-engineered for a SKILL.md file — the massive inline code blocks, redundant glossary of well-known security concepts, and tool descriptions make it extremely token-inefficient. It also lacks validation checkpoints for destructive operations like IOC blocking and has no progressive disclosure structure.
Suggestions

- Move the large class implementations into separate referenced files (e.g., collector.py, screener.py, submitter.py) and keep SKILL.md as a concise overview with only the orchestration function and key patterns.
- Remove the 'Key Concepts' glossary table and 'Tools & Systems' descriptions — Claude already knows what dynamic analysis, sandboxes, and VirusTotal are.
- Add explicit validation checkpoints: verify sandbox submission success before polling, validate IOC format before pushing to blocklists, and add error handling/retry logic in the orchestration function.
- Remove the 'Common Scenarios' section, which largely restates the skill description, and trim the 'When to Use' section to 1-2 lines.
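The checkpoint-and-retry suggestion above can be sketched as follows. This is a minimal illustration only: `with_retries`, `submit_and_poll`, and the `task_id` response field are hypothetical names, not APIs from the reviewed skill's code.

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry a call that may fail transiently, with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except (ConnectionError, TimeoutError):
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure instead of hiding it
            sleep(base_delay * (2 ** attempt))  # back off: 1s, 2s, 4s, ...

def submit_and_poll(submit, poll):
    """Checkpoint: confirm the sandbox accepted the sample before polling."""
    resp = with_retries(submit)
    task_id = resp.get("task_id")  # hypothetical submission-response field
    if not task_id:
        # Validation checkpoint: never poll for a verdict on a rejected sample.
        raise RuntimeError(f"sandbox rejected submission: {resp!r}")
    return with_retries(lambda: poll(task_id))
```

The point is the shape, not the names: every network call is wrapped in bounded retries, and the pipeline halts at an explicit checkpoint instead of polling on a failed submission.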
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~350+ lines. It includes a glossary of terms Claude already knows (Dynamic Analysis, Static Analysis, IOC Extraction, etc.), tool descriptions that are common knowledge, and a 'Common Scenarios' section that restates the skill description. The code examples, while detailed, are excessively long with inline comments explaining obvious operations. | 1 / 3 |
| Actionability | The skill provides fully executable Python code for every step of the pipeline — file collection, hash lookups, sandbox submission, verdict generation, and SIEM integration. Code is copy-paste ready with real API endpoints, proper class structures, and concrete examples for CrowdStrike, Cuckoo, Joe Sandbox, VirusTotal, and Splunk. | 3 / 3 |
| Workflow Clarity | The 6-step workflow is clearly sequenced and the orchestration function ties steps together logically. However, there are no explicit validation checkpoints — no error handling for failed API calls, no verification that sandbox submissions succeeded before proceeding, no retry logic for transient failures, and no validation before pushing IOCs to blocklists (a destructive/blocking operation). | 2 / 3 |
| Progressive Disclosure | The entire skill is a monolithic wall of content with no references to external files. Hundreds of lines of implementation code are inline when they could be split into separate reference files (e.g., collector.py, screener.py). The glossary, tools list, and common scenarios sections add bulk without being referenced from a concise overview. | 1 / 3 |
| Total | 7 / 12 Passed | |
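As an illustration of the pre-push validation gap called out under Workflow Clarity, the sketch below screens indicator formats before any blocklist action. It is a hedged example, not the skill's code: the function names and the two indicator kinds ('hash', 'ip') are assumptions.

```python
import ipaddress
import re

# Accept MD5 (32), SHA-1 (40), or SHA-256 (64) lowercase hex digests.
_HASH_RE = re.compile(r"^(?:[0-9a-f]{32}|[0-9a-f]{40}|[0-9a-f]{64})$")

def valid_ioc(kind, value):
    """Check an indicator's format before it can reach a blocklist push."""
    value = value.strip().lower()
    if kind == "hash":
        return bool(_HASH_RE.match(value))
    if kind == "ip":
        try:
            ipaddress.ip_address(value)  # handles both IPv4 and IPv6
            return True
        except ValueError:
            return False
    return False  # unknown indicator kinds are rejected, not guessed at

def screen_iocs(iocs):
    """Split (kind, value) candidates into pushable and rejected lists."""
    ok, bad = [], []
    for kind, value in iocs:
        (ok if valid_ioc(kind, value) else bad).append((kind, value))
    return ok, bad
```

Because pushing an IOC to a blocklist is a blocking action, malformed entries are quarantined in the rejected list for analyst review rather than silently dropped or pushed.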
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |