Builds an automated malware submission and analysis pipeline that collects suspicious files from endpoints and email gateways, submits them to sandbox environments and multi-engine scanners, and generates verdicts with IOCs for SIEM integration. Use when SOC teams need to scale malware analysis beyond manual sandbox submissions for high-volume alert triage.
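The collect → scan → verdict flow described above can be sketched as a thin orchestration layer. This is a minimal illustration, not the skill's actual code: the `Verdict` shape and the scanner callable interface are assumptions; real sandboxes and multi-engine scanners would plug in behind the same interface.

```python
import hashlib
from dataclasses import dataclass, field


@dataclass
class Verdict:
    """Outcome of one sample's trip through the pipeline (illustrative shape)."""
    sha256: str
    malicious: bool
    iocs: list = field(default_factory=list)


def analyze(sample: bytes, scanners) -> Verdict:
    """Run a sample through each scanner and merge the results.

    `scanners` is any iterable of callables returning
    (is_malicious: bool, iocs: list). Sandboxes, AV engines,
    and multi-engine services all fit behind this interface.
    """
    malicious, iocs = False, []
    for scan in scanners:
        hit, found = scan(sample)
        malicious = malicious or hit
        iocs.extend(found)
    return Verdict(hashlib.sha256(sample).hexdigest(), malicious, iocs)


def eicar_like(sample: bytes):
    """Stub scanner standing in for a real sandbox or engine."""
    hit = b"EICAR" in sample
    return hit, (["rule:eicar"] if hit else [])


v = analyze(b"X5O!...EICAR-STANDARD-ANTIVIRUS-TEST...", [eicar_like])
```

The merged `Verdict` is what would be serialized for SIEM ingestion in the final step.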
Score: 76

Quality: 71% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Risky: Do not use without reviewing

Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/building-automated-malware-submission-pipeline/SKILL.md`

## Quality
### Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly articulates specific capabilities (file collection, sandbox submission, verdict generation with IOCs), includes rich domain-specific trigger terms, and provides an explicit 'Use when' clause targeting a well-defined audience and use case. The description is concise yet comprehensive, and its narrow security operations focus makes it highly distinctive.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: collects suspicious files from endpoints and email gateways, submits them to sandbox environments and multi-engine scanners, and generates verdicts with IOCs for SIEM integration. These are detailed, concrete capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (builds an automated malware submission and analysis pipeline with specific capabilities) and 'when' (explicit 'Use when SOC teams need to scale malware analysis beyond manual sandbox submissions for high-volume alert triage'). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords that SOC analysts would use: 'malware', 'sandbox', 'IOCs', 'SIEM', 'alert triage', 'malware analysis', 'endpoints', 'email gateways', 'multi-engine scanners'. Good coverage of domain-specific terms users would naturally mention. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly specific niche combining malware analysis pipeline, sandbox submission, IOC generation, and SIEM integration. Very unlikely to conflict with other skills, given the narrow, well-defined security operations domain. | 3 / 3 |
| **Total** | | **12 / 12 Passed** |
### Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill excels at actionability with comprehensive, executable Python code covering the full malware analysis pipeline from collection through SIEM integration. However, it is severely bloated with unnecessary explanatory content (glossary, tool descriptions, common scenarios) that Claude already knows, and lacks validation checkpoints critical for a pipeline that pushes blocking rules to firewalls. The monolithic structure with no progressive disclosure makes it token-inefficient.
#### Suggestions

- Remove the Key Concepts table, Tools & Systems descriptions, and Common Scenarios sections: Claude already knows these concepts. This alone would cut ~40 lines of unnecessary content.
- Add explicit validation checkpoints: verify API responses before proceeding (check HTTP status codes), validate sandbox report completeness before IOC extraction, and add a confirmation/review step before pushing IOCs to firewall blocklists.
- Split into multiple files: keep SKILL.md as a concise overview with the orchestration function and workflow summary, then reference separate files such as COLLECTORS.md, SANDBOX.md, and SIEM_INTEGRATION.md for the detailed class implementations.
- Add error handling patterns: at minimum, show try/except blocks around API calls and what to do when a sandbox analysis fails or times out, since the current code silently returns error dicts with no recovery guidance.
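The validation checkpoints suggested above can be expressed as small guard functions that fail loudly instead of silently returning error dicts. This is a sketch under stated assumptions: the required report fields and the confidence threshold are illustrative, not from the skill.

```python
class PipelineError(RuntimeError):
    """Raised at any checkpoint so failures surface instead of propagating silently."""


def ensure_http_ok(status_code: int, context: str) -> None:
    # Checkpoint 1: never parse a response body from a failed API call.
    if not 200 <= status_code < 300:
        raise PipelineError(f"{context}: HTTP {status_code}")


# Illustrative required fields; a real check would follow the sandbox's report schema.
REQUIRED_REPORT_KEYS = {"score", "signatures", "network"}


def ensure_report_complete(report: dict) -> None:
    # Checkpoint 2: an incomplete report (timed-out analysis, crashed VM)
    # must not feed IOC extraction.
    missing = REQUIRED_REPORT_KEYS - report.keys()
    if missing:
        raise PipelineError(f"sandbox report missing: {sorted(missing)}")


def blocklist_candidates(iocs: list, min_confidence: float = 0.9) -> list:
    # Checkpoint 3: pushing to a firewall is a blocking, destructive operation.
    # Only high-confidence IOCs pass, and the caller should still queue them
    # for analyst review rather than block directly.
    return [i for i in iocs if i.get("confidence", 0.0) >= min_confidence]
```

Wrapping each stage of the orchestration function with these checks gives the recovery guidance the review found missing: a failed call raises with context instead of flowing downstream.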
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~400+ lines. It includes a glossary of terms Claude already knows (Dynamic Analysis, Static Analysis, IOC Extraction, etc.), lengthy tool descriptions, and common scenarios that read as marketing copy. The code examples, while useful, are padded with extensive boilerplate that could be significantly condensed. | 1 / 3 |
| Actionability | Provides fully executable Python code with concrete API calls, specific endpoint URLs, proper class structures, and real library usage (vt-py, requests, pefile). The code is copy-paste ready, with specific parameters for Cuckoo, Joe Sandbox, VirusTotal, and Splunk HEC integration. | 3 / 3 |
| Workflow Clarity | The 6-step workflow is clearly sequenced, and the orchestration function ties the steps together logically. However, there are no validation checkpoints: no error handling for failed API calls, no verification that sandbox submissions succeeded before proceeding, no checks on IOC quality before pushing to blocklists. For a pipeline that pushes IOCs to firewalls (a destructive, blocking operation), the missing validation caps this at 2. | 2 / 3 |
| Progressive Disclosure | The entire skill is a monolithic wall of code and text with no references to external files. The Key Concepts table, Tools & Systems descriptions, and Common Scenarios sections could all be separate reference files. Everything is inline, making the skill extremely long and difficult to navigate quickly. | 1 / 3 |
| **Total** | | **7 / 12 Passed** |
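For the Splunk HEC integration credited under Actionability, the event shape can be sketched independently of transport. The `sourcetype` and verdict fields below are illustrative assumptions; HEC itself does expect a JSON body with an `event` field, POSTed to `/services/collector/event` with an `Authorization: Splunk <token>` header.

```python
import json
import time


def hec_payload(verdict: dict, sourcetype: str = "malware:verdict") -> str:
    """Format a pipeline verdict as a Splunk HTTP Event Collector event.

    The returned JSON string is the request body; the actual POST
    (endpoint URL, token header, TLS settings) is left to the caller.
    """
    return json.dumps({
        "time": time.time(),        # event time, epoch seconds
        "sourcetype": sourcetype,   # illustrative sourcetype name
        "event": verdict,           # the verdict dict becomes the event payload
    })


payload = hec_payload({"sha256": "ab" * 32, "verdict": "malicious"})
```

Keeping payload construction separate from the HTTP call is also what makes the checkpointing suggested above testable without a live Splunk instance.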
### Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 passed.
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them under `metadata` | Warning |
| **Total** | **10 / 11 Passed** | |
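The `frontmatter_unknown_keys` warning can be reproduced locally with a few lines. The allowed-key set below is an assumption for illustration; check the actual skill spec for the authoritative list.

```python
# Assumed allowed keys; the real set comes from the skill spec.
ALLOWED_KEYS = {"name", "description", "metadata"}


def unknown_frontmatter_keys(frontmatter: dict) -> list:
    """Return frontmatter keys the spec does not define, sorted.

    Per the warning above, such keys should be removed or
    nested under `metadata`.
    """
    return sorted(k for k in frontmatter if k not in ALLOWED_KEYS)


warnings = unknown_frontmatter_keys({"name": "x", "author": "me", "tags": ["a"]})
```

Running a check like this before submission turns the reviewer's warning into a pre-commit fix.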
c15f73d