Campaign attribution analysis involves systematically evaluating evidence to determine which threat actor or group is responsible for a cyber operation. This skill covers collecting and weighting attr
Quality: 33%. Does it follow best practices?
Impact: Pending (no eval scenarios have been run).
Advisory: suggest reviewing before use.
To optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/analyzing-campaign-attribution-evidence/SKILL.md`
## Discovery (32%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is truncated mid-sentence, which severely undermines its completeness and usefulness for skill selection. While it establishes a clear domain (cyber threat attribution), it fails to list specific concrete actions, lacks a 'Use when...' clause, and cuts off before providing sufficient detail for Claude to distinguish it from other cybersecurity-related skills.
### Suggestions

- Complete the truncated description to fully enumerate specific capabilities (e.g., 'collecting IOCs, mapping TTPs to MITRE ATT&CK, comparing infrastructure overlaps, assessing confidence levels').
- Add an explicit 'Use when...' clause with natural trigger terms such as 'attribution', 'threat actor identification', 'APT', 'who is behind this attack', 'campaign analysis', or 'threat intelligence'.
- Ensure the description is not cut off and includes enough detail to clearly distinguish this skill from general cybersecurity or threat intelligence skills.
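A hedged sketch of what a completed frontmatter description could look like once the suggestions above are applied (the field layout assumes standard SKILL.md frontmatter; the wording is illustrative, not the skill's actual text):

```yaml
---
name: analyzing-campaign-attribution-evidence
description: >
  Systematically evaluate evidence to attribute a cyber operation to a
  threat actor: collect and weight IOCs, map TTPs to MITRE ATT&CK,
  compare infrastructure overlaps, and assess confidence levels.
  Use when asked about attribution, threat actor identification, APT
  campaigns, "who is behind this attack", or campaign analysis.
---
```

The folded scalar (`>`) keeps the description readable in the file while serializing to a single paragraph for skill selection.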
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (campaign attribution analysis, cyber operations) and describes the general action (evaluating evidence to determine threat actors), but it appears truncated and does not list multiple specific concrete actions. | 2 / 3 |
| Completeness | The description is truncated mid-sentence, so it only partially answers 'what does this do' and completely lacks a 'Use when...' clause or any explicit trigger guidance. Per the rubric, a missing 'Use when' clause caps completeness at 2, and the truncation makes even the 'what' incomplete. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'attribution', 'threat actor', 'cyber operation', but the description is truncated so it likely misses common variations users might say such as 'APT', 'IOC', 'threat intelligence', or 'who is behind this attack'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on campaign attribution and threat actor identification is somewhat specific to the cyber threat intelligence domain, but the truncation and lack of explicit triggers means it could overlap with broader threat intelligence or cybersecurity analysis skills. | 2 / 3 |
| Total | | 7 / 12 Passed |
## Implementation (35%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides a reasonable structural framework for campaign attribution analysis but suffers from verbosity, explaining concepts Claude already knows (Diamond Model, ACH, confidence levels). The code is functional but lacks a concrete end-to-end usage example and validation/iteration steps critical for an analytical workflow where false flags and ambiguity are core concerns.
### Suggestions

- Remove the 'Key Concepts' section entirely or reduce it to a brief bullet list of the six evidence categories; Claude already understands attribution analysis, ACH, and confidence levels.
- Add a concrete end-to-end usage example showing the full workflow with sample data: creating an analyzer, adding real evidence items, evaluating against hypotheses, and interpreting the output.
- Add explicit validation/feedback loop steps: e.g., after ranking hypotheses, check if the top candidate has any inconsistent evidence that needs investigation, and iterate if confidence is below threshold.
- Split the detailed code implementations into a separate reference file and keep SKILL.md as a concise overview with quick-start example and links to the detailed code.
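A minimal sketch of the kind of end-to-end example with a validation checkpoint the suggestions ask for. All names here (`Evidence`, `score_hypotheses`, the actors, weights, and threshold) are hypothetical illustrations, not the skill's actual API:

```python
from dataclasses import dataclass

# Hypothetical evidence record; the skill's real classes may differ.
@dataclass
class Evidence:
    category: str     # e.g. "infrastructure", "ttp", "malware"
    description: str
    weight: float     # analyst-assigned evidentiary weight

def score_hypotheses(evidence, consistency):
    """ACH-style scoring: for each candidate actor, add the weight of
    each consistent evidence item and subtract the weight of each
    inconsistent one, then rank actors by total score."""
    scores = {}
    for actor, verdicts in consistency.items():
        scores[actor] = sum(
            e.weight if verdicts[i] else -e.weight
            for i, e in enumerate(evidence)
        )
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Sample data (illustrative, not real threat intelligence)
evidence = [
    Evidence("infrastructure", "C2 IP overlap with prior campaign", 0.6),
    Evidence("ttp", "Spearphishing matches ATT&CK T1566 pattern", 0.4),
    Evidence("malware", "Shared packer artifact", 0.3),
]
# consistency[actor][i]: is evidence item i consistent with that actor?
consistency = {
    "APT-A": [True, True, False],
    "APT-B": [False, True, True],
}

ranked = score_hypotheses(evidence, consistency)
top_actor, top_score = ranked[0]

# Validation checkpoint: iterate rather than report when confidence is low
CONFIDENCE_THRESHOLD = 0.5
if top_score < CONFIDENCE_THRESHOLD:
    print(f"Low confidence in {top_actor}; collect more evidence and re-run")
else:
    print(f"Top hypothesis: {top_actor} (score {top_score:.1f})")
```

The final branch is the feedback loop the review calls for: a below-threshold top score sends the analyst back to evidence collection instead of producing an attribution.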
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is verbose with significant unnecessary content. The 'Key Concepts' section explains attribution categories, confidence levels, and ACH methodology that Claude already knows. The 'When to Use' section is generic boilerplate. The overview repeats the description. Much of this could be cut to focus on the actual executable workflow. | 1 / 3 |
| Actionability | The code is mostly executable Python, but it's more of a framework/skeleton than copy-paste ready analysis. The classes and functions define data structures but lack concrete usage examples showing how to actually run an attribution analysis end-to-end with real data. There's no example invocation tying the steps together. | 2 / 3 |
| Workflow Clarity | Steps are listed sequentially (collect evidence → infrastructure analysis → TTP comparison → report), but there are no validation checkpoints or feedback loops. For an analytical process where false flags and ambiguity are explicitly mentioned, there's no step for validating evidence quality, cross-checking results, or iterating when confidence is low. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic document with everything inline. The Key Concepts section, detailed code for each step, and references are all in one file. The infrastructure analysis, TTP comparison, and ACH framework code could be split into separate reference files. References are listed but not integrated as progressive disclosure points. | 2 / 3 |
| Total | | 7 / 12 Passed |
## Validation (90%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 passed.
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
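The single warning concerns unknown frontmatter keys. A minimal sketch of the fix, assuming a hypothetical offending key (the actual key is whatever the validator reports in this skill's SKILL.md):

```yaml
# Before: an unrecognized top-level key triggers frontmatter_unknown_keys
---
name: analyzing-campaign-attribution-evidence
author: example-maintainer   # hypothetical unknown key
---

# After: nest it under metadata, or remove it entirely
---
name: analyzing-campaign-attribution-evidence
metadata:
  author: example-maintainer
---
```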