
analyzing-campaign-attribution-evidence

Campaign attribution analysis involves systematically evaluating evidence to determine which threat actor or group is responsible for a cyber operation. This skill covers collecting and weighting attr


Quality: 33% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Advisory (Suggest reviewing before use)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/analyzing-campaign-attribution-evidence/SKILL.md

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is truncated mid-sentence, which severely undermines its effectiveness. While it identifies a reasonably specific domain (cyber campaign attribution), it fails to provide complete capability details, lacks a 'Use when...' clause, and misses important trigger terms that users would naturally employ when requesting this type of analysis.

Suggestions

Complete the truncated description to fully list specific actions such as 'weighting attribution indicators, comparing TTPs against known threat actor profiles, assessing confidence levels, and generating attribution reports'.

Add an explicit 'Use when...' clause with natural trigger terms like 'who is behind this attack', 'APT attribution', 'threat actor identification', 'campaign analysis', 'IOC correlation', or 'cyber attribution'.

Include file format or input type references if applicable (e.g., 'STIX/TAXII data, MITRE ATT&CK mappings, malware samples') to improve distinctiveness and trigger matching.
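Taken together, the suggestions above imply a revised frontmatter description. A hypothetical sketch, assembled from the wording in the suggestions themselves (this is illustrative, not the skill's actual frontmatter):

```yaml
---
name: analyzing-campaign-attribution-evidence
description: >
  Systematically evaluate evidence to attribute a cyber operation to a
  threat actor: weighting attribution indicators, comparing TTPs against
  known threat actor profiles, assessing confidence levels, and generating
  attribution reports from inputs such as STIX/TAXII data and MITRE ATT&CK
  mappings. Use when asked 'who is behind this attack', or for APT
  attribution, threat actor identification, campaign analysis, or IOC
  correlation.
---
```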

Dimension scores:

- Specificity (2/3): The description names the domain (campaign attribution analysis, cyber operations) and describes the general action (evaluating evidence to determine threat actors), but it appears truncated and does not list multiple specific concrete actions like weighting indicators, comparing TTPs, or generating reports.

- Completeness (1/3): The description is truncated mid-sentence, so it only partially answers 'what does this do' and completely lacks any 'when should Claude use it' guidance or explicit trigger clause. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the truncation makes even the 'what' incomplete.

- Trigger Term Quality (2/3): Includes some relevant keywords like 'attribution', 'threat actor', 'cyber operation', and 'campaign', but the truncation means it likely misses common variations users might say, such as 'APT', 'IOC', 'threat intelligence', or 'who is behind this attack'.

- Distinctiveness / Conflict Risk (2/3): The cyber attribution domain is fairly niche, which helps distinctiveness, but the truncation and lack of explicit triggers mean it could overlap with broader threat intelligence or cyber analysis skills without clear differentiation.

Total: 7 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is overly verbose, spending significant tokens on definitional content Claude already knows (what attribution categories are, what ACH is, confidence level definitions). The code provides a reasonable framework but lacks concrete end-to-end examples with sample data, and the workflow lacks validation checkpoints between steps. The content would benefit significantly from trimming conceptual explanations and adding a concrete worked example.

Suggestions

Remove or drastically reduce the 'Key Concepts' section—Claude already understands attribution categories, confidence levels, and ACH. Replace with a brief reference table if needed.

Add a concrete end-to-end usage example with sample data showing the full workflow from evidence collection through report generation.

Integrate validation checkpoints into the workflow steps (e.g., 'Verify evidence covers at least 3 of 6 categories before proceeding to ACH evaluation').

Move the detailed class implementation to a separate reference file and keep SKILL.md focused on the workflow with concise code snippets.
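The checkpoint suggestion above can be sketched as a small guard function that gates the ACH step on evidence coverage. This is a hypothetical illustration: the category names and the `check_evidence_coverage` helper are assumptions, not part of the reviewed skill's actual code.

```python
# Illustrative validation checkpoint for the attribution workflow.
# The six categories below are assumed for the sketch; substitute the
# skill's real evidence taxonomy.
EVIDENCE_CATEGORIES = {
    "infrastructure", "malware", "ttps",
    "targeting", "timing", "language_artifacts",
}

def check_evidence_coverage(evidence, minimum=3):
    """Raise if evidence spans fewer than `minimum` of the six categories.

    Call this between evidence collection and ACH evaluation so the
    workflow fails fast instead of producing a low-confidence report.
    """
    covered = {item["category"] for item in evidence} & EVIDENCE_CATEGORIES
    if len(covered) < minimum:
        raise ValueError(
            f"Only {len(covered)} of {len(EVIDENCE_CATEGORIES)} evidence "
            f"categories covered; collect more evidence before ACH evaluation."
        )
    return covered
```

A checkpoint like this turns the 'Validation Criteria' checklist into an enforced step rather than a reminder the agent may skip.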

Dimension scores:

- Conciseness (1/3): The skill is verbose, with unnecessary explanations of concepts Claude already knows (what attribution is, what ACH is, what confidence levels mean, what infrastructure overlap is). The 'Key Concepts' section is largely definitional padding, and the 'When to Use' section lists generic boilerplate. Much of this could be cut without losing actionable value.

- Actionability (2/3): The code is mostly executable Python, but it is more of a framework/skeleton than a copy-paste-ready analysis. The functions lack concrete usage examples with real data, the AttributionAnalyzer class requires significant manual orchestration, and there is no end-to-end example showing how to run an attribution analysis from start to finish.

- Workflow Clarity (2/3): Steps are listed sequentially (collect, analyze infrastructure, compare TTPs, generate report), but there are no validation checkpoints between steps, no feedback loops for when evidence is ambiguous or contradictory, and no explicit guidance on when to iterate. The 'Validation Criteria' section is a checklist but is not integrated into the workflow as verification steps.

- Progressive Disclosure (2/3): The content is a monolithic document with everything inline. The references section links to external resources, but detailed content (e.g., the full code classes) is not split into separate files. The Key Concepts section and the code blocks together make the skill quite long, when the overview could instead point to detailed implementation files.

Total: 7 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 Passed

Validation for skill structure

Criteria results:

- frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11 (Passed)

Repository: mukul975/Anthropic-Cybersecurity-Skills (Reviewed)
