building-incident-timeline-with-timesketch

Build collaborative forensic incident timelines using Timesketch to ingest, normalize, and analyze multi-source event data for attack chain reconstruction and investigation documentation.


Quality: 33% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Advisory (Suggest reviewing before use)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/building-incident-timeline-with-timesketch/SKILL.md

Quality

Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear and distinctive niche around Timesketch forensic timeline analysis, which is its strongest aspect. However, it lacks an explicit 'Use when...' clause, which is critical for skill selection, and the action verbs remain somewhat abstract rather than listing concrete operations. Adding trigger guidance and more natural user-facing keywords would significantly improve it.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about forensic timelines, Timesketch, incident investigation, or analyzing event logs for attack reconstruction.'

Include common user-facing trigger terms and variations such as 'DFIR', 'digital forensics', 'timeline analysis', 'plaso', 'log correlation', and 'security incident investigation'.

Make actions more concrete by specifying discrete operations, e.g., 'import plaso/CSV/JSON logs into Timesketch, create and annotate timelines, tag events, search for indicators of compromise, and export investigation reports.'
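Taken together, these suggestions point toward a description along these lines (illustrative wording, not the skill's actual frontmatter):

```yaml
description: >-
  Build collaborative forensic incident timelines with Timesketch. Import
  plaso/CSV/JSONL logs, create and annotate timelines, tag events, search
  for indicators of compromise, and export investigation reports. Use when
  the user asks about DFIR, digital forensics, timeline analysis, log
  correlation, plaso files, or security incident investigation.
```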

Dimension scores

Specificity (2/3): Names the domain (forensic incident timelines, Timesketch) and some actions (ingest, normalize, analyze), but the actions are somewhat abstract rather than fully concrete: 'attack chain reconstruction' and 'investigation documentation' are high-level rather than specific discrete operations.

Completeness (1/3): Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'when' is entirely absent and the 'what' is only moderately clear, this scores at the lower end.

Trigger Term Quality (2/3): Includes relevant keywords like 'Timesketch', 'forensic', 'incident timelines', 'attack chain', and 'event data', but misses common user variations such as 'DFIR', 'digital forensics', 'timeline analysis', 'log analysis', 'plaso', or '.plaso files'.

Distinctiveness Conflict Risk (3/3): The mention of 'Timesketch' and 'forensic incident timelines' creates a clear niche that is unlikely to conflict with other skills. This is a highly specialized domain with distinct terminology.

Total: 8 / 12 (Passed)

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like a comprehensive reference guide or tutorial than a concise, actionable skill file. It contains significant verbosity explaining concepts Claude already knows (Timesketch architecture, MITRE ATT&CK basics, what various log sources contain) while lacking validation checkpoints in its workflows and proper content organization across files. The strongest aspect is the concrete query examples and ingestion commands, but these are buried in excessive surrounding context.

Suggestions

Remove the Overview explanation of what Timesketch is, the 'When to Use' section, 'Prerequisites', and Architecture sections—Claude already knows these concepts. Start directly with deployment and ingestion commands.

Add explicit validation steps after each ingestion method (e.g., 'Verify import: check sketch timeline count in UI or via API `sketch.list_timelines()`') and after running analyzers.

Split the data sources table, MITRE ATT&CK mapping, and API automation examples into separate referenced files (e.g., DATA_SOURCES.md, API_REFERENCE.md) with clear navigation links from the main skill.

Replace the prose-based UI instructions in Steps 1 and 4 with concrete API commands or at minimum specific UI paths (e.g., 'Navigate to /sketch/new, fill in Name and Description fields').
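A validation checkpoint of the kind suggested above might be sketched as follows (assuming the timesketch-api-client library; the server call is shown in comments, and the helper function and timeline names are illustrative):

```python
# After ingesting, confirm the new timeline actually appears in the sketch.
# With a live server, timeline_names would come from the API, e.g.:
#   from timesketch_api_client import client
#   ts = client.TimesketchApi("http://localhost", "analyst", "password")
#   timeline_names = [t.name for t in ts.get_sketch(1).list_timelines()]

def verify_import(timeline_names, expected_name):
    """Return True if the expected timeline is present after ingestion."""
    return expected_name in set(timeline_names)

print(verify_import(["web-logs", "edr-events"], "edr-events"))  # True
```

A check like this, run after each ingestion method, is what the suggestion above means by an explicit validation step.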

Dimension scores

Conciseness (1/3): The content is highly verbose, with extensive explanations Claude already knows (what Timesketch is, what its components do, what MITRE ATT&CK techniques are). The overview paragraph, 'When to Use' section, 'Prerequisites' section, and architecture descriptions are largely unnecessary padding. The MITRE ATT&CK mapping table and data sources table, while informative, add significant token cost for information Claude likely already possesses.

Actionability (2/3): There are concrete commands for deployment, data ingestion, and API usage that are mostly executable. However, several sections use numbered prose instructions rather than concrete commands (Steps 1 and 4), the Sigma rule integration example is incomplete, and the analysis workflow mixes vague UI instructions with specific query examples. The Python API example is executable but uses plaintext credentials.

Workflow Clarity (2/3): The analysis workflow has a clear four-step sequence, and the data ingestion methods are well organized. However, there are no validation checkpoints: no steps to verify successful ingestion, confirm analyzer completion, or validate that imported data appears correctly in the timeline. For a multi-step forensic process involving data manipulation, the absence of verification steps is a significant gap.

Progressive Disclosure (1/3): The content is a monolithic wall of text with no references to supplementary files. Everything (deployment, ingestion methods, analysis workflows, API automation, data source tables, MITRE mappings) is crammed into a single document. The data sources table, MITRE mapping, and advanced API examples would be better split into separate reference files with clear navigation links.

Total: 6 / 12 (Passed)
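One way to address the plaintext-credentials issue flagged under Actionability is to read credentials from the environment. A minimal sketch (the variable names and error message are assumptions, not taken from the skill):

```python
import os

def load_credentials():
    """Read Timesketch credentials from the environment instead of hard-coding them."""
    user = os.environ.get("TIMESKETCH_USER")
    password = os.environ.get("TIMESKETCH_PASSWORD")
    if not user or not password:
        raise RuntimeError("Set TIMESKETCH_USER and TIMESKETCH_PASSWORD first")
    return user, password

# With a live server the credentials would then be passed to the client, e.g.:
#   ts = client.TimesketchApi("http://localhost", *load_credentials())
```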

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11 (Passed)
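To clear the frontmatter warning, unrecognized top-level keys can be moved under `metadata`, as the check itself suggests. A hypothetical example (the actual offending keys are not shown in this report):

```yaml
---
name: building-incident-timeline-with-timesketch
description: Build collaborative forensic incident timelines using Timesketch...
metadata:
  author: example-author   # previously an unknown top-level key
  category: security       # previously an unknown top-level key
---
```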

Repository: mukul975/Anthropic-Cybersecurity-Skills (Reviewed)
