
analyzing-linux-system-artifacts

Examine Linux system artifacts including auth logs, cron jobs, shell history, and system configuration to uncover evidence of compromise or unauthorized activity.


Quality: 55% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Risky (Do not use without reviewing)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/analyzing-linux-system-artifacts/SKILL.md

Quality

Discovery: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description with excellent specificity and distinctiveness, clearly naming concrete Linux forensic artifacts and the investigative purpose. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill over others. The natural trigger terms are well-chosen for the security/forensics domain.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Linux forensics, incident response, investigating a compromised server, or analyzing system logs for suspicious activity.'
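A hypothetical frontmatter sketch incorporating such a clause (the field names follow the common SKILL.md `name`/`description` convention; the exact wording here is illustrative, not the skill's actual metadata):

```yaml
---
name: analyzing-linux-system-artifacts
description: >
  Examine Linux system artifacts including auth logs, cron jobs, shell
  history, and system configuration to uncover evidence of compromise or
  unauthorized activity. Use when the user asks about Linux forensics,
  incident response, investigating a compromised server, or analyzing
  system logs for suspicious activity.
---
```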

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions and artifacts ("auth logs, cron jobs, shell history, and system configuration") with a clear purpose ("uncover evidence of compromise or unauthorized activity"). | 3 / 3 |
| Completeness | Clearly answers "what" (examine Linux system artifacts to uncover compromise), but lacks an explicit "Use when..." clause or equivalent trigger guidance, which caps this at 2 per the rubric. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: "Linux", "auth logs", "cron jobs", "shell history", "system configuration", "compromise", "unauthorized activity". These are terms a user investigating a Linux system would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche focusing specifically on Linux forensic analysis of system artifacts for security incidents. Unlikely to conflict with general coding, document, or even broader security skills due to the specific artifact types mentioned. | 3 / 3 |

Total: 11 / 12 (Passed)

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a comprehensive but overly verbose Linux forensics guide that tries to pack too much into a single file. While it provides concrete bash commands and Python scripts (a strength), it suffers from excessive length, broken/truncated code, explanations of concepts Claude already knows, and no validation checkpoints in a workflow where evidence integrity is critical. The content would be significantly improved by aggressive trimming, fixing the broken Python script, adding verification steps, and splitting reference material into separate files.

Suggestions

Remove the Key Concepts and Tools tables entirely; Claude already knows what auth.log, SUID bits, and chkrootkit are. Cutting them saves roughly 40 lines of context.

Fix the truncated Python script in Step 2 (shadow parsing cuts off mid-code) and ensure all code blocks are complete and executable.
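Since the original script is not shown, here is a minimal sketch of what a complete shadow-parsing step might look like (field layout per shadow(5); all function and variable names are hypothetical, not the skill's own):

```python
# Hypothetical completion of the truncated step: parse shadow-format lines
# and flag accounts that could still authenticate.
def parse_shadow_line(line):
    fields = line.rstrip("\n").split(":")
    if len(fields) < 2:
        return None  # malformed line; not a shadow entry
    user, pw_hash = fields[0], fields[1]
    if pw_hash == "":
        return (user, "EMPTY PASSWORD: passwordless login possible")
    if pw_hash.startswith(("!", "*")):
        return None  # locked or login-disabled account, usually benign
    return (user, "active password hash present")

def suspicious_accounts(shadow_text):
    """Collect findings for every non-comment line of shadow-format text."""
    findings = []
    for line in shadow_text.splitlines():
        if not line or line.startswith("#"):
            continue
        entry = parse_shadow_line(line)
        if entry:
            findings.append(entry)
    return findings
```

Running this over a copy of `etc/shadow` from the mounted image surfaces empty-password and active accounts without touching the original evidence.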

Add explicit validation checkpoints: verify the forensic image mounted successfully (check mount output), verify artifact collection completeness (file counts/checksums), and validate analysis outputs before proceeding.
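Two of those checkpoints could be sketched as follows (a minimal illustration using only the standard library; the function names are assumptions, not part of the skill):

```python
import hashlib
import os

def verify_mount(mount_point):
    """Checkpoint 1: confirm the forensic image actually mounted before analysis."""
    if not os.path.ismount(mount_point):
        raise RuntimeError(f"{mount_point} is not a mount point; stop before analyzing")

def checksum_artifacts(paths):
    """Checkpoint 2: record a SHA-256 for every collected artifact so later
    steps can detect missing or altered evidence."""
    sums = {}
    for path in paths:
        if not os.path.isfile(path):
            sums[path] = "MISSING"
            continue
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            # Read in chunks so large artifacts (disk logs, journals) fit in memory.
            for chunk in iter(lambda: fh.read(65536), b""):
                digest.update(chunk)
        sums[path] = digest.hexdigest()
    return sums
```

Calling `verify_mount` right after mounting, and `checksum_artifacts` right after collection, gives the workflow the integrity gates the review says it lacks.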

Move the Common Scenarios section and tool reference tables to separate files (e.g., SCENARIOS.md, TOOLS.md) and link to them from the main skill, keeping SKILL.md focused on the core workflow.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose: repeats file paths extensively, explains concepts Claude already knows (what auth.log is, what SUID bits are), and includes large reference tables of basic Linux forensics concepts. The Key Concepts and Tools tables add little value for Claude, and the Common Scenarios section is descriptive prose rather than actionable guidance. | 1 / 3 |
| Actionability | Provides concrete bash commands and a Python script that are mostly executable, but the Python script is truncated mid-code (the shadow-parsing section cuts off), and many commands assume a very specific directory structure. The scenarios section is purely descriptive rather than providing executable steps. | 2 / 3 |
| Workflow Clarity | Steps are clearly numbered and sequenced (mount → analyze accounts → check persistence → shell history → rootkits), but there are no validation checkpoints between steps, no error-recovery guidance, and no verification that collected artifacts are complete or that the forensic image mounted correctly. For forensic operations where evidence integrity matters, this is a significant gap. | 2 / 3 |
| Progressive Disclosure | A monolithic wall of content with no references to external files: lengthy reference tables, four detailed scenarios, and extensive code blocks are all inline. The content would benefit greatly from splitting the scenarios, tool references, and detailed collection scripts into separate files. | 1 / 3 |

Total: 6 / 12 (Passed)
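The hard-coded directory assumption noted under Actionability could be addressed by parameterizing the mount root. A sketch (the artifact selection and function name are illustrative):

```python
from pathlib import Path

def artifact_paths(mount_root):
    """Map artifact names to locations under an arbitrary mount root,
    rather than assuming the image is mounted at one fixed path."""
    root = Path(mount_root)
    return {
        "auth_log": root / "var/log/auth.log",
        "crontabs": root / "var/spool/cron/crontabs",
        "root_bash_history": root / "root/.bash_history",
        "shadow": root / "etc/shadow",
    }
```

Every command in the workflow can then take its paths from this one mapping, so the same steps work wherever the image is mounted.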

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |

Total: 10 / 11 (Passed)

Repository: mukul975/Anthropic-Cybersecurity-Skills (Reviewed)

