
git-forensics

Analyze Git history to discover "logical coupling" (files that always change together) and "hotspots" (frequently modified, complex modules). Based on the methodology in Adam Tornhill's "Your Code as a Crime Scene".
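To illustrate the hotspot idea the description refers to, here is a minimal sketch that counts revisions per file from `git log` output. This is an assumed approach, not the skill's actual script; `parse_changed_files` and `change_frequencies` are hypothetical helper names.

```python
import subprocess
from collections import Counter


def parse_changed_files(log_output: str) -> Counter:
    """Count file occurrences in `git log --name-only` output."""
    return Counter(
        line.strip() for line in log_output.splitlines() if line.strip()
    )


def change_frequencies(repo: str = ".") -> Counter:
    """Revisions per file across the full history of `repo`."""
    log = subprocess.run(
        ["git", "-C", repo, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_changed_files(log)


# Hotspots are the most frequently changed files, cross-referenced with a
# complexity proxy such as line count:
# for path, n in change_frequencies().most_common(10):
#     print(n, path)
```

Change frequency alone is only half of a hotspot; the methodology pairs it with a complexity measure to prioritize refactoring.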

Install with Tessl CLI

npx tessl i github:Lingjie-chen/MT5 --skill git-forensics
What are skills?

Overall score: 70 (1.45x)

Quality: 56% (Does it follow best practices?)

Impact: 90% (1.45x; average score across 3 eval scenarios)

Optimize this skill with Tessl

npx tessl skill review --optimize ./.trae/skills/git-forensics/SKILL.md

Discovery: 50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description excels at specificity and distinctiveness by clearly naming the Adam Tornhill methodology and specific analysis types (logical coupling, hotspots). However, it critically lacks any 'Use when...' guidance, making it difficult for Claude to know when to select this skill from a large skill library. The trigger terms are adequate but could include more natural user phrases.

Suggestions

- Add a 'Use when...' clause with trigger phrases like 'analyze code history', 'find frequently changed files', 'identify technical debt', 'code hotspots', or 'which files change together'.
- Include English equivalents of key terms (logical coupling, hotspots, code as crime scene), since users may search in either language.
- Add common user intent phrases such as 'refactoring priorities', 'code complexity analysis', or 'maintenance burden'.

Dimension scores:

Specificity (3 / 3): Lists specific concrete actions: analyzing Git history, discovering 'logical coupling' (files that change together), and 'hotspots' (frequently modified complex modules). Also references a specific methodology source.

Completeness (1 / 3): Clearly describes WHAT the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for WHEN Claude should select this skill. Per rubric guidelines, missing explicit trigger guidance caps completeness at 2, and this has none at all.

Trigger Term Quality (2 / 3): Includes relevant terms like 'Git 历史' (Git history), '逻辑耦合' (logical coupling), and '热点' (hotspots), but is missing common variations users might say, such as 'code analysis', 'technical debt', 'refactoring targets', or file extensions like '.git'.

Distinctiveness / Conflict Risk (3 / 3): Very distinct niche focusing specifically on Git history analysis for code coupling and hotspots using a named methodology. Unlikely to conflict with general code analysis or Git commit message skills.

Total: 9 / 12 (Passed)

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a solid conceptual framework for Git forensics analysis based on established methodology, with clear workflow steps and decision matrices. However, it falls short on actionability by referencing Python scripts that aren't provided or linked, and includes some verbose philosophical framing that doesn't add practical value. The mandatory 'sequential thinking' section adds process overhead without clear benefit.

Suggestions

- Provide the actual Python scripts (git_forensics.py, git_hotspots.py), or replace them with executable shell/git commands that accomplish the same analysis.
- Remove or condense the philosophical quotes and repeated methodology attributions to improve token efficiency.
- Replace the 'sequential thinking' mandate with concrete pre-flight checks (e.g., a checklist of git commands to verify repository state).
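As a sketch of what the missing script might contain (hypothetical code, not the skill's actual git_forensics.py), logical coupling can be measured by splitting `git log --name-only` output into per-commit file sets and counting how often each pair of files changes together:

```python
import subprocess
from collections import Counter
from itertools import combinations

# Commit boundary marker injected via --pretty=format:
SENTINEL = "@@commit@@"


def commit_file_sets(log_text: str) -> list:
    """Split `git log --name-only` output into one set of files per commit."""
    commits, current = [], set()
    for line in log_text.splitlines():
        if line == SENTINEL:
            if current:
                commits.append(current)
            current = set()
        elif line.strip():
            current.add(line.strip())
    if current:
        commits.append(current)
    return commits


def coupling_counts(commits: list) -> Counter:
    """Count how often each pair of files appears in the same commit."""
    pairs = Counter()
    for files in commits:
        for a, b in combinations(sorted(files), 2):
            pairs[(a, b)] += 1
    return pairs


def run(repo: str = ".") -> Counter:
    log = subprocess.run(
        ["git", "-C", repo, "log", "--name-only",
         f"--pretty=format:{SENTINEL}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return coupling_counts(commit_file_sets(log))
```

Dividing each pair count by the revision count of the less-changed file yields the coupling percentage the methodology uses for prioritization.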

Dimension scores:

Conciseness (2 / 3): The skill includes some unnecessary philosophical framing and quotes that don't add actionable value, and the methodology attribution is repeated multiple times. However, the core content is reasonably focused on the task.

Actionability (2 / 3): Provides quick-start commands and git commands, but references scripts (git_forensics.py, git_hotspots.py) without showing their content or how to create them. The actual analysis steps are conceptual rather than executable.

Workflow Clarity (3 / 3): Clear three-step workflow with explicit sequencing. Includes validation guidance (check for shallow clone, filter noise, watch for renames). The strategy matrix provides clear decision criteria for prioritization.

Progressive Disclosure (2 / 3): Content is well-structured with clear sections, but it references external scripts that don't exist or aren't linked, and it points to no additional documentation files for advanced topics. The skill is self-contained but could benefit from separating the methodology explanation from the practical guide.

Total: 9 / 12 (Passed)
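The pre-flight checks suggested above could be sketched as follows. This is a hypothetical helper, not part of the skill; `git rev-parse --is-shallow-repository` and `git rev-list --count` are standard Git commands, while the 100-commit threshold is an arbitrary assumption.

```python
import subprocess


def evaluate_preflight(shallow_out: str, count_out: str,
                       min_commits: int = 100) -> dict:
    """Interpret raw git output into pass/fail checks."""
    return {
        # A shallow clone truncates history and skews frequency counts.
        "full_history": shallow_out.strip() == "false",
        # Too few commits make coupling statistics meaningless.
        "enough_commits": int(count_out.strip() or 0) >= min_commits,
    }


def preflight(repo: str = ".") -> dict:
    """Run the repository-state checks before any history analysis."""
    def git(*args: str) -> str:
        return subprocess.run(
            ["git", "-C", repo, *args], capture_output=True, text=True,
        ).stdout

    return evaluate_preflight(
        git("rev-parse", "--is-shallow-repository"),
        git("rev-list", "--count", "HEAD"),
    )
```

Running such a checklist before the analysis would replace the 'sequential thinking' mandate with verifiable repository-state gates.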

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 11 / 11 passed

Validation for skill structure

No warnings or errors.
