Tessl

tdg-personal/repo-scan

Cross-stack source code asset audit — classifies every file, detects embedded third-party libraries, and delivers actionable four-level verdicts per module with interactive HTML reports.

Quality: 61%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Security (by Snyk): Passed
No known issues.


Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description excels at specificity and distinctiveness, clearly articulating concrete capabilities such as file classification, third-party library detection, and four-level verdict generation with HTML reports. However, it lacks an explicit 'Use when...' clause, which caps the completeness score, and it could benefit from more natural trigger terms that users would actually say when they need this kind of audit.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks for a code audit, license compliance check, dependency analysis, or wants to identify vendored/third-party code in a codebase.'

Include more natural trigger terms users would say, such as 'license check', 'dependency scan', 'vendored code', 'compliance audit', or 'open source detection'.
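Applied together, the two suggestions above might produce frontmatter along these lines. This is an illustrative sketch only; the key names and wording are assumptions, not the skill's actual metadata:

```yaml
# Illustrative sketch -- not the skill's actual frontmatter
name: repo-scan
description: >
  Cross-stack source code asset audit: classifies every file, detects embedded
  third-party libraries, and delivers four-level verdicts per module with
  interactive HTML reports. Use when the user asks for a code audit, license
  compliance check, dependency scan, compliance audit, or wants to identify
  vendored, third-party, or open source code in a codebase.
```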

Dimension | Reasoning | Score

Specificity

Lists multiple specific concrete actions: 'classifies every file', 'detects embedded third-party libraries', 'delivers actionable four-level verdicts per module', and 'interactive HTML reports'. These are concrete, well-defined capabilities.

3 / 3

Completeness

The 'what' is well-covered (classifies files, detects libraries, delivers verdicts with HTML reports), but there is no explicit 'Use when...' clause or equivalent trigger guidance telling Claude when to select this skill.

2 / 3

Trigger Term Quality

Contains some relevant terms like 'source code', 'audit', 'third-party libraries', and 'asset', but misses common natural user phrases like 'license check', 'dependency scan', 'code audit', 'vendor detection', or 'compliance'. The phrase 'cross-stack' is somewhat jargon-heavy.

2 / 3

Distinctiveness / Conflict Risk

The combination of source code asset auditing, third-party library detection, four-level verdicts, and interactive HTML reports creates a very distinct niche that is unlikely to conflict with other skills.

3 / 3

Total: 10 / 12

Passed

Implementation

37%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides a good high-level overview of repo-scan's capabilities with well-structured tables and a concrete installation process. However, it critically lacks actual usage commands — there's no example of how to invoke the tool, specify depth levels, or interpret outputs. The workflow section describes the tool's internal logic rather than providing actionable steps for Claude to follow when performing an audit.

Suggestions

Add concrete usage examples showing actual CLI commands, e.g., `python repo-scan.py --depth standard /path/to/repo` with expected output snippets

Replace the conceptual 'How It Works' section with an actionable workflow: steps Claude should follow to run a scan, verify results, and present findings to the user

Add a validation/verification step showing how to confirm the report was generated correctly and how to handle common errors (e.g., unsupported file types, permission issues)
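The verification suggestion above can be sketched as a small shell check. The report filename is an assumption (the skill does not document its output path), and a stub file stands in for a real scan run:

```shell
# Hypothetical verification step; the report path is an assumption,
# and the stub write below stands in for an actual repo-scan run.
REPORT="repo-scan-report.html"
printf '<html><body>stub report</body></html>' > "$REPORT"

# Confirm the report exists, is non-empty, and looks like HTML
if [ -s "$REPORT" ] && grep -qi '<html' "$REPORT"; then
  echo "report OK"
else
  echo "report missing or malformed" >&2
fi
```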

Include a brief example of the output format (even a truncated JSON or HTML snippet) so Claude knows what to expect and how to summarize results for the user
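The last suggestion can be made concrete with a small sketch. The JSON shape and verdict names below are invented for illustration (the skill does not document its real schema), but they show how even a truncated output sample tells Claude how to summarize findings:

```python
import json

# Hypothetical truncated repo-scan output -- the field names and verdict
# labels are assumptions, not the tool's documented schema.
sample = json.loads("""
{
  "modules": [
    {"path": "src/core", "verdict": "first-party", "files": 42},
    {"path": "vendor/zlib", "verdict": "third-party", "files": 18}
  ]
}
""")

# Summarize verdicts per module, as Claude might when presenting findings
for m in sample["modules"]:
    print(f'{m["path"]}: {m["verdict"]} ({m["files"]} files)')
```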

Dimension | Reasoning | Score

Conciseness

The content is mostly efficient but includes some unnecessary sections, such as the 'When to Use' bullets and the 'How It Works' section, which largely restate what Claude could infer from the capability table. The example section adds value, but the overall content could be tightened.

2 / 3

Actionability

Installation instructions are concrete and copy-paste ready, but the actual usage of the tool is never shown — there are no command-line invocations, no example of how to run a scan with a specific depth level, and no sample output format. The skill describes what the tool does but doesn't show how to invoke it.

2 / 3

Workflow Clarity

The 'How It Works' section describes conceptual steps of the tool's internal process, not actionable workflow steps for Claude to follow. There are no validation checkpoints, no error handling guidance, and no clear sequence of commands to execute when performing an audit. For a tool that produces reports on large codebases, missing verification steps is a significant gap.

1 / 3

Progressive Disclosure

The content is reasonably structured with clear sections and tables, but everything is in one file with no references to detailed documentation. The analysis depth levels and capabilities tables are well-organized, but the content that could benefit from separate files (e.g., detailed library detection rules, report interpretation guide) is neither included nor referenced.

2 / 3

Total: 7 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11

Passed

Reviewed
