
audit-context-building

Enables ultra-granular, line-by-line code analysis to build deep architectural context before vulnerability or bug finding.

46

Quality

33%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/audit-context-building/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description conveys a general sense of purpose—deep code analysis for security and bug detection—but relies on buzzword-heavy language ('ultra-granular', 'deep architectural context') without listing concrete actions. It lacks an explicit 'Use when...' clause, making it harder for Claude to know precisely when to select this skill over other code analysis or security-related skills.

Suggestions

Add an explicit 'Use when...' clause with natural trigger terms like 'security audit', 'find vulnerabilities', 'code review for bugs', 'static analysis', or 'security review'.

Replace vague qualifiers like 'ultra-granular' and 'deep architectural context' with concrete actions such as 'traces data flow across functions', 'identifies injection points', 'maps call graphs', or 'flags unsafe patterns'.

Clarify the boundary with other potential code analysis skills by specifying what makes this skill distinct (e.g., language support, analysis depth, types of vulnerabilities detected).
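The first two suggestions could be combined into a rewritten description. The wording below is a hypothetical sketch of what such frontmatter might look like, not text from the skill's maintainers:

```yaml
# Hypothetical SKILL.md frontmatter rewrite (illustrative wording only)
name: audit-context-building
description: >
  Performs line-by-line code analysis that traces data flow across
  functions, maps call graphs, and flags unsafe patterns to build
  architectural context before a security audit. Use when asked to
  run a security audit, find vulnerabilities, perform static
  analysis, or review code for bugs before fixing them.
```

Note how the rewrite replaces 'ultra-granular' with enumerable actions and front-loads the trigger terms a user would actually type.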

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | It names the domain (code analysis) and some actions ('line-by-line code analysis', 'build deep architectural context', 'vulnerability or bug finding'), but these are more abstract descriptions of approach than concrete, enumerable actions like 'extract', 'fill', 'merge'. | 2 / 3 |
| Completeness | It describes what the skill does (line-by-line code analysis for vulnerability/bug finding) but has no explicit 'Use when...' clause or equivalent trigger guidance, which per the rubric should cap completeness at 2, and the 'when' is entirely missing, placing it at 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'code analysis', 'vulnerability', 'bug finding', and 'architectural context', but misses common user-facing variations such as 'security audit', 'code review', 'static analysis', 'CVE', 'security scan', or 'find bugs'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The 'ultra-granular, line-by-line' qualifier and focus on vulnerability/bug finding provide some distinctiveness, but it could easily overlap with general code review, security analysis, or debugging skills without clearer boundaries. | 2 / 3 |
| Total | | 7 / 12 |

Passed

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a comprehensive framework for deep code analysis with clear phasing and structured checklists, but suffers from significant verbosity and redundancy. It explains concepts Claude already understands, repeats its scope constraints multiple times, and includes motivational/anti-rationalization content that wastes tokens. The actual actionable content (examples, output format, checklist) is largely deferred to external files, leaving the main skill heavy on philosophy but light on concrete demonstration.

Suggestions

Cut the 'Rationalizations' table and sections 9/10 entirely—Claude doesn't need motivational coaching or repeated scope reminders. Consolidate the 'do not' constraints into a single brief line in section 1.

Inline at least one concrete, compact analysis example showing the expected output format for a small function, rather than deferring all examples to external files.

Reduce sections 1 and 3 which heavily overlap—merge them into a single brief 'Purpose & Behavior' section that states the goal and default behavior in under 10 lines.

Move the detailed Phase 2 subsections (5.1-5.5) into a referenced file and keep only a concise summary with the key checklist items inline, improving the balance of progressive disclosure.
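The second suggestion, inlining a compact analysis example, could take the shape below. This is a hypothetical illustration only: the skill's actual output format is defined in its referenced files (OUTPUT_REQUIREMENTS.md), and the target function here is invented for demonstration.

```python
import sqlite3

# Hypothetical target of a micro-analysis pass; the inline comments
# sketch the kind of per-line findings the skill could show.
def load_user(conn, user_id):
    # user_id arrives untrusted from the caller (taint source).
    query = "SELECT * FROM users WHERE id = " + user_id
    # String concatenation builds SQL from untrusted input: injection sink.
    return conn.execute(query).fetchone()
    # fetchone() returns None when no row matches; callers must handle it.
```

Even a five-line example like this demonstrates the expected granularity without deferring everything to an external file.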

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose at ~300+ lines, with significant redundancy. It explains concepts Claude already knows (what First Principles/5 Whys are, why gist-level understanding is bad), includes a 'Rationalizations' table that patronizingly tells Claude not to skip steps, and repeats the 'pure context building only' constraint multiple times across sections 1, 9, and 10. | 1 / 3 |
| Actionability | The skill provides structured checklists and analysis frameworks (per-function microstructure, cross-function flow rules, quality thresholds with specific minimums), which is somewhat concrete. However, it lacks any executable code/commands, and the actual analysis example is deferred to an external file (FUNCTION_MICRO_ANALYSIS_EXAMPLE.md) rather than shown inline, leaving the core skill without a concrete demonstration of the expected output. | 2 / 3 |
| Workflow Clarity | The three-phase workflow (Orientation → Granular Analysis → Global Understanding) is clearly sequenced, and the completeness checklist provides validation. However, the validation steps are deferred to external files (COMPLETENESS_CHECKLIST.md, OUTPUT_REQUIREMENTS.md), and there are no explicit feedback loops for error recovery within the phases themselves, just a general instruction to 'update the model' when contradicted. | 2 / 3 |
| Progressive Disclosure | The skill references external files (FUNCTION_MICRO_ANALYSIS_EXAMPLE.md, OUTPUT_REQUIREMENTS.md, COMPLETENESS_CHECKLIST.md), which is good progressive disclosure, but the main file itself is still a monolithic wall of text with too much inline content that could be split out. The references are reasonably well-signaled but the balance between inline and referenced content is off: too much stays in the main file. | 2 / 3 |
| Total | | 7 / 12 |

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11 (Passed)
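The single warning concerns frontmatter keys outside the spec. Assuming the validator's suggested `metadata` escape hatch, a fix might look like this hypothetical sketch (the keys `author` and `version` are illustrative, not taken from this skill):

```yaml
# Before: unknown top-level keys trigger the warning (illustrative keys)
name: audit-context-building
description: >
  ...
author: sickn33   # unknown key
version: "1.2"    # unknown key

# After: move them under metadata, or drop them entirely
name: audit-context-building
description: >
  ...
metadata:
  author: sickn33
  version: "1.2"
```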

Repository
sickn33/antigravity-awesome-skills
Reviewed

