cpg-analysis

Deep code property graph analysis with Joern CPG (AST+CFG+PDG) and CodeQL for control flow, data flow, taint analysis, and security auditing


Quality: 57% (Does it follow best practices?)

Impact: No eval scenarios have been run.

Security (by Snyk): Passed. No known issues.


Quality

Discovery: 50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description excels at specificity and distinctiveness by naming concrete tools (Joern, CodeQL) and analysis techniques (taint analysis, control flow, data flow, CPG). However, it completely lacks a 'Use when...' clause, which is critical for Claude to know when to select this skill. It also leans heavily on technical jargon, missing more natural user-facing trigger terms like 'find vulnerabilities' or 'static analysis'.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks for static analysis, vulnerability detection, taint tracking, or code property graph queries using Joern or CodeQL.'

Include more natural user-facing trigger terms such as 'find vulnerabilities', 'static analysis', 'security scan', 'code review for security issues', or 'source-sink analysis'.
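Both suggestions could be folded into the skill's frontmatter description along these lines (a sketch only; the exact frontmatter keys depend on the SKILL.md spec, and the field names here are assumed):

```yaml
# Hypothetical SKILL.md frontmatter -- key names assumed, not verified
name: cpg-analysis
description: >
  Deep code property graph analysis with Joern CPG (AST+CFG+PDG) and CodeQL
  for control flow, data flow, taint analysis, and security auditing.
  Use when the user asks for static analysis, vulnerability detection,
  a security scan, taint tracking, source-sink analysis, or code property
  graph queries with Joern or CodeQL.
```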

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: 'code property graph analysis', 'control flow', 'data flow', 'taint analysis', and 'security auditing'. Also names specific tools (Joern CPG, CodeQL) and graph components (AST+CFG+PDG). | 3 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'when' is entirely absent, this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes good technical keywords like 'Joern', 'CodeQL', 'taint analysis', 'control flow', 'data flow', 'security auditing', and 'CPG'. However, these are heavily technical terms; common user phrases like 'find vulnerabilities', 'code review', 'static analysis', or 'security scan' are missing. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive due to naming specific tools (Joern CPG, CodeQL) and specific analysis types (property graph analysis, taint analysis, AST+CFG+PDG). Unlikely to conflict with generic code review or security skills. | 3 / 3 |
| Total | | 9 / 12 |

Passed

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill with concrete code examples, clear tool descriptions, and a useful tiered architecture. Its main weaknesses are moderate verbosity (explaining graph theory concepts Claude already knows, redundant comparison tables) and lack of explicit error recovery/validation feedback loops in the workflow. The content would benefit from splitting detailed query references into separate files and adding validation checkpoints.

Suggestions

Remove the CPG component diagram explaining what AST/CFG/CDG/DDG/PDG stand for — Claude already knows these concepts. Replace with a one-line summary if needed.

Add explicit error recovery steps to the combined workflow (e.g., 'If CPG build fails: check Docker is running, verify language support, retry with --verbose flag').

Extract the CPGQL query examples and CodeQL patterns into separate reference files (e.g., CPGQL_QUERIES.md, CODEQL_PATTERNS.md) and reference them from the main skill to improve progressive disclosure.
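The error-recovery suggestion above could take a shape like the following sketch. The build and status functions are illustrative stand-ins for the skill's actual MCP tools (such as the `get_cpg_status` checkpoint the review mentions), not their real signatures:

```python
# Sketch of the suggested validation/recovery loop around a CPG build.
# build_cpg and cpg_ready are hypothetical stand-ins for the skill's
# real tools; the retry-with-verbose policy mirrors the suggestion above.

def run_with_recovery(build_cpg, cpg_ready, max_retries=2):
    """Build the CPG, validate it, and retry with diagnostics on failure."""
    for attempt in range(max_retries + 1):
        verbose = attempt > 0  # add --verbose-style diagnostics only on retries
        if build_cpg(verbose=verbose) and cpg_ready():
            return "ready"     # checkpoint passed: safe to run queries
    return "failed"            # surface failure instead of querying a stale CPG

# Toy driver: the first build fails, the verbose retry succeeds.
calls = []
def flaky_build(verbose):
    calls.append(verbose)
    return len(calls) > 1

print(run_with_recovery(flaky_build, lambda: True))  # -> ready
```

The point of the sketch is the explicit checkpoint between "build" and "query": the agent never proceeds to CPGQL or CodeQL queries unless the status check passes.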

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The CPG component breakdown diagram explains concepts Claude already knows (what AST, CFG, etc. stand for). The tier selection guide and comparison tables add useful context but some rows are redundant. The 'When to Use' tables repeat justifications across tiers. Overall mostly efficient but could be tightened by ~30%. | 2 / 3 |
| Actionability | Provides concrete, executable CPGQL queries, CodeQL query examples, MCP configuration JSON, installation commands, and specific tool names with their purposes. The code examples are copy-paste ready and cover real use cases like SQL injection detection and dead code finding. | 3 / 3 |
| Workflow Clarity | The combined workflow section provides a clear 5-step sequence, and the anti-patterns table includes a validation checkpoint (check `get_cpg_status` before querying). However, there are no explicit feedback loops for error recovery — e.g., what to do if CPG build fails, if CodeQL query returns no results, or if the database is stale. For operations involving security auditing, this gap is notable. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and headers, but it's a monolithic document with no references to external files for detailed content (e.g., full CPGQL syntax reference, complete CodeQL query library, or installation guide). The inline tables and query examples could be split into referenced files, especially given the document's length (~180 lines of substantive content). | 2 / 3 |
| Total | | 9 / 12 |

Passed
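For readers unfamiliar with the source-sink queries the Actionability row credits, here is a minimal illustration of the underlying idea: a reachability check over a toy data-flow graph. This is a drastic simplification of what a CPG engine like Joern does with its reachability queries, not the skill's actual query code:

```python
# Toy taint analysis: can data from a "source" reach a "sink" by
# following data-flow edges? A simplified model of CPG reachability.
from collections import deque

def taint_reachable(edges, sources, sinks):
    """BFS from tainted sources over data-flow edges; True if a sink is hit."""
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    seen, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        if node in sinks:
            return True                      # tainted data reaches a sink
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A request parameter flows through string concat into a SQL call: flagged.
flow = [("request.param", "concat"), ("concat", "sql.exec"), ("config", "log")]
print(taint_reachable(flow, {"request.param"}, {"sql.exec"}))  # -> True
print(taint_reachable(flow, {"config"}, {"sql.exec"}))         # -> False
```

Real CPG queries operate on the combined AST+CFG+PDG rather than a bare edge list, but the source-to-sink reachability question is the same.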

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed

Repository: alinaqi/claude-bootstrap (Reviewed)
