Deep code property graph analysis with Joern CPG (AST+CFG+PDG) and CodeQL for control flow, data flow, taint analysis, and security auditing
| Check | Status | Notes |
|---|---|---|
| Quality | 57% | Does it follow best practices? |
| Impact | Pending | No eval scenarios have been run |
| Validation | Passed | No known issues |
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./skills/cpg-analysis/SKILL.md
```

Quality
Discovery — 50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at specificity and distinctiveness by naming concrete tools (Joern, CodeQL) and analysis techniques (taint analysis, control flow, data flow). However, it completely lacks a 'Use when...' clause, which is a critical gap for skill selection. The trigger terms are heavily technical and miss common user phrasings like 'find vulnerabilities' or 'static analysis'.
Suggestions

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks for static analysis, vulnerability scanning, taint tracking, code security review, or mentions Joern or CodeQL.'
- Include more natural user-facing trigger terms such as 'find vulnerabilities', 'static analysis', 'security scan', 'code review for security issues', and 'vulnerability detection'.
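Applied to this skill, the suggested description might look like the following frontmatter sketch (hypothetical; only the 'Use when...' sentence and the added trigger terms are new, and the field layout is assumed):

```yaml
# Hypothetical SKILL.md frontmatter sketch (layout assumed, not from the skill)
name: cpg-analysis
description: >
  Deep code property graph analysis with Joern CPG (AST+CFG+PDG) and CodeQL
  for control flow, data flow, taint analysis, and security auditing.
  Use when the user asks for static analysis, a security scan, vulnerability
  detection, taint tracking, or code review for security issues, or mentions
  Joern or CodeQL.
```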
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'code property graph analysis', 'control flow', 'data flow', 'taint analysis', and 'security auditing'. Also names specific tools (Joern CPG, CodeQL) and graph components (AST+CFG+PDG). | 3 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause should cap completeness at 2, and since the 'when' is entirely absent (not even implied well), this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes good technical keywords like 'Joern', 'CodeQL', 'taint analysis', 'control flow', 'data flow', 'security auditing', and 'CPG'. However, these are heavily technical terms; common user phrases like 'find vulnerabilities', 'code review', 'static analysis', or 'security scan' are missing. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive due to naming specific tools (Joern CPG, CodeQL) and specific analysis types (property graph analysis, taint analysis). This is a clear niche unlikely to conflict with other skills. | 3 / 3 |
| Total | | 9 / 12 Passed |
Implementation — 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable skill with concrete code examples, clear tool references, and a useful tier selection framework. Its main weaknesses are moderate verbosity (explanatory tables that could be tightened), lack of explicit validation/error-recovery steps in the combined workflow, and missed opportunities to offload detailed query references to separate files. The anti-patterns section is a strong addition.
Suggestions

- Add explicit validation checkpoints and error recovery to the Combined Workflow (e.g., 'Verify CPG status before step 3', 'If taint query returns no results, broaden source/sink definitions')
- Move the detailed CPGQL and CodeQL query examples to separate reference files (e.g., CPGQL-QUERIES.md, CODEQL-PATTERNS.md) and link from the main skill
- Remove or condense the CPG component diagram; Claude already knows what AST/CFG/PDG are, so a single-line definition suffices
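The checkpoint-and-recovery pattern from the first suggestion could be sketched in the Joern shell roughly as follows (a sketch only, assuming the default data-flow overlay is loaded; the source/sink names are illustrative and not taken from the reviewed skill):

```scala
// Joern shell (CPGQL) sketch -- not part of the reviewed skill
importCode("path/to/project")

// Checkpoint: confirm the CPG actually built before querying it
assert(cpg.method.nonEmpty, "CPG build produced no methods; re-run importCode")

// Taint query: user-controlled input reaching command-execution sinks
def source = cpg.call.name("gets|scanf|recv").argument
def sink   = cpg.call.name("system|popen").argument
var flows  = sink.reachableByFlows(source).l

// Recovery: if the query returns nothing, broaden the sink definition
if (flows.isEmpty)
  flows = cpg.call.name("exec.*").argument.reachableByFlows(source).l
```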
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The CPG diagram explaining AST/CFG/CDG/DDG/PDG is unnecessary context for Claude, and the 'When to Use' tables have a redundant 'Other Tiers Can't Do This' column that largely restates the 'Why' column. The tier selection guide and anti-patterns are efficient, but overall there's moderate bloat. | 2 / 3 |
| Actionability | Provides concrete, executable CPGQL queries, CodeQL query examples, MCP configuration JSON, installation commands, and specific tool names with their purposes. The code examples are copy-paste ready and cover common use cases. | 3 / 3 |
| Workflow Clarity | The combined workflow provides a clear 5-step sequence, but lacks explicit validation checkpoints and error recovery. There's no feedback loop for when CPG build fails, CodeQL queries return unexpected results, or how to verify findings. The anti-pattern about checking `get_cpg_status` hints at validation but isn't integrated into the workflow itself. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and tables, but it's a monolithic document that could benefit from splitting detailed query references and language-specific examples into separate files. The mention of 'base.md + code-graph.md + security.md' loading suggests a multi-file structure exists but isn't leveraged for offloading detail. | 2 / 3 |
| Total | | 9 / 12 Passed |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure — 10 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
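A hypothetical fix for the `frontmatter_unknown_keys` warning, following the validator's own suggestion to move unrecognized keys under metadata (the key names below are illustrative, not the skill's actual keys):

```yaml
# Hypothetical frontmatter: unknown top-level keys moved under 'metadata'
name: cpg-analysis
description: Deep code property graph analysis with Joern CPG and CodeQL
metadata:
  author: example-maintainer   # was a top-level unknown key (illustrative)
  version: 1.0.0               # was a top-level unknown key (illustrative)
```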