AST-based code graph for fast symbol lookup, dependency analysis, and blast radius via codebase-memory-mcp MCP server
Quality: 47% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Validation: Passed (No known issues)

Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./skills/code-graph/SKILL.md
```

Quality
Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear technical domain and names several capabilities but relies heavily on developer jargon without providing natural trigger terms users would actually use. The most significant weakness is the complete absence of a 'Use when...' clause, making it difficult for Claude to know when to select this skill over others. The description reads more like a tagline than actionable selection guidance.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to find where a symbol is defined, trace dependencies between modules, or understand the impact of changing a function.'
- Include natural trigger terms users would say, such as 'find references', 'who calls this function', 'what depends on this', 'impact of changing', 'code navigation', 'trace imports'.
- Expand the concrete actions beyond jargon, e.g., 'Finds function/class definitions, traces import chains across files, identifies which files are affected by a code change.'
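Putting the three suggestions together, a revised description could look like the following sketch of SKILL.md frontmatter. The field values here are illustrative only, not the skill's actual metadata:

```yaml
# Illustrative SKILL.md frontmatter; wording is an example, not the real file.
name: code-graph
description: >
  Finds function and class definitions, traces import chains across files,
  and identifies which files are affected by a code change, using the
  codebase-memory-mcp AST code graph. Use when the user asks to find where
  a symbol is defined, who calls a function, what depends on a module, or
  the impact of changing code ("find references", "blast radius",
  "trace imports").
```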
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (code analysis) and several actions (symbol lookup, dependency analysis, blast radius), but these are somewhat jargon-heavy and could be more concrete about what specific operations are performed (e.g., 'find function definitions', 'trace import chains'). | 2 / 3 |
| Completeness | Describes what it does (AST-based code graph for symbol lookup, dependency analysis, blast radius) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing 'Use when' caps completeness at 2, and the 'what' is also only moderate, so this scores 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant technical terms like 'symbol lookup', 'dependency analysis', 'blast radius', and 'AST', but these are developer jargon rather than natural phrases a user would say. Missing common variations like 'find references', 'who calls this function', 'impact analysis', 'code navigation'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Mentioning the specific MCP server name 'codebase-memory-mcp' and the AST-based approach provides some distinctiveness, but 'code graph', 'symbol lookup', and 'dependency analysis' could overlap with other code intelligence or LSP-based skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill effectively communicates the 'graph first, file second' principle and provides a solid workflow with validation steps. However, it suffers from significant redundancy across sections (the same guidance appears in the comparison table, decision framework, and anti-patterns) and lacks concrete tool invocation examples showing actual arguments and responses. Trimming the duplicated content and adding one or two real usage examples would substantially improve it.
Suggestions
- Add concrete tool invocation examples showing actual arguments and expected responses (e.g., a real search_graph call with parameters and what the output looks like).
- Consolidate the 'When to Use Graph vs Direct Read' table, the 'Decision Framework', and the 'Anti-Patterns' sections; they convey largely the same information three times.
- Remove the ASCII art box, which restates the core principle already explained in the prose above it.
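To illustrate the first suggestion, a usage example in the skill might look like the sketch below. The parameter names and response shape of search_graph are assumptions made for illustration; the MCP server's real schema is not shown in this review.

```python
# Hypothetical sketch of documenting a search_graph call. The parameters
# and response fields below are illustrative assumptions, not the actual
# codebase-memory-mcp schema.

def search_graph(symbol: str, kind: str = "function") -> dict:
    """Stand-in for the MCP tool, returning a canned example response."""
    return {
        "symbol": symbol,
        "kind": kind,
        "defined_in": "src/auth/session.py",
        "references": [
            {"file": "src/api/login.py", "line": 42},
            {"file": "tests/test_session.py", "line": 17},
        ],
    }

response = search_graph("create_session")

# The skill could then show how to derive a blast radius from the response:
affected_files = sorted({ref["file"] for ref in response["references"]})
print(affected_files)  # → ['src/api/login.py', 'tests/test_session.py']
```

Even one example of this shape gives Claude a copy-paste-ready pattern instead of having to guess argument names.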
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill has significant redundancy — the core principle is restated in the ASCII box, the decision framework repeats the 'when to use' table, and the anti-patterns table largely duplicates the decision framework. The ASCII art box is decorative padding. However, the tables themselves are reasonably efficient. | 2 / 3 |
| Actionability | The skill names specific MCP tools and provides a clear workflow sequence, but lacks concrete executable examples of actual tool invocations (e.g., what arguments to pass to search_graph, what the response looks like). The guidance is specific enough to point Claude to the right tools but not copy-paste ready in terms of actual usage patterns. | 2 / 3 |
| Workflow Clarity | The 'Before Any Code Change' workflow is clearly sequenced (steps 0-6) with explicit validation checkpoints (step 3 blast radius, step 6 verify). The 'Never skip step 3' callout and the feedback loop of detect_changes before and after edits demonstrate strong workflow discipline. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and tables, but it's monolithic — all content is inline in a single file with no references to external documentation. The redundancy between sections (decision framework, anti-patterns, when-to-use table) suggests content that could be better organized or consolidated. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
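As a rough illustration of the workflow the table praises, steps 0 through 6 could be sketched as follows. Every tool method here is a hypothetical stand-in named after the tools this review mentions (search_graph, detect_changes); this is not the skill's actual API.

```python
# Sketch of the 'Before Any Code Change' workflow (steps 0-6) described
# above. All tool methods are hypothetical stand-ins for MCP tools.

class FakeGraphTools:
    """Minimal stand-in so the workflow sketch is runnable."""
    def detect_changes(self):
        # Files whose graph nodes changed since the last sync.
        return ["src/auth/session.py"]
    def search_graph(self, symbol):
        return {"symbol": symbol, "defined_in": "src/auth/session.py"}
    def blast_radius(self, node):
        return ["src/api/login.py", "tests/test_session.py"]

def before_any_code_change(symbol, apply_edit, tools):
    tools.detect_changes()                    # step 0: sync graph with disk
    node = tools.search_graph(symbol)         # steps 1-2: locate the symbol
    impacted = tools.blast_radius(node)       # step 3: never skip this
    apply_edit(node, impacted)                # steps 4-5: make the change
    return tools.detect_changes()             # step 6: verify what changed

changed = before_any_code_change("create_session", lambda n, i: None, FakeGraphTools())
print(changed)  # → ['src/auth/session.py']
```

The detect_changes call at both ends is what gives the workflow its feedback loop: the same tool that establishes the baseline also verifies the edit.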
Validation: 90% (10 / 11 checks passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure:
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
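Fixing the one warning is usually mechanical. The report does not name the offending key, so the `maintainer` field below is a hypothetical example; the pattern is simply to nest the custom key under `metadata`, as the warning message suggests:

```yaml
# Before: a custom top-level key triggers frontmatter_unknown_keys.
name: code-graph
maintainer: someone          # hypothetical unknown key

# After: the same key nested under metadata passes validation.
name: code-graph
metadata:
  maintainer: someone
```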