AST-based code graph for fast symbol lookup, dependency analysis, and blast radius via codebase-memory-mcp MCP server
- Quality: 47% (does it follow best practices?)
- Impact: — (no eval scenarios have been run)
- Validation: Passed (no known issues)
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/code-graph/SKILL.md`

Quality
Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear technical domain and names several capabilities but relies heavily on developer jargon without providing natural trigger terms users would actually use. The biggest weakness is the complete absence of a 'Use when...' clause, making it unclear when Claude should select this skill over other code analysis tools. Adding explicit trigger guidance and more user-facing language would significantly improve selection accuracy.
Suggestions
- Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user asks to find where a symbol is defined, trace dependencies between modules, or assess the impact/blast radius of changing a function.'
- Include natural-language trigger terms users would actually say, such as 'find references', 'who calls this function', 'impact of changing', 'code navigation', 'trace imports'.
- Expand the concrete actions list with user-facing descriptions, e.g., 'Finds symbol definitions and references across the codebase, traces import/dependency chains between files, and estimates the blast radius of code changes.'
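Taken together, these suggestions might yield frontmatter along the following lines. This is a sketch: the `name`/`description` keys follow the common SKILL.md convention, and the wording is illustrative rather than a verified fix.

```yaml
# Hypothetical SKILL.md frontmatter -- wording is illustrative
name: code-graph
description: >
  AST-based code graph for fast symbol lookup, dependency analysis, and
  blast-radius estimation via the codebase-memory-mcp MCP server. Finds
  symbol definitions and references across the codebase, traces
  import/dependency chains between files, and estimates the impact of
  changing a function. Use when the user asks where a symbol is defined,
  asks "who calls this function", wants to trace imports, or needs to
  assess the impact of a change.
```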
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (code analysis) and several actions (symbol lookup, dependency analysis, blast radius), but these are somewhat jargon-heavy and could be more concrete about what specific operations are performed (e.g., 'find function definitions', 'trace import chains'). | 2 / 3 |
| Completeness | Describes what it does (AST-based code graph for symbol lookup, dependency analysis, blast radius) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing 'Use when' caps completeness at 2, and the 'what' is also only moderate, so this scores 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant technical terms like 'symbol lookup', 'dependency analysis', 'blast radius', and 'AST', but these are developer jargon rather than natural phrases a user would say. Missing common variations like 'find references', 'who calls this function', 'impact analysis', 'code navigation'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Mentioning the specific MCP server name 'codebase-memory-mcp' and the AST-based approach provides some distinctiveness, but 'code graph', 'symbol lookup', and 'dependency analysis' could overlap with other code intelligence or LSP-based skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides a solid conceptual framework for using the code graph MCP server with a clear workflow and good decision tables. Its main weaknesses are redundancy (the core message is repeated 4+ times in different formats) and lack of concrete tool invocation examples showing actual parameters and expected responses. The workflow section is the strongest part, with explicit validation steps and a clear sequence.
Suggestions
- Add concrete tool invocation examples showing actual parameters and sample responses (e.g., `search_graph({"query": "sendEmail", "project": "myapp"})` → example output), to improve actionability.
- Consolidate the redundant 'graph first' messaging: the Core Principle paragraph, ASCII box, Decision Framework, and Anti-Patterns table all say the same thing; keep the table-based versions and remove the ASCII art box and redundant prose.
- Consider adding a brief example of a `query_graph` structured query, since it is the most complex tool but has zero usage examples.
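A concrete version of the first suggestion might look like the following request/response pair. The parameter names and response shape here are assumptions for illustration, not the actual codebase-memory-mcp contract:

```jsonc
// Hypothetical search_graph invocation -- parameters are illustrative
{
  "tool": "search_graph",
  "arguments": { "query": "sendEmail", "project": "myapp" }
}

// Example of what a response could contain
{
  "matches": [
    {
      "symbol": "sendEmail",
      "kind": "function",
      "file": "src/mail/send.ts",
      "line": 42,
      "references": 7
    }
  ]
}
```

Pairing an example like this with a short note on when to fall back to plain file reading would make the skill's guidance closer to copy-paste ready.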
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill has significant redundancy — the 'Core Principle' section, the ASCII art box, the decision framework, and the anti-patterns table all repeat the same 'graph first, file second' message in slightly different forms. The ASCII box is particularly wasteful (adds ~10 lines restating what was just said). However, the tables and tool listings are reasonably efficient. | 2 / 3 |
| Actionability | The skill names specific MCP tools and provides a clear workflow sequence, but lacks concrete executable examples of actual tool invocations (e.g., what does a `search_graph` call look like with parameters? What does the response look like?). The guidance is specific enough to point Claude in the right direction but missing the copy-paste-ready examples that would earn a 3. | 2 / 3 |
| Workflow Clarity | The PLAN → LOCATE → UNDERSTAND → BLAST → TRACE → CHANGE → VERIFY workflow is clearly sequenced with explicit validation checkpoints ('never skip' blast radius before a change, verify after). The emphasis on pre-change analysis and post-change verification constitutes a proper feedback loop for potentially destructive code changes. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and tables, but it's a monolithic document with no references to external files for detailed content (e.g., tool parameter reference, query syntax examples, or advanced usage patterns). Given the breadth of 14 MCP tools covered, some content could be split out. However, no bundle files exist, which limits what could be referenced. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 10 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 passed |
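A common resolution for this warning is to move unrecognized top-level keys under `metadata`, as the message itself suggests. The offending key below is hypothetical; the report does not say which keys triggered the warning:

```yaml
# Before: hypothetical unrecognized top-level key
name: code-graph
maintainer: jane@example.com   # not a recognized frontmatter key

# After: nested under metadata
name: code-graph
metadata:
  maintainer: jane@example.com
```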
Revision: 65efb33