A hybrid memory system that provides persistent, searchable knowledge management for AI agents (Architecture, Patterns, Decisions).
Quality: 36%. Does it follow best practices?
Impact: Pending. No eval scenarios have been run.
Advisory: Suggest reviewing before use.
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./skills/antigravity-agent-memory-mcp/SKILL.md
```

Quality
Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description reads more like a marketing tagline than a functional skill description. It relies on abstract concepts ('hybrid memory system,' 'knowledge management') without specifying concrete actions or providing any trigger guidance for when Claude should select this skill. The parenthetical categories hint at scope but don't compensate for the lack of actionable detail.
Suggestions
Replace abstract language with concrete actions, e.g., 'Stores and retrieves architectural decisions, design patterns, and project knowledge. Searches past entries by topic or keyword.'
Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to remember a decision, recall a past pattern, save project context, or look up architectural choices.'
Remove jargon like 'hybrid memory system' and 'AI agents' in favor of terms users would naturally use, such as 'project memory,' 'save notes,' 'recall decisions.'
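Applying all three suggestions at once, a revised frontmatter description might look like the sketch below. The skill name is taken from the review command in this report; the wording itself is one possible rewrite, not a canonical one:

```yaml
# SKILL.md frontmatter -- a possible rewrite of the description field.
# The exact phrasing is illustrative, not prescriptive.
name: antigravity-agent-memory-mcp
description: >
  Stores and retrieves architectural decisions, design patterns, and
  project knowledge. Searches past entries by topic or keyword.
  Use when the user asks to remember a decision, recall a past pattern,
  save project context, or look up architectural choices.
```

Note how the rewrite leads with concrete verbs ("stores", "retrieves", "searches") and ends with a "Use when..." clause built from terms a user would actually type.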
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, abstract language like 'hybrid memory system' and 'persistent, searchable knowledge management.' The parenthetical '(Architecture, Patterns, Decisions)' hints at categories but doesn't describe concrete actions the skill performs (e.g., 'stores architectural decisions,' 'searches past patterns'). | 1 / 3 |
| Completeness | The description weakly addresses 'what' (knowledge management) but provides no 'when' guidance whatsoever. There is no 'Use when...' clause or equivalent explicit trigger guidance, and even the 'what' is too abstract to be useful. | 1 / 3 |
| Trigger Term Quality | The terms used ('hybrid memory system,' 'persistent,' 'searchable knowledge management,' 'AI agents') are technical jargon unlikely to match natural user queries. Users would more likely say things like 'remember this,' 'save a decision,' 'look up past patterns,' or 'project memory.' | 1 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Architecture, Patterns, Decisions' and 'AI agents' provides some domain specificity, but 'knowledge management' and 'memory system' are broad enough to overlap with note-taking, documentation, or other memory/context skills. | 2 / 3 |
| Total | 5 / 12 | Passed |
Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides actionable setup instructions and clear MCP tool documentation with concrete examples. However, it suffers from generic boilerplate sections ('When to Use', 'Limitations') that waste tokens, and lacks validation checkpoints in the setup workflow (e.g., how to verify the server is running). The content is well-organized but could benefit from trimming filler and adding error recovery guidance.
Suggestions
Remove or replace the generic 'When to Use' and 'Limitations' sections with project-specific guidance (e.g., 'Use when you need to persist architectural decisions across sessions').
Add a validation step after server startup, such as 'Verify the server is running: curl http://localhost:<port>/health or check for the expected stdout message'.
Add brief error recovery guidance for common setup failures (e.g., port conflicts, missing Node.js version).
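The second and third suggestions can be folded into one small post-startup check. This is a sketch under assumptions: the `/health` endpoint and the default port are hypothetical, so substitute whatever the skill's server actually exposes.

```shell
# Hypothetical validation step to run after starting the server.
# Assumes the server exposes GET /health on the given port -- adjust both.
check_health() {
  port="$1"
  if curl -fsS --max-time 2 "http://localhost:${port}/health" >/dev/null 2>&1; then
    echo "server is up on port ${port}"
  else
    # Common failure modes: port already in use, or the install/build never finished.
    echo "server not reachable on port ${port}; try: lsof -i :${port}" >&2
    return 1
  fi
}
```

A setup section that ends with `check_health 3000` (or whatever the real port is) gives the agent a concrete pass/fail signal instead of assuming the clone → install → start sequence succeeded.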
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient but includes some unnecessary filler. The 'When to Use' and 'Limitations' sections are generic boilerplate that add no real value. The MCP tool descriptions are reasonably lean but could be tighter. | 2 / 3 |
| Actionability | Setup steps include concrete, copy-paste-ready bash commands. MCP tool documentation provides specific argument signatures and usage examples with realistic invocations. The dashboard startup command is also concrete. | 3 / 3 |
| Workflow Clarity | The setup workflow is clearly sequenced (clone → install → start), but there are no validation checkpoints: no way to verify the server started correctly, no error recovery guidance, and no verification step after installation. For a multi-step setup involving compilation and server startup, this is a gap. | 2 / 3 |
| Progressive Disclosure | The content is reasonably structured with clear sections, but everything is inline in a single file with no references to supporting documentation. Given that no bundle files are provided, there's no external reference structure. The API reference section could be separated for a cleaner overview, but for a skill of this size it's borderline acceptable. | 2 / 3 |
| Total | 9 / 12 | Passed |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
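The one warning above is straightforward to clear: per the warning text, keys the skill spec doesn't recognize can either be removed or nested under `metadata`. The key names below are hypothetical; the shape is what matters:

```yaml
# Before (hypothetical): an unrecognized top-level key triggers the warning.
#   tags: [memory, mcp]
# After: the unknown key is moved under metadata, which validation accepts.
name: antigravity-agent-memory-mcp
description: Stores and retrieves project knowledge for AI agents.
metadata:
  tags: [memory, mcp]
```

With that key moved, the remaining check should pass and the skill would validate at 11 / 11.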
431bfad