
neo4j-agent-memory-skill

Authoritative reference for the neo4j-agent-memory Python package — a graph-native memory system for AI agents built on Neo4j — and for the hosted service (NAMS) at memory.neo4jlabs.com. Use this skill whenever the user mentions neo4j-agent-memory, agent memory with Neo4j, context graphs, the POLE+O model, MemoryClient/MemorySettings, the memory MCP server, or any of the framework integrations (LangChain, PydanticAI, CrewAI, AWS Strands, Google ADK, Microsoft Agent Framework, OpenAI Agents, LlamaIndex). Also use when the user mentions the hosted service at memory.neo4jlabs.com, NAMS, the Neo4j Agent Memory Service, the `nams_` API key prefix, or the hosted MCP endpoint. Also use when writing documentation, blog posts, tutorials, PRDs, or code samples for the project, when comparing agent memory approaches, or when positioning graph-native memory against vector-only approaches — even if the user doesn't explicitly name the package.
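The review below mentions a `MemoryClient` used as an async context manager that returns a context dict with `short_term` and `long_term` keys. As a rough illustration of that usage pattern only, here is a minimal sketch with a stand-in stub class; the class body, method names, and connection URI are assumptions, not the package's actual API.

```python
# Hypothetical sketch of the usage pattern the review describes: an
# async context-managed MemoryClient whose retrieval call returns a
# context dict with "short_term" and "long_term" keys. All names here
# are stand-ins, not the real neo4j-agent-memory API.
import asyncio


class MemoryClient:
    """Stub standing in for the real client, which would connect to Neo4j or NAMS."""

    def __init__(self, uri: str):
        self.uri = uri

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False

    async def retrieve(self, query: str) -> dict:
        # The real implementation would query the graph; stubbed here.
        return {"short_term": [], "long_term": []}


async def main() -> dict:
    async with MemoryClient("neo4j://localhost:7687") as client:
        return await client.retrieve("user preferences")


context = asyncio.run(main())
print(sorted(context))  # ['long_term', 'short_term']
```

Consult the package's own documentation for the real class and method signatures before publishing any sample built on this shape.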

Score: 60

Quality: 70% — Does it follow best practices?
Impact: No eval scenarios have been run.
Security (by Snyk): Advisory — suggest reviewing before use.


Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description with excellent trigger term coverage and completeness, clearly specifying both what the skill does and when to use it. Its main weakness is that the 'what' portion is somewhat abstract ('authoritative reference') rather than listing concrete actions the skill enables. The extensive trigger term enumeration across package names, service identifiers, framework integrations, and conceptual terms makes it highly effective for skill selection.

Suggestions

Replace or supplement 'Authoritative reference for' with specific concrete actions like 'Provides API documentation, configuration guidance, and code examples for the neo4j-agent-memory Python package' to improve specificity.

Dimension scores

Specificity — 2 / 3. The description names the domain (neo4j-agent-memory Python package, hosted service NAMS) and mentions some actions like 'writing documentation, blog posts, tutorials, PRDs, or code samples' and 'comparing agent memory approaches,' but it doesn't list concrete technical actions the skill performs (e.g., 'configure MemoryClient connections, store and retrieve agent memories, set up framework integrations'). It reads more as a reference guide than a list of specific capabilities.

Completeness — 3 / 3. The description clearly answers both 'what' (authoritative reference for the neo4j-agent-memory package and hosted NAMS service) and 'when,' with extensive explicit trigger guidance using 'Use this skill whenever...' and 'Also use when...' clauses covering multiple scenarios, including direct mentions, related concepts, and content creation tasks.

Trigger Term Quality — 3 / 3. Excellent coverage of natural trigger terms, including package names (neo4j-agent-memory), class names (MemoryClient/MemorySettings), service names (NAMS, Neo4j Agent Memory Service), URLs (memory.neo4jlabs.com), API key prefixes (nams_), framework names (LangChain, PydanticAI, CrewAI, etc.), and conceptual terms (POLE+O model, context graphs, graph-native memory). Users would naturally mention many of these terms.

Distinctiveness / Conflict Risk — 3 / 3. Highly distinctive, with very specific triggers tied to a particular package, service, and ecosystem. The mention of specific class names, API key prefixes, URLs, and the POLE+O model makes it extremely unlikely to conflict with other skills. The only slight risk is the broad 'comparing agent memory approaches' trigger, but the overall specificity is strong.

Total: 11 / 12 — Passed

Implementation

50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is highly actionable with excellent executable code examples, concrete commands, and clear configuration snippets for multiple deployment scenarios. However, it is significantly over-length, mixing technical reference with marketing positioning, editorial guidelines, diagram conventions, and repeated verification warnings — much of which could be split into separate files or removed entirely. The result is a comprehensive but bloated reference that would consume substantial context window for information that is often non-technical.

Suggestions

Extract the 'Positioning Language', 'Common Corrections to Watch For', and 'Diagram Conventions' sections into separate bundle files (e.g., POSITIONING.md, CORRECTIONS.md, DIAGRAMS.md) and reference them from the main skill to reduce token footprint by ~40%.

Consolidate the three separate verification warnings (top disclaimer, NAMS warning, checklist) into a single concise 'Before Publishing' section to eliminate redundancy.

Remove explanatory content Claude already knows (e.g., what POLE stands for can be a one-liner, the Diataxis framework explanation is unnecessary, the 'What It Is' section restates the opening line).

Add a validation step after the Python quickstart (e.g., 'Expected output: context dict with short_term and long_term keys') and after MCP registration (e.g., 'Verify: run `claude mcp list` and confirm the server appears').

Dimension scores

Conciseness — 1 / 3. The skill is extremely verbose at 400+ lines. It includes extensive positioning language, marketing guidance ('Do Say / Don't Say'), diagram color conventions, documentation philosophy (Diataxis), competitor framing advice, and repeated checklists — much of which is not actionable technical instruction. Sections like 'Positioning Language' and 'Common Corrections to Watch For' are editorial guidelines, not skill content Claude needs to execute tasks. The version disclaimer is repeated multiple times.

Actionability — 3 / 3. The skill provides fully executable code examples: the Python quickstart with async context manager, MCP server invocation commands, Claude Code/Desktop registration configs, pip install commands with extras, and framework integration import paths. All code is copy-paste ready with concrete values and clear patterns.

Workflow Clarity — 2 / 3. The skill presents clear sequences for installation and MCP setup, but lacks explicit validation checkpoints. For example, after installing and running the quickstart, there's no 'verify your connection works' step. The pre-publish checklist is good but is a static list rather than a sequenced workflow with feedback loops. Multi-step processes like setting up NAMS vs. self-hosted don't have explicit validation gates.

Progressive Disclosure — 2 / 3. The skill references external resources (GitHub, PyPI, canonical docs, other skills like 'excalidraw skill' and 'neo4j-styleguide skill') and points to canonical examples in the repo. However, the SKILL.md itself is monolithic — all content is inline, with no bundle files to offload detailed reference material like the full extras list, the complete tool profile definitions, or the positioning guide. The positioning/marketing content and diagram conventions could easily be separate files.

Total: 8 / 12 — Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed (skill structure). No warnings or errors.

Repository: neo4j-contrib/neo4j-skills (reviewed)
