Persistent memory system for AI agents following the Model Context Protocol (MCP). Use it to store long-term memories across sessions, run semantic search over past knowledge, build knowledge graphs, auto-inject context, deduplicate memories, and sync to cloud storage. Essential for agents that need to remember decisions, solutions, preferences, and learned patterns over time.
Dive-Memory v3 provides long-term persistent memory for AI agents, solving the "context forgetting" problem across sessions.
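Under the hood, the hybrid search described below combines vector similarity with keyword overlap. A minimal self-contained sketch of the idea (the toy vectors, weights, and corpus are illustrative, not the library's actual embedding model or scoring formula):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def keyword_overlap(query, text):
    # Fraction of query words that also appear in the memory text
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q)

# Toy corpus: in practice the vectors come from an embedding model.
memories = {
    "Fixed JWT auth bug with refresh token rotation": [0.9, 0.1, 0.2],
    "Chose PostgreSQL over MongoDB for ACID guarantees": [0.1, 0.8, 0.3],
}
query_text = "JWT authentication issues"
query_vec = [0.85, 0.15, 0.25]  # pretend embedding of the query

def hybrid_score(text, vec):
    # Weighted blend of semantic and lexical relevance (weights are illustrative)
    return 0.7 * cosine(vec, query_vec) + 0.3 * keyword_overlap(query_text, text)

best = max(memories, key=lambda m: hybrid_score(m, memories[m]))
print(best)  # the JWT memory wins on both signals
```

Blending the two signals is what lets a query match a memory that shares no exact words with it, while still rewarding literal keyword hits.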
Store memories with rich metadata using the Python API:

```python
from dive_memory_v3 import DiveMemory

memory = DiveMemory()

# Add a memory with section, tags, importance, and arbitrary metadata
memory.add(
    content="Fixed JWT auth bug with refresh token rotation",
    section="solutions",
    subsection="authentication",
    tags=["jwt", "security", "bug-fix"],
    importance=8,
    metadata={"code_snippet": "...", "success_rate": 1.0}
)
```

Search using natural language with hybrid search (vector + keyword):
```python
# Search memories, filtered by section and tags
results = memory.search(
    query="How to fix JWT authentication issues?",
    section="solutions",
    tags=["authentication"],
    top_k=5
)

for result in results:
    print(f"[{result.importance}] {result.content}")
    print(f"Relevance: {result.score:.2f}")
```

Automatically build relationships between memories:
```python
# Get memories related to a given memory, up to two hops away
related = memory.get_related(memory_id, max_depth=2)

# Visualize the knowledge graph for a section
graph = memory.get_graph(section="solutions")
# Returns: {nodes: [...], edges: [...]}
```

Automatically inject relevant memories into prompts:
```python
# Enable auto-injection
memory.enable_context_injection()

# When processing a task, relevant memories are prepended automatically
task = "Implement user authentication"
context = memory.get_context_for_task(task)
# Returns: "Past solutions: JWT with refresh tokens..."
```

Automatically detect and merge duplicate memories:
```python
# Find near-identical memories and merge them, keeping the newer copy
duplicates = memory.find_duplicates(threshold=0.95)
memory.merge_duplicates(duplicates, strategy="keep_newer")
```

Sync memories across devices:
```python
# Configure cloud sync
memory.configure_sync(
    provider="s3",
    bucket="dive-memory-sync",
    auto_sync=True
)

# Manual sync
memory.sync_to_cloud()
memory.sync_from_cloud()
```

Dive-Memory v3 also runs as an MCP server for integration with Claude Desktop, Claude Code, and other MCP clients:
```shell
cd /home/ubuntu/skills/dive-memory-v3/scripts
python3 mcp_server.py
```

The server exposes the following tools:

- `memory_add`: Add new memory
- `memory_search`: Search memories
- `memory_update`: Update existing memory
- `memory_delete`: Delete memory
- `memory_graph`: Get knowledge graph
- `memory_related`: Find related memories
- `memory_stats`: Get memory statistics

Add to the Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "dive-memory": {
      "command": "python3",
      "args": ["/home/ubuntu/skills/dive-memory-v3/scripts/mcp_server.py"],
      "env": {
        "OPENAI_API_KEY": "your-key-here"
      }
    }
  }
}
```

Organize memories hierarchically:
```
solutions/
├── authentication/
├── database/
└── api/
decisions/
├── architecture/
└── technology/
preferences/
research/
├── ai-models/
└── frameworks/
```

Each memory carries the following metadata fields:

- `tags`: List of keywords
- `importance`: 1-10 score
- `source`: Origin of memory
- `timestamp`: Creation time
- `access_count`: Usage frequency
- `last_accessed`: Last retrieval time

Remember successful solutions and patterns:
```python
# Store a solution
memory.add(
    content="Use tRPC for type-safe APIs without code generation",
    section="solutions/api",
    tags=["typescript", "api", "type-safety"],
    importance=9
)

# Later, when building an API
context = memory.search("How to build type-safe API?")
```

Build a knowledge base from research:
```python
# Store findings
memory.add(
    content="Claude Opus 4.5: Best for code quality (10/10)",
    section="research/ai-models",
    tags=["claude", "code-review"],
    importance=8
)

# Auto-links to related memories,
# e.g. "GPT-5.2 for security", "DeepSeek for reasoning"
```

Remember architectural decisions:
```python
memory.add(
    content="Chose PostgreSQL over MongoDB for ACID guarantees",
    section="decisions/database",
    tags=["database", "architecture"],
    metadata={"rationale": "Need transactions for financial data"}
)
```

Learn from task execution:
```python
# After a successful task
memory.add(
    content="Agent #42 excels at React component refactoring",
    section="capabilities",
    tags=["agent-42", "react", "refactoring"],
    importance=7
)
# Route future React tasks to Agent #42
```

Importance scores can also be calculated automatically.
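The exact factors behind automatic scoring aren't enumerated here. Purely as an illustration (not the skill's actual formula), a heuristic combining the `access_count` and `last_accessed` metadata fields might look like:

```python
from datetime import datetime, timedelta

def auto_importance(access_count, last_accessed, base=5):
    # Illustrative only: blend usage frequency and recency into a 1-10 score.
    recency_days = (datetime.now() - last_accessed).days
    score = base + min(access_count, 4) - min(recency_days // 30, 4)
    return max(1, min(10, score))  # clamp to the documented 1-10 range

recent_hot = auto_importance(10, datetime.now() - timedelta(days=2))    # 9
stale_cold = auto_importance(0, datetime.now() - timedelta(days=365))   # 1
print(recent_hot, stale_cold)
```

Whatever the real factors are, clamping to the 1-10 range keeps the score compatible with the `importance` field used throughout the API.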
Remove low-value memories:

```python
# Prune memories that are unimportant, stale, and rarely accessed:
# importance < 3, not accessed in 90 days, access_count < 2
memory.prune(
    min_importance=3,
    max_age_days=90,
    min_access_count=2
)
```

Merge similar memories:
# Find similar memories (0.7-0.95 similarity)
similar = memory.find_similar(threshold=0.7)
# Consolidate into summary
memory.consolidate(similar, strategy="llm_summary")# Export to JSON
memory.export("memories.json", section="solutions")
# Import from JSON
memory.import_from_json("memories.json")
# Export to Markdown
memory.export_markdown("knowledge_base.md")Configuration file at references/config.json contains all settings. Key options:
See references/config.json for full configuration options.
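The JSON export shown above is a plain-file round trip. A self-contained sketch of the pattern (the record shape here is illustrative; the real schema is defined in `references/schema.sql`):

```python
import json
import os
import tempfile

# Illustrative record shape only; see references/schema.sql for the real one.
records = [
    {"content": "Fixed JWT auth bug", "section": "solutions", "importance": 8},
    {"content": "Chose PostgreSQL", "section": "decisions", "importance": 7},
]

path = os.path.join(tempfile.mkdtemp(), "memories.json")

with open(path, "w") as f:
    json.dump(records, f, indent=2)   # export

with open(path) as f:
    restored = json.load(f)           # import

assert restored == records  # the round trip is lossless
```

Because the export is ordinary JSON, it doubles as a backup format and a migration path between machines.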
Helper scripts:

- `scripts/mcp_server.py`: MCP server entry point
- `scripts/memory_cli.py`: Command-line interface
- `scripts/setup_database.py`: Initialize SQLite database
- `scripts/sync_to_cloud.py`: Manual cloud sync
- `scripts/export_graph.py`: Export knowledge graph visualization

Performance tuning:

- Run `VACUUM` on the SQLite database periodically
- Tune the `top_k` search parameter
- Adjust the `similarity_threshold`

Dive-Memory v3 integrates with Dive AI V20:
```python
from dive_ai import DiveOrchestrator
from dive_memory_v3 import DiveMemory

# Initialize
orchestrator = DiveOrchestrator()
memory = DiveMemory()

# Enable memory for the orchestrator
orchestrator.set_memory(memory)

# Execute a task with auto-context injection
result = orchestrator.execute(
    task="Build authentication system",
    use_memory=True  # Auto-inject relevant memories
)

# Store the execution results
memory.add(
    content=f"Task completed: {result.summary}",
    section="executions",
    tags=["authentication", "success"],
    metadata={"cost": result.cost, "time": result.duration}
)
```

Further reference material:

- `references/api_reference.md`
- `references/mcp_protocol.md`
- `references/schema.sql`
- `references/config.json`
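As a closing illustration: the `get_related(memory_id, max_depth=2)` call shown earlier amounts to a bounded breadth-first walk over memory links. A self-contained sketch (the memory ids and edge data are invented for the example, not the library's storage format):

```python
from collections import deque

# Toy knowledge graph: memory id -> ids of linked memories.
edges = {
    "jwt-fix": ["token-rotation", "auth-design"],
    "token-rotation": ["session-store"],
    "auth-design": [],
    "session-store": [],
}

def get_related(start, max_depth=2):
    # Breadth-first walk of memory links, stopping after max_depth hops.
    seen, frontier, related = {start}, deque([(start, 0)]), []
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue  # don't expand nodes at the depth limit
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                related.append(nxt)
                frontier.append((nxt, depth + 1))
    return related

print(get_related("jwt-fix"))  # ['token-rotation', 'auth-design', 'session-store']
```

Bounding the depth is what keeps context injection focused: directly linked memories surface first, and distant associations stay out of the prompt.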