Mema's personal brain - SQLite metadata index for documents and Redis short-term context buffer. Use for organizing workspace knowledge paths and managing ephemeral session state.
Score: 71

Quality: 61% (Does it follow best practices?)
Impact: 90% (1.05x average score across 3 eval scenarios)
Evals: Passed, no known issues
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/1999azzar/mema/SKILL.md`

Quality
Discovery
35%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies its technical components (SQLite, Redis) but relies heavily on abstract jargon ('knowledge paths', 'ephemeral session state') that users wouldn't naturally use. It lacks specific concrete actions and natural trigger terms, making it difficult for Claude to reliably select this skill when appropriate.
Suggestions
- Replace jargon with natural trigger terms users would say, e.g., 'find a document', 'remember this for later', 'search my notes', 'what did I work on'.
- List specific concrete actions like 'index and search document metadata', 'store temporary session context', 'query workspace file locations', 'track recently accessed documents'.
- Expand the 'Use when' clause with explicit scenarios, e.g., 'Use when the user asks to find, organize, or recall documents in the workspace, or when session context needs to be preserved across interactions'.
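As an illustration, the suggestions above could be combined into a rewritten frontmatter description. This is a hypothetical sketch built from the suggested phrasing, not the skill's actual metadata:

```yaml
description: >
  Index and search document metadata with SQLite; store temporary session
  context in Redis. Use when the user asks to find, organize, or recall
  workspace documents ("find a document", "search my notes", "what did I
  work on") or wants context preserved across interactions ("remember
  this for later").
```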
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (SQLite metadata index, Redis context buffer) and some actions (organizing, managing), but doesn't list specific concrete actions like 'index documents', 'query metadata', 'store session variables', or 'flush context buffers'. | 2 / 3 |
| Completeness | Has a 'what' (SQLite metadata index and Redis context buffer) and a 'when' clause ('Use for organizing workspace knowledge paths and managing ephemeral session state'), but the 'when' is vague and uses abstract language rather than explicit, concrete trigger scenarios. | 2 / 3 |
| Trigger Term Quality | Uses technical jargon like 'metadata index', 'ephemeral session state', and 'knowledge paths' that users would rarely naturally say. Missing natural trigger terms a user might use like 'find documents', 'remember this', 'search notes', or 'workspace memory'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The mention of SQLite and Redis provides some technical distinctiveness, but 'organizing workspace knowledge' and 'managing session state' are broad enough to potentially overlap with other knowledge management or workspace organization skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
87%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, concise skill that provides actionable CLI commands for all core operations. Its main weakness is the lack of validation/verification steps—there's no guidance on confirming successful initialization, handling Redis connection failures, or verifying that indexing operations completed correctly. Overall it's a strong skill body for a relatively simple tool.
Suggestions
- Add a verification step after `init` (e.g., `mema.py list` should return empty results) and mention expected behavior when Redis is unreachable.
- Include brief error handling guidance, such as what happens if you try to `mental get` a key that doesn't exist or if Redis is down.
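The second suggestion amounts to a graceful-degradation pattern. Below is a minimal sketch of that pattern; `UnreachableRedis` and `mental_get` are hypothetical stand-ins for illustration, not part of the skill's actual code:

```python
class UnreachableRedis:
    """Stand-in for a Redis client whose server is down (hypothetical)."""

    def get(self, key):
        raise ConnectionError("Redis unreachable")


def mental_get(client, key, default=None):
    """Fetch a short-term context key without crashing the session.

    A missing key and a dead Redis both fall back to the default,
    so the agent can report the gap instead of erroring out.
    """
    try:
        value = client.get(key)
    except ConnectionError:
        return default
    return value if value is not None else default


print(mental_get(UnreachableRedis(), "session:topic", default="(no context)"))
```

Spelling this behavior out in the skill body tells the agent what a failed lookup looks like and what to do next, which is exactly the feedback loop the review says is missing.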
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient. Every section serves a purpose, there's no explanation of what SQLite or Redis are, and no unnecessary padding. The notes are brief and informative. | 3 / 3 |
| Actionability | Provides specific, copy-paste ready CLI commands for every operation (index, list, mental set/get, init). Includes concrete paths, flags, and defaults. Claude can execute these directly. | 3 / 3 |
| Workflow Clarity | The setup workflow is clearly sequenced (copy env, configure, init), and individual operations are clear. However, there are no validation checkpoints, e.g., no way to verify the init succeeded, no error handling guidance if Redis is unavailable, and no feedback loop for failed indexing operations. | 2 / 3 |
| Progressive Disclosure | For a skill under 50 lines with a focused scope, the content is well-organized into logical sections (components, workflows, setup, security) with clear headers. No unnecessary nesting or external reference chains. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Validation
81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| metadata_field | 'metadata' should map string keys to string values | Warning |
| Total | | 9 / 11 (Passed) |
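Both warnings point at the `metadata` block in the SKILL.md frontmatter. A minimal sketch of a compliant block, assuming the spec wants a `metadata.version` key and string-only values; the values shown are hypothetical:

```yaml
metadata:
  version: "1.0.0"    # quoting keeps the value a string, not a number
  author: "1999azzar"  # every metadata value must be a plain string
```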