
knowledge-graph

tessl i github:jdrhyne/agent-skills --skill knowledge-graph

Three-Layer Memory System — automatic fact extraction, entity-based knowledge graph, and weekly synthesis. Manages life/areas/ entities with atomic facts and living summaries.
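
To make 'atomic facts' and 'entities' concrete: each entity directory under life/areas/ accumulates small, independently updatable facts, which the weekly synthesis later rolls up into a living summary. The sketch below shows what appending one such fact could look like; the directory layout, file name, and field names are illustrative assumptions, not the skill's documented schema.

```python
# Hypothetical "layer 1" atomic fact appended to an entity's facts.jsonl.
# Paths, identifiers, and field names are assumptions for illustration only.
import json, pathlib, datetime

entity = pathlib.Path("life/areas/people/jane-doe")  # hypothetical entity path
entity.mkdir(parents=True, exist_ok=True)

fact = {
    "id": "2024-06-01-001",                # hypothetical identifier scheme
    "subject": "jane-doe",
    "statement": "Prefers async updates over meetings",
    "source": "conversation on 2024-06-01",
    "recorded": datetime.date.today().isoformat(),
    "superseded_by": None,                 # set when a later fact contradicts this one
}

# Layer 1: append the atomic fact. Layer 2 would be the entity's living
# summary, regenerated by the weekly synthesis task.
with (entity / "facts.jsonl").open("a") as f:
    f.write(json.dumps(fact) + "\n")
```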

Overall: 60%


Validation: 75%
Criteria / Description / Result

description_trigger_hint: Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...'). Result: Warning

metadata_field: 'metadata' should map string keys to string values (see the sketch after this table). Result: Warning

license_field: 'license' field is missing. Result: Warning

body_output_format: No obvious output/return/format terms detected; consider specifying expected outputs. Result: Warning

Total: 12 / 16 (Passed)
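
The metadata_field and license_field warnings refer to SKILL.md's frontmatter: every value under metadata should be a plain string, and a license field should be present. A minimal sketch of that kind of check, assuming YAML frontmatter delimited by '---' and parsed with PyYAML (this illustrates what the warnings test; it is not the Tessl validator itself):

```python
# Rough sketch of the checks behind the metadata_field and license_field
# warnings. Frontmatter parsing (split on '---') is a simplifying assumption.
import yaml  # PyYAML

with open("SKILL.md") as f:
    frontmatter = yaml.safe_load(f.read().split("---")[1])

metadata = frontmatter.get("metadata") or {}
non_string = {k: v for k, v in metadata.items() if not isinstance(v, str)}
if non_string:
    print(f"Warning: metadata values should be strings: {non_string}")
if "license" not in frontmatter:
    print("Warning: 'license' field is missing")
```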

Implementation: 77%

This is a well-structured skill with excellent actionability and clear workflows for a complex multi-step system. The main weaknesses are verbosity (some explanatory sections could be trimmed) and the monolithic structure—setup instructions, cron configs, and AGENTS.md blocks could be moved to separate reference files to make the core skill more scannable.

Suggestions

Move the Setup section (directory creation, AGENTS.md blocks, cron configs) to a separate SETUP.md file and reference it from the main skill

Remove the 'Why This Matters' section under Low-token Recall Policy—the benefits are self-evident from the rules

Consider moving the detailed 'What Qualifies as a Durable Fact' examples to a REFERENCE.md file, keeping only a brief summary in the main skill
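
As context for the cron configs mentioned in the first suggestion, here is a rough sketch of what the weekly synthesis task does: collapse each entity's active (non-superseded) facts into a regenerated living summary. File names and fields are assumptions carried over from the fact sketch above, not the skill's actual implementation.

```python
# Hypothetical weekly synthesis pass: rebuild each entity's summary.md from
# its non-superseded facts. Directory layout and field names are assumptions.
import json
from pathlib import Path

for entity in Path("life/areas").glob("*/*"):
    facts_file = entity / "facts.jsonl"
    if not facts_file.exists():
        continue
    facts = [json.loads(line) for line in facts_file.read_text().splitlines() if line]
    active = [f for f in facts if not f.get("superseded_by")]
    bullets = "\n".join(f"- {f['statement']}" for f in active)
    (entity / "summary.md").write_text(f"# {entity.name}\n\n{bullets}\n")
```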

Dimension / Reasoning / Score

Conciseness (2 / 3): The skill is reasonably efficient but includes some unnecessary explanations (e.g., the 'Why This Matters' section explaining obvious benefits, the detailed 'Compounding Flywheel' ASCII diagram). Some sections like the architecture overview could be tightened.

Actionability (3 / 3): Provides fully executable bash commands, complete JSON schemas, and copy-paste ready code blocks for creating entities, setting up directories, and configuring cron jobs. The fact schema and task procedures are concrete and specific.

Workflow Clarity (3 / 3): Multi-step processes are clearly numbered with explicit validation checkpoints (e.g., 'Check existing facts.jsonl — skip if already known', 'If fact contradicts existing: supersede the old one'). The extraction and synthesis tasks have clear sequences with feedback loops for handling contradictions. (See the sketch after this table.)

Progressive Disclosure (2 / 3): The skill is quite long (~200 lines) with substantial inline content that could be split into separate reference files (e.g., the full AGENTS.md blocks, cron configuration examples, setup instructions). References to external files exist but the main file contains too much detail for an overview.

Total: 10 / 12 (Passed)
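
The Workflow Clarity row quotes two extraction checkpoints: skip a fact that is already recorded in facts.jsonl, and supersede an existing fact when a new one contradicts it. A minimal sketch of that flow, using placeholder duplicate and contradiction heuristics rather than the skill's actual logic:

```python
# Sketch of the extraction checkpoints: dedupe against existing facts, then
# mark contradicted facts as superseded. Heuristics here are placeholders.
import json
from pathlib import Path

def add_fact(entity_dir: Path, new_fact: dict) -> None:
    """Append a fact, skipping duplicates and superseding contradicted ones."""
    facts_file = entity_dir / "facts.jsonl"
    facts = []
    if facts_file.exists():
        facts = [json.loads(line) for line in facts_file.read_text().splitlines() if line]

    # Checkpoint 1: skip if the fact is already known.
    if any(f["statement"] == new_fact["statement"] for f in facts):
        return

    # Checkpoint 2: supersede any existing fact the new one contradicts.
    # ("Same topic" stands in for real contradiction detection.)
    if new_fact.get("topic"):
        for f in facts:
            if f.get("topic") == new_fact["topic"] and not f.get("superseded_by"):
                f["superseded_by"] = new_fact["id"]

    facts_file.parent.mkdir(parents=True, exist_ok=True)
    facts_file.write_text("\n".join(json.dumps(f) for f in facts + [new_fact]) + "\n")
```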

Activation: 17%

This description uses technical implementation language rather than user-facing terminology, making it difficult for Claude to match against natural user requests. It lacks explicit trigger guidance ('Use when...') and relies on jargon like 'atomic facts' and 'entity-based knowledge graph' that users would never say. The description explains the system's architecture rather than its practical utility.

Suggestions

Add a 'Use when...' clause with natural trigger terms like 'remember this', 'what do I know about', 'track information about [person/project]', 'personal notes'

Replace technical jargon with user-facing language: instead of 'atomic facts' use 'key details'; instead of 'entity-based knowledge graph' use 'information about people, projects, and topics'

Add concrete actions users would request: 'Store facts about people and projects, recall information, summarize what you know about a topic'
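
Folding those suggestions together, one possible rewrite of the description (illustrative wording only, shown here as a plain string):

```python
# One possible description incorporating the suggested trigger terms and a
# "Use when..." clause; the wording is an illustration, not an official rewrite.
description = (
    "Personal memory for people, projects, and topics: store key details, "
    "recall what you know, and keep weekly summaries up to date under "
    "life/areas/. Use when the user says 'remember this', 'what do I know "
    "about X', or 'track information about [person/project]'."
)
```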

Dimension / Reasoning / Score

Specificity (2 / 3): Names the domain (memory system) and some actions (fact extraction, knowledge graph, weekly synthesis), but the actions are somewhat abstract rather than concrete user-facing operations like 'create', 'update', 'query'.

Completeness (1 / 3): Describes what the system is (three-layer memory) but lacks any explicit 'Use when...' clause or trigger guidance. The 'when' is completely missing, which, per the rubric, caps this at a maximum of 2, but the 'what' is also weak.

Trigger Term Quality (1 / 3): Uses technical jargon ('entity-based knowledge graph', 'atomic facts', 'living summaries') that users would not naturally say. Missing natural terms like 'remember', 'notes', 'track information', 'personal knowledge'.

Distinctiveness / Conflict Risk (2 / 3): The 'life/areas/' path and 'three-layer' structure provide some distinctiveness, but 'memory system' and 'knowledge graph' could overlap with other note-taking or knowledge management skills.

Total: 6 / 12 (Passed)

Reviewed

