oh-my-ai/llm-wiki

Maintains a persistent, interlinked markdown wiki between immutable raw sources and answers: ingest, query, lint, index and log—compounding knowledge instead of one-shot RAG.


Research Wiki: First Ingest from Schema Config

Problem/Feature Description

A machine learning researcher named Dr. Chen has been accumulating raw notes and papers in a local folder for several months. She has decided to start maintaining a structured knowledge base to help her synthesize ideas across sources. She already wrote a configuration file (AGENTS.md) that defines where things should live in her wiki—including the raw sources root, the wiki root, the index filename, the log filename, and naming conventions for topic pages.

Dr. Chen has one raw source ready to ingest: a recent summary she wrote on attention mechanisms. She wants the agent to set up the wiki and ingest this article so the knowledge base is ready to use. She expects the agent to respect her directory layout exactly as configured, rather than making up its own folder structure.

Output Specification

Ingest the raw source according to the wiki schema defined in AGENTS.md. The expected outputs are:

  • One or more new or updated wiki pages under the wiki root defined in the schema
  • An updated index file at the path defined in the schema
  • A log entry appended to the log file defined in the schema

All file paths used must match the schema configuration—do not create directories or files at paths not defined in or derived from the schema.
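For concreteness, a hypothetical log entry that would satisfy this schema might look like the following (the keyword and short title are illustrative, not prescribed by the scenario; the date and paths come from the provided inputs). The index would similarly gain entries under its configured categories.

```markdown
## [2026-03-15] ingest | Attention mechanisms summary

Ingested notes/raw/2026-03-15-attention-mechanisms.md into
notes/wiki/topics/attention-mechanisms.md.
```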

Input Files

The following files are provided as inputs. Extract them before beginning.

=============== FILE: AGENTS.md ===============

Wiki Configuration

Paths

  • Raw sources root: notes/raw/
  • Wiki root: notes/wiki/
  • Index file: notes/wiki/index.md
  • Log file: notes/wiki/log.md
  • Topic pages: notes/wiki/topics/
  • People pages: notes/wiki/people/

Naming conventions

  • Topic filenames: lowercase, hyphen-separated (e.g. attention-mechanisms.md)
  • People filenames: firstname-lastname.md
  • Log headings use the format: ## [YYYY-MM-DD] <keyword> | <short title>

Categories in index

The index groups entries under: ## Topics, ## People, ## Sources

=============== FILE: notes/raw/2026-03-15-attention-mechanisms.md ===============

Attention Mechanisms in Transformers

Date: 2026-03-15
Author: Vaswani et al. (summary by Dr. Chen)

Overview

Attention mechanisms allow neural networks to selectively focus on different parts of the input when producing each output token. The key insight is the scaled dot-product attention formula:

Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) * V

where Q, K, V are the query, key, and value matrices, and d_k is the key dimension. Scaling by sqrt(d_k) keeps the dot products from growing with d_k; without it, large d_k would push the softmax into saturated regions where gradients vanish.
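As a minimal sketch (an illustration, not part of the original summary), the scaled dot-product formula can be written directly in NumPy:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (n_queries, n_keys)
    # numerically stable softmax over the key axis
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                                 # (n_queries, d_v)

rng = np.random.default_rng(0)
Q = rng.random((2, 4))   # 2 queries, key dimension 4
K = rng.random((3, 4))   # 3 keys
V = rng.random((3, 5))   # 3 values, value dimension 5
out = scaled_dot_product_attention(Q, K, V)
```

Each output row is a convex combination of the rows of V, with weights given by the softmax.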

Multi-head attention

Multi-head attention runs h parallel attention operations ("heads") over different linear projections of Q, K, V, then concatenates and projects the results. This lets the model attend to different representation subspaces simultaneously.
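The split-attend-concatenate-project pattern described above can be sketched as follows (again illustrative; the weight names W_q, W_k, W_v, W_o and the per-head slicing are assumptions of this sketch, not details from the summary):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(X, W_q, W_k, W_v, W_o, h):
    """Run h parallel attention heads over slices of the projections,
    then concatenate the head outputs and apply a final projection."""
    n, d_model = X.shape
    d_head = d_model // h
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    heads = []
    for i in range(h):
        sl = slice(i * d_head, (i + 1) * d_head)
        q, k, v = Q[:, sl], K[:, sl], V[:, sl]
        scores = q @ k.T / np.sqrt(d_head)
        heads.append(softmax(scores) @ v)
    return np.concatenate(heads, axis=-1) @ W_o

rng = np.random.default_rng(0)
d_model, h, n = 8, 2, 3
X = rng.random((n, d_model))
W_q, W_k, W_v, W_o = (rng.random((d_model, d_model)) for _ in range(4))
Y = multi_head_attention(X, W_q, W_k, W_v, W_o, h)
```

Because each head attends over a different d_model/h-dimensional slice, the heads can specialize to different representation subspaces while the output shape stays (n, d_model).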

Key researchers

  • Ashish Vaswani – lead author of the original "Attention Is All You Need" paper (2017)
  • Noam Shazeer – co-author; later contributions to mixture-of-experts architectures

Relationship to earlier work

Self-attention builds on sequence-to-sequence models and earlier additive attention (Bahdanau et al., 2015), but replaces recurrence entirely.

Open questions

  • How does attention head specialization emerge during training?
  • Is there a principled way to prune heads without accuracy loss?
