Bootstrap a new LLM-maintained wiki at a chosen folder, following the llm-wiki.md pattern (a three-layer memex: raw sources, LLM-generated wiki pages, and a CLAUDE.md or AGENTS.md schema that tells the LLM how to ingest and maintain the wiki). The skill creates the directory layout, writes a tailored schema file, and seeds index.md and log.md with a bootstrap entry. Use this skill when the user asks to "set up an llm-wiki", "create an LLM wiki", "bootstrap a wiki", "instantiate the llm-wiki pattern", or invokes /setup-llm-wiki. The skill asks the user about the target folder, domain (research deep-dive / personalised work wiki / personal knowledge base / business-team wiki / reading a book / combination), source types (web articles, academic PDFs, meeting/podcast transcripts, own notes), image handling, optional search tooling (qmd), schema filename (CLAUDE.md or AGENTS.md), and optional symlinks to sibling folders if the target sits inside an Obsidian vault.
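As a rough illustration of what the bootstrap step produces, here is a minimal Python sketch. The raw/ and wiki/ subfolder names, the placeholder file contents, and the bootstrap_llm_wiki helper are assumptions for illustration only; the actual skill tailors the layout, schema text, and log entry to the user's answers.

```python
from datetime import date
from pathlib import Path


def bootstrap_llm_wiki(target: Path, schema_name: str = "CLAUDE.md") -> None:
    """Sketch of the three-layer layout: raw sources, wiki pages, and a schema file.

    Folder names and seed contents are illustrative assumptions, not the skill's exact output.
    """
    target.mkdir(parents=True, exist_ok=True)
    (target / "raw").mkdir(exist_ok=True)   # layer 1: raw sources (articles, PDFs, transcripts)
    (target / "wiki").mkdir(exist_ok=True)  # layer 2: LLM-generated wiki pages

    # Layer 3: the schema file that tells the LLM how to ingest and maintain the wiki.
    (target / schema_name).write_text(
        "# Wiki schema\n\nHow to ingest sources from raw/ into wiki/ pages.\n"
    )

    # Entry points: an index plus a maintenance log seeded with a bootstrap entry.
    (target / "index.md").write_text("# Index\n\n(no pages yet)\n")
    (target / "log.md").write_text(f"# Log\n\n- {date.today()}: wiki bootstrapped\n")


bootstrap_llm_wiki(Path("~/notes/my-wiki").expanduser(), schema_name="AGENTS.md")
```

In the skill itself, the schema file carries domain-specific ingestion and maintenance instructions rather than the one-line placeholder shown here, and the optional image folders, qmd search tooling, and Obsidian symlinks are added on top of this base layout.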
Automated review summary:

- Overall score: 87
- Best practices: 90% ("Does it follow best practices?")
- Impact: 83% (2.24x average score across 4 eval scenarios)
- Advisory: suggest reviewing before use
- Security: 1 medium severity finding. This skill can be installed, but you should review these findings before use.
The skill exposes the agent to untrusted, user-generated content from public third-party sources, creating a risk of indirect prompt injection. This includes browsing arbitrary URLs, reading social media posts or forum comments, and analyzing content from unknown websites.
Third-party content exposure detected (high risk: 0.70). The skill explicitly instructs the agent to pull user-generated content from external systems and public web sources — e.g., the MCP integrations section (assets/fragments/mcp-integrations.md) and the "Source types" handling in SKILL.md describe fetching from Granola, Slack, Notion, Gmail/Drive and web articles into raw/ and then ingesting them, so untrusted third-party content can be read and influence ingestion/agent behavior.