
capture

Capture a web link, summarize it, and save it to an Obsidian vault. Use when the user wants to save a URL to their knowledge base.

Install with Tessl CLI

npx tessl i github:tomashrdlicka/engram --skill capture

capture

Capture a web link, summarize it, and save it to an Obsidian vault.

Usage

/capture <url>
/capture <url> --context "Why this is interesting"

Examples

/capture https://example.com/article-about-llms
/capture https://x.com/user/status/123 --context "Good prompting tips"

Instructions

When the user invokes /capture, follow these steps:

Step 0: Load Configuration

Read config.json from the engram project root to get the vault path:

{
  "vault_path": "~/Documents/Obsidian/WebCapture"
}

Use the vault_path value as {VAULT_PATH} in all paths below. Expand ~ to the user's home directory.
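A minimal Python sketch of this step (the `load_vault_path` helper name and the throwaway config file are illustrative, not part of the skill):

```python
import json
import os
import tempfile

def load_vault_path(config_path: str) -> str:
    """Read config.json and expand ~ in vault_path to the user's home directory."""
    with open(config_path) as f:
        config = json.load(f)
    return os.path.expanduser(config["vault_path"])

# Demo with a throwaway config file:
tmp = os.path.join(tempfile.mkdtemp(), "config.json")
with open(tmp, "w") as f:
    json.dump({"vault_path": "~/Documents/Obsidian/WebCapture"}, f)

print(load_vault_path(tmp))  # absolute path, ~ expanded
```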

Step 1: Parse Input

Extract the URL from the command. If --context is provided, store it as user_context.

Step 2: Fetch Content

Use WebFetch to retrieve the page content:

WebFetch url=<url> prompt="Extract the following from this page:
1. Title
2. Author (if available)
3. Main content/body text
4. Publication date (if available) - this is the date the content was originally published, NOT today's date
5. Key topics/themes
Return as structured data."

Important: The published date is when the original content was created/posted. The captured date is always today (when the user saves the link). These are separate fields. Always try to extract the real publication date from the page. If unavailable, set published to null.

Step 3: Determine Content Type

Based on the URL and content, classify as one of:

  • article - Blog posts, news articles, long-form content
  • video - YouTube, Vimeo, video content
  • tool - Product pages, SaaS tools, apps
  • research - Academic papers, arxiv, technical research
  • x_post - Twitter/X posts and threads
  • photo - Screenshots, photos, infographics, diagrams, visual content where the image IS the main content (not just an article with images)
  • quick - Everything else

URL patterns and signals to help:

  • youtube.com, vimeo.com -> video
  • x.com, twitter.com -> x_post (but if user_context says it's mainly a photo/screenshot, use photo)
  • arxiv.org, *.edu, papers -> research
  • Product landing pages with pricing -> tool
  • Direct image URLs (.png, .jpg, .webp), or posts where user says "screenshot" / "photo" / "image" -> photo

Step 4: Determine Topic

Categorize into one of these topics based on content:

  • ai-ml - AI, ML, LLMs, agents, prompting
  • dev-tools - Developer tools, IDEs, CLI
  • product-ideas - Business, startups, products
  • design - UI/UX, design systems
  • productivity - Workflows, productivity systems
  • _inbox - If unsure, use inbox

Step 5: Generate Summary, Key Points, and Enriched Tags

Create:

  1. A 1-2 sentence summary capturing the essence
  2. 3-5 key points/takeaways
  3. Enriched tags (6-12 total):
    • Start with user-provided tags (from --context or queue) as seeds
    • Analyze the fetched content to extract additional tags: technologies mentioned, key concepts, people/authors, methodologies, tools
    • Merge with user tags, deduplicate
    • Tags should be specific enough for connection discovery (prefer claude-code over software, prefer llm-prompting over ai)
    • Cap at 10-12 tags total

If user_context was provided, use it to guide which aspects to emphasize.
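The merge-dedupe-cap part of tag enrichment can be sketched as follows (`enrich_tags` is a hypothetical helper name; extracting the content-derived tags themselves is the model's job):

```python
def enrich_tags(user_tags, extracted_tags, cap=12):
    """Merge user-provided seed tags with content-derived tags,
    deduplicating case-insensitively while preserving order, capped at `cap`."""
    seen, merged = set(), []
    for tag in list(user_tags) + list(extracted_tags):
        tag = tag.lower().strip()
        if tag and tag not in seen:
            seen.add(tag)
            merged.append(tag)
    return merged[:cap]

print(enrich_tags(["claude-code"], ["llm-prompting", "Claude-Code", "agents"]))
# -> ['claude-code', 'llm-prompting', 'agents']
```

User tags come first so they survive the cap when the merged list is long.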

Intent Detection

After enriching tags, detect intent:

  • If queue item has share_twitter: true, add tag share-twitter and set share_intent: "twitter" on the index entry
  • If queue item has deep_learn: true, add tag deep-learn and set deep_learn: true on the index entry

Auto-detection fallback (for items without checkboxes):

  • If user_context contains phrases like "post on X", "post on twitter", "share on twitter", "postable on X", "tweet this", "share this", set share_intent: "twitter"
  • If user_context contains phrases like "learn", "study", "deep dive", "important", "reread", "want to learn", "understand this", set deep_learn: true
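One simple realization of the fallback is plain substring matching over the context (`detect_intent` is a hypothetical helper; the phrase lists mirror the bullets above):

```python
TWITTER_PHRASES = ("post on x", "post on twitter", "share on twitter",
                   "postable on x", "tweet this", "share this")
LEARN_PHRASES = ("learn", "study", "deep dive", "important", "reread",
                 "understand this")  # "want to learn" is covered by "learn"

def detect_intent(user_context: str) -> dict:
    """Fallback intent detection from free-text context via substring match."""
    text = (user_context or "").lower()
    return {
        "share_intent": "twitter" if any(p in text for p in TWITTER_PHRASES) else None,
        "deep_learn": any(p in text for p in LEARN_PHRASES),
    }

print(detect_intent("Good prompting tips - tweet this"))
# -> {'share_intent': 'twitter', 'deep_learn': False}
```

Substring matching is deliberately loose ("learn" also matches "learning"); explicit queue checkboxes always take precedence.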

Step 6: Generate Note ID and Slug

  • ID: Generate a unique ID (e.g., a UUID, or the current timestamp plus a random suffix)
  • Slug: Create from title (lowercase, hyphens, max 50 chars)
    • Example: "Building LLM Agents That Work" -> "building-llm-agents-that-work"
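The slug rule can be sketched in a few lines of Python (`make_slug` is a hypothetical helper name):

```python
import re

def make_slug(title: str, max_len: int = 50) -> str:
    """Lowercase the title, collapse non-alphanumeric runs into hyphens,
    and truncate to max_len without leaving a trailing hyphen."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slug[:max_len].rstrip("-")

print(make_slug("Building LLM Agents That Work"))
# -> building-llm-agents-that-work
```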

Step 7: Find Related Notes and Build Connections

7a: Find Related Notes

Read {VAULT_PATH}/_system/index.json and search for related notes:

  • Notes sharing 2+ tags with the new note
  • Notes in the same topic
  • Notes with overlapping themes in summaries or key_points
  • Select the top 3 most related notes
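The tag- and topic-based signals can be scored mechanically, as in this sketch (`find_related` is a hypothetical helper; the summary/key-point theme overlap is left to the model's judgment and is not shown):

```python
def find_related(new_note: dict, index_notes: list, top_n: int = 3) -> list:
    """Rank existing notes by shared tags and same-topic membership;
    keep notes with 2+ shared tags or a matching topic, best top_n first."""
    scored = []
    for note in index_notes:
        shared = len(set(new_note["tags"]) & set(note["tags"]))
        same_topic = note["topic"] == new_note["topic"]
        if shared >= 2 or same_topic:
            scored.append((shared + int(same_topic), note["path"]))
    scored.sort(reverse=True)
    return [path for _, path in scored[:top_n]]

new = {"tags": ["llm-prompting", "agents", "claude-code"], "topic": "ai-ml"}
existing = [
    {"path": "content/ai-ml/article-a", "tags": ["agents", "claude-code"], "topic": "ai-ml"},
    {"path": "content/design/article-b", "tags": ["figma"], "topic": "design"},
]
print(find_related(new, existing))  # -> ['content/ai-ml/article-a']
```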

7b: Add Bidirectional Links

For each related note found:

  1. In the new note's ## Connections section, add a wiki-link with one-sentence context woven into prose. Example:
    This pairs well with [[content/dev-tools/article-writing-a-good-claude-md|Writing a Good CLAUDE.md]] which covers the practical side of agent configuration.
  2. Read each related note's file and append a backlink to the new note in its ## Connections section (create the section if it doesn't exist). Use the same prose style.

Important: If no related notes are found (e.g., first note in a new topic), add a comment: <!-- No connections yet - will be linked as more notes are captured -->

Step 8: Create Note File

Create the note at: {VAULT_PATH}/content/{topic}/{type}-{slug}.md

Use the appropriate template from _system/templates/ and fill in:

  • All frontmatter fields
  • Summary
  • Key points
  • Any extracted quotes
  • ## Connections section with the wiki-links from Step 7b

Step 9: Update Index

Read {VAULT_PATH}/_system/index.json and:

  1. Add the new note to the notes array with all fields including:
    • related: array of [{"path": "...", "reason": "shared tags / theme overlap"}] from Step 7a
  2. Update related arrays on the related notes' index entries to include the new note
  3. Increment stats.total_notes
  4. Increment stats.unread
  5. Increment stats.by_type.{type}
  6. Increment stats.by_topic.{topic}
  7. Update stats.total_connections (count of all related pairs across all notes)
  8. Update stats.isolated_notes (count of notes with empty related array)
  9. Update last_updated timestamp
  10. Update stats.twitter_queue (count of notes with share_intent == "twitter" and twitter_posted != true)
  11. Update stats.learning_resources (count of notes with deep_learn == true)
  12. Write back the updated index

New optional fields on index entries:

  • share_intent: "twitter" or null (default null)
  • deep_learn: true or false (default false)
  • twitter_posted: true or false (default false)
  • twitter_posted_at: ISO date or null
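Rather than incrementing counters one by one, the aggregate stats can be recomputed from the notes array each time, which keeps the index self-consistent. A sketch under the field names used above (the function name and the recompute-from-scratch choice are assumptions, not prescribed by the skill):

```python
def recompute_stats(notes: list) -> dict:
    """Derive the aggregate stats fields from the notes array."""
    return {
        "total_notes": len(notes),
        # counts each related entry once per note it appears on
        "total_connections": sum(len(n.get("related", [])) for n in notes),
        "isolated_notes": sum(1 for n in notes if not n.get("related")),
        "twitter_queue": sum(1 for n in notes
                             if n.get("share_intent") == "twitter"
                             and not n.get("twitter_posted")),
        "learning_resources": sum(1 for n in notes if n.get("deep_learn")),
    }

notes = [
    {"related": [{"path": "b"}], "share_intent": "twitter", "twitter_posted": False},
    {"related": [], "deep_learn": True},
]
print(recompute_stats(notes))
```

The by_type/by_topic breakdowns and unread count follow the same pattern over the corresponding fields.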

Step 10: Update Topic Page

Read {VAULT_PATH}/content/{topic}/_topic.md and:

  1. Add the new note to the ## Notes section as a wiki-link entry:
    - [{type}] [[content/{topic}/{type}-{slug}|{title}]] - {one-line summary}
  2. If the new note creates a meaningful cluster with existing notes in this topic, add or update the ## Threads section with a brief description of the thread.

Step 11: Regenerate Views

After updating the index, regenerate the six view files from index.json data:

  1. views/by-date.md - List all notes sorted by capture date (newest first). Group by date (e.g., "## 2026-02-05"). Each entry: - [{type}] [[{note_path}|{title}]] - {summary snippet}

  2. views/by-type.md - Group notes by content type (article, video, photo, x_post, tool, research, quick). Under each heading, list notes sorted by date.

  3. views/unread.md - List only notes where read: false in the index, sorted by date. Each entry includes title, type, topic, and capture date.

  4. views/favorites.md - List only notes where favorite: true in the index, sorted by date.

  5. views/twitter-queue.md - Notes with share_intent: "twitter" and twitter_posted != true. Each entry: title with source URL link, 1-2 sentence summary, suggested tweet angle based on key_points/user_context, tags for hashtag inspiration. Header shows count.

  6. views/learnings.md - Notes with deep_learn: true, grouped by topic. Each topic group shows: topic name with note count, each note with title/summary/connections to other learning resources. Footer suggests reading order.

For each view file, read _system/index.json, filter/sort the notes array, and write the complete markdown file. Use Obsidian wiki-links ([[path|title]]) for note references.
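Each view is the same filter/sort/render pattern; here is a sketch of one (by-date), assuming the index field names described earlier (`render_by_date` is a hypothetical helper name):

```python
from collections import defaultdict

def render_by_date(notes: list) -> str:
    """Build views/by-date.md: group notes by capture date, newest group first."""
    groups = defaultdict(list)
    for n in notes:
        groups[n["captured"]].append(n)
    lines = ["# By Date", ""]
    for date in sorted(groups, reverse=True):
        lines.append(f"## {date}")
        for n in groups[date]:
            lines.append(f"- [{n['type']}] [[{n['path']}|{n['title']}]] - {n['summary']}")
        lines.append("")
    return "\n".join(lines)

notes = [
    {"captured": "2026-02-04", "type": "article",
     "path": "content/ai-ml/article-a", "title": "A", "summary": "First note"},
    {"captured": "2026-02-05", "type": "video",
     "path": "content/ai-ml/video-b", "title": "B", "summary": "Second note"},
]
print(render_by_date(notes))
```

The other views swap in a filter (`read: false`, `favorite: true`, `share_intent: "twitter"`, `deep_learn: true`) or a different grouping key before rendering.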

Step 12: Report Success

Tell the user:

  • Note created at: content/{topic}/{type}-{slug}.md
  • Summary of what was captured
  • Tags applied (highlight enriched tags beyond user-provided ones)
  • Connections made (which notes were linked and why)

Step 12b: Git Sync Vault

After all files are written, push vault changes to git:

cd {VAULT_PATH}
git add -A
git commit -m "vault: captured {slug} ({date})"
git push origin main

If the vault is not a git repo yet, skip this step silently.

Error Handling

  • If WebFetch fails, inform user and ask if they want to create a quick note with just the URL
  • If URL is invalid, ask user to check the URL
  • If content is paywalled, note this and capture what's available

Notes

  • The vault is at {VAULT_PATH}/
  • Index file is at _system/index.json
  • Queue file is at data/queue.json
  • Always update both the note file AND the index
  • Always read the vault CLAUDE.md at {VAULT_PATH}/CLAUDE.md for vault conventions
Repository
tomashrdlicka/engram