Process all queued links from the web capture queue into Obsidian notes. Use when the user wants to process their captured links.
Process all queued links from the web capture queue.
```
/process-links
/process-links --limit 5
/process-links --topic ai-ml
```

Options:

- `--limit N` - Process only the first N links
- `--topic TOPIC` - Only process links that would go to this topic
- `--dry-run` - Show what would be processed without doing it

When the user invokes `/process-links`, follow these steps:
Read `config.json` from the engram project root to get the vault path:

```json
{
  "vault_path": "~/Documents/Obsidian/WebCapture"
}
```

Use the `vault_path` value as `{VAULT_PATH}` in all paths below. Expand `~` to the user's home directory.
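This config step is small enough to sketch directly; a minimal Python version, assuming `config.json` sits in the current working directory and omitting error handling:

```python
import json
import os

def load_vault_path(config_path: str = "config.json") -> str:
    """Read config.json and return the vault path with ~ expanded."""
    with open(config_path) as f:
        config = json.load(f)
    # Expand "~" to the user's home directory, e.g.
    # "~/Documents/Obsidian/WebCapture" -> "/home/user/Documents/Obsidian/WebCapture"
    return os.path.expanduser(config["vault_path"])
```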
Read both files:

- `data/queue.json`
- `{VAULT_PATH}/_system/index.json` (for connection discovery)

Filter the queue to items with `status: "pending"`. Check each pending link against the `index.json` notes. If a match is found, mark the queue item as `status: "processed"` with `note_path` pointing to the existing note and `skipped_reason: "duplicate"`, then skip it. Report skipped duplicates in the summary.

Apply `--limit` if specified, and apply the `--topic` filter after determining each link's topic (if specified).

For each pending link:
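The pending-filter and duplicate-check logic above could be sketched as follows. This assumes queue items and index entries both carry a `url` key and index entries carry a `path` key; the real schemas are not spelled out in this document, so adjust field names accordingly:

```python
def select_pending(queue: list, index_notes: list, limit: int = None) -> list:
    """Filter the queue to pending items, marking URL duplicates as skipped."""
    known = {n["url"]: n["path"] for n in index_notes if "url" in n}
    to_process = []
    for item in queue:
        if item.get("status") != "pending":
            continue
        if item["url"] in known:
            # Duplicate: point at the existing note and skip it.
            item["status"] = "processed"
            item["note_path"] = known[item["url"]]
            item["skipped_reason"] = "duplicate"
            continue
        to_process.append(item)
    # --limit caps how many items are processed this run.
    return to_process[:limit] if limit else to_process
```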
Report: "Processing: {title} ({url})"

Use the same logic as `/capture`:
- In the new note's `## Connections` section, add wiki-links with prose context.
- Read each related note file and append a backlink in its `## Connections` section.
- Update the `related` array on the new note's index entry AND on the related notes' entries.
- Write the summary in the `## Notes` section.
- If `user_context` is present in the queue item, use it to guide summarization.
Intent detection (per-link):

- If `share_twitter: true`, add tag `share-twitter` and set `share_intent: "twitter"` on the index entry.
- If `deep_learn: true`, add tag `deep-learn` and set `deep_learn: true` on the index entry.
- If `user_context` contains phrases like "post on X", "post on twitter", "share on twitter", "postable on X", "tweet this", or "share this", set `share_intent: "twitter"`. If it contains "learn", "study", "deep dive", "important", "reread", "want to learn", or "understand this", set `deep_learn: true`.

After successful processing:
- Set the `processed_at` timestamp.
- Set `note_path` with the created note path.

If processing fails:

- Record an `error` field with the reason.

Write the updated queue back to `data/queue.json`.
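Taken together, the per-link intent detection and queue bookkeeping above might look like the following sketch. The `user_context`, `share_twitter`, and `deep_learn` field names come from this document, but the failed-item `status` value of `"error"` is an assumption the document does not confirm:

```python
from datetime import datetime, timezone

TWITTER_PHRASES = ("post on x", "post on twitter", "share on twitter",
                   "postable on x", "tweet this", "share this")
LEARN_PHRASES = ("learn", "study", "deep dive", "important", "reread",
                 "want to learn", "understand this")

def detect_intents(item: dict) -> dict:
    """Return intent flags for one queue item from explicit fields or user_context."""
    context = item.get("user_context", "").lower()
    intents = {}
    if item.get("share_twitter") or any(p in context for p in TWITTER_PHRASES):
        intents["share_intent"] = "twitter"
    if item.get("deep_learn") or any(p in context for p in LEARN_PHRASES):
        intents["deep_learn"] = True
    return intents

def finalize_item(item: dict, note_path: str = None, error: str = None) -> None:
    """Mark a queue item processed or failed, in place."""
    if error is None:
        item["status"] = "processed"
        item["note_path"] = note_path
        item["processed_at"] = datetime.now(timezone.utc).isoformat()
    else:
        # The exact failed-state status value is an assumption.
        item["status"] = "error"
        item["error"] = error
```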
After all links are processed (not per-link), regenerate the six view files once from `index.json` data:
- `views/by-date.md` - List all notes sorted by capture date (newest first). Group by date. Each entry: `- [{type}] [[{note_path}|{title}]] - {summary snippet}`
- `views/by-type.md` - Group notes by content type (article, video, x_post, photo, tool, research, quick). Under each heading, list notes sorted by date.
- `views/unread.md` - List only notes where `read: false`, sorted by date. Each entry includes title, type, topic, and capture date.
- `views/favorites.md` - List only notes where `favorite: true`, sorted by date.
- `views/twitter-queue.md` - Notes with `share_intent: "twitter"` and `twitter_posted != true`. Each entry: title with source URL link, a 1-2 sentence summary, a suggested tweet angle based on `key_points`/`user_context`, and tags for hashtag inspiration. Header shows the count.
- `views/learnings.md` - Notes with `deep_learn: true`, grouped by topic. Each topic group shows the topic name with note count, and each note's title, summary, and connections to other learning resources. Footer suggests a reading order.
Read `{VAULT_PATH}/_system/index.json`, filter/sort the notes array, and write each view file. Use Obsidian wiki-links (`[[path|title]]`).
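As one example of view regeneration, here is a minimal sketch of building `views/unread.md` from index entries. The `read`, `captured_at`, `path`, `title`, `type`, and `topic` field names are assumptions about the index schema, and notes missing a `read` flag are treated as read:

```python
def render_unread(index_notes: list) -> str:
    """Render the unread view as Markdown with Obsidian wiki-links."""
    unread = sorted((n for n in index_notes if not n.get("read", True)),
                    key=lambda n: n["captured_at"], reverse=True)
    lines = ["# Unread", ""]
    for n in unread:
        lines.append(f"- [[{n['path']}|{n['title']}]] "
                     f"({n['type']}, {n['topic']}, captured {n['captured_at']})")
    return "\n".join(lines) + "\n"
```

The other five views follow the same shape: filter and sort the index array, then emit one wiki-linked list entry per note.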
After all links are processed and views regenerated, push vault changes to git:

```shell
cd {VAULT_PATH}
git add -A
git commit -m "vault: processed N links (YYYY-MM-DD)"
git push origin main
```

If the vault is not a git repo yet, skip this step silently.
Tell the user:

```
Processed X links:
- [title1] -> content/topic/note.md (connected to: note-a, note-b)
- [title2] -> content/topic/note.md (connected to: note-c)

Failed: Y links
- [title3]: error reason

Remaining in queue: Z links
```

If `--dry-run` is specified:
Report what would be processed, but do not modify `data/queue.json` or write anything under `{VAULT_PATH}/`.

See `{VAULT_PATH}/CLAUDE.md` for vault conventions.