AI-powered web search via Exa with content extraction. Use when user says "exa search", "web search with content", "find similar pages", or needs broad web results beyond academic databases (arXiv, Semantic Scholar).
Search query: $ARGUMENTS
Exa is the broad web search source with built-in content extraction:
| Skill | Best for |
|---|---|
| `/arxiv` | Direct preprint search and PDF download |
| `/semantic-scholar` | Published venue papers (IEEE, ACM, Springer), citation counts |
| `/deepxiv` | Layered reading: search, brief, section map, section reads |
| `/exa-search` | Broad web search: blogs, docs, news, companies, research papers — with content extraction |
Use Exa when you need results beyond academic databases, or when you want content (highlights, full text, summaries) extracted alongside search results.
The search script lives at `tools/exa_search.py`, relative to the current project.

Overrides (append to arguments):
- `/exa-search "RAG pipelines" — max: 5` — top 5 results
- `/exa-search "diffusion models" — category: research paper` — research papers only
- `/exa-search "startup funding" — category: news, start date: 2025-01-01` — recent news
- `/exa-search "transformer" — content: text, max chars: 8000` — full text mode
- `/exa-search "transformer" — content: summary` — LLM-generated summaries
- `/exa-search "transformer" — domains: arxiv.org,huggingface.co` — domain filter
- `/exa-search "https://arxiv.org/abs/2301.07041" — similar` — find similar pages
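The override grammar above could be parsed with a small helper along these lines. This is a sketch only: `parse_overrides` is a hypothetical name, and the `— key: value, key: value` format is inferred from the examples, not a formal spec.

```python
def parse_overrides(raw):
    """Split an argument string into (query, overrides dict).

    Hypothetical helper: assumes the query comes first, followed by
    an em-dash-separated list of 'key: value' pairs or bare flags.
    """
    query, _, rest = raw.partition(" — ")
    overrides = {}
    # Split on ", " (comma-space) so comma-joined domain lists survive.
    for item in rest.split(", "):
        item = item.strip()
        if not item:
            continue
        key, sep, value = item.partition(":")
        if sep:
            overrides[key.strip()] = value.strip()
        else:
            overrides[item] = True  # bare flag such as 'similar'
    return query.strip().strip('"'), overrides
```

Splitting on comma-space rather than bare commas is what keeps a value like `domains: arxiv.org,huggingface.co` intact.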
Exa requires the exa-py SDK and an API key:
pip install exa-py

Set your API key:

export EXA_API_KEY=your-key-here

Get a key from exa.ai.
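A pre-flight check along these lines can catch missing setup before any search runs. It is an illustrative sketch; the `check_setup` helper is hypothetical, not part of the tool.

```python
import importlib.util
import os

def check_setup():
    """Return a list of setup problems; an empty list means ready."""
    problems = []
    # The SDK must be importable before the script can run.
    if importlib.util.find_spec("exa_py") is None:
        problems.append("exa-py not installed (pip install exa-py)")
    # The key is read from the environment, never hardcoded.
    if not os.environ.get("EXA_API_KEY"):
        problems.append("EXA_API_KEY is not set")
    return problems
```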
Parse $ARGUMENTS for:
- the search query (or a URL, in find-similar mode)
- similar: run find-similar mode instead of search
- category: research paper, news, company, personal site, financial report, people
- content: highlights (default), text, summary, none
- search type: auto (default), neural, fast, instant

Locate the script:

SCRIPT=$(find tools/ -name "exa_search.py" 2>/dev/null | head -1)

If not found, tell the user:
exa_search.py not found. Make sure tools/exa_search.py exists and exa-py is installed:
pip install exa-py

Standard search:

python3 "$SCRIPT" search "QUERY" --max 10 --content highlights

With filters:
python3 "$SCRIPT" search "QUERY" --max 10 \
--category "research paper" \
--start-date 2025-01-01 \
  --content text --max-chars 8000

Find similar pages:
python3 "$SCRIPT" find-similar "URL" --max 5 --content highlights

Get content for known URLs:
python3 "$SCRIPT" get-contents "URL1" "URL2" --content text

Format results as a structured table:
| # | Title | Authors | Venue/Publisher | URL | Date | Key Content |
|---|-------|---------|-----------------|-----|------|-------------|

For each result:
- For `category: "research paper"` hits only, also record authors (from Exa's author/authors fields, or as a fallback, parsed from the result snippet) and venue/publisher (from publisher, source, or the domain hosting the paper). These are needed by Step 6's wiki hook; if either is unavailable for a given hit, skip wiki ingest for that one hit and log a note.

After presenting results, suggest:
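The table can be filled from each result with a small helper like the sketch below. The field names (`title`, `author`, `publisher`, `highlights`, `summary`, `published_date`) are assumptions about the tool's JSON output, not a documented schema.

```python
def result_row(index, result):
    """Render one search result as a markdown table row."""
    # Prefer a highlight; fall back to the summary, then empty.
    key_content = (result.get("highlights") or [result.get("summary", "")])[0]
    cells = [
        str(index),
        result.get("title", ""),
        result.get("author", "-"),       # often absent for non-papers
        result.get("publisher", "-"),
        result.get("url", ""),
        result.get("published_date", ""),
        key_content[:80],                # keep the row readable
    ]
    return "| " + " | ".join(cells) + " |"
```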
Required when research-wiki/ exists AND the search returned
results of category: "research paper"; skip silently otherwise.
General web results (blog posts, docs, news) are not ingested —
the wiki is for papers only.
For each research paper hit, try to recover an arXiv ID from the URL
(arxiv.org/abs/<id>); if present, use --arxiv-id. Otherwise fall
back to manual metadata:
if [ -d research-wiki/ ] and query category was "research paper":
  for each research-paper hit in results:
    if URL matches arxiv.org/abs/<id>:
      python3 tools/research_wiki.py ingest_paper research-wiki/ \
        --arxiv-id "<id>"
    else:
      python3 tools/research_wiki.py ingest_paper research-wiki/ \
        --title "<title>" --authors "<authors joined by , >" \
        --year <year> --venue "<venue or publisher>"

The helper handles slug / dedup / page / index / log — do not handwrite `papers/<slug>.md`. See `shared-references/integration-contract.md`.
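The arXiv-ID recovery step above can be sketched as follows. The helper name is hypothetical, and the pattern covers modern IDs only; old-style IDs like `cs/0301001` would need a broader pattern.

```python
import re

# Modern arXiv IDs: YYMM.NNNNN with an optional version suffix.
ARXIV_ABS = re.compile(r"arxiv\.org/abs/([0-9]{4}\.[0-9]{4,5})(?:v[0-9]+)?/?$")

def arxiv_id_from_url(url):
    """Return the arXiv ID embedded in an abs URL, or None."""
    m = ARXIV_ABS.search(url)
    return m.group(1) if m else None
```

A `None` return signals the manual-metadata fallback path.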
Tips:

- Verify `EXA_API_KEY` is set before searching
- Use `highlights` content mode for a good balance of speed and context
- Use `category: "research paper"` when the user is clearly looking for academic content
- Use `text` content mode when the user needs full page content
- Combine with `/arxiv` or `/semantic-scholar` for comprehensive literature coverage