
exa-search

AI-powered web search via Exa with content extraction. Use when user says "exa search", "web search with content", "find similar pages", or needs broad web results beyond academic databases (arXiv, Semantic Scholar).

Score: 80

Quality: 77% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security by Snyk: Advisory (suggest reviewing before use)

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/exa-search/SKILL.md`

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid description with excellent trigger terms and completeness, including an explicit 'Use when' clause and clear differentiation from related academic search skills. The main weakness is that the 'what' portion could be more specific about the concrete actions performed (e.g., summarizing results, extracting full page content, finding semantically similar URLs).

Suggestions

Expand the capability list with more concrete actions, e.g., 'Searches the web via Exa API, extracts full page content, finds semantically similar pages, and filters by domain or date.'
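The concrete actions named in that suggestion correspond to distinct operation modes. A minimal dispatcher sketch of how an agent might route them (the `exa_search.py` script name and flag names are assumptions, not taken from the skill):

```shell
# Route a requested mode to the underlying Exa script.
# Illustrative only: the real skill defines its own script path
# and argument names.
run_exa() {
  mode="$1"; shift
  case "$mode" in
    search|find-similar|get-contents)
      # In the real workflow this would exec the script; here we
      # just echo the command line that would run.
      echo "python exa_search.py $mode $*"
      ;;
    *)
      echo "error: unknown mode: $mode" >&2
      return 1
      ;;
  esac
}
```

Enumerating the modes this way in the description ("searches the web, extracts full page content, finds similar pages") is exactly the kind of concrete action list the suggestion asks for.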

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (web search via Exa) and mentions 'content extraction' and 'find similar pages', but doesn't list multiple concrete actions comprehensively; for example, it doesn't specify what kinds of content extraction or what outputs are produced. | 2 / 3 |
| Completeness | Clearly answers both 'what' (AI-powered web search via Exa with content extraction) and 'when' (explicit 'Use when' clause with specific trigger phrases and a scope boundary distinguishing it from academic database skills). | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms: 'exa search', 'web search with content', 'find similar pages', and differentiates from academic databases (arXiv, Semantic Scholar). These are terms users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Explicitly distinguishes itself from academic database tools (arXiv, Semantic Scholar) and uses the unique trigger 'exa search'. The scope boundary ('broad web results beyond academic databases') makes it clearly distinguishable from related search skills. | 3 / 3 |

Total: 11 / 12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a functional skill with strong actionability — concrete commands, clear argument specifications, and multiple usage examples. Its main weaknesses are moderate verbosity (the comparison table, setup basics, and extensive argument list could be trimmed) and missing validation/error-handling checkpoints in the workflow, particularly around API failures and empty result sets. The wiki integration in Step 6 is a nice touch but adds complexity without corresponding error recovery.

Suggestions

Add validation checkpoints after Step 3 (check for API errors, empty results, missing API key at runtime) to improve workflow robustness.

Remove or significantly trim the comparison table and setup section — Claude knows how to pip install and export env vars; the description field already handles routing.

Consider moving the full argument specification list to a separate reference file and keeping only the most common options inline to improve conciseness and progressive disclosure.
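The validation checkpoints suggested above could be small guard functions run between workflow steps. A sketch, assuming results arrive as a JSON array on stdout (the variable names, JSON shape, and script name are assumptions, not taken from the skill):

```shell
# Guard: the API key must exist at runtime, not only at setup time.
check_api_key() {
  [ -n "${EXA_API_KEY:-}" ] || {
    echo "error: EXA_API_KEY is not set" >&2
    return 1
  }
}

# Guard: an empty payload or an empty JSON array means the search
# found nothing; report that instead of passing it silently along.
check_results() {
  results="$1"
  if [ -z "$results" ] || [ "$results" = "[]" ]; then
    echo "error: empty result set" >&2
    return 2
  fi
}

# Example wiring after the search step (exa_search.py is assumed):
#   check_api_key || exit 1
#   results=$(python exa_search.py search --query "$query") || exit 1
#   check_results "$results" || exit 1
```

Checks like these cost a few lines each and close the API-failure and empty-result gaps the review identifies after Step 3.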

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill includes some unnecessary content like the comparison table with other skills (Claude can be told when to use it via the description), and the setup section explains basic pip install and env var export which Claude already knows. However, the core workflow and argument parsing sections are reasonably efficient. | 2 / 3 |
| Actionability | The skill provides fully executable bash commands for every operation mode (search, find-similar, get-contents), concrete argument parsing specifications, and specific examples with real flags. The Step 6 wiki integration includes conditional logic with executable commands. | 3 / 3 |
| Workflow Clarity | The 6-step workflow is clearly sequenced and includes a script-not-found check, but it lacks validation checkpoints after search execution (e.g., checking for empty results, API errors, or malformed responses). The wiki integration step has conditional logic but no error recovery if ingest_paper fails. | 2 / 3 |
| Progressive Disclosure | The skill references `shared-references/integration-contract.md` for wiki details, which is good progressive disclosure, but the main body is quite long (~130 lines) with detailed argument parsing and multiple workflow steps that could benefit from being split. No bundle files are provided to verify referenced paths exist. | 2 / 3 |

Total: 9 / 12 (Passed)
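On the workflow-clarity point, error recovery around a failed `ingest_paper` call need not be elaborate. A sketch (`ingest_paper` is the name used in the review; its interface here is a stand-in, not the real one):

```shell
# Wrap the wiki ingest so a failure degrades gracefully instead of
# aborting the whole search workflow. The ingest command is passed
# in as arguments because its real interface is not specified.
ingest_with_recovery() {
  url="$1"; shift
  if ! "$@" "$url"; then
    echo "warn: wiki ingest failed for $url; continuing without it" >&2
    return 0   # swallow the failure deliberately
  fi
}
```

Returning success on failure is the point: the search results remain useful even when the wiki step fails, which is precisely the recovery behavior the review flags as missing.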

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed

Validation for skill structure:

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 9 / 11 (Passed)

Repository: wanshuiyin/Auto-claude-code-research-in-sleep (Reviewed)

