Search for research papers and academic content using Exa advanced search. Full filter support including date ranges and text filtering. Use when searching for academic papers, arXiv preprints, or scientific research.
Summary

- Overall score: 86
- Quality: 83% (Does it follow best practices?)
- Impact: Pending (No eval scenarios have been run)
- Issues: Passed (No known issues)
Quality
Discovery
Score: 89%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid skill description that clearly communicates its purpose and when to use it. The trigger terms are well-chosen for the academic search domain, and the explicit 'Use when' clause with specific terms like 'arXiv preprints' makes it highly distinguishable. The main weakness is that the specific capabilities beyond basic searching could be more detailed — listing concrete actions like retrieving abstracts, finding citations, or filtering by author would strengthen specificity.
Suggestions
- Add more concrete actions beyond 'search' — e.g., 'retrieve abstracts, find related papers, filter by author or publication venue' — to improve specificity.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (research papers, academic content) and the tool (Exa advanced search), mentions filter support including date ranges and text filtering, but doesn't list multiple concrete actions beyond 'search' — e.g., no mention of downloading, summarizing, citing, or exporting. | 2 / 3 |
| Completeness | Clearly answers both 'what' (search for research papers using Exa with filter support) and 'when' (explicit 'Use when searching for academic papers, arXiv preprints, or scientific research'). | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms users would say: 'research papers', 'academic', 'arXiv', 'preprints', 'scientific research', 'date ranges'. These cover common variations of how users would phrase academic search requests. | 3 / 3 |
| Distinctiveness / Conflict Risk | The combination of Exa search, academic papers, arXiv, and scientific research creates a clear niche that is unlikely to conflict with general web search or document processing skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
Score: 77%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable skill with clear examples and important constraints well-highlighted (single-item array restriction, token isolation pattern). The main weakness is the verbose parameter enumeration section that lists many parameters Claude would already understand from tool definitions, which inflates the token cost without adding proportional value. The workflow and examples are strong points.
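The single-item array restriction called out above can be made concrete with a short sketch. The parameter names below (`category`, `includeDomains`) and the category value are assumptions modeled on common Exa-style search APIs, not details taken from the skill itself:

```python
# Hypothetical request payload illustrating the single-item array
# restriction: the domain filter may hold at most one entry.
def build_search_request(query, domain=None):
    request = {
        "query": query,
        "category": "research paper",  # assumed category value
    }
    if domain is not None:
        # The skill restricts this filter to a single-item array.
        request["includeDomains"] = [domain]
    return request

req = build_search_request("diffusion model preprints", domain="arxiv.org")
print(req["includeDomains"])  # → ['arxiv.org']
```

Encoding the restriction in a helper like this keeps callers from accidentally passing a multi-item list that the tool would reject.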
Suggestions
- Trim the full parameter listing to only research-paper-specific parameters and gotchas (like the single-item array restriction), referencing a shared parameters doc for the complete list.
- Remove or consolidate the 'Additional' parameters section (userLocation, moderation, subpages, etc.), which is generic and doesn't need explicit enumeration for this category.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient but includes some unnecessary enumeration of parameters that Claude likely already knows (e.g., listing every single parameter like userLocation, moderation, subpages). The parameter listing section could be trimmed to focus only on research-paper-specific nuances and gotchas like the single-item array restriction. | 2 / 3 |
| Actionability | Provides concrete, copy-paste-ready tool call examples with realistic parameters, specific domain suggestions, and clear constraints (single-item array restriction, category restriction). The examples cover distinct use cases (date filtering, domain filtering) and the output format is specified. | 3 / 3 |
| Workflow Clarity | The workflow is clear: spawn a Task agent, call the tool with the specified category, merge/deduplicate results, return distilled output. The token isolation section provides an explicit multi-step process. For a search skill, the workflow is appropriately simple and the critical constraints (tool restriction, array size restriction) serve as validation checkpoints. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections, but the full parameter listing is inlined when it could reference shared documentation. The skill is somewhat long for what it does, though it doesn't have deeply nested references. No external file references are provided for the shared parameter documentation that likely applies across multiple search categories. | 2 / 3 |
| Total | | 10 / 12 Passed |
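The merge/deduplicate step in the workflow above can be sketched as follows. The result shape (dicts with `url` and `title` keys) is an assumption for illustration, not the skill's actual output format:

```python
# Minimal sketch: merge result lists from multiple tool calls and
# deduplicate by URL, keeping the first occurrence of each result.
def merge_results(*result_lists):
    seen = set()
    merged = []
    for results in result_lists:
        for item in results:
            url = item.get("url")
            if url and url not in seen:
                seen.add(url)
                merged.append(item)
    return merged

a = [{"url": "https://arxiv.org/abs/1", "title": "Paper A"}]
b = [{"url": "https://arxiv.org/abs/1", "title": "Paper A"},
     {"url": "https://arxiv.org/abs/2", "title": "Paper B"}]
print([r["title"] for r in merge_results(a, b)])  # → ['Paper A', 'Paper B']
```

Keeping the first occurrence preserves the ranking of the earliest query, which is usually the primary one; a real implementation might instead merge by relevance score.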
Validation
Score: 90%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |