Skill description:

> Search and analyze research papers, find related work, summarize key ideas. Use when user says "find papers", "related work", "literature review", "what does this paper say", or needs to understand academic papers.
Score: 82 · Best practices: 77%

- Impact: Pending (no eval scenarios have been run)
- Advisory: suggest reviewing before use
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./skills/skills-codex/research-lit/SKILL.md
```

## Quality
### Discovery (100%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly communicates its capabilities, provides explicit trigger guidance with natural user phrases, and is well-scoped to the academic research domain. It follows the recommended pattern of listing concrete actions followed by a 'Use when...' clause with multiple trigger terms. The description is concise without being vague.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: "Search and analyze research papers, find related work, summarize key ideas." These are distinct, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both "what" (search, analyze, find related work, summarize) and "when", with an explicit "Use when..." clause listing specific trigger phrases and a general condition. | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would actually say: "find papers", "related work", "literature review", "what does this paper say", "academic papers". These cover a good range of natural user phrasings. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly scoped to the academic/research-paper domain with distinct triggers like "literature review", "related work", and "academic papers". Unlikely to conflict with general document or summarization skills due to the academic focus. | 3 / 3 |
| **Total** | | **12 / 12 (Passed)** |
### Implementation (55%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill is highly actionable with excellent workflow clarity, providing concrete executable commands and a well-sequenced multi-step process with proper error handling. However, it is severely bloated — the repetitive script-location patterns, redundant de-duplication explanations for each source, and extensive inline examples make it far too long for a SKILL.md overview. The content desperately needs progressive disclosure, splitting source-specific details into separate reference files.
#### Suggestions

- Extract the per-source bash script blocks (arXiv, Semantic Scholar, DeepXiv, Exa) into a separate SOURCES.md reference file, keeping only a summary table and one-line descriptions in the main SKILL.md.
- Create a shared template for the script-location pattern (ARIS_REPO lookup → tools/ fallback → ~/.codex/ fallback) instead of repeating it verbatim for every source.
- Move the extensive source-selection examples and override syntax into a separate USAGE.md, or collapse them into a compact table; the 12+ example lines are redundant given that the source table already explains valid IDs.
- Consolidate the de-duplication logic into a single section rather than repeating similar instructions (match by arXiv ID, then by normalized title) for each source independently.
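The shared script-location template suggested above could look something like the sketch below. The fallback order (ARIS_REPO checkout, then a local tools/ directory, then ~/.codex/) comes from the review; the function name `locate_helper`, the exact subdirectory layout, and the `fetch_arxiv.sh` example are illustrative assumptions, not taken from the skill itself.

```shell
#!/bin/sh
# Hypothetical shared helper replacing the per-source copies of the
# script-location pattern. Returns the first executable match on stdout.
locate_helper() {
  name="$1"
  # Search order mirrors the fallback chain described in the review; the
  # exact directories are assumptions about the skill's layout.
  for dir in "${ARIS_REPO:-/nonexistent}" "./tools" "$HOME/.codex"; do
    if [ -x "$dir/$name" ]; then
      printf '%s\n' "$dir/$name"
      return 0
    fi
  done
  echo "error: helper script '$name' not found in any known location" >&2
  return 1
}

# Example (hypothetical script name):
#   script=$(locate_helper fetch_arxiv.sh) || exit 1
#   "$script" "quantum error correction"
```

Each source section could then call `locate_helper` with its own script name instead of restating the three-way fallback verbatim.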
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at roughly 300+ lines. It over-explains source selection with redundant examples, repeats de-duplication logic for every source, and includes lengthy bash script blocks for locating helper scripts that follow nearly identical patterns. Much of this could be condensed into a table or template pattern. | 1 / 3 |
| Actionability | The skill provides fully executable bash commands for each data source, concrete glob patterns for file discovery, specific API field lists, and a clear output format (a markdown table with defined columns). The code examples are copy-paste ready, with fallback chains. | 3 / 3 |
| Workflow Clarity | The workflow is clearly sequenced (Steps 0a through 6) with explicit validation checkpoints: de-duplication between sources, PDF size verification (>10KB), rate limiting, and clear error handling (stop and report vs. skip silently). Feedback loops are present for missing configurations. | 3 / 3 |
| Progressive Disclosure | Everything is crammed into a single monolithic file with no references to external documentation. The detailed bash scripts for each source (arXiv, Semantic Scholar, DeepXiv, Exa), the source-selection examples, and the override syntax could all be split into separate reference files. The single file is overwhelming. | 1 / 3 |
| **Total** | | **8 / 12 (Passed)** |
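The de-duplication consolidation recommended in the suggestions could be sketched as one shared pass rather than per-source instructions. This is a minimal sketch assuming results arrive as tab-separated `arxiv_id<TAB>title` lines; the field layout and function names are assumptions, not taken from the skill. The matching order (arXiv ID first, then normalized title) is what the review describes.

```shell
#!/bin/sh
# Hypothetical single de-duplication pass over merged search results.
normalize_title() {
  # Lowercase, collapse non-alphanumerics to single spaces, trim the ends.
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' \
    | tr -cs '[:alnum:]' ' ' | sed 's/^ *//; s/ *$//'
}

dedupe() {
  seen_ids='|' ; seen_titles='|'
  while IFS="$(printf '\t')" read -r id title; do
    norm=$(normalize_title "$title")
    # First key: arXiv ID (skipped when a source provides no ID).
    if [ -n "$id" ]; then
      case "$seen_ids" in *"|$id|"*) continue ;; esac
      seen_ids="$seen_ids$id|"
    fi
    # Second key: normalized title.
    case "$seen_titles" in *"|$norm|"*) continue ;; esac
    seen_titles="$seen_titles$norm|"
    printf '%s\t%s\n' "$id" "$title"
  done
}
```

Every source section could then reference this one rule instead of restating the ID-then-title matching independently.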
### Validation (100%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed. Validation for skill structure: no warnings or errors.