Search and analyze research papers, find related work, summarize key ideas. Use when user says "find papers", "related work", "literature review", "what does this paper say", or needs to understand academic papers.
Score: 85 (81%)

Does it follow best practices?

Impact: Pending. No eval scenarios have been run.
Advisory: Suggest reviewing before use.

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that follows best practices. It lists concrete capabilities, provides explicit trigger guidance with natural user phrases, and is clearly scoped to the academic research domain. The description is concise yet comprehensive, making it easy for Claude to select appropriately.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Search and analyze research papers, find related work, summarize key ideas.' These are distinct, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (search, analyze, find related work, summarize) and 'when' with an explicit 'Use when...' clause listing specific trigger phrases and a general condition. | 3 / 3 |
| Trigger Term Quality | Includes natural phrases users would actually say: 'find papers', 'related work', 'literature review', 'what does this paper say', and 'academic papers'. Good coverage of common variations. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly scoped to the academic/research paper domain with distinct triggers like 'literature review', 'related work', and 'academic papers' that are unlikely to conflict with general document or search skills. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, comprehensive literature review skill with clear workflow sequencing and good graceful degradation logic. Its main weaknesses are verbosity (motivational comments, explanatory asides) and incomplete actionability in the MCP integration sections where tool calls are described abstractly rather than with concrete examples. The skill would benefit from trimming explanatory prose and either providing exact MCP tool call examples or splitting MCP details into a referenced file.
Suggestions
- Remove motivational/explanatory asides (e.g., 'Zotero annotations are gold', 'more valuable than raw paper content') — Claude doesn't need persuading, just instructions.
- Provide concrete MCP tool call examples with actual parameters (e.g., `mcp__zotero__search_items({query: 'diffusion models'})`) instead of vague 'try calling a Zotero MCP tool'.
- Consider splitting Zotero and Obsidian integration details into a separate INTEGRATIONS.md file, keeping only a brief summary and reference in the main skill.
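The second suggestion can be illustrated with a before/after sketch of the skill's instruction text. The tool name `mcp__zotero__search_items` and its parameters come from the suggestion above and are hypothetical, not a confirmed Zotero MCP API:

```markdown
<!-- Vague (current style): leaves the agent to guess tool names -->
Try calling a Zotero MCP tool to search the user's library for relevant papers.

<!-- Concrete (suggested style): exact call, parameters, and fallback -->
Search the user's Zotero library:

    mcp__zotero__search_items({query: "<topic keywords>", limit: 20})

If the tool is not available, skip this step and continue with the arXiv search.
```

The concrete form gives the agent an executable step plus a defined fallback, which is what the Actionability dimension below penalizes the skill for lacking.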
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly long with some unnecessary explanations (e.g., 'Zotero annotations are gold', 'Obsidian notes represent the user's processed understanding') and verbose source detection instructions. However, most content is functional and not explaining concepts Claude already knows. Could be tightened by ~30%. | 2 / 3 |
| Actionability | Provides some concrete commands (bash scripts for arxiv_fetch.py, glob patterns), but much of the workflow is described in prose rather than executable steps. The Zotero and Obsidian sections say 'try calling a tool' without specifying exact tool names or parameters. The output table format is concrete, but the search/analysis steps are more descriptive than prescriptive. | 2 / 3 |
| Workflow Clarity | The workflow is clearly sequenced (Steps 0a through 5) with explicit priority ordering, de-duplication checkpoints between steps, graceful degradation logic, and conditional branching (skip if MCP not configured). The source selection parsing is well-defined with examples. Rate limiting and file size validation are included for downloads. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and a source table, but it's quite long (~180 lines) and could benefit from splitting detailed MCP integration instructions into separate files. The constants/overrides section and source table are good organizational choices, but the Zotero and Obsidian step details could be referenced rather than inline. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed. No warnings or errors.
Version: dc00dfb