Communications-domain literature review with Claude-style knowledge-base-first retrieval. Use when the task is about communications, wireless, networking, satellite/NTN, Wi-Fi, cellular, transport protocols, congestion control, routing, scheduling, MAC/PHY, rate adaptation, channel estimation, beamforming, or communication-system research and the user wants papers, related work, a survey, or a landscape summary. Search Zotero, Obsidian, and local paper folders first when available, then search IEEE Xplore, ScienceDirect, ACM Digital Library, and broader web in that order.
Overall score: 84

- Quality: 81% (Does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Advisory: suggest reviewing before use

Discovery (100%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly defines its niche (communications-domain literature review), provides extensive trigger terms covering both the domain and user intents, and specifies a concrete retrieval pipeline. It uses proper third-person voice and includes an explicit 'Use when...' clause with comprehensive trigger conditions. The description is thorough without being padded.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple concrete actions: literature review, paper retrieval, related work generation, survey creation, landscape summary. Also specifies the retrieval pipeline (Zotero, Obsidian, local folders, then IEEE Xplore, ScienceDirect, ACM Digital Library, web). | 3 / 3 |
| Completeness | Clearly answers both 'what' (communications-domain literature review with knowledge-base-first retrieval) and 'when' (explicit 'Use when...' clause listing specific domains and user intents like wanting papers, related work, surveys, or landscape summaries). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'papers', 'related work', 'survey', 'landscape summary', plus extensive domain keywords like 'wireless', 'networking', 'satellite/NTN', 'Wi-Fi', 'cellular', 'beamforming', 'congestion control', 'routing', 'MAC/PHY', etc. These are terms researchers naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive: narrowly scoped to communications-domain literature review with a specific retrieval order (Zotero/Obsidian/local first, then academic databases). Unlikely to conflict with general research skills or non-communications skills due to the detailed domain and tool specificity. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation (62%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, domain-specific literature review skill with a clear multi-step workflow and thoughtful degradation rules. Its main weaknesses are the lack of concrete executable examples (no MCP tool calls, no actual search query patterns, no code for PDF scanning) and verbosity in venue listings and repeated source priority information. The workflow sequencing and output specification are strong points.
Suggestions

- Add concrete examples of MCP tool invocations for Zotero and Obsidian searches, and show actual search query construction patterns for IEEE Xplore/ScienceDirect/ACM.
- Extract the venue tier lists into a separate reference file (e.g., VENUES.md) and link to it from the main skill to reduce token footprint.
- Consolidate the source selection, retrieval order, and external search policy sections; they currently repeat priority ordering information in three places.
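The first suggestion can be made concrete with a small sketch. The helper below shows one possible shape for boolean query construction for academic search portals; the function name and term groupings are hypothetical illustrations, not an actual IEEE Xplore, ScienceDirect, or ACM API.

```python
def build_boolean_query(topic_terms, method_terms):
    """Combine domain and method terms into a portal-style boolean search string."""
    topic = " OR ".join(f'"{t}"' for t in topic_terms)
    method = " OR ".join(f'"{m}"' for m in method_terms)
    return f"({topic}) AND ({method})"

# Example: a satellite/NTN congestion-control query
query = build_boolean_query(
    ["satellite network", "non-terrestrial network", "NTN"],
    ["congestion control", "rate adaptation"],
)
# → ("satellite network" OR "non-terrestrial network" OR "NTN") AND ("congestion control" OR "rate adaptation")
```

Even one such sketch in the skill would move its guidance from descriptive to executable.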
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is thorough but verbose in places. The venue tier lists, source selection parsing, and retrieval order sections repeat information that could be consolidated. The 'Purpose' section listing topic keywords is somewhat redundant given the frontmatter description. However, most content is domain-specific configuration that Claude wouldn't inherently know. | 2 / 3 |
| Actionability | The skill provides clear procedural steps and a structured output format (literature table with specific columns), which is good. However, it lacks concrete executable code or commands: there are no actual search queries, API calls, MCP tool invocations, or code snippets showing how to scan local PDFs, query Zotero, or search IEEE Xplore. The guidance is detailed but remains at the instructional/descriptive level rather than executable. | 2 / 3 |
| Workflow Clarity | The workflow is clearly sequenced (Steps 0a → 0b → 0c → 1 → 2 → Synthesis → Output) with explicit graceful degradation rules (skip unavailable sources silently vs. report missing config for explicitly requested ones). The tiered search strategy has clear escalation criteria, and the output format serves as a validation checkpoint for completeness. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers and logical sections, but it's a monolithic document (~250 lines) that could benefit from splitting venue tiers, synthesis rules, or output templates into separate reference files. There are no references to external files for detailed guidance, and the venue lists in particular could be externalized. | 2 / 3 |
| Total | | 9 / 12 Passed |
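The Actionability row notes that the skill never shows how to scan local paper folders. One possible shape for such a snippet follows; the helper name and directory layout are illustrative only, and the empty-result fallback mirrors the skill's own "skip unavailable sources silently" rule.

```python
from pathlib import Path

def list_local_pdfs(paper_dir):
    """Recursively collect PDF paths from a local papers folder for screening."""
    root = Path(paper_dir)
    if not root.is_dir():
        # Graceful degradation: treat a missing folder as an unavailable source
        return []
    return sorted(root.rglob("*.pdf"))
```

A handful of snippets at this level of concreteness would likely raise the Actionability score.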
Validation (90%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Result: 10 / 11 checks passed.

Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| Total | | 10 / 11 Passed |