Perform AI-powered web searches with real-time information using Perplexity models via LiteLLM and OpenRouter. This skill should be used when conducting web searches for current information, finding recent scientific literature, getting grounded answers with source citations, or accessing information beyond the model knowledge cutoff. Provides access to multiple Perplexity models including Sonar Pro, Sonar Pro Search (advanced agentic search), and Sonar Reasoning Pro through a single OpenRouter API key.
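The call path the description sketches (Perplexity Sonar models routed through OpenRouter via LiteLLM) can be illustrated with a minimal sketch. The `openrouter/perplexity/sonar-pro` model slug and the `sonar_search` helper name are assumptions for illustration, not confirmed by this review:

```python
import os

def build_request(query: str, model: str = "openrouter/perplexity/sonar-pro") -> dict:
    """Assemble chat-completion arguments for a Perplexity Sonar model.

    The 'openrouter/' prefix is how LiteLLM selects OpenRouter as the
    provider, so one OpenRouter API key covers all the Sonar variants.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": query}],
    }

def sonar_search(query: str) -> str:
    """Send one web-grounded query; requires OPENROUTER_API_KEY in the env."""
    from litellm import completion  # imported lazily so build_request stays testable offline
    response = completion(api_key=os.environ["OPENROUTER_API_KEY"], **build_request(query))
    return response.choices[0].message.content
```

Swapping the `model` argument (e.g. to a Sonar Reasoning Pro slug) is all that changes between the models the skill exposes.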
Overall score: 82

Quality: 75% (Does it follow best practices?)
Impact: 95% (2.11x average score across 3 eval scenarios)
Advisory: Suggest reviewing before use

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./scientific-skills/perplexity-search/SKILL.md`

## Quality
### Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly communicates what the skill does (AI-powered web searches using Perplexity models), when to use it (current information needs, scientific literature, beyond knowledge cutoff), and how it works (via LiteLLM and OpenRouter). It uses third person voice correctly, includes natural trigger terms, and is distinctive enough to avoid conflicts with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple concrete actions: 'web searches for current information', 'finding recent scientific literature', 'getting grounded answers with source citations', 'accessing information beyond the model knowledge cutoff'. Also names specific models (Sonar Pro, Sonar Pro Search, Sonar Reasoning Pro) and integration details (LiteLLM, OpenRouter). | 3 / 3 |
| Completeness | Clearly answers both 'what' (AI-powered web searches via Perplexity models through LiteLLM/OpenRouter) and 'when' with an explicit trigger clause: 'should be used when conducting web searches for current information, finding recent scientific literature, getting grounded answers with source citations, or accessing information beyond the model knowledge cutoff.' | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms users would say: 'web search', 'current information', 'recent scientific literature', 'source citations', 'knowledge cutoff', 'Perplexity'. Also includes technical terms like 'OpenRouter', 'LiteLLM', 'Sonar Pro' that power users would reference. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: Perplexity-based web search via OpenRouter/LiteLLM. The combination of specific provider (Perplexity), specific models (Sonar Pro variants), and specific integration path (OpenRouter API key) makes it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
### Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill is highly actionable with executable commands and code examples, but severely undermined by verbosity. Large sections (query crafting tips, 5 common use cases, integration with other skills, best practices that repeat earlier content, a summary section) inflate the token count without adding proportional value. Content that belongs in reference files is duplicated inline, defeating the purpose of the progressive disclosure structure.
Suggestions:

- Cut the content by at least 50%: remove the 'When to Use' list (Claude can infer this), the 'Crafting Effective Queries' section (move to references/search_strategies.md), the 'Integration with Other Skills' section, and the 'Summary' section entirely.
- Reduce 'Common Use Cases' to 1-2 examples max inline, moving the rest to a reference file.
- Consolidate 'Best Practices' into the relevant sections (e.g., model selection guidance already exists in 'Available Models') rather than repeating it in a separate section.
- Add error handling/validation to the batch processing workflow (e.g., check result['success'] and log failures) to improve workflow clarity for batch operations.
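The last suggestion above can be sketched as follows. `run_search` is a stand-in for whatever single-query function the skill exposes (a hypothetical name), and each result is assumed to be a dict carrying the `success` flag the suggestion describes:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("batch-search")

def run_batch(queries, run_search):
    """Run each query, checking result['success'] and logging failures
    instead of letting one bad query abort the whole batch."""
    results, failures = [], []
    for query in queries:
        try:
            result = run_search(query)
        except Exception as exc:  # network/API errors should not kill the batch
            log.warning("query %r raised: %s", query, exc)
            failures.append(query)
            continue
        if result.get("success"):
            results.append(result)
        else:
            log.warning("query %r failed: %s", query, result.get("error", "unknown"))
            failures.append(query)
    return results, failures
```

Returning the failed queries separately lets a caller retry them or surface a partial-failure summary, which addresses the missing feedback loop noted under Workflow Clarity.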
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~300+ lines. Extensive sections on 'When to Use', 'Best Practices', 'Integration with Other Skills', query crafting tips, and a summary section all repeat information or explain things Claude already knows. The 'Common Use Cases' section has 5 lengthy examples that are largely redundant. The 'Best Practices' section rehashes guidance already given earlier. | 1 / 3 |
| Actionability | Provides fully executable CLI commands, Python code for programmatic access, concrete bash scripts for batch processing, and specific model names with exact flags. The setup steps are copy-paste ready with real commands. | 3 / 3 |
| Workflow Clarity | Setup steps are clearly sequenced with a verification step (--check-setup). However, the batch processing section lacks validation/error handling for failed queries, and there's no explicit feedback loop for verifying search result quality or handling partial failures. | 2 / 3 |
| Progressive Disclosure | References to external files (references/search_strategies.md, references/model_comparison.md, references/openrouter_setup.md) are well-signaled, but the main file contains far too much inline content that should be in those reference files: query crafting guidance, detailed use cases, integration patterns, and best practices all bloat the overview. | 2 / 3 |
| Total | | 8 / 12 Passed |
### Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 passed.
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 10 / 11 Passed |
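The single warning above can be cleared by declaring a version in the skill's frontmatter. A sketch, assuming the validator reads a nested `metadata.version` key in SKILL.md (the exact key placement and version value are assumptions):

```yaml
---
name: perplexity-search
metadata:
  version: 1.0.0
---
```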
Commit: b58ad7e