
perplexity-search

Perform AI-powered web searches with real-time information using Perplexity models via LiteLLM and OpenRouter. This skill should be used when conducting web searches for current information, finding recent scientific literature, getting grounded answers with source citations, or accessing information beyond the model knowledge cutoff. Provides access to multiple Perplexity models including Sonar Pro, Sonar Pro Search (advanced agentic search), and Sonar Reasoning Pro through a single OpenRouter API key.
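As a sketch of what the skill wraps: a direct OpenRouter chat-completions request against a Perplexity Sonar model. The skill itself routes through LiteLLM; the raw HTTP form is shown here for brevity, and the endpoint, model slug, and response shape follow OpenRouter's OpenAI-compatible API conventions (treat them as assumptions, not this skill's exact code):

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(query: str, model: str = "perplexity/sonar-pro") -> dict:
    """OpenRouter accepts OpenAI-style chat payloads."""
    return {"model": model, "messages": [{"role": "user", "content": query}]}

def search(query: str) -> str:
    """Send one search query; requires OPENROUTER_API_KEY in the environment."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_payload(query)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Perplexity models return grounded answers; citations, when present,
    # arrive alongside the message content in the response body.
    return body["choices"][0]["message"]["content"]
```

Swapping the `model` argument (e.g. to `perplexity/sonar-reasoning-pro`) selects a different Sonar variant through the same single API key.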

Overall score: 82
Quality: 75% (Does it follow best practices?)
Impact: 95% (2.11x)
Average score across 3 eval scenarios.

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/perplexity-search/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly communicates what the skill does (AI-powered web searches using Perplexity models), when to use it (current information needs, scientific literature, citations, beyond knowledge cutoff), and how it works (via LiteLLM and OpenRouter). It uses third person voice correctly, includes rich trigger terms, and is distinctive enough to avoid conflicts with other skills.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple concrete actions: 'web searches for current information', 'finding recent scientific literature', 'getting grounded answers with source citations', 'accessing information beyond the model knowledge cutoff'. Also names specific models (Sonar Pro, Sonar Pro Search, Sonar Reasoning Pro) and integration details (LiteLLM, OpenRouter). | 3 / 3 |
| Completeness | Clearly answers both 'what' (AI-powered web searches via Perplexity models through LiteLLM/OpenRouter) and 'when' with an explicit trigger clause: 'should be used when conducting web searches for current information, finding recent scientific literature, getting grounded answers with source citations, or accessing information beyond the model knowledge cutoff.' | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms users would say: 'web search', 'current information', 'recent scientific literature', 'source citations', 'knowledge cutoff', 'Perplexity'. Also includes technical terms like 'OpenRouter', 'LiteLLM', 'Sonar Pro' that power users would reference. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: Perplexity-powered web search via OpenRouter/LiteLLM. The combination of specific provider (Perplexity), specific models (Sonar Pro variants), and specific integration path (OpenRouter API key) makes it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 (Passed) |

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides excellent actionable guidance with executable code and clear CLI examples, but is severely bloated with unnecessary content. Large sections on query crafting, integration with other skills, best practices, and a summary section add significant token cost without proportional value. Much of the inline content (model comparison details, search strategies, troubleshooting) duplicates what's already referenced in external files.

Suggestions

- Cut the 'Crafting Effective Queries' section entirely or reduce it to 2-3 bullet points — Claude already knows how to write good prompts, and detailed guidance exists in references/search_strategies.md.
- Remove the 'Integration with Other Skills' section completely — it's speculative and adds no actionable information.
- Collapse 'Best Practices' and 'Common Use Cases' into a compact table or 5-line summary, moving details to reference files.
- Remove the 'Summary' and 'When to Use' sections — these restate the overview and waste tokens.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at ~300+ lines. Extensive sections on 'When to Use', 'Best Practices', 'Integration with Other Skills', query crafting tips, and a summary section all repeat information or explain things Claude already knows. The 'Crafting Effective Queries' section teaches basic prompt engineering that Claude doesn't need. The 'Integration with Other Skills' section is entirely speculative filler. | 1 / 3 |
| Actionability | Provides fully executable CLI commands, Python code for programmatic access, concrete query examples, and specific setup steps. The code examples are copy-paste ready with real flags and parameters. | 3 / 3 |
| Workflow Clarity | Setup steps are clearly sequenced with a verification step. However, the batch processing section lacks validation/error handling for failed queries, and there's no explicit feedback loop for verifying search result quality or handling partial failures. | 2 / 3 |
| Progressive Disclosure | References to external files (references/search_strategies.md, references/model_comparison.md, references/openrouter_setup.md) are well-signaled, but the main file contains far too much inline content that should be in those reference files — query crafting guidance, detailed use cases, integration patterns, and best practices sections bloat the overview significantly. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
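The batch-processing gap flagged under Workflow Clarity (no handling for failed queries) could be closed with a small wrapper. This is a sketch, not the skill's code: `search_fn` is a hypothetical stand-in for whatever single-query function the skill exposes.

```python
from concurrent.futures import ThreadPoolExecutor

def run_batch(queries, search_fn, max_workers=4):
    """Run every query, partitioning outcomes instead of aborting on the first error."""
    results, failures = {}, {}

    def run_one(query):
        try:
            results[query] = search_fn(query)
        except Exception as exc:  # record the failure and keep going
            failures[query] = f"{type(exc).__name__}: {exc}"

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Consume the iterator so all submitted queries actually run.
        list(pool.map(run_one, queries))
    return results, failures
```

Callers can then retry or report `failures` explicitly, which gives the reviewer's missing "feedback loop" for partial failures.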

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 10 / 11 (Passed) |
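The single warning above would typically be cleared by declaring a version in the SKILL.md frontmatter. A sketch only: the field placement is assumed from the validator message (`metadata.version`), and the exact schema is not shown on this page.

```yaml
---
name: perplexity-search
description: Perform AI-powered web searches with real-time information ...
metadata:
  version: 1.0.0
---
```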

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
