Build production-ready Tavily integrations with best practices for web search, content extraction, crawling, and research in agentic workflows, RAG systems, and autonomous agents
Does it follow best practices? 63%
Impact: Pending (no eval scenarios have been run).
Advisory: suggest reviewing before use.
Optimize this skill with Tessl
npx tessl skill review --optimize ./content/tavily/skills/tavily-best-practices/SKILL.md

Quality
Discovery
40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear niche around Tavily integrations and lists relevant capability areas, but it reads more like a marketing tagline than a functional skill description. It critically lacks a 'Use when...' clause, making it harder for Claude to know when to select this skill. The buzzword-heavy phrasing ('production-ready', 'best practices', 'agentic workflows', 'autonomous agents') adds fluff without improving selectability.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Tavily API integration, web search APIs, or building search-powered applications with Tavily.'
Replace marketing language like 'production-ready' and 'best practices' with concrete actions, e.g., 'Generates Tavily API client code for web search queries, extracts webpage content, configures site crawling, and sets up multi-step research pipelines.'
Include common user-facing trigger terms like 'Tavily API', 'search API', 'web scraping', or 'search the internet' to improve keyword coverage.
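Applying these suggestions, a revised frontmatter description might look like the following sketch (the wording is illustrative, not the skill's actual metadata):

```yaml
name: tavily-best-practices
description: >
  Generates Tavily API client code for web search queries, extracts webpage
  content, configures site crawling, and sets up multi-step research pipelines.
  Use when the user asks about the Tavily API, web search APIs, web scraping,
  or building search-powered applications with Tavily.
```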
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Tavily) and lists several actions (web search, content extraction, crawling, research), but these read more like category labels than concrete, specific actions. 'Build production-ready integrations' is vague about what exactly is produced. | 2 / 3 |
| Completeness | The description addresses 'what' (build Tavily integrations for search, extraction, crawling, research) but lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'when' is not even implied clearly, a 1 is warranted. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'Tavily', 'web search', 'content extraction', 'crawling', 'RAG systems', and 'agentic workflows'. However, it misses common user-facing variations like 'search API', 'Tavily API', 'web scraping', or 'search the web'. Terms like 'agentic workflows' and 'autonomous agents' are more jargon than natural user language. | 2 / 3 |
| Distinctiveness / Conflict Risk | The explicit mention of 'Tavily' as a specific product/API makes this highly distinctive and unlikely to conflict with other skills. It occupies a clear niche around Tavily-specific integrations. | 3 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation
87%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill that serves as an effective overview and routing document for the Tavily API. It excels at conciseness, actionability, and progressive disclosure with executable examples and clear navigation to detailed references. The main area for improvement is adding error handling or validation guidance for operations like extract and crawl that interact with external URLs.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient. It jumps straight into installation, initialization, and method usage without explaining what a search API is or how web scraping works. Every section earns its place with concrete code and tables. | 3 / 3 |
| Actionability | Every method has a fully executable, copy-paste-ready Python code example. Key parameters are listed concisely alongside each method, and the decision table helps users pick the right method immediately. | 3 / 3 |
| Workflow Clarity | The research() method includes a clear polling workflow with status checking, but the other methods are presented as single-call operations without validation or error handling. For extract/crawl operations that can fail on external URLs, error handling guidance would strengthen this. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure, with a concise overview in the main file and clearly signaled one-level-deep references for each method (search.md, extract.md, crawl.md, research.md, sdk.md, integrations.md). The 'Detailed Guides' section describes what each reference contains. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
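As a sketch of the error-handling guidance suggested above, URL validation plus retry with exponential backoff can wrap any extract or crawl call. The wrapper below is generic stdlib Python; the commented `client.extract` usage assumes the tavily-python SDK and is illustrative only.

```python
import time
from urllib.parse import urlparse


def validate_urls(urls):
    """Reject malformed URLs before sending them to extract/crawl,
    so one bad entry doesn't fail the whole batch."""
    valid, rejected = [], []
    for url in urls:
        parsed = urlparse(url)
        if parsed.scheme in ("http", "https") and parsed.netloc:
            valid.append(url)
        else:
            rejected.append(url)
    return valid, rejected


def call_with_retries(fn, *args, retries=3, backoff=1.0, **kwargs):
    """Retry a flaky network call, doubling the wait between attempts."""
    for attempt in range(retries):
        try:
            return fn(*args, **kwargs)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (2 ** attempt))


# With the real SDK this might look like (assumed API, not verified here):
#   valid, rejected = validate_urls(urls)
#   results = call_with_retries(client.extract, urls=valid)
valid, rejected = validate_urls(["https://example.com", "not-a-url"])
# valid == ["https://example.com"], rejected == ["not-a-url"]
```

Separating validation from retries keeps the transient-failure policy (retry) distinct from permanent failures (bad input), which is the usual shape such guidance takes.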
Validation
81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| metadata_field | 'metadata' should map string keys to string values | Warning |
| Total | 9 / 11 Passed | |
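Both warnings point at the frontmatter's metadata block: adding a version key and keeping every value a plain string would clear them. A minimal sketch (any key beyond metadata.version is an assumed example):

```yaml
metadata:
  version: "1.0.0"      # addresses the metadata_version warning
  maintainer: "tavily"  # assumed key; all values kept as quoted strings
```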