Implement efficient similarity search with vector databases. Use when building semantic search, implementing nearest neighbor queries, or optimizing retrieval performance.
## Install with Tessl CLI

`npx tessl i github:wshobson/agents --skill similarity-search-patterns`
## Does it follow best practices?

If you maintain this skill, you can automatically optimize it with the Tessl CLI to improve its score:

`npx tessl skill review --optimize ./path/to/skill`
### Discovery: 75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a reasonably well-structured description with explicit 'Use when' guidance and a clear domain focus. Its main weaknesses are moderate specificity in concrete actions and incomplete coverage of the natural trigger terms users might employ when they need vector database help.
#### Suggestions

- Add more specific concrete actions like 'create and manage embeddings', 'configure vector indexes', 'implement ANN algorithms', or 'tune similarity thresholds'.
- Expand trigger terms to include common variations like 'embeddings', 'vector store', 'RAG', 'cosine similarity', 'FAISS', 'Pinecone', or 'embedding search'.
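As a point of reference for those trigger terms, 'cosine similarity' and 'tune similarity thresholds' reduce to very little code. A minimal plain-Python sketch — the `is_similar` helper and its 0.8 default threshold are illustrative choices, not part of the skill:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot(a, b) / (|a| * |b|); 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_similar(a: list[float], b: list[float], threshold: float = 0.8) -> bool:
    """A tuned threshold decides whether a match counts as 'similar'."""
    return cosine_similarity(a, b) >= threshold
```

In practice the threshold is a tuning parameter: too low and unrelated documents match, too high and paraphrases are missed.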
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (vector databases, similarity search) and mentions some actions (building semantic search, implementing queries, optimizing retrieval), but lacks concrete specific actions like 'create embeddings', 'configure indexes', or 'tune distance metrics'. | 2 / 3 |
| Completeness | Clearly answers both what ('Implement efficient similarity search with vector databases') and when ('Use when building semantic search, implementing nearest neighbor queries, or optimizing retrieval performance') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes relevant terms like 'similarity search', 'semantic search', 'nearest neighbor queries', and 'retrieval performance', but misses common variations users might say, like 'embeddings', 'vector store', 'RAG', 'cosine similarity', or specific database names like 'Pinecone' and 'Weaviate'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Has a clear niche focused on vector databases and similarity search with distinct triggers; unlikely to conflict with general database skills or other search-related skills due to specific terminology like 'nearest neighbor' and 'semantic search'. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
### Implementation: 64%

Reviews the quality of the instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides excellent, actionable code templates for four major vector databases, with complete, executable implementations. However, it is somewhat verbose, with unnecessary introductory content; it lacks explicit workflow guidance for sequencing operations; and it could benefit from better progressive disclosure by splitting the database-specific implementations into separate files.
#### Suggestions

- Add a workflow section showing the typical sequence (initialize store -> generate embeddings -> upsert documents -> search -> validate results), with explicit checkpoints.
- Remove or condense the 'When to Use This Skill' and 'Core Concepts' sections; Claude already knows these basics, and they consume tokens without adding value.
- Split each database implementation into a separate file (e.g., PINECONE.md, QDRANT.md) and keep SKILL.md as a concise overview with selection criteria.
- Add error handling patterns and validation steps for batch upsert operations (e.g., verify document counts and handle partial failures).
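The suggested workflow and batch validation could look roughly like the following. This is a dependency-free sketch: `InMemoryVectorStore` and `embed` are hypothetical stand-ins for a real vector-database client and embedding model, not APIs from the skill or any SDK.

```python
import math

def embed(text: str, dim: int = 8) -> list[float]:
    """Toy deterministic embedding; a real system would call an embedding model."""
    vec = [0.0] * dim
    for i, ch in enumerate(text.encode()):
        vec[i % dim] += ch
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

class InMemoryVectorStore:
    """Illustrative stand-in for a Pinecone/Qdrant/pgvector/Weaviate client."""

    def __init__(self) -> None:
        self.vectors: dict[str, list[float]] = {}

    def upsert_batch(self, items: list[tuple[str, list[float]]]) -> tuple[int, list[str]]:
        """Return (success_count, failed_ids); malformed vectors are skipped, not fatal."""
        ok, failed = 0, []
        for doc_id, vec in items:
            if not vec or any(not isinstance(x, float) for x in vec):
                failed.append(doc_id)
                continue
            self.vectors[doc_id] = vec
            ok += 1
        return ok, failed

    def search(self, query: list[float], top_k: int = 3) -> list[tuple[str, float]]:
        """Dot product over unit vectors is cosine similarity; sort descending."""
        scored = [
            (doc_id, sum(q * v for q, v in zip(query, vec)))
            for doc_id, vec in self.vectors.items()
        ]
        return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# 1. initialize  2. embed  3. upsert in batches  4. validate counts  5. search
store = InMemoryVectorStore()
docs = {"a": "vector databases", "b": "semantic search", "c": "nearest neighbors"}
count, failed = store.upsert_batch([(i, embed(t)) for i, t in docs.items()])
assert count == len(docs) and not failed  # checkpoint: verify document count
hits = store.search(embed("semantic search"), top_k=2)
```

The checkpoint after the upsert is the point the review is making: batch operations should surface partial failures explicitly instead of silently dropping documents.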
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some unnecessary elements: the 'When to Use This Skill' section largely repeats the description, and the Core Concepts tables explain basics Claude likely knows. The templates themselves are appropriately detailed. | 2 / 3 |
| Actionability | Provides fully executable, copy-paste-ready code templates for four major vector databases (Pinecone, Qdrant, pgvector, Weaviate). Each implementation includes complete class definitions with typed parameters, batch operations, search methods, and hybrid search capabilities. | 3 / 3 |
| Workflow Clarity | While individual methods are clear, there is no explicit workflow showing how to sequence operations (e.g., init -> upsert -> search -> validate results). Missing validation checkpoints for batch operations and no error handling patterns for common failure modes like connection issues or malformed vectors. | 2 / 3 |
| Progressive Disclosure | Content is reasonably organized with templates separated by database type, but the file is quite long (~400 lines) with all implementations inline. The Core Concepts section could be a separate reference, and individual database implementations could be split into separate files, with SKILL.md providing an overview and a quick selection guide. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
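The hybrid search capabilities noted under Actionability typically fuse a dense (vector) score with a sparse (keyword) score. A minimal sketch of one common approach, convex combination; the `alpha` parameter and the normalization assumption are illustrative choices, not anything prescribed by the skill:

```python
def hybrid_score(dense: float, sparse: float, alpha: float = 0.7) -> float:
    """Convex combination of dense (vector) and sparse (keyword) relevance.

    alpha=1.0 is pure vector search; alpha=0.0 is pure keyword search.
    Assumes both input scores are already normalized to [0, 1].
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be in [0, 1]")
    return alpha * dense + (1.0 - alpha) * sparse
```

With a balanced `alpha`, a document with strong keyword overlap can outrank one with a slightly better vector match, which is the usual motivation for hybrid search.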
### Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

#### Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (561 lines); consider splitting content into references/ and linking to it | Warning |
| Total | | 10 / 11 (Passed) |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.