
similarity-search-patterns

Implement efficient similarity search with vector databases. Use when building semantic search, implementing nearest neighbor queries, or optimizing retrieval performance.

Score: 66 (1.09x)

Quality: 48% (Does it follow best practices?)

Impact: 100% (1.09x), average score across 3 eval scenarios

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./tests/ext_conformance/artifacts/agents-wshobson/llm-application-dev/skills/similarity-search-patterns/SKILL.md

Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has a solid structure with an explicit 'Use when' clause and covers the core domain well. However, it could be more specific about concrete actions (e.g., configuring indexes, choosing distance metrics, managing embeddings) and include more natural trigger terms that users would actually say. The description is competent but somewhat generic within the vector search domain.

Suggestions

Add more concrete actions like 'configure vector indexes, choose distance metrics, store and query embeddings, tune recall/latency tradeoffs'

Include additional natural trigger terms users would say, such as 'embeddings', 'vector store', 'FAISS', 'Pinecone', 'ChromaDB', 'ANN search', or 'cosine similarity'
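To ground the terminology suggested above: "cosine similarity" over embeddings is the core operation behind most of these trigger terms, and it can be sketched in a few lines of NumPy. The data and shapes here are illustrative, not taken from the skill itself:

```python
import numpy as np

def cosine_top_k(query: np.ndarray, vectors: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k rows of `vectors` most cosine-similar to `query`."""
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q  # cosine similarity, in [-1, 1]
    return np.argsort(-scores)[:k]

embeddings = np.random.default_rng(0).normal(size=(100, 8))
top = cosine_top_k(embeddings[0], embeddings, k=3)  # the query vector itself ranks first
```

ANN libraries such as FAISS approximate this same ranking with index structures instead of the exhaustive matrix product shown here.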

Dimension / Reasoning / Score

Specificity

Names the domain (vector databases, similarity search) and some actions (building semantic search, implementing nearest neighbor queries, optimizing retrieval performance), but doesn't list multiple concrete specific actions like indexing strategies, embedding storage, or specific database operations.

2 / 3

Completeness

Clearly answers both 'what' (implement efficient similarity search with vector databases) and 'when' (explicit 'Use when' clause covering semantic search, nearest neighbor queries, and retrieval optimization).

3 / 3

Trigger Term Quality

Includes some relevant keywords like 'similarity search', 'vector databases', 'semantic search', 'nearest neighbor', and 'retrieval performance', but misses common user terms like 'embeddings', 'vector store', 'FAISS', 'Pinecone', 'ChromaDB', 'ANN', or 'vector index'.

2 / 3

Distinctiveness Conflict Risk

'Semantic search' and 'retrieval performance' could overlap with general search/information retrieval skills or RAG pipeline skills. The vector database focus provides some distinction, but 'optimizing retrieval performance' is broad enough to conflict with other skills.

2 / 3

Total: 9 / 12 (Passed)

Implementation

29%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a code dump of four vector database client wrappers with minimal instructional value. While the code itself is high quality and executable, the skill fails at conciseness (massive token footprint with repetitive patterns), workflow clarity (no sequencing, validation, or decision guidance), and progressive disclosure (everything crammed into one file). It would be far more effective as a brief overview with a decision matrix and links to per-provider implementation files.

Suggestions

Extract each provider template into its own reference file (e.g., PINECONE.md, QDRANT.md, PGVECTOR.md, WEAVIATE.md) and keep SKILL.md as a concise overview with a decision matrix for choosing between them.

Add a clear workflow: 1) Choose provider based on constraints, 2) Set up index with validation, 3) Batch upsert with progress/error checking, 4) Validate search quality with test queries, 5) Tune parameters based on recall/latency metrics.

Remove the Core Concepts section (distance metrics, index types) or reduce it to a one-line reference — Claude already knows these concepts.

Add validation checkpoints: verify index creation, test with known queries after upsert, measure recall against ground truth before deploying.
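The workflow and validation checkpoints suggested above can be sketched compactly. The `VectorClient` below is a hypothetical in-memory stand-in (real provider SDKs such as Pinecone or Qdrant differ in detail); the point is the batching, error collection, and count checkpoint:

```python
from dataclasses import dataclass

@dataclass
class VectorClient:
    """Hypothetical stand-in for a provider client; stores vectors in memory."""
    dim: int

    def __post_init__(self):
        self.store: dict[str, list[float]] = {}

    def upsert(self, items):
        # items: iterable of (id, vector) pairs
        for item_id, vec in items:
            if len(vec) != self.dim:
                raise ValueError(f"{item_id}: expected dim {self.dim}, got {len(vec)}")
            self.store[item_id] = list(vec)

    def count(self) -> int:
        return len(self.store)

def batch_upsert(client, items, batch_size=100):
    """Upsert in batches, collecting per-batch failures instead of aborting."""
    failed = []
    for i in range(0, len(items), batch_size):
        try:
            client.upsert(items[i:i + batch_size])
        except ValueError as exc:
            failed.append((i // batch_size, str(exc)))
    return failed

client = VectorClient(dim=4)
docs = [(f"doc-{i}", [float(i)] * 4) for i in range(250)]
errors = batch_upsert(client, docs)
assert client.count() == 250 and not errors  # checkpoint: verify counts before querying
```

After this checkpoint, the remaining steps (test queries against known documents, then recall/latency tuning) run against the populated index.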

Dimension / Reasoning / Score

Conciseness

Extremely verbose at ~400+ lines with four full implementation templates that are largely repetitive (each showing upsert/search/hybrid_search patterns). The 'Core Concepts' section explains distance metrics and index types that Claude already knows. The content could be reduced by 60-70% by showing one canonical implementation and noting differences for other providers.

1 / 3

Actionability

The code templates are fully executable, complete with imports, type hints, and realistic implementations. Each template covers upsert, search, filtered search, and hybrid search with copy-paste ready code.

3 / 3

Workflow Clarity

There is no clear workflow or sequencing for how to implement similarity search end-to-end. The templates are presented as isolated classes with no guidance on when to choose one over another, no validation steps (e.g., verifying index creation succeeded, checking search quality), and no error handling or recovery patterns for batch operations like upserts.

1 / 3

Progressive Disclosure

All four complete implementations are inlined in a single monolithic file, making it a wall of code. The templates should be split into separate reference files with SKILL.md providing a concise overview and links. The Resources section links to external docs but doesn't reference any companion skill files.

1 / 3

Total: 6 / 12 (Passed)
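The "measure recall against ground truth" checkpoint criticized as missing above is cheap to implement: compare the ANN index's results to exact brute-force neighbors for a sample of queries. A minimal sketch, assuming both result sets are available as lists of indices:

```python
def recall_at_k(retrieved: list[list[int]], ground_truth: list[list[int]], k: int) -> float:
    """Fraction of true top-k neighbors recovered, averaged over queries."""
    hits = sum(
        len(set(r[:k]) & set(g[:k])) / k
        for r, g in zip(retrieved, ground_truth)
    )
    return hits / len(ground_truth)

# Two queries: the ANN index recovers 2/2 and 1/2 of the exact neighbors.
approx = [[3, 7], [1, 9]]
exact = [[7, 3], [1, 2]]
score = recall_at_k(approx, exact, k=2)  # 0.75
```

Running this on a held-out query set before deploying gives the recall side of the recall/latency tradeoff the review asks the skill to address.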

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

Criteria / Description / Result

skill_md_line_count

SKILL.md is long (561 lines); consider splitting into references/ and linking

Warning

Total: 10 / 11 (Passed)

Repository: Dicklesworthstone/pi_agent_rust (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.