Vector embeddings configuration and semantic search
Overall score: 40%
Eval scenarios: Pending (no eval scenarios have been run)
Known issues: Passed (no known issues)
Optimize this skill with Tessl:
npx tessl skill review --optimize ./src/skills/bundled/embeddings/SKILL.md

Quality
Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too terse and lacks both concrete actions and explicit trigger guidance. While it names a recognizable domain (vector embeddings and semantic search), it does not explain what specific tasks the skill performs or when Claude should select it. It reads more like a topic label than a functional skill description.
Suggestions
- Add specific concrete actions, e.g., 'Configures vector embedding models, creates embedding indexes, performs similarity queries, and tunes search relevance parameters.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about vector databases, embedding configuration, similarity search, cosine similarity, RAG pipelines, or semantic retrieval.'
- Include common user-facing variations and file/tool references (e.g., 'Pinecone', 'Weaviate', 'FAISS', '.embeddings', 'vector store') to improve trigger-term coverage and distinctiveness.
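Taken together, the suggestions above might yield frontmatter along these lines. This is a sketch only: the skill name and exact wording are illustrative, not taken from the actual SKILL.md.

```yaml
# Illustrative SKILL.md frontmatter; wording is a suggested rewrite,
# not the skill's real description.
name: embeddings
description: >
  Configures vector embedding models, creates embedding indexes, performs
  similarity queries, and tunes search relevance parameters. Use when the
  user asks about vector databases, embedding configuration, similarity
  search, cosine similarity, RAG pipelines, or semantic retrieval.
```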
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names a domain ('vector embeddings', 'semantic search') but does not list any concrete actions. There are no verbs describing what the skill actually does: no 'configure', 'generate', 'query', etc. | 1 / 3 |
| Completeness | The description weakly addresses 'what' (configuration and semantic search) but provides no 'when' clause or explicit trigger guidance. The lack of a 'Use when...' clause caps this at 2 per the rubric, and the 'what' is also very weak, so it scores 1. | 1 / 3 |
| Trigger Term Quality | It includes some relevant keywords like 'vector embeddings', 'semantic search', and 'configuration', which users might mention. However, it misses common variations such as 'similarity search', 'embedding model', 'vector database', 'cosine similarity', or 'RAG'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The terms 'vector embeddings' and 'semantic search' are somewhat specific to a niche, but without concrete actions or explicit triggers, the skill could overlap with general search skills, database configuration skills, or ML/AI skills. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Implementation: 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides highly actionable, executable TypeScript code and clear chat commands for embeddings configuration and semantic search. However, it suffers from being a monolithic reference document with no progressive disclosure or external file references, and lacks explicit workflow sequencing and validation steps. The best practices section adds little value for Claude.
Suggestions
- Add a concise Quick Start section at the top and move the detailed API reference, provider tables, and use cases into separate referenced files (e.g., API_REFERENCE.md, PROVIDERS.md).
- Add explicit error-handling examples and validation steps (e.g., verify that embedding dimensions match expectations, handle API failures, confirm storage succeeded).
- Remove the 'Best Practices' section: these are generic tips Claude already knows, and they don't add actionable value.
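The error-handling suggestion can be sketched in TypeScript. Everything here is hypothetical: `callProvider`, `embedWithValidation`, and `EXPECTED_DIM` are illustrative names, not APIs from the skill under review.

```typescript
// Sketch: wrap an embedding call with dimension validation and a simple
// retry on failure. All names are illustrative, not the skill's real API.
const EXPECTED_DIM = 4; // real providers use e.g. 1536; kept small for the sketch

// Stand-in for a provider API call; a real client SDK call would go here.
async function callProvider(text: string): Promise<number[]> {
  return Array.from({ length: EXPECTED_DIM }, (_, i) => text.length * (i + 1));
}

async function embedWithValidation(text: string, retries = 2): Promise<number[]> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const vec = await callProvider(text);
      // Validation step: confirm the returned vector has the expected shape.
      if (vec.length !== EXPECTED_DIM) {
        throw new Error(`expected ${EXPECTED_DIM} dims, got ${vec.length}`);
      }
      return vec;
    } catch (err) {
      if (attempt === retries) throw err; // exhausted retries: surface the error
    }
  }
  throw new Error("unreachable");
}

embedWithValidation("hello").then((v) => console.log(v.length)); // prints 4
```

The same pattern (validate the response, retry transient failures, surface permanent ones) applies to the storage and search calls as well.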
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably structured but includes some unnecessary sections, like the 'Best Practices' tips that are generic advice Claude already knows (use caching, batch requests, monitor costs). The provider comparison table and use cases section add bulk that could be trimmed or referenced externally. | 2 / 3 |
| Actionability | The skill provides fully executable TypeScript code examples with concrete API calls, configuration objects, and specific provider/model names. Chat commands are clearly listed with exact syntax. Code is copy-paste ready. | 3 / 3 |
| Workflow Clarity | The content is primarily an API reference rather than a multi-step workflow, and the implicit workflow (configure → embed → store → search) lacks explicit sequencing or validation checkpoints. There is no guidance on error handling, verifying embeddings were stored correctly, or what to do when API calls fail. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of content (~180 lines) with no references to external files. The provider details, model tables, use cases, and full API reference are all inline when they could be split into separate reference files. There is no overview/quick-start section that points to deeper content. | 1 / 3 |
| Total | | 8 / 12 (Passed) |
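The implicit workflow the Workflow Clarity row describes (embed → store → search) can be sketched in a few lines of TypeScript. The in-memory store and function names below are illustrative, not the skill's actual API:

```typescript
// Cosine similarity between two vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// In-memory stand-in for a vector store entry.
type Entry = { id: string; vector: number[] };

// Search step: return the top-k entries most similar to the query vector.
function search(store: Entry[], query: number[], k: number): Entry[] {
  return [...store]
    .sort(
      (x, y) =>
        cosineSimilarity(y.vector, query) - cosineSimilarity(x.vector, query)
    )
    .slice(0, k);
}
```

A real pipeline would replace the in-memory array with a vector database client, but the sequencing (embed the query, score against stored vectors, rank, truncate to k) is the same.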
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
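A minimal sketch of how the single frontmatter warning might be resolved, following the validator's own suggestion to move unrecognized keys under metadata. The key name `version` here is a hypothetical example, not necessarily the key the validator flagged:

```yaml
# Before: a hypothetical unrecognized top-level key
# name: embeddings
# version: 1.2.0

# After: the unrecognized key moved under metadata
name: embeddings
metadata:
  version: 1.2.0
```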