Vector embeddings with HNSW indexing, sql.js persistence, and hyperbolic support. 75x faster with agentic-flow integration. Use when: semantic search, pattern matching, similarity queries, knowledge retrieval. Skip when: exact text matching, simple lookups, no semantic understanding needed.
Overall quality score: 56%

Evals: Passed ("Does it follow best practices?"), no known issues
- Average score across 3 eval scenarios: 74%
- Impact: 1.60x
Optimize this skill with Tessl:

npx tessl skill review --optimize ./.agents/skills/embeddings/SKILL.md

Quality
Discovery
75%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has strong structural completeness with explicit 'Use when' and 'Skip when' clauses, and occupies a clear niche. However, it leans too heavily on technical jargon (HNSW, sql.js, hyperbolic) rather than describing concrete user-facing actions, and the '75x faster with agentic-flow integration' claim reads as marketing fluff without substantive value for skill selection.
Suggestions
- Replace technical implementation details ('HNSW indexing, sql.js persistence, hyperbolic support') with concrete actions like 'Index documents, query nearest neighbors, store and retrieve vector embeddings'.
- Add more natural trigger terms users would actually say, such as 'find similar documents', 'vector search', 'nearest neighbor', 'RAG', or 'embedding lookup'.
- Remove the '75x faster with agentic-flow integration' claim, as it's unverifiable marketing language that doesn't help Claude select the right skill.
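Taken together, these suggestions imply a description along the following lines. This is an illustrative rewrite, not the skill's actual metadata:

```markdown
Index documents, query nearest neighbors, and store and retrieve vector
embeddings for semantic search, similarity queries, and RAG-style knowledge
retrieval. Use when: finding similar documents, vector search, nearest-neighbor
lookup, embedding storage. Skip when: exact text matching, simple lookups, or
no semantic understanding is needed.
```

Note how it leads with concrete actions and natural trigger terms rather than implementation details or performance claims.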
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (vector embeddings) and some technical specifics (HNSW indexing, sql.js persistence, hyperbolic support), but the actual actions/capabilities are described vaguely as use cases ('semantic search, pattern matching') rather than concrete actions the skill performs (e.g., 'index documents', 'query nearest neighbors', 'store embeddings'). | 2 / 3 |
| Completeness | Clearly answers both 'what' (vector embeddings with HNSW indexing, sql.js persistence, hyperbolic support) and 'when' with explicit 'Use when' and 'Skip when' clauses that provide clear trigger guidance for selection and deselection. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'semantic search', 'similarity queries', 'knowledge retrieval', and 'pattern matching' that users might say. However, it's heavy on technical jargon ('HNSW indexing', 'sql.js persistence', 'hyperbolic support') that users are unlikely to use in natural requests, and misses common variations like 'find similar', 'vector search', 'embeddings', 'nearest neighbor'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of vector embeddings, HNSW indexing, and semantic search creates a clear niche that is unlikely to conflict with other skills. The 'Skip when' clause further helps distinguish it from text search or simple lookup skills. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation
37%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides a decent overview of embedding capabilities with concrete CLI commands and well-structured tables, but lacks depth in actionability and workflow guidance. It reads more like a feature catalog than an instructional skill—commands are listed without showing expected outputs, error recovery, or how steps connect into a coherent workflow. The best practices section is too vague to be actionable.
Suggestions
- Add expected output examples for each command (e.g., what does `embeddings search` return? Show a sample JSON response with similarity scores).
- Define a clear end-to-end workflow: init → embed documents → verify embeddings stored → search → validate results, with explicit validation checkpoints at each step.
- Replace the generic 'Best Practices' list with specific, actionable guidance (e.g., 'Use HNSW when the pattern database exceeds 10K entries; set ef_construction=200 for recall >95%').
- Add error handling guidance: what happens when embedding fails, how to retry batch operations, how to verify index integrity.
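As a concrete illustration of the validation checkpoints suggested above, the following sketch assumes `embeddings search` returns JSON with a `results` array of `{id, score}` objects. The response shape, field names, and scores here are assumptions for illustration, not documented behavior of the skill:

```python
import json

def validate_search_results(raw: str, top_k: int) -> list[dict]:
    """Validate a hypothetical `embeddings search` JSON response.

    Assumed shape: {"results": [{"id": str, "score": float}, ...]},
    with cosine-similarity scores sorted best-first.
    """
    payload = json.loads(raw)
    results = payload.get("results", [])
    if len(results) > top_k:
        raise ValueError(f"expected at most {top_k} results, got {len(results)}")
    scores = [r["score"] for r in results]
    # Cosine similarity should fall in [-1, 1]; out-of-range values
    # suggest a corrupted index or mismatched embedding dimensions.
    if any(not -1.0 <= s <= 1.0 for s in scores):
        raise ValueError("similarity score out of range")
    if scores != sorted(scores, reverse=True):
        raise ValueError("results are not sorted by descending similarity")
    return results

# Fabricated sample response, for illustration only:
sample = '{"results": [{"id": "doc-7", "score": 0.92}, {"id": "doc-3", "score": 0.81}]}'
hits = validate_search_results(sample, top_k=5)
```

A checkpoint like this, run after each search step, is the kind of validation/feedback loop the workflow currently lacks.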
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Mostly efficient with good use of tables, but the features table and best practices section include some filler that doesn't add actionable value (e.g., 'Cross-platform SQLite persistent cache (WASM)' is descriptive rather than instructive). The purpose statement is slightly redundant with the description. | 2 / 3 |
| Actionability | Provides concrete CLI commands which are copy-paste ready, but lacks executable code examples showing programmatic usage, expected output formats, or how to interpret results. The commands are surface-level without showing configuration options, error handling, or real integration patterns. | 2 / 3 |
| Workflow Clarity | No clear multi-step workflow is defined. Commands are listed independently without sequencing, dependencies, or validation checkpoints. There's no guidance on what to do if initialization fails, how to verify embeddings were stored correctly, or how to validate search results. For batch operations especially, validation/feedback loops are missing. | 1 / 3 |
| Progressive Disclosure | Content is reasonably organized with clear sections and tables, but everything is inline in one file. Topics like HNSW configuration, hyperbolic embeddings, and quantization settings could benefit from references to detailed docs. No external references are provided for advanced features. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation
100%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.