Vector embeddings with HNSW indexing, sql.js persistence, and hyperbolic support. 75x faster with agentic-flow integration. Use when: semantic search, pattern matching, similarity queries, knowledge retrieval. Skip when: exact text matching, simple lookups, no semantic understanding needed.
Quality: 56%
Does it follow best practices?

Impact: 74%
1.60x average score across 3 eval scenarios
Passed. No known issues.
Optimize this skill with Tessl:
npx tessl skill review --optimize ./.agents/skills/embeddings/SKILL.md

Quality
Discovery
75%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has a strong structure with explicit 'Use when' and 'Skip when' clauses that aid skill selection, and occupies a clear niche. However, it leans too heavily on technical jargon (HNSW, sql.js, hyperbolic) rather than describing concrete user-facing actions, and the '75x faster with agentic-flow integration' claim reads as marketing fluff without substantive value for skill selection.
Suggestions
Replace technical implementation details ('HNSW indexing, sql.js persistence, hyperbolic support, 75x faster') with concrete actions like 'Index documents, query nearest neighbors, store and retrieve vector embeddings'.
Add more natural trigger terms users would actually say, such as 'find similar documents', 'vector search', 'nearest neighbor', 'embed text', or 'cosine similarity'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (vector embeddings) and some technical specifics (HNSW indexing, sql.js persistence, hyperbolic support), but the actual actions/capabilities are described vaguely as use cases ('semantic search, pattern matching') rather than concrete actions the skill performs (e.g., 'index documents', 'query nearest neighbors', 'store embeddings'). | 2 / 3 |
| Completeness | Clearly answers both 'what' (vector embeddings with HNSW indexing, sql.js persistence, hyperbolic support) and 'when' with explicit 'Use when' and 'Skip when' clauses that provide clear trigger guidance for selection. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'semantic search', 'similarity queries', 'knowledge retrieval', and 'pattern matching' that users might say. However, it's heavy on technical jargon ('HNSW indexing', 'sql.js persistence', 'hyperbolic support') that users are unlikely to use in natural requests, and misses common variations like 'find similar', 'vector search', 'embeddings', 'nearest neighbor'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of vector embeddings, HNSW indexing, and semantic search creates a clear niche that is unlikely to conflict with other skills. The 'Skip when' clause further helps distinguish it from text search or simple lookup skills. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation
37%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides a decent catalog of CLI commands for vector embeddings but reads more like a feature overview than actionable guidance. It lacks workflow sequencing, validation steps, expected outputs, and error handling. The best practices section is generic, and the content would benefit from showing a complete end-to-end workflow with verification checkpoints.
Suggestions
Add a clear end-to-end workflow showing the sequence: init → embed → search, with expected output examples at each step and validation checkpoints (e.g., verify init succeeded before embedding).
Include expected output or response format for each command so Claude knows what success looks like and can detect failures.
Replace the generic 'Best Practices' section with specific guidance on when to choose each option (e.g., 'Use Int8 quantization when dataset > 10k vectors; use hyperbolic when data has tree-like hierarchy such as taxonomies').
Add error handling guidance for batch operations, including what to do when embedding fails mid-batch.
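The init → embed → search sequence with validation checkpoints suggested above could look roughly like the following. This is a generic in-memory sketch using a toy character-frequency "embedding"; the function names (`init_store`, `add_batch`, `search`) are hypothetical and do not reproduce the skill's actual CLI or API.

```python
import math

def embed(text):
    # Toy deterministic "embedding": normalized character-frequency vector over a-z.
    # A real skill would call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in vec))
    if norm == 0:
        # Checkpoint: fail fast on inputs that produce no embedding.
        raise ValueError(f"empty embedding for input: {text!r}")
    return [x / norm for x in vec]

def init_store():
    return {"docs": [], "vecs": []}

def add_batch(store, docs):
    failed = []
    for doc in docs:
        try:
            vec = embed(doc)
        except ValueError:
            failed.append(doc)  # checkpoint: collect mid-batch failures instead of aborting
            continue
        store["docs"].append(doc)
        store["vecs"].append(vec)
    # Checkpoint: store is internally consistent before searching.
    assert len(store["docs"]) == len(store["vecs"])
    return failed

def search(store, query, k=2):
    qv = embed(query)
    scored = [(sum(a * b for a, b in zip(qv, v)), d)
              for v, d in zip(store["vecs"], store["docs"])]
    scored.sort(reverse=True)
    return [d for _, d in scored[:k]]

store = init_store()
failed = add_batch(store, ["vector search", "semantic retrieval", "   "])
print(f"indexed={len(store['docs'])} failed={len(failed)}")  # indexed=2 failed=1
print(search(store, "search vectors", k=1))
```

The point is the shape, not the math: each step verifies the previous one succeeded, and batch failures are reported rather than silently dropped or fatal.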
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The feature table and quantization table add some value, but the 'Best Practices' section is generic advice Claude already knows, and the feature table is more marketing than instruction. Could be tightened. | 2 / 3 |
| Actionability | Provides concrete CLI commands which are copy-paste ready, but lacks expected output examples and error handling, and doesn't show how to actually integrate embeddings into code (only CLI wrappers). No programmatic API usage is shown. | 2 / 3 |
| Workflow Clarity | Commands are listed independently with no sequencing, no validation steps, and no feedback loops. There's no guidance on what to do if embedding fails, no workflow connecting init → embed → search, and batch operations lack any validation checkpoint. | 1 / 3 |
| Progressive Disclosure | Content is reasonably organized with clear sections and tables, but everything is inline with no references to deeper documentation. The quantization and hyperbolic features could benefit from linked detailed docs rather than surface-level mentions. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
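For the quantization guidance suggested above, the standard symmetric Int8 scheme can be sketched as follows; this is a generic illustration of the space/precision trade-off, not necessarily the scheme the skill implements.

```python
def quantize_int8(vec):
    # Symmetric int8 quantization: map floats into [-127, 127] with a single scale.
    scale = max(abs(x) for x in vec) / 127.0
    if scale == 0:
        return [0] * len(vec), 1.0
    return [round(x / scale) for x in vec], scale

def dequantize(qvec, scale):
    return [q * scale for q in qvec]

v = [0.12, -0.5, 0.33]
q, s = quantize_int8(v)
restored = dequantize(q, s)
# Reconstruction error is bounded by half the scale step, which is why
# int8 is usually acceptable for large similarity-search corpora.
print(max(abs(a - b) for a, b in zip(v, restored)))
```

Each vector shrinks 4x (int8 vs float32) at the cost of a bounded per-component error, which is the trade-off the review asks the skill to state explicitly.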
Validation
100%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.