Vector embeddings with HNSW indexing, sql.js persistence, and hyperbolic support. 75x faster with agentic-flow integration. Use when: semantic search, pattern matching, similarity queries, knowledge retrieval. Skip when: exact text matching, simple lookups, no semantic understanding needed.
Quality: 56% (Does it follow best practices?)
Impact: 74% (1.60x average score across 3 eval scenarios)
Validation: Passed, no known issues

Optimize this skill with Tessl:
npx tessl skill review --optimize ./.agents/skills/embeddings/SKILL.md

Quality
Discovery
75%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has a strong structure, with explicit 'Use when' and 'Skip when' clauses that aid skill selection, and it occupies a clear niche. However, it leans too heavily on technical jargon (HNSW, sql.js, hyperbolic) rather than describing concrete user-facing actions, and the '75x faster with agentic-flow integration' claim reads as marketing fluff rather than useful selection criteria.
Suggestions
Replace technical implementation details ('HNSW indexing, sql.js persistence, hyperbolic support') with concrete actions like 'indexes documents for semantic search, finds similar content, retrieves relevant knowledge from embeddings'.
Add more natural trigger terms users would actually say, such as 'find similar', 'vector search', 'nearest neighbor', 'RAG', 'embedding lookup'.
Remove the '75x faster with agentic-flow integration' claim as it is unverifiable marketing language that doesn't help with skill selection.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (vector embeddings) and some technical specifics (HNSW indexing, sql.js persistence, hyperbolic support), but the actual actions/capabilities are described vaguely as use cases ('semantic search, pattern matching') rather than concrete actions the skill performs (e.g., 'index documents', 'query nearest neighbors', 'store embeddings'). | 2 / 3 |
| Completeness | Clearly answers both 'what' (vector embeddings with HNSW indexing, sql.js persistence, hyperbolic support) and 'when' with explicit 'Use when' and 'Skip when' clauses that provide clear trigger guidance for selection. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'semantic search', 'similarity queries', 'knowledge retrieval', and 'pattern matching' that users might say. However, it's heavy on technical jargon ('HNSW indexing', 'sql.js persistence', 'hyperbolic support') that users are unlikely to use in natural requests, and misses common variations like 'find similar', 'vector search', 'embeddings', 'nearest neighbor'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of vector embeddings, HNSW indexing, and semantic search creates a clear niche that is unlikely to conflict with other skills. The 'Skip when' clause further helps distinguish it from text search or simple lookup skills. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation
37%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides a reasonable overview of CLI commands for vector embeddings but reads more like a feature catalog than actionable guidance. It lacks workflow sequencing, validation steps, expected outputs, and deeper configuration examples. The best practices section is generic and the feature table is more descriptive than instructive.
Suggestions
Add a clear sequential workflow (e.g., init → configure → embed → validate → search) with explicit validation checkpoints, especially for batch operations
Include expected output examples for each command so Claude knows what success looks like
Replace the generic 'Best Practices' list with specific configuration guidance (e.g., HNSW parameters for different dataset sizes, when to choose which quantization level)
Add references to detailed docs for complex features like hyperbolic embeddings and HNSW tuning rather than listing them as one-line table entries
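The validation-checkpoint suggestion above can be made concrete: HNSW is an approximate substitute for exact nearest-neighbor search, so one natural checkpoint after indexing is to compare index results against a brute-force scan on a small sample. A minimal sketch of that exact-search baseline in plain Python follows; the document IDs, vectors, and function names here are illustrative stand-ins, not the skill's actual API:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def exact_top_k(query, corpus, k=2):
    # Brute-force nearest-neighbor search: score every vector, keep the best k.
    # This O(n) scan is what an HNSW index approximates; on a sample it serves
    # as ground truth for validating index recall.
    ranked = sorted(corpus.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy 3-dimensional "embeddings" standing in for real model output.
corpus = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.9, 0.1, 0.0],
    "doc-c": [0.0, 1.0, 0.0],
}
print(exact_top_k([1.0, 0.05, 0.0], corpus))  # ['doc-a', 'doc-b']
```

A batch-embedding checkpoint could then assert that the index's top-k results overlap the brute-force top-k above some recall threshold before proceeding to search.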
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The feature table and quantization table add some value, but the 'Best Practices' section is generic advice Claude already knows. The feature table is more marketing than instruction. Could be tightened. | 2 / 3 |
| Actionability | Provides concrete CLI commands which are copy-paste ready, but lacks expected output examples and error handling, and doesn't show how to actually integrate embeddings into code (only CLI wrappers). No programmatic API examples or configuration details. | 2 / 3 |
| Workflow Clarity | Commands are listed independently with no sequencing, no validation steps, and no feedback loops. There's no guidance on what to do if embedding fails, no workflow for the common init → embed → search pipeline, and batch operations lack any validation checkpoints. | 1 / 3 |
| Progressive Disclosure | Content is organized into clear sections with headers and tables, but everything is inline with no references to detailed documentation for advanced features like HNSW configuration, hyperbolic embeddings setup, or quantization options that likely need more detail. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation
100%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.