
embeddings

Vector embeddings with HNSW indexing, sql.js persistence, and hyperbolic support. 75x faster with agentic-flow integration. Use when: semantic search, pattern matching, similarity queries, knowledge retrieval. Skip when: exact text matching, simple lookups, no semantic understanding needed.
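To make the operations named in the description ('semantic search', 'similarity queries') concrete: documents are stored as vectors, and queries are answered by ranking those vectors by similarity. The TypeScript sketch below shows the brute-force form of that contract. It is illustrative only and borrows nothing from this skill's actual API; HNSW indexing replaces the linear scan with an approximate nearest-neighbor graph but returns the same kind of ranked results.

```typescript
// Minimal sketch of semantic search, assuming only that documents have
// precomputed embedding vectors. All names here are invented for illustration.

type Doc = { id: string; embedding: number[] };

// Cosine similarity: 1.0 for identical directions, 0 for orthogonal vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Brute-force k-nearest-neighbor search; an HNSW index approximates this
// ranking without touching every stored vector.
function search(query: number[], docs: Doc[], k = 5): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```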

Score: 66

Quality: 56% (Does it follow best practices?)

Impact: 74%, 1.60x (average score across 3 eval scenarios)

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/embeddings/SKILL.md

Quality

Discovery: 75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has a strong structure with explicit 'Use when' and 'Skip when' clauses that aid skill selection, and occupies a clear niche. However, it leans too heavily on technical jargon (HNSW, sql.js, hyperbolic) rather than describing concrete user-facing actions, and the '75x faster with agentic-flow integration' claim reads as marketing fluff without substantiation. The actual capabilities (what the skill *does*) could be more concretely articulated.

Suggestions

- Replace technical implementation details ('HNSW indexing, sql.js persistence, hyperbolic support') with concrete actions like 'indexes documents for semantic search, finds similar content, retrieves relevant knowledge from embeddings'.
- Add more natural trigger terms users would actually say, such as 'find similar', 'vector search', 'nearest neighbor', 'RAG', 'embedding lookup'.
- Remove the unsubstantiated performance claim '75x faster with agentic-flow integration', which reads as marketing fluff and doesn't help with skill selection.

Specificity (2 / 3): Names the domain (vector embeddings) and some technical specifics (HNSW indexing, sql.js persistence, hyperbolic support), but the actual actions/capabilities are described vaguely as use cases ('semantic search, pattern matching') rather than concrete actions the skill performs (e.g., 'index documents', 'query nearest neighbors', 'store embeddings').

Completeness (3 / 3): Clearly answers both 'what' (vector embeddings with HNSW indexing, sql.js persistence, hyperbolic support) and 'when' with explicit 'Use when' and 'Skip when' clauses that provide clear trigger guidance for skill selection.

Trigger Term Quality (2 / 3): Includes some relevant terms like 'semantic search', 'similarity queries', 'knowledge retrieval', and 'pattern matching' that users might say. However, it's heavy on technical jargon ('HNSW indexing', 'sql.js persistence', 'hyperbolic support') that users are unlikely to use in natural requests, and misses common variations like 'find similar', 'vector search', 'embeddings', 'nearest neighbor'.

Distinctiveness / Conflict Risk (3 / 3): The combination of vector embeddings, HNSW indexing, and semantic search creates a clear niche that is unlikely to conflict with other skills. The 'Skip when' clause further helps distinguish it from text search or simple lookup skills.

Total: 10 / 12

Passed

Implementation: 37%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides a decent overview of available CLI commands for embeddings but reads more like a feature catalog than actionable guidance. It lacks workflow sequencing showing how the pieces fit together, has no validation or error handling steps for batch operations, and the best practices are too generic to add value beyond what Claude already knows.

Suggestions

- Add a clear end-to-end workflow showing the sequence init → embed documents → build HNSW index → search, with validation checkpoints after each step (a hypothetical sketch of this shape follows the list).
- Include expected output examples for search commands so Claude knows how to interpret and present results to users.
- Replace the generic 'Best Practices' section with specific configuration guidance, e.g., when to use Int8 vs Binary quantization, or HNSW parameter tuning for different dataset sizes (see the quantization sketch below).
- Add error handling guidance and validation steps, especially for batch embedding operations.
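As a sketch of what that end-to-end workflow guidance could look like: every name below is invented for illustration, nothing is taken from this skill's actual CLI or API, and tiny in-memory stubs stand in for the real embedding model and HNSW index.

```typescript
// Hypothetical workflow sketch: init -> embed -> build index -> search,
// with a validation checkpoint after each step.

import assert from "node:assert";

interface Hit { id: string; score: number; text: string }

class Store {
  vectors = new Map<string, number[]>();
  texts = new Map<string, string>();
}

// Stand-in for a real embedding model: a tiny deterministic vector.
const fakeEmbed = (text: string): number[] =>
  [text.length % 7, text.split(" ").length, (text.charCodeAt(0) || 0) % 11];

function run(texts: string[]): Hit[] {
  // 1. init
  const store = new Store();

  // 2. embed documents
  texts.forEach((t, i) => {
    store.vectors.set(`doc-${i}`, fakeEmbed(t));
    store.texts.set(`doc-${i}`, t);
  });
  assert.equal(store.vectors.size, texts.length, "every document got a vector");

  // 3. build the index (a real implementation constructs the HNSW graph here)
  const index = [...store.vectors.entries()];
  assert.equal(index.length, texts.length, "index covers all documents");

  // 4. search, ranking by (negated) Euclidean distance to the query vector
  const q = fakeEmbed("how do I reset my password?");
  const dist = (a: number[], b: number[]) =>
    Math.hypot(...a.map((x, i) => x - b[i]));
  return index
    .map(([id, v]) => ({ id, score: -dist(q, v), text: store.texts.get(id)! }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3);
}

// A documented expected-output example (the second suggestion above) could
// then pin down the exact shape an agent should present, e.g.:
// [{ id: "doc-0", score: -1.4, text: "How to reset your password" }, ...]
console.log(run(["How to reset your password", "Billing FAQ", "API rate limits"]));
```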
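Similarly, a sketch of the Int8 vs Binary quantization guidance the third suggestion asks for. The function names are invented, and the trade-off notes in the comments are general rules of thumb, not taken from this skill's documentation.

```typescript
// Int8: scale each float into [-127, 127]. Roughly 4x smaller than float32
// with little recall loss; a common default for large collections.
function quantizeInt8(v: number[]): { scale: number; q: Int8Array } {
  const max = Math.max(...v.map(Math.abs)) || 1;
  const scale = max / 127;
  return { scale, q: Int8Array.from(v, (x) => Math.round(x / scale)) };
}

// Binary: keep only the sign bit. About 32x smaller and cheapest to compare
// (Hamming distance), but coarse; typically used as a first-pass filter
// before re-ranking candidates with full-precision vectors.
function quantizeBinary(v: number[]): Uint8Array {
  const bits = new Uint8Array(Math.ceil(v.length / 8));
  v.forEach((x, i) => {
    if (x > 0) bits[i >> 3] |= 1 << (i & 7);
  });
  return bits;
}
```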

Conciseness (2 / 3): The feature table and quantization table add some value but feel like marketing material rather than actionable guidance. The 'Best Practices' section is generic advice Claude already knows. However, the command examples are reasonably lean.

Actionability (2 / 3): Provides concrete CLI commands, which is good, but lacks executable code examples showing programmatic usage, expected output formats, or how to interpret results. The commands are surface-level, without showing configuration options, error handling, or real integration patterns.

Workflow Clarity (1 / 3): There is no clear multi-step workflow sequencing. Commands are listed independently without showing how they connect (e.g., an init → embed → search pipeline). There are no validation steps, no error recovery, and no feedback loops for batch operations, which the scoring notes say should cap the score at 2.

Progressive Disclosure (2 / 3): Content is reasonably organized with clear sections and headers, but everything is inline with no references to deeper documentation. For a skill covering HNSW configuration, hyperbolic embeddings, quantization options, and memory integration, there should be references to detailed guides for each feature area.

Total: 7 / 12

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 checks passed

Validation for skill structure: no warnings or errors.

Repository: ruvnet/claude-flow (Reviewed)


If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.