
mgrep-code-search

Semantic code search using mgrep for efficient codebase exploration. This skill should be used when searching or exploring codebases with more than 30 non-gitignored files and/or nested directory structures. It provides natural language semantic search that complements traditional grep/ripgrep for finding features, understanding intent, and exploring unfamiliar code.
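The 30-file threshold in the description can be checked mechanically before choosing between mgrep and plain grep. A minimal sketch using `git ls-files` (which lists tracked and untracked-but-not-ignored files); the threshold value is taken from the description above, and the decision messages are illustrative only:

```shell
# Count non-gitignored files; outside a git repo this falls back to 0.
count=$(git ls-files --cached --others --exclude-standard 2>/dev/null | wc -l | tr -d ' ')
if [ "$count" -gt 30 ]; then
  echo "large codebase ($count files): semantic search (mgrep) is worthwhile"
else
  echo "small codebase ($count files): traditional grep/ripgrep may suffice"
fi
```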

Score: 86

Quality: 81% (Does it follow best practices?)

Impact: 93% (11.62x)

Average score across 3 eval scenarios.

Security (by Snyk): Passed (no known issues)


Quality

Discovery: 75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description effectively communicates when to use the skill, with clear threshold criteria (30+ files, nested directories), and distinguishes itself from traditional grep tools. However, it could list more concrete actions and the natural trigger terms users might actually say when they need semantic code search.

Suggestions:

- Add more concrete actions like "find implementations of features, locate similar code patterns, search by intent rather than exact text".
- Include natural user trigger phrases such as "find code that does X", "where is the function for", "search for similar code", "look for feature".

Dimension scores:

- Specificity (2 / 3): Names the domain (semantic code search) and the tool (mgrep), and mentions "natural language semantic search" and "finding features, understanding intent, exploring unfamiliar code", but doesn't list multiple concrete actions like "search by function name, find similar code patterns, locate implementations".
- Completeness (3 / 3): Clearly answers both what ("Semantic code search using mgrep for efficient codebase exploration") and when ("should be used when searching or exploring codebases with more than 30 non-gitignored files and/or nested directory structures") with explicit trigger conditions.
- Trigger Term Quality (2 / 3): Includes relevant terms like "searching", "exploring codebases", "semantic search", and "grep/ripgrep", but is missing common user phrases like "find code", "look for", "where is", "search for function", as well as file type mentions.
- Distinctiveness / Conflict Risk (3 / 3): Clear niche with specific scope (30+ files, nested directories) and distinct positioning against grep/ripgrep. The semantic search focus and the mgrep tool name create a unique identity unlikely to conflict with general file search or code editing skills.

Total: 10 / 12 (Passed)

Implementation: 87%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-crafted skill that efficiently teaches mgrep usage with concrete, executable examples and good organization. The main weakness is the workflow section lacks error handling guidance and validation steps for when things go wrong (e.g., indexing failures, empty results). The 'When to Use' section effectively differentiates mgrep from traditional grep tools.

Suggestions:

- Add troubleshooting guidance for common issues: what to do if indexing fails, what to do if searches return no or poor results, and how to verify index health.
- Include a validation step in the workflow to confirm the watcher is running and the index is current before searching.
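The suggested validation step could look something like the sketch below. Note that the `mgrep <query> <path>` invocation shape is an assumption for illustration, not the tool's documented interface; the pure result-checking helper is the part that carries the suggestion.

```python
import shutil
import subprocess


def semantic_search(query, path="."):
    """Run mgrep and surface failures instead of silently returning nothing.

    The ``mgrep <query> <path>`` CLI shape is a hypothetical placeholder.
    """
    if shutil.which("mgrep") is None:
        raise RuntimeError("mgrep is not installed or not on PATH")
    proc = subprocess.run(["mgrep", query, path],
                          capture_output=True, text=True)
    if proc.returncode != 0:
        # e.g. an indexing failure, or the watcher is not running
        raise RuntimeError("mgrep failed: " + proc.stderr.strip())
    return [line for line in proc.stdout.splitlines() if line.strip()]


def needs_refinement(results, min_hits=1):
    """Empty or too-sparse results suggest a stale index or a query to rephrase."""
    return len(results) < min_hits
```

An agent following the workflow would call `needs_refinement` after each search and either restart the watcher or rephrase the query before giving up.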

Dimension scores:

- Conciseness (3 / 3): The content is lean and efficient, avoiding explanations of concepts Claude already knows. Every section serves a purpose, with no padding or unnecessary context about what semantic search is or how it works internally.
- Actionability (3 / 3): Provides fully executable, copy-paste-ready commands throughout. Examples are concrete, with real query strings, specific flags, and path arguments that can be used immediately.
- Workflow Clarity (2 / 3): The workflow section provides clear sequencing (start watcher → search → refine) but lacks validation checkpoints. There's no guidance on what to do if indexing fails, if searches return no results, or how to verify the index is current.
- Progressive Disclosure (3 / 3): Content is well organized, with clear sections progressing from overview to quick start to detailed options. For a skill of this size (~80 lines), the structure is appropriate without needing external file references.

Total: 11 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed

Validation for skill structure: no warnings or errors.

Repository: intellectronica/agent-skills (Reviewed)

