Semantic code search using mgrep for efficient codebase exploration. This skill should be used when searching or exploring codebases with more than 30 non-gitignored files and/or nested directory structures. It provides natural language semantic search that complements traditional grep/ripgrep for finding features, understanding intent, and exploring unfamiliar code.
Install with Tessl CLI:

```shell
npx tessl i github:intellectronica/agent-skills --skill mgrep-code-search
```
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
```shell
npx tessl skill review --optimize ./path/to/skill
```
Discovery
Score: 75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description has strong completeness with explicit 'when to use' criteria (30+ files, nested directories) and good distinctiveness through specific tool naming and threshold conditions. However, it could improve specificity by listing concrete actions (e.g., 'find function definitions', 'locate implementations') and by including more natural trigger terms users would actually say when searching code.
Suggestions
Add concrete action verbs like 'find function definitions', 'locate implementations', 'discover related code patterns' to improve specificity
Include natural user phrases as trigger terms such as 'find code', 'where is the function', 'search for', 'look for implementation'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (semantic code search) and tool (mgrep), and mentions 'natural language semantic search' and 'finding features, understanding intent, exploring unfamiliar code', but doesn't list multiple concrete actions like specific operations or outputs. | 2 / 3 |
| Completeness | Clearly answers both what (semantic code search using mgrep for codebase exploration) and when (codebases with 30+ non-gitignored files and/or nested directory structures), with explicit trigger conditions. | 3 / 3 |
| Trigger Term Quality | Includes relevant terms like 'searching', 'exploring codebases', 'semantic search', and 'grep/ripgrep', but is missing common user phrases like 'find code', 'search for function', 'where is', and 'look for' that users would naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | Clear niche with specific triggers: the 30+ files threshold and the 'mgrep' tool name create distinct boundaries that differentiate it from general grep/search skills, reducing conflict risk. | 3 / 3 |
| Total | | 10 / 12 Passed |
Implementation
Score: 87%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted skill that efficiently teaches mgrep usage with concrete, executable examples and good organization. The main weakness is that the workflow section lacks validation steps: there is no guidance on verifying the watcher is running, checking index status, or troubleshooting common issues like empty results or stale indexes.
Suggestions
Add validation checkpoint after starting the watcher (e.g., how to verify indexing completed successfully)
Include troubleshooting guidance for common issues: no results found, stale index symptoms, watcher not running
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, avoiding explanations of concepts Claude already knows. Every section serves a purpose, with no padding or unnecessary context about what semantic search is or how it works internally. | 3 / 3 |
| Actionability | Provides fully executable, copy-paste-ready commands throughout. Examples are concrete, with real query strings, path arguments, and option flags that can be used immediately. | 3 / 3 |
| Workflow Clarity | The workflow section provides clear sequencing (start watcher → search → refine), but lacks validation checkpoints. There's no guidance on what to do if indexing fails, if searches return no results, or how to verify the index is current. | 2 / 3 |
| Progressive Disclosure | Content is well organized, with clear sections progressing from overview to quick start to detailed options. For a skill of this size (~80 lines), the structure is appropriate without needing external file references. | 3 / 3 |
| Total | | 11 / 12 Passed |
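The watcher workflow and the missing validation checkpoints the review calls out could be sketched together as below. Note this is an illustrative sketch only: the `mgrep watch` subcommand and the bare-query invocation shown here are assumptions inferred from the review text ("start watcher → search → refine"), not confirmed mgrep CLI syntax.

```shell
# Assumed workflow sketch -- exact mgrep subcommands and flags are not
# confirmed by this review and may differ from the real CLI.

# 1. Start the watcher so the index stays current (assumed subcommand)
mgrep watch . &
WATCHER_PID=$!

# 2. Validation checkpoint the review suggests adding: confirm the
#    watcher is still running before trusting search results
ps -p "$WATCHER_PID" > /dev/null || echo "watcher not running; index may be stale"

# 3. Search with a natural-language query (assumed invocation)
mgrep "where is retry logic for failed uploads implemented?"

# 4. Refine: if results come back empty, broaden the query before
#    concluding the code does not exist
mgrep "upload error handling"
```

A troubleshooting note along these lines (no results → broaden query; stale results → check the watcher) would address the review's two suggestions directly.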
Validation
Score: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 13 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| Total | 13 / 16 Passed | |
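The three warnings above could likely be cleared with small additions to the skill's frontmatter. The following is a sketch only: the field names are inferred from the warning messages (`metadata` as a dictionary, a `license` field, a 'Use when...' hint in the description), and the license and version values are illustrative placeholders, not taken from the actual skill.

```yaml
---
name: mgrep-code-search
description: >
  Semantic code search using mgrep for efficient codebase exploration.
  Use when searching or exploring codebases with more than 30
  non-gitignored files and/or nested directory structures.
license: MIT        # assumption: substitute the license that actually applies
metadata:           # validator warns this field must be a dictionary
  version: 1.0.0    # illustrative placeholder
---
```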
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.