Search code repositories for code related to a given code snippet, ranking results by call chain similarity, textual similarity, and functional similarity. Use when finding related code, locating similar implementations, discovering code dependencies, or identifying code that performs similar operations. Outputs ranked file lists with matching code snippets and relevance scores.
Install with Tessl CLI
npx tessl i github:ArabelaTso/Skills-4-SE --skill code-search-assistant
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that excels across all dimensions. It clearly articulates the specific capability (similarity-based code search with multiple ranking criteria), provides explicit trigger conditions via a 'Use when...' clause, and describes the output format. The description uses appropriate third-person voice and includes natural developer terminology.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Search code repositories', 'ranking results by call chain similarity, textual similarity, and functional similarity', and 'Outputs ranked file lists with matching code snippets and relevance scores'. | 3 / 3 |
| Completeness | Clearly answers both what (search repositories, rank by multiple similarity metrics, output ranked lists) AND when (explicit 'Use when...' clause with four specific trigger scenarios). | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'related code', 'similar implementations', 'code dependencies', 'similar operations', 'code snippet'. These are terms developers naturally use when searching for code. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused on code similarity search with specific ranking criteria (call chain, textual, functional similarity). Distinct from general code search or file-finding skills due to the similarity-focused approach. | 3 / 3 |
| **Total** | | **12 / 12 Passed** |
Implementation — 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a comprehensive framework for code similarity search with clear workflow steps and well-defined scoring criteria. However, it leans toward conceptual guidance rather than executable instructions, and the document is verbose with content that could be condensed or split into reference files. The lack of concrete tool invocation syntax (actual Grep commands, file reading patterns) limits immediate actionability.
Suggestions
Add concrete, executable examples of Grep/Glob commands with actual syntax (e.g., `grep -rn 'fetch(' --include='*.js' ./src/`)
Condense the scoring formulas and functional categories into a compact reference table or separate REFERENCE.md file
Replace pseudocode scoring logic with actual implementable patterns or clarify this is a conceptual framework for Claude's reasoning
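To illustrate the first suggestion, concrete executable search syntax might look like the following sketch. The directory and file are hypothetical, created only so the command runs self-contained; in the skill they would be the user's repository paths.

```shell
# Set up a throwaway source tree so the search is reproducible (illustrative only)
mkdir -p /tmp/demo-src
printf 'fetch("/api/users");\n' > /tmp/demo-src/app.js

# Recursive textual search with line numbers, limited to JavaScript files —
# the kind of fully specified command the skill could emit instead of a vague
# "use Grep" instruction
grep -rn 'fetch(' --include='*.js' /tmp/demo-src
```

Giving the agent complete flags (`-r` recursive, `-n` line numbers, `--include` file filter) removes ambiguity about how the textual-similarity pass should actually be run.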
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some unnecessary elaboration, such as detailed explanations of similarity categories and scoring formulas that could be more compact. The example usage section adds value but the overall document could be tightened. | 2 / 3 |
| Actionability | Provides conceptual guidance and example patterns but lacks truly executable code. The Grep/Glob mentions are vague without concrete command syntax, and the scoring formulas are descriptive rather than implementable algorithms Claude can directly execute. | 2 / 3 |
| Workflow Clarity | Clear 7-step workflow with logical sequencing from analysis through ranking to output formatting. Each step has defined substeps, and the process flow is easy to follow with explicit scoring criteria and output format specifications. | 3 / 3 |
| Progressive Disclosure | Content is well-organized with clear sections and headers, but everything is in a single monolithic file. The detailed scoring formulas, functional categories, and extensive examples could be split into reference files for cleaner navigation. | 2 / 3 |
| **Total** | | **9 / 12 Passed** |
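The "implementable algorithm" the Actionability row asks for could be as small as a weighted combination of the three similarity scores. A minimal sketch, where the scores and the 0.4/0.3/0.3 weights are hypothetical assumptions, not values taken from the skill:

```shell
# Hypothetical per-result similarity scores in [0, 1] — illustrative values only
call_chain=0.8
textual=0.6
functional=0.9

# Combine them into one relevance score; the weights are assumptions chosen
# for the sketch, not part of the reviewed skill
overall=$(awk -v c="$call_chain" -v t="$textual" -v f="$functional" \
  'BEGIN { printf "%.2f", 0.4*c + 0.3*t + 0.3*f }')
echo "overall relevance: $overall"
```

Spelling the formula out this way would let an agent execute the ranking step directly instead of interpreting descriptive prose.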
Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.