Finds code by meaning, structure, or text across large codebases — picks the right search strategy (grep, AST query, call graph walk, semantic search) for the question being asked. Use when the user asks where something is implemented, when navigating unfamiliar code, or when a simple grep isn't enough.
Install with the Tessl CLI:

```
npx tessl i github:santosomar/general-secure-coding-agent-skills --skill code-search-assistant
```
There are a handful of kinds of code search. Picking the wrong one wastes time or misses results. The skill is matching the question to the tool.
| Question shape | Tool | Why |
|---|---|---|
| "Where is FooBar defined?" | Text grep (rg -w) | Exact symbol — fast, precise |
| "Where is FooBar used?" | Text grep + filter, or LSP "find references" | Same symbol, many hits |
| "What calls this function, transitively?" | Call graph walk | Grep finds direct calls; you need the tree |
| "Where do we validate email addresses?" | Semantic / fuzzy search | Concept, not symbol — no single keyword |
| "Find all places that cast then dereference" | AST / structural query | Syntactic pattern, not a string |
| "What's the code path from HTTP to DB?" | Dataflow / taint trace | Cross-function, value-following |
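The dispatch in the table above can be sketched as a heuristic. This is a hypothetical sketch — the keyword lists are illustrative guesses, not a tested taxonomy:

```python
import re

def pick_strategy(question: str) -> str:
    """Map a search question to a strategy (heuristic sketch)."""
    q = question.lower()
    # Transitive-caller questions need a call-graph walk.
    if re.search(r"\b(transitively|eventually calls|call (tree|path|graph))\b", q):
        return "call-graph"
    # Value-following questions need dataflow / taint tracing.
    if re.search(r"\b(taint|flows? to|code path from)\b", q):
        return "dataflow"
    # Syntactic-shape questions need AST / structural search.
    if re.search(r"\b(pattern|cast|nested|inside a)\b", q):
        return "ast"
    # A CamelCase or snake_case token suggests an exact symbol: plain grep.
    if re.search(r"\b[A-Za-z][a-z0-9]*[A-Z_]\w*", question):
        return "grep"
    # Concept questions with no obvious symbol fall through to semantic search.
    return "semantic"

print(pick_strategy("Where is FooBar defined?"))            # grep
print(pick_strategy("What eventually calls dangerous_write?"))  # call-graph
print(pick_strategy("Where do we validate email addresses?"))   # semantic
```

A real dispatcher would weigh multiple signals; the point is that the decision is mechanical enough to make deliberately rather than defaulting to grep.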
Grep is fast but dumb. Make it less dumb:
| Trick | Example |
|---|---|
| Word boundaries | rg -w foo — matches foo not foobar |
| File type filter | rg -t py foo — only Python files |
| Definition vs use | rg '^(def \|class )foo' — line-anchored, definitions only |
| Multi-line pattern | rg -U 'if.*\n.*return None' |
| Exclude vendored/generated | rg foo -g '!vendor/' -g '!*.pb.go' |
| Case-insensitive for NL concepts | rg -i 'email.*valid' |
False-positive pruning: comments, strings, tests. rg foo | rg -v test_ | rg -v '^.*#' — crude but works. Or exclude test directories up front with a glob: rg foo -g '!*test*' -g '!tests/'.
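The same pruning pass can be scripted when the shell pipeline gets unwieldy — a minimal sketch, assuming hits in the `path:line:text` shape that rg -n emits:

```python
import re

def prune(hits):
    """Drop hits in test files and full-line comments. Heuristic, not a parser."""
    kept = []
    for hit in hits:
        path, _, text = hit.split(":", 2)
        if "test" in path:              # test files and test directories
            continue
        if re.match(r"\s*#", text):     # full-line comment
            continue
        kept.append(hit)
    return kept

hits = [
    "app/views.py:10:result = foo(x)",
    "app/views.py:12:# foo is deprecated",
    "tests/test_foo.py:3:assert foo(1) == 2",
]
print(prune(hits))  # only the first hit survives
```

String-literal hits still slip through — filtering those reliably needs the AST approach described next section, not a regex.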
Grep finds text. AST search finds structure. You need AST when the pattern is syntactic rather than textual: a cast followed by a dereference, a call with a specific argument shape, a return inside a for inside a try. Tools: semgrep (pattern syntax looks like code with holes), ast-grep, language-specific (Python ast module, clang-query).
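The "return inside a for inside a try" case can be handled with nothing but the stdlib — a sketch using Python's ast module on an inline sample:

```python
import ast

src = """
def f(items):
    try:
        for x in items:
            if x < 0:
                return None
    except ValueError:
        pass
    return items
"""

def find_return_in_for_in_try(tree):
    """Flag `return` nested inside a `for` inside a `try` — a shape grep can't express."""
    hits = []
    for try_node in ast.walk(tree):
        if not isinstance(try_node, ast.Try):
            continue
        for for_node in ast.walk(try_node):
            if not isinstance(for_node, ast.For):
                continue
            for ret in ast.walk(for_node):
                if isinstance(ret, ast.Return):
                    hits.append(ret.lineno)
    return hits

print(find_return_in_for_in_try(ast.parse(src)))  # [6] — the `return None` line
```

Note the final `return items` is not flagged: it sits outside the try, which is exactly the distinction a text search cannot make.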
Example semgrep pattern — find SQL built by concatenation:

```yaml
pattern: |
  $CURSOR.execute($X + $Y)
```

Grep for execute gives you thousands of hits. The pattern gives you the dangerous ones.
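The same check can be approximated without semgrep — a sketch using Python's stdlib ast module to flag `.execute(<a> + <b>)` calls:

```python
import ast

src = """
cursor.execute("SELECT * FROM users WHERE id = " + user_id)
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
"""

def find_concat_execute(tree):
    """Flag .execute() calls whose first argument is built with `+`."""
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and isinstance(node.args[0], ast.BinOp)
                and isinstance(node.args[0].op, ast.Add)):
            hits.append(node.lineno)
    return hits

print(find_concat_execute(ast.parse(src)))  # [2] — only the concatenated query
```

The parameterized call on the next line passes clean, which is the whole point: structure, not string, separates the safe call from the dangerous one.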
"What eventually calls dangerous_write?" Grep finds direct callers. For the full tree: grep for callers of dangerous_write, then for callers of those, repeating until no new functions turn up. LSP "call hierarchy" does this in IDEs. Manually: breadth-first, dedupe visited functions. Output is a tree, not a list.
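The breadth-first walk is a few lines once you have a caller map (built however you like — repeated greps, an LSP index). A sketch with a made-up call graph:

```python
from collections import deque

# Hypothetical caller map: function -> its direct callers.
callers = {
    "dangerous_write": ["save_config", "flush_cache"],
    "save_config": ["handle_request"],
    "flush_cache": ["handle_request", "shutdown"],
    "handle_request": [],
    "shutdown": [],
}

def transitive_callers(fn, callers):
    """Breadth-first walk up the caller map, deduping visited functions."""
    seen, queue = set(), deque([fn])
    while queue:
        cur = queue.popleft()
        for caller in callers.get(cur, []):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

print(sorted(transitive_callers("dangerous_write", callers)))
# ['flush_cache', 'handle_request', 'save_config', 'shutdown']
```

For a report you would keep parent links while walking so the output renders as the tree the section calls for, not a flat set.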
"Where do we handle session expiry?" — no single symbol. The code might say timeout, ttl, expires_at, staleness, max_age. Semantic search embeds code and query, ranks by meaning.
When you don't have a semantic search index, approximate: grep for synonym stems (expir, timeout, ttl, stale, max_age) and for nearby domain words (session, auth, cookie), then skim the files that hit several of them.

Q: "Where in this Django app do we actually write to the orders table?"
Wrong first move: grep orders — 847 hits, mostly templates and tests.
Right sequence:
1. Find the model: rg 'class.*Model.*orders' -t py or rg "db_table.*orders" → Order in models/order.py.
2. Writes go through .save(), .create(), .update(), .delete(), .bulk_create(). But those exist on any model — need to narrow to Order.
3. rg 'Order\.(objects\.)?(create|update|bulk)' -t py + rg 'order\.save\(\)' (instance-level, harder — order could be any variable name).
4. Functions that take an Order and call .save(): rg 'def.*order.*:' -A 20 | rg save.
5. Raw SQL: rg 'INSERT INTO orders|UPDATE orders' -i — catches anyone bypassing the ORM.

Result: 6 write sites. 4 through the ORM (service layer), 1 in a migration, 1 raw SQL in a management command (flagged — why is this bypassing the ORM?).
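The class-level ORM write pattern above is worth sanity-checking in isolation before running it over the tree — a sketch against made-up sample lines, not a real codebase:

```python
import re

orm_write = re.compile(r"Order\.(objects\.)?(create|update|bulk)")

lines = [
    "order = Order.objects.create(user=u)",       # class-level write: hit
    "Order.objects.bulk_create(rows)",            # bulk write: hit
    "order.save()",                               # instance-level: needs its own pass
    "orders = Order.objects.all()",               # read, not a write
]
print([l for l in lines if orm_write.search(l)])  # first two lines only
```

Note the instance-level `order.save()` escapes this regex by design — that is why the sequence runs a separate pass for it rather than stretching one pattern to cover both shapes.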
Blind spots: getattr(obj, 'foo')() won't match rg 'foo\('. Know your language's reflection escape hatches.

Report results in this shape:

## Query
<what was asked>
## Search strategy
<text | AST | call-graph | semantic> — <why this one>
## Searches run
1. <command / pattern> → <N> hits
2. <refinement> → <M> hits
...
## Results (ranked)
| Location | Snippet | Relevance |
| -------- | ------- | --------- |
## Notes
<known blind spots — reflection, generated code, dynamic dispatch>