
librarian

Research and documentation expert - finds answers and examples

32

Quality

13%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security (by Snyk)

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/librarian/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is too vague to be useful for skill selection. It lacks specific actions, domain focus, trigger conditions, and distinguishing characteristics. Almost any user query could potentially match 'research and documentation', making it impossible for Claude to reliably choose this skill over others.

Suggestions

Specify the domain and concrete actions: What kind of research? (e.g., 'Searches API documentation', 'Queries internal knowledge bases', 'Finds code examples in repositories')

Add explicit 'Use when...' clause with natural trigger terms users would say (e.g., 'Use when user asks about library documentation, API references, or needs code examples for specific frameworks')

Narrow the scope to create distinctiveness - is this for technical docs, company wikis, academic research? Define the niche clearly.
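Putting these suggestions together, a sharpened frontmatter description might look something like the sketch below (the wording and domain focus are illustrative choices, not taken from the reviewed skill):

```yaml
# SKILL.md frontmatter -- hypothetical rewrite of the 'librarian' description
name: librarian
description: >
  Searches official library documentation and API references to find
  verified code examples. Use when the user asks about a specific
  framework's API, needs a usage example for a library function, or
  wants authoritative documentation links for a package.
```

The "Use when..." clause gives Claude concrete trigger phrases, and naming the sources (official docs, API references) narrows the niche away from generic information-seeking.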

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague language like 'finds answers and examples' without specifying concrete actions. It doesn't explain what kind of research, what sources, or what documentation formats are involved. | 1 / 3 |
| Completeness | The description only vaguely addresses 'what' (finds answers/examples) and completely lacks any 'when' guidance or explicit trigger conditions for Claude to know when to select this skill. | 1 / 3 |
| Trigger Term Quality | The terms 'research', 'documentation', 'answers', and 'examples' are extremely generic and would match almost any information-seeking query. No specific domain keywords or natural user phrases are included. | 1 / 3 |
| Distinctiveness Conflict Risk | This description is highly generic and would conflict with virtually any skill that involves looking up information, reading docs, or providing examples. It has no clear niche or distinguishing characteristics. | 1 / 3 |

Total: 4 / 12 (Passed)

Implementation

27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is overly verbose, spending most of its tokens on an example response that teaches Python async/await rather than providing actionable research guidance. It describes what a good researcher does conceptually but lacks concrete, executable instructions for how Claude should actually perform research tasks. The content would benefit from dramatic trimming and a tighter focus on specific, actionable research procedures.

Suggestions

Remove the lengthy async/await example - it demonstrates output format but wastes tokens explaining concepts Claude knows. Replace with a brief template showing expected response structure.

Add concrete, actionable research instructions: specific search strategies, how to evaluate source credibility, what to do when sources conflict.

Extract the 'Example Response Format' section to a separate EXAMPLES.md file and reference it briefly.

Remove explanations of what documentation sources are (Stack Overflow, GitHub, etc.) - Claude knows these. Focus on when to prefer each source for specific query types.
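Applied together, these suggestions point toward a much leaner SKILL.md body. A possible shape, with illustrative section names and an assumed companion EXAMPLES.md file, is:

```markdown
## Research workflow
1. Restate the question as a concrete search goal before looking anything up.
2. Prefer official docs for API signatures and version-specific behaviour;
   prefer Stack Overflow and GitHub issues for edge cases and workarounds.
3. If sources conflict, favour the newer official source and note the
   disagreement in the answer.

## Response format
Keep a short inline template here; see [EXAMPLES.md](EXAMPLES.md) for the
full annotated example response.
```

This keeps actionable instructions inline while moving the token-heavy example behind a reference, which also addresses the Progressive Disclosure score below.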

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose, with extensive explanations of concepts Claude already knows (async/await basics, what documentation is, what Stack Overflow is). The lengthy example response format section explains fundamental Python concepts that don't need teaching. | 1 / 3 |
| Actionability | Contains executable code examples for async/await, but these are illustrative examples of output format rather than actionable instructions for how Claude should perform research. The actual research workflow is vague ('Identify Sources', 'Verify Information'). | 2 / 3 |
| Workflow Clarity | Lists research steps, but they're abstract ('Clarify what's being asked', 'Check multiple sources') without concrete validation checkpoints or specific actions. No feedback loops for when research fails or sources conflict. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. The lengthy example response format could be in a separate file. Everything is inline, with no clear navigation structure for different use cases. | 1 / 3 |

Total: 6 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11 (Passed)
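Assuming the warning refers to non-spec keys at the top level of the frontmatter, the usual fix is to nest them under `metadata`, as the check itself suggests. A minimal sketch (the `author` key is hypothetical, standing in for whatever unknown key was flagged):

```yaml
# Before: a non-spec top-level key triggers frontmatter_unknown_keys
#   author: jane
# After: unknown keys moved under metadata
name: librarian
description: Research and documentation expert - finds answers and examples
metadata:
  author: jane
```

Re-running `npx tessl skill review` after the move should clear the warning if `metadata` is the spec-sanctioned home for custom keys.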

Repository
TurnaboutHero/oh-my-antigravity
Reviewed


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.