Access up-to-date, version-specific documentation and code examples from Context7. Use this skill to verify library and framework details.
- Score: 67
- Quality: 53% (Does it follow best practices?)
- Impact: 83%, 1.59× average score across 3 eval scenarios
- Advisory: Suggest reviewing before use
Optimize this skill with Tessl: `npx tessl skill review --optimize ./context7-skill/SKILL.md`

Quality
Discovery
50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description conveys the general purpose of fetching documentation via Context7 but lacks the specificity and explicit trigger guidance needed for reliable skill selection. It would benefit from more concrete actions, natural user trigger terms, and a clear 'Use when...' clause to distinguish it from other documentation or code-reference skills.
Suggestions
Add an explicit 'Use when...' clause with natural trigger scenarios, e.g., 'Use when the user asks about a specific library's API, needs version-specific documentation, or mentions Context7.'
Include more natural trigger terms users would say, such as 'API docs', 'package docs', 'latest version', 'how to use [library]', '.md docs', or specific popular framework names.
List more concrete actions beyond 'access documentation', e.g., 'Retrieves API references, fetches version-specific usage examples, and looks up function signatures from Context7's documentation index.'
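Taken together, these suggestions might produce a description along the following lines. The wording is illustrative only, not the skill's actual text:

```
Access up-to-date, version-specific documentation and code examples from
Context7. Retrieves API references, fetches version-specific usage examples,
and looks up function signatures from Context7's documentation index. Use
when the user asks about a specific library's API, needs version-specific or
package docs, asks 'how to use [library]', or mentions Context7.
```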
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (documentation lookup via Context7) and some actions ('access documentation and code examples', 'verify library and framework details'), but doesn't list multiple concrete actions like searching, comparing versions, or retrieving API signatures. | 2 / 3 |
| Completeness | The 'what' is partially addressed ('access documentation and code examples'), and there's a weak 'when' implied by 'Use this skill to verify library and framework details', but it lacks an explicit 'Use when...' clause with concrete trigger scenarios (e.g., 'Use when the user asks about a specific library API, needs version-specific docs, or references Context7'). | 2 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'documentation', 'code examples', 'library', 'framework', and 'Context7', but misses common user phrases like 'API docs', 'how to use [library]', 'latest version', 'package documentation', or specific framework names. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Context7' adds some distinctiveness, but 'documentation' and 'code examples' are broad enough to overlap with general documentation skills, code search skills, or other reference-lookup tools. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation
57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a reasonably well-structured skill that clearly explains the two access methods and provides a logical workflow. Its main weaknesses are the lack of fully executable examples (no complete end-to-end demonstration) and missing validation checkpoints within the workflow. The progressive disclosure and organization are strong points.
Suggestions
Add a complete end-to-end example showing a resolve call followed by a query call with realistic input and expected output to improve actionability.
Integrate validation checkpoints into the workflow, e.g., 'Verify the resolved library ID matches the expected library before proceeding to Step 3' and explicitly place the retry limit as a checkpoint within the workflow steps.
Trim the Overview paragraph to remove redundant phrasing—the skill description already covers the purpose, so the body should jump more quickly to actionable content.
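The resolve-then-query flow with an inline validation checkpoint, as suggested above, can be sketched as follows. This is a hypothetical illustration: `call_tool` stands in for whatever MCP client wrapper is in use, and the tool names and argument keys are assumptions to be checked against the server's actual tool list.

```python
# Hypothetical sketch of the suggested end-to-end flow: resolve, validate, query.
# Tool names and argument keys below are assumptions, not a confirmed API.

def fetch_docs(call_tool, library: str, topic: str, max_retries: int = 2):
    # Step 1: resolve the human-readable name to a Context7 library ID.
    resolved = call_tool("resolve-library-id", {"libraryName": library})
    library_id = resolved["libraryId"]

    # Validation checkpoint: confirm the resolved ID plausibly matches the
    # requested library before proceeding to the query step.
    if library.lower() not in library_id.lower():
        raise ValueError(f"Resolved ID {library_id!r} does not match {library!r}")

    # Step 2: fetch docs, with the retry limit as an explicit checkpoint
    # inside the workflow rather than a side note.
    for attempt in range(max_retries + 1):
        docs = call_tool("get-library-docs",
                        {"context7CompatibleLibraryID": library_id, "topic": topic})
        if docs:
            return docs
    raise RuntimeError(f"No docs for {library_id} after {max_retries + 1} attempts")
```

The key design point is that the checkpoint sits between the two calls, so a bad resolution fails fast instead of spending a query on the wrong library.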
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient but includes some unnecessary explanation (e.g., the Overview paragraph restates what the skill description already conveys, and phrases like 'ensuring your code is based on the correct version' are filler). Some tightening is possible but it's not egregiously verbose. | 2 / 3 |
| Actionability | Provides concrete MCP call signatures and CLI commands, but lacks fully executable, copy-paste-ready examples. The MCP calls show function signatures without complete usage context, and the CLI commands lack example output or a full end-to-end worked example showing input and expected response. | 2 / 3 |
| Workflow Clarity | The three-step workflow is clearly sequenced with good selection criteria and handling guidance. However, there are no explicit validation checkpoints or feedback loops, e.g., no step to verify the resolved library ID is correct before querying, and the retry limit guidance is mentioned outside the workflow rather than integrated as a checkpoint. | 2 / 3 |
| Progressive Disclosure | Content is well-structured with clear sections, a configuration table, and references to external files (scripts/context7_cli.py, references/troubleshooting.md) that are one level deep and clearly signaled. The skill serves as an effective overview without being monolithic. | 3 / 3 |
| Total | | 9 / 12 (Passed) |
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.