Access up-to-date, version-specific documentation and code examples from Context7. Use this skill to verify library and framework details.
Overall score: 58%
Impact: Pending (no eval scenarios have been run)
Advisory: Suggest reviewing before use
Optimize this skill with Tessl: `npx tessl skill review --optimize ./context7-skill/SKILL.md`

Quality

Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is functional and covers both what the skill does and when to use it, which is good for completeness. However, it lacks specificity in the concrete actions it performs and could benefit from more natural trigger terms that users would actually say. The mention of Context7 helps distinguish it, but the overall language is somewhat generic for a documentation retrieval skill.
Suggestions
Add more specific concrete actions, e.g., 'Fetches API references, retrieves function signatures, looks up configuration options, and finds usage examples from Context7.'
Expand trigger terms with natural user language variations, e.g., 'Use when the user asks about docs, API usage, how a library works, package documentation, or needs to check the latest API for a specific version.'
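Applied together, the two suggestions might produce frontmatter along these lines. This is an illustrative sketch, not the skill's actual metadata; the `name` field and exact wording are assumptions:

```yaml
---
name: context7-docs  # hypothetical name, not from the skill under review
description: >
  Access up-to-date, version-specific documentation and code examples
  from Context7. Fetches API references, retrieves function signatures,
  looks up configuration options, and finds usage examples. Use when the
  user asks about docs, API usage, how a library works, package
  documentation, or needs to check the latest API for a specific version.
---
```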
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (documentation lookup via Context7) and some actions ('access documentation and code examples', 'verify library and framework details'), but doesn't list multiple concrete actions like searching, comparing versions, or retrieving API signatures. | 2 / 3 |
| Completeness | Clearly answers both 'what' (access up-to-date, version-specific documentation and code examples from Context7) and 'when' ('Use this skill to verify library and framework details'), providing an explicit trigger clause. | 3 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'documentation', 'code examples', 'library', 'framework', and 'Context7', but misses common user-facing terms like 'docs', 'API reference', 'how to use [library]', 'latest version', or specific framework names that users would naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Context7' as a specific tool adds some distinctiveness, but 'documentation' and 'code examples' are broad enough to potentially overlap with other documentation-related skills or general coding assistance skills. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides a reasonable structure for integrating with Context7's documentation API, with clear tool descriptions and a logical workflow. However, it falls short on actionability due to lack of concrete, executable examples with real values and expected outputs. The content could be more concise by trimming explanatory text and focusing on the delta knowledge Claude actually needs.
Suggestions
Add a concrete, end-to-end example showing a real library resolution and documentation query with actual input values and expected output format.
Integrate validation checkpoints into the workflow—e.g., 'Verify the resolved library ID matches the expected package before proceeding to query-docs.'
Trim the overview paragraph and security requirement section to remove explanations Claude doesn't need (e.g., what Context7 does is already in the description; Claude knows not to leak API keys with a shorter instruction).
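One way to make the checkpoint and retry suggestions concrete is a small wrapper that validates the resolved library ID before querying and stops after the skill's stated limit of 3 attempts. Note that `resolve_library_id` and `query_docs` here are hypothetical stand-ins for the Context7 tools the skill describes; the signatures and the `expected_prefix` parameter are illustrative assumptions, not Context7's actual API:

```python
def resolve_with_checkpoint(resolve_library_id, query_docs,
                            library_name, expected_prefix, topic,
                            max_retries=3):
    """Resolve a library ID, validate it, then query docs.

    resolve_library_id and query_docs are injected callables standing in
    for the Context7 tools; signatures here are illustrative only.
    """
    for _ in range(max_retries):
        library_id = resolve_library_id(libraryName=library_name)
        # Checkpoint: the resolved ID must match the expected package
        # before proceeding to the docs query.
        if library_id and library_id.startswith(expected_prefix):
            return query_docs(library_id=library_id, topic=topic)
    # Retry limit reached: fall back to asking the user, as the skill
    # already instructs, instead of querying with a dubious ID.
    raise LookupError(
        f"Could not confidently resolve {library_name!r}; ask the user"
    )
```

Folding the checkpoint into the workflow step (rather than mentioning the retry limit separately) is what turns "ask the user" from an afterthought into a defined failure path.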
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The overview section explains what Context7 does in a way that's somewhat redundant given the skill description. Phrases like 'directly from the source' and 'ensuring your code is based on the correct version' are padding. The structure is reasonable but could be tightened—e.g., the security requirement section over-explains, and the selection criteria in Step 2 are somewhat verbose. | 2 / 3 |
| Actionability | The skill provides MCP call signatures and CLI commands, which is helpful, but the examples are incomplete—there are no concrete, copy-paste-ready examples showing actual queries with real library names and expected outputs. The MCP call syntax shown (e.g., `resolve_library_id(query="...", libraryName="...")`) uses placeholder ellipses rather than concrete values. | 2 / 3 |
| Workflow Clarity | The three-step workflow (check availability → resolve → query) is clearly sequenced, but lacks explicit validation checkpoints. There's no feedback loop for what to do if resolution returns unexpected results beyond 'ask the user,' and the retry limit of 3 is mentioned but not integrated into the workflow steps as a checkpoint. | 2 / 3 |
| Progressive Disclosure | The skill references `scripts/context7_cli.py` and `references/troubleshooting.md`, which is good progressive disclosure structure. However, no bundle files were provided to verify these exist, and the main SKILL.md includes inline content (like selection criteria details) that could arguably be in a reference file. The references are one level deep and clearly signaled, which is positive. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.