
product-self-knowledge

Stop and consult this skill whenever your response would include specific facts about Anthropic's products. Covers: Claude Code (how to install, Node.js requirements, platform/OS support, MCP server integration, configuration), Claude API (function calling/tool use, batch processing, SDK usage, rate limits, pricing, models, streaming), and Claude.ai (Pro vs Team vs Enterprise plans, feature limits). Trigger this even for coding tasks that use the Anthropic SDK, content creation mentioning Claude capabilities or pricing, or LLM provider comparisons. Any time you would otherwise rely on memory for Anthropic product details, verify here instead — your training data may be outdated or wrong.

Overall score: 89

Quality: 86% (Does it follow best practices?)

Impact: No eval scenarios have been run

Security (by Snyk): Passed, no known issues


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that thoroughly covers what the skill does (Anthropic product knowledge across three product lines with detailed subtopics) and when to use it (any time Anthropic product facts would be cited, including edge cases like SDK coding tasks and LLM comparisons). The trigger terms are natural and comprehensive, and the scope is distinct enough to avoid conflicts with other skills. The final sentence about outdated training data adds useful context for why the skill should be consulted.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions and knowledge areas: Claude Code installation, Node.js requirements, platform/OS support, MCP server integration, API function calling/tool use, batch processing, SDK usage, rate limits, pricing, models, streaming, and plan comparisons (Pro vs Team vs Enterprise). | 3 / 3 |
| Completeness | Clearly answers both 'what' (covers Anthropic product facts across Claude Code, Claude API, and Claude.ai with detailed subtopics) and 'when' (explicit triggers: 'whenever your response would include specific facts about Anthropic's products', 'even for coding tasks that use the Anthropic SDK', 'content creation mentioning Claude capabilities or pricing', 'LLM provider comparisons'). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of trigger terms users would naturally say: 'Anthropic SDK', 'Claude Code', 'Claude API', 'pricing', 'rate limits', 'install', 'function calling', 'tool use', 'batch processing', 'streaming', 'Pro vs Team vs Enterprise', 'LLM provider comparisons'. | 3 / 3 |
| Distinctiveness / Conflict Risk | Occupies a clear niche as an Anthropic product knowledge reference. The scope is well bounded to Anthropic-specific product facts, and the description explicitly distinguishes when to trigger (factual claims about Anthropic products) from general coding or content tasks, making conflicts with other skills unlikely. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured routing/reference skill that efficiently directs Claude to the right documentation sources for Anthropic product questions. Its main strength is conciseness and clear organization. Its weakness is that it is essentially a link directory with a lightweight workflow: it lacks concrete examples of how to handle specific query types, and it includes no verification steps for when documentation is unavailable or contradictory.

Suggestions

- Add 1-2 concrete examples showing the full workflow, e.g. 'User asks about Claude Code Node.js requirements → check Claude Code docs map → navigate to installation page → respond with the specific version requirement and a source link'.
- Add a validation/fallback step for when documentation URLs are unreachable or information appears outdated, such as checking the product news page or explicitly flagging uncertainty to the user.
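The second suggestion could be sketched roughly as follows. This is a minimal illustration only, not part of the skill itself: the `fetch_doc` helper, the fallback news URL, and the response strings are all hypothetical assumptions about how such a step might look.

```python
from urllib.error import URLError

# Assumed fallback page; not taken from the skill's actual link list.
DOCS_NEWS_FALLBACK = "https://docs.anthropic.com/news"

def fetch_doc(url: str) -> str:
    """Placeholder for fetching a documentation page; a real implementation
    would make an HTTP request. Here it simulates the unreachable-URL case."""
    raise URLError("docs unreachable")

def answer_with_fallback(question: str, doc_url: str) -> str:
    """Answer from docs when reachable; otherwise flag uncertainty
    instead of silently answering from (possibly stale) memory."""
    try:
        page = fetch_doc(doc_url)
        return f"Answer to {question!r} based on {doc_url} (source cited)."
    except URLError:
        return (f"Could not reach {doc_url}; check {DOCS_NEWS_FALLBACK} "
                f"or treat any answer to {question!r} as unverified.")
```

The key design point matches the review's critique: the failure branch does not fall back to memory but explicitly surfaces the uncertainty to the user.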

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is lean and efficient. It doesn't explain what Claude API or Claude Code are; it assumes Claude already knows. Every section serves a clear purpose: routing, workflow, and quick reference links. No wasted tokens. | 3 / 3 |
| Actionability | The skill provides concrete URLs and a clear routing decision tree, but the actual actionable guidance is essentially 'go check these docs'. There are no concrete examples of how to answer specific product questions, no example lookups, and no demonstration of what a good response looks like after consulting docs. | 2 / 3 |
| Workflow Clarity | The 5-step response workflow is clearly sequenced and logical, but lacks validation checkpoints. There is no guidance on what to do if docs are unreachable, if information conflicts between sources, or how to verify that retrieved information is current. The 'if uncertain' step is a fallback but not a true validation loop. | 2 / 3 |
| Progressive Disclosure | For a skill with no bundle files, the content is well organized into clear sections (Core Principles, Question Routing, Response Workflow, Quick Reference) with appropriate use of headers. References are one level deep, pointing directly to external documentation URLs. The structure is easy to scan and navigate. | 3 / 3 |
| Total | | 10 / 12 |

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: douglasvought/wiggle-skills (Reviewed)
