Develop examples for AI SDK functions. Use when creating, running, or modifying examples under examples/ai-functions/src to validate provider support, demonstrate features, or create test fixtures.
Install with Tessl CLI
```shell
npx tessl i github:vercel/ai --skill develop-ai-functions-example77
```
Does it follow best practices?
If you maintain this skill, you can automatically optimize it with the Tessl CLI to improve its score:

```shell
npx tessl skill review --optimize ./path/to/skill
```
Discovery — 75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is functional, with a clear 'Use when' clause that explicitly states triggers and purposes. Its main weakness is only moderate specificity: the concrete actions and trigger terms lean toward technical paths rather than natural user language. The narrow scope to a specific directory path makes the skill distinctive, but it may miss users who describe their needs differently.
Suggestions

- Add more natural trigger terms users might say, such as 'AI SDK demo', 'function example code', or 'SDK sample'
- List more specific concrete actions like 'create new example files', 'run validation tests', 'update existing demos' to improve specificity
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (AI SDK functions, examples) and some actions (creating, running, modifying), but doesn't list comprehensive concrete actions like 'validate provider support, demonstrate features, create test fixtures' as distinct capabilities; these read more as purposes than specific actions. | 2 / 3 |
| Completeness | Clearly answers both what ('Develop examples for AI SDK functions') and when ('Use when creating, running, or modifying examples under examples/ai-functions/src to validate provider support, demonstrate features, or create test fixtures') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes relevant terms like 'examples', 'ai-functions', 'provider support', 'test fixtures', but the path 'examples/ai-functions/src' is very specific, and users might use more natural variations like 'AI SDK example', 'function examples', or 'demo code' that aren't covered. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific path 'examples/ai-functions/src' and focus on 'AI SDK functions' create a clear niche that is unlikely to conflict with other skills; it's narrowly scoped to a particular directory and purpose. | 3 / 3 |
| Total | | 10 / 12 — Passed |
Implementation — 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides strong actionable guidance with excellent code templates and clear organization. The main weaknesses are some verbosity in explanations and missing validation workflows for ensuring examples work correctly before use. The file naming conventions and utility documentation are particularly well done.
Suggestions

- Add a validation workflow: after creating an example, run it and verify the expected output before considering it complete
- Condense the utility helpers section; Claude can infer usage from function names and brief descriptions
- Add a troubleshooting section for common errors (missing env vars, wrong model IDs) with quick fixes
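The troubleshooting suggestion above could be sketched as a small preflight check. This is an illustrative helper only, not part of the skill; the function name and the required variable are assumptions:

```typescript
// Hypothetical preflight helper: report which required environment
// variables are missing before an example calls a provider.
function missingEnvVars(
  required: string[],
  env: Record<string, string | undefined> = process.env,
): string[] {
  return required.filter((name) => !env[name]);
}

// Usage sketch: warn early instead of failing mid-request.
const missing = missingEnvVars(['OPENAI_API_KEY']);
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(', ')}`);
}
```

Failing fast like this turns a confusing provider error into an actionable one-line message.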
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is generally efficient but includes some redundancy. The extensive tables and multiple templates could be condensed, and some explanations (like what the run() wrapper does) are somewhat verbose for Claude's capabilities. | 2 / 3 |
| Actionability | Excellent actionability, with fully executable code templates for basic, streaming, tool-calling, and structured-output patterns. Commands for running examples are copy-paste ready, and all code examples are complete and executable. | 3 / 3 |
| Workflow Clarity | The 'When to Write Examples' section provides good guidance on scenarios but lacks explicit validation steps. There's no workflow for verifying examples work correctly before committing, and no feedback loop for debugging failed examples. | 2 / 3 |
| Progressive Disclosure | Well organized, with clear sections, tables for quick reference, and a logical progression from overview to templates to utilities. Content is appropriately structured for a single file without needing external references for this scope. | 3 / 3 |
| Total | | 10 / 12 — Passed |
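The run() wrapper mentioned in the Conciseness row is, per the review, a small execution helper. A minimal illustrative sketch (not the skill's actual utility) might look like:

```typescript
// Hypothetical sketch of a run() wrapper: execute an async example body,
// report any failure, and return an exit code for the caller.
async function run(example: () => Promise<void>): Promise<number> {
  try {
    await example();
    return 0;
  } catch (err) {
    console.error('example failed:', err);
    return 1;
  }
}
```

A one-line wrapper like this keeps each example file focused on the SDK call itself while still surfacing failures with a non-zero exit code.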
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

13 / 16 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | `metadata.version` is missing | Warning |
| metadata_field | `metadata` should map string keys to string values | Warning |
| license_field | `license` field is missing | Warning |
| Total | 13 / 16 Passed | |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.