
develop-ai-functions-example

Develop examples for AI SDK functions. Use when creating, running, or modifying examples under examples/ai-functions/src to validate provider support, demonstrate features, or create test fixtures.
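For context, an example under examples/ai-functions/src typically exercises a single SDK call end to end. The sketch below shows the general shape of such a script; a real one would `import { generateText } from 'ai'` and pass a provider model, but here `generateText` is stubbed so the structure is visible and runnable without the `ai` package or an API key. The file name, model id, and prompt are illustrative, not taken from the repository.

```typescript
// Sketch of a minimal example script (hypothetical name: generate-text-basic.ts).
// The real script would import { generateText } from 'ai'; this stub stands in
// for the provider call so the overall structure runs standalone.
type GenerateTextResult = { text: string };

async function generateText(opts: {
  model: string;
  prompt: string;
}): Promise<GenerateTextResult> {
  // Stub standing in for the real provider round-trip.
  return { text: `[stub reply to: ${opts.prompt}]` };
}

async function main(): Promise<void> {
  const { text } = await generateText({
    model: 'openai:gpt-4o', // illustrative model id
    prompt: 'Write a haiku about type safety.',
  });
  console.log(text);
}

main().catch((err) => {
  console.error(err);
  process.exitCode = 1;
});
```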

Score: 79 (1.40x)

Quality: 70% (Does it follow best practices?)
Impact: 100% (1.40x average score across 3 eval scenarios)

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/develop-ai-functions-example/SKILL.md

Quality

Discovery: 75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a reasonably well-constructed description with a clear 'Use when' clause and a distinct scope tied to a specific directory path. Its main weaknesses are moderate specificity in the concrete actions described and somewhat project-specific trigger terms that may not match natural user language. The path-based scoping provides excellent distinctiveness.

Suggestions

Add more specific concrete actions, e.g., 'write example scripts, configure provider settings, generate test fixture files' to improve specificity.

Include more natural trigger term variations users might say, such as 'sample code', 'demo functions', 'SDK examples', or 'function examples'.

Dimension scores

Specificity: 2 / 3
Names the domain (AI SDK function examples) and some actions (creating, running, modifying examples), but doesn't list specific concrete actions like 'validate provider support, demonstrate features, create test fixtures' in a way that clarifies what those entail technically.

Completeness: 3 / 3
Clearly answers both 'what' (develop examples for AI SDK functions) and 'when' (use when creating, running, or modifying examples under examples/ai-functions/src to validate provider support, demonstrate features, or create test fixtures) with an explicit 'Use when' clause and specific triggers.

Trigger Term Quality: 2 / 3
Includes relevant terms like 'examples', 'ai-functions', 'provider support', and 'test fixtures', but these are somewhat project-specific jargon. Missing more natural user terms like 'sample code', 'demo', or 'AI SDK example' variations that users might naturally say.

Distinctiveness / Conflict Risk: 3 / 3
The specific path 'examples/ai-functions/src' and the focused domain of AI SDK function examples create a clear niche that is unlikely to conflict with other skills. The scope is well-bounded.

Total: 10 / 12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides strong actionable guidance with excellent executable templates and clear naming conventions for AI SDK examples. Its main weaknesses are the lack of an explicit creation workflow with validation steps, and some verbosity in listing best practices and directory contents that could be more concise. The content would benefit from a clear step-by-step workflow for creating and verifying new examples.

Suggestions

Add an explicit step-by-step workflow for creating a new example: create file → use template → run with pnpm tsx → verify output → commit, with a validation checkpoint after running.

Trim the 'Best Practices' section — items like 'keep examples focused' and 'handle errors gracefully' are things Claude already knows and don't add value.

Consider moving the full directory listing table and utility helpers reference into a separate REFERENCE.md file, keeping only the most commonly used categories inline.
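The suggested create-and-verify workflow could be sketched as a short shell sequence. The file name, placeholder content, and commit message below are hypothetical; the run and commit steps are shown commented out because they assume the repository's pnpm + tsx setup mentioned in the review.

```shell
# Hypothetical new-example workflow (file and template names are illustrative).
EXAMPLE=examples/ai-functions/src/generate-text-basic.ts

# 1. Create the file, starting from one of the skill's templates.
mkdir -p "$(dirname "$EXAMPLE")"
printf '%s\n' '// TODO: fill in from a template' > "$EXAMPLE"

# 2. Run it and inspect the output (assumes pnpm + tsx, as in the review).
# pnpm tsx "$EXAMPLE"

# 3. Commit only once the output looks correct.
# git add "$EXAMPLE" && git commit -m "examples(ai-functions): add generate-text-basic"
```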

Dimension scores

Conciseness: 2 / 3
The content is reasonably well-organized but includes some unnecessary verbosity. The extensive tables listing every directory and utility file, plus best practices that Claude already knows (like 'keep examples focused' and 'handle errors gracefully'), add tokens without proportional value. The four separate templates are useful but could be more compact.

Actionability: 3 / 3
The skill provides fully executable, copy-paste-ready TypeScript templates for multiple use cases (basic, streaming, tool calling, structured output), concrete run commands, and specific file naming conventions. The templates are complete and immediately usable.

Workflow Clarity: 2 / 3
The 'When to Write Examples' section provides guidance on when to create examples, and the templates show how to structure them, but there's no explicit workflow for creating a new example (e.g., create file → write code → run → verify output). There are no validation checkpoints or feedback loops for verifying that an example works correctly.

Progressive Disclosure: 2 / 3
The content is well-structured with clear sections and tables, but it's somewhat monolithic: all content is inline in one file. The reference to the 'capture-api-response-test-fixture' skill is a good cross-reference, but the utility helpers section and reusable tools section could potentially be separate reference files given the overall length.

Total: 9 / 12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed

Validation for skill structure

metadata_version: 'metadata.version' is missing (Warning)
metadata_field: 'metadata' should map string keys to string values (Warning)

Total: 9 / 11 (Passed)
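Both warnings point at the frontmatter of SKILL.md, so a fix would likely look like the fragment below. This is a hedged sketch: the name and description come from this page, but the version value is illustrative, since the actual frontmatter is not shown here.

```yaml
---
name: develop-ai-functions-example
description: Develop examples for AI SDK functions. Use when creating, running, or modifying examples under examples/ai-functions/src to validate provider support, demonstrate features, or create test fixtures.
metadata:
  version: "1.0.0" # quote values so every metadata key maps to a string
---
```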

Repository: vercel/ai (Reviewed)
