
develop-ai-functions-example

Develop examples for AI SDK functions. Use when creating, running, or modifying examples under examples/ai-functions/src to validate provider support, demonstrate features, or create test fixtures.

Overall score: 79 (1.40x)

Quality: 70% (Does it follow best practices?)
Impact: 100% (1.40x), average score across 3 eval scenarios

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/develop-ai-functions-example/SKILL.md

Quality

Discovery

75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a reasonably well-constructed description with a clear 'Use when' clause and a specific file path that aids distinctiveness. Its main weakness is that the actions and purposes described (validate provider support, demonstrate features, create test fixtures) are somewhat abstract rather than listing concrete operations. The trigger terms are project-specific, which helps distinctiveness but may not cover all natural user phrasings.

Suggestions

Add more concrete actions such as 'write function call examples, configure provider settings, generate sample API responses' to improve specificity.

Include additional natural trigger terms users might say, such as 'function calling', 'tool use examples', 'SDK demo', or 'provider testing'.

Dimension scores:

Specificity (2 / 3): Names the domain (AI SDK functions examples) and some actions (creating, running, modifying examples), but doesn't list concrete actions such as 'validate provider support, demonstrate features, create test fixtures' in enough detail; these are somewhat vague purposes rather than concrete operations.

Completeness (3 / 3): Clearly answers both 'what' (develop examples for AI SDK functions) and 'when' (creating, running, or modifying examples under examples/ai-functions/src to validate provider support, demonstrate features, or create test fixtures). Has an explicit 'Use when...' clause with specific triggers.

Trigger Term Quality (2 / 3): Includes relevant terms like 'AI SDK functions', 'examples', 'ai-functions', 'provider support', and 'test fixtures', but these are fairly project-specific and may miss natural user phrasings. The path 'examples/ai-functions/src' is a good specific trigger, but overall coverage of natural-language variations is limited.

Distinctiveness / Conflict Risk (3 / 3): The specific path 'examples/ai-functions/src' and the narrow domain of 'AI SDK functions' examples make this clearly distinguishable from other skills. Unlikely to conflict with general coding or testing skills.

Total: 10 / 12 (Passed)

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides strong actionable guidance with concrete, executable templates and clear organizational conventions for AI SDK examples. Its main weaknesses are moderate verbosity from repetitive template patterns that could be condensed, and the lack of an explicit end-to-end workflow with validation steps for creating new examples. The content would benefit from being split across files given its length.

Suggestions

Add an explicit step-by-step workflow for creating a new example: create file with naming convention → choose template → customize → run and verify output → commit.

Consider extracting the four detailed templates into a separate TEMPLATES.md file and referencing it from the main skill, keeping only the basic template inline.

Remove obvious best practices that Claude already knows (e.g., 'Add comments for complex logic', 'Handle errors gracefully') to improve conciseness.

Dimension scores:

Conciseness (2 / 3): The content is reasonably well organized but includes some redundancy. The four separate templates (basic, streaming, tool calling, structured output) are quite similar and could be condensed. The 'Best Practices' section contains some obvious advice (e.g., 'Handle errors gracefully: The run() wrapper handles this automatically'). However, the tables and directory listings are efficient.

Actionability (3 / 3): The skill provides fully executable, copy-paste-ready TypeScript templates for multiple use cases (basic, streaming, tool calling, structured output). Running commands are concrete (`pnpm tsx src/generate-text/openai.ts`), file naming conventions are specific with examples, and utility usage is demonstrated with real code.

Workflow Clarity (2 / 3): The 'When to Write Examples' section provides guidance on triggers but lacks a clear step-by-step workflow for creating a new example (e.g., create file → follow naming convention → use template → run → verify output). There are no validation checkpoints or verification steps to confirm an example works correctly before considering it done.

Progressive Disclosure (2 / 3): The content is well structured with clear sections and tables, but it's quite long (~180 lines) and could benefit from splitting detailed templates and utility references into separate files. The reference to the 'capture-api-response-test-fixture' skill is a good cross-reference, but no bundle files exist to offload the detailed template content.

Total: 9 / 12 (Passed)
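For context, a minimal sketch of what one of the basic generate-text templates might look like, using only the public AI SDK API. The repo's actual templates wrap execution in a repo-specific run() helper that is not reproduced here, and the model id is illustrative:

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

async function main() {
  // Basic single-provider text generation; model id is illustrative.
  const { text, usage } = await generateText({
    model: openai('gpt-4o'),
    prompt: 'Write a haiku about type safety.',
  });

  console.log(text);
  console.log('Token usage:', usage);
}

main().catch(console.error);
```

Following the naming convention cited in the review, an example like this would live at src/generate-text/openai.ts and be run with `pnpm tsx src/generate-text/openai.ts`.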

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 checks passed

Validation for skill structure

Validation results (criteria / description / result):

metadata_version: 'metadata.version' is missing (Warning)
metadata_field: 'metadata' should map string keys to string values (Warning)

Total: 9 / 11 (Passed)
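Both warnings point at the SKILL.md frontmatter. A sketch of a fix, assuming the spec expects a string-to-string metadata map (the version value is illustrative):

```yaml
metadata:
  version: "1.0.0"
```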

Repository: vercel/ai (Reviewed)
