
capture-api-response-test-fixture

Capture API response test fixture.

62

Quality: 43% (Does it follow best practices?)

Impact: 2.02x (99%), average score across 3 eval scenarios

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/capture-api-response-test-fixture/SKILL.md

Quality

Discovery

22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is extremely terse, providing minimal information about what the skill does and no guidance on when to use it. While 'API response test fixture' hints at a specific niche, the lack of concrete actions, trigger terms, and a 'Use when...' clause makes it insufficient for reliable skill selection among many options.

Suggestions

Add a 'Use when...' clause specifying trigger scenarios, e.g., 'Use when the user wants to save API responses as test fixtures, create mock data, or record HTTP responses for testing.'

List specific concrete actions the skill performs, such as 'Records HTTP responses, saves them as JSON/YAML fixture files, and organizes fixtures by endpoint.'

Include natural keyword variations users might say, such as 'mock data', 'stub responses', 'snapshot', 'recorded responses', or 'test data capture'.

Dimension / Reasoning / Score

Specificity

The description names a vague action ('Capture') and a domain ('API response test fixture') but does not list any concrete actions like saving responses, mocking endpoints, generating fixture files, or specifying formats.

1 / 3

Completeness

The description only weakly addresses 'what' (capture a fixture) and completely lacks any 'when' clause or explicit trigger guidance; per the rubric, a missing 'when' caps completeness at 2, and the thin 'what' keeps it below even that.

1 / 3

Trigger Term Quality

It includes some relevant keywords like 'API response', 'test fixture' that a user might say, but misses common variations such as 'mock data', 'stub', 'snapshot', 'recorded response', or file extensions.

2 / 3

Distinctiveness Conflict Risk

The phrase 'API response test fixture' is somewhat specific to a testing niche, but without more detail it could overlap with general API testing skills, mock generation skills, or snapshot testing skills.

2 / 3

Total: 6 / 12

Passed

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides good actionable code examples for capturing API response test fixtures in two scenarios (generateText and streamText). Its main weaknesses are a lack of explicit validation steps in the workflows (e.g., verifying fixture correctness before use) and some verbosity in the introductory text that could be tightened. The structure is reasonable but could benefit from more explicit step-by-step sequencing.
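In outline, the capture flow described above can be sketched as wrapping the provider call and persisting its raw resolved value. The wrapper below is illustrative only, not the skill's actual code; `captureFixture` and its signature are assumptions:

```typescript
import { writeFileSync } from 'node:fs';

// Illustrative sketch: invoke any async producer (for example, a
// generateText call) and persist its raw resolved value as a
// pretty-printed JSON fixture file.
async function captureFixture<T>(
  produce: () => Promise<T>,
  outPath: string,
): Promise<T> {
  const response = await produce();
  writeFileSync(outPath, JSON.stringify(response, null, 2));
  return response;
}
```

A real run would pass the actual provider call as `produce` and then copy the resulting file into the fixtures folder.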

Suggestions

Add explicit numbered workflow steps for each fixture type, including a validation checkpoint (e.g., 'Verify the JSON fixture is valid and contains expected fields before committing').

Tighten the introductory paragraph: remove hedging language like 'we aim at' and 'unless they are too large in which case some cutting that does not change semantics is advised' in favor of a concise rule like 'Store raw provider responses. Trim large responses without changing semantics.'
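The suggested validation checkpoint could be sketched like this (the function name and the required fields are hypothetical, not taken from the skill):

```typescript
import { readFileSync } from 'node:fs';

// Hypothetical checkpoint: confirm a captured fixture parses as JSON and
// contains the fields a test will rely on, returning any problems found.
function validateFixture(path: string, requiredFields: string[]): string[] {
  let raw: string;
  try {
    raw = readFileSync(path, 'utf8');
  } catch {
    return [`${path} could not be read`];
  }
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return [`${path} is not valid JSON`];
  }
  if (typeof parsed !== 'object' || parsed === null) {
    return [`${path} does not contain a JSON object`];
  }
  return requiredFields
    .filter((field) => !(field in (parsed as Record<string, unknown>)))
    .map((field) => `${path} is missing field "${field}"`);
}
```

An empty return value means the fixture is safe to commit; a non-empty one lists what to fix before copying it into the fixtures folder.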

Dimension / Reasoning / Score

Conciseness

Mostly efficient but has some unnecessary phrasing like 'we aim at storing test fixtures with the true responses from the providers (unless they are too large in which case some cutting that does not change semantics is advised)' which could be tightened. The explanatory text around the code examples adds some bulk but is mostly useful.

2 / 3

Actionability

Provides fully executable, copy-paste ready TypeScript code examples for both generateText and streamText scenarios. Includes specific commands (pnpm tsx), specific file paths, and concrete helper functions like saveRawChunks.
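The skill's actual `saveRawChunks` implementation is not reproduced here, but a helper of that shape might look like the following sketch (signature and behavior are assumptions):

```typescript
import { writeFileSync } from 'node:fs';

// Assumed shape of a saveRawChunks-style helper: drain a stream of raw
// provider chunks and write them as a JSON array fixture, returning the
// number of chunks captured.
async function saveRawChunks(
  chunks: AsyncIterable<unknown>,
  outPath: string,
): Promise<number> {
  const collected: unknown[] = [];
  for await (const chunk of chunks) {
    collected.push(chunk);
  }
  writeFileSync(outPath, JSON.stringify(collected, null, 2));
  return collected.length;
}
```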

3 / 3

Workflow Clarity

The two workflows (generateText and streamText) are described with implicit steps but lack explicit numbered sequences and validation checkpoints. For example, there's no verification step to confirm the fixture was captured correctly or that the output file is valid JSON before copying it to the fixtures folder.

2 / 3

Progressive Disclosure

References specific paths like `packages/openai/src/responses/__fixtures__` and test files for naming conventions, which is helpful. However, no bundle files are provided to verify these references, and the content could benefit from clearer navigation signals separating the overview from the two fixture types.

2 / 3

Total: 9 / 12

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria / Description / Result

metadata_version

'metadata.version' is missing

Warning

metadata_field

'metadata' should map string keys to string values

Warning

Total: 9 / 11

Passed

Repository: vercel/ai (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.