Generate client SDKs in multiple languages from OpenAPI specifications. Use when generating client libraries for API consumption. Trigger with phrases like "generate SDK", "create client library", or "build API SDK".
Score: 71

Quality: 66% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
No known issues (Passed)
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/api-development/api-sdk-generator/skills/generating-api-sdks/SKILL.md`

Quality
Discovery
89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description with clear 'what' and 'when' clauses, strong trigger terms, and a distinctive niche. Its main weakness is that the capability description is somewhat thin — it could enumerate more specific actions beyond just 'generate' (e.g., language-specific outputs, model generation, authentication handling). Overall it would perform well in skill selection scenarios.
Suggestions
Expand the specificity of capabilities by listing concrete actions such as 'generate typed models, create API endpoint methods, produce authentication helpers, support TypeScript/Python/Go/Java output'.
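Concretely, the expanded description could live in the skill's frontmatter along these lines (a sketch only: apart from the description quoted at the top of this review, the skill's actual frontmatter keys and name are assumptions):

```yaml
# Hypothetical expanded frontmatter; the skill name is assumed, not
# taken from the reviewed file.
name: generating-api-sdks
description: >
  Generate client SDKs from OpenAPI specifications: produce typed models,
  API endpoint methods, authentication helpers, and request/response
  handlers, with TypeScript, Python, Go, and Java output. Use when
  generating client libraries for API consumption. Trigger with phrases
  like "generate SDK", "create client library", or "build API SDK".
```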
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (client SDKs from OpenAPI specs) and the core action (generate), but doesn't list multiple specific concrete actions like 'generate typed models, create API methods, produce authentication helpers, build request/response handlers'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (generate client SDKs in multiple languages from OpenAPI specifications) and 'when' (explicit 'Use when' clause and 'Trigger with phrases' providing concrete trigger guidance). | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms: 'generate SDK', 'create client library', 'build API SDK', 'OpenAPI specifications', 'client libraries', 'API consumption'. These cover common variations users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche — generating client SDKs from OpenAPI specs is a very specific task unlikely to conflict with other skills. The trigger terms like 'generate SDK' and 'OpenAPI' are narrowly scoped. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill has good structural organization and progressive disclosure with clear references to supporting files, but critically lacks actionable, executable content. The instructions read as a high-level design document rather than concrete guidance Claude can follow—no actual code, no CLI commands, no tool invocations are provided. The workflow is sequenced but missing validation checkpoints for what is a complex, multi-step code generation process.
Suggestions
Add executable code examples for at least one target language showing the actual generated client class structure, model definitions, and a concrete OpenAPI Generator CLI invocation (e.g., `openapi-generator-cli generate -i spec.yaml -g typescript-fetch -o sdk/typescript/`)
Replace the prose-based Examples section with actual input (sample OpenAPI spec snippet) and output (generated code snippet) pairs that Claude can use as templates
Add explicit validation checkpoints after key steps, such as verifying the generated code compiles/type-checks before proceeding to add retry logic and pagination helpers
Include a concrete minimal working example showing the end-to-end flow from a small OpenAPI spec to a working SDK in at least one language
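To illustrate what these suggestions are asking for, here is a minimal sketch of a generated TypeScript client with a typed model, one endpoint method, and a bearer-auth helper. All names here (`PetStoreClient`, `Pet`, the `/pets/{petId}` route) are hypothetical, not taken from the skill's actual output:

```typescript
// Hypothetical sketch of a generated TypeScript SDK fragment: one typed
// model, one endpoint method, and an authentication helper.

export interface Pet {
  id: number;
  name: string;
  tag?: string;
}

// Narrow fetch-like type so a stub can be injected in tests.
export type FetchLike = (
  url: string,
  init?: { headers?: Record<string, string> },
) => Promise<{ ok: boolean; status: number; json(): Promise<unknown> }>;

export class PetStoreClient {
  constructor(
    private baseUrl: string,
    private apiKey: string,
    private fetchImpl: FetchLike = fetch,
  ) {}

  // Authentication helper: every request carries a bearer token.
  private headers(): Record<string, string> {
    return { Authorization: `Bearer ${this.apiKey}`, Accept: "application/json" };
  }

  // Typed endpoint method, as would be generated from `GET /pets/{petId}`.
  async getPet(petId: number): Promise<Pet> {
    const res = await this.fetchImpl(
      `${this.baseUrl}/pets/${encodeURIComponent(petId)}`,
      { headers: this.headers() },
    );
    if (!res.ok) {
      throw new Error(`GET /pets/${petId} failed with status ${res.status}`);
    }
    return (await res.json()) as Pet;
  }
}
```

Making the fetch implementation injectable is a deliberate choice in this sketch: it lets the generated SDK be exercised offline in tests, which also supports the validation-checkpoint suggestion above.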
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is moderately verbose with some unnecessary explanation. The overview paragraph restates what the instructions already cover. The examples section describes scenarios in prose rather than providing executable code, and the error handling table, while useful, is lengthy. Some trimming is possible but it's not egregiously padded. | 2 / 3 |
| Actionability | Despite listing 9 steps, the instructions are entirely descriptive and abstract — there are no concrete code snippets, no executable commands, no actual code generation tool invocations, and no copy-paste ready examples. The examples section describes what should be produced in prose rather than showing actual code. Claude would need to invent the entire implementation from scratch. | 1 / 3 |
| Workflow Clarity | The steps are sequenced logically (validate spec → extract models → generate code → add features → test), but there are no validation checkpoints between steps. For a complex multi-language code generation workflow, there's no feedback loop for verifying generated output correctness before proceeding, and no explicit validation step after generation. | 2 / 3 |
| Progressive Disclosure | The skill appropriately references external files for detailed content: `references/implementation.md` for the full implementation guide, `references/errors.md` for comprehensive error patterns, and `references/examples.md` for additional examples. These are one-level-deep, clearly signaled references with the main file serving as a well-structured overview. | 3 / 3 |
| Total | | 8 / 12 Passed |
Validation
81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
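Clearing the two warnings would mean keeping only recognized frontmatter keys and standard tool names. A hedged sketch of what that cleanup might look like (the skill's actual tool list and custom keys are not shown in this review, so the values below are assumptions):

```yaml
# Hypothetical cleanup: keep recognized keys and standard tool names;
# relocate anything custom under `metadata`.
name: generating-api-sdks
description: Generate client SDKs in multiple languages from OpenAPI specifications.
allowed-tools:
  - Read
  - Write
  - Bash
metadata:
  custom-key: moved-here  # previously an unknown top-level key
```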