
ai-wrapper-product

Expert in building products that wrap AI APIs (OpenAI, Anthropic, etc.) into focused tools people will pay for. Not just "ChatGPT but different" - products that solve specific problems with AI.

38

Quality: 24%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Passed
No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/antigravity-ai-wrapper-product/SKILL.md

Quality

Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description reads more like a marketing tagline than a functional skill description. It lacks concrete actions, has no 'Use when...' clause, and relies on vague language like 'focused tools people will pay for' rather than specifying what the skill actually does (e.g., architecture design, API integration patterns, pricing strategy, prompt engineering). The editorializing ('Not just ChatGPT but different') wastes space that could be used for actionable trigger terms.

Suggestions

Add a 'Use when...' clause with specific triggers like 'Use when building AI-powered SaaS products, integrating OpenAI/Anthropic APIs, designing prompt pipelines, or planning monetization for AI tools.'

Replace vague language with concrete actions, e.g., 'Designs API integration architectures, builds prompt chains, structures pricing models, and creates user-facing AI features for commercial products.'

Remove editorial commentary ('Not just ChatGPT but different') and use that space for natural trigger terms users would say, such as 'AI wrapper', 'SaaS', 'API product', 'AI startup', 'prompt engineering for production'.
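Applied together, the suggestions above might yield frontmatter along these lines. This is a hypothetical sketch; the description wording is illustrative, not taken from the skill:

```yaml
# Hypothetical SKILL.md frontmatter illustrating the suggestions above
name: ai-wrapper-product
description: >
  Designs API integration architectures, builds prompt chains, structures
  pricing models, and creates user-facing AI features for commercial products.
  Use when building AI-powered SaaS products, integrating OpenAI/Anthropic
  APIs, designing prompt pipelines, or planning monetization for AI tools.
```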

Scoring by dimension:

Specificity: 1/3
The description uses vague language like 'building products' and 'focused tools' without listing concrete actions. It describes a philosophy ('not just ChatGPT but different') rather than specific capabilities like 'design API integration architecture, create pricing models, build prompt pipelines.'

Completeness: 1/3
The 'what' is vaguely stated (building products that wrap AI APIs) but lacks specifics, and there is no 'when' clause or explicit trigger guidance at all. The missing 'Use when...' clause caps this at 2 per the rubric, but the weak 'what' brings it to 1.

Trigger Term Quality: 2/3
It includes some relevant keywords like 'AI APIs', 'OpenAI', 'Anthropic', and 'products', which users might naturally mention. However, it misses common variations like 'SaaS', 'wrapper', 'API integration', 'monetization', 'prompt engineering', or 'AI startup'.

Distinctiveness / Conflict Risk: 2/3
The focus on wrapping AI APIs into paid products provides some distinctiveness, but 'building products' and 'solve specific problems with AI' are broad enough to overlap with general coding skills, AI development skills, or product strategy skills.

Total: 6 / 12 (Passed)

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive in topic coverage but severely bloated, trying to be an entire AI product development handbook in one file. It contains useful executable code examples for common patterns (retry logic, streaming, cost tracking) but undermines its value through excessive verbosity, redundant sections, and lack of progressive disclosure. The content would benefit greatly from being restructured into a concise overview with references to detailed sub-files.
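For instance, the retry pattern the review credits the skill with can be kept fully self-contained rather than leaning on undefined helpers. A minimal sketch of exponential backoff with jitter, assuming any async `fn` that throws on transient errors such as 429s; the retry counts and delays are illustrative defaults, not the skill's actual values:

```javascript
// Retry an async API call with exponential backoff plus random jitter.
// `fn` is any async function that throws on transient failures (429/5xx).
async function withRetry(fn, { retries = 3, baseMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Give up once the retry budget is exhausted.
      if (attempt >= retries) throw err;
      // Backoff doubles each attempt: 500ms, 1s, 2s... plus up to 100ms jitter.
      const delay = baseMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Because the helper takes a plain async function, it wraps any model call without assuming a particular SDK.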

Suggestions

Reduce the SKILL.md to a concise overview (~100 lines) covering the wrapper stack pattern and key decisions, then split detailed sections (prompt engineering, cost management, rate limiting, hallucination handling) into separate referenced files.

Remove redundant sections like the separate 'Expertise' and 'Capabilities' lists, the role description paragraph, and the 'When to Use' trigger list which duplicates frontmatter functionality.

Add a clear end-to-end workflow for building an AI wrapper product with explicit validation checkpoints (e.g., 'Verify cost tracking is working before launching to users').

Remove explanatory text Claude already knows (what hallucinations are, what rate limits are) and replace with just the actionable fix patterns.
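The 'verify cost tracking is working before launching' checkpoint presupposes per-request spend accounting that can be asserted against. A minimal sketch, assuming a usage object with token counts; the per-token prices are placeholders, not current API rates:

```javascript
// Placeholder per-1K-token prices; real rates vary by provider and model.
const PRICE_PER_1K = { input: 0.003, output: 0.015 };

// Cost in USD of one request, given the token counts most chat APIs return.
function requestCost(usage) {
  return (
    (usage.promptTokens / 1000) * PRICE_PER_1K.input +
    (usage.completionTokens / 1000) * PRICE_PER_1K.output
  );
}

// Accumulates spend across requests so a pre-launch check can verify
// that totals are actually being recorded.
class CostTracker {
  constructor() {
    this.totalUsd = 0;
    this.requests = 0;
  }
  record(usage) {
    this.totalUsd += requestCost(usage);
    this.requests += 1;
    return this.totalUsd;
  }
}
```

A launch checklist can then assert `tracker.requests > 0` and that `totalUsd` moves after a test call, which is the kind of explicit validation checkpoint the suggestion calls for.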

Scoring by dimension:

Conciseness: 1/3
Extremely verbose at ~500+ lines. Contains redundant sections (e.g., 'Capabilities' and 'Expertise' lists that repeat each other), explains concepts Claude already knows (what rate limiting is, what hallucinations are), and includes extensive tables with obvious information. The model selection table, with vague '$' pricing and subjective quality ratings, adds little value.

Actionability: 2/3
Contains executable JavaScript code examples for API calls, cost tracking, retry logic, streaming, and caching, which are concrete and useful. However, many code snippets reference undefined functions (parseOutput, getUserPlan, getDailyUsage, hashPrompt) without implementations, and the model selection table uses outdated model names. The differentiation and strategy sections are abstract advice rather than actionable guidance.

Workflow Clarity: 2/3
The 'Wrapper Stack' diagram provides a clear high-level sequence, and the sharp-edges sections follow a problem-diagnosis-fix pattern. However, there is no overarching workflow for building an AI wrapper product from start to finish with validation checkpoints. The collaboration workflows at the end are just numbered lists without validation steps or decision points.

Progressive Disclosure: 1/3
This is a monolithic wall of text with no references to external files. All content (architecture, prompt engineering, cost management, differentiation, sharp edges, validation checks, and collaboration patterns) is crammed into a single file. Content like the detailed sharp-edges sections and collaboration workflows should be split into separate referenced files.

Total: 6 / 12 (Passed)
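The restructuring the reviewers recommend typically leaves each SKILL.md section as a short pointer into a reference file. A hypothetical sketch of what that looks like in the SKILL.md body; the section and file names are invented for illustration:

```markdown
## Cost management

Track per-request spend and set budget alerts before launch.
Details: [references/cost-management.md](references/cost-management.md)

## Rate limiting

Queue requests and back off with jitter on 429s.
Details: [references/rate-limiting.md](references/rate-limiting.md)
```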

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Checks: 9 / 11 passed

Validation for skill structure:

skill_md_line_count: Warning
SKILL.md is long (685 lines); consider splitting into references/ and linking

frontmatter_unknown_keys: Warning
Unknown frontmatter key(s) found; consider removing or moving to metadata

Total: 9 / 11 (Passed)

Repository: boisenoise/skills-collections (Reviewed)

