
ai-product

Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt engineering that scales, AI UX that users trust, and cost optimization that doesn't bankrupt you. Use when: keywords, file_patterns, code_patterns.

Install with Tessl CLI

npx tessl i github:duclm1x1/Dive-Ai --skill ai-product


Discovery: 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description suffers from a critically broken 'Use when' clause that contains placeholder text instead of actual triggers. The opening sentence is marketing fluff rather than a functional description. While it names relevant AI/LLM topics, it lacks concrete action verbs and provides no usable guidance for skill selection.

Suggestions

- Replace the broken 'Use when: keywords, file_patterns, code_patterns' with actual trigger terms, e.g. 'Use when building LLM-powered features, implementing RAG systems, designing AI chat interfaces, or optimizing API costs'
- Replace the marketing opener ('Every product will be AI-powered...') with concrete actions: 'Implements LLM integrations, designs RAG pipelines, engineers production prompts, builds AI user interfaces'
- Add specific file patterns or code patterns as actual examples: 'Use when working with OpenAI/Anthropic SDKs, vector databases, embedding pipelines, or .prompt files'
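Putting those suggestions together, a repaired description block might look something like the sketch below. This assumes Tessl-style SKILL.md YAML frontmatter with a `description` field; the exact field names are illustrative, not taken from the skill itself.

```yaml
---
name: ai-product
description: >
  Implements LLM integrations, designs RAG pipelines, engineers production
  prompts, and builds AI user interfaces. Use when building LLM-powered
  features, implementing RAG systems, designing AI chat interfaces, working
  with OpenAI/Anthropic SDKs, vector databases, or embedding pipelines, or
  optimizing LLM API costs.
---
```

Note how the 'Use when' portion now carries concrete trigger terms an agent can match against, rather than placeholder text.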

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (AI/LLM integration) and lists several areas (RAG architecture, prompt engineering, AI UX, cost optimization), but these are high-level categories rather than concrete actions. No specific verbs describe what the skill actually does. | 2 / 3 |
| Completeness | The 'what' is partially addressed with topic areas, but the 'when' clause is completely broken with placeholder text ('Use when: keywords, file_patterns, code_patterns'). This provides no actual guidance on when to use the skill. | 1 / 3 |
| Trigger Term Quality | The 'Use when' clause contains placeholder text ('keywords, file_patterns, code_patterns') rather than actual trigger terms. While the body mentions terms like 'LLM', 'RAG', and 'prompt engineering', the trigger section is broken and unusable. | 1 / 3 |
| Distinctiveness / Conflict Risk | The AI/LLM focus provides some distinctiveness, but terms like 'prompt engineering' and 'AI UX' are broad enough to potentially overlap with other AI-related skills. The broken trigger section prevents clear differentiation. | 2 / 3 |

Total: 6 / 12 (Passed)

Implementation: 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads like an outline or table of contents rather than actionable guidance. It identifies important AI product development concerns but fails to provide any executable code, concrete examples, or complete solutions. The Sharp Edges table is particularly problematic—it promises solutions but delivers only empty comment placeholders.

Suggestions

- Add complete, executable code examples for each pattern (e.g., actual Pydantic schema validation, streaming implementation, prompt versioning setup)
- Fill in the Sharp Edges table solutions with real code snippets instead of comment placeholders
- Remove the persona paragraph: it wastes tokens explaining Claude's expertise rather than teaching skills
- Add a concrete workflow for at least one end-to-end scenario (e.g., 'Building a validated RAG endpoint') with numbered steps and validation checkpoints
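To illustrate the kind of executable example the skill is missing, here is a minimal sketch of validating LLM output against a schema before it reaches application code. The skill's suggestions mention Pydantic; to keep this self-contained, a stdlib `dataclass` stands in for a Pydantic model, and the schema itself (`title`, `confidence`) is hypothetical.

```python
import json
from dataclasses import dataclass


@dataclass
class ExtractionResult:
    """Expected shape of the model's JSON output (hypothetical schema)."""
    title: str
    confidence: float


def parse_llm_output(raw: str) -> ExtractionResult:
    """Validate raw model text instead of trusting it blindly."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    result = ExtractionResult(**data)  # raises TypeError on missing/extra keys
    if not isinstance(result.title, str) or not 0.0 <= float(result.confidence) <= 1.0:
        raise ValueError(f"schema violation: {data!r}")
    return result


# A well-formed response passes; a malformed one fails loudly at the boundary
# rather than propagating bad data downstream.
ok = parse_llm_output('{"title": "Q3 report", "confidence": 0.92}')
```

A real implementation would likely use Pydantic's validation (and the provider's JSON mode or function calling) instead of manual checks, but the point stands: a filled-in Sharp Edges entry should show this kind of complete, runnable snippet rather than a bare `# Always validate output:` comment.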

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The persona introduction is unnecessary padding (Claude doesn't need to be told it's an expert). The patterns and anti-patterns sections are lean, but the Sharp Edges table has incomplete solutions (just comments like '# Always validate output:' with no actual code). | 2 / 3 |
| Actionability | Critical failure: the skill describes what to do but provides zero executable code. The Sharp Edges table promises solutions but only shows comment placeholders. 'Use function calling or JSON mode with schema validation' is vague direction, not concrete guidance. | 1 / 3 |
| Workflow Clarity | No multi-step workflows are defined. The skill lists concepts (validation, streaming, versioning) but never sequences them into actionable processes. No validation checkpoints or feedback loops for any of the critical operations mentioned. | 1 / 3 |
| Progressive Disclosure | The content is organized into logical sections (Patterns, Anti-Patterns, Sharp Edges), which aids navigation. However, there are no references to external files for detailed implementations, and the Sharp Edges table is incomplete inline content that should either be fleshed out or linked elsewhere. | 2 / 3 |

Total: 6 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11 (Passed)
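The one warning is straightforward to clear. Per its message, unrecognized top-level frontmatter keys can either be removed or nested under `metadata`. A hedged sketch, with a hypothetical offending key (`author`) chosen purely for illustration:

```yaml
# Before: unrecognized top-level key triggers the warning
---
name: ai-product
author: duclm1x1        # hypothetical unknown key
---

# After: unrecognized keys nested under `metadata`
---
name: ai-product
metadata:
  author: duclm1x1
---
```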
