
ai-engineer

Build production-ready LLM applications, advanced RAG systems, and intelligent agents. Implements vector search, multimodal AI, agent orchestration, and enterprise AI integrations.

37

Quality (1.00x): 16%

Does it follow best practices?

Impact (1.00x): 56%

Average score across 3 eval scenarios

Security (by Snyk): Advisory. Review suggested before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/ai-engineer/SKILL.md

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description reads like a marketing pitch with buzzwords rather than a precise skill selector. It covers a broad AI/LLM domain with some specific technical terms but lacks concrete actions and completely omits trigger guidance ('Use when...'). The heavy use of industry jargon without grounding in specific user scenarios makes it difficult for Claude to reliably select this skill at the right time.

Suggestions

Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks to build RAG pipelines, configure vector databases, create AI agents, or integrate LLM APIs into applications.'

Replace buzzwords with concrete actions, e.g., 'Configures vector database indexing and retrieval, builds multi-step agent workflows with tool use, implements document chunking and embedding pipelines' instead of 'advanced RAG systems' and 'agent orchestration'.

Include natural user terms and file/framework references users might mention, such as 'LangChain', 'embeddings', 'semantic search', 'chatbot', 'Pinecone', 'ChromaDB', or 'OpenAI API'.
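Putting these suggestions together, the rewritten frontmatter description might look something like the sketch below. The wording is illustrative only, and the field layout assumes the common SKILL.md frontmatter shape (`name` plus `description`), not this skill's actual metadata:

```markdown
---
name: ai-engineer
description: >
  Configures vector database indexing and retrieval, implements document
  chunking and embedding pipelines, and builds multi-step agent workflows
  with tool use. Use when the user asks to build RAG pipelines, set up
  semantic search or embeddings (e.g. Pinecone, ChromaDB), create AI
  agents or chatbots, or integrate LLM APIs (OpenAI, LangChain) into an
  application.
---
```

Note how the second sentence is an explicit trigger clause built from terms users actually type, which is what the discovery rubric rewards.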

Dimension scores

Specificity: 2 / 3

Names the domain (LLM applications, RAG systems, agents) and lists some actions (vector search, multimodal AI, agent orchestration, enterprise AI integrations), but these are buzzword categories rather than concrete, specific actions. It doesn't describe what the skill actually does with them (e.g. 'configures vector databases', 'builds retrieval pipelines').

Completeness: 1 / 3

Describes 'what' at a high level but completely lacks any 'when' clause or explicit trigger guidance. There is no 'Use when...' or equivalent statement, which per the rubric caps completeness at 2, and since the 'what' is also somewhat vague and buzzwordy, this falls to 1.

Trigger Term Quality: 2 / 3

Includes relevant keywords like 'RAG', 'LLM', 'vector search', 'agents', and 'multimodal AI' that users might mention, but misses common variations and natural phrasing like 'chatbot', 'embeddings', 'retrieval augmented generation', 'AI pipeline', 'LangChain', 'semantic search', or specific framework names users would reference.

Distinctiveness / Conflict Risk: 2 / 3

The combination of RAG, vector search, and agent orchestration provides some distinctiveness, but terms like 'production-ready LLM applications' and 'enterprise AI integrations' are broad enough to overlap with many AI-related skills. Without clearer boundaries, it could conflict with general coding skills or other AI-focused skills.

Total: 7 / 12 (Passed)

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a capability resume or persona description rather than actionable instructions. It lists hundreds of technologies and concepts Claude already knows without providing any concrete code, specific implementation patterns, or executable guidance. The content would need a fundamental restructuring to become a useful skill: replacing technology lists with concrete examples, adding executable code snippets, and splitting detailed topics into referenced sub-documents.

Suggestions

Replace the extensive 'Capabilities' technology lists with 2-3 concrete, executable code examples for the most common tasks (e.g., a production RAG pipeline setup, an agent workflow implementation).

Add specific validation checkpoints and feedback loops to the workflow, especially for operations like RAG pipeline setup, vector database configuration, and agent deployment.

Split detailed reference material (model comparisons, framework-specific patterns, safety checklists) into separate linked files and keep SKILL.md as a concise overview with quick-start guidance.

Remove all content that merely lists things Claude already knows (model names, framework descriptions, general best practices) and focus on project-specific conventions, preferred patterns, and concrete implementation templates.
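To make the first suggestion concrete, here is a minimal, runnable sketch of the chunk-embed-retrieve skeleton that a real code example in this skill might cover. It is an assumption-laden toy, not the skill's method: a production pipeline would swap in a real embedding model and vector store (e.g. OpenAI embeddings with Pinecone or ChromaDB), whereas the bag-of-words cosine similarity below is a stand-in so the example runs self-contained.

```python
# Toy RAG retrieval skeleton: chunk documents, "embed" them, rank by
# similarity to a query. The embedding here is a bag-of-words Counter,
# a deliberate stand-in for a real embedding model.
import math
from collections import Counter

def chunk(text, size=50):
    """Split text into fixed-size word chunks (naive chunking)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Stand-in embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    """Return the top-k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = ("Vector databases index embeddings for semantic search. "
        "Agents call tools in a loop until the task is done.")
top = retrieve("semantic search with embeddings", chunk(docs, size=8), k=1)
```

Even a short example like this gives the agent a concrete template to adapt, which abstract directives like 'Design the AI architecture' do not.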

Dimension scores

Conciseness: 1 / 3

Extremely verbose, with extensive lists of technologies, tools, and concepts that Claude already knows. The 'Capabilities' section is essentially a resume listing every AI tool and framework rather than providing actionable guidance. Most content (model names, framework lists, general best practices) adds no value beyond what Claude already knows.

Actionability: 1 / 3

No concrete code examples, no executable commands, no specific implementation patterns. The entire skill is abstract descriptions and bullet-point lists of technologies. Instructions like 'Design the AI architecture, data flow, and model selection' are vague directives with no concrete guidance on how to actually do anything.

Workflow Clarity: 1 / 3

The 4-step 'Instructions' section is extremely high-level, with no validation checkpoints, no error recovery, and no concrete sequencing. The 'Response Approach' section similarly lists abstract steps without any verification or feedback loops, despite the skill covering complex multi-step operations like RAG pipelines and agent orchestration.

Progressive Disclosure: 1 / 3

A monolithic wall of text with no references to external files. Hundreds of lines of capability listings are inlined that could be split into focused reference documents. No navigation structure, no links to detailed guides for specific topics like RAG implementation or agent setup.

Total: 4 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

Criteria results

frontmatter_unknown_keys: Warning

Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11 (Passed)
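The warning message itself points at the fix: keys the spec does not recognize can be removed or moved under a `metadata` key. A sketch of what that might look like; the `tags` and `model` keys are hypothetical examples for illustration, not keys this skill is known to use:

```markdown
---
name: ai-engineer
description: Build production-ready LLM applications...
metadata:
  tags: [rag, agents, llm]
  model: claude
---
```

Keeping only spec-known keys at the top level, with everything else under `metadata`, should clear the remaining validation warning.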

Repository: sickn33/antigravity-awesome-skills (Reviewed)

