
ai-engineer

Build production-ready LLM applications, advanced RAG systems, and intelligent agents. Implements vector search, multimodal AI, agent orchestration, and enterprise AI integrations.


Quality: 16% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/ai-engineer/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description reads like a marketing pitch with buzzwords rather than a precise skill selector. It covers a broad domain (LLM apps, RAG, agents) but lacks concrete actions and completely omits a 'Use when...' clause, making it difficult for Claude to know exactly when to select this skill over others. The terms used are more industry jargon than natural user language.

Suggestions

Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks about building chatbots, setting up RAG pipelines, configuring vector databases, or orchestrating AI agents.'

Replace buzzword categories with concrete actions, e.g., 'Configures vector database indexing and retrieval, builds multi-step agent workflows, implements document chunking and embedding pipelines' instead of 'advanced RAG systems' and 'agent orchestration'.

Include natural user terms and common tool names users might mention, such as 'embeddings', 'semantic search', 'chatbot', 'LangChain', 'retrieval pipeline', 'knowledge base'.
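Combining these suggestions, a rewritten description frontmatter entry might read as follows (illustrative wording only, not the maintainer's; the `description` key follows the usual SKILL.md convention):

```yaml
description: >
  Configures vector database indexing and retrieval, builds document chunking
  and embedding pipelines, and orchestrates multi-step agent workflows for LLM
  applications. Use when the user asks about building chatbots, setting up RAG
  or retrieval pipelines, semantic search, embeddings, knowledge bases, or
  frameworks like LangChain.
```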

Specificity: 2 / 3

Names the domain (LLM applications, RAG systems, agents) and lists some actions (vector search, multimodal AI, agent orchestration, enterprise AI integrations), but these read as buzzword categories rather than concrete, specific actions. It doesn't describe what the skill actually does with them (e.g., 'configures vector databases', 'implements retrieval pipelines').

Completeness: 1 / 3

Describes the 'what' at a high level but lacks any 'when' clause or explicit trigger guidance. There is no 'Use when...' or equivalent, which per the rubric caps completeness at 2, and the 'what' itself is vague enough that the score falls to 1.

Trigger Term Quality: 2 / 3

Includes some relevant keywords users might mention, like 'RAG', 'vector search', 'agents', 'LLM applications', and 'multimodal AI'. However, it misses common variations and natural phrasings such as 'chatbot', 'embeddings', 'retrieval augmented generation', 'AI pipeline', 'LangChain', and 'prompt chaining'.

Distinctiveness / Conflict Risk: 2 / 3

The combination of RAG, vector search, and agent orchestration provides some distinctiveness, but terms like 'production-ready LLM applications' and 'enterprise AI integrations' are broad enough to overlap with many AI-related skills. It could easily conflict with general coding skills, API integration skills, or other AI/ML skills.

Total: 7 / 12

Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a persona/resume description rather than an actionable skill document. It enumerates hundreds of technologies and concepts Claude already knows without providing any executable code, concrete workflows, or specific guidance. The content is almost entirely descriptive rather than instructive, making it ineffective as a teaching document.

Suggestions

Replace the capability catalog with 2-3 concrete, executable code examples for the most common tasks (e.g., a production RAG pipeline setup, an agent workflow with LangGraph) with copy-paste ready code.

Add specific multi-step workflows with validation checkpoints for key operations like 'Setting up a RAG pipeline' or 'Deploying an LLM service', including error recovery steps.

Remove the 'Capabilities', 'Knowledge Base', 'Behavioral Traits', and 'Example Interactions' sections entirely — these describe what Claude already knows. Replace with concrete patterns, anti-patterns, and decision trees (e.g., 'When to use which vector DB' as a brief table).

Split detailed reference material (model comparison tables, framework-specific guides) into separate bundle files and reference them from a concise SKILL.md overview.
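As a sketch of the kind of concrete, copy-paste example the first suggestion asks for, the retrieval step of a RAG pipeline can be reduced to a standard-library-only toy (the three-dimensional "embeddings" and chunk texts here are invented for illustration; a real pipeline would call an embedding model and a vector database):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, corpus, top_k=2):
    # corpus: list of (chunk_text, embedding_vector) pairs.
    # Rank every chunk by similarity to the query and keep the top_k texts.
    scored = sorted(corpus,
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings" stand in for real model output.
corpus = [
    ("Configure the vector index", [0.9, 0.1, 0.0]),
    ("Deploy the chat service",    [0.1, 0.8, 0.1]),
    ("Chunk and embed documents",  [0.7, 0.2, 0.1]),
]
print(retrieve([1.0, 0.0, 0.0], corpus))
```

The same top-k ranking is what a vector database performs at scale, typically with approximate nearest-neighbor indexes rather than an exhaustive sort.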

Conciseness: 1 / 3

Extremely verbose and padded with information Claude already knows. The massive capability lists (specific model names, framework names, database names) are essentially catalogs that don't teach Claude anything new. Sections like 'Behavioral Traits', 'Knowledge Base', 'Example Interactions', and 'Purpose' are redundant persona descriptions that waste tokens without adding actionable value.

Actionability: 1 / 3

Contains zero executable code, no concrete commands, and no specific examples with inputs/outputs. The entire skill is abstract description and enumeration of technologies. The 'Instructions' section is four vague bullet points ('Clarify use cases', 'Design the AI architecture') that provide no concrete guidance on how to actually perform any task.

Workflow Clarity: 1 / 3

The four-step 'Instructions' workflow is extremely vague, with no validation checkpoints, no error recovery, and no concrete sequencing. For a skill covering complex operations like RAG pipelines, agent orchestration, and production deployments, there are no specific workflows, no feedback loops, and no verification steps.

Progressive Disclosure: 1 / 3

A monolithic wall of text with no references to external files and no bundle files to support the content. All content is dumped inline in a single file with a flat section structure. The massive capability lists should be in separate reference files, and the skill should provide a concise overview with pointers to detailed guides.

Total: 4 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 checks passed

Validation for skill structure

frontmatter_unknown_keys: Warning

Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11

Passed
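The frontmatter warning above flags keys outside the skill spec. As an illustrative fix (the `name` and `description` keys follow the common SKILL.md convention; the `author` field and the exact `metadata` nesting are assumptions, not taken from this skill), extra keys can be moved under `metadata`:

```yaml
---
name: ai-engineer
description: Build production-ready LLM applications, advanced RAG systems, and intelligent agents.
metadata:
  # 'author' is a hypothetical extra key; nesting it under 'metadata'
  # keeps it out of the validated top-level frontmatter.
  author: sickn33
---
```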

Repository: sickn33/antigravity-awesome-skills (Reviewed)
