
ai-engineer

Build production-ready LLM applications, advanced RAG systems, and intelligent agents. Implements vector search, multimodal AI, agent orchestration, and enterprise AI integrations.

Install with Tessl CLI

npx tessl i github:sickn33/antigravity-awesome-skills --skill ai-engineer

Quality: 16% (1.00x multiplier). Does it follow best practices?

Impact: 56% (1.00x multiplier). Average score across 3 eval scenarios.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/ai-engineer/SKILL.md
Review

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description covers a relevant technical domain with moderate specificity but lacks explicit trigger guidance ('Use when...'), which is critical for skill selection. The terminology is somewhat technical and could benefit from more natural user-facing keywords and concrete action examples.

Suggestions

Add a 'Use when...' clause with explicit triggers like 'Use when building chatbots, implementing semantic search, creating AI agents, or setting up retrieval-augmented generation pipelines'

Include more natural user terms alongside technical ones: 'chatbot', 'AI assistant', 'embeddings', 'langchain', 'semantic search', 'knowledge base'

Make actions more concrete: instead of 'implements vector search', specify 'create and query vector embeddings, configure similarity search, build retrieval pipelines'

Dimension scores

Specificity (2 / 3): Names the domain (LLM applications, RAG systems, agents) and lists some actions (vector search, multimodal AI, agent orchestration, enterprise AI integrations), but these are still high-level concepts rather than concrete actions like 'create embeddings' or 'configure retrieval pipelines'.

Completeness (1 / 3): Describes what the skill does but lacks a 'Use when...' clause or any explicit trigger guidance. Per the rubric, missing explicit trigger guidance caps completeness at 2, and this description has no 'when' component at all.

Trigger Term Quality (2 / 3): Includes relevant technical terms like 'LLM', 'RAG', 'vector search', and 'agents', but is missing common user variations such as 'chatbot', 'AI assistant', 'embeddings', 'retrieval augmented generation', 'langchain', or 'semantic search' that users might naturally say.

Distinctiveness / Conflict Risk (2 / 3): The combination of LLM applications, RAG, and agents provides some specificity, but terms like 'production-ready' and 'enterprise AI integrations' are broad enough to overlap with general coding skills or other AI-related skills.

Total: 7 / 12. Passed.

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a persona description or capability catalog rather than actionable instructions. It extensively lists technologies and concepts Claude already knows without providing any concrete code, commands, or step-by-step workflows. The document would benefit from being dramatically shortened and restructured around specific, executable guidance for common AI engineering tasks.

Suggestions

Replace the extensive capability lists with 2-3 concrete, executable code examples for common tasks (e.g., a working RAG implementation with Pinecone, a basic LangChain agent setup)

Transform the vague 4-step 'Instructions' into specific workflows with validation checkpoints, such as 'RAG System Implementation Workflow' with concrete steps and verification commands

Remove or drastically condense the 'Capabilities', 'Knowledge Base', and 'Behavioral Traits' sections - Claude already knows these technologies and doesn't need a catalog

Add progressive disclosure by creating separate reference files for detailed topics (e.g., VECTOR_DATABASES.md, AGENT_PATTERNS.md) and linking to them from a concise overview
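The first suggestion can be made concrete with a short sketch. The following is a framework-free illustration of the retrieval step of a RAG pipeline: the `embed` function is a toy bag-of-words stand-in for a real embedding model, and the in-memory `VectorStore`, corpus, and query are assumptions invented for this example, not part of the reviewed skill.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a normalized bag-of-words vector.
    A real pipeline would call an embedding model here."""
    counts = Counter(token.strip(".,?!").lower() for token in text.split())
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return {tok: c / norm for tok, c in counts.items()}

def cosine(a, b):
    """Cosine similarity between two sparse (dict) vectors."""
    return sum(w * b.get(tok, 0.0) for tok, w in a.items())

class VectorStore:
    """Minimal in-memory store; a real system would use Pinecone, pgvector, etc."""
    def __init__(self):
        self.items = []  # (text, vector) pairs

    def add(self, text):
        self.items.append((text, embed(text)))

    def query(self, question, k=2):
        q = embed(question)
        ranked = sorted(self.items, key=lambda item: cosine(q, item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = VectorStore()
store.add("Pinecone is a managed vector database.")
store.add("LangChain provides primitives for agent orchestration.")
store.add("RAG augments an LLM prompt with retrieved context.")

# Retrieval step of RAG: fetch the most relevant chunk, then augment the prompt.
context = store.query("Which vector database should I use?", k=1)
prompt = "Answer using this context:\n" + "\n".join(context) + "\n\nQ: Which vector database should I use?"
```

An example of this shape, swapped for real embedding and vector-database calls, would give the skill the copy-paste-ready guidance the review says it lacks.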

Dimension scores

Conciseness (1 / 3): Extremely verbose, with extensive lists of technologies, capabilities, and knowledge areas that Claude already knows. The document reads like a resume or capability catalog rather than actionable instructions, with large sections such as 'Capabilities' and 'Knowledge Base' that add little operational value.

Actionability (1 / 3): No concrete code examples, commands, or executable guidance anywhere. The 'Instructions' section is just 4 vague bullet points; everything is abstract description ('Design the AI architecture', 'Implement with monitoring') rather than specific, copy-paste-ready instructions.

Workflow Clarity (1 / 3): The 4-step 'Instructions' section is too abstract to be useful ('Clarify use cases', 'Design the AI architecture'). There are no validation checkpoints, no feedback loops, and no concrete sequences for the complex operations mentioned, such as RAG implementation or agent orchestration.

Progressive Disclosure (1 / 3): A monolithic wall of text with no references to external files. All content is inline in one massive document with no clear navigation structure; the extensive capability lists could be split into reference documents, but instead everything sits in a single file.

Total: 4 / 12. Passed.
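To illustrate the kind of 'agent orchestration' example the review asks the skill to include, here is a minimal, framework-free decide-and-act loop. The `TOOLS` registry and the `fake_llm` parser are hypothetical stand-ins invented for this sketch; a production agent would get the tool-choice decision from an LLM response rather than a regex.

```python
import re

# Hypothetical tool registry: a real agent would expose these to an LLM as tool schemas.
TOOLS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def fake_llm(task):
    """Stand-in for a model call: parses 'add 2 3'-style tasks into a tool choice.
    A real agent would get this decision from an LLM, not a regex."""
    m = re.match(r"(\w+) (-?\d+) (-?\d+)", task)
    if not m:
        return None
    return m.group(1), int(m.group(2)), int(m.group(3))

def run_agent(task):
    """One decide-act step; real agents loop, feeding tool results back to the model."""
    decision = fake_llm(task)
    if decision is None:
        return "cannot handle task"
    name, a, b = decision
    tool = TOOLS.get(name)
    if tool is None:
        return f"unknown tool: {name}"
    return tool(a, b)
```

The point is the shape, not the arithmetic: a tool registry, an explicit decision step, and defensive handling of unknown tools are the concrete workflow elements the review finds missing from the skill.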

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

Criteria results

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing them or moving them to metadata.

Total: 10 / 11. Passed.
