
ai-engineer

MASTER AI: LLM Apps, Advanced RAG, Agents (ReAct/Plan), Prompting (CoT/Few-shot), LangGraph, VectorDBs, RAGAS Eval. Use for ANY AI/LLM task.


Quality: 34%. Does it follow best practices?

Impact: Pending. No eval scenarios have been run.

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agent/skills/ai-engineer/SKILL.md

Quality

Discovery: 42%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description lists relevant AI/LLM topics and frameworks but fails to describe concrete actions the skill performs. The overly broad trigger clause 'Use for ANY AI/LLM task' undermines distinctiveness and would cause conflicts with other AI-related skills. The 'MASTER AI' prefix is promotional fluff rather than useful information.

Suggestions

- Replace 'Use for ANY AI/LLM task' with specific trigger scenarios, such as 'Use when building RAG pipelines, designing agent architectures, or evaluating LLM outputs with RAGAS'.
- Add concrete action verbs describing what the skill does, e.g. 'Designs RAG architectures, implements ReAct agents, configures vector database retrieval, writes evaluation pipelines'.
- Remove the 'MASTER AI' promotional language and narrow the scope to avoid conflicts with other AI-related skills.
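By way of illustration, a description rewritten along these lines might look like the following frontmatter sketch (hypothetical wording, not the skill's actual metadata):

```yaml
# Illustrative SKILL.md frontmatter incorporating the suggestions above
name: ai-engineer
description: >
  Designs RAG architectures, implements ReAct and plan-and-execute agents,
  configures vector database retrieval, and writes RAGAS evaluation
  pipelines. Use when building RAG pipelines, designing agent
  architectures, or evaluating LLM outputs.
```

A description of this shape leads with action verbs and replaces the generic trigger clause with concrete scenarios an agent can match against.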

Dimension scores

- Specificity (2/3): Lists domain areas (RAG, Agents, Prompting, LangGraph, VectorDBs, RAGAS) and some techniques (CoT/Few-shot, ReAct/Plan), but these are category names rather than concrete actions. No verbs describe what the skill actually does.
- Completeness (2/3): The 'what' is partially addressed through listing topics, but lacks concrete actions. The 'when' clause ('Use for ANY AI/LLM task') is present but overly broad and not explicit about specific triggers.
- Trigger Term Quality (2/3): Includes relevant technical terms users might search for (RAG, LangGraph, VectorDBs, Agents, prompting), but 'MASTER AI' is not a natural user phrase, and the coverage is heavy on jargon without common variations or plain-language alternatives.
- Distinctiveness / Conflict Risk (1/3): 'Use for ANY AI/LLM task' is extremely generic and would conflict with virtually any other AI-related skill. The broad scope makes it impossible to distinguish from other AI/ML skills.

Total: 7 / 12

Passed

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a high-level topic index rather than actionable guidance. It lists many AI/ML concepts (ReAct, RAG, CoT) but provides no executable code examples, concrete implementation patterns, or detailed workflows. Claude already knows these concepts; the skill should instead provide project-specific configurations, working code snippets, and clear step-by-step processes.
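To make the contrast concrete, the kind of working snippet the review is asking for might look like this toy retrieval sketch. A bag-of-words similarity stands in for a real embedding model and vector database, and all document text and names are illustrative:

```python
# Toy RAG retrieval sketch: bag-of-words cosine similarity stands in for a
# real embedding model + vector DB. Documents and names are illustrative.
from collections import Counter
import math

DOCS = [
    "LangGraph builds stateful agent graphs on top of LangChain.",
    "RAGAS evaluates RAG pipelines on faithfulness and answer relevance.",
    "Hybrid search combines BM25 keyword scores with dense vector scores.",
]

def embed(text: str) -> Counter:
    """Stand-in for an embedding call: simple lowercase token counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    """Assemble the retrieved context into a prompt. In a real pipeline
    this prompt would go to an LLM; here we just return it."""
    context = "\n".join(retrieve(query, k=2))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Even a sketch at this level gives an agent something executable to adapt, which is what the dimension scores below are measuring.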

Suggestions

- Replace concept lists with executable code examples, e.g. show a complete ReAct loop implementation or a working RAG pipeline with actual Python/JS code.
- Add validation checkpoints to the Execution Protocol: what does success look like at each step, and how do you verify the agent is working correctly?
- Create separate reference files (e.g. RAG_PATTERNS.md, AGENT_EXAMPLES.md) and link to them from the main skill for progressive disclosure.
- Remove explanations of concepts Claude already knows (CoT, Few-Shot, embeddings) and focus on project-specific configurations or non-obvious implementation details.
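The first suggestion can be sketched minimally. Here the `llm` function is a scripted stand-in for a real model call, and the tool set is a single toy function; everything beyond the Thought/Action/Observation loop shape is an assumption:

```python
# Minimal ReAct-style loop sketch. `llm` is a scripted stand-in for a real
# model call; a production version would parse actual model output.

def calculator(expression: str) -> str:
    """Toy tool: evaluate an arithmetic expression (no builtins exposed)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def llm(history: list[str]) -> str:
    """Scripted stand-in: act once, then answer after seeing an observation."""
    if not any(line.startswith("Observation:") for line in history):
        return "Thought: I need to compute 2 + 3.\nAction: calculator[2 + 3]"
    return "Final Answer: 5"

def react_loop(question: str, max_steps: int = 5) -> str:
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        step = llm(history)
        history.append(step)
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        # Parse "Action: tool[input]" and run the named tool
        action_line = next(
            line for line in step.splitlines() if line.startswith("Action:")
        )
        tool_name, arg = (
            action_line.removeprefix("Action: ").rstrip("]").split("[", 1)
        )
        observation = TOOLS[tool_name](arg)
        history.append(f"Observation: {observation}")
    return "No answer within step budget."
```

Swapping the scripted `llm` for a real model call and adding error recovery around the action parsing would turn this into the kind of concrete pattern the skill currently lacks.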

Dimension scores

- Conciseness (2/3): The content is reasonably efficient with bullet-point structure, but includes some unnecessary framing ('You are a Principal AI Architect'), and the merged-skills note at the bottom adds no value. The menu structure adds overhead without clear benefit.
- Actionability (1/3): The content is almost entirely abstract descriptions and concept lists rather than executable guidance. Phrases like 'Implement the ReAct loop' and 'Use Hybrid Search' describe what to do but provide no concrete code, commands, or specific examples of how to do it.
- Workflow Clarity (2/3): The Execution Protocol provides a 4-step sequence with script commands, but lacks validation checkpoints and error-recovery steps. The main content sections are topic lists without clear workflows for multi-step processes like building a RAG system.
- Progressive Disclosure (1/3): The skill is a monolithic file with no references to external documentation. It mentions scripts but doesn't link to detailed guides for any of the complex topics (RAG, agents, evaluation). All content is inline, with no clear navigation to deeper resources.

Total: 6 / 12

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed

Validation for skill structure

No warnings or errors.

Repository: Dokhacgiakhoa/antigravity-ide (Reviewed)
