
ai-engineer

Build production-ready LLM applications, advanced RAG systems, and intelligent agents. Implements vector search, multimodal AI, agent orchestration, and enterprise AI integrations.

Quality: 12%
Does it follow best practices?

Impact: 56% (1.00x)
Average score across 3 eval scenarios

Security (by Snyk)

Advisory: Suggest reviewing before use

Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/antigravity-ai-engineer/SKILL.md

Quality

Discovery: 25%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description reads like a marketing pitch full of AI buzzwords rather than a precise skill selector. It covers an extremely broad domain without concrete actions or explicit trigger conditions, making it both hard to distinguish from other AI-related skills and difficult for Claude to know when to select it. Adding a 'Use when...' clause and narrowing the scope to specific, actionable capabilities would significantly improve it.

Suggestions

Add an explicit 'Use when...' clause with concrete trigger scenarios, e.g., 'Use when the user asks to build a RAG pipeline, set up vector search, create an AI agent, or integrate LLM APIs into an application.'

Replace broad buzzwords with specific concrete actions, e.g., 'Configures vector database indexing, implements retrieval-augmented generation pipelines, builds multi-step agent workflows with tool use, integrates OpenAI/Anthropic APIs.'

Narrow the scope or clearly delineate boundaries to reduce conflict risk — if this skill truly covers all of LLM apps, RAG, agents, and multimodal AI, explain what distinguishes it from skills that might handle individual subtopics.
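Putting the first two suggestions together, an improved frontmatter description might look something like the following sketch (the wording and field values are hypothetical, not the maintainer's actual metadata):

```yaml
---
name: ai-engineer
description: >
  Builds RAG pipelines, vector search indexes, and multi-step agent
  workflows with tool use. Use when the user asks to build a RAG
  pipeline, set up vector search, create an AI agent, or integrate
  OpenAI/Anthropic APIs into an application.
---
```

A description in this shape gives the agent both concrete actions and explicit trigger conditions, which addresses the Completeness and Trigger Term Quality gaps scored below.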

Dimension / Reasoning / Score

Specificity

Names the domain (LLM applications, RAG systems, agents) and lists some actions (vector search, multimodal AI, agent orchestration, enterprise AI integrations), but these are more like buzzword categories than concrete specific actions. It doesn't describe what it actually does with these things (e.g., 'configures vector databases', 'implements retrieval pipelines').

2 / 3

Completeness

Describes 'what' at a high level but completely lacks any 'when' clause or explicit trigger guidance. There is no 'Use when...' or equivalent statement telling Claude when to select this skill, which per the rubric should cap completeness at 2, and the 'what' is also fairly vague, bringing it to 1.

1 / 3

Trigger Term Quality

Includes some relevant keywords like 'RAG', 'vector search', 'agents', 'LLM applications', and 'multimodal AI' that users might mention. However, it misses common variations and natural phrasings users would say like 'chatbot', 'embeddings', 'retrieval augmented generation', 'AI pipeline', 'LangChain', 'prompt chaining', etc.

2 / 3

Distinctiveness / Conflict Risk

The description is extremely broad, covering LLM applications, RAG, agents, vector search, multimodal AI, and enterprise integrations — essentially the entire AI/ML application space. This would easily conflict with more specific skills for any of these individual areas.

1 / 3

Total: 6 / 12 (Passed)

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads like a capabilities brochure or resume rather than actionable instructions. It lists hundreds of technologies and concepts Claude already knows without providing any concrete code, specific implementation patterns, or executable guidance. The content would need a fundamental restructuring to be useful—replacing technology lists with focused, executable examples and clear workflows for specific tasks.

Suggestions

Replace the extensive capability/technology lists with 2-3 concrete, executable code examples for the most common tasks (e.g., a production RAG pipeline setup, an agent workflow implementation).

Add specific validation checkpoints and error recovery steps to the workflow, especially for operations like vector database indexing, RAG pipeline testing, and agent deployment.

Split detailed content into separate reference files (e.g., RAG_PATTERNS.md, AGENT_FRAMEWORKS.md) and keep SKILL.md as a concise overview with clear navigation links.

Remove all technology enumeration lists (model names, framework names, tool names) that Claude already knows, and instead focus on project-specific conventions, preferred patterns, and decision criteria for choosing between options.

Dimension / Reasoning / Score

Conciseness

Extremely verbose with extensive lists of technologies, tools, and concepts that Claude already knows. The 'Capabilities' section is essentially a resume listing every AI tool and framework rather than providing actionable guidance. Most content (model names, framework lists, general concepts) adds no value beyond what Claude already knows.

1 / 3

Actionability

No concrete code examples, no executable commands, no specific implementation patterns. The entire skill is abstract descriptions and bullet-point lists of technologies. Instructions like 'Design the AI architecture, data flow, and model selection' are vague directives with no concrete guidance on how to actually do anything.

1 / 3

Workflow Clarity

The 4-step 'Instructions' section is extremely high-level with no validation checkpoints, no error recovery, and no concrete sequencing. The 'Response Approach' section similarly lists abstract steps without any verification or feedback loops, despite the skill covering complex multi-step operations like RAG pipelines and agent orchestration.

1 / 3

Progressive Disclosure

Monolithic wall of text with no references to external files. Hundreds of lines of capability listings are inline rather than being split into focused reference documents. No navigation structure, no links to detailed guides for specific topics like RAG implementation or agent setup.

1 / 3

Total: 4 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)

Repository: boisenoise/skills-collections (Reviewed)
