
ai-ml

AI and machine learning workflow covering LLM application development, RAG implementation, agent architecture, ML pipelines, and AI-powered features.

Quality: 26% (Does it follow best practices?)

Impact: 94% (1.08x average score across 3 eval scenarios)

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/antigravity-ai-ml/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies the AI/ML domain and lists several sub-areas but reads more like a category label than an actionable skill description. It lacks concrete actions (verbs), has no 'Use when...' clause, and is so broad it would likely conflict with more specialized AI-related skills.

Suggestions

Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks about building LLM applications, implementing retrieval-augmented generation, designing agent architectures, or creating ML pipelines.'

Replace category labels with concrete actions using active verbs, e.g., 'Designs and implements RAG pipelines with vector databases, builds LLM-powered agents, configures ML training workflows, and integrates AI features into applications.'

Include more natural trigger terms users would actually say, such as 'chatbot', 'embeddings', 'vector store', 'prompt engineering', 'fine-tuning', 'inference', 'langchain', 'OpenAI API'.
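Pulling these three suggestions together, a rewritten description might look like the following. This is a hypothetical frontmatter sketch: the field names and the skill name are assumptions, not taken from the skill under review.

```yaml
---
name: antigravity-ai-ml
description: >
  Designs and implements RAG pipelines with vector databases, builds
  LLM-powered agents with tool registration, configures ML training
  workflows, and integrates AI features into applications. Use when the
  user asks about chatbots, embeddings, vector stores, prompt
  engineering, fine-tuning, inference, LangChain, or the OpenAI API.
---
```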

Dimension / Reasoning / Score

Specificity: 2 / 3

Names the domain (AI/ML) and lists several areas like 'LLM application development, RAG implementation, agent architecture, ML pipelines', but these are broad categories rather than concrete actions. No specific verbs describing what the skill actually does (e.g., 'builds', 'configures', 'deploys').

Completeness: 1 / 3

Describes 'what' at a high level (AI/ML workflow covering several areas) but completely lacks any 'when' clause or explicit trigger guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also quite vague, so this scores a 1.

Trigger Term Quality: 2 / 3

Includes some relevant keywords users might say, like 'RAG', 'LLM', 'agent', 'ML pipelines', and 'AI-powered features'. However, it misses common variations and natural phrases users would use, such as 'chatbot', 'embeddings', 'vector database', 'fine-tuning', 'prompt engineering', 'OpenAI', 'langchain', etc.

Distinctiveness / Conflict Risk: 2 / 3

The scope is extremely broad, covering LLM apps, RAG, agents, ML pipelines, and AI features, which could easily overlap with more specialized skills for any of those individual areas. It's somewhat specific to the AI/ML domain but not narrowly scoped enough to avoid conflicts.

Total: 7 / 12 (Passed)

Implementation

20%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a table of contents that lists other skills and provides generic, non-actionable guidance. It contains no concrete code, commands, configuration examples, or specific technical decisions—just vague action items and trivial copy-paste prompts. The phased structure provides some organizational value, but the content within each phase adds almost nothing beyond what Claude could infer from the skill names alone.

Suggestions

Replace vague action items with concrete, executable guidance—e.g., instead of 'Choose embedding model', provide a decision matrix with specific model recommendations and trade-offs.

Add at least one concrete code example per phase showing a key integration pattern (e.g., a minimal RAG pipeline, a basic agent setup with tool registration).

Add explicit validation checkpoints between phases with concrete verification commands or criteria (e.g., 'Test retrieval accuracy with: python eval/test_retrieval.py --threshold 0.8').

Dramatically condense the content—collapse the repetitive 'Skills to Invoke / Actions / Copy-Paste Prompts' structure into a compact reference table, and use the saved space for actual technical guidance.
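As a sketch of the kind of "concrete code example per phase" the suggestions above call for, here is a minimal retrieve-then-prompt RAG loop. It substitutes a toy bag-of-words similarity for a real embedding model and vector store, so every name in it is illustrative rather than drawn from the skill itself.

```python
# Minimal RAG sketch: retrieve the most relevant documents, then stuff
# them into a prompt. A real pipeline would swap the toy bag-of-words
# "embedding" for an embedding model and a vector database.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query; keep the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


docs = [
    "Vector databases store embeddings for similarity search.",
    "Fine-tuning adapts a pretrained model to a narrow task.",
    "Chunking splits documents before embedding them.",
]
print(build_prompt("How do vector databases work?", docs))
```

A snippet like this, placed in the relevant phase, gives the agent an executable pattern to adapt instead of a bare directive such as 'Set up vector database'.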

Dimension / Reasoning / Score

Conciseness: 1 / 3

Extremely verbose and repetitive. The skill is essentially a long list of skill names, generic action items (e.g., 'Define AI use cases', 'Choose appropriate models'), and copy-paste prompts that are trivially simple. Most content is organizational scaffolding with no substantive information Claude couldn't infer. The checklists are generic and add little value.

Actionability: 1 / 3

No concrete code, commands, or executable guidance anywhere. Every 'action' is a vague directive like 'Set up vector database' or 'Implement chunking strategy' with no specifics. The copy-paste prompts are just 'Use @skill-name to do X', which provides no real instruction. There's nothing Claude can actually execute.

Workflow Clarity: 2 / 3

The phases are clearly sequenced and logically ordered from design through security. However, there are no validation checkpoints, no feedback loops, no error recovery steps, and no criteria for when to move between phases. The quality gates at the end are generic checklists without concrete verification methods.

Progressive Disclosure: 2 / 3

The skill references many sub-skills, which provides some progressive disclosure structure. However, the references are just skill names without clear file paths or descriptions of what each contains. The main file itself is a wall of repetitive sections that could be significantly condensed, with detailed phase information split into separate files.

Total: 6 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys: Unknown frontmatter key(s) found; consider removing or moving to metadata (Warning)

Total: 10 / 11 (Passed)

Repository: boisenoise/skills-collections (Reviewed)

