
ai-ml

AI and machine learning workflow covering LLM application development, RAG implementation, agent architecture, ML pipelines, and AI-powered features.


Quality: 22%

Does it follow best practices?

Impact: Pending

No eval scenarios have been run

Security (by Snyk): Passed

No known issues

Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/antigravity-ai-ml/SKILL.md

Quality

Discovery: 25%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description covers an extremely broad domain without specifying concrete actions or when the skill should be selected. It reads more like a category label than a skill description, listing high-level topics (RAG, agents, ML pipelines) without explaining what specific tasks the skill performs. The lack of a 'Use when...' clause and the overly wide scope make it difficult for Claude to reliably select this skill over others.

Suggestions

Add an explicit 'Use when...' clause with natural trigger phrases, e.g., 'Use when the user asks about building chatbots, implementing RAG pipelines, creating AI agents, or integrating LLM APIs.'

Replace broad category labels with specific concrete actions using active verbs, e.g., 'Scaffolds LLM-powered applications, configures vector databases for RAG, designs multi-step agent workflows, and builds ML training pipelines.'

Narrow the scope or clearly delineate boundaries to reduce conflict risk — consider whether this should be split into separate skills (e.g., RAG implementation vs. ML pipeline development) or add explicit exclusions.
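As a sketch of the first two suggestions combined, a revised description in the skill's frontmatter could look like the following. This assumes the common name/description SKILL.md frontmatter convention; the field names and wording are illustrative, not the skill's actual content.

```yaml
# Hypothetical revised frontmatter, assuming name/description fields.
name: antigravity-ai-ml
description: >
  Scaffolds LLM-powered applications, configures vector databases for RAG,
  designs multi-step agent workflows, and builds ML training pipelines.
  Use when the user asks about building chatbots, implementing RAG pipelines,
  creating AI agents, or integrating LLM APIs.
```

Concrete verbs plus an explicit trigger clause address both the Specificity and Completeness gaps scored below.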

Dimension / Reasoning / Score

Specificity

Names the domain (AI/ML) and lists several areas like 'LLM application development, RAG implementation, agent architecture, ML pipelines,' but these are broad categories rather than concrete actions. No specific verbs describing what the skill actually does (e.g., 'builds', 'configures', 'deploys').

2 / 3

Completeness

Describes 'what' at a high level (AI/ML workflow covering several areas) but completely lacks any 'when' clause or explicit trigger guidance. The absence of a 'Use when...' clause caps this at 2 per the rubric, and the 'what' is also quite vague, bringing it down to 1.

1 / 3

Trigger Term Quality

Includes some relevant keywords users might say like 'RAG', 'LLM', 'agent', 'ML pipelines', and 'AI-powered features', but misses many common variations and natural phrases users would use such as 'chatbot', 'embeddings', 'vector database', 'fine-tuning', 'prompt engineering', 'model training', etc.

2 / 3

Distinctiveness / Conflict Risk

Extremely broad scope covering all of AI/ML development — this would likely conflict with any other skill related to coding, data processing, API integration, or application development. The description is too sweeping to carve out a clear niche.

1 / 3

Total: 6 / 12 (Passed)

Implementation: 20%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a table of contents for other skills, padded with generic action items and checklists that provide no concrete guidance. It lacks any executable code, specific commands, or detailed instructions—everything is delegated to sub-skills without meaningful orchestration logic. The content would be far more effective as a concise routing guide with clear decision criteria for which sub-skills to invoke and when.

Suggestions

Replace generic action items ('Choose appropriate models', 'Set up vector database') with concrete decision criteria or executable examples that add value beyond what Claude already knows.

Add validation checkpoints between phases—e.g., 'Before moving to Phase 3, verify LLM integration works by running: curl -X POST ...' with specific test criteria.

Condense the entire skill to a decision tree or routing table: given user intent X, invoke skill Y with context Z. The current format repeats the same structure 7 times without adding information.

Add at least one concrete, end-to-end example showing how the phases connect (e.g., a minimal RAG application going from design through deployment with specific code snippets).
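The routing-table suggestion can be sketched in a few lines. This is a minimal illustration under assumptions: the sub-skill names and context strings are hypothetical, and a real version would match intents more robustly than keyword substrings.

```python
# Minimal sketch of a routing table: map recognizable user intents to a
# sub-skill and the context it needs. Skill names are hypothetical.
ROUTES = [
    # (keywords in the user request, sub-skill to invoke, context to pass)
    ({"rag", "retrieval", "vector database", "embeddings"},
     "rag-implementation", "document corpus and retrieval requirements"),
    ({"agent", "tool use", "multi-step", "orchestration"},
     "agent-architecture", "available tools and task decomposition"),
    ({"training", "fine-tuning", "ml pipeline", "dataset"},
     "ml-pipelines", "data sources and training objectives"),
    ({"chatbot", "llm api", "prompt", "completion"},
     "llm-integration", "target provider and latency budget"),
]

def route(request: str):
    """Return (skill, context) for the first route whose keywords match.

    Note: substring matching is deliberately naive; a production router
    would tokenize to avoid false hits (e.g. "rag" inside "storage").
    """
    text = request.lower()
    for keywords, skill, context in ROUTES:
        if any(kw in text for kw in keywords):
            return skill, context
    return None  # fall through to general guidance
```

Given user intent X, this returns skill Y with context Z, which is the decision logic the current phase checklists leave implicit.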
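And as a sketch of what a minimal end-to-end RAG example could contain: the snippet below is self-contained and illustrative only, with a toy bag-of-words "embedding" standing in for a real embedding model, and a prompt that is assembled rather than sent to an LLM.

```python
# Toy retrieve-then-generate flow: embed, rank by cosine similarity,
# and build a grounded prompt from the top-ranked context.
import math
from collections import Counter

DOCS = [
    "RAG combines retrieval with generation to ground LLM answers.",
    "Vector databases store embeddings for similarity search.",
    "Agents decompose tasks into multi-step tool-using workflows.",
]

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble the grounded prompt a real pipeline would send to an LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Even a toy example like this connects the design, RAG, and integration phases in one traceable path, which the current checklist structure never does.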

Dimension / Reasoning / Score

Conciseness

Extremely verbose and repetitive. The skill is essentially a long list of sub-skill references with generic action items (e.g., 'Define AI use cases', 'Choose appropriate models') that Claude already knows. The 'Copy-Paste Prompts' sections add minimal value and the checklists are generic enough to be obvious.

1 / 3

Actionability

No concrete code, commands, or executable guidance anywhere. Every 'action' is a vague directive like 'Set up vector database' or 'Implement chunking strategy' with no specifics on how. The copy-paste prompts are just 'Use @skill-name to do X' which is not actionable guidance.

1 / 3

Workflow Clarity

The phases are sequenced logically (design → integration → RAG → agents → ML → observability → security), and checklists provide some structure. However, there are no validation checkpoints, no feedback loops, no error recovery steps, and no criteria for when to move between phases.

2 / 3

Progressive Disclosure

The skill references many sub-skills which is good progressive disclosure in principle, but the references are just skill names without clear file paths or descriptions of what each contains. The main file itself is a wall of repetitive content that could be significantly condensed with better organization.

2 / 3

Total: 6 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
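A typical fix for this warning, as the message itself suggests, is moving unrecognized top-level keys under a metadata block. The example keys below (author, tags) are hypothetical; check the actual frontmatter against the spec.

```yaml
# Before (hypothetical unknown keys at top level):
#   name: antigravity-ai-ml
#   author: boisenoise
#   tags: [ai, ml]
# After (unknown keys moved under metadata):
name: antigravity-ai-ml
metadata:
  author: boisenoise
  tags: [ai, ml]
```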

Total: 10 / 11 (Passed)

Repository: boisenoise/skills-collections (Reviewed)

