ai-agents-architect

Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration.


Quality: 20% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/antigravity-ai-agents-architect/SKILL.md

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear domain (AI agent development) but relies on abstract topic labels rather than concrete actions. It lacks any explicit 'when to use' guidance, which is critical for skill selection. The use of 'Expert in' and 'Masters' reads as self-promotional fluff rather than actionable capability descriptions.

Suggestions

Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks about building autonomous agents, implementing tool-calling loops, designing agent memory, or orchestrating multi-agent systems.'

Replace vague topic labels with concrete actions, e.g., 'Designs agent architectures, implements tool-calling loops, builds memory and state management systems, and orchestrates multi-agent workflows.'

Rewrite in third person active voice describing what the skill does rather than claiming expertise, e.g., 'Designs and implements autonomous AI agent systems...' instead of 'Expert in designing...'
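Taken together, a revised description incorporating these suggestions might look like the following. This is an illustrative sketch only; the frontmatter field names follow the common SKILL.md convention and are not taken from the skill itself:

```yaml
---
name: ai-agents-architect
description: >
  Designs and implements autonomous AI agent systems: agent architectures,
  tool-calling loops, memory and state management, and multi-agent
  orchestration. Use when the user asks about building autonomous agents,
  implementing tool-calling or function-calling loops, designing agent
  memory, or orchestrating multi-agent workflows.
---
```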

Dimension scores

Specificity: 2 / 3
Names the domain (AI agents) and lists some areas like 'tool use, memory systems, planning strategies, and multi-agent orchestration,' but these are high-level topic areas rather than concrete actions. No specific verbs describe what the skill actually does (e.g., 'generates agent architectures,' 'implements tool-calling loops').

Completeness: 1 / 3
Describes a vague 'what' (designing and building AI agents) but completely lacks any 'when' clause or explicit trigger guidance. There is no 'Use when...' or equivalent, which per the rubric should cap completeness at 2, and the 'what' itself is also weak, so this scores a 1.

Trigger Term Quality: 2 / 3
Includes some relevant keywords like 'AI agents,' 'tool use,' 'memory systems,' 'multi-agent orchestration,' and 'planning strategies,' which users might mention. However, it misses common variations like 'agentic workflows,' 'ReAct,' 'function calling,' 'agent loop,' 'LLM agents,' or 'autonomous systems.'

Distinctiveness / Conflict Risk: 2 / 3
The focus on 'autonomous AI agents' provides some distinctiveness, but terms like 'tool use' and 'planning strategies' are broad enough to overlap with general coding skills, LLM integration skills, or architecture design skills.

Total: 7 / 12

Passed

Implementation: 7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a conceptual overview or knowledge base article about AI agent design rather than an actionable skill for Claude. It extensively explains concepts Claude already understands (agent loops, memory types, tool selection) without providing any executable code, specific implementations, or concrete workflows. The Sharp Edges section, while well-structured, is verbose and describes obvious failure modes without actionable remediation code.

Suggestions

Replace abstract pattern descriptions with concrete, executable code examples (e.g., a minimal ReAct loop implementation in Python, a tool registry class, a memory system with actual RAG retrieval code).

Remove explanations of concepts Claude already knows (what ReAct is, why silent failures are bad, what memory types exist) and focus on project-specific conventions, preferred libraries, and concrete implementation patterns.

Add concrete workflow sequences with validation checkpoints, e.g., 'Step 1: Define tools with this schema → Step 2: Validate tool descriptions with this checklist → Step 3: Test with this prompt template'.

Reduce the Sharp Edges section to a concise table or checklist format (pattern | symptom | fix) rather than verbose multi-paragraph explanations for each issue.
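As an illustration of the kind of concrete, executable example the review is asking for, here is a minimal ReAct-style loop in Python. The `fake_model` policy is a stand-in for a real LLM call and is purely hypothetical; a real implementation would call a model API and parse its output into action or final-answer steps:

```python
# Tool registry: name -> callable. A real skill would document schemas too.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def fake_model(history):
    """Stub policy standing in for an LLM: emits one tool call, then finishes."""
    if not any(step["type"] == "observation" for step in history):
        return {"type": "action", "tool": "add", "args": {"a": 2, "b": 3}}
    obs = next(s for s in history if s["type"] == "observation")
    return {"type": "final", "answer": f"The result is {obs['value']}"}

def react_loop(task, model, tools, max_steps=5):
    """Minimal ReAct loop: the model acts, the runtime executes the tool and
    feeds the observation back, until the model emits a final answer."""
    history = [{"type": "task", "value": task}]
    for _ in range(max_steps):
        step = model(history)
        if step["type"] == "final":
            return step["answer"]
        # Execute the requested tool and append the observation to history.
        result = tools[step["tool"]](**step["args"])
        history.append({"type": "observation", "value": result})
    raise RuntimeError("agent exceeded max_steps without a final answer")

print(react_loop("add 2 and 3", fake_model, TOOLS))  # prints "The result is 5"
```

Even a toy loop like this gives the agent something to copy and adapt, which is what the Actionability dimension below penalizes the skill for lacking.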

Dimension scores

Conciseness: 1 / 3
The skill is extremely verbose, explaining concepts Claude already knows well (ReAct loops, memory architectures, tool calling patterns). It reads like a textbook chapter rather than actionable instructions. Sections like 'Why this breaks' explain obvious consequences, and the 'Expertise' and 'Capabilities' sections largely duplicate each other. The entire document could be reduced to a fraction of its size.

Actionability: 1 / 3
Despite being lengthy, the skill contains zero executable code, no concrete commands, no specific API calls, and no copy-paste ready examples. Everything is described abstractly ('Register tools with schema and examples', 'Use RAG for retrieval') without showing how. The 'Recommended fix' sections are bullet-point advice rather than concrete implementations.

Workflow Clarity: 1 / 3
While patterns like ReAct and Plan-and-Execute describe conceptual steps, none have concrete sequenced workflows with validation checkpoints. There are no feedback loops, no verification steps, and no clear 'do this, then check that' sequences. The patterns are descriptions of concepts rather than executable workflows.

Progressive Disclosure: 2 / 3
The content is organized into logical sections (Patterns, Sharp Edges, etc.) with reasonable structure and headers. However, it's a monolithic document with no references to supporting files, and the Sharp Edges section is very long and could be split out. The 'Related Skills' section references other skills but provides no navigation to detailed content.

Total: 5 / 12

Passed
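For the 'register tools with schema and examples' gap flagged under Actionability, a schema-carrying tool registry could be sketched like this. The names and structure are illustrative, not taken from the skill; the schema shape loosely follows the JSON-Schema style that function-calling APIs typically expect:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Tool:
    """A tool bundled with the description and schema an LLM needs to call it."""
    name: str
    description: str
    fn: Callable[..., Any]
    parameters: dict = field(default_factory=dict)  # JSON-Schema-style params

class ToolRegistry:
    def __init__(self):
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        if tool.name in self._tools:
            raise ValueError(f"duplicate tool name: {tool.name}")
        self._tools[tool.name] = tool

    def call(self, name: str, **kwargs) -> Any:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")  # fail loudly, not silently
        return self._tools[name].fn(**kwargs)

    def schemas(self) -> list[dict]:
        """Schemas in roughly the shape a function-calling API expects."""
        return [
            {"name": t.name, "description": t.description, "parameters": t.parameters}
            for t in self._tools.values()
        ]

registry = ToolRegistry()
registry.register(Tool(
    name="add",
    description="Add two numbers.",
    fn=lambda a, b: a + b,
    parameters={"type": "object",
                "properties": {"a": {"type": "number"},
                               "b": {"type": "number"}}},
))
print(registry.call("add", a=2, b=3))  # prints 5
```

Raising on unknown or duplicate tool names is deliberate: it surfaces the silent-failure mode the skill's Sharp Edges section describes in prose but never remediates in code.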

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

Criterion: frontmatter_unknown_keys
Description: Unknown frontmatter key(s) found; consider removing or moving to metadata
Result: Warning

Total: 10 / 11

Passed

Repository: boisenoise/skills-collections (Reviewed)

