Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration.
Does it follow best practices?

Impact: Pending (no eval scenarios have been run)

Passed (no known issues)
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/antigravity-ai-agents-architect/SKILL.md`

Quality
Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear domain (AI agent design) but relies on high-level buzzwords rather than concrete actions. It lacks an explicit 'Use when...' clause, making it difficult for Claude to know precisely when to select this skill. The use of 'Masters' and 'Expert in' reads as self-promotional fluff rather than actionable capability descriptions.
Suggestions
- Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks about building autonomous agents, implementing tool-calling loops, designing agent memory, or orchestrating multi-agent systems.'
- Replace vague category names with concrete actions, e.g., 'Designs agent architectures with tool-calling loops, implements persistent memory systems, builds ReAct-style planning, and orchestrates multi-agent pipelines.'
- Remove self-promotional language like 'Expert in' and 'Masters' and use third-person action verbs instead (e.g., 'Designs and builds...').
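Combining the suggestions above, a rewritten description might look like the following. This is a hypothetical sketch of SKILL.md frontmatter; the `name` value and exact field layout are assumptions, not taken from the reviewed skill.

```yaml
# Hypothetical rewrite of the skill's frontmatter; field names assume a
# typical SKILL.md layout and the skill name is illustrative.
name: ai-agents-architect
description: >
  Designs agent architectures with tool-calling loops, implements persistent
  memory systems, builds ReAct-style planning, and orchestrates multi-agent
  pipelines. Use when the user asks about building autonomous agents,
  implementing tool-calling loops, designing agent memory, or orchestrating
  multi-agent systems.
```

Note how the rewrite leads with third-person action verbs and ends with an explicit 'Use when...' clause listing concrete trigger scenarios.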
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (AI agents) and lists some areas like 'tool use, memory systems, planning strategies, multi-agent orchestration,' but these are high-level categories rather than concrete actions. No specific verbs like 'creates agent scaffolding' or 'implements tool-calling loops' are present. | 2 / 3 |
| Completeness | Describes what the skill covers at a high level but completely lacks any 'Use when...' clause or explicit trigger guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'what' is also vague, this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'AI agents', 'tool use', 'multi-agent orchestration', and 'memory systems' that users might mention. However, it misses common natural variations like 'agentic workflows', 'agent loop', 'ReAct', 'function calling', 'agent framework', or 'autonomous system'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on 'autonomous AI agents' provides some distinctiveness, but terms like 'tool use' and 'planning strategies' are broad enough to overlap with general coding skills, LLM integration skills, or architecture skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 12%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a conceptual overview or textbook chapter on AI agents than an actionable skill file. It is extremely verbose, explaining many concepts Claude already understands deeply, while providing zero executable code or concrete implementation guidance. The content would benefit enormously from being condensed to its novel insights, adding actual code examples, and splitting detailed content into referenced sub-files.
Suggestions
- Replace abstract pattern descriptions with concrete, executable code examples (e.g., show a complete ReAct loop implementation in Python, a tool registry with actual schema definitions, a memory system with working code).
- Cut the 'Why this breaks' explanations in Sharp Edges. Claude already understands these concepts; keep only the 'Recommended fix' sections with concrete code/config examples.
- Remove redundant sections (Expertise, Capabilities, and Prerequisites largely overlap) and the role description preamble; these waste tokens on things Claude can infer.
- Split Sharp Edges into a separate SHARP_EDGES.md file and pattern implementations into a PATTERNS.md file, keeping SKILL.md as a concise overview with clear links.
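To illustrate the first suggestion, a complete ReAct loop can be sketched in a few dozen lines. Everything here is a hypothetical stand-in: `stub_model` plays the role of a real LLM call, and `lookup_population` stands in for a real tool API; neither comes from the reviewed skill.

```python
# Minimal ReAct-style agent loop: the model alternates between emitting an
# Action (a tool call) and, after seeing an Observation, a Final Answer.
# stub_model and lookup_population are illustrative stand-ins, not real APIs.

import json

def lookup_population(city: str) -> str:
    """Toy tool: a canned data source standing in for a real API call."""
    data = {"Paris": "about 2.1 million", "Tokyo": "about 14 million"}
    return data.get(city, "unknown")

TOOLS = {"lookup_population": lookup_population}

def stub_model(transcript: str) -> str:
    """Stand-in for an LLM call: first emits a tool action, then a final answer."""
    if "Observation:" not in transcript:
        return 'Action: lookup_population{"city": "Paris"}'
    return "Final Answer: Paris has about 2.1 million residents."

def react_loop(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = stub_model(transcript)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        # Parse 'Action: tool_name{json args}' and execute the named tool.
        name, _, raw_args = step.removeprefix("Action: ").partition("{")
        result = TOOLS[name](**json.loads("{" + raw_args))
        transcript += f"\n{step}\nObservation: {result}"
    return "Stopped: step budget exhausted."

print(react_loop("What is the population of Paris?"))
```

Even a skeleton like this gives an agent something copy-paste ready, and the `max_steps` budget doubles as the kind of validation checkpoint the Workflow Clarity dimension flags as missing.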
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is very verbose, explaining many concepts Claude already knows well (what ReAct is, what memory types are, why silent failures are bad). The 'Sharp Edges' section is particularly padded with 'Why this breaks' explanations that are obvious to Claude. The role description, expertise list, capabilities list, and prerequisites are largely redundant with each other and with Claude's existing knowledge. | 1 / 3 |
| Actionability | Despite being lengthy, the skill contains zero executable code, no concrete commands, no specific API examples, and no copy-paste ready implementations. Everything is described at an abstract/conceptual level (e.g., 'Register tools with schema and examples' without showing how). The patterns and fixes are all bullet-point descriptions rather than concrete guidance. | 1 / 3 |
| Workflow Clarity | The patterns section does outline sequences (e.g., ReAct loop steps, Plan-and-Execute phases), and the Sharp Edges section provides clear problem-solution structures. However, there are no validation checkpoints, no feedback loops for error recovery, and no concrete verification steps. The Checkpoint Recovery pattern mentions saving state but doesn't show how to validate or verify. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files for detailed content. Everything is inline; the Sharp Edges section alone is massive and could be split out. There are no links to separate reference files, examples files, or detailed pattern implementations. The 'Related Skills' section mentions other skills but doesn't link to them meaningfully. | 1 / 3 |
| Total | | 5 / 12 (Passed) |
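The 'Register tools with schema and examples' instruction criticized above under Actionability could be made concrete along these lines. The registry layout mirrors common function-calling formats but is an illustrative sketch, not tied to any specific provider API; `search_docs` and its corpus are hypothetical.

```python
# Sketch of a tool registry with explicit JSON-schema parameter definitions.
# The decorator attaches a machine-readable spec to each tool function so an
# agent loop can both advertise and dispatch tools from one place.

import json

TOOL_REGISTRY = {}

def register_tool(name, description, parameters):
    """Register a function under a JSON-schema-style tool spec."""
    def wrap(fn):
        TOOL_REGISTRY[name] = {
            "fn": fn,
            "spec": {"name": name, "description": description,
                     "parameters": parameters},
        }
        return fn
    return wrap

@register_tool(
    name="search_docs",
    description="Search internal documentation and return matching titles.",
    parameters={
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
)
def search_docs(query: str) -> list[str]:
    # Hypothetical in-memory corpus standing in for a real search backend.
    corpus = ["Agent memory design", "Tool-calling loops", "Multi-agent pipelines"]
    return [title for title in corpus if query.lower() in title.lower()]

def dispatch(name: str, arguments: str):
    """Look up a registered tool, parse its JSON arguments, and invoke it."""
    entry = TOOL_REGISTRY[name]
    return entry["fn"](**json.loads(arguments))

print(dispatch("search_docs", '{"query": "agent"}'))
```

Keeping the spec and the implementation in one registry entry is what makes the pattern copy-paste ready: the same structure feeds both the model's tool list and the dispatcher.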
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
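The `frontmatter_unknown_keys` warning can typically be resolved as the message itself suggests: move the unrecognized key under a `metadata` block. The `author` key below is a hypothetical example of an unknown key, not one reported by this validation run.

```yaml
# Before: an unrecognized top-level key triggers the warning.
# author: example-maintainer

# After: the same information grouped under metadata, per the warning's advice.
metadata:
  author: example-maintainer
```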