Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when: build agent, AI agent, autonomous agent, tool ...
Install with Tessl CLI
npx tessl i github:sickn33/antigravity-awesome-skills --skill ai-agents-architect64
Quality
51%
Does it follow best practices?
Impact
82%
1.13x
Average score across 3 eval scenarios
Optimize this skill with Tessl
npx tessl skill review --optimize ./skills/ai-agents-architect/SKILL.md

Discovery
67%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description adequately covers the AI agent domain and includes an explicit 'Use when' clause with trigger terms, which helps skill selection. However, it relies on high-level capability categories rather than concrete actions, and the truncated trigger list suggests incomplete coverage. The 'Expert in' and 'Masters' phrasing is fluffy rather than action-oriented.
Suggestions
Replace abstract categories with concrete actions (e.g., 'Designs agent architectures, implements tool-calling loops, builds memory/retrieval systems, orchestrates multi-agent workflows').
Complete the trigger term list with natural variations users would say: 'agentic system', 'LLM agent', 'agent framework', 'ReAct pattern', 'function calling agent'.
Remove fluffy qualifiers like 'Expert in' and 'Masters' in favor of direct action verbs in third person.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (AI agents) and lists some capability areas (tool use, memory systems, planning strategies, multi-agent orchestration), but these are high-level categories rather than concrete actions like 'design agent architectures' or 'implement tool-calling loops'. | 2 / 3 |
| Completeness | Clearly answers both what (designing/building autonomous AI agents with tool use, memory, planning, orchestration) and when (explicit 'Use when:' clause with trigger terms), meeting the rubric requirement for explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes some natural keywords users would say ('build agent', 'AI agent', 'autonomous agent', 'tool'), but the truncation ('...') suggests incomplete coverage and misses common variations like 'agentic workflow', 'agent framework', 'LLM agent', or 'ReAct'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Reasonably specific to AI agents, but 'tool use' could overlap with general coding skills, and 'planning strategies' is vague enough to potentially conflict with project planning or task management skills. | 2 / 3 |
| Total | | 9 / 12 Passed |
Implementation
35%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a reasonable conceptual overview of agent architecture patterns but fails to deliver actionable, executable guidance. The Sharp Edges table is particularly problematic, with truncated solutions, and the code blocks contain bullet-point pseudocode rather than working examples. The content describes rather than instructs.
Suggestions
Replace pseudocode bullet points in Patterns section with actual executable code examples (e.g., a working ReAct loop implementation)
Complete the Sharp Edges table solutions - each 'Solution' cell currently ends with a colon and no actual code or guidance
Add concrete, copy-paste ready examples for at least one complete agent implementation pattern
Remove or condense the Capabilities/Requirements sections which describe what Claude already knows about the skill
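The first suggestion above asks for a working ReAct loop in place of pseudocode bullet points. A minimal, self-contained sketch of what such an example could look like; the model call is stubbed with a scripted response so the code runs without an LLM, and all names (`fake_model`, `react_loop`, the tool registry) are illustrative rather than taken from the skill under review:

```python
def calculator(expression: str) -> str:
    """A toy tool: evaluate an arithmetic expression with no builtins exposed."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

# Scripted model replies standing in for real LLM output: one reasoning step
# that calls a tool, then a final answer.
SCRIPT = [
    {"thought": "I should compute 2 + 3.",
     "action": "calculator", "action_input": "2 + 3"},
    {"thought": "I have the result.", "final_answer": "5"},
]

def fake_model(messages, step):
    # In a real agent this would be a chat-completion call over `messages`.
    return SCRIPT[step]

def react_loop(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for step in range(max_steps):  # hard iteration limit
        reply = fake_model(messages, step)
        if "final_answer" in reply:
            return reply["final_answer"]
        # Think -> Act: dispatch the named tool, capture the observation.
        observation = TOOLS[reply["action"]](reply["action_input"])
        # Feed thought and observation back for the next reasoning step.
        messages.append({"role": "assistant", "content": reply["thought"]})
        messages.append({"role": "tool", "content": observation})
    raise RuntimeError("agent exceeded max_steps without answering")

print(react_loop("What is 2 + 3?"))  # prints 5
```

Replacing `fake_model` with a real client and parsing its output into the same thought/action/final-answer shape would turn this sketch into the kind of copy-paste-ready example the review calls for.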
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient but includes some unnecessary framing (role description, capabilities list) that Claude already knows. The patterns section could be tighter. | 2 / 3 |
| Actionability | The code blocks contain pseudocode/bullet points rather than executable code. The Sharp Edges table references solutions but doesn't show them; entries like 'Always set limits:' are incomplete with no actual code. | 1 / 3 |
| Workflow Clarity | Patterns describe workflows conceptually (ReAct, Plan-and-Execute) but lack concrete implementation details, validation checkpoints, or error recovery steps. The Sharp Edges table mentions issues but solutions are truncated. | 2 / 3 |
| Progressive Disclosure | References related skills at the end, but the main content is somewhat monolithic. No clear navigation to detailed resources for each pattern or anti-pattern. | 2 / 3 |
| Total | | 7 / 12 Passed |
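The incomplete 'Always set limits:' solution flagged in the Actionability row presumably refers to bounding agent execution so a looping agent fails loudly instead of running forever. One hedged sketch of such a guard, with illustrative class and parameter names and thresholds chosen only for the example:

```python
import time

class AgentBudget:
    """Tracks iteration, wall-clock, and tool-call budgets for one agent run."""

    def __init__(self, max_iterations=10, max_seconds=30.0, max_tool_calls=20):
        self.max_iterations = max_iterations
        self.max_seconds = max_seconds
        self.max_tool_calls = max_tool_calls
        self.iterations = 0
        self.tool_calls = 0
        self.started = time.monotonic()

    def check_iteration(self):
        # Call once at the top of each loop iteration.
        self.iterations += 1
        if self.iterations > self.max_iterations:
            raise RuntimeError("iteration budget exceeded")
        if time.monotonic() - self.started > self.max_seconds:
            raise RuntimeError("time budget exceeded")

    def check_tool_call(self):
        # Call before dispatching each tool invocation.
        self.tool_calls += 1
        if self.tool_calls > self.max_tool_calls:
            raise RuntimeError("tool-call budget exceeded")

budget = AgentBudget(max_iterations=3)
for _ in range(3):
    budget.check_iteration()  # passes three times; a fourth call would raise
```

Completing each Sharp Edges 'Solution' cell with a small guard like this, rather than a dangling colon, would address the truncation the review identifies.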
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |