Design patterns for building autonomous coding agents, inspired by [Cline](https://github.com/cline/cline) and [OpenAI Codex](https://github.com/openai/codex).
**Score:** 33 (18%). Does it follow best practices?

- **Impact:** Pending (no eval scenarios have been run)
- **Risk:** Risky (do not use without reviewing)

To optimize this skill with Tessl, run:

```
npx tessl skill review --optimize ./skills/antigravity-autonomous-agent-patterns/SKILL.md
```

## Quality
### Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is too vague and abstract, reading more like a topic label than an actionable skill description. It lacks concrete actions the skill performs and entirely omits trigger guidance ('Use when...'). The references to Cline and OpenAI Codex add some distinctiveness but don't compensate for the missing specificity and completeness.
#### Suggestions
- Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user asks about building AI coding agents, agentic loops, tool-use architectures, or references Cline/Codex-style patterns.'
- List specific concrete actions the skill performs, e.g., 'Provides patterns for tool-use loops, file editing strategies, error recovery, sandboxed execution, and agent orchestration.'
- Include natural keyword variations users might say, such as 'AI agent', 'agentic workflow', 'code agent architecture', 'agent loop', 'tool calling patterns'.
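Taken together, these suggestions could produce a description along the following lines. This is a sketch only: the exact frontmatter schema and field names are assumptions, not taken from the Tessl spec.

```yaml
---
name: antigravity-autonomous-agent-patterns
description: >
  Provides patterns for tool-use loops, file editing strategies, error
  recovery, sandboxed execution, and agent orchestration, inspired by
  Cline and OpenAI Codex. Use when the user asks about building AI coding
  agents, agentic workflows, agent loops, or tool-calling architectures.
---
```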
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description mentions 'design patterns for building autonomous coding agents' but does not list any concrete actions. It references inspirations (Cline, OpenAI Codex) but doesn't specify what the skill actually does—no verbs like 'generates', 'scaffolds', 'implements' are present. | 1 / 3 |
| Completeness | The description partially addresses 'what' (design patterns for coding agents) but is vague, and there is no 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when' clause caps completeness at 2, and the weak 'what' brings it to 1. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords like 'autonomous coding agents', 'design patterns', 'Cline', and 'OpenAI Codex' that a user might mention. However, it misses common variations like 'AI agent', 'code agent', 'agentic workflow', 'tool use loop', or 'agent architecture'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'autonomous coding agents' and specific projects (Cline, OpenAI Codex) provides some distinctiveness, but 'design patterns' is broad enough to overlap with general software architecture or coding pattern skills. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
### Implementation: 14%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a comprehensive reference document or tutorial than a concise, actionable skill file. It explains patterns Claude already knows (agent loops, tool schemas, permission systems) at great length without providing a clear workflow for when and how to apply them. The massive inline code blocks would be better served by progressive disclosure into sub-files, with the main skill providing a lean overview and decision framework.
#### Suggestions
- Reduce the main SKILL.md to a concise overview (~50-80 lines) with a decision tree for which pattern to use, and move detailed implementations into separate referenced files (e.g., TOOLS.md, PERMISSIONS.md, BROWSER.md).
- Remove code that teaches standard patterns Claude already knows (basic agent loops, enum definitions, file I/O) and focus only on project-specific conventions, gotchas, or non-obvious design decisions.
- Add a clear sequential workflow: 'To build an agent: 1) Define tools → 2) Set up permissions → 3) Implement the loop → 4) Validate with test task → 5) Add safety checks' with explicit validation checkpoints.
- Ensure code examples are complete and executable — add all necessary imports, define the ToolResult dataclass, and provide a minimal working example that can actually be run end-to-end.
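As a sketch of the last two suggestions, a self-contained, runnable fragment might look like the following. The `ToolResult` dataclass, `Permission` enum, and `run_tool` dispatcher are illustrative stand-ins, not the skill's actual definitions.

```python
from dataclasses import dataclass
from enum import Enum


class Permission(Enum):
    """Illustrative permission levels; the skill's actual enum may differ."""
    ALLOW = "allow"
    ASK = "ask"
    DENY = "deny"


@dataclass
class ToolResult:
    """Minimal stand-in for the ToolResult the review flags as undefined."""
    ok: bool
    output: str = ""
    error: str = ""


def run_tool(name: str, args: dict, policy: dict) -> ToolResult:
    """Permission-gated dispatch: workflow steps 1 (tools) and 2 (permissions)."""
    perm = policy.get(name, Permission.DENY)  # deny-by-default policy
    if perm is Permission.DENY:
        return ToolResult(ok=False, error=f"tool '{name}' denied by policy")
    if perm is Permission.ASK:
        # A real agent would prompt the user here; this sketch auto-approves.
        pass
    if name == "echo":  # one toy tool so the example runs end-to-end
        return ToolResult(ok=True, output=str(args.get("text", "")))
    return ToolResult(ok=False, error=f"unknown tool '{name}'")


result = run_tool("echo", {"text": "hi"}, {"echo": Permission.ALLOW})
```

The deny-by-default lookup means a tool missing from the policy map is refused rather than silently executed, which is the safety property the review's 'add safety checks' step is pointing at.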
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~500+ lines. Includes extensive boilerplate code for concepts Claude already understands (agent loops, tool schemas, permission enums, browser automation, context management). Much of this is standard software engineering that Claude can generate on demand without being taught. The multi-model architecture section even hardcodes specific model names that will become outdated. | 1 / 3 |
| Actionability | The code examples are fairly concrete and near-executable, but they are incomplete — missing imports (json, os, shlex, subprocess, datetime, Any, Enum, base64, requests, playwright), missing ToolResult class definition, and missing helper functions like html_to_markdown and _extract_name. They serve more as reference patterns than copy-paste-ready implementations. | 2 / 3 |
| Workflow Clarity | Despite being about building autonomous agents (a complex multi-step process), there is no clear workflow for actually building one. The content presents isolated patterns/classes without sequencing them into a coherent build process. There are no validation checkpoints, no 'build this first, then add this' guidance, and no error recovery workflows for the agent construction process itself. | 1 / 3 |
| Progressive Disclosure | This is a monolithic wall of code — all patterns are inlined in a single massive file with no references to separate detailed documents. The content would benefit enormously from splitting into separate files (e.g., TOOLS.md, PERMISSIONS.md, BROWSER.md) with a concise overview in the main skill. The external links at the bottom are just general resources, not structured sub-documents. | 1 / 3 |
| Total | | 5 / 12 (Passed) |
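A progressively disclosed layout of the kind the review suggests might reduce SKILL.md to an overview that links out to detail files. The file names below are the review's own examples, not an existing structure:

```markdown
# Autonomous Agent Patterns

Use when building AI coding agents, agentic loops, or tool-use architectures.

## Decision framework
- Need tool definitions and schemas? See [TOOLS.md](TOOLS.md)
- Gating risky actions? See [PERMISSIONS.md](PERMISSIONS.md)
- Driving a browser? See [BROWSER.md](BROWSER.md)

## Build order
1. Define tools
2. Set up permissions
3. Implement the loop
4. Validate with a test task
5. Add safety checks
```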
### Validation: 81% (9 / 11 checks passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
#### Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (765 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
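One way to clear the frontmatter_unknown_keys warning, assuming the validator accepts a `metadata` block as the warning text implies, is to nest non-standard keys under it. The `inspired-by` key below is hypothetical, standing in for whatever unknown key the validator flagged:

```yaml
---
name: antigravity-autonomous-agent-patterns
description: Design patterns for building autonomous coding agents.
metadata:
  inspired-by: cline, openai-codex  # hypothetical key, moved from top level
---
```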
f1697b6