Design patterns for building autonomous coding agents, inspired by [Cline](https://github.com/cline/cline) and [OpenAI Codex](https://github.com/openai/codex).
33
18%
Does it follow best practices?
Impact
Pending
No eval scenarios have been run
Critical
Do not install without reviewing
Optimize this skill with Tessl
npx tessl skill review --optimize ./skills/antigravity-autonomous-agent-patterns/SKILL.mdSecurity
4 findings — 1 critical severity, 1 high severity, 2 medium severity. Installing this skill is not recommended: please review these findings carefully if you do intend to do so.
Detected high-risk code patterns in the skill content — including its prompts, tool definitions, and resources — such as data exfiltration, backdoors, remote code execution, credential theft, system compromise, supply chain attacks, and obfuscation techniques.
Malicious code pattern detected (high risk: 0.90). The skill contains high-risk intentional patterns: automatic unrestricted file/context ingestion (read_file, add_folder) combined with LLM calls (easy data exfiltration and credential leakage), dynamic code generation + write-and-hot-reload of MCP servers (direct path to remote code execution and persistent backdoors), and permissive command/code execution patterns (sandbox allows running python/node/git and passes through env), which together provide clear, deliberate capabilities for backdoors, RCE, and exfiltration.
The skill handles credentials insecurely by requiring the agent to include secret values verbatim in its generated output. This exposes credentials in the agent’s context and conversation history, creating a risk of data exfiltration.
Insecure credential handling detected (high risk: 0.90). This skill reads file/URL contents into the model context and exposes tools that require the LLM to emit tool-call arguments (e.g., type_text, send_input, run_command, edit_file) so secrets from files or user input could be included verbatim in LLM outputs, creating exfiltration risk.
The skill exposes the agent to untrusted, user-generated content from public third-party sources, creating a risk of indirect prompt injection. This includes browsing arbitrary URLs, reading social media posts or forum comments, and analyzing content from unknown websites.
Third-party content exposure detected (high risk: 0.90). The skill explicitly fetches and ingests arbitrary public web content (e.g., ContextManager.add_url using requests.get, BrowserTool.open_url/get_page_content, and VisualAgent.find_and_click) and then injects that content into LLM prompts and action logic in the SKILL.md workflow, so untrusted third‑party pages could influence tool use and actions.
The skill prompts the agent to compromise the security or integrity of the user’s machine by modifying system-level services or configurations, such as obtaining elevated privileges, altering startup scripts, or changing system-wide settings.
Attempt to modify system services in skill instructions detected (high risk: 1.00). The skill exposes tools that read, write, and edit arbitrary filesystem paths, execute shell commands, and even generate and hot-reload new server code (allowing arbitrary code execution), and several implementations lack strict path/privilege checks despite some sandboxing suggestions, which clearly enables changing system state and potentially compromising the host.
431bfad
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.