CtrlK
BlogDocsLog inGet started
Tessl Logo

autonomous-agent-patterns

Design patterns for building autonomous coding agents, inspired by [Cline](https://github.com/cline/cline) and [OpenAI Codex](https://github.com/openai/codex).

33

Quality

18%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

SecuritybySnyk

Critical

Do not install without reviewing

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/antigravity-autonomous-agent-patterns/SKILL.md
SKILL.md
Quality
Evals
Security

Security

4 findings — 1 critical severity, 1 high severity, 2 medium severity. Installing this skill is not recommended: please review these findings carefully if you do intend to do so.

Critical

E006: Malicious code pattern detected in skill scripts

What this means

Detected high-risk code patterns in the skill content — including its prompts, tool definitions, and resources — such as data exfiltration, backdoors, remote code execution, credential theft, system compromise, supply chain attacks, and obfuscation techniques.

Why it was flagged

Malicious code pattern detected (high risk: 0.90). The skill contains high-risk intentional patterns: automatic unrestricted file/context ingestion (read_file, add_folder) combined with LLM calls (easy data exfiltration and credential leakage), dynamic code generation + write-and-hot-reload of MCP servers (direct path to remote code execution and persistent backdoors), and permissive command/code execution patterns (sandbox allows running python/node/git and passes through env), which together provide clear, deliberate capabilities for backdoors, RCE, and exfiltration.

Report incorrect finding
High

W007: Insecure credential handling detected in skill instructions

What this means

The skill handles credentials insecurely by requiring the agent to include secret values verbatim in its generated output. This exposes credentials in the agent’s context and conversation history, creating a risk of data exfiltration.

Why it was flagged

Insecure credential handling detected (high risk: 0.90). This skill reads file/URL contents into the model context and exposes tools that require the LLM to emit tool-call arguments (e.g., type_text, send_input, run_command, edit_file) so secrets from files or user input could be included verbatim in LLM outputs, creating exfiltration risk.

Medium

W011: Third-party content exposure detected (indirect prompt injection risk)

What this means

The skill exposes the agent to untrusted, user-generated content from public third-party sources, creating a risk of indirect prompt injection. This includes browsing arbitrary URLs, reading social media posts or forum comments, and analyzing content from unknown websites.

Why it was flagged

Third-party content exposure detected (high risk: 0.90). The skill explicitly fetches and ingests arbitrary public web content (e.g., ContextManager.add_url using requests.get, BrowserTool.open_url/get_page_content, and VisualAgent.find_and_click) and then injects that content into LLM prompts and action logic in the SKILL.md workflow, so untrusted third‑party pages could influence tool use and actions.

Medium

W013: Attempt to modify system services in skill instructions

What this means

The skill prompts the agent to compromise the security or integrity of the user’s machine by modifying system-level services or configurations, such as obtaining elevated privileges, altering startup scripts, or changing system-wide settings.

Why it was flagged

Attempt to modify system services in skill instructions detected (high risk: 1.00). The skill exposes tools that read, write, and edit arbitrary filesystem paths, execute shell commands, and even generate and hot-reload new server code (allowing arbitrary code execution), and several implementations lack strict path/privilege checks despite some sandboxing suggestions, which clearly enables changing system state and potentially compromising the host.

Repository
boisenoise/skills-collections
Audited
Security analysis
Snyk

Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.