Design patterns for building autonomous coding agents. Covers tool integration, permission systems, browser automation, and human-in-the-loop workflows. Use when building AI agents, designing tool ...
Install with Tessl CLI
npx tessl i github:boisenoise/skills-collections --skill autonomous-agent-patterns71
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
Score: 67%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description establishes a clear domain focus on autonomous coding agents and includes an explicit 'Use when' clause, which is good practice. However, it lists topic areas rather than concrete actions, and the truncation prevents full evaluation. The trigger terms are relevant but could include more natural variations users might employ.
Suggestions
Replace topic areas with concrete actions (e.g., 'Guides implementation of tool calling patterns, designs permission boundaries, structures agent loops' instead of 'Covers tool integration, permission systems')
Add more natural trigger term variations like 'agentic systems', 'LLM agents', 'autonomous workflows', 'agent architecture'
Ensure the full description is not truncated to capture all 'Use when' scenarios
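Combining these suggestions, an improved frontmatter might read roughly as follows. This is a sketch only: the field values are illustrative, and the truncated 'Use when' scenarios from the original description cannot be reconstructed here.

```yaml
# Hypothetical frontmatter rewrite; all wording below is illustrative.
name: autonomous-agent-patterns
description: >
  Guides implementation of tool-calling patterns, designs permission
  boundaries, and structures agent loops for autonomous coding agents.
  Use when building AI agents, agentic systems, LLM agents, or autonomous
  workflows, or when designing agent architecture, tool integrations,
  browser automation, or human-in-the-loop approval flows.
```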
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (autonomous coding agents) and lists several areas covered (tool integration, permission systems, browser automation, human-in-the-loop workflows), but these are topic areas rather than concrete actions the skill performs. | 2 / 3 |
| Completeness | Clearly answers 'what' (design patterns for autonomous coding agents covering specific areas) and includes an explicit 'Use when' clause with trigger scenarios (building AI agents, designing tool...), though the description appears truncated. | 3 / 3 |
| Trigger Term Quality | Includes relevant terms like 'AI agents', 'tool integration', and 'browser automation', but the description is truncated ('designing tool ...') and misses common variations users might say, like 'agentic systems', 'LLM agents', or 'autonomous systems'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Focuses on autonomous coding agents, which is somewhat specific, but 'tool integration' and 'design patterns' are broad terms that could overlap with general software architecture or API integration skills. | 2 / 3 |
| Total | Passed | 9 / 12 |
Implementation
Score: 64%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a comprehensive reference for autonomous agent patterns with excellent actionability through complete, executable code examples. The main weaknesses are verbosity (could be tightened by 20-30%), missing validation/recovery workflows for agent operations, and a monolithic structure that would benefit from splitting into focused sub-documents.
Suggestions
Add explicit error recovery workflows showing what happens when tool execution fails mid-task (e.g., 'If step 3 fails: rollback changes, log error, retry with modified approach')
Split detailed implementations (browser automation, MCP integration, context management) into separate reference files, keeping SKILL.md as a concise overview with links
Remove the 'When to Use This Skill' section - this duplicates the frontmatter description and wastes tokens
Add a validation checkpoint pattern showing how to verify agent state between critical operations
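To make the first and last suggestions concrete, here is a minimal sketch of a checkpoint wrapper with rollback and retry. All names (`Step`, `run_with_recovery`) are hypothetical and not part of the skill; the sketch only illustrates the 'if a step fails: rollback, log, retry' shape being recommended.

```python
# Hypothetical sketch of a recovery workflow for a multi-step agent task.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    name: str
    run: Callable[[], bool]                     # returns True on success
    rollback: Optional[Callable[[], None]] = None

def run_with_recovery(steps: list[Step], max_retries: int = 1) -> bool:
    completed: list[Step] = []
    for step in steps:
        attempts = 0
        while not step.run():                   # validation checkpoint
            attempts += 1
            if attempts > max_retries:
                # Unrecoverable: undo completed work in reverse order.
                for done in reversed(completed):
                    if done.rollback:
                        done.rollback()
                print(f"aborted at step '{step.name}' after rollback")
                return False
        completed.append(step)
    return True
```

A real agent would swap the boolean `run` callables for tool executions and make `rollback` restore file or environment state, but the control flow stays the same.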
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is comprehensive but includes some unnecessary verbosity, such as the 'When to Use This Skill' section that restates the description, and some code comments that explain obvious concepts. The ASCII diagram adds visual clarity but consumes tokens for something Claude can infer. | 2 / 3 |
| Actionability | Provides fully executable Python code examples throughout, including complete class implementations for AgentLoop, Tool schemas, EditFileTool, ApprovalManager, SandboxedExecution, and BrowserTool. Code is copy-paste ready with proper imports and error handling. | 3 / 3 |
| Workflow Clarity | The agent loop pattern is clearly sequenced (Think → Decide → Act → Observe), but the skill lacks explicit validation checkpoints for risky operations. The permission system is described, but there is no clear workflow for what happens when validation fails or how to recover from errors in multi-step agent tasks. | 2 / 3 |
| Progressive Disclosure | Content is well organized with clear section headers and a logical progression from core architecture to advanced patterns. However, this is a monolithic 400+ line file that could benefit from splitting detailed implementations (browser automation, MCP integration) into separate reference files. The Resources section provides external links but no internal file references. | 2 / 3 |
| Total | Passed | 9 / 12 |
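For reference, the Think → Decide → Act → Observe sequencing noted under Workflow Clarity can be sketched in a few lines. This is an assumption-laden toy, not the skill's actual AgentLoop implementation: the `think` callable stands in for an LLM call, and `tools` is a plain dict of callables.

```python
# Minimal sketch of the Think -> Decide -> Act -> Observe loop.
def agent_loop(task, think, tools, max_turns=10):
    observation = task
    for _ in range(max_turns):
        decision = think(observation)          # Think + Decide: pick a tool or finish
        if decision["action"] == "finish":
            return decision["answer"]
        tool = tools[decision["action"]]       # Act: execute the chosen tool
        observation = tool(decision["input"])  # Observe: feed the result back
    return None                                # turn budget exhausted
```

The validation checkpoints the review asks for would sit between the Act and Observe steps, where a failed tool result can trigger rollback or a modified retry instead of being fed straight back to the model.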
Validation
Score: 81%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (765 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.