
autonomous-agents

Autonomous agents are AI systems that can independently decompose goals, plan actions, execute tools, and self-correct without constant human guidance. The challenge isn't making them capable - it's making them reliable. Every extra decision multiplies failure probability.
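The compounding-failure claim can be made concrete with a little arithmetic: if each autonomous step succeeds independently with probability p, an n-step task succeeds with probability p^n. A quick sketch (the numbers are illustrative, not from the skill):

```python
# Illustrative: how per-step reliability compounds over multi-step agent runs.
# A 95%-reliable step looks fine in a demo, but over 20 chained decisions
# whole-task success drops below 36%.
def task_success_rate(per_step: float, steps: int) -> float:
    """Probability an n-step task completes if each step succeeds independently."""
    return per_step ** steps

for steps in (1, 5, 10, 20):
    print(f"{steps:>2} steps @ 95% each -> {task_success_rate(0.95, steps):.1%}")
```

This is why the review below keeps pushing toward reliability-first patterns: shaving per-step error rate matters more than adding capability.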

Overall score: 38

Quality: 24% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl:

    npx tessl skill review --optimize ./skills/antigravity-autonomous-agents/SKILL.md

Quality

Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description reads more like an introductory paragraph from a blog post or textbook about autonomous agents than a skill description. It defines a concept and states a design philosophy ('the challenge isn't making them capable - it's making them reliable') but fails to specify what concrete actions the skill enables or when Claude should select it. The absence of actionable triggers and a 'Use when...' clause severely limits its utility for skill selection.

Suggestions:

- Add a clear 'Use when...' clause specifying triggers, e.g., 'Use when the user asks about building agents, designing agentic workflows, implementing tool-use loops, or creating autonomous AI systems.'
- Replace the conceptual definition with concrete actions the skill performs, e.g., 'Guides design and implementation of autonomous agent architectures, including goal decomposition, tool orchestration, error recovery, and reliability patterns.'
- Reframe the philosophical statement ('the challenge isn't making them capable...') into actionable guidance, e.g., 'Emphasizes reliability-first design patterns to minimize compounding failure in multi-step agent workflows.'
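Taken together, these suggestions point toward a description shaped roughly like this (a hypothetical rewrite for illustration, not the skill's actual frontmatter):

```yaml
---
name: autonomous-agents
description: >
  Guides the design and implementation of autonomous agent architectures:
  goal decomposition, tool orchestration, error recovery, and
  reliability-first patterns that limit compounding failure in multi-step
  workflows. Use when the user asks about building agents, designing
  agentic workflows, implementing tool-use loops, or creating autonomous
  AI systems.
---
```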

Dimension scores:

- Specificity (1 / 3): The description discusses what autonomous agents ARE conceptually but does not describe concrete actions the skill performs. Phrases like 'decompose goals, plan actions, execute tools, and self-correct' describe the concept of agents, not what this skill does for the user.
- Completeness (1 / 3): The description explains a concept (what autonomous agents are) but does not clearly state what the skill DOES or WHEN Claude should use it. There is no 'Use when...' clause or equivalent trigger guidance, and the 'what' is conceptual rather than actionable.
- Trigger Term Quality (2 / 3): Contains some relevant keywords like 'autonomous agents', 'AI systems', 'plan actions', 'execute tools', and 'self-correct' that a user interested in building agents might use. However, it lacks common variations like 'agentic workflows', 'tool use', 'agent framework', 'multi-step tasks', etc.
- Distinctiveness / Conflict Risk (2 / 3): The term 'autonomous agents' provides some specificity, but the description is broad enough that it could overlap with skills about tool use, planning, multi-step reasoning, or AI system design. The lack of concrete actions makes it harder to distinguish from related skills.

Total: 6 / 12 (Passed)

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a comprehensive but bloated reference document that tries to cover too much in a single file. It explains many concepts Claude already understands (error probability math, what ReAct is, why demos differ from production), resulting in severe token waste. The code examples provide moderate actionability but many rely on undefined helper functions, and the lack of any bundle structure means everything is crammed into one massive file with no progressive disclosure.

Suggestions:

- Cut the content by 60%+ by removing explanations of concepts Claude knows (what autonomous agents are, why errors compound, what ReAct stands for) and keeping only the novel, project-specific guidance and code patterns.
- Split into bundle files: PATTERNS.md (ReAct, Plan-Execute, Reflection), SHARP_EDGES.md (failure modes), GUARDRAILS.md (safety patterns), and keep SKILL.md as a concise overview with links.
- Replace pseudocode with fully executable examples: define or import all referenced functions (summarize(), planner.plan_next(), verify_restaurant_exists()) or remove them in favor of complete, runnable snippets.
- Move metadata sections (Capabilities, Scope, When to Use, Limitations, Collaboration, Related Skills) into YAML frontmatter where they belong, freeing the body for actionable content only.
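As an illustration of what 'fully executable' means here, a minimal ReAct-style loop with every helper defined inline. The planner and tools are toy stand-ins (a rule-based `plan_next` instead of an LLM call, stub tools instead of the skill's undefined helpers), and a hard step cap serves as the simplest guardrail:

```python
# Minimal ReAct-style loop: plan -> act -> observe, fully self-contained.
# The "planner" is a toy rule, not a model call; the step cap is the
# simplest possible guardrail against runaway loops.
from typing import Callable, Optional, Tuple, List

def search_menu(query: str) -> str:
    """Toy tool: pretend to look up a restaurant menu."""
    return f"menu results for {query}"

def book_table(name: str) -> str:
    """Toy tool: pretend to book a table."""
    return f"booked a table at {name}"

TOOLS: dict = {"search_menu": search_menu, "book_table": book_table}

def plan_next(goal: str, history: List[str]) -> Optional[Tuple[str, str]]:
    """Toy planner: search first, then book, then stop."""
    if not history:
        return ("search_menu", goal)
    if len(history) == 1:
        return ("book_table", goal)
    return None  # goal reached

def run_agent(goal: str, max_steps: int = 5) -> List[str]:
    history: List[str] = []
    for _ in range(max_steps):          # hard cap: cheapest guardrail
        step = plan_next(goal, history)  # plan
        if step is None:
            break
        tool_name, arg = step
        observation = TOOLS[tool_name](arg)  # act
        history.append(observation)          # observe
    return history

print(run_agent("Luigi's"))
```

Swapping the stubs for real model calls and a real tool registry keeps the same loop shape; the point is that a reader can run the snippet before wiring in infrastructure.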

Dimension scores:

- Conciseness (1 / 3): Extremely verbose at 500+ lines. Explains concepts Claude already knows (what autonomous agents are, what ReAct is, how error probability compounds). Includes extensive 'Sharp Edges' sections that read like blog posts rather than actionable instructions. The 'Capabilities', 'Scope', 'When to Use', and 'Limitations' sections are metadata that belong in frontmatter, not body content. Massive amounts of redundancy (cost control appears in multiple sections, guardrails repeated).
- Actionability (2 / 3): Provides code examples that are mostly concrete (LangGraph checkpointing, ReAct implementation, guardrailed agent class), but many are pseudocode-like with undefined functions (summarize(), planner.plan_next(), verify_restaurant_exists()). The code uses triple-quoted strings instead of proper code blocks in several places, and mixes frameworks without clear guidance on which to actually use. Some examples are executable but many require significant undefined infrastructure.
- Workflow Clarity (2 / 3): The ReAct and Plan-Execute patterns describe clear sequences, and the guardrailed autonomy section has good step-by-step validation. However, there's no overarching workflow for 'how to build an agent from scratch'; it's a collection of patterns without clear sequencing between them. The validation checks section lists anti-patterns but doesn't integrate them into a coherent build workflow. Missing explicit feedback loops in several multi-step processes.
- Progressive Disclosure (1 / 3): Monolithic wall of text with no bundle files to reference. Everything is inline; the Sharp Edges section alone is hundreds of lines that could be in a separate file. References to other skills (agent-memory-systems, multi-agent-orchestration) exist but there are no actual linked files. The content would benefit enormously from splitting patterns, sharp edges, and validation checks into separate referenced documents.

Total: 6 / 12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 checks passed

- skill_md_line_count (Warning): SKILL.md is long (1085 lines); consider splitting into references/ and linking
- frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata

Total: 9 / 11 (Passed)

Repository: boisenoise/skills-collections (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.