
agent-orchestration-improve-agent

Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration.

Overall score: 62

Quality: 47% (Does it follow best practices?)

Impact: 82% (1.54x), average score across 3 eval scenarios

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./docs/v19.7/configuration/agent/skills_external/antigravity-awesome-skills-main/skills/agent-orchestration-improve-agent/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description provides a reasonable high-level overview of agent improvement capabilities but suffers from abstract language and lacks explicit trigger guidance. It would benefit from more concrete actions and a clear 'Use when...' clause to help Claude distinguish this skill from general prompt engineering or performance analysis skills.

Suggestions

Add a 'Use when...' clause with trigger terms like 'improve agent', 'agent performance', 'agent debugging', 'optimize agent', or 'agent iteration'.

Replace abstract terms with concrete actions such as 'analyze agent logs', 'refine system prompts', 'identify failure patterns', or 'A/B test agent responses'.

Include natural user phrases like 'my agent isn't working well', 'make my agent better', or 'agent quality issues' to improve trigger term coverage.
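Taken together, the suggestions above could yield a frontmatter description along these lines (a sketch only: the YAML field names follow the common SKILL.md convention, and the wording is illustrative rather than the maintainer's):

```markdown
---
name: agent-orchestration-improve-agent
description: >
  Systematically improve existing agents: analyze agent logs, identify
  failure patterns, refine system prompts, and A/B test agent responses.
  Use when the user asks to "improve my agent", says "my agent isn't
  working well", or mentions agent performance, agent debugging,
  optimization, or iteration.
---
```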

Dimension scores

Specificity (2 / 3): Names the domain (agent improvement) and some actions ('performance analysis, prompt engineering, continuous iteration'), but these are somewhat abstract rather than fully concrete actions like 'analyze logs' or 'rewrite prompts'.

Completeness (1 / 3): Describes what the skill does at a high level but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select it.

Trigger Term Quality (2 / 3): Includes some relevant terms like 'agents', 'prompt engineering', and 'performance analysis', but is missing common user phrases like 'improve my agent', 'agent not working', 'optimize prompts', or 'debug agent behavior'.

Distinctiveness / Conflict Risk (2 / 3): 'Prompt engineering' could overlap with general prompt-writing skills, and 'performance analysis' is generic. However, 'agents' and 'iteration' provide some distinctiveness for agent-specific work.

Total: 7 / 12 (Passed)

Implementation

62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a comprehensive framework for agent optimization with strong workflow clarity and explicit validation/rollback procedures. However, it suffers from verbosity, explaining concepts Claude already knows, and uses pseudocode placeholders rather than executable examples. The monolithic structure would benefit from progressive disclosure to external reference files.

Suggestions

Replace pseudocode command blocks (e.g., 'Use: context-manager') with actual executable commands, or clarify that these are conceptual placeholders.

Remove explanatory content about well-known concepts (A/B testing basics, semantic versioning) to improve token efficiency.

Split detailed content (evaluation rubrics, test suite templates, rollback procedures) into separate reference files with clear links from the main skill.
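As a sketch of the progressive-disclosure suggestion, the detailed sections could move into reference files that the main SKILL.md links to (the file names below are hypothetical):

```markdown
## Evaluate changes

Score each candidate prompt against the rubric in
[references/evaluation-rubric.md](references/evaluation-rubric.md).

## Regression tests

Build the test suite from the templates in
[references/test-suite-templates.md](references/test-suite-templates.md).

## Rollback

If any rollback trigger fires, follow
[references/rollback-procedures.md](references/rollback-procedures.md).
```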

Dimension scores

Conciseness (2 / 3): The skill contains useful content but is verbose in places, explaining concepts Claude likely knows (e.g., what A/B testing is, basic versioning semantics). Sections like 'Continuous Improvement Cycle' and 'Post-Deployment Review' add padding without actionable specifics.

Actionability (2 / 3): Provides structured guidance and some command examples, but most code blocks are pseudocode or placeholder syntax (e.g., 'Use: context-manager', 'Use: prompt-engineer') rather than executable commands. The metrics templates are useful but not copy-paste ready.

Workflow Clarity (3 / 3): Clear four-phase workflow with explicit sequencing, validation checkpoints (A/B testing, staged rollout), and rollback procedures with specific triggers. The feedback loop for rollback is well defined with concrete thresholds.

Progressive Disclosure (2 / 3): Content is well organized with clear headers and phases, but it is a monolithic document with no references to external files for detailed content (e.g., test suite templates, evaluation rubrics). The 200+ lines would benefit from splitting advanced topics into separate files.

Total: 9 / 12 (Passed)

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: duclm1x1/Dive-Ai (Reviewed)
