
agentfolio

Skill for discovering and researching autonomous AI agents, tools, and ecosystems using the AgentFolio directory.


Quality: 21% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Advisory (Suggest reviewing before use)

Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/agentfolio/SKILL.md`

Quality

Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is too vague and lacks concrete actions, explicit trigger conditions, and natural user-facing keywords. While the mention of 'AgentFolio' provides some distinctiveness, the description fails to communicate what specific operations the skill performs or when Claude should select it over other skills.

Suggestions

- Add a 'Use when...' clause with explicit trigger terms like 'find an AI agent', 'search AgentFolio', 'compare autonomous agents', 'agent directory', or 'browse AI tools'.
- List specific concrete actions the skill performs, such as 'Search the AgentFolio directory by category, compare agent capabilities, retrieve agent details and documentation, filter by use case or ecosystem'.
- Include natural keyword variations users might say, such as 'AI agent lookup', 'agent catalog', 'find automation tools', or 'agent recommendations'.
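Applied together, these suggestions might yield a SKILL.md description along the lines of the sketch below; the wording and frontmatter layout are illustrative, not the skill's actual content:

```yaml
---
name: agentfolio
description: >
  Search the AgentFolio directory of autonomous AI agents: browse by
  category, compare agent capabilities, retrieve agent details and
  documentation, and filter by use case or ecosystem. Use when the user
  asks to 'find an AI agent', 'search AgentFolio', 'compare autonomous
  agents', or 'browse an agent directory'.
---
```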

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague language like 'discovering and researching' without listing concrete actions. It doesn't specify what operations can be performed (e.g., search by category, compare agents, filter by capability). | 1 / 3 |
| Completeness | The description loosely addresses 'what' (discovering/researching agents) but has no explicit 'Use when...' clause or equivalent trigger guidance, which per the rubric caps completeness at 2, and the 'what' itself is too vague to merit even a 2. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'AI agents', 'tools', 'ecosystems', and 'AgentFolio', but misses common user phrasings like 'find an agent', 'agent directory', 'agent search', 'AI tool comparison', or 'agent marketplace'. | 2 / 3 |
| Distinctiveness Conflict Risk | The mention of 'AgentFolio directory' provides some distinctiveness as a specific tool/platform, but 'AI agents and tools' is broad enough to potentially overlap with other AI-related skills. | 2 / 3 |
| Total | | 6 / 12 |

Passed

Implementation: 20%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a guide to browsing a website directory, which provides very little value that Claude couldn't infer on its own. It's heavily padded with generic advice about evaluating software, using search bars, and building comparison tables—none of which require a skill file. The complete absence of concrete, executable actions (no API, no code, no structured output schemas) makes it minimally actionable.

Suggestions

- Remove all generic advice Claude already knows (how to use a search bar, what a comparison table is, how to evaluate software) and reduce to just the URL, any specific API endpoints or data structures, and the unique evaluation framework.
- Add concrete, executable outputs—e.g., a specific JSON schema for agent comparison results, or a structured markdown template for landscape reports.
- If AgentFolio has an API, provide actual API calls with example requests/responses instead of 'visit the website and search.'
- Collapse the entire skill to ~20 lines: the URL, any non-obvious navigation tips specific to AgentFolio, and a structured output template for agent evaluations.
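A collapsed skill along these lines might look like the sketch below; the directory URL is left as a placeholder and the output fields are illustrative assumptions, since the review does not specify AgentFolio's actual data model:

```markdown
# AgentFolio

Directory: <AgentFolio URL>

## Agent evaluation output template

| Agent | Category | Key capabilities | Maturity | Fit notes |
| --- | --- | --- | --- | --- |
```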

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose for what amounts to 'browse a website and take notes.' It explains obvious concepts like how to use a search bar, what a comparison table is, and how to evaluate software—all things Claude already knows. The 'Capabilities' section largely restates the intro, and the example workflows are padded with generic advice. | 1 / 3 |
| Actionability | There is no concrete, executable guidance anywhere. No API calls, no code, no CLI commands, no structured output formats. The entire skill is 'visit a website, search for things, take notes'—vague direction that Claude could infer without any skill file at all. | 1 / 3 |
| Workflow Clarity | The steps are listed in a clear sequence (open, search, evaluate, synthesize) and the example workflows provide reasonable structure. However, there are no validation checkpoints, no concrete outputs to verify, and no feedback loops—though the task is low-risk enough that this is less critical. | 2 / 3 |
| Progressive Disclosure | The content is organized with clear headers and sections, but it's a monolithic document with no references to external files. Given the length (~80+ lines of content), some sections like example workflows or prompts could be split out. The structure is decent but the content itself is too long for what it conveys. | 2 / 3 |
| Total | | 6 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11 Passed
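The single warning concerns unknown frontmatter keys. Since the validator itself suggests moving such keys under metadata, the fix might look like the sketch below; the moved key names are hypothetical examples, not keys known to be in this skill:

```yaml
---
name: agentfolio
description: Skill for discovering and researching autonomous AI agents...
metadata:
  author: sickn33        # hypothetical: formerly an unknown top-level key
  tags: [agents, tools]  # hypothetical: formerly an unknown top-level key
---
```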

Repository: sickn33/antigravity-awesome-skills (Reviewed)

