
cloudflare-agents

Build AI agents on Cloudflare Workers with MCP integration, tool use, and LLM providers.

45

Quality

31%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/cloudflare-agents/skills/cloudflare-agents/SKILL.md
SKILL.md
Quality
Evals
Security

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a reasonably specific domain (AI agents on Cloudflare Workers) but relies on feature-category language rather than concrete actions. Its biggest weakness is the complete absence of a 'Use when...' clause, making it unclear when Claude should select this skill over others. Adding explicit trigger conditions and more specific action verbs would significantly improve it.

Suggestions

Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user wants to build AI agents on Cloudflare Workers, mentions MCP server integration, or asks about deploying agentic workflows to the edge.'

Replace abstract feature categories with concrete actions, e.g., 'Scaffold Cloudflare Workers AI agent projects, configure MCP tool handlers, connect LLM providers, and deploy agentic workflows.'

Include common term variations users might say, such as 'model context protocol', 'serverless agents', 'edge AI agents', or 'Workers AI'.

Dimension / Reasoning / Score

Specificity

Names the domain (AI agents on Cloudflare Workers) and lists some capabilities (MCP integration, tool use, LLM providers), but these are more like feature categories than concrete actions. It doesn't specify what actions are performed (e.g., 'scaffold agent projects', 'configure tool handlers', 'connect to LLM APIs').

2 / 3

Completeness

Describes what (build AI agents on Cloudflare Workers with certain integrations) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing 'Use when' caps completeness at 2, and the 'what' is also only moderately detailed, warranting a 1.

1 / 3

Trigger Term Quality

Includes relevant keywords like 'AI agents', 'Cloudflare Workers', 'MCP', 'tool use', and 'LLM providers' that users might mention. However, it misses common variations like 'agentic workflows', 'model context protocol', 'serverless agents', 'edge AI', or specific LLM provider names.

2 / 3

Distinctiveness / Conflict Risk

The combination of 'Cloudflare Workers' and 'AI agents' with 'MCP integration' is fairly specific and narrows the niche. However, it could overlap with general Cloudflare Workers skills or general AI agent building skills without clearer boundaries.

2 / 3

Total: 7 / 12

Passed

Implementation

29%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill functions primarily as a resource index with excellent progressive disclosure and reference organization, but fails as actionable guidance. The code examples are non-functional placeholders that don't use any real Cloudflare Agents SDK APIs, and there is no workflow for actually creating, configuring, or deploying an agent. The content would benefit greatly from real executable examples and a clear getting-started workflow.

Suggestions

Replace the Quick Start and Agent Pattern code blocks with real, executable code using actual Cloudflare Agents SDK imports and APIs (e.g., importing from 'agents' package, extending the Agent class).

Add a step-by-step workflow: create project with wrangler, install dependencies, configure wrangler.toml, write agent code, test locally, deploy — with validation checkpoints.

Remove the duplicate Agent Pattern section or make it meaningfully different from Quick Start (e.g., show a real multi-tool agent with actual LLM provider configuration).
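As a sketch of the first suggestion, the skeleton below shows the general shape such an example could take. Note that `Agent` here is a local stand-in, since the exact exports of the Cloudflare Agents SDK's `agents` package are not quoted in this review; in a real project the base class would come from `import { Agent } from "agents"`, and `callProvider` would hit a real LLM API rather than echoing.

```typescript
// Sketch only: `Agent` is a local stand-in for the base class the
// Cloudflare Agents SDK is expected to export. Verify the real API
// against the SDK docs before copying this into a project.
abstract class Agent<Env> {
  constructor(protected env: Env) {}
}

interface AgentEnv {
  OPENAI_API_KEY: string; // hypothetical binding name
}

class ChatAgent extends Agent<AgentEnv> {
  // Handle one request: parse a prompt, call the provider, reply.
  async onRequest(body: { prompt: string }): Promise<{ reply: string }> {
    const reply = await this.callProvider(body.prompt);
    return { reply };
  }

  // Placeholder for a real provider call (OpenAI, Workers AI, etc.),
  // replacing the review's undefined `processWithLLM`.
  private async callProvider(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  }
}
```

Even a stub like this is more actionable than the current skeleton, because it shows where the imports, environment bindings, and provider call belong.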

Dimension / Reasoning / Score

Conciseness

The content is reasonably lean but includes some unnecessary elements: the Quick Start code example is not real executable code (processWithLLM is undefined and fictional), and the Agent Pattern section repeats a similar skeleton without adding much value. The resource listing is efficient, though.

2 / 3

Actionability

The code examples are not executable — `processWithLLM` is undefined, the agent pattern is a skeleton with a comment placeholder, and there are no real imports or framework-specific APIs shown. A developer cannot copy-paste and run any of this code. It describes rather than instructs.

1 / 3

Workflow Clarity

There is no workflow or sequenced process described. No steps for setting up a project, deploying, testing, or validating. The skill reads as a feature list and resource index rather than a guide for building an agent.

1 / 3

Progressive Disclosure

The skill excels at progressive disclosure — it provides a concise overview with well-organized, clearly signaled one-level-deep references to detailed documentation, integration guides, advanced features, error catalogs, and templates with line counts for context.

3 / 3

Total: 7 / 12

Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: secondsky/claude-skills (Reviewed)

