
competitor-monitoring

Visit each competitor's homepage, features page, pricing page, and blog using Chrome MCP, then write a structured competitive intelligence report saved to Google Drive. Use for a standing weekly competitive pulse or an on-demand deep-dive.


Quality

67%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./product-skills/skills/competitor-monitoring/SKILL.md

Quality

Discovery

85%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that clearly articulates a specific workflow (visiting competitor pages, generating reports, saving to Drive) and provides explicit usage triggers. The main weakness is that trigger term coverage could be broader to capture more natural user phrasings like 'competitor analysis' or 'market research'.

Suggestions

Add more natural trigger terms users might say, such as 'competitor analysis', 'market research', 'competitive landscape', or 'benchmarking'.
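As an illustration of that suggestion, the skill's frontmatter description could fold the extra trigger terms directly into its "when to use" sentence. A hedged sketch (the frontmatter key names and exact current wording are assumptions based on this review's summary):

```yaml
---
name: competitor-monitoring
description: >
  Visit each competitor's homepage, features page, pricing page, and blog
  using Chrome MCP, then write a structured competitive intelligence report
  saved to Google Drive. Use for a standing weekly competitive pulse, an
  on-demand deep-dive, competitor analysis, market research, competitive
  landscape reviews, or benchmarking.
---
```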

Dimension | Reasoning | Score

Specificity

Lists multiple specific concrete actions: visiting competitor pages (homepage, features, pricing, blog), using Chrome MCP, writing a structured competitive intelligence report, and saving to Google Drive.

3 / 3

Completeness

Clearly answers both what (visit competitor pages via Chrome MCP, write structured competitive intelligence report, save to Google Drive) and when ('Use for a standing weekly competitive pulse or an on-demand deep-dive'), providing explicit trigger scenarios.

3 / 3

Trigger Term Quality

Includes some relevant keywords like 'competitive intelligence', 'competitor', 'pricing page', 'Google Drive', but misses common user variations like 'competitor analysis', 'market research', 'competitive landscape', 'comp intel', or 'benchmarking'.

2 / 3

Distinctiveness Conflict Risk

Highly distinctive with a clear niche combining web scraping competitor sites via Chrome MCP with structured report generation saved to Google Drive. Unlikely to conflict with other skills due to the specific workflow described.

3 / 3

Total: 11 / 12 (Passed)

Implementation

50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a well-structured prompt template for competitive intelligence gathering with clear output formatting and useful tips. However, it leans more toward a prompt recipe than an actionable skill: it lacks concrete MCP invocation examples and validation checkpoints for multi-step browser operations, and it includes motivational framing that Claude doesn't need. The workflow would benefit from explicit error handling and verification steps given the fragile nature of web browsing operations.

Suggestions

Add concrete MCP tool invocation examples (e.g., exact Chrome MCP commands to navigate to a URL and extract page content) rather than just describing what to visit.

Add validation checkpoints after Step 1 (verify competitor list parsed correctly with expected fields) and Step 2 (verify each page was successfully loaded before extracting data).

Remove the introductory paragraph explaining why manual competitor monitoring is tedious — Claude doesn't need motivation, just instructions.

Include a sample competitors.md file format so Claude knows exactly what structure to expect when loading the competitor list.
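To illustrate the last suggestion, a sample competitors.md could look like the following. The field names and layout here are assumptions, since the actual file format is not shown in this review:

```markdown
# Competitors

## Acme Analytics
- URL: https://acme-analytics.example.com
- Blog: https://acme-analytics.example.com/blog
- Notes: closest direct competitor; watch pricing changes

## Foobar Insights
- URL: https://foobar-insights.example.com
- Blog: https://foobar-insights.example.com/blog
- Notes: upmarket; track enterprise feature launches
```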

Dimension | Reasoning | Score

Conciseness

The introductory paragraph explains why competitor monitoring is tedious, which is unnecessary context for Claude. The prompt template itself is reasonably efficient, but the overall skill includes some padding (e.g., 'Staying on top of competitors manually means visiting dozens of pages...').

2 / 3

Actionability

The skill provides a structured prompt template with clear steps and output format, but it's essentially a prompt to copy-paste rather than executable code or concrete commands. It relies on placeholders and MCP tools without showing exact MCP invocation syntax or API calls, making it more of a guided template than fully executable guidance.

2 / 3
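To make the Actionability critique concrete, the skill could spell out tool invocations instead of prose like "visit the pricing page". A hedged sketch of what such an instruction might look like (the tool names in parentheses are illustrative, not confirmed Chrome MCP tool names):

```
For each competitor URL:
1. Call the browser navigation tool (e.g. navigate_page) with the target URL.
2. Call the page capture tool (e.g. take_snapshot) and confirm the result
   contains the expected page title before extracting content.
3. If navigation fails or times out, record the URL under "Unreachable" and
   continue with the next page.
```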

Workflow Clarity

The three-step workflow (load list → research → write report) is clearly sequenced, and there's a rule about skipping unavailable sites. However, there are no validation checkpoints — no step to verify the competitor list was loaded correctly, no verification that pages were actually visited successfully, and no feedback loop for handling partial failures beyond 'note it and skip.'

2 / 3
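The missing checkpoint after Step 1 could be as small as a helper that rejects malformed competitor entries before any browsing starts. A minimal sketch, assuming the parsed competitors.md yields dicts with "name" and "url" fields (both names are assumptions, not part of the reviewed skill):

```python
def validate_competitors(entries):
    """Check each parsed competitor entry before browsing begins."""
    problems = []
    for i, entry in enumerate(entries):
        name = entry.get("name", "").strip()
        url = entry.get("url", "").strip()
        if not name:
            problems.append(f"entry {i}: missing name")
        if not url.startswith(("http://", "https://")):
            problems.append(f"entry {i}: invalid url {url!r}")
    # An empty list means the competitor list is safe to use.
    return problems
```

An agent following the skill would stop and report these problems instead of silently browsing a partial or broken list.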

Progressive Disclosure

The content is organized into logical sections (Prompt Template, Setup, Placeholders, Tips) which aids readability. However, the prompt template is quite long inline, and there are no bundle files or external references for the detailed report format or competitor list template — everything is in one monolithic file when some content (like the report template) could be separated.

2 / 3

Total: 8 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)

Repository: amplitude/builder-skills (Reviewed)
