
competitor-monitoring

When the user wants to set up ongoing tracking of competitor activity — pricing changes, feature launches, hiring signals, content, or public mentions. Also use when the user mentions "track competitors", "what are competitors doing", "competitor alerts", or "market watch".

Score: 80

Quality: 76% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Advisory (review suggested before use)

Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./skills/competitor-monitoring/SKILL.md
```

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description excels at trigger term coverage and completeness, with a clear 'Use when' clause and natural keywords. Its main weakness is that it describes what to track rather than what concrete actions the skill performs (e.g., does it set up automated alerts, generate dashboards, scrape websites?). Adding specific actions would strengthen the specificity dimension.

Suggestions

Add concrete actions the skill performs, e.g., 'Sets up automated monitoring dashboards, configures alerts for pricing changes, and generates periodic competitor activity reports' instead of just listing what to track.
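For illustration, a revised frontmatter description along those lines might read as follows. The specific actions listed are assumptions about what this skill does; they should be adjusted to match its actual behavior:

```yaml
---
name: competitor-monitoring
description: >
  Sets up automated monitoring of competitor activity: configures alerts for
  pricing changes, tracks feature launches and hiring signals, and generates
  periodic competitor activity reports. Use when the user mentions "track
  competitors", "what are competitors doing", "competitor alerts", or
  "market watch".
---
```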

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description names the domain (competitor tracking) and lists several types of activity to track (pricing changes, feature launches, hiring signals, content, public mentions), but it doesn't describe concrete actions the skill performs — it focuses on what to track rather than what the skill does (e.g., 'sets up alerts', 'generates reports', 'scrapes websites'). | 2 / 3 |
| Completeness | Clearly answers both 'what' (set up ongoing tracking of competitor activity across multiple dimensions) and 'when' (explicit 'Also use when...' clause with specific trigger phrases). The when clause is explicit and well-defined. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms: 'track competitors', 'what are competitors doing', 'competitor alerts', 'market watch', plus specific activity types like 'pricing changes', 'feature launches', 'hiring signals'. These are terms users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | The focus on ongoing competitor tracking with specific signal types (pricing, hiring, features) creates a clear niche. The trigger terms are distinctive and unlikely to conflict with other skills like general market research or one-off competitive analysis. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured monitoring skill with a clear workflow and useful output template. Its main weaknesses are moderate verbosity (some explanatory content Claude doesn't need) and a lack of truly executable/concrete setup instructions — it describes what to do at a strategic level rather than providing copy-paste-ready commands or configurations. The content would benefit from being trimmed and having reference material split into separate files.

Suggestions

- Add concrete, executable setup instructions for at least one monitoring tool (e.g., exact Google Alerts query syntax, specific Visualping configuration steps) rather than just naming tools.
- Move the frameworks tables (job posting signals, threat levels) and common mistakes into a separate REFERENCE.md file, keeping SKILL.md focused on the workflow and output format.
- Trim explanatory parentheticals like '(sentiment shifts)', '(reveal target customers)' — Claude can infer these from context.
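As a sketch of the kind of concrete setup instruction the first suggestion asks for, the skill could spell out ready-to-paste Google Alerts queries built from standard search operators. The competitor name and domain below are placeholders:

```
"Acme Analytics" pricing OR plans
"Acme Analytics" hiring OR "is hiring"
"Acme Analytics" -site:acme-analytics.example
```

The quoted phrase pins the exact company name, `OR` broadens the match to synonyms, and `-site:` excludes the competitor's own pages so alerts surface only third-party coverage.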

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is reasonably well-structured but includes some unnecessary explanation that Claude would already know (e.g., explaining what changelogs are, what job postings signal in general terms). The frameworks tables are useful but the 'common mistakes' section and some of the monitoring surface descriptions could be tightened. | 2 / 3 |
| Actionability | The skill provides a clear workflow and concrete output format with a markdown template, which is good. However, it lacks executable commands or specific tool setup instructions — it names tools like Visualping and Google Alerts but doesn't provide concrete setup steps. The guidance is more descriptive than executable. | 2 / 3 |
| Workflow Clarity | The 5-step workflow is clearly sequenced from defining the monitoring surface through to generating the report. Each step has clear sub-items, and the analyze step includes a structured framework (threat levels) that serves as a validation checkpoint for interpreting signals before producing output. | 3 / 3 |
| Progressive Disclosure | The skill references related skills (competitive-analysis, daily-product-digest, review-mining, market-research), which is good, but the main content is quite long and monolithic. The frameworks tables and threat level definitions could be split into a reference file, and the output template could be a separate file to keep the SKILL.md leaner. | 2 / 3 |
| Total | | 9 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 passed

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed
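To clear the frontmatter_unknown_keys warning, non-standard keys can usually be nested under a metadata block rather than sitting at the top level of the frontmatter. A sketch, with hypothetical custom key names:

```yaml
---
name: competitor-monitoring
description: Ongoing tracking of competitor activity...
metadata:
  author: shawnpang       # hypothetical: formerly a top-level custom key
  category: market-watch  # hypothetical: formerly a top-level custom key
---
```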

Repository: shawnpang/startup-founder-skills (Reviewed)
