
requesthunt

Generate user demand research reports from real user feedback. Scrape and analyze feature requests, complaints, and questions from Reddit, X, GitHub, YouTube, LinkedIn, and Amazon. Use when user wants to do demand research, find feature requests, analyze user demand, or run RequestHunt queries.


Quality

82%

Does it follow best practices?

Impact

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use


Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly communicates specific capabilities (scraping and analyzing user feedback from named platforms), provides explicit trigger guidance with a 'Use when' clause, and occupies a distinct niche. The inclusion of the product name 'RequestHunt' and specific platform names further strengthens both trigger quality and distinctiveness.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: 'Generate user demand research reports', 'Scrape and analyze feature requests, complaints, and questions' from named platforms (Reddit, X, GitHub, YouTube, LinkedIn, Amazon). | 3 / 3 |
| Completeness | Clearly answers both what ('Generate user demand research reports from real user feedback, scrape and analyze feature requests...') and when ('Use when user wants to do demand research, find feature requests, analyze user demand, or run RequestHunt queries'). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'demand research', 'feature requests', 'user feedback', 'complaints', 'questions', plus platform names and the product name 'RequestHunt'. These cover common variations of how users would phrase such requests. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche combining demand research, specific social-platform scraping, and the unique product name 'RequestHunt'. Unlikely to conflict with generic data analysis or social media skills due to the specific focus on user demand/feature request analysis. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured skill with strong actionability — concrete CLI commands, realistic examples per product category, and a complete report template. Its main weaknesses are the lack of validation checkpoints in the workflow (e.g., checking scrape job status before report generation, handling failures) and the monolithic structure that packs extensive reference tables inline rather than splitting them into supporting files. The platform selection guide, while valuable, is verbose relative to the token budget.

Suggestions

- Add explicit validation checkpoints: after starting a scrape, check `requesthunt scrape status` and wait for completion before proceeding; include error-handling guidance if scraping returns zero results for a platform.
- Extract the Platform Selection Guide tables and the Commands reference section into separate bundle files (e.g., PLATFORMS.md and COMMANDS.md) and reference them from the main skill to reduce inline bulk.
- Condense the platform recommendation tables: the 'Platform Strengths' and 'Recommended Platforms by Category' tables overlap significantly with the 'Quick Selection Rules' section; keep the quick rules and trim the tables.
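As a rough illustration of the first suggestion, a scrape-completion checkpoint might look like the sketch below. Only the `requesthunt scrape status` subcommand name is quoted from this review; the `--job` flag, the status strings (`running`, `failed`, `completed`), and the stub function standing in for the real binary are assumptions for illustration, not the RequestHunt CLI's confirmed interface.

```shell
# Stub standing in for the real `requesthunt` binary so the sketch runs
# self-contained: it reports "running" twice, then "completed".
requesthunt() {
  n=$(cat /tmp/rh_polls 2>/dev/null || echo 0)
  n=$((n + 1))
  echo "$n" > /tmp/rh_polls
  if [ "$n" -ge 3 ]; then echo "completed"; else echo "running"; fi
}

# Poll until the scrape job finishes, failing fast on error, so report
# generation never runs against an incomplete dataset.
wait_for_scrape() {
  while :; do
    status=$(requesthunt scrape status --job "$1")
    case "$status" in
      completed) echo "job $1 completed"; return 0 ;;
      failed)    echo "job $1 failed" >&2;  return 1 ;;
      *)         sleep 1 ;;
    esac
  done
}

rm -f /tmp/rh_polls
wait_for_scrape demo-job
```

A real checkpoint would also want a timeout and a per-platform zero-result check before the report step, per the suggestion above.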

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The platform selection guide tables are useful but quite extensive, adding significant token weight. Some sections, such as the detailed platform strengths table and the recommended-platforms-by-category table, could be condensed. The prerequisites section includes reasonable detail, but the alternative build-from-source note and headless environment instructions add bulk. Overall mostly efficient, but it could be tightened. | 2 / 3 |
| Actionability | Provides fully executable CLI commands for every step (install, auth, scrape, search, list), concrete bash examples for different product categories, a complete report template in markdown, and specific flag usage. Commands are copy-paste ready with realistic arguments. | 3 / 3 |
| Workflow Clarity | The three-step research workflow (Define Scope → Collect Data → Generate Report) is clearly sequenced, but there are no validation checkpoints between steps: no guidance on verifying scrape completion before generating reports (e.g., checking scrape status before proceeding), no error-recovery steps if scraping fails, and no feedback loop for validating report quality against collected data. | 2 / 3 |
| Progressive Disclosure | The content is a single monolithic file with no bundle files. The extensive platform selection tables and command reference sections could be split into separate reference files. External links to docs exist, but the skill itself would benefit from splitting the platform guide and command reference into separate files with clear navigation from the main skill. | 2 / 3 |
| Total | | 9 / 12 |

Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: resciencelab/opc-skills (reviewed)

