
requesthunt

Generate user demand research reports from real user feedback. Scrape and analyze feature requests, complaints, and questions from Reddit, X, GitHub, YouTube, LinkedIn, and Amazon. Use when user wants to do demand research, find feature requests, analyze user demand, or run RequestHunt queries.

86

Quality: 82% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Advisory (Suggest reviewing before use)


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that clearly communicates what the skill does (scrape and analyze user feedback from specific platforms to generate demand research reports), when to use it (demand research, feature requests, RequestHunt queries), and includes natural trigger terms. The description is concise, written in the third person, and specific enough to distinguish this skill from others.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: 'Generate user demand research reports', 'Scrape and analyze feature requests, complaints, and questions' from named platforms (Reddit, X, GitHub, YouTube, LinkedIn, Amazon). | 3 / 3 |
| Completeness | Clearly answers both what ('Generate user demand research reports from real user feedback, scrape and analyze feature requests...') and when ('Use when user wants to do demand research, find feature requests, analyze user demand, or run RequestHunt queries'). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'demand research', 'feature requests', 'user feedback', 'complaints', 'questions', plus platform names and the product name 'RequestHunt'. These cover common variations of how users would phrase such requests. | 3 / 3 |
| Distinctiveness / Conflict Risk | Occupies a clear niche around user demand research from social platforms, with distinct triggers like 'RequestHunt', 'demand research', and specific platform scraping. Unlikely to conflict with general analytics or social media skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill with excellent concrete examples and executable commands covering the full RequestHunt workflow. Its main weaknesses are verbosity in the platform selection guide (three overlapping reference tables), lack of validation/error-handling checkpoints in the workflow, and a monolithic structure that would benefit from splitting detailed reference content into separate files.

Suggestions

- Add validation checkpoints to the workflow: check `requesthunt scrape status` after Step 2, handle cases where the scrape returns insufficient data, and verify data quality before generating the report.
- Consolidate the three platform selection references (Platform Strengths table, Recommended Platforms table, Quick Selection Rules) into a single concise reference — the current overlap adds ~40 lines of redundant content.
- Split the detailed command reference and platform selection guide into separate referenced files (e.g., COMMANDS.md, PLATFORM_GUIDE.md) to keep SKILL.md as a lean overview.
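The "insufficient data" checkpoint suggested above can be sketched generically. This is a minimal illustration only, not RequestHunt's actual API: the function name `scrape_is_usable`, the `"text"` field on scraped items, and the threshold of 25 items are all assumptions for the sketch.

```python
# Sketch of a data-quality gate between the scrape step and report
# generation. Field names and threshold are illustrative assumptions,
# not part of the RequestHunt CLI or its output format.
MIN_ITEMS = 25  # assumed minimum sample size for a meaningful report

def scrape_is_usable(items, min_items=MIN_ITEMS):
    """Return True if the scrape yielded enough non-empty feedback items."""
    usable = [item for item in items if item.get("text", "").strip()]
    return len(usable) >= min_items

# Example: 30 non-empty items pass the gate; 30 empty ones do not.
sample = [{"text": "Please add dark mode"}] * 30
assert scrape_is_usable(sample)
assert not scrape_is_usable([{"text": ""}] * 30)
```

In the skill's workflow, a gate like this would run after Step 2 (Collect Data) and either loop back to scraping additional platforms or abort with a clear message, rather than generating a report from thin data.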

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is reasonably well-structured but includes substantial content that could be more concise — the platform selection guide tables are extensive and somewhat redundant (the platform strengths table, recommended platforms table, and quick selection rules all overlap). The content safety section and some explanatory text could be tightened. | 2 / 3 |
| Actionability | Provides fully executable bash commands for every operation (install, auth, scrape, search, list), concrete examples for different product categories, a complete report template in markdown, and specific verification steps for authentication. Commands are copy-paste ready. | 3 / 3 |
| Workflow Clarity | The 3-step research workflow (Define Scope → Collect Data → Generate Report) is clearly sequenced, but lacks validation checkpoints — there's no step to verify scrape completion (scrape status is mentioned in the commands section but not integrated into the workflow), no error handling for failed scrapes, and no feedback loop for insufficient data quality or volume. | 2 / 3 |
| Progressive Disclosure | The content is well-organized with clear sections and headers, but it's monolithic — the platform selection guide, report template, and full command reference could be split into separate files. References to external docs exist (requesthunt.com/docs, setup.md), but the skill itself is quite long, with inline content that would benefit from separation. | 2 / 3 |
| Total | | 9 / 12 |

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: resciencelab/opc-skills (Reviewed)

