Advanced content and topic research skill that analyzes trends across Google Analytics, Google Trends, Substack, Medium, Reddit, LinkedIn, X, blogs, podcasts, and YouTube to generate data-driven article outlines based on user intent analysis
Install with Tessl CLI
```
npx tessl i github:alirezarezvani/claude-code-skill-factory --skill content-trend-researcher
```
Does it follow best practices?
If you maintain this skill, you can automatically optimize it with the Tessl CLI to improve its score:

```
npx tessl skill review --optimize ./path/to/skill
```
Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear domain (multi-platform content research) and lists specific platforms, which helps with identification. However, it lacks explicit trigger guidance ('Use when...'), relies on somewhat abstract action verbs, and misses natural user phrases that would help Claude select this skill appropriately.
Suggestions
Add a 'Use when...' clause with explicit triggers like 'Use when the user asks for content ideas, trending topics, what to write about, or needs research across social platforms'
Include natural user phrases as trigger terms: 'content ideas', 'trending topics', 'what should I write about', 'blog post research', 'viral content'
Make actions more concrete: instead of 'analyzes trends', specify 'identifies top-performing content patterns, extracts engagement metrics, compares topic popularity across platforms'
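Putting the three suggestions together, the frontmatter description might be rewritten along these lines. This is an illustrative sketch, not the skill's actual metadata; the `name` and `description` keys follow the common SKILL.md frontmatter layout.

```yaml
# Hypothetical rewrite of the skill's frontmatter, combining the
# suggested 'Use when...' clause, natural trigger phrases, and
# concrete action verbs. Field names assume the usual SKILL.md format.
name: content-trend-researcher
description: >
  Identifies top-performing content patterns and compares topic popularity
  across Google Trends, Reddit, YouTube, Medium, Substack, LinkedIn, and X,
  then generates data-driven article outlines from user intent analysis.
  Use when the user asks for content ideas, trending topics, "what should
  I write about", blog post research, or viral content analysis.
```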
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (content/topic research) and lists platforms, but the actual actions are vague: 'analyzes trends' and 'generate data-driven article outlines' are somewhat abstract without specifying concrete operations like 'scrape engagement metrics' or 'identify top-performing keywords'. | 2 / 3 |
| Completeness | Describes what it does (analyzes trends, generates outlines) but completely lacks a 'Use when...' clause or any explicit trigger guidance. Per rubric guidelines, missing explicit trigger guidance caps this at 2, and the 'what' is also weak, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Includes good platform names (Google Analytics, Reddit, YouTube, etc.) that users might mention, but lacks natural user phrases like 'content ideas', 'what should I write about', 'trending topics', or 'blog post ideas' that would trigger selection. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific platform list (Google Analytics, Substack, Medium, etc.) provides some distinctiveness, but 'content research' and 'article outlines' could overlap with general writing, SEO, or social media skills without clearer boundaries. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation — 7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like a product marketing document or feature specification rather than actionable instructions for Claude. It extensively describes what the skill 'does' and 'can do' but never provides concrete steps, code, or methods for actually performing content trend research. The verbose explanations of basic concepts (user intent types, platform descriptions) waste tokens on knowledge Claude already possesses.
Suggestions
Replace capability descriptions with executable code showing how to actually query trend data (e.g., Google Trends API calls, Reddit API queries, web scraping patterns)
Add a concrete workflow with numbered steps: 1) Query platforms, 2) Aggregate data, 3) Analyze intent patterns, 4) Generate outline - with validation checkpoints
Remove explanations of concepts Claude knows (what informational intent is, what each platform does) and replace with specific implementation details
Convert the output JSON schema into an actual working example showing how to transform raw platform data into the structured output
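The recommended four-step workflow (query, aggregate, analyze intent, generate outline) can be sketched as a minimal script. Everything here is hypothetical: the stubbed records, field names, and intent heuristic stand in for real platform queries (e.g. via pytrends or PRAW) and for the skill's actual output schema.

```python
# Hypothetical sketch of the four-step workflow the review recommends.
# Platform records and field layouts are illustrative, not the skill's schema.
import json
from collections import Counter

def query_platforms(topic):
    # Step 1: in practice this would call platform APIs (pytrends, PRAW, ...);
    # here we return stubbed engagement records for demonstration.
    return [
        {"platform": "reddit", "title": f"How to start {topic}", "score": 940},
        {"platform": "youtube", "title": f"{topic} mistakes to avoid", "score": 1200},
        {"platform": "medium", "title": f"Why {topic} matters in 2025", "score": 310},
    ]

def classify_intent(title):
    # Step 3: a crude keyword heuristic standing in for real intent analysis.
    lowered = title.lower()
    if lowered.startswith("how to") or "mistakes" in lowered:
        return "informational"
    return "opinion"

def build_outline(topic, records):
    # Steps 2 and 4: aggregate intent counts, then emit a structured outline
    # ordered by engagement score.
    intents = Counter(classify_intent(r["title"]) for r in records)
    sections = [r["title"] for r in sorted(records, key=lambda r: -r["score"])]
    return {
        "topic": topic,
        "dominant_intent": intents.most_common(1)[0][0],
        "sections": sections,
    }

outline = build_outline("newsletter growth", query_platforms("newsletter growth"))
print(json.dumps(outline, indent=2))
```

A real implementation would replace the stubs with live queries and add validation checkpoints between steps (e.g. minimum record counts per platform before aggregating), as the suggestions above describe.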
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive explanations of concepts Claude already knows (what user intent types are, what each platform does, basic content strategy concepts). The 'What This Skill Does' and 'Capabilities' sections explain obvious concepts at length rather than providing actionable instructions. | 1 / 3 |
| Actionability | Despite its length, the skill provides no executable code, no actual API calls, no concrete implementation steps. It describes what the skill 'does' conceptually but never shows how to actually perform trend research. The JSON schemas are output formats, not executable guidance. | 1 / 3 |
| Workflow Clarity | No clear workflow or sequence of steps for actually conducting research. The skill describes capabilities and outputs but never explains the process: how to query platforms, how to analyze data, how to synthesize findings. No validation checkpoints for a multi-platform research process. | 1 / 3 |
| Progressive Disclosure | Content is organized into sections with clear headings, but it's a monolithic document that could benefit from splitting detailed platform-specific guidance into separate files. The 'Integration with Other Skills' section references external skills appropriately, but the main content is overly long for a SKILL.md overview. | 2 / 3 |
| Total | | 5 / 12 — Passed |
Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.