Assists in writing high-quality content by conducting research, adding citations, improving hooks, iterating on outlines, and providing real-time feedback on each section. Transforms your writing process from solo effort to collaborative partnership.
Quality: 26% (Does it follow best practices?)
Impact: 98% (1.44x average score across 6 eval scenarios)
Advisory: Suggest reviewing before use
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./business-productivity/content-research-writer/SKILL.md`

Quality
Discovery
25%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description lists several writing-related actions but wraps them in vague, marketing-style language ('Transforms your writing process from solo effort to collaborative partnership') that adds no selection value. It lacks a 'Use when...' clause, uses second person ('your'), and is too generic to be distinguishable from other writing or editing skills.
Suggestions
Add an explicit 'Use when...' clause with trigger terms, e.g., 'Use when the user asks for help writing articles, blog posts, essays, or long-form content that requires research and citations.'
Remove the second-person marketing sentence ('Transforms your writing process...') and replace with concrete scope details like content types supported (blog posts, reports, essays) or specific output formats.
Narrow the scope to a distinct niche — specify what kind of content (e.g., 'long-form articles,' 'academic papers') to reduce conflict risk with other writing-related skills.
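Applying these suggestions, the revised frontmatter description might read roughly like the sketch below. The wording is illustrative only, not the skill's actual frontmatter; the skill name is taken from the review command's path.

```markdown
---
name: content-research-writer
description: >
  Helps write long-form content (blog posts, articles, essays) by researching
  sources, adding citations, improving hooks, and iterating on outlines with
  section-by-section feedback. Use when the user asks for help writing,
  researching, outlining, or revising an article, blog post, essay, or other
  long-form piece that needs citations.
---
```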
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names several actions like 'conducting research, adding citations, improving hooks, iterating on outlines, and providing real-time feedback,' but these are somewhat generic writing-assistance actions rather than highly concrete, tool-specific capabilities. The second sentence is pure marketing fluff. | 2 / 3 |
| Completeness | Describes what it does (writing assistance with research, citations, etc.) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'when' is not even implied clearly, so this scores at 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'writing,' 'research,' 'citations,' 'hooks,' and 'outlines' that users might mention, but misses common variations like 'blog post,' 'article,' 'essay,' 'draft,' 'editing,' or specific content types. The terms are moderately useful but not comprehensive. | 2 / 3 |
| Distinctiveness / Conflict Risk | The description is very broad — 'writing high-quality content' could overlap with virtually any writing, editing, research, or content creation skill. There are no distinct triggers that would differentiate this from other writing-related skills. | 1 / 3 |
| Total | | 6 / 12 Passed |
Implementation
27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is significantly over-engineered and verbose for what it accomplishes. It contains extensive template scaffolding and placeholder text that Claude can generate on its own, explains basic writing concepts unnecessarily, and includes generic productivity advice ('Take breaks', 'Set deadlines') that wastes tokens. The concrete examples (hook improvements, section feedback) are the strongest parts, but they're buried in a wall of redundant structure.
Suggestions
Cut the content by 60-70%: remove 'When to Use', 'What This Skill Does', 'Pro Tips', 'Best Practices', and 'Related Use Cases' sections entirely—Claude already knows these things.
Consolidate the output templates into a single referenced file (e.g., TEMPLATES.md) and keep only the core workflow and key principles in SKILL.md.
Add validation checkpoints to workflows: e.g., 'Verify all citations are from real, accessible sources before presenting research' and 'Confirm voice match with user before proceeding to next section'.
Replace the verbose instruction templates with a concise decision tree: topic → outline → research → draft sections with feedback loops → final review, keeping only the non-obvious guidance.
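As a sketch of the decision-tree suggestion above, the condensed workflow might look something like this (the wording is illustrative, with the two proposed validation checkpoints folded in):

```markdown
## Workflow
1. Clarify topic, audience, and target length.
2. Draft an outline and confirm it with the user.
3. Research each section; verify every citation points to a real,
   accessible source before presenting it.
4. Draft one section at a time; confirm the voice matches the user's
   before moving on.
5. Final review pass. Detailed output templates live in TEMPLATES.md.
```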
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~400+ lines. Explains obvious concepts like what a hook is, how to ask clarifying questions, and basic writing advice ('Take breaks', 'Read aloud'). Template structures are overly detailed with placeholder text that Claude already knows how to generate. The 'When to Use This Skill' and 'What This Skill Does' sections are redundant with the actual instructions. | 1 / 3 |
| Actionability | Provides structured templates and example prompts, but most content is template scaffolding with placeholders rather than executable guidance. The examples (hook improvements, section feedback) are concrete and useful, but the core instructions are more about formatting output templates than giving Claude specific, actionable techniques for research or writing improvement. | 2 / 3 |
| Workflow Clarity | Multiple workflows are listed (blog post, newsletter, tutorial, thought leadership) with clear sequences, but they lack validation checkpoints or feedback loops. The basic workflow has numbered steps but no verification points. There's no guidance on what to do when research fails, citations can't be verified, or the user rejects feedback. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with everything inline. Despite recommending a file organization structure with separate files (outline.md, research.md, feedback.md), the skill itself dumps all content into one massive document. No references to external files for detailed templates, citation format guides, or workflow-specific instructions that could be split out. | 1 / 3 |
| Total | | 6 / 12 Passed |
Validation
81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (540 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
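The frontmatter warning can usually be cleared by nesting nonstandard keys under `metadata`, along the lines of this sketch (the key names `author` and `version` are hypothetical; the report does not list the actual offending keys):

```markdown
---
name: content-research-writer
description: ...
metadata:
  author: ...    # formerly an unknown top-level key
  version: ...   # formerly an unknown top-level key
---
```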
Revision: 3dd3ac0