
content-research-writer

A token-efficient writing partner that uses bounded research, numbered citations, evidence packets, and parallel sub-agents to outline, draft, and refine high-quality content with minimal context growth.


Quality: 40% (does it follow best practices?)

Impact: 94% (1.34x average score across 3 eval scenarios)

Security (by Snyk): Advisory. Suggest reviewing before use.

To optimize this skill with Tessl:

npx tessl skill review --optimize ./.github/skills/content-research-writer/SKILL.md

Quality

Discovery: 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is heavily focused on internal implementation details (bounded research, parallel sub-agents, token efficiency) rather than user-facing capabilities and trigger scenarios. It lacks a 'Use when...' clause and natural user language, making it difficult for Claude to know when to select this skill. The jargon-heavy phrasing would not match typical user requests for writing help.

Suggestions

Add an explicit 'Use when...' clause with natural trigger terms like 'write an article', 'draft a blog post', 'research and write', 'essay', 'long-form content', etc.

Replace implementation jargon ('bounded research', 'evidence packets', 'parallel sub-agents', 'minimal context growth') with user-facing capability descriptions like 'researches topics, writes articles, essays, and reports with inline citations'.

Specify the types of content this skill handles (e.g., articles, reports, essays, blog posts) to improve distinctiveness and help Claude differentiate it from other writing-related skills.
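A revised description incorporating these suggestions might look like the following sketch; the wording below is illustrative, not the skill's actual frontmatter:

```markdown
---
name: content-research-writer
description: >
  Researches topics and writes articles, blog posts, essays, and reports
  with numbered inline citations. Use when asked to "write an article",
  "draft a blog post", "research and write", or produce any long-form
  content that needs sourced evidence, outlining, drafting, and revision.
---
```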

Dimension scores

Specificity (2 / 3): It names some actions ('outline, draft, and refine') and mentions techniques ('bounded research, numbered citations, evidence packets, parallel sub-agents'), but these are more architectural/process descriptors than concrete user-facing capabilities. It doesn't specify what kind of content or what concrete outputs are produced.

Completeness (1 / 3): While there is a partial 'what' (outline, draft, refine content), there is no 'when' clause or explicit trigger guidance. The description lacks any 'Use when...' statement, which per the rubric should cap completeness at 2, and the 'what' is also vague enough to warrant a 1.

Trigger Term Quality (1 / 3): The description uses technical jargon like 'bounded research', 'evidence packets', 'parallel sub-agents', and 'minimal context growth' that users would never naturally say. Natural terms like 'write', 'article', 'blog post', 'essay', 'research paper', or 'editing' are largely absent.

Distinctiveness Conflict Risk (2 / 3): The mention of specific techniques like 'evidence packets' and 'numbered citations' provides some distinctiveness, but 'writing partner' and 'outline, draft, and refine' are extremely broad and could overlap with any general writing or editing skill.

Total: 6 / 12 (Passed)

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-thought-out writing workflow skill with strong phase sequencing, clear constraints, and good token-efficiency principles. Its main weaknesses are the lack of executable/copy-paste-ready artifacts (no template files, no actual sub-agent dispatch examples) and some verbosity in introductory sections. The content would benefit from splitting reference material into bundle files and providing concrete templates rather than descriptions of templates.

Suggestions

Add concrete, copy-paste-ready template files (e.g., an evidence packet template EP-000-template.md, a source log template, a brief template) that Claude can directly create, rather than describing what should go in them.

Move the detailed folder structure, evidence packet schema, and sub-agent output schema into separate bundle reference files (e.g., TEMPLATES.md, FOLDER-STRUCTURE.md) and reference them from the main SKILL.md.

Remove or significantly trim the 'When to Use / When Not to Use' sections — Claude can infer appropriate usage from the workflow itself.

Add a concrete example of sub-agent dispatch showing actual tool usage (e.g., how to invoke a sub-agent with the required output schema) to make Phase 4 fully actionable.
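To illustrate the first suggestion, a copy-paste-ready evidence packet template might look like the sketch below; the file name EP-000-template.md and every field are hypothetical, since the skill's actual schema is not reproduced on this page:

```markdown
<!-- EP-000-template.md (hypothetical) -->
# Evidence Packet EP-000: <topic>

- Source: <title, author, URL>
- Accessed: <YYYY-MM-DD>
- Claim supported: <one-sentence claim>
- Quote / data: <verbatim excerpt or figure>
- Citation number: [0]
- Confidence: <high | medium | low>
```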

Dimension scores

Conciseness (2 / 3): The skill is reasonably well-structured but includes some unnecessary explanation (e.g., 'This skill acts as your writing partner' preamble, 'When to Use / When Not to Use' sections that Claude can infer). The folder structure and evidence packet format sections are useful but could be tighter. Some redundancy between the evidence packet schema in Phase 4 and the standalone 'Evidence Packet' section.

Actionability (2 / 3): The skill provides concrete structures (folder naming, evidence packet schema, sub-agent output schema, citation format) but lacks executable code or copy-paste-ready commands. The workflow phases describe what to do conceptually but don't include actual tool invocations, file creation commands, or template files that Claude could directly execute. The sub-agent instructions are specific but still somewhat abstract without showing actual sub-agent dispatch syntax.

Workflow Clarity (3 / 3): The six-phase workflow is clearly sequenced with explicit stop conditions (Phase 2), merge rules (Phase 4), escalation ladders (Guardrails section), and validation checkpoints (Phase 6 review with claim-to-evidence tracing). The escalation ladder at the end provides a clear feedback loop for when evidence gathering is blocked.

Progressive Disclosure (2 / 3): The content is well-organized with clear headers and sections, but everything is in a single monolithic SKILL.md with no bundle files. The folder structure and evidence packet format could be split into referenced templates. For a skill of this length (~200 lines), some content like the detailed sub-agent output schema or the folder structure could be in separate reference files.

Total: 9 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

Criteria: frontmatter_unknown_keys
Description: Unknown frontmatter key(s) found; consider removing or moving to metadata
Result: Warning
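One common way to clear this kind of warning is to keep only spec-defined top-level keys and nest anything custom under a metadata map. The key names below are assumptions for illustration, since the offending keys are not shown on this page:

```markdown
---
name: content-research-writer
description: A token-efficient writing partner ...
# custom keys moved under metadata (assuming the spec permits a metadata map)
metadata:
  author: 0xrabbidfly
---
```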

Total: 10 / 11 (Passed)

Repository: 0xrabbidfly/eric-cartman (reviewed)
