
content-research-writer

A token-efficient writing partner that uses bounded research, numbered citations, evidence packets, and parallel sub-agents to outline, draft, and refine high-quality content with minimal context growth.

Quality: 40% (Does it follow best practices?)

Impact: 94% (1.34x, average score across 3 eval scenarios)

Security by Snyk: Advisory (suggest reviewing before use)

Optimize this skill with Tessl

npx tessl skill review --optimize ./.github/skills/content-research-writer/SKILL.md

Quality

Discovery: 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is heavily focused on internal implementation details (token efficiency, bounded research, parallel sub-agents) rather than user-facing capabilities and trigger scenarios. It lacks a 'Use when...' clause and natural user-facing keywords, making it difficult for Claude to know when to select this skill. The technical jargon would not match how users naturally describe their writing needs.

Suggestions

Add an explicit 'Use when...' clause with natural trigger terms like 'write an article', 'draft a blog post', 'research and write', 'essay', 'long-form content', or 'writing with citations'.

Replace implementation jargon ('bounded research', 'evidence packets', 'parallel sub-agents', 'context growth') with user-facing language describing what the skill produces, e.g., 'Researches topics, outlines structure, drafts long-form articles and essays with inline citations.'

Specify the types of content this skill handles (e.g., articles, reports, essays, blog posts) to improve both specificity and distinctiveness from other writing skills.
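Applied together, these suggestions might yield frontmatter along these lines. This is a hypothetical sketch: the skill name comes from this page, but the description wording and trigger phrases are illustrative, not the actual file contents.

```yaml
---
name: content-research-writer
description: >
  Researches topics, outlines structure, and drafts long-form articles,
  essays, reports, and blog posts with inline numbered citations.
  Use when the user asks to "write an article", "draft a blog post",
  "research and write", or needs long-form content backed by sources.
---
```

Note that the rewritten description leads with user-facing outputs (articles, essays, reports) and trigger phrases, and drops the implementation jargon entirely.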

Dimension / Reasoning / Score

Specificity

It names some actions ('outline, draft, and refine') and mentions techniques ('bounded research, numbered citations, evidence packets, parallel sub-agents'), but these are more architectural/process descriptors than concrete user-facing capabilities. It doesn't specify what kind of content or what concrete outputs are produced.

2 / 3

Completeness

While there is a weak 'what' (outline, draft, refine content), there is no 'when' clause at all — no 'Use when...' or equivalent trigger guidance. The rubric caps completeness at 2 for missing 'Use when', and the 'what' is also vague enough to warrant a score of 1.

1 / 3

Trigger Term Quality

The description uses technical jargon like 'bounded research', 'evidence packets', 'parallel sub-agents', and 'context growth' that users would never naturally say. Natural terms like 'write', 'article', 'blog post', 'essay', 'research paper', or 'editing' are largely absent.

1 / 3

Distinctiveness / Conflict Risk

The mention of specific techniques like 'numbered citations', 'evidence packets', and 'parallel sub-agents' provides some distinctiveness from generic writing skills, but 'writing partner' and 'outline, draft, and refine' are broad enough to overlap with many other writing-related skills.

2 / 3

Total: 6 / 12 (Passed)

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-thought-out workflow skill with excellent phase sequencing, clear stop conditions, and good guardrails for context efficiency. Its main weaknesses are that it's somewhat verbose for what it conveys (explaining concepts like 'writing partner' and when-to-use sections), and it lacks truly executable artifacts like template files or concrete examples of evidence packets and sub-agent prompts. The content would benefit from splitting detailed schemas into referenced template files.

Suggestions

Provide a concrete, copy-paste-ready evidence packet template (e.g., an actual EP-001-example.md file) rather than describing the format in prose.

Move the detailed sub-agent output schema and folder structure into referenced companion files to reduce the main skill's token footprint.

Remove or significantly trim the 'When to Use / When Not to Use' sections — Claude can infer appropriate usage from the workflow itself.

Add a concrete example of a sub-agent prompt/invocation showing exactly how to spawn and merge sub-agent outputs.
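For the first suggestion, a copy-paste-ready packet could look something like the sketch below. The field names are hypothetical, inferred from the prose description of evidence packets in the skill, not a confirmed schema:

```markdown
# EP-001: <short label for the claim>

- **Claim:** The exact sentence from the draft this packet supports.
- **Source:** Author, "Title", publisher, date, URL.
- **Quote:** "Verbatim excerpt from the source that supports the claim."
- **Sufficiency:** sufficient | partial (one line explaining why).
- **Citation number:** [1]
```

Shipping this as an actual `EP-001-example.md` companion file, rather than describing the format in prose, gives the agent something to copy rather than reconstruct.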

Dimension / Reasoning / Score

Conciseness

The skill is reasonably well-structured but includes some unnecessary explanation (e.g., 'This skill acts as your writing partner' intro, 'When to Use/Not Use' sections that Claude could infer). The folder structure and evidence packet format sections are useful but slightly verbose with the naming conventions. Overall mostly efficient but could be tightened.

2 / 3

Actionability

The skill provides concrete structures (evidence packet schema, folder layout, citation format, sub-agent output schema) but lacks executable code or copy-paste-ready templates. The sub-agent output schema is described in prose rather than as an actual template file. Phase descriptions tell Claude what to do conceptually but don't provide exact commands or runnable examples.

2 / 3

Workflow Clarity

The 7-phase workflow (Phase 0–6) is clearly sequenced with explicit stop conditions (e.g., 'Stop researching as soon as every factual claim has at least one sufficient evidence packet'), escalation ladders, merge rules, and validation checkpoints (Phase 6 review with claim-to-evidence tracing). The guardrails section provides a clear escalation ladder for when things go wrong.

3 / 3

Progressive Disclosure

The content is well-organized with clear headers and sections, but everything is in a single monolithic file. The folder structure suggests where artifacts go on disk, but the skill itself doesn't reference any companion files (e.g., a template evidence packet file, a source log template, or a detailed sub-agent instructions file). For a skill this long (~180 lines), splitting detailed schemas and templates into referenced files would improve token efficiency.

2 / 3

Total: 9 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
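This warning usually means the frontmatter contains top-level keys outside the spec's allowed set. A common fix is to nest the extra keys under `metadata`; the keys shown here are illustrative, not the skill's actual frontmatter:

```yaml
---
name: content-research-writer
description: ...
metadata:
  # previously top-level, unrecognized keys moved here (illustrative)
  author: 0xrabbidfly
  version: "1.0"
---
```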

Total: 10 / 11 (Passed)

Repository: 0xrabbidfly/eric-cartman (Reviewed)

