Write developer blog posts from video transcripts, meeting notes, or rough ideas. Extracts narrative from source material, structures content with hooks and technical sections, formats code examples with placeholders, and checks drafts against 31 AI anti-patterns with structural variant detection, three-pass scanning (surface, skeleton, soul check), and rewrite auditing. Auto-updates anti-pattern list from Wikipedia before each session. Includes interactive onboarding to learn the author's voice from writing samples. Persona files live at ~/.claude/blog-writer-persona/ by default, with symlink support for custom locations (e.g. Google Drive for backup). Optional global voice saves your voice profile to Claude Code user memory so it applies across all projects.

Use this skill whenever the user wants to write a blog post, draft a blog, turn a transcript into a blog, work on blog content, or mentions "blog" in the context of content creation. Also trigger when the user provides a video transcript and wants written content derived from it, or when continuing work on a blog series.
97
Does it follow best practices? 94%
Impact: 99%
Average score across 7 eval scenarios: 1.43x
Advisory: Suggest reviewing before use
A senior backend engineer submitted a blog post draft about debugging a memory leak in a Node.js service. The technical content is solid and well-structured. The draft has already been through one editing pass — there are no obvious grammar issues, no AI vocabulary problems, no structural gimmicks.
The problem is that it reads like a postmortem template, not a blog post. It's technically correct but completely lifeless. The blog editor wants the prose to feel like it was written by a human who lived through the debugging experience and has opinions about it — not a documentation generator.
Review the draft below for voice and personality issues. Produce a revised version that preserves the technical content but makes it read like an actual person wrote it. Also produce a report explaining what was wrong with the original and what you changed.
Produce the following files:
revised-draft.md -- the full draft with voice and personality issues fixed
revision-report.md -- a report explaining what problems you identified in the original prose (beyond surface-level pattern matching), and what you changed to address them

The following files are provided as inputs. Extract them before beginning.
=============== FILE: inputs/draft-to-revise.md ===============
Our team runs a Node.js service that processes webhook events from third-party integrations. In January, the service started consuming more memory over time. The process would reach its container limit and restart every eight hours. We needed to find and fix the leak.
We added heap snapshots to the service using the v8 module. The snapshots showed retained objects growing by approximately 50MB per hour. Most of the retained objects were event listener closures attached to the HTTP client. The closures held references to request context objects that should have been garbage collected.
We checked the application metrics in Grafana. The memory graph showed a steady upward slope that reset sharply every eight hours when the container hit its 512MB limit and restarted. The pattern had started approximately two weeks before anyone noticed, coinciding with a dependency update.
The HTTP client library created a new listener for each request. The listeners were added to a shared emitter but never removed after the request completed. Each listener held a closure over the request context, which included the full request body. Over time, thousands of stale listeners accumulated on the emitter.
We traced the issue to a minor version bump in the HTTP client library. The previous version had cleaned up listeners automatically. The new version changed the cleanup behavior as a side effect of an unrelated performance optimization. The changelog did not mention the change.
We replaced the per-request listeners with a single persistent listener that routes events by request ID. The new approach uses a Map to track active requests. When a request completes, its entry is removed from the Map. The fix reduced the listener count from thousands to one.
We also added a monitoring check that alerts when the active listener count exceeds 100. This check runs every 30 seconds and sends an alert to our on-call Slack channel if the threshold is breached. The check was deployed alongside the fix.
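The check itself can be sketched as follows (the threshold comes from the text; the alert transport is an assumption, here just a console warning standing in for the Slack call):

```javascript
const LISTENER_THRESHOLD = 100;

// Sum listeners across the event names we care about and alert when
// the total crosses the threshold. Returns the total for logging.
function checkListenerCount(emitter, eventNames, alert = console.warn) {
  const total = eventNames.reduce(
    (sum, name) => sum + emitter.listenerCount(name), 0);
  if (total > LISTENER_THRESHOLD) {
    alert(`listener count ${total} exceeds threshold ${LISTENER_THRESHOLD}`);
  }
  return total;
}

// Run every 30 seconds alongside the service:
// setInterval(() => checkListenerCount(httpEmitter, ['response']), 30_000);
```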
After deploying the fix, memory usage stabilized at 180MB. The service has not restarted due to memory pressure since the deployment. Heap snapshots confirm that request context objects are now garbage collected within seconds of request completion. The monitoring dashboard shows a flat memory line where it previously showed a sawtooth pattern.
We ran a load test at 2x production traffic for 48 hours to confirm the fix held under pressure. Memory remained stable throughout the test. The service processed 4.2 million webhook events during the test period without exceeding 200MB.
Alex Chen is a backend engineer at WebhookOps. They have worked on distributed systems for six years. Prior to WebhookOps, they worked at a consultancy building event-driven architectures. In their spare time, they contribute to open-source observability tooling.

=============== END INPUT ===============
evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
scenario-6
scenario-7
example-persona