Write developer blog posts from video transcripts, meeting notes, or rough ideas. Extracts narrative from source material, structures content with hooks and technical sections, formats code examples with placeholders, and checks drafts against 22 AI anti-patterns with structural variant detection, two-pass scanning, and rewrite auditing. Auto-updates anti-pattern list from Wikipedia before each session. Includes interactive onboarding to learn the author's voice from writing samples. Use this skill whenever the user wants to write a blog post, draft a blog, turn a transcript into a blog, work on blog content, or mentions "blog" in the context of content creation. Also trigger when the user provides a video transcript and wants written content derived from it, or when continuing work on a blog series.
These are hard rules. If you catch yourself writing any of these patterns, rewrite. During the anti-pattern check (Phase 3 and Phase 4), scan the draft for every pattern listed here. Zero tolerance.
The tell: "Not X. Y." or "Not X. Just Y." or "It's not about X. It's about Y." A negation followed by an affirmation, framed as revelation.
Symptoms:
Examples:
Structural variants: The negation doesn't have to lead the sentence or use the word "Not."
Why it's a tell: This is the single most common LLM writing pattern. It sounds confident and pithy to a machine. To a human reader it sounds like a LinkedIn post.
Instead: Make the point directly.
The tell: Balanced A/B sentence pairs with mirrored structure. One thing does X, the other does Y, and the two clauses are suspiciously symmetrical.
Symptoms:
Examples:
Structural variants: Watch for identical grammatical skeletons even when the vocabulary differs.
Why it's a tell: Real comparisons are messy. Things don't map 1:1 onto neat structural parallels. When they do in prose, it's because the writer manufactured the symmetry.
Instead: Show the comparison through narrative. Tell the story of A failing, then tell the story of B succeeding. Let the reader draw the conclusion.
The tell: Three short fragments in sequence, followed by a punchline. "X. Y. Z. And then [dramatic conclusion]."
Symptoms:
Examples:
Structural variants: The three items don't have to be single words or short fragments.
Why it's a tell: The published posts DO use lists of three sometimes. The difference: they occur inside natural paragraphs, not as standalone staccato fragments designed to sound dramatic.
Instead: Embed the list in a flowing sentence.
The tell: Multiple sentence fragments in sequence for "dramatic effect." Each fragment is 1-5 words, usually noun phrases.
Symptoms:
Examples:
Structural variants: Fragments don't have to be noun phrases.
Why it's a tell: It substitutes rhythm for meaning. Every fragment carries equal weight, which means none of them carry any weight.
Instead: Write actual sentences.
The tell: Any sentence where the two halves mirror each other structurally. The clauses are balanced like a seesaw.
Symptoms:
Examples:
Structural variants: The symmetry can span separate sentences and use different vocabulary.
Why it's a tell: These read like fortune cookies. The structural balance makes the writer feel clever, but the reader feels lectured.
Instead: Make the point once, directly, without the balancing act.
The tell: A short noun phrase posed as a question, immediately answered by its own punchy fragment. The "question" never actually asks anything.
Symptoms:
Examples:
Structural variants: The question can be any length. The tell is the structure, not the word count.
Why it's a tell: It creates fake dramatic tension where none exists. It's a formatting trick pretending to be rhetoric — a setup/punchline couplet disguised as inquiry.
Instead: Use real rhetorical questions that invite the reader to actually think.
Real rhetorical questions work because the reader pauses to consider them. The self-answering version never invites thought.
The tell: Paired em-dashes used to set off an aside, where commas or parentheses would do the same job.
Symptoms:
Examples:
Why it's a tell: Paired em-dashes make every aside feel like a dramatic reveal when it's just a subordinate clause. LLMs scatter them everywhere because they pattern-match on "emphasis" without understanding that not everything deserves it.
Instead: Use commas or parentheses.
Note: A single em-dash for a hard break at the end of a clause is fine: "It worked — barely." It's the matched pair acting as fancy commas that's the problem.
The tell: More than two em-dashes per section, even when used correctly.
Symptoms:
Why it's a tell: The published posts use em-dashes, but moderately. One or two per section is fine. Five per paragraph means you're using them as a crutch instead of writing clearer sentences.
Instead: Use commas, colons, semicolons, or periods. The em-dash is a spice, not a staple.
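The density rule above can be checked mechanically. A minimal sketch, assuming sections are separated by blank lines; the threshold of two matches the guideline, and the section-splitting heuristic is an assumption, not part of the rule.

```python
import re

def em_dash_density(section: str) -> int:
    """Count em-dashes (and double-hyphen stand-ins) in one section of a draft."""
    # Matches the em-dash character itself, or a double hyphen used in its place.
    return len(re.findall(r"\u2014|--", section))

def flag_sections(draft: str, limit: int = 2) -> list[int]:
    """Return indices of sections that exceed the em-dash budget.
    Sections are assumed to be separated by blank lines."""
    sections = re.split(r"\n\s*\n", draft)
    return [i for i, s in enumerate(sections) if em_dash_density(s) > limit]
```

Any flagged section is a candidate for the rewrite described above, not an automatic failure.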
The tell: Announcing what the post is about to do before doing it.
Symptoms:
Why it's a tell: The TLDR handles the preview. The reader clicked the title. They know what the post is about. Announcing it again is filler. The closing variant is the same move in reverse — restating the point the reader just read.
Instead: Just start. Open with the hook, the story, the confession. End with the CTA or the kicker, not a restatement. The reader doesn't need a table of contents narrated to them, and they don't need the post summarized back at them.
The tell: Filler phrases that soften a point without adding information.
Symptoms:
Why it's a tell: These are throat-clearing. They signal the writer isn't confident enough to just make the point. LLMs insert them as politeness padding. The didactic variants are worse — they talk down to the reader, implying they need to be told what's important.
Instead: Delete the hedge and start with the actual point. If it's important, the reader will know because you showed them why, not because you announced it.
Never. Zero. Not even in the TLDR. Not even ironically.
The tell: Specific words and phrases that appear 5-50x more frequently in AI-generated text than in human writing. These are LLM "comfort words" — they sound authoritative to a model but scream machine to a reader.
The watchlist:
Verbs: "delve", "underscore", "highlight" (as verb), "foster", "leverage", "harness", "showcase", "streamline", "navigate" (abstract), "cultivate", "illuminate", "orchestrate", "spearhead", "bolster", "enhance" (when inflating mundane improvements), "garner", "align with", "resonate with", "exemplify", "encompass"
Nouns: "tapestry", "landscape" (abstract), "realm", "journey" (abstract), "ecosystem", "paradigm", "trajectory", "blueprint", "interplay", "intricacies", "testament", "focal point", "commitment" (abstract), "diverse array"
Adjectives: "pivotal", "crucial", "vital", "nuanced", "multifaceted", "robust", "seamless", "comprehensive", "cutting-edge", "groundbreaking", "transformative", "enduring", "vibrant", "meticulous/meticulously", "renowned", "nestled", "profound", "rich" (figurative: "rich history", "rich cultural heritage"), "key" (as adjective: "key role", "key turning point"), "valuable" ("valuable insights")
Inflation phrases: "plays a significant role in shaping", "serves as a testament to", "it is important to note", "a vibrant tapestry of", "in the heart of", "setting the stage for", "reflects broader", "deeply rooted", "marks/represents a significant shift", "evolving landscape", "Additionally," (as sentence opener — research-backed strong tell)
Why it's a tell: Research tracking word frequency before and after ChatGPT shows "delve" spiked 10-50x, "tapestry" and "landscape" (abstract) 5-20x, and "plays a significant role in shaping" appeared 207x more often in AI text. Readers have learned to pattern-match on these words even if they can't articulate why.
Instead: Use plain words. "Delve into" → "look at." "Leverage" → "use." "Navigate the landscape" → "figure out." "Pivotal" → "important" (or better: show why it matters instead of asserting it). "Nuanced" → "detailed" or just describe the actual nuance.
One of these words in a 2,000-word post is fine. Three in a paragraph is a contamination event. Scan for them.
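The watchlist scan lends itself to a mechanical first pass. A minimal sketch, seeded with a few entries from the lists above; extend `WATCHLIST` with the full set. It matches base forms only, so inflected forms like "delving" slip through.

```python
import re

# A few entries from the watchlist above; extend with the full list.
WATCHLIST = ["delve", "tapestry", "leverage", "pivotal", "seamless",
             "robust", "multifaceted", "harness", "foster"]

def scan_watchlist(text: str) -> dict[str, int]:
    """Count whole-word hits for each watchlist term, case-insensitively.
    Base forms only: "delving" and "delved" are not caught by this sketch."""
    lowered = text.lower()
    hits = {}
    for word in WATCHLIST:
        count = len(re.findall(r"\b" + re.escape(word) + r"\b", lowered))
        if count:
            hits[word] = count
    return hits
```

Apply the threshold above to the output: one hit in a long post is fine, clusters are not.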
The tell: LLMs avoid simple "is", "are", and "has" constructions, substituting elaborate verbs that inflate the significance of mundane statements.
Symptoms:
Examples:
Structural variants: The inflated verb doesn't always replace "is" — it can replace any simple verb.
Why it's a tell: These substitutions make everything sound ceremonial. A config file doesn't "stand as" anything. It IS the source of truth. The inflated verb implies the sentence is making a grander point than it actually is.
Instead: Use "is", "are", "has." They're not boring — they're precise.
The tell: Every sentence is roughly the same length and structure. No rhythm variation between short punchy sentences and longer complex ones.
Symptoms:
Why it's a tell: Human writers naturally vary sentence length. A short sentence after a long one creates emphasis. A long sentence after two short ones builds complexity. LLMs produce text with remarkably uniform sentence length — researchers call this "low burstiness." It's one of the most reliable structural signals of AI text, even when the vocabulary is clean.
Instead: Read your paragraphs out loud. If every sentence takes the same number of breaths, rewrite. Break a long sentence into a punchy two-worder. Combine two medium sentences into one that flows. The goal is rhythm, not uniformity.
Good target distribution: roughly 20% short (under 10 words), 50% medium (10-20 words), 30% long (20+ words). You don't need to count — just listen for monotony.
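The distribution above can be estimated as a sanity check. This sketch uses a naive sentence splitter, so treat the numbers as rough guidance for spotting monotony, not a gate.

```python
import re

def length_distribution(text: str) -> dict[str, float]:
    """Rough share of short (<10 words), medium (10-20), and long (>20) sentences."""
    # Naive split on ., !, ? followed by whitespace; good enough for a draft scan.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    counts = {"short": 0, "medium": 0, "long": 0}
    for s in sentences:
        n = len(s.split())
        if n < 10:
            counts["short"] += 1
        elif n <= 20:
            counts["medium"] += 1
        else:
            counts["long"] += 1
    total = len(sentences) or 1
    return {k: v / total for k, v in counts.items()}
```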
The tell: The AI simulates personal encounters, emotional reactions, or sensory moments that never happened. It narrates an internal state to sound human.
Symptoms:
Why it's a tell: Blog posts SHOULD have personal voice and real experience — that's what makes them good. The problem is when the AI manufactures these moments instead of drawing from the author's actual experience. The fabricated version is always vague ("I find this interesting") where real experience is specific ("I tried this at 2 AM and the build broke in a way I'd never seen").
Instead: Personal moments must come from the source material — the transcript, the author's notes, the actual events. If the author said something funny on camera, use that. If you don't have a real moment, don't invent one. Write the point directly instead of wrapping it in fake experience.
The tell: "From X to Y" constructions that claim universal applicability without evidence. The range sounds inclusive but says nothing specific.
Symptoms:
Examples:
Why it's a tell: Real tools have a sweet spot. Real blog posts are honest about who they're for. "From X to Y" is a hedge disguised as inclusivity — it avoids committing to an audience because the AI doesn't know who the audience actually is. It's the equivalent of a restaurant claiming to serve "everyone from toddlers to gourmands."
Instead: Be specific about who this is for and own the limitation.
The tell: The same thing is called by a different name every time it appears. "The CLI" becomes "the tool" becomes "the interface" becomes "the command-line solution" across consecutive paragraphs — all referring to the exact same thing.
Symptoms:
Examples:
Structural variants: Cycling can happen within a single sentence or across sections, not just adjacent paragraphs.
Why it's a tell: LLMs are trained with repetition penalties that discourage reusing tokens. So the model reaches for synonyms even when the original word was the right one. In technical writing, this creates confusion — the reader wonders whether "the tool" and "the interface" are the same thing or different things. Repetition of the precise term is clarity. Variation with imprecise synonyms is noise.
Instead: Use the same word. "The CLI" is "the CLI" every time. If it appears too often in a passage, restructure the sentences or use a pronoun — don't swap in a vaguer synonym.
The tell: The draft contains Unicode characters that a human typing in a text editor would not produce. These are character-level fingerprints of LLM-generated text.
Characters to scan for:
\u201C (") and \u201D (") — instead of straight " (U+0022)
\u2018 (') and \u2019 (') — instead of straight ' (U+0027)
\u2026 (…) — instead of three dots ...
\u2022 (•) — instead of markdown - or *
\u2013 (–) — instead of a hyphen or em-dash
Why it's a tell: A human writing in a text editor, markdown file, or code editor produces straight quotes, three dots, and hyphens. The CMS or publishing platform converts these to typographically correct characters for the reader. When the markdown source already contains curly quotes or a Unicode ellipsis, it means the text was generated by an LLM (ChatGPT in particular outputs curly quotes by default).
Instead: Use plain ASCII in the draft. Straight quotes, three dots, hyphens. Let the publishing system handle typography. During the anti-pattern scan, search the draft file for these Unicode characters and replace any that appear.
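The scan-and-replace step can be scripted. A minimal sketch covering the characters listed above; note that it deliberately leaves the em-dash (U+2014) alone, since a single intentional em-dash is allowed by the earlier rule.

```python
# Map of LLM-fingerprint Unicode characters to plain-ASCII replacements.
ASCII_MAP = {
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u2026": "...",                # ellipsis
    "\u2022": "-",                  # bullet
    "\u2013": "-",                  # en dash
}

def scan_unicode(text: str) -> dict[str, int]:
    """Report how many of each fingerprint character the draft contains."""
    return {ch: text.count(ch) for ch in ASCII_MAP if ch in text}

def to_ascii(text: str) -> str:
    """Replace every fingerprint character with its ASCII equivalent."""
    for ch, repl in ASCII_MAP.items():
        text = text.replace(ch, repl)
    return text
```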
The tell: A sentence ends with a present-participle ("-ing") clause tacked on as fake analysis. The clause sounds like it's adding insight but says nothing the main clause didn't already say.
Symptoms:
Examples:
Why it's a tell: LLMs append these clauses to make simple statements sound analytical. The participle phrase restates the main clause's implication as if it's a separate insight. "Reduces build times by 40%" already means "enhances developer productivity" — saying both is redundant inflation.
Instead: End the sentence at the real point. If the implication is worth stating, give it its own sentence with specific evidence.
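A crude flag for this pattern: look for a comma followed by an "-ing" word in a sentence's final clause. This heuristic also fires on legitimate participle clauses and on non-participles like "during", so hits are review prompts, not automatic rewrites.

```python
import re

# Heuristic: a comma, then an "-ing" word that opens the sentence's closing clause.
PARTICIPLE_TAIL = re.compile(r",\s+(\w+ing)\b[^.!?]*[.!?]")

def trailing_participles(text: str) -> list[str]:
    """Return the '-ing' words that open a trailing clause after a comma.
    Heuristic only: legitimate participle clauses match too, so review each hit."""
    return PARTICIPLE_TAIL.findall(text)
```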
The tell: Mundane facts are framed as historically important moments, broader shifts, or lasting legacies. The sentence asserts significance instead of showing it.
Symptoms:
Examples:
Why it's a tell: LLMs inflate significance because they pattern-match on authoritative encyclopedia prose. A version bump is not a pivotal moment. A migration is not a significant shift. These phrases are significance-assertions without evidence — the written equivalent of an applause sign.
Instead: Show the impact with specifics. Let the reader decide if it's significant.
The tell: A formulaic structure: acknowledge positives, pivot to challenges with "Despite," then resolve with vague optimism about the future.
Symptoms:
Examples:
Why it's a tell: This is a structural formula, not analysis. The "despite" pivot is a template the LLM fills in for any subject. Real analysis of challenges names specific problems and proposes specific solutions. The sandwich structure exists to sound balanced without committing to an actual opinion.
Instead: If there are real challenges, name them specifically and say what you'd do about them. If there aren't, don't manufacture them for "balance."
The tell: Bullet points where each item starts with a boldfaced term followed by a colon and then a description. The format is inherited from READMEs, sales pages, and how-to guides.
Symptoms:
Examples:
Why it's a tell: This format is a glossary pretending to be prose. It signals that the content was generated as a list of definitions rather than written as part of a narrative. Blog posts should flow, not read like a feature matrix.
Instead: Integrate the points into the narrative. If a list is genuinely the right format, use plain bullets without the bold-colon template.
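This pattern is easy to detect mechanically, since the bold-colon template has a fixed markdown shape. A minimal sketch; it assumes standard "-" or "*" bullets and catches the colon whether it sits inside or outside the bold markers.

```python
import re

# Matches bullets like "- **Term:** description" or "* **Term**: description".
BOLD_COLON = re.compile(r"^\s*[-*]\s+\*\*[^*\n]+?(?::\*\*|\*\*\s*:)", re.MULTILINE)

def count_bold_colon_bullets(text: str) -> int:
    """Count markdown bullets that follow the bold-term-colon template."""
    return len(BOLD_COLON.findall(text))
```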