
jbaruch/blog-writer

Write developer blog posts from video transcripts, meeting notes, or rough ideas. Extracts narrative from source material, structures content with hooks and technical sections, formats code examples with placeholders, and checks drafts against 31 AI anti-patterns with structural variant detection, three-pass scanning (surface, skeleton, soul check), and rewrite auditing. Auto-updates anti-pattern list from Wikipedia before each session. Includes interactive onboarding to learn the author's voice from writing samples. Persona files live at ~/.claude/blog-writer-persona/ by default, with symlink support for custom locations (e.g. Google Drive for backup). Optional global voice saves your voice profile to Claude Code user memory so it applies across all projects. Use this skill whenever the user wants to write a blog post, draft a blog, turn a transcript into a blog, work on blog content, or mentions "blog" in the context of content creation. Also trigger when the user provides a video transcript and wants written content derived from it, or when continuing work on a blog series.

Score: 97 (1.43x)

Quality: 94% (Does it follow best practices?)
Impact: 99% (1.43x)

Average score across 7 eval scenarios

Security by Snyk: Advisory (suggest reviewing before use)


evals/scenario-6/task.md

Clean Up a Blog Draft for Publication

Problem/Feature Description

A developer blog editor received a draft from a writer who used AI assistance heavily during the writing process. The draft covers a real technical topic -- migrating to trunk-based development -- and the narrative arc is solid, but the prose has quality issues throughout that make it read like AI-generated text rather than authentic developer writing.

Review the draft below for writing quality problems. Produce a cleaned-up version that preserves the technical content and narrative structure but fixes the prose quality issues. Also produce a detailed report documenting every issue you found, with the original text and your replacement.

Output Specification

Produce the following files:

  • cleaned-draft.md -- the full draft with all prose quality issues fixed
  • edit-report.md -- a report listing each issue found, with: the original problematic text, what the problem is, and the replacement text

Input Files

The following files are provided as inputs. Extract them before beginning.

=============== FILE: inputs/draft-to-clean.md ===============

How Our Team Adopted Trunk-Based Development

In this post, we’ll explore how our team navigated the transition to trunk-based development. What follows is a deep dive into the lessons we learned, the tools we leveraged, and the streamlined workflow we built along the way.

The Problem

Not a branching strategy. A survival mechanism.

Our team of eight engineers had been running GitFlow for three years. Release branches, hotfix branches, feature branches that lived for weeks. Merge conflicts were a daily ritual. It’s worth noting that we spent more time resolving conflicts than writing code some weeks. Interestingly enough, nobody questioned the process until our deploy frequency dropped to once every two weeks.

The CI pipeline – which had been configured by a contractor two years ago – served as the backbone of our entire release process. It boasted a grand total of 340 test cases and stood as a testament to our commitment to quality. The integration system showcased automated linting and type checking across all branches.

To be fair, GitFlow had worked when we were four people. But at eight engineers with three active feature branches, the build infrastructure was buckling. It bears mentioning that the merge queue alone added two hours to every release.

The Decision To Embrace Change

Where feature branches give you isolation, trunk-based gives you speed. Where long-lived branches give you safety, short-lived ones give you momentum. One approach trusts the branch. The other trusts the team.

The irony? We'd spent three years building a branching strategy to prevent exactly the kind of chaos that the branching strategy was causing. The beauty of it is that the solution was simpler than the problem.

We decided to adopt trunk-based development with feature flags. The result? Zero rollbacks in the first month. The best part? Everyone could deploy independently. And the tests? All green, all the time.

Fast. Reliable. Tested. And shipped on a Monday.

Feature flags. Dark launches. Percentage rollouts. Pure control.
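The percentage rollouts the draft mentions are commonly implemented by hashing the user ID into a stable bucket, so a given user gets a consistent verdict and raising the percentage only ever adds users. A minimal sketch, assuming this bucketing approach (the flag name and user IDs below are illustrative):

```python
import hashlib

def in_rollout(user_id: str, flag: str, percent: float) -> bool:
    """Stable percentage rollout: hash user+flag into a bucket in [0, 100)."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < percent

# Hashing flag and user together decorrelates buckets across flags,
# so one user isn't first (or last) into every rollout.
enabled = in_rollout("user-42", "trunk-ui", 25.0)
```

Because the verdict is deterministic, ramping from 1% to 25% keeps the original 1% enabled rather than reshuffling who sees the feature.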

The code doesn’t define the process; the process defines the code.

The Migration

The pipeline – which we had built over three sprints – handled the transition gracefully. The config – our most critical file – was updated to support the new workflow. The tests – all 200 of them – passed on the first run. The monitoring – our Datadog setup – confirmed zero regressions.

We reviewed the pipeline configuration. We updated the branch protection rules. We enabled the feature flag service. We migrated the first three services. We monitored for regressions closely. We documented the new workflow thoroughly. We trained the team on the process.
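The "branch protection rules" step above maps to GitHub's branch protection REST endpoint (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`). A sketch of the payload such an update might carry; the check names are placeholders, and a real call would need authentication and error handling:

```python
# Payload shape for GitHub's update-branch-protection endpoint.
# The status-check contexts below are hypothetical CI job names.
def protection_payload(contexts):
    return {
        "required_status_checks": {"strict": True, "contexts": contexts},
        "enforce_admins": True,
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "restrictions": None,  # no push restrictions beyond the checks
    }

payload = protection_payload(["ci/lint", "ci/test"])
```

With `strict: True`, branches must be up to date with `main` before merging, which is what keeps short-lived branches honest in a trunk-based flow.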

Industry experts widely agree that trunk-based development represents the future of collaborative engineering. According to recent reports, teams that adopt this approach see dramatic improvements in velocity and quality. Many seasoned practitioners have noted the shift.

What I find genuinely interesting about this transition is that it fundamentally transforms how you think about the development landscape. I keep coming back to the idea that trunk-based development isn’t just a technical choice… it’s a cultural one. The team delved into the codebase with renewed energy, navigating the complexity of the migration and fostering a more robust and seamless development experience. This pivotal shift was nothing short of transformative.

Results And Future Outlook

We shipped the new pipeline on a Thursday 🚀 and by Monday our deploy frequency had tripled 🔥. The team was finally able to ship features daily instead of biweekly 🎉.

Not just a workflow. A philosophy.

From solo developers just learning version control to hundred-person engineering orgs running multi-region deployments, this approach works. Whether you’re running a side project or a Fortune 500 deployment pipeline, the same principles apply. The automation layer handles everything regardless of scale.

  • Deploy frequency went from biweekly to daily
  • Merge conflicts dropped by 90%
  • Mean time to recovery fell from 4 hours to 20 minutes

The build infrastructure proved that small, frequent commits beat large, infrequent merges every time. Deploy frequency went from biweekly to daily, a 1,400% improvement, saving roughly 80 engineering hours per month across the team.

But here's the thing: the technical migration was the easy part. And here's the most interesting part: the culture shift took three times longer than the code changes.

The implications for our engineering org are significant. The direction is clear. This changes everything about how we ship.

We proved that trunk-based works. We proved that feature flags prevent rollbacks. We proved that small commits reduce conflicts. The key takeaway: if you're still on GitFlow with more than four engineers, you're paying a tax you don't need to pay.

This migration marked a pivotal moment in our team's engineering journey, fundamentally reshaping how we approached software delivery.

Despite the initial learning curve, trunk-based development continued to thrive in our organization. Despite these challenges, future improvements to our CI pipeline could further enhance our deployment capabilities.

The future of our deployment pipeline looks brighter than ever. Exciting times lie ahead for the entire engineering organization.

We used feature flags to control rollout percentages, configured branch policies through our CI pipeline, and tracked deploy metrics on the Datadog dashboard. The automated testing suite ran on every commit, giving us confidence in each deployment.

Here's what we learned:

  • Feature flags: Control what users see without deploying new code
  • Short-lived branches: Keep branches under 24 hours to avoid merge conflicts
  • Automated testing: Run the full suite on every commit to catch issues early
  • Incremental rollouts: Deploy to 1% of users first, then scale up gradually
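The incremental-rollout bullet above can be sketched as a staged ramp with a health-check gate at each stage. This is an illustrative skeleton, not the draft's actual tooling; `set_percent` and `healthy` stand in for the flag service and the monitoring check:

```python
import time

# Illustrative stages: 1% of users first, then scale up gradually.
STAGES = [1, 5, 25, 100]

def ramp_flag(set_percent, healthy, stages=STAGES, wait_s=0):
    """Raise a flag through each stage, backing out on a failed health check.

    set_percent(p) applies the rollout percentage; healthy() consults
    monitoring. Both are injected so the sketch stays testable.
    """
    for pct in stages:
        set_percent(pct)
        if wait_s:
            time.sleep(wait_s)  # let metrics accumulate before judging
        if not healthy():
            set_percent(0)      # kill switch: back to 0%, no redeploy
            return False
    return True
```

Because exposure is controlled by the flag rather than a deploy, backing out is a config change instead of a rollback, which is where the "zero rollbacks" claim earlier in the draft comes from.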

Our deploy frequency tripled in the first month, significantly enhancing our team's velocity and demonstrating the value of continuous delivery practices.


Jordan Kim is a senior engineer at ShipFast, where they build deploy pipelines that occasionally deploy on time. Previously spent five years at a consulting firm where “trunk-based development” meant everyone committed to main and hoped for the best. Once mass-deployed to production during a company all-hands and discovered that “feature complete” and “working” are different concepts.

=============== END INPUT ===============

Files: evals/, README.md, SKILL.md, tessl.json, tile.json