As AI coding agents take flight, what does this mean for software development jobs?

8 Jan 2026 · 13 minute read

Paul Sawers

Freelance tech writer at Tessl, former TechCrunch senior writer covering startups and open source

AI is shaping up to affect all manner of industries, perhaps none more directly than software development and engineering. Over the past few years, most developers have become familiar with copilot-style assistants from the likes of GitHub, Cursor, or Windsurf that sit inside an editor and suggest code as you type.

However, the industry is entering an era where AI systems are increasingly able to carry out longer sequences of work across the software development lifecycle: writing code, running tests, making changes across a repository, and iterating based on feedback. These more end-to-end, “agentic” systems don’t just respond to prompts; they take action within defined boundaries — blurring the line between assistance and execution.

‘Are we cooked?’: Abstraction reshapes how software gets built

But if AI is doing more of the software-building grunt work, what does that mean for the humans involved? It’s a question that’s front of mind for just about everyone in the industry. Anthony Goto, a staff engineer at Netflix, addressed it directly on TikTok a few weeks back. The most common question he hears from new graduates and early-career engineers, he said, is whether they’ve made a mistake entering software development just as AI tools are accelerating. “Are we cooked?” is how he succinctly summed up the concern.

What’s notable about Goto’s response is that he doesn’t try to reassure people by downplaying how capable AI tools are becoming. Instead, he argues that more powerful tools change what engineers spend their time solving.

To make that case, he refers back to John Carmack, a pioneering game developer behind early first-person shooters like Doom and Wolfenstein. In the early 1990s, Carmack and his peers had to solve extremely low-level problems — including hand-optimising difficult mathematical operations like inverse square roots — just to make real-time 3D graphics work. Writing a game meant wrestling directly with hardware limits and CPU cycles.
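The best-known example of this kind of hand optimisation is the “fast inverse square root” routine later popularised by id Software’s Quake III Arena source release (its exact authorship is disputed, and it postdates Doom). The sketch below follows the widely circulated version, with the modern `memcpy` idiom swapped in for the original’s pointer cast to stay within defined C behaviour:

```c
#include <stdint.h>
#include <string.h>

/* Approximate 1.0f / sqrtf(x) without calling the math library.
 * The bit-level trick: reinterpret the float as an integer, use the
 * magic constant 0x5f3759df to produce a rough guess (it effectively
 * halves and negates the exponent), then refine with one
 * Newton-Raphson iteration. */
float fast_rsqrt(float x) {
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);        /* view the float's raw bits */
    i = 0x5f3759df - (i >> 1);       /* initial approximation */
    float y;
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - half * y * y);   /* one Newton-Raphson step */
    return y;
}
```

The payoff in the 1990s was avoiding a slow divide and square root on every surface-lighting calculation; the single Newton step keeps the error under roughly 0.2%, which was plenty for graphics. On modern hardware a dedicated instruction or plain `1.0f / sqrtf(x)` is usually faster, which is exactly Goto’s point about abstraction absorbing this layer of work.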

Over time, faster machines and higher-level game engines abstracted that work away. Developers no longer needed to be mathematical savants, but game development didn’t disappear. Instead, projects grew larger and more ambitious, and many more of them became viable.

Goto frames AI in similar terms: “a higher level of programming language,” as he puts it, one that lets everyone contribute more to the process. Lowering the barrier to building software doesn’t satisfy demand, he argues — it fuels it. “Our hunger for more functionality, more apps, more ecosystems… just gets higher and higher and higher,” he said.

Even if AI systems increasingly handle the mechanics of building software, Goto’s point is that ambition expands alongside them. The bottleneck shifts from “can this be built at all?” to “what should be built, and how complex should it be?” He’s careful to note that this isn’t guaranteed, however, conceding that he could be wrong and that “the world could end” — but his argument rests on a familiar pattern in software history: abstraction tends to widen the scope of what people attempt, not close it off.

“People are going to be able to build things way faster, and we're going to see a massive renaissance in creating amazing things,” Goto added.

Jevons’ paradox: Efficiency makes more software projects viable

This line of thinking aligns closely with that of Box CEO Aaron Levie, who argues that AI’s most profound impact may come not from replacing work outright, but from radically lowering the cost of doing it. Levie frames this through the lens of Jevons’ paradox — a 19th-century economic idea which holds that efficiency gains often lead to greater overall consumption, not less (Jevons’ paradox has in fact been a recurring reference point in AI circles over the past year).

Put simply, as tasks become cheaper and easier to perform, demand tends to expand rather than contract – a dynamic that, applied to software development, suggests more projects become viable.

“Imagine the 10-person services firm that didn't have any custom software before for their business,” Levie wrote on LinkedIn last week. “From a standing start, it may have taken multiple people to develop a full app, keep it running, keep customer requests incorporated, ensure the software stays secure and robust, and so on. The project just doesn't even get started because of this. Now, someone on the team builds a prototype in a few days, proves out the value proposition in a matter of days. You can analogize this to any other type of work or task in an organization.”

At first glance, that surge in experimentation might appear to strengthen the case that AI itself will simply absorb the additional demand. But Levie argues that while AI can automate individual tasks, producing real value still depends on humans pulling those pieces together into something coherent and durable.

“The reality is that despite all the tasks that AI lets us automate, it still requires people to pull together the full workflow to produce real value,” Levie said. “AI agents require management, oversight, and substantial context to get the full gains. All of the increases in AI model performance over the past couple of years have resulted in higher quality output from AI, but we're still seeing nothing close to fully autonomous AI that will perfectly implement and maintain what you're looking for.”

‘Supervised collaboration’: Repricing engineering skills

That overarching sentiment was echoed this week by Ruby on Rails creator David Heinemeier Hansson, who has been testing AI coding agents in real production settings and, it’s fair to say, is totally sold on them.

“At the end of last year, AI agents really came alive for me,” Hansson wrote. “Partly because the models got better, but more so because we gave them the tools to take their capacity beyond pure reasoning.”

However, while Hansson believes these tools have crossed a key threshold, he remains cautious about overstating their autonomy.

“They’re fully capable of producing production-grade contributions to real-life code bases,” he wrote, before adding that “pure vibe coding remains an aspirational dream” for professional work. What has arrived, Hansson argues, is not hands-off automation but “supervised collaboration,” where humans remain responsible for direction, quality, and long-term coherence.

“It all depends on what you're working on, and what your expectations are,” Hansson said. “The hype train keeps accelerating, and if you bought the pitch that we're five minutes away from putting all professional programmers out of a job, you'll be disappointed.”

Elsewhere, in his latest edition of The Pragmatic Engineer newsletter, Gergely Orosz argues that while AI is likely to write a growing share of code, its impact on employment will be less a wave of job losses than a reshuffling of which skills are scarce and valuable.

He suggests that some forms of expertise — particularly narrow implementation knowledge, prototyping, or familiarity with specific frameworks or languages — may decline in relative value as AI systems make that work easier to generate.

“Being a language polyglot will probably be less valuable,” Orosz writes. “With AI writing most of the code, the advantage of knowing several languages will become less important when any engineer can jump into any codebase and ask the AI to implement a feature – which it will probably take a decent stab at. Even better, you can ask AI to explain parts of the codebase and quickly pick up a language much faster than without AI tools.”

At the same time, he sees demand increasing for engineers who can operate at a broader level: understanding product context, maintaining quality as code volume rises, and integrating AI-generated output into systems that still need to run reliably in production. This doesn’t make the job easier, so much as it changes where the difficulty sits — shifting effort toward judgment, prioritisation, and stewardship rather than manual implementation.

The human touch: Agent enablement

The long and short of all this is that AI systems are already absorbing more of the mechanical work involved in producing software, and the evidence suggests that trend will continue. But that doesn’t automatically translate into fewer engineering jobs — it points to a reconfiguration of where value sits.

Part of that shift is being driven by steady improvements in the underlying models themselves. Newer systems are becoming faster, cheaper, and more capable across multiple languages, making it practical to use them continuously across the development process. Models such as Google’s Gemini 3 Flash and MiniMax’s M2.1, for example, are explicitly optimised for speed, cost efficiency, and multilingual software work — characteristics that make them suitable for being embedded deeply into everyday tooling.

At the same time, tooling is bringing previously separate parts of the software lifecycle closer together. Recent moves, like Cursor’s acquisition of Graphite and Amp’s addition of agentic code review to its coding toolkit, reflect a push to let AI systems operate across stages that were once handled by different teams.

Where this begins to matter most for human developers is in how these systems are coordinated and sustained over time. As agents take on longer-running tasks, questions of context engineering — how systems are given the right information, constraints, and historical knowledge — become central. Without durable memory, agents remain shallow. That’s why a growing number of teams are treating memory as a first-class problem. Tessl, for example, is focusing on providing structured specifications and shared context via its registry.

Agent enablement is the name of the game here. Models get better. Agents get broader scope. Tools collapse silos between writing, reviewing, and maintaining code. But humans remain responsible for the underlying infrastructure, the plumbing that holds these systems together: they shape intent, supply context, decide what matters, and own the outcomes. The work shifts from typing every line by hand toward steering systems that can act — but only as well as the constraints, memory, and judgment they’re given.

What seems clear is that the developer’s role is shifting from pure implementation toward operating and maintaining complex, AI-augmented systems. That means more time spent on infrastructure, coordination, and long-term system health — ensuring agents have the right context, constraints, and memory to behave predictably, and stepping in when they don’t.
