
Why ‘it’s better to overshoot than undershoot’ in the age of agentic coding
2 Feb 2026 · 11 minute read

Software engineering has always been in flux, but rarely at this pace. In just a few short years, AI has pushed code generation from a labour-intensive, human-centric activity to something cheap, fast, and increasingly automated.
While this shift heralds exciting times for the builders of the world, it also raises questions around the best way to respond. Established processes are under strain, long-held assumptions are under scrutiny, and it’s not entirely obvious how much caution — or speed — is appropriate.
In all that uncertainty, there exists a choice. Teams can move cautiously and risk falling behind, or they can push ahead and accept that mistakes will happen along the way. However, there is a growing sense that the risks of undershooting may now outweigh the pitfalls of overshooting.
That’s certainly the view of Gabor Soter, founder of Palindrom, a London-based generative AI consultancy, who penned a recent LinkedIn post laying out his plans to go all-in on AI coding tools, arguing that software engineering is being reshaped faster than teams can adapt, and that waiting for best practices to settle is itself a risk.
“There has never been a profession that has been disrupted as fast as software engineering is today,” Soter wrote. “There are no playbooks, no best practices - the rules are being written as we speak. In such uncertainty, it’s impossible to find the right balance: we can either undershoot (and risk being disrupted) or overshoot (and make mistakes, like shipping bad code). Either of these will happen, and I’d personally rather be on the overshooting side than the undershooting one.”
Tessl caught up with Soter to dig further into some of his musings on how he’s responding to the rapid rise of AI coding tools, including where he’s choosing to move faster, where he’s trying to stay cautious, and what trade-offs that creates for teams.
When LLMs left the lab
To rewind just a little, Soter completed a PhD in machine learning and autonomous systems at the University of Bristol back in 2020, working on robotics-related problems spanning computer vision, time series forecasting, and sensing for autonomous systems. That work concluded just as large language models (LLMs) were moving out of research settings into broader commercial use.
Soon after finishing his doctorate, Soter served as CTO of a healthcare AI company, where he said he cold-emailed OpenAI co-founder Greg Brockman – a move that led to his team working directly with OpenAI’s early models, including GPT-3.
“I didn’t end up talking to Greg directly, but one of his staff got back to me,” Soter explained. “I sent them a three-minute Loom video about our startup, and after that we jumped on a call with OpenAI and started working with them for a few months.”
That early access didn’t translate into immediate breakthroughs. At the time, Soter says, models such as OpenAI’s GPT-3 and Codex were difficult to use reliably.
“We got access to what was probably the very first version of Codex, and it didn’t really work,” Soter said. “I remember trying to translate a SQL query to Cypher, and if I had more than 10 lines, it just stopped working.”
Fast-forward to today, and large language models are a very different proposition. Newer generations of models from companies such as OpenAI, Anthropic, and Google have become far more capable, particularly when paired with tools that let them write, run, and modify code across larger projects.
Key figures from across the industry have noticed this shift of late, with the likes of Ruby on Rails creator David Heinemeier Hansson recently describing a clear change toward the tail end of 2025.
“At the end of last year, AI agents really came alive for me,” Hansson wrote. “Partly because the models got better, but more so because we gave them the tools to take their capacity beyond pure reasoning.”
Soter, for his part, says that turning point is increasingly hard to ignore. “I think most of the anti-AI people are now turning, and kudos to them, because they’ve been open-minded and low-ego,” he said.
A product of its environment
Founded as a generative AI consultancy in 2022, Palindrom works with a small number of clients at a time, typically in high-complexity industries such as healthcare, finance, insurance, and engineering. The common thread, Soter says, isn’t experimentation for its own sake, but helping organisations stand up AI-powered software and get it into production quickly, often before they have built dedicated internal AI teams.
That means avoiding open-ended consulting engagements in favour of a tightly defined, time-bound model focused on speed and handover.
“We don’t want to be this traditional consultancy that stays with you — we want to work with you for, like, 18 to 24 months,” Soter said. “We bring in expertise so you can hit the ground running very quickly, and very often we put an AI product into production within three months. That’s our main value proposition. After that, we’ll build you a couple of products, but around month 12 you start hiring your own AI engineering team, and we’ll help you with that.”
That focus is shaping how Palindrom plans to operate over the next year. Looking ahead to 2026, Soter says the company is deliberately rethinking how software engineering work is organised internally, with a stronger emphasis on product judgment as AI-driven development accelerates.
One part of that shift is treating software engineering itself as an internal product. Rather than optimising for individual tasks or roles, Palindrom is putting more effort into designing end-to-end processes that solve concrete problems for clients, and then iterating on those processes as conditions change.
That organisational shift, Soter argues, also changes what teams should look for in the people doing the work. As AI takes on more of the mechanics of writing code, he believes engineers need to bring stronger product judgment to the table.
“I think there are still a lot of people in the industry who are ‘just engineers’,” Soter said. “And I don’t think if you work in SaaS that doing ‘just engineering’ is going to be enough anymore. The best engineers I work with have a great product sense – they can have a conversation with technical and non-technical stakeholders, understand the problem, and turn that into a product vision — and then go down and do the implementation.”
This is where Soter’s emphasis on so-called “T-shaped” engineers comes in. By that, he means people with deep expertise in at least one area, but who are also comfortable moving across disciplines. As the mechanics of writing code are increasingly delegated to AI tools, he argues, the ability to move up and down the stack quickly becomes invaluable.
“I think the best people are going to be able to orchestrate 15 AI agents in the morning, and then have a product review in the afternoon, and maybe jump on a call with a more technical stakeholder or even sell the thing the day after that,” he said.
Why moving too slowly is the bigger risk
The key takeaway from all of this is that “overshooting,” as Soter calls it, isn’t about recklessness or blind faith in new tools. It’s a recognition that, in a world where the cost of building software is becoming negligible, the bigger risk is standing still. Teams can debate process, wait for consensus, and move carefully — but that caution comes with a very real cost.
“Other people are just gonna be able to work faster,” Soter said. “I’ve seen this already, where our engineers have managed to reproduce somebody else’s monthly output in days.”
There is also a more immediate, practical consequence: cost. As productivity gaps widen, pricing follows — and with it, trust. “If you charge 100k for a project, and somebody else charges 20k, and you’re both delivering similar quality, you risk a lot of the good faith that you’ve built up in your industry,” Soter said.
Of course, overshooting isn’t without its risks. Moving faster inevitably increases the likelihood of mistakes, from poorly structured code to more serious security issues — particularly in domains where sensitive data is involved. Soter points to healthcare and finance as obvious pressure points, where errors around personally identifiable information (PII) or system integrity can’t be ignored.
There is also a more subtle danger. As AI takes on more of the work of writing software, Soter warns that teams risk converging on the same patterns and designs. “If you outsource your thinking to the AI,” he said, “you can end up having the same SaaS products with the same design over and over again.”
For Soter, the answer isn’t to slow down entirely, but to recognise these risks and design around them — investing more heavily in verification, observability, security checks, and product judgment.
“If the cost of code generation is going down to zero, then the bottleneck is going to be verification,” Soter said. “It’s not just ‘does this code work?’ There are security verifications, and also product questions — is this actually a good product, or just an average one?”
As AI continues to reshape how software is built, where teams choose to place those guardrails may prove just as important as how fast they move.