From AI Assistants to Agents: How Sourcegraph is Transforming Enterprise Development

Join us for an enlightening conversation with Quinn Slack, CEO and co-founder of Sourcegraph, as we explore the transformative impact of AI on software development. Discover how AI is reshaping coding practices and the future of developer roles.

Episode Description

In this episode of the AI Native Dev podcast, host Dion Almaer sits down with Quinn Slack, a trailblazer in the tech industry and CEO of Sourcegraph. Known for his contributions to enhancing developer tools, Quinn shares his insights on the inception of Sourcegraph, its evolution with AI integration, and the journey towards creating the advanced AI tool, Cody. From initial challenges in code search to the development of agentic AI, Quinn provides a comprehensive look at how Sourcegraph is pioneering AI-driven software development. Listeners will gain valuable knowledge on the role of AI in automating coding tasks, the importance of a collaborative developer mindset, and the future prospects of AI in the industry.

Chapters

1. [00:00:00] Introduction to Quinn Slack and Sourcegraph
2. [00:02:00] The Early Days of Sourcegraph and Code Search
3. [00:05:00] The Impact of AI Tools like ChatGPT and Cody's Creation
4. [00:07:00] Challenges in Code Retrieval and AI Integration
5. [00:10:00] The Evolution of Cody and Enterprise AI Impact
6. [00:14:00] The Importance of Prompt Libraries in AI Development
7. [00:18:00] Agentic AI: Automating Software Development Tasks
8. [00:25:00] Shifting Developer Mindset and Embracing AI
9. [00:29:00] Measuring Success and ROI in AI Tools
10. [00:42:00] Future Prospects of AI in Software Development

1. Early Days of Sourcegraph

The inception of Sourcegraph was inspired by Google's internal Grok code search system, which Quinn's co-founder experienced firsthand at Google; Quinn himself felt the same pain working inside two large U.S. banks. He describes the early challenges of navigating massive codebases, which highlighted the dire need for efficient code search solutions. Quinn recounts the initial goal of Sourcegraph: to accelerate human developers and, eventually, to automate parts of software development. "We wanted to go and solve that. We wanted to accelerate human developers," Quinn explains. This ambition laid the groundwork for what Sourcegraph would eventually become: a pivotal tool in modern software development.

Sourcegraph's journey began with the vision of making codebases more navigable for developers. In those early days, developers struggled with understanding and maintaining large volumes of code. This struggle was further compounded by the repetitive nature of certain tasks, which Sourcegraph aimed to alleviate. The team envisioned a tool that could not only search code but also offer insights that were previously buried within vast codebases. This foresight was instrumental in addressing one of the biggest challenges in the software industry—making code more accessible and manageable.

2. The Role of Code Search in AI

Code search was not only a foundational step for Sourcegraph but also crucial in the subsequent integration of AI functionalities. Quinn shares a personal anecdote of using Sourcegraph tools, which saved significant development time and effort. "Two weeks into building it, it had already saved me two weeks," he recalls. This transition from traditional code search to advanced AI capabilities marked a significant turning point, allowing developers to understand and generate code more efficiently.

Sourcegraph's code search capabilities laid the groundwork for incorporating AI, enabling a transition to more advanced functionalities. The ability to index and search across massive codebases provided a fertile ground for AI to thrive. This integration allowed developers to not only find code snippets faster but also understand the context and functionality of those snippets. The AI-enhanced search capabilities are what set Sourcegraph apart from traditional code search tools, making it an indispensable resource for developers looking to improve their efficiency and output.

3. The AI Revolution and the Birth of Cody

The emergence of AI tools like ChatGPT marked a revolutionary moment in software development. Quinn reflects on Sourcegraph's collaboration with Anthropic and the creation of Cody, an AI that enhances code explanation, generation, and testing. "We knew that was the future," he states, emphasizing the transformative potential of AI in coding. Sourcegraph's foresight in leveraging AI positioned it as a leader in this rapidly evolving space.

Cody represents a significant leap forward in AI-assisted software development. Initially, AI tools were limited to basic autocomplete functions, but Cody expanded on these capabilities by offering comprehensive code explanations and automated test generation. This evolution was a game-changer, as it allowed developers to not only automate repetitive tasks but also gain deeper insights into their code. Cody's capabilities have since evolved, continually adapting to the needs of developers and the complexities of modern software development.

4. Challenges and Innovations in Code Retrieval

Quinn highlights the unique challenges of code retrieval compared to text retrieval, noting that code requires specialized handling; as Dion puts it, "code isn't just text in a document." The evolution from basic retrieval techniques to sophisticated, agentic methods has been crucial for Sourcegraph's continued success. Their code index and retrieval methods remain invaluable tools for developers, bridging the gap between traditional coding and AI-enhanced processes.

Effective code retrieval is pivotal in creating a seamless development experience. Unlike text retrieval, which relies on straightforward keyword matching, code retrieval demands a deeper understanding of syntax, dependencies, and context. Sourcegraph's innovations in this area have been instrumental in overcoming these challenges, providing developers with the tools they need to efficiently navigate complex codebases. The continuous refinement of retrieval methods ensures that Sourcegraph remains at the forefront of AI-driven development tools.
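
To make the contrast concrete, here is a minimal sketch in Python of the difference the episode draws between naive text chunking and retrieval that respects code structure. It is an illustration only, not Sourcegraph's implementation (the episode doesn't describe their internals); the regex-based splitter stands in for the real parsers and embedding models a production system would use:

```python
import re

def naive_chunks(text: str, size: int = 200) -> list[str]:
    # Fixed-size chunking, common for prose documents: splits blindly,
    # so a function body is often cut off from the signature that
    # explains what it does.
    return [text[i:i + size] for i in range(0, len(text), size)]

def syntax_aware_chunks(source: str) -> list[str]:
    # Split at top-level definition boundaries instead, so each chunk
    # is a complete unit (a whole function or class) that an embedding
    # or keyword index can represent faithfully.
    cuts = [m.start() for m in re.finditer(r"^(?:def |class )", source, re.MULTILINE)]
    if not cuts or cuts[0] != 0:
        cuts.insert(0, 0)  # keep any leading imports as their own chunk
    cuts.append(len(source))
    return [source[a:b].strip() for a, b in zip(cuts, cuts[1:]) if source[a:b].strip()]

example = '''import math

def area(r):
    return math.pi * r ** 2

class Circle:
    def __init__(self, r):
        self.r = r
'''

print(naive_chunks(example, 40))     # arbitrary slices, cut mid-function
print(syntax_aware_chunks(example))  # one chunk per top-level definition
```

As Quinn notes later in the conversation, even this is only a starting point: modern context fetching layers agentic follow-up queries on top of the index rather than trusting a single retrieval pass.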

5. Enterprise Impact and the Prompt Library

Consistency in prompts across an organization is vital for maximizing AI effectiveness, as Quinn notes. The introduction of Sourcegraph’s prompt library aids in sharing and standardizing prompts, significantly enhancing AI outputs. Quinn shares an example of successful enterprise-level implementation: "We have customers where they're driving more than 80 percent of their chat usage through the prompt library." This standardization leads to improved quality and consistency in AI-driven development.

The prompt library is a crucial component in unlocking the full potential of AI tools within enterprises. By standardizing prompts, Sourcegraph ensures that developers across an organization can leverage the best practices and insights of their peers. This collaborative approach not only improves the quality of AI outputs but also fosters a culture of innovation and continuous improvement. The prompt library exemplifies how Sourcegraph is committed to enhancing the developer experience and driving efficiency across entire organizations.
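
As a rough illustration of the idea (the names and structure below are hypothetical, not Sourcegraph's actual product API), a prompt library boils down to shared, parameterized templates that any developer can run with one click instead of improvising their own wording:

```python
from dataclasses import dataclass, field

@dataclass
class Prompt:
    name: str
    owner: str     # e.g. the tech lead who maintains and refines it
    template: str  # vetted prompt text with named placeholders

    def render(self, **params: str) -> str:
        return self.template.format(**params)

@dataclass
class PromptLibrary:
    prompts: dict[str, Prompt] = field(default_factory=dict)

    def publish(self, prompt: Prompt) -> None:
        # A senior dev invests the time once; the whole org reuses it.
        self.prompts[prompt.name] = prompt

    def run(self, name: str, **params: str) -> str:
        # A developer "clicks a button": the shared prompt is rendered
        # for them, with no ad-hoc prompt writing required.
        return self.prompts[name].render(**params)

library = PromptLibrary()
library.publish(Prompt(
    name="generate-unit-test",
    owner="tech-lead@example.com",
    template=("Write a unit test for the following code. Cover edge "
              "cases and follow {style_guide}.\n\n{code}"),
))

print(library.run("generate-unit-test",
                  style_guide="the team's testing standards",
                  code="def add(a, b): return a + b"))
```

The economics Quinn describes fall out of this shape: the 20 minutes a tech lead spends writing the shared template once is amortized over every developer who runs it.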

6. The Rise of Agentic AI in Software Development

The concept of agents plays a significant role in automating repetitive coding tasks. Quinn describes agents as tools that "take away some of the rote tasks." Real-world applications, such as those seen at enterprises like Booking.com and Palo Alto Networks, showcase the potential of these agents. The future of these agents lies in their ability to transform enterprise software development by automating complex and repetitive tasks.

Agentic AI represents the next frontier in software development, offering unprecedented levels of automation and efficiency. These agents are designed to handle routine tasks, freeing developers to focus on more complex and creative aspects of their work. As companies continue to explore the capabilities of agentic AI, the potential for innovation and productivity gains is immense. Sourcegraph's commitment to advancing agentic AI underscores its role as a leader in the evolution of software development.
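
The agents Quinn describes share a simple skeleton: propose a change for a narrow, rote task, verify it with a concrete check, and retry or escalate on failure. The sketch below is a hypothetical distillation of that loop, not a Sourcegraph API; the `propose` and `verify` functions are stand-ins for an LLM call and a real check such as compiling or running a generated test:

```python
from typing import Callable, Optional

def run_agent(task: str,
              propose: Callable[[str, str], str],
              verify: Callable[[str], bool],
              max_attempts: int = 3) -> Optional[str]:
    # Minimal agent loop: draft, check, feed failures back as context.
    feedback = ""
    for _ in range(max_attempts):
        candidate = propose(task, feedback)
        if verify(candidate):
            return candidate  # the rote task is done, prescriptively
        feedback = "previous attempt failed verification; fix and retry"
    return None  # out of attempts: escalate to a human developer

def propose_stub(task: str, feedback: str) -> str:
    # Stand-in for an LLM call; a real agent would prompt a model here.
    return f"def test_generated():\n    assert True  # task: {task}"

def verify_stub(candidate: str) -> bool:
    # Stand-in for a concrete check, e.g. running the generated test.
    try:
        compile(candidate, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

print(run_agent("generate a unit test for parse_date", propose_stub, verify_stub))
```

Note what the loop does not do: it never tries to own a feature end to end. That restraint is exactly the distinction Quinn draws between the horizontal agents that work today and the vertical ones that don't yet.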

7. Developer Mindset and the Changing Role of AI

The advent of AI is reshaping the role of developers, necessitating a shift in mindset. Quinn addresses skepticism among developers, encouraging them to embrace AI as a companion. Of the most reluctant engineers, he notes, "You don't have them actually write a prompt because frankly, they're not good at writing prompts"; instead, they run prompts their teammates have already refined. This shift towards chat-oriented programming signifies a new era in development practices.

The integration of AI into the development process requires a fundamental change in how developers approach their work. Rather than viewing AI as a replacement, developers are encouraged to see it as a partner that enhances their capabilities. This mindset shift is essential for fully realizing the benefits of AI, as it allows developers to leverage AI tools to their fullest potential. By fostering a collaborative relationship with AI, developers can unlock new levels of productivity and creativity.

8. Measuring Success and ROI in AI Tools

Enterprises are increasingly seeking ways to measure AI's impact on development efficiency. Moving beyond traditional output metrics, the focus is shifting towards business impact and ROI. Quinn highlights the importance of aligning developer productivity with business objectives, stating, "Developers are getting to a world where we can no longer hide behind output metrics."

The ability to measure the success of AI tools is critical for justifying their adoption within enterprises. Traditional metrics, such as lines of code or the number of commits, are no longer sufficient. Instead, businesses are looking at the broader impact of AI on operational efficiency, cost savings, and strategic objectives. By aligning AI initiatives with business goals, companies can ensure that their investments in AI deliver tangible value.

9. The Future of Code AI

Quinn offers predictions for the future of AI in software development, pointing to ongoing challenges and opportunities in automating complex tasks. He encourages aspiring developers to join the evolving field of AI in coding, stating, "This is the battleground and I'm really excited that we get to work on this every single day." The potential for AI to optimize and transform software development is immense, signaling a promising future for the industry.

The future of code AI is bright, with endless possibilities for innovation and growth. As AI technology continues to evolve, so too will the ways in which it can be applied to software development. Developers and organizations that embrace these changes will be well-positioned to capitalize on the opportunities that AI presents. Sourcegraph's commitment to pushing the boundaries of what's possible with AI ensures that it will remain a leader in the industry for years to come.

Full Script

**Quinn Slack:** [00:00:00] You don't have them actually write a prompt because frankly, they're not good at writing prompts. That's part of the problem. You just have them click a button to run a prompt that someone else in their team made to generate a great unit test or something like that. Then they're like, holy shit, this actually worked pretty well.

And then they can go see what kind of prompt they would have had to write. And so often, when we would find people saying, hey, this stuff didn't work well, we'd say, what prompt are you using? And there, they'd go boneless again. But once we found out what prompt they were using, it was obvious.

**Simon Maple:** You're listening to the AI Native Dev, brought to you by Tessl.

**Dion Almaer:** Hi, welcome to the AI Native Dev podcast. I'm really excited today to be chatting with a fantastic guest, Quinn Slack from Sourcegraph. I'm Dion Almaer. I work at Tessl now, and I'm excited to be chatting with you on the podcast. [00:01:00] Quinn, welcome. Would you like to introduce yourself?

**Quinn Slack:** Yeah. Thank you, Dion. I'm happy to be here and talk about code AI and agents and all of that.

I'm Quinn. I'm the CEO and co-founder at Sourcegraph, and I've been a developer all my life. I've felt what it's like to work on massive projects and code. I've got some patches in curl; run curl --help and you'll see one of the flags I added. Some patches to OpenSSL and GnuTLS and all that. And those were the formative days as a coder, where I got to feel what it was like to work in a massive code base and try to understand the damn thing. And it's hard. That was before AI.

**Dion Almaer:** Yeah, that's awesome. So let's go back a bit in time just to get a bit of grounding. So Sourcegraph, like, we know of Cody now on the AI side, but it started with code search, I think inspired by Google's internal Grok system and the like.

Can you talk about the early years a little bit and talk about the code search side of things and the problems you were solving there and still do?

**Quinn Slack:** Yeah. Seeing those pains in working in [00:02:00] these big code bases, that's why we started Sourcegraph. And now we've got amazing customers. We have every dev at Stripe and Uber and four of the six top US banks using us and so on.

But back when we formed Sourcegraph, it was because we had felt that pain. It's hard to code and it's way more repetitive than it should be. We felt that pain ourselves. My co-founder felt that pain at Google. And together, we felt that pain inside two big U.S. banks where we were working and writing code.

So we wanted to go and solve that. We wanted to accelerate human developers. And if you go back to our seed deck, step two is we want to go automate software development. And there was no technology available back when we started Sourcegraph to really do that in a way that would be magical. But we knew that if we could build code search, that would do two things.

It would immediately make it so that human developers, who face this massive problem, the biggest problem in software development, trying to understand the damn thing, would be able to do so better. And it would also [00:03:00] prepare and groom and till the fields of the code base, so that when there was some magical brain that could ingest all the code and do amazing things with it, then all the code would be in one place, and we'd have something that all the devs in a company were using.

So it felt like a necessary step. And it just so turned out that code search was not just a good first step. It was also something really valuable by itself. And so that's what we built. We built that. And the coolest moment that I remember in the first two weeks after we started Sourcegraph is, we were building Sourcegraph and we had this way for you to see where everybody else is calling this function in the open source world.

And I was writing some code and I said, oh, I'm going to use Sourcegraph and see if someone's already done this thing. And there was some other Go package that somebody had written that did exactly what was going to take me the next two weeks. So two weeks into building it, it had already saved me two weeks.

It was like a perpetual motion machine. It was pretty amazing.

**Dion Almaer:** That's awesome. So I worked at Google and got to use the Grok system there. So I was [00:04:00] super excited, because I think a lot of Google engineers, when they would leave, missed certain tools that they didn't have access to anymore.

And so it was awesome to get access to some of those tools thanks to yourselves at Sourcegraph. So that was awesome to see. Then fast-forward a little bit, when the whole GenAI space blew up again with GPT. If you can think back to then, I'm curious what your initial reactions were. Like, were you skeptical at first?

When did that potentially change? And then, what happened to trigger you to really invest and jump into something big with building out Cody?

**Quinn Slack:** So the summer before ChatGPT, we were working with Anthropic and we had access to Claude, which was amazing. It was in our Sourcegraph Slack and we had access to the API and we found that, hey, this can do amazing things.

This can actually explain [00:05:00] code. Not just autocomplete, which was the only code AI functionality that was available at the time. It could fix code. It could generate unit tests. It could generate whole files of code. And obviously it's gotten a lot better in the two and a half years since then. But that was amazing.

And we knew that was the future. We knew that we were so well positioned, given that we had all these big customers using us for code search; we had all their code. They were already trying to solve a lot of the same problems: hey, how does this damn thing work? And they'd use code search for it.

And wouldn't it be amazing if we could also give them AI for it? So when ChatGPT came out, we said, oh yeah, this is like the Claude that we have. And this is really cool. No way. And I even remember I would show my parents and my family, like, hey, look, it can create song lyrics. And everyone around us thought, oh, ChatGPT, oh, that's nothing new.

But of course it was. And that was a huge moment in the industry. And I think looking back at that [00:06:00] time it's like a BC, AD moment. We obviously put a lot more investment into our AI since then. And we've come a long way and still literally every week, every other week, our customers are doing something with it that is just surprising and new and it's never stopped being so exciting.

It feels like people are working at 300 percent ever since ChatGPT came out. And I absolutely love it.

**Dion Almaer:** That's awesome. That's so funny that you got to see it the other way around, being early on the Anthropic side. Whereas for most people, I think, ChatGPT was their first kind of view into that moment.

That's fun, especially now, given how fantastic Claude Sonnet is at writing code, and all of the improvement since then. That's really fun that you had an extra-early kind of seat at the table to watch this happen. So I'm really curious about the underlying technology and concepts that you had with code search.

And how that maps to kind of modern RAG, 'cause I see time and time [00:07:00] again the same old shtick with retrieval, where it's just like easily chunking up a document, doing some cosine similarity, and there we go, we've done retrieval. But code isn't just text in a document or anything like that.

So how do you see code being different than text for the use case of actually doing a really good job of retrieval? And how did code search, I'm assuming here, help there?

**Quinn Slack:** Yeah. It's such an interesting topic. There's three groups of people here. There's the academic ivory tower people.

There's the enterprise ivory tower people. And then there's the people who actually use the damn thing. And RAG was a term; we were doing it, and I was meeting with some more academic people, and they're like, what do you think of RAG? And I'm like, what are you talking about? And they're like, retrieval augmented generation.

I'm like, what? And I'm like, oh, you mean using search to find code and then presenting it. And they're like, yeah, that's RAG. And I looked, and there's all these academics who were doing stuff. It's just so funny that there was such a divide [00:08:00] there. And now, with any kind of RAG approach, it feels like most of the industry never actually did it that well.

99 percent of RAG implementations never did any chunking or never measured in a rigorous way. And that's absolutely crazy. We do a lot of rigorous measurement of it, and now it's already past the point where RAG is sufficient. Now, most context fetching is done with something agentic, you could say RAGentic, to figure out: is the first fetch sufficient? What other terms might you need to explore? What other tools might you need to use to pull in context? So I think it's already way beyond RAG. And the challenge is to be really rigorous about which use cases this kind of context fetching is good for, but really it's a black box of a lot of different techniques.

The code search that we built before all the AI stuff came out, that was incredibly valuable. Just having a code index of all the repos was incredibly valuable. But we've had to add in a lot of new ways to do retrieval and indexing. And it turns out that virtually all of [00:09:00] those things, they're also useful for humans just doing code search.

We found ourselves, just how we hoped it would work out, in a really good position, where code search is a great foundation for any AI system built on top.

**Dion Almaer:** Yeah, that's awesome. So could you walk us through a little bit of the evolution of Cody from the beginning: from code completions, to instructions, and now we hear a lot about chat-oriented programming. And what were the insights along the way as you went through that journey?

**Quinn Slack:** Yeah, we started out with the chat. That was the first way that you could use Cody. And that was to fix code, to understand code. Then we built autocomplete, we built edits. And basically it was a new tool that was out there.

We were the first to come out with a code chat; at the time, really everyone was using autocomplete. And you've got a few kind of competing features in AI. You obviously have the autocomplete; I think everyone uses that. You have chat. With chat, there's a lot of different ways that can be exposed.

And [00:10:00] I'll talk about some of the things that we do differently that make sense for the enterprise versus for the individual. Then there's edits, where you select some code and you do Option-K and you say, oh, go and do this. That became really popular. Now auto-edits is valuable. I've been coding a ton on a little side project over the last few weeks in the holidays, and I think about 50 percent of the code that I wrote was from our auto-edits, which is like autocomplete, but it can go and suggest edits elsewhere in the file too. You have all these features at a feature level, and if you're an individual dev that's just doing stuff, you can figure it out, and maybe consistency and quality are not the most important thing.

You're just trying to move fast. But we are totally focused on the enterprise. And I think the enterprise is the most interesting place to be building code AI, because they have a massive code base, and they do things where a thousand times per day they're generating a unit test, and they have the kind of scale that you can [00:11:00] actually automate. And that is calling for a completely different kind of code AI product than what you'd make if you're just trying to make someone really fast who's building an app from scratch, who's working on a kind of low-stakes personal project. And I love the enterprise because it's where all the real software gets built. All the best devs work at the enterprise. All the best software is built in the enterprise.

And if you disagree with that, I would just say, hey, if a dev is so damn good, they're gonna build such good software that their company becomes so successful that in a few years it's going to be an enterprise. So the enterprise is where it all happens. And there's so much more potential to make code AI great in the enterprise than what you can do for the individual.

**Dion Almaer:** Yeah, that makes sense. It does. I don't know about you, it drives me a little nuts at times to see the kind of Twitter echo chamber of, look at this AI generating a little sample app from scratch. Which is a use case; it's good for prototyping and all of those kinds of things.

But to your point, that's [00:12:00] just very different than the 99 percent case where you're already in an existing code base, which in the enterprise is probably massive, and maybe you're in a very large monorepo, maybe they've got lots of different repos. How do you make sure of your best practices, working between these different worlds, the onboarding experience? Like, one of the most fun things that I saw talking to a customer with Augment in the past was just this new employee. And she just joined and she's got a little bit of imposter syndrome, right? Doesn't know the team yet, and is able to use chat, like you're talking about with Cody, to just get this understanding of the code base, and just the amount of trust we get from the system.

And it's obviously there 24/7. It's not, oh man, my TL is time zones away and they're not available right now. It just utterly changed the onboarding [00:13:00] experience. Which, again, these enterprises have quite a few developers in there, right? So being able to help with those kinds of use cases, that's gold, right?

Yeah, the quick demos aren't really often showcasing those types of features. Are there other things that you've seen in the enterprise from customers that are just like, actually, this is where they're getting a lot of value? And it's not just the, autocomplete this bit of code.

**Quinn Slack:** Yeah. We like to look for areas where, if you step back and you think first principles, hey, it's crazy how everyone's doing this. And the RAG situation was one where everyone was talking about RAG, but basically nobody was actually chunking in a thoughtful way. No one was measuring how good their chunking was. No one was measuring the effectiveness or accuracy of their pipeline. And for a lot of the companies that were, they had a separate academic team that had a bunch of techniques, but none of them were actually getting it into production.

So there was, like, measurement, but it was totally pointless. And one big area we saw that was like the emperor has no clothes with code AI in the enterprise was the way that [00:14:00] developers were using chat across an enterprise. They were all basically, this was a year ago, chicken-typing prompts ad hoc on their own each time.

And so if two developers wanted to go make a test or wanted to go add a new UI screen, they would go and they would type, type, type. And this developer would do it differently from this developer and this developer. And there was no sharing of the prompts. And what a wonder that it actually worked.

Our context fetching is pretty damn good. We'd find some files. But, hey, obvious opportunity: make it so that if two developers are doing the same thing in an enterprise, they use the same damn prompt.

And one of our customers, Stripe, has an internal chat tool that's for general-purpose chat, and they have talked publicly about this prompt library that they built internally so that Stripes could share prompts for writing good documentation, or a good public summary, or something with that Stripe marketing feel.

They do such a good job with their marketing, and, hey, why don't we make that for our [00:15:00] AI? And we have a prompt library. That means that if two developers are generating unit tests or making a new UI screen or any of these kinds of things, they're going to use the same damn prompt. And if a senior dev wants to make sure that everyone on their team who's doing this thing does it really well, they can go and make that prompt really good.

And they can edit it if it needs to account for something else. That is such a basic, fundamental thing; it's so obvious. The prompt is such an important part of the input you give to the AI. And it was crazy to me that still, even now that we've had this out there for more than six months, and in massive use by our enterprise customers, no other code AI tools have it.

We have customers where they're driving more than 80 percent of their chat usage through the prompt library. And that's amazing, because the quality is better. The consistency is better. And think of all the time saved from developers not chicken-typing their little prompts on their own. So that's just one of many examples of what you can do in the enterprise, because, by the way, you cannot do that if you're building an AI [00:16:00] tool for individuals.

Because an individual is not going to put in 20 minutes up front to sit down, write a great prompt for doing something that they only do once a month. They're not going to do that. But in an enterprise, it's someone's job to do that. It's the tech lead or the manager's job to do that. And there's an incentive to put in time upfront because it's amortized over all the times that it's used.

So I love the enterprise.

**Dion Almaer:** Yeah, that's awesome. Yeah, you think about being someone who's a principal engineer at a large company, and they're really struggling with how to spend their time, right? They want to help as many people on the team as they can, but they've got a bunch of code that they need to write as well.

And so like where do they put their time? And so being able to go in there and be like virtually on the shoulder of all of the engineers, helping out, not in a creepy way. Like that impact is just massive, right? It's like being able to, like you say, amortize that across [00:17:00] is just absolutely huge.

And so all of the value that gets put into the system, being able to spread that across the team, I think is a game changer. And yeah, the thing I'm super excited about within the enterprise to and I'm curious if you've seen this at all is I saw some companies that totally changed the structure of their engineering teams where they could go back to having smaller teams with these tools.

And with maybe utilizing another core team that's setting up some of those pieces in particular domains and then they can move fast again because they're back to being a small team without all of their coordination costs. And so they can do so much more.

**Quinn Slack:** Yeah, totally. I think the point about how it changes how a senior developer or tech lead has influence over the team is really interesting. I hadn't thought of this, but it's almost like they're shifting their work left, where instead of waiting until code review, which is micromanagement after the fact, which is the most annoying kind of micromanagement. It's, yeah, not only are these a bunch of annoying comments, but I wish you [00:18:00] had told me before I wrote the whole damn rest of the PR. Now they can actually influence how the code is being written.

And one next step that we're taking is making it so that every line of code written will run through the standards that have been set for the code base. So we get that feedback instantly. It's not just being used when you're generating new code. But it's also being used for all the code that you're writing as a human.

I think that's going to be a much better kind of mode than waiting for a senior engineer to review something, and you've got to wait on that. Because I think in code review, every single code review comment should be viewed as a failure. It's not that code review is bad. You should do it. But it's a failure. Failure may be a mean term, but think of it like a factory: every time there's a defect in a product, in a widget made on the conveyor belt, it's not saying someone fucked up.

It's saying that's a failure in our process. So every code review comment is a failure: the person writing that code did not know that they should have done it somehow differently. It's not their fault, but the system has to think, how can we make it so they would not make that mistake in the future?

And that's how you can move so much faster. You can get these dumb, [00:19:00] repetitive facts out of the human brain and make it so they can think about the higher-level things. So we want to eliminate code review comments, not by saying let's throw out code review, obviously that has a purpose, but how do we make it so it's not as necessary?

And that is a really exciting frontier that a lot of our customers are pushing on.

**Dion Almaer:** Oh, that's awesome. Yeah. And another area that I saw, I'm curious if this resonates too, is just that there's all of these areas that are really painful. Like accessibility is an example. I've never seen a company that has invested enough in accessibility.

It's always, maybe, a really small core team of expertise. Maybe it's just a few people on the team that kind of know it, right? And I was talking to one company where there's one engineer that's just really passionate about accessibility and has a lot of expertise. And he was just like, hey, at code review time, because it hasn't happened earlier, I'll be on one in 10 reviews.

One in 10 reviews I'll be on. And so basically that means one in 10 PRs have good accessibility, [00:20:00] right? Whereas being able to put that into the system and having that as part of the best practices for accessibility itself just eliminates that entire problem. And then you have performance, security, all of these like horizontal kind of cross cutting concerns that a lot of app devs, maybe don't have the full expertise.

You can't expect everyone to have that level of it, right? You now can have this ability where you can have the app devs run, to your point, really fast, but you've got the kind of guardrails in place by giving the experts in the company the ability to have that influence earlier on.

**Quinn Slack:** Exactly. And that's such a shift from someone who's really passionate about accessibility, thinking that we need to educate every single human dev on how to do that well, versus how can they make it so that the system has high accessibility.

I think there's a bit of a moral valence here. Yes, it's good for every developer to know and care about accessibility. On a personal level, my mom went blind in the last five years. And so I've gotten to see how she uses her devices, and it's incredible. Apple [00:21:00] does amazing things with accessibility, and I've also seen, like, cookie banners make websites so hard to navigate.

This is personal for me, and I've been using Sourcegraph AI when I'm coding to understand, like, for me it was a lot of basic stuff: how to do ARIA and how to do roles on elements. And I didn't know that. I've got some prompts and some techniques that helped me. But the moral valence here, I think, is important.

We have to change the idea of what it means to be a software developer. It's actually a virtue to not need to know all the low level stuff. And we've accepted this with so many other evolutions in the role, where you're not writing assembly code, you don't know a lot of the fundamentals.

Hell, most programmers don't know how to do some basic CS fundamentals. And look, we've gotten there as an industry. So you've got to eliminate the moral valence. And it's okay if developers don't know all about security or performance. The reality is they don't know that today, but you think that it should be their job.

Instead, let's figure out how we make it so the system has great accessibility, great security, great performance. And how can AI, sitting on the shoulder of developers, actually [00:22:00] help there? And again, that's only possible in the enterprise, because those things, security, performance, accessibility, mean something different based on the code base, the stack, the product you're working on, the business requirements, and you're only going to take the time up front to figure out how to make a system that enforces those things well in an enterprise.

**Dion Almaer:** Yeah, that makes sense. Have you seen anything around helping with dependency management and migrations? Every enterprise I've been in, they're always working on projects to migrate from here to there, which seems like a lot of wasted time from the business-value side of things. But it's really important to stay up to date. And so I think that could be an area too where AI could just massively help, with these massive, large code bases, where a lot of it is legacy stuff. How do you change that whole system? And just like you're saying with code review getting earlier, what if the world is constantly updating and staying up to date?

And so you don't get into the situation of needing a project Phoenix [00:23:00] to rebuild everything from scratch. Do you see that at all?

**Quinn Slack:** Yeah, this comes up a ton. Every enterprise has dozens or maybe hundreds of code migrations that they want to be doing. And most of them are not getting done. Some of them are.

And this is also where we've seen customers start to build agents on Sourcegraph to go and automate some of these. We had this event in December of 2024, where a bunch of our customers talked about the agents that they built on Sourcegraph. Palo Alto Networks has built a code review agent.

Booking.com has built a bunch of code migration agents. And in those cases, for Booking, these are one-off agents that they use for a big migration, like a monorepo-to-microservices migration for this part of the code. And we've seen incredible results from these kinds of AI-driven migrations.

But the important point here is it's not a button. We are not selling a button. You cannot build a button that you push and then it does it automatically. And that's okay. And that gets to those three groups I mentioned. You have [00:24:00] the academic ivory tower people; I criticized their approach on RAG.

You have the enterprise ivory tower people; some people, they don't want to try it until it's a push button, and they're going to be sitting there in their ivory tower forever, because there's amazing stuff, but it's not a push button, and that's okay. If you can remove 96 percent of the work, that's one of the kinds of customer results we've seen from these migrations.

That's amazing. You should take that all day, every day and stop waiting for the push button.

**Dion Almaer:** Yeah, that's awesome. So you mentioned agents there. We hear a lot of the term agentic, and it seems to mean different things to different folks. What does this world of agents and agentic mean to you?

What do you have now? And how do you think about it going forward?

**Quinn Slack:** I think an agent is anything that automates software development, that takes away some of the rote tasks. That's what an agent is in everyday language. When I hire an agent, if I'm a basketball player, my agent doesn't shoot the baskets for me.

It doesn't take over my whole job. It does the part of my job that [00:25:00] I hate doing as a basketball player. To be clear, I am a coder, I'm not a basketball player. But as that basketball player, I love the whole basketball-playing part; I don't like the whole signing-contracts part. That's what an agent does.

It takes the stuff that you hate doing. There's a lot of gatekeeping about agents. And I think it's so funny, because ultimately, I think the agents that actually have a real impact, that are working today, those are the ones that are the future. And we've got these customers that are building these agents on our platform.

There's so much more that we have coming out to make that possible, more APIs. And this is something that has really accelerated over the last six to eight months, with customers just starting to build these code migration agents, code review agents, test generation agents. And the important thing is that the agents that we see working today are the ones that automate this repetitive, dumb process.

They're not the ones that replace the human developer end to end. It's not a moral thing. I'm not saying, oh, the people that try to replace human devs are [00:26:00] bad, whatever. I'm not making a moral statement. I'm saying those do not work today, and they will only work if they can be built on a foundation where you have a lot of these simpler, real agents that automate the more concrete things. Like the things that we can do today: you can automate significant parts of a code review, you can automate significant parts of test generation. And if you can automate so much that 80 percent or more of the work is being done in a pretty prescriptive way, then I believe that an end-to-end agent that's actually trying to write a new feature or fix any arbitrary issue will actually have a much greater rate of success. Because if you look at where those things fail today, a lot of it is not actually doing the hard part. A lot of it is writing a good test that meets my standards, or these other things; you've got to codify all of those. Those frankly do not work, and they will not work until we see agents automating away a lot of the more repetitive parts of the development process. So that's the horizontal agent: this is a task for every developer in the company, let's take that away. [00:27:00] Let's automate that. The vertical agents are the ones that try to replace human developers end to end.

And frankly, those just don't work yet.

**Dion Almaer:** Yeah, I've seen the same thing. And again, you see some of the demos where they work, where it's, again, give me a to-do list web app from scratch. They're good at that, because, yeah, you can one-shot that with some good... Exactly. Yeah, that's funny.

That's awesome. So one of the interesting things I see in the enterprise is the kind of change from the early days where, management was like over my dead body are you gonna do this AI stuff to my code base to very quickly, oh my God, we're not going to be able to compete with our competition if we don't start using these tools.

And then trying to work out, calculating the ROI of these tools. And for me, some of the early metrics that people used, like completion rate, were a little comical, to be honest. Like, we could tweak the back [00:28:00] end to do different things and get, again, very different completion results.

And when I step back, a lot of the time it was more about not getting stuck as a developer. It's not like giving you the perfect one-off line of code. It's when you don't get into the state of, oh my God, what do I do from here? As long as I can keep pulling on a thread, moving forward as a developer, and getting my tasks done, that's fine.

Keep me in that world. I'm curious what you've seen around that, and if you've seen enterprises maturing a little bit too on how they're actually measuring all of these tools, how they're deciding when to use tools, how to use tools, and what's been going on there.

**Quinn Slack:** Yeah, it's changed so much.

And we saw a lot of people, as you said, start out where they say, oh we're never going to use this. And then their concession was, we'll use it when it is as accurate as a human developer. And I love that because then what do you think I asked them? I say, okay, how accurate is your human developer?

And then they spit out their coffee and they don't measure it. And if they did measure it, it would [00:29:00] probably be embarrassing. So much of what we have had to do is understand the psychology of the enterprise and in particular some of the people in the enterprise ivory tower and how to get them to walk down, how to get them to understand this is not a button that they push that is perfect.

We're not claiming it is. And actually, that's maybe not what they want; there is something real today. And we've had a lot of customers that, when they were trying Sourcegraph and a bunch of other AI things, would run a quick test. They'd get developers to spend a day, maybe 10 developers, each of them would use a different tool, and they would try a bunch of representative tasks.

We loved it when the tasks were actually hard, in their existing code bases, and they would get to some of the messiness that developers have. And they'd see: how long did it take developers to do that task on the different tools? They'd see the quality of the code, and then they'd do a qualitative review of that code.

And we won a ton of those. That starts to get a little bit more rigor. And then there are a lot more ways to measure. You obviously have some of the output metrics, [00:30:00] like what percent of code is being written. We actually measure what percent of code is being merged, which is a nice higher bar, because it means that code has gone through code review. But then, ultimately, I think developers are getting to a world where we can no longer hide behind output metrics. If you take the point of view of an enterprise engineering leader or a CEO of a big company: every other department, other than software engineering, is asked to basically report on some key metrics.

Like, how much money did you make the company? How much money did you save the company? Or how much risk did you reduce? And developers, man, we've gotten by for so many years with, oh, we're working so hard, you can't measure us. And developers are paid way more than anyone else. And you have way more developers: at most of the big banks, they have more people with the job title software developer than any other job title.

They have more people with a job title software developer than any other job title. And I think that is going to change. And that will change in that developers are required to have business impact because we can cut out in the future, [00:31:00] a lot of the kind of friction. And we actually can get closer to, hey, we built this product that generated this amount of revenue.

And look, developers are smart people. We can understand the business side. We can go work with customers. The whole economy of startups, and people that were coders in a dark room going on to be amazing salespeople, shows it. Being a developer takes high IQ, so we can do those things.

And I love that, because that means basically the way to measure developer impact is, what is the business impact? And with agents, you get so much closer to the actual business impact. With an agent, you can take some business problem, like, we need to come into compliance with the EU Digital Markets Act on our code base.

And that means 10,000 changes. And if we do that, it's going to save us 150 million dollars in fines from the EU. And if you can make an agent that can automate 95 percent of that, which we've got customers that have, then I would say that agent is worth 150 million. And that is beautiful, because it gets us so much closer rather [00:32:00] than, oh, developer productivity.

And how do you measure it?

**Dion Almaer:** Yeah, absolutely. I feel like we've gotten off of the path at times when I know, like working at Google, for example, at times you'd see all of these situations where people follow incentives, right? Not in a bad way. It's just the natural course of humanity.

And then you say, oh, there's a job ladder that requires certain technical complexity to get to the next level. So what's going to happen? People are going to build things with that level of technical complexity, versus solving the particular business problem in the most simple way possible, and everything else.

My other favorite one within the enterprise was, once you've got a product like you have that's partially in the IDE, showing the customers how much time on average their developers were actually writing code. That was very eye-opening to some. I remember, like, saying, what do you assume it was? And they're like, you know, our developers [00:33:00] work six days a week and they're at least writing code eight hours a day. I'd be like, that's interesting, because the data we have for your developers is one hour a day writing code. You've got them in meetings all day, right? You've got all of this other stuff. And so being able to totally break open what developers are actually doing and how they're doing it.

I think that's so much part of all of this revolution as well.

**Quinn Slack:** Yeah. And that's a great example where developers want to be coding more. And so you find where there's alignment between developers and managers. I think that maybe some developers have a fear that more measurement of what they do is going to be bad, but no, actually: there's no CEO in the world who wants their developers to be spending time in meetings or doing grunt work.

And so once you can shine a light on that, I think it's going to help everybody. And by the way, when developers have more business impact, then they make more money.

**Dion Almaer:** Totally. Absolutely. So Steve [00:34:00] Yegge on your team wrote an interesting piece on the death of the junior dev, which got a lot of people very worked up, link-baited by that kind of title to begin with.

And then I think he just updated it to be the death of the stubborn dev. Can you explain a little bit the core premise of what he's talking about there?

**Quinn Slack:** Yeah, Steve Yegge on our team has been incredible. He was responsible for building a lot of Google's internal code search system that was an inspiration for us. So it's amazing to have him on our team now. And he also came out with this idea of chat-oriented programming, which is, among other things, a way to tell those people in the enterprise ivory tower that the job of developer is changing. And the point is not that AI is going to do all the work.

The point is that you need to have all your developers treat it like a companion, and work through it when it maybe gets something wrong, and you get familiar with it. He's talking to so many developers out there, so many customers, he's coding himself, he's seeing how all this stuff works.

[00:35:00] And there's still a really big disconnect that a lot of us who are working in code AI feel with some developers. And I was just talking to a friend who's at a company that is relatively very AI-forward and developer-forward, but even in his company, he experienced so much AI skepticism from other developers.

And it's this stuff that you just do not understand how to wrap your mind around. It's developers saying, oh no, AI can't do that. And you say, have you tried it? They say no. And then you try to get them to sit down next to you and say, I'm going to do this, and you watch. And you can see them trying to wriggle away from you.

Like, I've got a three-year-old, and when I pick him up, sometimes he tries to wriggle away and he goes boneless. It's that same feeling. They do not want to see it. They do not want to know that AI can do this. And that stubbornness, it is going away fast, but Steve, he wanted to go and make it go away faster.

And we try all kinds of different ways. And sometimes you gotta be really blunt [00:36:00] with these people that the way that they think is no longer what's possible. And the stubborn developers thing, that's true. And the junior developer idea, it's a mindset. You can have a junior developer who's 50 years old.

You can have a junior developer straight out of college. Turns out that, actually, a lot of the youngest developers are ones that have been using the AI coding tools ever since they started to code, and they have a leg up. It's a mindset. You learn by doing. You learn with the help of AI. This is not the kind of thing where expectations would be low for the first five years of your career.

The whole ramp has changed in a good way. And anyone with high IQ and high drive is going to thrive in this kind of world. I think that the message is being received. And the beautiful thing about this is AI is so obviously transformational and powerful that as soon as someone changes their mind and says, hey, this stuff is actually pretty good.

You ask them, hey, were you ever against AI? And they say, oh no, I was always for it. This is going to be, flash forward [00:37:00] to two years from now, everyone's gonna be like, oh yeah, I knew this was going to be good all along.

**Dion Almaer:** Yeah. That's so funny, isn't it? Like, I sometimes have, and still, some skeptics who are just like, ah, it's just like I'm pairing with this really bad junior developer that doesn't know much.

And it's okay, let's say that it is a junior developer. It's a junior developer that knows everything. Like, every human developer has disjointed knowledge, right? They could be an expert in this, and that's why I hate the term junior, right? It's more like beginner versus expert: I could be an expert in this technology and a beginner in that technology.

And so just that notion. And already seeing people that were, I talked to developers who are in a monorepo for their company where the backend is all Rust, and they're front-end developers in TypeScript, and they never touched the Rust. And now they touch the Rust, like it's heaven there, right?

And then it turns out it's not that [00:38:00] junior of a developer. Obviously it's not perfect, to your point. It's not just the one button that does the thing. It doesn't understand all of your intent. It doesn't, right? Like, you have to work with it.

**Quinn Slack:** You gotta tell it stuff. And this has been, with our prompt library, a big unlock, because in any given enterprise, our job is to win over a hundred percent of the devs.

We can't say, oh, those 10 percent, they're too far gone. If we don't do that, then the enterprise customer is not very happy. And it turns out that with that last 10 percent of devs that are stubborn, you don't have them actually write a prompt, 'cause frankly, they're not good at writing prompts.

That's part of the problem. You just have them click a button to run a prompt that someone else in their team made to generate a great unit test or something like that. Then they're like, holy shit, this actually worked pretty well. And then they can go see what kind of prompt they would have had to write.

And so often, when we would find people saying, hey, this stuff didn't work well, we'd say, what prompt are you using? And there, they'd go boneless again. But once we found out what prompt they were using, it was obvious. There was one case with a very smart [00:39:00] developer. I was at a customer site, in the room, big projector up, and he was walking through some of his workflow, and he said, hey, this didn't work well.

And we were helping him with stuff. And basically he wanted to summarize a really long CI error log. And his prompt to Sourcegraph was please explain this in detail. And he wrote that out and then he pasted it in. And he said, the problem is that it's like too damn long. I just want to get a summary of it.

And we're like, what if you just ask it to summarize? And he's like, I did. And he tried to summarize, and it was still a little bit too long. It was like two paragraphs long. It's like, this is still way too long; what if you ask it to summarize in 50 words or less? And so he did that, and then it worked.

And that's just some of the handholding that we have had to do. But when you think about, why is everyone not using this? It's things like that. And there's a lot of blocking and tackling that's got to happen for this whole behavior to change. But we view it as our job to help do that.

I think Tessl does too.

**Dion Almaer:** [00:40:00] Yeah, absolutely. Yeah, I think it's interesting to see how much English is going to be involved in building things, right? And it already is; we already use it in requirements docs and comments and this, that, and the other. But now, with chat, and, in our world, where we deal a lot with specifications and being able to define things, it becomes more about not geeking out on syntax in a particular language of how to get something done, but more just how do you break down problems and actually think through what you're trying to build. And that's developing. And it's okay to be doing less of the other stuff; when you step back, it's like, I don't want to be spending time on that toil, like you were talking about at the beginning. It's okay.

It's safe, developers, there's still plenty to be done. Yeah, it should be the higher-level, more value-driven, impactful things that make sense anyway. So yeah, super excited to see more and [00:41:00] more people get over that. And to your point of watching the people just coming into the industry now, even with my kids that are coding, right?

Like they've had this from the beginning. And I can't imagine if I was like, oh, actually we're going to turn that off. And you have no access to that. It'd be like, you're insane, right? It'd be like, you're going to take away the syntax highlighting too. Why would you do any of this stuff?

**Quinn Slack:** Yeah. I know one good test is: does a company, when they're hiring engineers, allow the engineers to use AI in the interview? And I think the answer should be absolutely yes.

**Dion Almaer:** Absolutely. No, that's really funny. So as you think about the future of code AI, what are the things where, pretty clearly, you think there's a solid understanding that this is where things are going to change? And are there areas that you still think are very much up in the air, that you're not sure of?

**Quinn Slack:** The thing that is really concrete is we now have a road map for iteratively automating more of the software development process, a lot of these rote [00:42:00] tasks. We can build an agent that can go and generate tests, not with 100 percent accuracy, but pretty good.

We can do that for updating the changelog, for cutting a release, for creating a new UI screen, all these things. And across an enterprise, you just look at what are the most common things developers are doing and what can we automate, and anything in the intersection of both of those, that's something that we want our customers to be able to build an agent to do.

And there's a ton of work. There's a lot of stuff that we'll need to do in particular to get that feedback loop to see, was that correct? And if not, then go run it again. And that could occupy us and our customers for many more years to come. It's like turning software development into a factory.

And by the way, when I say factory, that is a good thing. It means you no longer need to make it by hand at home. I'm not thinking software developers are in the factory. I'm thinking machinery is in the factory, and software developers get the nice widget off the conveyor belt afterwards.

So that's actually a good vision. I think, though, what's unclear is when the holy grail [00:43:00] of replacing some end-to-end human developer tasks will actually work well enough for the vast majority of development. And the vast majority of development happens in the enterprise, on complex code bases that make shit tons of money and that can't break and all of that. And our vision is that by stacking up a bunch of these little micro-automations, these agents that automate small parts of it, we're going to be able to do that. But I think there's a lot of uncertainty. I think there's some competing visions out there that say GPT-9 is going to just dump out perfect code that somehow does everything we need.

I think that's fanciful thinking. But this is the battleground, and I'm really excited that we get to work on this every single day. We're hiring, by the way. Any really smart folks that want to work on this, please join us. Reach out. That's the big battleground, and it's literally the multi-trillion-dollar question.

**Dion Almaer:** That's awesome. Quinn, thank you so much for taking the time to chat through these things. I feel definitely very aligned on what we've talked about today, too. And it's really fun to think about: what's a [00:44:00] world where you've got AI, like my kids, AI-native from day one? What does development look like, and how do we iterate from here to there?

Yeah, there's a few years of work for us to work on, and I'm just super excited to see this all change. So thanks again.

**Quinn Slack:** Yeah, absolutely. Thank you. It's great to chat again and to everyone who's listening, happy coding.

**Simon Maple:** Thanks for tuning in. Join us next time on the AI Native Dev, brought to you by Tessl.