Automating Development: AI Beyond Coding Assistants with Devin Stein from Dosu

In this episode, Simon Maple sits down with Devin Stein, founder of Dosu, to explore how AI is transforming the development landscape beyond mere coding assistants. Tune in to learn how AI can automate non-coding tasks, helping developers focus more on what they do best: writing code.

Episode Description

Join Simon Maple as he chats with Devin Stein, the innovative founder of Dosu. In this episode, they delve into the evolving role of AI in software development, focusing on automating tasks outside the IDE. Devin shares his journey from early engineering roles at startups to founding Dosu, a company dedicated to alleviating developers' maintenance burdens through AI. They discuss the challenges and triumphs of integrating AI into development workflows, the precision and context required for effective AI responses, and the future potential of AI in creating higher-level abstractions and primitives. Whether you're a seasoned developer or new to the field, this episode offers valuable insights into the future of AI in development.

Chapters

  1. [00:00:15] Introduction - Simon Maple introduces the topic and guest, Devin Stein.
  2. [00:01:04] Background of Devin Stein - Devin shares his journey and the mission of Dosu.
  3. [00:02:28] Asynchronous Tasks vs. Real-Time Assistance - The need for speed and accuracy in different AI tasks.
  4. [00:03:50] Precision and Accuracy in AI Responses - Importance of delivering high-quality responses.
  5. [00:07:53] Contextual Understanding in Large Codebases - Challenges and strategies for navigating large codebases.
  6. [00:11:05] Developer Style and Input Refinement - How developers' inputs influence AI performance.
  7. [00:18:29] Challenges in Patch Generation - Why patch generation is complex for AI.
  8. [00:21:05] Learning and Adapting: AI as an Employee - Treating AI systems as learning entities.
  9. [00:27:47] Future of AI in Development - Speculations on higher-level abstractions and the future of AI in coding.

Full Script

[00:00:15] Simon Maple: On today's episode, we're looking beyond the usual coding assistants. We've already talked a little bit about the code suggestions and similar features that these tools offer us inline as we code in our IDE.

[00:00:26] Simon Maple: There are also a number of other ways in which we can use AI to generate code for us through natural language in a more automated way, and we're going to look at a range of options, all the way up to defining components as specifications as well. My name is Simon Maple. I'm the host for the episode and joining me today is Devin Stein, founder of Dosu.

[00:00:46] Simon Maple: Welcome to the podcast episode, Devin, how are you?

[00:00:49] Devin Stein: I'm doing well. Thanks for having me, Simon.

[00:00:51] Simon Maple: Awesome. It's a pleasure. And before we jump in, I know you've been a software engineer at various places before founding Dosu. Tell us a little bit about what Dosu is, and a little bit about your background as well.

[00:01:04] Devin Stein: Yeah, for a quick background, I've been an early engineer at various startups, most recently at an ML startup called Viaduct in the automotive space. So different from what we do today, but with a lot of parallels, as we'll dive into. Just over a year ago I started Dosu, and at Dosu we're really focused on helping automate the work engineers do outside of the IDE, taking things off engineers' plates so they can actually focus on coding. Whether that's answering questions so they aren't interrupted, taking a first pass on issue triage, or helping them maintain documentation, those are all the types of things we focus on at Dosu.

[00:01:43] Simon Maple: Yeah, amazing. So when we think about the type of interaction AI needs to have in these various forms, obviously in an IDE it needs to be a coding assistant, right? A line-for-line coding assistant. What kind of differences do you feel a user or developer needs to have in the various places?

[00:02:03] Simon Maple: So we have the IDE, and maybe there are other things like various tickets or pull requests. How have you found the need for that interaction style to be different?

[00:02:12] Devin Stein: Yeah, I think the main difference is that a lot of the tasks we focus on at Dosu, the things that happen outside of the IDE, are more asynchronous, so you have more time. When you have something like GitHub Copilot in the IDE, speed really matters. You want to be super snappy.

[00:02:28] Devin Stein: But the problem space that we're focused on is, how do we stop engineers from being interrupted, essentially. And if you think about when you get interrupted, it's like getting pinged on Slack about a question, or a new ticket in Jira saying some customer has an issue, and you have to leave what you're doing and look into it.

[00:02:46] Devin Stein: And so with those tickets, the delta between getting pinged and responding could be minutes. So you actually have a lot more time in the background to make sure you're gathering the right context, and then providing valuable information back to end users.

[00:03:02] Devin Stein: So it's a slightly different interaction pattern, yeah.

[00:03:05] Simon Maple: Yeah, and it's quite interesting, actually, because with that additional time that you have, it's almost like you want to be wrong less, in a sense. When speed matters so much, from a code completion point of view, I guess there's always a balance between time and accuracy.

[00:03:20] Simon Maple: From a code completion point of view, it's important that the code suggestions are happening at the time a developer is thinking and wanting to act, whereas with these PRs and tickets and things like that, like you say, when you have a few minutes, you still want to be super accurate at that point.

[00:03:36] Simon Maple: You want to actually do something that is truly what a developer wants to do, without causing more work for that developer. Is that something you've played with more, or are you able to do a greater number of things, because you have less of that time constraint?

[00:03:50] Devin Stein: Yeah, I think that's actually spot on, and something we hear from users consistently is that when you're working in this kind of asynchronous domain, precision really matters. You'd rather deliver one good response than five average responses. And that's part of the reason that with Dosu, our product strategy has always been very public, and we launched really early.

[00:04:12] Devin Stein: And part of that is trying to understand all the types of requests developers get, and figuring out what we call internally a confidence threshold. When does it make sense for Dosu to respond? When does it think it has a good answer or relevant context? Because, like you're saying, delivering incorrect information may cause more work, versus not responding, which is just the status quo.

[00:04:38] Devin Stein: But if you deliver high-quality information, a good response, you can actually save a ton of time. Being careful about when we respond is something we've been getting better at, but there's still a ton of work to do. And I think generally, in the LLM space, there isn't as much discussion about robustness.

[00:04:54] Devin Stein: Thinking about when LLMs should respond. Right now they have a bias towards always responding and saying yes, being very positive. They will do what you ask them to do, because that's how they're trained. And so there's work on the modeling side, but also on the product side, to try to unwind some of that.
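To make the "confidence threshold" idea concrete, here is a minimal sketch of response gating. All names and the scoring scheme are hypothetical illustrations, not Dosu's actual implementation:

```python
# Hypothetical sketch: only respond when retrieved context clears a quality
# bar; otherwise stay silent or ask for more information.

def decide_response(context_snippets, threshold=0.75):
    """Return an answer plan only when supporting context is strong enough.

    context_snippets: list of (text, relevance_score) pairs, scores in [0, 1].
    """
    strong = [(text, score) for text, score in context_snippets if score >= threshold]
    if not strong:
        # Status quo beats a wrong answer: decline rather than guess.
        return {"respond": False, "reason": "no high-confidence context found"}
    # In a real system the strong snippets would be passed to an LLM here.
    return {"respond": True, "context": [text for text, _ in strong]}
```

The key design choice is that declining to answer is a first-class outcome, mirroring the point that not responding preserves the status quo while a wrong answer creates extra work.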

[00:05:13] Simon Maple: Yeah, absolutely. And it's a real thing, an LLM not wanting to say no at times to your requests. I've been playing a little bit with AI for content generation and things like that. And it's funny when I ask it to do things like, oh yeah, scan the Tessl website to see if there are other blogs and podcasts that have similar themes or content that we can link to, as a kind of useful automated thing that I don't then have to go away and do myself.

[00:05:43] Simon Maple: And it's interesting, because if it doesn't find something, it'll make up content that doesn't exist, and it will say, oh yeah, and it's even created the title and the link and things like that. So it's a real problem, not wanting to say no. Is this something that you have to add into the system prompts to make it more accurate, grounding it in things that are absolutely true versus what it wants to be true?

[00:06:06] Devin Stein: Yes. We basically spend a lot of time trying to gather context, and making sure that the final context Dosu uses to perform a response is high quality. And if we can't find high-quality context, then it will respond saying that it couldn't find any relevant information, and maybe prompt the user for more information. I think

[00:06:25] Devin Stein: there is still a bias, and something we're still working on, where LLMs want to jump into solution space. It might find out that the answer is impossible, that what the user is asking for is impossible, but then it'll say, you could also try this other thing, when the correct answer is that it's impossible.

[00:06:43] Devin Stein: But LLMs will tend to push towards, you could maybe do this instead. We've definitely had to build guardrails around that as well. And then generally, output filtering and hallucination checks are very important in this space.
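A toy illustration of the output-filtering idea: before a draft answer ships, verify that the file paths it cites actually exist. Real hallucination checks are far richer; the regex, function name, and file extensions here are invented for the sketch:

```python
import re

def grounded(answer, known_files):
    """Reject a draft answer that cites file paths absent from the codebase.

    A toy grounding check: extract path-like tokens ending in a few common
    extensions and verify each one against the set of real files. An answer
    that cites nothing passes trivially (there is nothing to contradict).
    """
    cited = re.findall(r"[\w./-]+\.(?:py|js|md)", answer)
    return all(path in known_files for path in cited)
```

This mirrors the "code as a source of truth" stance mentioned later in the episode: a claim about the codebase should be checkable against the codebase.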

[00:06:56] Simon Maple: Yeah, absolutely. I think those filters are extremely important. I want to switch back to something you mentioned earlier around context, for a couple of different ways in which we might want to develop an application. First, it might be an application that's being created from scratch, a new component or something like that.

[00:07:16] Simon Maple: Very little context, versus an existing application, a large application that we're trying to make changes to. What are the advantages and disadvantages of each approach, from the user's point of view or from the LLM's point of view, in terms of whether it can be more accurate, more timely in responses, those kinds of things?

[00:07:34] Devin Stein: It's a good question. I'll speak to my experience both at Dosu and personally, just using LLMs and AI as a developer. At Dosu we're very focused on large codebases; that's where the maintenance burden of code increases, with the size and complexity of the codebase and the number of people working on it.

[00:07:53] Devin Stein: But personally, I always use LLMs for that zero-to-one generation. I think LLMs are really good at writing code and writing scripts. So if you want a webpage to do something, or a script to help you automate some part of your workflow, I think they're very good at that, because they can rely on their pre-training, what they already know, and don't have to worry about what the appropriate dependency is, or whether the code is in the right style, or whether a function is already implemented somewhere in the codebase and they should import it instead of rewriting it.

[00:08:31] Devin Stein: So I think LLMs thrive at that. And then when you put them in a larger codebase, part of the challenge is that context: making sure that any code you generate is in the correct style, that it's using the libraries and versions that codebase uses, that it's following best practices. Most codebases have some weird patterns that some engineer early on started enforcing, and you need to follow them, because standardization is generally important at that scale.

[00:09:00] Devin Stein: So that's one part: understanding how the code should be written, or where it lives. And then for us at Dosu, we work in this space, often outside of the IDE, where people are not talking about code in a code sense. When people raise issues or questions to Dosu, they're often describing things in product language.

[00:09:25] Devin Stein: You know, how do we handle this scenario in this product feature? There's no mention of a class name or a file name, and so a lot of the work we have to do is figure out how to generate a map of product-level concepts back to the engineering world, which is code files, functions, etc.

[00:09:48] Devin Stein: And I think that is really important when you think about LLMs for larger codebases, because product language is just how people talk about development at scale. They don't really get too in the weeds on function names. They're saying, I want to change this feature.

[00:10:06] Devin Stein: I want to understand why this is happening with this product feature.
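One way to picture the product-language-to-code mapping Devin describes is an inverted index from product concepts to code locations. This is a speculative sketch, not Dosu's method; the annotation data is assumed to come from sources like docs, PR titles, or past issues, and all paths are invented:

```python
def build_concept_map(annotations):
    """Map product-level concepts to code locations.

    annotations: iterable of (code_path, concepts) pairs, e.g. mined from
    documentation, PR titles, or previously resolved issues.
    """
    concept_map = {}
    for path, concepts in annotations:
        for concept in concepts:
            concept_map.setdefault(concept.lower(), set()).add(path)
    return concept_map

def locate(concept_map, question):
    """Return candidate code locations whose concepts appear in a question."""
    hits = set()
    for concept, paths in concept_map.items():
        if concept in question.lower():
            hits |= paths
    return hits
```

A question phrased purely in product terms ("how do we handle data sources?") can then resolve to both front-end and back-end files, even though it never names a class or file.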

[00:10:09] Simon Maple: Yeah, that's really interesting. There are a few things I'd love to break down. One of the things I loved that you mentioned was developer style. Developer style is one of those things where you often see probably more experienced or senior developers who are more

[00:10:26] Simon Maple: attached to the style of either their code or the component's code that they're working on. Obviously with a smaller project, or a project from scratch, it's extremely hard, unless the tool or application that you are using to help generate this suggested code knows you as an individual, knows you as a user.

[00:10:49] Simon Maple: It doesn't necessarily have a codebase to go from. How accurate can the LLM be in terms of providing suggestions in the style of the existing codebase? Is that good enough today, do you think, or do you feel like we've got further to go?

[00:11:05] Devin Stein: I don't think it's a clear-cut answer. I think part of the challenge with LLMs is that you have to describe what you want, often, and describing the style of a codebase is really hard. Some codebases have really well-written style guides, and in those scenarios I could see an LLM being able to generate code or an edit

[00:11:27] Devin Stein: in that style. But even within a very detailed style guide, there are probably subtleties: we actually have all utility functions stored in this part of the monorepo, or if we're going to define a new service, we want to make sure it follows this naming pattern and we create a new directory.

[00:11:47] Devin Stein: I think it's really hard to capture all of those subtleties, but if you give an LLM a very comprehensive description, it can do a pretty solid job even in today's world. I read recently that Google put out a paper on how they have been using LLMs for code migrations.

[00:12:07] Devin Stein: And a very common pattern where we're seeing LLMs excel, and something we've been leaning into at Dosu, is using examples. Showing examples: here, this is the type of style that I want, with a few different variations of it. I think LLMs are really good at picking up implicit styles from examples.

[00:12:28] Devin Stein: So I think there's something there: if you can learn or find examples of a developer's code from the past, LLMs might actually be able to learn that developer's style almost implicitly, rather than through the explicit style-guide approach.
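The example-driven approach can be sketched as a simple few-shot prompt builder, where past snippets carry the style implicitly rather than being described in a style guide. The function name and prompt wording are illustrative, not from any particular product:

```python
def few_shot_style_prompt(examples, task):
    """Build a prompt that teaches code style implicitly via past examples.

    examples: list of code snippets written by the target developer or team.
    task: the new change being requested.
    """
    parts = ["You write code matching the style of the examples below.\n"]
    for i, snippet in enumerate(examples, 1):
        parts.append(f"Example {i}:\n```\n{snippet}\n```\n")
    parts.append(f"Task: {task}")
    return "\n".join(parts)
```

The model never sees a rule like "use snake_case" or "keep utilities in this directory"; it infers those conventions from the examples themselves.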

[00:12:44] Simon Maple: Yeah, very interesting. And one other area, actually, when we think about existing codebases like that, large repos that may be monorepos or whatever: if I was a front-end developer or a back-end developer writing some code, and I need it to be able to connect to the back end or front end, vice versa, how good is it today, do you feel? If I'm writing some front-end code, can it connect those requests into the correct back-end pieces? Is it good enough to pick out the right back-end

[00:13:15] Simon Maple: calls that need to be made, or is there still a context problem there?

[00:13:20] Devin Stein: Definitely a bit of a context problem. Going back to what I was saying before on the product side, I think thinking about things from a product perspective actually forms really natural groupings between these disparate parts of codebases, the front end and the back end.

[00:13:34] Devin Stein: You might be looking at, and I'm thinking in Dosu's world here, a list of data sources. There's a page on our web app that has that, and then there's back-end code around data sources and how we process and store them. So connecting those two as "data sources," the places you would look when working with data sources, is how you can create a mapping between these disparate parts of the codebase.

[00:13:58] Devin Stein: And I think it gets even harder because most companies are not all monorepos. You're talking about multiple repositories, so you're trying to figure out, okay, given this feature, which repositories do I need to change, and then which files within those repositories. Figuring that out is challenging.

[00:14:16] Devin Stein: It comes down to a context problem: figuring out how to break down the request and find the relevant pieces of the codebase. But I really think there's a lot of potential and power in thinking about things from a product-first perspective,

[00:14:34] Devin Stein: the same way you would as a developer.

[00:14:36] Simon Maple: Yeah. And I guess two things. I'd love to dig a little deeper into describing things as product features versus code. But one thing before that: in order to break down the existing codebase, or the many repos, it feels like the LLM needs to

[00:14:54] Simon Maple: recognize and understand the architecture that exists, so that it can understand the user flows or the product flows going through it. So it recognizes: okay, if a user request has this path, it needs to touch the front end, it needs to go grab some data, maybe pass that data back in a certain format to the front end, which displays it like this.

[00:15:14] Simon Maple: It really needs to understand that. And this is almost the style in which a user might want to phrase a request to an LLM, being able to say, this is what I want from this new use case. How good, from your experience with Dosu, are developers at describing these kinds of requests with enough detail, but not too much context?

[00:15:42] Devin Stein: I think, so on the architecture point, I think that's a really,it's a good one. I think it's the same type of what are abstractions that we can give LLMs in order for them to understand dependencies that are more complex. So I think looking at it from a product angle is one. I think from an architecture perspective also, helps LLMs, understand those dependencies.

[00:16:02] Devin Stein: From what we've seen, LLMs are not that strong, in their current form, at reasoning about higher-level architecture and the implications of it. I can speak about our own: it's very async, pub/sub, event-driven. And LLMs definitely struggle with reasoning about exactly-once delivery and idempotency and the implications of those.

[00:16:23] Devin Stein: But on describing all of that to the LLM: LLMs are not magic, they are only as good as their inputs, and it's a lot to describe. If you look at any company's backlog, the tickets are typically pretty sparse, because there's a lot of implicit context,

[00:16:47] Devin Stein: and it's a lot of work to write things out in detail. So one thing that I think Dosu does well and LLMs could help with is this kind of context gathering where, Hey, you keep it pretty high level. Dosu finds relevant resources that it thinks Hey, you're talking about,these types of files, here are some related PRs, and some documentation I found, and then can actually use that to maybe flush out the description more,

[00:17:13] Devin Stein: and I think that kind of back and forth helps. We've been seeing that in a lot of AI developer products: it's a human-in-the-loop experience where LLMs are gathering context for you, and you're saying, yes, that is what I want, or no, look for XYZ instead.

[00:17:30] Devin Stein: So for one-shot, developers describing exactly what they want? I don't think they're generally that good at it, just because it's a lot of work.

[00:17:38] Simon Maple: Yeah, absolutely. And it's not something developers have naturally had to do for someone who isn't going to discuss it back and forth with them like a human would. In terms of what an LLM is good at and what it still needs help with today: we mentioned existing codebases that an LLM has to trawl through in order to understand and make changes, versus greenfield projects.

[00:18:05] Simon Maple: It's probably fair to say they're going to be better at new creation than at performing changes to existing files or existing codebases correctly, because there is so much relevant context in an existing codebase. There are many ways in which they can get it wrong, and

[00:18:24] Simon Maple: fewer ways they can get it right, with so much existing context. Is that fair?

[00:18:29] Devin Stein: I think that's very fair, and I think there are two parts to that. One is at the model level, just where we are technology-wise. LLMs are notoriously bad at patch generation, at actually making an edit to a file. Often the best way to do that right now is to just rewrite files, and that doesn't scale; there are very large files out there.

[00:18:50] Devin Stein: And I think that is a solvable problem. Patches just happen to be very out of distribution in the way that models are trained, but it's almost guaranteed that in the next, I don't know, year, maybe even less, there's going to be a model that is very good at patch generation.

[00:19:06] Devin Stein: So I think on the modeling side that will get better over time. And then, like you're saying, when you have a very large codebase, it means you're typically making many edits, especially for any significant feature work. LLMs are probabilistic, so if you make a thousand edits to a codebase and there's some very small error percentage, you're going to get things wrong.

[00:19:27] Devin Stein: And so the more places you have to change, the more likelihood of error as well. That's another challenge with larger codebases that you don't get with net-new, zero-to-one work.

[00:19:36] Simon Maple: What is it about patches specifically that makes them that much harder than a complete file rewrite?

[00:19:42] Devin Stein: If you look at a patch, it's not really like anything else. It has a very specific format; it's a domain-specific language. And so LLMs that are trained on just code and just text can come up with patches.

[00:19:59] Devin Stein: But there is a strict schema requirement for it, and there are a lot of ways it can be wrong. It's not the most intuitive format. I don't know if you've ever looked at a patch file: it's pluses and minuses, add this line, delete this line. And you have to be able to reason about that patch within the context of a larger file as well.

[00:20:16] Devin Stein: So it's basically a domain-specific language, and right now LLMs, at least out of the box, aren't super reliable with that patch format.
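To see why the format is so unforgiving, here is a toy applier for a simplified diff hunk (not a full unified-diff implementation). One character of drift in a context line and the whole patch fails to apply:

```python
def apply_hunk(lines, start, hunk):
    """Apply one simplified unified-diff-style hunk to a list of source lines.

    start: 1-based line where the hunk begins (like the `@@ -start @@` header).
    hunk: lines prefixed with ' ' (context), '-' (delete), or '+' (insert).
    Raises ValueError when context or deletions don't match the source,
    which is exactly where LLM-generated patches tend to fail.
    """
    out, i = lines[: start - 1], start - 1
    for entry in hunk:
        tag, text = entry[0], entry[1:]
        if tag == "+":
            out.append(text)
        elif tag in (" ", "-"):
            if i >= len(lines) or lines[i] != text:
                raise ValueError(f"patch does not apply at line {i + 1}")
            if tag == " ":
                out.append(text)
            i += 1
        else:
            raise ValueError(f"bad hunk line: {entry!r}")
    return out + lines[i:]
```

A model that rewrites the file whole never hits the error path; a model emitting hunks must reproduce the surrounding lines exactly, which is where out-of-distribution patch output tends to break.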

[00:20:27] Simon Maple: Yeah. And what other areas of an LLM do we still need to improve, in order for our general experience of development with AI to be better?

[00:20:39] Devin Stein: So there are endless experiments and ideas I want to explore about ways we can make LLMs better at navigating codebases, but one thing we've been really focused on at Dosu, and that I alluded to before, is this: when people talk about LLMs or AI software developers or agents more generally, they often compare them to employees, right?

[00:21:05] Devin Stein: We're building an AI software engineer, an AI sales development rep, or an AI executive assistant. And I think that analogy is actually really apt, because when you hire someone as, let's say, a software engineer, you don't expect them to come in, read your entire codebase, and then just know what to do.

[00:21:27] Devin Stein: What you expect is that they read it, get a sense of what's going on, try to tackle a ticket, and either make mistakes and then be corrected, say on a PR or in a conversation, or ask questions: hey, I'm trying to do this, but I don't really understand it. Should I do this?

[00:21:46] Devin Stein: Should I do A or B? And then get an answer. And over time, through these corrections and answers, you expect them to become more and more proficient. That's something we're really trying to lean into with Dosu: Dosu is going to make mistakes, LLMs will make mistakes, but how do we learn from that, get feedback, and make sure it doesn't make the same mistake twice? So that over time, as you invest in the product and as it sees more and more examples, it gets better and can achieve the level of proficiency you would expect of an employee.

[00:22:23] Simon Maple: Yeah, absolutely. And when we think about the way we currently describe things that we want to change, code changes and so forth, how do we need to think about describing code changes to an LLM?

[00:22:40] Devin Stein: So the way it's done today: I think people have aligned on this kind of sub-task, problem-breakdown approach. You describe a higher-level problem, the LLM tries to come up with a plan or break it down into smaller pieces, and then it tries to execute on that plan in smaller components, one at a time, and then synthesize the results.

[00:23:09] Devin Stein: We follow something very similar to this at Dosu, and the challenge with it is that there are a lot of steps where it can go wrong. If you break something down into a 12-step plan, those sub-plans might then be broken down further, and, again, because it's probabilistic, the more calls or changes you're making, the higher the likelihood of failure.

[00:23:34] Devin Stein: And you often need humans in the loop. What we've seen is you have someone checking the plan, or watching its progress, making sure it's okay. And similarly, if the plan is too vague, which is actually something we see a lot in production, people under-specify, like we were saying. If things are vague, the system will often pull in more context that isn't relevant to the problem, and more context can confuse LLMs as well. So that's how it's done today.

And then looking forward, I think, as with all things in software engineering, we need better abstractions and better primitives, where LLMs are not having to think in such small, granular detail, so that they're able to make larger, sweeping changes more reliably

[00:24:22] Devin Stein: than what exists today.
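The compounding-failure point above can be made concrete with a little arithmetic: if each step of a plan succeeds independently with probability p, an n-step plan succeeds with probability p^n. A sketch (the 0.99 per-step figure is illustrative, not a measured number):

```python
def plan_success_probability(steps, per_step_success=0.99):
    """Probability an n-step plan completes with no errors, assuming
    independent steps that each succeed with the given probability."""
    return per_step_success ** steps

# Even a 99%-reliable step compounds quickly:
# a 12-step plan succeeds roughly 89% of the time, a 100-step plan only ~37%.
```

This is why deeper problem breakdowns, and sub-plans of sub-plans, push systems toward human-in-the-loop checkpoints or coarser primitives.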

[00:24:24] Simon Maple: Yeah, and you mentioned a little bit there about too much context. How do you at Dosu identify what the right amount of context is?

[00:24:33] Devin Stein: So the way we approach it is, again, specific to our asynchronous use case, where we have a bit more time. Oftentimes we've found there's a correlation between the number of tokens in and the length of the output you get, and sometimes the answer to a complex question could be just one or two sentences, so conciseness is something we've been working on a lot. But at a high level, the way we approach the too-much-context problem is we spend the time refining the context. Think of it almost like a sculptor, chiseling away at this larger block of context down to only the pieces we believe are relevant to the problem, so that the final prompt we're crafting is ideally very high quality, with only the specific information.

[00:25:21] Devin Stein: So we just take some time to refine the larger inputs into a smaller, higher-quality, informationally dense prompt at the end.
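The "sculptor" approach can be sketched as greedy selection of the most relevant snippets under a token budget. The field names, scores, and token counts are hypothetical; real systems would use a tokenizer and a learned relevance model:

```python
def chisel_context(snippets, budget_tokens):
    """Keep only the highest-relevance snippets that fit a token budget.

    snippets: list of dicts with 'text', 'score' (relevance in [0, 1]),
    and 'tokens' (precomputed token count). Snippets are taken greedily
    in descending relevance order until the budget is exhausted.
    """
    chosen, used = [], 0
    for snip in sorted(snippets, key=lambda s: s["score"], reverse=True):
        if used + snip["tokens"] <= budget_tokens:
            chosen.append(snip["text"])
            used += snip["tokens"]
    return chosen
```

The point is that the budget forces a ranking decision: low-relevance material is dropped entirely rather than diluting the final prompt.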

[00:25:31] Simon Maple: Yeah, interesting. I think that's one of the biggest problems today: we recognize that context is so valuable in terms of providing a far better answer, but we're still having trouble determining what the right level of context is. And I think a lot of it is us learning

[00:25:48] Simon Maple: what we need to provide. We're going to look to the future a little bit to wrap this up, but before we do, we're going to try something new on the podcast: a share-the-screen piece, which we're going to wrap up as a separate episode that will follow this one immediately.

[00:26:03] Simon Maple: Before we jump to the future, what are the kinds of things we can look forward to seeing in that next episode? You're going to show us it through Dosu, but we really want to cover a lot of the technologies that we've talked about.

[00:26:16] Devin Stein: Yeah. I guess we didn't get too much into Dosu, but Dosu is extremely popular right now within open source, helping open source maintainers support and grow their communities and keep up with maintenance. Dosu is installed on thousands of repositories, and open source is public.

[00:26:32] Devin Stein: And so there are a lot of pretty cool examples of Dosu doing really well, providing a ton of value to developers, and also helping showcase where LLMs fall short and where there's room for improvement. So I think it would be useful to walk through some examples of Dosu helping developers out based on the context of their codebase.

[00:26:51] Devin Stein: Answering questions that aren't well documented, but where the answer does live in the codebase. That's something that's important to us: treating code as a source of truth, like a developer does. And other examples of Dosu autonomously resolving issues with a combination of code and documentation.

[00:27:08] Devin Stein: And then also ones where it gets to a solution, but it isn't the optimal solution, tying back to what we were saying before about LLMs wanting to give you a solution when the answer might be that there is no solution.

[00:27:23] Simon Maple: Yes, absolutely. Excellent. So for those who are listening, that's going to follow straight after; just skip to the next episode and you'll see it. This one is obviously best with video, as we're going to be doing a fair bit of screen sharing. So, looking to the future, just to wrap this episode up: we've been talking particularly about making changes or creating new code through prompts.

[00:27:47] Simon Maple: And even when we elevated it that little bit, where we're trying to get users to talk not through code, but actually through use cases almost. Talk to me via product words and we'll turn that into code. Where is this leading us? What's the future of this a few steps, a few years down the line?

[00:28:08] Simon Maple: What do you foresee as the interaction between developers and the backend that will generate code for them?

[00:28:16] Devin Stein: Yeah, I'll caveat everything I say with this: it's very hard to predict the future right now. Things are moving very...

[00:28:23] Simon Maple: Oh my God, particularly in AI.

[00:28:25] Devin Stein: Yes. The rate of change has been incredible. It's really hard to predict how things will evolve, but something that has been true for many years now is that as software development has progressed, the level of abstraction has also increased. And my take, I'm a pretty firm believer in this, is that for LLMs,

[00:28:47] Devin Stein: or AI, to actually build apps for us, similar to what we expect from software engineers, code itself, as it exists today, isn't necessarily the right abstraction. If you think about programming languages, like we were saying before, implementing a feature or modifying behavior requires a lot of changes.

[00:29:10] Devin Stein: And the level of abstraction within programming languages hasn't really changed that much in the past 10 or 20 years. We have new languages, like Rust, but Python is still, I would say, one of the more abstract, higher-level programming languages, and it also happens to be what LLMs are very good at.

[00:29:31] Devin Stein: But I think in order for LLMs to become more proficient at building apps the way we expect software engineers to, we need abstractions around what apps are that LLMs can think in and modify. The one that's top of mind for me is really REST APIs. There's a lot of work that's been done around REST APIs and code generation, but it feels like such a key concept, a core primitive of modern web apps,

[00:30:02] Devin Stein: that you could imagine a language or spec, some abstraction around REST APIs or RESTful services, that allows LLMs to operate in that domain. They're not worrying about the implementation details of that REST API. They're thinking about: what types of filters do we need to support?

[00:30:25] Devin Stein: What is the input and output? And then there are tools to compile that into, maybe it's Python or some other code under the hood. But I think it's about having higher-level primitives that LLMs can think in, without worrying too much about the, quote unquote, low-level code that we use today.

[00:30:44] Simon Maple: Yeah, and I love the idea of actually starting from the REST API, or any API in particular, because it allows people to think about the usage of that component, the use cases and the flows that you'd like to make. So interestingly, you talked about a Python-like language.

[00:31:03] Simon Maple: Is that because there's too much ambiguity through the English language, or do you feel it needs to be styled in a specific way, so you'd use something more like a DSL, like Python notation?

[00:31:13] Devin Stein: Yeah, so I mentioned Python because I think Python's a good example of a higher-level programming language today that has helped developers be more productive without worrying too much about the details under the hood. Now, there are some people who believe English is the future of code.

[00:31:31] Devin Stein: I'm skeptical of that. You need code because it helps you be specific. And that is the challenge with abstractions: finding the level of specificity that we actually need for LLMs to do their job effectively.

[00:31:48] Devin Stein: That is the key challenge. And I think REST APIs are just an easy example where there isn't that much surface area if you were to create some sort of abstraction or DSL. A lot already exists there; I love OpenAPI specs and the code generation around them, where you can think about things in terms of inputs, outputs, and parameters, and you can generate a lot of code from that.
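To make the idea concrete, here is a minimal sketch of the kind of higher-level primitive Devin describes: a small declarative spec that captures only what matters (method, path, filters, output fields) and "compiles" down to code. The spec format and `generate_handler` function are hypothetical, invented purely for illustration; real OpenAPI code generators are far richer than this.

```python
# Hypothetical endpoint spec: the level an LLM would reason at, with no
# implementation details, just inputs, outputs, and supported filters.
endpoint_spec = {
    "name": "list_users",
    "method": "GET",
    "path": "/users",
    "filters": ["team", "is_active"],   # query parameters
    "output": ["id", "name", "email"],  # fields of each response item
}

def generate_handler(spec: dict) -> str:
    """Compile the spec into Python source for a handler stub."""
    params = ", ".join(f"{f}=None" for f in spec["filters"])
    fields = ", ".join(spec["output"])
    return (
        f"def {spec['name']}({params}):\n"
        f"    # {spec['method']} {spec['path']}\n"
        f"    # returns items with fields: {fields}\n"
        f"    ...\n"
    )

source = generate_handler(endpoint_spec)
print(source)
```

The point of the sketch is the division of labor: the LLM edits the spec (add a filter, rename a field), and a deterministic generator handles the low-level code, which is the separation of concerns discussed above.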

[00:32:13] Simon Maple: Yeah, no, I love it. And what does a developer look like then, in that case? If they're more on the specification angle, what's the role of a dev?

[00:32:23] Devin Stein: I don't think developers are going away, but I think developers will be more productive and expected to do more. It also lowers the bar to entry, similar, honestly, to Python. If you try to program in C, there's a pretty high bar: you're going to run into a lot of errors that are hard to debug and reason about. Python makes it a lot easier for people who are maybe self-taught to write scripts.

[00:32:46] Devin Stein: And I think the same thing will happen with LLMs, especially for creating simpler CRUD apps. You'll have non-engineers in engineering-adjacent roles, like PMs, designers, and so on, who are able to spin up web apps for side projects, for internal tools, that sort of thing.

[00:33:06] Devin Stein: And then similarly on the developer side, I think, yeah, they'll just be more productive.

[00:33:12] Simon Maple: Yeah, amazing. That's a great point to end the episode on. It's a very positive angle in terms of what developers have to look forward to, and also how it broadens the role. Devin, thank you so much. Really appreciate your insights, and I'm looking forward to jumping in next to actually get our hands dirty a little bit and see some of this in action.

[00:33:34] Simon Maple: So Devin, thanks very much, and looking forward to the next session.

[00:33:37] Devin Stein: Thank you, Simon. A ton of fun.

Podcast theme music by Transistor.fm.