Monthly Roundup: Gen AI-powered TDD, Understanding vs. Generating Code, Specialized vs. Generalized Models, and more!

In this episode, industry leaders discuss the transformative effects of AI on software development, covering AI testing, code generation, and the evolving roles of developers. Tune in to hear from experts at Codium AI, AWS, Poolside, and CINQ.

Episode Description

In this month's episode of the Tessl podcast, hosts Simon Maple and Guy Podjarny explore the transformative impact of AI on software development. Featuring insights from industry leaders such as Itamar Friedman from Codium AI, James Ward from AWS, Jason Warner from Poolside, and Bouke Nijhuis from CINQ, the episode delves into AI's role in testing, code generation, and the future roles of developers. Listen as they discuss complex topics like AI-generated tests, the dichotomy between understanding and generating code, and the importance of tests over code. This episode is essential listening for anyone interested in the intersection of AI and software development.

Chapters

  1. [00:00:21] Introduction - Simon Maple and Guy Podjarny introduce the episode and the topics to be covered.

  2. [00:00:57] AI Testing with Itamar Friedman - Itamar Friedman discusses the complexities of AI in testing and Codium AI's approach to generating effective tests.

  3. [00:02:02] Code Generation Models with Jason Warner - Jason Warner explores the intricacies of code generation models and the importance of understanding code.

  4. [00:04:52] TDD and Code Generation with Bouke Nijhuis - Bouke Nijhuis discusses using Test-Driven Development (TDD) to generate code and his iterative loop tool.

  5. [00:10:08] The Dichotomy of Understanding and Generating Code - Itamar and Jason provide insights into the complexity of understanding versus generating code.

  6. [00:13:28] The Future Developer's Role - Discussion on how AI is changing the roles of developers towards product management and architecture.

  7. [00:19:14] The Importance of Tests Over Code - Bouke's perspective on tests becoming the most important artifact in development.

  8. [00:27:26] Specialized vs. Generalized AI Models - Jason discusses the competition between specialized and generalized AI models.

  9. [00:30:10] Recent AI Developments and Announcements - Overview of recent funding announcements and developments in the AI dev space.

Full Script

[00:00:21] Simon Maple: Hello and welcome again to another monthly episode, our second monthly episode.

[00:00:27] Guy Podjarny: We're getting, getting to be pros here or rather, recurring.

[00:00:29] Simon Maple: We're getting into a rhythm. So this month, we're getting more into, uh, our normal cadence of weekly episodes. And we're actually going one week each, so myself doing one and then yourself. So we did four this month, excluding of course the monthly, which we released shortly after. Yeah. So we started off with Itamar, from Codium AI, and talked about AI testing. Then we moved on to the wonderful James Ward, a friend of mine from many years back.

[00:00:57] Guy Podjarny: From the ancient world of Java,

[00:00:58] Simon Maple: From the ancient and existing world of Java. James, just moving over to AWS, actually, from Google, in a very interesting role as a dev advocate in the world of AI and Q Developer from Amazon. And we also had Jason Warner, a wonderful episode, with yourself and Jason.

[00:01:13] Guy Podjarny: Super interesting stuff. Yeah. Digging into code generation models and more.

[00:01:16] Simon Maple: Yeah, of course. Jason, previously CTO of GitHub, now CEO of Poolside. And Bouke Nijhuis, a really interesting hands-on episode about how we can use TDD to generate code from our own tests, which we write, and this lovely little iterative loop that he created as a tool,

[00:01:34] Simon Maple: to build, effectively, components from tests. Yeah.

[00:01:38] Guy Podjarny: And I love that that episode very much came from him sharing his work. It's not anything broader, not some previous establishment of credentials suggesting maybe you have something smart to say in this space.

[00:01:49] Guy Podjarny: It's: he's actually built a thing. He presented about it. And that's the thing that caught our attention when we reached out to talk to him.

[00:01:53] Simon Maple: Yeah, very interesting. It was very conference driven development, that one, actually, because he was talking about, I can't remember what the topic was, he was talking about something and someone said, why can't we do it the other way around?

[00:02:02] Simon Maple: Why can't you generate code, rather than generate tests from your code, which is where we were talking with Codium. The other way around: what about if your code was generated from your tests? And he thought, oh, it's an interesting project, let's go away and do it. And so that's how, yeah, conference driven development.
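The loop described here, generating an implementation from the tests and iterating on failures until the tests pass, can be sketched roughly as below. This is a hypothetical illustration, not Bouke's actual tool; `fake_model` stands in for a real LLM call, and the function names are invented for the sketch.

```python
import subprocess
import sys
import tempfile

def run_tests(impl_code: str, test_code: str) -> bool:
    """Write the candidate implementation plus the tests to a temp file
    and execute them; the tests are the only source of truth."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(impl_code + "\n\n" + test_code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True)
    return result.returncode == 0

def generate_until_green(ask_model, test_code: str, max_attempts: int = 5) -> str:
    """Iterate: ask the model for an implementation, run the tests,
    feed failure info back, and stop as soon as the tests pass."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        candidate = ask_model(test_code, feedback)
        if run_tests(candidate, test_code):
            return candidate  # tests pass; the code itself need not be reviewed
        feedback = f"attempt {attempt} failed the tests"
    raise RuntimeError("no passing implementation found")

# Stand-in for a real LLM call: first answer is wrong, the retry is right.
_answers = iter([
    "def add(a, b):\n    return a - b",   # buggy first attempt
    "def add(a, b):\n    return a + b",   # corrected retry
])
def fake_model(test_code: str, feedback: str) -> str:
    return next(_answers)

tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
impl = generate_until_green(fake_model, tests)
print("passing implementation:\n" + impl)
```

The point of the sketch is that the tests drive the whole loop: the human writes and owns the assertions, and any implementation that turns them green is acceptable.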

[00:02:14] Simon Maple: I love it. All about the feedback. Yeah. So let's talk about some topics then. One, which I know very much interested you, was the idea of how LLMs have a greater capacity, or sometimes find it easier, to effectively understand more than generate. And then sometimes generating is easier than understanding.

[00:02:35] Simon Maple: I know Jason had, very good insights into this as well. And sometimes maybe different from others that were there.

[00:02:41] Guy Podjarny: From those of Itamar. So first of all, I think it's interesting to set the stage around what is the problem, or what are the different perspectives you might have. One assumes that when you talk about generating anything about code, generating docs, generating tests, generating new code to add into that system, the complicated piece is understanding the code. And Itamar was very crisp about saying: I think the hardest thing is to know what to test for, to understand the code, to understand what are correct and incorrect systems within it.

[00:03:14] Guy Podjarny: And not all the knowledge about what to test for sits in the code itself, so maybe it's a bit more than that. But that's the hardest thing. And then it's not that it's trivial to generate the tests, but that is the easier part once you know what you're testing for. So that was interesting, and it drove, I think we'll touch on that a little bit.

[00:03:29] Guy Podjarny: It drove them to actually create more tools that revolved around whatever it is that they understood in there.

[00:03:35] Simon Maple: And incidentally, actually, on that, I remember one of the early sessions that we did with Rishabh from Sourcegraph, the makers of Cody. He also mentioned, when you think about the human interaction, that if you've got 200 tests, there's a certain area of code that you want the human to focus on.

[00:03:53] Simon Maple: And it's almost like helping the AI to understand what to test, not the fact that it's created so many, but the areas the AI needs to focus on. So it's an interesting kind of, yeah.

[00:04:03] Guy Podjarny: Delta between those. And I think, there's something sensible about this, right? If you think about what it means to be a good developer, if you think about the effort of writing tests today, there's an effort in, whatever, writing down the lines of code, definitely in maintaining those,

[00:04:15] Guy Podjarny: that's a different story. The real kind of tough question is: do you know what type of tests you want to run? Then, yeah, you need to not be lazy and actually write those down. So it's interesting, just that difference between understanding what to test for, and with it, understanding the code, and then generation.

[00:04:33] Guy Podjarny: And,and I think Itamar was sort of implying that's the complexity and the hard thing that they're dealing with, which again, is not just understanding the code, but it is about identifying what they did. And Jason's view seems to have been, that the LLMs are actually further along down the route of understanding code, than they are generation.

[00:04:52] Guy Podjarny: First of all, he also, unsolicited, pointed out this dichotomy, and it's interesting to just think about this as two things: understanding the code and creating it. And it's not surprising. A lot of the startups in the domain of AI code generation are priding themselves on their ability to index an enterprise codebase, but also on understanding the code and then generating it.

[00:05:15] Guy Podjarny: But I guess the perspective, that I understood from Jason is that, the LLMs as they stand today are actually much better because I asked him about what are the edges right now of capabilities, and he felt like understanding code is actually something LLMs are a lot better at today.

[00:05:29] Guy Podjarny: And that the generation parts of it is maybe the parts that they're still working on, they're still evolving, and he was pointing out that, these LLMs are not yet junior developers. And again, we can talk a little bit more about sort of the evolution of the individual.

[00:05:41] Guy Podjarny: So I guess what I found interesting, in the observation is just, first of all, Coming back to understanding that when you talk about whether something works or it doesn't work, there seems to be a focus on, did it not work because it didn't understand your context? It didn't understand your existing code base.

[00:05:57] Guy Podjarny: last time we talked about how that context might be bad. there, there's a lot that you might not want it to,mimic in your context. But it needs to understand what the context was, and then the other part is the Gen part of the Gen AI.

[00:06:10] Guy Podjarny: And maybe the piece that I talked to Itamar about is this notion that it's called Gen AI, and we lean a lot into the generative, but actually the interpretation, the sort of analytical part of AI, still using the same LLMs and their innovations: maybe much of the value, I guess, that we're accruing right now comes from that.

[00:06:29] Simon Maple: Yeah. Yeah. And it's interesting actually, 'cause when we go back to the session that we had with Amir Shevat, we were talking a lot about the categorization, and I think on the last monthly you mentioned how we kind of initially started off with all these categories of tools, and then when we talked to the vendors, of course, vendors are doing multiple things.

[00:06:47] Simon Maple: I know this is something that you've thought a lot about: when we think about the commoditization, effectively, of that understanding of the code, that code search, then the additional aspects that actually provide the value to the user, whether it's testing or whether it's something else, become probably a smaller step and actually a more unique part of that tool. So there's less of a jump to get to a testing tool or documentation tool and so forth. What are your thoughts in terms of the tooling ecosystem going forward? If that becomes commoditized, how will that change how tools are produced?

[00:07:25] Guy Podjarny: I think that's really the eventual conclusions from this. There's two theories that you might embrace. One is that the hardest part is understanding the code and that becomes the center of gravity and that a few big platforms become really good at understanding your code.

[00:07:38] Guy Podjarny: And because they really get it, they understand the code, they understand what's right about it. Why, how it works. they are able to produce a bunch of these surrounding functionalities like generating tests, generating documentation, generating new code within that surrounding resolving bugs because they get it.

[00:07:54] Guy Podjarny: They understand your code, they understand your context very well. And so that theory of how the market might evolve leads into more of like single platforms that as a company, maybe you would really centralize on. I'm going to use this platform and I would, I'm going to use its set of tools because it really understands my code.

[00:08:11] Guy Podjarny: That's one path. The second path assumes that understanding your code, while very important, will get commoditized. That the LLMs will just understand it, or there will be some diminishing return,paths or, approaches to, to improve it, to get better at understanding your code. And so everybody will do it at around the same level and without much effort.

[00:08:30] Guy Podjarny: So you just point it to my code base and I'll understand it. And from there, that leads more to best of breed tools. Now it's okay, if you already, everybody understands the code and it's not hard to get the tool to understand it. Now you really want to say, I want my testing tool to really get, testing.

[00:08:44] Guy Podjarny: I want it to understand maybe my product analytics, and I want it to understand real-world traffic that flows through my system, and I would like it to understand my business case and what is important to me. My documentation tool maybe relates to how I disseminate that and which platforms are using it and how I integrate.

[00:08:59] Guy Podjarny: And similarly for code generation or bug resolution: maybe it's related to what's my platform, maybe doing it more securely. So each of these different tools might bring its own specialization. And so that's more of a best-of-breed tool. And they both revolve, really, around your perspective on how hard, both from an IP perspective and from an implementation perspective, it would be to understand your code, and how much that is eventually the deciding factor.

[00:09:25] Simon Maple: Yeah. 'Cause I guess if it's not, and it does get commoditized, like you say, into a model, let's say, it puts everyone on a much more level playing field, and then I guess it comes down to, again, that dev stickiness, the UX, as our dev tooling aficionado, Guy. I think you've had what, 50 successful startups now in the dev tools space?

[00:09:42] Simon Maple: yeah,

[00:09:43] Guy Podjarny: I've met 50 startups in the dev tools space!

[00:09:45] Simon Maple: You've invested in 50. It's again, down to whatever the dev favorite tool is. Not about the capability of the tool per se, but about how easy it is to use and potentially it comes down to consolidation. Again, if there's one tool that can do enough of it well enough and it's very easy to, for me to use in my workflow.

[00:10:05] Simon Maple: It very often wins.

[00:10:08] Guy Podjarny: Yeah. And I think, I would like the second theory that assumes that the understanding of the code would either prove not as critical or be sufficiently commoditized, because I think that leads to a more thriving ecosystem, right? If you create, if you assume that there's going to be one, like deeply understanding platform that gets your code.

[00:10:26] Guy Podjarny: And therefore it would have all the cards, then it doesn't lead to competition. It doesn't lead to innovation. Versus if you assume that everybody can do that, everybody can understand. Everybody's probably an exaggeration. Fine, not everybody. You need some level of competency and investment.

[00:10:41] Guy Podjarny: But once you've built that understanding of the code, then you can build these things. Now, each of these tools can specialize and can optimize. They might optimize indeed on ease of use. They might optimize on a specific ecosystems and verticals. They might optimize, on maybe a new stack that, that comes along.

[00:10:58] Guy Podjarny: Maybe it's just opinions. There are different ways to document. They're different, but they're sometimes opposed. So if your taste, if your preference, is more on one route, you'll choose that tool.

[00:11:08] Simon Maple: Great example being Q Developer,

[00:11:09] Simon Maple: which yes, it's a generic code completion,

[00:11:12] Simon Maple: actually other things as well, a coding system there. It has quite a specialization around the Amazon, the AWS services base. So if you wanted to code against an Amazon AWS API, it's very good at actually attaching to existing services and things like that. So those kinds of specializations whereby, if you're developing on an Amazon stack, your mileage will go much further.

[00:11:33] Guy Podjarny: Yeah. And I think that's a good example. I think in general, anything that's within an ecosystem, might specialize because people would have invested in that. While if you're sort of imagining a place in which it's so hard to understand the code and that's important or critical to the success, then yeah, maybe there's a few companies that are happy about it if they are the ones that have won, but then it means probably companies pick one, and maybe eventually they have more.

[00:11:56] Guy Podjarny: But mostly they try to concentrate on one, and that means a new vendor has to plug into it, build on their understanding. So it's not all gloom. If it is about that sort of central element, then what you'd expect is for those that make available, or are able to produce, that understanding of the code to become a platform and allow others to connect.

[00:12:16] Guy Podjarny: But I guess my hope is that it's less about this holistic understanding of the code. But also, I think that's my expectation. I don't know. And maybe it's aligned with what Jason was saying, which is he feels the systems understand the code pretty well.

[00:12:31] Guy Podjarny: He's building a foundation model. So maybe he's like, okay, my foundation model will just understand it as is. So there are different levels of platform here. But I also, I guess, expected that, because, I don't know, I don't know what your opinion is. Like I feel, I always equate it to human understanding.

[00:12:47] Guy Podjarny: And I think, there's a level of understanding of your code that is sufficient that, at that point, if you want to write docs, if you want to write tests, if you want to write all of these other things, then okay, you have enough understanding of the code from here on, it's about your competency in that.

[00:13:00] Simon Maple: Yeah, absolutely. That leads us nicely on, actually. We talked about this a few times prior to this month, but it came up a number of times during the sessions this month, which is really about what a future developer will look like. And I think there's a couple of pieces here: one, talking about the role of a developer today, and how, I guess, AI is currently impacting or assisting that role today, and perhaps even an overestimation of its capability today, which we'll talk about in a second.

[00:13:28] Simon Maple: And then secondly, in future, how do we expect AI to change our jobs, and in what capacity? Maybe we start with Jason first, because he talked a little bit about the impact of AI on our behaviors today. And this notion, which many people have bandied about, which is: it is as good as a junior developer today.

[00:13:48] Guy Podjarny: Yeah.

[00:13:49] Simon Maple: Now his opinion of course was

[00:13:51] Guy Podjarny: Yeah, that it's not at all. Yeah. I think the quote was something like, none of these are junior developers.

[00:13:55] Simon Maple: Yeah. The quote was, AI assistants are evolving, but there's still a leap to make before we could even start thinking about them as autonomous junior developers.

[00:14:02] Simon Maple: And there's a lot of talk about hands on keyboard and the need for developers to still have hands on keyboard. because it's just not there, it's not there in that fully automatable sense.

[00:14:10] Guy Podjarny: I find all this quite fascinating. First of all, it's an observation that is about the current state of the art in terms of the technology and, in his defense, he was referring to his own platform as well.

[00:14:20] Guy Podjarny: So he says, it's not like the models just produced junior developers out of the blue. I like the use of the word autonomous there. Last time we geeked out a little bit about the term autonomy. I talked to Des at Intercom about that, and autonomy probably is around, it's a size question.

[00:14:36] Guy Podjarny: So there's some, can you autonomously summarize an email for me? They can do that today. But you don't call that autonomous because it's too small, the task is too small. So I guess there's a little bit of a lack of clarity about what is a junior developer. What is the scope of a task that a junior developer can do that is enough to give it that sort of title, autonomy, right?

[00:14:57] Guy Podjarny: But I thought that was really interesting. First of all, it led to a bunch of this conversation around understanding the code as well. He implied, I don't think he said it explicitly, that it would be better able to explain the code to you at a relatively senior level than be able to execute it.

[00:15:13] Guy Podjarny: And I think really a lot of that boils down to autonomy, right? And if you think about this in the context of what is the future of a developer, I think it's interesting. And I guess he was still leaning to, in the next few years, we're still thinking about AI primarily as assistants and the developer engaging with them,

[00:15:33] Guy Podjarny: which I think is like a little bit unexciting, but maybe it's a transitionary period. I don't know, how much do you like reviewing tests and code that got generated?

[00:15:41] Simon Maple: It's going to become like the code reviews, right? Whereas, if it's a, if it's a few lines, then maybe I'll look at it.

[00:15:46] Guy Podjarny: If it's 500 lines, maybe I'll scan it and have no comments about it and just accept it. And I think it's a bit of a bummer thought, to think that we will get stuck in this mode. It's fine, Copilot, Poolside, whatever, all these things, they're going to generate code for us.

[00:16:02] Guy Podjarny: And then we will review that, and you would have written it faster. Fine, maybe we're faster, but as long as it's not autonomous in the sense that we can trust it, then basically our job becomes reviewers, and that's not a fun job.

[00:16:17] Simon Maple: And that's the key, isn't it? It's trust.

[00:16:19] Simon Maple: It's trust in the ability to say, look, I'm accepting what you are, what you're providing me. Yeah. And that's not through my blind trust of your ability to create code. It's through testing and assertions that are passing based on that,

[00:16:32] Simon Maple: but once you are able to trust, you're not reviewing so much anymore, or it's not in the usual workflow to dip into the code and start looking at that. Jason mentioned two parts: product, and architects. Now, we've batted around architect a little bit in the past, as well as the future of development.

[00:16:47] Simon Maple: I think , that resonates with you as well.

[00:16:49] Guy Podjarny: It does. Yeah. Absolutely. So I think, once you get past it, the hope is that the future of a software developer is not a code reviewer. I don't think we want that, and so, because we don't want that, I don't think it'll stay there.

[00:16:57] Guy Podjarny: And let's assume from an autonomy perspective, we will have gained enough trust to give it that title of autonomy. Then, okay, what does a developer do? And I really liked how Jason phrased that, quite aligned with how I think, which is: on one side, you might have some developers that lean product.

[00:17:12] Guy Podjarny: They lean user needs. They lean an understanding of what it is that the software should be able to provide to the user to address the problem. And so I think that's one path. It's probably a bit more well understood right now, and some developers love that, and some do not. And so if you're a developer that leans more into the technical aspects, I'm drawn into software development because of the problem solving aspects of it,

[00:17:37] Guy Podjarny: then you might go more down an architect route. And the architect route is interesting because architects actually don't touch code much. I don't know, like I'm at this point overhead and quite rusty, and so it's hard for me to comment on what I do today. But as I was more in architect roles, I understood code.

[00:17:54] Guy Podjarny: I cared about code, I could look at code, I could understand the implications of code. My job wasn't really about writing code. Also, and I think we discussed this, the more towards architecture you go, the more kind of naturally polyglot you become. And he leaned into that, and he talked about all sorts of esoteric languages and generating them. I think it's interesting, these two paths. If you were drawn into software development because of the creation element, because you want to solve problems and it's exciting to you that you can write some code and suddenly you can have a system that does X, you might move up the product manager route.

[00:18:26] Guy Podjarny: Yeah. Which certain developers do today. And if you're drawn into software development more because of the problem solving, then you might move up this architect route that is more about solving problems at a slightly higher resolution, with code being a solved problem, just as we do today with abstractions in languages: you send a request over the network, and you don't feel like you're solving a lesser problem because you don't need to actually write the bytes and the bits on the wire.

[00:18:54] Simon Maple: Yeah. And I think there are two other points of view here which are well worth mentioning. First of all, the testing point of view, and secondly, more of the role where we think about it from a larger spec space. The testing point of view is interesting because it was really focusing on the code no longer being the most important artifact that a developer would work on.

[00:19:14] Simon Maple: And this is really something that would help gather that trust of generated code. An interesting quote here from Bouke. He said developers are increasingly test writers, and one of the things he was leaning into is the fact that if developers are focused more on tests, and the tests are creating a specification as to what the application should look like, then the application can be generated. Number one, the tests are the most important artifact in that, because that's your source of truth.

[00:19:44] Simon Maple: And yeah, number two, you don't even need to look at the code, because if your tests are good and the tests pass, you don't care how it's implemented, almost. And in our session, in fact, I fell into the trap of, oh, can you show me the code? I'd love to see the implementation. And Bouke was like, no, you don't need to look at the code.

[00:19:57] Simon Maple: And it was an interesting conversation, and it's more kind of habit to say, I want to see the code, I'm curious as to how it's done. So that was one piece, an interesting piece. And the second piece was Itamar, when you were talking about the future. He was talking about it being the PRD and specification, and from that, the tests can be generated directly. That leans very much into the AI Native viewpoint of having the specification and things being pulled from that specification, a very different style of development from what we're used to today.

[00:20:26] Guy Podjarny: Yeah, absolutely. And I think there are different things that pull at this future role of a developer. One aspect of it is what the tools make available, right? What can they do, and therefore, what is the job that is left to be done? And so, if tools are able to write code, and they're able to write code to a level that you trust, then you have to start thinking about what's my job. It doesn't matter what I like or don't like doing; what is needed of me?

[00:20:55] Guy Podjarny: I'd like to think that writing tests, that validation, for most people is really no more fun than reviewing code. And so I think it is very functional, it's a way for us to drive trust. I think the move towards PRDs and specs and those terminologies, similar to the statement about architects, also touches on what could be the next craft.

[00:21:19] Guy Podjarny: Because I don't think you're going to get a lot of 18 year olds signing up to university to become test writers or code reviewers, but would they sign up to be, product requirements writers? So your problem solving is on the user base or would they sign up to become architects?

[00:21:33] Guy Podjarny: I think those are our real paths. I think we're combining this today: to be able to achieve value from the LLMs, you need to either allow them to write the code and then review it, because you don't trust them yet, and do that less and less,

[00:21:48] Guy Podjarny: or you need to provide tests that give you confidence. In both cases, what you're doing is you're verifying their work. And I think really we're gonna evolve into a place in which it's a more collaborative mode in which,you are trusting that they will do this piece, and therefore you are doing something that is more high level.

[00:22:05] Simon Maple: Yeah, another good Jason quote: I wish I was 25 years old again, the next 20 years would be totally different. Two questions, Guy. Question one: do you remember being 25 years old? And question two: it's actually really hard to be someone just coming into the space right now as a developer, whether through university or just being 25 and trying to make your way, because you can't see where you're going to be in 2, 3, 4, 5 years if AI does take over this space. Whereas before, it was at least a little bit clearer in terms of your trajectory, your path. Having the idea of the product and the architect side, it's lovely to be able to view that path.

[00:22:45] Simon Maple: What kind of advice would you give to people starting off now? Is it something they should be frightened about? Something they can get ahead in?

[00:22:52] Guy Podjarny: It's really tough, right? I think the factual part in Jason's quote there is: the next 20 years will be very different.

[00:23:00] Guy Podjarny: I think that's really hard to contest. If you start coding today, or if you're 25, maybe you're a few years into that coding, then your next 20 years are different than if you did that 20 years ago, or whatever, a hundred years ago, like myself. But whether you'd wish it, the first part, the "I wish I was", I think that's a bit debatable.

[00:23:20] Guy Podjarny: It's hard to really say what is the advice to build in. I think there's the life advice, which is: you invest in resilience, and be adaptable, and learn to learn, and be able to adjust. But I think also a lot of the core principles are similar to, maybe, guidance that you would give people around being too enamored with a single language.

[00:23:40] Guy Podjarny: And so to say, really, again, when you're a senior enough developer: I don't care. I initially learned to program in Pascal, back in the dark ages, and then I learned C and C++, and then Java, and then .NET, and then JavaScript. And it's okay. It's all programming, it's all development, and they're all a means to an end.

[00:23:57] Guy Podjarny: And so it's maybe just leaning into what is making you tick and think about code, not as an identity, but rather as tools of the craft.

[00:24:06] Simon Maple: Yeah, absolutely. I think that's really good advice. we talked about the code, not looking at the code and let's, I'll mention a couple of quotes and then we'll talk about, Jason.

[00:24:14] Guy Podjarny: Love that provocation, right? Why do you need to look at the code?

[00:24:17] Simon Maple: And it is against what we as developers want to do, right?

[00:24:22] Guy Podjarny: It's visceral.

[00:24:22] Simon Maple: As soon as people see a project: I want to see the code, I want to see the code. So James said, I'm wondering why that code actually needs to exist in a human readable form if the AI can actually get it right. If the AI has some form or way to express what I'm trying to get to, then why does the code need to be readable?

[00:24:41] Simon Maple: Bouke also said you wouldn't need to look at the code: if the tests are good enough, you have to trust the generated code from the AI. The tests are actually what become paramount. And Jason had a hard time ingesting that kind of viewpoint; he still thinks there are experts that will need to understand the code.

[00:24:59] Simon Maple: Your thoughts?

[00:24:59] Guy Podjarny: Yeah. And I think he even phrased it a little bit as, it's hard for me to foresee that you wouldn't need to know code at all. Again, there might be two interpretations here. One, which I think is right, is that we're having some separation anxiety. And I think it's absolutely right that if we get that trust, and that "if" is probably a word to lean into, if we get to trust that whatever higher level representation we have is trustworthy, that it would create something that actually writes the code behind the scenes, or does whatever it needs to do to provide us with the functionality that we requested,

[00:25:35] Guy Podjarny: Then why do we need to look at the code? At the same time, a lot of developers today perceive themselves as coders. A lot of the world of software development, of open source, is all about the source; they're about the code, and people take pride in their craft.

[00:25:49] Guy Podjarny: you I think the world of creation has seen this, for instance, with the move to digital, and it probably still is the case where a lot of people feel that, drawing with brushes and paints is not the same as drawing digitally. And some people feel the opposite. And some people feel that pain, you know, playing an instrument, that is, actually producing vocally, the sounds, is very different than is the true music creation versus anything digital, let alone EDM or electronic sort of sampling.

[00:26:15] Guy Podjarny: So I think it's a new mode of creation, and I think it'll take a while. I do want to point out that we still have mainframe developers out there. They're paid very well, because technology doesn't die, it sticks around and someone needs to maintain it. But it's hard for me to imagine how a cutting edge developer in five years' time, let alone ten, is really coding much, if anything at all.

[00:26:39] Simon Maple: Yeah. Very interesting. And of course, models will improve over time to match that reality. One of the things with Jason, obviously at Poolside, is that they're creating a specific gen AI model around coding, and there's a lot of talk around specific models versus generic models, which will win, and which are needed.

[00:26:58] Simon Maple: and, yeah, great quote which Jason also said was in a world with infinite resources, the general purpose model is key to AGI. But in reality we face constraints on energy, data, time. How do we balance these limits? So what are your thoughts in terms of whether, generic models will get to that stage where they're going to be good enough for coding versus specific models that are trained for that very purpose,

[00:27:23] Simon Maple: are going to be a league ahead of those generic models.

[00:27:26] Guy Podjarny: Yeah, that is the multi billion dollar question, based on the amount of venture investment right now in foundation models. I thought that perspective was very interesting and very valuable. And I don't know, maybe I'm too much of a reductionist here, but I like trying to find the primary pivot points in a bunch of these decisions.

[00:27:45] Guy Podjarny: And I think the key one here is around scaling. If you believe that we're going to be able to continue to feed these training processes more and more data, and continue to see, as we have, that if you give them large volumes of data and large volumes of compute, and if you do substantially more of that, then you will get substantially better results,

[00:28:09] Guy Podjarny: then I think that diminishes the, the value of dedicated models like Poolside, like robotics models, to an extent, even, Itamar talked about their sort of test specific models. Although I'm not sure how those are built. We didn't talk as much about that. And if you're going generic. On the other hand, if you think, and there's some rumors around that, that we're actually at a bit of a local maxima in terms of our ability to train, and that the GPT 5 and equivalent models will actually have a hard time and they will become more like 4.1, 4.2, and it'll take us a bunch of years, two, three, four years to figure out how to make some advancement. Maybe power is indeed the, the obstacle, maybe it's something else. And so we resort to algorithmic changes, things like that. Then it actually is easy to believe that if you took a code only model, and you basically, Yeah, picked amidst the huge volumes of data, the subset of data that is relevant to coding.

[00:29:03] Guy Podjarny: However, you need to be good at that. then maybe you produce something that is a lot better because it's purpose built and doing the same for different domains. So I think it's really interesting. And I guess as a consumer, I'm happy. That there's enough venture money, you know, to try these different paths.

[00:29:19] Guy Podjarny: There are environmental aspects to it, which are not negligible, which is not awesome, as a, as an investor. Those are very hard decisions to make, and I guess they're still being made because if they do prove correct, there are very big amounts to be won. So, I think as developers, as tool builders on top of LLMs, I think it's yet another reason to believe that the LLMs will continue to get better and better at generating code because we have very capable, very well funded companies trying both paths and others.

[00:29:51] Simon Maple: Yeah. Super interesting. It'll be interesting to see how that changes over the years. So that almost wraps us up, but Guy, we're into September now. What's that, fall? That's fall now, isn't it? First week of fall. So August really was pretty good for AI news, with a number of startups making various announcements.

[00:30:10] Simon Maple: Of those announcements, anything catch your eye?

[00:30:12] Guy Podjarny: Yeah, we had a bit of a flurry of funding announcements in the AI dev space specifically right now. We saw Cursor, which is an AI focused IDE and more, announce a $60 million round. We saw Codeium with an E, which is different than Codium AI, who we had on.

[00:30:27] Guy Podjarny: That's a very kind of unfortunate reality, I think, for both, both Codium companies. It's not like you name your company Codium and you say, Oh, I think many people will do that. anyways, it is what it is. So Codeium with an E, has raised a $150 million round at a $1.25 billion post, and maybe a bit more,interesting beyond the sort of the VC,hype maybe, or like frenzy around this domain, not to say that it's overhyped.

[00:30:49] Guy Podjarny: It's just, but there's definitely a lot of hype is the, is that they publish that they have 700,000 active users. I don't know what active means here. Is this like from the beginning or not, even if we assume 700,000 users that have used the product at least once, that's substantial, and over a thousand customers.

[00:31:03] Guy Podjarny: And again, I don't know if it's an individual developer, and I guess the last one that, it may be sorted by, by size here is Magic dev, which is more of a model company. They're somewhat secretive, so I don't know everything around it, but they raised, $320 million.and they claim that they effectively can achieve a hundred million token context window for code related things through some smart summarization or something of that nature, and, first of all, reinforces the point that, there are many different smart companies that are trying to get code generation to work better in many different ways.Definitely reinforces the point on, on dollars flowing into this world to try this out.

[00:31:43] Guy Podjarny: Which is, which is cool. and, I guess it shows that it's not a Copilot only world, which, which is exciting because I think, I'm at fault of that as well, which is, it's easier. It's hey, AI dev tool, which one would you name? And I think there's like a strong bias to, towards Copilot.

[00:31:59] Guy Podjarny: But I'm happy to see a bunch of these companies get some pretty substantial traction in adoption. And I think that would lead to a better dev tooling ecosystem.

[00:32:08] Simon Maple: I had a look at some of the Copilot stats from just before this session.

[00:32:11] Simon Maple: And actually there's not many numbers apart from the bigger announcement a while back, which people have maybe heard from Satya. 40 percent of GitHub's growth this year.

[00:32:20] Guy Podjarny: Came from Copilot and the likes.

[00:32:22] Simon Maple: Yeah, absolutely. And the open source developer platform has now reached a $2 billion run rate.

[00:32:27] Simon Maple: Copilot has over 77,000 organizations adopting it, a 180 percent increase from the previous year. So yeah, it's a slightly different order, but they're certainly not alone in this space. There are comparable numbers and growth from other companies, which is great to see.

[00:32:40] Guy Podjarny: I think Copilot is doing a great job and GitHub and Microsoft commercially are doing a great job kind of bundling in and, making it available to many people. And I think they're helping the ecosystem in that they're driving, familiarity and trust and maybe getting people to find out what they do and what they don't like about Copilot, which opens the door to now for a new vendor to come and say hey here's a pain.

[00:33:04] Guy Podjarny: It's very hard when you build tools, when you build a startup, a disruptive entity, you're disrupting some pain, you need to go to someone and say, Hey, I know you're doing this thing and you really want to do it, but this aspect of it really annoys you and you're building those out. And when it's a totally brand new practices, yeah, I know you've been using these AI coding assistants, for all this time.

[00:33:25] Guy Podjarny: And aren't you annoyed that when really the, all this time is I've been using them for five minutes, it's hard to disrupt. And so to an extent they are establishing them, establishing themselves as an incumbent, as the starting point, which on one hand gives them some ability to reach many users.

[00:33:42] Guy Podjarny: On the other hand, identifies, there's definitely a lot of magnifying glasses on all the things that they're not doing correctly. And, a whole bunch ofstartups trying to,address those in more or less systematic ways.

[00:33:53] Simon Maple: Yeah. We'll see what happens in the next few months as well.

[00:33:56] Simon Maple: We'll, we'll talk about anything that we find interesting in the news as well.

[00:33:58] Guy Podjarny: And I would say, if I were a betting man, but also because I know of a couple of them, I think September and October will have their own share of exciting AI dev tool announcements.

[00:34:08] Simon Maple: Excellent. We've got a few more great sessions coming up, including one for next week that I just listened to, actually, so I'm looking forward to that one: Tamar, the chief product officer of Glean. A very interesting session, which we'll be releasing next week, and many more for the rest of the month. So stay tuned for other sessions coming up later this month.

[00:34:27] Simon Maple: In the meantime, thanks all for listening and tune into the next session.

[00:34:31] Guy Podjarny: See you there.

Podcast theme music by Transistor.fm.