Crossover episode with The Infra Pod - AI Native Development with Guy Podjarny
In this episode of the AI Native Dev podcast, Simon Maple introduces a special crossover event with the Infra Pod, featuring Guy Podjarny, CEO of Tessl and former founder of Snyk.
Episode Description
In this episode of the AI Native Dev podcast, Simon Maple hosts a special crossover event with the Infra Pod, featuring Guy Podjarny, CEO of Tessl and former founder of Snyk. Guy shares his journey from Snyk to Tessl and delves into the revolutionary concept of AI Native Development. Joined by Tim Chen and Ian Livingstone, they discuss the Tessl Vision and how AI is poised to reshape the software development landscape. Guy's insights on the separation of specification from implementation and the evolving role of AI in software creation provide a compelling vision of the future. With a focus on community engagement and upcoming events like AI Native DevCon, this episode is a must-listen for anyone interested in the future of AI and software development.
Chapters
- [00:00:00] Introduction to AI Native Dev - Host Simon Maple introduces the crossover episode.
- [00:01:00] Guy Podjarny's Journey - Guy shares his entrepreneurial journey and transition to Tessl.
- [00:09:00] Understanding AI Native Development - Discussion on the AI Native Paradigm with a focus on change and trust dimensions.
- [00:16:00] The Future Vision - Guy's vision for a spec-centric future in software development.
- [00:25:00] Challenges and Opportunities - Exploring the role of specifications and building trust.
- [00:32:00] Tessl's Role and Community Engagement - Tessl's mission to shape AI Native Development.
- [00:40:00] Conclusion and Takeaways - Summarizing the key insights from Guy Podjarny.
- [00:42:00] Spicy Future and Hot Takes - Guy shares his bold predictions for the future of AI and software development.
- [00:50:00] Tessl and the Future of Development - Guy discusses Tessl's platform and community initiatives.
Full Script
Simon Maple: [00:00:00] You're listening to the AI Native Dev, brought to you by Tessl.
Hey there, Simon Maple here. Today on the AI Native Dev, we welcome a crossover episode from our friends at the Infra Pod, a podcast all about software infrastructure by Timothy Chen and Ian Livingstone. Now, Guy Podjarny was a guest on the Infra Pod recently and he discussed the Tessl Vision and AI Native development generally.
And also how, through the abstraction of natural language, the future of software development is far more accessible to many, many more developers than ever before. So, I hope you like the episode!
Tim Chen: Welcome back to the pod. This is Tim from EssenceVC, and let's go Ian.
Ian Livingstone: This is Ian Livingstone, lover of DevTools and infrastructure, and I'm [00:01:00] super excited today. We're joined by a good friend of mine, Guy Podjarny, who is currently on his next company journey, CEO of Tessl, former founder of Snyk.
Guy, could you give us a little introduction to yourself?
Guy Podjarny: Oh, thanks Tim and Ian for having me on here. I'm Guy Podjarny, or Guypo. Someone said I'm pulling a Madonna, with the one name. I did indeed found Snyk. I founded Blaze before that and sold it to Akamai, was CTO there for a bunch of years before founding Snyk.
Now I'm the founder and CEO of Tessl, trying to reimagine software development. I'm sure we're going to talk about that a fair bit more here. And I couldn't help it, I'm also an active angel investor. I've been at it a little over a decade, with about a hundred angel investments, and I love the learnings and excitement I get from all these other founders.
And yeah, that's me. Also a lover of DevTools, a nerd and geek about many topics, including Dev and AI.
Ian Livingstone: You've been at it since the very beginning with Blaze, right? Like 2011, 2010, and before that with Watchfire, I think, as well. So you've [00:02:00] been here from the get go, and you've seen a lot of the different phases of evolution in the industry.
So I'm super excited. My main question, though, is: Guypo, what is it that you saw that made you say, you know what, I'm going to go and start another company? I think this is a question every multi-time founder gets; you're on company three now. So what is it, especially after the success of Snyk, that made you say, you know what, this is the moment for me to get back at it?
I want to get in the trenches and start from zero again.
Guy Podjarny: It wasn't an easy decision. There was a sort of push and pull to it. On the personal front, I spent the last year and a bit at Snyk in a slightly more part-time function, trying to figure out what it is that I do. I started Snyk and I was CEO, and about five years in I brought in Peter McKay, who's done a great job, and I took on the product strategy role there.
And then after a bunch of excitements and a period of time, I really felt like I needed to reassess what I was doing in the company and [00:03:00] what my role was. So on one hand, I had my personal journey. We have a family charity; do I want to do that full time? I'm doing angel investment.
Do I want to do that full time? Do I want the comfy life of some part-time job in a company I believe in, promoting it? And during that time I came to realize that what I want to do is found another company. And I was resisting that conclusion. It doesn't make sense from a practicality perspective.
We're going to donate the funds I still have in Snyk as we do it, and I've got all the access and such that I need. So there's really no practicality to it. It took a while to realize that it's okay to do it, because that's what I want to do.
And that's where my passion was; I came to this realization that satisfaction comes out of struggle. If you want to feel that satisfaction, to find that meaning, then you have to put yourself into a thing you might fail at. And while I learn a lot on these journeys with angel investing and such, it's not the same when you don't own it, when you're not in the trenches needing to do it. Alongside that, I was focusing a lot on [00:04:00] Snyk's AI strategy. Snyk secures software development, and so if you want to figure out what Snyk should do about AI, you have to have some hypothesis about where software development is going, because Snyk's role very much relates to that.
And the more I dug into that, and the more that picture crystallized in my mind, the more I felt: I want to build that. That's what I want to do. It also felt like a domain that matched my skills and my passion, but was not competitive with Snyk. And the combination of those two made me make the leap.
And I will say, my wife figured out that's what I was going to do probably a good couple of months before I did. I wanted to talk to her about it a little bit more, and she was, with a smile, saying: yeah, go ahead, talk it through, I can see where it's leading. And here we are.
Ian Livingstone: That's incredible. I've gone through some of that myself as well, and I always come back to: builders just want to build, and how do you enable that? Was there some insight, though, that you saw at the time you were thinking of starting the company and making this transition, while working on the Snyk AI strategy?
[00:05:00] What did you see in what you were learning that said: okay, this is different, this is an opportunity to do a generational shift? As a now third-time founder, I'm sure you've learned a lot about what to look for in a big potential opportunity you can grab onto. What built up over that time that led you to say: okay, I see the generational shift, or I see something attached to it that gives me an opportunity to do something rewarding? It's going to be really difficult, but the upside is there, whether personal or financial at this point.
There has to be something, right? So I'm really curious to understand: was there some unique insight or some specific observation that got you to, okay, this is where I'm going to make the bet?
Guy Podjarny: First of all, what drives me is impact. Is it money? Is it product? Really, I think what motivates me is making a dent in the universe and doing something that I think is significant.
I felt like with Snyk we've done that. There's a lot still to build in the business, a lot still to do, but a core part of that mission, getting security more embedded into development and making that a norm of [00:06:00] good practice in development, I think a lot of that is achieved. And so the driver is impact.
And as you mentioned, having founded multiple companies, I learned a ton from that process, but I also learned a ton from angel investments and from accompanying those journeys. Someone once told me that as a founder you learn in sequence, and as an investor you learn in parallel.
I think it depends on your level of involvement with the companies and all that. And I think the combo, at least for me, proved very useful in building some pattern recognition and seeing things that worked and didn't work. And I guess what I've seen, also through the angel investments, because I was investing in AI dev tools and such:
One is that people are very short-term minded. I think AI is full of that. There's just so much opportunity to use AI to improve things that people are drawn to the immediate. Two is that startups are just not differentiated. You look around and you talk to two companies, and it's like, they're really doing the same thing.
And they have all these false perspectives on how they will differentiate. They're talking about, hey, we'll have a [00:07:00] data moat. Really? You're going against an incumbent. You're going to be around for a year doing this. Maybe you've got seven customers, maybe 70 customers. And yet somehow you will have a data moat against an incumbent that has 70,000 of those.
So: poor differentiation strategy for where you would build, even if you are temporarily differentiated, and very short-term orientation. I felt like everybody was just thinking small, and the ones that were thinking big were thinking big from a tech perspective, not a product perspective.
You look at all the top companies, the big names in the AI dev space, and the ones that are really the go-big players are very tech-first rather than product-first companies: Magic.dev, Cognition, Poolside. And not to diminish them; I think these are amazing companies and amazing individuals building them.
But my sense is that for the most part, they are building amazing technology and then building the product out of that. And I'm a product guy. I think about what product is needed. How do I anticipate [00:08:00] users changing? How do I anticipate ecosystems and markets changing? From there, I build technology in service of the product.
And so I felt these gaps were there: the long-term orientation, thinking big, thinking product first, they were all lacking, and then strong differentiation. And I guess I had an answer to all of those, right? The picture that formed in my mind felt like I had a sense of what the long-term destination is, and how to build something towards it that is good from a user perspective and that would be sustainable.
Not all of it is stuff that I can share here on the podcast; some of those are still strategies that are in-house. But without that, I think founding an AI company right now is actually more risky than founding a company in the generic sense, because everything is changing, the landscape is shifting from under you.
Everything is overfunded. And so it's actually a higher risk than typical.
Tim Chen: So you've already alluded to this topic, and I'm very interested to really go down this rabbit hole as far as we can, which is this AI [00:09:00] native developer theme, or what we might even call a transformation, right?
Because I'm reading the two big blog posts you put out: one is the AI Native developer piece, and there's also the cloud native comparison. So maybe we'll start with the AI native developer transformation. You put up a two-by-two matrix, right? Comparing the existing tools and existing AI products and tools.
And Tessl is on the far right, in this new category where nobody else is, right? Just you. And I think it's intriguing. Everybody who reads this will be super intrigued, including me, and I'm sure Ian as well. And we really don't know what that means. Of course, as you just alluded to, you have strategies in your head.
So maybe you can help normal bystander developers like us. Can you give us a little bit of a mental picture? Maybe quickly summarize [00:10:00] what this 2x2 is, because I think a lot of people haven't read that post, right? How do you think about the world?
And give us maybe a hint of: okay, if the current tools aren't the true paradigm shift we're looking for, what could it be, right? Maybe we can start a little bit from that, but if we talk about this magical 2x2, I think we'll be off to a very good start. Yeah.
Guy Podjarny: So I will say, this 2x2 I published under the title of charting your AI native journey. AI native, and the narrative around it, is something that I've been brewing on and evolving in my mind over the last almost two years of exploring investments and thinking about ideas, and then eventually at Tessl.
There are many AI solutions out there, and indeed some of them are short term, some long term, and they oscillate between things that feel like, wouldn't this just be the next feature of OpenAI or Anthropic or whatever, and things that feel like science fiction, like, no, this will never work. And so the attempt here [00:11:00] with the 2x2 is to give us a little bit of structure to place companies, or at least companies' current offerings, within it. The 2x2 has two dimensions. One is a dimension of change: how much does this new tool require me to change the way I work in order to use it? The second dimension is one of trust, and it goes from attended to autonomous.
How much do I need to trust it to get it right for it to be useful? So let's maybe look at some examples in the different quadrants this creates. If you draw it as a graph, on the bottom left you'll see the low-change, low-trust environments. Those are tools that are very easy to slap onto the dev space.
That might be code completion. You're already typing, literally already typing. In fact, you're already familiar with auto-completion because of IntelliSense and such, so it's just a better autocomplete. You eyeball it, you say if it's correct or not, you just continue. There's pros and cons to that, but still: very easy, very low trust.
In other domains, it might be something that points out a potential problem on an X-ray, or that writes a [00:12:00] quick SDR-style cold outreach email. All of these are small units, things you're already doing, so you're just prompted. So this is the bottom-left quadrant: it's low friction.
Why wouldn't you use this? Well, it needs to be useful: if it doesn't provide business value, if it doesn't work well enough, it's still worthless. But if it does work, why wouldn't you use it? It just makes you better. That's the high-adoption category, as I see it. And it's massively competitive.
That's really where you'd ask: how is one coding assistant better than another? You're going to have a million of them. How is one tool that creates tests for me, or that captures documentation, or all these things, different? It's oftentimes hard, because they don't really invent any new methodology.
They're just doing legwork, and they're doing it in small units so you can review them. So that's the bottom left. If you go up the trust route, that's really an IP evolution, going from attended to autonomous. Maybe some examples of what shows up [00:13:00] over there. For example, Intercom has Fin, their support chatbot, and Fin tries to resolve tickets autonomously when someone approaches a company. If a human had to review every response that Fin provides, to confirm that it doesn't hallucinate anything, the product would be useless.
So you have to trust that it gets it right for it to be useful. An even more extreme version of that is robotaxis: there's actually very little change, nothing changed. You're still interacting as you did before, with someone in a chat support conversation; here with robotaxis, you might open your Uber app, order a taxi, get in the car, and get dropped off elsewhere.
But think of the high trust that you need to be able to do this. So that's up the trust route. In the development world, that's a lot about this autonomous development world, and I think it hasn't been cracked. That's a lot of what Magic.dev or the others, at least from their outside statements, are trying to crack.
I will just resolve this for you. And it's worth highlighting that there's an element of the magnitude [00:14:00] of the task involved in that trust element, right? If it's writing the next line of code, I can review every single line. But if it's writing full applications, I can't review them every time.
So for it to be useful to me, I need to trust that it works. If not, then it doesn't serve the use case it's trying to get me to believe it does. The same goes for a lot of workflow creation. So I think a lot of these autonomous AI engineers, AI developers (AI engineer is now a loaded, ambiguous term, right?), call them agent engineers:
a lot of those are really up the trust axis, because they're trying to do things more autonomously. And I'll say again that this is an IP game, and so if you're a company and you're trying to build over there, it's because you believe there's an IP moat that is really hard to replicate.
And I think the conviction that you can have a true IP moat in AI implementations is different now versus two years ago. Two years ago, you might have thought that people could really be ahead of the game. And now you see that every time [00:15:00] someone is ahead, they're just three months ahead or six months ahead. But you still hear from Cursor, for instance, which is a different type of company,
talking about how all they need is to be ahead of the next frontier model by a few months. And that can be significant. So their strategy is to try and remain ahead, which I find to be a challenging strategy. You can still aspire to that, but it's hard for that to be your whole strategy.
So I don't know if I kept those comments brief, but I guess that's half of the quadrants.
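The change/trust 2x2 described above can be sketched as a small model. Everything here is illustrative: the tool names, scores, and the 0.5 quadrant thresholds are our own assumptions for the example, not values from the episode.

```python
# A tiny model of the change/trust 2x2 Guy describes.
# Scores and placements are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Tool:
    name: str
    change: float  # 0 = works how you already work, 1 = entirely new workflow
    trust: float   # 0 = attended (human reviews everything), 1 = autonomous

    def quadrant(self) -> str:
        # 0.5 is an arbitrary threshold chosen for this sketch.
        change = "high-change" if self.change >= 0.5 else "low-change"
        trust = "autonomous" if self.trust >= 0.5 else "attended"
        return f"{change}/{trust}"


# Example placements echoing the conversation (scores are guesses).
tools = [
    Tool("code completion", change=0.1, trust=0.1),     # bottom left: easy adoption
    Tool("support chatbot", change=0.2, trust=0.9),     # up the trust axis
    Tool("chat-based codegen", change=0.8, trust=0.2),  # along the change axis
    Tool("AI native development", change=0.9, trust=0.9),  # top-right destination
]

for t in tools:
    print(f"{t.name}: {t.quadrant()}")
```

The point of the model is the destination: the top right (high-change/autonomous) is where every company eventually needs a thesis, even if adoption journeys start in the other quadrants.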
Tim Chen: Yeah, and I think it's a very helpful overview, because a lot of people probably intuitively know about it, but what's helpful, almost like a mental model, is that it really helps you think about the existing world, and of course it helps you think about the new world you're proposing.
And yeah, I know we're not able to get into the details of Tessl, obviously, about the product, because things are pretty early. But I'm very interested, because this whole new IP transformation, being able to generate high trust, almost requires [00:16:00] this trust-earning journey that the product needs to take people on to get to the end.
Guy Podjarny: Maybe hold that thought a sec. Let me talk about the change route, and then come back precisely to that, to the journey. So on the trust route, you go into that car and you need that trust, and there's a trust-building exercise. Maybe you'll take shorter rides because it's a smaller risk; maybe you would only do it after a friend of yours took one, right?
You might do all these things, but it's still the trust route. I think the bigger, harder path is the one of change. Change is about changing the way you work, and probably the best analogy there is text-to-video or text-to-image generation. I'm fortunate to be an investor in Synthesia.
Synthesia is a text-to-video solution that focuses on training videos and things like that, one of the leaders in the space. And the way you create a video in Synthesia is entirely different from the way you would create a video pre-AI. There's really nothing in common between how you would set up a studio, get actors, and shoot the video, and how you would write that text.
And because of that, it is a dramatic [00:17:00] change. Who would be able to operate it successfully? What are the skills? What are the workflows in the business that you would build? And there's effectively zero chance that the winners in AI-generated video will be the same companies that are producing videos in person today.
But the difficulty over here is: when would you start using it, and how would you use it? Synthesia have actually figured out a good specific niche, or slice, with these training videos that were already very methodical, across different languages and such. So maybe that's a demonstration of a journey.
But when you go to Runway, or to the more loose-form generation tools, how do you generate a feature film in that fashion? There are new problems that you need to deal with that are brand new, that you don't really know how to overcome, around things like character inconsistency.
Suddenly it's a different person. You don't get that problem when you're shooting a video, where suddenly a doppelganger of an actor shows up instead of the real actor, someone similar but not quite the same. So these are new [00:18:00] problems, and new workflows. But if you succeed in building a new way that is truly better, and text-to-video is a good example of one with massive advantages if successful, it's dramatically cheaper and faster, then you can unlock tremendous value and you can really win that space. So I think that's really the disruption corner. That's the place in which you can really rethink an industry. But the slowdown factor isn't just trust, it's change, and people are slow to change. So I think you can develop IP faster than you can get people to change. And if I bring it back to development, it's actually quite hard to think about:
What does this mean in development? I think a little bit of it is this notion of chat-based creation. Everybody immediately gets code completion, but you'll find conflicting, and sometimes hostile, opinions about generation through chat, because that's a new way of doing it. It's a different methodology.
How do I do it? Do I talk to my document? How do I work with that? So it takes longer to adopt. Maybe it's better, maybe it's not, but it takes longer [00:19:00] to adopt, and it's a difference. I think the v0-style bootstrap of an application is, again, a different way of doing it.
So it gets you somewhere, but is it the right somewhere? Is it the right starting point? What do designers think about it? Some of them have vicious opinions about these tools, and some of them are in love with them. So I think that change route is interesting. And then let me just quickly talk about the top right, which is really just a combination of the two.
And to an extent that's what you referred to when you asked how you build the trust. That's the hardest quadrant, the top right. It is a new way of doing things, and it assumes trust. And so really, at the moment, if you just build for the top right, nobody would use it. You require people to change and to trust at the same time.
It's just not going to happen. But eventually, every company should understand what its top-right positioning is. If you're going to build a company, or acquire a company, in any domain, you have to have a thesis about what is in the top right, because that is where we will eventually land: when the technology is trusted,
[00:20:00] and whatever useful changes are there to be had have been adopted. So we'll get back to Tessl and AI native development, but what we're talking about in the AI native development model is: let's start by imagining the top right. Then there will be many journeys to get there, but you have to first agree, directionally, on what we think that top right is.
And you have to have that be a lively conversation, with a lot of people engaged and trying things in it and all that, for us to be able to converge on that destination. And if all you're building for is the bottom left, you might get some immediate dollars, but your company is going to become invalid or irrelevant.
It wasn't a very short answer.
Ian Livingstone: But it was a great answer. The way I think about this is the self-driving car model, right? In many ways, the assisted-driving cruise control is a good example of a copilot versus a fully self-driving car. I have this thing on my Subaru where, if I have my hands on the wheel, the car will drive itself, but I'm still there. And then there's the full self-driving version, like Tesla's, where, oh geez, you can just do whatever you want. And then in future you have, obviously, the robotaxis; if anyone's been [00:21:00] in a Waymo, that is incredible.
And if I were to think about it, the AI native future in development would be the Waymo version of whatever software development becomes. One of the things I think about is that today, the example is the copilot, the auto-completion or the chat, basically this change in how the developer works inside the IDE.
And so we've seen 50 percent speed-ups. What do you think an example of an AI native future for a software developer looks like? Is there a mental model you have, or a gap in experience you've been thinking about, that demonstrates the difference between driving a car with my eyes always on the odometer and the car in front of me, versus, oh, now the taxi is driving me around?
Have you thought about what that could look like?
Guy Podjarny: First of all, it's interesting that in self-driving cars there are actually two big companies taking different journeys to it. Waymo is taking the IP route; they have no copilot. You're jumping straight into self-driving.
Their only graduation is the distances and the cities they operate in. Tesla, meanwhile, is taking the assisted-driving route and trying to [00:22:00] evolve their features that way. Time will tell what's better, and both might be viable. But it's interesting that these two giants are taking very different routes to it.
I think the software analogy to self-driving cars is a little bit tough, because self-driving cars have a physical limitation: you can't really make a self-driving car 10 times faster, right? You can make it much better in various ways, and over time you can create a reality that is a hundred times better:
if there are no parking lots anymore and all that space has been reclaimed by cities, there are fewer accidents, and car ownership becomes a non-issue because everybody's just reusing these cars. So you can imagine a better future, but the driving itself, the ride, only gets so much shorter. I think for software development,
there's actually a better opportunity, because you can both improve the journey itself, getting from point A to point B, by 100x, and also improve the ecosystem and the totality of it by a similar factor. Maybe I can [00:23:00] talk a little bit about what I think AI native development is. Does that make sense?
Ian Livingstone: That'd be amazing. Yeah.
Guy Podjarny: Again, I think this is a group definition, right? In our community, it's a new paradigm, and we need to work it out together as a community. And just to point out, as a bit of a plug: we do have a conference that we're funding and operating, with a lot of bright opinions in it, running, I think, the week that this airs, on November 21st, called AI Native DevCon, and we're going to have a lot of bright speakers there talking about all sorts of aspects of AI native development.
But with that said, let me talk a little bit about our definition. Software development today is very code-centric. You get some requirements, you write some code, and very quickly you make a hundred decisions in the code that never make it anywhere else: the requirements might be too low resolution, or nobody bothered to write the decisions down. And the code intertwines what needs to be done with how to do it.
It's literally in the same lines: you read the code and you need to parse out what it is doing and how it is doing it, and separate the two; and the [00:24:00] LLMs, to try to learn the code, need to do the same. And I believe that we will move to a world that is spec-centric, in which we can separate the two: a user can specify what they want, which is not a trivial problem, and AI will handle the implementation. In that world, where AI does the coding and implementation,
there are many things that suddenly become dramatically better. For instance, AI native software will be autonomously maintained. Maintenance, by definition, is changing the software without changing the spec: keep it behaving the same way, but change the operating system, change the dependency, fix that vulnerability, without changing the spec.
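The definition Guy gives here, maintenance as changing the software without changing the spec, can be sketched in code. This is a hypothetical illustration, not Tessl's actual mechanism: the spec is modeled as a set of behavioral checks, and any implementation that passes them is interchangeable; the `slugify` functions are invented for the example.

```python
# Sketch: a spec as behavioral checks, decoupled from any implementation.
# Hypothetical example; function names are invented for illustration.

def spec_checks():
    """The spec: what the software must do, independent of how it does it."""
    return [
        ("lowercases input", lambda f: f("Hello") == "hello"),
        ("joins words with hyphens", lambda f: f("Hello World") == "hello-world"),
    ]


# Original implementation.
def slugify_v1(text: str) -> str:
    return "-".join(text.lower().split())


# "Maintained" implementation: internals rewritten (say, to drop a
# dependency or fix a vulnerability) without touching the spec.
def slugify_v2(text: str) -> str:
    out = []
    for ch in text.lower():
        out.append("-" if ch == " " else ch)
    return "".join(out)


def verify(impl) -> bool:
    """Verification mechanism: does this implementation satisfy the spec?"""
    return all(check(impl) for _, check in spec_checks())


# Both versions satisfy the same unchanged spec, so swapping them
# is safe maintenance; no human needs to re-review the internals.
print(verify(slugify_v1), verify(slugify_v2))
```

In this framing, the spec plus its verification checks are the durable artifact; implementations become regenerable, which is the property that makes autonomous maintenance plausible.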
And so if you start from anchoring on the spec, and have a good verification mechanism to know that the software is working correctly, you don't need to maintain it yourself anymore. And maintenance is the productivity killer, so that alone is massively valuable. It is also dramatically more accessible,
because there are just that many more people who can specify what they want, who can be the judge of that, and who can even think architecturally sometimes, [00:25:00] but are not able to write code. It would also be very adaptable to your environment. You'd be able to create software that on Ian's infrastructure is optimized for that, and on Tim's infrastructure is optimized for their environment; or that learns from the data and operations of how your specific users are using the system, and from your specific business needs. Are you flush with cash and really looking to provide the best experience, so you want to optimize for latency and such? Or are you more worried about costs? Or maybe at different times of day you want different things, right?
And for different users. An extreme level of adaptability, personalization, and automation can be there. It can have a deep relationship with data. I read some stats, which I'm not sure are accurate but which align with my general thinking, that software can generally be 10,000 times faster if you really fully squeezed and optimized every aspect of it.
But that's expensive, because human time is expensive; maybe [00:26:00] with AI it becomes viable. So it's just better on so many fronts. And I guess our conviction is that building software like that means being able to specify what you want; being able to provide, and then evolve, the verification mechanisms to trust that the software works the way you want; and then thinking about how that software works over time: how do you write the specification, how do you write the verifications, how do you edit those in future versions of it, how do you package those?
How do you version stuff like this? This is language agnostic: you can have a JavaScript version and a Python version of it. Are they the same version? How do I think about them? What happens if they have some language-specific ecosystem change on it, is that a new version for both? How do you observe a system like that? How do you know what has occurred? All of those require a different software factory. They require a different development paradigm and methodology, and they require a different development platform. What we're perceiving here, and we [00:27:00] can go into each of these paths deeper if you'd like, is that on one hand we think this is a new development paradigm, and we think this is software development. A lot of the answers, we will not be the ones thinking about them, and they will also not have one answer, both because of the complexity of software and the world. It could be that the way you would verify a mobile game is just very different to the way you would verify an e-commerce site, and to the way that you would verify an internal in-house application, or any of these others. But also because it's not going to be a point in time, right? It will change: maybe in some magical world we've figured out all the ways of doing it now, but over time there are different organizational contexts, new technology comes along, things change, and there are opinions and multiple ways to do it.
And so we think this AI native development is a methodology, and, like in DevOps, like in CI/CD, there are some themes that are recurring. There are some practices that span ecosystems, but there are many tools, and they plug [00:28:00] together and they work together. And so we think there's an importance to a dev movement around it, which is why we're running the conference, which is why we're looking to build and help foster a community around what AI native development is, what that top right for development is.
And on the other side, on the company side, on the product side, we're trying to think: what does a development platform for that look like? How do you write those specs? How do you advance them? And how do you plug in the tools along the way? Clearly, to build a product that's usable, it can't be dependent on something else.
So it's useful in its own right. But how do we think about that as something that facilitates the participation of others versus excluding them? Yeah. So that's my view on AI native development, and there's a lot more; I can drill into any of these things, the roles and the deeper topics.
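To make the spec-anchored idea concrete, here is a minimal, entirely hypothetical sketch in Python of the separation Guy describes: the verification belongs to the spec, not to any one implementation, so two different implementations (or a regenerated one) can be checked against the same behavioral contract. The toy "adder" spec and all function names are invented for illustration; none of this reflects Tessl's actual format.

```python
# Hypothetical sketch: a spec expressed as behaviors, verified against any
# implementation. The checks care only about observable behavior, not code.

def spec_checks(add):
    """Verification for a toy 'adder' spec: pass any callable that claims
    to implement it, regardless of how it was written or generated."""
    assert add(2, 3) == 5          # basic correctness
    assert add(0, 7) == 7          # identity element
    assert add(4, -4) == 0         # inverse
    assert add(1, 2) == add(2, 1)  # commutativity

# Two interchangeable implementations satisfying the same spec:
def add_v1(a, b):
    return a + b

def add_v2(a, b):
    # A different implementation, same observable behavior.
    return sum([a, b])

for impl in (add_v1, add_v2):
    spec_checks(impl)  # both pass: the spec, not the code, is the anchor
print("all implementations satisfy the spec")
```

In this framing, "maintenance" means regenerating an implementation and re-running the spec's verification, rather than hand-editing the old code.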
Tim Chen: Yeah, I'm sure we can have a 24 hour podcast because it's actually really exciting.
The more you mention it, it gets a little bit clearer what you're actually trying to achieve. And you think about the future, like you mentioned about Waymo: once you're able to actually [00:29:00] fully trust that this car can actually get you from point A to point B, your whole life changes, potentially, right?
The whole set of rules and everything around it changes a lot. I think, definitely, it's really exciting to think about the future developments. If we can fully trust a particular spec that is actually able to encapsulate what we really want to achieve, then the implementation details may not matter as much.
But of course, the biggest challenge is the spec. It's the creation, maintenance, and the sort of iterations and the team dynamics around the spec. And I'm sure there are so many details, and we probably won't be able to get to most of them. But you already mentioned there is this specification of what it does and a specification of how you verify it actually does the correct things, and there are so many things in between. Can we talk about, I'm very curious about, the spec aspects. What are the major challenges to make sure a spec can do what it does? Because right now when I think about it, we've gone through test-driven [00:30:00] development, right? All these small little paradigm shifts of trying to make sure we're getting the right thing delivered.
But none of these have enough coverage, right? They cannot keep up with the changes and all the complexity behind the systems. So it's really hard to imagine there's a spec that can actually take care of everything. So is there a particular mindset or pattern, or even challenges we had to overcome, when it comes to actually figuring out how to get the spec correct?
Because I think that could be something to highlight, right? Because this is not an easy thing at all.
Like, here are the actually really hard things we had to figure out along the way.
Guy Podjarny: Yeah, I think, yeah, I would add, so I think everything you say is correct. It's hard to imagine this type of spec and I'll share my thoughts in a sec.
I would also add, how do you make it fun to write such a spec? Because if it's an agonizing process, it might be functional. But if it's not fun, people are not going to do it. And so not only does it need to be a powerful spec, it also needs to be one that is fun to create, or it is a process that makes it fun to create it.
There are actually two reasons why I think we'll move to spec-centric [00:31:00] development. The first one is probably the most obvious, which is that machines can write code now. But the second, which is as important, if not more, is that machines can fill out the spec. If I tell an LLM, create a tic-tac-toe game, that's a spec.
It's a pretty terrible spec, but it's a spec. And the reason it's a spec is because the LLM can fill in the gaps. It can decide whether it's a web game or a mobile game. It can decide whether it follows the rules of tic-tac-toe and what those are, and whether there's a multiplayer mode or you play against the machine. And hopefully, in a smart system,
it can interact with you and know when to ask you questions as well, and say: tell me, do you want this to be a web game or a mobile game? But one way or the other, it can figure out which questions to ask and it can figure out how to fill out the spec. And I think that is a massive unlock that allows us to break this mould that we have today.
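The gap-filling behavior described here can be sketched in a few lines. This is a toy illustration, not any real product's logic: the gap categories, defaults, and function names are all invented, and a real system would use an LLM rather than fixed tables to decide which questions are worth asking.

```python
# Toy illustration of "filling in the spec": given an underspecified request,
# separate gaps worth asking the user about from gaps safe to default.

REQUIRED_GAPS = {
    "platform": ["web", "mobile"],     # material choices: ask the user
    "opponent": ["human", "machine"],
}
DEFAULTABLE_GAPS = {
    "board_size": 3,                   # safe to fill in: standard rules
    "rules": "standard tic-tac-toe",
}

def fill_spec(partial_spec):
    """Split an underspecified spec into clarifying questions and defaults."""
    questions, filled = [], dict(partial_spec)
    for gap, options in REQUIRED_GAPS.items():
        if gap not in filled:
            questions.append(f"Do you want {gap} to be {' or '.join(options)}?")
    for gap, default in DEFAULTABLE_GAPS.items():
        filled.setdefault(gap, default)  # fill the gap without asking
    return questions, filled

qs, spec = fill_spec({"game": "tic-tac-toe", "platform": "web"})
print(qs)  # asks only about the opponent; the platform was already specified
```

The point of the sketch is the split itself: "create a tic-tac-toe game" is a valid spec precisely because the system can either default or ask about everything it leaves out.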
We have either formal specifications, which are such a nightmare to write that they only make sense in the harshest conditions, where you really have to have that right, when it's a deep medical device or aerospace, and even there they're [00:32:00] limited. They're very painful to write; you'd prefer, from an experience perspective, to just write the code. Or, on the other side, you have no code.
Which are toys: really smart configurations, where all the activity, all the ideas have been pre-created, and now you're just choosing how to organize them and the workflows to run them. And so it's always been very limited. And as they evolve, they become code-like, as in Apex in Salesforce.
So I think the ability to fill in the gaps is really the unlock that LLMs introduced, and it is as important, if not more important, than the ability to create code. And the ability of the system to fill in the gaps correctly is going to be one of the strengths of spec-centric platforms as they come in.
So if you're a marketing agency and you've already built five websites for me and I'm coming to you and I'm asking you to build the sixth website. I only need to give you a very small spec because you're well familiar with me. You're well familiar with the domain and you'd be able to fill in the gaps very well.
But if I try to come to the same individual and I am asking you to build a mobile game for me, I might need to give you very [00:33:00] detailed specifications. If you're some sort of low-cost agency that I'm engaging, that I've never worked with before, for the same marketing website, I might need to provide very detailed specifications.
And so I think the ability to fill in the gaps is very important. So that's one important bit about the spec. The second is that I don't think there is a spec. I think there will be multiple specs. Once again, the way you would specify a mobile game is probably very different than the way you would specify a software library, right?
Or a marketing website. There will be shared traits. But if it's more visual, you might need something that's more visual; if it's very algorithmic, you might need an ability to provide those. And it comes back to me thinking about this as a movement, as a paradigm, as an ecosystem. Today we don't have one: we have multiple languages, we have multiple development environments, we have methodologies that differ in their sort of strengths and weaknesses.
Is it more about iteration or is it more about safety? And I don't think that changes in the [00:34:00] AI era. Everything becomes faster, but you still need to choose your trade-offs. You still need new platforms and systems that are able to adapt to new technologies, to new preferences, and to the industry's preferences.
And so I think there will be multiple specs. I think they will have many formats. Verification is a subset of the spec, because it comes back again to the same statement, right? The way you would verify a mobile game is different than the way you would verify an algo-trading system, than the way you would verify a marketing website.
And so you need these verifications there. I don't have a perfect answer to it. I think the ability to state it is important, and TDD helps us, and mocks help us, and a bunch of the history of what we've been building helps us. I would also point out that, just like regular software, it doesn't have to be upfront, and there's an element of learning from data.
Do you build a system that works well enough, that you engage with well enough, and then, with the data, that system can learn and evolve and become better over time? And [00:35:00] yeah, a bunch of interesting topics here. For instance, if you said, I want a button, and the LLM decides that button will be red, you might be like, I don't really care, it's fine.
Whether the button is red or blue, I don't really care. But for the next version that comes along, when you change the title, you might not be okay with that button changing from red to blue. And so you might want some visibility into the LLM's decisions, and into the persistence of them, and whether they're about to change, for usability.
And so all of those are the reason we think this requires a different software factory. These are not the types of problems that exist today in the regular tools around software development. They're not just another file in your git repository. They're not just another test in your build. There are different types of interactions and we think they need to be figured out.
That was a somewhat long answer.
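The red-button example hints at a concrete mechanism: pinning the decisions a model makes when it fills a gap the spec left open, so a later regeneration doesn't silently flip them. The sketch below is only a guess at what such a "decision log" could look like; the class and its API are invented for illustration and are not part of any real tool.

```python
# Hypothetical sketch of persisting LLM "gap-fill" decisions, like the
# red-button example: once the model chooses a value the spec left open,
# the choice is recorded so a regeneration doesn't silently change it.

import json

class DecisionLog:
    def __init__(self):
        self.pinned = {}  # spec gap -> decision the model already made

    def resolve(self, gap, model_choice):
        """Return the pinned decision for a gap, pinning the model's
        choice the first time the gap is encountered."""
        if gap not in self.pinned:
            self.pinned[gap] = model_choice  # first generation: record it
        return self.pinned[gap]              # later generations: reuse it

log = DecisionLog()
# First build: the spec says "a button" and the model happens to pick red.
assert log.resolve("button.color", "red") == "red"
# A later build: the model would now pick blue, but the decision is pinned.
assert log.resolve("button.color", "blue") == "red"
print(json.dumps(log.pinned))  # visibility into the model's past decisions
```

This is the kind of artifact Guy argues doesn't fit today's tooling: it's neither a file in your git repository nor a test in your build, but a record of choices sitting between the spec and the generated code.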
Ian Livingstone: It's a long-winded answer, but it really helps. It really helps me a lot, to be honest, to fully understand your thinking. One of the things I wanted to zoom in on is that you've talked about this being a community effort. So, as a mental model for comparison, take the cloud wave, right? [00:36:00] In that 2010 to, let's say, 2020 period, what we collaborated on, ultimately, the sort of runtime operating system of the movement, the cloud, became Kubernetes, right? That was the thing that became the community.
It was at the core of everyone's stack; everyone kind of contributed around it, with attachment to GitHub and some type of CI/CD. So it speaks to: okay, there are some core pieces of software, there's some core community movement, and that community movement is represented by some shared platform that we're all building on top of.
I'm curious. In your mental model, is the spec a community thing? Is it company proprietary? What parts of this community movement do you think are, let's say, process methodology? So we all believe that the future of software development is this way, prescribed in this manner, in the same way that the Agile manifesto described a shift from Waterfall to an Agile method in software development.
And what parts do you think need to be core software that are collaborated upon, like things that just occur in the open source?
Guy Podjarny: I think it's a combination of many of those. I don't think the spec is a single standard. I [00:37:00] think there will be multiples. So some specific specs will be company specific and will probably be a bit more closed.
They might have to be; maybe SAP has their own specification methodology that has to do with their systems, right? And specs might compose over time, right? And maybe they connect different formats. So I don't know, I can't tell you; all of the above. But I think spec is a higher-level comment.
It's like asking who all code is owned by. I think there will be multiple standards, and some will not be standards, and some will wish they were standards but are not. So I think those will evolve. But I think community collaboration, even in the cloud, is actually much bigger than what you've described.
Yes, we might have settled on Kubernetes, but before that, maybe there's a settling on containers. And what about microservices? And what about mesh networks? And, how do they interact between them? And what about API standards and how those communicate? What about licensing for software?
Maybe there's even an active discussion yet about hosting something on the cloud versus not, right? About access permissions and [00:38:00] IAM models, and jumping through bastion hosts. So there are all these things that really have evolved as a community. And when I think about cloud, I don't just think specifically about the infrastructure as a service.
I think about CI/CD and microservices and DevOps, the whole way that we develop software, which has changed dramatically. There are methodologies there that are continuous. You can say, generally, the best practice is frequent deployments, frequent small builds that get deployed, and the ability to roll them back.
Not everybody achieves it all the way, but I think it's generally accepted as the ideal. Immutable infrastructure is desirable; not everybody invests in it because it has some cost and sometimes effort. Observability is a thing you should have. And so there are practices that are part of a high-level best practice that has evolved.
And then there are specifics that are ecosystem specific, or just opinions. If you talk about observability, there are many opinions. If you talk about CI/CD, even maybe my comment about frequency, or talk about mono repos versus small [00:39:00] repos. And it's okay, it's thriving, it's the way progress happens.
So when I think about my perception of, say, Cognition with Devin, or many of these sorts of platforms, they're closed environments. They're maybe more the Microsoft of old, saying, come into our walled garden. They might be building something that's very powerful.
But I don't think that's the thing that stands the test of time. And so that openness comes both from an open methodology, from some open source components, which I think have to exist, and from pluggable and composable infrastructure. So I host a podcast called the AI Native Dev. I'm not very creative with names.
I had The Secure Developer before, and now the AI Native Dev. I guess that's what happens when you build dev movements. And I had Matt Biilmann, the CEO and co-founder of Netlify, on it, and he made a really interesting point about how every new technology paradigm challenges the open web, or the openness, because they really drive you towards those environments.
You see this with mobile, and you see this, I guess, with the internet at the beginning, with sort of the chat systems, and with social networks today. And he points out that, I think, we're going to need to deal with that. Now, I think in Matt's case he was referring maybe more to the Vercel versus Netlify reality.
I'm not sure, I'm putting words in his mouth here, but: okay, do we build these as features of an ecosystem, or is it that we build these as collaborative capabilities in favor of the open web? I'd like that to continue. And so I think there's a big evolution there. I want to say something.
If my answers weren't long enough, I've got something additional to add to it that wasn't even in the question, which is, I think, eventually this notion of software creation that becomes so easy because you just request what you want and you're able to interact with the system and it gets to learn you.
So it fills in the gaps. I think it becomes like literacy. And the end goal here, which is far off, it's far, but the end goal here is really for software creation to be a thing that is just accessible to anyone, right? I think the two of you, like many developers listening to the [00:41:00] podcast, have probably experienced encountering some annoyance in your day-to-day life and solving it with technology, or building something.
Whether it's a hobbyist thing or just a small solution, some app for themselves. And you want that to just be accessible, to just be a tool that everybody has to solve their problems. And so I do think that there's long-term significant importance over here. Above and beyond all of those productivity boosts and whatever commercial advantages, I think there's a societal element over here.
When you say that, I think it becomes even clearer why it's important for this to remain open and remain connected, and not be just a platform. So while I'm building a platform that I hope will be the leading platform in this domain, I think it's critical that it is a leading platform in an open domain.
Tim Chen: Amazing. I think you already got us close to what we actually want to go to for the next section, which is the spicy future. Spicy future.
So we want to hear your hot take about the future. [00:42:00] I'm very curious. What is your spicy hot take, maybe about this whole AI native developer thing? What are some things you believe that most people don't believe in yet?
Guy Podjarny: Maybe the one like that is a bit spicy to people is that I think coding goes away.
It's one of those things that I think many people don't want to believe, because coding is fun. I love coding. Coding has this sort of video-game mentality: you can always level up, right? You get a task, you do it, you complete it. You get your endorphin hit on it. You can always level up, or you can do the same level.
It's amazing. So super, super fun. And it's a little bit unfortunate for it to go away, but I think coding goes away. Coding is also the translation layer to the machines, and it is limiting us in so many ways. I don't know precisely the timeline, but I think its effect happens faster than we think.
This is a world in which progress happens in leaps, not in steps. And I'm not advising anyone who is currently a developer to really re-evaluate their life choices; I think there's going to be work for a [00:43:00] while, for many. But do I want my kids to focus now on learning how to code as a core competency?
Five years ago, I would have said coding is literacy. I would have really talked about that, for the same goal that I've mentioned, right? Now I don't think so. Now I think coding as a skill is a short-lived skill. It will remain a hobby, it will remain a specialist skill, but it will not be a prevalent skill.
I don't know, maybe in a matter of a decade, something like that, maybe 10 to 20 years; maybe a decade is a bit too fast.
Ian Livingstone: I have an opinion here. But I'm curious what do you think the limiting factors are today in terms of reaching that reality? What's holding us back? What are we missing from the toolset?
What are the core fundamental problems that aren't solved today but need to be solved for us to actually get to the spec-based future? Where anyone can write code, in the same way that anyone can open a Word doc and type, and it can help lead them through it. I assume that's a version of it, or a possible future. Yeah, I'm really curious to understand what you think is limiting us. [00:44:00] What are we missing? There must be some fundamental problems we have to solve to get there.
Guy Podjarny: Yeah, you need the Tessl platform to launch. No, maybe there's something a bit more substantive. I do think, though, it is the change and the trust axes, and both of them just need to evolve. So on one hand, the tech isn't there. This is at the edge of the possible today.
LLMs are too unreliable, too hard to wrangle and get to do what you want. They're quite limited. I think they're evolving very rapidly, and so this will change, but they are not there today, and it's a bit hard to state when. That's why I used the bigger numbers, because it's hard to say whether the significant progress, where GPT-5 arrives, happens in a year or in five. Probably not more than that, or at least that's a decent guess, but it might take a while.
And then the second, probably the biggest slowdown factor, is that of change: the notion of how we interact with code like that. Some of that is making the system work to the assumptions that we perceive to be necessary today. We're very used to machines being [00:45:00] deterministic: you tell it something, it does the thing you want.
And by the way, people that are less technical oftentimes don't have that expectation. They've encountered the finicky machines. And for many of them, one way or the other, it is magic: it works or it doesn't work. And "the computer doesn't let me do this" is a common perspective.
So some of it is getting the new machines to behave to our expectations, and some of it is learning how to adapt our expectations. Referring to yet another episode of the podcast: I spoke to Caleb Sima, who's a big security guy, and I love his thinking. And we had this interesting conversation about how to think about security scanning in the lens of AI. Say you have an ability to scan your software, and it typically does a better job at finding vulnerabilities that are there and at not giving you alerts about vulnerabilities that are not there, except two times out of 10, or one time out of 10.
It misses things, or it just hallucinates code. Hallucinating code is maybe the equivalent of false positives, okay. But let's just say it misses things. So you have code, it's been blessed, [00:46:00] you've deployed it to production, and it turns out it just forgot to tell you about something, or hallucinated.
All in all, I think that's a better system, if it's better enough in its accuracy. But it's weird. Does it require really rethinking how you do the methodology for security? I think there are just many cases like that. So if you go further into society, and into people that are not coding today, that change is even slower, because you also have to remove fear.
And you might need a generation. You might need the kids that sort of grow up with this type of tech to build them out. On top of that, there are regulations and there are other things. But change, society, humans: they're the slowdown factors, not the tech.
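The scanner thought-experiment above can be made quantitative with a toy model. All numbers and names below are made up for illustration; the point is only the trade-off Guy describes, weighing a fallible-but-broad AI scanner against a conservative legacy one by misses and false alarms.

```python
# Toy model (invented numbers) of "is a fallible AI scanner still better?":
# compare two hypothetical scanners by expected misses and false alarms.

def expected_outcomes(n_real_vulns, recall, false_alarms):
    """Return (missed vulnerabilities, false alarms) for a scanner that
    finds `recall` of the real vulnerabilities and raises `false_alarms`
    spurious alerts."""
    missed = n_real_vulns * (1 - recall)
    return missed, false_alarms

# A hypothetical AI scanner: misses 1 time out of 10, with little noise.
ai_missed, ai_noise = expected_outcomes(n_real_vulns=50, recall=0.9,
                                        false_alarms=5)
# A hypothetical legacy scanner: finds far less and buries the team in alerts.
old_missed, old_noise = expected_outcomes(n_real_vulns=50, recall=0.6,
                                          false_alarms=200)

assert ai_missed < old_missed and ai_noise < old_noise
print(f"AI scanner: {ai_missed:.0f} missed, {ai_noise} false alarms")
print(f"Legacy scanner: {old_missed:.0f} missed, {old_noise} false alarms")
```

Even when the arithmetic favors the AI scanner, the methodology question remains: blessed code can still ship with a vulnerability the scanner simply forgot to mention, which is a different failure mode than today's tools.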
Tim Chen: And I think we actually didn't really talk too much about Tessl.
I know everything is early. I think we've got to the point now where you're working on something about the spec and you're working on something around the community. Maybe tie the bow here. What is Tessl? How do you want to describe Tessl today to our audience? And I guess you were talking about the conference.
That's probably one big way to learn about your community, but where can people start getting to learn more about your [00:47:00] product as well?
Guy Podjarny: Yeah, Tessl is two things, maybe. On one hand, we are looking to be a driving force in getting going this community that we call the AI native dev. There's the podcast, so tune into that, and join the conference on November 21st.
And we're going to have more and more activities to just facilitate this conversation, right? Get people thinking far ahead, because so much of the conversation today is about today. And by the way, a lot of the tools of today are useful facilitators of the conversation for tomorrow: when you think about tests, you think about documentation generation. Some tools, I think, are a little bit less so; code completion, that's maybe a little bit less.
So code completion, that's maybe a little bit less. The other part of Tessl, the primary part of Tessl, maybe, is the platform. So, what we're building is we're building a platform for AI native development. We want that platform to be open and usable by many. And so we're building it with that line of sight.
It doesn't work yet. It's still closed. What you can do is go to tessl.io, or tessl.ai will route you to tessl.io, and join the waitlist. We hope, as soon as we can, to get more and more people in to produce software in this fashion. What I will say is, it's a new paradigm and it's big, and I'm a believer that you have to get the product out there and give users the opportunity to tell you that it sucks, so that you can ask them why, and you can fix that, and you can evolve it.
And in the case of a new development paradigm and platform, it takes a while to build even that MVP, but it's still that MVP that we're going to build at the beginning. And so what I would say is if this is interesting, if you want to be part of what is the future of software development, if you want to be a voice in it, join the community, learn about it, share your opinions and such.
If you want to try out what I think will be the first, and hopefully one of the key, tools in the domain, then join the waitlist and hopefully try out the product. And just to set expectations: we want the early adopters. It's going to have some things where you'd say, wow, this is awesome, and it's going to have some things where you'll say, wow, this sucks.
And I guess our hope is that, instead of just ranting about it, or shutting your browser window and not returning again, you tell us this sucks and work with [00:49:00] us to fix it.
Ian Livingstone: I'm super excited to play around with it myself. I've been waiting for almost a year. So it's going to be a good time. I have one question.
We have a lot of like entrepreneurs or want to be entrepreneurs that are trying to figure out what to do. I'm curious as you build Tessl, as you thought about this, do you think this wave, the shift to AI native software development, do you think this is a wave of creative destruction that results in many new companies?
Or do you think this is a wave that basically becomes an enduring advantage to the existing incumbents? I'm curious how you think about that, because there are lots of people who are very excited about AI, and there's an opportunity. From my perspective, it would reinvent a lot of how we think about software development, and that always represents an opportunity for new company creation, new category creation.
So I'm curious, for those aspiring folks that are looking to go start or looking to join a new company, how's your mental model there in terms of, is it this is a thing that's going to really reward incumbents or is this a thing that is like a creative destruction that's going to enable us to reinvent a lot of stuff?
Guy Podjarny: Yeah, I think the trust axis is [00:50:00] more in favor of the incumbents because fundamentally it boils down to building IP. It boils down to having data to be able to optimize that IP. And so you can potentially win there. But I think mostly if you're talking about something that's at the bottom left or going up the trust axis, then I think you're better off thinking about this as an acquisition.
All you're doing is outrunning the big companies, because they're still bigger, and you're trying to be friendly to them, and they might acquire you, and the numbers are big. That's a legit strategy for people founding a company. It's not interesting to me, as someone who founded Snyk and looks at something big, but it's legitimate. You could say that's what I did with my first startup.
It wasn't as intentional, but it worked out and put me in a good place to start Snyk. So that's one path. The change path, I think, is still favorable to the disruptors. And the most common mistake that people make as they found a company is that they don't think long-term enough.
And this is just a very fast-moving space. And so you have to have a view that is a bit contrarian, that is a bit hard to believe. If everybody [00:51:00] nods when you tell them the story, something is missing; you're not thinking far out enough. And so I do think that there needs to be some boldness and some long-term path.
But I think, on change, the same dynamics that always existed remain. It is the existing players, the existing incumbents: they control the existing workflows, and it is in their best interest to maintain those existing workflows as long as possible. And everything about those companies is wired to maintain it.
Of course, some of them will manage to break out of the innovator's dilemma, or out of this counter-positioning path, but most won't. And so I think those are the opportunities to go after. They're scarier and they are harder. And as I said, I think starting a company in the AI space right now is actually higher risk than typical.
But I think the only two kind of truly viable paths are: one, build a company that outruns the big companies with a plan to be acquired, or, two, build something that's a bit wild [00:52:00] and that goes further out. Some of this is true in general. When I angel invest today, a hundred investments later, I count the leaps of faith.
And clearly, if you talk to a company and you need to have 10 leaps of faith to decide whether to invest in them or not, that's a problem. But if there are zero, that's a problem too. That, to me, is a bad sign: something here is obvious. And in the world of AI, it probably means there are like a hundred reasonably funded companies that you just haven't heard of because they're too small; they're like you, right? There hasn't been enough time for any of them to become a player, but rest assured there's funding, there are smart people, and there's a lot of attention. And so more companies will be around.
So you have to have something a bit wild, a bit contrarian, that of course you believe in; you're not just creating a funky tale. If you're committing your time to it, it has to be worth it. And I guess maybe the other advice that I would have, another common saying that I have for founders, is: nobody cares about your product; they care about the problem that you're solving for them.
And so in the world of AI, you have to think [00:53:00] about problems that will become greater over time, that are real problems. And I think a lot of people find problems today in the AI ecosystem. Those are the high-execution mode plays; it's obvious to many people that those are problems. They might be solved by the platforms; they might be the picks-and-shovels things.
So it's really an execution game over there if you want to win that. But I think the interesting thing is to think about, okay, what would happen when legal reviews are very common, or are done so much more easily because of AI? Would the judicial system collapse? So should I build something for judges that sort of does this? But what about bias? Or maybe what happens in pharma? I don't know, whatever it is that is in your space. Think about it from the lens of: understand the ecosystem, understand the pains, anticipate the best you can what the pains that evolve here would be, and then try to figure it out. Once again, many leaps of faith here. You have to have some conviction in it.
You have to be convincing about that, but then you can build a sustainable, long-term, differentiated company.
Tim Chen: Amazing. [00:54:00] I guess the last thing for our audience: where can we all follow the center of the AI developer movement, which is Guy and Tessl? What social channels or places should people sign up for?
Guy Podjarny: So for the company, it's tessl.io; you can also go to ainativedev.io. I love the .io domain; I find it represents development. So although we bought the .ai, again, it's not about the fact that it's AI, it's about the fact that it's development, so we routed there. Me personally: fortunately, Podjarny is sufficiently uncommon that you can find me on the Twitters and the LinkedIns, and I'm probably most active on LinkedIn.
And so if you follow me on LinkedIn, that's probably best. We do have a newsletter as well. So if you go to tessl.io, you'll find yourself able to register for the newsletter, join the conference, or sign up for the waitlist. All of those are good ways to be involved.
There's lots and lots to discover, and I think fascinating conversations. You can be a believer, you can be a non-[00:55:00]believer, you can think the path is wrong, but I've yet to really find anybody that doesn't think the conversation is interesting.
Ian Livingstone: Awesome. Thank you so much, Guy. This has been really insightful. Tim and I really enjoyed it, and I think this is a great one that we've done.
I can't wait for AI Native DevCon, to be honest.
Guy Podjarny: Thank you. Thanks for having me on.
Simon Maple: Thanks for tuning in. Join us next time on the AI Native Dev brought to you by