Monthly Roundup, Live: Embracing AI in Development and Infrastructure, with Liran Tal, Amara Graham, Armon Dadgar and Patrick Debois
Join Simon Maple and Guy Podjarny as they explore the profound impact of AI on development and infrastructure. With insights from industry leaders Armon Dadgar, Patrick Debois, Liran Tal and Amara Graham, this episode dives into the challenges and opportunities AI presents in the tech world.
Episode Description
In this episode of the AI Native Dev podcast, hosts Simon Maple and Guy Podjarny discuss the transformative role of AI in development and infrastructure. Featuring guests Liran Tal from Snyk, Armon Dadgar from HashiCorp, DevOps pioneer Patrick Debois, and Amara Graham from Camunda, the conversation covers AI code assistants, security in AI-generated code, and the cultural shifts in tech organizations. The episode offers a comprehensive look at how AI is reshaping the landscape and what it means for developers and tech companies.
Podcast Chapters
- [00:00:00] Introduction to AI in Development
- [00:01:00] The Rise of AI Code Assistants with Liran Tal
- [00:02:00] Crafting Secure AI Prompts with Simon Maple
- [00:03:00] Context and Assumptions in AI Models with Armon Dadgar
- [00:04:00] AI in Infrastructure and DevOps with Armon Dadgar
- [00:05:00] The Evolution of AI Tools with Patrick Debois
- [00:06:00] User Behavior and AI-Driven Documentation with Amara Graham
- [00:07:00] GitHub Universe Announcements
- [00:08:00] AI Native DevCon Announcements
- [00:09:00] Summary and Key Takeaways
The Rise of AI Code Assistants
Developers today increasingly lean on AI code assistants to boost their coding efficiency and output. As Liran Tal quips, "Are you saying that developers use like code assistants and do not write all the code on their own?" The question captures the modern development landscape, in which tools like GitHub Copilot are shaping how code gets written. The discussion draws a parallel with Stack Overflow, a staple of developer communities for years: copying and pasting a snippet from Stack Overflow required manual effort, careful consideration, and adaptation. The reduced friction of AI tools, where a developer can simply hit the tab key to accept a suggestion, contrasts sharply with that slower, more deliberate process. This shift introduces new security challenges, since AI-generated code may not align with best practices or secure defaults, and it puts a premium on vigilant oversight and validation. Developers must consider the implications of relying heavily on AI for code generation and ensure rigorous testing and review processes are in place.
Crafting Secure AI Prompts
The conversation moves into security, where Simon Maple and Liran Tal dig into the challenge of crafting secure AI prompts. Simon asks, "Is there a way I can craft a prompt to actually provide me, increase my chances of getting something more secure?" Unfortunately, as their experiments revealed, simply instructing an AI to produce secure code does not guarantee a secure outcome, highlighting a critical gap in AI's ability to generate secure-by-default code. The discussion underscores the value of community-vetted libraries and frameworks in mitigating these risks, helping AI-generated code adhere to established security standards. Developers are encouraged to define clear security requirements and to monitor continuously for potential vulnerabilities.
Context and Assumptions in AI Models
Armon Dadgar introduces the concept of "context freeing" in AI models, a notion that underscores the limitations of current AI systems in understanding context-specific security implications. Armon notes, “The models really don't have a good sense of things like what are the security implications of these things?” This points to a significant challenge where developers need to provide explicit instructions and assumptions to guide AI behavior. For example, making an S3 bucket private requires explicit directives, as Armon mentions, “I need to be explicit. So, I’m just going to modify the generated Terraform code and say private equals true.” This illustrates the critical role developers play in refining AI outputs to align with security and operational expectations. It is imperative for developers to understand the intricacies of their infrastructure and communicate these clearly to AI systems to prevent misconfigurations.
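To make that explicit directive concrete, here is a minimal Terraform sketch of the idea. The resource names are invented for illustration, and "private equals true" is rendered the way current AWS provider versions typically express it, via a public access block rather than a literal private flag:

```hcl
# Hypothetical sketch: stating the privacy assumption explicitly instead
# of leaving it for the model (or a reader) to guess.
resource "aws_s3_bucket" "reports" {
  bucket = "example-reports-bucket" # illustrative name
}

# The explicit "make it private" directive. Without something like this,
# generated code silently inherits whatever default dominates the model's
# training data.
resource "aws_s3_bucket_public_access_block" "reports" {
  bucket                  = aws_s3_bucket.reports.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

The point of the sketch is that the security-relevant decision lives in the code itself, where it can be reviewed, rather than in an unstated assumption the model fills in.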
AI in Infrastructure and DevOps
The integration of AI into infrastructure and DevOps is explored further with Armon Dadgar, who discusses the use of AI for infrastructure generation. He emphasizes the balance between underspecifying AI models and the importance of context in infrastructure as code. “How much can you leave unspecified, particularly in a world of infrastructure where details matter, right?” Armon's insights reveal that while AI can streamline infrastructure setup, the lack of context can lead to insecure configurations. Developers are encouraged to codify essential assumptions to enhance both security and functionality in AI-driven infrastructure setups. This requires a strategic approach to infrastructure design, balancing automation with manual intervention to maintain control and ensure compliance with organizational standards.
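One way to read that advice, as a hedged sketch rather than anything prescribed in the episode, is to codify the assumptions that matter as defaults in a shared Terraform module, so that lightweight prompts inherit them instead of re-deciding them. All names and values below are invented for illustration:

```hcl
# Hypothetical shared module: the few assumptions that carry most of the
# value (instance sizing, image provenance, encryption) are pinned here,
# so neither a teammate nor an AI assistant has to guess them.

variable "instance_type" {
  description = "Approved instance size for this workload tier"
  type        = string
  default     = "t3.micro" # illustrative org-approved default
}

# Assumes the organization publishes hardened base images under a naming
# convention; the owner and name filter are illustrative.
data "aws_ami" "approved" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["approved-base-*"]
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.approved.id
  instance_type = var.instance_type

  # Encryption on by default: a codified assumption rather than a detail
  # left unspecified for a generator to fill in.
  root_block_device {
    encrypted = true
  }
}
```

The specific values matter less than the fact that they are decided once, in reviewable code, rather than re-decided by a model on every generation.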
The Evolution of AI Tools and Ecosystem
Patrick Debois shares his observations on the rapid evolution of AI tools, highlighting their impact on startups and enterprises alike. He reflects on the cultural journey from DevOps to AI, noting the similarities in adoption challenges. Patrick muses, “Do we get ourselves to a place at which everything is LLM powered?” This question encapsulates the industry's trajectory towards AI integration, drawing parallels to the gradual acceptance and integration of DevOps practices. The discussion reveals the need for organizations to adapt culturally and structurally to fully harness AI's potential. This includes fostering a culture of experimentation and collaboration, where teams are empowered to explore AI capabilities while maintaining a focus on security and ethical considerations.
User Behavior and AI-Driven Documentation
Amara Graham provides insights into the shift in user behavior driven by AI in documentation. At Camunda, AI agents are used to enhance documentation access, transforming how users interact with information. Amara observes, “One of the most important things for me was a tool that cited its sources.” This reflects a broader trend towards AI tools that not only provide information but also build trust through source citation and validation. As users grow more accustomed to AI-driven documentation, their confidence in these systems increases, reducing reliance on traditional support channels. This shift necessitates a reevaluation of documentation strategies, ensuring they are designed to meet the evolving needs of users and facilitate seamless interaction with AI systems.
GitHub Universe Announcements
Simon Maple and Guy Podjarny discuss pivotal announcements from GitHub Universe, focusing on GitHub Copilot's new multi-model support and GitHub Spark. Simon notes, "GitHub Copilot has gone multi-model effectively." This development signifies a shift towards more versatile AI tooling that offers developers a choice of underlying model, enhancing flexibility in coding practices. Additionally, GitHub Spark introduces micro apps, allowing developers to create applications using natural language, further simplifying the development process and demonstrating the growing potential of AI in software engineering. These advancements highlight the ongoing evolution of AI tools and their increasing integration into everyday development workflows, offering new opportunities for innovation and efficiency.
AI Native DevCon Announcements
The podcast wraps up with news about the upcoming AI Native DevCon. Scheduled for November 21st, this virtual conference promises a lineup of influential speakers and practical sessions focused on AI applications in development. Simon Maple emphasizes the importance of collaboration, stating, "We're all about collaboration in and around AI native." The conference is a platform for sharing insights, fostering innovation, and shaping the future of AI in development through community engagement and feedback. Attendees can expect practical knowledge from industry leaders and a view of the latest trends and best practices in AI development.
Summary
In conclusion, the October episodes of the AI Native Dev podcast offer a rich exploration of AI's role in development, infrastructure, and security. Key takeaways include the critical importance of context and explicit assumptions in AI-generated outputs, the parallels between AI adoption and DevOps, and the evolving landscape of AI tools. As the industry continues to navigate these changes, secure AI practices, user trust in AI-driven documentation, and the strategic integration of AI in infrastructure and development remain paramount. The journey towards AI-native development is just beginning, and the insights shared by our expert guests provide valuable guidance for the road ahead. By embracing these insights and fostering an environment of continuous learning and adaptation, organizations can successfully navigate the complexities of AI integration and unlock its full potential.
Full Script
[00:00:00] Liran Tal: Are you saying that developers use like code assistants and do not write all the code on their own?
Simon Maple: I know you're a JavaScript developer yourself and we know how rife that ecosystem is with vulnerabilities.
Liran Tal: Millions of years ago when we had this like small website called Stack Overflow, right? And I'm not saying I used it and I'm not saying you used it, Simon, to copy and paste code.
I think you were a bit more, like, concerned and judgmental on what was going on, and you were like, should I copy paste it? I think it's not the same. I think the difference lies in the fact that there is friction in copying code from Stack Overflow versus hitting the tab key, now to the extent that you literally can just prompt the AI on your LLM, complete this test or something, no more [00:01:00] thinking.
Simon Maple: Is there a way I can craft a prompt to actually provide me, increase my chances of getting something more secure? Does not work at all from the experiments that we have done, like at all. So is there some form of saying, using these libraries is mostly going to be more likely acceptable, because a wider community is using them?
Liran Tal: And there's going to be some user at some enterprise company installing it. And at that point it's like game over, it's going to be over.
Armon Dadgar: I'll introduce a new term because we'll come back to it a few times. There's this notion of almost context freeing. The models really don't have a good sense of things like what are the security implications of these things?
What is secure by default? And what you can't expect from them is that they're going to generate things that are secure by default. Either I have to change my statement to the LLM, say, make it a private S3 bucket. I need to be explicit. Or I'm just going to modify the generated Terraform code and say private equals true.
Guy Podjarny: I love that example around the S3 bucket, which is, it's not right or wrong to create that S3 bucket. [00:02:00] It's not a hallucination. It's a fantasy. It's a fantasy of the user that the LLM can read your mind and know what it is that you wanted.
Armon Dadgar: But I think there's this interesting tension too, between sort of underspecifying and letting the LLM fill things in.
So there's this interesting balance here. How much can you leave unspecified, particularly in a world of infrastructure where details matter, right? I think context is the key between where we are today and where we want to go in the future. The word assumption is an important one here. It's, what are the set of assumptions the LLM is going to be making that you want to make explicit because you have context, and that's going to be everything from production environments to regulatory regimes. 80 percent of the value is in codifying 20 percent of the assumptions.
Patrick Debois: I wouldn't want to be in OpenAI's shoes, serving the world. So that's definitely the speed, the impact, the companies playing into the hype, the new technology, the startups, the funding that's there.
That kind of like [00:03:00] rocket ship doing this. We're in the fast iteration phase today: this company's hot tomorrow, there's going to be another, we're just churning through until we settle on a base. Not saying the winner takes it all, but it could be in very different forms. If they're all telling me about model training, I know this is going to be a hard sell because the company is overrun by the data science folks.
I guess that was true for cloud and it's debatable for AI or LLMs. Do we get ourselves to a place at which everything is LLM powered? And it's not just about running your own LLM. Like even if they're all a SaaS version of the LLM, you still have to say, we're going to put the right, like check marks for the policies.
Amara Graham: So we know that people are changing some of their behaviors. That very naturally led us to looking at how do we go from just doing the, like, text based searching of, like, keywords and things like that [00:04:00] to how do we introduce something that is going to offer our users a dialogue, and how do we do it in a way that people are truly comfortable with it and trust it, and don't run to our support agents to say,
can you help me find this in the docs, when they're like, I'm just going to search, and couldn't you just do that? But one of the most important things for me was a tool that cited its sources. And if we needed to do any sort of validation or checking, we could go through and basically, like, spot check.
Simon Maple: Have you seen varying levels of usage on the agent in terms of people perhaps starting off, not trusting it as much or trusting it more and more as they get used to it?
Amara Graham: I noticed people having kind of those conversations where they were playing with it. And I think it was, do I trust this thing, or more generally, do I trust [00:05:00] this?
Simon Maple: Hello and welcome to a special edition of the AI Native Dev. Joining myself today, Simon Maple is Guy Podjarny. Guy, welcome.
Guy Podjarny: Hello Hello.
Simon Maple: We're going to do a few things today.
We're going to, first of all, talk through a number of the key takeaways from this month's episodes in October. We started the month talking with Armon Dadgar, who is one of the co-founders of HashiCorp. We talked a little bit about how, or rather you talked a little bit about how, AI can help with the generation of infrastructure.
A little bit about some of the defaults that you get with that, some of the concerns, and a little bit about AI and DevOps. The week after, we talked with, or I talked with, Liran Tal, talking all over AI security: whether LLMs can generate secure code, or whether, even if you ask
it to create secure examples, you will still get insecure code. The week after that was Patrick Debois, talking a lot about some of the lessons learned from his experiences and our experiences as an industry from our cultural journey in adopting [00:06:00] DevOps, what we can take away from that and actually use as part of our adoption of AI. And just this week,
in fact yesterday, we launched the episode with Amara Graham, who's a senior Dev Rel at Camunda, talking about their adoption of AI agents in terms of providing relevant documentation to users. Guy, we've got a ton to get through.
Guy Podjarny: Really fun episodes, and you should all go check out and listen to these things, or the conversations that relate to your world, cause they're all packed. We're just going to scratch the surface over here. I will say that it was really fun listening to you and Liran banter around, for those who are unfamiliar with the context of it. Simon, sadly, is a true Java fan. I shouldn't say sadly, although he's a big JavaScript fan.
And this is an age long back and forth between the two, but it was a lot of nostalgia over there. It is also, like, a little bit funny to hear two Snykers, one active, one ex, talking about vulnerabilities in open source. So yeah, fun conversation above and beyond the content itself.
Simon Maple: Absolutely. Yeah. We're going to get to [00:07:00] that in just a second, but we want to talk about some news first. For those listening, if you want to hear more sessions just like that, make sure you subscribe; we're available on Apple Podcasts, on Spotify, and many other platforms. So yeah, feel free to subscribe so you get notified of future episodes.
Let's get into some news. Yesterday was GitHub Universe, Guypo, and there were some very interesting discussions. One of them that caught my eye was how GitHub Copilot has gone multi-model effectively. So developers can now choose which model they're effectively using under the covers, whether that's any of OpenAI's models, whether that's Claude 3.5 Sonnet, or whether it's Gemini 1.5 Pro. Thoughts on that?
That's an interesting move, right? Giving developers that choice, but it's also leaning a little bit away from their OpenAI
Guy Podjarny: I want to start with a bit of a deep thinking comment on it, which is I'm really annoyed by the choice of the words model and modal in those. So as a non native speaker on it, and I know Simon, you've had some challenges with pronunciations in the past.
Yeah, it's, Java is my first language model. Is it multimodal? Cause I think Anthropic have a [00:08:00] multimodal. Actually a bunch of these sort of doing it at the same time. Anyways, so in this case I guess you have to be like very punctual with the phrasing over here.
Yeah. The multi-model, look, it's interesting. It's interesting how GitHub, Microsoft, has put tons of money into OpenAI, and I believe has the rights to use OpenAI's IP and have all sorts of favorable agreements with them. So it's interesting commercially, right?
That GitHub, who is clearly part of Microsoft, is giving this choice. I think at the end of the day, humans and any user of any of these systems are very default driven. And so if the default is GPT, which I believe it is, or I'm assuming it is, it's probably going to be, like, the vast majority. So it's a nice touch, but still, even making the investment to be able to choose different models is an interesting move by GitHub.
Generally, I'd say good for users, right? We get to choose those. And I wouldn't be surprised if behind the scenes enterprises, now or later, would be able to plug in their choice of whatever enterprise certified model,
Simon Maple: Came in. And also somewhat good for Microsoft as well, right?
Because, if you think about it, if certain people are already making an investment in Claude, why would Microsoft want you not [00:09:00] to use GitHub Copilot, the tooling, whereas the model's actually becoming a little bit more commoditized there, right?
Guy Podjarny: Yeah, no, for sure. And it's a little bit along the lines of what they do in Azure and all the clouds, right?
Like you want to be able to run with all of them, but maybe there's a cost and a contract, but there's, like, other reasons to lure you to the home platform. But still great to see it. I will say, for those who haven't played with AI, it's not as trivial. The APIs are, like, super identical:
call the LLM of your choice and pass some conversation object; it's very easy to abstract the actual choice. But the prompts, and how it is that you make them work well in all of those: a lot of similarities, but really not at all the same, especially when you get into the fine details of engineering.
And so from an investment perspective, it's not that trivial. Now, that's of course assuming that they've done a good job kind of making them work equally well in all these different platforms. But still interesting. It was a good announcement to see.
Simon Maple: Yeah, absolutely. A second announcement actually, one of the big ones is GitHub Spark. And for those of you who didn't see the announcement or have been on social media for a couple of days, GitHub Spark is effectively [00:10:00] another AI powered tool, but it allows you to create these things called sparks, which are micro apps. And you can effectively using natural language, describe the application you want.
And without apparently writing any code or deploying any code, you'll have an application which is running in a managed runtime that hosts these sparks, these micro apps. And you have other things behind it: if you wanted, like, mini databases, like data storage, those kinds of things; you can change the theme of your spark.
And yeah, effectively you can stand that up to be used across mobile or your desktop applications. This seemed like a fairly big step from what GitHub already had with their Copilot, particularly with that kind of REPL style of running things. AI native was used a lot.
I'd love to hear your opinion on whether this is truly AI native, or whether this is, everybody loves the term now, the next step to it. Yeah. Yeah.
Guy Podjarny: We got that first for the podcast, right? Of, of course. Yeah. Yeah, of course. Yeah. And nobody really means it quite correctly as we do on the . Yeah, exactly.
It's always the case. And as anybody would say, I think that one got a little bit more, I don't know, noise than it really deserved. Like [00:11:00] it's a good move, but it's a bit of a me too type move, right? We had Felipe on the podcast before, showcasing all sorts of great work that he's done on Anthropic artifacts and projects with them; they were probably the true pioneer over here.
OpenAI has since launched Canvas. There's, like, open source Bolt that kind of creates applications, actually goes further away. And it's still interesting. And I'm generally in favor of AI native development. I think people should be trying all sorts of approaches to developing with AI.
And I'm in favor of them doing it. And maybe the fact that there's like a direct tie to the Git repo behind the scenes is a little bit different but otherwise it feels like more of the same. So again, still generally supportive. But I think maybe what it does speak to is just the fact that GitHub has a certain megaphone that expands to developers that is different to, what Anthropic or OpenAI have.
And it's just interesting to see people maybe get exposed to these for the first time because, but at this point, I think it's like a tried and true path of give me a blurb and I will create something. And [00:12:00] especially for things that are like very simple web applications in which the visuals are very clear.
Verification is both simple, hey, did it create the buttons correctly? The layout correctly? I pushed the button, did it do what I want? And also very visual, like very non hidden. And v0, just like all these different applications, do that. Like they still generate the code and then you have to deal with the code and build from it. But it's become, I think, like a pattern now,
which we've seen in all this LLM world, right? One player does something that was novel at the time, and then within three to six months, everybody had it.
Simon Maple: Yeah, and it's interesting, actually, because the announcement came out, there's a lot of noise, but actually I was looking at this thinking, there's nothing we haven't seen here before compared to what other people have done or will do.
A number of people on socials were talking about, oh my gosh, this is going to be the death of Cursor. And I thought, actually, it's a very different workflow to Cursor. Like Cursor, you're in an IDE, you can play with code, as much as using then an AI chat agent or something like that to help you build that code.
Whereas, yeah, you mentioned the session actually that Felipe gave. But yeah, you mentioned that what he was showing there with [00:13:00] Anthropic's artifact support, it feels more like that, particularly with the REPL that they have had for months now.
Showing the React example: you can build up some React and it will show you that live in its own little REPL that you can click around with. The interesting thing is actually being able to stand that app up and actually use that within their kind of directory, effectively launching the sparks as it were.
And some of that additional support behind the scenes, but yeah. Maybe that was more the kind of area that I thought was the interesting bit as well. But again, with things like Vercel and v0, a lot of these things
Guy Podjarny: Deploy that as well. So like when you see this from a platform, it's a good point, which is when you think about Anthropic or OpenAI providing them, they don't have the end to end platform.
And so they might be generating it, then you might be plugging it into something to be able to execute it, to version it, to do these things, while GitHub might have the long term element, or Vercel with v0 might have that sort of long term place to plug it into, for you to continue evolving it.
The continued evolution, it basically comes back to, these are just bootstraps. They're about getting started, which is fine and it's valuable. [00:14:00] But at the end of the day, when you think about the life of a developer, how often do you bootstrap a project versus how often do you evolve it? Do you maintain it?
It's a fraction of the time. Again, cool wow effect for sure, not terribly novel, generally positive development, but I wasn't terribly impressed. A little bit of a spoiler: there's an episode we've not yet aired, but I've already recorded, with Matt Biilmann from Netlify, CEO and co-founder of Netlify.
And we talk about Bolt a little bit over there and about deployments to Netlify and some of the advantage there. So you can hear a bit more about that in the next episode.
Simon Maple: Absolutely. Make sure you subscribe to get notified of that episode as it comes out next Tuesday, I believe. So let's move on.
More news. We at Tessl have announced the lineup for our new virtual AI Native DevCon. There are still some slots left, so don't worry too much if you're on the CFP but haven't been notified just yet; we are still working out the final slots there. But yeah, it's on the 21st of November. It [00:15:00] is running from 4 p.m. to 9:30 p.m. UK time, which is 11 a.m. through to 5:30 p.m. Eastern time. So we're mostly trying to get as many of the US folks as well as the European folks as we can, but hey, time zones, it's hard, you can't please everyone. We've got a number of speakers who have already been announced, Guypo, yourself included.
We have Lisa Raes, who's going to be talking to us about a number of dev tools with which you can use AI and really get effective development processes today. So there's a lot of practical sessions. We have Patrick Debois, who's gonna be talking about what's in AI code assistants and things like that.
We have Tanya Janca as well, who's an amazing speaker. She's gonna be providing us with some great insights around security with AI. We have Jason Han from Datadog. We have Itamar Friedman from Codium, who is now, how do you pronounce that, Guy? I'm gonna leave that to you.
Guy Podjarny: What Codium?
Simon Maple: Yeah. No, not Codium.
Guy Podjarny: The new Codium. Codium AI, yeah. Codium without an E. It's very unfortunate, I gotta say, like for Codium and Codeium, for the [00:16:00] sort of the two
Simon Maple: they rebranded. They rebranded, right? They're now Qodo, QODO, I think.
Guy Podjarny: Oh, I'm hearing that news live.
I'm, we're back into the, you can tell I was away with the family for a few days.
Simon Maple: Yeah.
Guy Podjarny: In the world of ai, that's like a year's worth. You can't,
Simon Maple: you shouldn't be taking family vacation in the AI space. If you work with
Guy Podjarny: ai, you cannot take vacation. Absolutely. Yeah.
Simon Maple: It all changes. Yeah. Codium, arguably one of the biggest testing groups in the AI space,
renamed, rebranded to Qodo. They're a partner of the conference, as well as giving a session. Yeah, feel free to register for the conference at ainativedev.io. It's free to register, and we'll record everything and make that available to you.
Guy Podjarny: Indeed. Yeah. And I'm super excited that we're running the conference, not just because of some of the great talks around it, but because really, we think of AI native development as a dev movement, right?
We think about our role in it as a place to kind of give all these brilliant people space, a platform to share their perspective, to talk about the work that they're doing, how they're evolving it. And while we invest and are confident that we're able to contribute to this conversation and this sort of shaping of this new [00:17:00] paradigm of development,
we think of this as a group activity. And I think we have some really exciting talks and enlightening ones. So I'm excited for this one, and I'm excited that we're getting started on what will be a recurring conference for us.
Simon Maple: Absolutely. And one thing that we would love as well: we're all about collaboration in and around AI native, because nothing's set in stone here.
If you have any feedback around the format of this show, around guests that you'd like, topics that you'd like, not just around the AI Native Dev podcast but also about the AI Native conference, let us know. Because, like Guy says, this is a recurring format, and we'll absolutely take that feedback on board and try and add that into future episodes and versions.
So talking about the podcast, Guy, we had some great sessions. As I mentioned before, actually, a lot of the theme this month was in and around infrastructure, in and around DevOps, and a little bit about security. We had Amara's session, actually, which probably didn't fit very well with the overall month's themes.
Yeah, I want to talk about that a little bit, cause it did actually sit really well with a couple of previous talks that [00:18:00] we had. And incidentally with Tamar Yehoshua. I think she's the head of products, is she, or she's a, I think
Guy Podjarny: she's the president there, but as
Simon Maple: well, and also it reminded me a little bit about some of the discussions that you had with Des Traynor about the differences in how people react, first of all, to a search versus a chatbot. With search, people just tend to throw in a couple of terms; with a chatbot, people tend to, or try to, talk more in full sentences.
And it's very interesting to see. I think what Des was saying was that people who talked knowing this user, this bot rather, is an LLM in the background, rather than some kind of search functionality, got better results. And it was similar in terms of the reaction that Amara discussed with how people were using the Camunda site to use an LLM
to go and find information and find documentation, which uses Kappa AI in the background. So yeah, definitely take a listen to that.
Guy Podjarny: I really liked the [00:19:00] perspective of a user. I think, like, this is a new world, and so naturally, of the people that we get to come in and talk about it, there's a good portion who are building tools in this space. It was interesting to hear Amara's perspective as a builder in terms of an adopter of these tools, her experience in engaging and using tools, in this case Kappa, for engaging with the documentation. But it comes back, like a lot of recurring things, and I felt the same thing about the echoes back to the first episode with Des, talking about people engaging with the chat interface, and what is it that they ask, and how do you know if they're going to get it right?
I also really liked her comment about learning more about what people are searching for, by seeing what they ask about the documentation when they come and engage with it. And I think that's interesting, because we do this, like, when you have docs today: you talk to users, you try to understand how they use the product, you try to build your documentation to, on one hand, reflect the product and its philosophy.
And on the other hand, see how are developers integrating with the APIs, and we're talking about dev tools, right? How are they using it? [00:20:00] There's oftentimes a lot of information, so you're trying to scaffold the paths that are useful for developers to learn about. But how do you find out if those are the right ones or not?
You can do that a little bit with, like, search queries, and see what did people search to be able to arrive. But chat can give you actually, like, much more detailed interaction, because they're, like, pseudo talking to an entity there, and they're saying, I'm trying to do X, Y, Z, and I'm getting an answer, or it did not have the answer.
Or was the answer a little bit weird? And so I think it's a really interesting user discovery path, which I found to be an interesting insight. So yeah, really interesting. And in general, again, for those listening, it'll be great to know from you what's more interesting to you, or, like, to what level to hear more about the sort of the builders of the tools.
Those who are building inside, those who are building products for review. But also, how interesting is it to you to talk about people who are trying to use a bunch of these AI dev tools to help their sort of developer experiences be better, and if there are things that you want. So I thought that was a really good episode, though I agree it was a little bit out of theme, [00:21:00] maybe, in terms of grouping episodes, so it's getting a little bit the short straw here in the summary.
Simon Maple: Absolutely. But let us know if you want more episodes like that, podcast@tessl.io, and we'll absolutely get more on for you. Let's go into talking more about the DevOps, the infrastructure side, for which we had a couple of episodes. First of all, when we think about how you can use AI to help generate infrastructure,
we always talk about generation of things versus the maintenance. A little bit, as you alluded to at the very start: let's start with the generation and we'll move on to the maintenance,
Guy Podjarny: The next steps of it,
Simon Maple: Yeah, one thing that was really interesting that was mentioned was, first of all, it was easier, or rather maybe not easier, but you're more likely to get accurate results when you're trying to generate infrastructure versus code. Because with code, from a syntax point of view, there are many ways to do the same thing; with infrastructure, there are presumably fewer, more paved-road-like patterns with which you can do certain things. Is [00:22:00] that what you said?
Guy Podjarny: I think it's like a little bit less about generating infrastructure per se, but rather infrastructure as code.
Yes. So if you look at like the richness of Terraform, or even Puppet or Chef, like any of these, the richness of language is much smaller. They are a lot more declarative. Many of them actually have YAML kind of roots and things like that, are very declarative. And so basically there's like less ways to do the same thing, fewer ways, and also fewer mistakes maybe to make in the process.
And so for the AI, it's just easier to tell it to follow instructions. If you ask AI to, whatever, write a function that sorts an array, there are a thousand ways in which the same action can be done. And in Terraform, it's not the case, like there are fewer ways of doing it.
So it's just technically easier to create something in Terraform. But of course, like the catch that Armon pointed out is that the implications are pretty substantial, right? And I think my favorite example from him was the comment on S3, which is, hey, you said you want to create an S3 bucket, but is [00:23:00] that, that might've actually been my comment on it.
He had a better one after, I'd like, so would you want that to be a public or a private S3 bucket? And that's a good one, cause there isn't like a right or wrong decision. But what I liked even more is this conversation about an instance. It's, hey, create an instance for me.
Where do you want that instance? What operating system do you want? How much memory do you need? What is it going to run? How do you lock it down? And then at some point, if you start saying all of these things, you've effectively written the Terraform, which to begin with is declarative.
So I thought that was a good point, but I do want to say, and I didn't get to that as much in the conversation, we talked about it a bit: while that's true, at the same time, if I go to someone on our team, or like it was true in Snyk's team, and I say, hey, can I create an EC2 instance, a lot of those decisions are actually implied, because they are implied in some form of practices or guides or things like that that we have there, in existing norms in the organization, in existing settings.
And so while I agree that the sort of the assumptions are very much, if you need to specify everything, you might as well write the code, I do think a lot of it, and [00:24:00] we got into that, a lot of it is, if you know what your general best practices are and you're able to describe those to the AI, now suddenly you have something to cook with here, because you're able to give lightweight instructions and AI will get pretty good at figuring out how to fill in the gaps. Which was really like the crux of that conversation, which is:
one way or the other, it's going to make a decision about whether this is Windows or Linux, and US East or US West, or whatever.
Simon Maple: Yeah. And I think that's the key, isn't it? It's a case of, are you comfortable with the LLM filling in those gaps and making those decisions for each of those things.
And one of the quotes that I absolutely loved was, 80 percent of the value is codifying 20 percent of the assumptions. And the important thing is recognizing what those 20 percent of the assumptions are. What are the areas that are core to me to actually make this, whether it's secure, whether it's whatever it is? Those are the pieces that I care about.
Other areas, I'm fine for the LLM to make those decisions and fill those in for [00:25:00] me. In fact, I don't really care what the operating system is, or I don't really care which region it's located in. But it's about us as humans getting used to recognizing what that 20 percent is. And one other quote that he provided was, the LLM is effectively going to make this bucket private or public because of its training data.
It's seen more public buckets, so as a default it's going to say public: this is statistically more likely what you want. But actually, if that's not the assumption that you want for this infrastructure as code artifact, then you're actually producing something massively insecure. And it's about how do you recognize, this shouldn't be an assumption,
this should be something I fill in. Even if I do want to make it public, perhaps I want to make that decision, not you. Yes. Guy, a question. Secure by default is an interesting theme that we've talked about a lot with code. I could argue actually that insecure infrastructure is far more damaging, because you could actually open up huge amounts of PII data and things just with a slight [00:26:00] misconfiguration.
And there's no need for someone to hack it. It's already open. Is there an outlook where an LLM is secure by default in infrastructure as code or just generally
Guy Podjarny: Yeah, it's really, it's a thought that I ponder a fair bit. And I think the answer is, the LLM itself is not going to be truly trustworthy, maybe ever. And so for the LLM, what you can do is you can make it more frequently secure, versus secure by default.
I think it depends on the data that it gets trained on, which, by the way, oftentimes we don't know. And so it's hard. Yes, maybe you can guess that it's more public buckets, but can you guess whether it's more Windows or Linux machines? I don't know. Those are a bit hard to guess.
You can expect that they will improve, because some form of training data will reinforce the weight of what were deemed good, including secure, decisions for that system. And so I think some element will improve, but they will not be secure by default. But I think where I'm very optimistic, and I talked about this a little bit with Caleb Sima last month, [00:27:00] is in the automated generation process.
And I think that theme comes up again and again. We hear about Glean combining symbolic AI kind of search capabilities with LLMs to produce something that is a total better result. We know that at Snyk, Snyk Code uses symbolic AI for assessment and combines that with an LLM to produce fixes that actually fix the vulnerability.
And even not just combining it with other AI, just combining it with an automated generation process: now you can do something that says, okay, I define a policy, which I might've even created with AI but then reviewed, and every time I generate a piece of infrastructure as code, it runs against my policy and I can tell whether it is compliant.
But the code itself that gets generated, I don't think we can trust that it will be secure. We can strive and invest in making it more often secure. But if you want something [00:28:00] that really gives you a higher level of confidence, then really what you want to lean into is more the automation that happens around that code creation.
Suddenly you've automated the developer, or at least a piece of the developer's work, and now you get the opportunity to introduce those other checks in a core fashion, and even loops where, once a problem is found, it can get automatically fixed and continue along. And that I think is quite exciting.
That can basically make a whole bunch of problems go away, right? It just takes the tedious methodical testing, including security testing and run it. And I agree by the way, with your comment that in infrastructure, that's extra hard. And also quite obscure.
It's, hey, here's a, whatever, a VPC configuration: is it secure? Is it not secure? Like you need a pretty decent level of depth and understanding. And if you created this with some GitHub Spark, or some sort of Bolt above, you may barely know what the infrastructure as code is.
Yeah. And so you are quite likely to not have the skills involved in doing that.
Simon Maple: The good news is, of course, this isn't a new ecosystem where all of this needs to be [00:29:00] built. All of this already exists. And in fact, even because of things like DevOps and our speed going to production is much, much quicker.
Tools, look at Snyk as a great example, right? Liran, who came from Snyk, I think you're familiar with their work as well. A little bit. A little bit. The dev angle, pulling security into dev, running that through the existing developer workflows: the whole point of this is to catch security issues earlier and to do it natively and in an automated way in the workflow.
So adding LLMs into that workflow, we already have the guardrails there. We don't need to build new guardrails there. We just need, like you say, automation, building those frameworks in which an LLM can exist just like another developer. So
Guy Podjarny: Yeah. I think the risk when we're having this conversation is that we carry the risk of becoming a Snyk commercial a little bit, cause, like, we're both believers in that sort of automated security into the dev process. I do think AI introduces a new form of automation, right? So a new opportunity for automation that could be super powerful. I do think, though, like the other point, that I know resonated with you as well, that Armon mentioned, was [00:30:00] around not just the beginning, but also the continuation, right?
And I found in both my conversation with Armon and my conversation with Patrick, both of them, when you start the conversation, both are like smart DevOps people, they know the journey. And where they went was, Armon initially said, let me explain the different stages: the day 0, day 1, day 2, day 3, day 4. Eventually you build a system, then you find an incident, then you want to evolve it, then you realize you have to scale it, eventually you might want to sunset it.
And when we think about AI, we keep thinking about, Hey, I'm going to write some code. I'm going to do it. Is it good? Is it not good? And really like his immediate mindset was, okay, how do I think about AI across the life cycle for it? And Patrick was similarly talking about culture, about sort of evolution.
And about the adoption of these tools in the organization. And the other thing that really struck a chord with me in both of those conversations was, really, I think we're so enamored with the magic of, hey, I'm going to give you a line and [00:31:00] the AI will magically produce a spark or whatever.
We're so enamored with that sort of moment that we just forget that oftentimes, not only are we not solving the maintenance problem, we're actually causing a maintenance problem. Like, you have this thing over here, you might have even made it secure at the beginning, but if you're going to start deploying the stuff all over the place, who's going to maintain it? Who's going to find the issues?
And I found, there isn't really one solution. It's almost like a bit of a, hey, don't forget this or we're going to get in trouble, type of comment. But definitely anybody that comes from the infrastructure world and has experienced a bit more of, like, the downstream implications of a bad upstream decision is very mindful of these risks that come with AI dev.
Simon Maple: Absolutely. And I'd love to hear from people actually in the comments, in terms of: is the LLM flow today, the LLM creation flow today, far nicer from the creation point of view, getting to that stage where you can deploy, versus almost getting to that stage where you should deploy? And I'd love to [00:32:00] hear, like, you know, whether people have perhaps got into trouble.
This is a safe place to share, you know, whether people have realized maybe later down the line that actually, yeah, what we deployed needed updating or needed changing. Is the maintenance burden bigger for you? Let us know in the comments.
One thing also that I kind of found interesting with the Liran episode was he also drew a parallel back to the Stack Overflow days. And I remember, Guy, you were telling me off air that your development was pretty much entirely copy pasted from it, it's all copy paste.
Guy Podjarny: You had one of those keyboards, which is just control.
That would have been plausible if it wasn't for my age, which predates Stack Overflow. Yeah. Yeah. That's very true. Of course, you know, at 70 years of age, you
Simon Maple: can't copy and paste those punch cards. Can you? I forgot. I forgot about that. Yeah. But the interesting thing there was when
Guy Podjarny: you can literally copy paste the punch card is actually something you can actually physically copy paste.
Simon Maple: Yeah. But the friction was hard.
Guy Podjarny: The physical friction
Simon Maple: is
Guy Podjarny: [00:33:00] physical.
Simon Maple: So I think when Liran brought this up, it was like, when we think about people, whether they're in their IDE using Copilot, let's say, or Cody or Tabnine or whatever it is, and they're getting suggestions, and they're accepting suggestions, and they're doing it quickly.
How different is that to Stack Overflow? And it's interesting because, if you think about it, the reason why we have to be very careful about what we copy paste from Stack Overflow is, of course, because the average developer is inherently not great at coding: the quality of code, maintenance of code, the security of code, all of these aren't going to be great with the average developer.
And so it's essential that we do testing and validation and things like that. As a developer, I might need to go through a few Stack Overflow posts before I find the thing I want. I'm looking through the code much more in depth; then I have to copy it, move it back over to my IDE, paste it, make sure it's compiling, make changes to whatever variables I need to. With Copilot,
I hit tab, and of course the friction's far less. So actually, is it almost a worse problem than Stack Overflow? Because I hit tab, I just get it and I continue. It's already fully integrated into my IDE. It's [00:34:00] beautiful from the lack of friction point of view, but from the possibility of accepting vulnerable code or code that isn't of the highest quality? And let's not forget, these tools are training on just the same code in Stack Overflow or
Guy Podjarny: Git repos and things like that.
Yeah. And they give you just as much zero liability, really, around the notion, maybe a little bit more. I think, yeah, it was a really interesting comment. It was this, like, the evil side of zero friction. Yeah. And a little bit of a, be careful what you wish for. And it really reminded me of the whole incentive model that's going on right now with social media, right?
If you're on TikTok or YouTube Shorts and you're just scrolling along, because these platforms have become so immensely good at reducing the friction of watching the next video and finding something that you may find interesting. Yeah. And it's almost like the analogy of, is GitHub Copilot equivalent to TikTok when it comes to code writing, where they're just tempting you to just accept the next offer, and they've gotten very good at it.
Of course that's like a little bit inflammatory but it's interesting to think that if we are relying [00:35:00] on humans reviewing what has been created then we have to be mindful of actually guiding the users to do that. And if the way the system is built is to hit tab, it's not that much different than swipe.
And you're just going to accept those things because that's what the system guides you to do. So it's maybe like a little bit of like a platform responsibility to say if you're going to encourage behavior like that, then you should be carrying some responsibility around what are the implications of that and maybe build your system to introduce some responsibility.
Simon Maple: Yeah, interesting comment here from Rachel as well: nice parallel with autonomous vehicles, the mostly autonomous vehicles leading to accidents because people aren't used to paying attention. It's almost, are you entirely switched on or entirely switched off? Yeah, you're more likely to have that with shared responsibility. So let's, with Patrick, the session with Patrick, you took a walk down memory lane. Patrick Debois, of course, is the, what is he, the half brother of DevOps or something like that?
The father of DevOps, I think they call
Guy Podjarny: him. I think you called him the grandfather of DevOps. Yeah, oh, come on, Guy. I made one mistake, Guy. I'm not going to let you [00:36:00] forget that, yeah. But I think
Simon Maple: the godfather of DevOps
Guy Podjarny: is
Simon Maple: the subject that was like a little bit more unhelpful. Yeah, that's the kind of nicer way of saying it. So yeah, we talked about, there's actually a lot of really nice comparisons with previous tech adoption, including cloud and DevOps and things like that. And one of the kind of big questions there was why AI tools are more easily adopted than, say, cloud infrastructure, which is like a nice one.
And I think it's just far more accessible, right? With the APIs that are available. And that's one of the things that obviously OpenAI broke through on so quickly, in terms of making that accessible to developers. When we think about it from a maturity point of view, there's a lot of things happening here and a lot of people trying to do a lot with it, whether or not it's ready for it.
We can talk about AI engineers. We can talk about AI engineering. We can talk about platform teams, similar to how DevOps works, whereby they're creating paved roads or creating shared infrastructure in which teams can use the best [00:37:00] practices that are already there, or at least best practices that we're trying to create. Your thoughts in and around that? Because I feel like there's a lot of similarities with where we are as a community, as an industry, in adopting AI.
Guy Podjarny: Yeah, I think first of all, I think the analogy to DevOps and cloud native is super, super valuable. And we're going to continue probably pulling a little bit on that string because I think there's a lot of learnings from that.
And history rhymes, and you look at what has happened, and I think that's useful to try and understand where would this wave happen, or where would this wave take us. And I think cloud, and with it sort of DevOps and continuous deployment and such, are probably the latest sort of big, massive evolution.
And so I think it's very much worth doing. I think one of the key differences is really that of pace and ease of adoption. In DevOps, there were these like leading teams that would be using the cloud. And sometimes there were like digital transformation teams on it.
They would have their own mandate to operate on this application. And it was very declarative, it was very like, [00:38:00] this team will move to the cloud. It was an expensive thing, or a new application. And indeed, AI is so interweavable that it's easy to throw it in. And so on one hand, Patrick described a really kind of good evolution that talks about how, okay, like there will be teams that
will adopt this and will run ahead, and they will set the standards. And maybe there will be some dedicated AI engineers, like DevOps teams have come to be, that really have their identity, and we see this now, and they're very good at that. And those teams will draw the envy of the rest of the org, and more in the org will want to do this, and eventually those teams will join and start doing this type of work. But then that type of work will become duplicated between the different teams.
And so that drives the need for a platform team that creates that shared infrastructure, that paved road, so that not everybody reinvents the wheel, but also for security and for governance. So that was the journey of DevOps, and you can very much apply it to AI. And I think the kind of the one caveat is: will people wait for this process? Like, now you describe this [00:39:00] process, it sounds slow.
It's, what do you mean, the board is hammering down, when is my AI chatbot coming out? And when is it that I can let go of my support team, cause they're going to switch to using AI? And I think those conversations are much more urgent. And it's easy to interweave AI.
It becomes a feature, a capability. But it behaves very differently. It behaves very differently in testing and evaluation. And so I think the model is a good one and we need to think about these different stages and companies, especially at like larger scale companies need to think about how they adopt those.
But I guess what I took away is that it might be prudent for organizations to create those platform teams and invest in those shared controls early, and maybe be a bit more declarative around it. Because otherwise, maybe it'll be a little bit more like SaaS adoption than infrastructure as code adoption, right?
Because SaaS adoption did happen in like a shadow IT type way where people just started using it without telling anybody.
Simon Maple: Yeah, interesting. And do you see that role [00:40:00] almost sitting in the existing platform teams? Or do you see it as almost a separate team altogether, an AI adoption platform team, or whatever it's going to be?
Guy Podjarny: Yeah, I think it's probably a dedicated team; where it sits is probably organization dependent. But one of the points that came up both in the conversation with Patrick and in the one with Armon is this notion of a bigger gap between the haves and the have-nots. If you are an organization that is already mature from a DevOps perspective, then you probably have all sorts of advantages.
You're familiar with cloud native approaches. You might be a bit better at breaking down silos. You might be a bit better at autonomous, empowered teams, so you can let people run with it. You also probably have a very good platform team, better data about how your system works and the ability to describe it, and better infrastructure as code. All of those are advantages in your ability to adopt AI, so there might be a compounding effect. And so if you are an organization that's still quite immature in DevOps and doesn't want to go [00:41:00] bankrupt, you probably want to really double down, and maybe use AI as a springboard to jump ahead and create autonomous, empowered teams to build that out.
If you're already a DevOps organization, my default assumption is that you do need dedicated people working on this, and they probably need to sit in the platform team. With my security hat on, it is scary to think about the level of access.
One of Patrick's comments was that some organizations are overridden, or taken over, by the data science folks, which I found an interesting comment, but one that CISOs will definitely nod at: everybody is enamored with running things with AI and running things with data. And then you glance at the production methodologies of these data stacks we train on, of the data models and how they evolve, the versioning, the access to production, and it is generally not a pretty sight. There's often a lack of [00:42:00] production knowledge amidst that group.
Yeah, it's fascinating, but there's a big cultural element. You need to think about how you organize here, organizationally, and I think DevOps is a really good starting blueprint.
Simon Maple: And I love your observation about teams that are already good at DevOps and already good at automating things. As they say, cars can only go fast because they've got good brakes. That ability to know when to stop, and to build trust in where you are in that automation, means they're going to be better off in their adoption of AI, because they've been through a lot of these challenges before. That's a really interesting observation. One of the things people used as they increased their levels of DevOps maturity, rather than saying let's try and get to the ultimate DevOps, was to model it as a maturity model. They said, okay, let's not just try and get up to level five. We need to work out where we are now and how we take our first step, because it's not just a technology problem. It's a process issue. It's a people [00:43:00] problem. So when we think about these same challenges occurring with AI, are we presumably too early for a maturity model?
Because we don't know what the best practices are or what this level five even looks like. And when we think about AI native, we don't know what AI native truly looks like or what the best practices are to get there because no one's truly there just yet.
Guy Podjarny: The irony is that probably half the enterprises you talk to would really love to have a maturity model that says: hey, you're at level two, and here are the steps to get to level three.
And the ironic part is that whoever is giving them that maturity model today is probably just making it up, because the right maturity models come from observing organizations that have tried this, and this is just not tried and true yet. So it's probably okay to draft some hypotheses around it, to think about what the next step is, some journey. It's not that you can't find smart people and think about [00:44:00] what you should focus on first. But yeah, you and I have this conversation from time to time.
Maybe this is the DevOps purist in me, but it's a bit hard to accept calling it a maturity model when there's basically nobody that's mature, let alone patterns that have emerged.
Simon Maple: Yeah, it's like having a nursery maturity model, right?
Guy Podjarny: Exactly, we're just at the beginning of it. First you need to get to these blocks, and then after that...
Simon Maple: Interesting comment here from a LinkedIn user. Thank you, LinkedIn user, whoever you are, for thinking about DevOps and AI parallels: isn't there a challenge where DevOps was easier to adopt as a grassroots movement in the organization, whereas AI tools require a much higher bar of general trust, or legal review, because of the data aspects and things like that? That's a really interesting question. With DevOps, you didn't have a lot of those challenges. There are more moving parts with various AI tools, and I think the legal aspects and the data aspects of what you need to share with these various tools are probably [00:45:00] another challenge. So I think the workflows are quite similar.
The workflow challenges are quite similar, but these are almost additional challenges that need to be worked on in parallel.
Guy Podjarny: I would challenge that a little bit, because I think a lot of the adoption of cloud was actually very much predicated on data access.
So in that sense, it's the same. Maybe DevOps, in terms of some piece of deployment or methodology, was more internal, but I think there were very much data concerns as big obstacles. People were just like, there's no way I'm going to put data into the cloud. And now it's the same too: there's a lot of shadow IT, so this is maybe more analogous to SaaS. You use these tools, and nobody necessarily needs to know; a lot of people basically remain with vague definitions. And then lastly, not all LLMs are hosted. If you want to use GPT or Anthropic, you might run it as a service, but there are some pretty good local, on-prem deployed LLMs that you can use, or things that are built into the cloud platforms.
And so maybe you're still within AWS and you're running it there. So I think there's actually a [00:46:00] lot of adoption happening. I think it's very scary, and so a lot of organizations have issued this kind of broad ban that says: you have to go through us (in practice, often a central gateway; see the sketch below). That's probably natural.
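To make that "go through us" policy concrete, here is a minimal sketch of one common shape it takes: application code uses an OpenAI-compatible client pointed at an internal gateway that the platform team controls. The gateway URL, token, and model alias below are hypothetical assumptions, not anything named in the episode.

```python
from openai import OpenAI

# Hypothetical internal gateway endpoint; the platform team owns credentials,
# logging, data redaction, and the model allow-list behind it.
client = OpenAI(
    base_url="https://llm-gateway.internal.example.com/v1",  # assumed URL, not a real host
    api_key="internal-gateway-token",  # issued internally, not a vendor API key
)

# Application code asks for an alias; the gateway maps "approved-default"
# to whichever vetted model (hosted or on-prem) the organization allows.
response = client.chat.completions.create(
    model="approved-default",
    messages=[{"role": "user", "content": "Summarize this incident report."}],
)
print(response.choices[0].message.content)
```

The design point is that experimentation stays easy, since the client API is unchanged, while access, logging, and model choice stay centralized.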
You can argue whether it's healthy or not, but what is, I think, guaranteed is that it's a fast-moving world, and organizations need to figure out how they allow some experimentation, or they're going to be left behind. So it's really interesting. And one last thing to mention from the Patrick conversation: one of the cultural elements that I think is similar to DevOps, and again is urgent to address, is that Patrick was pointing out that you have to figure out which silos need to be broken here.
At first I was like, what silos? There are no silos here. But he pointed out correctly, as we talked about, that the data science side is actually a bit of a silo. This touches a little bit on production readiness and availability, but it really takes you back to when we tried to get developers to not throw things over the wall for ops to operate, but rather take responsibility.
We had a bunch of these techniques [00:47:00]: walk a mile in their shoes; put a developer on call, which at the time was novel; have ops people write some code to build things out. And it's interesting to think about what that looks like with data science. Do you sit an ops person down to train a model? And then, on top of all of that, what do we do with the acronym? DevAIOps? Sec is somewhere in there. The acronym was exhausted as it was.
Simon Maple: And we should add test as well.
Yeah. Interesting, fun comment here from Ben, wondering what the AI native version of a blameless postmortem for DevOps is.
Guy Podjarny: It's blame the AI. We can all agree on that, because yeah, we had someone on the team, Niv, write an article that says co-authored by Niv and ChatGPT. So you blame ChatGPT. Yeah.
Simon Maple: Yeah. Excellent. So you mentioned the silos; that's one of the big differences between AI and the DevOps culture. [00:48:00] Determinism is another one, which was mentioned in several episodes. And I think it was Caleb Sima you had a really good chat with about determinism.
And one of the questions that came up was: how useful is a tool, let's say a security tool, if when you run it, it tells you there are no vulnerabilities, but the next second you run it again, it tells you there are three vulnerabilities? How useful is that lack of determinism? Would you rather actually have some false positives in there, but get the same answer every single time? And when we think about this level of automation, one thing you absolutely want is some level of determinism, such that if you run something a hundred times, you're going to get the same result a hundred times. You don't want: okay, it's passing, but is that just some non-deterministic outcome, or is it because it genuinely is correct?
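To make that concrete, a team evaluating such a tool could smoke-test it for flakiness by running the same scan repeatedly over identical input and comparing the findings. A minimal sketch, where the scanner command my-ai-scanner and its flags are hypothetical stand-ins for whatever tool is under evaluation:

```python
import json
import subprocess

# Hypothetical scanner CLI and arguments; substitute the tool you are evaluating.
SCAN_CMD = ["my-ai-scanner", "scan", "--format", "json", "./src"]
RUNS = 100

def run_scan() -> str:
    """Run one scan over identical input and return its findings in canonical form."""
    out = subprocess.run(SCAN_CMD, capture_output=True, text=True, check=True).stdout
    findings = json.loads(out)
    # Sort findings so that mere ordering differences don't count as non-determinism.
    return json.dumps(sorted(findings, key=json.dumps), sort_keys=True)

# Run the same scan repeatedly; a deterministic tool yields exactly one distinct result.
results = {run_scan() for _ in range(RUNS)}
if len(results) == 1:
    print(f"Deterministic: {RUNS} runs produced identical findings.")
else:
    print(f"Non-deterministic: {RUNS} runs produced {len(results)} distinct result sets.")
```

Anything more than a single distinct result set means the tool's output can't be treated as a deterministic gate in a pipeline.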
Guy Podjarny: Yeah, I think it's a really interesting question. And I guess what I would add is [00:49:00] just that this is part of what we're talking about here with AI native development, right? You can work within the existing workflow and think about how you optimize it, but that would only get you so far. If you really want to tap into the opportunity, you have to think about how you rebuild or reimagine your workflows to accept the pros and the cons, the strengths and the weaknesses, of an AI native workflow that assumes these capabilities are built in. And I think that is harder. It's not as easy, but it's more rewarding if you manage to make it happen.
Simon Maple: Yeah. Guy, we're already pretty much at time. This flew by. First of all, thank you, Guy, for joining me on this wonderful new format of a live episode.
Guy Podjarny: Yeah. Super fun. And yeah, a lot easier without all the editing.
Simon Maple: Just joking. We'll do a little bit of that. Yeah.
Yeah, thank you everyone for joining in, and thank you for all the questions and comments that I'm seeing fly past on LinkedIn. A couple of things you did mention: you've got a session, I think [00:50:00] it's coming out next Tuesday, with Matt Billman, right?
Guy Podjarny: Yeah, a really good conversation with Matt. Sharp observations about the web and AI: what they're seeing from a Netlify lens, how they think about it internally, but really, in general, how AI is affecting the web. How do we think about it both from building the web and from using the web, and even AI as a user of the web, which was really interesting.
Netlify has a new user, which is AI. And probably one of the most, I don't know, emotion-driving parts of the conversation was Matt pointing out that any new technology oftentimes challenges the open web, challenges the openness, as people try to build walled gardens. And we had a really good section there talking about the importance of preserving the open web, deciding which side you're on, and what that means. So brilliant insights from Matt; I highly recommend listening to it. It ended up being about 10 minutes longer than some past episodes, just because there was so much to talk about.
Highly recommended. It's coming out soon, on Tuesday, so just subscribe to the podcast and you will see it. Did we say subscribe to the podcast already?
Simon Maple: You should. Yeah, generally it's a [00:51:00] good life choice. Now we have a banner. There we go.
Guy Podjarny: Yeah, really good conversations. Joking aside, it's really worth signing up and listening to those.
Simon Maple: Yeah. And if you're interested in seeing some of the amazing sessions that we've now announced for AI Native DevCon, make sure you go to AINativeDevCon.io to grab your free ticket. Everything will be recorded and replayed.
So make sure you register to get those links as we send them. Once again, thank you very much, Guy. And let us know at podcast@tessl.ai what you thought of the episode, and what else you'd like to hear about in the AI native space. We'll chat again soon; tune into the next one.
Guy Podjarny: Indeed.
See you next time.
Simon Maple: [00:52:00] Thanks for tuning in. Join us next time on the AI Native Dev, brought to you by Tessl.
Podcast theme music by Transistor.fm.