Monthly Roundup: AI Security, AI Documentation, Enterprise AI Strategies, with Ben Galbraith

In this episode, we dive into the intersection of AI and software development, exploring key insights from industry leaders. With Ben Galbraith sharing his unique perspectives, listeners will gain valuable knowledge on AI applications, security challenges, and innovative development practices.

Episode Description:

Join hosts Simon Maple and Guy Podjarny as they recap the highlights from September's sessions on the AI Native Developer podcast. Featuring special guest Ben Galbraith, we explore diverse topics ranging from enterprise AI strategies and AI-driven documentation updates to the latest in AI security practices. Ben shares his career journey and insights from his new role at Tessl, providing listeners with a glimpse into the future of AI Native Development. This episode is packed with expert opinions and practical advice for developers looking to harness the power of AI.

Resources Mentioned:

Guy Podjarny's blog on AI solutions

Chapters:

  1. [00:00:22] Introduction and Overview
  2. [00:00:40] Enterprise AI Insights with Tamar
  3. [00:01:03] AI and Documentation with Omer Rosenbaum
  4. [00:01:20] AI Security with Caleb Sima
  5. [00:01:54] Felipe Aguirre Martinez's Development Approach
  6. [00:02:57] Ben Galbraith's Career Journey
  7. [00:10:39] Guy Podjarny's AI Solutions Model
  8. [00:17:16] Security Concerns in AI
  9. [00:24:07] Role of Static Analysis in AI Tools
  10. [00:26:11] Combining AI with Traditional Methods

Full Script

[00:00:22] Simon Maple: Welcome back, and it's time for another monthly recap on the AI Native Developer. My name is Simon Maple. And I'm Guy Podjarny, or Guypo. And so we're going to cover all the sessions that happened in September. First of all, of course, you had a session with Tamar on Enterprise AI, really insightful.

[00:00:40] Guy Podjarny: Tamar is amazing. We'll cover some topics from it, talking about Glean and enterprise search, but also a lot of learning about how to build this type of AI product.

[00:00:48] Simon Maple: Yeah, very interesting.

[00:00:49] Simon Maple: And I spoke with Omer Rosenbaum, co-founder of Swimm. Really interesting stuff about how you can use AI to keep your documentation up to date, and also inform developers when they're touching code that has documentation associated with it. Really insightful sessions there.

[00:01:03] Simon Maple: And then, a fellow podcast host, Caleb Sima. You spoke with him.

[00:01:08] Guy Podjarny: Caleb is excellent. That was no surprise. He does host the AI Security Podcast, and surprise, surprise, we talked about AI security on it, and had really deep conversations. I think I was maybe a bit more blabbery, opinionated in that conversation.

[00:01:20] Guy Podjarny: It is a topic that is very near and dear to my heart, with Snyk and such.

[00:01:24] Simon Maple: Cor, two geeks talking about security, that must have been a lot of fun.

[00:01:27] Guy Podjarny: It was a delight. It was like this big lovefest of "it's all going down". No, it was actually a positive-leaning view on AI and security.

[00:01:36] Guy Podjarny: Yeah, really enjoyed that conversation. We'll share a few highlights here in the recap, as always.

[00:01:40] Simon Maple: Amazing. And to round off the month, we had Felipe Aguirre Martinez, who was actually on earlier this week with his session. And yeah, Felipe reached out to us just to chat about various things that we were talking about on the podcast.

[00:01:54] Simon Maple: And we explored various things that he's doing, and lo and behold, we had him on the podcast. He came over to London to chat with us and give that session all about how he develops using AI, such that he's actually trying, as much as he can, not to write code, but to build an application through prompts and specifications. So, really interesting.

[00:02:12] Simon Maple: We'll cover all those in more depth, our learnings and various topics, but for now, Guy, I absolutely love talking with you, but I've got someone who I think is that little bit more professional, a little bit more eloquent. So Guy, if you wouldn't mind, leaving for, I don't know, five or ten minutes?

[00:02:28] Simon Maple: We're just done recording. I know, but Guy, we're going to try something new, and this isn't anything on you. This is just an experiment we're trying. I'll indulge you, but I'm going to call HR. Okay, that's fine. Yeah, if you leave this room and go into the building next door, that's where HR is.

[00:02:41] Simon Maple: So, joining me is Ben Galbraith. Ben, please join, and Guy, yeah, make sure that the door fully closes so we can have a proper... there we go, there we go. Ben, have a seat. Welcome to the AI Native Dev Podcast.

[00:02:57] Simon Maple: How are you?

[00:02:58] Ben Galbraith: I'm great, Simon. Great to be here. Flattered, and I don't think I'm as charismatic as you've led people to believe, so hopefully it won't be too big of a deal.

[00:03:05] Simon Maple: Well, we'll judge you over the next 10 minutes and then make a decision based on that, shall we?

[00:03:08] Ben Galbraith: Oh, that makes it all better. Okay.

[00:03:10] Simon Maple: So, Ben, tell us a little bit about your illustrious career till now.

[00:03:15] Ben Galbraith: All right. Should I aim for, like, a 20, 25 minute survey?

[00:03:18] Simon Maple: Absolutely. Let's do that.

[00:03:18] Ben Galbraith: I'll just give you a quick bit on it. I've been professionally developing software my whole career. I got started early; I think I started professional development when I was 12. I left high school, or what you call secondary school here, midway through to work for Acer full time, and I've done a lot of stuff since then, working on top of major platforms and eventually working to make those platforms better.

[00:03:41] Ben Galbraith: Some of the highlights of that time period include leading product design and engineering for Walmart's global e-commerce business. I worked at Mozilla doing dev tools, then I went to Palm. Palm had this big swing at that moment in time, maybe you remember, where they wanted to make the web the operating system that led on mobile.

[00:04:00] Ben Galbraith: I was part of that. I don't want to call it a misadventure, but I had a great time. And I was part of this thing called Ajaxian that was part of leading the Web 2.0 and Ajax movement of the era. There was a conference series too; that was a blast. And most recently I've been at Google, for about eight years.

[00:04:21] Ben Galbraith: I led product and design for this thing called Firebase, which had this vision to sort of remove all the toil of app development, so developers could just focus on what made their app special. And I spent most of my time at Google on the Chrome team, working on the web platform and making the web better.

[00:04:36] Ben Galbraith: That was amazing, working with just incredibly talented product managers and engineers and designers, working to make Chromium better, working to make V8 better, media codecs. I was just doing a FaceTime with my family, which I do all the time (they're still in the U.S. right now), and it just struck me how amazing codecs have become.

[00:04:52] Ben Galbraith: It was a real pleasure to work with some of those guys, and a bunch of other stuff at Google, driving initiatives like Core Web Vitals and Baseline. Anyway, that's kind of a highlight reel of what I've been up to.

[00:05:01] Simon Maple: That is unreal. Most people don't do half of that in their entire career. Ben, that sounds amazing.

[00:05:07] Ben Galbraith: You're like a flattery machine. I want to be on the podcast more often.

[00:05:09] Simon Maple: Just wait for my next question. Oh yeah, so: Google, great company. They're really good employers, they really look after their staff, don't they? What would make you... what possessed you to go from an amazing company like Google, leading so many different technologies, with such an amazing research group, in sunny, glorious California? What made you leave all that and come to rainy, cold London to work for... yeah, he's gone... to work for Guypo, of all people, at a startup called Tessl? What on earth would make you do that, Ben?

[00:05:44] Ben Galbraith: Well, the way you put it, now I'm having second thoughts.

[00:05:46] Simon Maple: I know. I'm saying it myself and I'm questioning myself too.

[00:05:50] Ben Galbraith: That was a great question. This has turned into a therapy session now. I do have the weather conversation with my wife from time to time.

[00:05:56] Simon Maple: Really? Yeah. That's maybe the most painful part. It's good that you're bringing her across the pond at the end of fall, into winter; that's when the UK really shines.

[00:06:04] Ben Galbraith: Alright, perhaps we'll talk about relocation logistics in a minute. Getting to the Google thing: yeah, I'd been thinking about doing a startup again.

[00:06:10] Ben Galbraith: I've done a bunch of startups, and Google's an amazing place, but eight years is a long time, and I'd been feeling like it was time for me to get out and do something different. And actually, I had this funny moment, I don't think I mentioned it in my survey. Guy and I have been friends for ages. We crossed paths early in my career, when I was at Walmart and he was at Akamai, and it was fun to see his journey at Snyk, and I had the chance to check in with him regularly along the way.

[00:06:34] Ben Galbraith: And we had this funny moment in February this year where I texted Guy and said, hey, I'm thinking about leaving Google to do a startup. And I described what I had in mind, using principles that we now call AI Native Development, and he replied with this sort of wry smiley emoji saying, we should probably talk, I've made a change too. At first I just felt like, yeah, it sounds great, but London? I don't know that I could do London. And then I came out in the summer, you remember, and got to spend time with Guy and meet the team here at Tessl, and I was blown away.

[00:07:02] Ben Galbraith: I was blown away learning more about Guy's vision, and just meeting you and the rest of the Tessl team (to be fair, maybe present company excepted, but the rest of the team seemed pretty amazing folk). And I walked away from that... I came here feeling like, I'll at least have this moment with Guy and explain to him why this is too big of a change for me right now.

[00:07:20] Ben Galbraith: And maybe I'll see if I can be an advisor to the company or something. And then I left that visit feeling like, wow, I have to be a part of this team. And I was fortunate that you and Guy and the rest of the team maybe felt a little similarly about me being a good fit. And then I had the difficult conversation with my family. But I have to explain:

[00:07:38] Ben Galbraith: We've had a history of adventures as a family. We lived in Hong Kong for two months when I first started at Google. And we loved that. We'd still talk about it all the time. And when I talked to my family about both sort of the professional opportunity and just the chance to have another family adventure, to be honest, we all took a leap and felt like this could be a really special thing for us.

[00:07:55] Ben Galbraith: I did not talk about the weather with my family. Yeah, good call. We did watch expat videos on YouTube, and the weather came up over and over again. My wife turned and looked at me and said, this was not part of the deal. Yeah. But I'm sure it will be fine.

[00:08:05] Simon Maple: And very briefly, because I think Guy's coming back from HR now, and they're bored of the number of complaints he has about me; they just turn him away. So he'll be back any second. If you were to think of a couple of things that have surprised you, or that you've been super pumped about learning and doing at Tessl, what would those be?

[00:08:23] Ben Galbraith: Two big surprises so far.

[00:08:26] Ben Galbraith: First one isn't at Tessl, it just has to do with London itself. I once said that I thought New York City was one of my favorite cities, maybe my favorite place in the world. And I've been to London many times through the course of my career. I had folks that I managed in London when I was at Google. But there's something about being here now, because I spend most of my time here in London.

[00:08:43] Ben Galbraith: I love the city. I love the English countryside far more than I expected, and that's been a huge surprise. I love it here. But getting to Tessl, the other thing that's been surprising is, you know, I knew from my visit to London how amazing the Tessl team was.

[00:08:57] Ben Galbraith: Present company may be the exception, but I've been really surprised at how many developers I've met who share the vision that we've been talking about, that you and Guy have been talking about on the AI Native Dev podcast. People have seen this exciting possibility of going further than using AI as a sort of code completer or AI copilot, to rethink software from the ground up. It feels like it's in the water: we're all sort of asking ourselves, how would we rethink the way that we build software? And I feel like at Tessl, that's been maybe my biggest surprise: how many people are rooting for this movement that we're trying to create.

[00:09:32] Ben Galbraith: And really want to be a part of it.

[00:09:34] Simon Maple: Yeah, that really resonates with me, actually, because I was at a user group presenting last week, and I was a little unsure about giving these messages and talking about where the future is going. But people were actually quite receptive to it. I thought there would be levels of fear or uncertainty, but people were actually keen to learn more, keen to get engaged with the existing AI dev tools, and wanting to explore them. So yeah, that really resonates with me. I see Guy's actually coming back, so we should... and wonderful to hear all the amazing things that you've said about Guypo. We'll chat later. And so, to the listeners, we'll probably hear from Ben in the future as well, maybe running a session or two.

[00:10:14] Simon Maple: But Ben, great to see you, and really looking forward to it. Don't get too comfy, geez.

[00:10:23] Guy Podjarny: You guys done yet? Yeah, how was that? Your days are numbered.

[00:10:26] Simon Maple: Oh, really? If I had a penny for every time you told me that, Guy. Cheers, Ben. This month, you released a really interesting blog about how AI tools can be positioned. Tell us a little bit about that.

[00:10:39] Guy Podjarny: So yeah, I had a chance to finally write down this mental model I have about AI solutions, which I've been stewing on for a good year plus, a year and a half, around how to think about this sort of messy world of AI solutions. It's true for the dev space, but it's also true for other domains, and for what I've seen as an investor.

[00:10:55] Guy Podjarny: I don't want to repeat the blog post here, you should check it out on the Tessl blog, but at a high level, it talks about how AI solutions can be mapped across two dimensions. One is change: how much do I need to change the way I work to be able to use this product?

[00:11:12] Guy Podjarny: And the other is trust: how much do I need to trust it to get things right for it to be useful to me? So let me give a couple of examples. On the change axis, Synthesia is a good example. Synthesia is a text-to-video solution: you create videos with text. That's an entirely different way of creating video; there's no question of just using it as a copilot for the way we create videos today.

[00:11:36] Guy Podjarny: It's a total reinvention of how you're doing it, which gives them the opportunity to entirely replace all sorts of bottlenecks and problems people had.

[00:11:46] Guy Podjarny: But it's a big change. The people involved in producing a video in a company, a training video, a media video, whatever, need to do things that are entirely different. You have to rethink the supply chain, some people get startled by it, maybe it's not the same skill set you want in the team. So it's a big change.

[00:12:00] Guy Podjarny: But there are really big disruption opportunities for a startup coming into that space, and an opportunity to really rethink the problems in a domain. The other axis is trust: how much do I need to trust it to get it right, even if it didn't require me to change how I work?

[00:12:15] Guy Podjarny: I think the simplest example there is robotaxis. If you pull out the Uber app or something like that and call a taxi, the car arrives, you get in, and you get dropped off at a location. It's the same experience whether a human drove that car or it's AI driving it.

[00:12:32] Guy Podjarny: So, very little change in how you work, but massive amounts of trust, right? You need to trust that this car will get you there alive. And so I think that's a good example. So those are the two dimensions, and when you think about AI solutions, you can plot them: how much change is required to adopt this, and how much trust.

[00:12:50] Guy Podjarny: And what you find is that the tools that get the most adoption today are at the bottom left, in that sort of low-change, low-trust quadrant. Copilot, for instance, for development: you're just typing, it plugs into how you code today, and you can eyeball the result and say whether it's correct or not. It has some security implications if you don't review it, but you can use it very quickly. As long as it works often enough, it just makes things better; no reason not to use it. But it's incremental. It doesn't entirely address any problems, and it doesn't rethink the workflows. Still, that's where we see the most action and the things that can help you today.

[00:13:27] Guy Podjarny: No change, no trust required. And then of course the last quadrant is the top right, which is what we call the AI Native future, which you hear us talking about here: AI Native Development, spec-centric, and all those things. That's a far cry from today. So anyway, that's the mental model in a nutshell.

[00:13:42] Guy Podjarny: And there are more details, more examples, and more on how to think about charting your own journey between those quadrants in the blog post.

[00:13:48] Simon Maple: That got shared a lot. Tell us a little bit about the feedback. Any points that made you think further about it?

[00:13:52] Guy Podjarny: Yeah, it was fun to see. As I said, I've been brewing this for a while now, so I got a lot of feedback over the year from smart people around it. It was good to get a lot of response, people liking it and reposting it. Getting some positive comments on LinkedIn is all nice.

[00:14:07] Guy Podjarny: But I think a better indication is when someone, on their own initiative, posts about it and talks about how it works. What I found most useful is that people talked about their own domains and how this applies to them. So I'm happy to hear that they're using it and finding it useful.

[00:14:23] Guy Podjarny: And I also liked that the same week, just after, or just before, I think, Microsoft and Salesforce both had big AI announcements. It was nice to see how they each chose different paths. There was Microsoft Copilot Pages, which tries to create this sort of funky way in which people can collaboratively edit a document.

[00:14:39] Guy Podjarny: Multiple people, each with AI assistance, edit the document and maybe combine the edits. So I think that's firmly on the change path. You look at it, it feels weird, it feels odd to think about it. Maybe they got it right, maybe they didn't, but it's definitely a change from how it is today.

[00:14:55] Guy Podjarny: And Salesforce launched this Agentforce thing with automated agents, which is very much the trust path. It's the same work: you open the ticket, and instead of a human fulfilling that ticket, it might be an AI agent, but you have to trust that the agent got it right.

[00:15:09] Guy Podjarny: And so it was another nice, timely demonstration of how there are different ways to tackle using AI to improve our efficiency, our lives, how we do things.

[00:15:20] Simon Maple: Yeah, absolutely. And actually, when I was doing the session with Felipe and we were looking through a number of different tools, it got me thinking in terms of these quadrants, about where things sit.

[00:15:29] Simon Maple: And as I mentioned to Ben, I gave a session walking through a number of different tools last week in London, and I was trying to work out where they should be positioned. And it was interesting, because Felipe, when he talks about using things like Claude, he wanted to then take that code and put it into an IDE to play with it afterwards, and do things like that.

[00:15:47] Simon Maple: And it's interesting that what we see in the evolution of the tools we use isn't just one big leap and we're there. Very often there's a big step of innovation, and then people work out how we can pull that back a little bit into our IDE. So it's interesting to see something like Claude pushing into Claude Projects, for example.

[00:16:05] Simon Maple: And then there's this Cursor space, where you can have that conversation and it does it all for you in your IDE. It just brings it closer to what we're more comfortable with, until the step to the next place actually seems smaller. So yeah, I'm really finding it interesting to position things more like that now.

[00:16:20] Guy Podjarny: I love that. I think the opportunity in the change track is to rethink our fundamental assumptions. What would make you change how you work?

[00:16:28] Guy Podjarny: Now, in our context: now that machines can write code, now that machines can fill in the gaps of what you didn't tell them in the requirements, what does that open up, right?

[00:16:38] Guy Podjarny: What opportunities, what problems can we rethink? And then, how exactly those things become reality, that's a process. So that absolutely resonates. People need to try it out, and they learn, their eyes open: yeah, but I can't use that yet, because I'd need to, maybe, leave my IDE.

[00:16:55] Guy Podjarny: And then, you know, maybe that's the Claude example, right? You can go to the web and write a bunch of things and build your application. But people are like, yeah, but there are all these other things I need to do, and they're in my IDE. Okay, so can I bring that experience to me, in the IDE? I think it's awesome.

[00:17:10] Guy Podjarny: I love to see that evolution. We do our bit over here, but it's good.

[00:17:16] Simon Maple: So let's step out of your comfort zone of AI now, and let's talk about security, Guy. If you're unsure, feel free to make it up. What are the concerns that were brought up over this last month around AI security?

[00:17:26] Simon Maple: I know we had Tamar, and obviously Caleb, whose session was around AI security, but there were some interesting mentions in Tamar's session too.

[00:17:33] Guy Podjarny: So, you know, maybe you can take the kid out of the security industry, but you can't take the security industry out of the kid. I was quite happy, actually, to have security-related conversations. We had the one with Caleb, which was clearly about security, by design, by planning, but even in the conversation with Tamar, security was a big piece of it, primarily because of access to data. And the thing to understand is that in the world of LLMs, as Caleb phrased it very well, there's no separation between the control plane and the data plane. Maybe think about SQL. In SQL, you have the command, SELECT * FROM some table, and that's the structure of it.

[00:18:08] Guy Podjarny: And then there's a clause like WHERE field equals something that is text, and you know that the something-that-is-text is the data. That is the data plane, versus the control plane, which is the command. In the LLM world, you don't have that. You just give one instruction. There are no fields; there's no structure to it.

[00:18:25] Guy Podjarny: It's just language, like we humans use, right? And as a result, the system can't delineate very clearly which part of what you said is data and which is control. And effectively there's no solution to that today; I think Caleb highlighted that. I have not yet seen anyone really devise a deterministic solution around it.
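The SQL analogy can be made concrete with a small sketch. The table and values here are illustrative, not from the episode; the point is that SQL's parameter binding keeps the control plane (the query structure) separate from the data plane (the bound value), a boundary an LLM prompt does not have.

```python
import sqlite3

# Toy table for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

# Data plane: the value is bound as a parameter, so even a hostile
# string is treated purely as text, never as part of the command.
hostile = "alice' OR '1'='1"
rows = conn.execute("SELECT role FROM users WHERE name = ?",
                    (hostile,)).fetchall()
print(rows)  # [] -- the injection attempt matches nothing

# An LLM prompt, by contrast, is one undifferentiated string:
prompt = f"Summarize this user comment: {hostile}"
# Nothing in `prompt` marks where the instruction ends and the data
# begins, which is the missing control/data separation described here.
```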

[00:18:48] Guy Podjarny: And so that leaves you in this awkward position when you're offering tools to enterprise companies, because access to data is very important. Glean has an interesting approach to that: the way they approached it, the LLM actually doesn't know any of the data. The LLM only interacts with the control plane.

[00:19:06] Guy Podjarny: It is trained on the systems: how do I go to an HR system and extract information from it, or how do I go to Zoom, or to Gong, and extract some recorded conversations from it? But then, when the user asks to do that, the actions are done using RAG, so only the relevant data is involved.

[00:19:23] Guy Podjarny: Only the data permitted to this user, fetched with traditional authorization, is brought into the conversation. And now, only in the context of this session, the LLM knows this data. And because the LLM never knows the broader data outside of that context, and because fetching the data is done with the user's permissions, they eliminate, and I should say that with an asterisk, right?

[00:19:45] Guy Podjarny: Or greatly reduce the risk of data leaking into the system, and of a user being able to access data that they shouldn't, which I thought was really interesting. We talked about all sorts of security things, and we should talk a bit about generating code and whether it's secure or not, but I think that was the connecting point between the Tamar episode and Caleb's.

[00:20:03] Guy Podjarny: And I think it's something for us to truly internalize when we think about security and LLMs.
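The permission-scoped retrieval pattern described above can be sketched in a few lines. All names and the document store here are hypothetical, not Glean's actual design; the idea shown is simply that authorization filters the data before anything reaches the model's context.

```python
# Toy document store with per-document access-control lists.
DOCS = [
    {"text": "Q3 revenue was $12M", "allowed": {"cfo", "ceo"}},
    {"text": "Standup is at 9:30", "allowed": {"cfo", "ceo", "eng"}},
]

def fetch_for_user(role, keyword):
    """Retrieve matching snippets, filtered by the asking user's
    permissions BEFORE anything is handed to the model (RAG with
    traditional authorization)."""
    return [d["text"] for d in DOCS
            if role in d["allowed"] and keyword.lower() in d["text"].lower()]

def build_context(role, keyword):
    # Only permission-filtered snippets would enter the LLM's context
    # window; the model never sees the broader corpus.
    return fetch_for_user(role, keyword)

print(build_context("eng", "revenue"))  # [] -- eng cannot see finance data
print(build_context("cfo", "revenue"))  # ['Q3 revenue was $12M']
```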

[00:20:07] Simon Maple: Yeah, absolutely. Because you're not allowing the LLM to determine whether it is or isn't allowed to use that data; you're only providing it with the data it's allowed to use for that request, which I think is absolutely core.

[00:20:19] Simon Maple: Now, in Caleb's session, of course, you blatantly asked the question: can an LLM produce code with no security issues at all? And I love the answer he gave, which is: based on the success of your previous company, Guy, no, it cannot produce secure code all the time.

[00:20:33] Guy Podjarny: I was just hopeful. Can you give me an answer: do LLMs generate secure code or not? Yeah, I think maybe part of it is indeed tongue in cheek: do humans write secure code? Snyk shows there's definitely a need for verifying that.

[00:20:48] Guy Podjarny: With Caleb, one of the many things I liked is how he phrases things. He has this sort of concrete approach to thinking about security that's very practical. So he says: LLMs are trained on human code.

[00:20:59] Guy Podjarny: Human code, as we know, is not terribly secure, definitely not by and large. So why would we think that code generation trained on that will be more secure? Above and beyond all the fanciness of whether it understood the code correctly or not, the training data was, by and large, riddled with insecure code.

[00:21:15] Guy Podjarny: And I think it's just a simple understanding that we shouldn't assume that code is secure. And then there was a layer of optimism on top of that. I think the two takeaways that I have are: one, it will get better, because we do still incentivize the LLM.

[00:21:33] Guy Podjarny: When we identify that this code is good and this code is bad, we can tell it to treat the good code as something that is better. And a bunch of the companies that train on enterprise code bases allow their users to do that as well: to say, hey, these are golden pull requests, they're very good. And the second takeaway, which I personally am more excited by, is that if the machine writes the code, once you get into autonomous workflows, then you can really embed automation of the testing into the development process.

[00:21:58] Guy Podjarny: So it's not so much about whether the LLMs produce more or less secure code; it's more about the fact that machines are not lazy. And I don't mean developers are lazy in an accusatory fashion. Although some... Guy... yeah, no, naming no names amongst the team.

[00:22:16] Guy Podjarny: But I don't mean it accusatorially. I just mean, we're humans, right? And so we don't always deliver on all of these toil pieces that we need to do. Machines are different: machines do continue to run those. And so we can run security tests.

[00:22:33] Guy Podjarny: We can fix vulnerabilities. We can do those thoroughly.

[00:22:35] Simon Maple: Yeah. And combining a couple of things there, to move to another topic: how LLMs combine with other ways of doing things, whether it's static analysis or the authorization that we talked about in Tamar's session. It seems we need to clearly recognize that we should combine LLMs with various other technologies or ways of working, so that the overall result is either far better or far more consistent, execution after execution. So there are a couple of topics here. One is around static analysis, and there's actually a really good quote from the Swimm team. I asked: if you didn't use static analysis to determine which areas of code relate to which areas of documentation, how accurate and how consistent would you be in saying that this piece of code change actually requires changes to these other pieces of documentation? And Omer said that 80 percent of the output from an LLM-only approach would be noise. It was actually the static analysis, the creation of the AST, which the LLM then used to determine the linking between code and documentation, that creates that far better experience.

[00:23:51] Simon Maple: And I think we'll see that more and more as we recognize what LLMs are good at and where they need assistance.
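The AST-based linking Simon describes can be sketched roughly as follows. The mapping and function names are invented for illustration, not Swimm's implementation; the point is that parsing code into an AST yields deterministic anchors (function names) that documentation can be tied to, rather than asking an LLM to guess which docs a change affects.

```python
import ast

# Example source file (illustrative).
source = """
def checkout(cart):
    return sum(item.price for item in cart)

def login(user):
    return user.is_valid
"""

# Hypothetical index from documentation pages to the symbols they cover.
DOC_INDEX = {"payments.md": {"checkout"}, "auth.md": {"login"}}

def defined_functions(code):
    """Deterministically list function names via the AST."""
    return {node.name for node in ast.walk(ast.parse(code))
            if isinstance(node, ast.FunctionDef)}

def docs_touched_by(changed_symbols):
    """Docs whose covered symbols intersect the changed ones."""
    return sorted(page for page, symbols in DOC_INDEX.items()
                  if symbols & changed_symbols)

print(sorted(defined_functions(source)))  # ['checkout', 'login']
print(docs_touched_by({"checkout"}))      # ['payments.md']
```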

[00:23:57] Guy Podjarny: Yeah, I love that. That's hallucination prevention, right? It's similar to what Des was talking about with Fin. Absolutely. You have to have some element of ground truth to keep the LLMs in check.

[00:24:07] Guy Podjarny: yeah, I think that was a very good insight on it. And I think a good,reminder of it. And I think we know, we had similar conversations with Caleb around how static analysis plays a role in the security side, and it's interesting to think, we talked before about this change in trust and it's interesting to think about, on one hand, how do we get predictability?

[00:24:24] Guy Podjarny: How do we get determinism, so that things are not noise? And in other cases, how do we accept the lack of predictability in exchange for value? The conversation we had there is: we do static analysis of code today, say in Snyk and other solutions, and they're more deterministic. If you scan the same code with the same rule set, you're going to get the same set of results.

[00:24:47] Guy Podjarny: But,with LLMs, if you scan them,they don't act that way, like they're, they're statistical. So you might scan the same code, 10 times, and maybe 8 times out of those 10, it finds a vulnerability, and twice it doesn't. Maybe sometimes it hallucinates a false positive. You can talk about what's better and what's not, but I think a good question is also how do we adapt, how do we change our methodology to that reality, right?

[00:25:10] Guy Podjarny: When you run a scan and you say, hey, this is vulnerability free, if you run it with something that's statistical, can you make that statement? Can you give it the stamp of approval? Do you need to scan it 10 times now? How do you work with that? And I think the good challenge from Caleb was: it depends on how valuable it is, right? If in those eight times it gives you massively accurate results, maybe you make peace with that. Before you need to bless it, you scan it three times, decide that's good enough statistically, and give it a stamp of approval.

[00:25:39] Guy Podjarny: So I think this notion of dealing with the unpredictability of these systems is really interesting. And you made a good point about how we make them more predictable. That's one part, the trust piece: how do we know that they got it right? And then the other side is the change piece.

[00:25:51] Simon Maple: Yeah.

[00:25:52] Guy Podjarny: I think both of those are really interesting.

[00:25:55] Simon Maple: Yeah, and there were a couple of other things that stood out from the session you did with Tamar, one of which is combining types of AI. For example, Glean obviously has search as well, and it's that combination of symbolic AI with LLMs that creates that really strong power.

[00:26:11] Simon Maple: Another thing, on the point of consistency and that non-determinism: Tamar mentioned that as soon as you look at the enterprise, there are CIOs spending huge amounts of money on software. If they ask a question today and ask it again tomorrow, or even later that day, they're paying that money and they want the same answer.

[00:26:30] Simon Maple: And it's whether we change our mindset in terms of understanding, There's power in this LLM, but you gotta expect that variance in, in determinism. Or whether we need to add some level of consistency in as part of that. So to say, okay, we're going to use LLMs, but we want to track answers to make sure that we're giving consistency.

[00:26:48] Guy Podjarny: there's work to be done one way or the other. And it's, how do you like, I paid big money for this. It better work.

[00:26:52] Simon Maple: So that rounds up another monthly recap, Guy. Can you believe it? What is it now?

[00:26:57] Simon Maple: Is it October?

[00:26:58] Guy Podjarny: Yeah. What is it?

[00:26:59] Simon Maple: 20? Are we 20, 24 still? I can't

[00:27:01] Guy Podjarny: in October now time continue to be a blurry, Oh,

[00:27:04] Simon Maple: yeah.

[00:27:05] Guy Podjarny: it's fun to actually have those moments of, look back at the month. Yeah. Sometimes the month goes by and such. Such a rush on it,stop a moment.

[00:27:12] Simon Maple: Bit of reflection. This month, think of it as someone's like terrible with my routines or my wife is terrible with, is excellent at was close. That was close. I think I might leave that in the edit. Just to just, yeah, absolutely.

[00:27:24] Guy Podjarny: She was excellent at having a routine, sitting down at the end of the week and summarizing what she'd taken from it, and I'm jealous of that. I'm just totally incapable of that routine. So this is a nice forcing function. Let's look at the month and see what happens.

[00:27:37] Simon Maple: And it's always great to talk to so many amazing people. And we've got some interesting sessions lined up as well. I had a session a little while ago with Liran Tal.

[00:27:44] Simon Maple: Going to continue that kind of like security theme into next month. And we had some good, we had some good fun. who else have we got? We've got Patrick Dubois, who's the, is he the godfather or the father or the grandfather or something, uncle or something of a DevOps.

[00:27:55] Guy Podjarny: You should try grandfather.

[00:27:56] Guy Podjarny: I'm sure he'd appreciate that. Grandfather. Yeah. The great grandfather of DevOps. Definitely the creator of DevOps days and a big big big force in making DevOps and that dev movement come to life. Yeah. Yeah. Looking forward to chatting a little bit about how. So you're

[00:28:09] Simon Maple: going to be chatting with Patrick later today and you're going to be talking about the similarities, between the Cloud push and the DevOps push,

[00:28:16] Simon Maple: how we can like create reflections and similarities between that and AI Native. What can we, what we can expect if it follows the same path. So that's going to be, that's going to be a very interesting discussion. Yeah, I'm super looking

[00:28:26] Guy Podjarny: forward to it. I think Cloud Native is such a great role model for AI Native to apply.

[00:28:31] Guy Podjarny: And I think to me, like following DevOps and cloud natives was also very significant in Snyk. We didn't coin a cash phrase, dev first security. It's not a, there's no native in it, security native, conversation that I don't think it was as big a change on it.

[00:28:42] Guy Podjarny: I think these lessons were so valuable in making Snyk work and how do we embed security into development. And so I'm keen to see how do we learn from those journeys and from Patrick's wisdom and apply that to AI native.

[00:28:56] Simon Maple: Yeah. So Guy, let's catch up in a month and we'll talk through some of these amazing sessions again.

[00:29:00] Guy Podjarny: And I'll try to ignore you for it. Yeah, absolutely. If I'm still here; we'll see how that HR... We might have another co-host.

[00:29:05] Simon Maple: Yeah, maybe Ben will be back before we realize it.

[00:29:07] Guy Podjarny: That's a good idea. Okay, problem solved.

[00:29:09] Simon Maple: Excellent. So Guy, I'll see you next month. And in the meantime, I'll be on LinkedIn, looking for work. Thanks all for listening. Tune in next time.

Podcast theme music by Transistor.fm.