Categorizing the AI Developer Tooling Landscape with Amir Shevat. Discussing Copilot, Tabnine, AI Code Completion, AI Testing, and AI Documentation

In this episode, we delve into the world of AI developer tools with the expert insights of Amir Shevat. Discover how categorizing these tools can help streamline development processes and enhance productivity.

Episode Description

Join us as we explore the categorization of AI developer tools with Amir Shevat, a seasoned investor and expert in developer tools. Amir shares his extensive experience in the tech industry, having worked with major companies like Slack, Google, and AWS. In this episode, we discuss the importance of categorizing AI tools, covering key categories such as code completion, test generation, code documentation, DevOps, and monitoring and debugging tools. Amir provides valuable insights into how these tools can transform development processes and what the future holds for AI in software development.

Chapters

[00:00:00] Introduction
[00:00:41] Guest Introduction
[00:02:39] Importance of Categorizing AI Tools
[00:05:25] Code Completion Tools
[00:11:24] Test Generation Tools
[00:18:35] Code Documentation Tools
[00:24:05] Future Developer Roles with AI
[00:27:55] DevOps Tools
[00:36:45] Monitoring and Debugging Tools
[00:40:18] Additional Categories

Full Script

[00:00:00] Simon Maple: On today's episode, we're trying to categorize AI development tools to better map out the space of AI native software development, and we will later deep dive into each of these categories in some of our content, including the podcast, the blog, and other places, where we're going to learn and understand each of these topics in depth.

[00:00:21] Simon Maple: We'll understand what developers and various teams and organizations should be thinking about in the AI tooling ecosystem overall, and perhaps make sure they understand what coverage they need across the various categories as well. So that should help frame the discussions going forward. First of all, why don't we introduce our wonderful guest, Amir Shevat.

[00:00:41] Simon Maple: Amir, how are you?

[00:00:42] Amir Shevat: Good. Thank you for having me. How are you doing?

[00:00:45] Simon Maple: Absolutely. And I'm doing very well, thank you. And to introduce you, Amir, it was funny actually, because I mentioned you on the last episode of the podcast, or rather the first episode of the podcast, to Guy.

[00:00:54] Simon Maple: And I thought, actually, there's so much that I could say about Amir in terms of your experiences, the places you've worked at. And I thought, actually, I wonder how Amir would like to be introduced. So tell us a bit about all the amazing things, whether it's Twitch, or some of the work that you did at Twitter, or the investing that you do.

[00:01:12] Amir Shevat: Thank you. Yes, I have a very built-in introduction that I do every time, so if you've heard me once, I apologize, because it's very repetitive. I turned to the dark side about a year ago; I'm an investor. You know, I've done a little bit of angel investing at that, about 35 angel investments, and then turned to the dark side in earnest.

[00:01:33] Amir Shevat: And I invest in developer tools. In my career, I've been a one-trick pony: I've been in developer tools my entire career. I started at Microsoft in the era of .NET, if you remember that, then went to Google, worked on Chrome, on Cloud, on Android. I worked on a startup program for Google, then joined a small startup called Slack.

[00:01:53] Amir Shevat: That was two weeks before we launched the platform, and I left when we had 250,000 weekly active developers. Then I joined Twitch. It was a lot of fun working with game developers, connecting between Twitch, which is a streaming platform for gamers, and Riot and Blizzard and EA and all that cool stuff. Then I created my own startup called Reshuffle, which got acquired by Twitter.

[00:02:16] Amir Shevat: So Jack Dorsey bought my company, and then Elon closed the APIs. I'm a true believer in open platforms and open APIs for developers.

[00:02:26] Simon Maple: Yeah, absolutely. What an illustrious career so far. It's great to have you on board. And I think, Amir, when I think about who would be a great person to talk about AI dev tools and categorize them, you're top of the list, really.

[00:02:39] Simon Maple: Thinking about how to position the various dev tools, it's important to be able to categorize things. It's really important, right? Because people are thinking, how do we use AI and get AI into our processes? What should people be thinking about? And I think categorizing them gives people that ability to say, look, these are the areas I want to focus on first.

[00:02:58] Simon Maple: I recognize these other categories or areas we could invest in, but let's be a bit strategic and prioritize where we should introduce AI first. Do you think, do you feel that's how people think about it?

[00:03:11] Amir Shevat: Does it help? Definitely. I think that we have so many picks-and-shovels creators right now.

[00:03:18] Amir Shevat: People will help you with this, people will help you with that. And I think as someone who creates code, you need to understand: where is my biggest difficulty? Where is my biggest challenge? Where are the things that are mundane, that I don't want to do every day, that I can offload to these tools to be a lot more productive?

[00:03:34] Amir Shevat: And you can see companies from enterprise to small startups using AI to develop. So it gives you an edge. You just need to choose where you need that edge.

[00:03:44] Simon Maple: Absolutely.

[00:03:45] Simon Maple: And I think what we're going to be doing is, once we create these categories, we will have deep dive sessions on all of these to say, okay, what should we be thinking about?

[00:03:53] Simon Maple: And let's go deep into those. So yeah, the more you say today, Amir, the more work you're creating for me, but don't let that stop you suggesting categories. When we think about categories, is there a way we should do this? Should we be doing it by function, whether it's a testing thing or something else? Should we do it by SDLC, where we say, okay, let's do the development phase, let's do the pipeline phase?

[00:04:11] Simon Maple: How do people think about this? Does it matter?

[00:04:15] Amir Shevat: So I think you could look at it from the primary development phases. I think we've picked major areas where we see a lot of innovation happening, and where we see companies that we really like. Creating work for you.

[00:04:27] Amir Shevat: Maybe creating some work for me. Maybe we can get some of these founders to come and speak on our show and talk about these major areas. I think the key, the way I look at it as an engineer, is: where do I feel the pain the most? What are the things that are repetitive, that are not creative, in my life as a developer, and where can I plug in AI?

[00:04:50] Amir Shevat: And I think that's the approach we should follow, at least in this session. And we are not covering all of these. We did a lot of brainstorming and thought about a lot of categories, but we'll choose the main ones. And we'd love to hear from the audience if there are new categories that they're thinking about.

[00:05:05] Amir Shevat: So this is the beginning of a community conversation.

[00:05:08] Simon Maple: Absolutely. Yeah. We're not providing solutions just yet; we're trying to understand the question. So this is absolutely an open conversation. We'd love to hear if we should merge categories, if there are major things we're missing, and so forth.

[00:05:20] Simon Maple: So Amir, what should our first category be? What's the one top of your list?

[00:05:25] Amir Shevat: So I think the top category is the category that is being used the most today. When I'm talking to developers, this is the number one AI enablement that they're seeing, and that's code completion tools, whether it's Copilot, and there's also Tabnine, which I'm an investor in.

[00:05:42] Amir Shevat: These are tools that are integrated into your IDE, and they're like the extension of what used to be the tab, the ability to auto-complete a lot of the functions, or do refactoring. When I was a Java developer, I used to do a lot of refactoring, because Java is very verbose.

[00:06:01] Amir Shevat: The IDE helps you and understands what are the things where you need to do refactoring. So I see this as an evolution of that. Basically, you use the IDE and you have a Copilot, or you have a Tabnine, that integrates into your IDE and helps you write code, a significant amount of code. So a lot of the blueprints, a lot of the table-stakes code, is done by the AI, and you do just the tweaking, just the checking, and just the writing of the business logic that is unique in your brain.
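
(A quick aside for readers: under the hood, these completion tools roughly work by shipping the code around your cursor to a model and splicing the suggestion back in. Here's a minimal sketch of that loop, assuming an OpenAI-style chat API; the model name and prompt are illustrative, not how Copilot or Tabnine actually work internally.)

```python
# Sketch of an IDE-style completion request: send the code before the
# cursor as context and splice the model's continuation back in.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prefix = '''def parse_config(path: str) -> dict:
    """Read a JSON config file and return it as a dict."""
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "Complete the Python code. Reply with code only."},
        {"role": "user", "content": prefix},
    ],
    max_tokens=120,
)

print(prefix + response.choices[0].message.content)
```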

[00:06:33] Simon Maple: And a lot of people's mileage varies on this one, right? You'll get different developers that will say, oh my gosh, this is insane, it's so good, it saves me so much time productivity-wise, or for learning about APIs when they don't know exactly where to start, and a Copilot or a Tabnine or a Cody, whatever it is, telling me what the next thing I should write is when I've actually just got an empty page.

[00:06:54] Simon Maple: Other developers will say, you know what, it's always generating code in a way I don't like, or in a different style. Is this something that we just need to get used to? Is it something the LLMs need to get better at? How mature would you say we are in the code completion tooling category today?

[00:07:11] Amir Shevat: So I think the AI will evolve to be more personalized, but at the end of the day, if you like everything handcrafted, you would not like what the AI will create for you.

[00:07:23] Amir Shevat: Yeah, it is very hard for AI to use you as the data sample and create code in your image. I think you'll need to understand that this is just like a blueprint; it will have its own opinionated way of writing the code. Tabnine is slightly less opinionated, or rather adheres to the company's best practices, by training on the company's data set.

[00:07:47] Amir Shevat: Copilot is more like a general-purpose data set, which is all the code that's on GitHub. But if you want to train a little bit on your own company's data set, on the way you do things, then Tabnine might be the right solution. And maybe we'll see a solution where the Copilot understands how you write code and maybe uses

[00:08:10] Amir Shevat: you as a training set. But I think it's a smaller training set, so it will never be as accurate as the way you want it to write the code.

[00:08:19] Simon Maple: Yeah, really interesting. And it's actually worth looking at the various different types of tools in this category, right? Because they're all subtly different.

[00:08:25] Simon Maple: Like you mentioned, I think, is it Tabnine as well that has an offline mode? I can't remember. It's Tabnine, right? You effectively download that model and you're actually running it locally in your environment. So if you're on a train or on a flight or whatever, or just coding alone somewhere in a cave,

[00:08:41] Simon Maple: you can effectively still use it offline, which is very interesting. And CodeWhisperer is another one, really good on the Amazon AWS side. If you work with AWS services, it's particularly good in those kinds of spaces, pulling towards the AWS APIs and things like that.

[00:08:57] Simon Maple: So yeah, really worth a look. I guess one of the things that Guy and I were talking about last week was the difference between AI Assisted and AI Native. And I think AI Assisted is more that we're in that same workflow, and this tool is making me faster in that same workflow.

[00:09:18] Simon Maple: Whereas AI Native is perhaps a little bit more, okay, how are we changing the way we work because we're thinking about AI at its core. I guess the ways that you use each of these tools can lean into both, depending on whether you're writing one line, or a function or a snippet, or a module, right?

[00:09:40] Simon Maple: So how do you see people using this mostly today? How do you see it happening going forward?

[00:09:46] Amir Shevat: So yes, and we'll see as we go through the categories, some of these are going to move more towards the agent model, where we're moving from the assistant, the copilot model, to an autopilot model. And there are certain aspects of coding, and not just coding, but for example testing and other things, where you want to have someone not just helping you, but maybe taking on the entire task altogether.

[00:10:12] Amir Shevat: So I think you're very right. We're going to see a movement from a copilot that helps you maybe do small things, maybe refactor, maybe do migrations and stuff like that, all the way to developers offloading tasks to these agents, and then changing the way we actually do things.

[00:10:34] Amir Shevat: And I think it also ties to the tools. So all these copilots live inside your IDE because they're not changing the way you're working. They're not changing your day-to-day. But as we move towards a collaborative approach, it will be, it's called synthetic humans, I think Ryan Hoover from Product Hunt coined that.

[00:10:55] Amir Shevat: What are the synthetic humans that we can have in our engineering team that could do the boring things that humans don't want to do? And we'll see more and more of these as time passes.

[00:11:07] Simon Maple: Yeah. Is it fair to say as well that in this category, you're pretty much seeing new tools, whether from startups or existing companies? It's not really a repurposing of existing tools that maybe we'll see in some of the later categories.

[00:11:18] Simon Maple: But these are purpose built tools for this category.

[00:11:22] Amir Shevat: Yeah. A hundred percent.

[00:11:23] Simon Maple: So what's our next category would you say?

[00:11:24] Amir Shevat: Okay. So our next category is something that I'm really passionate about, because as a developer, I was developing during the test-driven development era, if you remember that stage. I'm sure a lot of engineers are still big believers in that.

[00:11:37] Amir Shevat: And that's the test generation era, or tools. So what can AI do to help me generate the tests, to do better performance testing, better penetration testing, but also just testing all the code to see that it meets the testing criteria? So here we have Codium leading the charge. A lot of developers are using it.

[00:12:00] Amir Shevat: We're also seeing Heal, which is another company I invested in, and it takes a black-box approach. So you create your website, and then you give heal.dev that URL and it acts as a black-box tester. For those of you who don't know, a black-box tester is a tester that looks at the site without having any knowledge of the code base.

[00:12:22] Amir Shevat: So they just go and start pressing on buttons and seeing what is performant, what is not performant. How does the login flow go? Maybe there are edge cases in the inputs. So these are the types of AI solutions. And if you've ever been in testing, you know how hard these types of tests are. As an engineer, you're often blind to all the tests that you need to create, because you build a certain set of code for a certain set of

[00:12:51] Amir Shevat: functionality, but when you put it out in the real world, you see everybody's using or abusing your code in a very different way than what you wanted. Being able to see all the edge cases, being able to use your code in a way that you haven't thought about, and finding all the bugs is a great use for AI, because it could actually look at all the permutations of all the inputs and do a lot of interesting things, from pen testing to just generating unit tests. I think this is where AI could be extremely useful, in an area that is extremely boring and extremely tedious.
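
(For readers who want to make the black-box idea concrete: here's a minimal sketch of the kind of test such a tool might generate, written with Playwright against a hypothetical login page. The URL, selectors, and edge-case inputs are all invented for illustration.)

```python
# Sketch of a generated black-box test: drive the UI with no knowledge
# of the code base, probing the login flow with edge-case inputs.
from playwright.sync_api import sync_playwright

EDGE_CASE_INPUTS = [
    ("", ""),                             # empty credentials
    ("a" * 10_000, "password"),           # oversized input
    ("user@example.com", "' OR 1=1 --"),  # injection-shaped password
]

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    for email, password in EDGE_CASE_INPUTS:
        page.goto("https://example.com/login")  # hypothetical URL
        page.fill("#email", email)              # hypothetical selectors
        page.fill("#password", password)
        page.click("button[type=submit]")
        # A broken flow often surfaces as an error page or a hang; here we
        # just assert the app responded with something other than a crash.
        assert "Internal Server Error" not in page.title()
    browser.close()
```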

[00:13:31] Simon Maple: Yeah, and I think it's interesting actually, because you look at something like code completion that we just talked about, and you get a lot of developers that are like, oh my gosh, is this going to take my job? I'm worried that it's taking the thing that I love away.

[00:13:43] Simon Maple: You look at something like test generation, and other than those folks who are heavily into TDD, who live and breathe TDD, who want to write their tests and want to build functionality that makes those tests pass, testing is very often one of those things that gets squeezed, or that developers don't want to do that much.

[00:14:01] Simon Maple: And it often ends up with functionality that hasn't been tested effectively, or has been so lightly golden-path tested that, like you say, these edge cases just aren't covered. So this is something that could actually really improve the quality of our applications today, because the tests are done properly and thoroughly, right?

[00:14:22] Simon Maple: AI is not lazy, so it will do a thorough job on this.

[00:14:27] Amir Shevat: I can tell you from my experience as a product leader, the most important flow is often not tested by most developers, and that's the onboarding flow. Yes. Most developers create a website, and then you can log into that website.

[00:14:42] Amir Shevat: You log in once, you don't need to log in again. So most developers, when they test their solution, when they test their SaaS, they actually don't go through the onboarding flow again. And when that onboarding flow breaks, or it's not comfortable, or, I don't know, there's a new browser and a new standard and it doesn't adhere to that standard, the developers are almost

[00:15:06] Amir Shevat: always blind to that flow. And going through onboarding again every day to make sure that it's good is very tedious. I don't know any developer that wants to do that. But to your point, AI will never get bored. So that's a great thing: it's a boring task that we can offload to AI, which will say, hey, there's a new standard,

[00:15:29] Amir Shevat: or, I don't know, there's something that could improve in a flow that nobody tests.

[00:15:35] Simon Maple: Yeah. Let's play devil's advocate for a second though, because one of the things that is extremely frustrating for developers is when tests run, maybe fail, and they look at the test and think, oh, come on, this test is absolute BS, right?

[00:15:54] Simon Maple: It's a false positive. It's wasting my time all of a sudden. Me coming from security, my previous job being in developer security, false positives were the bane of everyone's existence, right? Security coming to developers with all these findings that actually maybe don't mean anything or aren't really exploitable.

[00:16:12] Simon Maple: How mature is test generation in general, but AI test generation specifically, so that we don't actually just add all this in and then really frustrate developers? Is this something that we could and should be adding in now?

[00:16:27] Amir Shevat: So I think from what I'm hearing from developers it's pretty useful.

[00:16:31] Amir Shevat: And that's because it's not blindly automated; it's not like it will create every permutation of the test. It's more AI-driven, in terms of: I know what kinds of tests were run on this type of code, so I'll generate in that area. I think the tests that you're getting are pretty good. What you need to check is completeness, that it actually is covering the use cases that you're trying to support.

[00:16:54] Amir Shevat: And I'm sure there are going to be false positives. Just to add to the devil's advocate section: a developer that doesn't use their own code to test it does not develop customer passion for it. So for example, if you're building an SDK, it could pass all the tests, but it's still a sucky experience.

[00:17:15] Amir Shevat: Let's say your auth requires 15 parameters and nobody really wants to use it because of that. AI would probably not say, hey, your login function sucks because it has 15 parameters. This is something that you will need to do. So I still encourage developers, even if you have all these test capabilities, to dogfood.

[00:17:36] Amir Shevat: So use your own code in order to see that it actually is delightful for customers.

[00:17:43] Simon Maple: Yeah. And the developers lean in, not necessarily at the level of the code, but at the level of the test cases that are actually being written, so that they can say, yeah, I can visibly see the coverage

[00:17:53] Simon Maple: based on the use cases, not coverage based on the lines or branches and things like that. We'll let the tooling work that out and make sure it generates that, but I now care about the business logic coverage in that respect.

[00:18:05] Amir Shevat: A hundred percent. So if you create an API,

[00:18:07] Amir Shevat: it could pass all the tests but be a very badly constructed API. So also think about the meta aspect of creating: our craft needs to be good, right? It needs to be delightful; it needs to be connected and provide a good service to developers. Automated tests are still not there in terms of making sure that the code is 100 percent delightful.

[00:18:30] Amir Shevat: And it's our job to do that.

[00:18:32] Simon Maple: Yeah. Amazing. So two categories so far: code completion and test generation.

[00:18:35] Amir Shevat: Okay, so let's move to the next category. Another category that I used to love, but now I'm a lazy developer so I don't love it anymore, which is code documentation. Very important. And the problem is this:

[00:18:49] Amir Shevat: when you write code for the first time, you might be excited, and then you create the documentation, and maybe you're even a good writer, so you're articulate and you documented the code in a good way. But then you make a small change, or someone else makes a small change, and the documentation needs to update.

[00:19:06] Amir Shevat: If you've ever worked in any enterprise, between 20 and 80 percent, closer to 80 percent, of documentation is outdated, or not there yet, or missing. Big chunks of the documentation are just not there. So I think, again, this is the value that AI could bring, in

[00:19:24] Amir Shevat: two directions: me trying to understand code, and me trying to explain code to others. And I think in both areas, AI documentation could be very useful.

[00:19:34] Simon Maple: Yeah. And actually, just going back to the testing point of view as well: tests, documentation, and code are three living streams that can very easily veer away from each other if they're not updated together.

[00:19:49] Simon Maple: So I think testing as well as documentation can follow that. Now, of course, testing will blow up because it can test positive or negative, but are people writing new tests for new functionality? And I think the same kind of thing applies to those two as well.

[00:20:05] Amir Shevat: You have a great point.

[00:20:06] Amir Shevat: Yes. So yeah, it would be nice if you write the documentation, and then the test AI reads that documentation, understands the meaning of what you wanted to create, and then creates tests in that vein. So I think you're right, it's all connected. And once you integrate more of these tools, and once these tools are integrated with each other, we're going to have a much more delightful developer experience.

[00:20:33] Simon Maple: And would you add user documentation and code documentation together into this category?

[00:20:39] Amir Shevat: So we looked at two examples. One of them is Swimm, which is a startup; we'll share the URL in some way. The idea is what you could call code explainability, or the ability to understand what's happening.

[00:20:54] Amir Shevat: Who owns this type of code? What's the history of this code? How does this code interact with other code? Understanding what is happening here is a big problem, and it's a problem of documentation. And Swimm really helps you understand the code, onboard to a new code base, and train other people on your code base.

[00:21:15] Amir Shevat: This is one direction, which is: I want to understand what is happening in the code. The other side of documentation is: I want to explain to someone who's not technical what's happening. What is the product? How do you configure it? What are the options? Maybe I'm adding configuration in some way, or maybe I want to show how to do a flow.

[00:21:37] Amir Shevat: All of this is what's called the technical writing side of code. You usually have a person whose job it is to follow the changes that are happening in Git, in JIRA, understanding where the code has changed. A lot of the time it's developer relations, if the code is related to SDKs or APIs, and then they need to update the documentation,

[00:21:59] Amir Shevat: whether it's client documentation, whether it's on the website that needs to be updated, all the way to sending it to sales and marketing to understand what's happening in the code. So I think here, again, AI could be very useful in becoming this documentation-as-a-service. And there's a company called Tadata, T-A-D-A-T-A, that's trying to understand what are the triggers that require a documentation change.

[00:22:28] Amir Shevat: For example, something happens in your source control, something happens in your JIRA, something happens in Slack, and how does that trigger documentation changes on websites and in many other places.
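
(A rough sketch of that trigger idea for readers, and purely illustrative, not how Tadata actually works: a tiny webhook listener that watches which files a push touched and flags the docs that depend on them. The path-to-doc mapping and payload shape assume a GitHub-style push event.)

```python
# Sketch: receive a source-control webhook and decide which docs need
# regenerating based on which files changed in the push.
from flask import Flask, request

app = Flask(__name__)

# Hypothetical mapping from code areas to the docs they drive.
DOC_MAP = {
    "api/": "docs/api-reference.md",
    "sdk/": "docs/sdk-guide.md",
}

@app.post("/webhook/push")
def on_push():
    payload = request.get_json(force=True)
    changed = {f for commit in payload.get("commits", [])
               for f in commit.get("modified", [])}
    stale = {doc for prefix, doc in DOC_MAP.items()
             if any(f.startswith(prefix) for f in changed)}
    # A real system would enqueue an LLM job here to rewrite each stale doc.
    return {"docs_to_regenerate": sorted(stale)}

if __name__ == "__main__":
    app.run(port=8080)
```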

[00:22:41] Simon Maple: Yeah, really interesting as well. So now, when we think back to that flow that you mentioned, actually, first of all, before we jump into that: intent is really different there as well, right?

[00:22:51] Simon Maple: There's the intent of an AI flow where you're trying to think more as a user, more from a use case point of view: someone who effectively doesn't care about implementation; they care about, what do I need to use your API or service for? How do I use it? Et cetera. The other

[00:23:07] Simon Maple: doesn't care anywhere near as much about the intent of the user. It's really about the understanding of the code flows, and trying to convey that back to developers so they can understand, more architecturally, more implementation-based, the flows of how code works. And I really like going back to your original point about starting with the documentation that then writes the tests that then write the code.

[00:23:31] Simon Maple: That's a really interesting way of looking at it. And when you think about how development roles change as AI gets more established in this space, do you see the role of the developer leaning more into this side, into the tests and the documentation, while the actual implementation piece, the code completion, is more something that just happens?

[00:23:57] Simon Maple: It's something that the AI does that we don't care about; we care about this other piece because it's more interesting and important to the business.

[00:24:05] Amir Shevat: If you analyze the job of an engineer, if you remove all the crap around meetings and stuff like that, it's about distilling business logic.

[00:24:15] Amir Shevat: It's about taking big problems and breaking them down into smaller problems. The bigger, more complex problems go to the senior engineers, and the smaller problems go to junior engineers, and then you define the business logic. So it's connecting between what humans want, so that's the PM defining what the application does, and turning that into: how do I actually do that with code?

[00:24:40] Amir Shevat: So I think humans are much better at understanding what humans want and then turning it into maybe a spec, or some sort of architecture, or some sort of new artifact that AI could then turn into actual code, into a set of tests, into a set of technical documentation.

[00:25:02] Amir Shevat: So I think humans will excel in the area of understanding what humans want: I understand what the customer wants, and now I need to create assets that will help AI generate all the rest. Think of it as becoming a manager of AI agents. Some of these agents will do this job and some will do another job, but you'll be the manager, because I think right now, at least, AI is not good at that part: clearly understanding what humans want and clearly breaking it down into tasks and capabilities.

[00:25:41] Simon Maple: And I think, as a result of that, and the trust gap between where we are today and where we need to be, the level of autonomy that we're willing to allow within the processes is not yet where it needs to be to actually get that flow. But this is the track that I think a lot of people see that we're on, in terms of reaching that.

[00:26:02] Amir Shevat: A hundred percent.

[00:26:03] Amir Shevat: And I think, from my experience talking to developers, the smaller the task and the more defined the task, the better AI will perform. I talk to developers all the time about that. If you know how to break the task into smaller tasks, it is much more predictable. You'll get fewer hallucinations, and I don't like the word hallucinations, I think it's wrong from a technical perspective.

[00:26:27] Amir Shevat: We can deep dive into that afterwards, but you will get better results the more defined your AI task is.

[00:26:35] Simon Maple: Yeah. Awesome. Before we leave documentation: maturity-wise, is documentation something that people should be looking at right now and trying out? Is this something that can actually keep track of changes and provide good output?

[00:26:48] Simon Maple: Does it need more work?

[00:26:50] Amir Shevat: I think documentation actually plays to AI's strong suits. The idea here is that AI is much more verbose than most engineers that I know, in terms of writing words rather than code. And it's also something that AI could really change: if I write documentation for customers,

[00:27:10] Amir Shevat: I could use one tone there, versus when I'm writing for customer success. So I think AI could create a lot of assets that engineers just don't want to create. So I'm very bullish about AI creating documentation in the future.
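
(As a concrete illustration of that tone-per-audience point: a sketch that generates two versions of the same documentation from one API summary, assuming an OpenAI-style chat API. The model name, prompts, and API summary are illustrative.)

```python
# Sketch: generate the same documentation at two levels, one for end
# customers and one for a technical customer-success audience.
from openai import OpenAI

client = OpenAI()

API_SUMMARY = ("retry(fn, attempts=3, backoff=2.0): call fn, retrying on "
               "exception with exponential backoff between attempts.")

for audience in ("a non-technical customer", "a customer success engineer"):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": f"Write short product docs for {audience}."},
            {"role": "user", "content": API_SUMMARY},
        ],
    )
    print(f"--- docs for {audience} ---")
    print(response.choices[0].message.content)
```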

[00:27:24] Simon Maple: Yeah, I love that. So the ability to effectively customize the documentation for the reader.

[00:27:30] Simon Maple: That's one of the things that we always struggle with: we as a company, we as an industry, providing the right level of documentation for each group. But if you can do that at varying levels in an autonomous way, that's magic. Amazing. Okay, documentation.

[00:27:47] Simon Maple: I think we're only at three categories. We have two more that we're going to deep dive into, and then others that we're going to mention and talk about briefly. So what's your fourth deep dive?

[00:27:55] Amir Shevat: The fourth category is DevOps. There was a shift left, and we had SREs. We had ChatOps,

[00:28:05] Amir Shevat: if you remember, GitHub tried to coin that. The idea was: hey, let's not have a dedicated team that runs our code; let's move the responsibility towards the engineering team that actually created the code. For example, at Twitter, most of the running of the code was done by the team that created that code, and maintaining that code in production was the responsibility of the team that created that service.

[00:28:31] Amir Shevat: So if you had a login service, the team that was responsible for login made sure that the login service was up for all the other services. There's a problem there. The problem is that, first of all, you need to have a lot of knowledge that is not around coding: you need knowledge of AWS, knowledge of Google or Azure, knowledge of a lot of configuration. There's a lot of complexity in running scalable systems, and engineers complain, rightly so, that most of their time is spent on meetings and documentation and testing and all the other things that are there. And now they're asked to actually maintain and run their code, and wake up in the middle of the night when that code crashes. So what if an AI could do that?

[00:29:22] Amir Shevat: And again, this is a task that most developers don't want to do. It's a task that is pretty repetitive if you think about it, and it's pretty mundane if you think about it, and AI could provide a lot of value there.

[00:29:36] Simon Maple: Any startups that catch your eye in this space already?

[00:29:41] Amir Shevat: So the last one that I've seen, which is pretty interesting, is called Resolvd.

[00:29:45] Amir Shevat: What they do, which is really cool, is that they tell developers: hey, show me how you actually deploy to production, and I'll create a playbook for that which is repeatable. So that's pretty cool. The idea is: don't do things multiple times. If you ask any developer, if you do something more than two times, you should create a script for it.

[00:30:07] Amir Shevat: So what Resolvd does is capture the way you actually deploy to the cloud, and then every time you need to do another deploy, they create the runbook and know how to run it automatically. It is awesome. But I haven't seen, and maybe I haven't looked hard enough, I haven't seen my dream for this world.
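
(To make the capture-and-replay idea concrete, and this is a toy illustration, not Resolvd's actual product: once the deploy steps have been recorded, replaying them is just running the list in order and halting on the first failure. The commands below are hypothetical.)

```python
# Sketch: replay a captured deployment runbook step by step, halting
# on the first failing command instead of ploughing on.
import subprocess
import sys

# Hypothetical steps a tool might have captured from a real deploy session.
RUNBOOK = [
    ["docker", "build", "-t", "myapp:latest", "."],
    ["docker", "push", "registry.example.com/myapp:latest"],
    ["kubectl", "rollout", "restart", "deployment/myapp"],
    ["kubectl", "rollout", "status", "deployment/myapp"],
]

for step in RUNBOOK:
    print("running:", " ".join(step))
    result = subprocess.run(step)
    if result.returncode != 0:
        sys.exit(f"step failed with code {result.returncode}; aborting deploy")
print("deploy complete")
```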

[00:30:27] Amir Shevat: Can I tell you my dream for this world? (Simon: Go on then.) I want to actually create the code and automagically have it run. I want to deploy to source control, or even not to source control; I want it to run on my computer and then tell the AI: make it run in the cloud. And if it needs to be scalable, make it scalable; if it doesn't, don't make it scalable.

[00:30:48] Amir Shevat: And if you need to ask me a few questions around security and compliance, please do that. But now it's your job to actually run that code. I think we are not far away from that, but I haven't seen anyone that actually gives me that solution. I don't want to do DevOps. DevOps is something that I think is a tax that we have to pay, and I think it's a very interesting profession for those who do it.

[00:31:14] Amir Shevat: But for me as an engineer, I want to focus on my code. The minute my CI/CD finishes, I want to do nothing; the AI will take care of it. And I want the AI to be the one that wakes up in the middle of the night.

[00:31:27] Simon Maple: Yeah, luckily, I think it's already awake. So yeah, maybe more awake than we think.

[00:31:31] Simon Maple: But in terms of, you know, the more we introduce AI closer to production, we obviously need a greater level of trust to be able to do that. When we put AI on the left and we're saying, I want you to create some tests for me, I want you to write some code for me,

[00:31:49] Simon Maple: there are a number of levels of, almost, gates where we can validate that it's done the right thing. The further we push right, though, how much more trust do we need? Or are there additional mechanisms that we need to put in place to make sure that this wonderful new AI deployment environment that I've just created, while it's great 90 percent of the time, doesn't, the other 10 percent of the time, pull my production down or deploy in such a way that

[00:32:14] Simon Maple: some of my services are unavailable? Where do we need to be? And I think this is maybe more of a people question, to be able to trust it.

[00:32:21] Amir Shevat: So I think that's a very good question. And I think the key here is: how opinionated do you need to be in order to do DevOps right? Because 15 years ago, when I worked at Google, we had Borg, and we were talking about self-healing systems.

[00:32:38] Amir Shevat: So if an engineer deployed something that was not performant, the Borg itself would roll them off. So a system, a cloud that knows how to heal itself and knows how to roll back from bad changes, is something that we've had for a very long time. The key is that Google was extremely opinionated in how you developed that code.

[00:33:00] Amir Shevat: You had to use the Google frameworks. You had to use the Google deploy structure. You had a lot of guardrails and a very opinionated way of doing things. Generalizing that problem is very hard. Building something that will do DevOps for every company, in every way, with every opinion and every language, is hard, and up until now not feasible. But maybe with AI, if you use the data set well enough, you could create a DevOps agent that has trust in your environment, and your type of database, and your type of language, and your type of usage pattern, whether it's B2B or B2C, what type of attacks you're going to see.

[00:33:45] Amir Shevat: So I think we are getting there, but we're not there yet.

[00:33:49] Simon Maple: And other things, like you mentioned: if I write code, I just want to deploy it into the cloud, for example. Are we creating a greater opportunity for, I don't want to say hybrid cloud or something like that, but we're getting there.

[00:34:01] Simon Maple: When I say to ChatGPT or whatever it is, Copilot, whatever, I want this code written, and it starts writing it, I'm starting to care less about the implementation if it gets it right. When I say I want this to be deployed, are we doing this in a way whereby we're making strategic decisions about the underlying platforms and the architectures of how things should be deployed?

[00:34:24] Simon Maple: Or do you feel we'll get to a level where I'm like, yeah, if you want to choose GCP, you choose GCP; if you want to choose AWS, you choose that. That's more of an implementation decision now. I just care that, based on the inputs I have provided, I get the right services that I need.

[00:34:40] Simon Maple: You can go ahead and choose what you want and you might actually get a very colorful kind of different deployment based on the decisions it wants to make. Is this a possible future? Do you think? Or do you think people will really still want to keep their stack similar and use AI to deploy onto that similar stack?

[00:35:00] Amir Shevat: I think it depends on the workload that you're trying to build. So for example, if I want to build a website, I don't care which language you use. Maybe I'm opinionated about React versus Vue, but I don't care about the details,

[00:35:14] Amir Shevat: or what parameters are passed between the single-page app and the server side. I don't care about that. I just want a page, and if it's for marketing, I want it to look nice. So I would love to talk to ChatGPT and say: hey, create a website and deploy it. Here's my domain. Do all the DNS

[00:35:35] Amir Shevat: magic that DevOps people do, and deploy that website. That would be amazing. And I think companies like Wix are trying to do that. So that's on the very, very lightweight workloads. If I'm doing server-to-server, very complex, very high throughput, with a very opinionated set of APIs, let's say I'm revamping the Stripe API,

[00:36:00] Amir Shevat: I want to be very opinionated. So the more complex and mission-critical the workload, the deeper the understanding you'll want to have. Most developers don't know the difference between UDP and TCP, or don't care. They just want information to move from one computer to the other. But some very hardcore developers actually are very opinionated

[00:36:27] Amir Shevat: about whether to use UDP or TCP, depending on the infrastructure and the task. So I think the task will define how opinionated you want to be, if that makes sense.

[00:36:37] Simon Maple: Yeah, no, absolutely. Okay. Let's jump onto the fifth and final category that we're going to deep dive on and then we'll mention the others

[00:36:45] Simon Maple: in the list as well. Yes. We're talking about AI monitoring and debugging. This is, I think, going back now to a copilot era where you worked on your code, you deployed it, and now you want to get insights into how that code performs and what's wrong with it.

[00:37:04] Amir Shevat: We've had tools for this forever. There are also interesting companies in this area, like Digma, that look at your data in production and pull that into your IDE. But I think AI could help here as well. The company that I've seen in this area that is pretty interesting is second.dev, and they help you maintain code and also do code migrations.

[00:37:29] Amir Shevat: For example, if you want to move from an old code set to a new code set, they can help you do that. And I file this under the area of: after I've written the code, I need to check that it's running okay, debug when it's not running okay, and maybe improve it through maintenance.

[00:37:48] Amir Shevat: And all of these are tasks that will be aided by AI in the future. So that's the major category here.

[00:37:55] Simon Maple: And this is quite a bit different to, for example, if we compare and contrast this with the code suggestion style category: the code suggestion category is pretty much entirely new tools, new players and startups, or

[00:38:09] Simon Maple: new tools from existing companies. Here, AI monitoring is already a busy space. We're probably going to see a lot of existing monitoring solutions add AI functionality, but there's going to be a lot of space here for new startups as well, right?

[00:38:24] Amir Shevat: 100%. Up until now, we've seen a lot of automatic and semi-automatic debugging and monitoring tools.

[00:38:34] Amir Shevat: This is an area where AI could use a lot of the patterns, take them as a training set to see what a normal day looks like, and then give you insights into monitoring and debugging: hey, this is irregular. So I think there's an interesting area where an AI looks at your traffic patterns and understands what a good day looks like.

[00:38:58] Amir Shevat: And maybe it looks across a year. So at Christmas, you get more shoppers and more people; it's not a DDoS at Christmas, it's actual human usage. But if you train it well enough and you give it enough data, it knows how to look at irregular things. It knows how to flag the right things.

[00:39:15] Amir Shevat: You remember you talked about false positives? Monitoring has a lot of false positives, right? So how do you help AI understand what is actually a monitoring event and what is not? I think AI will become a lot better there, and I have no doubt that we will use AI, because, I don't know if it's an LLM problem, but it's definitely a big data problem.
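
(A toy version of that "learn what a good day looks like" idea, for readers: build an hour-of-week baseline from historical traffic and flag hours that deviate too far from it. Real monitoring products are far more sophisticated; this only shows the shape of the problem.)

```python
# Sketch: flag traffic anomalies against an hour-of-week baseline, so a
# busy holiday hour isn't mistaken for a DDoS.
import numpy as np

def fit_baseline(history):
    """history: requests per hour, shape (weeks, 168) -- one row per week."""
    return history.mean(axis=0), history.std(axis=0) + 1e-9

def anomalies(week, mean, std, k=4.0):
    """Return hour-of-week indices more than k standard deviations off baseline."""
    z = np.abs(week - mean) / std
    return np.where(z > k)[0]

# Synthetic example: 8 weeks of noisy baseline traffic, then a spike.
rng = np.random.default_rng(0)
history = rng.normal(1000, 50, size=(8, 168))
mean, std = fit_baseline(history)

this_week = rng.normal(1000, 50, size=168)
this_week[42] = 5000  # an irregular spike
print("anomalous hours:", anomalies(this_week, mean, std))
```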

[00:39:41] Simon Maple: Yeah. This feels like a bit of a less mature space in terms of AI tooling. Is that fair? Or would you say there are tools that you should be looking at, that this is an area you absolutely should be looking at right now? Or does it need to develop more?

[00:39:53] Amir Shevat: I think it needs to develop more.

[00:39:55] Amir Shevat: I think the incumbents are trying to innovate there. And again, we talked about it at the beginning of this session: where does the pain lie the most? I think right now the pain is in the development, the testing, the documentation. And I think in debugging and monitoring we'll see a lot of innovation, but it will follow after we get more trust in AI.

[00:40:16] Amir Shevat: Yeah. So should we talk about the other categories?

[00:40:18] Simon Maple: Yeah, let's go through and describe them. We won't do another five deep dives; let's describe the remaining categories.

[00:40:24] Amir Shevat: And this is not a superset; this is a subset of all the things that we can think about.

[00:40:30] Amir Shevat: And we're totally open to community feedback on this. So, what does the CI/CD of this world look like with AI? How can I create better CI/CD or developer workflow tools with AI: triaging, having a better conversation, doing PRs, all of that good stuff. There's one company that I've seen that has been pretty interesting, Pseudo, that is in that area.

[00:40:53] Amir Shevat: AI code explainers. We talked about this a little bit, but we think there's a big opportunity to innovate there. I want to be able to better read the code; I want to be able to change the code in an easy way. Then performance tooling: there's a company called CodeFlash that we'll talk about a little later, but how can I do better performance?

[00:41:16] Amir Shevat: I spent six months, 10 years ago, trying to make some Java code more performant. It was fun for the first three months, and then it became very painful to go section by section through each piece of the code and make it more performant. A very time-consuming task. So maybe AI can help there.

[00:41:37] Simon Maple: And it's a data heavy space as well.

[00:41:39] Simon Maple: So it's potentially a good space to mine. Yeah, absolutely.

[00:41:44] Amir Shevat: And there's AI for security tooling, AI for UX. As a back-end engineer, I understand nothing about user experience, so maybe there's AI that could help me build nice front ends. Then code reviews: very useful. Wouldn't it be cool?

[00:41:59] Amir Shevat: We actually did that at Slack. We had one of our best engineers who was very good at doing code reviews. We looked at his code reviews and created an AI bot that copied him, that did the same checks that he does every time.

[00:42:15] Simon Maple: And with a lot of the tools that you've mentioned before as well, the code explainer, the code testing, the code documentation, code review should actually be a ton easier when you've effectively got all of this largely done for you, but also very well explained as well.

[00:42:33] Amir Shevat: A hundred percent. So I'd love to see a code reviewer. And then there's data management, all the things that you do with your databases or your local storage or S3, any type of data would be interesting as a category. And the last thing that we wanted to talk about is the autonomous agents.

[00:42:53] Amir Shevat: That's Devin, with their video that blew up the internet for a while. And that's maybe the vision for our world in terms of creating agents that could create code. Maybe it's not; maybe it's just a dream that people have right now. But we think the direction this is going is moving from a copilot to an autopilot.

[00:43:15] Amir Shevat: And whether Devin is real or not, and how useful it is, the jury's still out. But I think we will have a world where at least some of the tasks are going to be automated by this junior AI developer friend that you have. And that's the autonomous agents category: how do you have a team of both humans

[00:43:39] Amir Shevat: and AI engineers working together to complete a task.

[00:43:44] Simon Maple: Yeah, I'll also give a shout-out here to a company, CrewAI, who I've used quite a bit now. They're a Boldstart company, actually, and they effectively build a framework whereby you can create a crew of agents, and each agent effectively has a backstory: you can have developers, you can have code reviewers, you can have people with various backgrounds.

[00:44:08] Simon Maple: And you can have a number of tasks that each of these agents can perform. And they're looking at things in a different way. And I think when you break something down into smaller pieces, and the agents are very laser-focused on doing a small thing in a specific way, they're actually pretty good at it.

[00:44:23] Simon Maple: And if you chain these agents together, you actually get far stronger results than if you just go to a plain GPT, ask for something generic, and it spits one answer out. Doing this agent flow is far better. So I definitely recommend people take a look at those kinds of things, as sketched below.

[00:44:41] Simon Maple: And CrewAI was what I played with and used.
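
(For readers who want to try this: a minimal sketch of a two-agent crew, roughly following CrewAI's documented Python API at the time of writing. The roles, goals, and task text are invented, so check the project's docs for current signatures.)

```python
# Sketch: a two-agent crew where a developer agent writes code and a
# reviewer agent critiques it, chained as sequential tasks.
# Requires an LLM key in the environment (OPENAI_API_KEY by default).
from crewai import Agent, Task, Crew

developer = Agent(
    role="Python Developer",
    goal="Write small, correct, well-documented functions",
    backstory="A pragmatic engineer who favors simple, readable code.",
)
reviewer = Agent(
    role="Code Reviewer",
    goal="Catch bugs, edge cases, and style problems in submitted code",
    backstory="A meticulous senior engineer who reviews every line.",
)

write = Task(
    description="Write a Python function that validates email addresses.",
    expected_output="A single Python function with a docstring.",
    agent=developer,
)
review = Task(
    description="Review the function from the previous task and list issues.",
    expected_output="A bulleted review with concrete suggestions.",
    agent=reviewer,
)

crew = Crew(agents=[developer, reviewer], tasks=[write, review])
print(crew.kickoff())
```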

[00:44:43] Amir Shevat: I love CrewAI. I think they're awesome. There's also CodeFlash. You know how in every team there's this grumpy old guy that does the performance work for everyone? The code is great, but if you do tweaks, it could be much more performant.

[00:44:57] Amir Shevat: I used to have that guy, and then I became that guy, giving shit to all my team. Basically, that's what CodeFlash does. They look at the code, they generate the tests, and then they are the person, the synthetic human, that posts PRs to improve your code. And I think, again, we will see different aspects of these

[00:45:19] Amir Shevat: agents as we move forward.

[00:45:21] Simon Maple: Amazing. That wraps up our categories. I'm sure we've missed some, and I'm sure some will grow into more important categories and others maybe not so important, but this is where we are learning. We'd love to hear from folks what categories you feel we missed.

[00:45:37] Simon Maple: What categories are you most excited about? If there are certain categories that are super high priority, I would love to know which ones they are, and we'll dig into those next. Next up is the code completion category, so we're going to take a look into that, and in fact, we will be talking to the folks at Tabnine.

[00:45:53] Simon Maple: Look forward to that one next. Yeah, this is a work in progress. Tell us at podcast@tessl.io; we'd love to hear from you on Twitter as well, or other platforms. Let us know how we're getting on. So, Amir, wonderful conversation as always. Thank you very much. Had a lot of fun chatting, and appreciate all of your wonderful experience and thoughts.

[00:46:13] Amir Shevat: Thank you for having me. This was fun.

Podcast theme music by Transistor.fm.