DevOps with AI: Identifying the impact zone, with Roxane Fischer

Join us for an insightful conversation with Roxane Fischer, Co-founder and CEO of Anyshift, as she discusses the transformative role of AI in DevOps. Discover how AI technologies are reshaping infrastructure management and learn about the challenges and opportunities they bring.

Episode Description

In this episode of AI Native Dev, hosted by Simon Maple, we sit down with Roxane Fischer, the Co-founder and CEO of Anyshift. Roxane brings her extensive knowledge in AI and DevOps to the table, providing a comprehensive overview of how AI can enhance infrastructure as code flows. With her background in AI research and entrepreneurship, Roxane offers a unique perspective on balancing generative AI with deterministic processes. The discussion covers a variety of topics, including the role of generative AI in accelerating development workflows, the challenges of integrating AI in infrastructure as code, and the potential of Synthesis AI in DevOps. Tune in to explore the future trends in AI and DevOps and learn about Anyshift's mission to improve visibility for SRE teams through a digital twin of infrastructure.

Chapters

  1. [00:00:00] Introduction
  2. [00:01:00] Roxane's Background
  3. [00:03:00] Role of Generative AI in DevOps
  4. [00:06:00] Infrastructure as Code Challenges
  5. [00:09:00] Synthesis AI vs. Generative AI
  6. [00:12:00] Large Language Models and Data Scarcity
  7. [00:15:00] Determinism and AI
  8. [00:18:00] Real-World Applications and Tools
  9. [00:20:00] Future Trends and Anyshift's Role
  10. [00:23:00] Conclusion and Call to Action

The Role of Generative AI in DevOps

Generative AI has become a game-changer in the DevOps landscape, accelerating code generation and enhancing development workflows. As Roxane Fischer mentions, "Gen AI is great and aids all of our lives like 10x, 100x faster." However, the rapid creation of code also brings potential pitfalls, such as increased legacy code and the need for precise code reviews. In the podcast, Roxane emphasizes the importance of providing precise instructions and context to AI systems: "Garbage in, garbage out. What do you put into your prompt?" This highlights the necessity for developers to carefully craft their inputs to maximize AI's effectiveness.

Developers must be aware that while generative AI can significantly speed up the coding process, it doesn't replace the need for critical thinking and context understanding. The AI can produce code snippets or even entire modules quickly, but without the right context, it can introduce errors or inefficiencies. For instance, a developer might use AI to generate a function for handling user authentication, but if the prompt doesn't specify certain security protocols or business logic, the resulting code might be insecure or inappropriate for the application.
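To make the "garbage in, garbage out" point concrete, here is a minimal sketch contrasting a bare request with a context-rich one. All names and constraints here are hypothetical and no particular LLM API is assumed; the point is only that security and business requirements travel inside the prompt rather than being left to the model's defaults.

```python
# A vague prompt leaves every security decision to the model's defaults.
VAGUE_PROMPT = "Write a function that authenticates a user."

def build_prompt(task: str, constraints: list[str], context: str) -> str:
    """Assemble a prompt that states the task, hard constraints, and
    surrounding context explicitly (hypothetical helper for illustration)."""
    lines = [f"Task: {task}", "", "Hard constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines += ["", "Context:", context]
    return "\n".join(lines)

precise_prompt = build_prompt(
    task="Write a function that authenticates a user.",
    constraints=[
        "Hash passwords with bcrypt; never compare plaintext.",
        "Lock the account after 5 failed attempts.",
        "Return an opaque error; do not reveal whether the username exists.",
    ],
    context="Flask app; users stored in PostgreSQL via SQLAlchemy; "
            "sessions managed with server-side tokens.",
)

if __name__ == "__main__":
    print(precise_prompt)
```

Both prompts ask for the "same" function, but only the second one constrains the model toward something the team could actually ship.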

Moreover, the automation of code generation raises questions about code ownership and accountability. With AI generating large portions of code, developers might struggle to maintain a sense of ownership over their work. This can lead to challenges in debugging and maintaining code, as developers may not fully understand the AI-generated portions. It's crucial for teams to establish processes for reviewing and validating AI-generated code to ensure it meets their quality standards and business requirements.

Challenges with AI in Infrastructure as Code

The integration of AI in infrastructure as code (IaC) presents unique challenges. Roxane points out the exponential increase in code, which can lead to faster creation of legacy code and a reduced sense of code ownership among developers. She explains that infrastructure code testing is more complex than application code testing, requiring multiple layers of testing. Roxane warns that without proper context, AI-generated code can lead to issues such as hard-coded values and inconsistent metadata, which may cause future outages.

In the context of IaC, the complexity arises because infrastructure configurations are often less visible than application code, making it harder to spot errors. For example, an AI model might generate a Terraform script to set up a network configuration. If the script includes hard-coded IP addresses or lacks necessary tags for resource management, it could lead to misconfigurations that are difficult to diagnose and correct.
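As a rough illustration of the kind of guardrail that catches these issues, the sketch below scans Terraform files for hard-coded IP/CIDR literals and for resource files that never set tags. It is a deliberately naive, hypothetical check, not a substitute for real policy tools (Checkov and Snyk IaC come up later in the episode).

```python
import re
import sys
from pathlib import Path

# Matches IPv4 addresses or CIDR blocks written as string literals,
# e.g. "10.0.0.0/16" -- a common smell in generated Terraform.
HARDCODED_IP = re.compile(r'"\d{1,3}(?:\.\d{1,3}){3}(?:/\d{1,2})?"')

def check_file(path: Path) -> list[str]:
    findings = []
    text = path.read_text()
    for lineno, line in enumerate(text.splitlines(), start=1):
        if HARDCODED_IP.search(line):
            findings.append(f"{path}:{lineno}: hard-coded IP/CIDR literal: {line.strip()}")
    # Naive heuristic: a file that declares resources but never sets tags
    # is worth a human look.
    if "resource " in text and "tags" not in text:
        findings.append(f"{path}: no 'tags' attribute found in any resource")
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    problems = [f for tf in root.rglob("*.tf") for f in check_file(tf)]
    print("\n".join(problems) or "no findings")
    sys.exit(1 if problems else 0)
```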

To mitigate such risks, organizations should implement robust testing frameworks for IaC, similar to those used for application code. This includes automated testing for syntax errors, policy violations, and security issues. Additionally, incorporating AI-driven tools that specialize in IaC validation can help identify potential problems early in the development cycle, ensuring that the infrastructure remains secure and reliable.

Synthesis AI vs. Generative AI

Roxane introduces the concept of Synthesis AI, which focuses on analyzing logs and metadata to provide insights. Unlike generative AI, which creates new content in an open-ended way, Synthesis AI is more mature and more tightly framed: it surfaces information that already exists rather than inventing something new. Roxane explains, "Synthesis AI is something that we believe is more mature in terms of taking a lot of information and finding the patterns." This distinction underscores the complementary roles of Synthesis and generative AI in DevOps workflows.

Synthesis AI excels in environments where massive amounts of data need to be analyzed to discern patterns or gain insights. For instance, in a DevOps setting, Synthesis AI could be used to parse logs from various systems to identify trends, anomalies, or potential issues before they escalate. This capability is particularly valuable for root cause analysis, enabling teams to quickly pinpoint the source of problems in complex systems.
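A toy version of that pattern-finding step might look like the sketch below: collapse each log line to a template by masking the variable parts, then rank error templates by frequency. Real Synthesis-style tooling layers embeddings and LLMs on top, but the shape of the problem, many lines reduced to a few patterns, is the same. The log data here is invented.

```python
import re
from collections import Counter

def template(line: str) -> str:
    """Collapse variable fragments (hex ids, numbers) so lines with the
    same shape group under one template."""
    line = re.sub(r"\b0x[0-9a-f]+\b", "<HEX>", line, flags=re.I)
    line = re.sub(r"\b\d+\b", "<NUM>", line)
    return line

def summarize(lines: list[str], top: int = 5) -> list[tuple[str, int]]:
    """Rank the most frequent error templates -- the 'needle in a haystack'
    step becomes ranking templates instead of reading raw lines."""
    counts = Counter(template(l) for l in lines if "ERROR" in l)
    return counts.most_common(top)

logs = [
    "ERROR timeout connecting to 10.0.3.17 after 3000 ms",
    "ERROR timeout connecting to 10.0.3.18 after 3000 ms",
    "INFO request 4411 served in 12 ms",
    "ERROR unauthorized token 0xdeadbeef",
]
for tpl, n in summarize(logs):
    print(n, tpl)  # the two timeout lines collapse into one template
```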

By combining Synthesis AI with generative AI, DevOps teams can create a more holistic approach to infrastructure management. Synthesis AI can identify areas of concern or inefficiency, while generative AI can propose solutions or optimizations. This dual approach allows organizations to not only detect issues but also implement improvements proactively, enhancing overall system performance and reliability.

Large Language Models and Data Requirements

Large Language Models (LLMs) are trained on extensive datasets, but this poses challenges for IaC. Roxane highlights the reluctance to share infrastructure code openly due to its sensitivity, resulting in a data scarcity that affects AI performance. She notes, "One of the issues is that nobody really wants to put the infra in the clear on GitHub. It's too sensitive." This scarcity limits the effectiveness of AI models in generating accurate infrastructure code.

The sensitivity of infrastructure code stems from the fact that it often contains details about network configurations, security settings, and other critical information. This makes organizations hesitant to share such code publicly, limiting the data available for training AI models. As a result, AI models may not be as effective in generating or optimizing IaC compared to other types of code, such as application logic.

To address this challenge, organizations can explore secure data-sharing arrangements that allow them to contribute anonymized or synthetic data to AI training efforts. Additionally, leveraging AI techniques that require less data or can learn from synthetic data can help bridge the gap, enabling more effective AI-driven IaC solutions.
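As a hedged illustration of the synthetic-data idea, the toy generator below emits parameterized Terraform-like snippets that mimic the shape of real configurations without leaking any real network details. A production synthetic-data pipeline would be far more involved; this only shows why synthetic examples sidestep the sensitivity problem.

```python
import random

def synthetic_vpc(i: int) -> str:
    """Emit one synthetic Terraform-like VPC block with randomized,
    non-sensitive values (illustrative only)."""
    octet = random.randint(0, 254)
    return (
        f'resource "aws_vpc" "vpc_{i}" {{\n'
        f'  cidr_block = "10.{octet}.0.0/16"\n'
        f'  tags = {{ Name = "synthetic-{i}", Env = "training" }}\n'
        f"}}\n"
    )

# A tiny corpus of training-shaped snippets containing no real infrastructure.
corpus = [synthetic_vpc(i) for i in range(3)]
print("\n".join(corpus))
```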

Determinism in AI and DevOps

The concept of determinism in AI outputs is crucial for infrastructure management. Roxane explains that deterministic graphs represent cloud resources and their connections, providing a stable context for AI models. She states, "Your infrastructure is a graph...you need to have this context, this deterministic context, about your own infra." The balance between deterministic data and probabilistic AI models is essential for generating reliable content.

Deterministic approaches in AI help ensure consistent and predictable outcomes, which is vital for managing complex infrastructure systems. By representing infrastructure as a graph, organizations can map out dependencies and connections between resources, creating a clear picture of their environment. This deterministic model serves as a foundation for AI-driven analysis and optimization, allowing teams to understand the impact of changes and make informed decisions.
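A minimal sketch of that graph idea, assuming the `networkx` library and an invented handful of resources: once dependencies are edges, the "impact zone" of a change is just graph reachability.

```python
import networkx as nx  # assumes networkx is installed: pip install networkx

# Toy model: nodes are cloud resources, edges point from a resource to the
# resources that depend on it.
g = nx.DiGraph()
g.add_edges_from([
    ("vpc-main", "subnet-a"),
    ("vpc-main", "subnet-b"),
    ("subnet-a", "ec2-web"),
    ("subnet-b", "rds-orders"),
    ("ec2-web", "dns-www.example.com"),
])

def impact_zone(resource: str) -> set[str]:
    """Everything downstream that a change to `resource` could affect."""
    return nx.descendants(g, resource)

print(sorted(impact_zone("subnet-a")))  # ['dns-www.example.com', 'ec2-web']
```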

Incorporating determinism into AI models also helps mitigate the risks associated with probabilistic outputs. By grounding AI-generated insights in a deterministic framework, organizations can ensure that the recommendations and actions proposed by AI are relevant and accurate, reducing the likelihood of errors or unexpected outcomes.

Real-World Applications and Tools

AI's application in root cause analysis and log triage is transforming DevOps practices. Roxane mentions tools like cleric.io and Datadog, which leverage AI for efficient log analysis. Integrating AI into existing DevOps pipelines enhances efficiency and accuracy, as AI models can quickly sift through extensive datasets to identify issues and correlations.

These tools use AI to automate the process of analyzing logs and identifying patterns that might indicate issues or inefficiencies. For instance, Datadog uses machine learning to detect anomalies in log data, alerting teams to potential problems before they impact system performance. cleric.io, which Roxane mentions having heard of but not yet used herself, similarly aims to apply AI to correlate log data from different sources, helping teams quickly identify the root cause of issues.

By incorporating these tools into their workflows, DevOps teams can reduce the time and effort required for manual log analysis, allowing them to focus on higher-value tasks such as optimizing performance and enhancing user experience.

Future Trends and the Role of Anyshift

Looking ahead, Roxane envisions a future where deterministic and AI-driven processes are seamlessly integrated. Anyshift's mission is to enhance visibility for SRE teams through a digital twin of infrastructure. Roxane explains that Anyshift aims to "give back some visibility to SRE teams to answer actually key questions." AI plays an educational role in explaining complex infrastructure changes, bridging the gap between development and operations.

The concept of a digital twin involves creating a virtual representation of an organization's infrastructure, complete with all its dependencies and configurations. This digital twin serves as a dynamic model that can be analyzed and optimized using AI, allowing teams to simulate changes and assess their impact before implementing them in the real world.
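Continuing in the same hedged spirit (this is a sketch of the general idea, not Anyshift's actual implementation), a digital-twin query can be imagined as a pull-request-time check: given the resource a change touches, report everything downstream before the change ships.

```python
import networkx as nx  # assumes networkx is installed

# Hypothetical twin: a dependency graph of invented resources, where an edge
# points from a resource to the resources that depend on it.
twin = nx.DiGraph([
    ("module.network", "aws_vpc.main"),
    ("aws_vpc.main", "aws_subnet.a"),
    ("aws_subnet.a", "aws_instance.web"),
    ("aws_instance.web", "dns.www"),
])

def preview_change(changed: str) -> str:
    """What a PR comment might summarize: the downstream impact zone."""
    affected = sorted(nx.descendants(twin, changed))
    if not affected:
        return f"Changing {changed} affects nothing downstream."
    return f"Changing {changed} may affect: {', '.join(affected)}"

print(preview_change("aws_vpc.main"))
```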

By leveraging the digital twin approach, Anyshift aims to provide SRE teams with the insights they need to manage infrastructure more effectively, reducing the risk of outages and improving overall system performance. The integration of AI into this model enhances its capabilities, allowing teams to make data-driven decisions and implement best practices with greater confidence.

Full Script

**Roxane Fischer:** [00:00:00] The issue is that with generative AI, the world is way more open. You can create anything, but because you can create anything, you also need to be super precise in your instructions. Garbage in, garbage out. What do you put into your prompt? What is the level of detail that you give as context?

It's all about context. And so you're opening yourself to way more mistakes, or actually not the right guidelines.

**Simon Maple:** You're listening to the AI Native Dev, brought to you by Tessl.

Hello and welcome to another episode of the AI Native Dev. On today's show we have Roxane Fischer, who's going to talk to us a little bit about how AI can assist us in our DevOps and infrastructure as code flows. Roxane, welcome. How are [00:01:00] you?

**Roxane Fischer:** Hi, nice to meet you. Good, and you?

**Simon Maple:** Yeah, doing well. Thank you.

You're the co-founder and CEO of Anyshift. And in fact, prior to that, you've done a huge amount of research. And I think we partly crossed paths a little bit before as well, with an acquisition by Snyk. Tell us a little bit about your past.

**Roxane Fischer:** I'm one of the co-founders of Anyshift.

We are a source graph connected to your cloud, to enhance visibility for DevOps teams in their day-to-day workflow. Personally, I'm a former AI researcher. I was doing research in fintech companies and also at Samsung, in vision and financial data. And when I started the entrepreneurial journey, I met my co-founder Stephane, whose previous company CloudSkiff, which created driftctl, was acquired by Snyk.

So that's how the worlds crossed. Yeah. And here we are: we started Anyshift a couple of months ago, trying to bridge this gap between DevOps and AI.

**Simon Maple:** Yeah, amazing. And it's interesting when we think about AI [00:02:00] and how AI dev tools can help our workflows and help our development.

We automatically think very quickly about how we can write code faster, how we can review code and all those kinds of things. But there's a ton more, perhaps in the DevOps space and others, that is going to be very helpful to us over time. First of all, how much do you think people are focusing on code completion and coding assistants almost too much, when AI can actually help us in various other places, in the CI/CD pipeline, in DevOps and things like that? Are we over-rotating on coding assistants and under-utilizing AI in various other places in the pipeline?

**Roxane Fischer:** It's a good question. I think first of all, Gen AI is great and aids all of our lives like 10x, 100x faster. Yeah. And we should use it, because we're entering a new generation of code generation and what capabilities we can create. But still, there are some issues with that.

As you said, you have so many more lines of code [00:03:00] which are created now, in an exponential way, which creates issues around legacy: you can create legacy code way faster. And we have a limited number of reviewers to actually review the code which was generated, which is actually even more crucial and difficult with infrastructure as code, where it's complicated to test your infra, the layers of tests for your infra.

It's not as mature; it's more complicated to do than for application code. And the other point is that with so many lines of code which are now created, you also have a sense of ownership which is less strong across developer teams. Sometimes you don't even remember the code that you have created.

So those are new challenges. And also, to complete your question, let me add something about that. It's particular for infrastructure as code. It can help SRE and DevOps teams engineering the infra because, compared to other frameworks such as Python or Java, infrastructure as [00:04:00] code such as Terraform is relatively new.

Terraform is 10 years old. When you go on GitHub, you only have a couple of thousand public modules of Terraform. The amount of code that was used to actually train LLMs, those large models that do this sort of completion, those generative models, those assets are quite small compared to other frameworks. One of the issues is that nobody really wants to put the infra in the clear on GitHub. It's too sensitive. And so the generation of infrastructure as code won't be as good as for other frameworks.

**Simon Maple:** So with the way LLMs are trained, needing huge amounts of data, for example, is generative AI the right way we should be thinking about AI in the infrastructure as code, DevOps space?

**Roxane Fischer:** The way your models are trained is that they're actually going to train on a large amount of data to understand patterns. [00:05:00] And you will understand a large amount of patterns. The more data you have, the better you are at understanding similarities, correlations between different frameworks.

One example I like to take is that you can create some code, but potentially, because your amount of data won't be big enough, you won't get the best type of code generation that you could expect. So one of the few sandbox environments that we did at Anyshift would be: okay, let's generate a VPC peering between two different VPCs, and we are going to ask GPT to create it. And it's going to do it. It's going to work, because it's still really good, and it still helps a lot in your day to day. But when you do that, if you don't give enough context, you will have two hard-coded values for the VPCs.

And also you will miss some additional information, metadata. The auto-accept value is missing, and the tags also, which doesn't help to create quality content that follows [00:06:00] the policies of your company: how you should actually ensure an infra which is consistent, tags that are consistent, to avoid mistakes in the future.

For instance, with those hard-coded values you're opening yourself to future outages: hard-coded values, dependencies. And so some context is still missing in terms of the quality of the content that you're generating.

**Simon Maple:** And I guess that's where you as a user need to provide the right levels of information to the LLM, not just to make sure it provides something accurate, but something actually relevant to what you're trying to build.

And I think that's probably where I would guess the interaction with the LLM as a chat would likely work as well. Tell us a little bit about something called Synthesis AI. That's something I know you're very interested in using alongside generative AI.

**Roxane Fischer:** Yes. So, super interesting. Just to make the difference and the split between the two: pretty much the same technology is used behind them, but [00:07:00] they're two words to point out two ways of using those LLMs. You can describe Synthesis AI as how you actually synthesize information.

So as inputs to your LLM, you're going to take logs and also metadata, to actually understand something and give some insights. Say I want to do a faster root cause analysis, an automatic root cause analysis of an issue. This synthesis will help me go through thousands and hundreds of logs. Whereas for generative AI, you also have an input, but rather than finding some insight, you want to create something.

You want to create some code. You are going to ask in your prompt: create a specific configuration for me. The issue is that with generative AI, the world is way more open. You can create anything, but because you can create anything, you also need to be super precise in your instructions. Garbage in, garbage out.

What do you put into your prompt? What is the level of detail that you give as context? [00:08:00] It's all about context. And so you're opening yourself to way more mistakes, or actually not the right guidelines. Synthesis AI, on the contrary, is something that we believe is more mature, in the sense that you're taking a lot of information, a huge amount of data.

And you want to find the patterns: where can I find this needle in the haystack? And this is mature because it's the same technology, LLMs, that is going to find or create some insights, those patterns, but synthesis is way more framed. You are going to find information which is already somewhere.

Whereas generative still lacks some context, the context we've been talking about.

**Simon Maple:** Yeah, just to reflect that back: you use the Synthesis AI to effectively take in huge amounts of data and boil that down almost to insights, and then pass those insights over to the generative AI to effectively create something based on those insights. And that presumably avoids the problem of [00:09:00] passing huge amounts of context to the LLM, where it may or may not use that context wisely, given we know LLMs aren't great with large amounts of context. How does the Synthesis AI deal with large amounts of context?

**Roxane Fischer:** The way you're going to do it is that, as you said, you have different flows. You can use Synthesis AI, find insights, and then take those as input to generate some code. Or it can go the other way; the window of context can be the output of Synthesis AI or something else.

The way it works (let's say Synthesis is just a word that I use to make the distinction, one that was actually referred to in different articles) is that you can consider two different pillars for using those LLMs. Synthesis would be used exactly the same way as generative, using for instance GPT, but instead of using a prompt with a guideline to create some code, your prompt would say: I'm going to take as input all my logs; now search [00:10:00] inside them, find me the different correlations, connections, potential issues about one specific configuration among all of them. One thing which is interesting about that is how you can link it. It's actually the same principle as RAG, the retrieval augmented generation principle, which is used in AI to leverage content and then find some information in this content.

In the same way that I'm speaking about Synthesis AI ingesting some logs and finding some patterns: one of the techniques that a lot of people use nowadays to answer complex questions over a lot of input data is, before actually asking those questions, to do an entire processing of this amount of data.

It can be logs, it can be something else, it can be documents, anything that you are going to encode. You are [00:11:00] going to encompass information into latent spaces, which are used in AI to encode information and then query it. And so this entire process of RAG is done after this processing, to query a huge amount of information faster.

**Simon Maple:** And with that large amount of information: we've talked a few times in the past about determinism, and deterministic versus non-deterministic output from LLMs. In terms of being able to have that deterministic mapping, I guess, to the RAG, talk us through what's available there for us.

**Roxane Fischer:** Yeah, determinism is an interesting topic, because Gen AI is probabilistic. When you call an LLM, it's going to give you an answer, but with a probability attached to the answer. So you're never 100 percent sure that it's going to be the same answer, by nature of those models. A deterministic algorithm, on the contrary, will [00:12:00] always give you the same answer if you give the same input. You need both to actually generate quality content, depending on the use case, of course. So imagine that you have an infrastructure with thousands of cloud resources, so many configurations.

Having a deterministic context that you actually know about is as important as having this LLM, Large Language Model, which will give you some insight about it or generate some code about it. What's interesting about your infrastructure, depending on what you do, is that your cloud resources, Kubernetes, etc., at the end form a graph.

It's the way that resources are connected. You have a VPC, and within it a subnet, and you have IAMs and everything; all those resources are interconnected, and you can represent them as a graph of interconnected nodes. If I take back the example about this VPC peering: to understand that everything is [00:13:00] related, you need to represent it as a graph.

And if you want to generate some content or get insight about it, so imagine I want to generate best-practice code for this VPC peering with no hard-coded values, I need to have the context of where the other resources that I'm going to refer to, those dependencies, reside in this graph, right?

And so I need to have this context, this deterministic context, about my own infra or my own content, my own documents, anything, to then generate quality content or get the right insights about it.

**Simon Maple:** Yep. Yep. Oh, very interesting. Okay. So it's almost like the determinism is built in there.

I guess you're adding the outcomes you would expect, almost, and having AI pick a path effectively to provide that determinism. Is that fair?

**Roxane Fischer:** Yes. Because otherwise it's all about context. If you don't give this context, you can do anything.

It's so powerful [00:14:00] that it can generate anything. It can create some Python code, infrastructure as code, documents, all within the same model. You can have more specialized, smaller models, but they're still super powerful. If you want to have a precise answer to complex queries, you need to give precise context.

And this context, either you put it in your prompt: I want this and that, but then you need to rephrase everything, so it would be the equivalent of coding something yourself; or you need to plug in as inputs this deterministic content, your own knowledge, your enterprise knowledge, to actually be able to create this very precise, quality content.

**Simon Maple:** Yeah, really interesting. And I think we're flowing from the today of how people can use AI to almost looking forward at how we expect people in the DevOps space to use it. Let's go through a couple of examples, a couple of flows that folks who are listening will very much resonate with. Why don't we start with the idea [00:15:00] that perhaps there's an issue somewhere in our deployment or our infrastructure. We would have a ton of data through our logs, and we'd be doing a lot of root cause analysis, trying to identify where the main issue came from. What tools would you say developers and ops folks should be looking at or considering to analyze, or rather to use AI to analyze, that kind of data that we know we can get from production systems?

**Roxane Fischer:** It really depends on which type of teams you're speaking of, because at the end, the world is open. And so it depends on what you integrate as inputs. The more info you have, the better. You have many startups working on that, and we're also building it.

And I guess you are too, in terms of how you take so much input data and actually analyze it. And the more, the better: logs, metadata, cloud APIs, Kubernetes, everything you have, even customer support tickets. I have an application which is laggy [00:16:00] at my front end; how do I correlate the signals?

So it's a text, a complaint, with some logs about latency. How do I take all this heterogeneous data, use it as input to my models, and be able to find back these insights, this correlation? And I guess, for me, it's one of the biggest strengths of these models: they're amazing at integrating heterogeneous data.

And this is why this Synthesis AI principle, for me, is really key, and there are different tools which are really good at doing this root cause analysis, especially good at analyzing logs. And so it depends on all the different information you can get. But it's, in the end, a lot of work in terms of infrastructure: what do you integrate in your pipeline?

So, integration; but once you have integrated everything, how are you able to dump everything, encode it, and actually be able to query it?

**Simon Maple:** Any tools you'd call out there in terms of the root cause analysis or the [00:17:00] triage space, looking through log lines?

**Roxane Fischer:** I've heard recently about cleric.io, never used it, but I think it's good. And you also have all the big players, such as Datadog, that are now leveraging AI to read logs.

Yeah.

And you also have many startups doing that on Kubernetes.

**Simon Maple:** Yeah. Yeah. Cool. If we take an almost organizational question: over the years with DevOps, we've been trying to get developers closer to their production environments, trying to get them more involved in that overall deployment, understanding the infrastructure and so forth.

Do you feel like there's an issue in and around ownership, where the more we expect developers to use AI in creating their infrastructure and their deployment artifacts, the more they actually lose that ownership, or lose that connection with the final production environment? Because in some sense they're using AI to [00:18:00] push production away, to leave the ops to the AI. Is that a problem, or not necessarily?

**Roxane Fischer:** Yeah, I find it super interesting, because I think it's a problem, but it also comes with the benefits of AI. At Anyshift, more than 50 percent of our code is generated by Claude. And it's about what you make out of it, what you use as a prompt, and you're able to actually code faster.

But the thing is, because of that, because you're so much more efficient, how can you remember everything you have done? I don't have a clear answer to that; I'm myself trying to figure it out. I think we need more safety nets. Take for instance another example about generating infrastructure as code content: imagine your prompt is really bad, or your model is not super good.

And you are going to create some open ports to everything, which is super bad practice. Because it's probabilistic and you're not 100 percent sure about the code you're going to [00:19:00] create, you still need those safety nets, Checkov, Snyk IaC, to be sure that you don't rely 100 percent on something which is still non-deterministic. And we can speak about the potential security flaws that it can also create.

So, how do you always have different ways of safeguarding yourself from attacks or from bad configurations? And in terms of ownership, how do you enhance the ownership of the different people generating content? It's not obvious, because you're getting more efficient.

You still need to actually own your code. And you need to remember what you have pushed and why you have done it. If you don't even remember which lines of code you have generated, that's an issue.

**Simon Maple:** Yeah, no, absolutely. Let's now take a little look going forward. We obviously mentioned using RAG a lot more.

I'm really interested there in terms of the determinism that you can pull out using that. What do you feel are some of the future trends in this [00:20:00] space, particularly around your work with Anyshift?

**Roxane Fischer:** We sincerely believe that the future of AI slash DevOps resides in two pillars.

AI, but also the deterministic parts, the more infrastructure parts. And you need both to have this context plus the power of AI. And because your infrastructure is a graph, there are your cloud resources, everything that is interconnected, their definitions, and the different providers, DNS, et cetera, that are linked to them.

We believe that, to create the best of it, to generate content for DevOps teams and also to remediate some issues, we need to create and work on both of those pillars: this deterministic part, this graph, and then the query and the remediation with AI. In terms of the graph, I also believe this is where it's important to have some knowledge about your space and how you're going to create it.

The schema of your graph is important: [00:21:00] the different dependencies between your nodes and your edges that represent cloud resources. It can be done in many ways. You can connect a VPC with a subnet with an edge, but you can also connect them because they have similar tags; for instance, a data-team tag, I don't know.

So the schema of this graph, how you connect components and what you put inside it, is super important, because it's also how you're going to query it, how you're going to retrieve information. To actually do this remediation and code generation for SRE teams, you need to first create this context, this graph, this schema, and then do the query.

And how do you query it? I want to get all the information on all the dependencies of this specific EC2 in the last 24 hours, and the team who owns it that has created a change. To be able to query and to have an answer to those specific queries, global or local, you need to have a schema that is actually adapted to those [00:22:00] queries.

How to actually go fetch information in the fastest and most accurate way.

**Simon Maple:** And so when you pair those two: is the schema something you use to almost validate, or is it something that you then pass directly into the LLM, to say, look, here's some information about the data or about the configurations?

You use this to be able to query, use this to be able to understand?

**Roxane Fischer:** That's very interesting, because we define the schema ourselves; this is where we believe the value of having some knowledge about how infrastructure works lies. So how do you connect things, and how are you going to query them afterward?

One of the fun parts is that, to actually enhance development and make it faster, we also use AI internally to populate this graph. Once the schema is in place, how do you actually leverage AI to parse some data and populate your graph?

**Simon Maple:** Awesome. So tell us a little bit about Anyshift, your mission, and the space it's in.

**Roxane Fischer:** [00:23:00] So as you understood, we want to give back some visibility to SRE teams to answer actually key questions, simple questions that are still hard to answer nowadays. Such as: where is this resource defined in my code base, who owns it, what are its dependencies, across cloud resources, Kubernetes, DNS, data providers, et cetera.

And the way we do it is that we are going to create this digital twin of your infrastructure. And from that, with something similar to a resource catalog, you will be able to query your graph to answer those questions. We begin with something which is a very shift-left approach, something very similar to Snyk or other tools: how to actually prevent issues before they happen.

Or at least give more information and visibility on your change before you deploy it. And so we are going to integrate within your pull request to, once you make a [00:24:00] change, query this graph, query this map of dependencies, and give you more context about what you have done. Okay, you have changed this Terraform module; it's going to affect other resources, the repository. Should you really do it?

And so the AI part that we're going to leverage is actually the educational content, so something similar to the Synthesis AI we were speaking about. We have all this information: you have your graph, you have all the dependencies, it's complex, and your teams are less aware of the entire context of your infrastructure.

And so how do you use AI to actually explain what happened, and, based on this context that we provide, also do the explainability?

**Simon Maple:** How do people have a play with Anyshift? Is it available today? Can people get involved?

**Roxane Fischer:** The way we work is that you can actually already test Anyshift on the platform if you subscribe. And what we are [00:25:00] building first would be this deterministic part of your graph: giving all of the dependencies, the impact zone of your change.

**Simon Maple:** I think you said 50 percent of the code in Anyshift is written by Claude. Is that right?

Yeah.

Okay. Tell us a little bit about that then, 'cause you know, it's interesting. I think Google or someone recently said 25 percent of the code that they write is written by AI. I'd love to hear a little bit more about the 50 percent of code that's written by AI. How are you using AI internally at Anyshift today? Is it mostly on the code creation, or a bit of everything through the pipeline?

**Roxane Fischer:** Actually, both. First of all, we are in Go, and not all of our developers are necessarily experts in Go, and LLMs are amazing translators into any kind of language. When you ask for something, you need, in the end, to know exactly what you want. And your LLM will create 80 percent of what you need, [00:26:00] and you will then need to be able to do the review, which actually accelerates development a lot.

But this is what all developers do nowadays. The other part, which is done in a more conscious way, is to automatically use LLMs to parse some code and some content to then generate a graph.

Which is something that is a little bit more specific. Yeah.

**Simon Maple:** Yeah. Interesting. How have you found using it? Do you have best practices, or what advice would you give to people who are trying to add AI capabilities into their workflows today?

**Roxane Fischer:** I think my conclusion would be that it's like when you do a recurrence: you have test case number one, test case number two, and then you can do the N plus one, the for loop. You need to understand very precisely what you want to do.

Because then you will understand the edge cases, and then you will be able to ask your model exactly what you need in the best way. But it's easy to go straight [00:27:00] away to your model, ask something, and think it's going to do magic. And sometimes it does, but if you want to do something at scale, be super aware first of the edge cases, of number one, number two.

Know what you want, what's going to be the output, and then you understand the patterns. Let's go.

**Simon Maple:** Yeah, amazing. So still in a very attended fashion,

**Roxane Fischer:** I guess, but everything is improving. So I guess potentially this comment will not be relevant in a few months. Yeah.

**Simon Maple:** Amazing. Roxane, thank you very much for the session.

And I'm looking forward to the demo of Anyshift. So for those who are interested in that, please do check out the YouTube channel; it's only on YouTube, but you'll see the video of Anyshift in action as well. So thank you very much, Roxane, and thanks everyone for tuning in, and we'll see you again next time. Thank you.

[00:28:00] Thanks for tuning in. Join us next time on the AI Native Dev brought to you by Tessl.

Podcast theme music by Transistor.fm.