Adapting your team culture for AI: DevOps Lessons Learned with Hannah Foxwell

Join Simon Maple as he delves into the transformative intersection of AI and DevOps with Hannah Foxwell, a seasoned consultant and community leader. This episode explores how AI is reshaping development practices and the critical role of culture and education in this evolution.

Episode Description

In this episode of the AI Native Dev podcast, Simon Maple is joined by Hannah Foxwell, an independent consultant with significant experience in platform engineering, security, and AI. Hannah shares her journey from release manager to leadership roles at Snyk and VMware, revealing her deep insights into the parallels between DevOps and AI adoption. As a community leader and event organizer, Hannah offers valuable perspectives on the cultural shifts necessary for embracing AI in development teams. Discover how AI is impacting both seasoned and novice developers, the importance of overcoming skepticism, and the future role of AI in software development.

Chapters

1. [00:00:00] Introduction to Hannah Foxwell and Her Background
2. [00:02:00] The Evolution of DevOps: Hannah's Origin Story
3. [00:07:00] Overcoming Skepticism: Embracing AI in Organizations
4. [00:12:00] AI's Impact on Development Teams
5. [00:16:00] The Future of Development and AI
6. [00:21:00] Building AI-Ready Organizations
7. [00:25:00] Guardrails for AI Utilization
8. [00:30:00] Practical Steps for AI Adoption in Teams
9. [00:34:00] The Role of Platform Engineering in AI Integration
10. [00:38:00] Summary and Key Takeaways

The Evolution of DevOps

Hannah's journey into the world of DevOps began during her early career as a release manager. She vividly recalls sitting "on top of that wall of confusion," bridging the gap between development and operations teams. This experience illuminated the need for a more cohesive approach, which DevOps promised to deliver. As Hannah noted, "I saw it as a way to solve a very real and painful problem that I had experienced personally." DevOps emerged as an engineering-led movement that fundamentally altered the software development landscape, breaking down silos and fostering collaboration. Organizations adopted DevOps to address the challenges posed by traditional development processes, leading to improved efficiency and innovation.

DevOps was not just a technical revolution; it was a cultural shift that required teams to rethink their approaches to collaboration and communication. The movement emphasized the importance of cross-functional teams working together towards common goals, breaking down the traditional barriers between developers and operations staff. This collaborative spirit fostered a more agile and responsive development environment, where teams could quickly adapt to changing requirements and deliver value to customers faster than ever before.

The impact of DevOps was profound, as it enabled organizations to achieve greater agility and flexibility in their development processes. By automating repetitive tasks and streamlining workflows, teams could focus on delivering high-quality software at a faster pace. This shift in mindset and practices allowed organizations to stay competitive in an increasingly fast-paced digital landscape, where the ability to quickly respond to customer needs and market demands was crucial for success.

DevOps and Cloud Transformation

The advent of cloud technology further accelerated development cycles and necessitated new operational practices. Hannah emphasized the importance of automation, self-healing infrastructure, and resilience engineering in modern DevOps. With infrastructure available on-demand, the need for months-long procurement processes vanished, enabling teams to "increase the velocity again." This transformation required a complete overhaul of operational strategies, shifting the focus towards automated systems and infrastructure that could adapt and recover autonomously. As Hannah put it, "It became much more about automation. It became much more about self-healing and resilience engineering."

Cloud technology revolutionized the way organizations approached their IT infrastructure, offering unprecedented levels of scalability and flexibility. By leveraging cloud services, companies could rapidly provision resources, scale their applications, and respond to changing demands without the constraints of traditional hardware procurement. This shift not only increased development speed but also empowered teams to experiment and innovate with new technologies and architectures.

Automation became a cornerstone of DevOps practices, as it enabled teams to streamline their processes and reduce the risk of human error. By automating tasks such as testing, deployment, and infrastructure management, teams could ensure consistency and reliability in their software delivery pipelines. Additionally, resilience engineering practices became crucial for building systems that could withstand failures and recover gracefully, ensuring high availability and performance for end-users.
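As a concrete, if simplified, illustration of what "self-healing" can mean in practice, here is a minimal Python sketch of a retry-with-exponential-backoff wrapper, one of the most basic resilience patterns; the function names and the `fetch_metrics` call in the usage comment are hypothetical, not from the episode.

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.5):
    """Retry a flaky operation with exponential backoff and jitter.

    Failures are retried transparently, so callers see a single
    'self-healing' call rather than handling each outage themselves.
    Only ConnectionError is retried here; a real system would pick
    its retryable exceptions deliberately.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # out of attempts: surface the failure
            # Exponential backoff (0.5s, 1s, 2s, ...) plus random jitter
            # so many retrying clients don't all stampede at once.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
            time.sleep(delay)

# Example: wrap an unreliable network call (fetch_metrics is hypothetical).
# result = call_with_retries(lambda: fetch_metrics("https://example.internal"))
```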

AI's Impact on Development Teams

AI is revolutionizing the way both experienced and novice developers approach their work. Hannah observed that AI tools allow seasoned engineers to accomplish "months worth of work in a day," leveraging their expertise to maximize AI's potential. Conversely, newcomers to the field, whom Hannah refers to as "AI Native Devs," are experiencing a different learning journey, with AI seamlessly integrated into their workflows from the outset. This integration has democratized access to powerful development tools, enhancing productivity across the board. The challenge lies in ensuring that all team members, regardless of experience, can harness AI's capabilities effectively.

For experienced developers, AI acts as a powerful assistant, accelerating their workflows and enabling them to tackle more complex problems. By automating routine tasks and providing intelligent suggestions, AI frees up valuable time for developers to focus on creative and strategic aspects of their work. This shift allows seasoned engineers to leverage their expertise in new ways, exploring innovative solutions and pushing the boundaries of what is possible.

For novice developers, AI provides a unique opportunity to learn and grow in a supportive environment. By offering real-time feedback and guidance, AI tools help newcomers build confidence and develop their skills more quickly. This democratization of development tools ensures that aspiring developers have access to the resources they need to succeed, regardless of their background or experience level. However, it is essential to provide adequate training and support to ensure that all team members can effectively utilize AI tools and maximize their potential.

Overcoming Skepticism and Embracing AI

The skepticism surrounding AI is reminiscent of the doubts faced by DevOps in its early days. Hannah noted that while DevOps was initially met with resistance from business leaders, AI is being embraced at all levels of the organization. "Your CIO, your CTO, your CEO is going, 'I want generative AI in everyone's pockets.'" To overcome skepticism, organizations must encourage developers to experiment with AI tools and integrate them into their workflows. As Hannah suggested, creating a culture of experimentation, where failure is a stepping stone to success, is key to fostering AI adoption.

To effectively integrate AI into development processes, organizations must address the concerns and reservations of their teams. This involves fostering a culture of openness and experimentation, where developers are encouraged to explore new tools and techniques without fear of failure. By providing opportunities for hands-on experimentation and learning, organizations can empower their teams to embrace AI and discover its potential applications in their work.

Leadership plays a crucial role in driving AI adoption, as executives set the tone for the organization's approach to innovation. By actively endorsing AI initiatives and providing the necessary resources and support, leaders can create an environment where AI is viewed as a valuable tool rather than a threat. Additionally, by highlighting success stories and showcasing the tangible benefits of AI, organizations can build confidence and trust among their teams, encouraging more widespread adoption.

The Future of Development and AI

Looking ahead, AI has the potential to reshape software development teams by shifting the focus from mere code writing to broader considerations like product management and user experience. Hannah posited that "writing code is the easy bit," and the real challenge lies in ensuring that development efforts align with user needs and business goals. AI could enable teams to rapidly prototype and iterate, emphasizing the importance of a robust build, measure, learn cycle. This shift will require developers to hone skills beyond coding, embracing a holistic approach to software creation.

As AI continues to evolve, the role of developers will expand to encompass a broader range of responsibilities. In addition to writing code, developers will need to consider the entire product lifecycle, from ideation and design to testing and deployment. This holistic approach will require a deep understanding of user needs, business objectives, and market trends, enabling developers to create products that deliver real value to customers.

AI's ability to facilitate rapid prototyping and iteration will be a game-changer for development teams, allowing them to test new ideas and gather feedback more quickly than ever before. By leveraging AI-driven insights and analytics, teams can make data-driven decisions and continuously refine their products to better meet user expectations. This iterative approach will empower organizations to stay agile and responsive in a rapidly changing landscape, ensuring they remain competitive and relevant.
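To make the progressive-delivery side of this concrete, here is a minimal sketch, assuming a hash-based cohort split, of how a rapidly built prototype might be shipped to a small fraction of users first; the function and feature names are illustrative, not a production feature-flag system.

```python
import hashlib

def rollout_bucket(user_id: str, feature: str) -> float:
    """Map a user deterministically to a value in [0, 1) per feature.

    Hashing the user and feature together gives each feature an
    independent, stable cohort split: the same user always lands in
    the same bucket for a given feature.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0x100000000

def is_enabled(user_id: str, feature: str, rollout_fraction: float) -> bool:
    """Enable the feature for a fraction of users, e.g. 0.05 = a 5% cohort."""
    return rollout_bucket(user_id, feature) < rollout_fraction

# Ship the prototype to a 5% cohort, measure, then widen the rollout.
# if is_enabled(current_user_id, "ai-summaries", 0.05): ...
```

Because the bucket is deterministic per user and feature, a team can widen the rollout fraction gradually while comparing each cohort against the rest, which is the measure-and-learn half of the cycle.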

Building AI-Ready Organizations

Education and enablement are crucial in preparing teams for AI adoption. Hannah stressed the importance of cultivating communities of practice and nurturing internal champions who can advocate for AI's benefits. By providing tailored training and support, organizations can empower team members to explore AI's potential without fear of failure. As Hannah noted, "Make it experimental, make it fun, make it social," and teams will be more inclined to engage with AI tools and methodologies.

To successfully integrate AI into their workflows, organizations must invest in the education and enablement of their teams. This involves providing comprehensive training programs that cover both the technical and ethical aspects of AI, ensuring that all team members have a solid understanding of the technology and its implications. By fostering a culture of continuous learning and development, organizations can empower their teams to stay ahead of the curve and embrace new opportunities as they arise.

In addition to formal training programs, organizations can create communities of practice where team members can share their experiences, insights, and best practices. These communities provide a valuable platform for collaboration and knowledge exchange, enabling teams to learn from one another and collectively advance their understanding of AI. By nurturing internal champions who can advocate for AI adoption and drive initiatives within their teams, organizations can build a strong foundation for long-term success.

Guardrails for AI Utilization

The rapid proliferation of AI tools necessitates clear guidelines to prevent AI sprawl within organizations. Hannah highlighted the role of platform engineering teams in offering AI tools and infrastructure, ensuring consistent and secure usage across the board. Establishing guardrails and educating employees about the responsible use of AI is essential to mitigate risks and maintain control over sensitive data. By proactively managing AI integration, organizations can harness its power while safeguarding their assets.

As organizations increasingly adopt AI tools and technologies, it is essential to establish clear guidelines and guardrails to ensure responsible and ethical usage. This involves defining best practices for AI integration, including data privacy and security protocols, and establishing governance structures to oversee AI initiatives. By setting clear expectations and standards, organizations can maintain control over their AI landscape and prevent unauthorized access to sensitive information.

Platform engineering teams play a crucial role in managing AI tools and infrastructure, providing the resources and support teams need to leverage AI effectively. By offering centralized solutions and services, platform teams can ensure consistency and security in AI usage, reducing the risk of sprawl and fragmentation. And by providing training and support, platform teams can equip developers to make informed decisions about AI integration, harnessing its full potential while safeguarding the organization's assets.
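As a rough illustration of what such a guardrail might look like, here is a minimal, hypothetical Python sketch of a platform-owned gateway that redacts obvious PII before a prompt ever reaches a model backend; the patterns and names are illustrative, and a real guardrail layer would be considerably more thorough.

```python
import re

# Simple patterns for obvious PII; a real guardrail layer would use a
# proper classifier, but this shows the shape of the check.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious PII with placeholders before the prompt leaves us."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

def gateway_complete(prompt: str, model_call) -> str:
    """Platform-owned entry point: teams call this, not a vendor SDK directly.

    Centralizing the call site is what lets the platform team swap the
    backend (self-hosted or SaaS), log usage, and enforce guardrails
    consistently for every caller.
    """
    return model_call(redact(prompt))

# Usage (model_call is whatever backend the platform team has approved,
# e.g. a self-hosted LLM or a vetted vendor API; my_backend is hypothetical):
# answer = gateway_complete("Summarise the ticket from jane@example.com", my_backend)
```

Routing every call through one entry point like this is also what gives the platform team the visibility Hannah describes: usage can be logged and backends swapped without teams changing their code.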

Full Script

**Hannah Foxwell:** [00:00:00] And then DevOps came along, and DevOps and cloud as well, because infrastructure was on demand. You didn't have to order it months and months in advance, and that then increased the velocity again. And we had to completely change, like, the rule book around operations and how you did that job.

It became much more about automation. It became much more about self-healing and, like, resilience engineering. And now here we are with AI.

**Simon Maple:** You're listening to the AI Native Dev, brought to you by Tessl.

On today's episode of the AI Native Dev, joining myself, Simon Maple, is Hannah Foxwell. Hannah, welcome.

**Hannah Foxwell:** Thank you for having me.

**Simon Maple:** Oh, it's a pleasure to have you on here. Now, Hannah is an independent consultant in and around platform engineering, security, and AI. And Hannah, you and I used to work [00:01:00] together at Snyk many moons ago.

**Hannah Foxwell:** All the best people went to Snyk, obviously.

**Simon Maple:** All the best people. Yeah, absolutely. You were a product director for container security at Snyk, and prior to that you were a director of platform services at VMware. You've done a bunch of community stuff as well, including being a DevOps Days organizer and an Open UK ambassador. And in your spare time, in those few minutes that you have on top of all that, you also ran an amazing conference last year, which we attended and sponsored, called AI for the Rest of Us, which was a great success as well.

**Hannah Foxwell:** Yes, I did. I'm not very good at sitting still. I don't know.

**Simon Maple:** How do you find the time?

**Hannah Foxwell:** I was going to take a career break. Genuinely, I was going to take a career break and then I got bored and I thought, I know what I'll do. I know I'll run an AI conference single handed in five months. Yeah. That's what I spent a lot of last year doing.

But it was fabulous, wasn't it? There'll be another one this year.

Oh, amazing. It has to be good and then you get to do it again. And it was good, and therefore we will do it again.

**Simon Maple:** See that's always the, what you get when you do [00:02:00] good work is just more work, right? So you've run an amazing conference and now you have to do it all over again from demand of everyone who enjoyed that conference last year, which is brilliant.

Your background is very heavily in the DevOps space, you were obviously one of the DevOps Days organizers. So talk us through a few of the things you've done there: how did you get into DevOps originally?

**Hannah Foxwell:** My DevOps origin story actually goes back to my early career as a release manager.

Everyone talks about those silos between dev and ops, or the wall of confusion, and things like that. And I was literally little Hannah in her early twenties, sat on top of that wall of confusion, trying to interpret what I was hearing from Devs and what I was hearing from Ops, and build a cohesive plan that everyone could buy into.

And very early on in the DevOps movement I saw it as a way to solve a very real and painful problem that I had experienced personally, but I knew that almost all organizations were experiencing, and I was like, there [00:03:00] has to be a better way. I didn't have the answer. I didn't know what that better way was, but I knew that there was a very real problem to be solved.

And so yeah, that's my DevOps origin story. And I think if I was to describe my career, it has always been about building: building better engineering teams, building better organizations, not just a single dev team, but all of the mechanics that happen around software development that make it successful.

That naturally led me to take a bit of a detour into product leadership, because it doesn't matter how well you build it, you have to build the right thing. But very much still in the developer tool space, very much centered on developer experience and developer productivity.

And that's where we work together at Snyk.

**Simon Maple:** Yeah, amazing. And with that kind of background behind us, when we look back, we see the changes that we made and the lessons that we learned in the DevOps world. Looking forward now, of course, [00:04:00] with some of the challenges and some of the amazing benefits that AI can provide us with, we're sat here on the cusp of another cultural team change, and we're once again needing to answer many of the same questions: how do we adopt this technology correctly within our teams? How do we change our team to effectively make the best use of AI within our applications, within the software that we build, but also, from the team point of view, how do we use AI to perform better as a team as well? There's going to be a lot of consistency between the challenges that we faced, and probably some in the ways in which we solve those. So I guess let's look back at DevOps first, before we then start looking at the challenges that AI provides us from a team and organization point of view, and a ways-of-working point of view, and what we can actually practically do, maybe even today, to help us with those challenges.

When we look back at the [00:05:00] DevOps space, what do you feel we as an industry did well, and did absolutely appallingly at, when addressing or trying to take on the benefits of DevOps, the things that DevOps can provide us with? What were the highs and lows of that, would you say?

**Hannah Foxwell:** Yeah, I think there's one really important similarity, and it's that sort of bottom-up, engineering-driven adoption. I think DevOps was really such an engineering-led movement, wasn't it? It was like, this is a better way of doing things. And I think we have seen that, almost overnight, we're talking about one or two years, which isn't very much time in tech.

Like the vast majority of software developers will be using AI tools in their workflow, in their everyday workflow. It's one of the personas or like the industries that has really immediately felt the benefits of generative AI and what that technology can do for us today. So I think that's one of the similarities.

I [00:06:00] think on the flip side, one of the differences is that AI is obviously touching every role. It's not just touching developers. Whereas DevOps was like very incubated within the developer community. I don't think many business leaders really cared about it too much. This was just about us getting our house in order.

I think everybody cares about AI. Back in 2012, so we're talking about a while ago, over a decade ago, the CIO, the big boss at the company I was working at, said, don't talk to me about DevOps. Every time an engineering leader or an architect or someone was like, behold the future, and it is DevOps, he was like, stop talking to me about that, because he just couldn't see the return on investment, the business rationale behind it, because it was all new and it was unproven.

Whereas today with AI, your CIO, your CTO, your CEO is going, I want generative AI in everyone's pockets, in [00:07:00] everyone's jobs.

I want that productivity boost. I want it yesterday. Go. And I think there's this real pressure top down to couple with that bottom up engineering enthusiasm that we have for these tools. Which is new. I think that's really exciting.

**Simon Maple:** Yeah, that's interesting. When we think about who the skeptics are, looking at it from that point of view: within AI today, you have various groups.

Some people absolutely adore AI and are fully all in; they believe it's the future, if not today then very soon, as models improve and technology gets better. There are others who look at AI and are so skeptical of it that they don't really want to use it, because they think they can do better today.

Would you say that's like similar to the DevOps space as well in terms of people who are trying to put barriers up or people who have more of that inertia not to lean into the new technologies?

**Hannah Foxwell:** Yeah, I think there's always going to be a fear of change.

And that little voice inside your head that goes, what does it mean for me? Am I going to lose my job? Are the things that I enjoy about my job going away? What does the future look [00:08:00] like? Unfortunately, I think for those of us who choose a career in technology, that change is relentless, and comfort with change and ambiguity is actually one of the things that will set you up for success. But let me flip the conversation around. Where do you sit on that broad spectrum? At one end there's the pessimist, or maybe the catastrophist. I don't think you are one, because you're running an AI podcast. At the other end of the scale we get the optimist, but we also get almost that cultish sort of hype around AI, where all of the challenges and limitations are brushed away.

Like, those will be solved in the next few years, the good versions are coming. So where are you on that spectrum?

**Simon Maple:** It's a great question. I'd like to think I'm quite a realist in that space, where I see the benefits, right?

The benefits are fairly obvious, I think. It still feels like magic when you ask it to do stuff and it provides you with answers, and it is quite amazing what it can do today. It's far from perfect, and we can see the level [00:09:00] of where we are today in using it.

You still today need to use it in somewhat of an assisted manner. And while I think the future is a much more autonomous space, I'm quite the realist in thinking we have come a long way, there is still a long way to go, and we need to be almost a little bit careful in how we talk about AI, because I can see so many people who are very blasé in the way that they're expressing what their app or their technology can do.

And actually, me and Guy were talking about this just the other day: there's a big difference between that demo and the actual real-world use case. It's easy to get any product to work well in a demo, but when you actually want it to work well day in, day out, on real-world applications, it's way, way harder.

And I think today we're still very much in that demo space, unless the tool is well attended and you can build up that trust going [00:10:00] forward.

**Hannah Foxwell:** I was going to say trust. It's about trust. It's about reliability. And these tools have to earn it with their consistency, which AI doesn't really do consistency, does it? That's the whole point, it's non-deterministic. And consistency versus accuracy versus reliability, one of those words or a combination is the right thing, but it helps to build that trust. And then we get to a point where we can really feel the benefits of it, because once you have proven that it's good, then we can build much more autonomous systems. I'm very excited about the world of AI agents. And again, everyone's talking about AI agents, so I'm going to be incredibly boring and talk about them as well. But if you can offload a lot of the repetitive, mundane work to an AI agent, or a team of AI agents that work in unison, how liberating is that for the humans involved?

I've worked as part of huge enterprise companies, tens of thousands of people in these organizations and the mechanics of operating those companies is painful, [00:11:00] absolutely painful. And that's an area where I think AI is going to absolutely change the game.

**Simon Maple:** Yeah, absolutely. So let's talk a little bit about teams, and culturally how AI can potentially change the organization, how we expect it to change the organization going forward, and then what we can actually do practically today to prepare ourselves for these potential changes. If we start a little bit longer term: how do you see development teams potentially changing, the way software is built, the way organizations have to adapt to those changes in development?

**Hannah Foxwell:** Let me talk about like the humans first, because I think that's always a good place to start. So I've seen AI tools in the hands of a really experienced sort of principal engineer.

And you can get literally months' worth of work done in a day, because you know exactly what you're asking it for. You use your knowledge in unison with the automation tooling that you've got at your disposal to ask for the right thing, and the right thing comes [00:12:00] out. And I think that is unbelievable.

Like, it's absolutely mind-blowing. I think at the other end of the spectrum is not your principal engineer, but your new engineer. I think this podcast is called AI Native Dev, but those will be the AI Native Devs. If you're coming into software development today, you're not learning in the same way.

You've got these tools at your disposal. And your learning journey is going to look very different. And then there's all the in betweens, the people who are on their learning journey and they started off in one mode of operation. And now with these new tools, they have to switch.

And I think those are the people who are going to have the hardest time transitioning, because they don't necessarily have all of that in-depth knowledge that your principals and your staff engineers have, to ask for the right thing and to know with confidence that what it's given you is the right thing.

And I think then we need to look at, okay, how do we make teams with a mix of skill levels successful? And it's not that different to what we do today. You want your senior engineers, [00:13:00] your principals and your staff engineers, you want them mentoring everybody at different stages, you want them to be scaling their impact rather than just picking up tasks off the backlog and executing. So I don't think that's that much different. I think it could be an uncomfortable transition for some engineers who don't necessarily yet have that depth of experience to know with confidence that the AI tooling is giving them the right answer.

But I think there has to be those guardrails and that safety net, and ideally it would be coming from a mentor-type figure, to help figure it out.

**Simon Maple:** The education piece there is super interesting, in terms of the number of times I've been asked by people who are coming into the industry: what is it I need to learn today?

Is it worth still learning various programming languages or not? And things like this. And I guess one of the things that we probably should focus on fairly early is education, right, from what you were just saying.

**Hannah Foxwell:** But it's not actually that different to the education you would have done anyway. It's [00:14:00] maybe a reframe. So instead of a blank sheet of paper, and you having to write the program from scratch, you are now presented with something to respond to. But it is the same knowledge you need, to be able to make sure that it's got quality built in, that you haven't made any silly mistakes, and that it's doing what it's supposed to do.

**Simon Maple:** And that's interesting, because I feel a lot of the details of development, certainly around implementation, will be lost, because if we can rely on AI to provide us with implementations, we need to think less about the implementations and more about how we build, in terms of the architecture, the patterns that we use, and so forth.

Do we feel like that is potentially something that would be better for a developer, a junior developer, to learn than necessarily the implementations? If that is being handled more for us, do you see a shift in the perspective of education, maybe more about what makes an application good versus just what makes code performant?

**Hannah Foxwell:** There are so [00:15:00] many tools at your disposal. You can obviously use linting tools to find structural issues with your code, and you can use security static code analysis tools and things like that to catch any obvious issues.

And so we have all of this stuff at our disposal. We were talking before this, weren't we, about us being of an age where we grew up without the internet. Having been a software developer of an age where you didn't have all of these tools at your disposal, it will be interesting to see whether anything is lost along the way, and how it changes the profession, just as the internet completely changed society.

**Simon Maple:** And for those of us who are very used to the, should I call them old ways of coding? I don't know, the pre-AI software development. If we take from our DevOps best practices, or what we learned from folks who were more skeptical of DevOps: what can we learn about encouraging our development teams to be [00:16:00] more AI-savvy, to buy into AI more and into the future ways of development? What can we learn to avoid inertia when we're making that transition to a more AI-native, or just generally AI-assisted, way of developing?

Are there things that we can do better or learn from that DevOps space as to how we can encourage people to buy into these types of new paradigm shifts?

**Hannah Foxwell:** Yeah. I always come back to that really simple loop of build, measure, learn. Build, measure, learn. And build, measure, learn is obviously the methodology that you use to ensure that your product development is on track, you know: create a hypothesis, test that hypothesis, measure the impact that it's had, and learn and then improve. And I think you could talk about agile methodologies, you could talk about DevOps, you could talk about platform engineering, because that's applying user experience and product management to internal tools, [00:17:00] and that's completely transferable to your AI tools for developers as well.

So it's about making space to experiment and to do that build, measure, learn cycle, and it being okay for your learning to be: that was the wrong way, let's not do that again. I think everything's evolving so quickly. And I think it's hard to look back and remember what you were thinking and feeling 15 years ago, 10 years ago in your career, because hindsight and confirmation bias and things get in the way. But I do think that the teams that were real leaders in the DevOps space, in the cloud native space, cloud engineering, I think they were the ones that were willing to try something different, to throw out the old way and say, let's experiment, let's incubate some new practices.

Let's measure the success of them and learn from it. And they were okay with not being right first time, or not having the answer. Because, you [00:18:00] know, we talk about whether you're an AI optimist or pessimist or things like that, and I am just, like, so here to ride this rollercoaster.

And there's going to be ups and there's going to be downs, and I'm like, whatever, it's going to be really fun. If I could go back 10 years in the DevOps space, I would have been a lot more vocal publicly about what I was doing and what we were learning from it, because I felt a little bit impostery back then.

It's, oh, if no one else is doing it this way, maybe I'm wrong. But no, we're all inventing and experimenting, and if we share what we're learning, then we all get better. And I think that's where we're at. We're at that messy, experimental incubation phase where there's no right answer.

**Simon Maple:** And if you can measure that and then share your success, you can say: look, I was doing this and it took me this long; now I can do it with far less effort, asynchronously to other work I'm doing, and it takes me this long. You can share that and say, look, who else wants to learn from these ways [00:19:00] of working?

And I guess for those who are more skeptical, it's not just about time as well. It's about the frustrations that those previous tasks can cause, because you're taking away jobs that people don't want to do, whatever that is, and actually providing people with more fun things.

And that could be an area as well. I guess from the team point of view in DevOps, or if we lean into DevSecOps, one of the things that I loved seeing companies and organizations do was build out these security champion-style groups, having folks who were almost on the bleeding edge, learning how to do things and then sharing that with other teams so that others could adopt it.

So that not everyone has to trip over the same barriers at the same time.

**Hannah Foxwell:** Yeah, absolutely, building communities of practice is really good, having those internal champions and advocates. Sometimes it takes a little bit of effort to bring people together around that, because everyone's busy, like everyone's so busy.

And for the [00:20:00] first sort of year after ChatGPT came out, I hadn't really touched it, because I had my blinkers on. It was like, I've got a lot of work to do, very important work to do, and I didn't have time to raise my head and look around and ask, what's going on here? But I think there are a lot of people who are still like that, or maybe they haven't experimented with the technology to a point where, you know, it can really have an impact on their day-to-day lives.

They've just scratched the surface. Maybe it's hallucinated at you and you've lost a little bit of faith in it. Maybe you haven't spent the time to learn how to write a good, structured prompt. I'm sure there are a lot of folks who are like I was, who are skeptical or maybe a bit disengaged at the moment, just because of the time available and the learning curve that you have to go through. And the hackathon has made a comeback in a big way, hasn't it? One of the best things you can do for helping people learn is to make sure that it's low pressure, make sure that it's not tied to a delivery or a deliverable. Make it experimental, make it fun, make it social, and then people will [00:21:00] engage.

People will have a go. And I genuinely think the hackathon has made such a big comeback because it is such an effective way to get people engaged, and to get people to pick up these new tools in a safe environment where there's no pressure and no fear of failure.

**Simon Maple:** Yeah. Yeah. Interesting. And actually, yeah, the hackathon style, the what-can-we-do-with-AI-today kind of thing, just trying to play with it. I saw a tweet, or maybe it was a Bluesky post, actually, and it was someone who said: before you use AI, just think about whether you can do this with normal coding, prior, pre-AI-style technologies, before you do that.

And I thought that was an interesting tweet, in terms of people almost being too inclined to use AI too quickly. And I feel like that tweet was maybe a little bit badly worded. They probably should have said something like: think about whether you can do this better in conventional programming ways.

So don't use AI for the sake of using AI, if there are better ways of doing it. Do you feel like one of [00:22:00] the pushbacks could be that if people are trying to use AI too much, people will almost get frustrated at it actually providing them with worse ways of working? Do you feel like we're almost being pushed or pressured into AI?

**Hannah Foxwell:** Yeah, there's a lot of LLM-washing: I don't know what the question is, but the answer is using an LLM. It is revolutionary technology because of the way you interact with it, because it's natural language.

It's very accessible to a lot of people who would not have previously been able to write code. Yeah, there are process automation tools out there, if-this-then-that, and you can draw a flow chart of all the different scenarios that you might get as input, and be very thorough in your analysis of writing down every permutation of every output.

That's all technically very feasible. Or you could just throw that process at an LLM and tell it whether it got to the right answer or not, and very quickly you've got a working prototype. I can see [00:23:00] why it's so alluring, I can absolutely see why it's so alluring. It's not deterministic, and if you've got a process that needs to be predictable and specific, maybe it's not the right thing for you. But when you're talking about, say, the support desk or the call center option, where you've got unpredictable inputs, it can be very good.

And you might say, actually, I do need very specific outputs, but maybe I need this filtering layer to help organize all of these inputs into buckets. And then again, an if-this-then-that program won't catch everything in the same way an LLM will have a go.

They're very good at having a go, aren't they?

**Simon Maple:** And I think that's probably an area where we ourselves need to change as well, in recognizing it's non-deterministic, in the sense that it can be wrong in a non-deterministic way. There was a good chat earlier on the podcast with Caleb Sima, who was talking about having an LLM [00:24:00] provide you with information about how many vulnerabilities you have in your code. If it runs four times and says you've got one vulnerability, and then it runs a fifth time and says there's no vulnerability, how do you deal with that? When we're thinking very much in a deterministic way of, is this code vulnerable or not vulnerable?

And I think we need to be much more mindful about the limitations and the output that LLMs provide, and deal with that in our ways of working. One other question that I would love to ask: when we think about the changes in teams, for example platform engineering teams, a DevOps engineer, these are roles and teams which were largely created through almost everyone trying to do everything themselves, followed by, it would actually be wonderful if we had this kind of paved road or path to do this. And then you have people who support those services or those ways of working, and it's okay to go off-road, [00:25:00] but you have to support yourself. And these teams are helping us with this more paved road.

I guess if we draw that parallel to AI now, do we see AI engineers and folks like that joining these same teams? Do we see developers today maybe taking that more upon themselves, in the same way that we did with DevOps? What do you see, I guess, today?

And what advice would you give for how we can do that most effectively?

**Hannah Foxwell:** I agree. I think there are your two types of users. There is your technical user, like in platform engineering: I'm providing a shared platform that people can use and leverage, people can plug aspects of their application into this platform.

People can plug aspects of their processes into this. It's a shared resource. I think that is a pattern that we are absolutely going to see evolving. And it doesn't matter whether, in the backend, you're self-hosting an open source LLM like Llama [00:26:00] or Mistral, or whether you're backing off into an API that's hosted by Microsoft or Google or OpenAI. It won't really matter, because the user interface is something that you've provided, like a portal, internally. The other aspect of it, and this is where we're going to get sprawl, is the sort of business user who can write a prompt. They're not in engineering, but you see so many startups which have focused on solving a problem in a vertical for that user, and they're offering the whole stack.

And I think there's a lot of shadow IT that's probably going to happen, with different AI tools that are tailored specifically for a certain persona within your business. And that's way outside of platform engineering; that's maybe delivered through a SaaS solution. Maybe it's an AI tool that you don't even really know is an AI tool underneath the hood.

Maybe it's just a tool that you use every day. And I think, in the same way it's really hard to understand your software bill of materials, understanding your AI landscape, [00:27:00] and the AI sprawl that's happening across all of your different tools and platforms, all of your different roles and business processes, is going to be really challenging.

It's already happening. It's already happening. And you have to decide what's okay and what's not. And you have to put some guardrails around that. But most importantly, you have to educate your team about what's okay and what's not okay in your organization. Don't feed customer information into a free tier of a SaaS product.

For example, because they're probably using the free tier users' inputs as training data. You probably shouldn't be disclosing any personally identifiable information about your customers to any third party anyway. But that's a huge education piece, because it's not okay to not put these tools in people's hands if they can make them so much better at their job, but you do have to make sure that you have some visibility of what tools are in your portfolio.

And so that's steering a little bit away from the developer tooling piece, because I think a platform engineering team maybe has more visibility [00:28:00] into the engineering domain than they do into the business domain and what's happening there. And security is probably jumping up and down and losing their mind over how to mitigate these risks as well, but yeah.

**Simon Maple:** Yeah. It is interesting that you bring up a few really good points there, actually. I think one is that this is far broader than DevOps, in the sense that DevOps was there to help software delivery, software development, and things like that. It's very heavily based in the engineering space and the operations space.

AI is far broader than that and actually has implications and benefits across the organization. So that's one. And then I think the second one that you mentioned, which was really good, is the idea of what can you do and what shouldn't you do, what can and can't you do based on, basically, company guardrails, company guidelines for what you can and can't provide the models, and so forth.

And as soon as we go outside of the engineering department and those more technical [00:29:00] spaces, you lose that intrinsic kind of technical or mechanical sympathy for what's actually happening under the covers in these LLMs. And so people might accidentally, mistakenly provide an LLM with too much information, too much business-sensitive information,

and potentially leak company data and so forth. And that's all before we think about what's the best way of doing something. It's more of the, can we do it, can we not do it?

**Hannah Foxwell:** Day zero, isn't it? This is day zero. I think it's probably a combination, isn't it? It's probably a combination.

I think there will be. I've always worked in these very complicated enterprises, I've always worked in that space. These are where the really hard challenges are. And when I'm talking to people about what they're using and how they're using it, there are some teams that can't use a SaaS solution, because they're in a regulated environment, very sensitive data, maybe it's banking, telco, whatever, and they're going down the self-hosted route.

And so it's very platform engineering. It's very platform [00:30:00] centric. And one of the folks I'm working with, actually, called Helix, they're building this sort of private AI stack. That's their mission: to empower these companies so that they can get a similar experience internally, on their own hardware, as they would using a SaaS or an IaaS kind of solution.

They can't go to Google or Microsoft or OpenAI for those things. Yeah. One of the things that I've been mulling over is actually whether or not it's an "and" instead of an "or". Do you go left or do you go right, or do you actually support both? Because every business has very sensitive information.

Does every business actually need to support their own private Gen AI stack for certain processes? And then for certain less sensitive, more day-to-day, maybe more creative work, maybe marketing, maybe stuff like that, maybe you can use some of the SaaS tools, and that is the right answer. And be a bit mindful about what, like I said, that landscape of AI solutions is going to look like in your organization.

[00:31:00] Maybe. Yeah, getting ahead of it and making a decision about what you need and what you don't, instead of letting it sprawl organically, with new solutions popping up all over the place.

**Simon Maple:** Yeah. And if I draw a parallel again back to, let's say, DevSecOps and security champions, I used to hear the same question, or the same problem space, a few times: there are general company security guardrails, where an organization wants a security champions program to help educate everyone, whichever department you're in, about good security hygiene within your organization.

And then you have the much more focused group, which is purely around development and good security practices within your coding: everything from vulnerabilities to where you store data, all that kind of stuff. And people used to always say, you know, should this be the same group? And typically what we saw was that people who tried to make it the same group actually made it less relevant for everyone, because all of a [00:32:00] sudden there's some stuff which is completely irrelevant to others. If someone in marketing is getting information about how to use third-party libraries, it's, I don't even know what this is. And vice versa: if people in development are getting extremely high-level information which they effectively already know super, super well, they almost feel a bit embarrassed or bored to have to go through this type of education. Is this going to be similar, where we almost have a set of AI education for the organization and then a very specific set for parts of the organization, and we keep these as separate, almost like programs of education?

We keep these as separate, almost like programs of education. And maybe this is similar to, as we were talking with the platform groups.

**Hannah Foxwell:** Yeah, definitely. I think like the enablement and the education that you provide to your teams. I'm trying to think about an example that I've been through.

So everybody gets the training on physical security, don't they? What to do and not to do with your hardware and things like that, because that's a commonality. But then I can envisage a [00:33:00] very role-based enablement process: this is the suite of AI tools that we use here,

and here are the best practices that we've uncovered for being the best at your job, not every job, but your job. And that would look fundamentally different for technical teams than it would for non-technical teams. So I think it's an investment, isn't it? It's an investment, and it's such a difficult investment to make when I'm sat here going, we don't know what the future looks like.

Everything's changing every five minutes, and it's really exciting, it's, wow. You could have spent six months developing a training program only for it to be redundant the moment it's launched. So I think, like you said, something that's more organic and evolving, like a community of practice or a champions program, might be the right thing right now.

Yeah. And as some of these best practices solidify, and you figure out, okay, this works for us and this doesn't, maybe then you add it to your onboarding process for new employees. Maybe then you do some more formal training. I'm [00:34:00] not sure. I am interested if anybody who is listening to this has any insights, because I am just genuinely curious about how big organizations are tackling this.

I know how I would think about it, but I'm not doing it in the real world.

**Simon Maple:** Yeah. And I guess if we think about that listener who, let's say, is a developer working within a team, and they're thinking about how their job might change in a year or two.

I think we're both agreeing that AI is certainly not removing jobs from development, but what would be the biggest changes, would you say, that developer can expect in their teams and in their ways of working in the next couple of years?

That's a million-dollar question, isn't it?

**Hannah Foxwell:** No, no. This is how I think about it. I lived and experienced that sort of switch from maybe waterfall software delivery processes, where releases were once every six months, once every three months, like very rigid, to agile.

[00:35:00] And when I experienced that, it increased the pressure on the operations team to the point where they became the bottleneck. They could not cope with that extra velocity of the development team. And then DevOps came along, and DevOps and cloud as well, because infrastructure was on demand.

You didn't have to order it months and months in advance, and that then increased the velocity again. And we had to completely change the rule book around operations and how you did that job. It became much more about automation, much more about self-healing and resilience engineering.

And now here we are with AI, and it's, okay, so the bit where you write the code. Because I think we have all been optimizing our engineering teams around maximizing focused time writing code. I would say that was previously my mission as an engineering manager: I don't want my team distracted.

I want them focused on the task. What if that's not the bottleneck anymore? What's the [00:36:00] bottleneck? It's not time writing code. Writing code is the easy bit. Actually, I think you go up the process again. I think you start to look at product. And you start to think about what are the product management practices that we need in place to make sure that we are doing that build, measure, learn cycle as efficiently as possible.

What happens if you can prototype something in a day? What then? Do you just ship it? Do you ship it to all your users? Or do you need to actually get really good at A/B testing? I know a lot of companies aren't. Do you actually have to get a little bit better about testing your hypotheses?

Do you have to get a little bit better about progressive delivery, where you can roll out a feature to cohorts of your user base and really test that you are building the right thing? Because I think there is a real danger that another step change, another increase in developer velocity, is going to, in a lot of places, encourage feature sprawl.

Every feature that you can think of is built, [00:37:00] and every product goes through that enshittification process. Eventually the feature bloat is so much that it becomes a different product, it becomes unusable. And so I think that's the interesting area for me.

I think it's, how do we fine-tune that build, measure, learn cycle, and do a much better job of the measure and learn than we do currently? Because I've worked in software development for a long time, and I know even I have done a really bad job of the measure and learn bit, because the vast majority of your time is focused on build.

That's the bottleneck. You put all your energy into build, and once you've built something, you're immersed in the next build. So that's where my head's at the moment.

**Simon Maple:** I really love that perspective, just in terms of trying to foresee the next bottleneck.

Because this will take out a bottleneck, and it will accelerate things so fast that we're going to almost be caught out by [00:38:00] the next bottleneck. And I think at times, when we look at, let's say, a number of features that we could implement, we've never thought about: what happens if we could implement all of these?

Should we implement all of these?

**Hannah Foxwell:** Yeah, it's like, with great power comes great responsibility. Just because we can doesn't mean we should. We very often prioritize, and some of these practices are just not as common as you might think: the diligence around A/B testing, the diligence around measuring success, the user feedback. Like, how many people go through a round of user feedback on their new features before they push them to production?

From my experience, anyway, we're all in execution mode, because building the thing is your focus and we can only build so many things. And you get this confirmation bias that you must've built the right thing. So you ship it, you market it, and then everyone's terribly surprised when it doesn't get used.

**Simon Maple:** So we'll effectively move away from that point of view of prioritizing because [00:39:00] we only have time to build X number of things; now we need to prioritize based on the needs of what people are doing. And some people are very lucky, in terms of maybe having the option of doing both today, because they have large teams.

**Hannah Foxwell:** Oh yeah, like the future is here, it's just not evenly distributed. Some teams are fantastic at this. They've got very data driven decision making processes in place. But if I was putting my money on it, I would say that a lot of teams have had similar experiences to me where the build part of build, measure, learn is what takes up most of your time and energy because that's where the bottleneck is currently.

**Simon Maple:** Yeah, and I think it's an immensely good thing that we think more about that as developers as well, because the focus shouldn't be on build; the focus should be much more around thinking about the user need, the use case of what a user is trying to do with this.

And as a result, building tools and applications that are [00:40:00] designed for developers, rather than over-rotating so much on how we design and how we build this thing that we're actually not focusing enough on the experience and things like that. So a movement across to that, which will be built into that measure and learn cycle, is very, very needed and very, very valuable to a lot of organizations, I think.

**Hannah Foxwell:** Yeah. Yeah. And that goes back to what we started the conversation on, which is one of the lessons learned from DevOps, 15 years into DevOps transformations and building this community.

What are we talking about? Developer experience. We're talking about user experience. We're talking about making it easy to do the right thing, yeah. It's a forever lesson that we'll keep unlearning and relearning: user experience above everything.

**Simon Maple:** Amazing. We could talk for another hour or so quite comfortably, I'm sure.

But this has been super valuable, and it's interesting to keep bringing up the DevOps lessons, actually, because it's definitely a parallel that we can draw with this existing movement. So I'm super grateful for you sharing some of those perspectives [00:41:00] on that.

And I loved the loop, so what was it? Build, measure, learn, right?

**Hannah Foxwell:** Yes. Build, measure, learn. Build, measure, learn.

**Simon Maple:** Very valuable, a very valuable cycle for us to keep repeating. Yeah, thank you very much. Thank you very much for joining us today, Hannah. Great to catch up and hear your experiences.

And for those listening to the episode, thanks again for tuning in, and we hope to have you tune in to the next episode shortly. Thank you very much.

**Hannah Foxwell:** Bye.

**Simon Maple:** Thanks for tuning in. Join us next time on the AI Native Dev brought to you by Tessl.

Podcast theme music by Transistor.fm.