Changing the Developer Documentation UX Workflow using AI with Amara Graham

In this episode, Simon Maple sits down with Amara Graham, Head of Developer Experience at Camunda, to explore how AI is transforming the landscape of developer documentation. Tune in to discover the challenges and innovations in integrating AI into documentation processes.

Episode Description

Join us for an insightful discussion with Amara Graham, a leader in developer experience at Camunda, as she delves into the transformative impact of AI on developer documentation. With her extensive background in developer relations and advocacy, Amara shares her journey and the innovative strategies employed at Camunda to integrate AI into documentation. This episode explores the role of AI in enhancing user experience, the challenges of implementing AI tools, and the future of AI in developer environments. Whether you're a developer, a technical writer, or an AI enthusiast, this episode offers valuable perspectives and practical insights into the dynamic intersection of AI and developer experience.

Chapters

  1. [00:00:00] Introduction to Amara Graham and AI's Role in Documentation
  2. [00:01:00] Overview of Camunda and Its Developer Experience Initiatives
  3. [00:03:00] Amara's Background and Transition into Developer Advocacy
  4. [00:06:00] Understanding Camunda's Diverse Developer Personas
  5. [00:09:00] The Decision-Making Process for AI Integration
  6. [00:12:00] Implementing an AI Agent and User Comfort Levels
  7. [00:16:00] Addressing Documentation Accuracy and User Feedback
  8. [00:20:00] Challenges in AI Documentation and Solutions
  9. [00:25:00] Building Trust with AI Tools
  10. [00:30:00] The Future of AI in Developer Experience

The Role of AI in Documentation

Integrating AI into Camunda's documentation strategy required careful consideration and balance. Amara discusses the thought process behind this integration, stating, "We want to get on that AI hype train, if you will." The goal was to enhance user experience by providing a dialogue-based tool that goes beyond traditional keyword searches. By introducing an AI agent, Camunda aims to meet evolving user behaviors and provide a more interactive way to access information, all while ensuring that the documentation remains a reliable source of truth.

The decision to integrate AI into documentation was not taken lightly. It involved thorough research and evaluation of various AI technologies to identify the best fit for Camunda's needs. The primary objective was to create a system that could intelligently parse user queries and deliver precise and relevant information, thereby reducing the time users spend searching for answers.
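
The episode doesn't detail the retrieval mechanics behind Camunda's agent, but the shift from keyword matching to query understanding can be sketched in miniature. The snippet below ranks documentation passages against a question using bag-of-words cosine similarity; the passages and function names are illustrative, and a production assistant would use learned embeddings rather than raw word counts:

```python
from collections import Counter
import math

def tokenize(text):
    # Lowercase, split on whitespace, and strip trailing punctuation.
    return [t.strip(".,:;!?") for t in text.lower().split() if t.strip(".,:;!?")]

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_passage(query: str, passages: list[str]) -> str:
    # Return the passage most similar to the query.
    q = Counter(tokenize(query))
    scored = [(cosine(q, Counter(tokenize(p))), p) for p in passages]
    return max(scored)[1]

docs = [
    "Install the Camunda platform using the official Helm chart.",
    "The REST API exposes endpoints for starting process instances.",
    "BPMN diagrams model the workflow that the engine executes.",
]
print(best_passage("how do I start a process instance via the API", docs))
```

Swapping the word counts for vector embeddings from a language model is what turns this kind of fuzzy search into the dialogue-capable retrieval described above.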

One of the main challenges in deploying AI in documentation is ensuring accuracy and consistency. An AI assistant must be grounded in the full breadth of the documentation to handle the nuances of technical language and context. At Camunda, the team invested significant effort in evaluating and tuning the AI agent so that it could recognize and respond to a wide array of user queries, from simple requests to more complex, context-driven questions.

Additionally, the integration of AI required Camunda to rethink how documentation is structured and presented. The AI agent needed access to well-organized and clearly indexed content to function effectively. This led to a comprehensive overhaul of the documentation, ensuring that it was not only informative but also optimized for AI interactions.
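
As an illustration of what "optimized for AI interactions" can mean in practice (the episode doesn't specify Camunda's approach, so this is a hedged sketch with illustrative names), documentation is commonly split into heading-scoped chunks carrying source metadata, so an agent can retrieve and cite a precise section rather than a whole page:

```python
import re

def chunk_markdown(doc: str, source: str):
    """Split a markdown doc into heading-scoped chunks an AI agent can index."""
    chunks, title, lines = [], "Introduction", []
    for line in doc.splitlines():
        m = re.match(r"#+\s+(.*)", line)
        if m:
            # A new heading closes the previous chunk, if it has any content.
            if lines:
                chunks.append({"source": source, "section": title, "text": "\n".join(lines).strip()})
            title, lines = m.group(1), []
        else:
            lines.append(line)
    if lines:
        chunks.append({"source": source, "section": title, "text": "\n".join(lines).strip()})
    return chunks

doc = "# Install\nUse the Helm chart.\n# API\nStart instances over REST."
for c in chunk_markdown(doc, "getting-started.md"):
    print(c["section"], "->", c["text"])
```

The payoff of this structure is that the agent's answer can point back to a named section of a named file, which supports the transparency the team emphasizes.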

By embracing AI, Camunda is not only enhancing the user experience but also setting a new standard for how documentation can be leveraged in the digital age. The AI agent acts as a bridge between users and the wealth of information available, making it easier for developers to find the answers they need quickly and efficiently.

Implementing an AI Agent

The implementation of an AI agent at Camunda was both a challenge and a success. Amara shares the journey of introducing this technology, emphasizing the importance of user trust. "We need to be a little bit more gentle," she notes, acknowledging the varied comfort levels users have with AI tools. The AI agent serves as a "super powered search," offering users a more dynamic way to interact with the documentation. By maintaining transparency and reliability in AI interactions, Camunda has successfully integrated this technology into their user experience.

The journey of implementing an AI agent began with identifying the specific needs and expectations of Camunda's user base. The team recognized that while AI could offer significant advantages in terms of speed and accessibility, it was crucial to ensure that users felt comfortable and confident in using the new tool. This required a user-centric approach, focusing on clear communication and setting realistic expectations for the AI agent's capabilities.

From a technical standpoint, the implementation involved integrating the AI agent seamlessly into Camunda's existing documentation infrastructure. This required collaboration between the AI development team and the documentation team to ensure that the agent had access to up-to-date and relevant content. The AI model was continuously refined and tested to improve its understanding of user queries and its ability to provide accurate responses.

User feedback played a vital role in the implementation process. Camunda actively sought input from both internal stakeholders and external users to gather insights into how the AI agent was being used and perceived. This feedback was instrumental in making iterative improvements, allowing the team to address any issues and enhance the overall user experience.

The result is an AI agent that not only meets the needs of Camunda's diverse user base but also sets a benchmark for innovation in documentation. By prioritizing user trust and transparency, Camunda has created a tool that empowers developers to access the information they need with greater ease and confidence.

Challenges and Solutions in AI Documentation

Ensuring the AI agent provides accurate and relevant information is an ongoing challenge. Amara highlights the significance of feedback loops and user analytics in refining AI responses. "We go through and evaluate what's happening," she explains, detailing the process of assessing user interactions and identifying documentation gaps. By continuously monitoring and improving the AI agent's performance, Camunda ensures that it meets user needs effectively and maintains the integrity of the documentation.

One of the primary challenges in AI documentation is managing the vast amount of data and ensuring that the AI agent can access and interpret it correctly. This requires a robust data management strategy, where content is regularly updated and organized to facilitate easy retrieval by the AI. Camunda's documentation team works closely with the AI development team to implement a system that supports efficient data management and retrieval.

Another challenge is addressing the diverse range of user queries and ensuring that the AI agent can provide a meaningful response to each. The agent must interpret the context and intent behind a query, which can vary significantly depending on the user's experience level and familiarity with the platform. Camunda continuously refines the agent and the content it draws on so that it can return accurate, contextually relevant information to beginners and experts alike.

To address potential gaps in documentation, Camunda has established a proactive feedback loop that involves regular reviews and updates to the content. User interactions with the AI agent are closely monitored to identify areas where the documentation may be lacking or unclear. This feedback is then used to make targeted improvements, ensuring that the documentation remains comprehensive and user-friendly.
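
A feedback loop like the one described, where "I don't know" answers and uncertain conversations are queued for the docs team to review, might look like the following sketch. Kappa provides this kind of dashboard out of the box; the class, marker list, and example questions here are purely illustrative:

```python
from dataclasses import dataclass, field

# Phrasings that suggest the agent could not ground its answer in the docs.
UNCERTAIN_MARKERS = ("i don't know", "i'm not sure")

@dataclass
class ReviewQueue:
    """Collects agent answers that suggest a documentation gap."""
    flagged: list = field(default_factory=list)

    def record(self, question: str, answer: str, uncertain: bool = False):
        # Flag a conversation if the tool tagged it uncertain, or if the
        # answer itself reads as a non-answer.
        if uncertain or answer.lower().startswith(UNCERTAIN_MARKERS):
            self.flagged.append({"question": question, "answer": answer})

queue = ReviewQueue()
queue.record("How do I start a process?", "Use the REST API to create an instance.")
queue.record("Does Camunda support X?", "I don't know.")        # flagged by wording
queue.record("What about Y?", "See the docs.", uncertain=True)  # flagged by the tool's tag
print(len(queue.flagged))  # 2 items queued for the team's periodic review
```

Each flagged item then feeds the triage the episode describes: is it a docs gap, a product gap, or a positioning problem to raise with product marketing?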

By adopting a strategic and iterative approach to AI documentation, Camunda is able to overcome the challenges and deliver a tool that enhances the developer experience. The focus on continuous improvement and user feedback ensures that the AI agent remains a valuable resource for developers, providing them with the information they need to succeed.

Trust and Adoption of AI Tools

Building trust with users is crucial for the successful adoption of AI tools. Amara discusses the initial skepticism users may have and the strategies employed to encourage trust. "It's a gentle way to introduce the agent concept," she says, emphasizing the importance of gradual adoption. By providing a reliable and innovative tool, Camunda aims to position the AI agent as a valuable resource within the developer community.

Trust is a key factor in the adoption of any new technology, and AI is no exception. Users need to feel confident that the AI agent will provide accurate and reliable information, especially when it comes to technical documentation. Camunda has taken a thoughtful approach to building trust with its users, focusing on transparency and clear communication about the capabilities and limitations of the AI agent.

One of the strategies employed by Camunda is to ensure that users have a positive first experience with the AI agent. This involves carefully guiding users through the initial interactions and providing clear instructions on how to use the tool effectively. By setting realistic expectations and demonstrating the AI agent's value early on, Camunda is able to build trust and encourage continued use.

Another important aspect of building trust is maintaining the integrity and accuracy of the information provided by the AI agent. Camunda places a strong emphasis on quality control, regularly reviewing the AI's responses and making necessary adjustments to ensure that users receive the most accurate and relevant information. This commitment to quality helps to reinforce user trust and confidence in the AI agent.

Camunda also encourages user feedback and actively seeks input from the developer community. By engaging with users and addressing their concerns, Camunda is able to build a sense of community and foster a collaborative environment where users feel supported and valued. This engagement not only helps to build trust but also enables Camunda to continuously improve the AI agent and its documentation.

Through these strategies, Camunda is able to successfully integrate the AI agent into its documentation ecosystem and position it as a trusted and valuable resource for developers.

The Future of AI in Developer Experience

Looking ahead, Amara shares her vision for the future of AI-powered documentation. She believes AI will continue to evolve and enhance how developers interact with technical content. By leveraging AI, Camunda aims to further streamline information retrieval and improve user experiences. This forward-thinking approach positions Camunda at the forefront of technological innovation in developer documentation.

The potential impact of AI on the developer experience is immense. As AI technology continues to advance, it promises to revolutionize how developers access and interact with documentation. AI-powered tools can provide more personalized and context-aware experiences, tailoring information to individual user needs and preferences. This could significantly enhance productivity and efficiency, enabling developers to focus on more complex and creative tasks.

One of the most exciting possibilities for the future is the integration of AI with other emerging technologies, such as augmented reality (AR) and virtual reality (VR). These technologies have the potential to transform documentation into immersive and interactive experiences, providing developers with new ways to learn and engage with technical content. By combining AI with AR and VR, Camunda could create virtual environments where developers can explore processes and workflows in a more intuitive and experiential manner.

Furthermore, AI has the potential to facilitate greater collaboration and knowledge sharing among developers. By analyzing user interactions and identifying common challenges, AI can help to connect developers with similar interests or issues, fostering a more collaborative and supportive community. This could lead to the development of new solutions and innovations, driving the advancement of the developer experience as a whole.

As AI technology continues to evolve, Camunda remains committed to exploring new possibilities and pushing the boundaries of what is possible in developer documentation. By staying at the forefront of technological innovation, Camunda is well-positioned to continue delivering cutting-edge solutions that enhance the developer experience and empower developers to achieve their full potential.

Summary

The podcast episode underscores the strategic integration of AI in documentation, highlighting the importance of user trust and continuous improvement. As Amara shares, "I hope this encourages people to take that risk." For developers interested in leveraging AI to enhance their documentation practices, the discussion offers valuable insights into the challenges and solutions associated with AI-driven tools. By embracing AI, organizations can revolutionize their documentation processes and offer a more engaging and efficient user experience.

The integration of AI into documentation is not just about keeping up with technological trends but about fundamentally transforming how developers access and interact with information. Camunda's approach demonstrates the potential of AI to streamline processes, enhance user engagement, and ultimately improve the developer experience. By focusing on transparency, user feedback, and continuous improvement, Camunda sets a benchmark for the successful implementation of AI in documentation.

As AI continues to evolve, it opens up new possibilities for innovation and collaboration within the developer community. The insights shared in this episode provide a roadmap for organizations looking to harness the power of AI to enhance their documentation practices and drive the future of developer experience. Through strategic integration and a commitment to excellence, AI can serve as a catalyst for positive change, empowering developers to work more efficiently and effectively.

Full Script

Amara Graham: [00:00:00] I've been in those conversations where I like. We can't use that. We can't use the cool new thing. How do we make this old thing work for us? So I get it. The other piece of course is the agent could respond. I don't know. This was really important for me when we were evaluating different tools to put on the documentation, because the documentation at the end of the day is the source of truth for how you work with the product.

So I'm sure we've all been in a situation where we're looking at this thing going. Is this AI agent going to like work with me to give me what I need?

Simon Maple: You're listening to the AI native dev brought to you by Tessl.

And on today's episode, we have Amara Graham from Camunda, [00:01:00] Head of Developer Experience, and we're going to be talking about documentation and how AI can help us with documentation: both looking at writing documentation, reading documentation, whether the documentation is useful or not useful, how we can assess whether the documentation is accurate for our users, and so forth.

Amara, welcome to the session. How are you?

Amara Graham: I'm great. Thanks for having me.

Simon Maple: Absolutely. No problem. And tell us a little bit about yourself, Amara and a little bit about Camunda as well. So maybe your background, what you do, what a Head of Developer Experience means and so forth.

Amara Graham: At Camunda we run our documentation, which is part of the reason why I'm here today, as well as some of the more like hands on technical components.

So we are responsible for what I like to say is everything after the product, or product-adjacent tooling. So that could include API reference, how to work with the API, including things as technical as SDKs. So a lot of things that developers are touching and [00:02:00] interacting with. But taking a little bit of a step back, I've been in DevRel, I feel like I looked this up yesterday, for a number of years. I started as a developer advocate after I decided that heads-down development really wasn't for me; I was in an enterprise context.

That was cool. But what I really liked was enabling other developers. So moved into advocacy. And then from there said, I want to explore all things DevRel, so ran a DevRel program, a previous company, and then said, I really want to drill into this developer experience side of things. So that's where I'm at today.

I'm following the path of this three-pillar DevRel approach: community, advocacy, developer experience. But in the here and now I'm deep in the developer experience space. Yeah.

Simon Maple: Amazing. And your enterprise development that you mentioned that wasn't IBM, was it?

Amara Graham: No, that was Intel IT many many years ago, but it's funny that you mentioned IBM. Cause I wanted to start in like [00:03:00] the biggest, oldest enterprise context. So yeah, I have a deep love and admiration for my enterprise developers. I was one of them. I get it. I get that companies can be old and slow and still trying to figure out how to adjust to this very like fast paced agile environment that we have in tech today, but yeah that's where a lot of my like developer empathy comes from.

It's I've been in those conversations where I like. We can't use that. We can't use the cool new thing. How do we make this old thing work for us? So I get it.

Simon Maple: Yeah.

I only asked because I knew you've worked at IBM before, I believe. And I did a decade or more. I think everyone's done about that kind of thing in IBM.

And we've actually had very similar career paths and I wasn't sure if your time at IBM was development or advocacy or something else, but it would have been funny had we had that same career path through IBM as well. Tell us a little bit for those who haven't heard of Camunda.

Tell us a little bit about what you do there, about what Camunda does.

Amara Graham: Yes. So I don't have the polished marketing pitch. So if anybody is listening, I'm so [00:04:00] sorry as I butcher this.

Simon Maple: Just say it's another Camunda. It's a different Camunda.

Amara Graham: We do process automation and orchestration, which again is very central to my background and development, where I started doing process orchestration before we were calling it orchestration. But it's all about bringing together the process that you want to automate and then all of the pieces that do that automation. That's super vague. It could be something like you have a task list that people need to go through to say, okay, how do I do approvals?

How do I do those two step processes where someone creates a thing, you approve it? Boom, it goes through. You need a portal experience all the way through to things that are completely automated, touchless. So you want to integrate with a number of different systems, enterprise or not. We tie those things together for you.

We have a combination of the, what I would describe as like a traditional workflow tool. So you [00:05:00] can visually see how something goes through the process. Whether it's a set of UIs or some of, more of those like backend, truly automated things. And then of course, all of the kind of wiring to make that actually happen.

The term that we're using these days is process orchestration, but a lot of people are familiar with it from tools like IBM BPM or BPMS back in the day. It's the, I'm going to say it's the foundations to where we're at with like artificial intelligence and headed towards machine learning, deep learning.

It was that initial foundation of can we make things faster and easier, remove some of the just like button clicking that the humans did at one point in time, just to say, okay, yes, let's move this along. And how do we do things more efficiently?

Simon Maple: Yeah. And in terms of the persona, then that would be the main user for Camunda.

Are you looking at more of a platform role, a DevOps role, a bit of development, or a bit of a mix?

Amara Graham: That's a great question. [00:06:00] It is a bit of a mix. And it's part of the reason why I'm so fascinated by this space that I'm in with developer experience, because when you talk to me and my team, we're very laser focused on the developer persona in even that is quite broad.

So just like you were saying, it does include some of the DevOps folks. It includes some of the people who are doing that traditional software engineering. But the real answer is it includes everyone. Our product has the concept of running either in the cloud or on your infrastructure. So with that, you could have a variety of folks to support the implementation of our product at your company.

So that could be the DevOps folks supporting the actual like platform infrastructure, but it's also going to include people who are building those applications directly that interact with the engine.

Simon Maple: So from the documentation users point of view, [00:07:00] from when you're thinking about creating documentation, which leads us nicely into this topic, you're essentially, I guess the persona you're writing for is an engineer of some kind, whether they're leaning more onto the platform side or onto the software engineering side, talk us through a little bit about the tools that you use or maybe even the thought process that you went through when you're thinking about needing to offer the highest level of documentation that you could, what went through your mind in terms of, do we choose an AI style solution that can assist us with that? Or do we do the more traditional, let's have humans write everything. Let's have humans provide that based on the questions and requests that come in, provide that back to the user.

Amara Graham: That's a very big question that you just asked. There's seven questions that you asked. Because we have so many different kinds of users, there's a line that we have to walk between how do we enable all of our users to be able to interact and understand our product? And then how do we deeply enable our most technical users?

In our documentation, it [00:08:00] starts with structure, and we're going to have some people who come in and say, I need a really superficial understanding of how the product works and how I could apply it to my use cases, to my organization, and then you're also going to have the people who are coming in and say not only do I need to understand deep use cases, but I need to understand how do I apply them immediately?

So the very, very technical side of the documentation. Because we have this disparity in users, we have to have a spectrum of content. And with that, we have to have a way to enable those people that are at any point in the journey, whether they're beginners to Camunda, they deeply understand process orchestration, or this is like something totally foreign to them in all manners of the term.

So when we're looking at ways to enable this very broad set of users, we can't just say, okay, developers are super comfortable [00:09:00] with things like early access tooling, the latest, greatest bleeding edge thing. They'll totally understand interacting with an AI agent. We need to be a little bit more gentle with that because we don't always know who's coming to the docs and what they're coming for.

When we're looking at the doc structure, we're trying to make sure that we can introduce all of these different personas, but we also want to get on that AI hype train, if you will. So we know that people are changing some of their behaviors. One of the biggest changes that we've seen is around searching and how they find different pages, how they find the content that they're looking for. That very naturally led us to looking at how do we go from just doing the like text based searching of like keywords and things like that to how do we introduce something that is going to offer our users a dialogue. So introducing an AI agent and introducing it in a way that feels safe that allows some folks in like our enterprise [00:10:00] context to maybe interact with something like that for the first time in a business context?

And how do we do it in a way that people are truly comfortable with it and trust it and don't run to, our support agents to say, can you help me find this in the docs when they're like I'm just gonna search and couldn't you just do that? It's been really interesting to see how we need to evolve our tooling to meet the needs of our users, not just our developers.

But one of those things that we're doing is trying to do it in a gentle way that gives them that encouragement to say, interacting with something like an AI agent on our docs is something that you should do is that first round of knowledge gathering or troubleshooting and again, I just can't emphasize enough, like being very gentle about it because there's some people who are going to say, no, give me the human.

I want to talk to a human and interact with a person to guide me through this experience.

Simon Maple: So that agent then is essentially a chat UI, right? And [00:11:00] it's directly through the Camunda site. So what kind of input does that take in terms of providing an answer back to the user? Will it differ based on things like the experience of the user or will it pivot based on the how the questions are asked or the style of the questions asked, or does it not necessarily care too much about who the user is?

It's just trying to find that best match on the actual answers that exist in documentation and provide that back.

Amara Graham: I'm not going to say it's that smart yet. And it's not doing a lot of like additional analytics sourcing. I'm going to call it, that's maybe not the best term to use, but it's really taking things at face value.

And it maybe is lessening the value. If I say it's like super powered search, but that's really the way that I wanna describe it, because it's that next evolution in, you've gone from keyword based search to something a little bit more powerful. Again I say I'm coming [00:12:00] with my enterprise background.

I know many of our customers are in an enterprise context where safety is really important. So this idea that it's not gonna pull like their browser history or something very invasive like that. But as you're having these conversations with an AI agent, it should pick up some context, right? It should have some concept of history as it goes through.

So with our users, I want to make sure that they understand that the context that we're getting within that conversation is going to help them find the resources that they're looking for within the documentation. It's a gentle way to introduce the agent concept, but it's also a gentle way to introduce them to something that is beyond just a keyword search.

Simon Maple: Yeah, absolutely. And in terms of, interestingly we had Tamar from Glean on, she was talking about how users have a better response or better answers when they realize they're talking to a chat UI. So if they pass in just search terms, they won't get as good an answer as if they [00:13:00] were to actually have that chat style conversations. Have you done any research on how users get results based on whether they're looking at more search style input or whether it's chat input or have you seen any differences there?

Amara Graham: I've seen differences, but only because of the tool that we use, Kappa. They have a great set of kind of backend analytics and dashboards that I can look at. And it's interesting to see, we don't get a lot of engagement on did people ultimately end the conversation with what they felt like they were looking for.

So we have the ability to get like a thumbs up, thumbs down from them. But what's interesting is I can infer if the person felt like the conversation was successful or not. And many times it has to do with the fact that we have some users who are coming into it with that ChatGPT of, I'm going to have a dialogue with this agent.

This is great. I'm going to talk to it exactly like I would a human full sentences and [00:14:00] everything. And then we have other users who are still using it as more of a search function, which I'm almost tickled by because we also have a search box like it's two separate things, two separate, very clear experiences.

And we did that intentionally, but that's some of the things that I'm trying to dig into is how do we validate that the docs are good through tooling like this? Are people able to find what they need and can they use the tooling that we offer successfully? So if they're just using the AI agent, like a search box that doesn't really unlock the value that we're looking for, but it does give us an idea of how our users will interact with this tooling in the future.

So maybe we change how we position that, or maybe we even offer some sort of instruction to say, this is how you use this, and this is how you'll be able to find that information faster, quicker, better.

Simon Maple: Yeah. And in terms of success, I guess there's two levels of success here. One is, was the agent able to [00:15:00] provide the documentation that was requested?

And I guess the second one is maybe it's the right documentation, but was the documentation good or bad? How do you ascertain that level of success from the request?

Amara Graham: Yeah. So there's a couple of different things going on from the tooling itself within Kappa. We have the ability to see, does the agent respond in a way that it feels is uncertain.

So on the dashboard itself, we get an uncertain tag. At least once a month, my team and I are going in and evaluating what are the topics or what are the conversations that are coming up as uncertain. And that's a great way for us to dig in and see, is it the agent that's getting confused?

Is it the user that was initially confused? If you've ever been so lost, you're like, I don't know how to find myself. I don't know how to unstick myself from where I'm at. We go through and evaluate like what's happening in that context or in that conversation. The other piece of course [00:16:00] is the agent could respond, I don't know. This was really important for me when we were evaluating different tools to put on the documentation because the documentation at the end of the day is the source of truth for how you work with the product. There are contractual agreements aligned to that terms and conditions like it has to be very clear and very accurate.

So if the agent responds, I don't know. That's good in the sense that like we want to make sure it's not hallucinating. It's not going in the direction of making something up and potentially misleading users. But it also flags to us, are there gaps in our documentation? Did somebody ask a question that we should be able to answer?

And then we launch a little bit of an investigation to understand how we want to address that. Is it a docs gap? Is it a product gap? Is it something that we know we'll never work on? And then in the AI backend itself, we can go and add [00:17:00] like a specific answer if we need to. So that's maybe a short term solution or even a long term solution.

If we say, oh this is not something that we're ever going to do. It's interesting that we have somebody asking this. And then maybe that's something again, that we flag to product management or product marketing to say maybe people are misunderstanding how we are marketing our product. And how do we want to address this?

It's not necessarily a documentation issue as much as it's a positioning of the product issue.
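The triage loop Amara describes, flag uncertain answers, prefer an explicit "I don't know" over a guess, curate specific answers in the backend for known gaps, and log unanswered questions for review, can be sketched roughly as follows. Kappa's internals aren't public, so every name, threshold, and data structure here is a hypothetical illustration, not the product's actual API:

```python
# Hypothetical sketch of a docs-assistant answer pipeline that prefers
# "I don't know" over a low-confidence guess, supports curated overrides,
# and logs unanswered questions for the docs team to review.

CONFIDENCE_THRESHOLD = 0.7  # below this, refuse rather than risk hallucinating

# Manually curated answers added in the backend for known gaps.
curated_answers = {
    "does camunda 8 support feature x": "Not currently; see the roadmap page.",
}

# Questions the agent could not answer, collected for monthly review:
# is each one a docs gap, a product gap, or a positioning issue?
gap_log = []

def answer(question, retrieve):
    """retrieve(question) -> (answer_text, confidence) from the docs index."""
    key = question.strip().lower().rstrip("?")
    if key in curated_answers:          # short- or long-term fix for a known gap
        return curated_answers[key]
    text, confidence = retrieve(question)
    if confidence < CONFIDENCE_THRESHOLD:
        gap_log.append(question)        # flag for the investigation step
        return "I don't know."
    return text

# Toy retriever standing in for the real RAG backend.
def fake_retrieve(question):
    if "process instance" in question.lower():
        return ("Start a process instance via the REST API.", 0.9)
    return ("", 0.1)

print(answer("How do I start a process instance?", fake_retrieve))
print(answer("Does Camunda 8 support feature X?", fake_retrieve))
print(answer("Can I deploy to my toaster?", fake_retrieve))
print(gap_log)
```

The key design choice, mirrored from the conversation, is that a refusal is a feature: it both protects users from fabricated answers and produces the gap log that drives the docs team's follow-up investigation.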

Simon Maple: Yeah, interesting. So there's a few things that we can definitely unpack. I think the "I don't know" cases are a really tough one. Is that solved by Kappa, or is that a layer that you add on top of Kappa?

Amara Graham: It is solved by Kappa, which I'm very thankful for.

Simon Maple: Yeah, absolutely. And the workflow there then, if a user gets an, I don't know, back from Kappa, does that then encourage them to, or does it automatically reach out to someone on the services team or someone on your team or does that just get fed back to the team as data?

How does that resolve?

Amara Graham: [00:18:00] We've done a couple of different things. It does not reach out. I know some flows will say, okay, let me route you to a human agent. We don't do anything quite like that today, but it does try to route people to things to further help themselves. In our documentation we do reference our community forum, and we do link to our traditional support tooling. So it can prompt: if you want additional help, or you want to continue this conversation in a different space or with humans involved, it does route people to those places and spaces. But it is not one of those traditional support tools where it says, okay, let me get a live agent for you. That's not something that we're looking for.

And I think it's not something that our users are interested in. I think many times they're quite happy to get that response and go oh okay, I'll do my own digging, do a little research, is this maybe just not on the roadmap yet? And then it takes a little bit more investigative journalism, if you will, on my part [00:19:00] to go through.

But every once in a while, I see something pop up on like internal Slack channels where I'm like, that's very interesting that this question is coming up because it looks very similar to something that I saw recently. And it's customer A is requesting this and was curious about the roadmap.

I think customer A is this person on the docs, but it's all anonymous, so I can't quite relate it back. Just that investigative journalism side of things where I'm like, I've connected enough dots that I think this is the same thing.

Simon Maple: Yeah. Interesting. Let's talk a little bit about accuracy here as well, because I feel the "I don't know" is a really important step, in that it reduces the levels of hallucinations and things like that.

As you mentioned, equally, a lot of the data already exists in terms of the documentation. And in chatting with you before, I believe Kappa can effectively cite where it's grabbing information from. So it could potentially give you an answer and then provide you with a document that says, here's where you can go to get more detail on that.

[00:20:00] In terms of the certainty, it being confident in its responses, what levels of feedback do you get around that? And secondly, on URLs and documents that don't exist: is that a problem, or does it pretty much hit it every time, a document that people can follow up with and read to make sure, if they wanted to, that this is accurate and at the level of depth they need?

Amara Graham: Yeah, I'll answer that second part first: I have not yet seen it make up a URL or completely make up a document. Which, again, goes back to when I was doing early evaluation of these tools and this AI agent technology. I did not want to risk having it do that, because based on the customers that we have, and based on the interactions that I've heard about from our support team, that was going to be wildly unacceptable.

And to be honest, it should be unacceptable everywhere, but this is still very early technology, right? So there's a level of risk associated with [00:21:00] that. And hopefully most people going into interacting with these tools know that. But one of the most important things for me was a tool that cited its sources, and if we needed to do any sort of validation or checking, we could go through and basically like spot check.

So take a random selection of conversations, click into those, see what was going on. For us and for our user behavior, the conversations are often quite short. So we're talking, again, more of the search style: it's a couple of terms and then an answer, and then people leave. I assume they're getting what they need, or maybe they didn't get what they need, but they made that decision quite quickly. In some cases, we do see much longer dialogue.

But again, I am comforted by the fact that if I go in and read through that discussion, there's a back and forth between Kappa and the [00:22:00] user that are helping the two of them understand, is the user getting what they need? Is Kappa serving what the user needs? So you're able to see the person say, yes, this is what I'm looking for or could you give me more information on something specific?

And almost watch them walk through a tree to get the answer that they're looking for. That said, I do monthly reviews to make sure that things are working as expected. On the dashboard itself, we're able to see where Kappa flags something as an uncertain answer, but we're also able to see user-generated feedback.

So users can go through and give a thumbs up, thumbs down. They can report issues that they're seeing. And that's something that of course we monitor very closely.
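The monthly-review routine described above, tally Kappa's uncertain flags and user thumbs, then pull a random sample of conversations to read end to end, could be sketched like this. The conversation records and field names are invented for illustration; a real dashboard export would look different:

```python
import random

# Invented conversation records, as a dashboard export might provide them.
conversations = [
    {"id": 1, "uncertain": False, "feedback": "up"},
    {"id": 2, "uncertain": True,  "feedback": None},
    {"id": 3, "uncertain": False, "feedback": "down"},
    {"id": 4, "uncertain": False, "feedback": None},
    {"id": 5, "uncertain": True,  "feedback": "down"},
]

def monthly_review(convos, sample_size=2, seed=0):
    """Summarise uncertain answers and user feedback, and pick a random
    sample of conversations to read through by hand."""
    uncertain = [c["id"] for c in convos if c["uncertain"]]
    thumbs_up = sum(1 for c in convos if c["feedback"] == "up")
    thumbs_down = sum(1 for c in convos if c["feedback"] == "down")
    rng = random.Random(seed)            # fixed seed for a repeatable sample
    spot_check = [c["id"] for c in rng.sample(convos, sample_size)]
    return {
        "uncertain_ids": uncertain,      # dig into these first
        "thumbs_up": thumbs_up,
        "thumbs_down": thumbs_down,
        "spot_check_ids": spot_check,    # read these end to end
    }

report = monthly_review(conversations)
print(report)
```

The point of the random sample is the same one Amara makes: alongside the flagged conversations, spot-checking an unbiased slice catches problems that neither the uncertainty tag nor user thumbs surfaced.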

Simon Maple: I was going to say, is that feedback for you or is that feedback for the tool? Can it learn from, if it sees it's getting some thumbs up, it'll say, okay, that was good.

I'll use that for similar questions. I'll make sure I use that style of answer again?

Amara Graham: [00:23:00] Yeah, so there is some amount of that going on, but for the most part it's going to be feedback to me and my team. And I would say, is this written in a way that not only humans can read, but that Kappa or other AI tools can read as well? Because we know we have customers coming in and leveraging those AI agents to read our docs.

So in that sense, it becomes yet another persona that we need to write for. And the general discourse that I'm hearing right now is that we don't want to lean too heavily on things that start to get us into prompt engineering; we're not writing the docs specifically so that they can be consumed by an AI agent.

But it's something that we keep in mind to say, is it clear enough for both AI agents and humans to be able to parse this in a way that they get what they need? Which is why I say it's yet another persona that we have to think about. But again, it's almost enforcing good hygiene in that sense.

You should be citing sources. You should be linking between different areas [00:24:00] of the technical documentation to give people the amount of information they're looking for. Those are all things that we do, but we validate them with how people are interacting with the Kappa agent and how it's responding. It's a good gut check for us to say, are we doing our due diligence to make this information available?
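The property Amara selected for, answers that carry citations back to real pages, can be sketched with a toy retriever. The page URLs and the word-overlap matching here are made up for illustration; the point is only that cited sources come from the index itself, so the agent cannot cite a page that doesn't exist:

```python
# Sketch of citation-grounded answering: every answer carries the doc
# pages it was built from, so reviewers can spot-check claims at the source.

doc_index = {  # hypothetical docs pages and their content
    "https://docs.example.com/rest-api": "Authenticate with a bearer token.",
    "https://docs.example.com/deploy":   "Deploy diagrams via the CLI or API.",
}

def answer_with_citations(question):
    """Return (answer, sources); refuse rather than answer without a source."""
    # Naive keyword overlap standing in for real retrieval.
    words = {w.strip("?.,!").lower() for w in question.split() if len(w) > 3}
    matches = [
        (url, text) for url, text in doc_index.items()
        if words & {w.strip(".,") for w in text.lower().split()}
    ]
    if not matches:
        return "I don't know.", []
    answer = " ".join(text for _, text in matches)
    sources = [url for url, _ in matches]  # only URLs present in the index
    return answer, sources

ans, srcs = answer_with_citations("How do I authenticate?")
print(ans)
print(srcs)
```

Because `sources` is built from the index keys, a made-up URL is impossible by construction, which is the trust property being discussed.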

Simon Maple: I think with the citations, it's a really strong trust piece as well. Whereby if there isn't some static doc that says, these are the rules, this is what you have to do, this is the workflow, whatever, you're just relying on an LLM saying this is how you should do it.

And sometimes, yeah, people have different levels of trust in that. And I guess the fact that there's a citation really helps with that trust. I'd love to talk a little bit about trust generally. Have you seen varying levels of usage on the agent, in terms of people perhaps starting off not trusting it as much, then trusting it more and more as they get used to it?

Do you have any data that kind of shows the adoption based on potentially trust?

Amara Graham: It's all very [00:25:00] anecdotal in the sense that when we first launched it, I noticed people having kind of those conversations where they were playing with it. And I think it was, do I trust this thing? Or more generally, do I trust this kind of technology?

So I'm sure we've all been in a situation where we're looking at this thing going, is this AI agent gonna like work with me to give me what I need? Very early on, I saw people doing that. I think some of it was, even some of our internal employees, not doing things like in an adversarial sense, but again, just testing the limits of like, how accurate is this thing going to be?

So it's not something that I monitor quite closely, but again, it goes back to like spot checking some of those conversations, seeing how things are going. And just making sure that it's behaving. In that sense, it's almost like yet another report I'm managing. Like how's the performance going?

But no [00:26:00] specific like metrics around it. That said, there is a portion of the tool where I get source analytic information. So we have a number of different sources that Kappa can use, not just the documentation. And that's something that I monitor too, because I want the documentation to continue to be that source of truth and kind of the first place that it goes.

But it can also grab some information from other areas. That includes things like our status page, our forum, our marketplace, even our podcast transcripts. And when we start talking about places where we have community-generated content, and maybe things like forum discussions that are quite lengthy, we want to make sure that it's citing the most accurate thing, right?

So that's something that I watch to see. Is it favoring things outside of the docs? What are those pieces of information that it's favoring? And I can see it at quite a granular [00:27:00] level, like specific pages or specific areas of the forum where it's maybe leaning a bit more. And again, just spot checking to make sure that it's pulling up the best information, and it's not pulling up someone's anecdotes about how they're frustrated about their particular use case or their particular question.

And we don't want a tool that's reading that in and saying you can't use Camunda for this particular use case because exactly one individual ranted about it in some place. That's what I'm trying to avoid. Maybe it's a little bit of paranoia, but again, it's something that I look for.

Simon Maple: Absolutely.

It's interesting, right? Because the other thing you've got to be careful of there, of course, is the data that gets put into a forum. For example, a podcast, you control the content; your docs, you control the content. As soon as you start thinking about forums, my mind goes straight away to, okay, what if I could put a prompt injection into the forum, right?

Or potentially misleading data, or data that's maybe even sensitive, right? Because I think it's one of those things where it's user data at that stage. Within Kappa, are there any measures to take a look at [00:28:00] what kind of data it should and shouldn't look at? Is there anything that you can set to avoid using certain types of data?

Amara Graham: So it's going to come down to the source analytics. So I'm very careful about what additional sources I add in. But then of course, within the source configuration, you can make sure that, or I guess you can double check, do you want to pull in everything? Do you want to pull a subset of that information in?

So our community has been around for a very long time, and we have two different products at this point. Camunda 7 and Camunda 8 are two different code bases. And with that, we have a very large history on Camunda 7 and the different use cases that we enabled there, with Camunda 8 being the next product that we're supporting and a totally different code base. We wanted to make sure that it understood the difference between those two. So there are things that we did, like making a judgment call to say, let's only [00:29:00] reference the recent forum posts. What does that mean? How many items does that then pull in?

That's something that we're also conscious of. And then refreshing and checking: does it make sense for us to continue that strategy? So if I see something in the forum where I'm like, ooh, that's not the most accurate, we can go through and remove it from Kappa's data set. And again, that's the regular maintenance that goes on. You can't just deploy an AI agent and let it run.

I guess you could. I wouldn't do that. I want to have a little bit of due diligence there to make sure it's serving our users well. In that sense, does it have the freshest content, the most accurate content? And if we see it giving some strange answers, what do we need to do?

Do we need to remove that forum topic? Do we need to do a full refresh of recent forum topics? Do we need to limit what recent is and how we define it? All of those things go [00:30:00] into the discussions that are happening behind the scenes.
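The source-curation calls Amara describes, keep only recent forum posts, maintain an exclusion list for threads removed after review, and revisit what "recent" means, reduce to a simple filter. The post structure, dates, and cutoff below are illustrative assumptions, not Kappa's actual source configuration:

```python
from datetime import date, timedelta

# Invented forum posts; a real source sync would pull these from the forum API.
posts = [
    {"id": 101, "date": date(2024, 5, 1),  "product": "camunda-8", "title": "REST API auth"},
    {"id": 102, "date": date(2019, 3, 10), "product": "camunda-7", "title": "Old engine config"},
    {"id": 103, "date": date(2024, 4, 20), "product": "camunda-8", "title": "Misleading rant"},
]

# Threads manually removed from the data set after review.
excluded_ids = {103}

def select_sources(all_posts, today, recent_days=365):
    """Keep only recent, non-excluded posts; 'recent' is a judgment call
    that gets revisited as part of regular maintenance."""
    cutoff = today - timedelta(days=recent_days)
    return [
        p for p in all_posts
        if p["date"] >= cutoff and p["id"] not in excluded_ids
    ]

selected = select_sources(posts, today=date(2024, 6, 1))
print([p["id"] for p in selected])
```

Tightening `recent_days` is exactly the "limit what recent is and how we define it" lever: a smaller window drops stale Camunda 7 history at the cost of also dropping older but still-valid threads.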

Simon Maple: Really interesting. And I think there are a lot of challenges there, just in terms of making sure that not only is the data relevant, but is it at the same level in terms of versions and things like that. In terms of the interactions between the agent and the user: let's say I didn't give it enough information, and it has information on various versions. How good is it at asking clarifying questions and things like that? Will it go ahead and try to get more data from the user? Because I know a lot of LLMs are trigger happy in terms of giving information sometimes without enough data.

How does it deal with that?

Amara Graham: This implementation doesn't really do any additional questioning. I hesitate to say it's taking things at face value, but the way it responds is by giving the best answer that it can based on the information that you provided. But then it also very quickly says, for more details, you can refer [00:31:00] to the Camunda documentation.

You can maybe go to what it thinks is a relevant forum post. And then occasionally you'll see people engage in a dialogue with it and correct either the question that they asked, or if they're really using it as a super powered search box, you can see them start to add more terms to it.

So one of the topics that I was looking at today was around parameters in the API context, where it was very clear that someone was trying to get specific information, not only on what this parameter means or how you use it, but in a specific programming language context.

So Kappa responds with a general, here's what you do. Then that person came back and said, can I have it in Spring? And Kappa responded with a code snippet: if you have a Spring Boot application with Camunda, here's how you might use that [00:32:00] parameter. So in that sense, it's responding in a way that I think elicits the user to maybe give a little bit more information, provide a little bit more context.

It's not at a point where it's like hand holding them through that experience.

Simon Maple: Yeah, which, I guess, depends on the use case. From a documentation point of view, that might actually be okay, and that might actually be preferable. Because maybe 60 percent of the time it's going to provide you with the right doc straight away.

And if it's not the right doc or the relevant snippet, you could potentially say, oh, actually, here's a little bit more of a filter, so that you can work on this infrastructure or ecosystem, and then it can provide more detail there.

So I suppose, just in terms of the flow, it might actually be the better way of doing it.

Amara Graham: And again, it depends what you're looking for. It depends on your level of comfort with something like this. Even in this case that I'm thinking of right now, Kappa goes on to say that the code provided is a hypothetical example.

It might not work for your context, and it refers you to the official Camunda documentation. So it's really [00:33:00] landing very heavily on, here's what I think you're looking for, and please help me validate that it's correct and accurate. And also, do your homework. Don't just copy and paste this out of the UI and into your code, because maybe it doesn't work as intended.

Simon Maple: Amazing. We're going to jump into a little bit of a screen share straight after this. So for those listening on the podcast, hop over to our YouTube channel and you'll be able to see various dashboards and some data from Kappa. For those only listening on the podcast, thank you very much for listening.

Amara, it was a real pleasure chatting, and it was really great to hear how you're using the AI agent to provide answers to people with questions about your documentation and product usage. So thank you very much for sharing that. It's been a pleasure chatting.

Amara Graham: Yeah, thanks for having me. And I hope this encourages people to take that risk. I know, for people who own documentation like I do, I was a little bit nervous at first, but I think the tooling out there is really great. I think there are a lot of safeguards in place, particularly now, whether you're using Kappa or something else. People, use it.

We get a [00:34:00] lot of interaction, and I think it's helpful. So I definitely recommend it. But again, thanks for having me and letting me talk about it.

Simon Maple: Oh, no problem at all. And thank you, everyone, for listening. I hope you tune in to the next session. See you. Thanks.

Thanks for tuning in. Join us next time on the AI Native Dev brought to you by Tessl.
Podcast theme music by Transistor.fm.