Video: Celigo Connective AI Summit | Duration: 1:52:38 | Summary: Celigo Connective AI Summit | Chapters: Welcome to AI Summit (0:00), Speaker Introduction (1:50), Audience AI Experience (3:49), AI Experimentation Challenges (5:40), AI Implementation Challenges (8:59), LLM Platform Support (12:46), AI Implementation Challenges (15:00), Embedding AI Workflows (22:40), AI Tool Usage (24:17), SaaS AI Tools (25:32), AI Integration Challenges (30:05), AI Cost Implications (35:30), Technology Cycle Lessons (37:07), Data Readiness Challenges (38:29), Executive Brief Automation (40:28), AI Workflow Integration (43:45), AI in Finance (50:07), Operationalizing AI Applications (54:25), AI Implementation Experiences (57:05), Strategic Process Innovation (1:01:11), Implementing AI Solutions (1:03:48), AI Language Capabilities (1:07:18), Measuring AI Success (1:10:56), Future of Intelligent Automation (1:15:49)
Transcript for "Celigo Connective AI Summit": Okay. Hello. Welcome, everybody, to our AI Summit. Thanks to the people online that are joining us today. I think we've got a couple of hundred people from all over the country signed up for this. For those of you that are online, we also have a room full of people here in Atlanta, where we've been having conversations about a range of topics. But for the next couple of hours, we'll be talking about AI. I'm sure it's a topic nobody has heard of in the last couple of years, somewhat esoteric. But before we get into it, this is gonna be a very different session than the ones that the folks in the room have experienced so far. This is meant to be interactive. So I will be sharing some of my own opinions. I'll share some data. I'll share some points of view here. But the big thing here is to have essentially a round table of practitioners talking amongst ourselves, so that we all learn from each other. This also goes for the folks that are online. If you have questions, please feel free to put them in the Q&A section of your interface, and somebody here will read out questions and comments out loud, so you can definitely be part of the conversation. For those of you that are in the room right now, in front of you, you have these little cards, on the back of which there are QR codes, or one QR code. That QR code is important because it's gonna take you to our surveys. We have an interactive portion of this that involves you actually voting. Right? So it's only the one QR, scan it, and when the survey is ready, it'll pop up, and we're gonna start with that in just one second. So I think that's it for the housekeeping. I'm just making sure I got everything done here. So, you know, with that, let me introduce myself. My name is Ronen Venghosh, and I'm chief strategy officer at Celigo. I've been with the company for all of three months.
But I've been talking to CIOs and IT leaders about AI nonstop for the past two and a half years. In my previous role, I was leading the industry business at Ignite, my former employer, and this has been a topic that we've covered extensively. I spend much of my day in conversations with folks just like yourselves, trying to understand where the industry is going, trying to get a sense of what their challenges are on a day-to-day basis, and to be kind of a partner and help them figure out what seems to be a very murky future. Right? So I met with a CIO back in July, at a conference that I went to, who was telling me how they used to have a five-year planning horizon for their technology road map and so forth, and they've cut that down to two, because nobody exactly knows what's gonna happen. Right? And that's something that I see a lot in a lot of industries, in a lot of different roles. So it's something that we're all trying to figure out together. So with that, let's jump right in. You know, we have about a ninety-minute schedule for this session. And after me, we're gonna have Tony, who's gonna come back up on stage and talk about Celigo's AI roadmap. It's pretty exciting. I am very, very excited about where this is going and what role Celigo will play in this transformation, and we'll leave that to him. But what I want to focus on is actually the operational aspects of AI. Like, how do you actually make this work for your business? How do you transform your organizations? How do you serve your customers? And how do you do that effectively and sort, you know, the real stuff from the hype? And during this conversation, I'll also share with you how we at Celigo are doing this internally for ourselves. Right? So this is not just us as a technology vendor talking about how you should do this in principle.
This is us thinking about how we ourselves are transforming our business. Right? So let's start the conversation with just a good sense of who's in the audience here. So we've got our first survey up, and the question's pretty simple. We're asking, you know, how would you describe your level of expertise or engagement or development with AI? Are you just starting out? Are you experimenting? Do you have anything deployed in production? What is the current state of affairs? And let's see as the numbers come in. I see some folks are already in the system, and the numbers are updating live. Okay. Let's give it just one more second. I see we've got fifteen, sixteen responses. We need way more than that. Come on, people. Eighteen, nineteen. Okay. We might come back to this; we'll leave it up here for a second. But as you guys are answering these questions, let me ask the audience here in the room: who here answered "we're experimenting"? Can you raise your hands here? Okay. So it's a relatively small percentage of folks here. Oh, okay. Actually, maybe some shy people that didn't raise their hands initially, but okay. So a decent crowd. Who's got AI in production? Anywhere in production? Okay. Fantastic. This is a significant portion. Alright. I counted five folks here in the room. Maybe let me ask somebody who said that they're still in the experimentation phase, if you could describe where you are in a bit more detail. Do I have a volunteer? And for the first one to jump in the water, I actually got a gift card. Here you go. There you go. Hi. I got a big mouth. I will always talk to everybody. I'll give you a gift card for the big mouth. It's all good. Awesome. So, no.
The main place that I've ended up using AI was when I was trying to set things up; I would use the AI to ask questions on how to put together particular handlebars. But the major issue I kept running into was that it would use deprecated handlebars. And so whatever I made, I'd have to go back and re-figure out how to put it back together again, because it was using a handlebar that didn't actually exist. Okay. Very cool. Thanks for that. By the way, just to make sure that people understand the context of the conversation, we're not just talking about using AI within Celigo. We're talking about AI in general. This is how your organizations are handling AI, what you're experimenting with, what you're trying, what you're deploying. Who else is experimenting with AI right now at this stage? Go ahead. So we use it pretty extensively for speed in the organization with communication. I think it's very good at that, helping people craft emails that don't come across as "hey, you're an idiot" anymore. Okay. Oh, that's frowned upon these days, I guess. Okay. But the areas that I personally and my team are experimenting in, and that we've had trouble with, are trying to get it to do things that you would think are fairly straightforward tasks, because it is a computer; but because it's a language model, it's maybe not doing data very well, or manipulation very well. And so we're hitting roadblocks there, trying to use it, whether it's cleaning our data or evaluating our data in some other way. We're hitting roadblocks in it doing it correctly: often finding errors, maybe not doing it all the way. Okay. It's kind of doing the job as well as it thinks it needs to. Trying to understand the boundaries of the technology right now. And how do users react when they run into these boundaries? What are some of the responses that you've heard from users, and how are you guys addressing them?
For us, it's just creating frustration, because we're trying to do it for speed, right, to make the organization move faster, get answers faster. And it's just like, I should have just done this the hard way; I would be done at this point. So it's creating a lot of pushback against it. There are things that it's been very fast for, like email communication. You can give it an outline, basically, and it'll craft your email for you. That's been terrific. But the other, harder tasks, the team was excited about initially, and now they don't wanna even try anymore. Okay. And let me maybe ask one more question of you. Have you guys defined the use cases for the users? Or are you expecting the users to come up with those use cases themselves? At this point, we haven't defined the use cases, because we're still not sure exactly how we should use those as an organization. We're in the promotional products industry, and so there are some tools within the industry that seem interesting, but we need the users to test them and try them and say, what can I actually use this for? Also, we have NetSuite. We're finishing a NetSuite implementation, and so we have not begun to try and look at how AI can help us with that, because we're not done with our NetSuite yet. Right? So we don't wanna try and tinker with it before we've got to a point that we're comfortable. Okay. So what I'm hearing is: we've kind of given AI tools to our users, we haven't defined the use cases yet, we're not exactly sure where to go, there are a lot of other technology priorities happening simultaneously, and so you get the complexity. Very cool. Let's try to get somebody who has AI deployed in production. There were a few, okay. Here we go. Kumar? Before I answer the question, my name is Kumar Vador. I'm with LevelShift. We are a partner with Celigo.
If you go to our website, levelship.com, the AI agent is actually in production mode right now. So what we've done with that: pretty much all the services and implementations that we've done with our customers, the customer stories, we build and model with machine learning, all of that. And we have an AI agent that can answer questions spontaneously on the website, right there for our customers looking for information. Whether it's any kind of services or solutioning, any kind of integration that they need, any customer stories in the relevant technology field, they can just use a natural language prompt with the AI agent, and it will respond back saying, hey, these are the stories, or lead them to a white paper. Right? So that's kind of where we started, and it is in production right now. Very cool. And this is a very common use case that I hear from a lot of folks: when they want to deploy AI in production for customers, they start with automating the help desk, automating marketing-facing questions and answers. So that's a very common one, typically low-hanging fruit. Maybe a follow-up question to you, Kumar, and if you could give him the mic back. How long ago did you guys deploy this? Like, how new is this? This is brand new. I would say less than two months. Okay. And how long was the cycle between when you decided to deploy this and when it went into production? What did that take, if you could describe that? The implementation cycle was actually in weeks, but it took a lot of iteration to go through the leadership and the board to get approvals on so many different things. That was like a three-month effort. Correct. But the implementation itself wasn't. I love hearing that, because I think this is where we see a lot of the adoption challenges.
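The website agent Kumar describes follows a pattern commonly built as retrieval-augmented generation: find the most relevant customer story for a question, then have the LLM answer only from that source. Here is a minimal sketch; the toy word-overlap retriever, the sample documents, and all function names are illustrative assumptions, not LevelShift's actual implementation.

```python
# Minimal sketch of a retrieval-augmented Q&A agent like the one described:
# index customer stories, pull the most relevant one for a question, and
# assemble a prompt that grounds the LLM's answer in that source. The
# word-overlap scoring is a toy stand-in for vector-embedding search.
import re

def tokenize(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, documents):
    """Return the document sharing the most words with the question."""
    q = tokenize(question)
    return max(documents, key=lambda d: len(q & tokenize(d["body"])))

def build_prompt(question, doc):
    """Grounding the answer in a retrieved source limits hallucination."""
    return (f"Answer using only this source.\n"
            f"Source ({doc['title']}): {doc['body']}\n"
            f"Question: {question}")

stories = [
    {"title": "Retail integration",
     "body": "We connected NetSuite and Shopify for a retailer."},
    {"title": "HR automation",
     "body": "We automated onboarding workflows for an HR team."},
]

question = "Do you have stories about NetSuite integration?"
best = retrieve(question, stories)
print(build_prompt(question, best))
```

In production, the retrieval step would typically run against an embedding index, and the assembled prompt would go to whichever LLM the platform is configured to use.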
Like, even when the technology is ready, even when there's the willingness and the budget and so forth, the change management, the buy-in, the logistics, and the politics of getting that stuff done are not easy. There are a lot of people with a lot of questions. And the challenge to that is, as much as we would like to democratize the data, right, in terms of what the customers can see, it's important that we establish guardrails and safeguards in terms of, okay, what is sensitive versus confidential versus public information, and establish those guidelines. Right? That's why we needed to spend a lot of time getting these approvals, because we don't want to just put everything out as public information; there's intellectual property that we don't want other vendors and partners to be looking at, too. So that took a lot of time to segregate. And that's perfect, because a lot of organizations, and I'll give some examples later on in the presentation, a lot of organizations think about security, privacy, compliance. Compliance maybe not, but security and privacy sometimes are not at the forefront of folks' mindset when they're thinking about these deployments. And those guardrails are critical. And, yeah, I love hearing that from you. Sure. Thank you. Okay. So we've heard some examples here. I'm hoping this also helps to set the tone for what we're looking for. We're looking for an interactive conversation. It's okay to ask questions of each other. It's okay to exchange ideas. If you have a comment rather than a question, that's also legitimate. Right? So please chime in. Yeah. Go ahead, Kim. We have a question from the audience. So, Simon, this is our online audience. Yes. Yep. Simon's asking: there are so many different LLMs out there now, right? There's Claude, there's OpenAI, Gemini.
So the first question is, where are we placing our bets on LLMs? And then the next question is, are we seeing any particular adoption of one LLM over another? And this may be a question that Tony can answer as well. Yeah. So I'm assuming the question is with respect to where Celigo itself is placing its bets on LLMs. That's what I'm gonna answer. That's one aspect of it. And then, from a platform support perspective, what are our plans? So from a platform support perspective, which I think is where we need to start here, we're not gonna pick winners and losers. We ultimately are gonna support everything. And in fact, we do expect that a lot of customers will be choosing different models depending on the specific use cases that they want to pursue. And so we not only envision you all using any LLM that you choose, but changing LLMs depending on what it is that you're trying to achieve, specifically. We ourselves at Celigo, I think we're using Gemini and OpenAI as our two systems. But again, the platform itself is wide open. It needs to be that way. That's the nature of what we do. Right? We need to connect everything to everything. Okay. So, wow, we got 57 responses. That's nice. So just to summarize what we're seeing here: almost half are still experimenting. And I gotta say, this is actually lower than I've seen previously when I've asked this type of question. It does seem like this group is significantly more advanced. We've got 40% of people that are actually implementing AI in multiple areas. That's pretty cool. And then we've got zero people that are not planning to do anything. So that's definitely moving in the right direction, compared with some of the conversations that I've had previously. I'm really curious to hear from anybody who's saying that they're seeing measurable ROI.
Anybody like that in the room right now? Can you talk a little bit about that? Go ahead. Another area where we implemented an AI agent is to ease the number of questions that flow to HR. Okay. There are frequently asked questions to HR in terms of policies, the playbook, you know, upcoming holidays globally across the different centers, and so forth. Right? So we implemented an AI agent that talks to our SharePoint repository, where all the HR guidelines and documentation are saved. And that has reduced almost 80% of the frequently asked questions, emails, and chats via Teams messages to the HR department, because the AI agent pretty much furnishes the responses. So that is a pretty good, measurable thing, and the HR team can focus on other important areas. So you're measuring ROI by the number of questions saved. It's not a dollar ROI. You translate the number of questions into hours saved, or something like that? Exactly. You can translate that into time and effort. Yeah. That is actually one of the interesting questions, and we'll cover it a little later on: how do you measure ROI with these things? But, yeah, that's a good way to think about it. Any other comments on this? Yep. Oh, hold on just one sec, because the folks online can't hear you without the mic. Thank you. We are also NetSuite and Celigo partners, and we did three implementations where they can measure ROI. The first one: they have a feedback management system in NetSuite. So the agent analyzes feedback on a weekly basis and can compare quarter to quarter. So you can ask it, like, based on feedback from repeat customers, what is one thing we should focus on? And you get the answer. It's amazing.
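The "questions into hours saved" framing in this exchange is easy to make concrete. A back-of-the-envelope sketch follows; every number in it is an illustrative assumption, not a figure from the talk, except that the deflection rate echoes the roughly 80% reduction the speaker mentioned.

```python
# Back-of-the-envelope ROI for an FAQ-deflection agent, the way the
# panel frames it: questions deflected -> hours saved -> dollars.
# All inputs below are illustrative assumptions.

def faq_roi(monthly_questions, deflection_rate, minutes_per_answer, hourly_cost):
    """Translate deflected questions into hours and dollars saved per month."""
    deflected = monthly_questions * deflection_rate
    hours_saved = deflected * minutes_per_answer / 60
    return deflected, hours_saved, hours_saved * hourly_cost

deflected, hours, dollars = faq_roi(
    monthly_questions=500,   # HR questions per month (assumed)
    deflection_rate=0.8,     # the ~80% reduction mentioned by the speaker
    minutes_per_answer=6,    # average time to answer one by hand (assumed)
    hourly_cost=40.0,        # loaded hourly cost of HR staff (assumed)
)
print(deflected, hours, dollars)  # 400.0 questions, 40.0 hours, $1600.0/month
```

It is still a time-and-effort proxy rather than a measured dollar return, which is exactly the distinction the moderator raises next.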
And the second one is, we had to compare the balance sheet quarter to quarter, month to month, and see the variances, and so on, a lot of analysis. Can you talk a little bit more about that? That's a very interesting use case you've just mentioned. The balance sheet or the feedback? The balance sheet. So we export the report from NetSuite every day or every week. They can execute it, and then the AI, with a very interesting prompt, right, understands what's going on, and it can tell you areas in the balance sheet that are off, or that you should focus the analysis on. So a lot of time is saved. But also, the prompt was designed by the CFO, so he makes sure that all these analyses, you know, take into account what he wants to make sure they see. 100%. Yeah. It needs to come from the folks that know what they're talking about. Right? But it's a one-time prompt engineering effort, and then you keep working on that. Right? And how do you measure the ROI on this? Like, how do you know if you're successful or not? Well, that analysis always takes a lot of time, but then, making sure that they are looking at the right things. Right. Do you have, like, a before-and-after type thing? Hey, before it used to take us, you know, three days to do this thing, now it's taking us two and a half. Right? No. No. But I think we're pretty excited. Okay. Well, very good. Thanks for sharing that. I love that. So with that, to make sure we're on time here, let's talk a little bit about... this is where I get to speak for, like, five minutes, so you can relax. So, you know, one of the interesting things about AI these days is that it's everywhere except for in the productivity data. If you look at the US economy in general, if you're looking at companies, everybody's doing something with AI, but we're not yet seeing it in the numbers.
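One useful property of the balance-sheet use case just described is that it splits cleanly in two: the variance math is deterministic and cheap to compute in code, so only the flagged outliers need to go to the LLM, with the CFO's prompt, for narrative analysis. A sketch of that deterministic half; the line items and the 10% threshold are invented for illustration, not taken from the talk.

```python
# Sketch of the deterministic half of the balance-sheet workflow described:
# compute period-over-period changes in code, then hand only the flagged
# outliers to the LLM for narrative analysis. Threshold and accounts are
# illustrative assumptions.

def variances(prev, curr, threshold=0.10):
    """Return line items whose relative change exceeds the threshold."""
    flagged = {}
    for account, prev_val in prev.items():
        change = (curr[account] - prev_val) / prev_val
        if abs(change) > threshold:
            flagged[account] = round(change, 3)
    return flagged

q1 = {"Cash": 120_000, "Receivables": 80_000, "Inventory": 50_000}
q2 = {"Cash": 95_000, "Receivables": 83_000, "Inventory": 71_000}

print(variances(q1, q2))  # Cash and Inventory get flagged; Receivables do not
```

Doing the arithmetic outside the model also sidesteps the "language models don't do data manipulation well" roadblock raised earlier in the session.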
Very few companies can say, oh, you know, I've seen a 10% increase in my revenue, or a reduction in my cost. Right now, we're at the point in the cycle where this is measured in terms of vibes. Do we think we're heading in the right direction? Are we saving time? That's something we can measure easily. But it's still very hard to measure the actual return on investment in dollars. It'll come. Okay. We're not done. Right? This is early in the cycle. But when I look at the universe of folks that I've been talking with and the feedback that I'm getting, I see three big challenges to getting to this ROI, which is what we're all here for. Right? The first... come on. The first. Here we go. The first is the amorphous AI initiative. And I'm gonna ask a question. How many folks in this room had, you know, a C-level executive go to a conference, come back, and say, oh, we're gonna be an AI-first company? Raise your hands. Raise your hands. Come on, you liars. Everybody here has had that experience. This is something that happens all the time. Right? I had a customer come to me, like, six months ago, and say, you know, we need AI. And I said, cool. What do you wanna do? It's like, we don't know. The board says we need AI. I'm like, okay. Good luck with that. That's gonna end well. What you're gonna do is you're gonna, you know, run around, you're gonna give everybody ChatGPT or, you know, Claude or something. And you're gonna sit around and measure the number of questions that they ask. And, sweet. It's great. But that's not gonna show you ROI. Right? That's the issue. And it's not that it might not help those individuals. They absolutely will go to these tools if you don't give them to them. Right? There's gonna be shadow AI. But measuring ROI is gonna be very, very difficult. So that's one.
The second thing that I see, here we go, is folks that are going the other way. So these are folks that are fully bought in to this AI notion, and they just go all in. Like, we're gonna transform this business. It's gonna be transformed overnight. We're gonna do this thing, and this is gonna be an AI-first company by the end of the year. Well, very cool. Maybe you can do that, but change management is the thing that folks do not properly understand, or don't take into consideration, when they're talking about these major initiatives. Change management, in my opinion, is going to be the long pole in the tent for AI. It's not the technology, for the most part. For the most part; we'll talk about some exceptions to that. The next one... hey, it worked this time. That's very cool. The next issue is that when people are not confident in the technology, and they're not confident in their ability to actually correctly execute on that technology, they hedge. And what they do is they build parallel tracks, and they give employees more than one way to do something. Very often, and I heard this today, talking to somebody here: oh, you know, we tried the AI, it doesn't work, we go back to doing the thing that we did before. Right? It's a parallel track. It doesn't work. Right? What really needs to happen to get this ROI is to embed AI into the workflows, into the normal workflows. And the reason I'm saying this: again, this was last year, so it might not be relevant anymore, but I think it is. I was meeting with the CEO of an AI company. And he told me something that went something like this. I'm in the AI industry. I sell AI. I find myself forgetting to use the tools. It doesn't come to mind, so I don't do it. And that's a problem. Right?
If you can keep doing your work the way that you've been doing it forever, you're gonna keep doing your work the way that you've been doing it forever. Right? The true way to get transformation is to change the process fundamentally, you know, from the ground up. And we'll talk about how we do that, because that's not trivial in its own right. So these are, by the way, just insights that I've gathered personally from conversations with CIOs, IT leaders, and CEOs over the past two and a half years. I just wanted to share that here. You know, I'm gonna ask our next question here, which, again, this is all very interactive, and this is going somewhere. So what I want you to answer is: how many AI tools are you paying for? I'm not asking about the LLMs. I'm asking about SaaS solutions that you're paying for AI functionality within. Okay. Let's wait until the numbers come in. Three participants are in. Okay. Three, two, two. Okay. Four answers, five answers, six answers. I need some more numbers so I can draw some conclusions from this. Nine. Okay. Okay. 13. Let's wait till it gets to, like, twenty, and then we can come back to this after we're done talking about it. Okay. So the most popular number seems to be two. The second is none. I actually love that. And we have somebody who actually answered six. Does that person happen to be in the room? Okay. So, can I ask, how did you get to six? What was the process that got you to these six tools that you're essentially paying for, for AI? Yeah. Sometimes we stumble upon an issue with a particular application, and our instinct now is to look for an AI tool to solve that issue. One of the first tools that we implemented with AI was replacing our contract review and redlining. Okay. An attorney, using an AI attorney. Yep.
So now most of our contracts are redlined by an AI attorney before they go to an actual attorney. And every time we have a use case, we just add more tools. Okay. That's how we ended up with it. I love that. So you're experimenting, essentially, with a bunch of different tools. How do you decide which tools to purchase? Does this come from, I guess, your users? Does it come from IT? Like, who identifies the problem first? So the problem identification can come from anywhere. Okay. But then we go through our software selection process in order to decide which tool we're going to implement. So we, you know, Google, we ask ChatGPT what solutions exist, and then we start to score them, and then we select one. Okay. So you use AI to score the AI options. I love it. That's very meta. That's actually very cool, because what I'm hearing you say is that you're experimenting with a lot of different tools. Have you guys given thought to whether this is a permanent type of situation, or is this really because we're just learning the technology right now? So, I mean, we're learning the technology, but I don't envision a situation where we're ever going to go away from all of these tools completely, because we've become sort of dependent on them. We might replace one with a more robust tool, but there is going to be a tool. Okay. Very cool. Is anybody else following a similar process here in the room, by show of hands? Okay. Can you talk a little bit about your process? Waiting for the mic. Here we go. I completely agree with what she was saying in terms of software selection and evaluating tools. I kinda start there. I usually like to use systems that give you sources as well. Citations? Yeah, citations, references, so you can check.
You know, if it's coming from some user forum that no one's heard of, I usually ignore it, versus, like, Gartner or something like that. You go there and you can actually see what folks have said, or what editors have said, about a piece of software. So, yeah, I agree with the evaluation process. Okay. Fantastic. And how many tools have you guys purchased? You know, I put two there, but, honestly, I feel like every single SaaS platform that we use now has AI in it or is AI-enabled. And sometimes they do helpful things. Other times, you know, they're filling out the description or whatever you're trying to do. Man, I should pay you for that statement, because you're actually making a point that I'm about to make after this live event. We have so many from the virtual audience, from the cloud. Right? Similar thing. Right? Five apps, and they're all the ones we know and love: SAP, Salesforce, Workday, Copilot, and ServiceNow. So all of these, we know, have AI extensions that they're using. Okay. And I would love to hear from the person online. I know we can't have an interactive conversation here, but I'd love to hear about how they're making their choices, and why those. Like, is this the full universe of requests that came up? Or is this just the ones that got approved? Or is there a specific drive behind this? Maybe you could fill us in, Kim, if the person responds. And thanks for that question, or comment, I should say, from the cloud. Awesome. So one of the things that we're seeing right now, and kind of advancing to the next conversation here, is what I call the SaaS walled garden. Let's take a step back here and talk about what users want, based on these endless numbers of conversations that I've been having with folks. What people want is a ChatGPT for their life, or something close thereto. They want to go to one place.
They want to ask whatever it is that they wanna ask. And they want the systems to go behind the scenes, collect information, and bring it to them in a unified, clear, clean, reliable format. That's what everybody wants. But what they actually get is something a little different. Right? So every SaaS vendor on the planet, every SaaS vendor, is building AI tools into their product. And they're doing this by necessity. Like, anybody that doesn't do this is gonna be out of business. Nobody's model is gonna remain untouched. The problems with this are several. One, there's usually a tax attached to this. Right? So when I build AI capabilities into my product, I often charge you for this. Right? And so we find that a lot of the tools that folks are buying, not all of them, by the way, not by a long shot, but a lot of the tools that we're buying, are essentially a wrapper around the same technologies that we pay for anyway in our Anthropic licenses, our ChatGPTs, and so forth. In fact, I'll give a specific example here that we experienced. We have an ATS platform, an applicant tracking system, for our HR team. And they built some AI capabilities into the product. What is that AI capability? You can take the data on the applicants from the system and automatically generate an offer letter, for example. Okay. Useful tool; comes with a cost. I won't discuss the actual cost here, but it comes with a cost. Not huge, but still. But essentially, this could be done, if you had access to the systems, using, you know, ChatGPT or Claude or Gemini or any other LLM. Right? So there are a lot of these tools that exist. Everybody is forced to build AI capabilities into their products, and because of that, they're also forced to charge for them. And that creates redundancy and extra cost when you're looking at your IT stack in total. Right? But that's not the only thing that happens because of this.
The biggest issue is, in fact, that there's no unified AI experience. Right? Like, when I was talking to the CIO of a major construction company that's based right here in Atlanta, she was telling me that her biggest issue is training their employees to figure out where to go for what. Right? Like, if you have 17 different places to go for 17 different things, you don't know where to go. And not all of your employees are, you know, tech savvy and capable of understanding technology and the latest changes that happen every couple of months or every couple of weeks. Right? So that's a very big challenge, and, you know, when I was talking about change management being the long pole in the tent on AI, this is what I'm talking about. Right? It's the user experience, it's the engineering of the processes, it's getting folks to actually do their work using the systems that you're purchasing for them at a high cost. But it doesn't end there, because one of the biggest issues that you get here is fragmented data. Data is siloed given this SaaS AI walled garden. Every system: you go into Salesforce and you use the AI, you get responses from Salesforce. And you go into NetSuite and you'll get the same there. And you go into a third system and you'll get the same there. And guess what? The answers might not align. Nobody's making sure that the whole thing is functioning and working correctly. Right? And so you get all these issues on the back end that are simply being created here by market forces. Like, nobody's choosing to do this. It just happens because of the way that the system is set up. Right? So: manual access to multiple systems, lost information, conflicting, inconsistent information, training the users, all of this stuff, and then on top of that you get the hallucinations and so forth, and that's why you don't get adoption. This is the problem. Okay.
So now that I've spent the last, what was it, ten minutes talking about the problem, let's talk about some of the things that we can do to address this. But before we do, I'm really curious to hear what the audience has to say about this thesis that I just threw out there. Any volunteers? Yeah. Go ahead. Go ahead. You mentioned cost. I wanted to add something to that. What we are hearing from customers is, as a result of embracing AI... we are system integrators, right? So there is some increase in productivity and efficiency in terms of how we are generating code and how we are generating solutions. So customers are asking for a 30 to 40% reduction, a discount, as a result of using AI. So that's the reverse. Yeah. Is it working? Well, it's customers, so you need to take care of that. Yeah. In some form. Yeah. And I gotta say, that's a valid point. Right? It definitely plays both ways. I have yet to see a piece of software that was offered to us at a discount because there's AI. But, you know, one of the things, by the way, that I think will happen, and this is kind of beyond the scope of this conversation, but one of the things I think will happen, because AI makes it so much easier to produce a lot of new product, is that you're gonna see some overlap getting created between different software categories that previously were standalone. Right? And you're gonna see some IT organizations thinking about doing some in-house development on a more consistent basis. Right? So that might kinda go the other way, but we'll see. Yeah. Any other comments? Does anybody kind of experience this sort of thing with their users? Like this fragmentation issue. Is this something that's just in my head, or are you guys seeing it as well? I've got a gift card for the next person who gives me a response. I'm not above bribery. Okay. Sir, let's get you the mic.
So I may get a little bit philosophical here. But we've kind of seen this before, about thirty-five years ago, at the dawn of the Internet. You know, nobody knew where the train was going, but everybody was hopping on it. Right? So we don't really know what it can offer us yet. And so all these fragmentations here, all these problems that we're seeing, I think we can take a lesson from the past and see, hey, which businesses are successful today? How did they respond to this revolution or movement, you know, so many years ago? Yeah. Look, I think you're spot on. There's nothing new under the sun. Right? I mean, this is a technology cycle. I fully expect that these things will be resolved. They might even be resolved faster than in previous cycles. But given the state of the economy right now and the state of the development of the technology, these are issues that can be avoided, and we'll talk about that in a second, how not to repeat the mistakes. And, Kim, you have somebody online? I have a question from the audience. So, you mentioned data readiness. Right? So you have data fragmented, and in order for AI to be useful, it needs data. So the question is, you know, how do you approach the data readiness problem? Can you start experimenting and then fix it? You know, it's along those lines. So how do you think about data management as such? Look, I mean, the reality is I've never met any company, and maybe there's exceptions in the room, but I've never met any company that has told me, you know what, our data is in tip-top shape. Like, every person in every function that I ever speak to says, oh my god, our data is in trouble. We need to do this and we need to do that. And the reality is it's a never-ending process. Right? If you're gonna wait until you've got everything locked down, you won't be deploying AI anytime soon, or ever. Right?
So that's not the point. I think the point, rather, is that, a, you need to figure out how to harmonize your data across the different systems. You need to bring data from multiple sources rather than from a single place. And you need to think about this pragmatically. Right? It's like: the thing that I'm doing right now, can I get reliable answers? If I can't get reliable answers, then guess what? No adoption, user frustration, back to doing things the way we did before. Right? So that's kind of how I think about this issue. But, yeah. So how are we looking to deal with it? I'm gonna switch gears here for a second and tell you a little bit about what we at Celigo are doing in-house and how we are looking at our business and trying to embed and adopt AI at scale. After I talk through this, I'll ask what you guys are doing as well, because I'd love to get that feedback. But let me start with a little anecdote, which is not so little for me. It was a massive pain for me until three months ago. So, as I mentioned, I was at a company called Ignite for ten years. I left over the summer. Great company. And in the last year, I've done something on the order of 50 customer on-site visits. And every time I did a customer on-site visit, there was a team that created an executive brief for me so that I don't walk in there and, you know, do some damage. Right? So: where are we in the latest sales cycle? What does renewal look like? Do they have any open tickets? Do they have any product requests? Do they have the necessary adoption? Right? All of that stuff. And there was a team, usually comprised of folks in support, in customer success, in sales, that manually created this for me. It was a two-page document, kinda looked like this. And I'm guessing it probably took an hour, maybe a little more, for everybody to do this work. Okay? I'm one of 12 executives at the company.
All of us do these customer visits. And also, by the way, we do red account reviews for any accounts that we think are in trouble, on a regular basis. So add, I don't know, at least double that number and add those hours. And this was very painful, not to mention the fact that you probably have to wait, like, three days for all this information to get collected. So, when I started at Celigo and I had my first customer meeting coming up, I looked to my right, where my chief product officer, Matt, sits, and I said, hey, Matt. How do I prep for this? Like, where do I get this? He said, oh, I gotta show you something. And he took me... by the way, this is where I'm gonna do the most difficult feat known to humankind. I'm gonna try a real live demo over the internet during a live conference that's also broadcast online. So if it doesn't work, please have mercy. But let me show you what he showed me. I'm gonna take you to our production environment in Slack. Okay? And he said, do this. Type in the name of the customer. And I'm gonna type in a fake name here, because I don't wanna expose any data. Hey, it's working. What do you know? And what the system does now is it's going to our CRM system, and it's gonna give me a list of the customers that have that name. Right? So in this case, because it's a fake customer, there's no ambiguity, but it could come up with a list of five. Okay? And then what I need to do is choose this account, and once I've chosen that account, it takes a second to confirm, but I'll show you the output, because it takes about three minutes to run. It gives me this. No human created this. No human touched this. This goes into our various systems, and I'll go back in a second to show you the different systems that we used for this. And it collects information about all the stakeholders, and how adoption looks, and, you know, do they have any issues.
And, of course, it also goes online and brings me a summary of who this account is, what they do, just in case I've never met them before. And this takes all of three minutes. Okay. Now, you might not have a similar thing. Right? You might not be facing customers directly. Where's my slideshow? Here we go, back in the slideshow. But I'll bet you that there's workflows that collect information from multiple sources manually, and somebody's sitting there stitching them. I was talking to a customer earlier this week, before I flew out on Monday, who said, I have a team of 600 people that work in the field. They do inspections. And they go into the NetSuite mobile app and manually type out a report. And then somebody takes it and adds pictures and adds additional information, and we send it to the customer. Okay. This is one. Right? And I'm sure if I asked you, each of you, your wheels are turning, you're thinking, oh, okay, we can collect information from multiple sources, create a template. By the way, in the same vein, right now my team and I are working on creating a customer-facing presentation that we will be able to share with each of our customers during our quarterly business reviews. You know how much time we spend creating that stuff? But all this data is already there in all these different systems. Why are we having a human collect all this information? Makes no sense. Right? Automate that, and all of a sudden your customer success team has 20% more time to do what they need to do. That's kind of a big deal. Okay? So in this case, you saw the entry point in Slack. I go in there, I type the customer name. We go into Salesforce to bring in the sales data. We go into Gong. We analyze the calls, either the transcripts or the recordings themselves. We have Snowflake, where a lot of our usage data is maintained. We have Gainsight.
We have Zendesk. A lot of this information is there. It just needs to be brought together. Where is the AI? Where is the AI in all of this? That's kind of the key question. Can somebody suggest? Where is the AI? Well, no. It's not. We're using AI, but where in the process do we use it? The only place where we're using AI here is to collect the data and then format it into the deck that you're looking for. Right? So that's it. Everything else is deterministic, uses our platform as it normally does. We add ChatGPT as kind of the last step. So this is cool. Now, why am I bringing up this use case? I'm bringing it up because of a few things. If we talk about where we started, what is the problem with AI: this SaaS walled garden, the inability to unify data from multiple sources, the having to train people to go to a bunch of different entry points to access AI. Like, all of this stuff is gone. Essentially, we tell people, hey, you know how you use Slack? Go in there, type the name of the customer. It's done. Right? They don't have to manually go to all these systems. This happens for them. And importantly, this starts from the ground up. This was not an initiative by our IT team. This was an initiative from the operating teams. They want this. This was the very first, or one of the very first, applications like that that we created. Right now, we have about three dozen of these workflows or agents running in the business. Once this was deployed and people saw how awesome it is and how much time it can save them, guess what happened? Everybody started raising their hands. Like, we want to do this. Like, what can we do? So we have a team that implements this internally. We're hiring more people for that team because the demand is there. This is coming from the business, not from us. We didn't say, hey, thou shalt adopt this. No. It came from them. Right?
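The pattern described here, deterministic data gathering with the LLM used only as the final formatting step, can be sketched roughly as below. This is a minimal illustration, not the actual Celigo implementation: the system clients, field names, and account ID are all hypothetical stand-ins, and in a real build each fetch function would call the corresponding API (Salesforce, Gong, Zendesk, and so on).

```python
def fetch_crm(account_id):
    # Hypothetical stand-in for a Salesforce query.
    return {"stage": "Renewal", "arr": 120000}

def fetch_support(account_id):
    # Hypothetical stand-in for a Zendesk query.
    return {"open_tickets": 2}

def build_context(account_id):
    # Deterministic step: collect and merge data from each system.
    context = {}
    context.update(fetch_crm(account_id))
    context.update(fetch_support(account_id))
    return context

def format_brief(context, llm=None):
    # The only non-deterministic step: hand the merged context to an
    # LLM for summarization. Falls back to a plain rendering here.
    if llm is not None:
        return llm(f"Write a two-page executive brief from: {context}")
    return "\n".join(f"{k}: {v}" for k, v in sorted(context.items()))

brief = format_brief(build_context("ACME-001"))
```

The point of the structure is that everything up to the last step is ordinary, testable integration code; only the final formatting call is probabilistic, which keeps the blast radius of a hallucination small.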
Let me give you another example. Yeah. Go ahead. Yeah. Now, just a comment from our virtual group, from Nathan. This is around information and security. Right? So: proprietary information and security concerns are consistently the most significant challenges we encounter when attempting to utilize AI of any type. Yeah. Security is kind of a big deal. Is it a comment or a question? It's a comment. Yeah. And look, I mean, I think it's a very important comment here, because, hey, if you have access to all these systems, it needs to be thought out well. Right? You can't willy-nilly give everybody in the company access to everything, because you know it will be misused. You know that things will happen. In our case, this is something that everybody in the company has access to, but it's scoped down so that no damage is done. Of course, they can take the data and leak it, but we very consciously decided that this use case is acceptable. Yeah. There's a question here, for Mike, please. Mike. Here we go. Sorry. Just building up on that. I think it's not just security for the systems that you already have and the people who have access to it. But if you use AI, you send out that information to that AI. Right. And it will remember that. Absolutely. So for us, we obviously use our corporate tool for this. And our agreement says that no data is used to train the model, and all of that stuff. But if you're not comfortable with the use of AI in general, then it's a different matter. We have reviewed the legalese behind this. We feel comfortable and confident with that use of the data. There's another question right here. I was just curious if you have ever stumbled in a meeting because the interpretation that the AI did on the data was incorrect. I love that question. So this goes to the question of human in the loop. Okay?
AI will always make mistakes, for the foreseeable future, I should say. Like, I don't know what'll happen five years from now. But for the foreseeable future, mistakes will happen. And if you're gonna blindly rely on something that's in a report, yeah, you're gonna have issues. Right? So this needs to be reviewed by a human. Now, the level of review and the amount of work you invest needs to be aligned with the value of that, or the risk that's created by that. Right? Here, there's lesser risk, and we feel comfortable with that. I'll show you different situations in a second. Okay. Let me show you another one. I've got five or six of these, so if it gets boring, just tell me. I'll skip. But I think it's helpful to understand the breadth of what can be done here. This is all done, by the way, in our platform, in Celigo. Right? So, here's another one. This one is always something that folks are excited to hear about. So our finance team, our AR team, did an analysis of their incoming inquiries. And they found that 30% of all their inbox traffic for AR came from four simple inquiries. Four. Okay. Why don't I give a gift card to somebody who can guess what one of these is? Any guesses? What would be one? Kim, you're not eligible. I'm sorry. Give me a guess. Come on. Doesn't cost you anything. How much do I owe? How much do I owe? Close enough. Okay. You had a guess? No? Anybody else? Yeah. Go ahead. I'm sorry? The first use case is an auto display. I'll get to that in a second. But can anybody else... well, let me just save you the trouble. Okay? There's four of them. Right? Send me my invoice. Send me my statement. Can you send me your W-9? Can you update my billing information? Four items. 30% of traffic.
Somebody sits there constantly hammering away at the keyboard, looking things up in NetSuite, downloading stuff. Okay. What did we do? We built a listener that sits in the AR inbox. Okay? When it identifies these questions, it goes and takes autonomous action. Send me my invoice? Cool. Goes into NetSuite, downloads the invoice, attaches it to an email draft, and then stops. Right? So that's to your question. It doesn't automatically respond. Like, we do not feel comfortable sending an auto-generated email with financial information to a customer. But we feel very comfortable creating the draft that the AR person can then go in, validate, and, if they're happy with it, send. Right? How much time does that save? Like, do we really need humans to go into NetSuite to download statements or change billing information? Pretty simple stuff. When Tony comes on stage a bit later, he'll talk about some of the things we're building to make this kind of thing even easier and more powerful, which I'm super excited about. Let me give you another example. Have to press the right button. Okay. This is for support use cases. Okay? Support tickets come in. Rather than having the support agent start from scratch, we have an AI agent that takes a look at this first. Many of the questions are repeat questions that get asked regularly, and we can propose a very reasonable response to our human agent that they can take and run with. So we save them some time on doing the research, on composing the email, but again, we stop. We don't send this automatically to the customer. We have the human review, make sure that everything's hunky-dory, legitimate, and then they can approve it and send it out. If something like that works, or something like this thing works, it has very clear ROI that is very measurable.
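The AR-inbox listener described here, classify an incoming email into one of the four known intents, fetch the document, and deliberately stop at a draft for human review, could be sketched like this. The intent matching below uses simple keyword rules purely for illustration; the real system presumably uses an LLM classifier, and all names here are hypothetical.

```python
# Four known AR intents, matched by illustrative key phrases.
INTENTS = {
    "invoice": ["send me my invoice", "copy of the invoice"],
    "statement": ["send me my statement"],
    "w9": ["w-9", "w9"],
    "billing_update": ["update my billing"],
}

def classify(email_body):
    text = email_body.lower()
    for intent, phrases in INTENTS.items():
        if any(p in text for p in phrases):
            return intent
    return None  # unknown inquiry: leave it for a human, take no action

def handle(email_body, fetch_document):
    intent = classify(email_body)
    if intent is None:
        return None
    # Autonomous step: retrieve the document (e.g. from NetSuite).
    attachment = fetch_document(intent)
    # Deliberately stop here: produce a draft, never auto-send.
    return {"status": "draft", "intent": intent, "attachment": attachment}

draft = handle("Hi, could you send me my invoice for October?",
               fetch_document=lambda intent: f"{intent}.pdf")
```

The key design choice mirrors the talk: the automation goes all the way up to the draft, and the send button stays with the AR person.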
If we can get a response to a customer asking about their invoice a day earlier, our days sales outstanding are gonna be a day shorter. Okay? If we can get a faster response to a customer on an inquiry, our customer is gonna be happier. They don't wanna wait for somebody to get to them in the queue. Right? This shortens the queue. So that's what I'm talking about. This is what we call operationalizing AI. This is not some kind of highfalutin strategy that comes from the top that says we're gonna transform the organization. This is a very pragmatic approach that says we're gonna transform operations. We're gonna transform individual processes. Ultimately, we'll tie them all together. Right? They won't stand on their own. But that's where we're starting, and adoption then becomes much easier because, again, it's coming from the bottom up. People want this. It helps them. It saves us money. It saves us time, but it helps them do their job. And, you know, it's in the normal course of their daily activity. They don't have to remember. Like, none of these things require them to remember to do something. It's something that they would do anyways. They just get to do it easier. Rico, you had a comment there? Yeah. I just... and this is kinda going back to the security comment and question, because, you know, providing this type of functionality, like what we've seen, and like what I've seen in the partner community, is that, well, number one, from a security standpoint, education is important. You do need to educate your employees that, you know, if they're using a free version of ChatGPT and you're putting your customer data in there, that is being used to train the model. Right? You wanna be using the enterprise, you know, service that has a walled garden for that to be protected.
But part of what you wanna do, you know, from a security standpoint, is provide vetted tools that are usable to the end user, because if it's too complicated, that's when they start using unvetted tools and, you know, shadow IT and so on. And so part of the kind of soft power of security is to provide a tool mechanism in which the users have more agency and more control in what they're doing, but within a certain governance that the centralized IT team, or whoever, can control. So... Yep. 100%. I'll do one more here. This is the use case I was talking about before, where the ATS was asking us to pay for, you know, an additional AI charge or functionality, and we chose to do it ourselves. And the way we did it is on the Celigo platform, using the tools that we already paid for, which is our ChatGPT licenses. Right? Why pay somebody else for a wrapper of something that you're paying for already? Right? And the tool is there and it's available. So again, the interesting thing here is that all the information exists in our ATS. The templates exist in our Google Drive. We take it, we merge it, we send the DocuSign PDF. Done. Easy. Maybe one more. This is also a fun one. So I'm gonna come back from this trip. I'm gonna submit my expense report. If I'm an honest person and I've done my job right, no human's gonna look at it, because we have our expense policy that lives in our system. And it'll automatically review it. And if it's compliant, it'll automatically take action. Now note, by the way, the interesting thing here is that we consider this low risk. Right? We're happy for the AI to take automated action, whereas we wouldn't take automated action with the customer response, for example. Right? And this boundary is something that changes over time. Right?
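The expense-review example, automated action only when the report is fully compliant with policy, escalation to a human otherwise, amounts to a simple gate. A minimal sketch, with wholly illustrative policy categories and dollar limits (the real policy is whatever lives in your system):

```python
# Illustrative per-item spending limits, USD. Assumption, not real policy.
POLICY = {"meal": 75, "hotel": 300, "taxi": 100}

def review(report):
    violations = [
        item for item in report
        if item["category"] not in POLICY
        or item["amount"] > POLICY[item["category"]]
    ]
    if violations:
        # Non-compliant: route to a human reviewer, no automated action.
        return {"action": "escalate", "violations": violations}
    # Compliant, low-risk case: safe to take automated action.
    return {"action": "approve", "violations": []}

result = review([{"category": "meal", "amount": 42},
                 {"category": "hotel", "amount": 250}])
```

The interesting part is not the rules themselves but where the approve/escalate line sits; as the talk notes, that boundary can move over time as confidence in the automation grows.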
As we get more comfortable with the level of responses that we're getting from the AI, for example, on the AR inquiry, perhaps there will come a day when we feel solid enough that we'll allow the AI to automatically respond to these inquiries on its own. Not there yet, but it could happen. So I wanna pause here for a second and get a readout from the room. Like, is this resonating? How does that match with your experiences of AI adoption, of how you originate the ideas and the use cases? How do you solve the AI problems? What are the thoughts? Does this match with your experience? Is this different? Oh, you're a tough audience, people. You're a tough audience. Or it's late in the day. You're all waiting for the beer. There's a question from our virtual group. Okay. Virtual group to the rescue. Yes. Indeed. So it's a bit of a long one, so bear with me. A lot of what you showed here is sort of replacing human busy work. Right? Stuff that RPA kind of promised to do. And I think you touched on it just a minute ago. Has anybody thought about going beyond just sort of these use cases, into really innovating how a business process works? You touched on it, like, we're using human in the loop to verify. Yep. But how are we thinking about sort of that next step... Yeah. Business innovation? You know, I think it's a fantastic question. Right? And I think the use cases that you're seeing here... I'm not sure what is being waved at me. Oh, okay. So the use cases that you're seeing here are where the state of the art is for us today. Right? So we're, again, consciously taking an approach of operationalizing from the bottom up, automating specific processes, driven by initiatives from the various teams. Is there room for strategic overhaul of entire processes? Absolutely.
It's not the approach that we're taking, but if somebody has a vision for where this can go, then absolutely, that's a possibility. Right? So I mentioned a few minutes ago this conversation I was having with a customer regarding their field teams, and, you know, individuals filling out reports manually in NetSuite and then sending them to the back office, and somebody in the back office doing additional processing. This would be very amenable to this kind of use case, like the customer 360 type thing that I presented earlier. And in my opinion, that would be a strategic overhaul of the process. Right? Because the time difference would be dramatic. The cost difference would be dramatic. Right? So I do think that there is a lot of potential for that as well. Yep. Please go ahead. Hello again, Fran. Hello there. So, things are starting to click. I wanna know if I could get, like, a dummy-data version of it so I could dig into what you did. Because my brain goes to: okay, that's really awesome. I wanna try it. I have no idea how any of these things work. I just want to see the bits and bobs, even if it's, like, fake data, just so I can understand what's happening and build it myself. Yeah. So we are very interested in working with every customer that's interested, to understand how this is done. Do you wanna add something, Tony? Yeah. I was gonna say, a lot of the use cases you're seeing Ronan presenting today, the ones that we've been using internally, we plan to put on our marketplace as templates within the next week. So, if you are signed in and you go to the marketplace, maybe as early as tomorrow, but within the next week to be sure, making sure that we don't release anything that's too private.
But, of course, the patterns are there, and you'll be able to replicate the patterns very easily out of the templates that we're gonna provide. Because the specific flow that I have in mind is for something we do currently: all of our invoices are in Laserfiche, and we need to get them to the CSR. But Laserfiche's workflow management, yada yada, can't email the invoice file with our stamps on it. You have to download it. So I have a Python script that I've been running for the past two years to download the file somewhere and then email it. I could put it into Celigo, but I don't know how that document management portion would work. But I think the second one that you had, it's like you're looking for documents: monitoring email, putting documents in a place, and then emailing those documents. And that's what I'm trying to think through how to build now. Yep. All of that can be done. And what I'm hoping to do in this presentation is not to tell you, hey, build this. That's not the point. The point is to teach you how to fish. Like, we have building blocks for all of this. Right? Any application that you wanna connect, bring AI, bring workflow into this thing, and build the thing that's right for you. Like, I showed you what we've built for ourselves because, you know, I know what we built for ourselves and I like what we built for ourselves, but I'm not proposing you build this. Maybe you should, but that's not the objective here. How does this resonate? How does this story resonate? Bottom up? Okay. Well, let's expand on that a little bit. Give the guy a mic here. You can't just say it sounds great. What sounds great, and how do you know about this? Yeah. It sounds great because it's about recognizing the use case. You have to know your business process and then realize how you can apply AI in all of that.
So, I think it's making the connection between people and processes. What do people do? What is the busy work, like somebody else mentioned earlier? And how can we replace that using the tools that we have? Right. 100%. And the other thing that I'm hoping you take away from this is: don't pay for the same thing seven times. Right? Like, you already have the LLM. In many cases, you can just do this stuff. It's really not hard. Yeah. There was another question back there. Thank you. If you have a company that is distributed worldwide and speaks different languages, Portuguese, English, Spanish: are you finding that you have to build the same flows three times? Or are you finding that the natural language processing available in these AI agents is on par for different languages? Or do you find that maybe non-English-speaking companies may be disadvantaged, because they would have to build the flow three different times, potentially, to answer the question? You know, it's a great question. I don't know that I've thought of that. Tony, do you have any comments on this? Yeah. For sure, the LLMs are trained on all of the languages, as far as the corpus of what they have there. But what I do know is, from a lot of the exemplars I've seen in the market, depending on what industry you're in and the industry technology, you may still be better served by certain types of models. So, multi-language aside, if you're really trying to communicate in different languages with particular types of industry jargon, again, it may not be a good use case for an LLM generally. With that being said, there's no reason why you can't prompt in English to create it once and have the response language be a natural-language parameter that's passed into the prompt, so it understands how it should generate the output. So, yeah, it's definitely within the realm of possibility, but I think there's t's and c's in that answer. Yeah.
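Tony's "create it once, pass the language into the prompt" suggestion can be sketched as a single English-authored template parameterized on the reply language, rather than three separate flows. The template wording here is an illustrative assumption, not a Celigo artifact:

```python
# One prompt template, authored once in English; the output language
# is a runtime parameter. Template text is an illustrative assumption.
TEMPLATE = (
    "You are a support assistant. Answer the customer's question "
    "using only the context below. Respond in {language}.\n\n"
    "Context: {context}\n"
    "Question: {question}"
)

def build_prompt(question, context, language="English"):
    # Same flow for every locale; only the language parameter changes.
    return TEMPLATE.format(language=language, context=context,
                           question=question)

prompt_pt = build_prompt("Onde está minha fatura?",
                         context="Invoice #123 sent on 2024-05-01",
                         language="Portuguese")
```

The resulting prompt would then be sent to whatever LLM the flow uses; the caveat from the talk still applies, in that industry jargon in a given language may need model-specific evaluation.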
I've been doing a lot of work in Latin America in Portuguese and Spanish, and from a pure language perspective, using a ton of LLMs for that, it's been really excellent, at least between those three. It's not domain-specific knowledge, so that's a separate thing. But just purely in terms of the language, you know, these models are really, really well trained, in general. So... Great. There's another question over here. My question is: so we're bringing all this information from these different sources. Where is the consolidation and the mapping of the data happening? Yeah. Tony, do you wanna take this one? Let me let the master answer. Yeah. I think one of the key parts of how we're being successful with this is that we're being very selective about how much agency we give to the agent. Right? Because it is possible, and I'll talk about this a little bit more later, to just allow the agent to go get the data, figure out what data it needs, create the plan for how it should do that, create the plan for how it should map it. Or you can have much more precise ways that you're using the AI, to limit the amount of hallucination that's even possible within the particular step you've assigned it. And so we definitely have a mix, but I'll introduce the language of least agency as a great governance principle. If you can solve a use case... you know, AI is wonderful. Like, a lot of these examples, it's summarizing text, generating text from known context. Again, it's wonderful; just three years ago, you couldn't even imagine the scale with which these things could be done. But it only goes so far. Right? Least agency, again, is a good principle to say: let me use it where I can, precisely, to limit my exposure to risk. Awesome. Thank you, Tony. Tony, before you drop the mic, so to speak, maybe there's some questions. We have lots of questions now coming through from our virtual group.
But one is a little bit more technical. So, specifically, when Ronen is going through these use cases, what AI tools are we specifically referencing? Is it ChatGPT plugged into Celigo, or is it Celigo just mentioned in the way the data is transferred? We're within our integration platform using just great connectivity to interoperate with the LLMs of choice. And again, as Ronen mentioned, we don't want to be choosing which technology is used, because we're using different technologies, different LLMs, in different places, and every model is going to have some benefits, pluses and minuses. So, I'll talk to the camera a little bit since it came from the audience. Yes, so it's not a particular preference: we're using OpenAI for a lot of the ones you saw here. Again, it was just a starting choice, we had to start someplace, but we've had great success with it. In other places, we're using other models. But if you think about it, there's a flow, and a particular step in a flow gets data. So the data is acquired through various places, and we give it very precise instruction: do this with the specific, minimal set of data that I've given you, trying to adhere to the least agency principle. I think, again, that's where we found a sweet spot of some success. Yeah. And one of the things that I wanna emphasize here, as we're getting ready to wrap up this session and move into the roadmap conversation: none of what I showed you today requires any future roadmap. This is all available off the bat. You can go to your office tomorrow morning and do this. Right? Nothing, you know, futuristic about this. It's available right now. So, with that, just to make sure, I have one more oh, you know what? Before I do that, I just wanna give one last example here, because I know we have some e-commerce customers in the room. This is not one we implemented for ourselves.
We implemented this for a customer in e-commerce, specifically. But the use case is pretty straightforward. They have a product description that's kinda crappy. It's not marketing oriented. It's not customer oriented. But they've got a lot of storefronts. And they want to collect this information at scale across thousands of SKUs, enrich it, make it look nice, and then push it to all the storefronts and keep it in sync. This is something that was built using exactly the same approach. So I just thought I'd round out the conversation with that one, because I thought it might be of interest to some. Yeah. There was a question. Do you have a question or mic is coming over. Here we go. So this is another feature that's just there already for us. Yes, sir. It just needs to be built as a flow. Wow. Yeah. 100%. You'll wipe out some businesses with that. There are some very expensive businesses that do that. We've looked at them for our organization and decided against using them because of the cost. And this is something that, I think, we'll probably ask you about tomorrow. So I love it. Look, I mean, what I'm hoping is one of the biggest takeaways from this conversation for you all is that you're paying for the same thing multiple times. Like, you have this. If you have an LLM license and you have Celigo, you can do this. It's just a matter of putting together the right flows, and some trial and error. Right? So that's another important thing. Okay. Let's kinda wrap this up. We started the conversation with this issue of ROI. I wanna end it with that as well. And I wanna know how folks in the room are measuring success. Like, what does success mean to you in terms of AI? Is it just measuring the amount of time saved? Is it making your customers happier? Is it a cost reduction? Is anybody able to show revenue growth? Right?
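The e-commerce enrichment use case mentioned a moment ago, enrich once, publish everywhere, keep it in sync, can be sketched in a few lines. The storefront "connectors" here are just dicts, and `enrich` is a stub where a per-SKU LLM call would sit; none of this is the customer's actual implementation.

```python
# Sketch of the product-enrichment flow: pull thin product data, rewrite
# each description once, then push the same enriched copy to every
# storefront. Names and data are illustrative only.

def enrich(sku, raw_description):
    # In the real flow this would be one LLM call per SKU with a fixed
    # prompt (e.g. "Rewrite this for shoppers: ..."); stubbed here.
    return f"{raw_description.strip().capitalize()} (SKU {sku})"

def sync(products, storefronts):
    for sku, desc in products.items():
        polished = enrich(sku, desc)        # enrich once...
        for store in storefronts.values():  # ...publish everywhere
            store[sku] = polished

products = {"sku-1": "blue cotton tee"}
stores = {"shopify": {}, "amazon": {}}
sync(products, stores)
```

The shape is the point: the expensive AI work happens once per SKU, and fan-out to storefronts stays plain, deterministic integration.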
And this is not necessarily a question of where you are today, but how do you envision doing this? If you do this today, like, I'd love to hear about that as well. But even if you don't have that data yet. So let's give it another couple of seconds here. And a clear majority here going for time savings as the easiest-to-measure path. And it's not a surprise. Just given the state of the industry and the technology, we haven't yet figured out how to measure, you know, the dollar return yet. But hopefully, I've given you some ideas today with some of the use cases that I covered about how you'd go about doing that. Right? The AR inquiry use case, for example: days sales outstanding. Easy to do. The customer support element: is your time to handle a ticket shorter? Right? And that's the benefit of doing this as a process-oriented and operational AI strategy. Right? You're optimizing a specific process. You should be able to measure the results very concretely. Okay. I think this is probably good enough for this. I'm just gonna end on this one note. Hopefully, you've heard some things that are interesting for you. If you're interested in brainstorming about this some more, I will gladly take a meeting with all of you guys. I'll share some of the experiences that I've had with others. Happy to share best practices. Feel free to kinda do this and somebody will schedule you on my calendar and we'll talk. Yeah. Okay. So this is all the present. Are you all curious to hear about the future? Because the future is pretty cool. So let me welcome Tony to the stage. And Tony's gonna talk about the roadmap, and I'll come back after he's done to completely exhaust them. So now it's here. Yeah. We'll see how it all goes. You've exhausted me now. It's been exciting, actually.
It was great to hear a lot of the feedback from the group in the room and some of the questions online. I made some notes about different things. I was gonna revisit one, but maybe I'll get a slide or two in and then I'll visit a topic or two that came up in the last hour and a half. So, for those who didn't meet me in the morning online: Tony Curcio, Senior Director of Product Management here at Celigo. And I've been working with customers and observing the ways that we've been using AI internally. And I'm really pleased to be able to present to you what we want to do with Celigo as we move forward. We've been expanding the boundaries of the platform, and AI is really the next frontier for us to get into. We've been pretty successful with some of the things that Ronen described, and I think we have the requisite experience to do something interestingly different with the way that we could build product to make our own lives easier, as we have our own internal practitioners who build these things. And of course then take input from our customers and partners. Part of the reason why we wanted to have a session like today is so we can learn: where are you all, where are you on your journey, what are the things that are of most interest, where are you on measuring risk, portfolio, governance? So, again, a lot of topics, but we put all these under the umbrella of intelligent automation. Right? Automation is an objective of its own. It's also a technology type. But really what we need to do is make automation more intelligent: intelligent automation. So we'll talk more about that. That's who I am. Let's skip that. We're gonna be talking a little bit about the future. So please don't make purchase decisions based on anything you hear. Plans change. Of course, standard disclaimer language. But what I'll try to do, anytime I get up in a room like this, is say: this is the best we know of today.
And a lot of times we have conversations with you, we learn more, and then we adjust our plans a bit. So again, what you're hearing is our best-laid plans as of this moment in time. Alright. So we are really at a fundamental inflection point in the technology landscape. I think the things that we've seen in the last couple of years have been groundbreaking. If I think back to, you know, my heritage: I was a data engineer. I've been building integration flows myself since 1999. A little bit before that, if you wanna count mainframe, like, a long, long time. The pace of change and the rate with which things are evolving is very different. And the kind of things you could do as a data engineer with a data science background, creating linear regression models to understand patterns: wonderful math, wonderful science. But this whole gen AI revolution, again, it changes the boundaries of what we can imagine, what we can rely on technology to do for us, in ways that we couldn't have imagined before. And so, if you look back in time, there have been various moments like this, and someone mentioned the Internet in our prior session, about, you know, what we're gonna use the Internet for. And could you imagine? Like, my kids have not known a world without Amazon. Right? So they get some free press today. But, you know, Internet shopping as a whole thing. Right? A very early, visionary way to use new technology. Of course, there's always been commerce. Go back further in time: you know, it wasn't just the engine in the industrial revolution, but it was the concept of the need for something like a conveyor belt, to change the process to use assembly lines as the mode with which we operate. So, I really did like the question in the last session about, like, can you re-engineer a whole process? I think, you know, if you look back in history, there is nothing new under the sun. I liked it, Roy, when you said that. Right?
Like, we've seen these kinds of inflection points in the past, and we start to work differently because we see the changes that we can make, given that there's a new technology and new thinking applied to the technology. So, we can go through smart devices. Again, my kids knew maybe a year or two without a smartphone in their lives. Right? But that's the way the world works, social operations. So, what does this really mean for automation? What does this mean for integration technologies, integration platform as a service, within our particular domain? Again, we want to be able to think differently and lean into a future, but it is a future that's got a little bit of risk associated with it. Earlier, somebody in the room had presented about the challenges: it wasn't really working, and so their teams had lost confidence, and really they're just ignoring it. So we have to be mindful about how much risk we take, where we take it, what kind of value we're trying to get out of it, as we go into this brave new frontier and look for, well, is this the next conveyor belt that's going to revolutionize the way I operate? Right? I want to be able to redesign processes in an intelligent way to up-level the automation, extend the boundaries of the things that I can automate. So, as a proof point: you know, we spent a good bit of the last hour talking about ways that at Celigo we are using AI. Because we're a SaaS platform, we get to look at the way people are using our flow technologies and the kinds of patterns they're solving, which are not unlike what a lot of other customers have been using the existing technology to solve. So, as far as proof points of where it can be successful, I would say, yes, we've looked at that in the last hour, and we know a lot of customers who are now adopting things like OpenAI, Anthropic, Gemini within their flows, and they're getting different levels of success with very precise ways that they're looking to use those.
But another way I could show some success of how AI can be applied in the technology landscape is the way we've been using it in the product itself. So, if we look at just our integrator.io platform, and if you log in, you would have seen a lot of things over the past year that came out. The picture on the left, of course, is our CoPilot, which had a new facelift not too long ago, maybe last month. But there are ways that we've had this for a while: last March, so eighteen months ago, we announced our knowledge bot. Now it's part of what is the CoPilot, where you can get help in context. So, how do I do this particular thing in Celigo? What is the connection? You know, what kind of connections do you have? Any question you might have, as far as productivity goes, would be a very interesting point. Automated documentation: so this is true of flows, they're self-describing. You can just click the little button and it will give you a very good synopsis of what this flow is actually doing. So if you're in an organization where you've had to take on somebody else's integration flows, you didn't build them yourself, this is a great way to start learning what this thing does and to have the context to it. Okay, now I get what this is about. So, helping your teams. Code assistant, which actually came up over lunch; we were talking a little bit about this one: handlebar expression helpers, JavaScript helpers, SQL helpers, SOQL, GraphQL. Like, a lot of ways that we've augmented the tooling within the platform to help assist with getting to levels of productivity. Next-step suggestions: anytime you build a flow, you get those suggestions that come from things you've previously created, things from a marketplace. So, next best action: how do I start with just a, hey, we're looking to do something with product? And we immediately go scan, because we have a RAG-based solution of everything that we've built before and everything you've built in your account, which only you see.
Again, privacy matters in these places. And the suggestions you get are a combination of our learnings and your learnings, and, again, you get to reuse those. So there are a lot of ways: error classification, bread and butter for us; automated field mapping, of course. Ways that we can use AI to fundamentally get to higher levels of productivity within just the integration landscape. Okay. So we looked back, in the last hour or so, at the kinds of use cases to transform business operations, and the kinds of use cases that we have within the product to apply AI to get to better outcomes, for higher levels of productivity. So, let's just move everything over to AI and we're all good? But of course, we also hit some of the scenarios where it's, yeah, but there's risk and there's human in the loop. And I love the examples, you know, and Ronen, I thought you drew a great picture of where we at Celigo, within our operations team, as we look to operationalize AI, where do we perceive risk, where do we want the human in the loop, how much agency is it that we wanna give. Right? And again, I think the mantra of least agency is a very insightful way, I didn't create it, but I like it, of thinking about: yeah, I wanna try to give it the least amount that I can. I wanna embrace it. It does wonderful things with classification. It does wonderful things with text generation. But, again, I need to balance these out. So where's the perspective of balance? We'll talk a little bit more about that. But let's just make this point and say, again, given all of the proof points we have, I think we can accept, just like we saw with the industrial revolution and the Internet, that this is an inflection point that will make things different in the future. We should embrace it and find the opportunities that we can to accelerate our businesses.
And with respect to automating your business, this is a good axiom: we just want to get to higher levels of automation, expand the boundaries of what we can get to, and do so in a way that, again, minimizes risk and gets us those outcomes. So, where does that leave us? If we have an intelligent automation imperative, where we want to automate with AI where we can and get higher levels of predictable outcomes, we have to have AI that's predictable. And you can think about it as a scale. On the left, things are very predictable. On the right, things are autonomous; AI is making decisions. And so you'll always, in your organization, have things that, well, I could just perceive this as a flow or a composite service, an API. And these things can continue to operate as they should. They're rules-based orchestration. You're able to build them in a very specific model. The question about who does the mapping was a very interesting one. Right? Like, in a rules-based orchestration, our developers, your developers on your teams, are defining the rules that apply to your business. It's very specific, it's very testable, and it's very deterministic. You will always get that outcome, every time. And there will be standard use cases that fall into the automation set that are this predictable set. And then you could say, let's slide that slider all the way to the right and think about what are autonomous use cases. So, again, like we looked in the last hour at use cases that we've imagined within our business where we had confidence, but you could start to think about things like continuously forecasting demand in a particular area: what are the conditions, how does this change, how am I learning? So, could you set AI to something like that and get good outcomes? Maybe in some cases today, maybe in two years, maybe with much more regularity. So, you'll see that there are going to be use cases that are way on the right-hand side.
I can only solve this use case if AI is fully in charge of not just any particular step, but the whole plan. How does this data come together? When do I rerun this? What are learnings along the way that make me rethink? Actually, I should have gotten this other piece of data when I did that the first time. Because AI can reason for itself, again, it has the opportunity to be applied in use cases like that. Now, at the end of the day, you know, if you get me on a whiteboard, I'll say, for just about anything AI could do, if you give me a loop and a case statement, a switch statement, I could figure out a way to make it very predictable, and I could kind of shift it left to predictable outcomes. But that's assuming that I know everything at the time and I could imagine all of the various outcomes that are in the realm of possibility, which may not always be the case. And again, AI could be applied in these use cases. But as you slide it further and further to the right, you get more and more risk, right, because we do have the hallucinations. I was grateful for some of those data points. We've seen them ourselves. So, one of the comments that I had written down was: over the weekend, I was listening to Jensen from NVIDIA. He was being interviewed. And at the end of the interview, the question that he received was, how are you personally, in your life, using AI? And he said, for research. And so a couple of folks in the room also said for research. Like, I find that that's a really highly valuable use. And the case that came up before was scoring the proposals. Okay, I was doing a similar exercise not too long ago, scoring some research I was doing. And I said, oh, you know, give me these 10 different criteria. Go ahead and score it. So I started looking through the scores, and the scores are great and the categories are great. Like, you know, I'm the human in the loop in that particular scenario, but it totaled it wrong.
Like, if I had just looked at the total line, it would've been wrong. So it was great at categorization but not so good at math. Right? And so, where are you relying on the AI to do which particular things? And even in the research case, even when there's human in the loop, there's this risk for error. You know, I'm grateful that I said, wait, that total doesn't look right. Again, it requires that human intervention to make sure it's good. Okay. So we've got use cases on the left for predictable. This is what we know and we love. Years and years of integration technology, automation technology, that have done that. What's burst onto the scene is these autonomous agents, which inherently have some risk. But what does the future really look like? We think it's one where these things have to be in balance. Where I can say AI is just one of the tools I have that I could take advantage of. And the question before was, like, where is the mapping? Where is the information gathering? That doesn't need to be AI. That could be the tools that you have at your disposal, that we use as part of our integration automation every day. Is it good to give AI the ability to use a tool? For sure, yes. Does every way that I use AI always have to command all of the tools that could be at its disposal? For sure, no. Right? Again, we need to be practical in our approach for how we move forward with AI, so we can minimize the risk. Give it the agency for what only it can do. And then, beyond that agency, I don't have to trust it for these other things. I can define the path. I can give it this particular tool in this use case, because I know I have human in the loop. And so, not everything is going to look the same.
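The scoring story above, great categories, wrong total, has a simple deterministic fix: let the model do the scoring and recompute the arithmetic in code. The scorecard dict below just mimics a model response whose total doesn't match its own line items; the field names are illustrative.

```python
# Sketch: recompute the model's arithmetic instead of trusting it.
# The dict stands in for an LLM's structured response.

llm_scorecard = {
    "scores": {"clarity": 8, "feasibility": 7, "cost": 6},
    "total": 24,  # the model's own (wrong) sum of its line items
}

computed_total = sum(llm_scorecard["scores"].values())  # done in code: 21

if computed_total != llm_scorecard["total"]:
    # Deterministic guardrail: trust the recomputed value and flag the
    # record for the human in the loop, rather than passing a bad total on.
    llm_scorecard["total"] = computed_total
    llm_scorecard["needs_review"] = True
```

This is the same division of labor as in the anecdote: categorization stays with the model, addition stays with the machine that never gets it wrong.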
And what we want intelligent automation to be, for us, is flexible, where you could say, it's very easy in this Celigo portfolio for me to set this dial, this slider, where I want: in this particular use case this far, in that particular use case farther, in this particular use case I need more guardrails. And so, how do we give you the tools, you know, in flow construction, in API construction, where you can balance these things effectively? So that's how we view intelligent automation. And so, we're really pleased to be announcing here, for the first time in a public setting, that we're moving forward with expanding the portfolio further to introduce a new agent builder and a new MCP server. We think these are the fundamental building blocks of getting to the next phase of our journey as a platform, and the journey that we could take our customers on, as they go through this transformative step to get to intelligent automation. So let me walk a little bit through what we mean by an agent builder, then I'll spend a little bit of time on an MCP server. I think this language is somewhat common in the industry, but not everybody who's listening in online or in the room may be familiar with all of it. So, an agent builder really is this integrated user experience for abstracting away the complexities of working with LLMs, to have a really low-code approach to building the agent. Of course, there are always going to be things that you have to express in anything you're building, but to the degree that we can abstract out the complexity of working with LLM model one or two or three, we want to do the same thing for LLMs that we've done for connectors. Right? Just make it much more straightforward to work with. So, one goal: building agentic solutions fast. Again, a low-code approach to defining what is the prompt, what are the instructions, what are the tools it has that it could take advantage of.
So, today, really as a criticism of our own use, if I were to look at those flows, the person inside of our platform would have had to understand, well, it's the OpenAI spec, so they have a chat completion API, how does the chat completion API work? Yeah. We can just remove all of that complexity. So the agent builder will remove that. So working with an LLM is just as intuitive as working, really, with a prompt and a set of tools you can choose. Validate it works. One of the things I pride ourselves on at Celigo is our data-driven approach. Every one of our steps has preview built in. You can immediately see the results. Anytime you configure something, you can validate it at a very low cost. That preview button is just always there to kind of give you that early validation. I wanna go a step further with what we're doing with agents and say, hey, part of the exercise for you to validate the outcomes is gonna be: test with this model, test with that model. So you can almost think about shopping-cart-like experiences, where I could see two models side by side and I could say, what is the outcome that I see here? Is the outcome in generating an email for me better with this model than it was with that model? It may be within a particular single vendor, it might be across multiple vendors, where you want to test these outcomes. And then, of course, at the end of the day, the cost model of using LLM technology is all about tokens: how many tokens, input tokens, output tokens, a lot of complexity in there. We want to let you understand what the cost of using these is too. So again, if you think about that side by side, compare before you shop: what's the cost specifically of this particular outcome versus that outcome? We want to be able to give you the tools so you can make those evaluations very intelligently with the way that you're using the technology. Next one is enforcing policies: built-in guardrails.
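The side-by-side cost comparison Tony describes boils down to simple token math. The per-token prices below are made-up placeholders, not any vendor's real pricing, and the model names are invented; the shape of the calculation is the point.

```python
# Sketch of comparing two candidate models on cost before you commit.
# Prices are hypothetical $/1K tokens: (input, output) — plug in your rates.

PRICE_PER_1K = {
    "model-a": (0.005, 0.015),
    "model-b": (0.001, 0.004),
}

def run_cost(model, input_tokens, output_tokens):
    """Cost of one run, given the token counts the run actually used."""
    p_in, p_out = PRICE_PER_1K[model]
    return input_tokens / 1000 * p_in + output_tokens / 1000 * p_out

# Same prompt, two models: judge output quality by eye, cost by numbers.
for model in PRICE_PER_1K:
    print(model, round(run_cost(model, input_tokens=2000, output_tokens=500), 4))
```

Multiply a per-run cost like this by your expected monthly volume and the "compare before you shop" decision becomes concrete rather than a gut call.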
Again, this is, to me, when I think about least agency, the first guardrail is always: don't give it more responsibility than it should have. Right? Because then I can test everything else about this flow, and I can trust that those things will work the way I expect. But anytime I give something to the agent, again, I have to be able to say, how am I going to validate that the result conforms to the outcomes that our business needs? And that's where guardrails come into play. So, the industry has adopted a standard set: moderation APIs, shouldn't have certain types of language, PII. That was another question that came up. Maybe I'll go back to a note and say, for anything that we're doing in the platform today, for our use cases, we're using zero data retention. It doesn't train on our language. There is nothing that we're storing there. And so, again, we've been able to isolate, with our vendor of choice, very specific privacy terms under which we can operate. And if you're looking to do this, I would say, make a note, asterisk, circle that point: you want to make sure that you're using plans that don't train on your business data. So, guardrails beyond that, of course: again, protect PII, so it doesn't actually go out in an email where it shouldn't, right? And guardrails that are not prompt-based guardrails but rule-based guardrails. I thought the example of the approval for the expense report was a good one: if it's above $500, maybe I don't want to auto-approve, right? We don't really want to trust an AI to get that right. We already talked about how it doesn't do math well. Make that a very deterministic guardrail. Again, the ability to equip all of your agents with those. Trusting the trail: a very important part of using AI with confidence is going to be what tools it was executing, what decisions it made, where those decisions were impacting my business.
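The $500 expense-report example is exactly the kind of rule-based (not prompt-based) guardrail that can live in code, outside the model entirely. A minimal sketch, assuming a made-up `llm_decision` string coming back from an agent:

```python
# Sketch of a rule-based guardrail wrapped around an agent's recommendation.
# The deterministic rule wins over the model's output every time, so the
# agent can never auto-approve past the threshold no matter what it says.

APPROVAL_LIMIT = 500  # the $500 threshold from the expense-report example

def apply_guardrails(expense, llm_decision):
    """Return the final decision: rule first, model second."""
    if expense["amount"] > APPROVAL_LIMIT:
        return "escalate_to_human"
    return llm_decision
```

Because the threshold check is plain code, it's testable and deterministic in exactly the way a prompt instruction ("please don't approve anything over $500") is not.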
So, one of the things that we've been working on, that is going to be delivered in Q1, is our enterprise logging features, where for every one of our flows you'll be able to trace all the way through the flow; the trace key I talked about in the earlier session being followed from step one to step two to step three. It's exactly the kind of feature that we need to be able to trust and observe the choices that AI is making. So you have that step by step: these were the things I decided, this was the tool that I executed, this is what I got back from that execution, and then I made this recommendation and I sent out this particular email. Again, we want to make sure all of that is captured and very, very visible, so you have the audit that you need if there are ever questions about how your business is using AI and what decisions it is in charge of within your enterprise. And then, single vendor. So, you know, we could look at Celigo as a single vendor in a couple of different ways. We think LLM technology, generative AI, is fundamentally able to change the way we can automate, move these boundaries. And so, it's one of many integration patterns: an agentic workflow, or an agent, or a chain of agents, you know, there's a variety of different ways that you can describe this. We want to be the vendor who does that, and the vendor who does data integration and application integration. And so, one platform where you can get all of these automation patterns. Beyond that, though, we want to make it easy to start. And so, a lot of what we heard with some of the questions earlier in the session was: you're getting AI from your SaaS vendor of choice. Many organizations we've heard this from said, we haven't made a decision to go buy with one of the mega LLM vendors. We're just getting the SaaS in the applications of choice.
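The trace-key audit trail described above can be pictured as a log where every step an agent takes is appended under one trace id, so a single run can be replayed end to end. The field names and tool names here are illustrative, not the enterprise logging schema.

```python
# Sketch of a trace-keyed audit trail for agent steps: one trace id follows
# the run from step one to step three, recording each tool and its result.

import uuid
from datetime import datetime, timezone

audit_log = []

def record(trace_key, step, tool, detail):
    audit_log.append({
        "trace_key": trace_key,
        "step": step,
        "tool": tool,
        "detail": detail,
        "at": datetime.now(timezone.utc).isoformat(),
    })

trace = str(uuid.uuid4())
record(trace, 1, "crm.lookup", "fetched account Acme")
record(trace, 2, "llm.summarize", "drafted follow-up email")
record(trace, 3, "email.send", "sent to ap@acme.example")

# Replay one run end to end by its trace key:
steps = [e["tool"] for e in audit_log if e["trace_key"] == trace]
```

That replayable chain, which tool ran, what it decided, what it got back, is what makes the "how is your business using AI" audit question answerable.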
I was privy to a conversation that happened with another customer just last week, where they were saying, there's some AI in the organization over here, but it's really for that project, and we have project-specific budgets, and we need to actually buy it within the realm of, you know, where we're using it. And so, there are a lot of constraints within an organization depending on where the budget is and how you're working. And so, if you could say, actually, what I need is AI as part of my automation project, and Celigo is our automation vendor of choice, we'd like to get the LLM use from them. What we want to do is be able to give you those choices, and not just one of the mega vendors, but several of them. So, contracting through us is a way that you can do business, start small, grow to the use that you need, and then potentially go out beyond that and say, yep, we're ready to go get that big contract with whatever vendor for automating the rest of our enterprise. Okay. So that's agents. Maybe I'll take a pause there and just see if there's any questions. I think we have enough time for that. As you were talking, yeah, I remembered: a couple of weeks ago, I was speaking at the Microsoft Community Summit in Orlando. And the session right before mine was by some Microsoft people, and they were talking about agents and how to implement them and so forth. And one of the questions that came up was from a member of the audience who said, hey, listen, I really like what you have to say, but how do I avoid these hallucinations? And they were saying, look, if you prompt it just right, you're gonna get about 99% accuracy. And I immediately thought, okay, like, if I'm an e-commerce vendor and I get 99% accuracy, does that mean that 1% of all my orders automatically goes off the rails? Right? And I was like, okay, this is not the world that we live in. Like, this is not the operational world. It's still kind of a technology thing. Right?
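To make the "1% of orders off the rails" point concrete, a quick back-of-envelope. The weekly order volume is an assumed example figure, not a number from any customer:

```python
# Back-of-envelope: at scale, a "small" error rate is a real weekly workload.
# orders_per_week is an assumed illustrative volume.

orders_per_week = 10_000

for accuracy in (0.99, 0.998):
    failures = orders_per_week * (1 - accuracy)
    print(f"{accuracy:.1%} accurate -> {failures:.0f} bad orders per week")
```

Even at 99.8%, that's roughly twenty orders a week that someone has to find, explain, and fix, which is why "just prompt it right" isn't an operational answer on its own.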
So as you were talking about the builder and the agents and, you know, choosing between agency and predictability and determinism, man, that totally resonated. Yes. And the percentages are really scary too, Ronen. I appreciate the point, because if you think about it: I do 10,000 of these every week, and the stat is 99.8% no hallucinations. That 0.2% is really meaningful all of a sudden. And so again, you really do need to be selective, and I'm a believer that in three years we'll have different conversations. But what we know today is the technology is far enough along that we can be successful today in the kinds of use cases we can imagine, with the kinds of controls around them that we can also inspire and require as part of the governance processes for our enterprise. Okay. So, let's move on from agent building. Again, big deliverable; we're looking to bring this to market in the very near future. I won't say a particular date right now, unless you think I should, but let's just say very early next year we'll be talking about Celigo Agent Builder. Celigo MCP server. So if you're not yet familiar with MCP, you might have been on vacation in April, because it seemed like every vendor and their brother were talking about their model context protocol support. Model context protocol being something that Anthropic introduced as a standard about a year ago, and in that past year, it's won the VHS-Betamax war. It is VHS; it's going to survive; it's going to be the protocol du jour going forward. But what it's for is: how does an agent, an LLM, communicate with a tool? A tool can be anything that is exposed as a composite service, an API, a thing that it can execute to go get data or kick off a process, take an action. And so model context protocol quickly emerged as the standard by which these LLMs can take action. And when it was introduced in November, it didn't really have a lot of security around it.
Back in June, they augmented the standard to introduce how you do authentication. Back in March, they got to the latest levels of protocol support for how we could use it on cloud. Now, because it's emerged as a standard, we imagine great ways that we can equip the thousand connectors that we have in the platform to be reused as endpoints, as tools, in various LLMs. So, imagine you're building an agent, and again, you've got our thousand connectors in the box; any one of those could be a tool within that agent. But if you're building agents in other applications and you need connectivity to one of these other SaaS applications, one of these other on-premise applications, to get data from an FTP server, to get data from a database that's heritage on-prem, again, we want to be able to equip you to get use of our connectors for those use cases too. So, not just within our platform, but agents you're building in other platforms. Again, other SaaS vendors will provide ways to build agents, which may make sense within their user experiences: we have a particular user, this is the way that vendor does it, they have an LLM and it's pluggable. Again, we want to be able to help in those scenarios too. And that's why an MCP server is very interesting technology for us, to participate as a partner of yours and help you with those use cases as well. So, publish tools safely. One of the ways that model context protocol and LLMs generally break down, and it was evidenced in one of the scenarios that came up earlier, is that after a while it kind of forgets. Right? It loses focus. You don't necessarily want to give it too much data. So, again, least agency is also about curating the dataset that I'm having it act on. Another way to apply least agency is to reduce the number of tools that are made available.
So, let's say in my company, I have an HR team that has this particular use case, and I want one tool available for that use case, and I have another use case within that same HR team for which three tools should be available. I want to be able to have these virtual MCP servers around my organization that decrease the surface area with which AI can get confused. So, one of the ways that we are looking at that is, we can publish endpoints that actually reduce the surface area of the tools that are available. If you've studied the MCP server protocol, of course, what you do is you say, hey, MCP server, give me your tool set, and it responds back with lots of information about every tool so I can make the best choice. That's overloading it with a lot of data. We want to be able to be very precise in the way that we can help you equip those tools to not make bad decisions, reduce the number of tools that it can use, very particularly for that. And again, there are other controls in the LLMs, but this will be an important one as well, for us to be able to isolate by use case. So, expose the tools that you need in a safe way that's authorized, and be able to reduce the scope with which they can be published on a use-case-by-use-case basis. And then, of course, because we've seen such broad adoption of the MCP protocol across the enterprise, across the tech landscape, we know this is going to be a highly reusable way that we could operate. So, by us being an MCP server in this particular protocol, the LLM is the MCP host that uses a client to communicate to us. Again, this is the pattern that most vendors are now adopting. And again, we'll be pluggable to basically any LLM, any SaaS application that has extensibility to go call a tool externally, and we'll be able to provide you with that connectivity layer for those choices.
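The "virtual MCP server" idea above can be sketched as a per-use-case allowlist over one tool catalog: each agent's `tools/list` response only advertises the tools its use case needs. This is an illustrative sketch, not Celigo's implementation; the tool names, use cases, and schemas are hypothetical.

```python
# One catalog of tools, in the shape an MCP tools/list result uses
# (name, description, inputSchema). All entries here are hypothetical.
FULL_CATALOG = [
    {"name": "hr_lookup_employee", "description": "Fetch an employee record",
     "inputSchema": {"type": "object",
                     "properties": {"email": {"type": "string"}}}},
    {"name": "hr_update_pto", "description": "Adjust a PTO balance",
     "inputSchema": {"type": "object",
                     "properties": {"email": {"type": "string"},
                                    "days": {"type": "number"}}}},
    {"name": "erp_create_invoice", "description": "Create an invoice in the ERP",
     "inputSchema": {"type": "object",
                     "properties": {"amount": {"type": "number"}}}},
]

# Least agency: each use case is granted only the tools it actually needs.
USE_CASE_ALLOWLIST = {
    "hr_onboarding": {"hr_lookup_employee"},
    "hr_pto_bot": {"hr_lookup_employee", "hr_update_pto"},
}

def tools_list(use_case: str) -> dict:
    """Build the tools/list result an agent for this use case would see."""
    allowed = USE_CASE_ALLOWLIST.get(use_case, set())
    return {"tools": [t for t in FULL_CATALOG if t["name"] in allowed]}

# The onboarding agent sees one tool; the PTO agent sees two; an
# unknown use case sees none. The ERP tool is never exposed to HR agents.
print(len(tools_list("hr_onboarding")["tools"]))  # 1
print(len(tools_list("hr_pto_bot")["tools"]))     # 2
```

The payoff is exactly the point made above: a smaller advertised tool set means less metadata flooding the model's context and fewer chances for it to pick the wrong tool.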
So where that leaves us, really, is an integration platform that's multi-dimensional, capable of a variety of integration patterns, with agent builder up at the top being the net-new one that we'll bring into the platform; we're very excited about the opportunities. Again, our own internal explorations have got us very excited that we are going to be able to help our customers with that. Application integration is our heritage, where we grew up, working on very hard, complex ERP workloads. API management is the area that we really energized in the past twelve months; we extended that with the API builder that we talked about today in the session here in Atlanta, as well as our API management capabilities, so you can govern those things, secure them, socialize them at scale. Our data integration capability is one that's been very prominently leveraged. We've been building bulk-load, bulk-extract capabilities for various connectors, and we're really, really excited that just recently we've been able to go into private preview for some new data ingestion experiences. We won't have the opportunity to talk about that in today's broadcast. But again, that's a very exciting place for us: being able to accelerate how I can move lots of data from applications, select objects en masse and say keep these things in sync for me, all with active metadata, data governance standards, understanding of schema drift. There are some very exciting things that we're doing in that space too, and I'm looking forward to our next webcast where we can talk about that. And then circling back up to, you know, really, AI is going to be at the basis of everything that we're doing as a platform as we go forward. And so all of these use cases are very close to, hey, maybe there's an agent that I should be applying in the middle of this particular API, in the middle of this flow. So I can slide that slider where I need to, combining this agent with our deterministic workflows.
So, I know we didn't get into all of the other things on the roadmap, but I think this is a pretty good view of where we are on the AI journey. And again, looking forward to the beginning of next year, where we get some of these things in your hands as well. So with that, I'm gonna bring Ronen back up, and maybe some other questions or other things to talk about. I think we're nearly out of time, but, how cool is that? I am super excited about everything that's coming up. As you can see, we haven't been sitting quietly and waiting for stuff to happen. There's a lot of stuff being built, a lot of very exciting products being developed. So, Tony, I really appreciate you walking us through that. Tony is the master of the future, and I'm the master of the present: I talk to you about what we have, and Tony talks to you about what is coming. He's always got more exciting things to say than I do. If there are any questions, we're happy to take them here. If not, I want to do a little bit of a wrap-up of the session. Anybody want to raise any questions? Ask away. Somebody's gotta ask about the beer. Where's the beer? For those that are in the room. Okay. So, yeah. Let's assume you're all exhausted, because I know I am. It's been a long day. But, you know, before we wrap up, I just wanna go back and leave on the screen the one slide here. Where is it? Here we go. This is the one. So, you know, we spent a lot of time talking about AI, about the capabilities we have today, about the things that are coming in the future. A lot is going on. There's a lot of confusion in the industry. You're probably all trying to figure out your strategies here. We're happy to talk. So if you're interested in a conversation with me or some of the other folks in the room, please scan this.
We're happy to get together and talk in more detail about how this stuff applies to your business. We're always here to help. But kind of to summarize the AI portion of the day, I just wanna say a couple of things. Hopefully, you've seen that there's a very wide range of experience in the industry. Right? Some folks are just getting started. If you're just getting started, you're not behind; you are experimenting. Virtually zero people responded that they're not planning to do anything, so everybody is in some stage of experimentation. The technology is in a lot of flux. Right? So experimentation is a good strategy right now. I'm hoping that you took away that there are things you can do today. And my number one takeaway, or the thing that I'm most passionate about, is this approach of operational AI: initiatives rising from the process side, from the ground up, automating specific processes, unifying the data across the different sources rather than having multiple entry points into AI, and then not paying several times for a wrapper for your LLM. I think those are the big takeaways that I'm hoping you take from my section. And again, these are building blocks. Invent your own Lego. Right? You can do whatever you want with this, and you don't need to be tied down to the specific examples that I illustrated. So, with that, I'm gonna thank our web audience. Thank you for sticking with us for such a long session. We really appreciate the questions and engagement that came from you, and we really appreciate the audience in the room for the conversation and the live discussion. Thanks again. And for you guys, we have some beer and snacks, so please join us for that. And, I think, Lisa, we're signing off on the website. So thanks all for being with us, and have a fantastic day.