Video: Winter Platform Product Update | Duration: 3328s | Summary: Winter Platform Product Update | Chapters: Platform Update Introduction (15.135s), Platform Updates Unveiled (190.215s), Drag and Drop (429.04s), Lookup Cache Feature (584.91s), Lookup Cache Demonstration (788.86005s), Active Directory Connector (1238.475s), Enhancing Integration Features (1438.885s), AI Enhancements Showcase (1896.945s), Community and Resources (2111.9001s), Q&A and Feedback (2452.7s), New Flow Builder (2594.7551s), Future Enhancements Preview (3032.895s), Lookup Cache Sharing (3076.4s), Conclusion and Resources (3198.5151s), Concluding Remarks (3279.905s)
Transcript for "Winter Platform Product Update": Hey, everybody. So good morning, good afternoon, good evening, depending on where in the world you are. And on behalf of all of us at Celigo, I'd like to thank you for taking the time to join us for today's platform update. My name is Dave Wallen. I'm the director of product marketing here at Celigo. And along with me today, I've got our primary presenters, two of the best product leaders in the industry, Tony Curcio and Tyler Lamparter. Introduce yourselves. Sure. Thanks, Dave. Tony Curcio. I am our senior director for the Celigo platform, and I help with a lot of the investments and strategy for where we're going. Hopefully, we'll catch up with each other at some point. We'll share contact info. Please always feel free to reach out. Love to make connections with customers and see what's on your mind. Thanks, Dave. Tyler? Yeah. And I'm Tyler Lamparter. I'm a principal product manager here at Celigo. Been here for about three years, but I've used Celigo for probably six or seven years now. So many of you probably know me from interacting with you in the community or various other places. So good to be here again and give some more product updates. Awesome, man. Since we haven't done an update since early last fall, we've got a particularly full agenda. We've got some great demos. But before we get into that, a couple of quick housekeeping items. There is an attendee engagement bar on the right side of your screen, and here you can access the session group chat. You can submit questions to the Q&A, and please do use the Q&A rather than the session chat because it's easier for us to manage it that way, and you can view and download other resources that we've attached, including some links to some new resources for you. We do encourage you to use the session chat throughout this webinar to drop comments and share your thoughts with fellow attendees. 
At the end, there will be a survey, and we really would appreciate you taking the time to share your feedback so we can make this the right format for you. So, from an agenda point of view, we're gonna start with a look at the most important platform updates from these last few releases. We'll highlight some advances we've made in connectivity and AI, followed by some updates to our B2B Manager product. And we've also got an exciting announcement about a major upgrade to our Celigo community, which I expect everybody will really be excited about. And then we'll round things out with a final Q&A. So I think we're ready to get started. Tony? Yeah. Thanks, David. We'll start off with the platform updates, and, of course, there have been a lot of things going on. So I'm really pleased to be able to take some time, and I appreciate you joining today to learn. A lot of opportunities for follow-ups too. So if you have those needs, of course, reach out to account managers or directly to the team you see on the call here. Any of us would be happy to engage and follow up, and we'll have a Q&A like David said at the end here too. Alright. So for platform updates, really the largest announcement we've done in some time is Celigo Private Cloud. One of the things that we were looking at is a number of requests from different customers across the globe to have in-country processing for the countries they were in. As folks may know, we're primarily located in AWS regions in North America, Europe, and APAC. But those three sites aren't covering all of the needs. And hopefully you've also seen a recent announcement that we just kick-started a new set of business development opportunities in Latin America. And so, with so many people asking questions, we wanted to see how we could scale out to more regions more quickly. 
And now, riding on just the wonderfulness, really, of being with AWS and the way that they make their data centers largely consistent across the globe, it opened up some avenues for us to now be able to deploy Celigo in 20-plus countries in a private cloud edition. So, normally, all customers to date are public cloud. And so if you do have, excuse me, in-country region needs, where you're looking at a larger footprint for lots of activity, but you'd like that to run locally, this may be a fit for you. And so I think you see some of the advantages there with respect to private cloud. K. Next slide. Another big request is multiple environments. Largely to date, we have a lot of customers that are deployed with sandbox and production. And sandbox was wonderful for doing your non-prod activities. We have great cloning features to promote and do pull requests between platforms, versioning, all of that life cycle management. But if you are a customer who needed specifically to build deployment chains like you have with dev, test, QA, pre-prod, prod, performance, this variety of needs, we didn't have a great solution for you there. There were a couple of different ways we could handle that. But what we now have is a very flexible model where you can deploy multiple environments and configure these to really match the way you do the software development life cycle across all of your other application development activities. So we were pleased to launch this in January. We're still working on the migration process for existing customers. When that launches, and it should be within the next month or so, you'll get an in-app notification that lets you know you're ready for that migration. There'll be a little click, a little assessment that tells you if there are any cleanup activities you'll need to do post-migrate. 
And so we're just working out all of the kinks in the migration process, and then we'll be announcing, again, hopefully in the April time frame, that you can get those benefits. And one of the very interesting features: let's say you already have a sandbox and you'll be getting the multi-environment benefit. We're doing true isolation of those environments. And so today, you know, if you want to have somebody isolated with only privileges in the sandbox, that's basically problematic with the older configuration model. With the new multi-environment, these are true isolated instances. You still get the benefits of all that integration life cycle management to do pull requests and clone across environments. That will be there. But the additional benefit you get is true RBAC isolation for invitations, so people can only be in the dev environment. And then, again, maybe not in your QA and not in your pre-prod and not in your prod. Right? Again, the admins in those particular non-production environments can be authorized to invite people exclusively to those spaces. And, again, it has all the granular permissions that you would want, you know, within that domain. And so, again, this is a great additional security feature in addition to the flexibility that it provides across environments. So, again, really excited about this one. Alright, Tyler. Next slide. It'll take a moment here. Apologies for that. So, drag and drop for branched flows. We did introduce this back in November. A few of these things are more recent, like multi-environments, which just launched in January, but this one's been out there for a couple of months. We thought it really important to profile because, you know, I had a conversation or two with customers, and they said, yep, you know, the inability to do drag and drop across branched flows just becomes very problematic for us when doing maintenance activities. 
And so, now that we have this feature, it's basically all the fluidity that you would expect of being able to maintain and reorder your flows. It maintains the connections with your transformations and results mappings and filters, etcetera, as you would expect. And so, again, it works as it would in a normal flow. That feature is available for branched flows now. K. Next slide. Handlebars enhancements. So, two different ones, and I don't remember exactly some of the time frames on these. I believe the first one you're viewing there was a November release as well. And it was a bit quirky when you clicked that field there, where previously it would have wiped out the rest of the line. It became very difficult, with the way the handlebars helper worked, for you to maintain existing things without suffering a penalty on the rest of the line. And, again, so it's more of a bug fix here, but basically we've enhanced it to be just a bit more responsive and work the way you would expect the IDE to work. On the bottom are new helpers that get you information about your jobs. So we know that some people build out flows that capture run execution information, and they wanna log a lot of that information externally. And so, in order to build that flow, you need insights into what was the job that was running at that time and what time it started. And so we've made that information much easier to gather. So, of course, we've had the API support for that for some time. So people have built, in whatever your code base du jour is that has API capabilities, tooling external to the platform with that enriched API layer to get the flow details. But now you can do that actually in flows too, with some of these capabilities. So, again, a bit more support for some of those extension patterns for visibility and traceability into the platform. K. Next slide. 
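As a rough illustration of what those job helpers make possible (the field names `job.id` and `job.startedAt` below are placeholders, not necessarily Celigo's actual helper names), resolving job metadata into an externally logged line works conceptually like this:

```python
import re

# Minimal handlebars-style substitution over a job-context dictionary.
# The field names are illustrative, not integrator.io's exact helpers.
def render(template: str, context: dict) -> str:
    # Replace each {{path.to.field}} with its value from the context.
    def lookup(match):
        value = context
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{([\w.]+)\}\}", lookup, template)

job_context = {"job": {"id": "j-12345", "startedAt": "2025-02-01T10:15:00Z"}}
log_line = render("flow run {{job.id}} started at {{job.startedAt}}", job_context)
print(log_line)  # flow run j-12345 started at 2025-02-01T10:15:00Z
```

A flow that posts `log_line` to an external logging endpoint would give the run-level traceability described above without leaving the platform.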
Lookup cache is a great big feature, and we're gonna spend a lot of time on this with some demos from Tyler. So, Tyler? Awesome. Thanks, Tony. Yeah. So this is another great feature that we've been working on and just released a few weeks ago. It was post the January release, but this is lookup cache. So, lookup cache: if any of you are familiar with the static-to-static mappings that you've previously had within transformation steps or mapper steps, you know that those were restricted to just that particular import or just that particular export, and there was no way to reuse those mappings across all your different flows. So lookup cache is now a central place. Currently, it's account-wide, where you can have these mappings between, for example, state codes, where you have various different state names and you need to map them to the abbreviated state code. Very common use case, especially when you've got different marketplaces that you're working with, and they all maybe store it in different ways, or maybe customers can input their own state values that you need to convert correctly into a state code. So now you can put all of that in the central lookup cache, reference it within your flow itself, and then get that normalized data without having to copy and paste that static-to-static mapping everywhere you go or use a script. And in some cases, people were using scripts where they would just have a variable field in their script that would do the exact same thing. We do have some things in the works right now to bring this down to the integration level as well. So right now it's a global cache, but in the future we'll also be able to restrict it to the integration level. And then, additionally, these caches will also soon support the ability to reference them in handlebars expressions, and also in scripts, in case you need to access them that way. 
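Conceptually, a lookup cache is a shared key-to-value table consulted at transform time. A minimal sketch in Python (the state entries are sample data, and the fail-versus-pass-through choice mirrors the option demoed later):

```python
# A shared key->value table, analogous to an account-wide lookup cache.
STATE_CODES = {
    "Pennsylvania": "PA",
    "Penn.": "PA",
    "Arizona": "AZ",
    "Ariz.": "AZ",
}

def resolve_state(value: str, fail_record: bool = True):
    """Return the normalized code; on a miss, either fail the record
    or pass the original value through for downstream branch logic."""
    if value in STATE_CODES:
        return STATE_CODES[value]
    if fail_record:
        raise KeyError(f"no cache entry for {value!r}")
    return value  # pass it through unchanged

print(resolve_state("Pennsylvania"))                # PA
print(resolve_state("Narnia", fail_record=False))   # Narnia
```

Because the table lives in one place, every flow that needs the normalization references the same entries instead of carrying its own copy of the mapping.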
This also lets you decrease your overall API usage. So, depending on how you set up your lookup cache: the state-to-state-code mapping is a simple example where all that logic is in place within Integrator itself. But you have other use cases where maybe you're doing, like, a dynamic mapping for a customer, or items would be a very popular use case, where, let's say, I'm grabbing an order from Amazon or Shopify or some other source, and I need to put it into NetSuite or into some other system. And you often need to do a mapping of: I have the SKU in my source system, and I need to get the ID in NetSuite to be able to map and create a sales order. Instead of doing dynamic mappings and using additional API calls and concurrency against your endpoint, this lets you actually store all of your items and that item mapping in a lookup cache within IO itself, reference that in transformations and mappings, and then save on, you know, ten, twenty x the API usage calls, because every line, right, has to do that lookup call to get the item ID. And, you know, the order right behind it might have the same SKU, and so it's doing another lookup for the same ID. So it can be a great way to save on API usage against all of your endpoints. And with that, I will go into a demo here. And I guess a caveat: this is available for Professional and Enterprise customers only. If you're on Standard tier, you would have the current static-to-static mapping. Let me flip over to IO and go into winter updates here. So I built out two flows. I'll start off with, first, how do I even create a lookup cache? So if I come over into my resources tab on the left-hand side, I've got this new option called lookup caches. And you can see here I've got two already made: one state names to state code, and NetSuite items to SKU. 
And if I look at the state names and state codes, I can see my key value here is various state abbreviations or various state names in full length, and they all map to just the state code. So, if your source data maybe already gave you the state code, or it gave you some abbreviation, you know, like "Ariz." when I wanna go to "AZ," all those are in here, mapping to the state code itself. To start off by creating a new lookup cache, you just go to the top right here, create lookup cache. You would need to name it. So you could just name it, you know, state codes. And currently, you have to provide a CSV file on the initial creation. But in the March release, the requirement of uploading a CSV initially will be removed. That way, you can go ahead and just create the cache and then populate data either with data loader or a flow like I'll show you next. So, discard that. If I come back over into my actual flow, what I'm doing here is I set up a flow that will actually populate that lookup cache. That way, I don't have to go manually populate it every time, and I always know that it's refreshed with current data. And in this case, I'm just pulling a NetSuite saved search. So I have a NetSuite saved search to grab items from NetSuite, and then I have an import step here to populate that lookup cache. So on my source export here... alright, it's just a simple saved search export. I can see all these items in the preview panel once NetSuite returns them. There we go. So I've got all these records coming from NetSuite. And then when I get over into the lookup cache upsert step, it's simple: you map your key in. In my case, I'm mapping SKU as the key, and then I map the various values. And so I can schedule this to run, you know, up to every five minutes if you want. 
Or if I had set this up as, like, a real-time job, then as items are updated or added in NetSuite, it could just automatically go over to the cache so that it's almost up to date in real time. So now, once I have my lookup cache populated, I can reference it in my flows themselves. So if I come over into an example flow of syncing Shopify orders to NetSuite, my source export step here is to get Shopify orders. I then have a transformation step here to convert the... now, Shopify already gives you the code in this case, but just as an example, I'm not using the code; I'm using the state name that they give you. In this case, I've got the shipping province, and I'm outputting a new province code from cache. And when I go into the settings field here for this particular field, you now see this additional option. Instead of only having static as an option, I've got this lookup cache option. And when I choose this option, I choose the lookup cache that I want to reference, in this case state names to state code. I choose which field I want in return: once I find a state that matches, what field from the value section do I wanna return? In this case, I only have one field available, which would be state code, but you could have various other fields available here. Maybe, like, the country that that state is in, or, I don't know, continent or something like that. Right? I could have multiple fields here. And then, if the lookup fails, what do you wanna do? So I can fail out the whole record at that point, in which case maybe I need to go update my lookup cache to have this new particular mapping. Or you can pass it on and then maybe have some branch-off logic in case of a lookup failure. And so then, if I preview this data, you'll see down here under shipping... oops, shipping address. 
I now have this province code from cache, and this looked up Pennsylvania, found it in the cache, and then returned PA. So that's a quick example of how to use it within transformations and mappings. Additionally, you could do it as an individual lookup step. So all my transformations could reference the lookup cache there, but I could actually have individual lookup steps that reference the cache as well. So in this case, for the items, instead of doing any transformation, I'm showing you an individual step. And here, I'm doing a lookup for every item. So I'm doing a one-to-many of line items referencing that cache. And then I just pass in which keys I wanna look up. In this case, I wanna look up SKU. And then I map those results back. So in that case, every line item is now looking up the NetSuite internal ID from the lookup cache. That way, when I finally get to the create NetSuite sales order step, I don't have to do a dynamic saved search mapping. I can just straight map that internal ID that I've already looked up previously into NetSuite. So this would be an example of: if I had, like, 10 line items on an order, I'd be saving 10 different dynamic saved searches that I'm running for just a single record. And then, you know, if you extrapolate that out to 20 records, a thousand records per hour, right? Tons of API calls that you're saving against your particular endpoint. And that is lookup cache. So, excited to have this feature and excited for the future updates that are coming to it as well. And then, next, on to connectivity. So we've been releasing tons of new connectors, including several from the LMS area, like Skilljar. 
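To put a number on the item-lookup savings just described: with a local cache, the backend is hit once per unique SKU instead of once per line. A small sketch, where `backend_lookup` stands in for a dynamic NetSuite saved-search call:

```python
calls = 0

def backend_lookup(sku: str) -> str:
    """Stand-in for a dynamic saved-search call against NetSuite."""
    global calls
    calls += 1
    return f"internal-id-for-{sku}"

cache = {}

def resolve(sku: str) -> str:
    # Consult the local cache first; only a miss goes to the backend.
    if sku not in cache:
        cache[sku] = backend_lookup(sku)
    return cache[sku]

# An order with 10 lines but only 3 distinct SKUs.
lines = ["SKU-A", "SKU-B", "SKU-A", "SKU-C"] + ["SKU-A"] * 6
ids = [resolve(sku) for sku in lines]
print(calls)  # 3 backend calls instead of 10
```

Across thousands of records per hour, that per-line difference is where the ten-to-twenty-x reduction in endpoint API usage comes from.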
As many of you know, our new university itself is based off of Skilljar, so we put a new connector out for it, and then a bunch of others, excuse me, in the CRM space, the ecommerce space, like Amazon Pay, content management like OneDrive, and several others, for example. But we continue to release these connectors every two to three weeks. And if you have requests for new connectors, just let us know, and we can get them on the road map. And with that, I'll pass it off to Tony for one of our main new connectors, on-prem AD. Yeah. Thanks, Tyler. It was surprising to me how many requests we got for this one, but we do see a lot of use for, let's say, hire-to-retire scenarios, where there is a, you know, scenario where you've got an HR system, new employees are coming in, and you wanna start to automate how you get provisioned IDs in various places. And you can imagine then all the similar types of scenarios where there is perhaps a new employee added, with authentication through your Active Directory system, and then you wanna cascade other things that need to get provisioned to that individual. And so, depending on whether or not you wanna start with Active Directory or it just becomes one of the things that's actioned against, there are lots of ways that we could build automations in this way and add to groups and, you know, act around the user settings and other criteria. And so this is a really exciting way to figure out the different areas of the organization, the back-end systems, that can get automated. So this is an on-premise connector, so it does work through the agents technology. So if you're not familiar with our secure agents: through your instance in Integrator IO, you can download a small lightweight component that sits behind your firewall. From that point, you control its authorization and authentication to your instance of Integrator IO, and that creates a secure communication channel between those two points. 
Then, of course, that agent can be put anywhere you'd want to, but in the particular case of Active Directory, it obviously needs network connectivity at that point to your Active Directory instance. This is based on JDBC technology at the back end. So, like many of our other connectors where it's not HTTP-based or FTP-based, etcetera, this is using those JDBC protocols. So one of the nice capabilities is, if you have complex logic you'd like to write specific to that endpoint, you have all the capabilities of SQL available to you with which to do that: joining your users and your groups and whatever other elements you need to. And, again, you can make that quite capable. A couple of things are just about to launch on this. In the release from March, which should be next week, you'll see that we've also created the text-to-SQL code assist. So our Celigo AI will be able to help you in this connector also, to write the Active Directory flavor of SQL, you know, for this particular endpoint. So with our SQL code assist, we do certifications for each of the types of drivers that we have. And so we'll give you the checkbox on that particular to this connector. And, of course, there are a lot of other connectors where we already support that text-to-SQL in the code assistant. And then, probably in the May release, you'll see some other new user interfaces that we start to introduce for a real simplification of some connectivity; that will be coming also for this Active Directory connector. So a lot of goodness behind this one. But, again, this connector is already available without some of those things; it's been available since January. Alright. Tyler, back to you. Awesome. Thanks, Tony. The next one that I think is pretty big as well is basically adding the mapper into lookup and export steps. 
What we're calling this is adding support for body params, but it essentially gives you a way to use a mapper-like UI for your lookups and exports. Historically, for those lookups and exports, the only option that you had was to use either a pre-map script or the actual AFE editor, the request body itself, where you had to reference handlebars expressions and then build the structure of your request in the cases where you need to make, like, a POST call and provide some type of request body. So, just a really quick demo. In this example here, where I was looking up the NetSuite item in the cache table that I've got, usually this lookup would not have had this mapping step. You would have had to go into the lookup, and if you were over in, like, the HTTP view, for example, you would have had to have filled this request body out, you know, with whatever data you needed to post to it. So now we've simplified it a lot, where we just give you this mapper-like experience where I can now just create these fields for that request body and then send it off. So it should make it a lot easier for users to not have to switch back and forth as much between a mapper-like or transformation-like experience and the AFE and handlebars expression experience. And the next one, this one's a little wordy, but it essentially boils down to: historically, when we have been making CSV files and Excel files on an FTP server or S3 buckets or something, for example, we didn't really have support for putting objects or complex structures of unstructured data within singular columns within a CSV file. And what we found whenever we released the bulk load feature for Snowflake is that Snowflake supports loading JSON data. And given that we were loading files via CSV in this bulk fashion, we needed a way to put all this JSON data into a single column. 
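A minimal sketch of that behavior, assuming nested objects get serialized as JSON inside their single root column (the record shape here is just sample data):

```python
import csv, io, json

# Records whose "customer" field is a nested object.
records = [
    {"order_id": 1, "customer": {"name": "Ada", "age": 36, "city": "Austin"}},
    {"order_id": 2, "customer": {"name": "Bo", "age": 41, "city": "Boise"}},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["order_id", "customer"])
writer.writeheader()
for rec in records:
    row = dict(rec)
    # Serialize any nested structure as JSON inside its single column;
    # the csv module quotes the field since the JSON contains commas.
    row["customer"] = json.dumps(rec["customer"])
    writer.writerow(row)

print(buf.getvalue())
```

The resulting file has plain scalar columns at the root and one JSON-bearing column, which is exactly the shape a variant-typed column can ingest on the warehouse side.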
That way, it could properly be loaded into variant fields within Snowflake and other databases, like, eventually, Google BigQuery once we bring bulk loading to it, and AWS bulk loading as well. And so, since we weren't really even handling objects and arrays correctly when they were mapped into CSV files, what we did was: now, if we see any of those complex structures or unstructured data, all of it will go into that root column that you have mapped. So in this example of mapping customer, the name, age, and city would all be a JSON structure underneath the customer column. So all of the root-level fields are your CSV headers, and then anything nested gets put in as JSON structure underneath. This wasn't a huge use case for most people, since most people, if they created CSV files, created them flat anyways. But in these cases where you do need that structured data in a CSV, we enhanced it for this. That way, we could support bulk loading for Snowflake and other databases. And the next is overriding merge queries for Snowflake bulk loads. So, when we introduced Snowflake bulk loading, essentially what we do in the background, right, is we create a CSV file, and then we load that CSV file to Snowflake and run a merge command from that file into the destination table that you specified. But there are some use cases where you may need your own custom merge statement, or maybe you want to insert data and then delete the old data that was there. And this is a good example of where, as Tony previously mentioned, we now allow you to access job data within your AFE editors. Essentially, what you can do is, whenever you load data to Snowflake, you can load in that job ID. And then when you load again the next time, you can load that data, but then delete where the job ID does not equal the current job running. 
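Sketched over an in-memory table (the table and column names are made up for illustration), the load-then-delete pattern looks like this:

```python
# In-memory stand-in for a warehouse table; each row is tagged with the
# job ID of the flow run that loaded it.
table = [
    {"sku": "SKU-A", "qty": 5, "job_id": "job-001"},
    {"sku": "SKU-B", "qty": 2, "job_id": "job-001"},
]

def load_and_replace(table, new_rows, current_job_id):
    """Insert the new rows tagged with the current job ID, then delete
    every row loaded by a prior job, so the table is swapped without a
    window where it sits empty."""
    table.extend({**row, "job_id": current_job_id} for row in new_rows)
    # Equivalent of: DELETE FROM table WHERE job_id != current_job_id
    table[:] = [r for r in table if r["job_id"] == current_job_id]

load_and_replace(table, [{"sku": "SKU-A", "qty": 7}], "job-002")
print(table)  # only the job-002 rows remain
```

In the real feature the same effect is achieved with SQL in the override merge statement, using the job ID exposed through the AFE editor.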
That way, it lets you essentially overwrite what you have at the point where you also now have new data. So, historically, you would have maybe had to delete it and then run this flow to load the data, and you would have had some gap period of the data being there or not being there. And this lets you delete the old data at the time that you merge new data. That way, it's pretty much seamless, in that I am only replacing this table with new, fresh data. So this override merge query lets you do all that. You can reference the temporary table that we make, and then reference the name of the table that you specified to load into. And lastly, this override merge statement, along with all other SQL statements for Snowflake, now supports multi-statements. So, before, you could only put one statement in here and then run it against Snowflake. But now, in any of these options, whether it's the bulk load with merge override or run SQL query once per record or once per page of records, you can specify multiple queries or multiple statements to run for the same import and same record. This allows you to maybe specify a different warehouse to use at the time that you are inserting that data, or maybe specify a different database or schema, or do merges and deletes, or whatever you wanna do in different complex ways. So we support up to 10 SQL statements now. And if I go into a quick demo, hopping back over here into IO: in this sample demo, I'm extracting data from NetSuite using the NetSuite JDBC connector, which lets you access, you know, raw transaction data, basically the raw database underneath NetSuite. And then I'm pushing that over into Snowflake as a bulk load. So you can see here, I ran this and pulled out 50,000 records yesterday, loaded by just specifying which table I want it to go to and what my primary keys were. In this case, since I have override merge, it wouldn't use these primary keys. 
But if I didn't have this specified, it would merge on these keys. And then if I didn't have any key specified, it would just insert the data directly into the table specified. But in this case, I'm using the override merge query because, not in this data, but as an example, you might need to add, like, a QUALIFY statement for removing duplicate data. So if you go try to load a file and you have duplicate entries based off of the key that you specified, you might need to choose the most recent one. So you might need to do a QUALIFY statement to pull out the most recently modified one or the most recently created one. That way, when you go to actually merge, it doesn't error out saying duplicate keys, you know, "I don't know what to update since you have multiple keys in your file." And that is Snowflake bulk loading. And then, next, on to AI enhancements. So I'll pass it back over to Tony. Alright. Thanks. I'll say, Tyler, you made that CSV embedded JSON much easier to understand than some of the ways I've read it in some of the texts. So, yeah. Alright. So, on to Celigo AI. We introduced a knowledge bot about April, I think, released last year. And, you know, obviously, with a SaaS product, we get to observe behavior. We get to see what kind of questions people are asking. And I got to also see some usability improvements that we needed to make it much more functional and usable. And so there's a lot more coming with respect to how we make AI more usable in Q2. But one of the first steps we did, and I believe this was maybe the November release, was a few extra tools. So, if you want to take some of the code snippets that you see in the articles, we've added a copy button. There's also an expand button, because the current way the knowledge bot works is it's sort of docked in the bottom quarter of your screen. 
And sometimes, when there's embedded code in our documentation, rather than you having to navigate out there to go see it, we wanted to be able to let you stay in Integrator IO and just be able to use more real estate. So you'll see that expand button. Regenerate, of course: sometimes with AI, you wanna ask it again. So, very easy ways to do that. And finally, reset. And I'll tease out that in the release planned, again, for next week, this will be, among a few other things, much more conversational-style also. And so, if you do ask it a question like, "Can you tell me a little bit about imports?" and then it gives you an answer, and then you say something like, "Oh, what about connections inside them?" Well, the "them" is, of course, the context of exports or imports, whatever the prior question was. And so, up until next week, you know, the release never has allowed you that kind of conversational interaction. But, progressively, we will be making all of our AI that way, so that it's knowledgeable about the thread and is able to use all the context of questions you've asked before to help assess what is the thing that you most specifically mean here. So, again, in next week's release, you'll see that sprinkled into the way text-to-SQL works, the JavaScript helper works, the handlebars helper works. So, again, pretty excited about some of those features coming. K. And then, next slide. A big additional code assistant that we've built is with GraphQL. And, of course, we've been spending a lot of time with the Shopify community. As folks hopefully are aware, Shopify has been deprecating some of their old REST APIs and moving progressively to GraphQL. A lot of what Shopify has been doing has been around product and product variants, where, with Celigo customers and a variety of their other partners, April 1 is an interesting deadline. 
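For a flavor of the shift being described, here is a products-and-variants read expressed as a GraphQL query, plus the nested edges/node response shape you end up unpacking; the field names mimic Shopify's Admin API conventions but should be treated as an illustrative sketch, not a verified schema:

```python
# Illustrative GraphQL query for products and their variants.
QUERY = """
{
  products(first: 2) {
    edges {
      node {
        title
        variants(first: 5) {
          edges { node { sku price } }
        }
      }
    }
  }
}
"""

# A sample response in the edges/node shape GraphQL APIs return.
response = {"data": {"products": {"edges": [
    {"node": {"title": "Mug", "variants": {"edges": [
        {"node": {"sku": "MUG-BLUE", "price": "12.00"}},
    ]}}},
]}}}

# Flatten the edges/node wrappers into plain (title, sku) pairs.
pairs = [
    (p["node"]["title"], v["node"]["sku"])
    for p in response["data"]["products"]["edges"]
    for v in p["node"]["variants"]["edges"]
]
print(pairs)  # [('Mug', 'MUG-BLUE')]
```

Writing queries like `QUERY` by hand is exactly the step the metadata-aware code assistant is meant to help with.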
And so if you're not already moving your product and product variant endpoints from REST to GraphQL, this may be something of interest to you. But just generally, GraphQL support is a feature we invested a lot in in 2024. Now in 2025, with the January release, the AI code assistant will just help you. It's metadata-aware: it will go to the back end, fetch some of the context of the system you're connecting to, and help you write those GraphQL statements in context. So again, a nice new feature in our GraphQL support. Okay, and back to you, Tyler, for B2B. Yep. So if everybody's not aware, B2B Manager is our new EDI solution for all your various EDI transactions, which gives you nice dashboards to track everything. And we have separate webinars that go thoroughly through B2B Manager itself. But with some of the enhancements for B2B Manager, now within your dashboard you can customize which columns you see. So you can remove, you know, doc number or doc type, or add these in, and they're persistent between logins. So whenever you come back, you'll see the same ordering of columns that you had before, along with the fields you previously selected for your table view. And next is the ability to have saved searches for your EDI dashboard. So in cases where you want to maybe export that data and add various filters for, you know, things that are accepted, or the direction, inbound versus outbound, you can now create these, essentially, saved searches or reports on your data, where you can then export it out and see exactly what's going on for maybe a particular trading partner or various transaction types. And then lastly, for B2B Manager, we are constantly releasing new trading partner connectors.
So in this release, we've got various different ones like Amazon Vendor Central, Costco, and Academy Sports. And just earlier today, we released, I think, 20 more. So that team is really hauling on how many trading partner connectors they're releasing to support B2B Manager. If you're not already familiar, reach out to your account rep and get a sneak peek or demo of B2B Manager; maybe it'd be useful for your business. And then lastly, additional resources. So this is one that I'm excited about. I've been working on this for, I don't know, maybe six months now. I first had to get approval from Tony after I found which vendor I wanted to go with. Historically, I've been responding in our older Zendesk community. And I'm sure if anybody has used it, it wasn't really the best community. It was kind of cumbersome to reply to stuff, it had quirky issues, you couldn't attach various file formats, and there were various other issues. The search was bad; you couldn't really find what you were looking for when you wanted it. I couldn't even find my own posts in there if I wanted to go back and find something. So I set out on a mission to find us a new community, and we landed on our partner, Discourse, and we went live on our new community just yesterday. If you haven't already checked it out, go to connective.celigo.com. And if I flip over here into this quick demo, you'll see you start off in the community with a list of what people are posting in the community itself. So you're brought directly into what people are discussing, and then posts are filtered based on different product areas: Integrator IO itself, which is the main core platform; API management, which we released last year but have several updates coming for very soon.
B2B Manager; integration apps, with a lot of Shopify migration help; and then various other things. In the future, we'll have product feedback, and then community feedback itself if you have feedback on the community. But this has just been a great way to build the community more. You can have your own profile picture, and if you're a partner, you can put up your own description of what you specialize in as a partner. We sync over badges from university courses, and on partner badges, we sync over that you are a partner. So there are tons of ways to get involved, show off a bit, and help everybody else. Super excited about this. Hey, Tyler, maybe also mention the calendar. I know we're just starting to use that, but that's maybe also a good resource for folks. Yep. So this calendar will keep up to date with various webinars. You can see the webinar we're on right now, and then next week, different spotlights, or when we'll be in Chicago for SuiteConnect. And then even office hours: if everybody's not familiar with office hours, we host this every Tuesday for an hour and a half for people to come on and get help with various custom flows that they're working on on the platform. And with that, other resources: Builder's Hub content, the help center for articles and documentation, and online training through Celigo University. And I do want to mention, for Celigo University: if you haven't taken the certification courses, the builder core course, come April 1 or April 4, something like that, there will be a cost to be certified. For now, up until that point, it is free. So I would highly suggest everybody go take that before having to pay for the certification. And with that, we are on to Q&A. Outstanding. So just before we start the Q&A, remember, please fill out the survey. That's down on the right-side engagement bar.
We really would love to have your feedback so we can make this better for everybody on the call. If you'd like to request follow-up from a member of our team, you can click on the request follow-up button, also on the top right. And let's see, let me just open up the Q&A bar and see what we've got. So, hey, one here is: is B2B Manager included in our subscription? So B2B Manager is a separate subscription offering, which you can bring up with your account executive to get a demo and then have that license added to get you up and going. Okay, I've got one here referring to the lookup cache: I see we can use a flow to load the lookup cache; are there any other ways to maintain a cache? Yep, so there are a few different ways. You could use a data loader flow for more one-time loads. If you just have a CSV file or JSON file that you want to load into the cache and update, you can do that with a data loader. What I would do personally is probably, just like I demoed, make a flow that I schedule to run once a day or twice a day, or even a real-time flow, so that as data is updated in my source system, I update the cache. That way, my flows never have to directly reference the source data. They can just reference the cache and save on your API usage. And to piggyback on that: we had a question in the community, I think it was yesterday or the day before, from a good friend of the org, asking about opportunities to maybe load data manually and maintain it by hand. That's an upcoming feature that will be with us shortly. So definitely more to come with respect to how you can interact. And of course, the way the connector works is via the API. So even if you had some off-platform ways you wanted to interact, I think that's a model we could also support; these are all standard APIs at that level, just like everything else in the platform. What are the data retention periods for additional environments? I can take that.
And you mean the multiple environments that we discussed earlier? That follows the same standard you have for the account in general. So the way that works is, if you're on Standard, Professional, or Enterprise, it's basically 30, 60, or 180 days, and you'll find documentation in the help center about that. And also which objects we retain logs and other data for over those selected periods; that's fully documented in the help center. But we try to make it very consistent: if you're at the Enterprise tier, it's 180 days across the board. Of course, some people have concerns about things they might want longer retention periods for, like logs. So we're currently collecting feedback on some of that differentiation, where people want those controlled individually. But those guidelines are pretty clear in the docs, and pretty elaborate as far as what they cover. And in case I didn't say it specifically, it's consistent across non-production and production. Yep. So we had a question; I think, Tony, you might have mentioned this. Can we get a B2B EDI-specific webinar? I think there is already a recorded webinar out there for B2B specifically. I think Alexia can maybe send it out afterwards. Okay, any other questions? Okay. So, Tony, you were thinking, if we had some extra time, there's a sneak peek you wanted to give. Yeah, really excited about some of the other work we've been doing for the roadmap. And so we're looking at different ways we can make some of these webcasts forward-looking on the roadmap too. And I realized we didn't have a nice disclaimer at the beginning of this slide, and I've talked about roadmap a few times. So disclaimers apply: of course, forward-looking statements always come with a potential set of risks.
Don't use them for purchasing decisions. But of course, we're listening to feedback all the time. So I'm really excited about what is planned to be next week's release, our March release. We've got some very interesting things coming, among which is this new flow builder. So if you want to page down, Tyler. Again, a sneak peek into the new flow builder. There are a lot of interesting opportunities for us to streamline the way we've laid out the interaction model. A few different things we looked to achieve here. It's a bit more compact in terms of the ability, within your viewable space, to see more of the business logic you've constructed. For newer users who aren't familiar with our icons, there was a lot we came to understand about how people learn the product. And, you know, hitting the plus sign and getting that explosion of icons is quite jarring for first-time users. It's pretty impactful and useful, right? Everything is right there in front of you. So we tried to figure out how to keep the usefulness of everything being directly available, but also give new users an opportunity to learn. You'll see a bit more of a context menu that we've put on every flow step, and that gives you what the tool is and a short description of what it does. I think one of the most productive, accelerating features in the new UI is the clone step. Of course, all of our flow steps, whether it's an import, export, or lookup, are saved as first-class objects themselves, and they're all reusable across the product as a whole. So we love that ability to reuse Celigo artifacts. But sometimes you want to make a copy and then change it. You say, you know what, I just did all this configuration for a Salesforce contact table, and now I'd also like an account version of that. So you can quickly pull up the menu now and click clone.
You'll almost instantaneously get a new version of that artifact, where you can go in and just tweak it as you need for that secondary action against that same application. So again, a nice productivity accelerator. One of the things you'll notice on the flow step in the middle, just behind and to the right of that context menu, is that the flow steps now have a bottom bar. If you're familiar with the way the bubbles looked, all of those icons were interspersed with the icon for the import or export and some of your text, and we've just tried to create nicer separation so you have very clear visibility into which tools are available on that particular step. Of course, each of those icons is directly clickable to access the tool in play. You don't really see it here, but we've simplified merge and unmerge. So drag-and-drop features like we saw with branching, of course, work here as well. And then there are some context menus for clarifying how the merging and unmerging works. We've crisped up the icons. And then one of the nice things is, we wanted to be listening as much as we could. While we're changing a lot, for folks who are a little bit more my age, we're moving your cheese on you. We recognize a lot of that's happening here. So what you have at the top of the screen is a little new UI toggle. We're not forcing it on anybody; we want everybody to click it. But if you love the old bubble way of working, let us know. Underneath that little "i" icon, you'll be able to get a feedback link to tell us a little more about it. Click the "i" icon and you'll get a panel that tells you all the things I just told you as well. And if you flip that switch, you can always flip it back. We've kept it 100% compatible, and we're going to try to maintain that state as long as we can, just to let us collect as much feedback as we can about the new versus the old.
And of course, we've got Tyler's wonderful new Celigo Connective community to collect feedback, and, you know, just let us know how you feel through that particular "i" icon as well. That will go to an email where you can send it directly to our team. And, Tyler, I think I hit everything. Anything else? Yeah, I think the other main thing is that we've ensured the ordering of all the different steps within a bubble, making sure that, as you see there on the screen, they're in the order they are processed through the flow. Like, you go from transform to output filter to preSavePage script, along with those descriptions. And then we've removed options that previously didn't actually work. That way it's simplified; you're not left wondering why a particular bubble isn't working when it's simply not supported on file providers, stuff like that. So we've simplified it and made it so everything there is working properly. Yeah, thanks, Tyler. I missed that, and that's an important one. We had definitely gotten some feedback on those items. I'll maybe just add, to the end of this: this is the first of a series of improvements to Flow Builder. This is an agenda you will see; I believe every release this year you'll see incremental improvements. So this next gen basically sets a baseline for a lot of enhancements that are coming. To name a few, and roadmap disclaimers apply: some enhancements around preview; trace keys, the ability to see the trace key being applied to all of your records; a record-by-record viewer; and some enhancements around branching layout, its usability, and the ability to do more. So we're really excited about this agenda. I think it's going to be a strong 2025 for us, getting a lot of new tools into your hands off of this nice baseline that we'll be delivering next week. So more to come on this, and again, just an exciting agenda for us. Yeah.
And another question came in while we were talking: can the lookup cache be used with a Celigo template, like the Amazon-to-NetSuite template? I'm not sure exactly what this references. Maybe it's: can I attach a lookup cache to a template and publish it out so that it can be referenced on install from the marketplace? Or: if I download this integration and then share it off, is the cache still there? Do you know the answer to that, Tony? I can comment a little. One of the things about lookup cache is we've enhanced the ILM features, so when you do snapshots and pull requests, lookup caches go with that. And I believe it's a first-class resource that gets managed consistently throughout. Specifically for templates, though, I don't recall a conversation about whether or not there's support there. So if we have contact information, Dave, for whoever asked that, we could follow up. I don't know if we have other ways to get their contact info privately, but if you leave it there, we can for sure take that as a follow-up. Yeah, actually, it looks like it's this field here on the lookup cache. So it's a setting on the cache itself for whenever you are sharing. Maybe you have sensitive data or something in the lookup cache, and when you download and share it off, you don't want the data from the cache to be included. It looks like this option here gives you that ability. If you check it, it will include the data on download or on template share, so that when you do share it off, the data will still be there. But if it's unselected, that data will not be there; the lookup cache itself will still be referenced, it just won't have any data in it. So hopefully that answers the question.
Actually, now that you've said that, it teases out another use case we maybe didn't talk about: the idea of environment variables, things that are unique to your sandbox versus your prod. We have a lot of interest in standardizing some patterns around lookup cache being used for that. And that would be another scenario where you wouldn't want the data to move between environments, but the structure becomes important for configuring settings that you want applied differently in, let's say, your test environment than in your production environment. So again, some use scenarios that I think you'll see us develop patterns around, in blogs and other help center support. Alright. Well, Dave, I think that's it. I see some nice comments in the chat about the session being useful, so I appreciate your time similarly. It's good to be able to connect this way, and we're trying to get a lot of content out to support all your follow-ups after this too. So if you haven't used that Builder's Hub site that Tyler pointed to earlier, there's a lot of content going out there with each of our releases, or in follow-up to our releases. And of course, the Celigo Connective community is a great resource, and will be increasingly so as we go forward, for asking questions, getting feedback, and just having discussions about use scenarios and such. So, Dave, anything else to close out? Hey, and yep, to add on to that, just take the survey here on Goldcast for this particular webinar. Gentlemen, Tyler, Tony, thank you so much. It was great content. It looks like it's going to be a great year for Celigo, with a lot of new and exciting developments, so stay tuned. We'll be doing this again in the spring. Or do we call it... we called this one winter, so the next one is spring. Yes.
So we'll have a few more releases under our belt, some more exciting developments, and some more things to share with everybody. Thank you all for your time. Thanks for being Celigo customers, and have a great day. Thanks, Dave. Thanks.