AI is transforming how software is developed and how it must be secured. As development teams use AI to generate and ship code faster, security teams face the challenge of keeping up with this new scale and speed.
Join Scott Roberts, CISO of UiPath, for a practical discussion on how his team applies agentic AI to strengthen UiPath’s own security program. Learn how his team uses AI to automate vulnerability triage, accelerate remediation, and reduce investigation time while maintaining the guardrails needed for reliability and governance.
In conversation with Averlon, Scott will share lessons from UiPath’s experience and explore how security teams can:
- Apply AI to accelerate vulnerability management and security operations
- Save thousands of human hours through intelligent automation
- Balance innovation with safeguards for identity, data, and reproducibility
- Build governance frameworks that make AI use more transparent and controlled
- Track measurable improvements in remediation speed and risk reduction
This session offers a firsthand look at how a global AI company secures itself, and how other organizations can safely apply these ideas in their own programs.
Register today and we'll notify you when the on-demand session is available.
Rajeev Raghunarayan: Well, thank you, everyone, for joining the webinar, and we’ll get the ball rolling right away. So, welcome, thank you for joining us. Today’s session is called Inside UiPath: How Agentic AI Is Redefining Security in the AI Era. I am Rajeev Raghunarayan, the go-to-market lead at Averlon, and I’ll be facilitating the session today. A couple of quick housekeeping items before we begin. This webinar is being recorded. We won’t be taking live questions, but please drop your questions in the Q&A panel; time permitting, we’ll get to them at the end. And as always, we welcome your feedback. We’re all learning together, so any feedback you share will definitely be valuable. Now, getting to the topic at hand. If you haven’t already heard a few million mentions of agentic AI so far, I’m curious where you’ve been. It’s been one of the biggest change agents in recent years, transforming not just how we build and deploy, but also how we secure our systems. UiPath has been right in the middle of this transformation for thousands of their customers. Securing their environment, therefore, is not just an imperative; it’s central to maintaining trust, accountability, and resilience as AI accelerates and redefines how we work. I’m excited to be joined by Scott Roberts. Scott is the Chief Information Security Officer at UiPath. Scott has led security across some of the world’s most complex environments, from Google, where he secured Android and Pixel for about 3 billion users, to Coinbase Cloud, where he protected the developer platform that powers much of the crypto ecosystem today. Earlier in his career, he held security leadership roles at AWS and Microsoft, including on the team that created what we know today as Patch Tuesday. In short, he’s experienced most forms of complexity when it comes to security: from code to deployment, from devices to cloud, and from people to AI. Scott, it’s a pleasure to have you here. Welcome.
Scott Roberts: Well, thank you so much. It’s a pleasure to be here, and I look forward to the discussion.
Rajeev Raghunarayan: Thank you. Excellent. So, Scott, we’ll get started with a straightforward question. You’ve led security across some of the most complex environments, from Google to AWS to Microsoft to Coinbase, and now UiPath. Can you start by walking us through your journey, your path to becoming CISO at UiPath, and how that experience has shaped your perspective on security today?
Scott Roberts: Absolutely. And again, thank you for the opportunity. I have been in security for about 30 years, and I’ve been able to witness many fundamental paradigm shifts as they relate to security throughout that career. Before we dive into AI, let me provide a bit of context on those paradigm shifts, because I think it’s interesting to set up a pattern that we see repeating over and over again. I was lucky to have been part of an amazing team back at Microsoft that created the Security Development Lifecycle and Patch Tuesday, as you mentioned, in the early 2000s. At that time, attackers were evolving to figure out how to leverage the internet at scale. We were seeing constant large-scale worm attacks. I joined the Microsoft Security Response Center about a month before the Blaster worm hit the internet. So we were under constant attack by large-scale internet worms, literally trying to crash the planet. And as defenders, we had to evolve. We had to reconsider how we built software and how we leveraged the internet. Out of that time, Windows Update was created, providing those patches on Tuesday. We created inbox features inside of Windows, such as Windows Firewall and Windows Defender, that allowed us to service and defend our products in real time, evolving alongside the attackers. Then you saw the next wave, the cloud, where, as compute resources were moving from on-prem into the cloud, attackers were evolving right along with that. So we were building products and services inside of AWS where we could detect in real time when a virtual machine, for example, had been compromised and turned into a crypto miner for an attacker, and keep that instance safe and secure for customers. The next big wave was the change we saw in mobile. I seem to have a pattern, but I joined Android right at the start of the Stagefright vulnerability. Attackers were figuring out how to attack mobile devices at scale. At that time, we had about 2 billion active users on Android, and Stagefright was a scary SMS-based vulnerability that could have led to the very first real SMS-based worm, almost similar to the email-based worms like Melissa back in the day. And although we had about 2 billion active users, we had no way to patch them effectively. So we had to rethink again, as defenders, how we built our products and how we kept them safe over time. Eventually we created the Android monthly security update program and shipped 6,000-some CVEs across a crazy number of devices that, as you said, now serve over 3 billion users. But I definitely think everyone would agree that we’re clearly in the fourth major wave. A lot of folks thought that was going to be blockchain, so I did spend some time, and entered the CISO ranks, over at Coinbase, thinking about how to protect against smart contract vulnerabilities. But I think the momentum has clearly shifted to AI. And back at Google, we were actually using specialized ML models to secure the Play Store and the Android supply chain.
And what I love about the evolution that’s occurred is that, in many ways, it democratizes access to the capabilities that we had back at Google with those specialized models, except you don’t have to hire a team of specialized data scientists with PhDs to take advantage of these technologies. So this pattern, of attackers learning how to use the technology and defenders learning how to respond, is one I see over and over again inside all of these major waves. And on the AI front, you see that same paradigm: AI used by attackers, AI used by defenders. On the attacker side, you see perfect spear-phishing emails now that are completely org-aware and context-aware and can be created at scale. You’ve seen AI leveraged for social engineering that is just very convincing; we’ve had those attacks in our environment, and other folks have had them too. You’re able to see AI dynamically respond on the fly, rewrite attack payloads, and adapt to the vulnerabilities it finds on the ground inside an organization. And similarly, we’ll spend some time today talking about how we’re using it on the defense side at scale. The two other categories I like to think about when it comes to AI, besides the attacker and defender paradigm, start with security for AI itself, meaning the model itself. Think of jailbreaks, for example, as a popular term, where you’re able to get the model to share information or data that it wasn’t designed to share, plus data poisoning and other types of attacks against, sort of, the inside of the box of the LLM, if you will. The second category I like to think about is the security of AI, meaning attacking the people and the tooling being used to build the models, the application infrastructure around how you build and train models, and how we keep that environment safe. There are a lot of discussions around data security and end-user security there as well. So those are the four big buckets I like to think of: attackers, defenders, security for AI, security of AI.
Rajeev Raghunarayan: That’s very interesting. You’ve almost split this into what I can see as a 2x2 matrix. You started with the four big evolutions: the internet, mobile, cloud, and obviously now AI, with crypto in there in between. By the way, I should say you have a great way of picking the next company; I should just follow where you’re going. Having said that, the other quadrants you picked were basically how attackers use any of these tools and technologies, how defenders use them, and, more importantly, how you protect the technology itself: security for AI, and security of AI. So it’s a very nice 2x2 matrix that I can picture from how you’re presenting that information. Scott, you’re obviously in a very interesting environment. You’ve seen this balance shift between attackers and defenders. Inside UiPath, AI and automation are two sides of the same coin. In some sense, UiPath has also emerged as an agentic AI vendor, with agentic AI core to the product itself. How do you see such an environment influencing the strategy of your security program?
Scott Roberts: So, as we started to talk about, AI isn’t just changing what attackers do, it’s changing how they operate, right? How they adapt, how they scale, and how they come after your environment. In the UiPath context, if I put my product hat on for a moment, we think about how we bake security into our stack from the start. UiPath has been in the traditional RPA business, the process automation business, for many, many years. And in many ways, that model is a great foundation for all of the agentic automation that has come on top of it. What I mean by that is, in the traditional RPA environment, you have autonomous robots, as we would call them. They have identities, they have access to information, and those identities need to be protected, and the data needs to be protected. The difference in the RPA world, though, is that all of the possible outcomes, in many cases, were deterministic. And the change with agentic AI is, of course, its non-deterministic nature: you can get surprised, and we have to build a new management framework around the fact that agents can be a little more non-deterministic. So you want to think about how you build in those guardrails and those management systems from the start. You can’t bolt them on later. And so I’ve been really proud of the fact that recently, our entire AI stack that we shipped in the spring was certified against ISO 42001, the very first AI management system standard, which certifies products that are built with that sort of AI security in mind from the start. And we’re one of the first companies to have that across our entire AI stack, not just cherry-picking one or two products to get certified. Within that platform, you also want to make sure that the agents themselves have oversight. So, not just permissioning, but continuous policy enforcement. On a workflow, for example, where you might have a non-deterministic outcome, we incorporate into our security team’s workflows what are generally referred to as judge agents. You have an agent that produces an output, and then another agent that looks at that output and makes sure it’s a rational output, for example, before moving the workflow on to the next stage. So there are a couple of different techniques you can use in these workflows to preserve the deterministic outcomes from the RPA world, but now managed more broadly in the agentic world.
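To make the judge-agent pattern Scott describes concrete, here is a minimal Python sketch: a worker agent produces an output and a judge agent reviews it before the workflow advances. The `call_llm` stub and both prompts are hypothetical stand-ins, not UiPath’s implementation.

```python
from dataclasses import dataclass

@dataclass
class StepResult:
    output: str
    approved: bool
    judge_notes: str

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model call; replace with your LLM client.
    if "judge agent" in prompt:
        return "APPROVE: output is consistent with the task"
    return "Not exploitable: vulnerable function is never reached"

def run_step(task: str, max_retries: int = 2) -> StepResult:
    output, verdict = "", ""
    for _ in range(max_retries + 1):
        # Worker agent produces a candidate output for this workflow stage.
        output = call_llm(f"You are a security triage agent. Task: {task}")
        # Judge agent independently reviews the output before the workflow
        # advances; it must answer APPROVE or REJECT with a reason.
        verdict = call_llm(
            "You are a judge agent. Review this output for the task.\n"
            f"Task: {task}\nOutput: {output}\n"
            "Reply 'APPROVE: <reason>' or 'REJECT: <reason>'."
        )
        if verdict.startswith("APPROVE"):
            return StepResult(output, True, verdict)
    # The output never passed the judge: escalate to a human instead.
    return StepResult(output, False, verdict)

print(run_step("Assess CVE-2025-0001 for exploitability"))
```

The retry-then-escalate shape is one common way to bound non-determinism: the judge acts as the continuous policy-enforcement point Scott mentions.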
Rajeev Raghunarayan: That’s an interesting point, and congratulations, by the way, on your ISO 42001 certification. That’s a big step forward. Interestingly, you mentioned judge agents; that’s something we use at Averlon, too. I was just talking to our AI researcher the other day, and he was speaking in exactly the same language about how we make things more deterministic, or at least how we evaluate the responses that your AI agents generate. So it’s a very interesting thing. Scott, I want to pivot to one more thing. You made a point about AI being a friend or a foe, and I want to pick up on that aspect. How do you see AI helping defenders? Now, you are an Averlon customer; you’ve posted about it on LinkedIn, so it’s a well-known fact. You’ve been putting this into practice on a day-to-day basis on your own side as well, so I’d love to understand: where have you seen the biggest impact from applying agentic AI within your security operations so far?
Scott Roberts: Yeah, we are definitely an Averlon customer, and we’ve been one for quite some time. Averlon was an early adopter of many of these technologies, and this is a space we’re very close to. I’ve looked at it from two points of view. One is, what are the best-of-breed products we can bring into our environment to accelerate our team and our capabilities? And then, what are the ways we can leverage the UiPath automation platform to custom-build automations for the more niche, targeted things that make sense for my team? So: bringing in best-of-breed products, and supplementing those with our own agentic automation platform. We have really been able to take advantage of Averlon, for example, on the vulnerability management side. That’s where we’ve seen true value. One of the first experiences I had with Averlon was when a critical zero-day vulnerability occurred. The products we were using at the time came back and told us we had tens of thousands of instances of that vulnerability across our product, our cloud operating environment, and our IT infrastructure. And of course, you don’t want to be told you have to patch a critical zero-day, a 9.x vulnerability, and oh yeah, you have two hours to do it across 20,000 instances. It wasn’t really achievable. In our particular case, we used open source software with a vulnerable library inside a broader package, and that package was included in a number of areas. We really needed to figure out where the exploitable code paths were: where was there attacker-controlled input that led to the vulnerable function being called? So that we could do end-to-end exploitability analysis and make a determination around that. And that’s where Averlon really stepped up. It was able to look across that environment, do real-time analysis of the code, and say: here is the much smaller subset of those vulnerabilities that are actually reachable by an attacker in your environment, and those are the ones you need to go after. We were able to get patched on those particular vulnerabilities in a matter of hours, reducing that exposure to near zero, and we took care of updating the packages in all the other, sort of, orphaned code, if you will, as part of our regular maintenance cycle. So vulnerability management, I think, has tremendous potential to be accelerated through these agents. There are a couple of other areas as well. The second is product security: moving beyond just incident response, how do we build the capability right into our development process? That’s where automated remediation and proposed PRs are really coming in handy. We’re GitHub Advanced Security customers, and we were Dependabot customers, but there’s a high number of issues, and developers were ignoring the Dependabot alerts. I’m really excited by everyone’s proposed-PR capabilities and looking forward to leveraging that. And the third big bucket of how we’re using it inside our organization is in our security operations. One of the in-house agentic automations that we’ve built on UiPath has been our threat analyst agent, where we’ve been able to save over 13,000 human hours of responding to alerts and understanding where the true positives are in our environment, keeping our SecOps team focused on the most urgent issues.
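As a rough illustration of the first-pass, exploitability-oriented triage Scott describes (an assumed shape, not Averlon’s actual analysis), an agentic pipeline might first pre-filter inbound advisories against the deployed inventory before spending deeper effort on reachability. All names and records below are invented.

```python
from typing import Iterable

# Toy asset inventory; a real pipeline would query an SBOM or CMDB.
DEPLOYED_PACKAGES = {"openssl": "3.0.7", "log4j-core": "2.17.1"}

def applies_to_environment(advisory: dict) -> bool:
    """Cheap deterministic pre-filter: is the affected package even deployed?"""
    return advisory["package"] in DEPLOYED_PACKAGES

def first_pass_triage(advisories: Iterable[dict]) -> list[dict]:
    needs_analysis = []
    for adv in advisories:
        if not applies_to_environment(adv):
            adv["disposition"] = "not applicable"  # auto-closed, still logged
            continue
        # A deeper agentic step would go here: is there attacker-controlled
        # input that can actually reach the vulnerable function?
        adv["disposition"] = "needs exploitability analysis"
        needs_analysis.append(adv)
    return needs_analysis

inbound = [
    {"id": "CVE-2025-0001", "package": "openssl", "severity": "critical"},
    {"id": "CVE-2025-0002", "package": "libfoo", "severity": "high"},
]
# Only the openssl advisory survives the first pass.
print(first_pass_triage(inbound))
```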
Rajeev Raghunarayan: That’s amazing. I was just trying to take notes as you were talking, Scott. It’s interesting how you touched on multiple areas. One, you touched on vulnerability remediation and how you can prioritize issues. The second was the engineering workflow, and basically whittling down what the engineers need to do, especially with the new auto-remediation capability. And the third is on the SecOps side. So you’re leveraging this on multiple fronts, so to speak: AI for faster results, faster triage, and measurable time savings. I’m curious, how does that change the day-to-day rhythm for your team? Are there specific workflows or processes where you’ve seen the biggest impact in how people work today using AI?
Scott Roberts: Yep. So, on the security engineering side, for example, if you look at the open source inbound, there are, on average, 500 new vulnerabilities a week that come into our team from open source, and we’re not alone in that; that’s about the average rate. As a relatively constrained security team, like most security teams, we always need more people and resources, so we have to think about which ones we spend time looking at. And often, with that type of inbound, you’re just looking at the most critical ones and getting to the rest as you can. By adopting agentic triage, we’re able to have it take the first pass, if you will, on all of the vulnerabilities, so that we can determine in real time which ones apply to our environment and which ones don’t, and really get detailed. Overall, you can measure time to triage, which in some cases, on that longer tail, went from weeks before we could get to some of the medium and low vulnerabilities down to literally days and hours. And the amount of time people spend on that effort can be dramatically reduced as well. So you’ve seen both the time to triage and the hours spent on the triage function significantly reduced on the vulnerability management side. On the security operations side, I mentioned the tremendous savings we’ve had in people time. When an alert fired in your environment, it could take up to 4 hours to take that alert, understand its implications, pull the logs from all the different places where you need to get data, and determine whether it’s a true positive or a false positive. In some cases, we’ve been able to take 4 hours’ worth of investigation down to a minute and a half through agentic automation.
Rajeev Raghunarayan: Did you say minute and a half?
Scott Roberts: Yeah, a minute and a half.
Rajeev Raghunarayan: Okay.
Scott Roberts: Yeah, one of the workflows we’ve orchestrated has 61 agents in a giant swarm, all operating in parallel, to be able to do near real-time analysis on some of these things.
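A minimal sketch of that fan-out pattern follows, assuming a hypothetical `investigate` helper per evidence source; a real swarm would query actual log stores and run model-backed analysis per agent.

```python
import asyncio

# Illustrative evidence sources; a real swarm might run dozens of agents.
SOURCES = ["edr", "idp_logs", "vpc_flow", "dns", "email_gateway"]

async def investigate(source: str, alert_id: str) -> dict:
    # Stands in for a real log query plus per-source LLM analysis.
    await asyncio.sleep(0.1)
    return {"source": source, "verdict": "benign"}

async def swarm(alert_id: str) -> str:
    # Fan out one agent per evidence source; gather runs them concurrently,
    # which is what collapses a serial multi-hour investigation into minutes.
    findings = await asyncio.gather(*(investigate(s, alert_id) for s in SOURCES))
    is_tp = any(f["verdict"] == "malicious" for f in findings)
    return "true positive" if is_tp else "false positive"

print(asyncio.run(swarm("ALERT-1234")))
```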
Rajeev Raghunarayan: That must change the scale at which you can operate, obviously. Just going from 4 hours to a minute and a half, that’s awesome. Now, Scott, the obvious question with anything AI, and you mentioned this with your ISO certification: as you scale your AI workflows, I’m sure there are challenges in data handling and trust building. Your ISO certification speaks volumes; you’ve done something right there. But what are the hardest parts of applying AI at enterprise scale? How do you balance innovation with managing the risk that ensues, or that people at least perceive?
Scott Roberts: It’s not just my company. When I talk to my CISO peers about introducing various AI technologies, from internal instances of ChatGPT to Microsoft Copilot, the biggest surprise has been how bad most organizations are at access control around their internal documents and internal information repositories. Even the colleagues I’ve talked to at Microsoft have been surprised: you might have thought your SharePoint environment was very well secured, very well locked down, least-privileged, and all of a sudden, when you throw Copilot at your environment, people start asking, what is the salary of this person or that person? All of a sudden, some HR salary spreadsheet that everyone thought was secure pops up in the answer. So a lot of organizations have had to go back and rethink their overall enterprise data security, making sure they understand where the data is, whether it’s access-controlled, whether it’s classified and labeled properly. Those weaknesses were always there; you saw some of the same weaknesses revealed during the first deployments of enterprise search, back when enterprise search was a wave.
Rajeev Raghunarayan: That’s right.
Scott Roberts: People were surprised by what it would find. But now, instead of having to go look for it, this information is just being automatically picked up by all of these agents and copilots and surfaced, in some cases unexpectedly, to our users. So starting there is critical for me. And then layering in the identity management. All of us have been dealing with service accounts for 30 years, right, where you have a piece of software running with an identity that has the ability to take certain actions. In many ways, agents are an extension of that service account model, but now the things that code will do might be unexpected. It might dynamically try to grab a tool, for example, that it thinks would help with a triage action, and then try to incorporate that tool into its capabilities, and it might not have been expected that it would have that capability. So understanding the identity your agents are operating under, what they are accessing, and having really good telemetry and tooling around auditing and logging for that environment becomes even more critical.
Rajeev Raghunarayan: Interesting. Do you end up treating your agents like another human entity, with appropriate access control policies that you deploy for your agents? I’m curious.
Scott Roberts: Yeah, absolutely. You have proper identity management for that particular agent, you use role-based access control for that agent, and then you want to monitor what that agent is accessing: what data is it using, how is it using the data, is it sending the data someplace else? And even in our own product, the UiPath platform, we have a component called the AI Trust Layer, and we pass all of the calls to back-end models through that trust layer. So we have that inspection point: we can redact data, we can make sure it meets our encryption standards, and we can log and usage-monitor everything going in and out of the workflows.
Rajeev Raghunarayan: Interesting, okay. So not only do you have identity, you have a trust layer, and any call to an LLM goes through the trust layer first for validation, or redaction even, before the results go back.
Scott Roberts: Exactly. And by having that kind of trust layer, you can make governance statements such as: we will not allow PII to be sent to an external model, for example. And then there are advanced capabilities where you can tokenize those calls so that data doesn’t leave the environment, but when the return comes, you can rehydrate that token and still provide contextually relevant…
Rajeev Raghunarayan: …information, yeah, makes sense.
Scott Roberts: Without leaking the data out of your estate.
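A toy sketch of what such a trust layer might look like, assuming a hypothetical `send_to_model` stub and a single email-redaction rule; a production layer would cover many more data classes, encryption checks, and policy enforcement.

```python
import re
import uuid

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def send_to_model(prompt: str) -> str:
    # Stub for the external LLM call; simply echoes the prompt back.
    return f"Summary of: {prompt}"

def guarded_call(prompt: str, audit_log: list) -> str:
    vault: dict[str, str] = {}

    def tokenize(match: re.Match) -> str:
        token = f"<PII-{uuid.uuid4().hex[:8]}>"
        vault[token] = match.group(0)  # the raw value never leaves the estate
        return token

    redacted = EMAIL.sub(tokenize, prompt)
    audit_log.append({"sent": redacted})  # every outbound call is auditable

    response = send_to_model(redacted)
    # Rehydrate tokens so the caller still gets a contextual answer.
    for token, original in vault.items():
        response = response.replace(token, original)
    return response

log: list = []
print(guarded_call("Draft a reply to jane.doe@example.com about her ticket.", log))
print(log)  # the audit trail contains only the tokenized prompt
```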
Rajeev Raghunarayan: That’s cool. That is super interesting. Scott, again, this is interesting: you’re applying AI in security, for security, and you’re looking at how it’s improving outcomes. How do you actually know it’s improving outcomes? You mentioned a few examples earlier: the 4 hours to 1.5 minutes, the 13,000 hours of savings. You’ve seen measurable improvements in risk reduction. What kind of metrics, what kind of outcomes are you tracking, beyond, or perhaps including, those two numbers, to see whether your AI is actually improving risk reduction when you deploy technologies such as Averlon?
Scott Roberts: Yeah, I think we hit on some of the big ones. On the operational side, beyond just time to triage, you really want to look at time to remediate at the end of the day: from the time the vulnerability is first announced, to triage, to remediation, and being able to watch that entire life cycle come down over time. The other way to look at it is from a risk reduction point of view. If you follow traditional enterprise risk management, you have risk ratings associated with various parts of your infrastructure and environment. And are you able to measure the amount of risk reduction you’ve been able to deliver through the implementation of those particular strategies? So, depending on your perspective and what part of the organization you’re talking to, either operational metrics or higher-level business outcomes can be factored into the discussion. One of the other things I’ve been working on within my team is establishing a metric around what we call misses: the things we should have found but missed. We have scanning in place, we have all the right folks looking at the tooling, we have agents that are looking at code. But yet, through our bug bounty program, or from a customer’s scanner, we get a particular vulnerability report. We take each of those pretty seriously and look to see what was missing or lacking in our environment, and how we close the loop on that gap and raise the bar internally. So those external finding sources are important for us. And I recently introduced that as a board-level metric for my team: things we should have caught and didn’t, to hold ourselves accountable for raising the bar in that area.
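For illustration, here is a hedged sketch of how those life-cycle metrics could be computed from per-vulnerability timestamps; the records below are invented, not UiPath’s data.

```python
from datetime import date
from statistics import median

# Illustrative records: disclosure -> triage -> fix dates per vulnerability.
vulns = [
    {"disclosed": date(2025, 3, 1), "triaged": date(2025, 3, 2), "fixed": date(2025, 3, 4)},
    {"disclosed": date(2025, 3, 10), "triaged": date(2025, 3, 10), "fixed": date(2025, 3, 15)},
]

time_to_triage = [(v["triaged"] - v["disclosed"]).days for v in vulns]
time_to_remediate = [(v["fixed"] - v["disclosed"]).days for v in vulns]

print(f"median time to triage: {median(time_to_triage)} days")
print(f"median time to remediate: {median(time_to_remediate)} days")
```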
Rajeev Raghunarayan: Interesting, interesting. So yeah: risk reduction, time to triage, time to remediate, and the misses, and why the misses occurred. Not just the number of misses, but also the rationale behind why they occurred and how you’re overcoming them. Obviously, the goal at the end of the day is to reduce your exposure window, which is pretty much what every CISO is going for. Now, you mentioned the board. I think every board member and every executive we’ve spoken with tends to get excited about AI, but they may not fully grasp the risk side. So how do you frame the conversation with leadership and the board so that they understand both the potential and the pitfalls? You don’t want to be over-pivoting in one direction or the other.
Scott Roberts: Yeah, there are interesting board-level conversations occurring across the industry. In talking with my peers, some boards have responded from a fear, uncertainty, and doubt point of view, and just locked down environments, prohibiting and blocking access to certain sites. Fortunately, because we are an AI-forward organization, all the way from our founder down to the board to our team, we’ve been very encouraged to lean into the technology. Our founder actually sent out a company-wide communication encouraging every team to consider themselves an agentic team, to think about how they can transform their own processes on a day-to-day basis using agentic AI. So we had to look at this from a very balanced point of view in our environment. Instead of saying we’re going to block access to all of these sites, what we try to do is strike a reasonable balance: having corporate-approved, enterprise versions of these models that we point users to, so we can make sure company data is safe and secure and not being used to train external models, but without blocking users from accessing consumer-grade sites. And we have a set of technology and controls that we’re working to integrate, to be able to nudge users who are using those third-party tools back onto the approved guardrails. So instead of saying no, we’re trying to say, here’s how, in our environment. And our board has been pretty supportive of that. Overall, at the board level, what I try to do is go back to risk. Many board members may not be super familiar with the lowest levels of detail in the technology, so it all comes back up to risk, whether from ransomware or other large-scale security risks that we track in our enterprise risk register, and how the investments we’re making reduce that risk year over year. There’s our internal view of it, but then I try to supplement that at the board level with an external, independent view as well. Like a lot of organizations, we leverage third-party audits across a broad set of compliance regimes, everything from SOC 1 and SOC 2 to HIPAA; we’re HITRUST certified and UK Cyber Essentials certified, so we have a lot of external auditors that look at our environment and help provide that objective measure. And we also do a regular NIST CSF maturity rating, and measure year-over-year improvements against NIST CSF. So we try to have an internal view around risk, an external view for compliance, and an external view for cyber maturity, and that’s usually the lens through which we’re talking to our board.
Rajeev Raghunarayan: Okay, very interesting. I think we’re a little bit over time, but I’m going to ask you one last question, and folks, if there are any questions, please throw them into the chat window, and we’ll ask Scott another question if time permits. Scott, looking ahead: the role of AI in enterprise security is obviously changing over the next few years. What changes do you expect for CISOs and their teams as AI becomes more deeply embedded in how organizations evolve?
Scott Roberts: I think if I were to go back three years, we were starting to explore, for example, Microsoft Copilot; it was used in the coding world, and it was an assistant, if you will, to the human. And now things have evolved so quickly that AI agents are taking the lead on generating code from scratch, just based off requirements documents. So I think the one thing I can predict is that I have no idea what things are going to look like three years from now. And so we’re taking a much shorter-term horizon in terms of our predictions and trying to keep ourselves as agile as possible, so that as things change, we can quickly either adopt or react to the new environment or the new capability. One of the things that myself and my peers have been frustrated by is that every tool we use is now trying to become an AI tool in some way. You get Zoom’s AI agent, and you get Slack’s AI agent, and all of these other tools are now popping up. So it’s very important to understand what you have in your environment: what is your asset inventory, and where on their roadmaps are those products evolving to include AI capabilities? Because, for example, when Zoom came out with their first AI capabilities, their terms of service said they would have access to train models off of your data. That’s a big liability for our company and others, so we had to restrict that. And if you’re not staying on top of those evolutions and the fine print behind them, it can be pretty scary for organizations; you end up with some unintended consequences. So, staying on top of the environment and really understanding what organizations are doing. In terms of predictions: if you look at the evolution of the AI models, right now most of the models are very, very expensive to train, and the base foundational models evolve on a reasonable cadence. If you look at GPT-3, GPT-4, GPT-5, there’s been a kind of cadence. I believe that in the not-too-distant future, these models will become much more dynamic, where agents will be able to not just augment their model in real time with the data they collect on the fly, but then use that to fundamentally change the foundational model they’re based on, when they truly become almost self-evolving and self-learning. And that will change a lot of the ways we think about controls and guardrails today, because the underlying foundational model would no longer be read-only. What if it were read-write? When that happens, that’s going to be another major shift for a lot of us.
Rajeev Raghunarayan: Very, very interesting. Scott, this was fantastic. I could easily talk to you for another 40 minutes; I’ve got more questions than we have time for. Folks, for any questions you’ve thrown in today, I think we’re a little bit over time, so I’m going to leave it at this, follow up with Scott on any questions offline, and we’ll share that afterwards. Scott, thank you again for a very insightful discussion. I told you I could definitely have another half an hour or 40 minutes’ worth of discussion, but it was fantastic to hear how you and UiPath are approaching security in the AI era, and your perspective on where the industry is heading. And thank you to everyone who joined today. At Averlon, we use AI agents, as Scott highlighted, to help teams accelerate vulnerability remediation and close that exposure window. The goal is to turn findings into fixes that help security teams keep up in the AI era. We host these sessions to share what we’re learning as we build and deploy AI security in different places. If today’s discussion was useful, we’d love for you to stay connected with Averlon and join us for future sessions and research on how AI is reshaping the future of security. Once again, Scott, this was absolutely fantastic. I’d love to host you again some other time, and I’m really thankful for today.
Scott Roberts: Thank you so much for the opportunity. I enjoyed the discussion. Folks, feel free to connect and comment, and stay in touch.
Rajeev Raghunarayan: Thank you. Bye-bye.


