Security teams are flooded with thousands of findings, yet risk remains.
Traditional vulnerability and exposure management prioritizes based on static severity scores. But real risk emerges from how vulnerabilities intersect with identities, network exposure, and configuration in live environments.
In this session, we will examine how vulnerabilities, identity exposure, network reachability, and configuration drift combine to form attack chains. We will explore why static severity scores fail to capture exploitability in real environments, and how remediation decisions change when risk is evaluated in context.
In this session, you’ll see:
- How vulnerabilities, identities, and network exposure combine into real attack chains
- How exploitability is evaluated in operational context
- How teams move from findings to prioritized, actionable remediation
- How remediation workflows reduce time to fix across cloud and infrastructure
Led by Manish Datla, Founding Product Manager at Averlon, this live demo session is designed for security leaders and practitioners responsible for exposure and vulnerability management, cloud security, and remediation operations.
This transcript has been edited for readability and to remove filler words. It may contain minor corrections for clarity and spelling.
Rajeev Raghunarayan:
Awesome. Hey, thank you, everyone, for joining. My name is Rajeev Raghunarayan. I'm your host. I've spent over 20 years in cybersecurity across engineering, product, and go-to-market roles at companies including Cisco, FireEye, SentinelOne, Elastic, Anomali, and Forcepoint. The one pattern I'll say is consistent across my entire career is that we've kept improving detection. We've kept improving visibility.
But it's what happens after detection, the triage, the prioritization, the coordination with engineering, the actual risk reduction, where most programs are still struggling. We call that space remediation operations. That's what we're going to focus on today. This session is being recorded. Please drop your questions into the Q&A panel, and we'll cover them toward the end. Also joining me today is Manish Datla. He's a founding product manager at Averlon.
Manish has built and secured large-scale systems in real environments. At BlackRock, where he worked before Averlon, he worked on back-end infrastructure supporting multi-billion dollar asset management platforms. Prior to that, he was with Salesforce, where he was part of the Threat and Vulnerability Management team and helped drive risk SLAs from 65% to 99% across infrastructure and operations.
For the past three years, he's been at Averlon, taking those lessons to build the platform we'll walk through today. Welcome, Manish.
Manish Datla:
Thank you, Rajeev. I appreciate the introduction.
Rajeev Raghunarayan:
Excellent. So, without further ado, let's get the ball rolling. Before we look at the platform, it's worth grounding in why the problem exists in the first place. Let's start by defining what we call the exposure window. I'm going to use a couple of quick slides to ground us. So, what do we call the exposure window? For us, the exposure window is the time between when an exploit becomes available and when the issue is remediated.
At the end of the day, this is where attackers live; this is where attackers thrive. And the goal of every security team is to shrink, to compress, this window to near zero. Now, this is very interesting, because the more we move toward AI, the faster code is deployed, and the longer it takes us to understand what the code does and then understand what the remediation is.
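As a rough illustration of the metric itself, and not of how any particular team computes it, the exposure window for a single finding can be sketched as:

```python
from datetime import date

def exposure_window_days(exploit_available: date, remediated: date) -> int:
    """Days an issue was exploitable before it was fixed. A hypothetical
    sketch; real programs may anchor on disclosure date, first-detection
    date, or patch-available date instead."""
    return max((remediated - exploit_available).days, 0)

# Exploit published March 1, fix shipped March 29: a 28-day window.
print(exposure_window_days(date(2024, 3, 1), date(2024, 3, 29)))  # 28
```

The goal Rajeev describes is driving that number toward zero for the findings that actually matter.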
So this window is also being stressed by the speed at which development is occurring. So, why does remediation stay slow? If I look at the approaches, we've gone from everything being scanner-led, to aggregation, to, more recently, AI-driven prioritization. Things have improved, so this is not to say the market's not getting better. We are indeed getting better.
We have definitely moved beyond the traditional approach of relying on CVSS-based prioritization, or even, to some extent, EPSS, and doing everything else manually. We've moved to consolidating tickets, deduplicating, and automatically routing issues to the teams responsible for fixing them. And we've also seen vendors give people fix recommendations, albeit not environment-specific ones.
And more recently, you've seen companies come up that talk about AI-driven prioritization. How do we use AI to triage issues? How do we use AI to prioritize in the context of your environment? But still, remediation is left to the end user. Figuring out which of the priority tickets matters is still left to the end user. And this is where issues still compound.
There's good progress, but we haven't gotten to a point where this has been systemized, where this has been integrated into a workflow. That is what we call remediation ops. It's the journey from findings to fixes, because it's the fix that actually reduces the risk, not just the finding. Now, everybody in this industry understands these five stages: you prevent issues, you detect them, you triage them, you prioritize them, and you remediate them.
Now, prevention is a huge thing, often underestimated in terms of value, but I think in the AI world, given the speed at which code is coming, it's going to be even more powerful and more valuable than ever before. I think it's going to be a critical element of any future strategy. So, how does this work in practice? What is remediation operations?
Remediation operations should work with whatever your existing tools are, whether container security tools or cloud security tools; it can scan your environment by itself, or integrate with the tools you've already invested in. The real workflow starts when you ingest information from all these tools: the findings, the issues, the vulnerabilities, whatever you call them. It starts with understanding the vulnerability or the issue itself.
And then it layers AI on top. But AI is really important to get right, because as we all know, AI has a ton of problems if it's deployed blindly. There's hallucination, and there's judgment it makes that's not necessarily always accurate. So it's important that these AI algorithms are continuously trained.
The AI agents and the planning algorithms that Manish will talk about in a little bit are continuously evolving and keeping up with where the industry is going, and there are techniques you use to make sure hallucination is reduced. Otherwise, the trust in any AI-generated solution is going to disappear. Let's talk about the three key pillars here. What do you really need? Once you've ingested everything, you need to triage everything as much as possible. What does triage really mean?
It basically means answering questions that are relevant and pertinent to your environment. Is that issue relevant to my environment? For instance, it could be a CVSS 10 or a 9.8, but is it actually exposed in my environment? Is it reachable in my environment? Is it exploitable in my environment? If not, we'd just be chasing an SLA driven off an artificial number. Triage, therefore, becomes important: can I understand my environment and the applicability of an issue in it?
This is where a lot of triaging can happen, and this is where AI becomes really powerful, because it can ingest a ton of data, correlate that with your environment, and give you an answer. But it's not just about giving you an answer. It's also telling you which of these issues matter the most. If one issue actually leads to high-value data, you probably want to focus on that before you focus on something else.
And those are things you can build on top of AI algorithms; we'll talk about the concept of attack chains as we go through the presentation. The second bucket, obviously, is: finding and prioritizing is great, but my risk, as I said, only reduces when I remediate. And this is where the agentic remediation capability comes into play. Can I tell you what to fix? Can I tell you how to fix it? Can I tell you whether the fix is going to cause any regression?
That's what engineers are most worried about when implementing something. Two things engineers typically worry about are: am I wasting time fixing security issues I don't need to, because they don't apply to my environment? And if I fix it, is it going to break something else, cause a regression, and create a SEV1 availability issue on the other side? So, any remediation recommendation can't just be generic.
It has to be tailored to your environment. These two pillars will take you from "let me take the backlog, triage it, and identify the issues that are pertinent and can access high-value data" to "let me remediate them." They help you compress and shrink the backlog. But what also matters is: why am I continuing to add more issues to the backlog? Even though this bucket is at the end, it's actually more prevention.
In fact, it's not just more prevention, it is prevention. Can I predict if a particular change is going to expand or open up more exposure in my environment? If I open up a particular load balancer, is it going to expose a vulnerability? That is preventative, before any code moves to production. Can I stop the issues before they move to production instead of trying to catch up afterward? That changes the speed at which the exposure window can shrink.
Okay, so one is backlog reduction, and the other is backlog elimination altogether, because a new issue never shows up in your backlog. And all of this needs to integrate with tools your developers are already using. Developers hate jumping out of the tools they use today, whether it's an IDE, a version control or source code management system like GitHub or GitLab, or the terminals they may be using.
You need to meet them there, not make them try to use a different tool. So again, how do you integrate directly into developer workflows so the work fits naturally into them? From an operational perspective, we've had customers actually using this, and we've seen some really significant results. Obviously, the mileage varies across deployments, but we've seen as high as a 95% reduction in false positives.
In some cases, we've seen critical issues, the 9-plus CVSS scores, go down by over 50%. And that's a massive savings in the time any security or engineering team would otherwise have put into that analysis. Obviously, the thing everybody cares about is how fast you can resolve those vulnerabilities, because that's where the exposure window gets impacted. One customer actually told us in one of our previous webinars, and you can see it on the Averlon website.
It was Scott Roberts from UiPath, who said that in some cases his team has gone from taking four to four and a half hours to triage some issues down to under 90 seconds. That's more than a 100x improvement in efficiency that he's seen for his team by applying some of these principles in their security practice.
So without further ado, let me turn it over to Manish, who's going to cover exactly what you care about: how do you triage issues in a way that eliminates noise? How do you identify the high-risk pathways and the specific issues that lead to them? How do you remediate those issues directly in developer workflows? And how do you stop new issues from getting in with something like precognition? Manish, I'll turn it over to you. Take it away.
Manish Datla:
Thank you, Rajeev, for that context. Let me just share my screen. Alright. So, as Rajeev mentioned, there are three key pillars in our Averlon platform. One is the triage component. Then we have remediation, and finally the precognition bits. Through the course of this demo, I'll walk you through all three components, and why I wish I had this back when I was at Salesforce, and how it would have simplified my life drastically.
I'll also give you material examples as we go through the entire process. A little bit of context on the environment I'm carrying this out on: this is a goat environment we've set up, your typical intentionally vulnerable environment, with containers, a couple of Kubernetes clusters, VMs, storage, and a database sitting around. I'll primarily focus here on the triage side of the story. When we use the word triage, think of it like an AI security engineer.
The best way I find to describe this is what used to happen back at Salesforce. We had a number of scanning tools across the application security stack and the infrastructure security stack. These tools varied: one for cloud misconfigurations, another for cloud vulnerabilities; then, on the application security side, a couple for SCA and SAST, and similarly for DAST and IaC issues.
Trying to recreate the same experience here, this environment has scanners enabled from Snyk, GitHub Advanced Security, and Dependabot. Once the customer onboards, Averlon establishes the network topology of how different assets communicate with each other and what the permissions are from one asset to another. In addition to that, it overlays the issues these scanners have detected.
That establishes the baseline: the total number of issues detected, and how many of them are critical and high. Once that baseline is established, similar to how security engineers would triage in the traditional vuln management world, our AI security engineers, agents that have been given different sets of skills, triage to first determine how many exploitable issues you've got.
These are issues that, given the context of your environment, the package, or the cloud misconfiguration, can be exploited by an attacker. And then we'll show you how those exploitable issues can actually be used by an attacker to carry out a compromise. So that's the first part of the story: risk analysis and triage.
From there, we'll go into the remediation side, where we'll show you how our PRs help you remediate issues across the SDLC of your application and infrastructure stack, where we integrate into your coding tools, CLI-based coding tools, and IDEs, and give developers one final PR they can use to remediate, instead of going through the good old workflow of tickets and so on.
So, with that baseline established: in this account, we had about 240K issues detected, out of which roughly 3,000 were deemed exploitable by our AI agents. So how do we go about this process? Let's take this vulnerability here. This is a vulnerability detected in the core package of Git, specifically in the context of it being present in a Windows Subsystem for Linux.
Ideally, if Git was enabled within the Windows Subsystem for Linux, it could result in information disclosure, because the Linux system, which is supposed to be self-contained and run its own set of commands, is somehow able to look into the Windows side of things and look through the file systems present there. If you look at the parameters and the risk associated with this particular vulnerability, the CVSS score is fairly high: 9.8.
It is exposed over the network, and it is externally exposed as well. But when we triaged it, we said the likelihood of this exploit happening is low, and we reduced the severity to 3.4. The primary reason: the issue was not actually present. The package detected here was git-man, a package that contains the manual pages for Git. So it was the wrong package being looked at.
Second, for this vulnerability to be carried out, it had to be within the context of a Windows Subsystem for Linux, but instead this was found as part of a containerized Linux environment, on a Linux container, so the likelihood of this attack being carried out is very low. That's where you'll see the final risk analysis, where we give the entire justification for why such a thing is not critical, with the explanation I just provided, and then we reduce its severity.
To give you a peek at how we do this: we have an uber-agent, so think of it as an agent with the skills of a security engineer. The skills are broken down along three primary pivots. One, the ability to understand the CVE itself. We go into the specifics of the CVE: in order for it to be exploited, can it be exploited over the network? Does it need user interaction?
So we're looking at various properties associated with the CVE to determine the conditions and circumstances under which it can be exploited. Two, we go into the properties of the asset itself. We're looking at whether it is exposed. Is this limited exposure or broad exposure? If it is limited exposure, is it exposed only to a specific set of IPs? If that's the case, what ports are open?
We also look for the presence of mitigating factors, like existing WAF rules or Cloudflare rules. We take all these properties associated with the asset, and this becomes the second set of information we actively derive. Then finally, we look into the package itself that is compromised, or that the CVE has been detected against. If this package is present, does it have to be present on a specific port? If so, is that port actually open?
Those are the package-related properties we look at. We look at how the package is functionally being used and under what circumstances it can be exploited. We then combine all of this information to eventually come up with the analysis I showed you before. So it's one agent taking in all of your CVE-related details, information coming in from NVD, KEV, and the active threat feeds we ingest.
Then we pull in the asset-related information, from our own ability to establish the context of your network topology. And finally, we look into the package and how it's being used, and eventually get to a conclusion. So, going back to the stat Rajeev was showing: with one of our existing customers, we were able to reduce the volume of what other scanners deemed critical to what we deem critical by almost 90 to 95%.
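To make the three pivots concrete, here is a schematic sketch in Python. The real agent is AI-driven, and every field name below (required_platform, waf_in_front, in_use, and so on) is invented for illustration, not Averlon's actual schema:

```python
def analyze(cve: dict, asset: dict, package: dict) -> dict:
    """Combine three evidence pivots (CVE properties, asset exposure,
    package usage) into a single triage verdict. Illustrative only."""
    reasons = []
    # Pivot 1: conditions under which the CVE itself is exploitable.
    if cve.get("requires_user_interaction"):
        reasons.append("exploit requires user interaction")
    if cve["required_platform"] != asset["platform"]:
        reasons.append(f"requires {cve['required_platform']}, asset is {asset['platform']}")
    # Pivot 2: asset exposure and mitigating controls.
    if not asset.get("exposed_ports"):
        reasons.append("no ports exposed")
    if asset.get("waf_in_front"):
        reasons.append("WAF mitigates network exploitation")
    # Pivot 3: whether the vulnerable package is actually in use.
    if not package.get("in_use"):
        reasons.append("package present but not functionally used")
    verdict = "exploitable" if not reasons else "unlikely"
    return {"verdict": verdict, "justification": reasons}

# The git-man example: wrong platform, package not in use -> "unlikely".
r = analyze(
    cve={"required_platform": "windows-wsl", "requires_user_interaction": False},
    asset={"platform": "linux-container", "exposed_ports": [443]},
    package={"in_use": False},
)
print(r["verdict"], r["justification"])
```

The point of the sketch is the shape of the reasoning, not the rules themselves: each pivot contributes evidence, and the combined justification is what gets surfaced to the user.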
Because triage at scale doesn't just look at the surface-level factors, exploitability, external exposure, CVSS scores; it goes further, understanding the context of your environment and then reducing the set down. So that's one example. Let me show you another example of how this same triage comes into play. This is a Linux kernel vulnerability detected specifically against Marvell chipsets.
For this CVE too, the CVSS score is 9.8, exploitable over the network, and it is externally exposed. But the thing you'll notice is that for the attacker to carry out the attack, they have to be on the same network as the Wi-Fi router itself. And second, a typical cloud environment does not have the hardware drivers required for this particular CVE to be exploited.
So, considering these two factors, not being in proximity, and your cloud environment not having the required drivers, the likelihood of something like this being exploited is also reduced significantly. We've seen numerous such cases. These are just two examples, but when we triage at scale, a lot of the things these scanners deem critical turn out not to be critical, because certain ports are not exposed.
Or because the binaries associated with certain detected packages are not present, or because the exploit requires proximity to the network. So there are significant triage implications; when looked into in detail, you can reduce the number of issues that truly matter and that your developers have to focus on. By carrying out this end-to-end, active triage of issues, we were able to reduce the number of exploitable issues to
3,670. If you look at this, initially almost 70,000 issues were deemed critical and high, but now that's been reduced to roughly 3,500 issues. Once we've established this baseline of which issues are exploitable, we'll also show you how attackers can use them to actually compromise your account. We call this attack chains.
This is a concept you've probably heard of by this point, but the way we go about it is, firstly, our attack chain analysis is carried out on top of the exploitability analysis I just walked you through, which means the results of these attack chains are far more trustworthy and reliable.
Two, we chain together different issues, whether they come from your application security scanners or your infrastructure security scanners, to show you how an attacker could use them to compromise your account. In this particular attack chain, the chatbot front end has a bunch of RCEs. These were detected in code by Snyk, so they were SAST issues.
So as your developer is writing code, we can provide this context within the code itself: if they're trying to push code that could potentially result in access to data stores deemed critical, we can surface that as a result of this analysis. The chatbot front end is actually exposed to the internet.
You'll see we've actually detected this as limited exposure, so potentially we will reduce the risk associated with it. But attackers are able to get to the chatbot front end through a remote code exploit, laterally move to the chatbot API, which also has remote code exploits, and then laterally move to the chatbot LangChain VM, which has a role attached that gives access to two S3 buckets.
So the attacker is able to read the contents of those S3 buckets and exfiltrate the data. What I really like about this is that not only were we able to see the ingress access and the fact that it was limited exposure, we were able to see the chain three to four layers deep, and we were also able to detect that there was egress access. When you have evidence like this and present it to your developers, or even when you're trying to accept some form of risk,
it grounds the truth of how you can go about vuln management from a pure risk-based perspective, versus deriving it purely from SLA numbers and trying to hold people accountable for fixing certain critical issues within 7 or 30 days. So that's one of the attack chains. The way we go about doing this is we map all the exploitable issues I just showed you to the different MITRE techniques and tactics that exist.
And at scale, with a bit of eye candy here, these are all the different ways those exploitable issues could be used to compromise your account. We map them to different attacks: data exfiltration, the example I just showed you, privilege escalation, data destruction, and multiple other techniques and tactics from a MITRE perspective. And then what we tell our customers is:
just focus on the small subset of assets that breaks these attack chains, to get rid of the risk most critical to you. Again, this is a means for us to help you prioritize your remediation actions; it's not everything we're asking you to remediate, but a way of recommending the set of issues that are most critical. So you first remediate all your issues from an attack chain perspective.
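The chain-and-choke-point idea can be illustrated with a small graph sketch. The asset names mirror the demo's example, and the edge model is a deliberate simplification of what a real attack-path engine computes:

```python
# Hypothetical reachability graph mirroring the demo's chain: an edge
# A -> B means an attacker on A can move to B via an exploitable issue.
edges = {
    "internet": ["chatbot-frontend"],          # limited ingress exposure
    "chatbot-frontend": ["chatbot-api"],       # RCE
    "chatbot-api": ["chatbot-langchain-vm"],   # RCE
    "chatbot-langchain-vm": ["s3-bucket-a", "s3-bucket-b"],  # IAM role
}

def attack_chains(src, targets, path=None):
    """Enumerate simple paths from an entry point to high-value targets."""
    path = (path or []) + [src]
    if src in targets:
        return [path]
    chains = []
    for nxt in edges.get(src, []):
        if nxt not in path:  # avoid cycles
            chains += attack_chains(nxt, targets, path)
    return chains

chains = attack_chains("internet", {"s3-bucket-a", "s3-bucket-b"})
# A choke point is an intermediate asset on *every* chain: remediating
# the exploitable issue there breaks all the chains at once.
choke = set.intersection(*(set(c[1:-1]) for c in chains))
print(len(chains), sorted(choke))
```

This is why a small set of assets (31 in the demo) can cover a much larger set of chained issues: fixing a shared intermediate hop invalidates every chain that passes through it.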
And then we ask you to go into the other buckets: externally exposed issues, issues with known exploits, and externally exposed issues with RCEs. This way, we help you navigate the whole process of determining what is critical and how to go about remediation. So, coming back to the chart we started with: we started with 240K issues and reduced that to roughly 3,000.
Out of those 3,000, we deemed about 1,500 to be issues on attack chains. And those 1,500 issues map to about 31 assets we want you to fix. These were the assets we highlighted. In the world up until now, the way it would have gone is: a security engineer looks at what Averlon said was critical, we've triaged it, and they would have gone and created tickets.
Now the developer takes the ticket, tries to get a sense of how the issue needs to be fixed, tries to apply the fix, say by upgrading a package version, and the code breaks. That results in back and forth, or they want to raise an exception. But we're flipping the model around: for every asset we tell you is critical, we also raise PRs that help you fix it. So, to start things off, I'll give you an example.
Let's assume that among those 31 assets we just saw, one of them is a container. The container included issues, some of which could have come in from the base image layer, and some from the application layer. So what Averlon does, once it's deemed a certain asset critical and certain issues within that asset critical, is:
if it's a container, we map it back to your Dockerfile, map the issues back to the layer they're coming from, and actually give you a PR with the necessary fixes. If you look at what's happening here: we detected a certain set of issues, about 38 vulnerabilities, mapped them to the packages they come from, and upgraded those package versions.
And as part of updating the package versions, we saw there was a potential breaking change, because a certain package was deprecated. So our agents went in, upgraded the packages, and in addition swapped out the deprecated package based on its advisory and updated the associated code as well.
At this point, the developer is just reviewing the PR. We raise two sets of PRs: one for packages that need pure upgrades that don't affect your code, and another for upgrades that require code changes. We combine those two and give them to your developers; they review, validate, and push the fixes.
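The two-PR split can be sketched as a simple partition of findings. The package names, the fixed_in field, and the deprecated flag below are all made up for illustration, not a real Averlon schema:

```python
# Findings mapped back to the packages they come from.
findings = [
    {"package": "libfoo",  "current": "1.2.0", "fixed_in": "1.2.9", "deprecated": False},
    {"package": "libbar",  "current": "2.0.1", "fixed_in": "2.1.0", "deprecated": False},
    {"package": "oldhttp", "current": "0.9.0", "fixed_in": None,    "deprecated": True,
     "replacement": "newhttp"},
]

pure_upgrades, code_changes = [], []
for f in findings:
    if f["deprecated"]:
        # Deprecated package: swap to the advised replacement and update
        # call sites, so this PR touches application code.
        code_changes.append(f"replace {f['package']} with {f['replacement']}")
    else:
        # Version bump only; no application code is affected.
        pure_upgrades.append(f"{f['package']} {f['current']} -> {f['fixed_in']}")

print(pure_upgrades)  # the low-risk PR
print(code_changes)   # the PR that needs closer review
```

Splitting this way lets reviewers fast-track the version-bump PR while giving the code-changing PR the scrutiny a potential regression deserves.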
This way, we tremendously reduce the effort that goes into putting remediation in play. And this can be extended beyond GitHub, the example I'm currently showing you; we can also take it into the context of MCP servers, so we give out an MCP server.
The way it works is we have an MCP server that we expose with the list of issues. Once we give that context, it can be used inside your IDE or your CLI-based coding tools, like Claude Code or Codex. We give those coding agents the required context on what these issues are. In this case, you can see it's actually calling our MCP server, getting the context of all these issues, mapping them back to the files that are locally present, and then carrying out the remediation I just showed you.
So that's the whole story: your scanning tools detect a bunch of issues, we determine what's exploitable, map it back to your codebase, and eventually help you raise a PR for it. And we don't just stop at helping you fix it; we also help you prevent it. I'll give you another example. Let's say your DevOps engineer changed a property of a load balancer.
Not a very common thing, but let's assume some network segmentation happened, or the properties of a security group changed, so something that was not exposed to the internet became exposed. In the current world, this change would have been deployed, a CSPM vendor would have flagged the issue, and then you'd have seen that it exposed a couple of other assets.
But we pull that in even before you can push the code. The moment this change comes in, we analyze the PR. We can overlay the potential impact of the PR before the code changes, because of the network topology we established when we onboarded your account. And for this particular change, we were able to say that two EC2 instances got exposed to the internet:
AWS instances Web1 and Web0. Both of them had RCEs present, now on externally exposed instances, so the risk of this change is significantly high. We provide this context to the developer right when the PR is raised. This can function as a guardrail; depending on how you set it up, for example as a GitHub Action, it prevents dangerous, risky changes from going through.
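As a hedged sketch of such a pre-merge guardrail (the change format and the risk rule are invented for illustration; the real analysis uses the full network topology), the core check might look like:

```python
def assess_change(change: dict, topology: dict) -> dict:
    """Fail a PR when it newly exposes instances that carry RCEs.
    `change` describes a security-group or load-balancer edit, and
    `topology` maps instances to known exploitable issues; both shapes
    are invented for this sketch."""
    newly_exposed = change["instances_made_public"]
    risky = [i for i in newly_exposed if "RCE" in topology.get(i, [])]
    if risky:
        return {"allow": False,
                "reason": f"change exposes instances with RCEs: {risky}"}
    return {"allow": True, "reason": "no new exploitable exposure"}

# The demo's scenario: Web0 and Web1 both carry RCEs and would
# become internet-facing, so the guardrail blocks the merge.
verdict = assess_change(
    {"instances_made_public": ["Web0", "Web1"]},
    {"Web0": ["RCE"], "Web1": ["RCE"], "Db0": []},
)
print(verdict["allow"], "-", verdict["reason"])
```

Wired into CI, a check like this turns the topology analysis into a merge gate instead of a post-deployment finding.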
So that's how we help you with the entire lifecycle of detection, remediation, and prevention. Bringing this full circle back to what Rajeev started with, these make up the core components of Averlon's agentic vuln management platform. This is, in summary, our platform. Any questions at this time?
Rajeev Raghunarayan:
Manish, there is one question in the chat. Basically, it asks: when Averlon triages, how often does it change the severity of issues from what the scanner said it was?
Manish Datla:
Yep, so this is directly proportional to how the scanning schedule has been set up. Typically, we scan for our customers anywhere from every few hours to every day, and every time a scan goes through, we run the triage against the list of issues that were detected by your scanners, and then we either upgrade or downgrade the severity. So the short answer is: every time the scan runs, we update the severity of the issues, upgrading or downgrading as needed.
Rajeev Raghunarayan:
Awesome. One other question, Manish. What are the factors that go into the triage agent's decision-making? Is it capable of re-triaging based on details provided by end users?
Manish Datla:
Yes. Within the product, we've provided the ability for somebody to agree or disagree with the analysis that we've provided. An example of this could be: we potentially didn't see a mitigating factor in place for an asset or a group of assets. Or, say, the way a certain package is being used differed from why we deemed it critical. We take that information, and then we enable the agent to re-triage it.
And when we re-triage, we also make sure that if there are other assets that match this property, that's taken care of as well, so it's not just applied to the specific issue the user brought up, but to all the other issues that were detected on that asset, or on other assets sharing the property.
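[Editor's note: the propagation behavior described here can be sketched roughly as follows. The issue records, the `shared_key` idea, and the `retriage` helper are illustrative assumptions, not the product's actual data model.]

```python
# Hypothetical sketch: when a user disputes one finding (say, a mitigating
# factor the triage missed), apply the severity adjustment to every issue
# sharing that property, not just the single disputed one.

issues = [
    {"id": 1, "asset": "api-gw", "package": "libfoo", "severity": "critical"},
    {"id": 2, "asset": "api-gw", "package": "libbar", "severity": "critical"},
    {"id": 3, "asset": "batch",  "package": "libfoo", "severity": "critical"},
]

def retriage(issues, disputed_id, shared_key, new_severity):
    """Adjust the disputed issue and all issues matching the same property."""
    value = next(i[shared_key] for i in issues if i["id"] == disputed_id)
    for issue in issues:
        if issue[shared_key] == value:
            issue["severity"] = new_severity
    return issues

# The user disputes issue 1 because of how libfoo is used; issue 3 uses the
# same package, so it is downgraded too, while issue 2 is untouched.
retriage(issues, disputed_id=1, shared_key="package", new_severity="medium")
```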
Rajeev Raghunarayan:
A third question just popped in. Do clients use your analysis for exception management and compliance?
Manish Datla:
Yes, that is one of the key use cases for producing an analysis like this. Specifically, exception management, where you've got to accept a certain amount of risk: you can use this as the basis to determine why something is deemed critical.
If a developer disagrees and can show the right evidence, this can be used from an exception management perspective. And in terms of compliance, obviously there are certain SLAs that your developers have to stick to, so it helps on that front as well.
Rajeev Raghunarayan:
So, I'll take one final question, and of course, if there's anything else, please throw it into the Q&A window. How are the pull requests that you're showing different from those provided by Snyk, or Wiz, or some of the other vendors?
Manish Datla:
Yep, so Snyk, or even Dependabot for that matter, all of these tools raise pull requests. A few key distinguishing factors come into play. One is that they don't have context of your infrastructure, so the number of PRs that your developers are having to deal with is exponentially higher. Two is, since they don't have context of your infrastructure and how it's been established, some of the so-called guardrails that they establish are not accurate,
or don't have the context that Averlon is able to provide to you. Here, when we tell you that something is critical, we are also able to provide that context based purely on how you're actually running your applications in production. Those two are the key factors that come into play. And then third is obviously a factor of where these tools sit: they are primarily looking at just the application side of the code. They don't have context into your infrastructure issues.
So some of your issues, like entitlement issues, or even cloud misconfigurations, won't be covered there. That's how Averlon's PRs are more contextually relevant, help you with some of these breaking changes, and eventually ensure you're looking at a smaller subset of them.
Rajeev Raghunarayan:
I love that, love that. Thank you, Manish. One more question just popped in: can I feed information about my assets back to Averlon and make sure it knows what we consider crown jewels?
Manish Datla:
Yes. Today, as part of onboarding, or even post-onboarding, you can mark a certain set of assets as crown jewels, whether based on tags, or because they belong to a certain team or a certain class of assets. Once they're marked as crown jewels, even if an issue would otherwise be triaged slightly lower, we'll still keep the triage a little higher, so the contextual relevance of that asset is taken into consideration.
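[Editor's note: this crown-jewel adjustment amounts to putting a floor under the triaged severity for tagged assets. A minimal sketch, with the severity scale, the `crown-jewel` tag, and the `floor` default all assumed for illustration:]

```python
# Hypothetical sketch: issues on assets tagged as crown jewels keep a floor
# on their triaged severity, even when the technical triage alone would rank
# them lower; higher ratings pass through unchanged.

LEVELS = ["low", "medium", "high", "critical"]

def contextual_severity(triaged: str, asset_tags: set, floor: str = "high") -> str:
    """Raise the triaged severity to at least `floor` for crown-jewel assets."""
    if "crown-jewel" in asset_tags and LEVELS.index(triaged) < LEVELS.index(floor):
        return floor
    return triaged

contextual_severity("medium", {"crown-jewel", "team:payments"})  # bumped to "high"
contextual_severity("medium", {"team:web"})                      # stays "medium"
contextual_severity("critical", {"crown-jewel"})                 # stays "critical"
```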
Rajeev Raghunarayan:
Excellent, excellent. One last question, Manish, in case nothing else comes up: explain how attack chains are different from what some of the other vendors also do?
Manish Datla:
So the main thing with attack chains, fundamentally, when you look at some of these other vendors, is that they've historically treated it like this: if you're exposed to the internet, and you have access to a crown jewel of sorts (when I say crown jewel, it could be a storage bucket, or a secrets manager), that became an attack chain.
Just because you're externally exposed, there are usually huge network graphs, and you have access to that key, or the crown jewel of sorts, and that becomes the critical asset. This is what people have been treating as the so-called kill chains, or attack paths, or other terms we would have seen. But when you come to Averlon, the thing that distinguishes us is, first, the depth of what we can go into.
So it's not just externally exposed assets; we are able to see layers of lateral traversal and privilege escalation, sometimes five or six levels deep, typically the way an attacker would have combined them and carried it out. And that brings in the major distinction between what we are able to do versus, say, what some of the other vendors claim to do.
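[Editor's note: the difference Manish draws can be made concrete with a toy graph walk. Instead of the one-hop "exposed asset can reach a crown jewel" check, multi-level chains come from traversing the lateral-movement and privilege-escalation graph. The graph shape, node names, and depth limit below are illustrative assumptions.]

```python
# Hypothetical sketch: breadth-first search over a reachability graph, finding
# all paths of up to max_depth hops from internet-exposed entry points to
# crown-jewel assets. A one-hop exposure check would miss the chain below.
from collections import deque

def attack_chains(graph, entry_points, crown_jewels, max_depth=6):
    """Return all paths (at most max_depth nodes) from entry points to crown jewels."""
    chains = []
    for start in entry_points:
        queue = deque([[start]])
        while queue:
            path = queue.popleft()
            if path[-1] in crown_jewels and len(path) > 1:
                chains.append(path)
                continue
            if len(path) >= max_depth:
                continue
            for nxt in graph.get(path[-1], []):
                if nxt not in path:  # avoid revisiting nodes (no cycles)
                    queue.append(path + [nxt])
    return chains

# Exposed web tier -> app host -> assumable service role -> secrets store.
graph = {"web": ["app"], "app": ["svc-role"], "svc-role": ["secrets"], "secrets": []}
chains = attack_chains(graph, entry_points=["web"], crown_jewels=["secrets"])
# chains == [["web", "app", "svc-role", "secrets"]]: a three-hop chain
```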
Rajeev Raghunarayan:
Makes sense, makes sense, yeah. And sometimes the application architecture determines that; three-tier architectures were fairly common back in the day, and N-tier architectures aren't uncommon now.
Manish Datla:
Correct.
Rajeev Raghunarayan:
Makes sense, makes total sense. Awesome. If there are any other questions, I'll give you a second here to type them. Okay, if not, I would love to thank Manish. Hey, Manish, thank you so much for walking us through the demo. And thank you to everyone who joined us today. What we're trying to address is this operational layer that starts after detection: the decisions, coordination, and execution that actually reduce risk.
If this resonates, and you'd like to do more of a deep dive in your own environment, we'd be happy to schedule a follow-on conversation. Once again, we appreciate the time that you spent with us. Have a great rest of the day.
Manish Datla:
Thank you.
Rajeev Raghunarayan:
Bye-bye.