Insights
Dec 8, 2025
14 min read

A Practical Guide to Patching EC2 with AWS SSM Patch Manager

Vishal Agarwal

The Grace Period is Over

The grace period is over. One in three vulnerabilities is now exploited on or before disclosure day [1]. The former "safe window" between vulnerability disclosure and exploitation has officially collapsed.

For Equifax, an unpatched Apache Struts vulnerability (CVE-2017-5638) cost $1.4 billion [2,3]. For Microsoft Exchange customers, slow patch installation meant 250,000 organizations compromised by the Hafnium attacks [6,7,8]. Most recently, state-aligned actors exploited WinRAR's CVE-2025-8088 in the wild before most organizations even knew about it [4,5].

Most organizations have patch management processes. Few have processes that consistently close the exposure window before attackers strike. The difference comes down to three things:

  • Automation that removes human delay.
  • Prioritization that focuses on real, exploitable risk.
  • Visibility that goes beyond proving compliance to proving that real, exploitable risk has actually been closed.

AWS Case Study: Why SSM Patch Manager?

In AWS cloud environments, SSM Patch Manager automates the patching of operating system and application binaries across your EC2 instances. Unlike manual patching workflows that depend on human coordination and timing, Patch Manager enables:

  • Scheduled, automated patch operations that run without manual intervention
  • Centralized compliance reporting across your entire fleet
  • Flexible patch baselines that let you control what gets patched and when
  • Integration with AWS Systems Manager for unified infrastructure management

For teams managing dozens or hundreds of instances, this automation isn't optional—it's essential. Manual patching simply can't keep pace with the volume and velocity of modern vulnerabilities.

Who should read this guide: Security engineers, DevOps teams, and cloud architects responsible for maintaining secure AWS infrastructure and meeting compliance requirements.

More information about Patch Manager is available in the official AWS documentation.

Prerequisites Checklist

Before setting up automated patch management with Patch Manager, ensure you have:

  1. SSM Agent installed on target instances: the agent comes pre-installed on many AMIs from Amazon and trusted third parties, including Amazon Linux, Amazon Linux 2, macOS, Ubuntu Server, and Windows Server. On other operating systems, it must be installed manually (a quick way to verify agent registration is shown after this list).
  2. IAM permissions configured for SSM: AWS Systems Manager requires explicit permissions to perform actions on your target instances, for example via the AmazonSSMManagedEC2InstanceDefaultPolicy managed policy.
  3. Additional prerequisites specific to Patch Manager.
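
A quick way to confirm that the agent and permissions are in place is to list the instances Systems Manager can see; anything missing from this output is not yet manageable (the --query projection below is just one convenient way to slice the response):

# List managed instances along with agent and connectivity status
aws ssm describe-instance-information \
  --query 'InstanceInformationList[*].{Id:InstanceId,Ping:PingStatus,Agent:AgentVersion,OS:PlatformName}' \
  --output table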

Key Concepts: Understanding the Building Blocks

Before diving into implementation, let's clarify three core concepts that form the foundation of SSM Patch Manager.

Patch Baselines

Patch Baselines define which patches are approved for installation on your instances. AWS provides predefined baselines for each supported operating system, but you can create custom baselines with your own approval logic. The baseline also serves as the benchmark for compliance reporting, and instances that don't meet the baseline are flagged as non-compliant [9].

Patch Groups

Patch Groups associate target instances with specific patch baselines and enable coordinated rollouts and differentiated patching strategies. For EC2 instances, patch groups are defined using tags. 

Maintenance Windows

Maintenance Windows create schedules for patching operations, minimizing disruption to normal operations. You control when patching happens, how long the window lasts, and what actions occur when patches require reboots.

Implementation Guide

The following walkthrough uses both Terraform and AWS CLI examples. Commands were tested on macOS with AWS CLI v2.31.33 and Terraform v1.11.4.

aws --version
aws-cli/2.31.33 Python/3.13.9 Darwin/25.0.0 exe/arm64


terraform --version
Terraform v1.11.4
on darwin_arm64

Additional implementation details are available in this AWS blog post for patching EC2 instances and this documentation on AWS CLI commands for Patch Manager.

Step 1: Define your baseline

Every patching strategy starts with a baseline—the rules that determine which patches get approved. You can use the provided AWS default baselines or create custom baselines tailored to your risk tolerance and operational requirements.

Option A: Using a Default Baseline

AWS provides sensible defaults for each operating system. Here's what the Ubuntu default looks like:

aws ssm get-patch-baseline --baseline-id "arn:aws:ssm:us-west-2:280605243866:patchbaseline/pb-0dcda0730ce35c5e6"
{
   "BaselineId": "arn:aws:ssm:us-west-2:280605243866:patchbaseline/pb-0dcda0730ce35c5e6",
   "Name": "AWS-UbuntuDefaultPatchBaseline",
   "OperatingSystem": "UBUNTU",
   "GlobalFilters": {
       "PatchFilters": [
           {
               "Key": "PRODUCT",
               "Values": [
                   "*"
               ]
           }
       ]
   },
   "ApprovalRules": {
       "PatchRules": [
           {
               "PatchFilterGroup": {
                   "PatchFilters": [
                       {
                           "Key": "PRIORITY",
                           "Values": [
                               "Required",
                               "Important",
                               "Standard",
                               "Optional",
                               "Extra"
                           ]
                       }
                   ]
               },
               "ComplianceLevel": "UNSPECIFIED",
               "ApproveAfterDays": 7,
               "EnableNonSecurity": false
           }
       ]
   },
   "ApprovedPatches": [],
   "ApprovedPatchesComplianceLevel": "UNSPECIFIED",
   "ApprovedPatchesEnableNonSecurity": false,
   "RejectedPatches": [],
   "RejectedPatchesAction": "ALLOW_AS_DEPENDENCY",
   "PatchGroups": [],
   "CreatedDate": "2018-05-03T19:25:48.416000-07:00",
   "ModifiedDate": "2018-05-03T19:25:48.416000-07:00",
   "Description": "Default Patch Baseline for Ubuntu Provided by AWS.",
   "Sources": []
}
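
The baseline ARN above is specific to one account and Region. If you don't have it handy, you can look up the default baseline for an operating system directly (UBUNTU shown here; substitute your OS):

# Return the ID of the default patch baseline registered for this OS in the current Region
aws ssm get-default-patch-baseline --operating-system UBUNTU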

Key settings to note in the default Ubuntu baseline:

  • Patches for all priority levels are approved for installation after 7 days
  • Non-security updates are disabled

This conservative approach gives you a week to test patches before they're automatically approved. That is reasonable for many production environments, but it must be weighed against an exploitation window that is increasingly close to zero.

Option B: Creating a Custom Baseline

For tighter control, create a custom baseline. This example separates critical patches from lower-priority patches, allowing the potential for separate deployment schedules. In this example, both groups of patches are deployed immediately in response to the closing exploit window for vulnerabilities.

Terraform
resource "aws_ssm_patch_baseline" "ec2" {
 name             = "averlon-ec2-patches"
 description      = "Baseline for EC2 Ubuntu patches"
 operating_system = "UBUNTU"
 approval_rule {
   approve_after_days = 0
   compliance_level   = "CRITICAL"


   # For more on patch filters, see:
   # - https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html
   # - https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchProperties.html
   patch_filter {
     key    = "PRODUCT"
     values = "Ubuntu22.04"
   }


   # AWS priorities for Ubuntu can be viewed via this command: 'aws ssm --region us-west-2 describe-patch-properties --operating-system UBUNTU --property PRIORITY'
   # Ubuntu's actual CVE priorities are different, and can be viewed here: https://ubuntu.com/security/cves
   # Sources like this (https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-predefined-and-custom-patch-baselines.html)
   # suggest an AWS ranking of: Required, Important, Standard, Optional, then Extra.
   patch_filter {
     key    = "PRIORITY"
     values = ["Required", "Important"]
   }
 }
 approval_rule {
   approve_after_days = 0
   compliance_level   = "MEDIUM"
   patch_filter {
     key    = "PRODUCT"
     values = "Ubuntu22.04"
   }
   patch_filter {
     key    = "PRIORITY"
     values = ["Standard", "Optional", "Extra"]
   }
 }
}
AWS CLI
aws ssm create-patch-baseline \
 --name "averlon-ec2-patches" \
 --description "Baseline for EC2 Ubuntu patches" \
 --operating-system UBUNTU \
 --approval-rules '{
   "PatchRules": [
     {
       "ApproveAfterDays": 0,
       "ComplianceLevel": "CRITICAL",
       "PatchFilterGroup": {
         "PatchFilters": [
           {
             "Key": "PRODUCT",
             "Values": ["Ubuntu22.04"]
           },
           {
             "Key": "PRIORITY",
             "Values": ["Required", "Important"]
           }
         ]
       }
     },
     {
       "ApproveAfterDays": 0,
       "ComplianceLevel": "MEDIUM",
       "PatchFilterGroup": {
         "PatchFilters": [
           {
             "Key": "PRODUCT",
             "Values": ["Ubuntu22.04"]
           },
           {
             "Key": "PRIORITY",
             "Values": ["Standard", "Optional", "Extra"]
           }
         ]
       }
     }
   ]
 }'
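
Step 2 references the ID of this baseline. If you created it with the CLI and didn't capture the ID from the response, one way to retrieve it is to look the baseline up by name prefix:

# Find the custom baseline by name prefix to retrieve its BaselineId
aws ssm describe-patch-baselines \
  --filters "Key=NAME_PREFIX,Values=averlon-ec2-patches"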

Step 2: Organize Target Instances into Patch Groups

With your baseline defined, the next step is organizing instances. Patch groups let you apply different baselines to different sets of instances—essential for staged rollouts or environment-specific policies.

Pro Tip: Use meaningful patch group names that reflect your environment structure (e.g., prod-web-servers, staging-databases). This makes it easier to manage multiple groups and understand which instances are affected by each baseline.

Terraform

resource "aws_ssm_patch_group" "ec2" {
 baseline_id = aws_ssm_patch_baseline.ec2.id
 patch_group = "averlon-ec2"
}

# In your aws_instance resources, add this tag:
# tags = {
#   PatchGroup = "averlon-ec2"
# } 

AWS CLI

aws ssm register-patch-baseline-for-patch-group \
 --baseline-id <BASELINE_ID_FROM_PRIOR_COMMAND> \
 --patch-group "averlon-ec2"
aws ec2 create-tags \
    --resources i-1234567890abcdef0 \
    --tags Key=PatchGroup,Value=averlon-ec2
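
To confirm the mapping took effect, you can list the patch group to baseline associations in your account:

# Show which baseline each patch group is registered against
aws ssm describe-patch-groups \
  --query 'Mappings[*].{Group:PatchGroup,Baseline:BaselineIdentity.BaselineName}' \
  --output table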

Step 3: Schedule Patching with Maintenance Windows

Automation is powerful, but uncoordinated patching can cause disruption. Maintenance windows give you control over when patches are applied and how long operations can run.

Creating the Maintenance Window

This example creates a daily window running at midnight UTC with a 2-hour duration and 1-hour cutoff for new tasks:

Terraform
resource "aws_ssm_maintenance_window" "ec2" {
 name     = "averlon-ec2"
 schedule = "cron(0 0 * * ? *)"
 duration = 2
 cutoff   = 1
}
AWS CLI
aws ssm create-maintenance-window \
 --name "averlon-ec2" \
 --schedule "cron(0 0 * * ? *)" \
 --duration 2 \
 --cutoff 1 \
 --no-allow-unassociated-targets

When defining your maintenance window, review the following scheduling considerations:

  • Choose low-traffic periods for production systems.
  • Account for time zones when using UTC.
  • Consider staggering windows for different environments (see the sketch after this list).
  • The cutoff (1 hour here) stops new task invocations from starting near the end of the window, giving in-flight operations time to finish before it closes.
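
As an illustration of staggering, the hypothetical schedules below use the six-field cron format that maintenance windows expect (minutes, hours, day-of-month, month, day-of-week, year):

# Staging: Sundays at 02:00 UTC
cron(0 2 ? * SUN *)

# Production: Tuesdays at 04:00 UTC, after staging has had two days to soak
cron(0 4 ? * TUE *)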

Registering the Targets for the Maintenance Window

This step identifies the instances to patch in the maintenance window by specifying the patch group tag.

Terraform
resource "aws_ssm_maintenance_window_target" "ec2" {
 window_id     = aws_ssm_maintenance_window.ec2.id
 name          = "averlon-ec2"
 resource_type = "INSTANCE"


 targets {
   key    = "tag:PatchGroup"
   values = "averlon-ec2"
 }
}
AWS CLI
aws ssm register-target-with-maintenance-window \
 --window-id <MAINTENANCE_WINDOW_ID_FROM_PRIOR_COMMAND> \
 --resource-type INSTANCE \
 --targets "Key=tag:PatchGroup,Values=averlon-ec2" \
 --name "averlon-ec2"

Creating the Patch Task

Finally, this step defines what actually happens during the window—in this case, running the AWS-RunPatchBaseline command to install patches and logging results to S3 and CloudWatch.

Terraform
resource "aws_ssm_maintenance_window_task" "ec2" {
 window_id       = aws_ssm_maintenance_window.ec2.id
 task_type       = "RUN_COMMAND"
 max_concurrency = 3
 max_errors      = 1
 priority        = 1
 task_arn        = "AWS-RunPatchBaseline"
 targets {
   key    = "WindowTargetIds"
   values = [aws_ssm_maintenance_window_target.ec2.id]
 }


 task_invocation_parameters {
   run_command_parameters {
     output_s3_bucket     = aws_s3_bucket.logs.id
     output_s3_key_prefix = "patch"
     timeout_seconds      = 600 # console default


     parameter {
       name   = "Operation"
       values = ["Install"]
     }


     cloudwatch_config {
       cloudwatch_log_group_name = "/aws/ssm-patch/averlon-logs-dev"
       cloudwatch_output_enabled = true
     }
   }
 }
}
AWS CLI
aws ssm register-task-with-maintenance-window \
 --window-id <MAINTENANCE_WINDOW_ID_FROM_PRIOR_COMMAND> \
 --targets "Key=WindowTargetIds,Values=<TARGET_ID_FROM_PRIOR_COMMAND>" \
 --task-arn "AWS-RunPatchBaseline" \
 --task-type "RUN_COMMAND" \
 --task-invocation-parameters '{
   "RunCommand": {
     "Parameters": {
       "Operation": ["Install"]
     },
     "OutputS3BucketName": "<S3_BUCKET_ID>",
     "OutputS3KeyPrefix": "patch",
     "TimeoutSeconds": 600,
     "CloudWatchOutputConfig": {
       "CloudWatchLogGroupName": "/aws/ssm-patch/averlon-logs-dev",
       "CloudWatchOutputEnabled": true
     }
   }
 }' \
 --max-concurrency 3 \
 --max-errors 1 \
 --priority 1 \
 --name "averlon-ec2"

Things to consider:

  • max_concurrency: setting to 3 patches three instances at a time—adjust based on your fleet size and risk tolerance.
  • max_errors: setting to 1 stops rolling the task out to additional instances once failures exceed that limit, preventing widespread issues.
  • Logging to both S3 and CloudWatch gives you durable storage and real-time monitoring (a CLI check on window executions is shown after this list).
  • The 600-second timeout (10 minutes) should accommodate most patching operations.
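
Once the window has run, you can review its execution history from the CLI in addition to the logs (using the window ID from earlier):

# List recent executions of the maintenance window and their status
aws ssm describe-maintenance-window-executions \
  --window-id <MAINTENANCE_WINDOW_ID_FROM_PRIOR_COMMAND>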

Operational Tips

Starting an impromptu targeted patch operation

While scheduled maintenance windows handle routine patching, sometimes you need to patch immediately, such as:

  • When responding to critical vulnerabilities with active exploitation (like those in CISA's KEV catalog).
  • Emergency security advisories from vendors.
  • Testing patch operations before scheduling them.

On-Demand Patching for Specific Instances

When you need to patch specific instances right now:

# patch specific instances with logs
aws ssm send-command --document-name "AWS-RunPatchBaseline" \
--document-version "1" \
--targets '[{"Key":"InstanceIds","Values":["i-1234567890abcdef0"]}]' \
--parameters '{"Operation":["Install"],"SnapshotId":[""],"InstallOverrideList":[""],"AssociationId":[""],"BaselineOverride":[""],"RebootOption":["RebootIfNeeded"]}' \
--timeout-seconds 600 \
--max-concurrency "50" \
--max-errors "0" \
--output-s3-bucket-name "averlon-ssm-logs-dev" \
--output-s3-key-prefix "patch" \
--cloud-watch-output-config '{"CloudWatchOutputEnabled":false}'
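
send-command returns a CommandId; you can use it to track progress and per-instance results while the operation runs:

# Check per-instance status and output for the command you just sent
aws ssm list-command-invocations \
  --command-id <COMMAND_ID_FROM_SEND_COMMAND_OUTPUT> \
  --details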

On-Demand Patching by Tag

To patch all instances in a patch group (or identified by other EC2 tag) immediately:

# patch instances that match a tag with logs
aws ssm send-command --document-name "AWS-RunPatchBaseline" \
--document-version "1" \
--targets '[{"Key":"tag:PatchGroup","Values":["averlon-ec2"]}]' \
--parameters '{"Operation":["Install"],"SnapshotId":[""],"InstallOverrideList":[""],"AssociationId":[""],"BaselineOverride":[""],"RebootOption":["RebootIfNeeded"]}' \
--timeout-seconds 600 \
--max-concurrency "50" \
--max-errors "0" \
--output-s3-bucket-name "averlon-ssm-logs-dev" \
--output-s3-key-prefix "patch" \
--cloud-watch-output-config '{"CloudWatchOutputEnabled":false}'

Immutability vs. In-Place Patching

Averlon generally encourages teams to adopt immutable infrastructure patterns for server workloads, where new images are baked and deployed instead of patching systems in place. Immutable models reduce uncertainty, simplify rollback, and make it easier to guarantee that patches have been correctly applied across environments.

That said, many organizations operate large VM fleets, complex vendor software, and stateful workloads that cannot yet move to an immutable model. For these systems, automated in-place patching through tools like AWS SSM Patch Manager remains essential to close the exploitation window quickly and consistently.

When Things Go Wrong: Troubleshooting Guide

Even well-configured patching operations can encounter issues. Here are some things to watch for:

TargetNotConnected Errors

Symptoms: Instances show as TargetNotConnected after patching triggers a reboot.

Root cause: The instance may have underlying issues preventing proper restart. Patching forces a reboot, exposing these problems.

Troubleshooting steps:

  1. Verify the instance can restart successfully outside of patching operations.
  2. Check system logs for kernel panics or boot failures.
  3. Review recent configuration changes that might prevent clean restarts.

Other Common Issues

Permissions errors: Verify the instance role has AmazonSSMManagedEC2InstanceDefaultPolicy or an equivalent policy attached

SSM Agent not responding: Ensure the agent is running and has network connectivity to SSM endpoints

Patch failures: Check the CloudWatch logs and S3 outputs specified in your maintenance window task for detailed error messages
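
For the permissions and agent issues above, a useful first check is to confirm the agent service is actually running on the instance; a minimal sketch (the service name varies by distribution and install method):

# Amazon Linux and most RPM/DEB installs
sudo systemctl status amazon-ssm-agent

# Ubuntu Server AMIs where the agent is installed as a snap
sudo systemctl status snap.amazon-ssm-agent.amazon-ssm-agent.service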

For comprehensive troubleshooting guidance, see the AWS troubleshooting documentation.

Limitations

OS & Package Manager Support

AWS SSM Patch Manager supports Linux, macOS, and Windows operating systems [10], and generally supports the native package managers on those operating systems. However, certain package managers, such as snap, are not supported.

  • For snap, while the loss of centralized management and compliance visibility is significant, you are not left entirely out in the cold – the snapd daemon does automatically check for updates 4 times daily by default [11].
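
If you rely on snapd's built-in refreshes for snap-managed packages, you can at least check and constrain when they happen so they roughly line up with your own maintenance windows (a sketch; the timer value is illustrative):

# Show the current refresh schedule and when the next refresh will run
snap refresh --time

# Constrain automatic refreshes to a nightly window
sudo snap set system refresh.timer=00:00-02:00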

See the full list of supported operating systems and supported package managers.

Tracking Compliance Across Your Fleet

Automation is valuable, but proving compliance is essential. SSM Patch Manager provides detailed reporting through both the console and CLI.

Figure 1: Tracking compliance across your fleet

Executive Summary View

To get a high-level overview of fleet-wide compliance:

aws ssm list-compliance-summaries --filters "Key=ComplianceType,Values=Patch"                                           
{
   "ComplianceSummaryItems": [
       {
           "ComplianceType": "FleetTotal",
           "CompliantSummary": {
               "CompliantCount": 1,
               "SeveritySummary": {
                   "CriticalCount": 1,
                   "HighCount": 0,
                   "MediumCount": 0,
                   "LowCount": 0,
                   "InformationalCount": 0,
                   "UnspecifiedCount": 0
               }
           },
           "NonCompliantSummary": {
               "NonCompliantCount": 1,
               "SeveritySummary": {
                   "CriticalCount": 0,
                   "HighCount": 0,
                   "MediumCount": 1,
                   "LowCount": 0,
                   "InformationalCount": 0,
                   "UnspecifiedCount": 0
               }
           }
       },
       {
           "ComplianceType": "Patch",
           "CompliantSummary": {
               "CompliantCount": 1,
               "SeveritySummary": {
                   "CriticalCount": 1,
                   "HighCount": 0,
                   "MediumCount": 0,
                   "LowCount": 0,
                   "InformationalCount": 0,
                   "UnspecifiedCount": 0
               }
           },
           "NonCompliantSummary": {
               "NonCompliantCount": 1,
               "SeveritySummary": {
                   "CriticalCount": 0,
                   "HighCount": 0,
                   "MediumCount": 1,
                   "LowCount": 0,
                   "InformationalCount": 0,
                   "UnspecifiedCount": 0
               }
           }
       }
   ]
}

In this sample output, we can see that 1 instance is fully compliant with critical patches, but 1 instance has a missing medium-severity patch.

Identifying Non-Compliant Instances

You can drill down to see exactly which instances need attention:

aws ssm list-resource-compliance-summaries --filters "Key=ComplianceType,Values=Patch" "Key=Status,Values=NON_COMPLIANT"
{
   "ResourceComplianceSummaryItems": [
       {
           "ComplianceType": "Patch",
           "ResourceType": "ManagedInstance",
           "ResourceId": "i-1234567890abcdef0",
           "Status": "NON_COMPLIANT",
           "OverallSeverity": "MEDIUM",
           "ExecutionSummary": {
               "ExecutionTime": "2025-11-25T16:00:18-08:00",
               "ExecutionId": "605acf03-5f2f-4daf-a808-532209e316ac",
               "ExecutionType": "Command"
           },
           "CompliantSummary": {
               "CompliantCount": 624,
               "SeveritySummary": {
                   "CriticalCount": 94,
                   "HighCount": 0,
                   "MediumCount": 148,
                   "LowCount": 0,
                   "InformationalCount": 0,
                   "UnspecifiedCount": 382
               }
           },
           "NonCompliantSummary": {
               "NonCompliantCount": 1,
               "SeveritySummary": {
                   "CriticalCount": 0,
                   "HighCount": 0,
                   "MediumCount": 1,
                   "LowCount": 0,
                   "InformationalCount": 0,
                   "UnspecifiedCount": 0
               }
           }
       }
   ]
}

This shows instance i-1234567890abcdef0 has 624 patches installed but is missing 1 medium-severity patch.
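
To see exactly which patches are missing on that instance, you can drill down one level further:

# List the individual patches reported as missing on the instance
aws ssm describe-instance-patches \
  --instance-id i-1234567890abcdef0 \
  --filters "Key=State,Values=Missing"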

Summarizing compliance by Patch Group

You can also summarize compliance by patch group to quickly assess overall health and spot trends.

for group in $(aws ssm describe-patch-groups --query 'Mappings[*].PatchGroup' --output text); do
   echo "Patch Group: $group"
   aws ssm describe-patch-group-state --patch-group "$group"
   echo "---"
done
Patch Group: averlon-ec2
{
   "Instances": 2,
   "InstancesWithInstalledPatches": 2,
   "InstancesWithInstalledOtherPatches": 2,
   "InstancesWithInstalledPendingRebootPatches": 0,
   "InstancesWithInstalledRejectedPatches": 0,
   "InstancesWithMissingPatches": 0,
   "InstancesWithFailedPatches": 1,
   "InstancesWithNotApplicablePatches": 2,
   "InstancesWithUnreportedNotApplicablePatches": 0,
   "InstancesWithCriticalNonCompliantPatches": 0,
   "InstancesWithSecurityNonCompliantPatches": 1,
   "InstancesWithOtherNonCompliantPatches": 0,
   "InstancesWithAvailableSecurityUpdates": 0
}

What SSM Patch Manager Doesn't Tell You

SSM Patch Manager automates the mechanics of patching and provides compliance visibility. But it can't answer the questions security teams need to answer every day:

  • Which patches should you deploy first based on real-world risk?
  • Which patches will address actually exploitable vulnerabilities in your environment?
  • How do these vulnerabilities connect to broader attack paths?
  • Did applied patches actually close the exploit path?

For a deeper look at why raw CVE counts create noise and overwhelm patch teams, see our breakdown of the challenge posed by massive CVE numbers.

SSM tells you what to patch. It doesn't tell you what to patch first or why—and it can't prove that the deployed fix has actually closed the real risk to your organization.

This is where intelligence layered on top of automation becomes critical.

How Averlon Transforms Patching from Compliance to Strategy

While automated patch management handles the mechanics, Averlon adds the intelligence layer that transforms patching from a compliance checkbox into strategic risk reduction:

  • Exploitability Analysis: Not all patches are equal. Averlon analyzes which vulnerabilities in your unpatched instances are actually exploitable in your environment. 
  • Attack Chain Mapping: See how unpatched vulnerabilities connect to broader attack paths across your cloud infrastructure. 
  • Automated Prioritization: Focus patching efforts on the instances and vulnerabilities that pose real exploitable risk, not just high CVSS scores. 
  • Closing the Remediation Gap: SSM Patch Manager tells you what needs patching. Averlon tells you what needs patching *first* and why—then proves that the exploit risk is closed.

For more on how attack chains shape real risk and remediation priorities, see our post on understanding attack chains and why they matter.

Sample workflow:

  1. SSM reports 500 missing patches across your fleet.
  2. Averlon identifies 12 that are actually exploitable given your environment configuration
  3. Of those 12, Averlon shows 3 are part of attack chains to high-value data.
  4. You patch those 3 first, immediately reducing real risk.
  5. Averlon validates that the patches actually closed the exploit paths.

From Configuration to Confidence

Setting up SSM Patch Manager is the foundation. But in a threat landscape where one in three vulnerabilities is exploited on or before disclosure day, automation alone isn't enough. The teams that stay ahead:

  • Patch continuously, not just on a schedule.
  • Prioritize based on real-world exploitability, not static severity scores lacking the unique context of your application.
  • Prove that deployed fixes have closed the real risk to their organization.

Ready to move beyond basic patch management?

Frequently Asked Questions

How does AWS SSM Patch Manager work?

It installs approved patches on EC2 instances based on patch baselines, patch groups, and scheduled maintenance windows. 

What is the difference between default and custom patch baselines?

Default baselines provide broad coverage with delayed approval. Custom baselines let teams control patch types, timing, and priorities. 

Can I trigger patching immediately instead of waiting for a maintenance window?

Yes. You can use the AWS CLI to run on-demand patch operations against specific instances or tags.

Do I need in-place patching if I prefer immutability?

Immutable images are preferred for many workloads, but VM-based and stateful systems often cannot adopt that approach. Automated in-place patching is still required for them.

References

[1] https://www.vulncheck.com/blog/state-of-exploitation-1h-2025/

[2] https://www.bitdefender.com/en-us/blog/hotforsecurity/equifax-has-bled-1-4-billion-from-2017-breach-so-far 

[3] https://www.infosecurity-magazine.com/news/equifax-has-spent-nearly-14bn-on-1/ 

[4] https://www.eset.com/us/about/newsroom/research/eset-research-russian-romcom-group-exploits-new-vulnerability-targets-companies-in-europe-and-canada/

[5] https://www.greenbone.net/en/blog/new-winrar-flaw-cve-2025-8088-exploited-in-social-engineering-attacks/ 

[6] https://www.microsoft.com/en-us/security/blog/2021/03/02/hafnium-targeting-exchange-servers/

[7] https://www.cnn.com/2021/03/10/tech/microsoft-exchange-hafnium-hack-explainer/index.html

[8] https://www.reuters.com/article/us-microsoft-hack-eba-idUSKBN2B01RP/

[9] https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager.html#patch-manager-definition-of-compliance

[10] https://docs.aws.amazon.com/systems-manager/latest/userguide/operating-systems-and-machine-types.html

[11] https://snapcraft.io/docs/managing-updates

Ready to Reduce Cloud Security Noise and Act Faster?

Discover the power of Averlon’s AI-driven insights. Identify and prioritize real threats faster and drive a swift, targeted response to regain control of your cloud. Shrink the time to resolution for critical risk by up to 90%.
