Updated - TWC AWS Secure Config Review Draft Report March2025

The document provides a severity summary of the AWS services reviewed, indicating the number of high, medium, low, and informational issues identified. It includes a compliance summary showing the review status of each AWS service, with a total of 240 checks assessed. It also outlines the scope of work for AWS services in the Asia Pacific (Mumbai) region, listing each service and its count of AWS entities.


TWC - AWS

DATE 3/7/2025

Severity Summary
Sr. No. Service High Medium Low Info
1 EC2 8 34 12 7
2 Relational Database Service 9 10 12 0
3 S3 11 4 4 4
4 Virtual Private Cloud 1 5 4 1
5 CloudFront 9 2 1 0
6 CloudWatch 1 0 0 0
7 ALB 2 5 1 1
8 IAM 8 12 8 4
9 EKS 2 5 3 0
10 ECS 2 0 8 0
11 Lambda 1 0 9 0
12 SQS 1 0 9 0
13 ElastiCache 1 1 8 0
14 WAF 0 0 10 0
Grand Total 56 78 89 17
Overall 240
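As a quick arithmetic check, the totals row can be recomputed from the per-service rows. The figures below are transcribed from the table above; the script itself is only illustrative:

```python
# Severity counts per service, transcribed from the summary table:
# (high, medium, low, info)
findings = {
    "EC2": (8, 34, 12, 7),
    "Relational Database Service": (9, 10, 12, 0),
    "S3": (11, 4, 4, 4),
    "Virtual Private Cloud": (1, 5, 4, 1),
    "CloudFront": (9, 2, 1, 0),
    "CloudWatch": (1, 0, 0, 0),
    "ALB": (2, 5, 1, 1),
    "IAM": (8, 12, 8, 4),
    "EKS": (2, 5, 3, 0),
    "ECS": (2, 0, 8, 0),
    "Lambda": (1, 0, 9, 0),
    "SQS": (1, 0, 9, 0),
    "ElastiCache": (1, 1, 8, 0),
    "WAF": (0, 0, 10, 0),
}

# Column-wise sums reproduce the totals row: 56 high, 78 medium, 89 low, 17 info.
totals = [sum(col) for col in zip(*findings.values())]
grand_total = sum(totals)  # 240
```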
TWC - AWS Secure Configuration Review Report

Compliance Summary

S# Service Compliant Not Compliant N/A Total Review Status Reference Link
1 EC2 32 7 22 61 Completed Report Link
2 Relational Database Service 22 3 6 31 Completed Report Link
3 S3 7 9 7 23 Completed Report Link
4 Virtual Private Cloud 3 6 2 11 Completed Report Link
5 CloudFront 4 4 4 12 Completed Report Link
6 CloudWatch 1 0 0 1 Completed Report Link
7 ALB 2 2 5 9 Completed Report Link
8 IAM 8 21 3 32 Completed Report Link
9 EKS 3 7 0 10 Completed Report Link
10 ECS 3 4 3 10 Completed Report Link
11 Lambda 0 0 10 10 Completed Report Link
12 SQS 0 0 10 10 Completed Report Link
13 ElastiCache 4 4 2 10 Completed Report Link
14 WAF 2 6 2 10 Completed Report Link
Grand Total 91 73 76 240
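The compliance summary lends itself to the same kind of sanity check. The sketch below recomputes the column totals and an overall compliance rate over the applicable (non-N/A) checks; the figures are transcribed from the table, while the rate metric is our own derivation, not part of the report:

```python
# (compliant, not_compliant, n_a) per service, transcribed from the table above
rows = {
    "EC2": (32, 7, 22), "RDS": (22, 3, 6), "S3": (7, 9, 7),
    "VPC": (3, 6, 2), "CloudFront": (4, 4, 4), "CloudWatch": (1, 0, 0),
    "ALB": (2, 2, 5), "IAM": (8, 21, 3), "EKS": (3, 7, 0),
    "ECS": (3, 4, 3), "Lambda": (0, 0, 10), "SQS": (0, 0, 10),
    "ElastiCache": (4, 4, 2), "WAF": (2, 6, 2),
}

compliant, not_compliant, n_a = (sum(col) for col in zip(*rows.values()))
total_checks = compliant + not_compliant + n_a  # 240

# Compliance rate over applicable checks (N/A excluded): 91 / 164, about 55.5 %
rate = compliant / (compliant + not_compliant)
```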
Scope of Work (Asia Pacific (Mumbai) Region)

Sr. No. Service Count of AWS Entities
1 EC2 (Elastic Compute Cloud) 25 Instances (19 Active)
2 Relational Database Service (RDS) 2 Instances
3 S3 (Simple Storage Service) 15 Buckets
4 Virtual Private Cloud (VPC) 3 VPCs
5 CloudFront 5 Distributions
6 CloudWatch N/A
7 ALB (Application Load Balancer) N/A
8 IAM (Identity and Access Management) 10 User Groups (36 Users)
9 EKS (Elastic Kubernetes Service) 2 Clusters
10 ECS (Elastic Container Service) 1 Cluster
11 Lambda 2 Functions
12 SQS (Simple Queue Service) N/A
13 ElastiCache N/A
14 WAF (Web Application Firewall) 1 Web ACL



Brief
EC2 provides resizable compute capacity in the cloud. It allows you to launch virtual machines (known as instances)
with customizable CPU, memory, storage, and networking capacity. You can scale your infrastructure up or down
based on your needs and only pay for what you use.
RDS is a managed database service that simplifies the setup, operation, and scaling of relational databases. It supports
various database engines such as MySQL, PostgreSQL, MariaDB, SQL Server, and Amazon Aurora. It offers features like
automated backups, patching, scaling, and high availability.
S3 provides highly scalable, durable, and low-cost object storage. It allows users to store any amount of data and
retrieve it from anywhere on the web. S3 is commonly used for backup, archiving, data sharing, and hosting static
websites. It integrates with a wide range of AWS services.
VPC lets you define a private network within AWS, including your own IP address range, subnets, route tables, and
network gateways. You can configure security settings, such as firewalls, and establish VPNs to extend your network
into the cloud.
CloudFront is a content delivery network (CDN) that speeds up the distribution of your content (e.g., static files, video,
and software) to end-users globally. CloudFront caches content in edge locations, reducing latency and improving
performance for users worldwide.
CloudWatch provides monitoring, logging, and alerting for AWS resources and applications. You can track metrics,
collect logs, and create custom alarms to monitor your infrastructure's health and performance in real-time, ensuring
that issues are identified and resolved quickly.
ALB automatically distributes incoming traffic across multiple targets such as EC2 instances, containers, and IP addresses
so that no single target is overwhelmed. It is one of several Elastic Load Balancing types, alongside the Network Load
Balancer and the legacy Classic Load Balancer.
IAM helps manage user identities and access to AWS resources. It enables you to create and manage AWS users,
groups, and roles, and assign specific permissions to control access. IAM allows you to implement the principle of
least privilege by enforcing fine-grained access control.
EKS is a fully managed service that enables running Kubernetes clusters in AWS. It takes care of the Kubernetes
control plane, including updates and scaling, allowing users to focus on deploying and managing containerized
applications without worrying about the infrastructure.
ECS is a container orchestration service that enables you to run and manage Docker containers on AWS. It integrates
with other AWS services such as EC2, IAM, and CloudWatch, allowing you to deploy and scale containerized
applications easily. ECS supports both EC2 and serverless computing with Fargate.
Lambda is a serverless compute service where you can run code in response to events without provisioning or
managing servers. It automatically scales by running code in response to triggers, such as changes in data or system
state, without requiring ongoing management.
SQS is a fully managed message queuing service that decouples microservices and distributed systems, ensuring
reliable communication between components. It allows for storing messages until they are processed, ensuring that
workloads can scale, be distributed, and fail gracefully.
ElastiCache is a managed caching service that speeds up application performance by caching frequently accessed data
in-memory using Redis or Memcached. It helps reduce latency and improve the throughput of applications by
reducing database load and enhancing read-heavy workloads.
WAF is a security service designed to protect web applications from common exploits and attacks such as SQL
injection and cross-site scripting (XSS). You can create custom rules to monitor and block unwanted traffic based on
IP, query strings, or geographic location.
EC2

HIGH 8
MEDIUM 34
LOW 12
INFO 7

Sr. No. Check Name Description


1 AWS AMI Encryption When dealing with production data that is
crucial to your business, it is highly
recommended to implement data
encryption in order to protect it from
attackers or unauthorized personnel. The
AMI encryption keys are using AES-256
algorithm and are entirely managed and
protected by the AWS key management
infrastructure through AWS Key
Management Service (KMS).
2 EC2 Instance Scheduled Events Monitoring EC2 scheduled events within
your AWS account will help you prevent unexpected downtime and data
loss, improving the reliability and availability of your AWS EC2 fleet.

3 EC2 Reserved Instance Payment Failed Reserved Instances represent a
good strategy to cut down on AWS EC2 costs, but to fully receive the
discount benefit you need to make sure that all your EC2 reservation
purchases have been successfully completed.
4 EC2 Reserved Instance Payment Pending EC2 Reserved Instances represent
an efficient strategy to cut down on AWS costs. However, to receive the
billing discount benefit promoted by Amazon, you need to make sure that
all your EC2 reservation purchases have been fully processed (i.e.
successfully confirmed by AWS) and none of them remains in the
"payment-pending" state.

5 Idle EC2 Instance Idle instances represent a good candidate to reduce
your monthly AWS costs and avoid accumulating unnecessary EC2 usage
charges.
6 Publicly Shared AMI When you make your AMIs publicly
accessible, these become available in the
Community AMIs where everyone with an
AWS account can use them to launch EC2
instances. Most of the time your AMIs will
contain snapshots of your applications
(including their data), therefore exposing
your snapshots in this manner is not
advised.

7 Security Group Port Range Opening a range of ports inside your EC2
security groups is not a good practice because it allows attackers to
use port scanners and other probing techniques to identify services
running on your instances and exploit their vulnerabilities.
8 Unused EC2 Reserved When an AWS EC2 Reserved Instance is not
Instances used (i.e. does not have a running
corresponding EC2 instance) the investment
made is not valorized. For example, if you
reserve a c4.large EC2 instance with default
tenancy within US East (N. Virginia) region
but for some reason you don't provision an
instance with the same type and tenancy, in
the same region of the same AWS account
or in any other linked AWS accounts
available within your AWS Organization, the
specified RI is considered unused and you
end up paying for a service that you don't
use.
9 Check for vCPU-Based Monitoring vCPU-based limits for your On-
EC2 Instance Limit Demand EC2 instances will help you to
manage better your AWS compute power
and avoid resource starvation in case your
applications need to scale up or in case you
just need to provision multiple EC2
instances in a short period of time.
10 Approved/Golden An approved/golden AMI is a base EC2
AMIs machine image that contains a pre-
configured OS and a well-defined stack of
server software fully configured to run your
application. Using golden AMIs to create
new EC2 instances within your AWS
environment brings major benefits such as
fast and stable application deployment and
scaling, secure application stack upgrades
and versioning.

11 Default Security Groups In Use When an EC2 instance is launched
without specifying a custom security group, the default security group
is automatically assigned to the instance. Because a lot of instances
are launched in this way, if the default security group is configured
to allow unrestricted access, it can increase opportunities for
malicious activity such as hacking, brute-force attacks or even
denial-of-service (DoS) attacks.
12 EC2 Desired Instance Setting limits for the type(s) of EC2
Type instances provisioned in your AWS account
will help you to manage better your cloud
compute power, address internal
compliance requirements and prevent
unexpected charges on your AWS bill.

13 EC2 Instance Counts Monitoring and setting limits for the maximum
number of EC2 instances provisioned in your AWS account will help you
better manage your compute power and prevent unexpected charges on your
AWS bill in case of auto-scaling misconfiguration or large DDoS attacks.
14 EC2 Instance Using the current (latest) generation of EC2
Generation instances instead of the previous
generation has multiple advantages such as
better hardware performance (faster CPUs,
increased memory and network
throughput), better virtualization
technology (HVM) and lower costs. If you
are currently using any EC2 instances from
the previous generation, we highly
recommend upgrading these instances with
their latest generation equivalents.

15 EC2 Instance In VPC Launching your EC2 instances on the EC2-VPC
platform instead of EC2-Classic brings several advantages such as
better networking infrastructure (network isolation, Elastic Network
Interfaces, subnets), much more flexible security controls (network
ACLs, security group outbound/egress filtering), access to newer and
more powerful instance types (C4, M4, T2, etc.) and the capability to
run instances on single-tenant hardware.
16 EC2 Instance Tenancy Using the right tenancy model for your EC2
instances should reduce the concerns
around security at the instance hypervisor
level and promote better compliance.

17 EC2 Instance Termination Protection You can delete your instance
when you no longer need it; this is referred to as terminating your
instance. You can't connect to or start an instance after you've
terminated it.
18 EC2-Classic Elastic IP Monitoring your EC2-Classic Elastic IP (EIP)
Address Limit limits will help you avoid public IP resources
starvation in case you need to expand
rapidly your AWS EC2-Classic infrastructure.

19 EC2-VPC Elastic IP Address Limit Monitoring your Elastic IP (EIP)
limits will help you avoid public IP resource starvation in case you
need to expand your AWS EC2-VPC infrastructure quickly.
20 Instance In Auto Every EC2 instance should be launched
Scaling Group inside an AWS Auto Scaling Group. To
achieve zero downtime, Cloud AWS
recommends attaching an Elastic Load
Balancer (ELB) to the Auto Scaling Group
(ASG) in order to use ELB health checks in
combination with the ASG to identify
unhealthy instances and cycle them out
automatically.

21 Reserved Instance Lease Expiration In The Next 30 Days With Reserved
Instances (RIs) you can optimize your Amazon EC2 costs based on your
expected usage. Since RIs are not renewed automatically, purchasing
another reserved EC2 instance before expiration will guarantee billing
at a discounted hourly rate.
22 Security Group Using a large number of EC2 security groups
Excessive Counts can increase opportunities for malicious
activity as creating and managing multiple
security groups can increase the risk of
accidentally allowing unrestricted access.

23 Security Group RFC 1918 Using RFC 1918 CIDRs within your EC2
security groups to allow an entire private network to access EC2
instances implements overly permissive access control, so the security
group configuration does not adhere to security best practices.
24 Unrestricted CIFS Allowing unrestricted CIFS access can
Access increase opportunities for malicious activity
such as man-in-the-middle attacks (MITM),
Denial of Service (DoS) attacks or the
Windows Null Session Exploit.
25 Unrestricted DNS Access Allowing unrestricted DNS access can
increase opportunities for malicious activity such as Denial of Service
(DoS) attacks or Distributed Denial of Service (DDoS) attacks.
26 Unrestricted Allowing unrestricted Elasticsearch access
Elasticsearch Access can increase opportunities for malicious
activity such as hacking, denial-of-service
(DoS) attacks and loss of data.
27 Unrestricted FTP Allowing unrestricted FTP access can
Access increase opportunities for malicious activity
such as brute-force attacks, FTP bounce
attacks, spoofing attacks and packet
capture.
28 Unrestricted ICMP Allowing unrestricted ICMP access can
Access increase opportunities for malicious activity
such as denial-of-service (DoS) attacks,
Smurf and Fraggle attacks.
29 Unrestricted Inbound Access on Uncommon Ports Allowing unrestricted
(0.0.0.0/0 or ::/0) inbound/ingress access to uncommon ports can
increase opportunities for malicious activity such as hacking, data
loss and multiple types of attacks (brute-force attacks, Denial of
Service (DoS) attacks, etc.).
30 Unrestricted MongoDB Allowing unrestricted MongoDB Database
Access access can increase opportunities for
malicious activity such as hacking, denial-of-
service (DoS) attacks and loss of data.
31 Unrestricted MsSQL Allowing unrestricted MSSQL access can
Access increase opportunities for malicious activity
such as hacking, denial-of-service (DoS)
attacks and loss of data.
32 Unrestricted MySQL Allowing unrestricted MySQL access can
Access increase opportunities for malicious activity
such as hacking, denial-of-service (DoS)
attacks and loss of data.
33 Unrestricted NetBIOS Allowing unrestricted NetBIOS access can
Access increase opportunities for malicious activity
such as man-in-the-middle attacks (MITM),
Denial of Service (DoS) attacks or BadTunnel
exploits.
34 Unrestricted Oracle Allowing unrestricted Oracle Database
Access access can increase opportunities for
malicious activity such as hacking, denial-of-
service (DoS) attacks and loss of data.
35 Unrestricted Outbound Access on All Ports Allowing unrestricted
(0.0.0.0/0 or ::/0) outbound/egress access can increase opportunities
for malicious activity such as Denial of Service (DoS) attacks or
Distributed Denial of Service (DDoS) attacks.
36 Unrestricted Allowing unrestricted PostgreSQL Database
PostgreSQL Access access can increase opportunities for
malicious activity such as hacking, denial-of-
service (DoS) attacks and loss of data.
37 Unrestricted RDP Allowing unrestricted RDP access can
Access increase opportunities for malicious activity
such as hacking, man-in-the-middle attacks
(MITM) and Pass-the-Hash (PtH) attacks.
38 Unrestricted RPC Allowing unrestricted RPC access can
Access increase opportunities for malicious activity
such as hacking (backdoor command shell),
denial-of-service (DoS) attacks and loss of
data.
39 Unrestricted SMTP Allowing unrestricted SMTP access can
Access increase opportunities for malicious activity
such as hacking, spamming, Shellshock
attacks and Denial-of-Service (DoS) attacks.
40 Unrestricted SSH Allowing unrestricted SSH access can
Access increase opportunities for malicious activity
such as hacking, man-in-the-middle attacks
(MITM) and brute-force attacks.
41 Unrestricted Telnet Allowing unrestricted Telnet access can
Access increase opportunities for malicious activity
such as IP address spoofing, man-in-the-
middle attacks (MITM) and brute-force
attacks.
42 Unused AWS EC2 Key Removing unused SSH key pairs can
Pairs significantly reduce the risk of unauthorized
access to your AWS EC2 instances as these
key pairs can be reassociated at any time,
providing access (usually by mistake) to the
wrong users. Ideally, you will want to
restrict access to your EC2 resources for all
individuals who leave your organization,
department or project that still possess the
private key from the SSH key pair used.

43 AMI Naming Conventions Naming (tagging) your AWS AMIs logically and
consistently has several advantages such as providing additional
information about the image location and usage, promoting consistency
within the selected environment, quickly distinguishing similar
resources from one another, avoiding naming collisions, improving
clarity in cases of potential ambiguity and enhancing the aesthetic and
professional appearance.
44 Default Security Group Because a lot of AWS users have the
Unrestricted tendency to attach the default security
group to their EC2 instances during the
launch process, any default security groups
configured to allow unrestricted access can
increase opportunities for malicious activity
such as hacking, denial-of-service attacks or
brute-force attacks.

45 EC2 AMI Too Old Using up-to-date AMIs to launch your EC2
instances brings major benefits to your AWS
application stack, maintaining your EC2
deployments secure and reliable.
46 EC2 Instance Detailed Monitoring With detailed monitoring enabled,
you can better manage your EC2 resources. For example, you can upgrade
or downgrade the instance type faster based on its workload, spot
trends that you might miss with basic monitoring, and create CloudWatch
alarms with 1-minute periods so you are notified earlier instead of
waiting for a 5-minute period.

47 EC2 Instance Security Group Rules Counts Applying a large number of
security group rules to an EC2 instance can impact its network
performance and increase the latency when accessing the instance.
48 EC2 Instance Too Old Stopping and relaunching your old EC2
instances will reallocate them to different
and possibly more reliable underlying
hardware (host machine).

49 App-Tier EC2 Instance Using IAM Roles over IAM Access Keys to
Using IAM Roles sign AWS API requests has multiple
benefits. For example, once enabled, you or
your administrators don't have to manage
credentials anymore as the credentials
provided by the IAM roles are temporary
and rotated automatically behind the
scenes.
50 Security Group Name When a new security group is created, its
Prefixed With 'launch- default name value will be prefixed with
wizard' "launch-wizard", unless specified otherwise.
The problem with this security group is that
it comes with the default configuration
which allows inbound/ingress traffic on port
22 from any source (i.e. 0.0.0.0/0). Because
a lot of EC2 instances are launched using a
security group like this, it can increase
opportunities for malicious activity such as
hacking, brute-force attacks or even Denial-
of-Service (DoS) attacks.

51 Security Group Rules Counts Defining a large number of rules for a
security group can increase the latency and impact the performance of
the EC2 instances associated with the security group.
52 Associated Elastic IP Addresses Amazon Web Services enforces a small
hourly charge when an Elastic IP (EIP) address within your account is
not associated with a running EC2 instance or an Elastic Network
Interface (ENI). Cloud AWS recommends releasing any EIPs that are no
longer needed to reduce your AWS monthly costs.

53 Unused AMI The AMIs created in your AWS account add charges to your
monthly bill regardless of whether they are being used. Many AWS
customers deregister their images but forget to delete the AMI
snapshots, and therefore continue to incur storage costs. Cloud AWS
recommends implementing the two-step cleanup process shown in the
Remediation/Resolution section in order to avoid any unexpected charges
on your AWS bill.
54 Unused Elastic As good practice, unused (detached)
Network Interfaces Amazon Elastic Network Interfaces should
be removed from your account because
keeping a lot of unused ENIs can exhaust
the resource limit and eventually prevent
the launching of new EC2 instances.

55 Descriptions for Security Group Rules With security group rule
descriptions, you gain more insight into the configuration of your
firewall(s). You can record the purpose of the rule and the identity of
the IP address next to the rule entry so it can be used for security
group management (e.g. updating source/destination IP addresses,
removing obsolete rules, etc.) and auditing (internal and external,
compliance and forensic audits). As an admin, you should know who has
access (and why) to your instances and your applications without having
to ask for the details every time. Rule descriptions are visible to AWS
Support as well, which can help resolve your EC2-related issues more
quickly.
56 EC2 Instance Since dedicated instances are physically
Dedicated Tenancy isolated at the host hardware level from
instances provisioned in other AWS
accounts, these are more expensive than
the ones running on shared (default)
environment.

57 EC2 Instance Naming Conventions Naming (tagging) your EC2 instances
logically and consistently has several advantages such as providing
additional information about the instance location and usage, promoting
consistency within the selected environment, quickly distinguishing
similar resources from one another, improving clarity in cases of
potential ambiguity and classifying them accurately as compute
resources for easy management and billing purposes.
58 Enable AWS EC2 Your applications can take tens of minutes
Hibernation to preload or warm up when relying on
caches and other RAM memory-centric
components, and this service delay can
force you to over-provision in case you
need incremental compute capacity very
quickly. With EC2 hibernation enabled, you
can maintain your Amazon EC2 instances in
a "pre-warmed" state so these can get to a
productive state faster.
59 Reserved Instance Lease Expiration In The Next 7 Days With Reserved
Instances (RIs) you can optimize your Amazon EC2 costs based on your
expected usage. Since RIs are not renewed automatically, purchasing
another reserved EC2 instance before expiration will guarantee billing
at a discounted hourly rate.
60 Unrestricted HTTP Allowing unrestricted HTTP access can
Access increase opportunities for malicious activity
such as hacking, denial-of-service (DoS)
attacks and loss of data.
61 Unrestricted HTTPS Allowing unrestricted HTTPS access can
Access increase opportunities for malicious activity
such as hacking, denial-of-service (DoS)
attacks and loss of data.
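Many of the "Unrestricted … Access" checks above reduce to one test: does an ingress rule expose a sensitive port to 0.0.0.0/0 or ::/0? A minimal sketch of that logic follows; the rule dictionaries mirror the shape of the `IpPermissions` entries returned by the EC2 `DescribeSecurityGroups` API, but the helper itself and its port list are our own illustration, not part of the report's tooling:

```python
# Illustrative subset of the ports behind the "Unrestricted ... Access" checks.
SENSITIVE_PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 25: "SMTP", 53: "DNS",
                   445: "CIFS", 1433: "MsSQL", 1521: "Oracle", 3306: "MySQL",
                   3389: "RDP", 5432: "PostgreSQL", 9200: "Elasticsearch",
                   27017: "MongoDB"}
ANY_SOURCE = {"0.0.0.0/0", "::/0"}

def unrestricted_findings(ip_permissions):
    """Flag ingress rules that expose a sensitive port to the whole internet."""
    findings = []
    for rule in ip_permissions:
        # Collect IPv4 and IPv6 source CIDRs for this rule.
        sources = [r.get("CidrIp") for r in rule.get("IpRanges", [])]
        sources += [r.get("CidrIpv6") for r in rule.get("Ipv6Ranges", [])]
        if not ANY_SOURCE.intersection(filter(None, sources)):
            continue  # rule is not open to the world
        lo = rule.get("FromPort", 0)
        hi = rule.get("ToPort", 65535)
        for port, service in SENSITIVE_PORTS.items():
            if lo <= port <= hi:
                findings.append(f"Unrestricted {service} Access (port {port})")
    return findings

# Example: a rule opening SSH to the world is flagged.
rules = [{"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]
print(unrestricted_findings(rules))  # ['Unrestricted SSH Access (port 22)']
```

The same helper also covers the "Security Group Port Range" check: a wide FromPort/ToPort range open to the world matches every sensitive port inside it.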
Category Risk Level Impact
Data protection Any authenticated user with malicious
intent who has access to the snapshots can
replicate data and applications which will
lead to data leakage. This can also lead to a
planned attack on the running instances by
the attacker.

High
Availability The check addresses a security concern
related to misconfigurations or
inefficiencies.

High

Cost Reduction The check addresses a security concern related to misconfigurations or inefficiencies.

High
Cost Reduction The check addresses a security concern
related to misconfigurations or
inefficiencies.

High

Cost Reduction The check addresses a security concern related to misconfigurations or inefficiencies.

High
User access control The check addresses a security concern
related to misconfigurations or
inefficiencies.

High

Secure Network access The check addresses a security concern related to misconfigurations or inefficiencies.

High
Cost Reduction The check addresses a security concern
related to misconfigurations or
inefficiencies.

High
Performance Improvement The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Performance Improvement The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium

Secure Network access The check addresses a security concern related to misconfigurations or inefficiencies.

Medium
Auditing The check addresses a security concern related to misconfigurations or inefficiencies.

Medium

Cost Reduction The check addresses a security concern related to misconfigurations or inefficiencies.

Medium
Performance Improvement The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium

Performance Improvement The check addresses a security concern related to misconfigurations or inefficiencies.

Medium
Cost Reduction

Medium

Availability Without this feature enabled, there is no protection against accidental termination of an EC2 instance.

Medium
Availability The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium

Availability The check addresses a security concern related to misconfigurations or inefficiencies.

Medium
Performance Improvement The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium

Cost Reduction The check addresses a security concern related to misconfigurations or inefficiencies.

Medium
Resource Management The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium

Auditing Any authenticated user with malicious intent who holds a private IP address within the allowed range will be able to access EC2 instances from which he or she should be restricted.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access Any authorized user can visit malicious sites, which can lead to the EC2 instance being compromised or malware-laden files being downloaded. An attacker can even perform a DoS attack from the instance. Any authorized user with malicious intent can also open network connections to the outside environment to exfiltrate sensitive data.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
User access control The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium

Auditing The check addresses a security concern related to misconfigurations or inefficiencies.

Low
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Low

Auditing The check addresses a security concern related to misconfigurations or inefficiencies.

Low
Resource Management Detailed monitoring gives a more granular level of monitoring detail. The organization will also get aggregated data across groups of similar instances.

Low

Performance Improvement The check addresses a security concern related to misconfigurations or inefficiencies.

Low
Performance Improvement As the underlying machine running an EC2 instance ages, its performance deteriorates.

Low

Secure Authentication Without this practice, a fixed set of keys remains in use, which makes them vulnerable to public exposure.

Low
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Low

Performance Improvement The check addresses a security concern related to misconfigurations or inefficiencies.

Low
Cost Reduction The check addresses a security concern
related to misconfigurations or
inefficiencies.

Low

Cost Reduction The check addresses a security concern related to misconfigurations or inefficiencies.

Low
Performance Improvement The check addresses a security concern
related to misconfigurations or
inefficiencies.

Low

Auditing The check addresses a security concern related to misconfigurations or inefficiencies.

Info
Cost Reduction The check addresses a security concern related to misconfigurations or inefficiencies.

Info

Auditing The check addresses a security concern related to misconfigurations or inefficiencies.

Info
Performance Improvement The check addresses a security concern
related to misconfigurations or
inefficiencies.

Info
Cost Reduction The check addresses a security concern
related to misconfigurations or
inefficiencies.

Info
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Info
Secure Network access The check addresses a security concern
related to misconfigurations or
inefficiencies.

Info
Navigation Path/ Location
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under IMAGES section, choose AMIs.

4) Select the image that you want to examine.

5) Select the Details tab from the dashboard bottom panel and copy
the EBS snapshot ID (e.g. snap-341f42cf1191edc6) available as value
for the Block Devices attribute.

6) In the left navigation panel, under ELASTIC BLOCK STORE section,


choose Snapshots.

7) Click inside the attributes filter box located under the dashboard
top menu and select Snapshot ID from the dropdown list.

8) Paste the ID copied at step no. 5 into the attributes filter box as the
Snapshot ID input value and press Enter.

9) Select the EBS snapshot returned as result, choose Description tab


from the dashboard bottom panel and check the Encrypted attribute
value available for the selected snapshot. Since the AWS AMIs are
backed by EBS snapshots we can use the snapshots configuration
details to get the encryption status of the associated AMIs. If the
Encrypted attribute value is set to Not Encrypted, the selected
Amazon Machine Image is not encrypted, therefore your EBS data-at-
rest is not protected from unauthorized access.

10) Repeat steps no. 4 – 9 to identify any other unencrypted AMIs


available in the current region.

11) Change the AWS region from the navigation bar and repeat the
entire process for other regions.
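The encryption check above can also be approximated programmatically. The sketch below assumes dictionaries shaped like the EC2 DescribeSnapshots API response; the snapshot IDs and function name are illustrative, not part of the original procedure.

```python
def find_unencrypted_snapshots(snapshots):
    # Each entry mirrors the EC2 DescribeSnapshots response shape:
    # a SnapshotId plus the Encrypted boolean that the console steps
    # above inspect on the snapshot's Description tab.
    return [s["SnapshotId"] for s in snapshots if not s.get("Encrypted", False)]

# Illustrative sample data; these snapshot IDs are made up.
sample_snapshots = [
    {"SnapshotId": "snap-0aaa", "Encrypted": True},
    {"SnapshotId": "snap-0bbb", "Encrypted": False},
]

print(find_unencrypted_snapshots(sample_snapshots))
```

Any snapshot ID returned corresponds to an AMI whose EBS data-at-rest is not protected.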
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) On the EC2 dashboard main page, verify the Scheduled Events
section for any EC2 instances that have scheduled events assigned,
available in the current AWS region. If the Scheduled Events status
is set to "No events", there are no EC2 instances scheduled for
retirement/maintenance within the current region; otherwise, the
dashboard will display the number of EC2 instances that have
scheduled events assigned. If the Scheduled Events status displays
one or more instances, click on the status link to access the Events
page and identify the type of the scheduled event for each EC2
instance, listed in the Event Type column.

4) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
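The same scheduled-events check can be sketched in code. The input below assumes the shape of the EC2 DescribeInstanceStatus API response; instance IDs are illustrative.

```python
def instances_with_scheduled_events(statuses):
    # `statuses` mirrors the InstanceStatuses list from the EC2
    # DescribeInstanceStatus API; an empty Events list corresponds to
    # the "No events" status on the dashboard.
    return {
        s["InstanceId"]: [e["Code"] for e in s["Events"]]
        for s in statuses
        if s.get("Events")
    }

# Illustrative sample data shaped like a DescribeInstanceStatus response.
sample_statuses = [
    {"InstanceId": "i-0aaa", "Events": []},
    {"InstanceId": "i-0bbb", "Events": [{"Code": "instance-retirement"}]},
]

print(instances_with_scheduled_events(sample_statuses))
```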

1) Sign in to the AWS Management Console.


2) Navigate to EC2 dashboard at
https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under INSTANCES section, choose


Reserved Instances.

4) On EC2 Reserved Instances listing page, click inside the attributes


filter box located under the dashboard top menu, choose State
parameter from the dropdown list and select Payment Failed option.
This filtering method
will help you to determine if there are any failed EC2 reservation
purchases available in the current AWS region. If one or more EC2 RIs
matching the filter criteria are found, the purchase process for the
returned Reserved Instance(s) has failed, therefore you need to retry
your failed RI(s) payment by contacting AWS Support Centre (see
Remediation/Resolution section for more details).

5) Change the AWS region from the navigation bar and repeat the
audit process for the other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the navigation panel, under INSTANCES section, choose
Reserved Instances.
4) On the EC2 Reserved Instances page, click inside the attributes
filter box located under the dashboard top menu, choose State
parameter from the dropdown list and select Payment Pending
option. This filtering method
will help you to determine if there are any incomplete EC2


reservation purchases available in the current AWS region. If one or
more EC2 RIs matching the filter criteria are found, the purchase
payment for the returned Reserved Instance(s) was not fully
processed, therefore you need to retry your RI(s) purchase payment
by contacting AWS Support Centre (For more information see
Remediation/Resolution section).
5) Change the AWS region from the navigation bar and repeat the
audit process for the other regions.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under INSTANCES section, choose


Instances.

4) Select the EC2 instance that you want to examine.

5) Select the Monitoring tab from the dashboard bottom panel.

6) Within the CloudWatch metrics section, perform the following


actions:
Click on the CPU Utilization (Percent) usage graph thumbnail to
open the instance CPU usage details box. Inside the CloudWatch
Monitoring Details dialog box, set the following parameters:
From the Statistic dropdown list, select Average.
From the Time Range list, select Last 1 Week.
From the Period dropdown list, select 1 Hour.
Once the monitoring data is loaded, verify the instance CPU usage
for the last 7 days. If the average usage (percent) has been less than
2%, the selected EC2 instance qualifies as a candidate for an idle
instance.


Click Close to return to the dashboard.
Click on the Network In (Bytes) usage graph thumbnail to open the
instance network usage details box. Inside the CloudWatch
Monitoring Details dialog box, set the following parameters:

From the Statistic dropdown list, select Average.


From the Time Range list, select Last 1 Week.
From the Period dropdown list, select 1 Hour.

Once the monitoring data is loaded, verify the incoming network


traffic for the last 7 days. If the average traffic has been less than 5
MB, the selected EC2 instance qualifies as a candidate for an idle
instance.
Click Close to exit.
Click on the Network Out (Bytes) usage graph thumbnail and repeat
the same verification for the outgoing network traffic.
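The idle-instance heuristic applied in the steps above can be expressed as a small function. This is a minimal sketch: the 2% CPU and 5 MB network thresholds come from the procedure, while the function and parameter names are illustrative.

```python
def is_idle_candidate(cpu_percent_avgs, net_in_bytes_avgs,
                      cpu_threshold=2.0, net_threshold=5 * 1024 * 1024):
    # Apply the idle-instance heuristic from the steps above: average
    # CPU below 2% and average inbound network traffic below 5 MB
    # over the 7-day observation window.
    avg_cpu = sum(cpu_percent_avgs) / len(cpu_percent_avgs)
    avg_net = sum(net_in_bytes_avgs) / len(net_in_bytes_avgs)
    return avg_cpu < cpu_threshold and avg_net < net_threshold

# Hourly averages over the window, e.g. pulled from CloudWatch.
print(is_idle_candidate([1.0, 1.5], [1024, 2048]))  # idle candidate
print(is_idle_candidate([50.0], [1024]))            # busy instance
```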
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under IMAGES section, choose AMIs.

4) Select the image that you want to examine.

5) Select the Permissions tab from the dashboard bottom panel and
check the AMI current launch permissions. If the selected image is
publicly accessible, the EC2 dashboard will display the following
status: "This image is currently Public.".
6) Repeat steps no. 4 and 5 to verify the launch permissions for the
rest of the AMIs available in the current region.

7) Change the AWS region from the navigation bar and repeat the
audit process for the other regions.
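The public-AMI check above maps directly onto the `Public` flag in the image description. A minimal sketch assuming DescribeImages-style dictionaries (the AMI IDs are illustrative):

```python
def public_amis(images):
    # `images` mirrors the Images list from the EC2 DescribeImages API;
    # Public=True corresponds to the "This image is currently Public."
    # status shown on the Permissions tab.
    return [img["ImageId"] for img in images if img.get("Public")]

# Illustrative sample data shaped like a DescribeImages response.
sample_images = [
    {"ImageId": "ami-0aaa", "Public": False},
    {"ImageId": "ami-0bbb", "Public": True},
]

print(public_amis(sample_images))
```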

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under NETWORK & SECURITY section,


choose Security Groups.
4) Select the security group that you want to examine.

5) Select the Inbound tab from the dashboard bottom panel.

6) Verify the value available in the Port Range column for any existing
inbound/ingress rules to identify if there are ranges of ports (e.g. 0 –
65535, 80 – 8800, 1101 – 32800) currently defined. If one or more
inbound rules are using range of ports to allow traffic, the selected
security group is not secure and does not adhere to AWS security best
practices.

7) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
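Detecting rules that open a span of ports, as the steps above describe, reduces to comparing `FromPort` and `ToPort`. A minimal sketch assuming the IpPermissions shape from the EC2 DescribeSecurityGroups API (sample rules are illustrative):

```python
def rules_opening_port_ranges(ip_permissions):
    # A rule whose ToPort exceeds its FromPort opens a range of ports
    # rather than a single port, which the procedure above flags as
    # not adhering to AWS security best practices.
    flagged = []
    for perm in ip_permissions:
        from_port, to_port = perm.get("FromPort"), perm.get("ToPort")
        if from_port is not None and to_port is not None and to_port > from_port:
            flagged.append((from_port, to_port))
    return flagged

# Illustrative sample rules shaped like IpPermissions entries.
sample_rules = [
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443},
    {"IpProtocol": "tcp", "FromPort": 0, "ToPort": 65535},
]

print(rules_opening_port_ranges(sample_rules))
```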
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under INSTANCES section, choose
Reserved Instances.
4) Select the active Reserved Instance (RI) that you want to examine.

5) Select the Details tab from the dashboard bottom panel and copy
the following attributes values: Instance Type, Platform, Tenancy and
Availability Zone (if applicable).
6) Within the same AWS region, in the navigation panel, under
INSTANCES section, choose Instances.

7) On the EC2 dashboard, click inside the attributes filter box located
under the dashboard top menu, choose Instance Type parameter
from the dropdown list, paste the instance type value copied at step
no. 5 and press Enter. Repeat this step for Platform, Tenancy and
Availability Zone parameters using the values copied at step no. 5. To
search for active EC2 instances only, choose Instance State then select
Running from the dropdown list. This filtering method
will help you to determine if there are any EC2 instances that match
the selected RI criteria, available in the current AWS region. If no EC2
instances matching your filter criteria are found, the selected
Reserved Instance does not have a corresponding instance running
within the current region, therefore the purchased RI is not being
used.

8) If you are using Consolidated Billing and the current AWS account
is member of an AWS Organization, access the Instances page on
each linked account, using the same region, and repeat step no. 7 to
check for any corresponding EC2 instance.

9) Repeat steps no. 4 - 8 for other EC2 Reserved Instances (RIs)


available in the current region.

10) Change the AWS region from the navigation bar and repeat the
audit process for the other regions.
1) Sign in to AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the navigation panel, under INSTANCES, click Instances.

4) Click inside the EC2 attributes filter box located under the
dashboard top menu, choose Instance Type from the dropdown list
and select one of the instance types available in the list. This filtering
method will help you to determine how many On-Demand instances
are currently provisioned for the selected instance type. Repeat this
step for the rest of the instance types available within the current
AWS region.
5) In the left navigation panel, under Reports section, select Limits to
access the page with the vCPU-based instance limits set for the AWS
region.

6) On the Limits page, click Calculate vCPU limit to open the simplified
vCPU calculator necessary to compute the total vCPU limit
requirements for your AWS account.

7) On the Calculate vCPU limit page, use Add instance type button to
add each instance type identified at step no. 4. Use Instance Count to
set the number of EC2 instances available for each instance type
found. Once all the instance types are added to the calculator,
compare the value available in the vCPUs needed column (i.e. the
total number of vCPUs in use) with the value defined in the Current
limit column (i.e. the vCPU limit quota set for the AWS region). If the
total number of vCPUs in use is going to reach soon the limit quota
set for the current AWS region, follow the instructions provided in the
Remediation/Resolution section to request a vCPU limit increase.
Click Close to return to the Limits dashboard.

8) Change the AWS region from the navigation bar and repeat the
entire process for the other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under INSTANCES section, choose
Instances.
4) Select the EC2 instance that you want to examine.

5) Select the Description tab from the dashboard bottom panel.

6) In the right column, click on the AMI ID parameter value to display


the description box for the AMI used to launch the selected instance.
7) Inside the description box, copy the AMI ID displayed next to the
AMI name to your clipboard.

8) In the navigation panel, under IMAGES section, select AMIs.

9) Select Owned by me from the search filter dropdown menu, paste


the AMI ID copied at step no. 7 into the search bar and press Enter. If
the filtering process is not returning any results, the selected EC2
instance was deployed without using an approved/golden Amazon
Machine Image (AMI), therefore the instance software configuration
might not be well-secured.

10) Repeat steps no. 3 – 9 to verify the AMI origin for the other EC2
instances within your AWS region.

11) Change the AWS region from the navigation bar and repeat the
process for the other regions.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under INSTANCES section, choose


Instances.
4) On the EC2 Instances page, click inside the attributes filter box,

choose Security Group Name from the dropdown list and type default
for the attribute value. This filtering technique will help you to detect
the EC2 instances that are currently associated with the default
security group created alongside with the VPC available within the
current AWS region. If the filtering process returns one or more EC2
instances, the default security group is currently in use within the
selected region, therefore the EC2 network configuration is not
following AWS security best practices.

5) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
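The console filter on Security Group Name = "default" can be replicated over a DescribeInstances-style response. A minimal sketch; instance IDs and group names are illustrative:

```python
def instances_using_default_sg(reservations):
    # `reservations` mirrors the Reservations list from the EC2
    # DescribeInstances API; the check flags instances attached to the
    # VPC's default security group, as the procedure above describes.
    hits = []
    for reservation in reservations:
        for instance in reservation.get("Instances", []):
            groups = instance.get("SecurityGroups", [])
            if any(g.get("GroupName") == "default" for g in groups):
                hits.append(instance["InstanceId"])
    return hits

# Illustrative sample data shaped like a DescribeInstances response.
sample_reservations = [
    {"Instances": [
        {"InstanceId": "i-0aaa", "SecurityGroups": [{"GroupName": "web-sg"}]},
        {"InstanceId": "i-0bbb", "SecurityGroups": [{"GroupName": "default"}]},
    ]},
]

print(instances_using_default_sg(sample_reservations))
```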
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under INSTANCES section, choose
Instances.
4) Click inside the attributes filter box located under the EC2
dashboard top menu, select Instance Type, type the name of the
desired instance type prefixed with an exclamation mark (e.g. !
m3.medium) and press Enter. If the filtering process returns one or
more EC2 instances as result, the instances available in the current
region were not launched using the desired type, therefore you must
take action and raise an AWS support case to limit EC2 instance
creation only to the desired/required instance type(s) (see
Remediation/Resolution section).

5) Change the AWS region from the navigation bar and repeat step
no. 4 for all other regions.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under INSTANCES section, choose


Instances.
4) Check the total number of EC2 instances available in the current
AWS region, listed in the top-right section of the dashboard.
5) Change the AWS region from the navigation bar and repeat step
no. 4 for all other regions. If the total number of running EC2
instances provisioned in your AWS account is greater than 50, the
recommended threshold was exceeded, therefore you must take
action and raise an AWS support case to limit the number of
instances based on your requirements (see Remediation/Resolution
section).
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the navigation panel, under Instances section, click Instances.

4) Select the EC2 instance that you need to examine.

5) Select the Description tab from the bottom panel.

6) Check the Instance type parameter value to determine if the
instance is using a previous-generation instance type like the ones
listed in the table.

7) Repeat steps no. 4, 5 and 6 for each EC2 instance available in the
current region. Change the AWS region from the navigation bar and
repeat the process for the other regions. To upgrade the instance
type to its latest-generation equivalent, see the Remediation/
Resolution section.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) On the EC2 console dashboard, in the Account Attributes upper-


right section, check the EC2 Supported Platforms for your AWS
account:

If the Supported Platforms value is VPC, your account supports only


the EC2-VPC platform and all your instances are launched within a
Virtual Private Cloud (VPC) environment, therefore the platform
checkup stops here.
If the Supported Platforms status value is set to EC2 and VPC, your
account supports both EC2-Classic and EC2-VPC platforms. To identify
any instances launched using EC2-Classic, continue to the next step.

4) In the left navigation panel, under INSTANCES section, choose


Instances.

5) Select the running EC2 instance that you want to examine.

6) Select the Description tab from the dashboard bottom panel.


7) In the left column, check the VPC ID parameter value. If VPC ID
parameter has no value assigned, the selected EC2 instance was
launched within the EC2-Classic platform and needs to be moved to
the EC2-VPC platform. Cloud AWS recommends migrating any running
EC2-Classic instances to a VPC.
8) Repeat steps no. 5 – 7 to verify the EC2 platform used by other
instances available in the current region.

9) Change the AWS region from the navigation bar and repeat the
process for the other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under INSTANCES section, choose
Instances.
4) Select the EC2 instance that you want to examine.

5) Select the Description tab from the dashboard bottom panel.

6) In the right column, check the Tenancy attribute value to


determine the selected EC2 instance tenancy type. If the Tenancy
current value is set to default, the instance is running on Multi-Tenant
Hardware (logically isolated). Otherwise, if the Tenancy value is set to
dedicated, the instance is running on Single-Tenant Hardware
(physically isolated at the host hardware level). To determine if you
have any EC2 Dedicated Hosts (physically isolated), just select
Dedicated Hosts from the EC2 navigation panel and check for any
instances listed.

7) Repeat steps no. 4 – 6 to verify the tenancy type for the rest of the
EC2 instances provisioned in the current region.
8) Change the AWS region from the navigation bar and repeat the
audit process for the other regions.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under INSTANCES section, choose


Instances.

4) Select the EC2 instance that you want to examine.

5) Select the Description tab from the dashboard bottom panel.

6) In the right column, check the Termination Protection flag value to


determine if the feature is enabled or disabled. If the Termination
Protection current value is set to False, the feature is not enabled and
the selected EC2 instance is not protected against accidental
termination.

7) Repeat steps no. 4 – 6 to verify the termination protection current


status for the rest of the EC2 instances provisioned in the current
region.

8) Change the AWS region from the navigation bar and repeat the
audit process for the other regions.
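The termination-protection flag checked above is exposed through the instance's `disableApiTermination` attribute. A minimal sketch assuming the DescribeInstanceAttribute response shape:

```python
def termination_protection_disabled(attribute):
    # `attribute` mirrors the DescribeInstanceAttribute response for
    # the disableApiTermination attribute; Value=False means the
    # instance is not protected against accidental termination.
    return not attribute.get("DisableApiTermination", {}).get("Value", False)

# Illustrative responses for an unprotected and a protected instance.
print(termination_protection_disabled({"DisableApiTermination": {"Value": False}}))
print(termination_protection_disabled({"DisableApiTermination": {"Value": True}}))
```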
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Elastic IPs.
4) Click inside the EIP attributes filter box located under the
dashboard top menu, choose Network Platform from the dropdown
list and select EC2-Classic. This filtering technique will help you to
detect how many Elastic IP addresses are currently allocated within
the current AWS region in order to determine if your account has
reached the default limit of 5 (five) EIP addresses.
5) Change the AWS region from the navigation bar and repeat the
process for other regions.

1) Sign in to the AWS Management Console.


2) Navigate to EC2 dashboard at
https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Elastic IPs.

4) Click inside the EIP attributes filter box located under the
dashboard top menu, choose Network Platform from the dropdown
list and select EC2-VPC. This filtering technique will help you to detect
how many Elastic IP addresses are currently allocated within the
current AWS region in order to determine if your account has already
reached the default limit of 5 (five) EIP addresses.

5) Change the AWS region from the navigation bar and repeat the
process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under INSTANCES section, choose
Instances.
4) Select the EC2 instance that you want to examine.

5) Click on the Actions dropdown button from the dashboard top


menu, select Instance Settings and verify the Attach to Auto Scaling
Group command link state. If the command link is active,
the selected EC2 instance is not currently running within an AWS Auto
Scaling Group (ASG), therefore the running instance is not configured
to follow AWS best practices.

6) Repeat step no. 4 and 5 to verify if the rest of the EC2 instances
provisioned in the current region are running inside an Auto Scaling
Group.

7) Change the AWS region from the navigation bar and repeat the
audit process for the other regions.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under INSTANCES section, choose


Reserved Instances.

4) Open the dashboard Show/Hide Columns dialog box by clicking the


configuration icon from the right menu:

5) Inside the Show/Hide Columns dialog box, select Expires checkbox


then click Close to return to the EC2 dashboard.

6) Select the Reserved Instance (RI) that you want to examine and
verify the value listed for the selected instance in the Expires column.
If the date displayed in this column is sooner than 30 days, the
selected AWS EC2 RI is about to expire, therefore it must be renewed
to keep it running at the current discounted hourly rate.
7) Repeat step no. 6 to determine the expiration date for other EC2
Reserved Instances available in the current region.
8) Change the AWS region from the navigation bar and repeat the
process for the other regions.
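The Expires-column check above amounts to comparing each active reservation's end date against a 30-day window. A minimal sketch assuming the DescribeReservedInstances response shape; the reservation IDs and dates are illustrative:

```python
from datetime import datetime, timedelta, timezone

def reservations_expiring_soon(reserved_instances, now, window_days=30):
    # Flag active Reserved Instances whose End date falls within the
    # next 30 days, matching the console check described above.
    cutoff = now + timedelta(days=window_days)
    return [
        ri["ReservedInstancesId"]
        for ri in reserved_instances
        if ri["State"] == "active" and ri["End"] <= cutoff
    ]

# Illustrative sample data with a fixed reference time.
now = datetime(2025, 3, 1, tzinfo=timezone.utc)
sample_ris = [
    {"ReservedInstancesId": "ri-expiring", "State": "active",
     "End": datetime(2025, 3, 20, tzinfo=timezone.utc)},
    {"ReservedInstancesId": "ri-long-lived", "State": "active",
     "End": datetime(2026, 3, 1, tzinfo=timezone.utc)},
]

print(reservations_expiring_soon(sample_ris, now))
```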
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Check the total number of EC2 security groups available in the
current AWS region, listed in the top-right section of the dashboard.

If the total number of security groups available is greater than 50, the
recommended threshold was exceeded, therefore you must take
actions to remove any unnecessary or overlapping security groups
created within the current region (see Remediation/Resolution
section).

5) Change the AWS region from the navigation bar and repeat the
audit process for other regions.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under NETWORK & SECURITY section,


choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Source/Destination (CIDR), type 10.0.0.0/8 as input for the


CIDR then press Enter.
Choose Source/Destination (CIDR) again, type 172.16.0.0/12 and
press Enter.
Choose Source/Destination (CIDR) one more time, type
192.168.0.0/16 and press Enter.

If one or more EC2 security groups allow inbound traffic from RFC-
1918 CIDRs, the filtering process will return one or more entries as
result; any security group configured to allow traffic from RFC-1918
CIDRs should be flagged for review.

5) Change the AWS region from the navigation bar and repeat the
process for other regions.
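The three CIDR filters applied above can be generalized with the standard-library `ipaddress` module. A minimal sketch; the function name is illustrative:

```python
import ipaddress

# The RFC-1918 private ranges checked in the console steps above.
RFC_1918 = [ipaddress.ip_network(c) for c in
            ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def source_is_private_range(cidr):
    # True when an ingress rule's source CIDR overlaps one of the
    # RFC-1918 private ranges.
    network = ipaddress.ip_network(cidr, strict=False)
    return network.version == 4 and any(
        network.overlaps(private) for private in RFC_1918)

print(source_is_private_range("10.1.2.0/24"))   # inside 10.0.0.0/8
print(source_is_private_range("8.8.8.0/24"))    # public range
```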
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.


Choose Port Range, type 445 for the port number and press Enter.
5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 445. If one or more
rules have the source set to 0.0.0.0/0 (Anywhere), the selected
security group allows unrestricted traffic on port 445, therefore the
access to the associated EC2 instance(s) using Common Internet File
System (CIFS) protocol is not secured.
8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
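The open-port test applied here to port 445 is the same one the following sections apply to DNS (53), Elasticsearch (9200), FTP (20/21), MongoDB (27017), MSSQL (1433), and MySQL/Aurora (3306), so it can be written once. A minimal sketch assuming the IpPermissions shape from DescribeSecurityGroups (the sample rule is illustrative):

```python
def port_open_to_world(ip_permissions, port):
    # True when any ingress rule leaves `port` reachable from
    # 0.0.0.0/0 or ::/0 (Anywhere), which the procedures above treat
    # as unrestricted access.
    for perm in ip_permissions:
        from_port, to_port = perm.get("FromPort"), perm.get("ToPort")
        if from_port is None or to_port is None:
            continue
        if not from_port <= port <= to_port:
            continue
        sources = [r.get("CidrIp") for r in perm.get("IpRanges", [])]
        sources += [r.get("CidrIpv6") for r in perm.get("Ipv6Ranges", [])]
        if "0.0.0.0/0" in sources or "::/0" in sources:
            return True
    return False

# Illustrative rule exposing CIFS (port 445) to the world.
sample_perms = [
    {"IpProtocol": "tcp", "FromPort": 445, "ToPort": 445,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "Ipv6Ranges": []},
]

print(port_open_to_world(sample_perms, 445))
```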
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.


Choose again Protocol and select UDP from the list.
Choose Port Range then select DNS (TCP) as filter input.
Choose Port Range then select DNS (UDP) as filter input.

5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 53. If one or more
rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the selected
security group allows unrestricted DNS traffic on port 53, therefore
the DNS server can be exploited and used for malicious activities

8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.

5) Select an EC2 security group returned as result.


6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 9200. If one or more
rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the selected
security group allows unrestricted data traffic on port 9200, therefore
the Elasticsearch access to the associated EC2 or RDS instance(s) is
not secured.
8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.


Choose Port Range, type 20 as input for the port number then
press Enter.
Choose Port Range again, type 21 for the port number and press
Enter.

5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.


7) Verify the value available in the Source column for any
inbound/ingress rules with the Port Range set to 20 and 21. If one or
more rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the
selected security group allows unrestricted traffic on ports 20 and 21,
therefore the FTP access to the associated EC2 instance(s) is not
secured.

8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu, choose Protocol and select ICMP from the protocols list.

5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.


7) Verify the value available in the Source column for any
inbound/ingress rules with the Protocol set to ICMP or any other
custom ICMP type:

If one or more rules have the source set to 0.0.0.0/0 or ::/0


(Anywhere), the selected security group allows unrestricted traffic to
any hosts using ICMP, therefore the access to the associated EC2
instance(s) is not secured.
8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Select the EC2 security group that you want to examine.

5) Select the Inbound tab from the dashboard bottom panel.

6) Verify the value available in the Source column for any


inbound/ingress rules with uncommon ports. If one or more rules
have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the selected
security group allows unrestricted traffic to uncommon ports,
therefore the access to the EC2 instance(s) associated with the
security group is not restricted.

7) Repeat steps no. 4 – 6 to verify the rest of the EC2 security groups
available in the current region.

8) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.


Choose Port Range, type 27017 as input for the port number then
press Enter.
5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 27017. If one or
more rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the
selected security group allows unrestricted data traffic on port 27017,
therefore the MongoDB database access to the associated EC2
instance(s) is not secured.

8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under NETWORK & SECURITY section,


choose Security Groups.

4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.


Choose Port Range then select MS SQL as filter input.

5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 1433. If one or more
rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the selected
security group allows unrestricted data traffic on port 1433,
therefore the MSSQL access to the associated EC2 or RDS instance(s)
is not secured.

8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.


Choose Port Range then select MySQL/Aurora as filter input.
5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 3306. If one or more
rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the selected
security group allows unrestricted data traffic on port 3306, therefore
the MySQL access to the associated EC2 or RDS instance(s) is not
secured.
8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

a. Choose Protocol and select TCP from the protocols list.

b. Choose again Protocol and select UDP from the list.

c. Choose Port Range, type 139 for the port number and press Enter.

d. Repeat step c. using ports 137 and 138 as input value.

5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 137 - 139. If one or
more rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the
selected security group allows unrestricted traffic on ports 137, 138
and 139, therefore the NetBIOS access to the associated EC2
instance(s) is not secured.

8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.


Choose Port Range then select Oracle-RDS as filter input.
5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 1521. If one or
more rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the
selected security group allows unrestricted data traffic on port 1521,
therefore the Oracle Database access to the associated EC2 or RDS
instance(s) is not secured.
8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Select the EC2 security group that you want to examine.

5) Select the Outbound tab from the dashboard bottom panel.

6) Verify the value available in the Destination column for any


outbound/egress rules defined. If one or more rules have the
destination set to 0.0.0.0/0 or ::/0 (Anywhere), the selected security
group allows unrestricted outbound traffic, therefore the access to
the Internet for any EC2 instances associated with the security group
is not restricted.

7) Repeat steps no. 4 – 6 to verify other EC2 security groups available


in the current region.

8) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.


Choose Port Range then select PostgreSQL as filter input.
5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 5432. If one or more
rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the selected
security group allows unrestricted data traffic on port 5432, therefore
the PostgreSQL Database access to the associated EC2 or RDS
instance(s) is not secured.
8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.


Choose Port Range then select RDP as filter input.
5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 3389. If one or more
rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the selected
security group allows unrestricted traffic on port 3389, therefore the
RDP access to the associated EC2 instance(s) is not secured.
8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.


Choose Port Range, type 135 for the port number and press Enter.
5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 135. If one or more
rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the selected
security group allows unrestricted data traffic on port 135, therefore
the Remote Procedure Call (RPC) access to the associated EC2
instance(s) is not secured.
8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.

4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.

Choose Port Range then select SMTP as filter input.

5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any
inbound/ingress rules with the Port Range set to 25. If one or more
rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the selected
security group allows unrestricted traffic on port 25, therefore the
SMTP access to the associated EC2 instance(s) is not secured.

8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the search field located under the dashboard top menu
and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.


Choose Port Range then select SSH as filter input.
5) Select one of the EC2 security groups displayed.

6) Select the Inbound tab located at the bottom of the screen.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 22. If one or more
rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the selected
security group allows unrestricted traffic on port 22, therefore the
SSH access to the associated EC2 instance(s) is not secured.
8) Repeat steps 5 - 7 to verify the rest of your EC2 security groups
returned as a result step 4 above.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.


Choose Port Range, type 23 as input for the port number then
press Enter.
5) Select an EC2 security group returned as result.

6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 23. If one or more
rules have the source set to 0.0.0.0/0 or ::/0 (Anywhere), the selected
security group allows unrestricted traffic on port 23, therefore the
Telnet access to the associated EC2 instance(s) is not secured.
8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Key Pairs.
4) Select the EC2 key pair that you want to examine.

5) Copy the name of the selected key displayed as the value of the
Key pair name attribute, available within the EC2 dashboard bottom
panel.
6) Go back to the navigation panel and under INSTANCES section
choose Instances.

7) On the EC2 dashboard, click inside the attributes filter box located
under the dashboard top menu, choose Key Name parameter from
the dropdown list, paste the key pair name copied at step no. 5 and
press Enter. To search for active EC2 instances only, choose Instance
State then select Running from the dropdown list. This filtering
method
will help you to determine if there are any EC2 instances that match
the selected criteria, available in the current AWS region. If no AWS
EC2 instances matching your filter criteria are found, the selected EC2
SSH key pair is not associated with any instances provisioned in the
current region, therefore the EC2 key pair is not being used and
should be removed from your account.

8) Repeat steps no. 3 – 7 to determine the status for other EC2 SSH
key pairs provisioned within the current region.

9) Change the AWS region from the navigation bar and repeat the
entire audit process for other regions.
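The key-pair audit above reduces to comparing provisioned key-pair names against the key names referenced by running instances. A minimal sketch with hypothetical data; the record shapes below are simplified stand-ins, not the raw DescribeInstances response:

```python
# Sketch of the unused-key-pair audit: a key pair is flagged when no
# running instance references it. Record shapes are simplified
# stand-ins for the EC2 API response.

def unused_key_pairs(key_pair_names, instances):
    """Return key pairs not referenced by any running instance."""
    in_use = {i.get("KeyName") for i in instances
              if i.get("State") == "running"}
    return sorted(set(key_pair_names) - in_use)

keys = ["app-key", "legacy-key"]
instances = [{"KeyName": "app-key", "State": "running"},
             {"KeyName": "legacy-key", "State": "stopped"}]
print(unused_key_pairs(keys, instances))  # ['legacy-key']
```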

1) Sign in to the AWS Management Console.


2) Navigate to EC2 dashboard at
https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under IMAGES section, choose AMIs.
4) Open the dashboard Show/Hide Columns dialog box by clicking the
configuration icon:
5) Inside the Show/Hide Columns dialog box, under Your Tag Keys
column, select the Name checkbox then click Close to return to the
AMI dashboard.
6) Under Name column, check the name tag value of each image
created in the current AWS region. If one or more AMIs are not using
naming conventions based on the default pattern (i.e.
^ami-(ue1|uw1|uw2|ew1|ec1|an1|an2|as1|as2|se1)-(d|t|s|p)-([a-
z0-9\-]+)$) or based on a well-defined custom pattern, the naming
structure of these resources does not adhere to AWS tagging best
practices.
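The pattern check in step 6 can be automated with the regular expression quoted above; a minimal sketch (the sample names are hypothetical):

```python
import re

# Default AMI naming pattern quoted in the audit step above.
AMI_NAME_PATTERN = re.compile(
    r"^ami-(ue1|uw1|uw2|ew1|ec1|an1|an2|as1|as2|se1)-(d|t|s|p)-([a-z0-9\-]+)$"
)

def follows_convention(name):
    """True when the AMI name matches the default naming pattern."""
    return AMI_NAME_PATTERN.match(name) is not None

print(follows_convention("ami-ue1-p-webserver"))  # True
print(follows_convention("my-image"))             # False
```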
7) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu, choose Group Name from the dropdown list and enter
default to return the EC2 default security group.

5) Select the security group returned as result.


6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules defined. If one or more rules have the source
set to Anywhere (0.0.0.0/0 or ::/0), the selected default security
group allows public inbound traffic, therefore is not following the
AWS security best practices.

8) Change the AWS region from the navigation bar and repeat the
audit process for the remaining regions.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under IMAGES section, choose AMIs.

4) Select the image that you want to examine.

5) Select the Details tab from the dashboard bottom panel to access
the resource configuration details.

6) In the left column, check the Creation date parameter value:

to determine the image age. If the age of the selected Amazon


Machine Image is greater than 180 days, the AMI is considered
outdated and it must be updated.

7) Repeat steps no. 4 – 6 to verify the provision date for other AMIs
available in the current region.

8) Change the AWS region from the navigation bar and repeat the
entire process for other regions.
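The 180-day threshold in step 6 is easy to compute from the Creation date value; a minimal sketch (the sample dates are hypothetical):

```python
from datetime import datetime, timedelta, timezone

MAX_AGE_DAYS = 180  # threshold used in the audit step above

def is_outdated(creation_date, now=None, max_age_days=MAX_AGE_DAYS):
    """True when the image is older than the allowed age."""
    now = now or datetime.now(timezone.utc)
    return (now - creation_date) > timedelta(days=max_age_days)

now = datetime(2025, 3, 7, tzinfo=timezone.utc)
print(is_outdated(datetime(2024, 1, 1, tzinfo=timezone.utc), now))  # True
print(is_outdated(datetime(2025, 1, 1, tzinfo=timezone.utc), now))  # False
```

The same calculation applies to the instance-age check later in this section, using the Launch time value instead of the Creation date.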
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under INSTANCES section, choose
Instances.
4) Select the EC2 instance that you want to examine.

5) Select the Description tab from the dashboard bottom panel.

6) Verify the Monitoring attribute value to determine the level of


CloudWatch monitoring enabled for the instance. If the attribute
value is set to basic, the selected AWS EC2 instance does not have the
detailed monitoring feature enabled.

7) Repeat steps no. 4 – 6 to verify the monitoring level for other EC2
instances that you need to monitor closely, provisioned in the current
region.
8) Change the AWS region from the navigation bar and repeat the
audit process for other regions.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under INSTANCES section, choose


Instances.

4) Select the EC2 instance that you want to examine.

5) Select the Description tab from the dashboard bottom panel.

6) In the right column, check the Security Groups attribute value(s) to


identify the name of the security group(s) associated with the
selected instance. Copy the name of the associated security group(s).

7) In the navigation panel, under NETWORK & SECURITY section,


choose Security Groups.
8) Click inside the attributes filter box located under the dashboard
top menu, select Group Name, paste the name of the EC2 security
group copied at step no. 6 and press Enter. Repeat the step if the
selected EC2 instance has more than one security group assigned.

9) Click on the Show/Hide Columns button:

, select Inbound Rules Count and Outbound Rules Count attributes


from the Security Group Attributes column and click Close.

10) Check the number of inbound and outbound rules defined for the
selected security group(s), displayed in the Inbound Rules Count and
Outbound Rules Count columns:

If the total number of inbound and outbound rules displayed is


greater than 50, the security group(s) associated with the selected
EC2 instance exceed(s) the recommended threshold for the number
of rules defined, therefore the instance network performance can be
degraded (see Remediation/Resolution section to remove any
unnecessary rules).

11) Repeat steps no. 4 – 10 to determine the number of inbound and
outbound rules for other EC2 instances provisioned in the current
region.

12) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under INSTANCES section, choose
Instances.
4) Select the EC2 instance that you want to examine. The Instance
State for the selected EC2 instance must be 'running'.

5) Select the Description tab from the dashboard bottom panel.

6) In the right column, check the Launch time parameter value:


to determine the instance active age. If the selected EC2 instance
active age is greater than 180 days, the instance is considered old and
requires a restart.

7) Repeat steps no. 4 – 6 to verify the launch date for other instances
available in the current region.

8) Change the AWS region from the navigation bar and repeat the
audit process for the other regions.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under INSTANCES section, choose


Instances.

4) Select the EC2 instance that you want to examine.

5) Select the Description tab from the dashboard bottom panel.

6) In the right column, check the IAM role attribute value. If the
attribute has no value assigned, the selected EC2 instance has no IAM
roles associated. AWS strongly recommends using IAM roles
when your applications need to perform AWS API requests.

7) Repeat steps no. 4 – 6 to check other EC2 instances provisioned in


the current region for associated IAM roles.

8) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under INSTANCES section, choose
Instances.
4) On the EC2 Instances page, click inside the attributes filter box:

choose Security Group Name from the dropdown list and type
launch-wizard for the attribute value. This filtering technique will help
you to detect all the EC2 instances that are currently associated with
security groups prefixed with "launch-wizard", in the current AWS
region. If the filtering process returns one or more EC2 instances,
there are security groups prefixed with "launch-wizard" in use within
the selected region, therefore the specified instances are using
security groups that are possibly unconfigured and insecure.

5) Change the AWS region from the navigation bar and repeat the
audit process for other regions.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under NETWORK & SECURITY section,


choose Security Groups.

4) Select the EC2 security group that you want to examine.

5) Click on the Show/Hide Columns button from the top-right menu:

select Inbound Rules Count and Outbound Rules Count attributes


from the Security Group Attributes column and click Close.

6) Check the number of inbound and outbound rules defined for the
selected security group, displayed in the Inbound Rules Count and
Outbound Rules Count columns:

If the total number of inbound and outbound rules displayed is


greater than 50, the selected EC2 security group exceeds the
recommended threshold for the number of rules defined, therefore
you must take actions to remove any unnecessary or overlapping
rules in order to restore performance efficiency (see
Remediation/Resolution section).

7) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
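The rule-count check above can be expressed as a small helper, using the 50-rule threshold from step 6; a minimal sketch:

```python
RULES_THRESHOLD = 50  # recommended maximum cited in the audit step above

def exceeds_rule_limit(inbound_count, outbound_count,
                       threshold=RULES_THRESHOLD):
    """True when a group's combined rule count is above the threshold."""
    return (inbound_count + outbound_count) > threshold

print(exceeds_rule_limit(42, 13))  # True (55 rules in total)
print(exceeds_rule_limit(20, 10))  # False
```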
1) Sign in to the AWS Management Console.

2) Navigate to VPC dashboard at


https://console.aws.amazon.com/vpc/.
3) In the left navigation panel, under Virtual Private Cloud section,
choose Elastic IPs.
4) Select Unassociated from the Filter dropdown menu to filter all
the available EIPs and return the unattached ones. The
filtering process should return the Elastic IPs that are not currently
associated with any running EC2 instances or Elastic Network
Interfaces (ENIs). The unattached EIPs returned at this step can be
safely released (see Remediation/Resolution section).

5) Change the AWS region from the navigation bar:


and repeat the process for the other regions.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the left navigation panel, under IMAGES section, choose AMIs.

4) Select the image that you want to examine.

5) Select the Details tab from the dashboard bottom panel and copy
the AMI ID value (e.g. ami-15728c78) from the left column.

6) In the left navigation panel, under INSTANCES section, choose


Instances.

7) Click inside the EC2 attributes filter box located under the
dashboard top menu and select Image ID from the dropdown list:

8) Paste the AMI ID copied at step no. 5 into the EC2 attributes filter
box as the Image ID input value and press Enter. If the filtering
process is returning one or more EC2 instances as search results, the
selected AMI is currently in use. If the filtering process is not
returning any results, the selected AMI is not used anymore and can
be safely removed from your AWS account.

9) Repeat steps no. 4 – 8 to identify any other unused AMIs available


in the current region.
10) Change the AWS region from the navigation bar and repeat the
entire process for the other regions.
1) Sign in to the AWS Management Console.

2) Navigate to AWS EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
click Network Interfaces.
4) Select the AWS ENI that you want to examine.

5) Select the Details tab from the dashboard bottom panel and check
the value set for the Status attribute. If the Status attribute value is
"available", the selected AWS Elastic Network Interface is not
attached to an EC2 instance, therefore it should be marked as unused
then safely removed from your AWS account (see
Remediation/Resolution section).

6) Repeat step no. 4 and 5 to determine the current status for other
AWS ENIs available within the current region.

7) Change the AWS region from the navigation bar and repeat the
audit process for other regions.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) On the left navigation panel, under NETWORK & SECURITY section,


choose Security Groups.

4) Select the security group that you want to examine.

5) Select the Inbound/Outbound tab from the dashboard bottom


panel.

6) Verify the fields within Description column for any existing


inbound/outbound rule description defined. If there are
inbound/outbound rules without any descriptions assigned, the
selected EC2 security group does not have descriptions defined for all
existing rules, therefore does not adhere to security and operational
excellence best practices.
7) Repeat steps no. 4 – 6 to verify other EC2 security groups for
descriptive text assigned to inbound/outbound rules, available in the
selected region.
8) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under INSTANCES section, choose
Instances.
4) On the EC2 Instances listing page, click inside the attributes filter
box located under the dashboard top menu, choose the Tenancy
parameter from the dropdown list and select the Dedicated – Run a
Dedicated instance option. To search for active dedicated instances
only, use the filter box again, choose Instance State then select
Running. This filtering method will help you find and review all active
EC2 dedicated instances provisioned within the current AWS region. If
no instances matching your filter criteria are found, there are no
dedicated instances currently running in the selected region.

5) Change the AWS region from the navigation bar and repeat the
audit process for other regions.

1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.

3) In the navigation panel, under INSTANCES section, choose


Instances.

4) Open the Show/Hide Columns dialog box by clicking the EC2


dashboard configuration icon:

5) Inside the Show/Hide Columns dialog box, under Your Tag Keys
column, select the Name checkbox then click Close to return to the
dashboard.

6) Under Name column, check the name tag value of each instance
available in the current AWS region. If one or more
provisioned EC2 instances are not using naming conventions based on
the default pattern (i.e. ^ec2-(ue1|uw1|uw2|ew1|ec1|
an1|an2|as1|as2|se1)-([1-2]{1})([a-c]{1})-(d|t|s|p)-([a-z0-9\-]+)$) or
based on a well-defined custom pattern, the naming structure of
these resources does not adhere to AWS tagging best practices.

7) Change the AWS region from the navigation bar and repeat the
entire audit process for other regions.
1) Sign in to AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the navigation panel, under INSTANCES, click Instances.

4) Select the AWS EC2 instance that you want to examine.

5) Select the Description tab from the dashboard bottom panel.

6) In the left column, check the Stop - Hibernation behavior attribute


value. If the verified value (status) is set to Disabled, the Hibernation
feature is not enabled for the selected Amazon EC2 EBS-backed
instance.

7) Repeat steps no. 4 – 6 to check the Hibernation feature status for


other Amazon EC2 instances launched in the current region.

8) Change the AWS region from the navigation bar and repeat steps
no. 4 – 7 for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under INSTANCES section, choose
Reserved Instances.
4) Open the dashboard Show/Hide Columns dialog box by clicking the
configuration icon from the right menu:

5) Inside the Show/Hide Columns dialog box, select Expires checkbox


then click Close to return to the EC2 dashboard.
6) Select the Reserved Instance (RI) that you want to examine and
verify the value listed for the selected instance in the Expires column.
If the date displayed in this column is sooner than 7 days, the selected
AWS EC2 RI is about to expire, therefore it must be renewed to keep
it running at the current discounted hourly rate.

7) Repeat step no. 6 to determine the expiration date for other EC2
Reserved Instances available in the current region.

8) Change the AWS region from the navigation bar and repeat the
process for the other regions.
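The expiry check in step 6 can be sketched as a small date comparison using the 7-day renewal window; the sample dates are hypothetical:

```python
from datetime import datetime, timedelta, timezone

RENEWAL_WINDOW_DAYS = 7  # window used in the audit step above

def expires_soon(expiry, now=None, window_days=RENEWAL_WINDOW_DAYS):
    """True when a Reserved Instance expires within the renewal window."""
    now = now or datetime.now(timezone.utc)
    return timedelta(0) <= (expiry - now) <= timedelta(days=window_days)

now = datetime(2025, 3, 7, tzinfo=timezone.utc)
print(expires_soon(datetime(2025, 3, 10, tzinfo=timezone.utc), now))  # True
print(expires_soon(datetime(2025, 6, 1, tzinfo=timezone.utc), now))   # False
```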
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.

Choose Port Range then select HTTP as filter input.
5) Select an EC2 security group returned as result.


6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any


inbound/ingress rules with the Port Range set to 80. If one or more
rules have the source set to 0.0.0.0/0 (Anywhere), the selected
security group allows unrestricted data traffic on port 80, therefore
the HTTP access to the associated EC2 or RDS instance(s) is not
secured.
8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
1) Sign in to the AWS Management Console.

2) Navigate to EC2 dashboard at


https://console.aws.amazon.com/ec2/.
3) In the left navigation panel, under NETWORK & SECURITY section,
choose Security Groups.
4) Click inside the attributes filter box located under the dashboard
top menu and select the following options from the dropdown list:

Choose Protocol and select TCP from the protocols list.

Choose Port Range then select HTTPS as filter input.
5) Select an EC2 security group returned as result.


6) Select the Inbound tab from the dashboard bottom panel.

7) Verify the value available in the Source column for any
inbound/ingress rules with the Port Range set to 443. If one or more
rules have the source set to 0.0.0.0/0 (Anywhere), the selected
security group allows unrestricted data traffic on port 443, therefore
the HTTPS access to the associated EC2 or RDS instance(s) is not
secured.
8) Repeat steps no. 5 – 7 to verify the rest of the EC2 security groups
returned as result at step no. 4.

9) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
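The manual console checks above can also be scripted. A minimal sketch in Python, assuming ingress rules shaped like the `IpPermissions` entries returned by boto3's `describe_security_groups`; the helper function below is illustrative, not part of any AWS SDK:

```python
# Flag security-group ingress rules that expose a given TCP port to the
# whole Internet (0.0.0.0/0 or ::/0), mirroring the console check above.

def open_to_world(ip_permissions, port):
    """Return True if any ingress rule covers `port` from an any-address source."""
    for rule in ip_permissions:
        # "-1" means "all protocols" in the describe_security_groups response.
        if rule.get("IpProtocol") not in ("tcp", "-1"):
            continue
        from_port = rule.get("FromPort", 0)
        to_port = rule.get("ToPort", 65535)
        if not (from_port <= port <= to_port):
            continue
        cidrs = [r.get("CidrIp") for r in rule.get("IpRanges", [])]
        cidrs += [r.get("CidrIpv6") for r in rule.get("Ipv6Ranges", [])]
        if "0.0.0.0/0" in cidrs or "::/0" in cidrs:
            return True
    return False

# Example rule set, shaped like one security group's IpPermissions:
rules = [
    {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "Ipv6Ranges": []},
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "10.0.0.0/8"}], "Ipv6Ranges": []},
]

print(open_to_world(rules, 80))   # True: port 80 is open to 0.0.0.0/0
print(open_to_world(rules, 443))  # False: port 443 restricted to private space
```

In practice the rule lists would come from `describe_security_groups` in each region, but the classification logic is identical.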
Recommendation Patch Priority
Ensure that your Amazon Machine Images (AMIs)
are encrypted to fulfil compliance requirements for
data-at-rest encryption.

Quick Wins
Determine if there are any EC2 instances scheduled
for retirement and/or maintenance in your AWS
account and take the necessary steps (reboot,
restart or re-launch) to resolve them.

Quick Wins

Identify any failed Amazon EC2 Reserved Instances
(RIs) available within your account.

Long Term
Identify any pending Amazon EC2 Reserved Instance
(RI) purchases available within your AWS account
and follow Cloud AWS guidelines for remediation in
order to receive a significant discount on the hourly
charges

Long Term

Identify any Amazon EC2 instances that appear to be
idle and stop or terminate them to help lower the
cost of your monthly AWS bill.

Long Term
Ensure that your AWS AMIs are not publicly shared
with the other AWS accounts in order to avoid
exposing sensitive data.

Quick Wins

Ensure that your security groups don't have ranges of
ports opened for inbound traffic in order to protect
your EC2 instances against denial-of-service (DoS)
attacks or brute-force attacks. Cloud AWS strongly
recommends opening only specific ports within your
security groups, based on your application
requirements.

Quick Wins
Ensure that all purchased AWS EC2 Reserved
Instances (RI) have corresponding instances running
within the same account or within any linked AWS
accounts available in an AWS Organization (if you
are using one).

Quick Wins
Determine if the number of vCPUs (Virtual Central
Processing Units) used by EC2 On-Demand instances
per AWS region is close to the vCPU limit number
established by Amazon Web Services, and request a
limit increase in order to avoid running into resource
limitations for future EC2 provisioning sessions.

Short Term
Ensure that all the AWS EC2 instances necessary for
your application stack are launched from your
approved base Amazon Machine Images (AMIs),
known as golden AMIs in order to enforce
consistency and save time when scaling your
application.

Long Term

Ensure that the EC2 instances provisioned in your
AWS account are not associated with default
security groups created alongside your VPCs in
order to enforce using custom and unique security
groups that exercise the principle of least privilege.

Short Term
Determine if the EC2 instances provisioned in your
AWS account have the desired instance type(s)
established by your organization based on the
workload deployed.

Short Term

Determine if the number of EC2 instances
provisioned in your AWS account has reached the
limit quota established by your organization for the
workload deployed.

Short Term
Ensure that all servers available in your AWS account
are using the latest generation of EC2 instances to
get the best performance with lower costs.

Short Term

Ensure that all your EC2 instances are deployed
within the AWS EC2-VPC platform instead of the
EC2-Classic platform for better flexibility and control
over security, traffic routing and availability.

Quick Wins
Ensure that your AWS EC2 instances are using the
appropriate tenancy model.

Short Term

Ensure that the EC2 instances provisioned outside of
the AWS Auto Scaling Groups (ASGs) have the
Termination Protection safety feature enabled in
order to protect your instances from being
accidentally terminated.

Quick Wins
Determine if the number of EC2-Classic Elastic IPs
(EIPs) allocated per region is close to the limit
number established by Amazon for accounts that
support EC2-Classic platform and request limit
increase in order to avoid encountering IP resource
limitations on future EC2 provisioning sessions.

Quick Wins

Determine if the number of EC2-VPC Elastic IPs (EIPs)
allocated per region is close to the limit number
established by AWS for accounts that support Virtual
Private Clouds (VPCs) and request limit increase in
order to avoid encountering IP resource limitations
on future EC2 provisioning sessions.

Short Term
Check for orphaned EC2 instances to make sure every
instance is launched within an AWS Auto Scaling Group
in order to help improve the availability and scalability
of your web applications during instance failures or
denial-of-service attacks (DoS, DDoS).

Short Term

Ensure that your AWS EC2 Reserved Instances are
renewed before expiration in order to get a
significant discount (up to 75% depending on the
commitment term) on the hourly charges.

Short Term
Determine if there is a large number of EC2 security
groups available within each AWS regions and
reduce their number by removing any unnecessary
or obsolete security groups.

Short Term

Check your EC2 security groups for inbound rules
that allow access from IP address ranges specified in
RFC-1918 (i.e. 10.0.0.0/8, 172.16.0.0/12 and
192.168.0.0/16) and restrict access to only those
private IP addresses that require it in order to
implement the principle of least privilege.
Short Term
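The RFC-1918 ranges named above can be matched programmatically with Python's standard `ipaddress` module; a small sketch (the helper name is illustrative):

```python
import ipaddress

# The three private IPv4 address blocks defined by RFC-1918.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(cidr):
    """True if the given IPv4 CIDR lies entirely within RFC-1918 space."""
    net = ipaddress.ip_network(cidr, strict=False)
    return any(net.subnet_of(block) for block in RFC1918)

print(is_rfc1918("192.168.1.0/24"))  # True: inside 192.168.0.0/16
print(is_rfc1918("8.8.8.0/24"))      # False: public address space
```

Running each security-group source CIDR through such a predicate quickly separates private-range rules (to be reviewed for least privilege) from public-range ones.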
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP port 445 and restrict access to only those IP
addresses that require it in order to implement the
principle of least privilege and reduce the possibility
of a breach.

Short Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP and UDP port 53 and restrict access to only
those IP addresses that require it in order to
implement the principle of least privilege and reduce
the possibility of a breach.

Short Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP port 9200 and restrict access to only those IP
addresses that require it in order to implement the
principle of least privilege and reduce the possibility
of a breach.

Short Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP ports 20 and 21 and restrict access to only
those IP addresses that require it in order to
implement the principle of least privilege and reduce
the possibility of a breach

Short Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to any hosts using ICMP and restrict access to only
those IP addresses that require it in order to
implement the principle of least privilege and reduce
the possibility of a breach.

Short Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to any uncommon TCP and UDP ports and restrict
access to only those IP addresses that require it in
order to implement the principle of least privilege
and reduce the possibility of a breach.

Short Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP port 27017 and restrict access to only those
IP addresses that require it in order to implement
the principle of least privilege and reduce the
possibility of a breach.

Short Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP port 1433 and restrict access to only those IP
addresses that require it in order to implement the
principle of least privilege and reduce the possibility
of a breach.

Short Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP port 3306 and restrict access to only those IP
addresses that require it in order to implement the
principle of least privilege and reduce the possibility
of a breach.

Short Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (0.0.0.0/0 or ::/0) to
TCP port 139 and UDP ports 137 and 138 and restrict
access to only those IP addresses that require it in
order to implement the principle of least privilege
and reduce the possibility of a breach.

Short Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP port 1521 and restrict access to only those IP
addresses that require it in order to implement the
principle of least privilege and reduce the possibility
of a breach.

Short Term
Check your EC2 security groups for outbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to any TCP/UDP ports and restrict access to only
those IP addresses that require it in order to
implement the principle of least privilege and reduce
the possibility of a breach.

Short Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP port 5432 and restrict access to only those IP
addresses that require it in order to implement the
principle of least privilege and reduce the possibility
of a breach.

Long Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP port 3389 and restrict access to only those IP
addresses that require it in order to implement the
principle of least privilege and reduce the possibility
of a breach.

Long Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP port 135 and restrict access to only those IP
addresses that require it in order to implement the
principle of least privilege and reduce the possibility
of a breach.

Long Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP port 25 and restrict access to only those IP
addresses that require it in order to implement the
principle of least privilege and reduce the possibility
of a breach.

Long Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP port 22. Restrict access to only those IP
addresses that require it, in order to implement the
principle of least privilege and reduce the possibility
of a breach.

Long Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0 or ::/0)
to TCP port 23 and restrict access to only those IP
addresses that require it in order to implement the
principle of least privilege and reduce the possibility
of a breach.

Long Term
Identify and remove any unused Amazon EC2 key
pairs in order to adhere to AWS security best
practices and protect against unapproved SSH
access.

Long Term

Ensure that all your Amazon Machine Images (AMIs)
are using suitable naming conventions for tagging in
order to manage them more efficiently and adhere
to AWS resource tagging best practices.

Long Term
Ensure that your AWS EC2 default security groups
restrict all inbound public traffic in order to enforce
AWS users (EC2 administrators, resource managers,
etc) to create custom security groups that exercise
the rule of least privilege instead of using the default
security groups.

Long Term

Ensure that your existing AWS Amazon Machine
Images (AMIs) are not older than 180 days in order
to ensure their reliability and to meet security and
compliance requirements.

Long Term
Ensure that detailed monitoring is enabled for your
Amazon EC2 instances in order to have enough
monitoring data to help you make better decisions
on architecting and managing compute resources
within your AWS account.

To enable detailed monitoring for an existing
instance (console):

1. Open the Amazon EC2 console at
https://console.aws.amazon.com/ec2/
2. In the navigation pane, choose Instances.
3. Select the instance and choose Actions,
CloudWatch Monitoring, Enable Detailed
Monitoring.
4. In the Enable Detailed Monitoring dialog box,
choose Yes, Enable.
5. Choose Close.

Long Term

Determine if there is a large number of security
group rules assigned to an EC2 instance and reduce
their number by removing any unnecessary or
overlapping rules.

Long Term
Identify and re-launch any running AWS EC2
instances older than 180 days in order to ensure
their reliability.

Short Term

Use IAM Roles/Instance Profiles instead of IAM
Access Keys to appropriately grant access
permissions to any application that performs AWS
API requests running on your EC2 instances. With
IAM roles you can avoid sharing long-term
credentials and protect your instances against
unauthorized access.

Long Term
Ensure that EC2 instances provisioned in your AWS
account are not associated with security groups that
have their name prefixed with "launch-wizard", in
order to enforce using secure and custom security
groups that exercise the principle of least privilege.

Short Term

Determine if there is a large number of inbound and
outbound rules defined within your AWS EC2
security groups and reduce their number by
removing any unnecessary or overlapping rules.

Short Term
Check for any unattached Elastic IP (EIP) addresses in
your AWS account and release (remove) them in
order to lower the cost of your monthly AWS bill.

Short Term

Find any unused Amazon Machine Images available
in your AWS account and remove them in order to
lower the cost of your monthly AWS bill.

Short Term
Identify and delete any unused Amazon AWS Elastic
Network Interfaces in order to adhere to best
practices and to avoid reaching the service limit.

Short Term

Ensure that all the rules defined for your Amazon
EC2 security groups have a description to help
simplify your operations and remove any
opportunities for operator errors.

Short Term
Ensure that all Amazon EC2 dedicated instances
provisioned within your AWS account are regularly
reviewed for cost optimization.

Short Term

Ensure that all your EC2 instances are using suitable
naming conventions for tagging in order to manage
them more efficiently and adhere to AWS resource
tagging best practices.

Long Term
Enable hibernation as an additional stop behaviour
for your EC2 instances backed by Amazon EBS in
order to reduce the time it takes for these instances
to return to service at restart.
To enable hibernation using the console:

1. Follow the Launching an instance using the
Launch Instance Wizard procedure.
2. On the Choose an Amazon Machine Image (AMI)
page, select an AMI that supports hibernation. For
more information about supported AMIs, see
Hibernation prerequisites.
3. On the Choose an Instance Type page, select a
supported instance type, and choose Next:
Configure Instance Details. For information about
supported instance types, see Hibernation
prerequisites.
4. On the Configure Instance Details page, for Stop -
Hibernate Behaviour, select the Enable hibernation
as an additional stop behaviour check box.
5. Continue as prompted by the wizard. When
you've finished reviewing your options on the
Review Instance Launch page, choose Launch. For
more information, see Launching an instance using
the Launch Instance Wizard.

Long Term
Ensure that your AWS EC2 Reserved Instances are
renewed before expiration in order to get a
significant discount (up to 75% depending on the
commitment term) on the hourly charges. The
renewal process consists of purchasing another EC2
Reserved Instance so that Amazon can keep charging
you based on the chosen reservation term.

Short Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0) to TCP
port 80 and restrict access to only those IP addresses
that require it in order to implement the principle of
least privilege and reduce the possibility of a breach.
TCP port 80 is used by the HTTP protocol.

Long Term
Check your EC2 security groups for inbound rules
that allow unrestricted access (i.e. 0.0.0.0/0) to TCP
port 443 and restrict access to only those IP
addresses that require it in order to implement the
principle of least privilege and reduce the possibility
of a breach.

Long Term
Patch Status Remark

Already Compliant
Already Compliant

N/A
N/A

N/A
Already Compliant

Already Compliant
N/A
N/A
N/A

N/A
N/A

Not Compliant
Not Compliant

Already Compliant
Already Compliant

Already Compliant
Already Compliant

Already Compliant
Already Compliant

N/A
Already Compliant

N/A
Already Compliant
N/A
N/A
N/A
N/A
Not Compliant
N/A
N/A
Already Compliant
Already Compliant
N/A
Already Compliant
N/A
Already Compliant
N/A
N/A
N/A
Already Compliant
Already Compliant

Already Compliant
Already Compliant

Already Compliant
Already Compliant

Already Compliant
Already Compliant

Already Compliant
Already Compliant

Already Compliant
Already Compliant

Already Compliant
Already Compliant

Already Compliant
Not Compliant

Already Compliant
Not Compliant
N/A
Not Compliant
Not Compliant
RDS
HIGH 9
MEDIUM 10
LOW 12
INFO 0
Sr. No. Check Name Description Category
1 Amazon RDS Public When you publicly share an AWS RDS User access control
Snapshots database snapshot, you give another
AWS account permission to both copy
the snapshot and create database
instances from it.
2 IAM Database Enabling IAM Database Authentication Secure Authentication
Authentication for RDS feature for your MySQL/PostgreSQL
database instances provides multiple
benefits such as in-transit encryption -
the network traffic to and from
database instances is encrypted using
Secure Sockets Layer (SSL), centralized
management - using AWS IAM to
centrally manage access to your
database resources, instead of
managing access individually for each
database instance and enhanced
security - for web applications running
on Amazon EC2, you can use IAM
profile credentials specific to each EC2
instance to access the associated
database instead of a using passwords.

3 RDS Automated Creating point-in-time RDS instance Availability
Backups Enabled snapshots periodically will allow you to
handle efficiently your data restoration
process in the event of a user error on
the source database or to save data
before making a major change to the
instance database such as changing the
structure of a table.
4 RDS Encryption When dealing with production Data protection
Enabled databases that hold sensitive and
critical data, it is highly recommended
to implement encryption in order to
protect your data from unauthorized
access. With RDS encryption enabled,
the data stored on the instance
underlying storage, the automated
backups, Read Replicas, and snapshots,
become all encrypted. The RDS
encryption keys implement AES-256
algorithm and are entirely managed
and protected by the AWS key
management infrastructure through
AWS Key Management Service (AWS
KMS).

5 RDS Publicly Accessible When the VPC security group User access control
associated with an RDS instance allows
unrestricted access (0.0.0.0/0),
everyone and everything on the
Internet can establish a connection to
your database and this can increase
the opportunity for malicious activities
such as brute force attacks, SQL
injections or DoS/DDoS attacks.
6 RDS Reserved DB With Reserved Instances (RIs) you can Cost Reduction
Instance Lease optimize your Amazon RDS costs based
Expiration In The Next on your expected usage. Since RDS RIs
7 Days are not renewed automatically,
purchasing another reserved database
instances on time will guarantee that
these instances will be also billed at a
discounted hourly rate.
7 Underutilized RDS Downsizing underused RDS database Cost Reduction
Instance instances represents a good strategy
for optimizing your monthly AWS costs.
8 Unrestricted DB When RDS DB security groups allow Secure Network Access
Security Group unrestricted access (0.0.0.0/0),
everyone and everything on the
Internet can make a connection to your
RDS database resources and this can
increase the opportunity for malicious
activities such as hacking or denial-of-
service (DoS) attacks.
9 Unused RDS Reserved When an AWS RDS Reserved Instance Cost Reduction
Instances is not in use (i.e. does not have an
active corresponding instance) the
investment made is not exploited. For
example, if you reserve a
db.m3.medium RDS instance within US
West (Oregon) region and you don't
provision a database instance with the
same class/type, in the same region of
the same AWS account or in any other
linked AWS accounts within your AWS
Organization, the specified RDS RI is
considered unused and your
investment has a negative return.
10 Aurora Database It is highly recommended to have all Availability
Instance Accessibility the database instances within an AWS
Aurora cluster as either publicly or
privately accessible as in case of a
failover, an instance might go from
publicly accessible to privately
accessible and obstruct the
connectivity to the database cluster.
11 DB Instance Using the latest generation of RDS Performance Improvement
Generation database instances instead of the
previous generation instances has
tangible benefits such as better
hardware performance (more
computing capacity and faster CPUs,
memory optimization and higher
network throughput), better support
for latest DB engines versions (e.g.
MySQL 5.7) and lower costs for
memory and storage.

12 Instance Deletion With Deletion Protection safety feature Availability
Protection enabled, you have the guarantee that
your Amazon RDS database instances
cannot be accidentally deleted and
make sure that your data remains safe.
Deletion protection prevents any
existing or new RDS database instances
from being deleted by users via the
AWS Management Console, the CLI or
the API calls, unless the feature is
explicitly disabled.
13 RDS Default Port Running your database instances on Secure Network Access
default ports represent a potential
security concern. Moving RDS instances
ports (the ports on which the database
accepts connections) to non-default
ports will add an extra layer of security,
protecting your publicly accessible
AWS RDS databases from brute force
and dictionary attacks.
14 RDS Desired Instance Setting limits for the type of Amazon Auditing
Type RDS instances provisioned in your AWS
account will help you address internal
compliance requirements and prevent
unexpected charges on your AWS bill.
15 RDS General Purpose Using General Purpose (GP) SSD Cost Reduction
SSD database storage instead of
Provisioned IOPS (PIOPS) SSD storage
represents a good strategy to cut down
on AWS RDS costs because for GP SSDs
you only pay for the storage compared
to PIOPS SSDs where you pay for both
storage and IOPS. Converting existing
PIOPS-based databases to GP is often
possible by configuring larger storage
which gives higher baseline
performance of IOPS for a lower cost.
16 RDS Instance Counts Monitoring and setting limits for the Cost Reduction
maximum number of RDS instances
provisioned within your AWS account
will help you to manage better your
database compute resources, prevent
unexpected charges on your AWS bill
and act fast to mitigate attacks.
Furthermore, if your AWS account
security has been compromised and
the attacker is creating a large number
of RDS resources within your account,
you risk to accrue a lot of AWS charges
in a short period of time and this can
affect your business.
17 RDS Master Username Since 'awsuser' is the Amazon's Secure Authentication
example (default) for the RDS database
master username, many AWS
customers will use this username for
their RDS databases in production,
therefore malicious users can use this
information to their advantage and
frequently try to use 'awsuser' for the
master username during brute-force
attacks.
18 RDS Reserved DB With Reserved Instances (RIs) you can Cost Reduction
Instance Lease optimize your Amazon RDS costs based
Expiration In The Next on your expected usage. Since RDS RIs
30 Days are not renewed automatically,
purchasing another reserved database
instances on time will guarantee that
these instances will be also billed at a
discounted hourly rate.
19 RDS Sufficient Backup Having a minimum retention period set Auditing
Retention Period for RDS database instances will enforce
your backup strategy to follow the best
practices as specified in the compliance
regulations. Retaining point-in-time
RDS snapshots for a longer period of
time will allow you to handle more
efficiently your data restoration
process in the event of failure.
20 Backtrack Once the Backtrack feature is enabled, Logging and tracing
Amazon RDS can quickly "rewind" your
Aurora MySQL database cluster to a
point in time that you specify. In
contrast to the backup and restore
method, with Backtrack you can easily
undo a destructive action, such as a
DELETE query without a WHERE clause,
with minimal downtime, you can
rewind your Aurora cluster in just few
minutes, and you can repeatedly
backtrack a database cluster back and
forth in time to help determine when a
particular data change occurred.
21 Enable RDS Log Once the Log Exports feature is Logging and tracing
Exports enabled, Amazon RDS sends general,
slow query, audit and error logs from
your MySQL, Aurora and MariaDB
databases to AWS CloudWatch Logs.
Broadcasting these logs to CloudWatch
allows you to maintain continuous
visibility into database activity, query
performance and errors within your
RDS database instances. For example,
you can set up AWS CloudWatch
alarms to notify on frequent restarts
which are recorded in the error log or
alarms for events recorded in the audit
logs that can alert on unwanted
changes made to your databases. You
can also create Amazon CloudWatch
alarms to monitor the slow query log
and enable timely detection of long-
running SQL queries. Additionally, you
can use CloudWatch Logs to perform
impromptu searches across multiple
logs published by RDS Log Exports –
this capability is particularly useful for
troubleshooting, audits and log
analysis.
22 Enable Serverless Log As soon as the Log Exports feature is Logging and tracing
Exports enabled, Amazon Aurora Serverless
starts publishing general, slow query,
audit and error logs from your Aurora
databases to AWS CloudWatch Logs. By
sending this type of logging data to
Amazon CloudWatch service, you gain
continuous visibility into database
activity, query performance and errors
occurred within your Aurora Serverless
databases. To augment the feature's
functionality, you can set up
CloudWatch alarms to notify you on
frequent restarts which are recorded in
the error log, or alarms for events
recorded in the audit logs that can
alert on unwanted changes made to
your Aurora databases. You can also
create AWS CloudWatch alarms to
monitor the slow query log and enable
timely detection of long-running SQL
queries. Additionally, you can use
Amazon CloudWatch Logs to perform
random searches across multiple logs
published by Aurora Serverless Log
Exports – this capability is particularly
useful for troubleshooting and
compliance auditing.
23 Idle RDS Instance Idle RDS instances represent a good Cost Reduction
candidate for reducing your monthly
AWS costs. Regularly checking your
AWS RDS instances for the number of
database connections performed will
help you efficiently detect and remove
any idle RDS resources from your AWS
account in order to avoid accumulating
unnecessary charges.
24 Performance Insights AWS Relational Database Service (RDS) Performance Improvement
Performance Insights feature provides
you instant visibility into the nature of
the workloads on your Amazon RDS
databases and helps you find the cause
of any performance issue found on
those databases.

25 Instance Level Events Amazon RDS event subscriptions for Incident Notification
Subscriptions instance level events are designed to
provide incident notification of event
changes triggered at the database
engine level such as failure, failover,
low storage, maintenance, recovery or
deletion.
26 RDS Copy Tags to Copying your AWS RDS database User access control
Snapshots instance tags to any automated or
manual snapshots taken from your
instances, allows you to easily set
metadata (including access policies) on
your snapshots in order to match the
parent instances.

27 RDS Event Monitoring is an essential part of Incident Notification
Notifications maintaining the availability, reliability
and performance of your AWS RDS
resources. Enabling RDS event
notifications will keep you up-to-date
on everything that's going on within
your Amazon RDS environment.
28 RDS Free Storage Detecting RDS database instances that Performance Improvement
Space run low on disk space is crucial when
these instances are used in production
by latency sensitive applications as this
can help you take immediate actions
and expand the storage space in order
to maintain an optimal response time.
29 RDS Multi-AZ When Multi-AZ is enabled, AWS Availability
automatically provisions and maintains a
synchronous database standby replica
on a dedicated hardware in a different
datacenter (known as Availability
Zone). AWS RDS will automatically
switch from the primary cluster to the
available standby replica in the event
of a failure such as an Availability Zone
outage, an internal hardware or
network outage, a software failure or
in case of planned interruptions such
as software patching or changing the
RDS cluster type.
30 RDS Reserved DB By checking your RDS Reserved Cost Reduction
Instance Recent Instances on a regular basis you can
Purchases detect and cancel any unwanted
purchases placed within your AWS
account and avoid unexpected charges
on your AWS monthly bill.

31 Security Groups Events Amazon RDS event subscriptions for Incident Notification
Subscriptions database security groups are designed
to provide incident notification of
events that may affect the security,
availability and reliability of the RDS
instances associated with these
security groups.
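Several of the checks listed above (RDS default port, the example 'awsuser' master username, public accessibility) reduce to simple predicates over instance attributes. A sketch, assuming records shaped like boto3's `describe_db_instances` output; the field names follow that API, while the helper and finding strings are illustrative:

```python
# Default listener ports for common RDS engines (per AWS documentation).
DEFAULT_PORTS = {"mysql": 3306, "mariadb": 3306, "postgres": 5432,
                 "oracle-ee": 1521, "sqlserver-se": 1433}

def audit_instance(db):
    """Return a list of finding strings for one DB instance record."""
    findings = []
    engine = db.get("Engine", "")
    port = db.get("Endpoint", {}).get("Port")
    if DEFAULT_PORTS.get(engine) == port:
        findings.append("running on engine default port")
    if db.get("MasterUsername") == "awsuser":
        findings.append("uses the well-known 'awsuser' master username")
    if db.get("PubliclyAccessible"):
        findings.append("publicly accessible")
    return findings

# Example record, shaped like one entry of describe_db_instances():
sample = {"Engine": "mysql", "Endpoint": {"Port": 3306},
          "MasterUsername": "awsuser", "PubliclyAccessible": False}
print(audit_instance(sample))
```

Iterating such a function over every instance in every region produces the same findings as the console walkthroughs, in one pass.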
Risk Level Impact Navigation Path/ Location
The check addresses a security 1) Login to the AWS Management Console.
concern related to
misconfigurations or 2) Navigate to RDS dashboard at
inefficiencies. https://console.aws.amazon.com/rds/.

3) In the left navigation panel, under RDS Dashboard,
click Snapshots.
4) Select Manual Snapshots from the Filter dropdown
menu to display only manual database snapshots.

5) Select the snapshot that you want to examine.

6) Click Snapshot Actions button from the dashboard
top menu and select Share Snapshot option.

7) On the Manage Snapshot Permissions page, check
the DB Snapshot Visibility setting. If the setting value is
set to Public, the selected Amazon RDS database
snapshot is publicly accessible, therefore all AWS
accounts and users have access to the data available
on the snapshot.

High

8) Repeat steps no. 5 – 7 to verify the access
permissions and visibility for other RDS snapshots
available in the current region.
available in the current region.

9) Change the AWS region from the navigation bar and
repeat the audit process for the other regions.
Attacker can perform password 1) Sign in to AWS Management Console.
guess attack to get access to RDS
instances. Further, attacker can 2) Navigate to RDS dashboard at
read sensitive information stored https://console.aws.amazon.com/rds/.
in the relational database and
leak it on the Internet. 3) In the left navigation panel, under Amazon RDS, click
Instances.
4) Choose the RDS instance that you want to examine
and click on the resource name (link) available in the
DB instance column.

5) Within the Details panel section, in the Configurations
category, check the IAM DB Authentication Enabled
configuration attribute value. If the attribute value is
set to No, the IAM Database Authentication feature is
not enabled for the selected Amazon RDS database
instance.

High

6) Repeat steps no. 4 and 5 to verify the IAM Database
Authentication feature status for other AWS RDS
instances created in the selected region.

7) Change the AWS region from the navigation bar and
repeat the process for other regions.

The check addresses a security 1) Login to the AWS Management Console.
concern related to
misconfigurations or 2) Navigate to RDS dashboard at
inefficiencies. https://console.aws.amazon.com/rds/.

3) In the navigation panel, under RDS Dashboard, click


Instances.

4) Select the RDS instance that you want to examine.

5) Click Instance Actions button from the dashboard


top menu and select See Details.
6) Under Availability and Durability section, search for
the Automated Backups status:
If the current status is set to Disabled, the RDS service
High will not perform point-in-time snapshots for the
selected instance.
7) Repeat steps no. 4 – 6 for each RDS instance
provisioned in the current region. Change the AWS
region from the navigation bar to repeat the process
for other regions.
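In the API, "automated backups disabled" corresponds to a backup retention period of zero. A small sketch over a DescribeDBInstances-shaped list (sample identifiers are invented):

```python
def backups_disabled(db_instances):
    """Flag instances whose automated backups are off: a
    BackupRetentionPeriod of 0 means RDS takes no point-in-time
    snapshots for the instance."""
    return [db["DBInstanceIdentifier"] for db in db_instances
            if db.get("BackupRetentionPeriod", 0) == 0]

fleet = [
    {"DBInstanceIdentifier": "orders-db", "BackupRetentionPeriod": 0},
    {"DBInstanceIdentifier": "users-db", "BackupRetentionPeriod": 7},
]
print(backups_disabled(fleet))  # ['orders-db']
```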
Any authenticated user with malicious intent having access to an RDS instance will be able to view sensitive data stored in that instance in plain text. The user can even leak the information on the Internet.
1) Login to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the navigation panel, under RDS Dashboard, click Instances.
4) Select the RDS instance that you want to examine.
5) Click the Instance Actions button from the dashboard top menu and select See Details.
6) Under the Encryption Details section, check the Encryption Enabled status. If the current status is set to No, data-at-rest encryption is not enabled for the selected RDS database instance.
7) Repeat steps no. 4 – 6 for each RDS instance provisioned in the current region. Change the AWS region from the navigation bar to repeat the process for other regions.
High
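The encryption status also appears as the `StorageEncrypted` boolean in the DescribeDBInstances response, which makes a region-wide sweep straightforward. A minimal sketch with invented sample identifiers:

```python
def unencrypted_instances(db_instances):
    """Flag instances without data-at-rest encryption, using the
    StorageEncrypted boolean from a DescribeDBInstances-shaped list."""
    return [db["DBInstanceIdentifier"] for db in db_instances
            if not db.get("StorageEncrypted", False)]

fleet = [
    {"DBInstanceIdentifier": "billing-db", "StorageEncrypted": False},
    {"DBInstanceIdentifier": "audit-db", "StorageEncrypted": True},
]
print(unencrypted_instances(fleet))  # ['billing-db']
```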

The check addresses a security concern related to misconfigurations or inefficiencies.
1) Login to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the navigation panel, under RDS Dashboard, click Instances.
4) Select the RDS instance that you want to examine.
5) Click the Instance Actions button from the dashboard top menu and select See Details.
6) On the Details tab, next to the Endpoint section, hover over the information icon (i) to display the Connection Information box. If the Publicly Accessible flag status is set to Yes and the security group associated with the instance allows access to everyone, i.e. 0.0.0.0/0, the selected RDS database instance is publicly accessible and prone to security risks.
7) Repeat steps no. 4 – 6 for each RDS instance provisioned in the current region. Change the AWS region from the navigation bar to repeat the process for other regions.
High
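The console's Publicly Accessible flag maps to the `PubliclyAccessible` boolean in the DescribeDBInstances response. A minimal sketch (sample identifiers are invented; a full audit would also inspect the attached security groups for 0.0.0.0/0 rules, as the step above notes):

```python
def public_instances(db_instances):
    """Flag instances whose PubliclyAccessible flag is set, i.e.
    instances that resolve to a public IP address."""
    return [db["DBInstanceIdentifier"] for db in db_instances
            if db.get("PubliclyAccessible", False)]

fleet = [
    {"DBInstanceIdentifier": "web-db", "PubliclyAccessible": True},
    {"DBInstanceIdentifier": "core-db", "PubliclyAccessible": False},
]
print(public_instances(fleet))  # ['web-db']
```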
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Login to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under RDS Dashboard, click Reserved Purchases.
4) Open the dashboard Show/Hide Columns dialog box by clicking the configuration icon from the right menu.
5) Inside the Show/Hide Columns dialog box, select the Remaining Days checkbox, then click Save to apply the changes.
6) Select the Reserved Instance (RI) that you want to examine and verify the value listed for the instance in the Remaining Days column. If the number of days displayed in this column is less than 7, the selected RDS RI is about to expire, therefore it must be renewed to keep it running at the current discounted hourly rate. To renew (repurchase) the instance, follow the steps outlined in the Remediation/Resolution section of the rule.
7) Repeat step no. 6 to determine the expiration date of other RDS Reserved Instances available in the current region.
8) Change the AWS region from the navigation bar and repeat the process for the other regions.
High
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Log in to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under the RDS Dashboard section, choose Instances.
4) Select the RDS database instance that you want to examine.
5) Click the Show Monitoring button from the dashboard top menu and select Show Multi-Graph View to expand the AWS CloudWatch monitoring panel.
6) On the monitoring panel displayed for the selected instance, perform the following actions:
Click on the CPU Utilization (Percent) usage graph thumbnail to open the RDS instance CPU usage details box. Inside the CPU Utilization (Percent) dialog box, set the following parameters:
From the Statistic dropdown list, select Average.
From the Time Range list, select Last 1 Week.
From the Period dropdown list, select 1 Hour.
Once the monitoring data is available, verify the instance CPU usage for the last 7 days. If the average usage (percent) has been less than 60%, the selected database instance qualifies as a candidate for an underused instance. Click X (close) to return to the RDS dashboard.
Click on the Read IOPS (Count/Second) usage graph thumbnail to open the database instance disk ReadIOPS usage details box. Inside the Read IOPS (Count/Second) dialog box, set the following parameters:
From the Statistic dropdown list, select Sum.
High


The check addresses a security concern related to misconfigurations or inefficiencies.
1) Login to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the navigation panel, under RDS Dashboard, click Security Groups. If there are no DB security groups available and the following message is displayed: "Your account does not support the EC2-Classic Platform in this region. DB Security Groups are only needed when the EC2-Classic Platform is supported.", your RDS database instances are not using DB security groups. Otherwise, continue with the next step.
4) Select the DB security group that you want to examine and click on the details button (magnifying glass icon).
5) Check the CIDR/IP value listed in the Details column for each authorized connection. If the security group contains any rules that have the CIDR/IP set to 0.0.0.0/0 and the Status set to authorized, the selected DB security group configuration is insecure and does not restrict access to the database instance(s).
6) Repeat steps no. 4 – 5 for each DB security group available in the current region. Change the AWS region from the navigation bar to repeat the process for other regions.
High
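The same rule can be evaluated over a DescribeDBSecurityGroups-shaped response, where each group carries an `IPRanges` list with `CIDRIP` and `Status` fields. A minimal sketch (group names are invented):

```python
def open_db_security_groups(db_security_groups):
    """Flag EC2-Classic DB security groups that authorize access
    from anywhere (CIDRIP 0.0.0.0/0 with Status 'authorized')."""
    flagged = []
    for group in db_security_groups:
        for ip_range in group.get("IPRanges", []):
            if (ip_range.get("CIDRIP") == "0.0.0.0/0"
                    and ip_range.get("Status") == "authorized"):
                flagged.append(group["DBSecurityGroupName"])
                break
    return flagged

groups = [
    {"DBSecurityGroupName": "default",
     "IPRanges": [{"CIDRIP": "0.0.0.0/0", "Status": "authorized"}]},
    {"DBSecurityGroupName": "internal",
     "IPRanges": [{"CIDRIP": "10.0.0.0/16", "Status": "authorized"}]},
]
print(open_db_security_groups(groups))  # ['default']
```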
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Login to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under RDS Dashboard, click Reserved Purchases.
4) Choose the active RDS Reserved Instance (RI) that you want to examine.
5) Click the Show or Hide Item Details button to expand the details panel and copy the DB Instance Class attribute value (i.e. the instance type used for the reservation).
6) Within the same AWS region, in the navigation panel, under RDS Dashboard, click Instances.
7) On the RDS dashboard, click inside the search box located under the dashboard top menu, paste the RDS instance class/type value copied at step no. 5 and press Enter. This filtering method will help you determine whether any RDS database instances that match the selected RI criteria are available in the current AWS region. If the search does not return any database instances that match the reservation class/type parameter, the selected Reserved Instance does not have a corresponding instance running within the current region, therefore the purchased RDS RI is not being used.
8) If you are using Consolidated Billing and the current AWS account is a member of an AWS Organization, access the RDS Instances page on each linked account, under the same region, and repeat step no. 7 to check for any corresponding RDS database instances.
9) Repeat steps no. 4 – 8 for other RDS Reserved Instances (RIs) available in the current region.
High
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Login to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under RDS Dashboard, click Clusters.
4) Choose the AWS Aurora cluster that you want to examine and click on its Show or Hide Item Details button.
5) Within the DB Cluster Members section, perform the following actions:
a. Click on the writer database instance name to access its configuration page. Under the Security and Network section, check the Publicly Accessible attribute value to determine whether the writer instance is publicly accessible. If the attribute value is Yes, the selected database instance is publicly accessible; if the value is No, the instance is not publicly accessible.
b. Click on the reader database instance name to access its configuration page. In the Security and Network section, check the Publicly Accessible attribute value to determine whether the reader instance is publicly accessible (i.e. attribute value set to Yes) or not (i.e. value set to No).
c. If the database instances verified at steps a. and b. have different values for the Publicly Accessible attribute, the instances within the selected Amazon Aurora database cluster do not have the same accessibility, therefore in case of failover, when the healthy instance is promoted as primary, connectivity to the cluster will be lost.
6) Repeat steps no. 4 – 5 to verify the instance accessibility settings for other AWS Aurora database clusters available in the current region.
7) Change the AWS region from the navigation bar and repeat the process for other regions.
Medium

The check addresses a security concern related to misconfigurations or inefficiencies.
1) Login to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the navigation panel, under RDS Dashboard, click Instances.
4) Select the RDS instance that you want to examine.
5) Click the Instance Actions button from the dashboard top menu and select See Details.
6) Under the Instance and IOPS section, check the Instance Class value. If the selected database is using an instance class from a previous generation, like the ones listed in the Audit section table, we highly recommend an upgrade (see Remediation/Resolution section).
7) Repeat steps no. 4 – 6 for each RDS instance provisioned in the current region. Change the AWS region from the navigation bar to repeat the process for other regions.
Medium

The check addresses a security concern related to misconfigurations or inefficiencies.
1) Sign in to AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under Amazon RDS, click Instances.
4) Choose the RDS database instance that you want to examine and click on the resource name (link) available in the DB instance column.
5) Within the Details panel, in the Configurations category, check the Deletion protection configuration attribute value. If the attribute value is set to Disabled, the Deletion Protection safety feature is not enabled for the selected AWS RDS database instance.
6) Repeat steps no. 4 and 5 to verify the Deletion Protection feature status for other AWS RDS instances provisioned in the current region.
7) Change the AWS region from the navigation bar and repeat the process for other regions.
Medium
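Deletion protection is exposed as the `DeletionProtection` boolean in the DescribeDBInstances response, so the check reduces to a simple filter. A minimal sketch with invented identifiers:

```python
def deletion_protection_disabled(db_instances):
    """Flag instances that can be deleted without the extra safety
    step, using the DeletionProtection boolean from a
    DescribeDBInstances-shaped list."""
    return [db["DBInstanceIdentifier"] for db in db_instances
            if not db.get("DeletionProtection", False)]

fleet = [
    {"DBInstanceIdentifier": "temp-db", "DeletionProtection": False},
    {"DBInstanceIdentifier": "prod-db", "DeletionProtection": True},
]
print(deletion_protection_disabled(fleet))  # ['temp-db']
```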
When a database instance runs on its default port, an attacker does not need to scan the entire port range and generate traffic. Avoiding a port-range scan helps the attacker stay in stealth mode, and the organisation's security team will not be notified of any incoming attack.
1) Login to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under RDS Dashboard, click Instances.
4) Select the RDS instance that you want to examine.
5) Click the Instance Actions button from the dashboard top menu and select See Details.
6) On the Details tab, in the Security and Network section, check the Port number. If the current number is the default port number for the database engine used (verify the section table), the selected RDS instance is not using a non-default port for incoming connections and is therefore vulnerable to brute-force and dictionary attacks. To change your RDS database endpoint port, follow the steps outlined in the Remediation/Resolution section.
7) Repeat steps no. 4 – 6 to verify the database port for other RDS database instances provisioned in the current region.
8) Change the AWS region from the navigation bar and repeat the process for other regions.
Medium
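The port comparison can be expressed as a lookup against the default ports named above. A minimal sketch; the dictionary keys follow the `Engine` values that DescribeDBInstances returns, and the list of engines here is illustrative, not exhaustive:

```python
# Well-known default ports from the check (trimmed for brevity).
DEFAULT_PORTS = {
    "mysql": 3306, "mariadb": 3306, "aurora-mysql": 3306,
    "postgres": 5432, "sqlserver-se": 1433, "oracle-ee": 1521,
}

def uses_default_port(db):
    """True when the instance endpoint listens on its engine's
    well-known default port."""
    engine = db.get("Engine", "")
    port = db.get("Endpoint", {}).get("Port")
    return DEFAULT_PORTS.get(engine) == port

print(uses_default_port({"Engine": "postgres", "Endpoint": {"Port": 5432}}))  # True
print(uses_default_port({"Engine": "postgres", "Endpoint": {"Port": 5433}}))  # False
```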
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Sign in to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under the RDS Dashboard section, choose Instances.
4) Select the All instances option from the Filter dropdown list to return the list of all RDS instances, including Read Replicas, provisioned within the selected region.
5) Check the class (type) value for each RDS database instance available in the current AWS region, listed in the Class column. If the value (i.e. instance type) listed in the Class column is not the same for all provisioned RDS resources, the RDS database instances available in the current region were not launched using the desired type, therefore you must take action and create an AWS support case to limit RDS instance provisioning to the desired/required instance type (see Remediation/Resolution section).
6) Change the AWS region from the navigation bar and repeat steps no. 4 and 5 for all other regions.
Medium
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Login to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under RDS Dashboard, click Instances.
4) Select the RDS instance that you want to examine.
5) Click the Instance Actions button from the dashboard top menu and select See Details.
6) On the Details tab, in the Instance and IOPS section, check the Storage Type property value. If the current value is set to Provisioned IOPS (SSD), the selected RDS instance is not using the most cost-effective storage type available. To convert a PIOPS-based database instance to a General Purpose one, follow the steps outlined in the Remediation/Resolution section of the AWS rule.
7) Repeat steps no. 4 – 6 to verify the storage type of other RDS database instances provisioned in the current region.
8) Change the AWS region from the navigation bar and repeat the process for the other regions.
Medium
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Sign in to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under the RDS Dashboard section, choose Instances.
4) Select the All instances option from the Filter dropdown list to return the list of all RDS instances, including Read Replicas, provisioned within the selected region.
5) Check the total number of RDS instances available in the current AWS region, listed in the top-right section of the dashboard.
6) Change the AWS region from the navigation bar and repeat steps no. 4 and 5 for all other regions. If the total number of available RDS database instances provisioned in your AWS account is greater than 10, the recommended threshold has been exceeded, therefore you must take action and raise an AWS support case to limit the number of instances based on your requirements (see Remediation/Resolution section).
Medium
The default master username has certain privileges by default for each database engine. An attacker can perform a password guessing or brute-force attack to get access to the account. After getting access, the attacker can perform any malicious activity that the account has rights to.
1) Login to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under RDS Dashboard, click Instances.
4) Select the RDS instance that you want to examine.
5) Click the Instance Actions button from the dashboard top menu and select See Details.
6) On the Details tab, in the Configuration Details section, check the Username attribute value. If the current value is set to "awsuser", the selected RDS instance is not using a unique master username for its database. To change the database master username, follow the steps outlined in the Remediation/Resolution section of the AWS rule.
7) Repeat steps no. 4 – 6 to verify the master username for other RDS database instances provisioned in the current region.
8) Change the AWS region from the navigation bar and repeat the process for other regions.
Medium
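The username check is a comparison against the `MasterUsername` field of the DescribeDBInstances response. A minimal sketch; the report's check looks for "awsuser" specifically, and the extra names in the default tuple are illustrative additions, not part of the original rule:

```python
def default_master_usernames(db_instances, suspicious=("awsuser", "admin", "root")):
    """Flag instances whose MasterUsername matches a commonly used
    default name (case-insensitive comparison)."""
    return [db["DBInstanceIdentifier"] for db in db_instances
            if db.get("MasterUsername", "").lower() in suspicious]

fleet = [
    {"DBInstanceIdentifier": "legacy-db", "MasterUsername": "awsuser"},
    {"DBInstanceIdentifier": "new-db", "MasterUsername": "ops_dba_7"},
]
print(default_master_usernames(fleet))  # ['legacy-db']
```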
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Login to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under RDS Dashboard, click Reserved Purchases.
4) Open the dashboard Show/Hide Columns dialog box by clicking the configuration icon from the right menu.
5) Inside the Show/Hide Columns dialog box, select the Remaining Days checkbox, then click Save to apply the changes.
6) Select the Reserved Instance (RI) that you want to examine and verify the value listed for the instance in the Remaining Days column. If the number of days displayed in this column is less than 30, the selected RDS RI is about to expire, therefore it must be renewed to keep it running at the current discounted hourly rate. To renew (repurchase) the instance, follow the steps outlined in the Remediation/Resolution section of the rule.
7) Repeat step no. 6 to determine the expiration date of other RDS Reserved Instances available in the current region.
8) Change the AWS region from the navigation bar and repeat the process for the other regions.
Medium
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Login to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the navigation panel, under RDS Dashboard, click Instances.
4) Select the RDS instance that you want to examine.
5) Click the Instance Actions button from the dashboard top menu and select See Details.
6) Under the Availability and Durability section, check the Automated Backups status. If the backup retention period currently set is less than 7 (seven) days, the RDS instance backup configuration does not comply with the recommended regulations.
7) Repeat steps no. 4 – 6 for each RDS instance provisioned in the current region. Change the AWS region from the navigation bar to repeat the process for other regions.
Medium
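This is the threshold variant of the earlier automated-backups check: rather than testing for zero, it tests `BackupRetentionPeriod` against a seven-day minimum. A minimal sketch with invented identifiers:

```python
def short_backup_retention(db_instances, minimum_days=7):
    """Flag instances whose backup retention period (in days) is
    below the recommended minimum of seven."""
    return [db["DBInstanceIdentifier"] for db in db_instances
            if db.get("BackupRetentionPeriod", 0) < minimum_days]

fleet = [
    {"DBInstanceIdentifier": "cache-db", "BackupRetentionPeriod": 1},
    {"DBInstanceIdentifier": "ledger-db", "BackupRetentionPeriod": 14},
]
print(short_backup_retention(fleet))  # ['cache-db']
```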
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Sign in to AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under Amazon RDS, click Clusters.
4) Select the RDS database cluster that you want to examine and click on the resource name (link) available within the DB cluster identifier column. The selected cluster must have the database engine, available in the Engine column, set to Aurora MySQL.
5) In the Details panel, within the Backtrack category, check the Backtrack window configuration attribute value. If the attribute value is set to Disabled, the Backtrack feature is not enabled for the selected Amazon Aurora MySQL database cluster.
6) Repeat steps no. 4 and 5 to determine whether other Aurora clusters provisioned in the current region have backtracking enabled.
7) Change the AWS region from the navigation bar and repeat the entire process for other regions.
Low
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Sign in to AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under Amazon RDS, click Instances.
4) Select the RDS database instance that you want to examine. The selected instance must have the database engine, available in the Engine column, set to MySQL, Aurora or MariaDB.
5) Click the Instance Actions button from the dashboard top menu and select Modify.
6) Within the Log exports configuration panel, check the log type checkboxes (i.e. Audit log, Error log, General log, Slow query log). If none of these checkboxes are selected, the Log Exports feature is not enabled for the selected RDS database instance, as Amazon RDS does not publish the instance's general, slow query, audit and error logs to AWS CloudWatch Logs.
7) Repeat steps no. 4 – 6 to verify the Log Exports feature status for other AWS RDS instances provisioned in the current region.
8) Change the AWS region from the navigation bar and repeat the process for other regions.
Low
The check addresses a security concern related to misconfigurations or inefficiencies.
Using AWS CLI
Low
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Sign in to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under the RDS Dashboard section, choose Instances.
4) Select the RDS instance that you want to examine.
5) Click the Show Monitoring button from the dashboard top menu and select Show Multi-Graph View to expand the AWS CloudWatch monitoring panel.
6) On the monitoring panel displayed for the selected instance, perform the following actions:
Click on the DB Connections (Count) usage graph thumbnail to open the database connections usage details box. Inside the DB Connections (Count) dialog box, set the following parameters:
From the Statistic dropdown list, select Average.
From the Time Range list, select Last 1 Week.
From the Period dropdown list, select 1 Hour.
Once the monitoring data is loaded, verify the number of database connections for the last 7 days. If the average usage (count) has been less than 1, the selected RDS instance qualifies as a candidate for an idle instance. Click the x (close) icon to return to the dashboard.
Click on the Read Operations (Count/Second) usage graph thumbnail to open the instance disk ReadIOPS usage details box. Inside the Read Operations (Count/Second) dialog box, set the following parameters:
From the Statistic dropdown list, select Sum.
From the Time Range list, select Last 1 Week.
Low
When the RDS instance performance graph drops and there is latency or unwanted results from database queries, the Performance Insights feature is a great tool to investigate and solve the problem.
1) Sign in to AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under Amazon RDS, click Instances.
4) Select the RDS database instance that you want to examine and click on the resource name (link) available in the DB instance column. The selected instance must have the database engine, available in the Engine column, set to MySQL, Aurora MySQL or PostgreSQL.
5) Within the Details panel, in the Performance Insights category, check the Performance Insights enabled configuration attribute value. If the attribute value is set to No, the Performance Insights feature is not enabled for the selected Amazon RDS database instance.
6) Repeat steps no. 4 and 5 to determine the Performance Insights feature status for other AWS RDS instances created in the selected region.
7) Change the AWS region from the navigation bar and repeat the process for other regions.
Low

The check addresses a security concern related to misconfigurations or inefficiencies.
1) Sign in to AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under Amazon RDS, click Event subscriptions.
4) In the Event subscriptions list, search for any RDS event notification subscriptions with the Source type set to Instances. If there are no such subscriptions listed, there are no RDS event subscriptions created for instance-level events in the selected AWS region.
5) Change the AWS region from the navigation bar and repeat the audit process for other regions.
Low
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Sign in to AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under Amazon RDS, click Instances.
4) Select the RDS database instance that you want to examine.
5) Click the Instance Actions button from the dashboard top menu and select See Details.
6) On the Details panel, within the Maintenance and Backups section, check the value set for the Copy tags to snapshots attribute. If the attribute value is set to No, the feature with the same name is not currently enabled for the selected Amazon RDS database instance.
7) Repeat steps no. 4 – 6 to verify the Copy Tags to Snapshots feature status for other database instances provisioned in the current region.
8) Change the AWS region from the navigation bar and repeat the process for other regions.
Low

The check addresses a security concern related to misconfigurations or inefficiencies.
1) Sign in to AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under RDS Dashboard, click Event Subscriptions.
4) Check for any subscriptions currently available on the RDS Event Subscriptions page. If there are no event subscriptions listed on this page, and instead a "No event subscriptions found." message is displayed, event notifications are not enabled for the Amazon RDS resources provisioned in the current region.
5) Change the AWS region from the navigation bar and repeat the audit process for other regions.
Low
Low disk space will affect the performance of the RDS instance through crashes and slowdowns.
1) Log in to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under the RDS Dashboard section, choose Instances.
4) Select the RDS database instance that you want to examine.
5) Click the Instance Actions button from the dashboard top menu and select See Details.
6) On the Details tab, in the Instance and IOPS section, check the Storage attribute value to get the amount of storage allocated for the selected database instance, in gigabytes.
7) Now go back to the RDS dashboard and select again the database instance that you want to examine.
8) Click the Show Monitoring button from the dashboard top menu and select Show Multi-Graph View to expand the AWS CloudWatch monitoring panel.
9) On the monitoring panel displayed for the selected instance, click on the Free Storage Space usage graph thumbnail to open the RDS instance free storage space details box. Inside the Free Storage Space (MB) dialog box, set the following parameters:
From the Statistic dropdown list, select Maximum.
From the Time Range list, select Last 24 Hours.
From the Period dropdown list, select 1 Hour.
Once the monitoring data is loaded, verify the free storage space currently available (megabytes) for the selected database instance.
Low
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Login to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the navigation panel, under RDS Dashboard, click Clusters.
4) Select the RDS cluster that you want to examine.
5) Click the Cluster Actions button from the dashboard top menu and select See Details.
6) Under the Availability and Durability section, check the Multi AZ status. If the current status is set to No, the feature is not enabled, which means that the selected RDS cluster is not deployed in multiple Availability Zones.
7) Repeat steps no. 4 – 6 for each RDS cluster provisioned in the current region. Change the AWS region from the navigation bar to repeat the process for other regions.
Low
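Programmatically, the console's Multi AZ status corresponds to the `MultiAZ` boolean in the DescribeDBClusters response. A minimal sketch with invented cluster identifiers:

```python
def single_az_clusters(db_clusters):
    """Flag clusters not deployed across multiple Availability Zones,
    using the MultiAZ boolean from a DescribeDBClusters-shaped list."""
    return [c["DBClusterIdentifier"] for c in db_clusters
            if not c.get("MultiAZ", False)]

clusters = [
    {"DBClusterIdentifier": "aurora-prod", "MultiAZ": True},
    {"DBClusterIdentifier": "aurora-dev", "MultiAZ": False},
]
print(single_az_clusters(clusters))  # ['aurora-dev']
```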
The check addresses a security concern related to misconfigurations or inefficiencies.
1) Sign in to the AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under RDS Dashboard, click Reserved Purchases.
4) Choose the active RDS Reserved Instance that you want to examine.
5) Click the Show or Hide Item Details button available for the selected RI to expand the reservation details panel and check the Start Date attribute value (e.g. March 16, 2017 at 9:50:14 PM UTC+2). If the Start Date value shows an AWS RDS RI purchase request placed in the last 7 days and you are unaware of this purchase, verify your AWS CloudTrail logs or contact Amazon Web Services using the Support Center console to solve the unwanted RI purchase issue (see Remediation/Resolution section for more details).
6) Repeat steps no. 4 and 5 to check the RI purchase request Start Date for other RDS Reserved Instances available within the selected region.
7) Change the AWS region from the navigation bar and repeat the audit process for other regions.
Low

Any malicious activity will not be notified to the security team.
1) Sign in to AWS Management Console.
2) Navigate to RDS dashboard at https://console.aws.amazon.com/rds/.
3) In the left navigation panel, under Amazon RDS, click Event subscriptions.
4) In the Event subscriptions list, search for any RDS event notification subscriptions with the Source type configuration attribute set to Security groups. If there are no subscriptions with the Source type set to Security groups listed on the page, there are no RDS event subscriptions created for database security groups within the selected AWS region.
5) Change the AWS region from the navigation bar and repeat the audit process for other regions.
Low
Recommendation Patch Priority
Ensure that your AWS Relational Database Service (RDS)
database snapshots are not publicly accessible (i.e. shared
with all AWS accounts and users) in order to avoid
exposing your private data.

Quick Win
Ensure IAM Database Authentication feature is enabled in
order to use AWS Identity and Access Management (IAM)
service to manage database access to your Amazon RDS
MySQL and PostgreSQL instances
To enable IAM authentication for existing DB instances:

1. Open the Amazon RDS console at


https://console.aws.amazon.com/rds/
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to modify.
4. Choose Modify.
5. In the Database options section, for IAM DB
authentication choose Enable IAM DB authentication, and
then choose Continue.
6. To apply the changes immediately, choose Apply
immediately.
7. Choose Modify DB instance . Quick Win

Ensure that your RDS database instances have automated


backups enabled for point-in-time recovery.

Quick Win
Ensure that your RDS database instances are encrypted to
fulfil compliance requirements for data-at-rest encryption.

Short Term

Check for any public facing RDS database instances provisioned in your AWS account and restrict unauthorized access in order to minimise security risks.

Quick Win
Ensure that your AWS RDS Reserved Instances (RIs) are
renewed before expiration in order to get the appropriate
discount (based on the commitment term) on the hourly
charge for these instances.

Short Term
Identify any Amazon RDS database instances that appear
to be underutilized and downsize (resize) them to help
lower the cost of your monthly AWS bill.

Quick Win
Ensure that your AWS RDS DB security groups do not allow
access from 0.0.0.0/0 (i.e. anywhere, every machine that
has the ability to establish a connection) in order to
reduce the risk of unauthorized access.

Quick Win
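The 0.0.0.0/0 check above can be sketched as a short filter over a security group's ingress CIDR ranges. This is an assumption about how one might script the audit, not any AWS API; the sample CIDR lists are hypothetical.

```python
import ipaddress

def open_to_world(cidr: str) -> bool:
    """True when an ingress CIDR range covers every IPv4 address (0.0.0.0/0)."""
    return ipaddress.ip_network(cidr) == ipaddress.ip_network("0.0.0.0/0")

def flag_open_rules(ingress_cidrs):
    """Return the CIDR ranges that allow connections from anywhere."""
    return [c for c in ingress_cidrs if open_to_world(c)]
```

Any range returned by `flag_open_rules` should be replaced with the specific application-tier CIDR that actually needs database access.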
Ensure that all your AWS RDS Reserved Instances (RI) have corresponding database instances running within the same account or within any AWS accounts that are members of an AWS Organization.

Short Term
Ensure that all the database instances within your Amazon
Aurora clusters have the same accessibility (either public
or private) in order to follow AWS best practices.

Short Term
Ensure that all RDS databases instances provisioned within
your AWS account are using the latest generation of
instance classes in order to get the best performance with
lower costs.

Short Term

Ensure that your Amazon Relational Database Service (RDS) instances have Deletion Protection feature enabled in order to protect them from being accidentally deleted.

Long term
Ensure that your Amazon RDS databases instances are not
using their default endpoint ports (i.e. MySQL/Aurora port
3306, SQL Server port 1433, PostgreSQL port 5432, etc) in
order to promote port obfuscation as an additional layer
of defence against non-targeted attacks.

Following are the steps to change default ports:


1. Sign in to the AWS Management Console and open the
Amazon RDS console at
https://console.aws.amazon.com/rds/
2. Choose Databases.
3. Choose the DB instance that you want to modify, and
choose Modify.
4. In the Database Port box, replace the database default
port number with your custom port number.
5. At the bottom of the page select Apply Immediately
checkbox to apply the endpoint port number change
immediately.
Short Term
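The default-port comparison described above can be sketched as a small helper. The engine identifiers mirror common RDS engine names; the mapping and the helper itself are illustrative assumptions for scripting the audit, not part of any AWS API.

```python
# Default endpoint ports per RDS engine, as listed in the recommendation above.
DEFAULT_PORTS = {
    "mysql": 3306,
    "aurora-mysql": 3306,
    "mariadb": 3306,
    "postgres": 5432,
    "aurora-postgresql": 5432,
    "sqlserver-se": 1433,
}

def uses_default_port(engine: str, port: int) -> bool:
    """Return True when the instance still listens on its engine's default port."""
    return DEFAULT_PORTS.get(engine) == port
```

Instances flagged by this check are candidates for the custom-port change walked through in the console steps above.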
Determine if the RDS database instances provisioned in
your AWS account (including Read Replicas for Multi-AZ
deployments) have the desired instance types established
by your organization based on the database workload
deployed.

Short Term
Ensure that your RDS instances are using General Purpose
SSDs instead of Provisioned IOPS SSDs for cost-effective
storage that fits a broad range of database workloads.

Long term
Ensure that the number of RDS database instances
provisioned in your AWS account has not reached the limit
quota established by your organization for the RDS
workload deployed.

Short Term
Ensure that your Amazon RDS production databases are
not using any generic or easy to guess names as master
username, regardless of the RDS database engine type
used, instead a unique alphanumeric string must be
defined as the login ID for the master user.

Note: Once master username is set, it cannot be changed.

Long term
Ensure that your AWS RDS Reserved Instances (RIs) are
renewed before expiration in order to get the appropriate
discount (based on the commitment term) on the hourly
charge for these instances.

Long term
Ensure that your RDS database instances have set a
minimum backup retention period in order to achieve the
compliance requirements.

Short Term
Ensure that Backtrack feature is enabled for your Amazon
Aurora with MySQL compatibility database clusters in
order to backtrack your clusters to a specific time, without
using backups

Short Term
Ensure that your Amazon RDS database instances have
Log Exports feature enabled in order to publish database
log events directly to AWS CloudWatch Logs

Short Term
Ensure that your Amazon Aurora Serverless database
clusters (MySQL-compatible edition) have Log Exports
feature enabled in order to publish general logs, slow
query logs, audit logs and error logs directly to AWS
CloudWatch.

Short Term
Identify any Amazon RDS database instances that appear
to be idle and delete them to help lower the cost of your
monthly AWS bill.

Long term
Ensure that your AWS RDS MySQL and PostgreSQL database instances have Performance Insights feature enabled in order to allow you to obtain a better overview of your databases' performance as well as to help you identify potential performance issues.

Following are the steps to enable performance insights for existing DB:

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/
2. Choose Databases.
3. Choose the DB instance that you want to modify, and
choose Modify.
4. In the Performance Insights section, choose Enable
Performance Insights.

You have the following options when you choose Enable Performance Insights:
Short Term
a. Retention – The amount of time to retain
Performance Insights data. Choose either 7 days (the
default) or 2 years.
b. Master key – Specify your AWS Key Management
Service (AWS KMS) key. Performance Insights encrypts all
potentially sensitive data using your AWS KMS key. Data is
encrypted in flight and at rest. For more information, see
Encrypting Amazon RDS Resources.

5. Choose Continue.
6. For Scheduling of Modifications, choose one of the
following:

Apply during the next scheduled maintenance window – Wait to apply the Performance Insights modification until the next maintenance window.

Apply immediately – Apply the Performance Insights modification as soon as possible.
Ensure that Amazon RDS event notification subscriptions
are enabled for database instance level events.

Long term
Ensure that your Amazon Relational Database (RDS)
instances make use of Copy Tags to Snapshots feature in
order to allow tags set on your database instances to be
automatically copied to any automated or manual RDS
snapshots that are created from these instances.

Short Term

Ensure that your AWS RDS resources have event notifications enabled in order to be notified when an event occurs for a given database instance, database snapshot, database security group or database parameter group.

Long term
Identify any Amazon RDS database instances that appear
to run low on disk space and scale them up to alleviate
any problems triggered by insufficient disk space and
improve their I/O performance

Short Term
Ensure that your RDS clusters are using Multi-AZ
deployment configurations for high availability and
automatic failover support fully managed by AWS.

Short Term
Ensure that all Amazon RDS Reserved Instance (RI)
purchases are reviewed every 7 days in order to confirm
that no unwanted reservation purchase has been placed
recently.

Long term

Ensure that Amazon RDS event notification subscriptions are enabled for database security groups events.

Short Term
Patch Status Remark

Already Compliant
Not Compliant

Already Compliant
Already Compliant

Already Compliant
N/A
Already Compliant
Already Compliant
Already Compliant
Not Compliant
Already Compliant

Already Compliant
Already Compliant
Already Compliant
N/A
Already Compliant
Already Compliant
N/A
Already Compliant
N/A
Already Compliant
Already Compliant
Already Compliant
Already Compliant

N/A
Already Compliant

Already Compliant
Already Compliant
Not Compliant
N/A

Already Compliant
HIGH 11
MEDIUM 4
LOW 4
INFO 4
S3

Sr. No. Check Name Description


1 AWS S3 Bucket Authenticated 'FULL_CONTROL' Access
Ensure that your AWS S3 buckets are not granting FULL_CONTROL access to authenticated users (i.e. signed AWS accounts or AWS IAM users) in order to prevent unauthorized access. An S3 bucket that allows full control access to authenticated users will give any AWS account or IAM user the ability to LIST (READ) objects, UPLOAD/DELETE (WRITE) objects, VIEW (READ_ACP) object permissions and EDIT (WRITE_ACP) permissions for the objects within the bucket.

2 AWS S3 Bucket Authenticated 'READ' Access
Ensure that your AWS S3 buckets content cannot be listed by AWS authenticated accounts or IAM users in order to protect your S3 data against unauthorized access. An S3 bucket that allows READ (LIST) access to authenticated users will provide AWS accounts or IAM users the ability to list the objects within the bucket and use the information acquired to find objects with misconfigured ACL permissions and exploit them.

3 AWS S3 Bucket Authenticated 'READ_ACP' Access
Ensure that your S3 buckets content permissions cannot be viewed by AWS authenticated accounts or IAM users in order to protect against unauthorized access. An S3 bucket that grants READ_ACP (VIEW PERMISSIONS) access to AWS signed users can allow them to examine your S3 Access Control Lists (ACLs) configuration details and find permission vulnerabilities.

4 AWS S3 Bucket Authenticated 'WRITE' Access
Ensure that your AWS S3 buckets cannot be accessed for WRITE actions by AWS authenticated accounts or IAM users in order to protect your S3 data from unauthorized access. An S3 bucket that allows WRITE (UPLOAD/DELETE) access to any AWS authenticated users can provide them the capability to add, delete and replace objects within the bucket without restrictions.

5 AWS S3 Bucket Authenticated 'WRITE_ACP' Access
Ensure that your AWS S3 buckets do not allow authenticated AWS accounts or IAM users to modify access control permissions, to protect your S3 data from unauthorized access. An S3 bucket that allows WRITE_ACP access to AWS authenticated users can give these users the capability to edit permissions and gain full access to the resource. Allowing this type of access is dangerous and can lead to data loss or unexpectedly high S3 charges on your AWS bill as a result of economic denial-of-service attacks.

6 Enable S3 Bucket Default Encryption
Ensure that default encryption is enabled at the bucket level to automatically encrypt all objects when stored in Amazon S3. The S3 objects are encrypted during the upload process using Server-Side Encryption with either AWS S3-managed keys (SSE-S3) or AWS KMS-managed keys (SSE-KMS).

7 S3 Bucket Public Access Via Policy
Granting public access to your S3 buckets via bucket policies can allow malicious users to view, get, upload, modify and delete S3 objects, actions that can lead to data loss and unexpected charges on your AWS bill.

8 AWS S3 Bucket Public 'READ' Access
Ensure that your AWS S3 buckets content cannot be publicly listed in order to protect against unauthorized access. An S3 bucket that grants READ (LIST) access to everyone can allow anonymous users to list the objects within the bucket. Malicious users can exploit the information acquired through the listing process to find objects with misconfigured ACL permissions and access these compromised objects.

9 AWS S3 Bucket Public 'READ_ACP' Access
Ensure that your S3 buckets content permissions details cannot be viewed by anonymous users in order to protect against unauthorized access. An S3 bucket that grants READ_ACP (VIEW PERMISSIONS) access to everyone can allow unauthorized users to look up the objects' ACL (Access Control List) permissions.

10 AWS S3 Bucket Public 'WRITE' Access
Ensure that your AWS S3 buckets cannot be publicly accessed for WRITE actions in order to protect your S3 data from unauthorized users. An S3 bucket that allows WRITE (UPLOAD/DELETE) access to everyone (i.e. anonymous users) can provide attackers the capability to add, delete and replace objects within the bucket, which can lead to S3 data loss or unintended charges on your AWS bill.

11 AWS S3 Bucket Public 'WRITE_ACP' Access
Ensure that your AWS S3 buckets do not allow anonymous users to modify their access control permissions, to protect your S3 data from unauthorized access. An S3 bucket that allows public WRITE_ACP (EDIT PERMISSIONS) access can give any malicious user on the Internet the capability to READ and WRITE ACL permissions, overly permissive actions that can lead to data loss or economic denial-of-service attacks (i.e. uploading a large number of files to drive up the costs of the S3 service within your AWS account).

12 Secure Transport
When S3 buckets are not configured to strictly require SSL connections, the communication between the clients (users, applications) and these buckets is vulnerable to eavesdropping and man-in-the-middle (MITM) attacks.

13 Enable Access Logging for AWS S3 Buckets
Ensure that AWS S3 Server Access Logging feature is enabled in order to record access requests useful for security audits. By default, server access logging is not enabled for S3 buckets.

14 Enable S3 Block Public Access for S3 Buckets
Ensure that Amazon S3 Block Public Access feature is enabled at your S3 buckets level to restrict public access to all objects available within these buckets, including those that you upload in the future. In order to enable Amazon S3 Block Public Access for your S3 buckets, you must turn on the following settings:

1. Block new public ACLs and uploading public objects (BlockPublicAcls) – this setting disallows the use of new public bucket or object Access Control Lists (ACLs) and it is usually used to ensure that future PUT requests that include them will fail. Enable this setting to protect against future attempts to use ACLs to make S3 buckets or objects publicly available.

2. Remove public access granted through public ACLs (IgnorePublicAcls) – this setting instructs the S3 service to stop evaluating any public ACL when authorizing a request, ensuring that no bucket or object can be made public by using Access Control Lists (ACLs). This option overrides any current or future public access settings for current and future objects in the configured S3 bucket.

3. Block new public bucket policies (BlockPublicPolicy) – this option disallows the use of new public bucket policies. This setting ensures that an S3 bucket's policies cannot be updated to grant public access.

4. Block public and cross-account access to buckets that have public policies (RestrictPublicBuckets) – once this option is enabled, access to those S3 buckets is restricted to the bucket owner and to AWS services.
15 Server Side Encryption
When dealing with sensitive data that is crucial to your business, it is highly recommended to implement encryption in order to protect it from attackers or unauthorized personnel. Using S3 Server-Side Encryption (SSE) will enable Amazon to encrypt your data at the object level as it writes it to disks and decrypt it transparently for you when you access it.

16 Limit S3 Bucket Access by IP Address
Allowing untrustworthy access to your AWS S3 buckets can lead to unauthorized actions such as viewing, uploading, modifying or deleting S3 objects. To prevent S3 data exposure, data loss, unexpected charges on your AWS bill, or if you just want a central place to manage your buckets' access using policies, you need to ensure that your S3 buckets are accessible only to a short list of whitelisted IPs.
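One common way to implement the whitelist described above is a bucket policy that denies any request whose source IP falls outside the allowed ranges, using the NotIpAddress/aws:SourceIp condition that AWS documents for this purpose. The bucket name and CIDR below are placeholders; this builder is a sketch, not a drop-in policy.

```python
def ip_allowlist_policy(bucket: str, allowed_cidrs: list) -> dict:
    """Build a deny-unless-whitelisted S3 bucket policy (placeholders only)."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideAllowlist",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            # Both the bucket itself and the objects inside it
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"NotIpAddress": {"aws:SourceIp": allowed_cidrs}},
        }],
    }
```

Test such a policy against a non-production bucket first: an overly narrow allow-list can lock out your own administrators.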

17 Enable MFA Delete for AWS S3 Buckets
Using MFA-protected S3 buckets will enable an extra layer of protection to ensure that the S3 objects (files) cannot be accidentally or intentionally deleted by the AWS users that have access to the buckets.

18 Enable Versioning for AWS S3 Buckets
Using versioning-enabled S3 buckets will allow you to preserve, retrieve, and restore every version of an S3 object. S3 versioning can be used for data protection and retention scenarios such as recovering objects that have been accidentally/intentionally deleted or overwritten by AWS users or applications and archiving previous versions of objects to AWS Glacier for long-term low-cost storage.
19 DNS Compliant S3 Bucket Names
Ensure that your AWS S3 buckets are using DNS-compliant bucket names in order to adhere to AWS best practices, to benefit from new S3 features such as S3 Transfer Acceleration and from operational improvements, and to receive support for virtual-host style access to buckets. In this rule, a DNS-compliant name is an S3 bucket name that doesn't contain periods (i.e. '.'). The following examples are invalid S3 bucket names: '.myS3bucket', 'myS3bucket.' and 'my..S3bucket'. To enable AWS S3 Transfer Acceleration on a bucket or use a virtual hosted–style bucket with SSL, the bucket name must conform to DNS naming requirements and must not contain periods.
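The naming rule above can be captured in a small validator. The no-periods rule is the one this check defines; the length and character constraints are additional assumptions drawn from general S3 bucket naming requirements.

```python
import re

# A name passes when it contains no periods and matches the basic S3
# constraints: 3-63 characters, lowercase letters, digits and hyphens,
# beginning and ending with a letter or digit.
_NAME_RE = re.compile(r"^[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$")

def is_dns_compliant(name: str) -> bool:
    return 3 <= len(name) <= 63 and "." not in name and bool(_NAME_RE.match(name))
```

The invalid examples from the check text all fail this validator, since each contains a period.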

20 Enable AWS S3 Object Lock
Used in combination with versioning, which protects objects from being overwritten, AWS S3 Object Lock enables you to store your S3 objects in an immutable form, providing an additional layer of protection against object changes and deletion. S3 Object Lock feature can also help you meet regulatory requirements within your organization when it comes to data protection.

21 S3 Buckets Encrypted with Customer-Provided CMKs
Ensure that your AWS S3 buckets are configured to use Server-Side Encryption with Customer-Provided Keys (SSE-C) instead of S3-Managed Keys (SSE-S3) in order to obtain fine-grained control over the Amazon S3 data-at-rest encryption and decryption process. Once the server-side encryption is configured to use customer-provided keys by default, Amazon S3 will automatically encrypt any new objects with the specified KMS CMK.

22 Enable S3 Bucket Lifecycle Configuration
Ensure that your AWS S3 buckets utilize lifecycle configurations to manage S3 objects during their lifetime. An S3 lifecycle configuration is a set of one or more rules, where each rule defines an action (transition or expiration action) for Amazon S3 to apply to a group of objects.

23 Enable AWS S3 Transfer Acceleration
Ensure that your S3 buckets are using the Transfer Acceleration feature to increase the speed (up to 500%) of data transfers in and out of Amazon S3 using the AWS edge network. S3 Transfer Acceleration enables fast, easy and secure transfers of files over long distances between your S3 bucket and your client(s) by taking advantage of AWS CloudFront's globally distributed edge locations. Once Transfer Acceleration is enabled, as soon as your S3 objects reach an edge network location, the data is routed to Amazon S3 over an optimized network path.
Category Risk Level Impact
User access control Exposure of sensitive production
data can lead to unauthorized
access or data leaks, violating
compliance requirements.

High
User access control Exposure of sensitive production
data can lead to unauthorized
access or data leaks, violating
compliance requirements.

High

User access control Exposure of sensitive production


data can lead to unauthorized
access or data leaks, violating
compliance requirements.

High
User access control Exposure of sensitive production
data can lead to unauthorized
access or data leaks, violating
compliance requirements.

High

User access control Exposure of sensitive production


data can lead to unauthorized
access or data leaks, violating
compliance requirements.

High
Data protection Attacker or any authenticated user with
malicious intent can read through the
files if the data is not encrypted at rest

High

User access control Exposure of sensitive production


data can lead to unauthorized
access or data leaks, violating
compliance requirements.

High
User access control Exposure of sensitive production
data can lead to unauthorized
access or data leaks, violating
compliance requirements.

High

User access control Exposure of sensitive production


data can lead to unauthorized
access or data leaks, violating
compliance requirements.

High
User access control Exposure of sensitive production
data can lead to unauthorized
access or data leaks, violating
compliance requirements.

High

User access control Exposure of sensitive production


data can lead to unauthorized
access or data leaks, violating
compliance requirements.

High
Data protection Without HTTPS (TLS), a network-based
attacker can eavesdrop on network
traffic or manipulate it using an attack
such as man-in-the-middle.

High

Logging and tracing Exposure of sensitive production


data can lead to unauthorized
access or data leaks, violating
compliance requirements.

Medium
User access control Exposure of sensitive production
data can lead to unauthorized
access or data leaks, violating
compliance requirements.

Medium
Data protection Any authenticated user with malicious
intent can read the data and leak
sensitive data on the Internet

Medium
Secure Network access Exposure of sensitive production
data can lead to unauthorized
access or data leaks, violating
compliance requirements.

Medium

Availability Accidental deletion of S3 bucket can


lead to data loss.

Low

Availability With versioning enabled, S3 object of


particular version can be retrieved if
deleted accidentally.

Low
Auditing Exposure of sensitive production
data can lead to unauthorized
access or data leaks, violating
compliance requirements.

Low

Availability Object Lock prevents a user from accidentally deleting any object in the S3 bucket, so any sensitive data in the object will not be lost.

Low
Data protection Exposure of sensitive production
data can lead to unauthorized
access or data leaks, violating
compliance requirements.

Info

Resource Management Exposure of sensitive production


data can lead to unauthorized
access or data leaks, violating
compliance requirements.

Info
Performance Improvement Exposure of sensitive production
data can lead to unauthorized
access or data leaks, violating
compliance requirements.

Info
Navigation Path/ Location
1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.

3) Select the S3 bucket that you want to examine and click the
Properties tab from the S3 dashboard top right menu

4) In the Properties panel, click the Permissions tab and check the
Access Control List (ACL) for any grantee labelled "Any
Authenticated AWS User". A grantee can be an AWS account or an
S3 predefined group. The grantee called "Any Authenticated AWS
User" is the predefined group that allows any AWS authenticated
user to access the S3 resource. If the bucket ACL configuration has the "Any Authenticated AWS User" predefined group with all the permissions enabled, the selected S3 bucket is fully accessible to other AWS accounts and IAM users and is rendered as insecure.

5) Repeat steps no. 3 and 4 for each S3 bucket that you want to
examine, available in your AWS account.
1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/ .
3) Select the S3 bucket that you want to examine and click the
Properties tab from the S3 dashboard top right menu
4) In the Properties panel, click the Permissions tab and check the
Access Control List (ACL) for any grantee named "Any
Authenticated AWS User". A grantee can be an AWS account or an
AWS S3 predefined group. The grantee called "Any Authenticated
AWS User" is an AWS predefined group that allows any
authenticated AWS user (root account or IAM user) to access the
S3 bucket. If the bucket ACL configuration does specify the "Any Authenticated AWS User" predefined group with the List (READ) permissions enabled, the selected S3 bucket is accessible to other AWS accounts and IAM users for content listing and is rendered as insecure.

5) Repeat steps no. 3 and 4 for each S3 bucket that you want to
examine, available in your AWS account.

1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.

3) Select the S3 bucket that you want to examine and click the
Properties tab from the S3 dashboard top right menu

4) In the Properties panel, click the Permissions tab and check the
Access Control List (ACL) for any grantee named "Any
Authenticated AWS User". A grantee can be an AWS account or an
AWS S3 predefined group. The grantee called "Any Authenticated
AWS User" is an AWS predefined group that allows any
authenticated AWS account or IAM user to access the S3 bucket. If the bucket ACL configuration displays the "Any Authenticated AWS User" predefined group with the View Permissions (READ_ACP) permissions enabled, the selected S3 bucket permissions are exposed to other AWS authenticated users and the bucket is rendered as insecure.

5) Repeat steps no. 3 and 4 for each bucket that you want to
examine, available in your AWS account.
1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.
3) Select the S3 bucket that you want to examine and click the
Properties tab from the S3 dashboard top right menu
4) In the Properties panel, click the Permissions tab and check the
Access Control List (ACL) for any grantee named "Any
Authenticated AWS User". A grantee can be an AWS account or an
AWS S3 predefined group. The grantee called "Any Authenticated
AWS User" is an AWS predefined group that allows any
authenticated AWS user to access the S3 bucket. If the bucket ACL configuration does specify the "Any Authenticated AWS User" predefined group with the Upload/Delete (WRITE) permissions enabled, the selected S3 bucket is accessible to other AWS accounts and IAM users for content updates (add/delete/replace objects) and is rendered as insecure.

5) Repeat steps no. 3 and 4 for each AWS S3 bucket that you want
to examine.

1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.

3) Select the S3 bucket that you want to examine and click the
Properties tab from the S3 dashboard top right menu

4) In the Properties panel, click the Permissions tab and check the
Access Control List (ACL) for any grantee named "Any
Authenticated AWS User". A grantee can be an AWS account or an
AWS S3 predefined group. The grantee called "Any Authenticated
AWS User" is an AWS predefined group that allows any
authenticated AWS user to access the S3 bucket. If the bucket ACL configuration does specify the "Any Authenticated AWS User" predefined group with the WRITE_ACP (EDIT PERMISSIONS) permissions enabled, the selected S3 bucket is accessible to other AWS accounts and IAM users for ACL permission updates and is rendered as insecure.

5) Repeat steps no. 3 and 4 for each AWS S3 bucket that you want
to examine.
01) Sign in to the AWS Management Console.

02) Navigate to S3 dashboard at


https://console.aws.amazon.com/s3/.

03) Click on the name (link) of the S3 bucket that you want to
examine to access the bucket configuration.

04) Select the Properties tab from the S3 dashboard top menu and
check the Default encryption feature status. If the feature status is
set to Disabled, the default encryption is not currently enabled,
therefore the selected AWS S3 bucket does not encrypt
automatically all objects at upload.

05) Repeat step no. 3 and 4 to check Default Encryption feature


status for other S3 buckets available in your AWS account.

1) Sign in to the AWS Management Console.


2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.

3) Select the S3 bucket that you want to examine and click the
Properties tab from the S3 dashboard top right menu

4) Inside the Properties tab, click Permissions to expand the bucket


permissions configuration panel.

5) Now click Edit bucket policy to access the bucket policy currently
used.

6) In the Bucket Policy Editor dialog box, verify the Effect and
Principal policy elements. Effect describes the permission effect
that will be used when the user requests the action(s) defined in
the policy - the element value can be either Allow or Deny. The
Principal is the account or the user that has access to the actions
and resources declared in the policy statement.
If the Effect element value is set to Allow and the Principal element
value is set to "*" (i.e. everyone) or {"AWS": "*"}, the selected S3
bucket is publicly accessible unless there is a Condition element,
and can be marked as insecure. Note that both elements value
must match in order to declare the bucket publicly accessible.

7) Repeat steps no. 3 - 6 to verify the access policies used by other


S3 buckets available in your AWS account.
1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.
3) Select the S3 bucket that you want to examine and click the
Properties tab from the S3 dashboard top right menu
4) In the Properties panel, click the Permissions tab and check the
Access Control List (ACL) for any grantee named "Everyone". A
grantee can be an AWS account or an AWS S3 predefined group.
The grantee called "Everyone" is an AWS predefined group that
allows access to everyone (i.e. anonymous users). If the bucket ACL configuration does specify the "Everyone" predefined group with the List (READ) permission enabled, the selected S3 bucket is publicly accessible for content listing and is rendered as insecure.

5) Repeat steps no. 3 and 4 for each S3 bucket that you want to
examine, available in your AWS account.

1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.

3) Select the S3 bucket that you want to examine and click the
Properties tab from the S3 dashboard top right menu
4) In the Properties panel, click the Permissions tab and check the
Access Control List (ACL) for any grantee named "Everyone". A
grantee can be an AWS account or an AWS S3 predefined group.
The grantee called "Everyone" is an AWS predefined group that
allows access to everyone (i.e. anonymous users). If the bucket ACL configuration displays the "Everyone" predefined group with the View Permissions (READ_ACP) permission enabled, the selected S3 bucket ACL information is publicly accessible and the bucket is rendered vulnerable from the security standpoint.

5) Repeat steps no. 3 and 4 for each S3 bucket that you want to examine, available in your AWS account.
1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.
3) Select the S3 bucket that you want to examine and click the
Properties tab from the S3 dashboard top right menu
4) In the Properties panel, click the Permissions tab and check the
Access Control List (ACL) configuration for any grantee named
"Everyone". A grantee can be an AWS account or an AWS S3
predefined group. The grantee called "Everyone" is an AWS
predefined group that allows access to everyone on the Internet. If the bucket ACL configuration lists the "Everyone" predefined group with the Upload/Delete (WRITE) permissions enabled, the selected S3 bucket is publicly accessible for unrestricted content updates (add/delete/replace objects) and is rendered as insecure.

5) Repeat steps no. 3 and 4 for each AWS S3 bucket that you want
to examine.

1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.

3) Select the S3 bucket that you want to examine and click the
Properties tab from the S3 dashboard top right menu

4) In the Properties panel, click the Permissions tab and check the
Access Control List (ACL) configuration for any grantee named
"Everyone". A grantee can be an AWS account or an AWS S3
predefined group. The grantee called "Everyone" is an AWS
predefined group that allows access to everyone on the Internet. If the bucket ACL configuration lists the "Everyone" predefined group with the Edit Permissions (WRITE_ACP) permissions enabled, the selected S3 bucket is publicly accessible for unrestricted ACL permission updates and is rendered as insecure.

5) Repeat steps no. 3 and 4 for each AWS S3 bucket that you want
to examine.
1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.
3) Select the S3 bucket that you want to examine and click the
Properties tab from the S3 dashboard top right menu
4) Inside the Properties tab, click Permissions to expand the bucket
permissions configuration panel.

5) Now click Edit bucket policy to access the bucket policy currently
in use. If the selected S3 bucket does not have an access policy
defined yet, skip the next step and mark the Audit process as
complete.

6) Inside the Bucket Policy Editor dialog box, verify the policy
document for the following elements: "Condition": { "Bool":
{ "aws:SecureTransport": "true" } }, when the Effect element value
is set to "Allow" or "Condition": { "Bool": { "aws:SecureTransport":
"false" } } when the Effect value is "Deny". This S3 policy condition
will allow only SSL (encrypted) access to the objects stored on the
selected bucket. If this condition is not defined within your existing
bucket policy, the selected S3 bucket does not protect its data
while in transit (i.e. as it travels to and from Amazon S3).

7) Repeat steps no. 3 - 6 to verify the access policy for other S3
buckets created within your AWS account.
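The aws:SecureTransport check in step 6 can be automated against a policy document. A minimal sketch, assuming the policy JSON has already been retrieved (the policy below is hypothetical):

```python
import json

def enforces_ssl(policy_doc):
    """True if some statement denies access when aws:SecureTransport
    is false, or allows access only when it is true."""
    for stmt in policy_doc.get("Statement", []):
        cond = stmt.get("Condition", {}).get("Bool", {})
        transport = str(cond.get("aws:SecureTransport", "")).lower()
        if stmt.get("Effect") == "Deny" and transport == "false":
            return True
        if stmt.get("Effect") == "Allow" and transport == "true":
            return True
    return False

# Hypothetical bucket policy with the Deny/false variant of the condition.
policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": ["arn:aws:s3:::example-bucket", "arn:aws:s3:::example-bucket/*"],
    "Condition": {"Bool": {"aws:SecureTransport": "false"}}
  }]
}""")
print(enforces_ssl(policy))  # True
```

A bucket with no policy, or a policy lacking this condition, returns False and does not protect its data in transit.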

1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.
3) Select the S3 bucket that you want to examine and click the
Properties tab from the dashboard top right menu

4) In the Properties panel, click the Logging tab and check the
feature configuration status. If the Enabled checkbox is not
selected, the Server Access Logging feature is not currently enabled
for the selected S3 bucket.

5) Repeat steps no. 3 and 4 for each S3 bucket that you want to
examine, available in your AWS account.
1) Sign in to AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.
3) Click on the name of the S3 bucket that you want to examine to
access the bucket configuration settings.
4) Select the Permissions tab from the S3 dashboard top menu to
view bucket permissions.

5) On the Permissions panel, under Public access settings for this
bucket, check the configuration status for all the settings available
under Manage public access control lists (ACLs) and Manage public
bucket policies. If the configuration status for all the settings, i.e.
Block new public ACLs and uploading public objects, Remove public
access granted through public ACLs, Block new public bucket
policies and Block public and cross-account access to buckets that
have public policies, is set to False, the Amazon S3 Block Public
Access feature is not enabled for the selected S3 bucket, therefore
public access is not currently restricted at the bucket level for the
specified S3 bucket.

6) Repeat steps no. 3 – 5 to determine the Amazon S3 Public Access
Block feature configuration for other S3 buckets available within
your AWS account.
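The four settings from step 5 can be evaluated together. A hedged sketch: the field names below mirror the S3 GetPublicAccessBlock API response, and the feature only fully restricts public access when all four are true.

```python
def block_public_access_enabled(cfg):
    """All four Block Public Access settings must be True for the
    feature to fully restrict public access at the bucket level."""
    keys = ("BlockPublicAcls", "IgnorePublicAcls",
            "BlockPublicPolicy", "RestrictPublicBuckets")
    return all(cfg.get(k, False) for k in keys)

print(block_public_access_enabled({
    "BlockPublicAcls": True, "IgnorePublicAcls": True,
    "BlockPublicPolicy": True, "RestrictPublicBuckets": True,
}))  # True -> compliant
print(block_public_access_enabled({"BlockPublicAcls": True}))  # False
```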
1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.
3) Select the S3 bucket that you want to examine and click the
Properties tab from the S3 dashboard top right menu
4) Inside the Properties tab, click Permissions to expand the bucket
permissions configuration panel.

5) Now click Edit bucket policy to access the bucket policy currently
in use. If the selected bucket does not have an access policy
defined yet, skip the next step and declare the Audit process
completed.

6) Inside the Bucket Policy Editor dialog box, verify the policy
document for the following element: "Condition": { "Null": { "s3:x-
amz-server-side-encryption": "true" } }. When this condition is
added to the bucket access policy, Amazon will encrypt your data
by adding the x-amz-server-side-encryption header to the upload
request. If this condition is not defined within your bucket policy,
the selected S3 bucket does not have Server-Side Encryption
enabled, therefore your S3 data is not encrypted at rest.
7) Repeat steps no. 3 - 6 to verify the access policy for other S3
buckets provisioned within your AWS account.
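The Null-condition check in step 6 can likewise be expressed in code. This is an illustrative sketch; the policy document shown is hypothetical:

```python
def denies_unencrypted_uploads(policy_doc):
    """True if a Deny statement uses the Null condition on
    s3:x-amz-server-side-encryption, i.e. rejects uploads that do
    not carry the server-side-encryption header."""
    for stmt in policy_doc.get("Statement", []):
        null_cond = stmt.get("Condition", {}).get("Null", {})
        flag = str(null_cond.get("s3:x-amz-server-side-encryption", "")).lower()
        if stmt.get("Effect") == "Deny" and flag == "true":
            return True
    return False

# Hypothetical policy statement matching the condition described above.
policy = {
    "Statement": [{
        "Effect": "Deny",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }]
}
print(denies_unencrypted_uploads(policy))  # True
```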
1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.
3) Select the S3 bucket that you want to examine and click on the
bucket name (link) to access its configuration page.
4) On the bucket configuration page, click Permissions to access the
permissions panel.

5) Now click Bucket policy to access the bucket access policy
currently in use.
6) Inside the Bucket Policy Editor box, search for the Condition
policy element. The Condition element lets you specify conditions
for when a bucket policy is in effect. Within Condition block you
build expressions in which you use operators to match the
condition in the policy against the values in the request. The
Condition values can include the IP address of the requester, date,
time, the ARN of the request source, the user name, the user ID or
the user agent of the requester.
7) Repeat steps no. 3 - 6 to verify the access policies defined for
other S3 buckets available in your AWS account.
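A Condition element restricting access by requester IP, as described in step 6, can be generated as follows. This is a hypothetical example policy; the bucket name and CIDR range are placeholders:

```python
import json

def ip_restricted_policy(bucket, cidr):
    """Build a bucket policy that denies all S3 actions from any
    requester outside the given source IP range."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideApprovedRange",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"NotIpAddress": {"aws:SourceIp": cidr}},
        }],
    }

policy = ip_restricted_policy("example-bucket", "203.0.113.0/24")
print(json.dumps(policy, indent=2))
```

Other supported condition keys (date, time, source ARN, user agent) follow the same Condition-block structure.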


1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.

3) Select the S3 bucket that you want to examine and click the
Properties tab from the dashboard top right menu

4) Click to expand the Versioning tab from the Properties panel and
check the feature status. If the following message is displayed:
“Versioning is currently not enabled on this bucket.”, S3 object
versioning is not currently enabled for the selected bucket.

5) Repeat steps no. 3 and 4 for each S3 bucket that you want to
examine, available in your AWS account.
1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.
3) Choose the S3 bucket that you want to examine and check its
name, available in the Bucket name column. If the bucket name
contains periods ("."), the selected S3 bucket name does not
comply with the existing DNS naming conventions.

4) Repeat step no. 3 to check other S3 buckets, available in your
AWS account, for non-DNS compliant bucket names.
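The DNS-compliance check from step 3 reduces to a simple name test. A sketch, assuming the rule of interest is the one stated above (no periods, lowercase letters, digits and hyphens):

```python
import re

# Lowercase letters, digits and hyphens; must start and end with a
# letter or digit. Periods break virtual-hosted-style HTTPS access.
DNS_NAME = re.compile(r"^[a-z0-9]([a-z0-9-]{1,61})[a-z0-9]$")

def is_dns_compliant(name):
    return "." not in name and bool(DNS_NAME.match(name))

print(is_dns_compliant("my-app-logs"))  # True
print(is_dns_compliant("my.app.logs"))  # False (contains periods)
```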

1) Sign in to AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.

3) Click on the name of the S3 bucket that you want to examine to
access the bucket configuration settings.
4) Select the Properties tab from the S3 dashboard top menu to
view bucket properties.

5) In the Advanced settings section, check the Object Lock feature
status. If the configuration status is set to Disabled, Object Lock is
not enabled for the selected Amazon S3 bucket.

6) Repeat steps no. 3 – 5 to verify Object Lock feature status for
other S3 buckets available in your AWS account.
1) Sign in to AWS Management Console.
2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.
3) Click on the name of the S3 bucket that you want to examine to
access the bucket configuration settings.
4) Select the Properties tab from the S3 dashboard top menu to
view bucket properties.
5) Click on the Default encryption box to access the default
encryption settings and determine Server-Side Encryption (SSE)
configuration available for the selected bucket:
a) If the None option is currently selected, Server-Side Encryption
(SSE) is not enabled by default for the selected Amazon S3 bucket.
b) If the AES-256 option is selected, the S3 bucket is configured to
use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3),
therefore the SSE configuration for the selected S3 bucket is not
compliant.
c) If AWS-KMS is selected, but the name of the KMS CMK used is
aws/s3 (i.e. the default key generated and managed by the Amazon
S3 service), the Server-Side Encryption (SSE) configuration for the
selected S3 bucket is not compliant.
d) If the AWS-KMS option is selected, check the ARN available in
the AWS-KMS dropdown list against the customer-provided AWS
KMS CMK. If it is the default KMS key, the SSE configuration for the
selected Amazon S3 bucket is not compliant.
6) Repeat steps no. 3 – 5 to determine the encryption status and
configuration for other S3 buckets available in your AWS account
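The decision table in step 5 can be captured as a classifier. A hedged sketch: the field names mirror an S3 GetBucketEncryption rule, and the key ARN below is a placeholder, not a real key:

```python
def sse_compliant(rule, approved_kms_arns):
    """Classify a bucket's default SSE settings against a policy that
    requires a customer-managed KMS CMK."""
    algo = rule.get("SSEAlgorithm")   # None, "AES256" or "aws:kms"
    key = rule.get("KMSMasterKeyID")
    if algo != "aws:kms":
        return False                  # cases (a) and (b): no SSE, or SSE-S3
    if not key or key.endswith("alias/aws/s3"):
        return False                  # case (c): AWS-managed default key
    return key in approved_kms_arns   # case (d): must be the customer CMK

approved = {"arn:aws:kms:ap-south-1:111122223333:key/EXAMPLE-KEY-ID"}
print(sse_compliant({"SSEAlgorithm": "AES256"}, approved))  # False
print(sse_compliant(
    {"SSEAlgorithm": "aws:kms",
     "KMSMasterKeyID": "arn:aws:kms:ap-south-1:111122223333:key/EXAMPLE-KEY-ID"},
    approved))  # True
```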

1) Sign in to the AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.
3) Click on the name (link) of the S3 bucket that you want to
examine to access the bucket configuration.

4) Select the Management tab from the S3 dashboard top menu,
select Lifecycle and search for existing lifecycle configuration rules.
If there are no rules defined on the Lifecycle page and a Get
started panel is displayed instead, the lifecycle configuration for
the selected Amazon S3 bucket is not enabled.

5) Repeat step no. 3 and 4 to check lifecycle configuration for other
S3 buckets available in your AWS account.
1) Sign in to AWS Management Console.

2) Navigate to S3 dashboard at
https://console.aws.amazon.com/s3/.
3) Click on the name of the S3 bucket that you want to examine to
access the bucket configuration.
4) Select the Properties tab from the S3 dashboard top menu and
check the Transfer Acceleration status. If the feature status is set to
Suspended, the S3 Transfer Acceleration is not enabled for the
selected Amazon S3 bucket.

5) Repeat step no. 3 and 4 to verify Transfer Acceleration status for
other S3 buckets available in your AWS account.
Recommendation Patch Priority
Ensure that your AWS S3 buckets are not granting
FULL_CONTROL access to authenticated users (i.e. signed
AWS accounts or AWS IAM users) in order to prevent
unauthorized access.

Quick Win
Ensure that your AWS S3 bucket content cannot be
listed by AWS authenticated accounts or IAM users in
order to protect your S3 data against unauthorized
access.

Quick Win

Ensure that your S3 bucket content permissions cannot
be viewed by AWS authenticated accounts or IAM users
in order to protect against unauthorized access.

Quick Win
Ensure that your AWS S3 buckets cannot be accessed for
WRITE actions by AWS authenticated accounts or IAM
users in order to protect your S3 data from unauthorized
access.

Quick Win

Ensure that your AWS S3 buckets do not allow
authenticated AWS accounts or IAM users to modify
access control permissions to protect your S3 data from
unauthorized access.

Quick Win
Ensure that default encryption is enabled at the bucket
level to automatically encrypt all objects when stored in
Amazon S3.

Quick Win

It is strongly recommended to use bucket policies that
limit access to a particular AWS account (friendly
account) instead of providing public access to everyone
on the Internet.

Quick Win
Ensure that your AWS S3 bucket content cannot be
publicly listed in order to protect against unauthorized
access.

Quick Win

Ensure that your S3 bucket content permission details
cannot be viewed by anonymous users in order to
protect against unauthorized access.

Quick Win
Ensure that your AWS S3 buckets cannot be publicly
accessed for WRITE actions in order to protect your S3
data from unauthorized users.

Quick Win

Ensure that your AWS S3 buckets do not allow
anonymous users to modify their access control
permissions to protect your S3 data from unauthorized
access.

Quick Win
Ensure AWS S3 buckets enforce SSL to secure data in
transit

Quick Win

AWS S3 Server Access Logging feature should be
enabled.

Quick Win
Ensure that the Amazon S3 Block Public Access feature is enabled.

Short Term
Ensure AWS S3 buckets enforce Server-Side Encryption
(SSE)

Short Term
Ensure that Amazon S3 buckets access is limited only to
specific IP addresses.

Short Term

Multi-Factor Authentication (MFA) Delete feature should
be enabled on S3 buckets.

Short Term

Versioning flag should be Enabled.

To enable or disable versioning on an S3 bucket:

1. Sign in to the AWS Management Console and open the
Amazon S3 console at
https://console.aws.amazon.com/s3/
2. In the Bucket name list, choose the name of the
bucket that you want to enable versioning for.
3. Choose Properties
4. Choose Versioning
5. Choose Enable versioning and then choose Save.

Long Term
It is recommended that you use '-' instead of '.' in your
S3 bucket names to comply with DNS naming conventions.

Long Term

Object Lock feature should be Enabled

Note: You can only enable Object Lock for new buckets.
If you want to turn on Object Lock for an existing bucket,
contact AWS Support.

Long Term
Amazon S3 buckets should be encrypted with customer-
provided AWS KMS CMKs.

Long Term

Amazon S3 buckets should have lifecycle configuration
enabled for security and cost optimization purposes.

Long Term
Ensure that Amazon S3 buckets use Transfer
Acceleration feature.

To enable transfer acceleration for an S3 bucket:


1. Sign in to the AWS Management Console and open the
Amazon S3 console at
https://console.aws.amazon.com/s3/
2. In the Bucket name list, choose the name of the
bucket that you want to enable transfer acceleration for.
3. Choose Properties
4. Choose Transfer Acceleration
5. Choose Enabled, and then choose Save.

Long Term
Patch Status Remark

Already Compliant
Already Compliant

Already Compliant
Already Compliant

Already Compliant
Already Compliant

Not Compliant
Not Compliant

N/A
Not Compliant

N/A
Not Compliant

N/A
Not Compliant
Already Compliant
Not Compliant

Not Compliant

Not Compliant
Not Compliant

N/A
N/A

N/A
N/A
HIGH 1
MEDIUM 5
LOW 4
INFO 1
VPC

Sr. No. Check Name Description Category


1 AWS VPN Tunnel State Continuous monitoring for your Availability
VPN tunnels will help you take
immediate actions in the event of
a failure, in order to maximize
uptime and ensure network
traffic flow over your Amazon
VPN connections at all times.
2 AWS VPC Peering Having the VPC peering Secure Network access
Connections Route connection routing tables well
Tables Access configured to allow traffic only
between the desired resources
represents an effective way of
minimizing the impact of security
breaches as AWS resources
outside of these routes become
inaccessible to the peered VPC.
3 Unrestricted Network Regulating the subnets Secure Network access
ACL Inbound Traffic inbound/ingress traffic by
opening just the ports required
by your applications will add an
additional layer of security to
your VPC.

4 Unrestricted Network Controlling the outbound traffic Secure Network access
ACL Outbound Traffic of one or more subnets by
opening just the ports required
by your applications will add an
additional layer of security to
your VPC (a second layer of
defence after security groups).
5 VPC Endpoint Exposed When the Principal element value User access control
is set to "*" within the access
policy, the VPC endpoint allows
full access to any IAM user or
service within the VPC using
credentials from any AWS
accounts. Allowing access in this
manner is considered bad
practice and can lead to security
issues.
6 VPC Peering Having the VPC peering User access control
Connections To communication well configured
Accounts Outside AWS to allow traffic only between the
Organization member accounts of your AWS
Organization represents an
effective way of keeping the
organization resources private
and isolated, and meet regulatory
compliance.

7 Managed NAT Using the AWS VPC Managed Performance Improvement
Gateway In Use NAT Gateway service instead of
a NAT instance to forward traffic
for your instances available in a
private subnet has multiple
advantages. For example, the
Managed NAT Gateway provides
built-in redundancy for high
availability (using the multi-AZ
configuration) compared to the
NAT instance which use just a
script to manage failover,
Managed NAT Gateway provides
better bandwidth (traffic bursts
up to 10Gbps) than the NAT
instance which is limited to the
bandwidth allocated for the EC2
instance type used.
8 Unused VPC Internet For a better management of your Auditing
Gateways VPC resources, all unused
(detached) Internet Gateways
and Egress-Only Internet
Gateways should be removed
from your AWS VPC environment.

9 Unused Virtual Private As good practice, every unused Auditing
Gateways (detached) AWS Virtual Private
Gateway should be removed
from your account for a better
management of your AWS
resources.
10 VPC Flow Logs Enabled Enabling VPC Flow Logs will help Logging and tracing
you detect security and access
issues like overly permissive
security groups and network ACLs
and alert abnormal activities
triggered within your Virtual
Private Cloud network such as
rejected connection requests or
unusual levels of data transfer.

11 VPC Naming Naming (tagging) your AWS VPCs Auditing
Conventions consistently has several
advantages such as providing
additional information about the
VPC location and usage,
promoting consistency within the
selected AWS region,
quickly distinguishing similar
resource stacks from one
another, avoiding naming
collisions, improving clarity in
cases of potential ambiguity and
enhancing the aesthetic and
professional appearance.
Risk Level Impact
A Site-to-Site VPN connection is used to connect
your remote network to a VPC. If the only running
tunnel stops, there will be no backup tunnel to route
traffic through. This disrupts the organisation's
employees' activities.

High
The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
An attacker can perform malicious activities such
as Denial of Service (DoS) attacks or
Distributed Denial of Service (DDoS) attacks.

Medium

With all outgoing ports open, an attacker will be
able to create a reverse connection to their own
network through any port. This may lead to data
leakage and further, more targeted attacks.

Medium
The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium

With no managed NAT gateway in place, anyone
on the Internet can connect to the AWS service
with valid credentials, or an attacker can perform
a brute-force attack.

Low
The check addresses a security concern
related to misconfigurations or
inefficiencies.

Low

The check addresses a security concern
related to misconfigurations or
inefficiencies.

Low
The check addresses a security concern
related to misconfigurations or
inefficiencies.

Low

The check addresses a security concern
related to misconfigurations or
inefficiencies.

Info
Navigation Path/ Location Recommendation
01) Sign in to the AWS Management Console. Ensure that the state of your Amazon VPN
tunnels is UP (at least two) to ensure network
02) Navigate to AWS VPC dashboard at traffic flow over your Virtual Private Network.
https://console.aws.amazon.com/vpc/.

03) In the left navigation panel, under VPN Connections section,
choose VPN Connections.

04) Select the VPN connection that you want to examine.


05) Select Tunnel Details tab from the bottom panel and verify the
state of the VPN tunnels listed within the Status column:
(UP for online, DOWN for offline). If the current status is set to
DOWN, the VPN tunnels are offline, therefore there is no network
traffic over the selected AWS Virtual Private Network connection.

06) Repeat step no. 4 and 5 for each Amazon VPN connection
available within the current region.

07) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
01) Sign in to the AWS Management Console. Review the routing tables of your peered AWS
Virtual Private Networks (VPCs) to determine if
02) Navigate to AWS VPC dashboard at the existing peering connection configuration is
https://console.aws.amazon.com/vpc/. compliant with the desired routing policy.
03) In the left navigation panel, under Virtual Private Cloud section,
click Peering Connections.
04) Select the VPC peering connection that you want to examine.

05) Select Route Tables tab from the dashboard bottom panel to
access the route tables associated with the VPC peering
connection.
06) Choose the VPC route table that you want to examine then
click on its ID (link) to open its configuration page.

07) On the selected route table configuration page, select the
Routes tab from the dashboard bottom panel, then verify the value
available in the Destination column for the route whose Target is
set to the peering connection ID.
If the Destination value is set to the entire IPv4 CIDR block of the
peer VPC, e.g. 172.31.0.0/16 or to a specific range, e.g.
172.31.0.0/28, the selected VPC route table policy does not comply
with the desired routing policy.

08) Repeat step no. 6 and 7 to verify the routing policy for the
second route table associated with the peering connection. If the
existing route tables do not comply with the desired routing policy
(i.e. one that limits peering traffic to a specific instance such as
172.31.14.203/32), the routing configuration for the selected
Amazon VPC peering connection is overly-permissive and should
be reconfigured.

09) Repeat steps no. 4 - 8 to verify other VPC peering connections
provisioned in the current AWS region.

10) Change the AWS region from the navigation bar and repeat the
process for the other regions.
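The comparison in steps 7 and 8 — whole peer CIDR versus a single-instance route — can be sketched with the standard ipaddress module. This is an illustrative sketch of the policy described above, where only /32 (single-host) destinations are considered compliant:

```python
import ipaddress

def overly_permissive(destination):
    """True when a peering route's destination covers more than a
    single host, e.g. an entire peer VPC CIDR block."""
    net = ipaddress.ip_network(destination)
    return net.num_addresses > 1

print(overly_permissive("172.31.0.0/16"))     # True  -> entire peer VPC CIDR
print(overly_permissive("172.31.14.203/32"))  # False -> single instance
```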
01) Sign in to the AWS Management Console. Check your AWS Network Access Control Lists
(NACLs) for inbound rules that allow traffic from
02) Navigate to AWS VPC dashboard at all ports and limit access to the required ports or
https://console.aws.amazon.com/vpc/. port ranges only in order to implement the
principle of least privilege and reduce the
03) In the left navigation panel, under SECURITY section, choose possibility of unauthorized access at the subnet
Network ACLs. level.
04) Select the Network ACL that you want to examine.

05) Select the Inbound Rules tab from the dashboard bottom
panel.

06) Verify the value available in the Port Range column for any
inbound NACL rules defined. If one or more rules have the Port
Range attribute value set to ALL, the selected AWS Network ACL
allows inbound/ingress traffic from all ports, therefore the access
to the VPC subnets associated with your Network ACL is not
restricted.

07) Repeat steps no. 4 – 6 to verify other Amazon Network ACLs
available in the current region.

08) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
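The "Port Range set to ALL" test in step 6 (and its outbound counterpart below) can be sketched against NACL entries. A hedged sketch: the entry shape mirrors the EC2 DescribeNetworkAcls response, where protocol "-1" (all traffic) carries no PortRange; the sample entries are hypothetical.

```python
def unrestricted_rules(entries, egress=False):
    """Return rule numbers of allow rules that do not restrict ports,
    i.e. the rules shown as Port Range = ALL in the console."""
    flagged = []
    for e in entries:
        if e.get("Egress") != egress or e.get("RuleAction") != "allow":
            continue
        if e.get("Protocol") == "-1" or "PortRange" not in e:
            flagged.append(e["RuleNumber"])
    return flagged

entries = [
    {"RuleNumber": 100, "Egress": False, "RuleAction": "allow",
     "Protocol": "-1", "CidrBlock": "0.0.0.0/0"},          # ALL ports
    {"RuleNumber": 110, "Egress": False, "RuleAction": "allow",
     "Protocol": "6", "PortRange": {"From": 443, "To": 443},
     "CidrBlock": "0.0.0.0/0"},                            # HTTPS only
]
print(unrestricted_rules(entries))  # [100]
```

Passing egress=True applies the same test to outbound rules.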

01) Sign in to the AWS Management Console. Check your AWS Network Access Control Lists
(NACLs) for outbound rules that allow traffic
02) Navigate to AWS VPC dashboard at from all ports and limit access to the required
https://console.aws.amazon.com/vpc/. ports or port ranges only in order to implement
the principle of least privilege and reduce the
03) In the left navigation panel, under SECURITY section, choose possibility of unauthorized access at the subnet
Network ACLs. level.

04) Select the Network ACL that you want to examine.

05) Select the Outbound Rules tab from the dashboard bottom
panel.
06) Verify the value available in the Port Range column for any
outbound NACL rules defined. If one or more rules have the Port
Range attribute value set to ALL, the selected AWS Network ACL
allows outbound/egress traffic to all ports, therefore the access
to the Internet for any VPC subnets associated with your Network
ACL is not restricted.
07) Repeat steps no. 4 – 6 to verify other Amazon Network ACLs
available in the current region.
08) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
01) Sign in to the AWS Management Console. Identify any fully accessible VPC endpoints and
update their access policy in order to stop any
02) Navigate to AWS VPC dashboard at unsigned requests made to the supported
https://console.aws.amazon.com/vpc/. services and resources.
03) In the left navigation panel, under Virtual Private Cloud section,
click Endpoints.
04) Select the VPC endpoint that you want to examine.

05) Click the Actions dropdown button from the dashboard top
menu and select Edit Policy to check the endpoint policy.

06) In the Edit Policy dialog box, inside the Policy section, verify the
set of permissions (policy) defined for the selected VPC endpoint. If
the access policy is currently set to Full Access, the selected VPC
endpoint is exposed to everyone. Also, if the endpoint policy is set
to Custom but the Principal element does not specify a certain
AWS account or IAM user, e.g. "Principal": { "AWS": "*" }, and the
policy is not using any Condition clauses to filter the access, the
selected Amazon VPC endpoint is fully exposed.

07) Repeat steps no. 4 - 6 to determine if other VPC endpoints
created in the current region are fully accessible.
08) Change the AWS region from the navigation bar and repeat the
process for the other regions.
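The step-6 logic can be sketched as a policy scan. This is an illustrative sketch, not part of the report: an endpoint policy is treated as fully exposed when its Principal is "*" (or {"AWS": "*"}) and no Condition clause narrows access.

```python
def endpoint_fully_exposed(policy_doc):
    """True if an Allow statement uses a wildcard principal and has
    no Condition clause to filter access."""
    for stmt in policy_doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        wildcard = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*")
        if wildcard and not stmt.get("Condition"):
            return True
    return False

# The default Full Access endpoint policy.
full_access = {"Statement": [{"Effect": "Allow", "Principal": "*",
                              "Action": "*", "Resource": "*"}]}
print(endpoint_fully_exposed(full_access))  # True
```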
01) Sign in to your AWS Organization master account using the Ensure there are no VPC peering connections
account root credentials. established with AWS accounts outside your
AWS Organization in order to secure the peered
02) Navigate to AWS Organizations dashboard at VPC traffic to member AWS accounts only.
https://console.aws.amazon.com/organizations/.

03) Select the Accounts tab to access the list of AWS accounts,
members of the selected AWS Organization.

04) On the Accounts panel, identify the member account IDs (e.g.
123456789012), listed in the Account ID column.

05) Navigate to AWS VPC dashboard at
https://console.aws.amazon.com/vpc/.
06) In the left navigation panel, under Virtual Private Cloud section,
click Peering Connections.

07) Select the active VPC peering connection that you want to
examine. An active VPC peering connection has its status set to
Active.

08) Select the Description tab from the dashboard bottom panel
and check the value (i.e. account ID) set for the Accepter VPC
owner attribute. Compare the Accepter VPC owner ID with each
12-digit AWS account ID listed at step no. 4. If the Accepter VPC
owner ID does not match any member account IDs, the selected
VPC peering connection is linked to a VPC created within an AWS
account outside your AWS Organization.

09) Repeat step no. 7 and 8 to verify other VPC peering
connections provisioned in the current AWS region.

10) Change the AWS region from the navigation bar and repeat
steps no. 7 – 9 for other regions.

11) Sign in to each member account of your AWS Organization and
repeat steps no. 5 – 10.

01) Sign in to the AWS Management Console. Ensure that your AWS VPC network(s) use the
highly available Managed NAT Gateway service
02) Navigate to AWS VPC dashboard at instead of a NAT instance in order to enable
https://console.aws.amazon.com/vpc/. EC2 instances sitting in a private subnet to
connect to the internet or with other AWS
03) Under Filter by VPC: components.
select the VPC that you want to examine.
04) In the left navigation panel, under Virtual Private Cloud section,
click NAT Gateways.

05) Search for any managed NAT gateways available. If there is
no NAT gateway created for the selected VPC, the dashboard will
display the following message: “You do not have any NAT gateways
in this region.”.

06) Repeat step no. 3, 4 and 5 for each VPC network available in
the current region. Change the AWS region from the navigation bar
to repeat the process for other regions.
01) Sign in to the AWS Management Console. Identify and remove any unused VPC Internet
Gateways (IGWs) and VPC Egress-Only Internet
02) Navigate to AWS VPC dashboard at Gateways (EIGWs) in order to adhere to best
https://console.aws.amazon.com/vpc/. practices and to avoid approaching the service
limit (by default, you are limited to 5 IGWs and 5
03) To determine the VPC gateway resource state based on its EIGWs per AWS region). An Internet
type, perform the following actions: Gateway/Egress-Only Internet Gateway is
A. For AWS VPC Internet Gateways (IGWs): evaluated as unused when it is not attached
*In the left navigation panel, under Virtual Private Cloud, click anymore to an AWS Virtual Private Cloud (VPC).
Internet Gateways.
*Select the VPC IGW that you want to examine.
*Select the Summary tab from the dashboard bottom panel
and check the value set for the State configuration attribute
listed below the resource ID. If the State current value is
"detached", the selected Internet Gateway is not
attached to an AWS Virtual Private Cloud (VPC), therefore the
gateway should be marked as unused and safely removed
from your AWS account.
B. For AWS VPC Egress-Only Internet Gateways (EIGWs):
*In the left navigation panel, under Virtual Private Cloud
section, click Egress Only Internet Gateways.
*Select the VPC EIGW that you want to examine.
*Select the Summary tab from the dashboard bottom panel
and verify the value set for the Attached VPC ID attribute
listed next to the resource ID. If the Attached VPC ID attribute does
not have any value assigned, the selected Egress-Only
Internet Gateway is not attached to an AWS VPC, therefore the
egress-only gateway should be marked as unused and safely
removed from your AWS account.

04) Repeat step no. 3 (a and b) to check the attachment status for
other AWS VPC IGWs and EIGWs provisioned within the current
region.

05) Change the AWS region from the navigation bar and repeat the
entire audit process for other regions.

01) Sign in to the AWS Management Console. Identify and delete any unused Amazon Virtual
Private Gateways (VGWs) in order to adhere to
02) Navigate to AWS VPC dashboard at best practices and to avoid reaching the service
https://console.aws.amazon.com/vpc/. limit (by default, you are limited to 5 VGWs -
attached or detached - per AWS region). An AWS
03) In the left navigation panel, under VPN Connections section, Virtual Private Gateway is considered unused
click Virtual Private Gateways. when it is no longer associated with a VPN
connection (on the VPC side of the connection).
04) Select the VPN VGW that you want to examine.

05) Select the Summary tab from the dashboard bottom panel and
check the value set for the State configuration attribute listed
below the resource ID. If the State value is "detached", the
selected AWS Virtual Private Gateway is not attached to the VPC
side of the VPN connection; it is therefore considered unused and
can be safely removed from your AWS account (see
Remediation/Resolution section).
06) Repeat step no. 4 and 5 to determine the current state for
other AWS VGWs available within the current region.
07) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
01) Sign in to the AWS Management Console. Once enabled, the Flow Logs feature will start
collecting network traffic data to and from your
02) Navigate to VPC dashboard at Virtual Private Cloud (VPC), data that can be
https://console.aws.amazon.com/vpc/. useful to detect and troubleshoot security issues
and make sure that the network access rules are
03) In the left navigation panel, select Your VPCs. not overly permissive.

04) Select the VPC that you need to check.

05) Select the Flow Logs tab from the bottom panel.

06) Search for any Flow Logs entries available for the selected
VPC.
07) If there are no Flow Logs created, the status should be “No
Flow Logs found”

01) Sign in to the AWS Management Console. Ensure that your AWS Virtual Private Clouds
(VPCs) are using appropriate naming conventions
02) Navigate to VPC dashboard at for tagging in order to manage them more
https://console.aws.amazon.com/vpc/. efficiently and adhere to AWS resource tagging
best practices. A naming convention is a well-
03) In the left navigation panel, under Virtual Private Cloud section, defined set of rules useful for choosing the name
choose Your VPCs. of an AWS resource. Its strongly recommended
using the following pattern (default pattern) for
04) Open the dashboard Show/Hide Columns dialog box by clicking naming your AWS VPCs: ^vpc-(ue1|uw1|uw2|
the configuration icon: ew1|ec1|an1|an2|as1|as2|se1)-(d|t|s|p)-([a-
z0-9\-]+)$.
05) Inside the Show/Hide Columns dialog box, under Your Tag Keys
column, select the Name checkbox then click Close to return to
your dashboard.

06) Under Name column, check the name tag value e.g.of each VPC
provisioned in the current AWS region. default pattern (i.e. ^vpc-
(ue1|uw1|uw2|ew1|ec1|an1|an2|as1|as2|se1)-(d|t|s|p)-([a-z0-
9\\-]+)$) or based on a well-defined custom pattern, the naming
structure of these VPCs does not adhere to AWS tagging best
practices.

07) Change the AWS region from the navigation bar and repeat the
audit process for other regions.
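The default naming pattern quoted above is a plain regular expression, so the compliance test for a Name tag can be sketched directly (the pattern is taken from the document; the helper itself is illustrative):

```python
import re

# Default VPC naming pattern from the check above:
# region code, environment letter (d/t/s/p), then a lowercase identifier.
VPC_NAME_PATTERN = re.compile(
    r"^vpc-(ue1|uw1|uw2|ew1|ec1|an1|an2|as1|as2|se1)-(d|t|s|p)-([a-z0-9\-]+)$"
)

def name_is_compliant(name_tag):
    """Return True if the VPC Name tag matches the default pattern."""
    return bool(VPC_NAME_PATTERN.match(name_tag))
```

For example, `vpc-ue1-p-webapp` is compliant, while a free-form name such as `MyVPC` is not.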
Patch Priority Patch Status Remark

Quick Win Not Compliant


Short Term Already Compliant
Short Term Not Compliant

Short Term Not Compliant


Short Term N/A
Short Term Not Compliant

Long Term Not Compliant


Long Term Not Compliant

Long Term Already Compliant


Long Term Already Compliant

Long Term N/A


HIGH 0
MEDIUM 9
LOW 2
INFO 1
CloudFront

Sr. No. Check Name Description


1 AWS CloudFront CDN In Ensure that AWS CloudFront Content
Use Delivery Network (CDN) service is used
within your AWS account to secure and
accelerate the delivery of your websites,
media files or static resources (e.g., CSS files,
JavaScript files, images) handled by your
web applications.

2 AWS CloudFront – WAF Ensure that all your AWS CloudFront web
Integration distributions are integrated with the Web
Application Firewall (AWS WAF) service to
protect against application-layer attacks that
can compromise the security of your web
applications or place unnecessary load on
them.
3 Enable Access Logging for Ensure that your AWS Cloudfront
AWS CloudFront distributions have the Logging feature
Distributions enabled in order to track all viewer requests
for the content delivered through the
Content Delivery Network (CDN).

4 Configure CloudFront Ensure that the communication between


Viewer Protocol Policy to your Amazon CloudFront CDN distribution
Enforce Encryption and its viewers (end users) is encrypted
using HTTPS in order to secure the delivery
of your web application content. To enable
data in transit encryption, you need to
configure the web distribution viewer
protocol policy to redirect HTTP requests to
HTTPS requests or to require the viewers to
use only the HTTPS protocol to access your
web content available in the CloudFront
distribution cache.
5 Enable Origin Access Ensure that the origin access identity feature
Identity for CloudFront is enabled for all your AWS Cloudfront CDN
Distributions with S3 distributions that utilize an S3 bucket as an
Origin origin in order to restrict any direct access to
your objects through Amazon S3 URLs.
6 AWS CloudFront Origin Ensure that your AWS Cloudfront Content
Insecure SSL Protocols Delivery Network distributions are not using
insecure SSL protocols (i.e. SSLv3) for HTTPS
communication between CloudFront edge
locations and your custom origins. It is
strongly recommended to use TLSv1.0 or later
(ideally only TLSv1.2, if your origins
support it) and to avoid using the SSLv3
protocol.
7 Enable Origin Failover for Ensure that Origin Failover feature is
CloudFront Distributions enabled for your Amazon CloudFront web
distributions in order to improve the
availability of the content delivered to your
end users. To implement Origin Failover, you
have to create an origin group to provide
rerouting during a failover event. Then you
can associate an origin group with a cache
behaviour (using only GET, HEAD and
OPTIONS methods) to have requests routed
from a primary origin to a secondary origin
as a failover strategy. Before you can create
an origin group, you must have two origins
configured for your CloudFront web
distribution.

8 AWS CloudFront Security Ensure that your Amazon CloudFront


Policy distributions use a security policy with
minimum TLSv1.1 or TLSv1.2 and
appropriate security ciphers for HTTPS
viewer connections. An AWS CloudFront
security policy determines two settings: the
SSL/TLS protocol that CloudFront uses to
communicate with the users and the cipher
that CloudFront uses to encrypt the content
that it returns to users. It is recommended
that you use TLSv1.1 as the minimum protocol
version for your CloudFront distribution
security policies, unless your users are using
browsers or devices that do not support
TLSv1.1 or later.
9 Unencrypted AWS Ensure that the communication between
CloudFront Traffic your AWS CloudFront distributions and their
custom origins is encrypted using HTTPS in
order to secure the delivery of your web
content and fulfill compliance requirements
for data in transit encryption.
10 Configure AWS Ensure that your Amazon Cloudfront
Cloudfront to Compress Content Delivery Network (CDN)
Objects Automatically distributions are configured to automatically
compress content for web requests that
include "Accept-Encoding: gzip" in the
request header, in order to increase your
web applications' performance and reduce
bandwidth costs. AWS Cloudfront
compresses files of certain types for both
Amazon S3 origins and custom origins.
11 Enable Field-Level Ensure that field-level encryption is enabled
Encryption for CloudFront for your Amazon CloudFront web
Distributions distributions in order to help protect
sensitive data like credit card numbers or
social security numbers, and to help protect
your data across application services.
12 Enable AWS CloudFront Ensure that geo restriction is enabled for
Geo Restriction your Amazon CloudFront CDN distribution to
whitelist or blacklist a country in order to
allow or restrict users in specific locations
from accessing web application content.
Category Risk Level Impact
Performance Improvement The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium

Secure Network Access The check addresses a security concern


related to misconfigurations or
inefficiencies.

Medium
Logging and tracing The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium

Data protection The check addresses a security concern


related to misconfigurations or
inefficiencies.

Medium
User access control Users will be able to access the files directly
from the S3 bucket.

Medium
Data protection The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Availability If the origin is unavailable, or returns
specific HTTP response status codes that
indicate a failure, the website will be
unavailable to the end user

Medium

Data protection The check addresses a security concern


related to misconfigurations or
inefficiencies.

Medium
Data protection The check addresses a security concern
related to misconfigurations or
inefficiencies.

Medium
Performance Improvement The check addresses a security concern
related to misconfigurations or
inefficiencies.

Low
Data protection With no field-level encryption, sensitive
data like credit card details can be viewed
by other applications.

Low
Availability The check addresses a security concern
related to misconfigurations or
inefficiencies.

Info
Navigation Path/ Location Recommendation Patch Priority
1) Login to the AWS Management Console.
2) Navigate to the CloudFront dashboard at https://console.aws.amazon.com/cloudfront/.
3) In the left navigation panel, click Distributions. A web distribution is a CloudFront service instance that enables you to deliver web content through a worldwide network of cache servers that provide low latency and high data transfer speeds. If there are no CloudFront distributions listed and a Getting Started page is displayed instead, the CloudFront CDN service is not currently used within your AWS account.

Recommendation: Ensure the AWS CloudFront CDN service is in use.
Long Term

1) Login to the AWS Management Console.
2) Navigate to the CloudFront dashboard at https://console.aws.amazon.com/cloudfront/.
3) On the Distributions page, select the CDN distribution that you want to examine.
4) Click the Distribution Settings button from the dashboard top menu to access the selected distribution configuration page.
5) On the General tab click the Edit button.
6) On the Distribution Settings page, verify the AWS WAF Web ACL configuration status. If AWS WAF Web ACL is set to None, the selected CDN distribution is not currently associated with an Access Control List (ACL) and is therefore not integrated with the AWS WAF service for protection against malicious viewers.
7) Repeat steps no. 3 – 6 for each CloudFront CDN distribution available in your AWS account.

Recommendation: AWS WAF Web ACL should not be set to None.
Long Term
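In the CloudFront API, the console's "AWS WAF Web ACL: None" corresponds to an empty `WebACLId` field in the distribution configuration, so the check above reduces to a one-liner (the field name mirrors the API; the config dict here is an assumed, offline stand-in):

```python
def has_waf_acl(distribution_config):
    """Return True if the distribution is associated with a WAF web ACL.

    An empty (or missing) WebACLId is what the console shows as "None".
    """
    return bool(distribution_config.get("WebACLId"))
```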
1) Login to the AWS Management Console.
2) Navigate to the CloudFront dashboard at https://console.aws.amazon.com/cloudfront/.
3) On the Distributions page, select the CDN distribution that you want to examine.
4) Click the Distribution Settings button from the dashboard top menu to access the selected distribution configuration page.
5) On the General tab click the Edit button.
6) On the Distribution Settings page, verify the Logging feature configuration status. If Logging is set to Off, the selected distribution is not tracking any requests made to your web content.
7) Repeat steps no. 3 – 6 for each CloudFront CDN distribution available in your AWS account.

Recommendation: The Logging feature configuration status should be Enabled.
Long Term

1) Sign in to the AWS Management Console.
2) Navigate to the CloudFront dashboard at https://console.aws.amazon.com/cloudfront/.
3) In the left navigation panel, click Distributions to access the existing distributions.
4) On the CloudFront Distribution page, under the main menu, select Web and Enabled from the Viewing dropdown menus to list all active web distributions available in your account.
5) Select the CDN distribution that you want to examine.
6) Click the Distribution Settings button from the dashboard top menu to access the resource configuration page.
7) Choose the Behaviors tab and select the distribution default behaviour entry.
8) Click the Edit button to access the distribution behaviour settings.
9) On the Edit Behaviour page, verify the Viewer Protocol Policy configuration settings. If the HTTP and HTTPS setting is currently selected, viewers can use both the HTTP and HTTPS protocols to access your web content; the selected CloudFront CDN distribution therefore does not enforce the HTTPS protocol for data in transit and the distribution configuration is not compliant.
10) Repeat steps no. 5 – 9 to verify the viewer protocol policy configuration for other Amazon CloudFront CDN distributions available within your AWS account.

Recommendation: The viewer protocol policy should be set to redirect HTTP to HTTPS.
Long Term
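In the CloudFront API, the console's "HTTP and HTTPS" option corresponds to the `ViewerProtocolPolicy` value `allow-all`; compliant values are `redirect-to-https` and `https-only`. A sketch of the check over a distribution config dict (nesting mirrors the API; an assumption for this offline illustration):

```python
def insecure_behaviors(distribution_config):
    """Return cache behaviours that still allow plain-HTTP viewer access."""
    # The default behaviour plus any additional cache behaviours.
    behaviors = [distribution_config.get("DefaultCacheBehavior", {})]
    behaviors += distribution_config.get("CacheBehaviors", {}).get("Items", [])
    return [b for b in behaviors
            if b.get("ViewerProtocolPolicy") == "allow-all"]
```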
1) Login to the AWS Management Console.
2) Navigate to the CloudFront dashboard at https://console.aws.amazon.com/cloudfront/.
3) On the Distributions page, select the CDN distribution that you want to examine.
4) Click the Distribution Settings button from the dashboard top menu to access the selected distribution configuration page.
5) On the Origins tab, select the entry that has the Origin Type set to S3 Origin, then click the Edit button.
6) On the Origin Settings page, verify the current status of the Restrict Bucket Access setting. If Restrict Bucket Access is set to No, access to the S3 bucket used as origin is not restricted; the selected AWS CloudFront CDN distribution is therefore using an S3 origin without an origin access identity.
7) Repeat steps no. 5 and 6 for each origin created for the selected CloudFront distribution.
8) Repeat steps no. 3 – 7 for each CloudFront CDN distribution available within your AWS account.

Recommendation: The origin access identity feature should be enabled for all your AWS CloudFront CDN distributions.
Long Term
1) Login to the AWS Management Console.
2) Navigate to the CloudFront dashboard at https://console.aws.amazon.com/cloudfront/.
3) On the Distributions page, select the CDN distribution that you want to examine.
4) Click the Distribution Settings button from the dashboard top menu to access the selected distribution configuration page.
5) Select the Origins tab and choose the distribution origin that you want to verify from the Origins list.
6) Click the Edit button to access the selected origin configuration page.
7) On the Origin Settings page, verify the protocols enabled within the Origin SSL Protocols category. If the SSLv3 protocol is currently enabled, the selected distribution origin is using an insecure SSL protocol for HTTPS traffic, and the current CloudFront CDN configuration is therefore vulnerable to exploits.
8) Repeat steps no. 5 – 7 for each origin created for the selected distribution.
9) Repeat steps no. 3 – 8 for each CloudFront CDN distribution available in your AWS account.

Recommendation: AWS CloudFront distribution origins should not use insecure SSL protocols.
Long Term
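The per-origin SSLv3 check above can be sketched over a distribution config dict. The nesting ("Origins" → "Items" → "CustomOriginConfig" → "OriginSslProtocols") mirrors the CloudFront DistributionConfig; treat it as an assumption for this offline illustration:

```python
def origins_allowing_sslv3(distribution_config):
    """Return IDs of custom origins that still negotiate SSLv3."""
    flagged = []
    for origin in distribution_config.get("Origins", {}).get("Items", []):
        custom = origin.get("CustomOriginConfig")
        if not custom:
            continue  # S3 origins have no Origin SSL Protocols setting
        protocols = custom.get("OriginSslProtocols", {}).get("Items", [])
        if "SSLv3" in protocols:
            flagged.append(origin["Id"])
    return flagged
```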
1) Sign in to the AWS Management Console.
2) Navigate to the CloudFront dashboard at https://console.aws.amazon.com/cloudfront/.
3) In the left navigation panel, click Distributions to access the existing distributions.
4) On the CloudFront Distribution page, under the main menu, select Web and Enabled from the Viewing dropdown menus to list all active web distributions available in your AWS account.
5) Select the CloudFront distribution that you want to examine.
6) Click the Distribution Settings button from the dashboard top menu to access the resource configuration page.
7) Choose the Origins and Origin Groups tab to access the selected distribution origins.
8) Check for any origin groups defined within the Origin Groups section. If there are no origin groups available and the following message is displayed instead: "You don't have any origin groups. To create one, choose Create Origin Group.", the selected Amazon CloudFront web distribution does not have an origin group configured, and the Origin Failover feature is therefore not currently enabled.
9) Repeat steps no. 5 – 8 to determine the Origin Failover configuration status for other Amazon CloudFront CDN distributions provisioned in your AWS account.

Recommendation: The Origin Failover feature should be enabled for Amazon CloudFront web distributions.
Long Term
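Since Origin Failover requires at least one origin group, the audit above reduces to checking the origin-group count in the distribution config. The `OriginGroups.Quantity` field mirrors the CloudFront DistributionConfig and is an assumption for this offline sketch:

```python
def has_origin_failover(distribution_config):
    """Return True if at least one origin group is defined,
    i.e. the distribution can perform Origin Failover."""
    return distribution_config.get("OriginGroups", {}).get("Quantity", 0) > 0
```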

1) Sign in to the AWS Management Console.
2) Navigate to the CloudFront dashboard at https://console.aws.amazon.com/cloudfront/.
3) On the Distributions page, select the web distribution that you want to examine.
4) Click the Distribution Settings button from the dashboard top menu to access the selected distribution configuration page.
5) On the General tab, verify the Security Policy attribute value. If Security Policy is currently set to the TLSv1 or TLSv1_2016 protocol, the selected Amazon CloudFront distribution is not using an improved security policy that enforces TLS version 1.1 or 1.2 as the minimum protocol version, and the current configuration is therefore vulnerable to exploits.
6) Repeat steps no. 3 – 5 to check the type of security policy used by other CloudFront distributions available in your AWS account.

Recommendation: Amazon CloudFront distributions should use a security policy with minimum TLSv1.1 or TLSv1.2 and appropriate security ciphers for HTTPS viewer connections.
Short Term
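The console's Security Policy value maps to `ViewerCertificate.MinimumProtocolVersion` in the CloudFront API, so the check above becomes a membership test against the weak policy names it flags (the field nesting and the exact set of weak names are assumptions for this offline sketch):

```python
# Policy names flagged as weak by the check above (TLSv1.1+ is required).
WEAK_POLICIES = {"SSLv3", "TLSv1", "TLSv1_2016"}

def security_policy_is_strong(distribution_config):
    """Return True if the viewer security policy enforces TLSv1.1 or later."""
    version = distribution_config.get("ViewerCertificate", {}).get(
        "MinimumProtocolVersion", "")
    return version != "" and version not in WEAK_POLICIES
```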
1) Login to the AWS Management Console.
2) Navigate to the CloudFront dashboard at https://console.aws.amazon.com/cloudfront/.
3) On the Distributions page, select the CDN distribution that you want to examine.
4) Click the Distribution Settings button from the dashboard top menu to access the selected distribution configuration page.
5) Select the Origins tab and choose the distribution origin that you want to verify from the Origins list.
6) Click the Edit button to access the origin settings.
7) On the Origin Settings page, verify the current Origin Protocol Policy configuration. If HTTP Only is currently enabled, the traffic between the CloudFront distribution edge servers and the selected origin is not encrypted. If the Match Viewer option is currently selected and the viewer requests to CloudFront are made using HTTP, CloudFront also connects to the origin using HTTP, and the traffic is therefore not encrypted.
8) Repeat steps no. 5 – 7 for each origin created for the selected distribution.
9) Repeat steps no. 3 – 8 for each CloudFront CDN distribution available in your AWS account.

Recommendation: The communication between AWS CloudFront distributions and their custom origins should be encrypted using HTTPS.
Short Term
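Following the logic of step 7, `http-only` always produces unencrypted origin traffic and `match-viewer` can whenever viewers connect over HTTP; only `https-only` guarantees encryption. A sketch over the distribution config (field names mirror the CloudFront API and are assumed):

```python
def origins_with_unencrypted_traffic(distribution_config):
    """Return IDs of custom origins whose protocol policy can yield
    unencrypted CloudFront-to-origin traffic."""
    flagged = []
    for origin in distribution_config.get("Origins", {}).get("Items", []):
        custom = origin.get("CustomOriginConfig")
        if custom and custom.get("OriginProtocolPolicy") in ("http-only",
                                                             "match-viewer"):
            flagged.append(origin["Id"])
    return flagged
```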
1) Sign in to the AWS Management Console.
2) Navigate to the CloudFront dashboard at https://console.aws.amazon.com/cloudfront/.
3) In the left navigation panel, click Distributions to access the existing distributions.
4) On the CloudFront Distribution page, under the main menu, select Web and Enabled from the Viewing dropdown lists to return all active web distributions available in your AWS account.
5) Select the web distribution that you want to examine.
6) Click the Distribution Settings button from the dashboard top menu to access the resource configuration page.
7) Choose the Behaviors tab and select the default behaviour for the distribution.
8) Click the Edit button to access the configuration settings for the selected distribution behaviour.
9) On the Edit Behaviour page, check the Compress Objects Automatically configuration setting. If Compress Objects Automatically is set to No, the selected Amazon CloudFront web distribution is not configured to compress objects (files) automatically.
10) Repeat steps no. 5 – 9 to verify the object compression configuration for other CloudFront CDN distributions available in your AWS account.

Recommendation: Amazon CloudFront Content Delivery Network (CDN) distributions should be configured to automatically compress content for web requests.
Long Term
1) Sign in to the AWS Management Console.
2) Navigate to the CloudFront dashboard.
3) In the left navigation panel, click Distributions to access your CloudFront distributions.
4) On the CloudFront Distribution page, under the main menu, select Web and Enabled from the Viewing dropdown menus to list all active web distributions available in your AWS account.
5) Select the web distribution that you want to examine.
6) Click the Distribution Settings button from the dashboard top menu to access the resource configuration page.
7) Choose the Behaviors tab and select the default behaviour for the distribution.
8) Click the Edit button to access the configuration settings for the selected distribution behaviour.
9) On the Edit Behaviour page, check the Field-level Encryption Config configuration setting. If the Field-level Encryption Config dropdown list is empty, the selected Amazon CloudFront CDN distribution is not configured to use field-level encryption to protect private data.
10) Repeat steps no. 5 – 9 to determine if field-level encryption is enabled for other Amazon CloudFront distributions available in your AWS account.

Recommendation: Field-level encryption should be enabled for your Amazon CloudFront web distributions.
Long Term
1) Sign in to the AWS Management Console.
2) Navigate to the CloudFront dashboard at https://console.aws.amazon.com/cloudfront/.
3) In the left navigation panel, click Distributions.
4) On the CloudFront Distribution page, under the main menu, select Web and Enabled from the Viewing dropdown menus to list all active web distributions available in your account.
5) Select the CDN distribution that you want to examine.
6) Click the Distribution Settings button from the dashboard top menu to access the resource configuration page.
7) Choose the Restrictions tab and check the Geo Restriction status available in the Status column. If the status is set to Disabled, the geo restriction feature is not enabled for the selected CloudFront distribution, and the distribution configuration is therefore not compliant.
8) Repeat steps no. 5 – 7 to verify the geo restriction feature status for other Amazon CloudFront CDN distributions available in your AWS account.

Recommendation: Geo restriction should be enabled for Amazon CloudFront CDN distributions.
Long Term
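In the CloudFront API the console's Geo Restriction status maps to `Restrictions.GeoRestriction.RestrictionType`, where `none` means disabled and `whitelist`/`blacklist` mean enabled. A sketch of the check (field nesting mirrors the API; assumed for this offline illustration):

```python
def geo_restriction_enabled(distribution_config):
    """Return True if the distribution restricts content by country."""
    restriction = (distribution_config.get("Restrictions", {})
                   .get("GeoRestriction", {}))
    return restriction.get("RestrictionType", "none") in ("whitelist",
                                                          "blacklist")
```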
Patch Status Remark

Already Compliant

Not Compliant
Not Compliant

N/A
Already Compliant
Already Compliant
Not Compliant

Already Compliant
Not Compliant
N/A
N/A
N/A
HIGH 1
MEDIUM 0
LOW 0
INFO 0
CloudWatch

Sr. No. Check Name Description


1 Enable AWS Set up a CloudWatch billing alarm to receive
CloudWatch alerts when your AWS estimated charges exceed
Billing Alarm a threshold that you choose so you can decide
whether to stop or reconfigure the AWS
components that have reached the cost limit
set. These alerts are triggered by AWS
CloudWatch and sent to you using the AWS
Simple Notification Service (SNS).
Category Risk Level
Cost Reduction

High
Impact
From a security perspective, the absence of billing alarms can
also expose an organization to the risk of malicious activity, such
as an attacker provisioning resources or escalating services to
create a financial burden. CloudWatch billing alarms provide an
automated method for administrators to quickly react to
abnormal usage patterns, allowing them to take corrective
actions before costs spiral out of control. This is especially
important for environments that scale dynamically, where
resource consumption can grow exponentially without direct
oversight. Additionally, enabling these alarms is a critical step in
ensuring compliance with cost management policies and
protecting organizational financial integrity.
Navigation Path/ Location Recommendation
1) Sign in to the AWS Management Console.

2) Navigate to the Billing and Cost Management dashboard at https://console.aws.amazon.com/billing/home.

3) In the left navigation panel, select Preferences and check the Receive Billing Alerts status. If the feature is disabled, i.e. its checkbox is unchecked, AWS billing alerts are currently disabled in your account. In order to create CloudWatch billing alarms to track resource costs you must enable this feature first (see the Remediation/Resolution section).

Recommendation: Receive Billing Alerts should be Enabled.
Patch Priority Patch Status Remark

Quick Win Already Compliant


HIGH 2
MEDIUM 5
LOW 1
INFO 1
ALB

Sr. No. Check Name Description


1 ELBv2 ALB Listener When an AWS ALB has no HTTPS listeners,
Security the front-end connection between the
clients and the load balancer is vulnerable
to eavesdropping and Man-In-The-Middle
(MITM) attacks. The risk becomes even
higher when working with sensitive data
such as health and personal records,
credentials and credit card numbers.
2 Internet Facing Using the right scheme (internal or
ELBv2 Load internet-facing) for your Amazon
Balancers Application Load Balancers (ALBs) and
Network Load Balancers (NLBs) is crucial
for maintaining your AWS load balancing
architecture security.
3 ELBv2 ALB Security Having well-configured security groups
Group attached to your ELBv2 load balancers can
reduce substantially the risk of data loss
and unauthorized access. Also, the
security groups must be valid, because
when a load balancer is created without
specifying a security group, the ALB/NLB is
automatically associated with the VPC’s
default security group, which is
considered invalid.
4 ELBv2 ALB Security Using insecure and deprecated security
Policy policies for SSL negotiation configuration
within your Application Load Balancers
will expose the connection between the
client and the load balancer to various
SSL/TLS vulnerabilities. To maintain your
ALBs SSL configuration secure, EY
recommends using one of the latest
predefined security policies released by
Amazon Web Services: "ELBSecurityPolicy-
TLS-1-2-Ext-2018-06", "ELBSecurityPolicy-
FS-2018-06", "ELBSecurityPolicy-2016-08"
or "ELBSecurityPolicy-TLS-1-1-2017-01".
5 ELBv2 Access Log After you enable and configure access
logging for your AWS Application Load
Balancers, the log files will be delivered to
the S3 bucket of your choice. The log files
contain data about each HTTP/HTTPS
request processed by the load balancer,
data that can be extremely useful for
analysing traffic patterns, implementing
protection plans and identifying and
troubleshooting security issues.
6 ELBv2 Minimum To achieve fault tolerance and minimize
Number of EC2 the risk of downtime, even if the load
Target Instances balancer is attached to an AWS Auto
Scaling Group that has max and desired
capacity set to 1, always register at least
two target instances to the target group(s)
associated with your ELBv2 load
balancers.

7 Network Load Using a deprecated security policy for TLS


Balancer Security negotiation configuration within your
Policy Network Load Balancers will expose the
connection between the client and the
load balancer to various vulnerabilities. To
keep your Amazon NLBs' TLS configuration
secure, it is recommended to use one of the
latest predefined security policies released
by Amazon Web Services: ELBSecurityPolicy-
2016-08, ELBSecurityPolicy-TLS-1-1-2017-01,
ELBSecurityPolicy-FS-2018-06, or
ELBSecurityPolicy-TLS-1-2-Ext-2018-06.
8 ELBv2 Elastic Load With Deletion Protection safety feature
Balancing Deletion enabled, you have the guarantee that
Protection your AWS load balancers cannot be
accidentally deleted and make sure that
your load-balanced environments remain
safe.

9 Unused ELBv2 Load Determine whether the target groups
Balancers associated with your ELBv2 load balancers
have registered target instances; a load
balancer whose target groups have no
registered targets is considered unused.
Category Risk Level Impact
Data protection The check addresses a security
concern related to
misconfigurations or
inefficiencies.

High
Auditing The check addresses a security
concern related to
misconfigurations or
inefficiencies.

High
User access control The check addresses a security
concern related to
misconfigurations or
inefficiencies.

Medium
Data protection The check addresses a security
concern related to
misconfigurations or
inefficiencies.

Medium
Logging and tracing The access log files contain sensitive
information which can be used for
analysis of malicious activity as part
of incident investigation

Medium
Availability The check addresses a security
concern related to
misconfigurations or
inefficiencies.

Medium

Data protection The check addresses a security


concern related to
misconfigurations or
inefficiencies.

Medium
Availability If the application load balancer is
accidentally deleted, URL redirection will
stop. This will affect the availability of
the organisation's services.

Low

Cost Reduction The check addresses a security


concern related to
misconfigurations or
inefficiencies.

Info
Navigation Path/ Location Recommendation
01) Sign in to the AWS Management Console.

02) Navigate to the EC2 dashboard at https://console.aws.amazon.com/ec2/.

03) In the left navigation panel, under the LOAD BALANCING section, choose Load Balancers.

04) Select the Application Load Balancer that you want to examine.

05) Select the Listeners tab from the bottom panel to access the load balancer listeners. Now check the protocol for each listener available within the ELBv2 listeners list. If there is no listener using the HTTPS protocol, the listener configuration for the selected AWS Application Load Balancer is not secure, and the front-end connection between the clients and the load balancer is therefore not encrypted.

06) Repeat steps no. 4 and 5 for each AWS ALB provisioned in the current region.

07) Change the AWS region from the navigation bar and repeat the audit process for other regions.

Recommendation: To secure (encrypt) the connection between your application clients and your load balancers, update the AWS ALB listener configuration to support the HTTPS protocol (an X.509 SSL certificate is required).
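The listener audit above can be sketched over a list of listener dicts in the shape of the ELBv2 `describe-listeners` response (field names assumed for this offline illustration); TLS is included alongside HTTPS to cover Network Load Balancer listeners:

```python
def has_secure_listener(listeners):
    """Return True if at least one front-end listener terminates TLS."""
    return any(l.get("Protocol") in ("HTTPS", "TLS") for l in listeners)
```

A load balancer for which this returns False only accepts plaintext front-end connections and should be flagged per the check above.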
01) Sign in to the AWS Management Console.

02) Navigate to the EC2 dashboard at https://console.aws.amazon.com/ec2/.

03) In the left navigation panel, under the LOAD BALANCING section, choose Load Balancers.

04) Select the AWS load balancer that you want to examine.

05) Select the Description tab from the dashboard bottom panel to view the ELBv2 resource description.

06) Within the Basic Configuration section, check the Scheme attribute value set for the selected load balancer. If the Scheme attribute value is set to internet-facing, the selected ALB/NLB is internet-facing and routes requests/connections from clients over the Internet to the registered target EC2 instances; it must therefore be reviewed from a security standpoint.

07) Repeat steps no. 4 – 6 to determine the scheme used by other Amazon ELBv2 load balancers provisioned in the current region.

08) Change the AWS region from the navigation bar and repeat the audit process for other regions.

Recommendation: Review your internet-facing ELBv2 load balancers and change the scheme configuration for any ALB/NLB resource that does not follow the regulatory security requirements. To change the scheme for your AWS load balancers (i.e. to create internal Amazon ELBv2 load balancers), you need to re-create them with the internal scheme configuration.
01) Sign in to the AWS Management Console.

02) Navigate to the EC2 dashboard at https://console.aws.amazon.com/ec2/.

03) In the left navigation panel, under the LOAD BALANCING section, choose Load Balancers.

04) Select the load balancer that you want to examine.

05) Select the Listeners tab from the bottom panel to check the load balancer listener configuration details (i.e. protocol and port).

06) Select the Description tab from the dashboard bottom panel to view the ELBv2 resource description.

07) Within the Security section, click on the listed security group ID, e.g. sg-1234abcd, to open the security group configuration page.

08) On the selected security group configuration page, perform the following checks:
Select the Description tab from the dashboard bottom panel and check the security group name listed as the value of the Group name attribute. If the name is set to default, it references the VPC's default security group, and the selected security group is therefore considered invalid.
Select the Inbound tab from the dashboard bottom panel and check for any inbound/ingress rules that are not defined within the ELBv2 load balancer listener configuration verified at step no. 5. If there are inbound rules that do not match the listeners' current configuration, the selected security group is considered insecure.
Select the Outbound tab from the dashboard bottom panel and check for any outbound/egress rules that are not defined within the load balancer listener configuration verified at step no. 5. If there are outbound rules that do not match the listeners' current configuration, the selected security group is considered insecure.

09) Repeat steps no. 7 and 8 to check other security groups associated with the selected load balancer.

10) Repeat steps no. 4 – 9 to verify other Amazon ELBv2 load balancers, provisioned in the current region, for invalid/insecure security groups.

Recommendation: Ensure that all Application Load Balancers (ALBs) available in your AWS account are associated with valid and secure security groups that restrict access only to the ports defined within the load balancers' listener configuration.
01) Sign in to the AWS Management Console. To update your Application Load
Balancers (ALBs) listeners
02) Navigate to EC2 dashboard at https://console.aws.amazon.com/ec2/. configuration to use the latest
predefined security policies
03) In the left navigation panel, under LOAD BALANCING section, choose
Load Balancers.

04) Select the Application Load Balancer that you want to examine.

05) Select the Listeners tab from the bottom panel to access the load
balancer listeners configuration.

06) Select the HTTPS : 443 listener and verify its security policy name
available within Security policy column. If the name of the policy is
different than ELBSecurityPolicy-2016-08, ELBSecurityPolicy-TLS-1-2-Ext-
2018-06, ELBSecurityPolicy-FS-2018-06 or ELBSecurityPolicy-TLS-1-1-
2017-01, the security policy used employs outdated protocols and
ciphers, therefore the selected ALB SSL negotiation configuration is
insecure and vulnerable to exploits.

07) Repeat steps no. 4 – 6 for each AWS Application Load Balancer
provisioned in the current region.
08) Change the AWS region from the navigation bar and repeat the audit
process for other regions.
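The decision at step 6 reduces to an allow-list check on the listener's security policy name. A minimal sketch in Python (the `is_secure_policy` helper and `APPROVED_POLICIES` set are illustrative, not part of any AWS SDK; the policy names are taken from the steps above and apply equally to the NLB TLS check later in this section):

```python
# Predefined ELB security policies treated as acceptable in the audit
# steps above; any other policy name is flagged as outdated.
APPROVED_POLICIES = {
    "ELBSecurityPolicy-2016-08",
    "ELBSecurityPolicy-TLS-1-2-Ext-2018-06",
    "ELBSecurityPolicy-FS-2018-06",
    "ELBSecurityPolicy-TLS-1-1-2017-01",
}

def is_secure_policy(policy_name: str) -> bool:
    """Return True if the listener's security policy is on the allow-list."""
    return policy_name in APPROVED_POLICIES
```

A listener whose policy name fails this check should be updated to one of the approved policies as described above.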
Remediation: Enable access logging for your AWS Application Load Balancers (ALBs). To enable access logs for your load balancer using the console:

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/
2. On the navigation pane, under LOAD BALANCING, choose Load Balancers.
3. Select your load balancer.
4. On the Description tab, choose Configure Access Logs.
5. On the Configure Access Logs page, do the following:
a. Choose Enable access logs.
b. Leave Interval as the default, 60 minutes.
c. For S3 location, type the name of your S3 bucket, including the prefix (for example, my-loadbalancer-logs/my-app). You can specify the name of an existing bucket or a name for a new bucket.
d. (Optional) If the bucket does not exist, choose Create this location for me. You must specify a name that is unique across all existing bucket names in Amazon S3 and follows the DNS naming conventions. For more information, see Rules for Bucket Naming in the Amazon Simple Storage Service Developer Guide.
6. Choose Save.

Audit steps:

01) Sign in to the AWS Management Console.
02) Navigate to EC2 dashboard at https://console.aws.amazon.com/ec2/.
03) In the left navigation panel, under LOAD BALANCING section, choose Load Balancers.
04) Select the Application Load Balancer that you want to examine.
05) Select Description tab from the dashboard bottom panel to view the ELBv2 resource description.
06) Inside Attributes section, check the Access logs configuration attribute value. If the attribute value is set to Disabled, the Access Logging feature is not enabled for the selected AWS Application Load Balancer.
07) Repeat steps no. 4 – 6 to verify Access Logging feature status for other AWS load balancers provisioned in the current region.
08) Change the AWS region from the navigation bar and repeat the audit process for other regions.
Remediation: Register additional healthy EC2 instances to the target group(s) associated with your ELBv2 load balancers.

01) Sign in to the AWS Management Console.

02) Navigate to EC2 dashboard at https://console.aws.amazon.com/ec2/.
03) In the left navigation panel, under LOAD BALANCING section, choose
Target Groups.

04) Select the target group associated with the AWS ELBv2 load balancer
that you want to examine. To check the resources association, verify the
Load balancer attribute value available on the Description tab.

05) Select Targets tab from the dashboard bottom panel to view the
registered targets.
06) Under Registered targets, check for healthy target instances with the
current status set to healthy. If the number of healthy instances
registered to the selected target group is less than two, the selected
Amazon ELBv2 load balancer does not have a fault-tolerant configuration.

07) Repeat steps no. 4 – 6 to check other AWS ELBv2 load balancers for
healthy target instances, available within the current region.

08) Change the AWS region from the navigation bar and repeat the audit
process for other regions.
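The step-6 evaluation can be sketched as a small predicate. This is an illustrative helper, not an AWS API: the input is assumed to be a list of target descriptions with a `status` field, loosely mirroring the health states shown in the console (or the `TargetHealth.State` values returned by `aws elbv2 describe-target-health`):

```python
def is_fault_tolerant(targets) -> bool:
    """A target group is considered fault tolerant when at least two
    registered targets report a 'healthy' status (step 6 above)."""
    healthy = sum(1 for t in targets if t.get("status") == "healthy")
    return healthy >= 2
```

Target groups failing this predicate need additional healthy instances registered, as described in the remediation above.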

Remediation: Ensure that your Amazon Network Load Balancers (NLBs) are using the latest recommended predefined security policy for TLS negotiation configuration in order to protect their front-end connections against TLS vulnerabilities and meet security requirements.

01) Sign in to the AWS Management Console.
02) Navigate to EC2 dashboard at https://console.aws.amazon.com/ec2/.
03) In the left navigation panel, under LOAD BALANCING section, choose Load Balancers.
04) Select the Network Load Balancer that you want to examine.
05) Select the Listeners tab from the bottom panel to access the load balancer listeners configuration.
06) Select the TLS : 443 listener and verify the name of the associated security policy available in the Security policy column. If the name of the policy is different than ELBSecurityPolicy-2016-08, ELBSecurityPolicy-TLS-1-1-2017-01, ELBSecurityPolicy-FS-2018-06 or ELBSecurityPolicy-TLS-1-2-Ext-2018-06, the security policy used employs outdated protocols and ciphers, therefore the selected Amazon NLB TLS negotiation configuration is vulnerable to exploits.
07) Repeat steps no. 4 – 6 for each AWS Network Load Balancer provisioned in the current region.
08) Change the AWS region from the navigation bar and repeat the audit process for other regions.
Remediation: Ensure ELBv2 Load Balancers have the Deletion Protection feature enabled in order to protect them from being accidentally deleted. To enable deletion protection using the console:

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/
2. On the navigation pane, under LOAD BALANCING, choose Load Balancers.
3. Select the load balancer.
4. On the Description tab, choose Edit attributes.
5. On the Edit load balancer attributes page, select Enable for Delete Protection, and then choose Save.
6. Choose Save.

Audit steps:

01) Sign in to the AWS Management Console.
02) Navigate to EC2 dashboard at https://console.aws.amazon.com/ec2/.
03) In the left navigation panel, under LOAD BALANCING section, choose Load Balancers.
04) Select the AWS load balancer that you want to examine.
05) Select Description tab from the dashboard bottom panel to view the ELBv2 resource description.
06) Inside Attributes section, check the Deletion Protection configuration attribute value. If the attribute value is set to Disabled, the Deletion Protection safety feature is not enabled for the selected AWS load balancer.
07) Repeat steps no. 4 – 6 to verify Deletion Protection feature status for other AWS load balancers provisioned in the current region.
08) Change the AWS region from the navigation bar and repeat the audit process for other regions.
Remediation: Delete any unused Application Load Balancers (ALBs) or Network Load Balancers (NLBs) currently available within your AWS account.

01) Sign in to the AWS Management Console.
02) Navigate to EC2 dashboard at https://console.aws.amazon.com/ec2/.
03) In the left navigation panel, under LOAD BALANCING, choose Target Groups.
04) Select the target group associated with the AWS ELBv2 load balancer (ALB or NLB) that you want to examine. To determine the resources association, verify the Load balancer attribute value available on the Description tab.
05) Select Targets tab from the dashboard bottom panel to access the list with the registered targets.
06) Under Registered targets, check for EC2 target instances registered to the selected target group. If there are no target instances currently registered to the group, the selected ELBv2 load balancer is considered "unused" and can be safely removed from your AWS account in order to avoid unexpected service charges.
07) Repeat steps no. 4 – 6 to verify other target groups associated with your load balancers for registered target instances, available within the current region.
08) Change the AWS region from the navigation bar and repeat the audit process for other regions.
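The "unused" criterion from step 6 can be applied across an inventory at once. A sketch, assuming a hypothetical mapping of load balancer name to the lists of target IDs registered in each associated target group (this data shape is illustrative, not an AWS API response):

```python
def unused_load_balancers(target_groups):
    """Given a mapping of load balancer name -> list of per-target-group
    registered target ID lists, return load balancers with no registered
    targets at all (candidates for removal to avoid service charges)."""
    return sorted(
        lb for lb, groups in target_groups.items()
        if all(len(targets) == 0 for targets in groups)
    )
```

For example, a load balancer whose only target group has an empty registration list would be reported, while one with at least one registered instance would not.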
Patch Priority Patch Status Remark

Quick Win Not Compliant


Short Term Already Compliant
Short Term Already Compliant
Short Term Already Compliant
Short Term Already Compliant
Short Term Already Compliant

Short Term Not Compliant


Long Term N/A

Short Term N/A


HIGH 8
MEDIUM 12
LOW 8
INFO 4
IAM

Sr. No. Check Name Description Category


1 AWS IAM Enforcing AWS IAM passwords strength, pattern Secure Authentication
Password Policy and rotation is vital when it comes to
maintaining the security of your AWS account.
Having a strong password policy in use will
significantly reduce the risk of password-
guessing and brute-force attacks.

2 AWS IAM Server Due to the increasing computing power Availability


Certificate Size available nowadays to decrypt SSL/TLS
certificates, any server certificate that is using
1024-bit keys can no longer be considered
secure. Plus, all major web browsers dropped
support for 1024-bit RSA certificates at the end
of 2013. If your AWS IAM server certificates are
still using 1024-bit keys, you should raise their
bit length to 2048 or higher in order to increase
their security level.

3 Account Security By default there are no security challenge Secure Authentication


Challenge questions set for your AWS account. Enabling
Questions and configuring security challenge questions will
add an extra layer of security to your account.
4 Hardware MFA Having hardware-based MFA protection for your Secure Authentication
for AWS Root root account is the best way to protect your
Account AWS resources and services against attackers. A
hardware MFA device signature adds an extra
layer of protection on top of your existing root
credentials making your Amazon Web Services
root account virtually impossible to penetrate
without the MFA generated passcode.

5 Pre-Heartbleed Using SSL/TLS certificates vulnerable to Auditing


Server Heartbleed can allow attackers to extract
Certificates sensitive data such as user names and
passwords, instant messages, emails and critical
documents directly from the system memory
without leaving any traces.

6 Root Account Anyone who has your root access keys can gain User access control
Access Keys unrestricted access to all the services within
Present your AWS environment, including billing
information. Removing these credentials from
your root account will significantly reduce the
risk of unauthorized access to your AWS
resources.
7 Root Account Locking down (restricting) your root account User access control
Usage usage is crucial for keeping your AWS account
safe because anyone who has your root
credentials has unrestricted access to all the
resources and services within your AWS
environment, including billing information and
the ability to change the root password. To
avoid root account usage, we recommend
implementing the principle of least privilege by
creating AWS IAM users with minimal set of
actions required to perform just the desired
task(s).

8 Root MFA Having an MFA-protected root account is the Secure Authentication


Enabled best way to protect your AWS resources and
services against attackers. An MFA device
signature adds an extra layer of protection on
top of your existing root credentials making
your AWS root account virtually impossible to
penetrate without the MFA generated
passcode.
9 Access Keys Unnecessary AWS IAM access keys generate Performance Improvement
During Initial IAM unnecessary management work in auditing and
User Setup rotating IAM credentials.

10 Credentials Last Disabling or removing unused AWS IAM user User access control
Used credentials can significantly reduce the risk of
unauthorized access to your AWS cloud
resources. Ideally, you will want to restrict
access for IAM users who leave your
organization or for applications and tools that
are no longer using these credentials.
11 Cross-Account Increase the security of your cross-account IAM Secure Authentication
Access Lacks role by requiring either an optional external ID
External ID and (similar to a password) or an MFA device to
MFA secure further the access to your AWS resources
and prevent "confused deputy" attacks. This is
highly recommended if you do not own or have
administrative access to the AWS account that
can assume this IAM role. To assume this cross-
account role, users must be in the trusted
account and provide the exact external ID or the
unique passcode generated by the MFA device
installed.

12 IAM Group With Defining access permissions for your IAM groups Resource Management
Inline Policies using managed policies can offer multiple
benefits such as reusability, versioning and
rollback, automatic updates, larger policy size
and fine-grained control over your policies
assignment.
13 IAM Policies With Providing full administrative privileges instead User access control
Full of restricting to the minimum set of permissions
Administrative can expose your AWS resources to potentially
Privileges unwanted actions. It is strongly recommended
to create and use IAM policies that implement
the principle of least privilege (i.e. providing the
minimal set of actions required to perform
successfully the desired tasks) instead of using
overly permissive policies.

14 IAM Role Policy Providing the right permissions for your IAM User access control
Too Permissive roles will significantly reduce the risk of
unauthorized access (through API requests) to
your AWS resources and services.

15 IAM User Present Using individual IAM users (with specific set of Secure Authentication
permissions) to access your AWS environment
will eliminate the risk of compromising your
root account credentials.
16 MFA For IAM Having MFA-protected IAM users is the best way Secure Authentication
Users With to protect your AWS resources and services
Console against attackers. An MFA device signature adds
Password an extra layer of protection on top of your
existing IAM user credentials (username and
password), making your AWS account virtually
impossible to penetrate without the MFA
generated passcode.

17 SSL/TLS When SSL/TLS certificates are not renewed prior Data Protection
Certificate Expiry to their expiration date, these become invalid
7 Days and the communication between the client and
the AWS resource that implements the
certificates (e.g. AWS ELB) is no longer secure.

18 Unnecessary Removing unnecessary AWS IAM access keys Resource Management


Access Keys will lower the risk of unauthorized access to
your AWS resources and components, and
adhere to AWS IAM security best practices.
19 Unnecessary SSH Removing unnecessary IAM SSH public keys will Resource Management
Public Keys lower the risk of unauthorized access to your
AWS CodeCommit repositories and adhere to
AWS IAM security best practices.

20 Unused IAM User Removing unused IAM users can reduce the risk User access control
of unauthorized access to your AWS resources
and help you manage the user-based access to
the AWS Management Console more efficiently.

21 Access Keys Rotating Identity and Access Management (IAM) Resource Management
Rotated 30 Days credentials periodically will significantly reduce
the chances that a compromised set of access
keys can be used without your knowledge to
access certain components within your AWS
account.
22 Account Once specified, the alternate contacts will Incident Notification
Alternate enable Amazon to contact another designated
Contacts person about the security issues found within
your account, even if you are unavailable.

23 Expired SSL/TLS Removing expired SSL/TLS certificates Auditing


Certificate eliminates the risk that an invalid certificate will
be deployed accidentally to a resource such as
AWS Elastic Load Balancer (ELB), which will
trigger front-end errors and damage the
credibility of the application/website behind the
ELB.

24 Inactive IAM Disabling access for inactive IAM users can User access control
Console User reduce the risk of unauthorized access to your
AWS resources and help you manage the user-
based access more efficiently.
25 Root Account Disabling X.509 signing certificates created for User access control
Active Signing your AWS root account eliminates the risk of
Certificates unauthorized access to certain AWS services
and resources, in case the private certificate
keys are stolen or shared accidentally.

26 SSH Public Keys Rotating periodically the SSH keys assigned to Resource Management
Rotated 30 Days your IAM users will significantly reduce the
chances that a compromised set of keys can be
used without your knowledge to access your
private Git repositories hosted with AWS
CodeCommit.

27 Support Role A Support Role is an IAM role that is configured User access control
to allow authorized users to manage incidents
with AWS Support.
28 Unused IAM Removing orphaned and unused IAM groups User access control
Group eliminates the risk that a forgotten group will be
used accidentally to allow unauthorized users to
access AWS resources.

29 Canary Access Your AWS API access keys represent an Incident Notification
Token attractive target for attackers and malicious
users. Knowing that, you can create
Canarytokens (i.e. valid access keys with a very
limited set of permissions) and leave them as
bait on different targets such as web
applications, code repositories, EC2 instances,
etc. If attackers breach one of these targets,
they will find the access keys and attempt to use
them. And when such credentials are used by
attackers, a notification alert will inform you of
their actions so you can use this information to
take measures and secure your AWS
environment and/or applications.
30 IAM User Monitoring the age of your IAM user credentials Secure Authentication
Password Expiry can help you prevent password expiry for less
7 Days frequent logins and manage the user-based
access to your account more efficiently.

31 IAM User Policies Defining permissions at the IAM group level Resource Management
instead of IAM user level will allow you manage
more efficiently the user-based access to your
AWS resources. With this new model you can
create groups, attach the necessary policies for
each group, then assign IAM users to these
groups as needed.
32 IAM User With Segregating the IAM users in your account by User access control
Password And controlling their privileges will help you
Access Keys maintain a secure AWS environment.
Risk Level Impact Navigation Path/ Location
If password policy does not expire, it is 01) Sign in to the AWS Management Console.
possible for the attacker to crack the 02) Navigate to IAM dashboard at
password of the user. https://console.aws.amazon.com/iam/.
03) In the left navigation panel, select Account Settings.
04) In the Password Policy section, check the password policy
current state. If the policy configuration does not enforce any
of the predefined requirements provided by AWS and it
displays the following message: “Currently, this AWS account
High does not have a password policy. Specify a password policy
below”, your AWS account does not have an active IAM
password policy and is not protected against unauthorized
access.
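The password policy requirements referenced above can be evaluated offline. A minimal sketch, assuming a policy dictionary shaped like the `PasswordPolicy` object returned by `aws iam get-account-password-policy`; the 14-character and 90-day thresholds are illustrative CIS-style values, not mandated by this report:

```python
def check_password_policy(policy: dict) -> list:
    """Return a list of findings for an IAM account password policy.

    An empty list means the policy meets the illustrative baseline.
    """
    findings = []
    if policy.get("MinimumPasswordLength", 0) < 14:
        findings.append("minimum length below 14")
    for flag in ("RequireSymbols", "RequireNumbers",
                 "RequireUppercaseCharacters", "RequireLowercaseCharacters"):
        if not policy.get(flag, False):
            findings.append(f"{flag} not enforced")
    max_age = policy.get("MaxPasswordAge")
    if not max_age or max_age > 90:
        findings.append("password expiration disabled or longer than 90 days")
    return findings
```

An account with no password policy at all (the condition flagged in step 4) corresponds to passing an empty dictionary, which yields the full list of findings.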

The check addresses a security AWS CLI


concern related to misconfigurations
or inefficiencies.

High

Account Security challenge question is 01) Sign in to the AWS Management Console.
extra layer of security to protect it from 02) Navigate to your AWS account settings page at
unauthorised access or account getting https://console.aws.amazon.com/billing/home?#/account/.
compromised. 03) Scroll down to Configure Security Challenge Questions
section and verify the feature configuration. If there are no
security challenge questions set and the following status is
displayed: “Security questions are currently not enabled.”, the
High feature is not currently enabled and configured to identify you
as the account owner in case the account is compromised.
The check addresses a security 01) Sign in to the AWS Management Console using your root
concern related to misconfigurations credentials.
or inefficiencies. 02) Click on the AWS account name or number in the upper-
right corner of the management console and select Security
Credentials from the dropdown menu:
03) On Your Security Credentials page, click on the Multi-Factor
Authentication (MFA) accordion tab to expand the MFA
management panel.
04) On the MFA management panel, check for any enabled
MFA device that has the Device Type attribute set "Hardware
MFA". If the MFA device listed here does not have the Device
High Type set to "Hardware MFA", your AWS root account is not
protected using a hardware-based MFA device, therefore does
not adhere to AWS security best practices.
05) Repeat steps no. 1 – 4 for each Amazon Web Services root
account that you want to examine.

The check addresses a security AWS CLI


concern related to misconfigurations
or inefficiencies.
High

The check addresses a security 01) Sign in to the AWS Management Console.
concern related to misconfigurations 02) Click on the AWS account name or number in the upper-
or inefficiencies. right corner of the management console and select Security
Credentials from the dropdown menu:
03) On Your Security Credentials page, click on the Access Keys
(Access Key ID and Secret Access Key) accordion tab to expand
the root access keys management section.
04) In the access keys table, under Status column, check for any
keys with the status set to Active. If the table displays one or
more active keys,
High your AWS root account is not following the IAM security best
practices regarding the protection against unauthorized access.
05) Repeat steps no. 1 – 5 for each AWS root account that you
want to examine.
If attacker is successful to get a session of 01) Sign in to the AWS Management Console.
root account, attacker will get 02) Navigate to IAM dashboard at
unrestricted access to the resources. https://console.aws.amazon.com/iam/.
03) In the left navigation panel, select Credential report.
04) Click Download Report button to download the AWS report
that lists all your account's users and the status of their various
credentials.
05) Open the downloaded credentials report (CSV file)
downloaded at the previous step in your favorite
spreadsheet/CSV editor and check the timestamp value (e.g.
2017-06-16T06:27:14+00:00), available in the
password_last_used column for the <root_account> user. If the
selected timestamp value (i.e. the time at which the root
credentials have been last used) represents a date recorded
High within the past 30 days, the verified credentials have been
used recently to access your AWS root account, therefore the
root account access policy currently used is not following the
AWS security best practices.
06) Repeat steps no. 1 – 5 for each Amazon Web Services root
account that you want to examine.
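The 30-day recency test on the credential report timestamp can be sketched directly. The timestamp format is the ISO 8601 form shown above (e.g. 2017-06-16T06:27:14+00:00); `root_used_recently` is an illustrative helper, not part of any AWS tooling:

```python
from datetime import datetime, timedelta

def root_used_recently(password_last_used: str,
                       now: datetime, days: int = 30) -> bool:
    """True if the credential report timestamp falls within the last
    `days` days, i.e. the root credentials were used recently (a finding
    under the audit steps above)."""
    last_used = datetime.fromisoformat(password_last_used)
    return now - last_used <= timedelta(days=days)
```

Passing the current time explicitly keeps the check deterministic and easy to test against historical reports.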

The check addresses a security 01) Sign in to the AWS Management Console using your root
concern related to misconfigurations credentials.
or inefficiencies. 02) Click on the AWS account name or number in the upper-
right corner of the management console and select Security
Credentials from the dropdown menu:
03) On Your Security Credentials page, click on the Multi-Factor
Authentication (MFA) accordion tab to expand the MFA
management section.
04) Inside the MFA management section, check for any enabled
MFA devices.
High If there are no MFA devices listed and the Activate MFA button
is displayed, your root account is not MFA-protected and the
authentication process is not following AWS IAM security best
practices.
05) Repeat steps no. 1 – 4 for each AWS root account that you
want to examine.
If a user leaves your organization or a user 01) Sign in to the AWS Management Console.
is created but never accessed the 02) Navigate to IAM dashboard at
resources, remove the corresponding IAM https://console.aws.amazon.com/iam/.
user so that the user's access to your 03) In the left navigation panel, choose Users.
resources is removed. 04) Click on the IAM user that you want to examine.
05) On the Summary page, check the user creation date listed
as value for the Creation time attribute.
06) Select Security Credentials tab, search for any active access
keys available inside Access Keys section and verify their
creation date listed in Created column.
07) Compare the IAM user creation date (step no. 5) to each
access key creation date (step no. 6). If the creation dates
match, the key pair was created during initial user setup. If the
Medium access keys that were created at the same time as the selected
IAM user profile do not have a last used date (i.e. Last used
attribute value is set to N/A), the verified IAM access key pair is
considered unnecessary and can be deleted from your AWS
account.
08) Repeat steps no. 4 – 7 for each IAM user created within
your AWS account.
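The date comparison in step 7 can be sketched as follows. The field names (`id`, `created`, `last_used`) are illustrative, not an AWS API shape; "N/A" marks a key that has never been used, as in the console:

```python
def unnecessary_initial_keys(user_created: str, keys) -> list:
    """Return access key IDs that were created together with the IAM user
    (same creation date) and have never been used - these are the
    'unnecessary' keys the audit steps above flag for deletion."""
    return [
        k["id"] for k in keys
        if k["created"] == user_created and k.get("last_used", "N/A") == "N/A"
    ]
```

Keys created later, or keys that have actually been used, are left alone.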

The check addresses a security 01) Sign in to the AWS Management Console.
concern related to misconfigurations 02) Navigate to IAM dashboard at
or inefficiencies. https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Credential report.
04) On the Credential report page, click Download Report to
download the IAM report that lists all your account's users and
the status of their various credentials.
05) Open the downloaded file (i.e.
status_reports_<download_date>.csv) in your preferred CSV
file editor and check the following details, based on the
credentials type:
For IAM user passwords, identify each user with the
password_enabled set to TRUE and check the
password_last_used attribute value. If password_last_used
value is set to N/A (not applicable), verify the
password_last_changed attribute value, otherwise check the
password_last_used value. Based on the verified values (i.e.
human readable dates), you can determine when was the last
time the selected IAM users used their passwords. If one or
more user passwords are older than 90 days, these are
Medium considered unused credentials and are most likely associated
with a compromised or abandoned IAM user account,
therefore these passwords should be deactivated.
For IAM user access keys, identify each user with the
access_key_1_active or access_key_2_active set to TRUE and
check the access_key_x_last_used_date attribute value –
where x is 1 or 2. If access_key_x_last_used_date value is set to
N/A, verify the access_key_x_last_rotated attribute value,
otherwise check the access_key_x_last_used_date value. Based
on these values, you can determine when was the last time the
verified IAM users used their access keys. If one or more access
key sets are older than 90 days, the keys are considered
unused and are most likely associated with a compromised or
abandoned IAM user account, therefore these credentials
should be decommissioned.

06) Repeat steps no. 1 – 5 for each AWS account that you want
to examine for unused IAM user credentials.
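The 90-day staleness rule for passwords described above can be applied mechanically to the downloaded credential report. A sketch (column names follow the IAM credential report; the access-key columns would be handled analogously and are omitted for brevity):

```python
import csv
import io
from datetime import datetime, timedelta

def stale_credentials(report_csv: str, now: datetime, days: int = 90):
    """Scan credential report CSV text and return user names whose enabled
    password was last used (or, failing that, last changed) more than
    `days` days ago - candidates for deactivation per the steps above."""
    stale = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        if row.get("password_enabled") != "true":
            continue
        stamp = row.get("password_last_used")
        if stamp in (None, "", "N/A", "no_information"):
            stamp = row.get("password_last_changed")
        try:
            used = datetime.fromisoformat(stamp)
        except (TypeError, ValueError):
            continue  # skip rows with unparseable or missing dates
        if now - used > timedelta(days=days):
            stale.append(row["user"])
    return stale
```

The fallback to password_last_changed mirrors the N/A handling described in the audit steps.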
The check addresses a security 01) Sign in to the AWS Management Console.
concern related to misconfigurations 02) Navigate to IAM dashboard at
or inefficiencies. https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Roles.
04) Click on the name (link) of the IAM role that you want to
examine.
05) On the Summary page, select the Trust relationships tab
and verify the following details:
Check Trusted entities list to determine if the role allows
cross-account access. If one or more AWS accounts are listed as
trusted entities, these can assume the role, therefore the
selected IAM role provides cross-account access to other AWS
accounts. If Trusted entities lists AWS services as identity
providers, the selected IAM role does not provide cross-
account access and the audit process ends here.
Check Conditions section to determine the conditions that
define how and when trusted entities can assume the IAM role.
The selected cross-account IAM role lacks MFA-based
protection and external ID support if the following conditions
are met:
Medium The conditions listed within the Conditions section do not
include the aws:MultiFactorAuthPresent key (representing Multi-
Factor Authentication protection) or sts:ExternalId key
(representing external ID-based access).
The conditions listed include aws:MultiFactorAuthPresent
key or sts:ExternalId key but the aws:MultiFactorAuthPresent
key value is set to false or sts:ExternalId key does not have any
value set.

06) Repeat steps no. 3 – 5 to determine if other AWS IAM roles,


available in the current region, provide cross-account access
using either MFA-based protection or external IDs support.
07) Change the AWS region from the navigation bar and repeat
the audit process for the other regions.
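The condition check in step 5 can be expressed as a predicate over a trust policy statement. A sketch, assuming the common `Bool`/`StringEquals` condition operators for the aws:MultiFactorAuthPresent and sts:ExternalId keys (other condition operators would need the same treatment):

```python
def lacks_mfa_and_external_id(statement: dict) -> bool:
    """True when an AssumeRole trust statement requires neither MFA nor
    an external ID - the insecure cross-account configuration flagged
    by the audit steps above."""
    cond = statement.get("Condition", {})
    mfa = cond.get("Bool", {}).get("aws:MultiFactorAuthPresent")
    ext = cond.get("StringEquals", {}).get("sts:ExternalId")
    mfa_required = str(mfa).lower() == "true"   # covers "false" and missing
    ext_required = bool(ext)                    # covers empty and missing
    return not (mfa_required or ext_required)
```

Either protection alone is sufficient to pass; a statement with no Condition block at all is flagged.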

The check addresses a security 01) Sign in to the AWS Management Console.
concern related to misconfigurations 02) Navigate to IAM dashboard at
or inefficiencies. https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Groups.
04) Click on the IAM group name that you want to examine.
05) On the IAM group configuration page, select Permissions
tab.
06) Inside Inline Policies section, search for any existing inline
policies. If one or more policies are listed,
the selected group is using inline (embedded) policies for its
Medium access permissions configuration and is not following AWS IAM
best practices.
07) Repeat steps no. 4 – 6 for each IAM group that you want to
examine within your AWS account.
The check addresses a security 01) Sign in to the AWS Management Console.
concern related to misconfigurations 02) Navigate to IAM dashboard at
or inefficiencies. https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Policies.
04) From the Filter dropdown menu, select Customer managed
to list only the customer managed policies available.
05) Click on the name (link) of the IAM policy that you want to
examine.
06) Select Permissions tab and click {} JSON button to access
the selected policy document in JSON format.
07) Inside the policy document box, search for statements with
the following combination of elements: "Effect": "Allow",
"Action": "*", "Resource": "*". If the verified policy utilizes the
specified combination,
the selected IAM customer managed policy allows full
Medium administrative privileges, therefore the policy does not follow
security best practices and should be deactivated (detached
from any IAM users, group or roles).
08) Repeat steps no. 5 – 7 to determine if other IAM customer
managed policies, created within your AWS account, provide
full administrative privileges.
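The step-7 pattern match can be automated over the policy document JSON. A minimal sketch that handles both single-statement and list-form documents, and both string and list values for Action/Resource:

```python
def allows_full_admin(policy_document: dict) -> bool:
    """True if any statement grants Effect=Allow with Action='*' and
    Resource='*' - the full-administrative-privileges combination
    flagged in the audit steps above."""
    statements = policy_document.get("Statement", [])
    if isinstance(statements, dict):      # single-statement documents
        statements = [statements]
    for s in statements:
        action = s.get("Action")
        resource = s.get("Resource")
        actions = action if isinstance(action, list) else [action]
        resources = resource if isinstance(resource, list) else [resource]
        if s.get("Effect") == "Allow" and "*" in actions and "*" in resources:
            return True
    return False
```

Policies matching this predicate should be detached from any IAM users, groups, or roles, as noted above.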

The check addresses a security 01) Sign in to the AWS Management Console.
concern related to misconfigurations 02) Navigate to IAM dashboard at
or inefficiencies. https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Roles.
04) Click on the AWS IAM role that you want to examine.
05) On the IAM role configuration page, select the Permissions
tab from the bottom panel.
06) Inside the Managed Policies and/or Inline Policies
section(s), click the Show Policy link to open the attached IAM policy.
Medium 07) In the Show Policy dialog box, identify the Action element
and its current value. If the element value is set to "*", all
existing actions can be performed by the AWS resource(s)
defined within the policy statement, therefore the IAM policy is
too permissive.

01) Sign in to the AWS Management Console.


02) Navigate to IAM dashboard at
https://console.aws.amazon.com/iam/.
03) In the left navigation panel, select Users.
04) On the Users page, check the list for any available IAM
users. If the users list is empty and a “No records found.”
message is displayed, there are no IAM users created and the
Medium access to your account is made via the root user (not
recommended).
05) Repeat steps no. 1 – 4 for all the AWS accounts that you
want to examine.
If credentials are compromised and MFA is not enabled, an attacker can authenticate and access the resources.

Medium

01) Sign in to the AWS Management Console.
02) Navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
03) In the left navigation panel, select Users.
04) Click on the IAM user name that you want to examine.
05) On the IAM user configuration page, select the Security Credentials tab.
06) Inside the Sign-In Credentials section, check the Console password and Multi-Factor Authentication Device status. If the Console password feature status is set to Yes and Multi-Factor Authentication Device is set to No, the selected IAM user authentication process is not MFA-protected and does not follow AWS IAM security best practices.
07) Repeat steps no. 4 – 6 for each IAM user that you want to examine available in your AWS account.
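The same condition can be checked in bulk from the IAM credential report (downloadable from the Credential report page referenced elsewhere in this section). The `password_enabled` and `mfa_active` columns are standard credential-report fields; the helper below and its sample data are illustrative assumptions:

```python
import csv
import io

def console_users_without_mfa(report_csv: str):
    """Given the contents of an IAM credential report, list users that
    can sign in to the console but have no MFA device active."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [
        row["user"]
        for row in reader
        if row["password_enabled"] == "true" and row["mfa_active"] == "false"
    ]

# Hypothetical two-user report, trimmed to the relevant columns:
report = (
    "user,password_enabled,mfa_active\n"
    "alice,true,false\n"
    "bob,true,true\n"
)
print(console_users_without_mfa(report))  # ['alice']
```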

The check addresses a security concern related to misconfigurations or inefficiencies.

Medium

AWS CLI

The check addresses a security concern related to misconfigurations or inefficiencies.

Medium

01) Sign in to the AWS Management Console.
02) Navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Users.
04) Click on the IAM user name that you want to examine.
05) On the IAM user configuration page, select the Security Credentials tab.
06) Under the Access Keys section, in the Status column, check the current status for each access key associated with the IAM user. If the selected IAM user has more than one access key activated, the user access configuration does not adhere to AWS IAM security best practices and the risk of accidental exposures increases.
07) Repeat steps no. 4 – 6 for each IAM user that you want to examine, available in your AWS account.
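Step 06 can be checked account-wide from the IAM credential report, whose `access_key_1_active` and `access_key_2_active` columns are standard fields. The helper and sample data below are illustrative assumptions:

```python
import csv
import io

def users_with_two_active_keys(report_csv: str):
    """From an IAM credential report, list users that have both access
    keys active at the same time (recommended only mid key rotation)."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [
        row["user"]
        for row in reader
        if row["access_key_1_active"] == "true"
        and row["access_key_2_active"] == "true"
    ]

# Hypothetical report trimmed to the relevant columns:
report = (
    "user,access_key_1_active,access_key_2_active\n"
    "ci-bot,true,true\n"
    "alice,true,false\n"
)
print(users_with_two_active_keys(report))  # ['ci-bot']
```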
The check addresses a security concern related to misconfigurations or inefficiencies.

Medium

01) Sign in to the AWS Management Console.
02) Navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Users.
04) Click on the IAM user name that you want to examine.
05) On the IAM user configuration page, select the Security Credentials tab.
06) Under the SSH keys for AWS CodeCommit section, in the Status column, check the current status for each SSH key assigned to the selected IAM user. If the IAM user has more than one SSH key activated, the user access configuration does not adhere to AWS IAM security best practices and the risk of accidental exposures remains high.
07) Repeat steps no. 4 – 6 for each IAM user that you want to examine, available in your AWS account.

The check addresses a security concern related to misconfigurations or inefficiencies.

Medium

01) Sign in to the AWS Management Console.
02) Navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Users.
04) Click on the IAM user name that you want to examine.
05) On the IAM user configuration page, select the Security Credentials tab.
06) In the Access Keys section, check for any IAM access keys assigned to the selected user. If one or more access key pairs are currently attached, the user is used for AWS API access and the audit process for the selected user stops here; otherwise, continue with the next step.
07) Inside the Sign-In Credentials section, check the Last Used attribute value to determine the user password last used date. If the current value is set to Never, the selected IAM user has never logged in, therefore it is unused and can be safely removed.
08) Repeat steps no. 4 – 7 for each IAM user available in your AWS account.

Reusing the same access key for a long time increases the chance that the key ends up in an attacker's hands, giving them access to the AWS resources.

Low

01) Sign in to the AWS Management Console.
02) Navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Users.
04) Click on the IAM user name that you want to examine.
05) On the IAM user configuration page, select the Security Credentials tab.
06) Under the Access Keys section, in the Created column, check for any keys older than 30 days with the status set to Active. If an active access key is older than 30 days, the key is outdated and needs to be changed in order to secure the access to your AWS resources.
07) Repeat steps no. 4 – 6 for each IAM user that you want to examine, available in your AWS account.
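The 30-day age test from step 06 is a simple date comparison. The sketch below is an illustrative helper under assumed inputs (an ISO 8601 creation timestamp and a fixed audit date), not part of the report's methodology:

```python
from datetime import datetime, timedelta, timezone

def key_is_outdated(created_at, max_age_days=30, now=None):
    """Return True when an active access key created at `created_at`
    (ISO 8601, as shown in the Created column) is older than the
    30-day rotation threshold used by this check."""
    created = datetime.fromisoformat(created_at)
    now = now or datetime.now(timezone.utc)
    return now - created > timedelta(days=max_age_days)

# A key created 45 days before a fixed, illustrative audit date:
audit_date = datetime(2025, 3, 7, tzinfo=timezone.utc)
print(key_is_outdated("2025-01-21T00:00:00+00:00", now=audit_date))  # True
```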
The check addresses a security concern related to misconfigurations or inefficiencies.

Low

01) Sign in to the AWS Management Console.
02) Navigate to your AWS account settings page at https://console.aws.amazon.com/billing/home?#/account/.
03) In the Alternate Contacts section, under the Security category, verify the contact details available. If there are no alternate contact details provided and the Contact status is set to None, the feature is not currently enabled, therefore the security notifications will not be sent to another person or third-party support service if you are unavailable.
The check addresses a security concern related to misconfigurations or inefficiencies.

Low

AWS CLI

If console access is not required, it is best practice to disable it. It is a precautionary measure, as having console access enabled increases the attack surface.

Low

01) Sign in to the AWS Management Console.
02) Navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Users.
04) Click on the IAM user name that you want to examine.
05) On the IAM user configuration page, select the Security Credentials tab.
06) In the Access Keys section, check for any IAM access keys assigned to the selected user. If one or more access key pairs are currently attached, the user is used for AWS API access and the audit process for the selected user stops here; otherwise, continue with the next step.
07) Inside the Sign-In Credentials section, check the user Last Used attribute value to determine its password last used date. If the timestamp displayed is older than 90 days, the selected IAM user is rendered inactive, therefore its access to the AWS resources can be safely disabled.
08) Repeat steps no. 4 – 7 for each IAM user available in your AWS account.
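The 90-day inactivity test in step 07 can be applied to the whole account via the IAM credential report's standard `password_last_used` column. The helper below and its sample data are illustrative assumptions:

```python
import csv
import io
from datetime import datetime, timedelta, timezone

def inactive_console_users(report_csv, now, threshold_days=90):
    """From an IAM credential report, list console users whose password
    was last used more than `threshold_days` ago (step 07's criterion)."""
    reader = csv.DictReader(io.StringIO(report_csv))
    stale = []
    for row in reader:
        last_used = row["password_last_used"]
        if last_used in ("N/A", "no_information"):
            continue  # no console sign-in recorded for this user
        if now - datetime.fromisoformat(last_used) > timedelta(days=threshold_days):
            stale.append(row["user"])
    return stale

# Hypothetical report trimmed to the relevant columns:
report = (
    "user,password_last_used\n"
    "alice,2024-11-01T09:00:00+00:00\n"
    "bob,2025-03-01T09:00:00+00:00\n"
)
audit_date = datetime(2025, 3, 7, tzinfo=timezone.utc)
print(inactive_console_users(report, audit_date))  # ['alice']
```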
The check addresses a security concern related to misconfigurations or inefficiencies.

Low

01) Sign in to the AWS Management Console using the root account credentials.
02) Click on the AWS account name or number available in the upper-right corner of the management console and select My Security Credentials from the dropdown menu.
03) On the Your Security Credentials page, click on the X.509 certificate tab to expand the panel with the X.509 certificates deployed for your root account.
04) Within the X.509 certificates table, in the Status column, check for any certificates with the status set to Active. If the table lists one or more active certificates, there are active X.509 signing certificates deployed for your AWS root user, therefore your root account access configuration does not follow AWS security best practices.
05) Repeat steps no. 1 – 4 for each Amazon Web Services root account that you want to examine for active X.509 certificates.

The check addresses a security concern related to misconfigurations or inefficiencies.

Low

01) Sign in to the AWS Management Console.
02) Navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Users.
04) Click on the IAM user name that you want to examine.
05) On the IAM user configuration page, select the Security Credentials tab.
06) Under the SSH keys for AWS CodeCommit section, in the Uploaded column, check for any SSH keys older than 30 days with the status set to Active. If an active public key is older than 30 days, the key is outdated and needs to be changed in order to secure the access to your private repositories.
07) Repeat steps no. 4 – 6 for each IAM user that you want to examine, available in your AWS account.

A role with the AWSSupportAccess policy must be in place so that, at the time of an incident or when guidance is required, AWS Support can help quickly.

Low

01) Sign in to the AWS Management Console.
02) Navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Roles.
04) Click on the IAM role that you want to examine.
05) On the IAM role configuration page, select the Permissions tab from the bottom panel.
06) Inside the Managed Policies section, check the list of attached policies for a policy named "AWSSupportAccess". If there is no policy named "AWSSupportAccess" currently attached, the selected IAM role does not qualify as the IAM Support Role.
07) Repeat steps no. 4 – 6 to verify the other IAM roles available in your AWS account for Support Role permissions. If the condition applied at step no. 6 is not met for all the verified roles, there is no IAM Support Role currently available within your AWS account.
The check addresses a security concern related to misconfigurations or inefficiencies.

Low

01) Sign in to the AWS Management Console.
02) Navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Groups.
04) Click on the IAM group name that you want to examine.
05) On the IAM group configuration page, select the Users tab.
06) On the Users panel, search for any IAM users attached to the group. If there are no IAM users currently attached, the AWS console will display the following warning message: “This group does not contain any users.”. This means that the selected group is orphaned and most likely not in use anymore.
07) Repeat steps no. 4 – 6 for each IAM group that you want to examine within your AWS account.

When an attacker or any authenticated user with malicious intent is in the AWS environment, Canarytokens act as bait or decoys so that a designated person can be notified of the attack while no actual resources are exposed.

Info

01) Sign in to the AWS Management Console.
02) Navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Users to list the IAM users available in your AWS account.
04) Click on the name (link) of the IAM user that you want to examine.
05) Select the Permissions tab from the dashboard bottom panel, expand the Permissions policies and Permissions boundary sections and check for any IAM policies attached to the selected user. If there are no policies currently attached, the selected IAM identity does not have any permissions set, therefore the user account permissions configuration is Canarytoken-compliant; otherwise the configuration is not compliant.
06) In the left navigation panel, choose Credential report.
07) On the Credential report page, click Download Report to download the IAM report that lists all your account's users and the status of their various credentials.
08) Open the downloaded file (i.e. status_reports_<report-download-date>.csv) in your CSV file editor and check the following details for the IAM user selected earlier in the process:
If the password_enabled attribute is set to FALSE and the password_last_used value is set to N/A (not applicable), the AWS Management Console access is not enabled for the selected user, therefore the IAM user account configuration is compliant; otherwise the configuration is not compliant with the rule requirements.
If the access_key_1_active and/or access_key_2_active values are set to TRUE, the selected IAM user has one or more access keys attached, therefore the verified user account configuration is Canarytoken-compliant; otherwise the configuration is not compliant.
09) If the user account configuration is not compliant for both step no. 5 and 8 (a and b), the access keys associated with the selected AWS IAM user account are not used as Canarytokens.
10) Repeat steps no. 4 – 9 for each Amazon IAM user available in your AWS account. If there are no IAM user accounts with Canarytoken-compliant configurations, Canary access tokens
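The credential-report half of the Canarytoken profile (step 08: console access disabled, at least one active access key) can be screened automatically. The helper below and its sample data are illustrative assumptions, not part of the report's toolset:

```python
import csv
import io

def canary_candidate_users(report_csv: str):
    """From an IAM credential report, list users matching step 08's
    Canarytoken profile: console password disabled and at least one
    active access key (a decoy key whose use should only raise alerts)."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [
        row["user"]
        for row in reader
        if row["password_enabled"] == "false"
        and (row["access_key_1_active"] == "true"
             or row["access_key_2_active"] == "true")
    ]

# Hypothetical report trimmed to the relevant columns:
report = (
    "user,password_enabled,access_key_1_active,access_key_2_active\n"
    "canary-user,false,true,false\n"
    "alice,true,true,false\n"
)
print(canary_candidate_users(report))  # ['canary-user']
```

A candidate surfaced this way still needs the step 05 check (no attached permissions) before it counts as Canarytoken-compliant.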
The check addresses a security concern related to misconfigurations or inefficiencies.

Info

01) Sign in to the AWS Management Console.
02) Navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
03) In the left navigation panel, choose Credential report.
04) On the Credential report page, click Download Report to download the AWS IAM report that lists all your account's users and the status of their various credentials.
05) Open the downloaded file (i.e. status_reports_<download_date>.csv) in your preferred CSV file editor and check the value available within the password_next_rotation column for each listed AWS IAM user. The password_next_rotation attribute describes the date and time when the IAM user is required to set a new password according to the password policy used by the account. The value for the AWS account (root user) is always set to not_supported. If your AWS account does have a password policy that requires password rotation, ensure that the IAM user passwords are changed according to the current password policy and skip the next steps within this section. If your AWS account does not have a password policy implemented yet, the password_next_rotation attribute value is set to N/A and you need to continue the audit process to get the IAM credentials age.
06) Within the credential report file (i.e. status_reports_<download_date>.csv), check the value available in the password_last_changed attribute column for each AWS IAM user. The password_last_changed attribute describes the date and time when an IAM user password was last set, in ISO 8601 date-time format. If an existing IAM user does not have a password, the value for this attribute should be N/A. Also, the value for the AWS account (root) is always set to not_supported.
07) Based on the data available for the password_last_changed attribute, determine the age of your IAM user passwords. If the validity period for one or more AWS IAM user passwords is about to end soon, follow the steps outlined within the Remediation/Resolution section to reset these credentials in order to follow best practices and prevent their expiration.
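The age computation in step 07 can be sketched directly over the report's standard `password_last_changed` column. The helper below and its sample data are illustrative assumptions:

```python
import csv
import io
from datetime import datetime, timezone

def password_ages_in_days(report_csv, now):
    """From an IAM credential report, compute each user's password age in
    days from password_last_changed (step 06), skipping users without a
    password (N/A) and the root entry (not_supported)."""
    reader = csv.DictReader(io.StringIO(report_csv))
    ages = {}
    for row in reader:
        changed = row["password_last_changed"]
        if changed in ("N/A", "not_supported"):
            continue
        ages[row["user"]] = (now - datetime.fromisoformat(changed)).days
    return ages

# Hypothetical report trimmed to the relevant columns:
report = (
    "user,password_last_changed\n"
    "<root_account>,not_supported\n"
    "alice,2025-02-05T12:00:00+00:00\n"
    "svc-deploy,N/A\n"
)
audit_date = datetime(2025, 3, 7, 12, 0, tzinfo=timezone.utc)
print(password_ages_in_days(report, audit_date))  # {'alice': 30}
```

Comparing these ages against the account's password-policy maximum identifies credentials approaching expiry.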
Info

01) Sign in to the AWS Management Console.
02) Navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
03) In the left navigation panel, select Users.
04) Click on the IAM user name that you want to examine.
05) On the IAM user configuration page, select the Permissions tab.
06) Inside the Managed Policies section, search for any access policies available. If one or more policies are currently attached, the selected user permissions configuration is not following AWS IAM best practices.
07) Repeat steps no. 4 – 6 for each IAM user that you want to examine available in your AWS account.
Info

01) Sign in to the AWS Management Console.
02) Navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
03) In the left navigation panel, select Users.
04) Click on the IAM user that you want to examine.
05) On the IAM user configuration page, select the Security Credentials tab.
06) Inside the Access Keys section, check for any access keys associated with the selected IAM user.
07) Inside the Sign-In Credentials section, check the password configuration status for the selected IAM user.
08) If the user has one or more access keys assigned and its Password status is set to Yes, the selected user access configuration is not following the IAM security best practices and the risk of exposing access credentials increases.
09) Repeat steps no. 4 – 8 for each IAM user that you want to examine available in your AWS account.
Recommendation Patch Priority Patch Status
Ensure that your AWS IAM users are using a
strong password policy to define password
requirements such as minimum length,
expiration date, whether it requires a certain
pattern, and so forth.

Quick Win Already Compliant

Ensure that all your SSL/TLS certificates, managed by AWS IAM, have a strong key length of 2048 or 4096 bit in order to adhere to security best practices and protect them from cryptographic algorithm hacking attacks using brute-force methods.
Short Term Not Compliant

Ensure your AWS account is configured to use security challenge questions so Amazon can use these questions to verify your identity in case your account becomes compromised or if you just need to contact their customer service for help.
Long Term Not Compliant

Ensure that hardware Multi-Factor Authentication (MFA) is enabled for your root account in order to secure the access to your AWS resources and adhere to Amazon security best practices.
Quick Win Not Compliant

Ensure that none of the server certificates managed by AWS IAM were compromised by the Heartbleed bug, meaning that none of the SSL/TLS certificates were uploaded before April 1st 2014, when the security bug was publicly disclosed.
Quick Win N/A

To secure your AWS environment and adhere to IAM best practices, ensure that the AWS account (root user) is not using access keys to perform API requests to access resources or billing information.
Quick Win Not Compliant

Ensure that the AWS root account credentials have not been used within the past 30 days (default threshold) to access your Amazon Web Services account in order to keep the root account usage minimised.
Short Term Not Compliant

Ensure that Multi-Factor Authentication (MFA) is enabled for your root account in order to secure your AWS environment and adhere to IAM security best practices.
Quick Win Already Compliant


Ensure that no Amazon IAM access keys are
created during initial setup for all IAM users
that have a console password.

Short Term Not Compliant

Disable or remove any unused Amazon IAM user credentials such as access keys and passwords in order to protect your AWS resources against unapproved access.
Quick Win Not Compliant

Ensure that Amazon IAM roles used to establish a trusted relationship between your AWS account and a third-party entity (also known as cross-account access roles) are using Multi-Factor Authentication (MFA) or external IDs to secure the access to your resources and to prevent "confused deputy" attacks.
Not Compliant

Ensure that all your IAM groups are using managed policies (AWS and customer managed policies) instead of inline policies (embedded policies) to better control and manage the access permissions to your AWS account.
Short Term Already Compliant


Ensure there are no Amazon IAM policies
(inline and customer managed) that allow full
administrative privileges available in your
AWS account, in order to promote the
principle of least privilege and provide the
users, groups and roles that use these
policies the minimal amount of access
required to perform their tasks.

Quick Win Already Compliant

Ensure that the access policies attached to your IAM roles adhere to the principle of least privilege by giving the roles the minimal set of actions required to perform their tasks successfully.
Quick Win Already Compliant

Ensure that the access to your AWS services and resources is made only through individual IAM users instead of the root account.
Quick Win Not Compliant


Ensure that all users with AWS Console
access have Multi-Factor Authentication
(MFA) enabled in order to secure your AWS
environment and adhere to IAM security best
practices.

Quick Win Not Compliant

Ensure that your SSL/TLS certificates stored in AWS IAM are renewed 7 (seven) days before their validity period ends.
Short Term Not Compliant

Identify and deactivate any unnecessary IAM access keys as a security best practice. AWS allows you to assign a maximum of two active access keys, but this is recommended only during the key rotation process.
Short Term Not Compliant


Identify and deactivate any unnecessary IAM
SSH public keys used to authenticate to AWS
CodeCommit repositories.

Short Term Not Compliant

Identify and remove any unused AWS IAM users, which are not designed for API access, as an extra security measure for protecting your AWS resources against unapproved access.
Short Term Not Compliant

Ensure that all your IAM user access keys are rotated every month in order to decrease the likelihood of accidental exposures and protect your AWS resources against unauthorized access.
Long Term Not Compliant


Ensure your AWS account is configured to
use alternate contact details for security
communications in case you are not
available.

Long Term Not Compliant

Ensure that all the expired SSL/TLS certificates stored in AWS IAM are removed in order to follow IAM security best practices.
Long Term Not Compliant

Identify any inactive IAM users, which are not designed for API access, and disable their access as an extra security measure for protecting your AWS resources against unauthorized access.
Long Term Not Compliant


To secure your Amazon Web Services
account and adhere to security best
practices, ensure that your AWS root user is
not using X.509 certificates to perform SOAP-
protocol requests to AWS services.

Long Term N/A

Ensure that all your IAM SSH public keys are rotated every month in order to decrease the likelihood of accidental exposures and protect your AWS CodeCommit repositories against unauthorized access.
Long Term Not Compliant

Ensure there is an active IAM Support Role available within your AWS account. A Support Role is an IAM role configured to allow authorized users to manage incidents with AWS Support.
Long Term Already Compliant


Ensure that all the IAM groups within your
AWS account are currently used and have at
least one user attached. Otherwise, remove
any orphaned (unused) IAM groups in order
to prevent attaching unauthorized users.

Long Term Already Compliant

Ensure one IAM user is created as a Canarytoken within your AWS account in order to implement proactive security defence by using threat deception technology.
Long Term N/A


Identify the age of your Amazon IAM user
passwords and ensure that these credentials
are reset before their validity period ends in
order to prevent password expiry.

Long Term Not Compliant

Ensure that the existing IAM policies are attached only to groups in order to efficiently assign permissions to all the users within your AWS account.
Long Term Already Compliant


Ensure that your existing IAM users are
either being used for API access or for
console access in order to reduce the risk of
unauthorized access in case their credentials
(access keys or passwords) are compromised.

Long Term Not Compliant


Remark
Patched
Not Patched
Not Applicable
Already Compliant
EKS
HIGH 2
MEDIUM 5
LOW 3
INFO 0

Sr. No. Check Name Description


1 Enforce IAM Least Privilege Access
The principle of least privilege ensures that users and services are only granted the minimal set of permissions required to perform their tasks. This prevents privilege escalation and unauthorized access. By reviewing IAM roles and policies regularly, over-permissioned roles can be identified and minimized, reducing the potential attack surface.

2 Restrict Worker Node IAM Roles
Node IAM roles allow worker nodes to interact with other AWS services. It’s crucial to ensure that these roles are configured to grant the minimum required permissions. Overly permissive IAM roles for nodes can lead to an elevated risk if a node is compromised. Regular auditing of these roles ensures that nodes do not have excessive permissions that could be exploited by attackers.

3 Enable Control Plane Logging
Control plane logging enables the capture of API requests, audit logs, and authentication logs. It helps in tracking user activities, detecting potential misconfigurations, and monitoring the health of the Kubernetes cluster. Additionally, it aids in compliance audits by maintaining a clear log of all activities, and enables detailed analysis in case of security incidents.

4 Restrict Public Access to EKS Endpoint
Restricting public access to the EKS control plane endpoint is a critical measure to reduce the attack surface. By limiting access to trusted networks or internal systems, it prevents attackers from exploiting EKS APIs. This ensures that the management of the cluster is restricted to secure, internal networks and that external attackers cannot gain unauthorized access via the internet.

5 Enable Kubernetes RBAC
Kubernetes RBAC (Role-Based Access Control) defines and enforces roles and permissions in the Kubernetes cluster. It helps ensure that only authorized personnel and services can access and modify resources. Proper RBAC policies prevent unauthorized access to sensitive data, maintain control over who can perform actions, and minimize the risk of privilege escalation.

6 Enable AWS Security Groups for Pods
AWS Security Groups for Pods provide network isolation within the Kubernetes cluster by controlling traffic at the pod level. By defining inbound and outbound rules, you can ensure that only trusted pods or services can communicate with each other. This approach enhances the security posture of the cluster by preventing unauthorized network access at a granular level.

7 Enable Kubernetes Audit Logging
Kubernetes audit logs track all activities and changes in the cluster. Enabling audit logs provides detailed records of who did what, when, and where in the cluster. This is vital for detecting anomalous activities and identifying potential security incidents. Audit logs also provide valuable information for compliance, troubleshooting, and security investigations.

8 Encrypt EKS Data at Rest
Encryption at rest is an essential security feature to protect sensitive data stored within the cluster. By enabling encryption for storage volumes, databases, and other components, data is protected even if the underlying hardware or storage system is compromised. Encryption ensures compliance with data protection laws and adds a critical layer of protection for sensitive information.

9 Implement Pod Security Policies
Pod Security Policies (PSPs) enable administrators to define security controls that restrict which pods can be deployed and what actions they can perform. These policies help enforce best practices such as avoiding privilege escalation, preventing pods from running as root, and controlling which capabilities pods can use. PSPs are essential for maintaining a secure and well-defined cluster environment.

10 Use AWS Identity Federation for EKS
AWS Identity Federation allows the use of external identity providers such as AWS Cognito, Active Directory, or third-party services for managing user access to EKS. This simplifies user management by centralizing authentication and authorization processes. It also provides more flexibility in controlling user access based on external identity sources, ensuring that access to EKS is secure and manageable.
Category Risk Level Impact
Identity & Access Management
High
When IAM roles are not configured with the principle of least privilege in mind, users and services may gain more permissions than necessary, leading to unauthorized access or privilege escalation. This could result in data leaks, unauthorized changes, or even a full compromise of the cluster. This is a key contributor to security incidents in cloud environments.

Data Protection
High
Over-permissioned IAM roles for worker nodes increase the risk of an attacker gaining unauthorized access if a node is compromised. If a node has access to critical AWS resources or services with excessive permissions, an attacker could escalate their privileges and launch a broader attack. Ensuring minimal permissions limits the damage that could be done in case of a compromise.

Logging & Monitoring
Medium
Without control plane logging, it becomes difficult to track malicious activities, identify misconfigurations, or investigate incidents. Without proper logs, identifying the root cause of security breaches or operational failures becomes a significant challenge. Additionally, the lack of logging may result in non-compliance with security and audit standards.

Network Security
Medium
Allowing public access to the EKS API endpoint increases the risk of unauthorized access, potential DDoS attacks, and exploitation of vulnerabilities in the control plane. Attackers may exploit publicly exposed services to compromise the cluster or steal sensitive data. This significantly increases the security risk and potential attack surface of your environment.

Identity & Access Management
Medium
Without properly configured RBAC policies, unauthorized users or services may gain access to critical cluster resources. This could lead to privilege escalation, where users gain higher levels of access than intended, allowing them to execute malicious actions or gain control of sensitive resources. Insufficient RBAC configurations can significantly undermine security in Kubernetes environments.

Network Security
Medium
Not enforcing AWS Security Groups at the pod level increases the risk of lateral movement within the cluster, where malicious pods can communicate with other pods or services that they should not. This could lead to data leakage, unauthorized access to services, or compromise of other parts of the infrastructure.

Container Security
Medium
Audit logs are essential for tracking user actions and identifying unauthorized or malicious behaviour within the cluster. Without audit logging, it becomes challenging to detect security incidents or operational failures. Additionally, audit logs are required for compliance with security standards and regulatory requirements, making them essential for governance and risk management.

Logging & Monitoring
Low
Without encryption at rest, sensitive data stored in the EKS cluster is vulnerable to unauthorized access. If an attacker gains physical access to the storage or backup systems, unencrypted data could be exposed. Encryption at rest mitigates this risk and ensures that data remains confidential, even in the case of a breach.

Access Control
Low
Insecure pod configurations can lead to privilege escalation or unauthorized access to sensitive resources. Without PSPs, malicious or misconfigured pods could bypass security measures, run with excessive privileges, or interact with sensitive data. PSPs are critical to enforcing security boundaries and ensuring only trusted and secure pods are deployed.

Threat Protection
Low
Relying on external identity providers for access control provides centralized user management and strengthens security by leveraging secure authentication methods. Without identity federation, managing user roles and permissions can become fragmented and difficult to control, increasing the risk of misconfigurations and unauthorized access.
Navigation Path/ Location Recommendation
1) Review IAM roles assigned to EKS users and workloads.
2) Use AWS IAM Access Analyzer to detect excessive permissions.

Review and refine IAM roles and policies regularly to follow the principle of least privilege. Use IAM Access Analyzer to detect over-permissioned roles and adjust them as needed. Implement MFA (Multi-Factor Authentication) for IAM users and service accounts to add an additional layer of security.

1) Enable AWS KMS encryption for Kubernetes secrets.
2) Configure encryption at the EBS volume level for worker nodes.

Regularly audit and review IAM roles for worker nodes to ensure that they only have the permissions necessary for node management. Use the least privileged IAM roles for nodes, and avoid granting broad permissions such as full S3 or EC2 access. Implement granular permission management to reduce the risk of node compromise.

1) Navigate to the AWS Console.
2) Go to EKS Clusters.
3) Enable logging for API server, audit, authenticator, controller manager, and scheduler logs.

Enable control plane logging in the AWS Console and ensure logs are stored in an S3 bucket with restricted access. Use CloudWatch for real-time monitoring and analysis of logs to quickly identify issues or potential security incidents. Regularly audit and review logs to ensure security compliance and facilitate troubleshooting.

1) Open the AWS Console.
2) Go to EKS Cluster settings.
3) Disable public access and configure private endpoints via VPC.

Restrict public access to the EKS API endpoint by configuring VPC endpoint policies or using AWS PrivateLink. Limit access to only trusted IP ranges, and enable encryption in transit to protect the API from unauthorized access. Use IAM policies to control who can manage the EKS control plane and ensure that sensitive resources are not exposed to the public internet.

1) Verify RBAC policies by checking Ensure that RBAC policies are


Kubernetes roles and role bindings. implemented with the principle of least
2) Assign permissions based on namespace privilege in mind. Regularly review and
and resource needs. update RBAC rules to ensure that users
and services have only the necessary
permissions to perform their tasks.
Consider using namespaces to isolate
resources and minimize the risk of
privilege escalation.

1) Ensure that AWS VPC CNI plugin supports Apply AWS Security Groups for Pods
security groups per pod. and ensure that they define strict
2) Configure security groups for pod- ingress and egress rules. Regularly audit
specific traffic control. security group configurations to ensure
they are correctly applied to all pods.
Use tools like Calico or Cilium to enforce
network policies within the Kubernetes
cluster.

1) Enable PodSecurityPolicy admission Enable Kubernetes audit logging and


controller. configure it to capture all user actions
2) Define policies to restrict privileged and changes to the cluster. Use tools
containers and enforce security controls. like Fluentd or ELK (Elasticsearch,
Logstash, Kibana) for log aggregation
and analysis. Review audit logs regularly
to detect unusual or unauthorized
activities and ensure compliance with
internal policies and regulations.
1) Configure CloudWatch Logs Agent on Enable encryption for all data at rest,
worker nodes. including EBS volumes, S3 buckets, and
2) Collect logs from /var/log/kube-audit.log RDS databases. Use AWS-managed keys
and send them to CloudWatch. or create your own encryption keys with
AWS KMS (Key Management Service).
Ensure that all backups and persistent
data are encrypted to protect sensitive
information.

1) Disable public access to Kubernetes Implement Pod Security Policies (PSPs)


Dashboard. that restrict the use of privileged
2) Use RBAC to assign access only to containers, root users, and dangerous
necessary users. capabilities. Define strict requirements
for the containers, such as the need for
specific images or security
configurations. Review and enforce
these policies regularly to ensure cluster
security.

1) Enable AWS Shield Advanced protection Integrate EKS with AWS Identity
for EKS. Federation to manage user access via
2) Monitor network traffic patterns for external identity providers. Use AWS
anomalies. Cognito or Active Directory for identity
management and ensure that user roles
and permissions are synchronized
between the identity provider and EKS.
This improves user management and
security by ensuring consistent access
control policies.
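The control plane logging remediation above can be sketched as the request body that the EKS UpdateClusterConfig API expects (e.g. via boto3's `eks` client). This is a minimal illustration only; the cluster name is a hypothetical placeholder, not a resource from this review.

```python
import json

# Hypothetical cluster name, for illustration only.
CLUSTER_NAME = "twc-eks-cluster"

# The five control-plane log types named in this check.
LOG_TYPES = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

def control_plane_logging_config(cluster_name, log_types):
    """Build the payload that enables EKS control-plane logging.

    The shape matches eks:UpdateClusterConfig, so it could be passed as
    client.update_cluster_config(**payload) once credentials are in place.
    """
    return {
        "name": cluster_name,
        "logging": {
            "clusterLogging": [
                {"types": sorted(log_types), "enabled": True},
            ]
        },
    }

payload = control_plane_logging_config(CLUSTER_NAME, LOG_TYPES)
print(json.dumps(payload, indent=2))
```

Logs enabled this way are delivered to CloudWatch Logs, where retention and restricted access should be configured as the recommendation describes.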
Patch Priority Patch Status Remark
Quick Win Not Compliant
Quick Win Not Compliant
Short Term Already Compliant
Quick Win Not Compliant
Short Term Not Compliant
Short Term Already Compliant
Short Term Not Compliant
Long Term Not Compliant
Long Term Already Compliant
Long Term Not Compliant
HIGH 2
MEDIUM 0
LOW 8
INFO 0
ECS
Sr. No. Check Name Description
1 Enable Task-Level IAM Roles Task-level IAM roles allow ECS tasks to
assume specific roles with defined
permissions, instead of using ECS
instance roles. This reduces the risk of
over-permissioned IAM roles and aligns
with the least-privilege principle.
2 Apply Security Groups to ECS services that are backed by EC2
ECS Services instances can be secured with AWS
Security Groups, which define network
traffic rules for instances. Security
groups help prevent unauthorized
communication and attacks.
3 Enable ECS Service ECS Service Discovery automatically
Discovery manages and assigns DNS names to
services within ECS tasks and services,
enabling easy communication between
ECS services. It helps in automating
service discovery and simplifies DNS
management.
4 Use ECS Fargate for Task ECS Fargate is a serverless compute
Isolation engine for containers that runs ECS tasks
without the need to provision or
manage the underlying EC2 instances. It
ensures task isolation and reduces
operational overhead.
5 Enable Container-Level AWS Secrets Manager and Systems
Secrets Management Manager Parameter Store can securely
store and manage secrets, such as
database passwords, API keys, or tokens,
and provide them to ECS containers at
runtime.
6 Enable ECS Cluster-Level ECS provides support for encrypting
Encryption container data at rest, such as Amazon
EBS volumes, and in transit, such as
communication between ECS tasks. This
ensures sensitive data is protected in the
event of a breach.
7 Enable ECS Task Definitions Versioning ECS task definitions ensures
Versioning that each revision of your containerized
application is well-documented and can
be rolled back if necessary. This adds
flexibility in managing container
deployments and updates.
8 Implement Container ECS integrates with Amazon ECR to
Vulnerability Scanning provide vulnerability scanning for
container images before they are
deployed. This ensures that known
vulnerabilities are identified and
mitigated before container deployment.
9 Enable Logging and Enabling ECS logging and monitoring
Monitoring with with Amazon CloudWatch provides
CloudWatch visibility into containerized application
performance and health. It collects logs,
metrics, and events for analysis and
troubleshooting.
10 Use ECS Auto Scaling ECS Auto Scaling automatically adjusts
the number of ECS tasks running in
response to application load, ensuring
optimal performance and resource
utilization.
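The auto scaling behaviour described in check 10 can be sketched as a target-tracking policy for Application Auto Scaling. All names are hypothetical placeholders; this shows the request shape of application-autoscaling:PutScalingPolicy, not a configuration taken from the reviewed environment.

```python
import json

# Hypothetical cluster/service identifier for illustration.
RESOURCE_ID = "service/twc-cluster/twc-web-service"

def cpu_target_tracking_policy(resource_id, target_cpu=60.0):
    """Build a policy that scales an ECS service's DesiredCount so that
    average CPU utilization tracks the given target percentage."""
    return {
        "PolicyName": "ecs-cpu-target-tracking",
        "ServiceNamespace": "ecs",
        "ResourceId": resource_id,
        "ScalableDimension": "ecs:service:DesiredCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_cpu,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
            "ScaleOutCooldown": 60,   # react quickly to traffic spikes
            "ScaleInCooldown": 300,   # scale in more conservatively
        },
    }

print(json.dumps(cpu_target_tracking_policy(RESOURCE_ID), indent=2))
```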
Category Risk Level Impact
Identity & Access Without task-level IAM roles, ECS tasks
Management rely on instance roles, which may
provide excessive permissions for the
High tasks running within ECS. This could
lead to unauthorized access or misuse
of AWS services.
Networking & Security Without security groups, ECS instances
may allow unrestricted access to their
ports, leaving them vulnerable to
High network-based attacks, unauthorized
communication, or lateral movement
within the infrastructure.
Networking & Without ECS Service Discovery, tasks
Communication and services might be manually
configured, leading to configuration
errors, hard-to-manage DNS setups, or
Low failure to resolve service names.
Compute Security Using EC2 instances for ECS tasks
increases the attack surface by
requiring the management of EC2
Low instances, whereas Fargate reduces this
surface by abstracting infrastructure
management.
Data Security Without secret management, sensitive
information such as API keys or
credentials can be hardcoded into
Low containers, leading to security risks if
the container image is exposed or
compromised.
Data Protection Without encryption at the cluster level,
data stored in ECS volumes or
communicated between ECS tasks
Low could be exposed in the event of a
breach, leading to data leaks or loss.
Deployment Management Without versioning, you risk running
outdated or misconfigured containers,
leading to broken deployments or
incompatibility issues with other
Low services. Task definition versioning
ensures that only validated container
versions are deployed.
Image Security Without vulnerability scanning,
containers might run with outdated or
vulnerable libraries, increasing the risk
of exploitation through known
Low vulnerabilities.
Monitoring & Logging Without proper logging and monitoring,
it's difficult to track the performance
and health of ECS services. Failure to
Low monitor could lead to undetected
failures, downtime, or security
incidents.
Performance & Scalability Without auto scaling, ECS services may
experience performance degradation
during traffic spikes or waste resources
Low when demand is low. This can lead to
inefficiency and potential service
disruptions.
Navigation Path/ Location Recommendation
1) Create IAM role with required Use task-level IAM roles to grant tasks only the
permissions. permissions they need to interact with AWS
2) Attach IAM role to ECS task definition. services. Regularly review these roles for the
least privilege.
1) Create security group in VPC. Apply security groups to ECS services and their
2) Attach security group to ECS service and associated instances to control ingress and
instances. egress traffic. Regularly audit and update
security group rules for compliance.
1) Navigate to ECS Console. Enable ECS Service Discovery to automatically
2) Go to Service Discovery settings. assign DNS names to ECS services. This allows
3) Enable Service Discovery. seamless communication between ECS tasks
and services.
1) Choose ECS Fargate as the launch type. Use ECS Fargate for task isolation and to avoid
2) Define task definition with Fargate managing EC2 instances. This reduces the risk
configuration. of vulnerability exposure in the underlying
infrastructure.
1) Store secrets in AWS Secrets Manager or Use AWS Secrets Manager or Systems
SSM Parameter Store. Manager Parameter Store to securely manage
2) Reference secrets in ECS task definitions. and provide secrets to ECS containers during
runtime. Avoid hardcoding secrets in container
images.
1) Enable encryption for ECS volumes. Enable encryption for data at rest and in
2) Use encryption for data transfer with transit within ECS. Use AWS Key Management
TLS. Service (KMS) for key management.
1) Define task definition version. Enable ECS task definition versioning to
2) Update ECS service with the new task maintain a history of task definitions. Use
definition version. versioned deployments to ensure consistency
across environments and roll back when
necessary.
1) Enable Amazon ECR image scanning. Enable ECS container vulnerability scanning in
2) Configure ECS service to pull from scanned Amazon ECR to identify security issues in
repositories. images before deployment. Always use the
latest secure versions of images.
1) Configure CloudWatch Logs for ECS Enable logging and monitoring with
services. CloudWatch to track ECS service health,
2) Set up CloudWatch Metrics for container application performance, and resource usage.
performance. Set up CloudWatch Alarms to detect issues
early.
1) Define ECS service auto scaling policies. Configure ECS Auto Scaling to automatically
2) Set scaling triggers based on CPU, scale tasks based on resource demand. Ensure
memory, or custom metrics. scaling policies are in place to handle traffic
surges and minimize resource wastage.
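Several of the ECS recommendations above (task-level IAM role, runtime secrets, CloudWatch logging) meet in the task definition. The sketch below shows that shape with hypothetical account IDs and ARNs; it illustrates the fields involved, not the reviewed workload's actual definition.

```python
import json

# All ARNs below are hypothetical placeholders.
TASK_ROLE_ARN = "arn:aws:iam::123456789012:role/twc-app-task-role"
EXEC_ROLE_ARN = "arn:aws:iam::123456789012:role/twc-ecs-exec-role"
DB_SECRET_ARN = "arn:aws:secretsmanager:ap-south-1:123456789012:secret:twc/db-password"

task_definition = {
    "family": "twc-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "256",
    "memory": "512",
    # Task-level role: permissions for the application itself (check 1).
    "taskRoleArn": TASK_ROLE_ARN,
    # Execution role: lets ECS pull the image and read the secret at launch.
    "executionRoleArn": EXEC_ROLE_ARN,
    "containerDefinitions": [
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.ap-south-1.amazonaws.com/twc-app:latest",
            # Secrets are injected at runtime, never baked into the image (check 5).
            "secrets": [
                {"name": "DB_PASSWORD", "valueFrom": DB_SECRET_ARN},
            ],
            # Container logs go to CloudWatch Logs (check 9).
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/twc-app",
                    "awslogs-region": "ap-south-1",
                    "awslogs-stream-prefix": "app",
                },
            },
        }
    ],
}

print(json.dumps(task_definition, indent=2))
```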
Patch Priority Patch Status Remark
Quick Win Already Compliant
Quick Win Already Compliant
Long Term Not Compliant
Long Term Not Compliant
Long Term N/A
Long Term Not Compliant
Long Term N/A
Long Term N/A
Long Term Already Compliant
Long Term Not Compliant
HIGH 1
MEDIUM 0
LOW 9
INFO 0
Lambda
Sr. No. Check Name Description
1 Secure Lambda API When Lambda functions are integrated
Gateway Integration with API Gateway, ensure that API keys,
JWT tokens, or IAM authentication are
used to secure access to the functions.
2 Implement Lambda Lambda execution roles define the AWS
Execution Role with Least resources and services that Lambda
Privilege functions can access. Using the least-
privilege principle, only grant the
minimum permissions necessary.
3 Enable Lambda VPC Access Lambda functions can access resources
within a Virtual Private Cloud (VPC). By
enabling VPC access, Lambda functions
can securely interact with database
instances, private APIs, and other VPC
resources.
4 Enable Encryption for Lambda allows environment variables to
Lambda Environment store sensitive data, such as API keys or
Variables database credentials. Enabling encryption
ensures that sensitive data is securely
stored.
5 Use AWS Lambda Layers AWS Lambda Layers provide a way to
Securely manage dependencies. Only use trusted
layers from reliable sources to ensure no
malicious code is included.
6 Set Lambda Timeout and Setting appropriate timeouts and memory
Memory Limits limits for Lambda functions ensures they
Appropriately don’t run indefinitely or consume
excessive resources. This helps in
mitigating resource exhaustion or abuse.
7 Monitor Lambda with CloudWatch Logs allows you to capture
CloudWatch Logs and monitor Lambda execution logs. This
helps to detect failures, performance
issues, and any unusual behaviour.
8 Enable Lambda Concurrency limits for Lambda functions
Concurrency Limits help prevent excessive resource
consumption and ensure that no function
consumes too many resources during
sudden traffic surges.
9 Use Dead Letter Queues Dead Letter Queues allow failed Lambda
(DLQs) for Lambda executions to be captured and analyzed.
This helps ensure that any failed
invocations do not cause data loss or
service disruption.
10 Review Lambda Permissions Lambda functions should be regularly
Regularly reviewed to ensure that only the
necessary permissions are assigned,
especially as new services are introduced
or code is updated.
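The least-privilege execution role of check 2 can be sketched as an IAM policy document scoped to one log group and one DynamoDB table. Both ARNs are hypothetical placeholders; the point is the absence of wildcards, not the specific resources.

```python
import json

# Hypothetical ARNs: one log group and one table this function needs.
LOG_GROUP_ARN = "arn:aws:logs:ap-south-1:123456789012:log-group:/aws/lambda/twc-fn:*"
TABLE_ARN = "arn:aws:dynamodb:ap-south-1:123456789012:table/twc-orders"

least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "WriteOwnLogs",
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": LOG_GROUP_ARN,
        },
        {
            "Sid": "ReadSingleTable",
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": TABLE_ARN,
        },
    ],
}

# Guard: a least-privilege policy should contain no wildcard actions
# and no "*" resource.
for stmt in least_privilege_policy["Statement"]:
    assert "*" not in stmt["Action"] and stmt["Resource"] != "*"

print(json.dumps(least_privilege_policy, indent=2))
```

IAM Access Analyzer, as the recommendation notes, can then flag any drift back toward broader permissions.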
Category Risk Level Impact
API Security Without securing API
Gateway endpoints,
unauthorized users may
High invoke Lambda functions,
leading to data breaches or
abuse.
Identity & Access Over-permissioned Lambda
Management roles can result in
unauthorized access to
sensitive data or other AWS
Low resources if a Lambda
function is compromised.
Network Security Without VPC access,
Lambda functions can’t
securely access private
resources within your VPC,
Low exposing the application to
data leaks or unauthorized
access.
Data Security Without encryption,
sensitive information in
environment variables can
Low be exposed if Lambda
configuration is leaked or
compromised.
Code & Dependency Using untrusted layers can
Security introduce vulnerabilities or
malicious code into the
Low Lambda function, which
could lead to data leaks or
security breaches.
Resource Management Without proper limits,
Lambda functions may run
indefinitely or consume
more memory than
necessary, leading to
Low wasted resources or
potential denial-of-service
attacks.
Monitoring & Logging Without logging, you lose
visibility into Lambda
executions, making it
Low difficult to identify issues,
troubleshoot, or detect
malicious activities.
Performance & Without concurrency limits,
Scalability Lambda functions can scale
infinitely, leading to
resource exhaustion and
Low possible service
degradation during traffic
spikes.
Failure Handling Without DLQs, failed
Lambda executions may
result in silent failures,
Low leading to data loss, missed
events, or undetected
issues.
Security & Compliance If Lambda permissions are
not reviewed regularly,
excessive permissions can
be granted over time,
Low potentially increasing the
attack surface and risk of
exploitation.
Navigation Path/ Location Recommendation
1) Enable API Gateway authentication via IAM, API Secure Lambda API Gateway
keys, or JWT tokens. integrations by enforcing authentication
2) Secure API Gateway endpoints with proper and authorization mechanisms like IAM
authorization. roles or JWT tokens.
1) Create an IAM policy with minimal permissions. Always use the least privilege model by
2) Attach the IAM policy to the Lambda function granting only the necessary permissions
execution role. for Lambda functions to interact with
required AWS resources.
1) Configure Lambda to connect to a VPC. Ensure that Lambda functions requiring
2) Attach necessary security groups and subnets. access to VPC resources are properly
configured with VPC access. Apply
security groups and limit access to only
trusted resources.
1) Enable encryption for Lambda environment Encrypt Lambda environment variables
variables. using KMS to protect sensitive
2) Use KMS for key management. information, ensuring it is encrypted at
rest.
1) Use trusted public layers or manage custom layers Ensure only verified and secure Lambda
internally. layers are used. Regularly review and
2) Regularly audit and update layers. manage custom layers to avoid
introducing vulnerabilities.
1) Set memory and timeout limits in Lambda Set appropriate memory and timeout
configuration. configurations to ensure efficient
2) Review resource usage periodically. execution. Review and adjust limits as
required.
1) Enable CloudWatch Logs in Lambda configuration. Enable detailed logging with
2) Set up log retention and alarms for critical events. CloudWatch for Lambda functions. Set
up alerts and review logs regularly for
unusual behaviour.
1) Set reserved concurrency for critical Lambda Set appropriate concurrency limits to
functions. ensure that Lambda functions scale
2) Define concurrency limits based on resource within available resources and prevent
requirements. overutilization during traffic spikes.
1) Configure Dead Letter Queues in Lambda settings. Configure DLQs for Lambda to capture
2) Define SQS or SNS as DLQ destination. failed invocations and ensure they are
retried or analyzed.
1) Regularly review IAM policies attached to Lambda Regularly review and adjust Lambda
functions. function permissions to follow the least-
2) Use IAM Access Analyzer to identify overly privilege principle. Use IAM Access
permissive roles. Analyzer to monitor permissions.
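The DLQ and concurrency recommendations above reduce to two small API payloads. The function name and queue ARN are hypothetical; the dicts mirror the keyword arguments of lambda:UpdateFunctionConfiguration and lambda:PutFunctionConcurrency.

```python
# Hypothetical function name and DLQ ARN, for illustration only.
FUNCTION_NAME = "twc-order-processor"
DLQ_ARN = "arn:aws:sqs:ap-south-1:123456789012:twc-lambda-dlq"

# Route failed asynchronous invocations to an SQS dead-letter queue
# (check 9), so failures are captured rather than silently dropped.
dlq_config = {
    "FunctionName": FUNCTION_NAME,
    "DeadLetterConfig": {"TargetArn": DLQ_ARN},
}

# Cap how far this function can scale (check 8), so one workload
# cannot exhaust the account's shared concurrency pool.
concurrency_config = {
    "FunctionName": FUNCTION_NAME,
    "ReservedConcurrentExecutions": 50,
}

print(dlq_config)
print(concurrency_config)
```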
Patch Priority Patch Status Remark
Quick Win N/A
Long Term N/A
Long Term N/A
Long Term N/A
Long Term N/A
Long Term N/A
Long Term N/A
Long Term N/A
Long Term N/A
Long Term N/A
HIGH 1
MEDIUM 0
LOW 9
INFO 0
SQS
Sr. No. Check Name Description
1 Implement Least Privilege Configure IAM policies for users and
for SQS Access services with minimal permissions to
interact with the SQS queues.
2 Enable Encryption for SQS Enable encryption at rest for SQS queues to
Messages protect the data in the queues.
3 Set Dead Letter Queue Use a Dead Letter Queue (DLQ) to capture
(DLQ) for SQS failed message deliveries for analysis and
troubleshooting.
4 Monitor SQS with Enable CloudWatch monitoring to track SQS
CloudWatch Metrics queue metrics like message count, delivery
delays, etc.
5 Implement Message Set an appropriate visibility timeout for
Visibility Timeout messages to prevent multiple consumers
from processing the same message.
6 Secure SQS Queues with IP Use VPC endpoints to restrict SQS access to
and VPC Restrictions resources within a VPC and secure access
via IP restrictions.
7 Use Message Batch Implement batch message processing to
Processing for Efficiency reduce the number of requests and
increase processing efficiency.
8 Enable SQS Queue Policy for Use queue policies to specify which users
Fine-grained Access Control and services can access the queue and what
actions they can perform.
9 Ensure FIFO (First In, First Use FIFO queues when message order is
Out) Queue for Critical important for your application.
Applications
10 Set Proper Retention Period Define a retention period that aligns with
for SQS Messages the use case, ensuring that old or
unprocessed messages are deleted timely.
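The DLQ check above hinges on the queue's redrive policy. A minimal sketch, with a hypothetical DLQ ARN: note that SQS expects `RedrivePolicy` as a JSON string attribute, with `maxReceiveCount` itself a string inside it.

```python
import json

# Hypothetical dead-letter queue ARN.
DLQ_ARN = "arn:aws:sqs:ap-south-1:123456789012:twc-orders-dlq"

def redrive_attributes(dlq_arn, max_receives=5):
    """Attributes for sqs:SetQueueAttributes on the primary queue.

    After `max_receives` failed receives, SQS moves the message to
    the DLQ instead of redelivering it forever.
    """
    return {
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": str(max_receives),
        })
    }

attrs = redrive_attributes(DLQ_ARN)
print(attrs["RedrivePolicy"])
```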
Category Risk Level Impact
Identity & Access Management Over-permissioned IAM policies can
High lead to unauthorized access to
sensitive data.
Data Security Without encryption, sensitive data
Low may be exposed in the event of a
security breach.
Message Handling Without DLQs, failed messages could
Low be lost, affecting system reliability and
user experience.
Monitoring & Logging Without monitoring, issues like
Low message backlog or delayed
processing could go unnoticed.
Message Handling Incorrect timeout settings could lead
to duplicate message processing or
Low missed messages.
Network Security Without proper restrictions, SQS could
be vulnerable to unauthorized access
Low over the public internet.
Performance & Optimization Without batch processing, there can
be excessive API calls, resulting in
Low higher costs and slower performance.
Identity & Access Management Without a queue policy, any user with
broad IAM permissions could
Low potentially interact with the queue in
undesired ways.
Message Handling If ordering is critical, using a standard
queue might cause messages to be
Low processed out of order, leading to
potential errors.
Resource Management Retaining messages too long can lead
to unnecessary costs and data
Low buildup.
Navigation Path/ Location Recommendation
1) Create IAM roles with minimum permissions required Always implement least privilege and
for interacting with SQS. grant only the necessary permissions for
2) Use resource-based policies to restrict queue access. SQS interactions.
1) Enable KMS encryption for SQS. Enable encryption for all SQS queues,
2) Ensure the key policies allow access to the relevant using KMS to control access.
users/services.
1) Configure DLQ for each primary queue. Always set up DLQs for all queues to
2) Set appropriate redrive policy and time-to-live (TTL) for capture undelivered messages for
messages. further investigation.
1) Enable CloudWatch metrics for each queue. Enable monitoring and set alarms to get
2) Set up CloudWatch Alarms for critical thresholds. real-time alerts for potential issues.
1) Set visibility timeout based on processing time of the Set the visibility timeout correctly based
consumer. on expected processing times to avoid
2) Adjust visibility timeout according to failure scenarios. duplication.
1) Configure VPC endpoints to restrict access to specific IP Use VPC endpoints to enforce secure
ranges. communication between services and
2) Use security groups to limit access to only authorized SQS within the VPC.
resources.
1) Use batch operations (SendMessageBatch, Implement batch processing where
ReceiveMessageBatch) for multiple messages. possible to minimize cost and improve
2) Optimize the batch size for the service's capacity. throughput.
1) Use SQS queue policies to grant specific permissions Use queue policies for more granular
for different actions like SendMessage, ReceiveMessage. control over who can access the queue
2) Restrict actions based on user roles. and what actions they can perform.
1) Configure FIFO queues for applications that require Use FIFO queues when processing
message order. messages in a specific order is essential
2) Ensure that message group IDs are set appropriately for your application's logic.
for message sequencing.
1) Set message retention periods according to business Define an appropriate retention period
needs. to balance between message availability
2) Ensure that expired messages are automatically and cost control.
deleted.
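The batch-processing recommendation can be sketched as a helper that groups message bodies into SendMessageBatch entry lists, respecting the 10-entry ceiling SQS imposes on batch calls.

```python
def batch_entries(message_bodies):
    """Yield SendMessageBatch entry lists, at most 10 entries each
    (10 is the per-call limit for SQS batch operations)."""
    batch = []
    for i, body in enumerate(message_bodies):
        # Each entry needs an Id that is unique within its batch.
        batch.append({"Id": str(i), "MessageBody": body})
        if len(batch) == 10:
            yield batch
            batch = []
    if batch:
        yield batch

# 25 messages -> 3 API calls instead of 25.
batches = list(batch_entries([f"msg-{n}" for n in range(25)]))
print([len(b) for b in batches])  # [10, 10, 5]
```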
Patch Priority Patch Status Remark
Quick Win N/A
Long Term N/A
Long Term N/A
Long Term N/A
Long Term N/A
Long Term N/A
Long Term N/A
Long Term N/A
Long Term N/A
Long Term N/A
HIGH 1
MEDIUM 1
LOW 8
INFO 0
ElastiCache
Sr. No. Check Name Description
1 Implement Least Privilege Configure IAM policies for users and
for ElastiCache Access services with minimal permissions to
interact with ElastiCache clusters.
2 Use VPC for ElastiCache Deploy ElastiCache in a VPC to ensure
Deployment secure network access and prevent
unauthorized access from external
sources.
3 Enable Encryption for Enable encryption at rest for ElastiCache
ElastiCache Data data to protect sensitive data stored in
the cache.
4 Set Up ElastiCache Backups Enable automatic and manual backups
to ensure data can be recovered in case
of failure.
5 Monitor ElastiCache with Enable CloudWatch monitoring to track
CloudWatch Metrics ElastiCache metrics like memory usage,
hit/miss ratios, etc.
6 Use Redis AUTH for Use Redis AUTH to ensure that only
ElastiCache Redis authorized clients can interact with the
ElastiCache Redis cluster.
7 Enable Auto Discovery for Enable auto discovery in ElastiCache to
ElastiCache allow clients to automatically detect and
connect to available cache nodes.
8 Set ElastiCache Timeouts Set timeouts and eviction policies based
and Eviction Policies on your application’s cache needs to
optimize performance and resource
usage.
9 Enable Multi-AZ for Enable Multi-AZ deployments for high
ElastiCache Clusters availability and failover protection for
ElastiCache clusters.
10 Configure ElastiCache Configure parameter groups to
Parameter Groups customize and optimize ElastiCache
settings based on workload
requirements.
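For the Redis AUTH check, a small validator can gate token choice at deploy time. The constraints encoded here (16-128 printable ASCII characters, no spaces, none of @, ", or /) reflect AWS's documented AUTH token rules as understood at the time of writing and should be confirmed against current ElastiCache documentation.

```python
import string

# Characters AWS documents as disallowed in Redis AUTH tokens.
FORBIDDEN = set('@"/')

def is_valid_auth_token(token):
    """Check a candidate ElastiCache Redis AUTH token: 16-128
    printable ASCII characters, no whitespace, no @, ", or /."""
    if not 16 <= len(token) <= 128:
        return False
    if any(c in FORBIDDEN for c in token):
        return False
    # Printable ASCII only, excluding whitespace of any kind.
    return all(c in string.printable and not c.isspace() for c in token)

print(is_valid_auth_token("correct-horse-battery-staple-42"))  # True
print(is_valid_auth_token("short"))                            # False
```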
Category Risk Level Impact
Identity & Access Management Over-permissioned IAM policies can
lead to unauthorized access to sensitive
cache data.
High
Network Security Without VPC deployment, ElastiCache
can be vulnerable to unauthorized
Medium access over the public internet.
Data Security Without encryption, sensitive data may
be exposed in the event of a security
Low breach.
Data Availability Without backups, data loss may occur
during failures or other incidents.
Low
Monitoring & Logging Without monitoring, issues like cache
underperformance or bottlenecks could
go unnoticed.
Low
Access Control Without authentication, unauthorized
users could gain access to your cache,
Low affecting data integrity.
Resource Management Without auto discovery, manual
intervention may be required to update
Low client connections during scaling.
Performance & Optimization Improper timeouts and eviction policies
can lead to inefficient resource
utilization and application performance
Low degradation.
High Availability Without Multi-AZ, ElastiCache clusters
may become single points of failure.
Low
Resource Management Without properly configured parameter
groups, ElastiCache performance and
behaviour may be suboptimal.
Low
Navigation Path/ Location Recommendation
1) Create IAM roles with minimum Always implement least privilege and grant only
permissions required for interacting with the necessary permissions for ElastiCache
ElastiCache. interactions.
2) Use resource-based policies to restrict
cluster access.
1) Launch ElastiCache within a VPC to Deploy ElastiCache within a VPC to enhance
ensure secure network connectivity. security and prevent unauthorized access from the
2) Use security groups and NACLs for public network.
access control.
1) Enable encryption for data at rest in Encrypt ElastiCache data at rest to protect
ElastiCache. sensitive information from unauthorized access.
2) Ensure proper key management using
KMS.
1) Configure automatic and manual Enable regular backups to safeguard against
backups for ElastiCache clusters. potential data loss and ensure business continuity.
2) Set appropriate backup retention period.
1) Enable CloudWatch metrics for memory Enable monitoring and set alarms to get real-time
usage, cache hits/misses, evictions, etc. alerts for performance degradation or potential
2) Set up CloudWatch Alarms for critical issues.
thresholds.
1) Enable Redis AUTH and configure a Ensure Redis AUTH is enabled to avoid
strong password for your Redis cluster. unauthorized access to your cache cluster.
2) Ensure all clients use authentication
when connecting.
1) Enable auto discovery and configure Use auto discovery to simplify client connection
clients to use the auto discovery endpoint. management and scaling operations.
1) Configure timeouts and eviction policies Configure timeouts and eviction policies to align
based on your application’s needs. with your application’s cache usage patterns for
2) Set memory thresholds and eviction optimal performance.
policies accordingly.
1) Enable Multi-AZ for ElastiCache clusters. Enable Multi-AZ deployment to ensure high
2) Configure failover behaviour for high availability and resilience in case of node failures.
availability.
1) Configure parameter groups based on Regularly review and optimize parameter group
the workload requirements. settings to ensure ElastiCache performance is
2) Adjust parameters like max memory, tailored to your application needs.
eviction policy, and others for optimization.
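Several of the recommendations above (Multi-AZ, encryption at rest and in transit, AUTH, backups) come together in the replication-group creation request. The identifiers below are hypothetical; the dict mirrors the shape of elasticache:CreateReplicationGroup for a Redis cluster.

```python
# Hypothetical replication group settings, for illustration only.
replication_group = {
    "ReplicationGroupId": "twc-redis",
    "ReplicationGroupDescription": "TWC cache with HA and encryption",
    "Engine": "redis",
    "CacheNodeType": "cache.t3.medium",
    "NumNodeGroups": 1,
    "ReplicasPerNodeGroup": 1,
    # High availability (check 9): Multi-AZ requires automatic failover.
    "AutomaticFailoverEnabled": True,
    "MultiAZEnabled": True,
    # Data protection (checks 3 and 6).
    "AtRestEncryptionEnabled": True,
    "TransitEncryptionEnabled": True,
    "AuthToken": "replace-with-a-strong-token",  # placeholder only
    # Backups (check 4): keep 7 daily snapshots.
    "SnapshotRetentionLimit": 7,
}

# Consistency guard: Multi-AZ without automatic failover is invalid.
assert not replication_group["MultiAZEnabled"] or replication_group["AutomaticFailoverEnabled"]
print(replication_group["ReplicationGroupId"])
```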
Patch Priority Patch Status Remark
Quick Win Already Compliant
Quick Win Already Compliant
Long Term Not Compliant
Long Term Already Compliant
Long Term Not Compliant
Long Term Not Compliant
Long Term N/A
Long Term N/A
Long Term Not Compliant
Long Term Already Compliant
HIGH 0
MEDIUM 0
LOW 10
INFO 0
WAF
Sr. No. Check Name Description
1 Enable WAF Protection for Enable WAF to protect web applications
Web Applications from common exploits and attacks like
SQL injection, cross-site scripting, etc.
2 Configure WAF Rules for Configure WAF rules to protect against
OWASP Top 10 Protection the OWASP Top 10 vulnerabilities,
ensuring your application is secure from
the most common threats.
3 Enable Logging for WAF Enable logging for WAF to capture
details about blocked requests, which
helps in analyzing threats and refining
rules.
4 Enable Custom WAF Rules Create and apply custom WAF rules
for Specific Application tailored to your specific application logic
Needs or vulnerabilities not covered by
standard rules.
5 Integrate WAF with Integrate WAF with CloudWatch to
CloudWatch for Monitoring monitor security events and trigger
alarms for abnormal traffic patterns or
potential threats.
6 Configure WAF Rate Set rate limiting on WAF to protect
Limiting against brute-force attacks by restricting
the number of requests a client can
make within a specified time.
7 Set Up WAF Managed Rules Enable WAF Managed Rules, which offer
for Common Web Exploits pre-configured protection against
common web application vulnerabilities
such as SQL injections, XSS, etc.
8 Enable WAF Protection for Enable WAF protection for API Gateway
API Gateway to ensure that your API endpoints are
protected from malicious requests and
attacks.
9 Configure WAF Web ACL for Configure Web ACLs (Access Control
Fine-grained Control Lists) to allow or block specific traffic
based on custom criteria, such as IP, URI,
etc.
10 Implement WAF Geo- Implement geo-blocking in WAF to block
blocking for Security or allow traffic based on geographic
location, protecting against attacks from
specific regions.
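The rate-limiting check above can be sketched as a WAFv2 rate-based rule. Name, priority, and limit are illustrative choices; the structure mirrors a rule inside a wafv2:CreateWebACL request, where the limit is evaluated per source IP over a rolling five-minute window.

```python
import json

rate_limit_rule = {
    "Name": "rate-limit-per-ip",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            # Requests allowed per source IP per 5-minute window;
            # traffic above this threshold is blocked.
            "Limit": 1000,
            "AggregateKeyType": "IP",
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RateLimitPerIP",
    },
}

print(json.dumps(rate_limit_rule, indent=2))
```

With `CloudWatchMetricsEnabled` set, blocked-request counts feed directly into the CloudWatch integration recommended in check 5.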
Category Risk Level Impact
Web Application Security Without WAF, web applications are
vulnerable to common web exploits
Low that can compromise sensitive data and
application integrity.
Vulnerability Protection Without protection from the OWASP
Top 10, your application is exposed to
Low critical vulnerabilities that are
commonly exploited.
Monitoring & Logging Without logging, it’s difficult to analyze
and respond to security incidents
Low effectively, making proactive defence
harder.
Custom Rules & Configuration Custom rules help to cover specific
threats or logic flaws unique to your
Low application, ensuring protection beyond
default rules.
Monitoring & Logging Without integration with CloudWatch,
you miss out on the ability to monitor,
Low analyze, and respond to threats in real-
time.
Rate Limiting Without rate limiting, web applications
are vulnerable to brute-force attacks
Low that can overload systems or gain
unauthorized access.
Vulnerability Protection Without managed rules, applications
may remain unprotected from common
exploits that are covered by predefined
Low WAF protections.
API Protection Without API Gateway protection, your
API endpoints remain exposed to
Low malicious traffic, leading to potential
breaches.
Access Control Without fine-grained access control,
traffic management becomes less
efficient, and security gaps may
Low emerge.
Geographic Access Control Without geo-blocking, traffic from
regions that are a known threat can
enter, putting the web application at
Low risk.
Navigation Path/ Location Recommendation
1) Enable WAF in the AWS Management Always enable WAF protection to secure web
Console for your web applications. applications from common exploits like SQLi,
2) Choose from available WAF rule sets or XSS, and others.
configure your custom rules.

1) Enable OWASP Top 10 rules in WAF. Ensure WAF is configured with the OWASP Top
2) Ensure rules like SQL injection and XSS 10 rules to safeguard your web application from
are actively blocking malicious traffic. high-risk vulnerabilities.

1) Enable logging for WAF in the AWS Enable logging to capture security-related
Console. events, allowing you to investigate incidents
2) Use S3 buckets or CloudWatch Logs to and optimize WAF rules.
store logs for analysis.
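The logging setup above can be sketched as a WAFv2 `PutLoggingConfiguration` payload. This is a minimal illustration assuming the boto3 `wafv2` API shape; the web ACL and log group ARNs are placeholders, and a CloudWatch Logs destination must be a log group whose name starts with `aws-waf-logs-`.

```python
# Sketch of a WAFv2 logging configuration payload (boto3 wafv2
# put_logging_configuration). Both ARNs below are placeholders.
WEB_ACL_ARN = "arn:aws:wafv2:ap-south-1:123456789012:regional/webacl/example/0000"
LOG_GROUP_ARN = "arn:aws:logs:ap-south-1:123456789012:log-group:aws-waf-logs-example"

logging_configuration = {
    "ResourceArn": WEB_ACL_ARN,
    # Destination may be a Kinesis Data Firehose stream, an S3 bucket,
    # or a CloudWatch log group named "aws-waf-logs-*".
    "LogDestinationConfigs": [LOG_GROUP_ARN],
    # Redact sensitive headers so they never reach the stored logs.
    "RedactedFields": [{"SingleHeader": {"Name": "authorization"}}],
}

# With credentials configured, this would be applied as:
# boto3.client("wafv2").put_logging_configuration(
#     LoggingConfiguration=logging_configuration)
```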

1) Create custom WAF rules tailored to Implement custom rules for any application-
your application’s needs. specific vulnerabilities that are not covered by
2) Regularly review and refine custom rules the default WAF rules.
as your application evolves.

1) Integrate WAF with CloudWatch for Integrate WAF with CloudWatch to gain insights
monitoring security events. into attack patterns and respond proactively
2) Set up CloudWatch Alarms for unusual with alarms and metrics.
traffic or blocked requests.

1) Set rate limiting on WAF to prevent Set rate limiting to prevent DDoS or brute-force
excessive requests. attacks that could overwhelm your web
2) Define request limits based on expected applications.
application usage.
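The rate-limiting steps above can be sketched as a WAFv2 rate-based rule in the shape the API expects. This is a sketch, not a definitive configuration: the rule name is hypothetical and the limit of 2,000 requests per five-minute window per source IP is an example value to be tuned to expected application usage.

```python
# Sketch of a WAFv2 rate-based rule; the limit is an example value.
rate_limit_rule = {
    "Name": "rate-limit-per-ip",          # hypothetical rule name
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,                # requests per 5-minute window
            "AggregateKeyType": "IP",     # count requests per source IP
        }
    },
    "Action": {"Block": {}},              # block IPs that exceed the limit
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RateLimitPerIP",
    },
}
```

This dict would be passed as one entry in the `Rules` list of a `create_web_acl` or `update_web_acl` call.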

1) Enable managed rules within WAF for Use WAF Managed Rules to gain broad
common threats like SQLi, XSS, and others. protection from common attacks like SQL
2) Regularly update rules to keep up with injection and cross-site scripting.
new threats.
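Attaching a managed rule group can be sketched as follows, assuming the WAFv2 API shape and using the AWS Core rule set, which covers common exploit patterns such as SQLi and XSS; note that managed rule groups take an `OverrideAction` rather than an `Action`.

```python
# Sketch of a rule that attaches the AWS managed Core rule set.
managed_rules_rule = {
    "Name": "aws-common-rule-set",        # hypothetical rule name
    "Priority": 0,
    # Managed rule groups use OverrideAction instead of Action;
    # "None" keeps each contained rule's own action in effect.
    "OverrideAction": {"None": {}},
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AWSCommonRuleSet",
    },
}
```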

1) Enable WAF protection for API Gateway Protect API Gateway endpoints with WAF to
in the AWS Console. prevent unauthorized access and malicious
2) Configure API-specific rules to protect requests from compromising your API.
sensitive endpoints.

1) Configure Web ACLs for WAF to manage Use Web ACLs to implement granular control
which traffic is allowed or blocked. over allowed and blocked traffic, providing
more flexibility and security.
2) Set criteria like IP addresses, headers,
URIs, and more.
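A minimal Web ACL definition matching the steps above can be sketched as the keyword arguments for a `create_web_acl` call; the ACL name, rule name, and IP set ARN are placeholders, and the example allows traffic by default while blocking a referenced set of denied IPs.

```python
# Sketch of a minimal Web ACL definition (boto3 wafv2 create_web_acl
# keyword arguments). Names and the IP set ARN are placeholders.
web_acl_kwargs = {
    "Name": "example-web-acl",            # hypothetical ACL name
    "Scope": "REGIONAL",                  # "CLOUDFRONT" for distributions
    "DefaultAction": {"Allow": {}},       # allow by default, block via rules
    "Rules": [
        {
            "Name": "block-denied-ips",
            "Priority": 0,
            "Statement": {
                "IPSetReferenceStatement": {
                    # Placeholder ARN of a pre-created IP set.
                    "ARN": "arn:aws:wafv2:ap-south-1:123456789012:regional/ipset/denied/0000"
                }
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "BlockDeniedIPs",
            },
        }
    ],
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ExampleWebACL",
    },
}
```

Header- and URI-based criteria would be added as further rules using `ByteMatchStatement` entries.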
1) Set up geo-blocking in WAF to allow or Apply geo-blocking to reduce the attack surface
block traffic based on geographic location. from known regions or countries with high rates
2) Apply geo-blocking rules to unwanted of malicious activity.
regions.
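The geo-blocking steps above can be sketched as a WAFv2 geo match rule; the rule name is hypothetical and the country codes are explicit placeholders to be replaced with the ISO 3166-1 alpha-2 codes of the regions to block.

```python
# Sketch of a geo-blocking rule. "XX"/"YY" are placeholders; substitute
# real ISO 3166-1 alpha-2 country codes before use.
geo_block_rule = {
    "Name": "block-unwanted-regions",     # hypothetical rule name
    "Priority": 2,
    "Statement": {
        "GeoMatchStatement": {"CountryCodes": ["XX", "YY"]}
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockUnwantedRegions",
    },
}
```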
Patch Priority Patch Status Remark

Long Term Not Compliant

Long Term Not Compliant

Long Term Not Compliant

Long Term Not Compliant

Long Term Not Compliant

Long Term Already Compliant

Long Term Already Compliant

Long Term Not Compliant

Long Term N/A


Long Term N/A
LEGENDS

HIGH Significant security control weaknesses and vulnerabilities exist in key subject areas. Immediate attention by management is required to address the issue.

MEDIUM Security control weaknesses and vulnerabilities exist in subject areas. Prompt attention by management is required to prevent serious problems from developing.

LOW Security control weaknesses and vulnerabilities create limited exposure in the subject areas. Management consideration in these areas is needed to improve the overall security control or efficiency.

Pass Passed the Check

Fail Failed the Check

Closed Denotes the observations which are patched

Open Denotes the observations which are still unpatched

Standard Referred

NA Not Applicable

Quick Wins 2 Weeks to patch


Short Term 3 Weeks to patch
Long Term 4 Weeks to patch
Evidences/POC

POC UPDATED.zip
