AWS Interview Questions
1. What is AWS?
• AWS (Amazon Web Services) is a cloud computing platform provided by
Amazon for building, deploying, and managing applications and services
through a global network of data centers.
2. Explain the key components of AWS.
• AWS offers a broad set of services spanning compute, storage,
networking, databases, AI/ML, IoT, security, and management tools;
core services include EC2, S3, RDS, Lambda, and IAM.
3. What is EC2?
• Amazon Elastic Compute Cloud (EC2) is a web service that provides
resizable compute capacity in the cloud, allowing users to run virtual
servers (instances) for various workloads.
4. What is S3?
• Amazon Simple Storage Service (S3) is an object storage service that
offers scalable storage for data storage, backup, and archival, accessible
via a simple web interface or API.
5. Explain the difference between S3 and EBS.
• S3 is object storage suitable for storing large amounts of unstructured
data, while EBS (Elastic Block Store) provides block-level storage
volumes for use with EC2 instances, offering higher performance and
lower latency.
AWS Compute:
6. What is Lambda?
• AWS Lambda is a serverless compute service that allows users to run
code in response to events without provisioning or managing servers,
paying only for the compute time consumed.
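As a sketch, a Lambda function in Python is just a handler that receives an event; the event shape below is an invented example, not a fixed AWS format:

```python
# Minimal Lambda-style handler; the "name" field is a hypothetical
# event attribute, and context is unused in this sketch.
def handler(event, context=None):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local invocation, simulating what Lambda does on each event:
print(handler({"name": "AWS"})["body"])  # Hello, AWS!
```

In a real deployment, AWS invokes the handler for you in response to events (S3 uploads, API Gateway requests, etc.) and you pay only for execution time.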
7. How do you scale EC2 instances?
• EC2 instances can be scaled vertically (resizing the instance type) or
horizontally (increasing the number of instances) manually or
automatically using Auto Scaling groups based on demand or schedule.
8. What is Elastic Beanstalk?
• AWS Elastic Beanstalk is a platform-as-a-service (PaaS) offering that
enables users to deploy, manage, and scale web applications and
services quickly and easily without worrying about infrastructure
management.
9. Explain the difference between EC2 and ECS.
• EC2 is a virtual server service for running applications on virtual
machines, while ECS (Elastic Container Service) is a container
orchestration service for managing Docker containers at scale.
10. What is AWS Batch?
• AWS Batch is a fully managed batch processing service that enables
users to run batch computing workloads at any scale efficiently,
automatically provisioning compute resources based on workload
requirements.
The three basic types of cloud services are:
• Computing
• Storage
• Networking
Here are some of the AWS products that are built based on the three cloud service
types:
Computing - These include EC2, Lambda, Elastic Beanstalk, ECS.
Storage - These include S3, Glacier, Elastic Block Store, Elastic File System.
3. What is auto-scaling?
Auto-scaling is a function that allows you to provision and launch new instances
whenever there is a demand. It allows you to automatically increase or decrease
resource capacity in relation to the demand.
1. Create a CloudFormation template in JSON or YAML that describes the required
resources.
2. Save the code in an S3 bucket, which serves as a repository for the code.
3. Use AWS CloudFormation to call the bucket and create a stack from your template.
4. CloudFormation reads the file, understands the services that are called, their
order, and the relationships between them, and provisions the services one after
the other.
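The template from step 1 can be sketched as a plain JSON document; the logical name "WebServer" and the AMI ID below are placeholders, not values from the source:

```python
import json

# Sketch of a minimal CloudFormation template built as a Python dict.
# "WebServer" and the ImageId are hypothetical placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal template provisioning one EC2 instance",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t3.micro",
                "ImageId": "ami-0123456789abcdef0",
            },
        }
    },
}
print(json.dumps(template, indent=2))
```

Saving this JSON to a file (or an S3 bucket) gives CloudFormation everything it needs to provision the stack in dependency order.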
You can upgrade or downgrade a system with near-zero downtime using the following
steps of migration:
• Launch a new EC2 instance of the desired (larger or smaller) instance type in
parallel with the current one.
• Install all updates and install the applications.
• Test the instance to ensure it works as expected.
• Once it’s deployed, switch traffic over to the new instance (for example, by
remapping the Elastic IP address), upgrading or downgrading the system with
near-zero downtime.
The tools that can help you log into the AWS resources are:
• PuTTY
• AWS SDK
• Eclipse
The essential services you can use are Amazon CloudWatch Logs to collect the logs,
Amazon S3 to store them, and Amazon OpenSearch Service (the successor to Amazon
Elasticsearch Service) to visualize them. You can use Amazon Kinesis Data Firehose
to move the data from Amazon S3 to Amazon OpenSearch Service.
10. What are the native AWS Security logging
capabilities?
Most AWS services have their own logging options. Some also offer account-level
logging, as in AWS CloudTrail, AWS Config, and others. Let’s take a
look at two services in particular:
AWS CloudTrail
This is a service that provides a history of the AWS API calls for every account. It lets
you perform security analysis, resource change tracking, and compliance auditing of
your AWS environment as well. The best part about this service is that it enables you
to configure it to send notifications via AWS SNS when new logs are delivered.
AWS Config
This helps you understand the configuration changes that happen in your
environment. This service provides an AWS inventory that includes configuration
history, configuration change notification, and relationships between AWS
resources. It can also be configured to send information via AWS SNS when new
logs are delivered.
The services that can be used to mitigate DDoS attacks are:
• AWS Shield
• AWS WAF
• Amazon Route53
• Amazon CloudFront
• ELB
• VPC
Amazon CloudWatch helps you monitor the status of various AWS
services and custom events. It helps you to monitor:
• State changes in Amazon EC2
• Auto Scaling lifecycle events
• Scheduled events
• Hardware Virtual Machine (HVM)
HVM is a full virtualization mode where the guest operating system runs on top of
a hypervisor without modification. The hypervisor presents a complete set of virtual
hardware to the guest operating system, allowing it to run unmodified, and all the
virtual machines act separately from each other. These virtual machines boot by
executing a master boot record in the root block device of your image. HVM is
suitable for running a wide range of operating systems, including Windows and newer
versions of Linux. HVM instances are recommended for most use cases due to their
broad compatibility and better performance compared to PV instances.
• Paravirtualization (PV)
PV is a lighter virtualization mode in which the guest operating system is modified
to cooperate with the hypervisor rather than receiving emulated hardware; PV guests
boot with a special boot loader instead of a master boot record.
• Paravirtualization on HVM
PV on HVM helps operating systems take advantage of storage and network I/O
available through the host. PV on HVM combines the advantages of both HVM and
PV modes: it allows you to run a paravirtualized guest operating system on top of
a hardware virtual machine, enabling better performance and compatibility. It
provides the flexibility and compatibility of HVM with the improved performance of
PV, offering a balance between the two that makes it suitable for a wide range of
workloads.
15. Name some of the AWS services that are not region-
specific
• IAM
• Route 53
• CloudFront
NAT (Network Address Translation) gateways and NAT instances are both used in AWS
to enable instances in private subnets to initiate outbound traffic to the internet while
preventing inbound traffic from directly reaching those instances.
A NAT Gateway is an AWS-managed service, while a NAT instance is an EC2 instance
configured to perform NAT functionality and managed by the customer.
NAT Gateway automatically scales up to meet the demand of outbound traffic from
instances in private subnets providing high availability across multiple availability
zones (AZs) within a region without any additional configuration. Users need to
manually scale NAT instances based on traffic requirements and scalability is limited
by the instance type and size.
NAT Gateway is highly available by default, with redundancy across multiple AZs, and
provides a service level agreement (SLA) for availability. Achieving high availability
with NAT instances requires deploying and managing multiple instances across
multiple AZs with manual failover configuration.
NAT Gateway offers better performance compared to NAT instances, especially for
high-throughput workloads designed to handle large volumes of outbound traffic
efficiently. NAT Instance performance depends on the instance type and size chosen
and may not offer the same level of performance and scalability as NAT Gateways.
• Helps in monitoring AWS environments by collecting and reporting metrics for
different AWS services:
EC2 → CPU utilization, disk I/O, network traffic; memory utilization and disk
space utilization additionally require the CloudWatch agent.
RDS → CPU utilization, database connections, free storage space, read and write
IOPS, database throughput, and database engine metrics (buffer cache hit ratio,
transaction throughput).
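A simplified model of how a CloudWatch alarm evaluates these metrics: the alarm fires when M out of N datapoints breach the threshold. The sample values and the 2-of-3 rule below are illustrative:

```python
# Simplified sketch of CloudWatch alarm evaluation (M-out-of-N datapoints).
datapoints = [72.0, 85.5, 91.2]   # CPUUtilization samples (percent)
threshold = 80.0
m_of_n = 2                        # fire if 2 of the 3 datapoints breach

breaching = sum(1 for d in datapoints if d > threshold)
alarm_state = "ALARM" if breaching >= m_of_n else "OK"
print(alarm_state)  # ALARM
```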
To support multiple devices with various resolutions like laptops, tablets, and
smartphones, we need to change the resolution and format of the video. This can be
done easily by an AWS service called Elastic Transcoder, which efficiently converts
media files from one format to another, enabling businesses to deliver high-quality
video content to viewers across the devices mentioned above.
Features :
• AWS Elastic Transcoder offers a simple and intuitive web interface for configuring
transcoding jobs, defining presets, and managing transcoding pipelines.
• Supports a variety of input media formats, including popular formats such as MP4,
MOV, FLV, and AVI.
• Offers flexibility in choosing output formats and codecs, including H.264, H.265,
VP8, VP9, AAC, and MP3, among others.
• Provides a range of predefined presets for common transcoding tasks, allowing
users to easily configure output settings for various devices and platforms such as
smartphones, tablets, and web browsers.
• Enables users to create custom presets with specific configurations for resolution,
bitrate, codec, and other parameters, tailored to their unique requirements.
• Ensures data security and compliance with AWS security best practices, including
encryption of data in transit and at rest, fine-grained access controls, and compliance
with industry standards and regulations.
Yes. Within an Amazon VPC, users can define their own IP address range (CIDR block)
and allocate private IP addresses to EC2 instances launched within that VPC. When
launching an EC2 instance, users can specify the desired private IP address for the
instance, either manually or through automated methods like AWS CloudFormation or
the AWS Management Console.
Steps:
• Users can create subnets within an AWS VPC and define their own IP address
ranges for those subnets based on the CIDR block.
• Within this created VPC, users can create subnets, which are segments of the
VPC's IP address range and can be associated with a specific Availability Zone
within an AWS region.
• When launching an EC2 instance within a subnet, users can specify the desired
private IP address for the instance, which must be within the range of IP addresses
allocated to the subnet.
• Each EC2 instance launched within a subnet is associated with an elastic network
interface (ENI) that contains its private IP address.
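The CIDR arithmetic behind these steps can be checked with Python's standard ipaddress module; the 10.0.0.0/16 block and the addresses below are illustrative:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")       # VPC CIDR block
subnet = ipaddress.ip_network("10.0.1.0/24")    # subnet carved from the VPC
desired_ip = ipaddress.ip_address("10.0.1.25")  # private IP requested at launch

# The subnet must lie inside the VPC's range, and the requested IP must
# fall inside the subnet. (AWS also reserves the first four addresses and
# the last address of every subnet for its own use.)
print(subnet.subnet_of(vpc))   # True
print(desired_ip in subnet)    # True
```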
The image used to boot an EC2 instance is stored on the root device volume, which
is created when a new EC2 instance is launched from an Amazon AMI. This root
device volume is backed either by EBS or by an instance store. In general, root
device data on Amazon EBS persists independently of the lifespan of the EC2
instance, whereas instance store data is lost when the instance stops.
1. General Purpose:
They are used for a range of workloads and aid in the allocation of
processing, memory, and networking resources.
2. Compute Optimized:
These are ideal for compute-intensive applications. They can handle batch
processing workloads, high-performance web servers, machine learning inference,
and various other tasks.
3. Memory Optimized:
They process workloads that handle massive datasets in memory and deliver them
quickly.
r6i.large, r6g.xlarge, r6i.16xlarge → memory-intensive workloads such as in-
memory databases, real-time analytics, and high-performance computing (HPC)
applications. The r6g family is powered by AWS Graviton2 processors, while r6i
uses Intel Xeon processors; both offer a high ratio of memory to vCPUs.
4. Accelerated Computing:
They use hardware accelerators such as GPUs or FPGAs to perform functions like
graphics processing, machine learning, and data pattern matching more efficiently
than is possible on CPUs alone.
5. Storage Optimized:
They handle tasks that require sequential read and write access to big data sets on
local storage.
No, standby instances are launched in different Availability Zones than the
primary, resulting in physically separate infrastructure. This is because the
entire purpose of standby instances is to survive an infrastructure failure: if
the primary instance fails, the standby instance assists in recovering all of
the data.
Spot instances are unused EC2 instances that users can use at a reduced cost.
When you use on-demand instances, you must pay for computing resources
without making long-term obligations.
Reserved instances, on the other hand, allow you to specify attributes such as
instance type, platform, tenancy, region, and Availability Zone. Reserved
instances offer significant discounts, and can reserve capacity when tied to
a specific Availability Zone.
25. How would you address a situation in which the
relational database engine frequently collapses when
traffic to your RDS instances increases, given that the
RDS instance replica is not promoted as the master
instance?
• A larger RDS instance type is required for handling significant traffic, along
with manual or automated snapshots to recover data if the RDS instance fails.
To make limit administration easier for customers, Amazon EC2 now offers the
option to switch from the current 'instance count-based limitations' to the new
'vCPU Based restrictions.' As a result, when launching a combination of
instance types based on demand, utilization is measured in terms of the
number of vCPUs.
Snapshots are stored in Amazon Simple Storage Service (Amazon S3) and are
retained until you explicitly delete them. You can keep multiple snapshots for
each instance or volume, allowing you to maintain a history of backups over
time.
Create a CloudWatch alarm that monitors CPU utilization on the EC2 instance.
Configure the alarm to trigger when the CPU utilization exceeds 80%.
Set up an Auto Scaling group for your EC2 instance(s) and configure scaling
policies based on the CloudWatch alarm.
Create a scale-out policy that adds more instances to the Auto Scaling group
when the CPU utilization exceeds 80%. This policy will help handle increased
load by distributing it across multiple instances.
Create a scale-in policy that removes instances from the Auto Scaling group
when CPU utilization drops below a certain threshold.
Configure the Auto Scaling group to register instances with the ELB, ensuring
that new instances launched in response to increased load automatically receive
traffic.
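The scale-out and scale-in policies above can be sketched as a small decision function; the thresholds and group sizes below are illustrative, not AWS defaults:

```python
# Toy model of an Auto Scaling group's scaling policies.
def desired_capacity(current, cpu_util, scale_out_at=80, scale_in_at=30,
                     min_size=1, max_size=10):
    if cpu_util > scale_out_at:           # scale-out policy: add an instance
        return min(current + 1, max_size)
    if cpu_util < scale_in_at:            # scale-in policy: remove an instance
        return max(current - 1, min_size)
    return current                        # within band: no change

print(desired_capacity(2, 85))   # high CPU -> scale out -> 3
print(desired_capacity(2, 20))   # low CPU  -> scale in  -> 1
```

The real service evaluates CloudWatch alarms rather than raw samples, but the capacity adjustment follows the same bounded add/remove logic.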
Create read replicas of your primary RDS instance to offload read traffic and
distribute the load across multiple database instances.
Take regular snapshots of your Amazon EBS volumes storing application data
to create backups. EBS snapshots are point-in-time copies of volumes and
ensure data durability and integrity.
Use Elastic Load Balancing (ELB) to distribute incoming traffic across multiple
EC2 instances running your web application.
ELB automatically scales with traffic demands and provides fault tolerance by
rerouting traffic to healthy instances, improving application resilience.
Configure Auto Scaling groups for your EC2 instances to automatically adjust
capacity based on demand. Auto Scaling helps maintain application availability
and performance during traffic spikes and infrastructure failures.
If your web application requires shared file storage that can be accessed
concurrently by multiple EC2 instances, EFS can be a suitable solution. EFS
provides a scalable and fully managed file system that supports the NFSv4
protocol, allowing multiple EC2 instances to access the same file system
simultaneously.
If your application needs to share data across multiple EC2 instances, such as
configuration files, static assets, or user uploads, EFS can simplify data sharing
and synchronization between instances. This can be particularly useful in
distributed or microservices architectures.
EFS is designed for high availability and durability, with data automatically
replicated across multiple Availability Zones within a region. By using EFS, you
can improve the resilience of your application's file storage layer and ensure
data durability in case of AZ-level failures.
By default, a maximum of five Elastic IP addresses can be allocated per region
per AWS account; this is a soft limit that can be raised on request.
EC2 is short for Elastic Compute Cloud, and it provides scalable computing
capacity. Using Amazon EC2 eliminates the need to invest in hardware, leading
to faster development and deployment of applications. You can use Amazon
EC2 to launch as many or as few virtual servers as needed, configure security
and networking, and manage storage. It can scale up or down to handle
changes in requirements, reducing the need to forecast traffic. EC2 provides
virtual computing environments called “instances.”
Implement security groups to control inbound and outbound traffic to your EC2
instances. Restrict access to only necessary ports and protocols, and regularly
review and update security group rules.
Network Access Control Lists provide an additional layer of security by
controlling traffic at the subnet level. Use NACLs to filter traffic entering and
leaving subnets associated with your EC2 instances.
Use AWS Identity and Access Management (IAM) to manage access to AWS
resources securely. Assign IAM roles to EC2 instances to grant permissions for
accessing other AWS services without the need for long-term credentials.
Secure remote access to your EC2 instances by limiting SSH (for Linux
instances) and RDP (for Windows instances) access to trusted IP addresses
only.
Keep your EC2 instances up to date by regularly applying security patches and
updates to the operating system, applications, and software installed on the
instances.
Encrypt sensitive data stored on EBS volumes using AWS Key Management
Service (KMS) encryption. Enable encryption in transit by using SSL/TLS for
web traffic and implementing VPN or AWS Direct Connect for secure network
communication with external user systems or on-premises data centers.
Enable AWS CloudTrail to log API activity and AWS Config to track resource
configuration changes.
Utilize Amazon CloudWatch to monitor EC2 instance metrics, set up alarms for
security events, and centralize logs for analysis using services like Amazon
CloudWatch Logs.
Enforce MFA for accessing the AWS Management Console and sensitive APIs
to add an extra layer of security.
Require IAM users and roles to authenticate using a combination of password
and MFA device.
Use separate VPCs, subnets, and security groups to isolate different tiers of
your application and restrict communication between components based on
the principle of least privilege.
Conduct regular security audits and assessments of your EC2 instances and
associated resources to identify vulnerabilities, misconfigurations, and
potential security risks. Utilize AWS Trusted Advisor and third-party security
tools for comprehensive assessments.
Store data in Amazon S3 buckets and access it from your EC2 instances. S3
buckets act as storage containers for objects, such as files, documents,
images, and videos.
Upload data to S3 buckets directly from your EC2 instances using AWS SDKs,
AWS Command Line Interface (CLI), or AWS Management Console. You can
also use third-party tools or applications that support S3 integration
Configure S3 buckets for static website hosting and point domain names to S3
endpoints using Amazon Route 53 or other DNS services
Use the AWS Transfer Family service to enable secure file transfers over SFTP,
FTPS, and FTP protocols between EC2 instances and S3 buckets.
Use Amazon S3 for backup and disaster recovery by storing snapshots, images,
database backups, and application data in S3 buckets.
While you may think that both stopping and terminating are the same, there is
a difference. When you stop an EC2 instance, it performs a normal shutdown
and moves to a stopped state; an EBS-backed instance can later be started
again. When you terminate the instance, it moves to a terminated state, and
any EBS volumes whose DeleteOnTermination flag is set (the default for the
root volume) are deleted and can never be recovered.
Spot → applications with flexible start and end times, fault-tolerant workloads, or batch
processing jobs that can be interrupted or rescheduled; unsuitable for mission-critical
or time-sensitive operations. Spot instances are cheaper than On-Demand because you
are bidding on unused EC2 capacity.
If you haven't already, generate an SSH key pair on your local machine using
the ssh-keygen command. This command creates a public and private key pair,
typically stored in ~/.ssh/id_rsa for the private key and ~/.ssh/id_rsa.pub for
the public key.
Add your private SSH key to the SSH agent running on your local machine using
the ssh-add command. This command loads the private key into memory and
manages it for you:
ssh-add ~/.ssh/id_rsa
Edit your SSH client configuration file (usually ~/.ssh/config) and add the
following lines to enable SSH agent forwarding:
Host *
    ForwardAgent yes
Now, when you SSH into your EC2 instance using the ssh command, SSH agent
forwarding will automatically forward your local SSH agent to the remote EC2
instance.
You should be able to connect without providing the SSH key again.
Solaris traditionally runs on SPARC processors, and AIX is an operating system
that runs only on Power CPUs and not on Intel, which means that you cannot
natively create Solaris or AIX instances in EC2.
Check the AWS Marketplace for third-party vendors who might offer Solaris or
AIX AMIs that you can run on EC2 instances. Some vendors may provide pre-
configured images or virtual appliances for these operating systems
Consider using AWS migration services, such as AWS Server Migration Service
(SMS) or AWS Database Migration Service (DMS), to migrate your existing
Solaris or AIX workloads to AWS-supported operating systems
Implement hybrid cloud solutions where you run Solaris or AIX workloads on-
premises or in a colocation facility while leveraging AWS services for other
aspects of your infrastructure. Use AWS Direct Connect or VPN to establish
secure connectivity between your on-premises environment and AWS.
42. How do you configure CloudWatch to recover an
EC2 instance?
Create a CloudWatch alarm on the EC2 StatusCheckFailed_System metric and
attach the EC2 "Recover this instance" action to it. When the alarm fires, AWS
automatically recovers the instance onto healthy underlying hardware, preserving
the instance ID, private IP addresses, Elastic IP addresses, and instance metadata.
45. What is Amazon S3?
S3 is short for Simple Storage Service, and Amazon S3 is the most supported
storage platform available. S3 is object storage that can store and retrieve any
amount of data from anywhere. Despite that versatility, it is practically
unlimited as well as cost-effective because it is storage available on demand.
In addition to these benefits, it offers unprecedented levels of durability and
availability. Amazon S3 helps to manage data for cost optimization, access
control, and compliance.
Follow the steps provided below to recover an EC2 instance if you have lost the key:
If you have AWS Systems Manager Run Command configured on the instance,
you can try running commands remotely to reset the SSH key or create a new
user with sudo privileges
If you have access to another EC2 instance in the same availability zone and
subnet, you can stop the affected instance, detach its root volume, attach the
volume to the other instance, mount the volume, and modify the SSH
configuration or create a new user with sudo privileges
If the instance was launched with CloudInit or User Data scripts that configure
SSH access, you can modify the script to add a new SSH key or create a new
user with sudo privileges.
If you have a recent snapshot of the instance's root volume, you can create a
new volume from the snapshot, attach it to a new instance, modify the SSH
configuration or create a new user with sudo privileges, and then launch the
new instance.
Use Cases:
S3 suits backups, static assets, data lakes, and archival of unstructured data,
while EBS suits boot volumes, databases, and other workloads needing low-latency
block storage attached to a single EC2 instance.
Access Method:
S3 is accessed over the internet through REST APIs, SDKs, or the console; an EBS
volume is attached to one EC2 instance and accessed as a block device through the
operating system.
Pricing Model:
S3 pricing is based on the amount of data stored, data transfer out of the S3
bucket, and requests made to the service. EBS pricing is based on the
provisioned storage capacity (per GB per month), IOPS (input/output
operations per second), and snapshot storage.
Data Access:
S3 objects can be read from anywhere with the right permissions, while EBS data
is accessible only through the instance the volume is attached to.
Use the IAM service provided by your cloud provider to create a user account
or group for the person or team that needs access to the bucket.
Permissions:
Configure the access control settings for the specific bucket. Depending on the
cloud provider, you can either use a bucket policy or ACL to control access.
These configurations define who can access the bucket and what level of
access they have (e.g., read, write, delete).
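A bucket policy is a JSON document; the sketch below builds a minimal read-only policy, where the account ID, user name, and bucket name are placeholders rather than values from the source:

```python
import json

# Minimal read-only S3 bucket policy; all ARN values are hypothetical.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReadOnly",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/analyst"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",      # ListBucket applies here
            "arn:aws:s3:::example-bucket/*",    # GetObject applies to objects
        ],
    }],
}
print(json.dumps(policy, indent=2))
```

Note the split in the Resource list: bucket-level actions target the bucket ARN, while object-level actions target `bucket/*`.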
CloudWatch Metrics:
AWS S3 provides various CloudWatch metrics that you can monitor to ensure
replication consistency. These metrics include ReplicationLatency,
SyncOperations, PendingReplicationCount, etc. Monitoring these metrics can
give you insights into the replication status and any potential issues.
ReplicationLatency → This metric measures the time it takes for changes made
to data in one node or datacenter to be replicated to other nodes or datacenters
within the system → High replication latency can indicate issues in the
replication process, such as network congestion, resource constraints, or
inefficient replication mechanisms
CloudWatch Alarms:
Set up CloudWatch alarms based on these metrics. For example, you can
create an alarm to notify you if the PendingReplicationCount exceeds a certain
threshold for a specified period. This can indicate a problem with replication
lag.
AWS Config Rules:
Use AWS Config to set up rules that monitor the configuration of your S3
replication setup. You can define rules to ensure that replication configurations
are compliant with your organization's policies and best practices.
S3 Event Notifications:
Enable S3 replication event notifications so that you are alerted when a
replication operation fails or falls behind, allowing you to react quickly.
Data Encryption:
Ensure source and destination buckets use consistent encryption settings
(SSE-S3 or SSE-KMS) so that replicated objects remain encrypted and replication
is not blocked by missing KMS permissions.
Simple Interface:
Snowball offers a simple and intuitive interface for managing the data
transfer process. Users can request a Snowball device through the AWS
Management Console, specify the data to be transferred, and track the
progress of the transfer.
Cost-Effective:
Snowball can be more cost-effective than transferring large datasets over the
network, avoiding high bandwidth charges and the long transfer times of moving
terabytes over the internet.
Isolation:
One of the primary needs for AWS VPC is to create an isolated section
of the AWS Cloud for your resources that provides security and privacy
by allowing you to define your own virtual network topology, including
subnets, route tables, and network gateways.
Security:
AWS VPC enables you to define network access control policies using
security groups and network access control lists (ACLs) allowing you to
control which resources can communicate with each other and with the
internet, providing a secure environment for your applications and data.
Custom Networking:
With AWS VPC, you have full control over your virtual network, including
IP address ranges, subnets, and routing allowing you to create a network
topology that meets the specific requirements of your applications and
workloads.
Connectivity:
AWS VPC provides features for connecting your virtual network to your
on-premises data center or other AWS VPCs using VPN connections,
Direct Connect, or AWS Transit Gateway, enabling you to extend your
existing network infrastructure into the AWS Cloud and build hybrid
cloud solutions.
Scalability:
AWS VPC is highly scalable, allowing you to create and manage large-
scale networks with thousands of resources to accommodate growing
workloads and traffic patterns without disruption.
Compliance:
The isolation and access controls that VPC provides help meet regulatory
and organizational compliance requirements by restricting where data flows
and who can reach your resources.
Resource Organization:
AWS VPC enables you to organize your resources into logical groups
using subnets, route tables, and network access control policies. This
makes it easier to manage and maintain your infrastructure, especially
as it grows in size and complexity.
Cost Management:
By using AWS VPC, you can optimize costs by only provisioning the
network resources you need and scaling them as necessary.
Additionally, you can leverage features like AWS PrivateLink to reduce
data transfer costs between services within the same VPC.
Routing:
Route tables control how traffic is directed within the VPC and to gateways,
letting you keep subnets private or expose them through an internet gateway.
Security:
Security Groups:
Security Groups act as virtual firewalls for your instances to control inbound
and outbound traffic. You can define rules that allow specific types of traffic
based on protocol, port, and source/destination IP addresses. Security Groups
are stateful, meaning if you allow inbound traffic, the outbound traffic is
automatically allowed, simplifying the configuration.
Network ACLs (NACLs):
NACLs are stateless packet filters that control traffic at the subnet level. They
allow you to create rules that define which traffic is allowed to enter or leave a
subnet. NACLs provide an additional layer of security beyond Security Groups,
especially for blocking specific IP ranges or protocols.
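The stateful/stateless distinction can be illustrated with a toy model; the rule shapes and function names below are invented for illustration and are not real AWS API objects:

```python
# Toy contrast between stateful security groups and stateless NACLs.
def sg_allows(inbound_rules, packet, established):
    # Stateful: return traffic of an already-allowed connection always passes.
    if established:
        return True
    return (packet["port"], packet["proto"]) in inbound_rules

def nacl_allows(rules, packet):
    # Stateless: every packet is evaluated against the rules, in order.
    for action, port, proto in rules:
        if port == packet["port"] and proto == packet["proto"]:
            return action == "allow"
    return False  # implicit deny

pkt = {"port": 443, "proto": "tcp"}
print(sg_allows([(443, "tcp")], pkt, established=False))  # True
print(sg_allows([], pkt, established=True))               # True: stateful
print(nacl_allows([("deny", 443, "tcp")], pkt))           # False: explicit deny
```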
AWS Shield:
AWS Shield provides managed protection against DDoS attacks for resources in
your VPC; Shield Standard is enabled automatically at no extra cost, and Shield
Advanced adds enhanced detection and response.
VPC Flow Logs:
VPC Flow Logs capture information about the IP traffic going to and from
network interfaces in your VPC. You can use Flow Logs for security analysis,
troubleshooting, and compliance auditing. Flow Logs can be configured to
capture metadata about each packet (e.g., source/destination IP addresses,
ports, protocol) and can be sent to Amazon S3, CloudWatch Logs, or Amazon
Kinesis Data Firehose for storage and analysis.
AWS IAM:
IAM enables you to manage access to AWS services and resources securely.
You can create and manage IAM users, groups, and roles to control who can
access your VPC resources and what actions they can perform. IAM policies
allow you to define granular permissions, limiting access to specific VPC
resources based on roles and responsibilities.
VPC Endpoints:
VPC Endpoints enable you to privately connect your VPC to supported AWS
services and VPC endpoint services without requiring an internet gateway, NAT
device, VPN connection, or Direct Connect connection. This helps in keeping
traffic between your VPC and AWS services within the AWS network, reducing
exposure to the public internet and enhancing security.
AWS KMS:
KMS is a managed service that allows you to create and control the encryption
keys used to encrypt your data. You can use KMS to encrypt data stored in
various AWS services, such as Amazon S3, Amazon EBS, and Amazon RDS, as
well as your own applications. By encrypting your data, you add an additional
layer of security, especially for sensitive data stored in your VPC.
VPC PrivateLink:
AWS PrivateLink lets you expose your own services to other VPCs, or consume
supported services, over private interface endpoints so that traffic never
traverses the public internet.
AWS Secrets Manager:
Secrets Manager helps you securely store, rotate, and manage the credentials,
API keys, and other secrets used by your applications. It centralizes and
automates the management of secrets, reducing the risk of unauthorized
access and exposure. Secrets Manager integrates with AWS services, allowing
you to securely access secrets from your VPC-based applications without
hardcoding credentials.
AWS CloudWatch
CloudWatch can be used to collect and track metrics related to VPC resources such
as EC2 instances, load balancers, VPN connections, and NAT gateways. Set up
CloudWatch Alarms to receive notifications when certain thresholds are exceeded,
such as high CPU utilization or low network throughput. Use CloudWatch Logs to
capture and analyze logs from VPC Flow Logs, DHCP logs, Firewall logs and other
sources to monitor network traffic and diagnose connectivity issues.
VPC Flow Logs
VPC Flow Logs capture information about the IP traffic going to and from network
interfaces in your VPC. VPC Flow Logs can be enabled for individual subnets or the
entire VPC. Analyze Flow Logs using tools like Amazon CloudWatch Logs Insights or
third-party log management solutions to monitor traffic patterns, identify anomalies,
and troubleshoot connectivity issues.
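A flow log record in the default (version 2) format is a space-separated line; the record below is synthetic, but the field order matches the documented default format:

```python
# Parse one synthetic VPC Flow Log v2 record into named fields.
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log_status").split()

record = ("2 123456789012 eni-0a1b2c3d 10.0.1.5 10.0.2.9 49152 443 "
          "6 10 8400 1712000000 1712000060 ACCEPT OK")

parsed = dict(zip(FIELDS, record.split()))
print(parsed["action"], parsed["dstport"])  # ACCEPT 443
```

Tools like CloudWatch Logs Insights do this field mapping for you, but the same parsing works for logs exported to S3.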
AWS Config
AWS Config provides a detailed view of the configuration changes made to resources
within your VPC. You can use Config to monitor changes to VPC settings, security
group rules, route tables, and network ACLs. Set up Config Rules to enforce desired
configurations and detect deviations from them, helping you maintain compliance and
security in your VPC environment.
VPC Dashboard
Amazon VPC Dashboard in the AWS Management Console provides a centralized view
of your VPC resources and their status. Monitor metrics such as VPC traffic, VPN
connection status, and Elastic IP address usage directly from the dashboard. Use the
VPC Dashboard to quickly identify any issues or abnormalities within your VPC
configuration.
Third-Party Tools
Consider using third-party monitoring tools (Datadog, New Relic, and Sumo Logic) and
services that offer advanced features for monitoring and managing VPC
environments. These tools may provide additional insights, visualization capabilities,
and integrations with other monitoring systems. Implement security monitoring
solutions (Amazon GuardDuty, AWS Security Hub, or third-party security tools) to
detect and respond to security threats within your VPC. Monitor for suspicious activity,
unauthorized access attempts, and potential vulnerabilities across your VPC
resources.
57. How many Subnets can you have per VPC?
We can have up to 200 subnets per Amazon Virtual Private Cloud (VPC) by default;
this is a soft limit that can be increased through AWS Support.
Performance-sensitive Workloads:
Provisioned IOPS is ideal for applications that require high and consistent I/O
performance, such as databases powering online transaction processing
(OLTP) systems or data warehouses. If your application experiences
performance degradation during peak usage periods or when handling
complex queries, Provisioned IOPS can ensure that your database maintains
the required performance levels.
Applications that demand low latency and fast response times, such as real-
time analytics or high-frequency trading platforms, benefit from Provisioned
IOPS. With Provisioned IOPS, you can reduce disk latency and ensure that
database operations are executed quickly and responsively.
IO-Intensive Workloads:
Workloads that involve frequent read and write operations, large database
scans, or heavy data processing can benefit from Provisioned IOPS.
Provisioned IOPS provides dedicated I/O capacity, allowing your
database to handle intensive workloads without experiencing performance
bottlenecks.
Cost-Effective Scaling:
While Provisioned IOPS typically incurs higher costs compared to standard RDS
storage, it can be more cost-effective in scenarios where you need to scale your
database vertically. By adjusting the provisioned IOPS and storage capacity
based on your workload requirements, you can optimize performance while
controlling costs effectively.
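The cost trade-off can be made concrete with a small calculation; the per-GB and per-IOPS rates below are placeholders for illustration, not current AWS prices:

```python
# Illustrative monthly cost comparison for RDS storage options.
# The rates below are placeholders, NOT current AWS prices.
GP_RATE_PER_GB = 0.115     # general-purpose storage, $/GB-month (illustrative)
IO1_RATE_PER_GB = 0.125    # provisioned IOPS storage, $/GB-month (illustrative)
IO1_RATE_PER_IOPS = 0.10   # $/provisioned IOPS-month (illustrative)

def monthly_cost(storage_gb: int, provisioned_iops: int = 0) -> float:
    """Storage cost, plus a per-IOPS charge when IOPS are provisioned."""
    if provisioned_iops:
        return storage_gb * IO1_RATE_PER_GB + provisioned_iops * IO1_RATE_PER_IOPS
    return storage_gb * GP_RATE_PER_GB

# 500 GB of general-purpose storage vs 500 GB with 5,000 provisioned IOPS:
print(round(monthly_cost(500), 2))        # 57.5
print(round(monthly_cost(500, 5000), 2))  # 562.5
```

The point of the arithmetic: with Provisioned IOPS you pay separately for capacity and performance, so you can size each dimension to the workload instead of over-provisioning storage to get I/O.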
High Availability and Durability:
By deploying resources across multiple data centers and Availability Zones (AZs)
worldwide, organizations can achieve high availability and durability for their
applications and data through built-in redundancy and fault tolerance,
minimizing the risk of downtime and data loss during disasters.
Scalability:
AWS lets organizations scale recovery infrastructure up or down on demand, so
full standby capacity does not have to be permanently provisioned.
Cost-effectiveness:
AWS's pay-as-you-go pricing model enables organizations to pay only for the
resources they use, avoiding the upfront costs of traditional disaster recovery
solutions, such as building and maintaining expensive secondary data centers.
Fast Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO):
AWS offers solutions that enable organizations to achieve fast recovery times
and minimize data loss during disasters. Services like AWS Backup, AWS
Storage Gateway, and AWS Database Migration Service (DMS) help
organizations meet their RTO and RPO targets by providing efficient backup,
replication, and data migration capabilities.
Global Reach and Compliance:
AWS operates in multiple regions and complies with industry standards and
regulations, making it suitable for organizations with global operations and
compliance requirements.
RTO
RTO refers to the maximum amount of time allowed for recovering a system,
application, or service after a disruption or disaster occurs. In AWS, RTO
measures the time it takes to restore operations and bring the affected
resources back online following an outage or failure. Organizations typically
define RTO based on business requirements, considering factors such as the
criticality of the application, the impact of downtime on revenue and
productivity, and customer expectations.
AWS Services → Replication, Auto Scaling, AWS Elastic Beanstalk, and AWS
Lambda
RPO
RPO defines the maximum acceptable amount of data loss that an organization
can tolerate during a disaster or disruption. In AWS, RPO measures the point in
time to which data must be recovered to ensure minimal data loss.
Organizations determine RPO based on factors such as data sensitivity,
regulatory requirements, and business continuity needs.
AWS Services → Amazon S3 for object storage, Amazon EBS snapshots, and
AWS Backup
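The relationship between backup schedules and these targets can be sketched as simple arithmetic; the target values below are illustrative:

```python
# Checking a backup schedule against RTO/RPO targets (sketch).
# With periodic backups, the worst-case data loss is the full interval
# between two consecutive backups, so that interval bounds your RPO.
def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    return backup_interval_hours <= rpo_hours

def meets_rto(measured_restore_hours: float, rto_hours: float) -> bool:
    return measured_restore_hours <= rto_hours

# Hourly EBS snapshots against a 4-hour RPO, and a 2-hour restore drill
# against a 3-hour RTO (all values illustrative):
print(meets_rpo(1, 4), meets_rto(2, 3))  # True True
print(meets_rpo(6, 4))                   # False
```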
Snowball:
Snowball is designed for transferring large amounts of data (up to 80TB per
device) to and from AWS in situations where high-speed internet connections
are not available or where transferring data over the network would be time-
consuming and costly. Snowball devices are rugged, portable, and secure,
equipped with built-in encryption and tamper-resistant features. Snowball is
ideal for one-time data transfers, migrations, and data backup projects where
the data volume is relatively large but not massive compared to the capacity of
Snowmobile.
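The "time-consuming and costly" trade-off can be quantified with back-of-the-envelope arithmetic, which shows why shipping a device often beats the network for large volumes:

```python
# Rough transfer-time arithmetic behind the Snowball decision (sketch).
# How long would 80 TB take over a 100 Mbps link at full utilization?
TB = 10 ** 12  # bytes (decimal terabyte)

def transfer_days(size_bytes: float, link_mbps: float) -> float:
    """Days to move size_bytes over a link, assuming 100% utilization."""
    bytes_per_second = link_mbps * 1_000_000 / 8
    return size_bytes / bytes_per_second / 86_400

days = transfer_days(80 * TB, 100)
print(round(days, 1))  # 74.1 — over two months, vs days to ship a device
```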
Snowball Edge:
Snowball Edge adds local compute to the Snowball form factor: it can run EC2
instances and AWS Lambda functions on the device, making it suitable for sites
that need local processing as well as bulk data transfer, such as remote or
disconnected locations.
Snowmobile:
Snowmobile is an exabyte-scale data transfer service: a secure 45-foot shipping
container hauled by truck that can move up to 100PB per trip. It is intended for
migrating entire data centers, where even a fleet of Snowball devices would be
impractical.
IAM enables you to define fine-grained permissions and access policies that
specify who can access specific AWS resources and what actions they can
perform. You can create custom IAM policies to grant or deny permissions at
the level of individual API actions, resources, or resource groups, allowing you
to enforce the principle of least privilege and minimize the risk of unauthorized
access.
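A least-privilege policy of the kind described is just a JSON document; here is a minimal sketch built as plain data (the bucket name and prefix are illustrative):

```python
import json

# A least-privilege IAM policy document as plain data (sketch).
# It grants only read access, and only to one prefix of one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyReports",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],  # only the action actually needed
            "Resource": "arn:aws:s3:::example-bucket/reports/*",
        }
    ],
}
document = json.dumps(policy, indent=2)
print("s3:GetObject" in document)  # True
```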
Centralized Identity Management:
IAM gives you a single place to create and manage users, groups, roles, and
their security credentials across your AWS account.
Security and Compliance:
IAM helps you improve security and maintain compliance with regulatory
requirements by enforcing strong authentication and access controls. You can
enable multi-factor authentication (MFA) for IAM users to add an extra layer of
security to their accounts. IAM supports AWS CloudTrail integration, allowing
you to monitor and log IAM API activity for auditing, compliance, and security
analysis.
Cost Optimization:
IAM helps you optimize costs by allowing you to grant permissions only to the
resources and actions required by users, groups, and roles. By implementing
least privilege access and monitoring IAM usage, you can identify and eliminate
unnecessary permissions, reducing the risk of accidental or malicious actions
that could incur additional costs.
• Data tables → Mappings allow you to define key-value pairs that can
be used to specify conditional values based on a key lookup. They are
typically used to map different values for resources based on regions,
environments, or other criteria defined in the template
• File format version → The format version specifies the version of the
CloudFormation template schema being used → Helps CloudFormation
determine which syntax rules and features are supported
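How a Mappings lookup resolves can be sketched in Python; the structure mirrors a CloudFormation Mappings section, with illustrative region names and AMI IDs (this simulates what Fn::FindInMap does, it is not the CloudFormation engine):

```python
# A Mappings section as nested key-value data, with a FindInMap-style
# lookup: map name -> top-level key -> second-level key -> value.
mappings = {
    "RegionMap": {
        "us-east-1": {"AMI": "ami-0aaa1111"},  # illustrative AMI IDs
        "eu-west-1": {"AMI": "ami-0bbb2222"},
    }
}

def find_in_map(map_name: str, top_key: str, second_key: str) -> str:
    """Equivalent of Fn::FindInMap [map_name, top_key, second_key]."""
    return mappings[map_name][top_key][second_key]

print(find_in_map("RegionMap", "eu-west-1", "AMI"))  # ami-0bbb2222
```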
Yes, you can use the EFS-to-EFS backup solution to recover from unintended
changes or deletion in Amazon EFS. Follow these steps:
3. Use the region selector in the console navigation bar to select region
4. Verify if you have chosen the right template on the Select Template page
6. Review the parameters for the template and modify them if necessary
• As per procedure and best practices, take snapshots of the EBS volumes
on Amazon S3.
There are three types of load balancers that are supported by Elastic Load Balancing:
Classic Load Balancer (CLB):
CLB provides basic load balancing capabilities for distributing incoming traffic
across multiple EC2 instances in one or more Availability Zones. Use CLB for
simple, traditional load balancing scenarios where you need to distribute traffic
evenly across instances and do not require advanced features such as content-
based routing or SSL offloading. CLB is suitable for applications that rely on
TCP and SSL protocols and do not require advanced routing or application layer
features.
Application Load Balancer (ALB):
ALB operates at the application layer (Layer 7) of the OSI model and provides
advanced routing and content-based routing capabilities. Use ALB for modern,
microservices-based architectures and applications where you need to route
traffic based on URL path, host header, or query string parameters. ALB
supports features such as host-based routing, path-based routing, WebSocket
protocol, HTTP/2, and containerized applications running on ECS or EKS. ALB
is suitable for applications with HTTP and HTTPS traffic that require flexible
routing and traffic management capabilities.
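ALB's path-based routing can be sketched as a first-match rule table; the rule list and target group names below are illustrative:

```python
# Layer-7 (content-based) routing of the kind ALB performs (sketch).
# Each rule maps a URL path prefix to a target group; first match wins,
# with a default target group as the fallback.
RULES = [
    ("/api/", "api-target-group"),
    ("/images/", "static-target-group"),
]
DEFAULT = "web-target-group"

def route(path: str) -> str:
    for prefix, target_group in RULES:
        if path.startswith(prefix):
            return target_group
    return DEFAULT

print(route("/api/v1/orders"))  # api-target-group
print(route("/index.html"))     # web-target-group
```

A Layer-4 balancer like NLB never inspects the path at all; it forwards based on connection-level information, which is why it can be faster but cannot do this kind of routing.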
Network Load Balancer (NLB):
NLB operates at the transport layer (Layer 4) of the OSI model and provides
high-performance, low-latency load balancing for TCP, UDP, and TLS traffic. Use
NLB for applications that require high throughput, low latency, and support for
TCP/UDP protocols, such as gaming applications, real-time streaming, and IoT
platforms. NLB is designed for extreme performance and scalability, making it
suitable for handling millions of requests per second with minimal latency. NLB
also supports static IP addresses, preservation of source IP addresses, and
TCP/UDP session stickiness, making it suitable for stateful applications and
scenarios where client IP preservation is important.
79. How can you use AWS WAF to monitor your AWS
applications?
Create a Web ACL:
A web ACL is a collection of rules that defines how AWS WAF filters and
monitors web requests to your application. Define conditions and rules within
the web ACL to specify the criteria for allowing, blocking, or monitoring HTTP
and HTTPS requests.
Define Conditions and Rules:
Within the web ACL, define conditions based on various attributes of HTTP
requests, such as IP addresses, headers, query strings, or request body
content. Create rules that use these conditions to allow, block, or monitor
requests that match specific patterns or criteria. For monitoring purposes, you
can create rules that count or log requests that match certain conditions
without blocking them.
Enable Logging:
Configure AWS WAF to log requests that match specific rules or conditions to
Amazon CloudWatch Logs or Amazon Kinesis Data Firehose for monitoring
and analysis.
Enable logging for web ACLs to capture detailed information about incoming
requests, including request headers, source IP addresses, user agents, and
more.
Analyze the Logs:
Use Amazon CloudWatch Logs Insights or other log analysis tools to query and
analyze the logs generated by AWS WAF. Monitor metrics such as the number
of allowed, blocked, or monitored requests over time to gain insights into your
application's traffic patterns and potential security threats.
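The allow/block/count behavior of web ACL rules can be sketched as follows; the rule shapes and request fields are illustrative, not the AWS WAF rule syntax:

```python
# Web-ACL-style rule evaluation (sketch): each rule has a name, a
# condition, and an action of ALLOW, BLOCK, or COUNT. COUNT rules record
# a match but let evaluation continue, which is how monitoring-only
# rules behave.
def evaluate(request: dict, rules: list) -> tuple:
    counted = []
    for name, condition, action in rules:
        if condition(request):
            if action == "COUNT":
                counted.append(name)  # log the match, keep evaluating
                continue
            return action, counted
    return "ALLOW", counted  # default action

rules = [
    ("block-bad-ip", lambda r: r["ip"] == "203.0.113.9", "BLOCK"),
    ("count-admin", lambda r: r["path"].startswith("/admin"), "COUNT"),
]
print(evaluate({"ip": "198.51.100.7", "path": "/admin/login"}, rules))
# ('ALLOW', ['count-admin'])
print(evaluate({"ip": "203.0.113.9", "path": "/"}, rules))
# ('BLOCK', [])
```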
80. What are the different AWS IAM categories that you
can control?
• Create and manage IAM users
• Create and manage IAM groups
• Manage the security credentials of your users
• Create and manage policies to grant access to AWS services and resources
81. What are the policies that you can set for your users’
passwords?
• You can set a minimum password length, and you can require users to include at
least one number or special character.
• You can enforce automatic password expiration, prevent reuse of old passwords,
and require a password reset at the user's next AWS sign-in.
• You can require users to contact an account administrator when they have
allowed their password to expire.
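A password policy of this kind reduces to a simple validation function; the specific requirements below (12-character minimum, one digit, one special character, no reuse) are illustrative:

```python
import re

# An account password policy of the kind described above (sketch).
MIN_LENGTH = 12  # illustrative minimum

def password_ok(password: str, previous: list) -> bool:
    """Enforce minimum length, at least one digit, at least one special
    character, and no reuse of previous passwords."""
    return (
        len(password) >= MIN_LENGTH
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
        and password not in previous
    )

print(password_ok("correct-horse-7!", previous=[]))  # True
print(password_ok("short7!", previous=[]))           # False (too short)
print(password_ok("correct-horse-7!", previous=["correct-horse-7!"]))  # False
```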
82. What is the difference between an IAM role and an
IAM user?
The two key differences between the IAM role and IAM user are:
• An IAM role is an IAM entity that defines a set of permissions for making AWS
service requests, while an IAM user has permanent long-term credentials and is
used to interact with the AWS services directly.
• In the IAM role, trusted entities, like IAM users, applications, or an AWS service,
assume roles whereas the IAM user has full access to all the AWS IAM
functionalities.
There are two types of managed policies: customer managed policies (created and
managed by you) and AWS managed policies (created and managed by AWS). Both are
IAM resources that express permissions using the IAM policy language. You can
create, edit, and manage them separately from the IAM users, groups, and roles
to which they are attached.
Here’s an example of an IAM policy to grant access to add, update, and delete
objects from a specific folder.
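A policy of this kind would look roughly like the following sketch (the bucket and folder names are illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageObjectsInFolder",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::example-bucket/example-folder/*"
    }
  ]
}
```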
• Manage IAM users and their access - AWS IAM provides secure resource access
to multiple users
• Manage access for federated users – AWS allows you to provide secure access to
resources in your AWS account to your employees and applications without
creating IAM users
Amazon Route 53 is a scalable and highly available Domain Name System (DNS).
The name refers to TCP or UDP port 53, where DNS server requests are addressed.
CloudTrail is a service that captures information about every request sent to the
Amazon Route 53 API by an AWS account, including requests that are sent by IAM
users. CloudTrail saves log files of these requests to an Amazon S3 bucket.
CloudTrail captures information about all requests. You can use information in the
CloudTrail log files to determine which requests were sent to Amazon Route 53, the
IP address that the request was sent from, who sent the request, when it was sent,
and more.
Geolocation-based DNS routing makes decisions based on the geographic location
of the request, whereas latency-based routing uses latency measurements between
networks and AWS data centers. Latency-based routing is used when you want to
give your customers the lowest latency possible, while geolocation-based
routing is used when you want to direct customers to different websites based
on the country or region they are browsing from.
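The two routing decisions can be sketched side by side; the endpoint names, latency figures, and country table below are illustrative:

```python
# Latency-based vs geolocation-based routing decisions (sketch).
ENDPOINTS = {"us-east-1": 140, "eu-west-1": 30, "ap-south-1": 210}  # ms to this client
GEO_TABLE = {"DE": "eu-west-1", "US": "us-east-1", "IN": "ap-south-1"}

def latency_based(latencies: dict) -> str:
    """Pick the endpoint with the lowest measured latency."""
    return min(latencies, key=latencies.get)

def geo_based(country_code: str, default: str = "us-east-1") -> str:
    """Pick the endpoint mapped to the client's country/region."""
    return GEO_TABLE.get(country_code, default)

print(latency_based(ENDPOINTS))  # eu-west-1
print(geo_based("IN"))           # ap-south-1
print(geo_based("BR"))           # us-east-1 (fallback)
```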
Domain
A domain is a registered name on the internet, such as simplilearn.com, under
which DNS records are managed.
Hosted zone
A hosted zone is a container that holds information about how you want to route
traffic on the internet for a specific domain. For example, lms.simplilearn.com is a
hosted zone.
Route 53 is a global service and consequently has DNS servers around the world.
A customer issuing a query from any part of the world reaches a DNS server
local to them, which keeps latency low.
Dependency
Optimal Locations
Route 53 uses a global anycast network to answer queries from the optimal position
automatically.
AWS CloudTrail records user API activity on your account and allows you to access
information about the activity. Using CloudTrail, you can get full details about API
actions such as the identity of the caller, time of the call, request parameters, and
response elements. On the other hand, AWS Config records point-in-time
configuration details for your AWS resources as Configuration Items (CIs).
You can use a CI to ascertain what an AWS resource looked like at any given
point in time, whereas CloudTrail quickly answers who made an API call to
modify a resource. You can also use CloudTrail to detect whether a security
group was incorrectly configured.
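Answering "who made this call?" from a CloudTrail log file comes down to reading a few standard event fields; the record below is a trimmed, illustrative event, not captured output:

```python
import json

# Pulling the caller identity out of a CloudTrail record (sketch).
# eventName, eventTime, sourceIPAddress, and userIdentity are standard
# CloudTrail event fields; the values here are illustrative.
event_json = """
{
  "eventName": "AuthorizeSecurityGroupIngress",
  "eventTime": "2024-05-01T12:34:56Z",
  "sourceIPAddress": "198.51.100.23",
  "userIdentity": {"type": "IAMUser", "userName": "alice"}
}
"""

event = json.loads(event_json)
caller = event["userIdentity"]["userName"]
print(f'{caller} called {event["eventName"]} from {event["sourceIPAddress"]}')
# alice called AuthorizeSecurityGroupIngress from 198.51.100.23
```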
There are two types of scaling: vertical scaling and horizontal scaling.
Vertical scaling lets you scale up your master database with the press of a
button; a database can only be scaled vertically, and there are 18 different
instance types to which you can resize an RDS instance. Horizontal scaling, on
the other hand, adds read-only replicas; read replicas are available for
engines such as Amazon Aurora, MySQL, and PostgreSQL.
There are two consistency models in DynamoDB. The first is the Eventual
Consistency Model, which maximizes your read throughput but might not reflect
the results of a recently completed write; all copies of the data usually reach
consistency within a second. The second is the Strong Consistency Model, which
guarantees that a read returns the most up-to-date data, reflecting all writes
that completed before the read, at the cost of higher latency and lower read
throughput.
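The difference between the two read models can be simulated with a primary copy and a lagging replica; this is a toy model for illustration, not DynamoDB's actual replication protocol:

```python
import copy

# Eventual vs strong consistency, simulated with two replicas (sketch).
# A write lands on the primary first; the replica catches up later.
class Table:
    def __init__(self):
        self.primary = {}
        self.replica = {}          # lags behind until replicate() runs

    def put(self, key, value):
        self.primary[key] = value  # write acknowledged before replication

    def replicate(self):
        self.replica = copy.deepcopy(self.primary)

    def read(self, key, consistent=False):
        # A strongly consistent read always reflects completed writes;
        # an eventually consistent read may return stale data.
        store = self.primary if consistent else self.replica
        return store.get(key)

t = Table()
t.put("user#1", "v2")
print(t.read("user#1"))                   # None (stale replica)
print(t.read("user#1", consistent=True))  # v2
t.replicate()
print(t.read("user#1"))                   # v2 (replica caught up)
```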
Amazon DynamoDB
Amazon CloudWatch
Amazon Cognito
4. You are a Machine Learning Engineer who is on the
lookout for a solution that will discover sensitive
information that your enterprise stores in AWS and then
use NLP to classify the data and provide business-
related insights. Which among the services would you
choose?
AWS Macie
AWS IAM
Amazon VPC
AWS Lambda
Amazon Chime
13. As your company's AWS Solutions Architect, you are
in charge of designing thousands of similar individual
jobs. Which of the following services best meets your
requirements?
AWS Batch
1. Amazon RDS
2. Amazon Neptune
3. Amazon Snowball
4. Amazon DynamoDB
2. Amazon GuardDuty
3. Amazon CloudWatch
4. Amazon EBS
3. As a web developer, you are developing an app,
targeted especially for the mobile platform. Which of the
following lets you add user sign-up, sign-in, and access
control to your web and mobile apps quickly and easily?
1. AWS Shield
2. AWS Macie
3. AWS Inspector
4. Amazon Cognito
2. AWS IAM
3. AWS Macie
4. AWS CloudHSM
4. AWS IAM
1. Amazon Route 53
2. Amazon VPC
4. Amazon CloudFront
2. Amazon Elasticache
3. Amazon VPC
4. Amazon Glacier
4. Multi-Factor Authentication
2. AWS Batch
4. Amazon Lightsail
2. AWS Lambda
3. AWS Batch
4. Amazon Inspector
2. Amazon MQ
1. Amazon Chime
2. Amazon WorkSpaces
3. Amazon MQ
4. Amazon AppStream
2. AWS Snowball
3. AWS Fargate
4. AWS Batch
1. Amazon SageMaker
2. AWS DeepLens
3. Amazon Comprehend
4. Device Farm
1. Amazon VPC
2. AWS IAM
3. Amazon Inspector
1. Amazon GameLift
2. AWS Greengrass
3. Amazon Lumberyard
4. Amazon Sumerian
1. AWS Budgets
3. Amazon WorkMail
4. Amazon Connect
1. AWS CloudFormation
2. AWS Aurora
1. Amazon Aurora
2. AWS RDS
3. Amazon Elasticache
1. AWS CloudTrail
2. AWS Config
3. Amazon Chime
4. AWS Simple Notification Service