AWS Interview Questions

Uploaded by Amit Kulkarni

AWS Fundamentals:

1. What is AWS?
• AWS (Amazon Web Services) is a cloud computing platform and
services provided by Amazon for building, deploying, and managing
applications and services through a global network of data centers.
2. Explain the key components of AWS.
• AWS consists of various services, including compute, storage,
networking, databases, AI/ML, IoT, security, and management tools,
organized into categories like EC2, S3, RDS, Lambda, IAM, etc.
3. What is EC2?
• Amazon Elastic Compute Cloud (EC2) is a web service that provides
resizable compute capacity in the cloud, allowing users to run virtual
servers (instances) for various workloads.
4. What is S3?
• Amazon Simple Storage Service (S3) is an object storage service that
offers scalable storage for data storage, backup, and archival, accessible
via a simple web interface or API.
5. Explain the difference between S3 and EBS.
• S3 is object storage suitable for storing large amounts of unstructured
data, while EBS (Elastic Block Store) provides block-level storage
volumes for use with EC2 instances, offering higher performance and
lower latency.

AWS Compute:

6. What is Lambda?
• AWS Lambda is a serverless compute service that allows users to run
code in response to events without provisioning or managing servers,
paying only for the compute time consumed.
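A minimal Lambda handler sketch in Python illustrates the event-driven model. The event shape shown (an API Gateway-style request with a JSON body containing a "name" field) is a hypothetical example, not a fixed AWS contract:

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda handler: respond to an API Gateway-style event.

    The 'name' field in the request body is a hypothetical example input.
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Lambda invokes the handler once per event; you are billed only for the execution time consumed, with no servers to manage.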
7. How do you scale EC2 instances?
• EC2 instances can be scaled vertically (resizing the instance type) or
horizontally (increasing the number of instances) manually or
automatically using Auto Scaling groups based on demand or schedule.
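Horizontal scaling is often configured with a target-tracking policy. A sketch of the parameters (boto3 shape) follows; the group name is hypothetical, and the actual API call is shown commented out because it requires AWS credentials:

```python
# Hypothetical target-tracking scaling policy for an Auto Scaling group:
# keep average CPU near 50%, letting the group add or remove instances.
policy = {
    "AutoScalingGroupName": "web-asg",          # hypothetical group name
    "PolicyName": "cpu-target-50",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,                    # target average CPU percent
    },
}
# In practice: boto3.client("autoscaling").put_scaling_policy(**policy)
```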
8. What is Elastic Beanstalk?
• AWS Elastic Beanstalk is a platform-as-a-service (PaaS) offering that
enables users to deploy, manage, and scale web applications and
services quickly and easily without worrying about infrastructure
management.
9. Explain the difference between EC2 and ECS.
• EC2 is a virtual server service for running applications on virtual
machines, while ECS (Elastic Container Service) is a container
orchestration service for managing Docker containers at scale.
10. What is AWS Batch?
• AWS Batch is a fully managed batch processing service that enables
users to run batch computing workloads at any scale efficiently,
automatically provisioning compute resources based on workload
requirements.

AWS Storage:

11. What is EBS?


• Amazon Elastic Block Store (EBS) provides block-level storage volumes
for use with EC2 instances, offering persistent, high-performance
storage optimized for I/O-intensive workloads.
12. Explain the use cases for S3 Glacier.
• S3 Glacier is a low-cost storage class designed for long-term data
archival and backup, offering durable and secure storage with flexible
retrieval options for infrequently accessed data.
13. How do you secure data in S3?
• Data in S3 can be secured using encryption (server-side or client-side),
access controls (bucket policies, ACLs), versioning, MFA delete, and
monitoring using AWS CloudTrail and S3 Access Logs.
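As a sketch of one of these controls, here is a bucket policy (built as a Python dict) that denies any request not made over TLS. The bucket name is hypothetical, and the call to apply it is commented out since it needs credentials:

```python
import json

BUCKET = "example-data-bucket"  # hypothetical bucket name

# Deny any S3 request to this bucket that does not use TLS,
# via the aws:SecureTransport condition key.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
# In practice:
# s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))
```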
14. What is AWS Storage Gateway?
• AWS Storage Gateway is a hybrid cloud storage service that enables
seamless integration between on-premises environments and AWS
cloud storage, providing file, volume, and tape storage interfaces.
15. Explain the difference between EFS and FSx.
• Amazon EFS (Elastic File System) is a fully managed file storage service
for use with EC2 instances, while Amazon FSx provides fully managed
file systems optimized for specific workloads like Windows file storage
(FSx for Windows File Server) or Lustre (FSx for Lustre).

AWS Networking:

16. What is VPC?


• Amazon Virtual Private Cloud (VPC) allows users to provision a logically
isolated section of the AWS cloud where they can launch AWS
resources in a defined virtual network.
17. What are Security Groups and NACLs in AWS?
• Security Groups and Network Access Control Lists (NACLs) are AWS
networking features used to control traffic to and from EC2 instances in
a VPC, with Security Groups operating at the instance level and NACLs
operating at the subnet level.
18. Explain AWS Direct Connect.
• AWS Direct Connect is a dedicated network connection service that
provides private, high-speed, low-latency connectivity between an
organization's data center or colocation facility and AWS cloud services.
19. What is AWS Transit Gateway?
• AWS Transit Gateway is a service that simplifies network connectivity by
enabling centralized management of multiple VPCs, on-premises
networks, and VPN connections, allowing for scalable and efficient
routing between connected networks.
20. How do you set up VPC peering in AWS?
• VPC peering allows users to connect two VPCs within the same AWS
region, enabling instances in different VPCs to communicate with each
other using private IP addresses. It is established by creating peering
connections between VPCs and updating route tables.
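The two steps can be sketched with boto3-style request shapes. All VPC, route table, and peering IDs below are hypothetical, and the API calls are commented out because they require credentials:

```python
# Hypothetical IDs/CIDRs for two VPCs in the same region.
REQUESTER_VPC = "vpc-0aaa11112222"
ACCEPTER_VPC = "vpc-0bbb33334444"
ACCEPTER_CIDR = "10.1.0.0/16"

# Step 1: request the peering connection (boto3 EC2 client shape).
peering_request = {"VpcId": REQUESTER_VPC, "PeerVpcId": ACCEPTER_VPC}
# pcx = ec2.create_vpc_peering_connection(**peering_request)

# Step 2: after the accepter accepts, add a route in each VPC's route
# table pointing the peer's CIDR at the peering connection.
route = {
    "RouteTableId": "rtb-0ccc55556666",           # hypothetical route table
    "DestinationCidrBlock": ACCEPTER_CIDR,
    "VpcPeeringConnectionId": "pcx-0ddd77778888",  # ID returned by step 1
}
# ec2.create_route(**route)
```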

AWS Databases:

21. What is RDS?


• Amazon Relational Database Service (RDS) is a fully managed relational
database service that provides scalable, high-performance database
instances for MySQL, PostgreSQL, SQL Server, Oracle, and MariaDB.
22. Explain the use cases for DynamoDB.
• Amazon DynamoDB is a fully managed NoSQL database service that
offers single-digit millisecond latency at any scale, suitable for high-
performance, low-latency applications with flexible data models.
23. What is Aurora?
• Amazon Aurora is a fully managed relational database engine that
offers high performance, scalability, and availability, compatible with
MySQL and PostgreSQL, designed for demanding workloads and
mission-critical applications.
24. How do you secure data in RDS?
• Data in RDS can be secured using encryption (at rest and in transit),
access controls (IAM, database users, security groups), database
auditing, and compliance certifications (e.g., HIPAA, PCI DSS).
25. Explain the difference between RDS and DynamoDB.
• RDS is a managed relational database service suitable for traditional
SQL databases, while DynamoDB is a fully managed NoSQL database
service designed for high scalability, low-latency applications with
flexible data models.
AWS Identity and Access Management (IAM):

26. What is IAM?


• AWS Identity and Access Management (IAM) is a web service that
enables you to securely control access to AWS services and resources
by managing users, groups, roles, and permissions.
27. Explain the principle of least privilege in IAM.
• The principle of least privilege in IAM dictates that users should be
granted only the permissions they need to perform their tasks,
minimizing the risk of unauthorized access and potential security
breaches.
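A least-privilege policy in practice scopes both the actions and the resources. This sketch (bucket and prefix hypothetical) grants read-only access to one S3 prefix instead of `s3:*` on everything:

```python
# Least privilege: allow only GetObject on one hypothetical S3 prefix,
# rather than broad s3:* permissions across all buckets.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-data-bucket/reports/*",
        }
    ],
}
```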
28. What is IAM Role?
• An IAM role is an AWS identity with permissions attached to it that can
be assumed by users, applications, or AWS services to access AWS
resources securely.
29. How do you grant temporary access to AWS resources?
• Temporary access to AWS resources can be granted using IAM roles
with temporary security credentials obtained through AWS STS
(Security Token Service), which can be limited by time, permissions, and
other factors.
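A sketch of the STS call's parameters (boto3 shape) is below. The role ARN and session name are hypothetical, and the call itself is commented out since it requires credentials:

```python
# Parameters for assuming a role via STS; the role ARN is hypothetical.
assume_params = {
    "RoleArn": "arn:aws:iam::123456789012:role/ReadOnlyAuditor",
    "RoleSessionName": "audit-session",
    "DurationSeconds": 3600,   # temporary credentials expire after 1 hour
}
# In practice:
# sts = boto3.client("sts")
# creds = sts.assume_role(**assume_params)["Credentials"]
# creds contains AccessKeyId, SecretAccessKey, SessionToken, Expiration.
```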
30. What is AWS Organizations?
• AWS Organizations is a service that enables centralized management of
multiple AWS accounts and resources, providing features like
consolidated billing, organizational units, service control policies, and
cross-account access.

AWS Security and Compliance:

31. What is AWS Security Hub?


• AWS Security Hub is a comprehensive security and compliance service
that provides centralized visibility into security alerts and compliance
status across AWS accounts and services, helping organizations to
identify and remediate security issues.
32. Explain AWS WAF.
• AWS WAF (Web Application Firewall) is a managed service that helps
protect web applications from common web exploits by allowing users
to create rules that filter traffic based on IP addresses, HTTP headers,
and URI strings.
33. What is AWS Shield?
• AWS Shield is a managed Distributed Denial of Service (DDoS)
protection service that helps protect web applications running on AWS
against large-scale, sophisticated DDoS attacks by automatically
detecting and mitigating threats.
34. How do you monitor and log AWS activities?
• AWS CloudTrail provides a comprehensive logging solution for
monitoring and logging AWS activities, including API calls, user activity,
resource changes, and system events, enabling audit trails, security
analysis, and compliance reporting.
35. Explain the Shared Responsibility Model in AWS.
• The Shared Responsibility Model in AWS defines the division of security
responsibilities between AWS and the customer, where AWS is
responsible for the security of the cloud (hardware, software,
networking, and facilities), while the customer is responsible for security
in the cloud (data, applications, identity and access management,
network configuration, etc.).

AWS DevOps and Automation:

36. What is AWS CodeCommit?


• AWS CodeCommit is a fully managed source control service that
enables teams to securely store, manage, and collaborate on Git
repositories in the AWS cloud.
37. How do you automate infrastructure deployment in AWS?
• Infrastructure deployment in AWS can be automated using services like
AWS CloudFormation for infrastructure as code (IaC), AWS CLI
(Command Line Interface), AWS SDKs (Software Development Kits), and
third-party tools like Terraform or Ansible.
38. Explain AWS CodePipeline.
• AWS CodePipeline is a continuous integration and continuous delivery
(CI/CD) service that automates the build, test, and deployment
pipelines for applications hosted on AWS, allowing users to rapidly and
reliably deliver code changes.
39. What is AWS OpsWorks?
• AWS OpsWorks is a configuration management service that enables
users to automate the deployment and management of applications
and infrastructure using Chef, Puppet, or AWS-managed Chef
Automate.
40. How do you monitor AWS resources and applications?
• AWS provides various monitoring and logging services, including
Amazon CloudWatch for monitoring AWS resources and applications,
AWS X-Ray for tracing and analyzing requests, and AWS Config for
resource inventory and configuration history.
AWS Monitoring and Management:

41. What is AWS CloudWatch?


• Amazon CloudWatch is a monitoring and observability service that
provides real-time monitoring, logs analysis, and metrics collection for
AWS resources and applications, enabling users to gain insights into
their operational health and performance.
42. How do you set up alarms in CloudWatch?
• CloudWatch alarms can be set up to automatically trigger notifications
or actions based on predefined thresholds or conditions for metrics
collected from AWS resources, such as CPU utilization, latency, or error
rates.
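For example, the parameters for a CPU alarm might look like this (boto3 shape; instance ID, alarm name, and SNS topic are hypothetical, and the API call is commented out):

```python
# Alarm when average CPU exceeds 80% for two consecutive 5-minute periods.
alarm = {
    "AlarmName": "high-cpu-web-1",                # hypothetical name
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,                                # 5-minute windows
    "EvaluationPeriods": 2,                       # 2 consecutive breaches
    "Threshold": 80.0,                            # percent CPU
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
}
# In practice: boto3.client("cloudwatch").put_metric_alarm(**alarm)
```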
43. Explain AWS CloudFormation.
• AWS CloudFormation is a service that enables users to provision and
manage AWS resources using templates that describe the infrastructure
and configuration as code, allowing for automated and repeatable
infrastructure deployment.
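A minimal template, built here as a Python dict in the JSON template form, declares a single S3 bucket. The bucket and stack names are hypothetical, and the deployment call is commented out:

```python
import json

# A minimal CloudFormation template (JSON form) declaring one S3 bucket.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DataBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-data-bucket"},  # hypothetical
        }
    },
}
template_body = json.dumps(template)
# In practice: boto3.client("cloudformation").create_stack(
#     StackName="demo-stack", TemplateBody=template_body)
```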
44. What is AWS Systems Manager?
• AWS Systems Manager is a management service that enables users to
automate operational tasks and manage infrastructure at scale,
including patch management, configuration management, automation,
and inventory tracking.
45. How do you automate backups in AWS?
• Backups in AWS can be automated using services like AWS Backup for
centralized backup management across AWS services, Amazon Data Lifecycle
Manager for scheduling EBS snapshots, and the automated backup features
built into RDS and DynamoDB (such as point-in-time recovery).

AWS Migration and Hybrid Solutions:

46. What is AWS Database Migration Service (DMS)?


• AWS Database Migration Service is a fully managed service that
enables users to migrate databases to AWS easily and securely with
minimal downtime, supporting homogenous and heterogeneous
migrations.
47. Explain AWS Server Migration Service (SMS).
• AWS Server Migration Service is a service that enables users to migrate
on-premises servers to AWS, including VMware, Hyper-V, and Azure
virtual machines, by replicating server volumes to Amazon EC2
instances.
48. What is AWS Snowball?
• AWS Snowball is a petabyte-scale data transfer service that enables
users to securely transfer large amounts of data into and out of AWS
using physical storage appliances, minimizing data transfer times and
costs.
49. How do you set up hybrid cloud connectivity with AWS?
• Hybrid cloud connectivity with AWS can be set up using services like
AWS Direct Connect for dedicated network connections, AWS VPN for
secure VPN connections over the internet, and AWS Storage Gateway
for seamless integration between on-premises environments and AWS
cloud storage.
50. Explain the AWS Hybrid Architecture.
• AWS Hybrid Architecture refers to the design and deployment of
applications and infrastructure that span both on-premises
environments and the AWS cloud, leveraging hybrid connectivity,
shared services, and common management tools for seamless
integration and operation.

AWS AI and Machine Learning:

51. What is Amazon SageMaker?


• Amazon SageMaker is a fully managed machine learning service that
enables data scientists and developers to build, train, and deploy
machine learning models at scale, simplifying the process of building
and deploying AI applications.
52. Explain AWS Rekognition.
• Amazon Rekognition is a deep learning-based image and video
analysis service that enables users to analyze and recognize objects,
scenes, and faces in images and videos, providing capabilities like facial
recognition, object detection, and content moderation.
53. What is AWS Comprehend?
• Amazon Comprehend is a natural language processing (NLP) service
that enables users to analyze and extract insights from unstructured
text data, including sentiment analysis, entity recognition, language
detection, and topic modeling.
54. Explain Amazon Lex.
• Amazon Lex is a service for building conversational interfaces
(chatbots) using natural language understanding (NLU) and speech
recognition, enabling developers to create interactive voice and text-
based conversational experiences.
55. How do you train and deploy machine learning models in AWS?
• Machine learning models in AWS can be trained and deployed using
services like Amazon SageMaker for end-to-end machine learning
workflows, including data preparation, model training, hyperparameter
tuning, and model deployment.

AWS IoT:

56. What is AWS IoT Core?


• AWS IoT Core is a managed cloud service that enables users to connect
and manage IoT devices securely at scale, providing features like device
provisioning, device shadowing, and MQTT message brokering.
57. Explain AWS IoT Greengrass.
• AWS IoT Greengrass is a software that extends AWS IoT Core to edge
devices, enabling local compute, messaging, data caching, and machine
learning inference capabilities for IoT devices in disconnected or
intermittently connected environments.
58. What is AWS IoT Device Management?
• AWS IoT Device Management is a service that enables users to
onboard, organize, monitor, and remotely manage IoT devices at scale,
providing features like over-the-air (OTA) updates, device provisioning,
and fleet indexing.
59. What is AWS IoT Analytics?
• AWS IoT Analytics is a service that enables users to collect, process, and
analyze IoT data at scale, providing features like data ingestion, data
enrichment, data storage, and data visualization for IoT applications
and analytics.
60. Explain the use cases for AWS IoT Events.
• AWS IoT Events is a service that enables users to detect and respond to
events from IoT sensors and applications in real-time, enabling use
cases like predictive maintenance, equipment monitoring, and anomaly
detection in industrial IoT environments.

AWS Governance and Compliance:

61. What is AWS Config?


• AWS Config is a service that enables users to assess, audit, and evaluate
the configuration of AWS resources over time, providing a detailed
view of resource relationships, configuration changes, and compliance
status.
62. How do you manage access control in AWS?
• Access control in AWS can be managed using AWS Identity and Access
Management (IAM) for users, groups, roles, and permissions, along
with resource policies, bucket policies, and access control lists (ACLs)
for specific AWS services like S3 and KMS.
63. Explain AWS Service Catalog.
• AWS Service Catalog is a service that enables organizations to create
and manage catalogs of IT services, applications, and infrastructure
resources that are approved for use on AWS, providing self-service
access and governance controls for users.
64. What is AWS Control Tower?
• AWS Control Tower is a service that provides a centralized, multi-
account environment for setting up and governing a secure, compliant,
and scalable AWS environment, automating the setup of AWS best
practices and guardrails.
65. How does AWS help with compliance requirements like GDPR or HIPAA?
• AWS offers a wide range of compliance certifications and industry-
specific standards, such as GDPR, HIPAA, ISO, SOC, PCI DSS, FedRAMP,
etc., ensuring that AWS services meet the stringent requirements for
data protection, privacy, and security.

AWS Solutions Architecture:

66. What is the AWS Well-Architected Framework?


• The AWS Well-Architected Framework provides best practices and
guidance for designing, deploying, and operating secure, high-
performing, resilient, and efficient applications and workloads on AWS.
67. Explain the concept of multi-tier architecture in AWS.
• Multi-tier architecture in AWS refers to the design of applications with
multiple layers or tiers (e.g., presentation, application, data) deployed
across different AWS services like EC2, RDS, S3, and CloudFront, to
improve scalability, reliability, and performance.
68. How do you design for high availability in AWS?
• Designing for high availability in AWS involves deploying redundant
resources across multiple availability zones (AZs), using Auto Scaling
groups, load balancing, and leveraging managed services with built-in
fault tolerance, to minimize downtime and ensure continuous
operation.
69. What are the considerations for designing a global-scale solution in
AWS?
• Design considerations include selecting the appropriate AWS regions
for data residency and compliance requirements, implementing geo-
replication and global traffic routing for low-latency access, and
ensuring data consistency and compliance with local regulations.
70. Explain the concept of serverless computing in AWS.
• Serverless computing in AWS refers to the ability to run code without
provisioning or managing servers, where the cloud provider
dynamically manages the allocation of resources and automatically
scales the application based on demand, using services like AWS
Lambda, API Gateway, and DynamoDB.

AWS Integration:

71. What is AWS Step Functions?


• AWS Step Functions is a serverless orchestration service that enables
users to coordinate multiple AWS services into serverless workflows
using visual workflows, providing features like state management, error
handling, and parallel execution.
72. Explain AWS SQS.
• Amazon Simple Queue Service (SQS) is a fully managed message
queuing service that enables decoupling and asynchronous
communication between distributed applications, allowing messages to
be stored and processed asynchronously by consumers.
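A producer/consumer sketch shows the decoupling (boto3-style request shapes; the queue URL is hypothetical and the API calls are commented out since they require credentials):

```python
import json

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical

# Producer side: the message body is arbitrary text, commonly JSON.
message = {
    "QueueUrl": QUEUE_URL,
    "MessageBody": json.dumps({"order_id": 42, "action": "ship"}),
}
# sqs.send_message(**message)

# Consumer side: long-poll, process, then delete each message (otherwise
# it reappears after the visibility timeout).
receive = {"QueueUrl": QUEUE_URL, "MaxNumberOfMessages": 10, "WaitTimeSeconds": 20}
# msgs = sqs.receive_message(**receive).get("Messages", [])
# for m in msgs:
#     ...process m["Body"]...
#     sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=m["ReceiptHandle"])
```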
73. What is AWS SNS?
• Amazon Simple Notification Service (SNS) is a fully managed
messaging service that enables users to send and receive messages or
notifications between distributed systems or applications, supporting
multiple delivery protocols and endpoints.
74. How do you integrate AWS services with on-premises systems?
• AWS offers several integration options for connecting cloud services
with on-premises systems, including AWS Direct Connect for dedicated
network connections, AWS VPN for secure VPN connections over the
internet, and AWS Storage Gateway for seamless integration between
on-premises environments and AWS cloud storage.
75. Explain the use cases for AWS API Gateway.
• AWS API Gateway enables users to create, publish, maintain, monitor,
and secure APIs at any scale, facilitating integration with backend
services, serverless functions, and HTTP endpoints, and enabling
features like authentication, authorization, throttling, and caching.

AWS Development:

76. What is AWS SDK?


• The AWS SDK (Software Development Kit) is a collection of libraries,
tools, and documentation for developers to build applications that
integrate with AWS services, providing language-specific APIs for
interacting with AWS resources programmatically.
77. How do you develop serverless applications in AWS?
• Serverless applications in AWS can be developed using services like
AWS Lambda for running code without provisioning or managing
servers, Amazon API Gateway for creating RESTful APIs, AWS
DynamoDB for NoSQL database storage, and AWS SAM (Serverless
Application Model) for defining serverless application resources.
78. Explain AWS CodeDeploy.
• AWS CodeDeploy is a deployment service that automates code
deployments to EC2 instances, on-premises servers, or Lambda
functions, enabling users to release new features and updates rapidly
and reliably.
79. What is AWS CodeBuild?
• AWS CodeBuild is a fully managed build service that compiles source
code, runs tests, and produces deployable artifacts, enabling
continuous integration and deployment (CI/CD) pipelines for AWS
applications and services.
80. How do you monitor and debug applications in AWS?
• Applications in AWS can be monitored and debugged using services
like Amazon CloudWatch for real-time monitoring, AWS X-Ray for
tracing and analyzing requests, AWS CloudTrail for auditing and
logging API activity, and AWS CloudWatch Logs for centralized log
management.

AWS Cost Management:

81. What is AWS Cost Explorer?


• AWS Cost Explorer is a tool that enables users to visualize, understand,
and manage AWS costs and usage over time, providing insights into
cost drivers, usage trends, and cost optimization opportunities.
82. How do you estimate costs for AWS services?
• Costs for AWS services can be estimated using the AWS Pricing
Calculator, which provides pricing information for various AWS services
and configurations based on usage metrics like instance type, storage
size, data transfer, and region.
83. Explain AWS Budgets.
• AWS Budgets is a service that enables users to set custom cost and
usage budgets for AWS services, providing alerts and notifications
when actual costs exceed predefined thresholds, allowing for proactive
cost management and optimization.
84. What is AWS Savings Plans?
• AWS Savings Plans is a pricing model that enables users to save money
on their AWS usage by committing to a consistent amount of usage
(e.g., compute instance hours) over a one- or three-year period,
offering significant discounts compared to on-demand pricing.
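The savings arithmetic is straightforward. The hourly rate and discount below are hypothetical placeholders, not actual AWS prices:

```python
# Compare 1 year of on-demand compute against a Savings Plan commitment
# at an assumed 30% discount. Rates are hypothetical, not real AWS prices.
on_demand_rate = 0.10          # $/hour, hypothetical instance rate
hours_per_year = 24 * 365
discount = 0.30                # assumed Savings Plan discount

on_demand_cost = on_demand_rate * hours_per_year      # $876.00
savings_plan_cost = on_demand_cost * (1 - discount)   # $613.20
saved = on_demand_cost - savings_plan_cost            # $262.80
```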
85. How do you optimize costs in AWS?
• Costs in AWS can be optimized by rightsizing resources to match
workload requirements, using reserved instances or savings plans for
predictable workloads, leveraging spot instances for flexible workloads,
implementing auto-scaling, and using cost management tools like AWS
Cost Explorer and AWS Trusted Advisor.

AWS Advanced Concepts:

86. What is AWS Lambda Layers?


• AWS Lambda Layers is a feature that enables users to centrally manage
code and dependencies that are shared across multiple Lambda
functions, reducing duplication and simplifying code management and
deployment.
87. Explain AWS CloudFormation StackSets.
• AWS CloudFormation StackSets is a service that enables users to
deploy and manage CloudFormation templates across multiple AWS
accounts and regions, allowing for centralized management and
governance of infrastructure deployments at scale.
88. What is AWS AppConfig?
• AWS AppConfig is a service that enables users to deploy application
configurations to applications hosted on AWS dynamically, allowing for
controlled and validated rollout of configuration changes without
requiring code deployments.
89. What is AWS Glue?
• AWS Glue is a fully managed extract, transform, and load (ETL) service
that enables users to prepare and transform data for analytics and
machine learning using serverless data integration capabilities and
built-in job orchestration.
90. Explain AWS Fargate.
• AWS Fargate is a serverless compute engine for containers that enables
users to run containers without managing the underlying infrastructure,
providing on-demand, scalable, and efficient compute resources for
containerized applications.

AWS Security:

91. What is AWS Key Management Service (KMS)?


• AWS Key Management Service (KMS) is a managed service that enables
users to create and control encryption keys for encrypting data at rest
and in transit in AWS services and applications.
92. How do you encrypt data in AWS?
• Data in AWS can be encrypted using services like AWS Key
Management Service (KMS) for managing encryption keys, AWS
Encryption SDK for client-side encryption, and AWS-managed
encryption for various AWS services like S3, EBS, RDS, and EFS.
93. Explain AWS Secrets Manager.
• AWS Secrets Manager is a service that enables users to securely store,
manage, and rotate secrets, such as API keys, passwords, and database
credentials, used by applications, services, and users.
94. What is AWS Shield Advanced?
• AWS Shield Advanced is an enhanced Distributed Denial of Service
(DDoS) protection service that provides advanced threat detection and
mitigation capabilities, dedicated DDoS response team support, and
cost protection for AWS resources.
95. How do you secure containers in AWS?
• Containers in AWS can be secured using services like AWS Fargate for
serverless container management, Amazon ECR for secure container
image storage, AWS IAM for access control, AWS Security Hub for
continuous security monitoring, and AWS AppMesh for service mesh
integration.

AWS Big Data and Analytics:

96. What is Amazon Redshift?


• Amazon Redshift is a fully managed data warehousing service that
enables users to analyze large datasets using SQL queries and business
intelligence tools, providing high performance, scalability, and cost-
effectiveness for analytical workloads.
97. Explain Amazon EMR.
• Amazon EMR (Elastic MapReduce) is a managed big data processing
service that enables users to run Apache Hadoop, Spark, HBase, Presto,
and other big data frameworks on scalable clusters of EC2 instances,
enabling data processing and analysis at scale.
98. What is Amazon Athena?
• Amazon Athena is an interactive query service that enables users to
analyze data stored in Amazon S3 using standard SQL queries, without
the need for complex ETL processes or data warehousing infrastructure.
99. How do you visualize data in AWS?
• Data in AWS can be visualized using services like Amazon QuickSight
for business intelligence and analytics, Amazon Kinesis Data Analytics
for real-time data processing and visualization, and third-party tools
like Tableau, Power BI, or Grafana.
100. Explain Amazon Forecast.
• Amazon Forecast is a fully managed service that uses machine learning
to generate accurate forecasts for time-series data, enabling users to
predict future trends and make data-driven decisions in various
industries like retail, finance, and supply chain management.

Basic AWS Interview Questions

1. Define and explain the three basic types of cloud services and the AWS
products that are built based on them.

The three basic types of cloud services are:

• Computing

• Storage

• Networking

Here are some of the AWS products that are built based on the three cloud service
types:

Computing - These include EC2, Elastic Beanstalk, Lambda, Auto Scaling, and
Lightsail.

Storage - These include S3, S3 Glacier, Elastic Block Store, and Elastic
File System.

Networking - These include VPC, Amazon CloudFront, and Route 53.

2. What is the relation between the Availability Zone and Region?

AWS regions are separate geographical areas, such as us-west-1 (Northern
California) and ap-south-1 (Mumbai). Availability zones are isolated
locations within a region, each with independent power and networking, and
you can replicate resources across them for fault tolerance.

3. What is auto-scaling?

Auto-scaling is a function that allows you to provision and launch new instances
whenever there is a demand. It allows you to automatically increase or decrease
resource capacity in relation to the demand.

4. What is geo-targeting in CloudFront?

Geo-targeting is a concept where businesses can show personalized content to
their audience based on their geographic location without changing the URL.
This helps you create customized content for the audience of a specific
geographical area, keeping their needs at the forefront.

5. What are the steps involved in a CloudFormation solution?

Here are the steps involved in a CloudFormation solution:


1. Create or use an existing CloudFormation template using JSON or YAML format.

2. Save the code in an S3 bucket, which serves as a repository for the code.

3. Use AWS CloudFormation to call the bucket and create a stack on your template.

4. CloudFormation reads the file and understands the services that are called, their
order, the relationship between the services, and provisions the services one after
the other.
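The steps above can be sketched with boto3-style parameters. The bucket URL and stack name are hypothetical, and the API calls are commented out because they require AWS credentials:

```python
# Step 2: the template has been saved to S3 (hypothetical bucket/key).
template_url = "https://s3.amazonaws.com/example-templates/app-stack.yaml"

# Step 3: ask CloudFormation to create a stack from that template.
create_params = {
    "StackName": "app-stack",      # hypothetical stack name
    "TemplateURL": template_url,
}
# cfn = boto3.client("cloudformation")
# cfn.create_stack(**create_params)
# Step 4: CloudFormation parses the template and provisions the declared
# resources in dependency order.
```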

6. How do you upgrade or downgrade a system with near-zero downtime?

You can upgrade or downgrade a system with near-zero downtime using the following
steps of migration:

• Open EC2 console

• Choose Operating System AMI

• Launch an instance with the new instance type

• Install all the updates

• Install applications

• Test the instance to see if it’s working


• If it is working, deploy the new instance and replace the older instance

• Once traffic is shifted to the new instance, terminate the old one; the
system has been upgraded or downgraded with near-zero downtime.

8. Is there any alternative tool to log into the cloud environment other
than the console?

The tools that can help you log into AWS resources are:

• Putty

• AWS CLI for Linux

• AWS CLI for Windows

• AWS CLI for Windows CMD

• AWS SDK

• Eclipse

9. What services can be used to create a centralized logging solution?

You can collect logs with Amazon CloudWatch Logs and store them in Amazon
S3, then visualize them with Amazon OpenSearch Service (formerly Amazon
Elasticsearch Service). Amazon Kinesis Data Firehose can move the data from
Amazon S3 into OpenSearch.
10. What are the native AWS Security logging
capabilities?

Most of the AWS services have their logging options. Also, some of them have an
account level logging, like in AWS CloudTrail, AWS Config, and others. Let’s take a
look at two services in specific:

AWS CloudTrail

This is a service that provides a history of the AWS API calls for every account. It lets
you perform security analysis, resource change tracking, and compliance auditing of
your AWS environment as well. The best part about this service is that it enables you
to configure it to send notifications via AWS SNS when new logs are delivered.

AWS Config

This helps you understand the configuration changes that happen in your
environment. This service provides an AWS inventory that includes configuration
history, configuration change notification, and relationships between AWS
resources. It can also be configured to send information via AWS SNS when new
logs are delivered.

11. What is a DDoS attack, and what services can minimize them?

A DDoS (Distributed Denial of Service) attack is a cyber-attack in which
the perpetrator floods a website or service with traffic from many sources
so that legitimate users cannot access it. The native tools that can help
you mitigate DDoS attacks on your AWS services are:

• AWS Shield

• AWS WAF

• Amazon Route53

• Amazon CloudFront

• ELB

• VPC

12. You are trying to provide a service in a particular region, but you do not see the service in that region. Why is this happening, and how do you fix it?
Not all AWS services are available in all regions. When Amazon initially launches a new service, it isn't immediately published in every region; they start small and then slowly expand to other regions. So, if you don't see a specific service in your region, chances are it hasn't been published there yet. However, if you need a service that is not available, you can switch to the nearest region that provides it.

13. How do you set up a system to monitor website metrics in real-time in AWS?

Amazon CloudWatch helps you to monitor the application status of various AWS
services and custom events. It helps you to monitor:

• State changes in Amazon EC2

• Auto-scaling lifecycle events

• Scheduled events

• AWS API calls

• Console sign-in events
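For example, a custom website metric can be published and alarmed on via the CLI (the namespace, metric name, and threshold are made up for illustration):

```shell
# Publish a custom metric data point (e.g., a measured page load time)
aws cloudwatch put-metric-data --namespace "MyWebsite" \
    --metric-name PageLoadTime --value 0.42 --unit Seconds

# Alarm when the average page load time breaches 2 seconds
aws cloudwatch put-metric-alarm --alarm-name slow-pages \
    --namespace "MyWebsite" --metric-name PageLoadTime \
    --statistic Average --period 300 --evaluation-periods 2 \
    --threshold 2 --comparison-operator GreaterThanThreshold
```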


14. What are the different types of virtualization in AWS,
and what are the differences between them?

In AWS, HVM (Hardware Virtual Machine), PV (Paravirtual), and PV on HVM (Paravirtual on Hardware Virtual Machine) refer to different virtualization types or modes available for Amazon EC2 instances. Each mode offers different levels of performance, features, and compatibility.

The three major types of virtualization in AWS are:

• Hardware Virtual Machine (HVM)

HVM is a full virtualization mode where the guest operating system runs on top of
a hypervisor without modification. The hypervisor presents a complete set of virtual
hardware to the guest operating system, allowing it to run unmodified. It is a fully
virtualized hardware, where all the virtual machines act separate from each other.
These virtual machines boot by executing a master boot record in the root block
device of your image. Suitable for running a wide range of operating systems,
including Windows and newer versions of Linux. HVM instances are recommended
for most use cases due to their broad compatibility and better performance
compared to PV instances.

• Paravirtualization (PV)

PV is a lightweight virtualization mode where the guest operating system is aware that it is running in a virtualized environment. The hypervisor provides a paravirtualized interface to the guest operating system, allowing it to communicate more efficiently with the underlying hardware. Paravirtual-GRUB (PV-GRUB) is the bootloader that boots PV AMIs; the PV-GRUB chain loads the kernel specified in the menu. PV offers lower overhead and improved performance compared to HVM in some scenarios, especially for I/O-bound workloads, and is suitable for older Linux distributions that lack support for HVM or for workloads that require optimized I/O performance.

• Paravirtualization on HVM

PV on HVM combines the advantages of both HVM and PV modes: it allows a paravirtualized guest operating system to take advantage of the storage and network I/O available through the host while running on top of a hardware virtual machine. This provides the flexibility and compatibility of HVM with the improved I/O performance of PV, offering a balance between compatibility and performance that makes it suitable for a wide range of workloads.

15. Name some of the AWS services that are not region-
specific

AWS services that are not region-specific are:

• IAM

• Route 53

• Web Application Firewall

• CloudFront

16. What are the differences between NAT Gateways and NAT Instances?

NAT (Network Address Translation) gateways and NAT instances are both used in AWS
to enable instances in private subnets to initiate outbound traffic to the internet while
preventing inbound traffic from directly reaching those instances.

A NAT Gateway is an AWS-managed service, while a NAT instance is an EC2 instance configured to perform NAT and managed by the customer.

A NAT Gateway is maintained (patching and software/hardware updates) and managed by AWS, reducing administrative overhead for users, while a NAT instance has to be maintained by the customer and carries high administrative overhead.

NAT Gateway automatically scales up to meet the demand of outbound traffic from
instances in private subnets providing high availability across multiple availability
zones (AZs) within a region without any additional configuration. Users need to
manually scale NAT instances based on traffic requirements and scalability is limited
by the instance type and size.

NAT Gateway is highly available by default, with redundancy across multiple AZs, and provides a service level agreement (SLA) for availability. Achieving high availability with NAT instances requires deploying and managing multiple instances across multiple AZs with manual failover configuration and implementation.

NAT Gateway offers better performance compared to NAT instances, especially for
high-throughput workloads designed to handle large volumes of outbound traffic
efficiently. NAT Instance performance depends on the instance type and size chosen
and may not offer the same level of performance and scalability as NAT Gateways.
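Provisioning a NAT Gateway reflects this managed model: a few CLI calls and a route, with no instance to administer (the subnet, allocation, and route-table IDs below are placeholders):

```shell
# Allocate an Elastic IP for the gateway (note the AllocationId in the output)
aws ec2 allocate-address --domain vpc

# Create the NAT Gateway in a public subnet
aws ec2 create-nat-gateway --subnet-id subnet-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0

# Point the private subnet's default route at the gateway
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0
```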

17. What is CloudWatch?

Amazon CloudWatch has the following features:

• Depending on multiple metrics and policies set by the user, it participates in triggering alarms.

• Helps in monitoring AWS environments by collecting and reporting metrics for different AWS services:

EC2 → CPU utilization, Disk I/O, Network traffic, Memory utilization, Disk space utilization.

RDS → CPU utilization, Database connections, Free Storage space, Read and Write
IOPS, Database Throughput, Database Engine Metrics (buffer cache hit ratio,
transaction throughput).

SQS → # of messages, approximate age of oldest message, # of messages sent and received.

SNS → messages published, notifications delivered, notifications failed.

S3 → bucket size, # of objects, Data transfer metrics.


ELB → # request, Healthy host count, Latency, # HTTP codes, Backend connection
errors.

18. What is an Elastic Transcoder?

To support multiple devices with various resolutions, like laptops, tablets, and smartphones, we need to change the resolution and format of the video. This can be done easily by an AWS service called Elastic Transcoder, which efficiently converts media files from one format to another, enabling businesses to deliver high-quality video content to viewers across the devices mentioned above.

Features :

AWS Elastic Transcoder offers a simple and intuitive web interface for configuring
transcoding jobs, defining presets, and managing transcoding pipelines

Elastic Transcoder automatically scales to handle large volumes of transcoding jobs, processing multiple files concurrently to meet fluctuating demand.

Supports a variety of input media formats, including popular formats such as MP4,
MOV, FLV, and AVI.

Offers flexibility in choosing output formats and codecs, including H.264, H.265, VP8,
VP9, AAC, and MP3, among others

Provides a range of predefined presets for common transcoding tasks, allowing users
to easily configure output settings for various devices and platforms such as
smartphones, tablets, and web browsers

Enables users to create custom presets with specific configurations for resolution,
bitrate, codec, and other parameters, tailored to their unique requirements

Allows users to organize transcoding jobs into pipelines, enabling efficient management and customization of transcoding workflows. Supports defining input and output buckets, notifications, and permissions for each pipeline.

Seamlessly integrates with other AWS services, such as Amazon S3 for storing input and output media files, Amazon CloudFront for content delivery, and Amazon SNS for notifications.

Pay-as-you-go: pricing is based on the duration and resolution of the output content, providing cost-effective transcoding solutions for businesses of all sizes.

Ensures data security and compliance with AWS security best practices, including
encryption of data in transit and at rest, fine-grained access controls, and compliance
with industry standards and regulations.

AWS Interview Questions for Intermediate and Experienced

19. With specified private IP addresses, can an Amazon Elastic Compute Cloud (EC2) instance be launched? If so, which Amazon service makes it possible?

Yes, this is made possible by Amazon VPC (Virtual Private Cloud). Within a VPC, users can define their own IP address range (CIDR block) and allocate private IP addresses to EC2 instances launched within that VPC. When launching an EC2 instance, users can specify the desired private IP address for the instance, either manually or through automated methods like AWS CloudFormation or the AWS Management Console.

Steps:

Users can create a VPC and define its IP address range as a CIDR block.

Within this VPC, users can create subnets, which are segments of the VPC's IP address range, each associated with a specific availability zone within an AWS region.

When launching an EC2 instance within a subnet, users can specify the desired private
IP address for the instance which must be within the range of IP addresses allocated
to the subnet
Each EC2 instance launched within a subnet is associated with a network interface (ENI)
that contains its private IP address.

Users can assign multiple private IP addresses to an instance by attaching additional network interfaces.
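A hedged sketch of launching an instance with a chosen private IP (the AMI, subnet ID, and address are illustrative; the address must fall inside the subnet's CIDR and not be one of the addresses AWS reserves in each subnet):

```shell
# Launch into a specific subnet with an explicitly chosen private IP
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --subnet-id subnet-0123456789abcdef0 \
    --private-ip-address 10.0.1.25
```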

20. Define Amazon EC2 regions and availability zones?

Regions are geographically separate areas, and each region contains one or more availability zones, which are isolated locations within that region. As a result, a failure in one zone has no effect on EC2 instances in other zones. This configuration also helps to reduce latency and costs and ensures high availability and fault tolerance.

21. Explain Amazon EC2 root device volume?

The root device volume stores the image used to boot an EC2 instance; it is created when a new EC2 instance is launched from an Amazon AMI. The root device volume is backed by either EBS or an instance store. In general, root device data on an Amazon EBS-backed volume can persist independently of the lifespan of the EC2 instance.

22. Mention the different types of instances in Amazon EC2 and explain their features.

1. General Purpose Instances:

They are used to compute a range of workloads and aid in the allocation of
processing, memory, and networking resources.

t3.micro, t3.small, t3.medium, t3.large → general-purpose workloads that require a balance of performance and cost-efficiency → offer a baseline level of CPU performance with the ability to burst to higher levels of performance

t2.micro, t2.small, t2.medium, t2.large → a wide range of general-purpose applications, test and development environments, and small databases → a baseline level of CPU performance with the ability to burst to higher levels of performance when needed
t4g.micro, t4g.small, t4g.medium → wide range of general-purpose workloads such
as web servers, development environments, and small databases → powered by the
AWS Graviton2 processors and are designed for a balance of compute, memory,
and network resources.

m6g.large, m6gd.xlarge, m6g.4xlarge → wide variety of general-purpose workloads → latest generation of general-purpose instances, powered by AWS Graviton2 processors, offering a balance of compute, memory, and network resources

m5.large, m5.xlarge, m5.4xlarge → general-purpose workloads such as web servers, application servers, and small to medium-sized databases → offer a balance of compute, memory, and network resources and provide consistent performance for a variety of applications

m4.large, m4.xlarge, m4.4xlarge → general-purpose workloads that require a balance of performance, memory, and network resources → offer a combination of high CPU performance and moderate memory capacity for a wide range of applications

2. Compute Optimized Instances:

These are ideal for compute-intensive applications. They can handle batch
processing workloads, high-performance web servers, machine learning inference,
and various other tasks.

c6g.large, c6gd.xlarge, c6g.4xlarge → compute-intensive workloads such as high-performance web servers, batch processing, and gaming applications → high ratio of vCPUs to memory

c5.large, c5.4xlarge, c5.9xlarge → applications such as scientific computing, data analytics, and simulation → a balanced ratio of vCPUs to memory

c4.large, c4.8xlarge, c4.4xlarge → workloads needing low-latency communication between instances → powered by high-frequency Intel Xeon processors offering dedicated CPU resources and enhanced networking capabilities

3. Memory Optimized:

They process workloads that handle massive datasets in memory and deliver them
quickly.
r6i.large, r6g.xlarge, r6i.16xlarge → memory-intensive workloads such as in-memory databases, real-time analytics, and high-performance computing (HPC) applications → r6g instances are powered by AWS Graviton2 processors and r6i by Intel Xeon processors, both offering a high ratio of memory to vCPUs

x1e.xlarge, x1e.32xlarge → large-scale, in-memory applications such as SAP HANA, Apache Spark, and other data-intensive workloads → large amounts of memory and high-speed Non-Volatile Memory Express (NVMe) storage for low-latency access to data

4. Accelerated Computing:

It aids in the execution of floating-point number calculations, data pattern matching, and graphics processing. These functions are carried out using hardware accelerators.

p4d.xlarge, p4d.24xlarge, p4.16xlarge → machine learning, deep learning, and high-performance computing (HPC) workloads that require GPU acceleration → powered by powerful NVIDIA Tesla GPUs with Tensor Core technology for accelerated training and inference tasks

g4dn.xlarge, g4dn.12xlarge, g4dn.metal → graphics-intensive applications such as gaming, rendering, and virtual desktops → powered by NVIDIA T4 Tensor Core GPUs for accelerated graphics rendering and video transcoding

f1.2xlarge, f1.16xlarge → hardware acceleration of custom FPGA (Field-Programmable Gate Array) applications such as genomics processing, financial modeling, and data compression

5. Storage Optimized:

They handle tasks that require sequential read and write access to big data sets on
local storage.

i3.large, i3en.xlarge, i3.16xlarge → high-performance, storage-intensive NoSQL databases, data warehousing, and data processing applications → high-speed, NVMe-based SSDs for low-latency access to data and a balance of compute and storage resources
d2.xlarge, d2.8xlarge → mass-scale, data-intensive workloads such as distributed file systems, data warehousing, and big data analytics → high-capacity spinning magnetic disks (HDDs) for cost-effective storage of large datasets

h1.2xlarge, h1.4xlarge, h1.8xlarge → high-throughput, sequential read/write workloads such as MapReduce, distributed file systems, and log processing

23. Will your standby RDS be launched in the same availability zone as your primary?

No, standby instances are launched in different availability zones than the
primary, resulting in physically separate infrastructures. This is because the
entire purpose of standby instances is to prevent infrastructure failure. As a
result, if the primary instance fails, the backup instance will assist in recovering
all of the data.

Advanced AWS Interview Questions and Answers

24. What is the difference between a Spot Instance, an On-demand Instance, and a Reserved Instance?

Spot instances are unused EC2 instances that users can use at a reduced cost.

When you use on-demand instances, you must pay for computing resources
without making long-term obligations.

Reserved instances, on the other hand, allow you to specify attributes such as
instance type, platform, tenancy, region, and availability zone. Reserved
instances offer significant reductions and capacity reservations when
instances in certain availability zones are used.
25. How would you address a situation in which the relational database engine frequently collapses when traffic to your RDS instances increases, given that the RDS instance replica is not promoted as the master instance?

A larger RDS instance type is required for handling significant quantities of traffic, as well as producing manual or automated snapshots to recover data if the RDS instance fails.

26. What do you understand by 'changing' in Amazon EC2?

To make limit administration easier for customers, Amazon EC2 now offers the
option to switch from the current 'instance count-based limitations' to the new
'vCPU Based restrictions.' As a result, when launching a combination of
instance types based on demand, utilization is measured in terms of the
number of vCPUs.

27. Define Snapshots in Amazon Lightsail?

The point-in-time backups of instances, block storage disks, and databases in Amazon Lightsail are known as snapshots. They can be created manually or automatically at any moment, and your resources can always be restored from them. The restored resources will perform the same tasks as the originals from which the snapshots were made.

Lightsail snapshots use an incremental backup mechanism, which means that only the changes made since the last snapshot are stored. This reduces storage costs and minimizes the time required to create snapshots.

Snapshots are stored in Amazon Simple Storage Service (Amazon S3) and are
retained until you explicitly delete them. You can keep multiple snapshots for
each instance or volume, allowing you to maintain a history of backups over
time.

AWS Scenario-based Questions

28. On an EC2 instance, an application of yours is active. Once the CPU usage on your instance hits 80%, you must reduce the load on it. What strategy do you use to complete the task?

Create a CloudWatch alarm that monitors CPU utilization on the EC2 instance.
Configure the alarm to trigger when the CPU utilization exceeds 80%.

Set up an Auto Scaling group for your EC2 instance(s) and configure scaling
policies based on the CloudWatch alarm.

Create a scale-out policy that adds more instances to the Auto Scaling group
when the CPU utilization exceeds 80%. This policy will help handle increased
load by distributing it across multiple instances.

Create a scale-in policy that removes instances from the Auto Scaling group
when CPU utilization drops below a certain threshold.

Consider using an Elastic Load Balancer (ELB) to distribute incoming traffic across the multiple EC2 instances evenly.

Configure the Auto Scaling group to register instances with the ELB, ensuring
that new instances launched in response to increased load automatically receive
traffic.
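The steps above can be sketched with the AWS CLI (the group and alarm names are illustrative; the alarm action must reference the ARN that put-scaling-policy prints):

```shell
# Scale-out policy: add one instance when triggered (prints the policy ARN)
aws autoscaling put-scaling-policy --auto-scaling-group-name web-asg \
    --policy-name scale-out --adjustment-type ChangeInCapacity \
    --scaling-adjustment 1

# Alarm on average CPU > 80% for two consecutive 5-minute periods
aws cloudwatch put-metric-alarm --alarm-name cpu-high \
    --namespace AWS/EC2 --metric-name CPUUtilization \
    --dimensions Name=AutoScalingGroupName,Value=web-asg \
    --statistic Average --period 300 --evaluation-periods 2 \
    --threshold 80 --comparison-operator GreaterThanThreshold \
    --alarm-actions <policy-ARN-from-previous-command>
```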

29. Multiple Linux Amazon EC2 instances running a web application for a firm are being used, and data is being stored on Amazon EBS volumes. The business is searching for a way to provide storage that complies with atomicity, consistency, isolation, and durability (ACID) while also increasing the application's resilience in the event of a breakdown. What steps should a solutions architect take to fulfill these demands?

Migrate your application's data to Amazon Relational Database Service (RDS), as it provides ACID compliance out of the box and handles database management tasks such as backups, software patching, and scaling, reducing administrative overhead.

Configure your RDS instance to use Multi-AZ deployment, which automatically replicates data to a standby instance in a different Availability Zone (AZ) for failover in case of an infrastructure failure, thus ensuring high availability and durability of your database while maintaining ACID compliance.

Create read replicas of your primary RDS instance to offload read traffic and
distribute the load across multiple database instances.

Read replicas enhance application resilience by providing additional copies of the data that can be used for failover in case of primary instance failure.

Take regular snapshots of your Amazon EBS volumes storing application data
to create backups. EBS snapshots are point-in-time copies of volumes and
ensure data durability and integrity.

Schedule automated snapshots using Amazon Data Lifecycle Manager (DLM) to streamline the backup process and meet compliance requirements.

Use Elastic Load Balancing (ELB) to distribute incoming traffic across multiple
EC2 instances running your web application.

ELB automatically scales with traffic demands and provides fault tolerance by
rerouting traffic to healthy instances, improving application resilience.
Configure Auto Scaling groups for your EC2 instances to automatically adjust
capacity based on demand. Auto Scaling helps maintain application availability
and performance during traffic spikes and infrastructure failures

Implement robust error handling mechanisms in your application to gracefully handle database errors and failures.

Set up comprehensive monitoring using Amazon CloudWatch to monitor performance metrics, alarms, and logs, allowing you to quickly detect and respond to issues affecting application availability and data integrity.

If your web application requires shared file storage that can be accessed
concurrently by multiple EC2 instances, EFS can be a suitable solution. EFS
provides a scalable and fully managed file system that supports NFSv4
protocol, allowing multiple EC2 instances to access the same file system
simultaneously

If your application needs to share data across multiple EC2 instances, such as
configuration files, static assets, or user uploads, EFS can simplify data sharing
and synchronization between instances. This can be particularly useful in
distributed or microservices architectures

EFS is designed for high availability and durability, with data automatically
replicated across multiple Availability Zones within a region. By using EFS, you
can improve the resilience of your application's file storage layer and ensure
data durability in case of AZ-level failures

EFS scales automatically to accommodate growing data volumes and concurrent access from multiple EC2 instances. It eliminates the need to provision and manage storage capacity, making it easy to scale your application's storage infrastructure as needed.
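Mounting a shared EFS file system on each instance is typically a one-liner with the EFS mount helper (the file-system ID below is a placeholder):

```shell
# Install the EFS mount helper (Amazon Linux)
sudo yum install -y amazon-efs-utils

# Mount the shared file system; every instance sees the same files
sudo mkdir -p /mnt/efs
sudo mount -t efs fs-0123456789abcdef0:/ /mnt/efs
```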

30. Your business prefers to use its email address and domain to send and receive compliance emails. What service do you recommend to implement it easily and budget-friendly?
This can be accomplished with Amazon Simple Email Service (Amazon SES), a cloud-based email service that lets you send and receive email using your own verified domain, including bulk email to customers, swiftly and at minimal cost.

33. How many S3 buckets can be created?

By default, up to 100 S3 buckets can be created per AWS account; this limit can be raised by requesting a service quota increase.

34. What is the maximum limit of elastic IPs anyone can produce?

By default, a maximum of five Elastic IP addresses can be allocated per region per AWS account.

35. What is Amazon EC2?

EC2 is short for Elastic Compute Cloud, and it provides scalable computing
capacity. Using Amazon EC2 eliminates the need to invest in hardware, leading
to faster development and deployment of applications. You can use Amazon
EC2 to launch as many or as few virtual servers as needed, configure security
and networking, and manage storage. It can scale up or down to handle
changes in requirements, reducing the need to forecast traffic. EC2 provides
virtual computing environments called “instances.”

36. What Are Some of the Security Best Practices for Amazon EC2?

Implement security groups to control inbound and outbound traffic to your EC2 instances. Restrict access to only necessary ports and protocols, and regularly review and update security group rules.
Network Access Control Lists provide an additional layer of security by
controlling traffic at the subnet level. Use NACLs to filter traffic entering and
leaving subnets associated with your EC2 instances.

Use AWS Identity and Access Management (IAM) to manage access to AWS
resources securely. Assign IAM roles to EC2 instances to grant permissions for
accessing other AWS services without the need for long-term credentials.

Secure remote access to your EC2 instances by limiting SSH (for Linux
instances) and RDP (for Windows instances) access to trusted IP addresses
only.

Disable password-based authentication and use SSH keys or Windows passwords stored in AWS Systems Manager Parameter Store.

Keep your EC2 instances up to date by regularly applying security patches and
updates to the operating system, applications, and software installed on the
instances.

Utilize AWS Systems Manager Patch Manager or other automated patch management solutions.

Encrypt sensitive data stored on EBS volumes using AWS Key Management
Service (KMS) encryption. Enable encryption in transit by using SSL/TLS for
web traffic and implementing VPN or AWS Direct Connect for secure network
communication through outside user systems or premise data-centers.

Enable AWS CloudTrail to log API activity and AWS Config to track resource
configuration changes.

Utilize Amazon CloudWatch to monitor EC2 instance metrics, set up alarms for
security events, and centralize logs for analysis using services like Amazon
CloudWatch Logs

Enforce MFA for accessing the AWS Management Console and sensitive APIs
to add an extra layer of security.
Require IAM users and roles to authenticate using a combination of password
and MFA device

Use separate VPCs, subnets, and security groups to isolate different tiers of
your application and restrict communication between components based on
the principle of least privilege

Conduct regular security audits and assessments of your EC2 instances and
associated resources to identify vulnerabilities, misconfigurations, and
potential security risks. Utilize AWS Trusted Advisor and third-party security
tools for comprehensive assessments.
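As an example of the first practice, a locked-down security group might be created like this (the VPC/group IDs and the trusted CIDR are placeholders):

```shell
# Web tier security group
aws ec2 create-security-group --group-name web-sg \
    --description "web tier" --vpc-id vpc-0123456789abcdef0

# Allow HTTPS from anywhere, but SSH only from a trusted office range
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.0/24
```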

37. Can S3 Be Used with EC2 Instances, and If Yes, How?

Store data in Amazon S3 buckets and access it from your EC2 instances. S3
buckets act as storage containers for objects, such as files, documents,
images, and videos.

Upload data to S3 buckets directly from your EC2 instances using AWS SDKs,
AWS Command Line Interface (CLI), or AWS Management Console. You can
also use third-party tools or applications that support S3 integration

Host static websites or web applications on Amazon S3 by storing HTML, CSS, JavaScript, and other web assets in S3 buckets.

Configure S3 buckets for static website hosting and point domain names to S3
endpoints using Amazon Route 53 or other DNS services

Transfer data between EC2 instances and S3 buckets using HTTP/HTTPS requests over the internet or via the AWS Direct Connect service for dedicated network connectivity.

Use the AWS Transfer Family service to enable secure file transfers over SFTP,
FTPS, and FTP protocols between EC2 instances and S3 buckets.
Use Amazon S3 for backup and disaster recovery by storing snapshots, images,
database backups, and application data in S3 buckets.
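From an instance (ideally with an IAM role attached, so no long-term keys are stored), the CLI makes this straightforward (the bucket name is illustrative):

```shell
# Create a bucket, copy a backup into it, and mirror a static-assets directory
aws s3 mb s3://my-example-bucket
aws s3 cp /var/backups/db.dump s3://my-example-bucket/backups/
aws s3 sync /var/www/static s3://my-example-bucket/static
```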

38. What is the difference between stopping and terminating an EC2 instance?

While you may think that both stopping and terminating are the same, there is a difference. When you stop an EC2 instance, it performs a normal shutdown and moves to a stopped state; its attached EBS volumes remain, and it can be started again later. When you terminate an instance, it moves to a terminated state, and any EBS volumes with the DeleteOnTermination flag set (the root volume, by default) are deleted and can never be recovered.

39. What are the different types of EC2 instances based on their costs?

On-demand → short-term, unpredictable workloads, or for users who want to avoid upfront investments and commitments.

Reserved → long-term, steady-state or predictable workloads → offers All Upfront, Partial Upfront, and No Upfront payment options → substantial savings over On-demand in the long term.

Spot → applications with flexible start and end times, fault-tolerant workloads, or batch processing jobs that can be interrupted or rescheduled; unsuitable for mission-critical or time-sensitive operations → cheaper than On-demand, since you are bidding for unused EC2 capacity.

Dedicated host → applications having compliance requirements, licensing restrictions, or workloads that require consistent performance and isolation from other tenants → priced on a per-hour basis and can be purchased On-Demand or reserved for a specific term.

Dedicated instance → workloads that require compliance with specific regulatory requirements or where you need to meet contractual obligations regarding instance isolation → Dedicated Instances run on hardware that is dedicated to a single AWS account but share physical hosts with other instances from the same account → priced at the same rate as On-Demand instances but offer improved isolation and compliance benefits.
40. How do you set up SSH agent forwarding so that you
do not have to copy the key every time you log in?

Generate SSH Key Pair:

If you haven't already, generate an SSH key pair on your local machine using
the ssh-keygen command. This command creates a public and private key pair,
typically stored in ~/.ssh/id_rsa for the private key and ~/.ssh/id_rsa.pub for
the public key.

Add SSH key to Agent:

Add your private SSH key to the SSH agent running on your local machine using the ssh-add command, which loads the private key into memory and manages it for you:

ssh-add ~/.ssh/id_rsa

Enable SSH agent forwarding:

Edit your SSH client configuration file (usually ~/.ssh/config) and add the
following lines to enable SSH agent forwarding:

Host *
  ForwardAgent yes

Connect to EC2 instance:

Now, when you SSH into your EC2 instance using the ssh command, SSH agent
forwarding will automatically forward your local SSH agent to the remote EC2
instance.

ssh -i path/to/your/private/key.pem ec2-user@your-ec2-instance-ip


Once connected to the EC2 instance, you can verify that SSH agent forwarding is working by attempting to SSH into another server or Git repository that requires authentication with the same SSH key. You should be able to connect without providing the SSH key again.
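Equivalently, forwarding can be enabled per-session with the -A flag instead of editing ~/.ssh/config (the host names below are placeholders):

```shell
# Load the key once on your workstation
ssh-add ~/.ssh/id_rsa

# -A forwards the local agent just for this session
ssh -A ec2-user@bastion.example.com

# From the bastion, hop onward using the forwarded agent -- no key copied
ssh ec2-user@10.0.2.15
```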

41. What are Solaris and AIX operating systems? Are they available with AWS?

Solaris is an operating system that uses the SPARC processor architecture, which is not currently supported in the public cloud.

AIX is an operating system that runs only on IBM Power CPUs and not on Intel, which means that you cannot create AIX instances in EC2.

Check the AWS Marketplace for third-party vendors who might offer Solaris or
AIX AMIs that you can run on EC2 instances. Some vendors may provide pre-
configured images or virtual appliances for these operating systems

Consider using AWS migration services, such as AWS Server Migration Service
(SMS) or AWS Database Migration Service (DMS), to migrate your existing
Solaris or AIX workloads to AWS-supported operating systems

Containerize your applications using Docker containers or virtualize them using VMware on AWS, allowing you to run related workloads on EC2 as part of a containerized or virtualized environment.

Implement hybrid cloud solutions where you run Solaris or AIX workloads on-
premises or in a colocation facility while leveraging AWS services for other
aspects of your infrastructure. Use AWS Direct Connect or VPN to establish
secure connectivity between your on-premises environment and AWS.
42. How do you configure CloudWatch to recover an
EC2 instance?

Here’s how you can configure it:

• Create a CloudWatch alarm on the instance’s StatusCheckFailed_System metric

• Attach the EC2 recover action (arn:aws:automate:<region>:ec2:recover) as the alarm action, so the instance is automatically recovered onto healthy hardware when the system status check fails

• Test the configuration

• Monitor and fine-tune
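A hedged sketch of such a recovery alarm (the instance ID and region are placeholders):

```shell
# Recover the instance when the system status check fails twice in a row
aws cloudwatch put-metric-alarm --alarm-name auto-recover-web01 \
    --namespace AWS/EC2 --metric-name StatusCheckFailed_System \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Maximum --period 60 --evaluation-periods 2 \
    --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:automate:us-east-1:ec2:recover
```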

43. What are the common types of AMI designs?


Base AMIs contain the minimal set of software and configurations required to
launch an instance. They typically include the operating system and essential
utilities but may lack application-specific software or custom configurations.

Golden AMIs, also known as standard or template AMIs, are fully configured
and customized images tailored for specific use cases or applications. They
include the operating system, required software packages, patches, updates,
and configurations pre-installed and pre-configured, and serve as a starting
point for deploying new instances with consistent configurations across
environments. Golden AMIs are designed to be generic and reusable across
multiple applications or workloads within an organization and are often
maintained and updated centrally by IT or DevOps teams.

Application AMIs are specialized images that include the operating system,
middleware, runtime dependencies, and application code packaged together.
They are tailored for specific applications or workloads, such as web servers,
databases, or application servers, and are often customized for performance
optimization, security hardening, and scalability requirements. An Application
AMI may be derived from a Golden AMI and is typically created and maintained
by developer or application teams to streamline the deployment process.

Custom AMIs are created by users based on their unique requirements and
configurations. They can be derived from existing base, golden, or application
AMIs and further customized to meet specific needs. Custom AMIs allow users
to incorporate proprietary software, custom scripts, and configurations into the
image, ensuring consistency and repeatability in deployments.

Community AMIs are publicly available images shared by AWS users or third-
party vendors in the AWS Marketplace. They provide a wide range of pre-
configured images for various operating systems, applications, and use cases.
Community AMIs can be used as-is or customized further to fit specific
requirements.

Machine Learning AMIs are specialized images designed for running machine
learning frameworks and libraries, such as TensorFlow, PyTorch, or Apache
MXNet. They include pre-installed machine learning tools, libraries, and sample
code for developing and deploying machine learning models.

Security-hardened AMIs are images that have undergone rigorous security
testing and compliance checks to meet industry standards and regulatory
requirements. They include security enhancements, patches, and
configurations to mitigate common security threats and vulnerabilities.

Multi-tier AMIs consist of multiple interconnected instances or layers, such as
web servers, application servers, and database servers, pre-configured and
integrated into a single image. They provide a complete stack for deploying
multi-tier applications with minimal setup effort.

44. What are Key-Pairs in AWS?

Key pairs are the login credentials for virtual machines, used to prove our
identity when connecting to Amazon EC2 instances. A key pair is made up of a
private key and a public key, which together let us connect to the instances.

Generate an SSH key pair on your local machine using the ssh-keygen
command. This command creates a public and private key pair, typically stored
in ~/.ssh/id_rsa for the private key and ~/.ssh/id_rsa.pub for the public key
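For example, a sketch that generates a fresh key pair non-interactively (the path, key type, and comment are illustrative):

```shell
# Generate a demo key pair into a temporary location (no passphrase)
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t ed25519 -f /tmp/demo_key -N "" -C "demo@example.com"

# The private key stays on your machine; the public half is what you
# would register with AWS (e.g. the EC2 console's "Import key pair")
cat /tmp/demo_key.pub
```

AWS can also generate the pair for you when you create a key pair in the EC2 console, in which case you download the private key (.pem) once and AWS keeps only the public key.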
45. What is Amazon S3?

S3 is short for Simple Storage Service, and Amazon S3 is one of the most widely
supported storage platforms available. S3 is object storage that can store and
retrieve any amount of data from anywhere. Despite that versatility, it is
practically unlimited as well as cost-effective because storage is available on
demand. In addition, it offers very high levels of durability and availability.
Amazon S3 also helps manage data for cost optimization, access control, and
compliance.

46. How can you recover/log in to an EC2 instance for which you have lost the key?

Follow the steps provided below to recover an EC2 instance if you have lost the key:

If you have previously configured AWS Systems Manager Session Manager on the instance, you can try accessing the instance using the AWS Management
Console or AWS CLI. Session Manager provides secure, browser-based access
to your instances without requiring SSH keys.

If you have AWS Systems Manager Run Command configured on the instance,
you can try running commands remotely to reset the SSH key or create a new
user with sudo privileges

If you have access to another EC2 instance in the same availability zone and
subnet, you can stop the affected instance, detach its root volume, attach the
volume to the other instance, mount the volume, and modify the SSH
configuration or create a new user with sudo privileges

If the instance was launched with CloudInit or User Data scripts that configure
SSH access, you can modify the script to add a new SSH key or create a new
user with sudo privileges.

If you have a recent snapshot of the instance's root volume, you can create a
new volume from the snapshot, attach it to a new instance, modify the SSH
configuration or create a new user with sudo privileges, and then launch the
new instance.
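For the detach-and-attach route, the key-injection step on the rescue instance might look like this sketch (the mount point and user name are assumptions; a scratch directory stands in for the mounted root volume here):

```shell
# Stand-in for where the affected instance's root volume is mounted
MOUNT=/tmp/rescue-root
mkdir -p "$MOUNT/home/ec2-user/.ssh"

# Append a replacement public key to the user's authorized_keys
echo "ssh-ed25519 AAAA...demo-key user@example" \
  >> "$MOUNT/home/ec2-user/.ssh/authorized_keys"

# Restore strict permissions so sshd will accept the key
chmod 700 "$MOUNT/home/ec2-user/.ssh"
chmod 600 "$MOUNT/home/ec2-user/.ssh/authorized_keys"
```

After unmounting, reattach the volume to the original instance as its root device and start it; you can then log in with the new private key.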

47. What are some critical differences between AWS S3 and EBS?

Object Storage vs. Block Storage:

S3 is an object storage service, meaning it stores data as objects in buckets. Each object consists of data, metadata, and a unique key identifier. EBS, on the
other hand, is a block storage service. It provides persistent block-level storage
volumes that can be attached to EC2 instances and used like physical hard
drives.

Use Cases:

S3 is ideal for storing large amounts of unstructured data such as images, videos, log files, backups, and static website content. It is highly scalable,
durable, and designed for high availability. EBS is typically used for storing data
that requires frequent updates, such as databases and application file systems.
It provides low-latency access and is suitable for transactional workloads.

Access Method:

S3 is accessed over HTTP/HTTPS using RESTful APIs or SDKs provided by AWS. It's designed for internet-scale applications and can be accessed from
anywhere with an internet connection. EBS volumes are attached to EC2
instances and accessed as block devices. They appear as locally attached
storage to the instance and can be formatted with a file system just like a
physical disk.
Durability and Availability:

S3 is designed for 99.999999999% (11 nines) durability of objects over a given year and offers built-in redundancy across multiple availability zones within a region. EBS volumes are replicated within a single availability zone for high
availability but do not offer cross-AZ redundancy by default. However, you can
use features like EBS snapshots to create backups and replicate data across
multiple AZs.
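To make the 11-nines figure concrete, a quick back-of-the-envelope calculation (the object count is an arbitrary example):

```python
# Expected annual object loss at 11 nines of durability
durability = 0.99999999999           # 99.999999999%
annual_loss_rate = 1 - durability    # chance of losing a given object in a year

objects = 10_000_000                 # example: 10 million stored objects
expected_losses_per_year = objects * annual_loss_rate

print(expected_losses_per_year)      # ~1e-4, i.e. about one object every 10,000 years
```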

Pricing Model:

S3 pricing is based on the amount of data stored, data transfer out of the S3
bucket, and requests made to the service. EBS pricing is based on the
provisioned storage capacity (per GB per month), IOPS (input/output
operations per second), and snapshot storage.

Data Access:

S3 is optimized for read-heavy workloads, making it suitable for scenarios where data is primarily read and accessed by many users simultaneously. EBS
is optimized for low-latency random access, making it suitable for scenarios
where data is frequently written and updated, such as databases.

48. How do you allow a user to gain access to a specific bucket?

Identity and Access Management (IAM):

Use the IAM service provided by your cloud provider to create a user account
or group for the person or team that needs access to the bucket.
Permissions:

Assign appropriate permissions to the user or group. Permissions are usually defined through policies that specify what actions the user or group can perform on specific resources like buckets, objects, or even specific operations within the bucket.

Bucket Policy or Access Control List (ACL):

Configure the access control settings for the specific bucket. Depending on the
cloud provider, you can either use a bucket policy or ACL to control access.
These configurations define who can access the bucket and what level of
access they have (e.g., read, write, delete).
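In AWS, the resulting IAM policy might look like this sketch (the bucket name is a placeholder, and the action list is deliberately narrow):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListTheBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-bucket"
    },
    {
      "Sid": "ReadWriteObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

Attach it to the IAM user or group; to grant the same access from the bucket side instead, add a Principal element and apply it as a bucket policy. Note that bucket-level actions (ListBucket) target the bucket ARN, while object-level actions target the /* resource.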

49. How can you monitor S3 cross-region replication to ensure consistency without actually checking the bucket?

CloudWatch Metrics:

AWS S3 provides various CloudWatch metrics that you can monitor to ensure
replication consistency. These metrics include ReplicationLatency,
SyncOperations, PendingReplicationCount, etc. Monitoring these metrics can
give you insights into the replication status and any potential issues.

ReplicationLatency → This metric measures the time it takes for changes made
to data in one node or datacenter to be replicated to other nodes or datacenters
within the system → High replication latency can indicate issues in the
replication process, such as network congestion, resource constraints, or
inefficient replication mechanisms

SyncOperations → Sync operations refer to the number of synchronization tasks performed between replicas to ensure consistency → High sync operation rates could indicate either a high rate of data changes needing synchronization or inefficiencies in the synchronization process

PendingReplicationCount → This metric represents the number of changes or updates that are yet to be replicated to the desired number of replicas. It indicates the backlog in replication tasks → High pending replication count suggests that the replication system is struggling to keep up with the rate of changes or that there are bottlenecks in the replication pipeline.

CloudWatch Alarms:

Set up CloudWatch alarms based on these metrics. For example, you can
create an alarm to notify you if the PendingReplicationCount exceeds a certain
threshold for a specified period. This can indicate a problem with replication
lag.

S3 Replication Time Control (S3 RTC):

AWS offers S3 Replication Time Control, which provides SLA-backed replication of objects across regions, designed to replicate 99.99% of new objects within 15 minutes. By enabling S3 RTC, you can monitor the compliance status of your replicated objects against that replication time objective using dedicated replication metrics.

AWS Config Rules:

Use AWS Config to set up rules that monitor the configuration of your S3
replication setup. You can define rules to ensure that replication configurations
are compliant with your organization's policies and best practices.

S3 Event Notifications:

Configure S3 event notifications to trigger notifications whenever certain replication-related events occur. For example, you can set up notifications for
replication failures or when objects are replicated successfully.
AWS CloudTrail:

Enable AWS CloudTrail to capture API calls related to S3 replication. CloudTrail provides detailed logs of API activity, including replication-related actions. By
analyzing CloudTrail logs, you can track changes to replication configurations
and diagnose issues.

Third-Party Monitoring Tools:

Consider using third-party monitoring and logging solutions that specialize in AWS infrastructure monitoring. These tools often offer more advanced
monitoring capabilities and can provide insights into S3 replication
performance and consistency.
50. What is Snowball?

Snowball is a physical data transport solution provided by AWS (Amazon Web Services) that helps in transferring large amounts of data into and out of the AWS cloud securely and efficiently.

High Capacity Storage Device:

Snowball devices come in various storage capacities, ranging from tens of terabytes to hundreds of terabytes. They provide a large amount of storage space in a rugged and durable form factor.

Offline Data Transfer:

Snowball enables offline data transfer, which is particularly useful when transferring large datasets over the internet would be time-consuming or impractical due to limited bandwidth or security concerns.

Data Encryption:

Snowball devices feature built-in encryption capabilities to ensure that data is securely encrypted during transit. AWS Key Management Service (KMS) keys are used to encrypt the data stored on the Snowball device, providing end-to-end encryption.

Simple Interface:

Snowball offers a simple and intuitive interface for managing the data
transfer process. Users can request a Snowball device through the AWS
Management Console, specify the data to be transferred, and track the
progress of the transfer.

Fast Data Transfer:

Snowball utilizes high-speed connections to transfer data between the device and AWS, enabling fast data transfer rates. This helps reduce the time required to transfer large volumes of data compared to traditional methods such as shipping hard drives.
Integration with AWS Services:

Snowball integrates seamlessly with various AWS services, allowing users to easily import data into services like Amazon S3, Amazon Glacier, Amazon EBS, and Amazon Elastic File System (EFS).

Cost-Effective:

Snowball can be a cost-effective solution for transferring large datasets, especially for one-time or infrequent transfers. Users pay a flat fee for each Snowball job, which includes the use of the device, data transfer, and shipping costs.
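The case for offline transfer is easy to quantify; a rough comparison of network transfer time for a Snowball-sized dataset (the link speed is an arbitrary example, and protocol overhead is ignored):

```python
# How long would 80 TB take over a fully saturated 1 Gbps link?
dataset_bits = 80 * 10**12 * 8       # 80 TB in bits (decimal TB)
link_bps = 1 * 10**9                 # 1 Gbps sustained throughput

seconds = dataset_bits / link_bps
days = seconds / 86_400

print(round(days, 1))                # 7.4 -> about a week, vs days of shipping
```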

51. What are the Storage Classes available in Amazon S3?

Amazon S3 Standard → Default storage class for Amazon S3 → Designed for frequently accessed data that requires low-latency access; offers high durability, availability, and performance → wide range of use cases, including serving website content, storing critical business data, and hosting static assets

Amazon S3 Intelligent-Tiering → Storage class is ideal for data with unknown or changing access patterns. Intelligent-Tiering automatically moves objects between two access tiers: frequent access and infrequent access, based on their access patterns → wide range of use cases, such as CMS, Backup and Archival, Data Lake, Data Warehousing and Analytics, Application Logs and Metrics.

Amazon S3 Standard-Infrequent Access (S3 Standard-IA) → Designed for data that is accessed less frequently but requires rapid access when needed. It offers the same high durability and availability as S3 Standard but at a lower storage cost. Standard-IA is suitable for long-term storage of data that may be accessed infrequently but still requires low-latency access → wide range of use cases such as long-term but immediately required data backup such as compliance, disaster recovery, legal documents, corporate historical documents, real-time analytics of old data, old media files requiring low storage cost but faster retrieval.

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) → Similar to Standard-IA, One Zone-IA is designed for infrequently accessed data but stores data in a single AWS Availability Zone instead of across multiple zones like the standard IA class. This makes it more cost-effective but also potentially less resilient to the loss of an Availability Zone → It's ideal for data that can be easily reproduced, is less critical, transient, or short-lived, and is not a candidate for fault tolerance, such as data in dev and testing environments.

Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) → Designed for long-term archival storage of data that is accessed infrequently and can tolerate retrieval times of several hours. It offers very low storage costs but incurs additional retrieval fees and has longer retrieval times compared to other S3 storage classes. Deep Archive is suitable for data that needs to be retained for compliance or regulatory reasons but may not need to be accessed frequently

Amazon S3 Glacier Instant Retrieval storage class → Archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds → performance-sensitive use cases like image hosting, online file-sharing applications, medical imaging and health records, news media assets, and genomics.

Amazon S3 Glacier Flexible Retrieval (formerly S3 Glacier) storage class → It comes between the Amazon S3 Glacier Instant Retrieval storage class and Amazon S3 Glacier Deep Archive

Amazon S3 Reduced Redundancy Storage → Reduced Redundancy Storage (RRS) is an Amazon S3 storage option that enables customers to store noncritical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage → use cases involve storing thumbnails, transcoded media, or other processed data that can be easily reproduced.

Amazon S3 on Outposts → Create S3 buckets on your Outposts and easily store and retrieve objects on premises. S3 on Outposts provides a new storage class, OUTPOSTS, which uses the Amazon S3 APIs and is designed to store data durably and redundantly across multiple devices and servers on your Outposts

52. What Is Amazon Virtual Private Cloud (VPC) and Why Is It Used?

Isolation:

One of the primary needs for AWS VPC is to create an isolated section
of the AWS Cloud for your resources that provides security and privacy
by allowing you to define your own virtual network topology, including
subnets, route tables, and network gateways.

Security:

AWS VPC enables you to define network access control policies using
security groups and network access control lists (ACLs) allowing you to
control which resources can communicate with each other and with the
internet, providing a secure environment for your applications and data.

Custom Networking:

With AWS VPC, you have full control over your virtual network, including
IP address ranges, subnets, and routing allowing you to create a network
topology that meets the specific requirements of your applications and
workloads.

Hybrid Cloud Connectivity:

AWS VPC provides features for connecting your virtual network to your
on-premises data center or other AWS VPCs using VPN connections,
Direct Connect, or AWS Transit Gateway, enabling you to extend your
existing network infrastructure into the AWS Cloud and build hybrid
cloud solutions.
Scalability:

AWS VPC is highly scalable, allowing you to create and manage large-
scale networks with thousands of resources to accommodate growing
workloads and traffic patterns without disruption.

Compliance:

Many regulatory requirements and industry standards, such as GDPR, HIPAA, and PCI DSS, require organizations to implement strong network
security controls. AWS VPC provides the necessary features and
capabilities to help you meet compliance requirements and protect
sensitive data.

Resource Organization:

AWS VPC enables you to organize your resources into logical groups
using subnets, route tables, and network access control policies. This
makes it easier to manage and maintain your infrastructure, especially
as it grows in size and complexity.

Cost Management:

By using AWS VPC, you can optimize costs by only provisioning the
network resources you need and scaling them as necessary.
Additionally, you can leverage features like AWS PrivateLink to reduce
data transfer costs between services within the same VPC.

54. How do you connect multiple sites to a VPC?

Virtual Private Network (VPN):


Configure VPN connections between each site's local network and the
VPC. In AWS, you would set up a Virtual Private Gateway (VGW) and
Customer Gateway (CGW), and then create VPN connections between
them. In Azure, you would create a Virtual Network Gateway and
configure a Site-to-Site VPN connection.

Direct Connect (DX) (specific to AWS):

Direct Connect establishes a dedicated network connection between your on-premises data center and AWS. This provides a more consistent
network experience and can be advantageous for transferring large
volumes of data for mission-critical applications.

You would set up a Direct Connect Gateway in AWS and connect it to your VPC, and then establish a physical connection between your on-premises network and AWS through a Direct Connect location.

Routing:

Configure routing tables in your VPC to direct traffic appropriately. Ensure that routes to on-premises networks are properly configured and
propagated. In AWS, you might use Route Tables and VPN Route
Propagation for VPN connections, and BGP (Border Gateway Protocol)
for Direct Connect. In Azure, you would configure local network
gateways and update the route tables accordingly.

Security:

Implement appropriate security measures such as VPN encryption, access control lists (ACLs), and security groups. Ensure that only
authorized traffic is allowed to traverse the connections between your
sites and the VPC.

Monitoring and Management:

Monitor the connections for performance, availability, and security. Utilize logging, monitoring, and alerting tools provided by your cloud
provider to stay informed about the status of your connections.
Regularly review and update configurations as needed to maintain
optimal performance and security.

55. Name and explain some security products and features available in VPC?

Security Groups:

Security Groups act as virtual firewalls for your instances to control inbound
and outbound traffic. You can define rules that allow specific types of traffic
based on protocol, port, and source/destination IP addresses. Security Groups
are stateful, meaning if you allow inbound traffic, the outbound traffic is
automatically allowed, simplifying the configuration.

Network Access Control Lists (NACLs):

NACLs are stateless packet filters that control traffic at the subnet level. They allow you to create rules that define which traffic is allowed to enter or leave a subnet. NACLs provide an additional layer of security beyond Security Groups, especially for blocking specific IP ranges or protocols.

Web Application Firewall (WAF):

WAF helps protect your web applications from common web exploits by filtering and monitoring HTTP requests. It enables you to create rules that
allow, block, or monitor HTTP/HTTPS requests based on conditions you define,
such as IP addresses, HTTP headers, or URI strings. WAF integrates with AWS
services like CloudFront and Application Load Balancers to provide
comprehensive protection for your web applications.

AWS Shield:

AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS. It provides always-on detection and automatic inline mitigation to minimize the impact of DDoS
attacks. AWS Shield Standard is automatically included for all AWS customers
at no additional cost, while AWS Shield Advanced offers additional features
like DDoS cost protection and 24/7 access to DDoS response experts for an
additional fee.

VPC Flow Logs:

VPC Flow Logs capture information about the IP traffic going to and from
network interfaces in your VPC. You can use Flow Logs for security analysis,
troubleshooting, and compliance auditing. Flow Logs can be configured to
capture metadata about each packet (e.g., source/destination IP addresses,
ports, protocol) and can be sent to Amazon S3, CloudWatch Logs, or Amazon
Kinesis Data Firehose for storage and analysis.

AWS Identity and Access Management (IAM):

IAM enables you to manage access to AWS services and resources securely.
You can create and manage IAM users, groups, and roles to control who can
access your VPC resources and what actions they can perform. IAM policies
allow you to define granular permissions, limiting access to specific VPC
resources based on roles and responsibilities.

VPC Endpoints:

VPC Endpoints enable you to privately connect your VPC to supported AWS
services and VPC endpoint services without requiring internet gateway, NAT
device, VPN connection, or Direct Connect connection. This helps in keeping
traffic between your VPC and AWS services within the AWS network, reducing
exposure to the public internet and enhancing security.

Key Management Service (KMS):

KMS is a managed service that allows you to create and control the encryption
keys used to encrypt your data. You can use KMS to encrypt data stored in
various AWS services, such as Amazon S3, Amazon EBS, and Amazon RDS, as
well as your own applications. By encrypting your data, you add an additional
layer of security, especially for sensitive data stored in your VPC.

VPC PrivateLink:

VPC PrivateLink allows you to privately access services hosted on AWS or by AWS partners from within your VPC without exposing your traffic to the public internet. It enables you to create interface VPC endpoints in your VPC that act
as entry points for accessing supported services. With PrivateLink, you can
securely connect to services like AWS Elastic Load Balancing, Amazon S3, and
AWS Marketplace partners, among others.

AWS Secrets Manager:

Secrets Manager helps you securely store, rotate, and manage the credentials,
API keys, and other secrets used by your applications. It centralizes and
automates the management of secrets, reducing the risk of unauthorized
access and exposure. Secrets Manager integrates with AWS services, allowing
you to securely access secrets from your VPC-based applications without
hardcoding credentials.

56. How do you monitor Amazon VPC?

AWS CloudWatch

CloudWatch can be used to collect and track metrics related to VPC resources such
as EC2 instances, load balancers, VPN connections, and NAT gateways. Set up
CloudWatch Alarms to receive notifications when certain thresholds are exceeded,
such as high CPU utilization or low network throughput. Use CloudWatch Logs to
capture and analyze logs from VPC Flow Logs, DHCP logs, Firewall logs and other
sources to monitor network traffic and diagnose connectivity issues.

VPC Flow Logs

VPC Flow Logs capture information about the IP traffic going to and from network
interfaces in your VPC. VPC Flow Logs can be enabled for individual subnets or the
entire VPC. Analyze Flow Logs using tools like Amazon CloudWatch Logs Insights or
third-party log management solutions to monitor traffic patterns, identify anomalies,
and troubleshoot connectivity issues.

AWS Config

AWS Config provides a detailed view of the configuration changes made to resources
within your VPC. You can use Config to monitor changes to VPC settings, security
group rules, route tables, and network ACLs. Set up Config Rules to enforce desired
configurations and detect deviations from them, helping you maintain compliance and
security in your VPC environment.

Amazon VPC Dashboard

Amazon VPC Dashboard in the AWS Management Console provides a centralized view
of your VPC resources and their status. Monitor metrics such as VPC traffic, VPN
connection status, and Elastic IP address usage directly from the dashboard. Use the
VPC Dashboard to quickly identify any issues or abnormalities within your VPC
configuration.

Third-Party monitoring tools

Consider using third-party monitoring tools (Datadog, New Relic, and Sumo Logic) and
services that offer advanced features for monitoring and managing VPC
environments. These tools may provide additional insights, visualization capabilities,
and integrations with other monitoring systems. Implement security monitoring
solutions (Amazon GuardDuty, AWS Security Hub, or third-party security tools) to
detect and respond to security threats within your VPC. Monitor for suspicious activity,
unauthorized access attempts, and potential vulnerabilities across your VPC
resources.
57. How many Subnets can you have per VPC?

By default, we can have up to 200 subnets per Amazon Virtual Private Cloud (VPC); this is a soft limit that can be raised via a service quota increase request.
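Subnet planning is ultimately CIDR arithmetic; for example, carving a /16 VPC range into /24 subnets (the addresses are illustrative):

```python
import ipaddress

# A typical VPC CIDR block split into /24 subnets
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))      # 256 possible /24 subnets in a /16
print(subnets[0])        # 10.0.0.0/24
```

A /16 yields 256 possible /24s, which already exceeds the default 200-subnet quota, so larger VPCs are usually carved into fewer, bigger subnets.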

58. When Would You Prefer Provisioned IOPS over Standard RDS Storage?

Performance-sensitive Workloads:

Provisioned IOPS is ideal for applications that require high and consistent I/O
performance, such as databases powering online transaction processing
(OLTP) systems or data warehouses. If your application experiences
performance degradation during peak usage periods or when handling
complex queries, Provisioned IOPS can ensure that your database maintains
the required performance levels.

Low Latency Requirements:

Applications that demand low latency and fast response times, such as real-
time analytics or high-frequency trading platforms, benefit from Provisioned
IOPS. With Provisioned IOPS, you can reduce disk latency and ensure that
database operations are executed quickly and responsively.

IO-Intensive Workloads:

Workloads that involve frequent read and write operations, large database scans, or heavy data processing can benefit from Provisioned IOPS. Provisioned IOPS provides dedicated I/O capacity, allowing your database to handle intensive workloads without experiencing performance bottlenecks.

Predictable Performance Requirements:

If your application has strict performance SLAs (Service Level Agreements) or requires predictable performance levels, Provisioned IOPS is the preferred storage option.
By provisioning a specific amount of IOPS, you can ensure that your database
consistently meets performance expectations under varying workload
conditions.

Database Replication and Backup Operations:

Provisioned IOPS can be beneficial for database replication, backup, and restore operations, where consistent and high-performance storage is essential for maintaining data integrity and minimizing downtime.

Cost-Effective Scaling:

While Provisioned IOPS typically incurs higher costs compared to standard RDS
storage, it can be more cost-effective in scenarios where you need to scale your
database vertically. By adjusting the provisioned IOPS and storage capacity
based on your workload requirements, you can optimize performance while
controlling costs effectively.
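A quick sizing check shows when provisioned IOPS is warranted; general-purpose gp2 storage baselines at 3 IOPS per GB (between a 100 IOPS floor and a 16,000 IOPS ceiling), so (the workload figures are illustrative):

```python
# Does a general-purpose (gp2-style) volume cover the workload,
# or should IOPS be provisioned (io1/io2)?
volume_gb = 500
baseline_iops = min(max(volume_gb * 3, 100), 16_000)   # gp2 baseline rule

required_iops = 8_000                # example OLTP requirement
needs_provisioned = required_iops > baseline_iops

print(baseline_iops)                 # 1500
print(needs_provisioned)             # True -> provision ~8000 IOPS instead
```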

60. What Are the Benefits of AWS’s Disaster Recovery?

High Availability and Durability:

By deploying resources across multiple data centers and Availability Zones (AZs) worldwide, organizations can achieve high availability and durability for their applications and data via built-in redundancy and fault tolerance, minimizing the risk of downtime and data loss during disasters.

Scalability:

AWS offers on-demand scalability, allowing organizations to scale their disaster recovery infrastructure up or down based on demand, easily adjusting compute, storage, network, and database capacity to accommodate changing workload requirements during disaster recovery scenarios.

Cost-effectiveness:

AWS's pay-as-you-go pricing model enables organizations to pay only for the resources they use, reducing the upfront costs associated with traditional disaster recovery solutions, such as building and maintaining expensive standby data centers.

Automation and Orchestration:

Organizations can automate the deployment, configuration, and management of their disaster recovery infrastructure, streamlining DR processes and reducing the time required to recover from disasters.

Fast Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO):

AWS offers solutions that enable organizations to achieve fast recovery times
and minimize data loss during disasters. Services like AWS Backup, AWS
Storage Gateway, and AWS Database Migration Service (DMS) help
organizations meet their RTO and RPO targets by providing efficient backup,
replication, and data migration capabilities.

Global Reach and Compliance:

AWS operates in multiple regions and complies with industry standards and
regulations, making it suitable for organizations with global operations and
compliance requirements.

Managed Services and Support:

AWS offers managed services and support options to help organizations
design, implement, and maintain their disaster recovery solutions. With AWS
Managed Services (AMS) and AWS Support plans, organizations can access
expertise and assistance from AWS professionals to optimize their DR
strategies and respond to incidents effectively.

63. What is RTO and RPO in AWS?

RTO

RTO refers to the maximum amount of time allowed for recovering a system,
application, or service after a disruption or disaster occurs. In AWS, RTO
measures the time it takes to restore operations and bring the affected
resources back online following an outage or failure. Organizations typically
define RTO based on business requirements, considering factors such as the
criticality of the application, the impact of downtime on revenue and
productivity, and customer expectations.

AWS Services → Replication, Auto Scaling, AWS Elastic Beanstalk, and AWS
Lambda

RPO

RPO defines the maximum acceptable amount of data loss that an organization
can tolerate during a disaster or disruption. In AWS, RPO measures the point
in time to which data must be recovered to ensure minimal data loss.
Organizations determine RPO based on factors such as data sensitivity,
regulatory requirements, and business continuity needs.

AWS Services → Amazon S3 for object storage, Amazon EBS snapshots, and
AWS Backup
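
The two targets are easy to mix up, so here is a small illustration in plain
Python (not an AWS API): with periodic backups, the worst-case data loss is
one full backup interval, so a schedule only meets an RPO target if its
interval is at or below that target.

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval: timedelta) -> timedelta:
    """With periodic backups, a failure just before the next backup
    loses up to one full interval of data."""
    return backup_interval

def meets_rpo(backup_interval: timedelta, rpo_target: timedelta) -> bool:
    """A schedule satisfies an RPO target only if the worst-case
    loss window does not exceed the target."""
    return worst_case_data_loss(backup_interval) <= rpo_target

# Hourly EBS snapshots against a 4-hour RPO target:
print(meets_rpo(timedelta(hours=1), timedelta(hours=4)))   # True
# Nightly backups against the same target:
print(meets_rpo(timedelta(hours=24), timedelta(hours=4)))  # False
```

RTO, by contrast, is measured from the moment of the outage to the moment
service is restored, and is driven by how quickly standby resources can be
brought online rather than by the backup schedule.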

64. If you would like to transfer vast amounts of data, which is the best
option among Snowball, Snowball Edge, and Snowmobile?

Snowball:

Snowball is designed for transferring large amounts of data (up to 80TB per
device) to and from AWS in situations where high-speed internet connections
are not available or where transferring data over the network would be time-
consuming and costly. Snowball devices are rugged, portable, and secure,
equipped with built-in encryption and tamper-resistant features. Snowball is
ideal for one-time data transfers, migrations, and data backup projects where
the data volume is relatively large but not massive compared to the capacity of
Snowmobile.

Snowball Edge:

Snowball Edge combines the capabilities of Snowball with on-board compute and
storage, enabling edge computing and data processing at edge locations. In
addition to data transfer, Snowball Edge can run AWS Lambda functions, EC2
instances, and other AWS services locally, making it suitable for scenarios
requiring data processing and analysis at remote or disconnected locations.
Snowball Edge is recommended for use cases such as IoT data collection,
machine learning inference at the edge, and disaster recovery scenarios where
local compute capabilities are required along with data transfer.

Snowmobile:

Snowmobile is a massive data transfer service designed for moving
exabyte-scale data sets (up to 100 PB per Snowmobile) to AWS in a single
operation. Snowmobile is a rugged shipping container with storage capacity
equivalent to many Snowballs, equipped with high-speed networking and
security features for transferring petabytes of data securely and
efficiently. Snowmobile is typically used by organizations with extremely
large data sets, such as media and entertainment companies, scientific
research organizations, and enterprises with massive data archives.
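
The choice often comes down to arithmetic: network transfer time scales with
link speed, while Snow devices ship with a roughly fixed turnaround of days
to weeks. A back-of-the-envelope sketch (the utilization figure is an
illustrative assumption, not an AWS number):

```python
def network_transfer_days(data_tb: float, link_gbps: float,
                          utilization: float = 0.8) -> float:
    """Days to push `data_tb` terabytes over a `link_gbps` link at a
    sustained utilization (decimal units: 1 TB = 8e12 bits)."""
    bits = data_tb * 8e12
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 86_400

# 80 TB (one Snowball's worth) over a 100 Mbps line:
print(round(network_transfer_days(80, 0.1), 1))  # → 92.6 days
# The same 80 TB over a 10 Gbps line:
print(round(network_transfer_days(80, 10), 2))   # → 0.93 days
```

At roughly three months over a 100 Mbps link, shipping a Snowball clearly
wins; with a sustained 10 Gbps connection, online transfer becomes viable.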

66. What are the advantages of AWS IAM?

AWS Identity and Access Management (IAM) is a powerful service that provides
centralized control over access to AWS resources.

Granular Access Control:

IAM enables you to define fine-grained permissions and access policies that
specify who can access specific AWS resources and what actions they can
perform. You can create custom IAM policies to grant or deny permissions at
the level of individual API actions, resources, or resource groups, allowing
you to enforce the principle of least privilege and minimize the risk of
unauthorized access.

Centralized Identity Management:

IAM provides a centralized identity management system for managing users,
groups, and roles across your AWS accounts and services. You can create IAM
users for individuals, IAM groups to manage sets of users, and IAM roles for
applications and services to assume temporary permissions. IAM integrates
with AWS Single Sign-On (SSO) and external identity providers (IdPs) using
standards such as SAML and OpenID Connect, enabling you to manage access to
AWS resources centrally and securely.

Security and Compliance:

IAM helps you improve security and maintain compliance with regulatory
requirements by enforcing strong authentication and access controls. You can
enable multi-factor authentication (MFA) for IAM users to add an extra layer
of security to their accounts. IAM supports AWS CloudTrail integration,
allowing you to monitor and log IAM API activity for auditing, compliance,
and security analysis.

Flexible Credential Management:

IAM provides flexible credential management options, including access keys,
IAM roles, and temporary security credentials. You can generate access keys
for programmatic access to AWS resources, IAM roles for cross-account access
or federated users, and temporary security credentials for applications
running on EC2 instances.

Scalability and Performance:

IAM is designed to scale with your AWS environment, supporting thousands of
users, groups, roles, and permissions without impacting performance. You can
manage access control policies centrally and apply them across multiple AWS
accounts and services, ensuring consistent security and access management as
your organization grows.

Cost Management:

IAM helps you optimize costs by allowing you to grant permissions only to
the resources and actions required by users, groups, and roles. By
implementing least privilege access and monitoring IAM usage, you can
identify and eliminate unnecessary permissions, reducing the risk of
accidental or malicious actions that could incur additional costs.

67. Explain Connection Draining

Connection Draining is an Elastic Load Balancing feature that allows
in-flight requests to complete on instances that are being decommissioned or
updated. With Connection Draining enabled, the load balancer stops sending
new requests to a departing instance but gives it a set length of time to
finish its existing requests. If Connection Draining is not enabled, a
departing instance goes offline immediately and all of its pending requests
fail.

68. What is Power User Access in AWS?

An Administrator User is similar to the owner of the AWS resources: an
Administrator User can create, change, delete, and inspect resources, as well
as grant permissions to other AWS users. A Power User has Administrator
access without the ability to manage users and permissions: a Power User can
create, modify, view, and remove resources but cannot grant permissions to
other users.

70. What are the elements of an AWS CloudFormation
template?

AWS CloudFormation templates are YAML- or JSON-formatted text files with the
following elements:

• Template parameters → Parameters are input values that users
provide when creating or updating a CloudFormation stack based on
the template → Customize the stack by specifying values such as
instance types, storage sizes, or network configurations at runtime.

• Output values → Outputs define the values that users can retrieve after
the stack is created or updated → Can include resource identifiers,
URLs, or other information generated during the stack creation
process, allowing users to easily access and reference these values.

• Data tables → Mappings allow you to define key-value pairs that can
be used to specify conditional values based on a key lookup. They are
typically used to map different values for resources based on regions,
environments, or other criteria defined in the template.

• Resources → Resources define the AWS resources that make up the
stack, such as EC2 instances, S3 buckets, RDS databases, IAM roles,
and more → Each resource is declared using a resource type, a logical
name, and its properties, which specify the resource's configuration
settings.

• File format version → The format version specifies the version of the
CloudFormation template schema being used → Helps CloudFormation
determine which syntax rules and features are supported.
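
Put together, a minimal template using these elements might look like the
following sketch (the AMI IDs and mapping values are placeholders, not real
values):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal sketch showing the main template elements.

Parameters:
  InstanceType:
    Type: String
    Default: t3.micro
    AllowedValues: [t3.micro, t3.small]

Mappings:
  RegionAmi:
    us-east-1:
      Ami: ami-0123456789abcdef0   # placeholder AMI ID
    eu-west-1:
      Ami: ami-0fedcba9876543210   # placeholder AMI ID

Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: !FindInMap [RegionAmi, !Ref "AWS::Region", Ami]

Outputs:
  InstanceId:
    Description: ID of the launched instance
    Value: !Ref WebServer
```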

71. What happens when one of the resources in a stack
cannot be created successfully?

If a resource in the stack cannot be created, CloudFormation automatically
rolls back and deletes all the resources that it created from the
CloudFormation template. This is a handy feature when you accidentally exceed
your limit of Elastic IP addresses or don’t have access to an EC2 AMI.

74. Can you take a backup of EFS like EBS, and if yes,
how?

Yes, you can use the EFS-to-EFS backup solution to recover from unintended
changes or deletion in Amazon EFS. Follow these steps:

1. Sign in to the AWS Management Console

2. Click the launch EFS-to-EFS-restore button

3. Use the region selector in the console navigation bar to select the AWS Region

4. Verify if you have chosen the right template on the Select Template page

5. Assign a name to your solution stack

6. Review the parameters for the template and modify them if necessary

75. How do you auto-delete old snapshots?

Here’s the procedure for auto-deleting old snapshots:

• As per procedure and best practices, take snapshots of the EBS volumes
on Amazon S3.

• Use AWS Ops Automator to handle all the snapshots automatically.

• This allows you to create, copy, and delete Amazon EBS snapshots.
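
The deletion rule such a tool applies can be sketched in plain Python; the
snapshot records below only mimic the shape of EC2 `describe_snapshots`
output, and no AWS call is made:

```python
from datetime import datetime, timedelta, timezone

def snapshots_to_delete(snapshots, retention_days, now=None):
    """Return IDs of snapshots whose StartTime is older than the
    retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
snaps = [
    {"SnapshotId": "snap-old", "StartTime": datetime(2024, 4, 1, tzinfo=timezone.utc)},
    {"SnapshotId": "snap-new", "StartTime": datetime(2024, 5, 25, tzinfo=timezone.utc)},
]
# With 30-day retention, only the April snapshot falls past the cutoff:
print(snapshots_to_delete(snaps, retention_days=30, now=now))  # ['snap-old']
```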

76. What are the different types of load balancers in
AWS?

There are three types of load balancers that are supported by Elastic Load Balancing:

1. Application Load Balancer

2. Network Load Balancer

3. Classic Load Balancer

77. What are the different uses of the various load
balancers in AWS Elastic Load Balancing?

Classic Load Balancer (CLB):

CLB provides basic load balancing capabilities for distributing incoming traffic
across multiple EC2 instances in one or more Availability Zones. Use CLB for
simple, traditional load balancing scenarios where you need to distribute traffic
evenly across instances and do not require advanced features such as content-
based routing or SSL offloading. CLB is suitable for applications that rely on
TCP and SSL protocols and do not require advanced routing or application layer
features.

Application Load Balancer (ALB):

ALB operates at the application layer (Layer 7) of the OSI model and provides
advanced routing and content-based routing capabilities. Use ALB for modern,
microservices-based architectures and applications where you need to route
traffic based on URL path, host header, or query string parameters. ALB
supports features such as host-based routing, path-based routing, WebSocket
protocol, HTTP/2, and containerized applications running on ECS or EKS. ALB
is suitable for applications with HTTP and HTTPS traffic that require flexible
routing and traffic management capabilities.

Network Load Balancer (NLB):

NLB operates at the transport layer (Layer 4) of the OSI model and provides
high-performance, low-latency load balancing for TCP, UDP, and TLS traffic. Use
NLB for applications that require high throughput, low latency, and support for
TCP/UDP protocols, such as gaming applications, real-time streaming, and IoT
platforms. NLB is designed for extreme performance and scalability, making it
suitable for handling millions of requests per second with minimal latency. NLB
also supports static IP addresses, preservation of source IP addresses, and
TCP/UDP session stickiness, making it suitable for stateful applications and
scenarios where client IP preservation is important.

79. How can you use AWS WAF in monitoring your AWS
applications?

Create Web ACLs:

A web ACL is a collection of rules that defines how AWS WAF filters and
monitors web requests to your application. Define conditions and rules within
the web ACL to specify the criteria for allowing, blocking, or monitoring HTTP
and HTTPS requests.

Define Conditions and Rules:

Within the web ACL, define conditions based on various attributes of HTTP
requests, such as IP addresses, headers, query strings, or request body
content. Create rules that use these conditions to allow, block, or monitor
requests that match specific patterns or criteria. For monitoring purposes, you
can create rules that count or log requests that match certain conditions
without blocking them.
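
For instance, a WAFv2 rule that only counts, rather than blocks, requests
from source IPs exceeding a rate threshold could look roughly like this (the
name, limit, and metric name are illustrative placeholders):

```json
{
  "Name": "rate-limit-per-ip",
  "Priority": 1,
  "Statement": {
    "RateBasedStatement": {
      "Limit": 2000,
      "AggregateKeyType": "IP"
    }
  },
  "Action": { "Count": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "rateLimitPerIp"
  }
}
```

Switching `"Action"` from `Count` to `Block` turns the same monitoring rule
into an enforcing one.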

Enable Logging:

Configure AWS WAF to log requests that match specific rules or conditions to
Amazon CloudWatch Logs or Amazon Kinesis Data Firehose for monitoring
and analysis.

Enable logging for web ACLs to capture detailed information about incoming
requests, including request headers, source IP addresses, user agents, and
more.

Analyze Logs and Metrics:

Use Amazon CloudWatch Logs Insights or other log analysis tools to query and
analyze the logs generated by AWS WAF. Monitor metrics such as the number
of allowed, blocked, or monitored requests over time to gain insights into your
application's traffic patterns and potential security threats.

Set Up Alerts and Notifications:

Configure CloudWatch Alarms to monitor specific metrics or patterns in AWS
WAF logs and trigger notifications or automated responses when certain
thresholds are exceeded. Set up alerts for unusual or suspicious activity,
such as a sudden increase in traffic from specific IP addresses or patterns
indicative of a potential attack.

Regularly Review and Update Rules:

Regularly review and update your web ACLs and rule sets based on the
analysis of AWS WAF logs and metrics. Adjust rules and conditions to adapt to
changing traffic patterns, emerging threats, or new vulnerabilities
discovered in your applications.

80. What are the different AWS IAM categories that you
can control?

Using AWS IAM, you can do the following:

• Create and manage IAM users

• Create and manage IAM groups

• Manage the security credentials of the users

• Create and manage policies to grant access to AWS services and resources

81. What are the policies that you can set for your users’
passwords?

Here are some of the policies that you can set:

• You can set a minimum password length, or require users to include at least
one number or special character.

• You can require particular character types, including uppercase letters,
lowercase letters, numbers, and non-alphanumeric characters.

• You can enforce automatic password expiration, prevent reuse of old
passwords, and require a password reset upon the next AWS sign-in.

• You can require AWS users to contact an account administrator when they
have allowed their password to expire.
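
In JSON form, an account password policy of this kind, as returned by the IAM
GetAccountPasswordPolicy API, has roughly this shape (the specific values are
illustrative, not AWS defaults):

```json
{
  "PasswordPolicy": {
    "MinimumPasswordLength": 12,
    "RequireSymbols": true,
    "RequireNumbers": true,
    "RequireUppercaseCharacters": true,
    "RequireLowercaseCharacters": true,
    "AllowUsersToChangePassword": true,
    "ExpirePasswords": true,
    "MaxPasswordAge": 90,
    "PasswordReusePrevention": 5,
    "HardExpiry": false
  }
}
```
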
82. What is the difference between an IAM role and an
IAM user?

The two key differences between the IAM role and IAM user are:

• An IAM role is an IAM entity that defines a set of permissions for making AWS
service requests, while an IAM user has permanent long-term credentials and is
used to interact with the AWS services directly.

• In the IAM role, trusted entities, like IAM users, applications, or an AWS service,
assume roles whereas the IAM user has full access to all the AWS IAM
functionalities.

83. What are the managed policies in AWS IAM?

There are two types of managed policies; one that is managed by you and one that is
managed by AWS. They are IAM resources that express permissions using IAM
policy language. You can create, edit, and manage them separately from the IAM
users, groups, and roles to which they are attached.

84. Can you give an example of an IAM policy and a
policy summary?

An IAM policy is a JSON document that grants or denies specific actions on
specific resources, for example a policy granting access to add, update, and
delete objects in a specific folder of an S3 bucket. A policy summary is the
condensed view of such a policy shown in the IAM console, listing the
services, access levels, and resources that the policy covers.
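
The example images from the original are not reproduced here; a
representative policy granting add, update, and delete access to objects in
one folder might look like this (the bucket and folder names are
placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageObjectsInOneFolder",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::example-bucket/example-folder/*"
    }
  ]
}
```

The corresponding policy summary in the IAM console would show the S3 service
with limited Read and Write access, scoped to the
example-bucket/example-folder/* resource.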

85. How does AWS IAM help your business?

IAM enables you to:

• Manage IAM users and their access - AWS IAM provides secure resource access
to multiple users

• Manage access for federated users - AWS allows you to provide secure access
to resources in your AWS account to your employees and applications without
creating IAM users for them

AWS Interview Questions for Route 53

86. What Is Amazon Route 53?

Amazon Route 53 is a scalable and highly available Domain Name System (DNS).
The name refers to TCP or UDP port 53, where DNS server requests are addressed.

87. What Is CloudTrail and How Do CloudTrail and Route
53 Work Together?

CloudTrail is a service that captures information about every request sent to the
Amazon Route 53 API by an AWS account, including requests that are sent by IAM
users. CloudTrail saves log files of these requests to an Amazon S3 bucket.
CloudTrail captures information about all requests. You can use information in the
CloudTrail log files to determine which requests were sent to Amazon Route 53, the
IP address that the request was sent from, who sent the request, when it was sent,
and more.

88. What is the difference between Latency Based
Routing and Geo DNS?

Geo DNS routing makes decisions based on the geographic location of the
request, whereas Latency Based Routing uses latency measurements between
viewer networks and AWS data centers. Latency Based Routing is used when you
want to give your customers the lowest latency possible. Geo Based routing,
on the other hand, is used when you want to direct customers to different
websites based on the country or region they are browsing from.

89. What is the difference between a Domain and a
Hosted Zone?

Domain

A domain is a collection of data describing a self-contained administrative
and technical unit. For example, www.simplilearn.com is a domain and a
general DNS concept.

Hosted zone

A hosted zone is a container that holds information about how you want to route
traffic on the internet for a specific domain. For example, lms.simplilearn.com is a
hosted zone.

90. How does Amazon Route 53 provide high availability
and low latency?

Here’s how Amazon Route 53 provides the resources in question:

Globally Distributed Servers

Amazon is a global service and consequently has DNS services globally. Any
customer creating a query from any part of the world gets to reach a DNS server
local to them that provides low latency.

Dependability

Route 53 provides the high level of dependability required by critical
applications.

Optimal Locations

Route 53 uses a global anycast network to automatically answer queries from
the optimal location.

AWS Interview Questions for Config

91. How does AWS Config work with AWS CloudTrail?

AWS CloudTrail records user API activity on your account and allows you to access
information about the activity. Using CloudTrail, you can get full details about API
actions such as the identity of the caller, time of the call, request parameters, and
response elements. On the other hand, AWS Config records point-in-time
configuration details for your AWS resources as Configuration Items (CIs).

You can use a CI to ascertain what your AWS resource looks like at any given
point in time. By using CloudTrail, on the other hand, you can quickly answer
who made an API call to modify the resource. You can also use CloudTrail to
detect whether a security group was incorrectly configured.

92. Can AWS Config aggregate data across different
AWS accounts?

Yes, you can set up AWS Config to deliver configuration updates from different
accounts to one S3 bucket, once the appropriate IAM policies are applied to the S3
bucket.

AWS Interview Questions for Database

94. Which type of scaling would you recommend for
RDS and why?

There are two types of scaling: vertical scaling and horizontal scaling.
Vertical scaling lets you scale up your master database with the press of a
button, resizing the DB instance to any of the instance types that RDS
offers. Horizontal scaling, on the other hand, is good for replicas: these
are read-only copies of your database, added through features such as RDS
read replicas and Amazon Aurora Replicas.

95. What is a maintenance window in Amazon RDS? Will
your DB instance be available during maintenance
events?

The RDS maintenance window lets you decide when DB instance modifications,
database engine version upgrades, and software patching have to occur.
Automatic scheduling is done only for patches that are related to security
and durability. By default, a 30-minute value is assigned as the maintenance
window, and the DB instance will still be available during these events,
though you might observe a minimal effect on performance.

96. What are the consistency models in DynamoDB?

There are two consistency models in DynamoDB. First, there is the Eventual
Consistency Model, which maximizes your read throughput. However, it might
not reflect the results of a recently completed write; fortunately, all the
copies of data usually reach consistency within a second. The second model is
called the Strong Consistency Model. This model introduces a delay on the
write path, but it guarantees that you will always see the updated data every
time you read it.
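
The difference can be illustrated with a toy replica model (a simplification
for intuition only, not DynamoDB's actual implementation): a write lands on
the leader immediately and reaches the other replicas only after a
propagation step, so an eventually consistent read may hit a stale replica
while a strongly consistent read always reflects the latest write.

```python
class ToyStore:
    """Toy model: one leader replica plus followers that lag by one step."""

    def __init__(self, n_followers=2):
        self.leader = {}
        self.followers = [{} for _ in range(n_followers)]

    def put(self, key, value):
        self.leader[key] = value  # followers have not replicated yet

    def replicate(self):
        # One propagation step: followers catch up to the leader.
        for f in self.followers:
            f.update(self.leader)

    def read_eventual(self, key, follower=0):
        # May return stale data until replicate() has run.
        return self.followers[follower].get(key)

    def read_strong(self, key):
        # Always served so that it sees the latest acknowledged write.
        return self.leader.get(key)

store = ToyStore()
store.put("score", 100)
print(store.read_strong("score"))    # 100
print(store.read_eventual("score"))  # None (stale: not yet replicated)
store.replicate()
print(store.read_eventual("score"))  # 100 (replicas caught up)
```

In real DynamoDB the stale window is typically under a second, and strongly
consistent reads cost more read capacity than eventually consistent ones.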

97. What type of query functionality does DynamoDB
support?

DynamoDB supports GET/PUT operations using a user-defined primary key. It
also provides flexible querying by letting you query on non-primary-key
attributes using global secondary indexes and local secondary indexes.

AWS Interview Questions - Short Answer Questions

1. Suppose you are a game designer and want to
develop a game with single-digit millisecond latency,
which of the following database services would you
use?

Amazon DynamoDB

2. If you need to perform real-time monitoring of AWS
services and get actionable insights, which services
would you use?

Amazon CloudWatch

3. As a web developer, you are developing an app,
targeted primarily for the mobile platform. Which of the
following lets you add user sign-up, sign-in, and access
control to your web and mobile apps quickly and easily?

Amazon Cognito
4. You are a Machine Learning Engineer who is on the
lookout for a solution that will discover sensitive
information that your enterprise stores in AWS and then
use NLP to classify the data and provide business-
related insights. Which among the services would you
choose?

AWS Macie

5. You are the system administrator in your company,
which is running most of its infrastructure on AWS. You
are required to track your users and keep tabs on how
they are being authenticated. You wish to create and
manage AWS users and use permissions to allow and
deny their access to AWS resources. Which of the
following services suits you best?

AWS IAM

6. Which service do you use if you want to allocate
various private and public IP addresses to make them
communicate with the internet and other instances?

Amazon VPC

7. This service provides you with cost-efficient and
resizable capacity while automating time-consuming
administration tasks.

Amazon Relational Database Service


8. Which of the following is a means for accessing
human researchers or consultants to help solve
problems on a contractual or temporary basis?

Amazon Mechanical Turk

9. This service is used to make it easy to deploy,
manage, and scale containerized applications using
Kubernetes on AWS. Which of the following is this AWS
service?

Amazon Elastic Container Service

10. This service lets you run code without provisioning
or managing servers. Select the correct service from the
options below.

AWS Lambda

11. As an AWS Developer, using this pay-per-use service,
you can send, store, and receive messages between
software components. Which of the following is it?

Amazon Simple Queue Service

12. Which service do you use if you would like to host a
real-time audio and video conferencing application on
AWS? This service provides you with a secure and
easy-to-use application.

Amazon Chime
13. As your company's AWS Solutions Architect, you are
in charge of designing thousands of similar individual
jobs. Which of the following services best meets your
requirements?

AWS Batch

AWS Interview Questions - Multiple-Choice

1. Suppose you are a game designer and want to
develop a game with single-digit millisecond latency,
which of the following database services would you
use?

1. Amazon RDS

2. Amazon Neptune

3. Amazon Snowball

4. Amazon DynamoDB

2. If you need to perform real-time monitoring of AWS
services and get actionable insights, which services
would you use?

1. Amazon Firewall Manager

2. Amazon GuardDuty

3. Amazon CloudWatch

4. Amazon EBS
3. As a web developer, you are developing an app,
targeted especially for the mobile platform. Which of the
following lets you add user sign-up, sign-in, and access
control to your web and mobile apps quickly and easily?

1. AWS Shield

2. AWS Macie

3. AWS Inspector

4. Amazon Cognito

4. You are a Machine Learning Engineer who is on the
lookout for a solution that will discover sensitive
information that your enterprise stores in AWS and then
use NLP to classify the data and provide business-
related insights. Which among the services would you
choose?

1. AWS Firewall Manager

2. AWS IAM

3. AWS Macie

4. AWS CloudHSM

5. You are the system administrator in your company,
which is running most of its infrastructure on AWS. You
are required to track your users and keep tabs on how
they are being authenticated. You wish to create and
manage AWS users and use permissions to allow and
deny their access to AWS resources. Which of the
following services suits you best?

1. AWS Firewall Manager


2. AWS Shield

3. Amazon API Gateway

4. AWS IAM

6. Which service do you use if you want to allocate
various private and public IP addresses in order to make
them communicate with the internet and other
instances?

1. Amazon Route 53

2. Amazon VPC

3. Amazon API Gateway

4. Amazon CloudFront

7. This service provides you with cost-efficient and
resizable capacity while automating time-consuming
administration tasks.

1. Amazon Relational Database Service

2. Amazon Elasticache

3. Amazon VPC

4. Amazon Glacier

8. Which of the following is a means for accessing
human researchers or consultants to help solve
problems on a contractual or temporary basis?

1. Amazon Mechanical Turk

2. Amazon Elastic Mapreduce


3. Amazon DevPay

4. Multi-Factor Authentication

9. This service is used to make it easy to deploy,
manage, and scale containerized applications using
Kubernetes on AWS. Which of the following is this AWS
service?

1. Amazon Elastic Container Service

2. AWS Batch

3. AWS Elastic Beanstalk

4. Amazon Lightsail

10. This service lets you run code without provisioning
or managing servers. Select the correct service from the
options below.

1. Amazon EC2 Auto Scaling

2. AWS Lambda

3. AWS Batch

4. Amazon Inspector

11. As an AWS Developer, using this pay-per-use service,
you can send, store, and receive messages between
software components. Which of the following is it?

1. AWS Step Functions

2. Amazon MQ

3. Amazon Simple Queue Service


4. Amazon Simple Notification Service

12. Which service do you use if you would like to host a
real-time audio and video conferencing application on
AWS? This service provides you with a secure and
easy-to-use application.

1. Amazon Chime

2. Amazon WorkSpaces

3. Amazon MQ

4. Amazon AppStream

13. As your company's AWS Solutions Architect, you are
in charge of designing thousands of similar individual
jobs. Which of the following services best meets your
requirements?

1. AWS EC2 Auto Scaling

2. AWS Snowball

3. AWS Fargate

4. AWS Batch

14. You are a Machine Learning engineer and you are
looking for a service that helps you build and train
Machine Learning models in AWS. Which among the
following are we referring to?

1. Amazon SageMaker

2. AWS DeepLens

3. Amazon Comprehend
4. Device Farm

15. Imagine that you are working for your company's IT
team. You are assigned to adjust the capacity of AWS
resources based on the incoming application and
network traffic. How would you do it?

1. Amazon VPC

2. AWS IAM

3. Amazon Inspector

4. Amazon Elastic Load Balancing

16. This cross-platform video game development engine
that supports PC, Xbox, PlayStation, iOS, and Android
platforms allows developers to build and host their
games on Amazon's servers.

1. Amazon GameLift

2. AWS Greengrass

3. Amazon Lumberyard

4. Amazon Sumerian

17. You are the Project Manager of your company's
Cloud Architects team. You are required to visualize,
understand and manage your AWS costs and usage
over time. Which of the following services works best?

1. AWS Budgets

2. AWS Cost Explorer

3. Amazon WorkMail
4. Amazon Connect

18. You are the chief Cloud Architect at your company.
How can you automatically monitor and adjust compute
resources to ensure maximum performance and
efficiency of all scalable resources?

1. AWS CloudFormation

2. AWS Aurora

3. AWS Auto Scaling

4. Amazon API Gateway

19. As a database administrator, you will employ a
service that is used to set up and manage databases
such as MySQL, MariaDB, and PostgreSQL. Which
service are we referring to?

1. Amazon Aurora

2. AWS RDS

3. Amazon Elasticache

4. AWS Database Migration Service

20. A part of your marketing work requires you to push
messages onto Google, Facebook, Windows, and Apple
through APIs or the AWS Management Console. Which
of the following services do you use?

1. AWS CloudTrail

2. AWS Config

3. Amazon Chime
4. Amazon Simple Notification Service
