AWS QUESTIONS For Interview


1. What is AWS And Why Is It So Popular?

Amazon Web Services (AWS) is a cloud computing platform that offers a variety of services,
including:
Computing, Storage, Content delivery, Database, Networking, Machine learning, and Security, identity,
and compliance.

AWS offers over 200 services from data centres around the world. It's used by millions of customers,
including startups, enterprises, and government agencies.

AWS helps organizations:


 Lower costs
 Become more agile
 Innovate faster
 Build solutions to transform industries, communities, and lives
 Move faster
 Scale

AWS offers several benefits, including:


 Replacing upfront capital infrastructure expenses with low variable costs
 Instantly spinning up hundreds or thousands of servers in minutes

Some AWS services include:


 AWS CloudFormation: Allows users to create, update, and delete stacks
 AWS Cloud9: A cloud-based integrated development environment (IDE) that lets
users write, run, and debug code

AWS provides cloud services at three levels:


IaaS: Infrastructure as a Service: Hardware
PaaS: Platform as a Service: Hardware + Runtime
SaaS: Software as a Service: Hardware + Runtime + Application

AWS is a cloud computing platform known for its scalability, cost-effectiveness, and
global infrastructure. It allows businesses to efficiently scale operations, reduce
costs, and innovate rapidly.


2. Explain The Key Components Of AWS?


 AWS provides fundamental components crucial for cloud computing:
 EC2: Elastic Compute Cloud offers scalable computing capacity.
 S3: Simple Storage Service provides object storage accessible via the internet.
 RDS: Relational Database Service manages relational database engines.
3. What Is an EC2 Instance and How Does It Work?

 EC2 stands for Elastic Compute Cloud.


 It is a virtual server in the cloud.
 When we launch this, it will run the selected operating system with
a specified application stack.
 For instance, you can deploy a web server or a database in this EC2
service.
 It can also be configured for specific computing needs, making it a
flexible and scalable solution
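
As a rough illustration, the following boto3 sketch launches a single EC2 instance; the AMI ID, key pair name, and region are placeholder assumptions, not values from this document.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI (e.g. an Amazon Linux image)
    InstanceType="t2.micro",           # free-tier eligible instance size
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",              # assumed existing key pair for SSH access
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-web-server"}],
    }],
)
print("Launched:", response["Instances"][0]["InstanceId"])

Once the instance is running, a web server or database stack can be installed on it just as on any virtual server.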

4. Describe The Difference Between S3 And EBS In AWS.


 S3 (Simple Storage Service) is an object-level data storage that distributes the data
objects across several machines and allows the users to access the storage via the
internet from any corner of the world.

 Amazon EBS (Elastic Block Store) is block-level data storage offered by Amazon.
Block storage stores files in multiple volumes called blocks, which act as separate
hard drives, and this storage is not accessible via the internet. Use cases include
business continuity, transactional and NoSQL databases, software testing, etc.
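
A minimal boto3 sketch of the difference, using placeholder names: the S3 object is written through the internet-facing API, while the EBS volume must be created in an Availability Zone and attached to one specific instance.

import boto3

# S3: object storage, addressed by bucket + key, reachable over HTTPS
s3 = boto3.client("s3")
s3.put_object(Bucket="my-demo-bucket",          # assumed existing bucket
              Key="reports/2024.csv",
              Body=b"col1,col2\n1,2\n")

# EBS: block storage, tied to one Availability Zone and one instance
ec2 = boto3.client("ec2", region_name="us-east-1")
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=20, VolumeType="gp3")
# (in practice, wait until the volume state is "available" before attaching)
ec2.attach_volume(VolumeId=volume["VolumeId"],
                  InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
                  Device="/dev/sdf")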

5. How Does Auto Scaling Work In AWS?

Auto Scaling is an AWS service that dynamically adjusts the number of running EC2
instances based on traffic demand. For instance, during high-traffic periods Auto Scaling adds
instances to maintain optimal performance according to the configured policies. Conversely, during
low traffic it reduces the number of instances, optimizing cost while maintaining high
availability.
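
As a sketch, a target-tracking scaling policy like the one below (the group name is hypothetical) tells Auto Scaling to add or remove instances so that average CPU stays near 50%.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # assumed existing Auto Scaling group
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        # scale out when average CPU rises above the target, scale in when it falls
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)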

6. What Is The AWS Free Tier, And What Services Are Included?

The AWS Free Tier provides limited use of a set of AWS services at no cost for a duration of 12
months. The services include EC2, S3, Lambda, etc. This helps users explore and experiment
with AWS services without incurring charges and serves as a kick-starting point for cloud
beginners.

 750 hours per month of eligible usage, for 12 months
 Sign-up requires a credit or debit card (MasterCard/VISA); a small verification charge (about Rs 2 / $1) applies
 Free-tier services can be combined with paid services
 Keep monthly spend low (roughly Rs 200 maximum per month) while practicing
 Billing otherwise follows the pay-as-you-go model
7. What Is Elastic Load Balancing (ELB) And How Does It Function?

• Elastic Load Balancing (ELB) is a service provided by AWS that distributes incoming
application traffic across multiple targets, such as EC2 instances and containers,
in one or more Availability Zones.
• It improves fault tolerance, ensures efficient utilization of resources, and brings high
availability by preventing a single node (instance) from becoming a point of failure,
thereby improving the application's resilience.
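
A condensed boto3 sketch of an Application Load Balancer setup; all IDs and subnet values are placeholders. It shows the three pieces ELB ties together: the load balancer across two Availability Zones, a target group of EC2 instances, and a listener that forwards traffic.

import boto3

elbv2 = boto3.client("elbv2")

lb = elbv2.create_load_balancer(
    Name="demo-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # two AZs for high availability
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)

tg = elbv2.create_target_group(
    Name="demo-targets", Protocol="HTTP", Port=80,
    VpcId="vpc-0123456789abcdef0", TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register two EC2 instances as targets
elbv2.register_targets(TargetGroupArn=tg_arn,
                       Targets=[{"Id": "i-0aaa1111"}, {"Id": "i-0bbb2222"}])

# Listener: forward incoming HTTP traffic to the registered targets
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP", Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)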

8. How Is Data Transfer Handled In AWS

Data transfer in AWS happens between regions, within a region, and between
services. It is essential to consider that these transfers come with costs when designing
architectures. For example, transferring data between an EC2 instance and an S3 bucket within
the same region is often free, but inter-region data transfer incurs charges.

9. What Is Amazon RDS, And What Database Engines Does It Support?

Amazon Relational Database Service (RDS) is a cloud-based service that helps users set up, operate,
and scale relational databases:

 Ease of use: RDS automates many database management tasks, such as provisioning,
configuring, backing up, and patching.
 Cost-efficient: RDS offers resizable capacity and you only pay for the resources you use.
 Scalability: You can scale the compute resources or storage capacity of your database
instance.
 Compatibility: RDS works with popular database platforms, including MySQL, PostgreSQL, Oracle,
MariaDB, SQL Server, and Db2.
 Flexibility: You can customize databases to meet your needs.
 Performance: You can optimize performance with features like multi-AZ, optimized writes
and reads, and AWS Graviton3-based instances.
 Other features of RDS include: Replication to enhance database availability and improve
data durability, Automated backups that enable point-in-time recovery, Automated scaling,
and Maintenance and updates.
 RDS is designed for use with relational databases, while DynamoDB is intended for use with
non-relational databases.
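
A minimal boto3 sketch of provisioning an RDS MySQL instance; the identifiers and password are placeholders, and in practice credentials would come from a secrets store rather than source code.

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="demo-mysql",
    Engine="mysql",                      # could also be postgres, mariadb, oracle-ee, ...
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                 # GiB
    MasterUsername="admin",
    MasterUserPassword="ChangeMe123!",   # placeholder only
    MultiAZ=False,                       # set True for a standby replica in another AZ
    BackupRetentionPeriod=7,             # automated backups kept 7 days (point-in-time recovery)
)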

10. Explain The Concept of AWS Identity and Access Management (IAM).

AWS Identity and Access Management (IAM) is a web service that controls access to AWS services by
managing users, security credentials, and permissions. IAM helps ensure that only authorized users
have access to AWS resources and can perform specific actions on them.

Here are some key components of IAM:

 Users:
Individuals or applications that need access to AWS resources. Each user has unique
credentials, like passwords and access keys.
There are two types of access:
- Console access: sign-in through the web console using a username and a password
(custom or auto-generated) at an account-specific login URL. The account's root user
also signs in through the console.
- Programmatic access: command-line and API access using an Access Key ID and a
Secret Access Key.

 Groups
Collections of users. Permissions can be assigned to groups instead of individual users.
 Roles
Roles can be assumed by anyone who needs them, instead of being associated with a single
person. Roles use temporary security credentials that automatically expire.
 Policies
JSON documents that define permissions. Policies can be attached to users, groups, or roles.
IAM also allows you to connect to other identity services to grant external users access to
your AWS resources.
AWS managed policies: predefined policies such as read-only access and full access.
Customer managed (custom) policies: written by you as JSON documents.
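
The sketch below (bucket, policy, group, and user names are made up) shows how the pieces fit together: a JSON policy document, a customer managed policy, a group, and a user added to that group.

import json
import boto3

iam = boto3.client("iam")

# JSON policy document: read-only access to one bucket
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::my-demo-bucket",
                     "arn:aws:s3:::my-demo-bucket/*"],
    }],
}

policy = iam.create_policy(PolicyName="DemoS3ReadOnly",
                           PolicyDocument=json.dumps(policy_doc))

# Attach the policy to a group instead of to individual users
iam.create_group(GroupName="readers")
iam.attach_group_policy(GroupName="readers", PolicyArn=policy["Policy"]["Arn"])

iam.create_user(UserName="report-viewer")
iam.add_user_to_group(GroupName="readers", UserName="report-viewer")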

11. What Is Amazon VPC and How Does It Help in Securing Your Resources?

Amazon Virtual Private Cloud (VPC) is a cloud computing service that helps secure resources by
creating a private, isolated network within the AWS Cloud. It allows users to launch AWS resources,
like databases and EC2 instances, in a virtual network that they define. VPC provides several ways to
secure resources.

The following features help you configure a VPC to provide the connectivity that your applications
need:

IP addressing

You can assign IP addresses, both IPv4 and IPv6, to your VPCs and subnets. You can also bring your
public IPv4 addresses and IPv6 GUA addresses to AWS and allocate them to resources in your VPC,
such as EC2 instances, NAT gateways, and Network Load Balancers

Virtual private clouds (VPC)

A VPC is an isolated network defined by a CIDR block (a single address without a CIDR is just an IP; an
address range with a CIDR defines the VPC). After you create a VPC, you can add subnets.

X.X.X.X/X --- CIDR --- Classless Inter-Domain Routing

/X --- subnet mask (prefix length), which decides the number of IPs

In general networking the prefix ranges from /8 to /30; AWS VPCs allow /16 to /28.

Subnets

A subnet is a slice of the VPC (the VPC's address range is typically provided by the company);
subnets are derived from the VPC's CIDR block.

X.X.X.X/X --- CIDR --- Classless Inter-Domain Routing

/X --- subnet mask (prefix length), which decides the number of IPs

In general networking each subnet reserves 2 IPs; in AWS each subnet reserves 5 IPs, for example in 10.50.0.0/24:

1. Network address ---------- 10.50.0.0

2. VPC router ---------------- 10.50.0.1

3. DNS server ---------------- 10.50.0.2

4. Future use ---------------- 10.50.0.3

5. Network broadcast address - 10.50.0.255

IGW | NAT Gateway (IGW = Internet Gateway)

IGW: Internet Gateway: provides internet access to public subnets, free of cost

VPC : IGW ---- 1:1 (a VPC and an IGW have a 1:1 relationship)

NAT Gateway: provides outbound internet access to private subnets; chargeable

Route tables

Use route tables to determine where network traffic from your subnet or gateway is directed.

The route table decides whether a subnet is public or private.

Default route table: the main route table of the VPC.

Public route table: routes to the IGW; associated with, for example, subnet-1 and subnet-2.

Private route table: routes to the NAT Gateway; associated with, for example, subnet-3 and subnet-4.
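
Tying the CIDR, subnet, IGW, and route table notes together, here is a hedged boto3 sketch (the region and CIDR ranges are illustrative) that builds a VPC with one public subnet.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc = ec2.create_vpc(CidrBlock="10.50.0.0/16")             # VPC CIDR: /16 to /28 allowed
vpc_id = vpc["Vpc"]["VpcId"]

# /24 subnet: 256 addresses, 251 usable after the 5 reserved by AWS
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.50.0.0/24",
                           AvailabilityZone="us-east-1a")

# Internet gateway, attached 1:1 to the VPC
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# A route table with a 0.0.0.0/0 route to the IGW makes the associated subnet public
rt = ec2.create_route_table(VpcId=vpc_id)
rt_id = rt["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet["Subnet"]["SubnetId"])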

SG | NACL (SG = security group, NACL = network access control list)

SG: A virtual firewall that controls traffic to and from the resources associated with it. You can specify
which traffic is allowed based on IP addresses and port ranges.

a) inbound rules: incoming traffic

SSH --- 22

HTTP -- 80

HTTPS -- 443

RDP ---- 3389

b) outbound rules: outgoing traffic

ALL Traffic

NACL: Network Access control list

inbound rules: incoming traffic

outbound rules: outgoing traffic

rule number --- allow/deny (rules are evaluated in order of rule number)

Peering connections

Use a VPC peering connection to route traffic between the resources in two VPCs.

Traffic Mirroring

Copy network traffic from network interfaces and send it to security and monitoring appliances for
deep packet inspection.
Transit gateways

Use a transit gateway, which acts as a central hub, to route traffic between your VPCs, VPN
connections, and AWS Direct Connect connections.

VPC Flow Logs

A flow log captures information about the IP traffic going to and from network interfaces in your
VPC.

VPN connections

Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS VPN).

Note:

Here are some other benefits of using VPC:

Secure communication: You can establish secure communication with corporate data centers using
VPN connections and direct and private links.

Scalable infrastructure: You can use the scalable infrastructure of AWS.

Preconfigured default VPC: Each Amazon account created after 2013 comes with a preconfigured
default VPC

12. Describe The Use of Amazon Route 53.

Amazon Route 53 is a Domain Name System (DNS) web service that helps businesses and developers
route end users to internet applications:

Domain name registration

 Register domain names, such as example.com

DNS service

 Translate domain names into IP addresses, such as 192.0.2.1, that computers use to
connect to each other

Health checking

 Send automated requests to verify that an application is reachable, available, and functional

Resolver

 Forward DNS queries from a VPC to DNS resolvers in a network, and vice versa

Other features of Amazon Route 53 include:


 Traffic Flow visual policy builder
 Automatic configuration of DNS settings for domains
 DNS Firewall to block queries for malicious domains and allow queries for trusted
domains
 Cost-effective, with users only paying for the services they use
 Secure, with Identity and Access Management (IAM)

Amazon Route 53 is a scalable and highly available service that simplifies the process of managing
domain names and routing requests to infrastructure.
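
As a small illustration (the hosted zone ID and domain are placeholders), the boto3 call below upserts an A record so that www.example.com resolves to 192.0.2.1:

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",      # hypothetical hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",                # create the record, or update it if it exists
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "192.0.2.1"}],
            },
        }]
    },
)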

13. How Does AWS Handle Disaster Recovery and Backup?

AWS offers various services for disaster recovery and backup. Amazon S3 is the most
commonly used service for backup storage and centralized management. Additionally, AWS supports
business continuity in the event of a disaster by replicating workloads across regions or from
on-premises environments into AWS.

14. What Is AWS Elastic Beanstalk, And How Does It Simplify Application Deployment?

AWS Elastic Beanstalk is an AWS managed service that simplifies application
deployment and management by automatically handling infrastructure provisioning. It allows
developers to focus completely on writing code. For example, you only need to upload your
code to deploy a web application; Elastic Beanstalk takes care of the rest of the underlying
infrastructure, such as provisioning EC2 instances and load balancing.
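
A hedged boto3 sketch of the "just upload your code" flow; the application name, S3 bundle location, and platform (solution stack) string are assumptions for illustration only.

import boto3

eb = boto3.client("elasticbeanstalk")

# Register the application and a version that points at a zipped source bundle in S3
eb.create_application(ApplicationName="demo-app")
eb.create_application_version(
    ApplicationName="demo-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-demo-bucket", "S3Key": "demo-app-v1.zip"},
)

# Elastic Beanstalk provisions the EC2 instances, load balancer, and scaling itself
eb.create_environment(
    ApplicationName="demo-app",
    EnvironmentName="demo-app-env",
    VersionLabel="v1",
    SolutionStackName="64bit Amazon Linux 2023 v4.3.0 running Python 3.11",  # example platform name
)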

15. Explain The Significance of AWS Organizations in Managing Multiple AWS Accounts.

AWS Organizations centrally manages multiple AWS accounts. It consolidates
billing, applies consistent policies across accounts, and facilitates sharing of resources. For
instance, you can use Organizations to enforce a policy that applies specific security settings
across all accounts, safeguarding a unified and secure AWS environment.

AWS Organizations is a free tool that helps users manage multiple AWS accounts by:

Centralizing management: Users can manage all accounts from one location, instead of switching
between them.

 Consolidating costs: Users can roll up billing to a single account for payment.
 Applying policies: Users can set policies for custom environments.
 Sharing resources: Users can share resources among accounts.
 Managing permissions: Users can manage permissions and roles for all users in one place.
 Strengthening security: Users can enhance security with unified policies.
 Improving compliance: Users can improve compliance with strengthened security and
compliance features.

Other benefits of AWS Organizations include: Working seamlessly with other AWS services,
Scalability, Robust governance and control mechanisms, and Cost optimization capabilities.
To create an organization, users can:

 Navigate to Security, Identity & Compliance in the AWS Management Console


 Select AWS Organizations
 Create and manage organizational units (OUs)
 Attach policies
 Users can also use AWS Control Tower to automate many of the steps required to build their
environment.

16. Describe The Difference Between Amazon S3 And EBS. (See Question 4.)

17. How Does AWS Lambda Work, And What Are Its Use Cases?

18. What Are Security Groups And NACLs In The Context Of AWS VPC?

Security groups and network access control lists (NACLs) are both tools in Amazon Web Services
(AWS) that control access to resources within a Virtual Private Cloud (VPC). They work together to
provide layered security for your cloud resources:

 Security groups: Control traffic at the instance level, allowing you to specify which traffic can
come to or go from an Amazon EC2 instance. Security groups perform stateful filtering.
 Network ACLs: Control traffic at the subnet level, allowing you to evaluate traffic entering
and exiting a subnet. Network ACLs perform stateless filtering.

Here are some other differences between security groups and NACLs:

 Rules: Network ACLs can be used to set both allow and deny rules, while security groups
only support allow rules.
 Default behaviour: A newly created custom network ACL denies all traffic until rules are added.
 Rule evaluation: In a security group, all rules are evaluated before deciding whether to allow traffic, while in a
network ACL, rules are evaluated in order of rule number.
 Non-modifiable rules: Each network ACL includes a non-modifiable rule that denies packets
that don't match any of the other numbered rules.
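
A short boto3 sketch of the two layers (all IDs are placeholders): the security group rule is stateful and only needs the inbound side, while the network ACL rule is stateless, numbered, and would need a matching outbound rule as well.

import boto3

ec2 = boto3.client("ec2")

# Security group (instance level, stateful): allow inbound HTTPS; return traffic is implicit
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# Network ACL (subnet level, stateless): rules evaluated by number, allow or deny
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100,
    Protocol="6",              # TCP
    RuleAction="allow",
    Egress=False,              # inbound rule; outbound must be allowed separately
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)
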
19. Explain The Purpose of AWS CloudFormation.

AWS CloudFormation is a service that helps developers and businesses create, provision, and
manage AWS and third-party resources:

 Create and provision resources: Use a template file to create and provision resources in an
orderly and predictable way.
 Manage resources: Manage resources across accounts and regions, and automate state
management and rollbacks.
 Update resources: Update resources in a simple, declarative style.
 Use AWS products: Leverage AWS products like Amazon EC2, Amazon Elastic Block Store,
and Amazon SNS to build scalable, cost-effective applications.

Here are some features of AWS CloudFormation:

 Templates: Use a template to describe resources and their dependencies. You can create,
update, and delete an entire stack as a single unit.
 JSON or YAML format: Write resources in text files using JSON or YAML format.
 Mappings: Use mappings to define properties and values based on input parameters.
 UpdatePolicy: Use the UpdatePolicy attribute to manage and replace updates of instances in
an Auto Scaling group.

AWS CloudFormation is an infrastructure as code (IaC) service that allows you to define and
manage your AWS infrastructure as code.
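
As a sketch, the template below (a single hypothetical S3 bucket, written in YAML and submitted with boto3) shows the declarative style: the stack is created, updated, and deleted as one unit.

import boto3

template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-cfn-demo-bucket-12345   # bucket names must be globally unique
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="demo-stack", TemplateBody=template)

# Later the whole stack can be changed or removed as a single unit:
# cfn.update_stack(StackName="demo-stack", TemplateBody=new_template)
# cfn.delete_stack(StackName="demo-stack")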

20. How Do You Monitor and Log AWS Resources?

21. Discuss The Various Storage Classes In Amazon S3.

Amazon S3 offers storage classes for different needs. The Standard class
provides low-latency access, Intelligent-Tiering optimizes costs by moving data
between access tiers, Glacier is designed for archival purposes, offering retrieval times that span
from minutes to hours, and Glacier Deep Archive offers the lowest cost for long-
term archival. (A short example of selecting storage classes follows the list below.)

1. S3 Standard: The default storage class for Amazon S3 is S3 Standard:

S3 Standard is a good choice for data that is accessed frequently, such as for websites, mobile
gaming, and big data analytics. It's also recommended for data that is critical and non-reproducible

Features

High durability, availability, and performance, with low latency and high throughput

 Use cases: Ideal for a variety of use cases, including cloud applications, websites, content
distribution, mobile and gaming applications, and big data analytics
 Durability: 99.999999999% (11 nines) durability, meaning that for every 100 billion objects stored,
on average only one is at risk of being lost per year
 Availability: 99.99% availability, which corresponds to roughly one hour of unavailability per
10,000 hours
 Cost: The most expensive storage class due to its general capabilities
 Default: The storage class used when a user doesn't specify a storage class when uploading
an object
 Amazon S3 offers a range of storage classes designed for different use cases, each with a
different offering at different prices.
2. S3 Standard-IA: Designed for data that is infrequently accessed and long-lived.

3. S3 One Zone-IA
 Amazon announced in April 2018 a new storage class, S3 One Zone-Infrequent Access, which
complements the plain IA option by lowering cost through storing data in only one AZ,
designed for 99.999999999% durability
 S3 One Zone-IA offers the same high durability (within its single Availability Zone), high
throughput, and low latency as Amazon S3 Standard and S3 Standard-IA
4. S3 Intelligent Tiering:
 Automatically moves data to the most cost-effective access tier. This is a good option if you
don't know how your data will be accessed.
 Frequent, Infrequent, and Archive Instant Access tiers have the same low-latency and high-
throughput performance of S3 Standard
 The Infrequent Access tier saves up to 40% on storage costs
 The Archive Instant Access tier saves up to 68% on storage costs
 Opt-in asynchronous archive capabilities for objects that become rarely accessed
 Archive Access and Deep Archive Access tiers have the same performance as S3 Glacier
Flexible Retrieval and S3 Glacier Deep Archive and save up to 95% for rarely accessed
objects
 Designed for durability of 99.999999999% of objects across multiple Availability Zones and
for 99.9% availability over a given year
 No operational overhead, no lifecycle charges, no retrieval charges, and no minimum
storage duration

5. Glacier: Amazon S3 Glacier is a cloud storage service that's designed for data archiving and
backup. It's a lower-cost option for storing data that doesn't need to be accessed frequently
or quickly.

Here are some features of Amazon S3 Glacier:

 Storage classes: There are multiple storage classes to choose from, each optimized for
different access patterns and storage durations:
 S3 Glacier Instant Retrieval: This class is for data that needs immediate access, like medical
images or news media assets. It offers milliseconds retrieval and can save up to 68% on
storage costs compared to S3 Standard-Infrequent Access.
 S3 Glacier Flexible Retrieval: This class is for data that doesn't need immediate access, but
you need the flexibility to retrieve large sets of data at no cost. It offers retrieval in minutes
or free bulk retrievals in 5-12 hours.

Retrieval times

Retrieval times range from a few minutes to a few hours, depending on the speed option you
choose:

 Expedited: The fastest option, with retrieval in a few minutes. It costs $0.03 per GB and
$0.01 per request.
 Standard: Retrieval in 3-5 hours. It costs $0.01 per GB and $0.05 per 1000 requests.
 Bulk: The slowest option, with retrieval in 5-12 hours. It costs $0.0025 per GB and $0.025
per 1,000 requests.
 Data durability: All S3 Glacier storage classes are designed for 99.999999999% (11 nines) of
data durability.

 S3 Glacier Deep Archive Storage Class: The new Glacier Deep Archive storage class is
designed to provide durable and secure long-term storage for large amounts of data at a
price that is competitive with off-premises tape archival services. Data is stored across 3 or
more AWS Availability Zones and can be retrieved in 12 hours or less. You no longer need to
deal with expensive and finicky tape drives, arrange for off-premises storage, or worry about
migrating data to newer generations of media.

6. Amazon S3 Reduced Redundancy Storage (RRS): RRS is a storage class in Amazon S3 that allows
users to store noncritical data at a lower cost and with lower durability than Amazon S3 Standard
Storage:
 Cost: RRS costs 30% less than Amazon S3 Standard Storage.
 Durability: RRS has a durability of 99.99%, while Amazon S3 Standard Storage has a
durability of 99.999999999%.
 Storage: RRS stores objects on multiple devices across multiple facilities.
 Use cases: RRS is ideal for storing data that can be easily reproduced, such as thumbnails,
transcoded media, or processed data. It's also a good option for distributing or sharing
content that's stored elsewhere.

Here are some other things to know about RRS:

On average, if you store 10,000 objects in RRS, you're at risk of losing only one of them within a year.

You can use S3 Browser to manage automated site backups.
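
As referenced above, here is a brief boto3 sketch (bucket and keys are placeholders) of choosing a storage class at upload time and adding a lifecycle rule that transitions older objects to Glacier and Deep Archive.

import boto3

s3 = boto3.client("s3")

# Upload directly into a cheaper class for infrequently accessed data
s3.put_object(Bucket="my-demo-bucket", Key="archive/2023-report.pdf",
              Body=b"...", StorageClass="STANDARD_IA")

# Lifecycle rule: Glacier after 90 days, Deep Archive after 365 days
s3.put_bucket_lifecycle_configuration(
    Bucket="my-demo-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-objects",
            "Status": "Enabled",
            "Filter": {"Prefix": "archive/"},
            "Transitions": [
                {"Days": 90,  "StorageClass": "GLACIER"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }]
    },
)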

22. What Is AWS OpsWorks, And How Does It Work?

AWS OpsWorks is a configuration management service that helps you:

 Automate the configuration, deployment, and management of servers


 Provision AWS resources
 Monitor the health of applications
 Deploy and monitor applications in stacks

AWS OpsWorks offers three services:

 OpsWorks for Chef Automate
 OpsWorks for Puppet Enterprise
 OpsWorks Stacks

Here are some features of AWS OpsWorks:


 Stacks: You can create stacks to manage cloud resources in layers. A layer is a set of EC2
instances that serve a specific purpose, such as hosting a database server or serving
applications.
 Recipes: You can add functionality to a layer by implementing recipes to handle tasks like
deploying apps or installing packages. You can package your custom recipes and related files
in cookbooks and store them in a repository like Amazon S3 or Git.
 Lifecycle events: Each layer has five lifecycle events, and each event has a set of recipes
associated with it. When an event occurs on a layer's instance, AWS OpsWorks Stacks
automatically runs the appropriate recipes.
 Development stacks: You can use a development stack to implement new features or fix
bugs. It's essentially a prototype production stack, but it usually doesn't have to handle the
same load as the production stack.

23. Explain AWS Key Management Service (KMS) And Its Use Cases.

AWS Key Management Service (KMS) is a managed service that helps organizations create, control,
and manage cryptographic keys for encrypting and protecting data:

 Key creation: You can create new keys whenever you want.

 Key control: You can control who can manage keys separately from who can use them.

 Key lifecycle: You can manage the lifecycle of your keys, including rotation and deletion.

 Key auditing: You can audit who used which keys, on which resources, and when.

 Key integration: KMS integrates with other AWS services, making it easier to encrypt data
and control access to the keys that decrypt it.

 Key protection: KMS protects keys with FIPS 140 Level 3 validated hardware security
modules (HSMs).

 Key use: You can use KMS to encrypt and decrypt data, sign and verify messages, generate
data keys and key pairs, and generate HMAC codes.

Here are some use cases for AWS KMS:

 Server-side encryption: You can use KMS keys to perform server-side encryption on an
Amazon Elastic Block Store (Amazon EBS) volume.

 Client-side encryption: You can use envelope encryption to protect content with KMS.

You can create customer managed keys or use AWS owned keys. Customer managed keys are
exclusively controlled by the customer, while AWS owned keys are controlled by the service and
viewable by the customer.
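
A small boto3 sketch (the key alias is hypothetical) of the basic KMS operations mentioned above: encrypting and decrypting a small secret, and generating a data key for envelope encryption.

import boto3

kms = boto3.client("kms")

# Encrypt a small secret directly under a customer managed key
ciphertext = kms.encrypt(
    KeyId="alias/demo-app-key",        # assumed key alias
    Plaintext=b"database-password",
)["CiphertextBlob"]

# Decrypt later; KMS enforces the caller's IAM and key-policy permissions
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]

# For larger payloads, generate a data key and perform envelope encryption locally
data_key = kms.generate_data_key(KeyId="alias/demo-app-key", KeySpec="AES_256")
# data_key["Plaintext"] encrypts the data; store data_key["CiphertextBlob"] alongside it
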
24. How Does AWS Support Hybrid Cloud Architectures?

AWS supports hybrid cloud architectures through services such as AWS Direct Connect, AWS VPN, and
AWS Outposts. Direct Connect establishes a dedicated network connection, VPN enables
secure communication over the internet, and Outposts extends AWS infrastructure to
on-premises data centers, providing a seamless hybrid solution.
