Aws Interview


1) Have you worked on containers?

Containers are a form of lightweight virtualization, heavier than a chroot but
lighter than a hypervisor. They provide isolation among processes while sharing
the host machine's kernel, using the cgroups functionality within the kernel.
Container formats differ among themselves, though: some provide a more VM-like
experience, while others containerize only a single application.

LXC containers are the most VM-like and the most heavyweight, while Docker used to
be lighter weight and was initially designed for single-application containers. In
more recent releases, however, Docker introduced whole-machine containerization
features, so it can now be used both ways. There is also rkt from CoreOS and LXD
from Canonical, which builds upon LXC.
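
As a rough illustration of the two styles, a hedged sketch (the image, container
and template names are only examples):

docker run -d --name web -p 80:80 nginx   # single-application Docker container
lxc-create -n myvm -t ubuntu              # more VM-like LXC system container
lxc-start -n myvm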

2) What is Kubernetes? Explain


It is a massively scalable tool for managing containers, made by Google. Google
uses it internally on huge deployments, which is why it is arguably the best option
for production use of containers. It supports self-healing by restarting
non-responsive containers, it packs containers in a way that lets them use fewer
resources, and it has many other great features.
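
To make the self-healing point concrete, a minimal sketch with kubectl (the
deployment name and image are illustrative):

kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3
kubectl get pods    # a crashed or deleted pod is replaced automatically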

3) What is the function of a CI (Continuous Integration) server?


A CI server's function is to continuously integrate all changes being made and
committed to the repository by different developers and to check for compile
errors. It needs to build the code several times a day, preferably after every
commit, so that if a breakage happens it can detect which commit caused it.

Note: Available and popular CI tools include Jenkins, TeamCity, CircleCI,
Hudson, Buildbot, etc.

4) What is Continuous Delivery?


It is the practice of delivering software for testing as soon as it is built by the
CI (Continuous Integration) server. It requires heavy use of a version control
system so that the latest build is always available to developers and testers
alike.

5) What is Vagrant and what is it used for?


Vagrant is a tool that can create and manage virtualized (or containerized)
environments for testing and developing software. At first, Vagrant used VirtualBox
as the hypervisor for virtual environments, but now it also supports KVM.
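
A minimal sketch of the usual Vagrant workflow (the box name is just an example):

vagrant init ubuntu/trusty64   # writes a Vagrantfile for the chosen box
vagrant up                     # creates and boots the environment
vagrant ssh                    # log into it
vagrant destroy                # tear it down when finished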

6) Have you ever used any scripting language?


As far as scripting languages go, the simpler the better. In fact, the language
itself isn't as important as understanding design patterns and development
paradigms such as procedural, object-oriented, or functional programming.

Currently, several scripting languages are available, so the question arises: what
is the most appropriate language for the DevOps approach? It depends on the context
of the project and the tools used; for example, if Ansible is used it is good to
have knowledge of Python, and if Chef is used, Ruby.

7) What is the role of a configuration management tool in DevOps?


Automation plays an essential role in server configuration management. For that
purpose we use CM tools; they store information about versions and builds of the
software and testware, and provide traceability between them.

8) What is the purpose of CM tools and which one have you used?
Configuration Management tools' purpose is to automate the deployment and
configuration of software on a large number of servers. Most CM tools use an agent
architecture, which means that every machine being managed needs to have an agent
installed. My favorite tool is one that uses an agentless architecture – Ansible.
It only requires SSH and Python. And if the raw module is used, not even Python is
required, because it can run raw bash commands. Other available and popular CM
tools are Puppet, Chef and SaltStack.
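
For illustration, a hedged sketch of Ansible's agentless model (the inventory file
and host group names are assumptions):

ansible webservers -i inventory -m ping              # needs only SSH and Python on the target
ansible webservers -i inventory -m raw -a "uptime"   # the raw module needs nothing but SSH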

9) What is OpenStack?
OpenStack is often called a Cloud Operating System, and that is not far from the
truth. It is a complete environment for deploying IaaS, which gives you the
possibility of building your own cloud similar to AWS. It is highly modular and
consists of many sub-projects, so you can pick and choose which functionality you
need. OpenStack distributions are available from Red Hat, Mirantis, HPE, Oracle,
Canonical and many others. It is a completely open source project, but some vendors
make proprietary distributions.

10) Classify Cloud Platforms by category?


Cloud Computing software can be classified as Software as a Service or SaaS,
Infrastructure as a Service or IaaS, and Platform as a Service or PaaS.

SaaS is a piece of software that runs over the network on a remote server and has
only its user interface exposed to users, usually in a web browser. An example is
salesforce.com.

Infrastructure as a Service is a cloud environment that exposes a VM to the user to
use as an entire OS or container, where you can install anything you would install
on your own server. Examples of this would be OpenStack, AWS and Eucalyptus.

PaaS allows users to deploy their own application on a preinstalled platform,
usually a framework of application servers and a suite of developer tools. Examples
of this would be OpenShift and Heroku.

11) What are the easiest ways to build a small cloud?


VMfest is one of the options for making an IaaS cloud from VirtualBox VMs in no
time. If you want a lightweight PaaS, there is Dokku, which is basically a bash
script that makes a PaaS out of Docker containers.

12) What is AWS (Amazon Web Services)? Did you get a chance to work on Amazon tools?
AWS provides a set of flexible services designed to enable companies to create and
deliver products with greater speed and reliability using AWS and DevOps practices.
These services simplify provisioning and infrastructure management, application
code deployment, automated software release processes, and monitoring of
application and infrastructure performance. Amazon offers tools like AWS
CodeCommit, AWS CodeDeploy, AWS CodePipeline, etc., that help make DevOps easier.

13) What is EC2?


Amazon EC2 Container Service (ECS) is a highly scalable, high-performance container
management service that supports Docker containers and allows you to easily run
applications on a cluster managed by Amazon EC2 instances.

The EC2 service is inseparable from the concept of the Amazon Machine Image (AMI).
The AMI is, indeed, the image of a virtual machine that will be executed. EC2 is
based on Xen virtualization, which is why it is quite easy to move Xen servers to
EC2.

14) Do you find any advantage of using a NoSQL database over an RDBMS?
Typical web applications are built with a three-tier architecture. To carry the
load, more web servers are simply added behind a load balancer to support more
users. The ability to scale out is a key principle in the world of cloud computing,
and it is more and more important in environments where VM instances can be easily
added or removed to meet demand.

However, when it comes to the data layer, relational databases (RDBMS) do not scale
out as simply and do not provide a flexible data model. Managing more users means
adding larger servers, and large servers are very complex, proprietary, and
disproportionately expensive, in contrast to the low-cost "commodity hardware"
architectures used in the cloud. Organizations are beginning to see performance
issues with their relational databases for existing or new applications. Especially
as the number of users increases, they realize the need for a faster and more
flexible data layer. This is the time to begin to assess and adopt NoSQL databases
in their web applications.

15) What are the main difficulties when migrating from SQL to NoSQL?


Each record in a relational database conforms to a schema – with a fixed number of
fields (columns), each having a specified purpose and data type. Every record is
the same. The data is normalized across several tables. The advantage is that there
is less duplicate data in the database. The downside is that a schema change means
performing several "ALTER TABLE" statements, which require expensive locks on
multiple tables simultaneously to ensure that the change does not leave the
database in an inconsistent state.

1) Explain what is DevOps?


It is a newly emerging term in the IT field, which is nothing but a practice that
emphasizes the collaboration and communication of both software developers and
other information-technology (IT) professionals. It focuses on delivering software
products faster and lowering the failure rate of releases.
2) Mention what are the key aspects or principles behind DevOps?
The key aspects or principles behind DevOps are
Infrastructure as code
Continuous deployment
Automation
Monitoring
Security
3) What are the core operations of DevOps with application development and with
infrastructure?
The core operations of DevOps with
Application development
Code building
Code coverage
Unit testing
Packaging
Deployment
With infrastructure
Provisioning
Configuration
Orchestration
Deployment
4) Explain how "Infrastructure as code" is processed or executed in AWS?
In AWS,
The code for the infrastructure will be in simple JSON format
This JSON code will be organized into files called templates
These templates can be deployed on AWS and then managed as stacks
Later, the CloudFormation service will do the creating, deleting, updating, etc.
operations on the stack
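
As a minimal sketch of such a template (the resource and stack names are
illustrative), a template containing a single S3 bucket could look like this, and
it is then deployed as a stack with the AWS CLI:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "DemoBucket": { "Type": "AWS::S3::Bucket" }
  }
}

aws cloudformation create-stack --stack-name demo --template-body file://template.json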
5) Explain which scripting language is most important for a DevOps engineer?
A simpler scripting language will be better for a DevOps engineer. Python seems to
be very popular.
6) Explain how DevOps is helpful to developers?
DevOps can be helpful to developers for fixing bugs and implementing new features
quickly. It also helps to achieve clearer communication between the team members.
7) List out some popular tools for DevOps?
Some of the popular tools for DevOps are
Jenkins
Nagios
Monit
ELK (Elasticsearch, Logstash, Kibana)
Docker
Ansible
Git
Collectd/Collectl
8) Mention at what instance have you used SSH?
I have used SSH to log into a remote machine and work on the command line. Besides
this, I have also used it to tunnel into the system in order to facilitate secure
encrypted communications between two untrusted hosts over an insecure network.
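A hedged example of both uses (host names and ports are placeholders):

ssh user@remote-host                        # log into a remote machine
ssh -L 8080:localhost:80 user@remote-host   # tunnel local port 8080 to port 80 on the remote side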
9) Explain how would you handle revision (version) control?
My approach to handling revision control would be to post the code on SourceForge
or GitHub so everyone can view it. Also, I will post the checklist from the last
revision to make sure that any unsolved issues are resolved.
10) Mention what are the types of HTTP requests?
The types of HTTP requests are
GET
HEAD
PUT
POST
PATCH
DELETE
TRACE
CONNECT
OPTIONS
11) Explain what would you check if a Linux build server suddenly starts getting
slow?
If a Linux build server suddenly starts getting slow, I would check for the
following three things:
Application level troubleshooting
Check the application log file or application server log file, system performance
issues, web server logs – check HTTP, Tomcat logs, etc., or check JBoss or WebLogic
logs to see if the application server response/receive time is the cause of the
slowness, and look for memory leaks in any application.
System level troubleshooting
RAM related issues, disk I/O read/write issues, disk space related issues, etc.
Dependent services troubleshooting
Antivirus related issues, firewall related issues, network issues, SMTP server
response time issues, etc.
12) How would you know whether your video card can run Unity?
When you use the command
/usr/lib/nux/unity_support_test -p
it will give detailed output about Unity's requirements, and if they are met, your
video card can run Unity.
13) Explain how to enable startup sound in Ubuntu?
To enable the startup sound
Click the control gear and then click on Startup Applications
In the Startup Application Preferences window, click Add to add an entry
Then fill in the information such as Name, Command and Comment, using the command
/usr/bin/canberra-gtk-play --id="desktop-login" --description="play login sound"
Log out and then log in once you are done
14) What is the quicker way to open an Ubuntu terminal in a particular directory?
To open an Ubuntu terminal in a particular directory, you can use a custom keyboard
shortcut. To do that, in the command field of a new custom keyboard shortcut, type
gnome-terminal --working-directory=/path/to/dir
You can also open a terminal anywhere with the shortcut key Ctrl+Alt+T.
15) Explain how you can get the current color of the current screen on the Ubuntu
desktop?
You can open the background image in The GIMP (image editor) and then use the
dropper tool to select the color at a specific point. It gives you the RGB value
of the color at that point.
16) Explain how you create launchers on a desktop in Ubuntu?
To create launchers on a desktop in Ubuntu, press ALT+F2 and then type
"gnome-desktop-item-edit --create-new ~/Desktop"; it will launch the old GUI dialog
and create a launcher on your desktop.
17) Explain what is Memcached?
Memcached is a free and open source, high-performance, distributed memory object
caching system. The primary objective of Memcached is to enhance the response time
for data that can otherwise be recovered or constructed from some other source or
database. It is used to avoid the need to query an SQL database or another source
repetitively to fetch data for concurrent requests.
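As a quick illustration, a hedged sketch using the plain Memcached text protocol
(assuming a server on the default port 11211; set takes the key, flags, expiry in
seconds and value length in bytes):

telnet localhost 11211
set greeting 0 900 5
hello
STORED
get greeting
VALUE greeting 0 5
hello
END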
Memcached can be used for
• Social Networking -> Profile Caching
• Content Aggregation -> HTML/ Page Caching
• Ad targeting -> Cookie/profile tracking
• Relationship -> Session caching
• E-commerce -> Session and HTML caching
• Location-based services -> Data-base query scaling
• Gaming and entertainment -> Session caching
Memcached helps to
• Speed up application processes
• Determine what to store and what not to
• Reduce the number of retrieval requests to the database
• Cut down the I/O (Input/Output) access (hard disk)
Drawbacks of Memcached are
• It is not a persistent data store
• It is not a database
• It is not application-specific
• It cannot cache large objects
18) Mention some important features of Memcached?
Important features of Memcached include
• CAS Tokens: A CAS token is attached to any object retrieved from the cache. You
can use that token to save your updated object.
• Callbacks: They simplify the code
• getDelayed: It reduces the delay time of your script that is waiting for results
to come back from the server
• Binary protocol: You can use the binary protocol instead of ASCII with the newer
client
• Igbinary: Previously, the client always performed serialization of values with
complex data, but with Memcached you can use the igbinary option.
19) Explain whether it is possible to share a single instance of a Memcache between
multiple projects?
Yes, it is possible to share a single instance of Memcache between multiple
projects. Memcache is a memory store space, and you can run memcache on one or more
servers. You can also configure your client to speak to a particular set of
instances. So, you can run two different Memcache processes on the same host and
yet they are completely independent. However, if you have partitioned your data, it
becomes necessary to know which instance to get the data from or to put it into.
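
For example, two independent Memcached processes could be started on one host like
this (the ports and memory sizes are illustrative):

memcached -d -p 11211 -m 64   # instance used by project A
memcached -d -p 11212 -m 64   # instance used by project B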





Why AWS Architect Interview Questions?
For the 6th straight year, Gartner placed Amazon Web Services in the "Leaders"
quadrant. Forbes also reported that AWS Certified Solutions Architect leads the 15
top-paying IT certifications. Undoubtedly, the AWS Solution Architect position is
one of the most sought after amongst IT jobs.
We at Edureka are committed to helping you upgrade your career in sync with
industry requirements. That’s why we have created a list of AWS Architect Interview
questions and answers that will most probably get asked during your interview. If
you’ve attended an AWS Architect interview or have additional questions beyond what
we have covered, we encourage you to add them in the comments section below.
In the meantime, you can maximize the Cloud computing career opportunities that are
sure to come your way by taking AWS Architect online training with Edureka. You can
write the AWS Architect certification exam after the course at edureka.
1. I have some private servers on my premises, also I have distributed some of my
workload on the public cloud, what is this architecture called?
 Virtual Private Network
 Private Cloud
 Virtual Private Cloud
 Hybrid Cloud
Answer D.
Explanation: This type of architecture would be a hybrid cloud. Why? Because we are
using both the public cloud and your on-premises servers, i.e. the private cloud.
To make this hybrid architecture easy to use, wouldn't it be better if your private
and public clouds were all on the same (virtual) network? This is established by
including your public cloud servers in a virtual private cloud and connecting this
virtual cloud with your on-premises servers using a VPN (Virtual Private Network).
2. What does the following command do with respect to the Amazon EC2 security
groups?
ec2-create-group CreateSecurityGroup
 Groups the user created security groups into a new group for easy access.
 Creates a new security group for use with your account.
 Creates a new group inside the security group.
 Creates a new rule inside the security group.
Answer B.
Explanation: A security group is just like a firewall; it controls the traffic into
and out of your instance – in AWS terms, the inbound and outbound traffic. The
command mentioned is pretty straightforward: it says create security group, and it
does just that. Moving along, once your security group is created, you can add
different rules to it. For example, if you have an RDS instance, then to access it
you have to add the public IP address of the machine from which you want to access
the instance to its security group (see the sketch below).
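A hedged sketch of adding such a rule with the AWS CLI (the group ID, port and CIDR
are placeholders):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3306 --cidr 203.0.113.10/32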
3. You have a video transcoding application. The videos are processed according to
a queue. If the processing of a video is interrupted on one instance, it is resumed
on another instance. Currently there is a huge backlog of videos which needs to be
processed; for this you need to add more instances, but you need these instances
only until your backlog is reduced. Which of these would be an efficient way to do
it?
You should be using an On-Demand instance. Why? First of all, the workload has to
be processed now, meaning it is urgent; secondly, you don't need the instances once
your backlog is cleared, therefore Reserved Instances are out of the picture; and
since the work is urgent, you cannot stop the work on your instance just because
the Spot price spiked, therefore Spot Instances shall not be used either. Hence
On-Demand instances are the right choice in this case.
4. You have a distributed application that periodically processes large volumes of
data across multiple Amazon EC2 Instances. The application is designed to recover
gracefully from Amazon EC2 instance failures. You are required to accomplish this
task in the most cost effective way.
Which of the following will meet your requirements?
A. Spot Instances
B. Reserved instances
C. Dedicated instances
D. On-Demand instances
Answer: A
Explanation: Since the work we are addressing here is not continuous, a Reserved
Instance would be idle at times, and the same goes for an On-Demand instance. It
also does not make sense to launch an On-Demand instance whenever work comes up,
since it is expensive. Hence Spot Instances will be the right fit because of their
low rates and no long-term commitments.
5. How is stopping and terminating an instance different from each other?
Starting, stopping and terminating are the three states of an EC2 instance; let's
discuss them in detail:
• Stopping and starting an instance: When an instance is stopped, the instance
performs a normal shutdown and then transitions to a stopped state. All of its
Amazon EBS volumes remain attached, and you can start the instance again at a later
time. You are not charged for additional instance hours while the instance is in a
stopped state.
• Terminating an instance: When an instance is terminated, the instance performs a
normal shutdown, and then the attached Amazon EBS volumes are deleted unless the
volume's deleteOnTermination attribute is set to false. The instance itself is also
deleted, and you can't start the instance again at a later time.
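The two operations map to separate API calls; a minimal sketch with the AWS CLI
(the instance ID is a placeholder):

aws ec2 stop-instances --instance-ids i-0123456789abcdef0        # EBS volumes stay attached
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0   # instance is deleted (EBS volumes too, unless deleteOnTermination is false)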
6. If I want my instance to run on a single-tenant hardware, which value do I have
to set the instance’s tenancy attribute to?
A. Dedicated
B. Isolated
C. One
D. Reserved
Answer A.
Explanation: The instance tenancy attribute should be set to Dedicated. The rest of
the values are invalid.
7. When will you incur costs with an Elastic IP address (EIP)?
A. When an EIP is allocated.
B. When it is allocated and associated with a running instance.
C. When it is allocated and associated with a stopped instance.
D. Costs are incurred regardless of whether the EIP is associated with a running
instance.
Answer C.
Explanation: You are not charged if only one Elastic IP address is attached to your
running instance. But you do get charged in the following conditions:
• When you use more than one Elastic IP with your instance.
• When your Elastic IP is attached to a stopped instance.
• When your Elastic IP is not attached to any instance.
8. How is a Spot instance different from an On-Demand instance or Reserved
Instance?
First of all, let’s understand that Spot Instance, On-Demand instance and Reserved
Instances are all models for pricing. Moving along, spot instances provide the
ability for customers to purchase compute capacity with no upfront commitment, at
hourly rates usually lower than the On-Demand rate in each region. Spot instances
are just like bidding, the bidding price is called Spot Price. The Spot Price
fluctuates based on supply and demand for instances, but customers will never pay
more than the maximum price they have specified. If the Spot Price moves higher
than a customer’s maximum price, the customer’s EC2 instance will be shut down
automatically. But the reverse is not true, if the Spot prices come down again,
your EC2 instance will not be launched automatically, one has to do that manually.
In Spot and On-Demand instances, there is no commitment for the duration from the
user's side; however, in Reserved Instances one has to stick to the time period
that was chosen.
9. Are the Reserved Instances available for Multi-AZ Deployments?
A. Multi-AZ Deployments are only available for Cluster Compute instances types
B. Available for all instance types
C. Only available for M3 instance types
D. Not Available for Reserved Instances
Answer B.
Explanation: Reserved Instances is a pricing model, which is available for all
instance types in EC2.
10. How to use the processor state control feature available on the c4.8xlarge
instance?
The processor state control consists of 2 states:
• The C state – Sleep state varying from c0 to c6. C6 being the deepest sleep
state for a processor
• The P state – Performance state p0 being the highest and p15 being the lowest
possible frequency.
Now, why the C state and P state? Processors have cores, and these cores need
thermal headroom to boost their performance. Since all the cores are on the
processor, the temperature should be kept at an optimal level so that all the cores
can perform at their highest.
Now how will these states help with that? If a core is put into a sleep state, it
will reduce the overall temperature of the processor, and hence the other cores can
perform better. The same can be synchronized across cores, so that the processor
can boost as many cores as it can by timely putting the other cores to sleep, and
thus get an overall performance boost.
Concluding, the C and P state can be customized in some EC2 instances like the
c4.8xlarge instance and thus you can customize the processor according to your
workload.
11. What kind of network performance parameters can you expect when you launch
instances in cluster placement group?
The network performance depends on the instance type and network performance
specification; if launched in a placement group you can expect up to
• 10 Gbps in a single flow,
• 20 Gbps in multi-flow, i.e. full duplex
• Network traffic outside the placement group will be limited to 5 Gbps (full
duplex).
12. To deploy a 4 node cluster of Hadoop in AWS which instance type can be used?
First let's understand what actually happens in a Hadoop cluster: the Hadoop
cluster follows a master-slave concept. The master machine processes all the data,
while slave machines store the data and act as data nodes. Since all the storage
happens at the slaves, a higher-capacity hard disk would be recommended, and since
the master does all the processing, a higher RAM and a much better CPU are
required. Therefore, you can select the configuration of your machine depending on
your workload. For example, in this case a c4.8xlarge would be preferred for the
master machine, whereas for the slave machines we can select i2.large instances. If
you don't want to deal with configuring your instances and installing a Hadoop
cluster manually, you can straight away launch an Amazon EMR (Elastic MapReduce)
instance, which automatically configures the servers for you. You dump the data to
be processed into S3; EMR picks it up from there, processes it, and dumps it back
into S3.
13. Where do you think an AMI fits, when you are designing an architecture for a
solution?
AMIs (Amazon Machine Images) are like templates of virtual machines, and an
instance is derived from an AMI. AWS offers pre-baked AMIs which you can choose
while you are launching an instance; some AMIs are not free and can be bought from
the AWS Marketplace. You can also choose to create your own custom AMI, which would
help you save space on AWS. For example, if you don't need a particular set of
software on your installation, you can customize your AMI to exclude it. This makes
it cost-efficient, since you are removing the unwanted things.
14. How do you choose an Availability Zone?
Let’s understand this through an example, consider there’s a company which has user
base in India as well as in the US.
Let us see how we will choose the region for this use case:

[Figure omitted: comparison of the Mumbai and North Virginia regions on hourly
pricing and latency]

So, with reference to the figure above, the regions to choose between are Mumbai
and North Virginia. Now let us first compare the pricing: you have hourly prices,
which can be converted to your per month figure. Here North Virginia emerges as a
winner. But, pricing cannot be the only parameter to consider. Performance should
also be kept in mind hence, let’s look at latency as well. Latency basically is the
time that a server takes to respond to your requests i.e the response time. North
Virginia wins again!
So concluding, North Virginia should be chosen for this use case.
15. Is one Elastic IP address enough for every instance that I have running?
Depends! Every instance comes with its own private and public address. The private
address is associated exclusively with the instance and is returned to Amazon EC2
only when it is stopped or terminated. Similarly, the public address is associated
exclusively with the instance until it is stopped or terminated. However, this can
be replaced by the Elastic IP address, which stays with the instance as long as the
user doesn’t manually detach it. But what if you are hosting multiple websites on
your EC2 server, in that case you may require more than one Elastic IP address.
16. What are the best practices for Security in Amazon EC2?
There are several best practices to secure Amazon EC2. A few of them are given
below:
• Use AWS Identity and Access Management (IAM) to control access to your AWS
resources.
• Restrict access by only allowing trusted hosts or networks to access ports on
your instance.
• Review the rules in your security groups regularly, and ensure that you apply
the principle of least privilege – only open up the permissions that you require.
• Disable password-based logins for instances launched from your AMI. Passwords
can be found or cracked, and are a security risk.
17. You need to configure an Amazon S3 bucket to serve static assets for your
public-facing web application. Which method will ensure that all objects uploaded
to the bucket are set to public read?
A. Set permissions on the object to public read during upload.
B. Configure the bucket policy to set all objects to public read.
C. Use AWS Identity and Access Management roles to set the bucket to public
read.
D. Amazon S3 objects default to public read, so no action is needed.
Answer B.
Explanation: Rather than making changes to every object, it is better to set the
policy for the whole bucket. IAM is used to give more granular permissions, and
since this is a public website, every object needs to be publicly readable, which a
bucket policy applies in one step (a hedged sketch of such a policy follows).
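A hedged sketch of such a bucket policy (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-bucket/*"
  }]
}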
18. A customer wants to leverage Amazon Simple Storage Service (S3) and Amazon
Glacier as part of their backup and archive infrastructure. The customer plans to
use third-party software to support this integration. Which approach will limit the
access of the third party software to only the Amazon S3 bucket named “company-
backup”?
A. A custom bucket policy limited to the Amazon S3 API in the Amazon Glacier
archive "company-backup"
B. A custom bucket policy limited to the Amazon S3 API in “company-backup”
C. A custom IAM user policy limited to the Amazon S3 API for the Amazon Glacier
archive “company-backup”.
D. A custom IAM user policy limited to the Amazon S3 API in “company-backup”.
Answer D.
Explanation: Taking a cue from the previous question, this use case involves more
granular permissions, hence IAM would be used here.
19. Can S3 be used with EC2 instances, and if yes, how?
Yes, it can be used for instances with root devices backed by local instance
storage. By using Amazon S3, developers have access to the same highly scalable,
reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its
own global network of web sites. In order to execute systems in the Amazon EC2
environment, developers use the tools provided to load their Amazon Machine Images
(AMIs) into Amazon S3 and to move them between Amazon S3 and Amazon EC2.
Another use case could be websites hosted on EC2 loading their static content from
S3.
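For example, static assets could be pushed to a bucket with the AWS CLI (the bucket
name and path are placeholders):

aws s3 cp ./static s3://example-bucket/static --recursive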
20. A customer implemented AWS Storage Gateway with a gateway-cached volume at
their main office. An event takes the link between the main and branch office
offline. Which methods will enable the branch office to access their data?
A. Restore by implementing a lifecycle policy on the Amazon S3 bucket.
B. Make an Amazon Glacier Restore API call to load the files into another Amazon
S3 bucket within four to six hours.
C. Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from
a gateway snapshot.
D. Create an Amazon EBS volume from a gateway snapshot, and mount it to an
Amazon EC2 instance.
Answer C.
Explanation: The fastest way to do it would be launching a new storage gateway
instance. Why? Since time is the key factor that drives every business,
troubleshooting the broken link would take more time; instead, we can just restore
the previous working state of the storage gateway on a new instance.
21. When you need to move data over long distances using the internet, for instance
across countries or continents to your Amazon S3 bucket, which method or service
will you use?
A. Amazon Glacier
B. Amazon CloudFront
C. Amazon Transfer Acceleration
D. Amazon Snowball
Answer C.
Explanation: You would not use Snowball, because for now the Snowball service does
not support cross-region data transfer, and since we are transferring across
countries, Snowball cannot be used. Transfer Acceleration shall be the right choice
here, as it speeds up your data transfer by up to 300% compared to a normal
transfer, using optimized network paths and Amazon's content delivery network.
22. How can you speed up data transfer in Snowball?
The data transfer can be increased in the following ways:
• By performing multiple copy operations at one time, i.e. if the workstation is
powerful enough, you can initiate multiple cp commands, each from a different
terminal, on the same Snowball device (see the sketch after this list).
• Copying from multiple workstations to the same Snowball.
• Transferring large files or creating a batch of small files; this will reduce
the encryption overhead.
• Eliminating unnecessary hops, i.e. making a setup where the source machine(s)
and the Snowball are the only machines active on the switch being used; this can
hugely improve performance.
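A hedged sketch of the parallel copies from the first point, using the Snowball
client's cp command (the paths and bucket name are placeholders):

snowball cp -r /data/part1 s3://example-bucket/part1   # terminal 1
snowball cp -r /data/part2 s3://example-bucket/part2   # terminal 2, at the same time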
23. If you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign
each instance a predetermined private IP address you should:
A. Launch the instance from a private Amazon Machine Image (AMI).
B. Assign a group of sequential Elastic IP address to the instances.
C. Launch the instances in the Amazon Virtual Private Cloud (VPC).
D. Launch the instances in a Placement Group.
Answer C.
Explanation: The best way of connecting to your cloud resources (e.g. EC2
instances) from your own data center (e.g. a private cloud) is a VPC. Once you
connect your datacenter to the VPC in which your instances are present, each
instance is assigned a private IP address which can be accessed from your
datacenter. Hence, you can access your public cloud resources as if they were on
your own network.
24. Can I connect my corporate datacenter to the Amazon Cloud?
Yes, you can do this by establishing a VPN (Virtual Private Network) connection
between your company's network and your VPC (Virtual Private Cloud); this will
allow you to interact with your EC2 instances as if they were within your existing
network.
25. Is it possible to change the private IP addresses of an EC2 while it is
running/stopped in a VPC?
The primary private IP address is attached to the instance throughout its lifetime
and cannot be changed; however, secondary private addresses can be unassigned,
assigned or moved between interfaces or instances at any point.
26. Why do you make subnets?
A. Because there is a shortage of networks
B. To efficiently utilize networks that have a large no. of hosts.
C. Because there is a shortage of hosts.
D. To efficiently utilize networks that have a small no. of hosts.
Answer B.
Explanation: If there is a network which has a large no. of hosts, managing all
these hosts can be a tedious job. Therefore we divide this network into subnets
(sub-networks) so that managing these hosts becomes simpler.
27. Which of the following is true?
A. You can attach multiple route tables to a subnet
B. You can attach multiple subnets to a route table
C. Both A and B
D. None of these.
Answer B.
Explanation: Route tables are used to route network packets; therefore, having
multiple route tables in a subnet would lead to confusion as to where a packet has
to go. A subnet therefore has only one route table attached to it, and since a
route table can have any number of records or entries, attaching multiple subnets
to a route table is possible.
28. In CloudFront what happens when content is NOT present at an Edge location and
a request is made to it?
A. An Error “404 not found” is returned
B. CloudFront delivers the content directly from the origin server and stores it
in the cache of the edge location
C. The request is kept on hold till content is delivered to the edge location
D. The request is routed to the next closest edge location
Answer B.
Explanation: CloudFront is a content delivery system that caches data at the edge
location nearest to the user to reduce latency. If data is not present at an edge
location, the first time the data is transferred from the origin server, but from
the next time onward it will be served from the cached edge location.
29. If I’m using Amazon CloudFront, can I use Direct Connect to transfer objects
from my own data center?
Yes. Amazon CloudFront supports custom origins including origins from outside of
AWS. With AWS Direct Connect, you will be charged with the respective data transfer
rates.
30. If my AWS Direct Connect fails, will I lose my connectivity?
If a backup AWS Direct connect has been configured, in the event of a failure it
will switch over to the second one. It is recommended to enable Bidirectional
Forwarding Detection (BFD) when configuring your connections to ensure faster
detection and failover. On the other hand, if you have configured a backup IPsec
VPN connection instead, all VPC traffic will failover to the backup VPN connection
automatically. Traffic to/from public resources such as Amazon S3 will be routed
over the Internet. If you do not have a backup AWS Direct Connect link or an IPsec
VPN link, then Amazon VPC traffic will be dropped in the event of a failure.
31. If I launch a standby RDS instance, will it be in the same Availability Zone as
my primary?
A. Only for Oracle RDS types
B. Yes
C. Only if it is configured at launch
D. No
Answer D.
Explanation: No. Since the purpose of having a standby instance is to survive an
infrastructure failure (if it happens), the standby instance is stored in a
different Availability Zone, which is a physically different, independent
infrastructure.
32. When would I prefer Provisioned IOPS over Standard RDS storage?
A. If you have batch-oriented workloads
B. If you use production online transaction processing (OLTP) workloads.
C. If you have workloads that are not sensitive to consistent performance
D. All of the above
Answer B.
Explanation: Provisioned IOPS delivers high, consistent I/O rates, but on the other
hand it is expensive as well. Production OLTP workloads are sensitive to sustained
I/O performance, which is exactly what Provisioned IOPS storage is designed for;
batch-oriented workloads, by contrast, can usually tolerate the variable throughput
of standard storage.
33. How is Amazon RDS, DynamoDB and Redshift different?
• Amazon RDS is a database management service for relational databases; it
manages patching, upgrading, backing up of data, etc. of databases for you without
your intervention. RDS is a DB management service for structured data only.
• DynamoDB, on the other hand, is a NoSQL database service; NoSQL deals with
unstructured data.
• Redshift is an entirely different service; it is a data warehouse product and is
used in data analysis.
34. If I am running my DB Instance as a Multi-AZ deployment, can I use the standby
DB Instance for read or write operations along with primary DB instance?
A. Yes
B. Only with MySQL based RDS
C. Only for Oracle RDS instances
D. No
Answer D.
Explanation: No. A standby DB instance cannot be used in parallel with the primary
DB instance, as the former is solely intended for standby purposes; it cannot be
used unless the primary instance goes down.
35. Your company's branch offices are all over the world; they use software with a
multi-regional deployment on AWS and MySQL 5.6 for data persistence.
The task is to run an hourly batch process and read data from every region to
compute cross-regional reports which will be distributed to all the branches. This
should be done in the shortest time possible. How will you build the DB
architecture in order to meet the requirements?
A. For each regional deployment, use RDS MySQL with a master in the region and a
read replica in the HQ region
B. For each regional deployment, use MySQL on EC2 with a master in the region
and send hourly EBS snapshots to the HQ region
C. For each regional deployment, use RDS MySQL with a master in the region and
send hourly RDS snapshots to the HQ region
D. For each regional deployment, use MySQL on EC2 with a master in the region
and use S3 to copy data files hourly to the HQ region
Answer A.
Explanation: For this we will take an RDS instance as a master, because it will
manage our database for us, and since we have to read from every region, we'll put
a read replica of this instance in every region where the data has to be read from.
Option C is not correct, since a read replica would be more efficient than a
snapshot; a read replica can be promoted to an independent DB instance if needed,
but with a DB snapshot it becomes mandatory to launch a separate DB instance.
36. Can I run more than one DB instance for Amazon RDS for free?
Yes. You can run more than one Single-AZ Micro database instance, that too for
free! However, any use exceeding 750 instance hours, across all Amazon RDS Single-
AZ Micro DB instances, across all eligible database engines and regions, will be
billed at standard Amazon RDS prices. For example: if you run two Single-AZ Micro
DB instances for 400 hours each in a single month, you will accumulate 800 instance
hours of usage, of which 750 hours will be free. You will be billed for the
remaining 50 hours at the standard Amazon RDS price.
37. Which AWS services will you use to collect and process e-commerce data for near
real-time analysis?
A. Amazon ElastiCache
B. Amazon DynamoDB
C. Amazon Redshift
D. Amazon Elastic MapReduce
Answer B,C.
Explanation: DynamoDB is a fully managed NoSQL database service. DynamoDB can
therefore be fed any type of unstructured data, which can be data from e-commerce
websites as well, and later an analysis can be done on it using Amazon Redshift. We
are not using Elastic MapReduce, since a near real-time analysis is needed.
38. Can I retrieve only a specific element of the data, if I have a nested JSON
data in DynamoDB?
Yes. When using the GetItem, BatchGetItem, Query or Scan APIs, you can define a
Projection Expression to determine which attributes should be retrieved from the
table. Those attributes can include scalars, sets, or elements of a JSON document.
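For instance, with the AWS CLI (the table, key and attribute names are
illustrative):

aws dynamodb get-item --table-name Users --key '{"UserId": {"S": "u123"}}' --projection-expression "Address.City"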
39. A company is deploying a new two-tier web application in AWS. The company has
limited staff and requires high availability, and the application requires complex
queries and table joins. Which configuration provides the solution for the
company’s requirements?
A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
B. Amazon RDS for MySQL with Multi-AZ
C. Amazon ElastiCache
D. Amazon DynamoDB
Answer B.
Explanation: The application requires complex queries and table joins, which points
to a relational database, and the limited staff plus the high-availability
requirement point to a managed Multi-AZ deployment. Amazon RDS for MySQL with
Multi-AZ therefore fits; DynamoDB scales well but does not support table joins.
40. What happens to my backups and DB Snapshots if I delete my DB Instance?
When you delete a DB instance, you have the option of creating a final DB snapshot;
if you do that, you can restore your database from that snapshot. RDS retains this
user-created DB snapshot, along with all other manually created DB snapshots, after
the instance is deleted; automated backups are deleted, and only manually created
DB snapshots are retained.
41. Which of the following use cases are suitable for Amazon DynamoDB? Choose 2
answers
A. Managing web sessions.
B. Storing JSON documents.
C. Storing metadata for Amazon S3 objects.
D. Running relational joins and complex updates.
Answer B,C.
Explanation: DynamoDB natively supports document data, so JSON documents can be
stored directly, and key-value lookups such as metadata for Amazon S3 objects are a
classic fit. Relational joins and complex updates, on the other hand, are not
supported by DynamoDB and are better handled by a relational database.
42. How can I load my data to Amazon Redshift from different data sources like
Amazon RDS, Amazon DynamoDB and Amazon EC2?
You can load the data in the following two ways:
• You can use the COPY command to load data in parallel directly to Amazon
Redshift from Amazon EMR, Amazon DynamoDB, or any SSH-enabled host (see the sketch
after this list).
• AWS Data Pipeline provides a high-performance, reliable, fault-tolerant solution
to load data from a variety of AWS data sources. You can use AWS Data Pipeline to
specify the data source and the desired data transformations, and then execute a
pre-written import script to load your data into Amazon Redshift.
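A hedged sketch of the COPY command mentioned in the first point, loading from a
DynamoDB table (the table name, role ARN and read ratio are placeholders):

copy product_catalog
from 'dynamodb://ProductCatalog'
iam_role 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
readratio 50;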
43. Your application has to retrieve data from your user’s mobile every 5 minutes
and the data is stored in DynamoDB, later every day at a particular time the data
is extracted into S3 on a per user basis and then your application is later used to
visualize the data to the user. You are asked to optimize the architecture of the
backend system to lower cost, what would you recommend?
A. Create a new Amazon DynamoDB table each day and drop the one for the previous
day after its data is on Amazon S3.
B. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table
and reduce provisioned write throughput.
C. Introduce Amazon Elasticache to cache reads from the Amazon DynamoDB table
and reduce provisioned read throughput.
D. Write data directly into an Amazon Redshift cluster replacing both Amazon
DynamoDB and Amazon S3.
Answer C.
Explanation: Since our work requires the data to be extracted and analyzed, one way
to optimize this process would be provisioned IO, but since that is expensive,
using ElastiCache to cache the results in memory instead can reduce the provisioned
read throughput, and hence reduce cost without affecting the performance.
44. You are running a website on EC2 instances deployed across multiple
Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site
performs a high number of small reads and writes per second and relies on an
eventual consistency model. After comprehensive tests you discover that there is
read contention on RDS MySQL. Which are the best approaches to meet these
requirements? (Choose 2 answers)
A. Deploy ElastiCache in-memory cache running in each availability zone
B. Implement sharding to distribute load to multiple RDS MySQL instances
C. Increase the RDS MySQL Instance size and Implement provisioned IOPS
D. Add an RDS MySQL read replica in each availability zone
Answer A,C.
Explanation: Since the site does a lot of reads and writes, provisioned IO may
become expensive. But we need high performance as well, therefore the data can be
cached using ElastiCache, which can serve the frequently read data. As for RDS,
since read contention is happening, the instance size should be increased and
provisioned IOPS introduced to increase the performance.
45. A startup is running a pilot deployment of around 100 sensors to measure street
noise and air quality in urban areas for 3 months. It was noted that every month
around 4GB of sensor data is generated. The company uses a load balanced auto
scaled layer of EC2 instances and a RDS database with 500 GB standard storage. The
pilot was a success and now they want to deploy at least 100K sensors which need
to be supported by the backend. You need to store the data for at least 2 years to
analyze it. Which setup of the following would you prefer?
A. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
B. Ingest data into a DynamoDB table and move old data to a Redshift cluster
C. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
D. Keep the current architecture but upgrade RDS storage to 3TB and 10K
provisioned IOPS
Answer C.
Explanation: A Redshift cluster would be preferred because it is easy to scale;
also, the work is done in parallel across the nodes, which is perfect for a bigger
workload like our use case. Since 4 GB of data is generated each month, in 2 years
that would be around 96 GB. And since the sensors will be increased to 100K in
number (1,000 times as many), 96 GB will approximately become 96 TB. Hence option C
is the right answer.
46. Suppose you have an application where you have to render images and also do
some general computing. From the following services which service will best fit
your need?
A. Classic Load Balancer
B. Application Load Balancer
C. Both of them
D. None of these
Answer B.
Explanation: You will choose an Application Load Balancer, since it supports
path-based routing, which means it can take decisions based on the URL; therefore,
if your task needs image rendering, it will route the request to one set of
instances, and for general computing it will route it to a different set.
47. What is the difference between Scalability and Elasticity?
Scalability is the ability of a system to increase its hardware resources to handle
the increase in demand. It can be done by increasing the hardware specifications or
increasing the processing nodes.
Elasticity is the ability of a system to handle increase in the workload by adding
additional hardware resources when the demand increases(same as scaling) but also
rolling back the scaled resources, when the resources are no longer needed. This is
particularly helpful in Cloud environments, where a pay per use model is followed.
48. How will you change the instance type for instances which are running in your
application tier and are using Auto Scaling. Where will you change it from the
following areas?
A. Auto Scaling policy configuration
B. Auto Scaling group
C. Auto Scaling tags configuration
D. Auto Scaling launch configuration
Answer D.
Explanation: The Auto Scaling tags configuration is used to attach metadata to your
instances; to change the instance type, you have to use the Auto Scaling launch
configuration.
49. You have a content management system running on an Amazon EC2 instance that is
approaching 100% CPU utilization. Which option will reduce load on the Amazon EC2
instance?
A. Create a load balancer, and register the Amazon EC2 instance with it
B. Create a CloudFront distribution, and configure the Amazon EC2 instance as
the origin
C. Create an Auto Scaling group from the instance using the
CreateAutoScalingGroup action
D. Create a launch configuration from the instance using the
CreateLaunchConfigurationAction
Answer A.
Explanation: Creating an Auto Scaling group alone will not solve the issue until
you attach a load balancer to it. Once you attach a load balancer to an Auto
Scaling group, it will efficiently distribute the load among all the instances.
Option B – CloudFront is a CDN, i.e. a data transfer tool, and therefore will not
help reduce the load on the EC2 instance. Similarly, the other option – a launch
configuration is a template for configuration and has no connection with reducing
load.
50. When should I use a Classic Load Balancer and when should I use an Application
load balancer?
A Classic Load Balancer is ideal for simple load balancing of traffic across
multiple EC2 instances, while an Application Load Balancer is ideal for
microservices or container-based architectures where there is a need to route
traffic to multiple services or load balance across multiple ports on the same EC2
instance.
For a detailed discussion on Auto Scaling and Load Balancer, please refer our EC2
AWS blog.
51. What does Connection draining do?
A. Terminates instances which are not in use.
B. Re-routes traffic from instances which are to be updated or failed a health
check.
C. Re-routes traffic from instances which have more workload to instances which
have less workload.
D. Drains all the connections from an instance, with one click.
Answer B.
Explanation: Connection draining is an ELB feature. If an instance fails a health
check or has to be patched with a software update, connection draining pulls new
traffic away from that instance, re-routes it to the other instances, and lets the
in-flight requests on the old instance complete.
52. When an instance is unhealthy, it is terminated and replaced with a new one,
which of the following services does that?
A. Sticky Sessions
B. Fault Tolerance
C. Connection Draining
D. Monitoring
Answer B.
Explanation: When ELB detects that an instance is unhealthy, it starts routing
incoming traffic to the other healthy instances. If all the instances in an
Availability Zone become unhealthy, and you have instances in some other
Availability Zone, your traffic is directed to them. Once your instances become
healthy again, traffic is routed back to the original instances.
53. What are lifecycle hooks used for in AutoScaling?
A. They are used to do health checks on instances
B. They are used to put an additional wait time to a scale in or scale out
event.
C. They are used to shorten the wait time to a scale in or scale out event
D. None of these
Answer B.
Explanation: Lifecycle hooks are used to put a wait time before any lifecycle
action, i.e. launching or terminating an instance, happens. The purpose of this
wait time can be anything from extracting log files before terminating an instance
to installing the necessary software on an instance before launching it.
54. A user has setup an Auto Scaling group. Due to some issue the group has failed
to launch a single instance for more than 24 hours. What will happen to Auto
Scaling in this condition?
A. Auto Scaling will keep trying to launch the instance for 72 hours
B. Auto Scaling will suspend the scaling process
C. Auto Scaling will start an instance in a separate region
D. The Auto Scaling group will be terminated automatically
Answer B.
Explanation: Auto Scaling allows you to suspend and then resume one or more of the
Auto Scaling processes in your Auto Scaling group. This can be very useful when you
want to investigate a configuration problem or other issue with your web
application, and then make changes to your application, without triggering the Auto
Scaling process.
55. You have an EC2 Security Group with several running EC2 instances. You changed
the Security Group rules to allow inbound traffic on a new port and protocol, and
then launched several new instances in the same Security Group. The new rules
apply:
A. Immediately to all instances in the security group.
B. Immediately to the new instances only.
C. Immediately to the new instances, but old instances must be stopped and
restarted before the new rules apply.
D. To all instances, but it may take several minutes for old instances to see
the changes.
Answer A.
Explanation: Any rule specified in an EC2 security group applies immediately to all
the instances in the group, irrespective of whether they were launched before or
after the rule was added.
56. To create a mirror image of your environment in another region for disaster
recovery, which of the following AWS resources do not need to be recreated in the
second region? ( Choose 2 answers )
A. Route 53 Record Sets
B. Elastic IP Addresses (EIP)
C. EC2 Key Pairs
D. Launch configurations
E. Security Groups
Answer A,B.
Explanation: Elastic IPs and Route 53 record sets are common assets, therefore
there is no need to replicate them, since Elastic IPs and Route 53 are valid across
regions.
57. A customer wants to capture all client connection information from his load
balancer at an interval of 5 minutes, which of the following options should he
choose for his application?
A. Enable AWS CloudTrail for the loadbalancer.
B. Enable access logs on the load balancer.
C. Install the Amazon CloudWatch Logs agent on the load balancer.
D. Enable Amazon CloudWatch metrics on the load balancer.
Answer B.
Explanation: Elastic Load Balancing access logs capture detailed information about
every client connection and request sent to the load balancer, and can be published
to Amazon S3 at 5-minute or 60-minute intervals. CloudTrail, by contrast, records
API calls made to the load balancer, not client connections, so access logs are the
right fit for this use case.
58. A customer wants to track access to their Amazon Simple Storage Service (S3)
buckets and also use this information for their internal security and access
audits. Which of the following will meet the Customer requirement?
A. Enable AWS CloudTrail to audit all Amazon S3 bucket access.
B. Enable server access logging for all required Amazon S3 buckets.
C. Enable the Requester Pays option to track access via AWS Billing
D. Enable Amazon S3 event notifications for Put and Post.
Answer A.
Explanation: AWS CloudTrail has been designed for logging and tracking API calls,
and it is available for S3 as well, therefore it should be used in this use case.
59. Which of the following are true regarding AWS CloudTrail? (Choose 2 answers)
A. CloudTrail is enabled globally
B. CloudTrail is enabled on a per-region and service basis
C. Logs can be delivered to a single Amazon S3 bucket for aggregation.
D. CloudTrail is enabled for all available services within a region.
Answer B,C.
Explanation: CloudTrail is enabled on a per-region and per-service basis; it is not
automatically enabled for every service in every region, so option B is correct.
Logs from multiple regions can also be delivered to a single S3 bucket for
aggregation, so option C is correct as well.
60. What happens if CloudTrail is turned on for my account but my Amazon S3 bucket
is not configured with the correct policy?
CloudTrail files are delivered according to S3 bucket policies. If the bucket is
not configured or is misconfigured, CloudTrail might not be able to deliver the log
files.
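For illustration, a sketch that applies the standard CloudTrail delivery policy
with boto3; the bucket name and account ID are placeholders:

import json
import boto3

s3 = boto3.client('s3')
bucket = 'my-cloudtrail-logs'   # placeholder bucket name
account_id = '111122223333'     # placeholder AWS account ID

# The standard CloudTrail delivery policy: the service may check the bucket
# ACL and write log files under AWSLogs/<account-id>/.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/{account_id}/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))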
61. How do I transfer my existing domain name registration to Amazon Route 53
without disrupting my existing web traffic?
You will first need a list of the DNS record data for your domain name; it is
generally available in the form of a “zone file” that you can get from your
existing DNS provider. Once you have the DNS record data, you can use Route 53’s
Management Console or its simple web-services interface to create a hosted zone
that will store the DNS records for your domain name, and then follow the transfer
process. This includes steps such as updating the nameservers for your domain name
to the ones associated with your hosted zone. To complete the process, contact the
registrar with whom you registered your domain name and follow their transfer
process. As soon as your registrar propagates the new name server delegations, your
DNS queries will start to get answered.
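A minimal boto3 sketch of the hosted-zone step; the domain name is a placeholder,
and CallerReference just needs to be unique per request:

import time
import boto3

route53 = boto3.client('route53')

# Create a hosted zone for the domain being transferred in.
zone = route53.create_hosted_zone(
    Name='example.com',                 # placeholder domain
    CallerReference=str(time.time()),   # any unique string
)

# These are the name servers to configure at your current registrar:
print(zone['DelegationSet']['NameServers'])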
62. Which of the following services would you not use to deploy an app?
A. Elastic Beanstalk
B. Lambda
C. Opsworks
D. CloudFormation
Answer B.
Explanation: Lambda is used for running serverless functions: pieces of code
triggered by events, without you worrying about the computing resources running in
the background. It is not, by itself, a service for deploying a complete, publicly
accessible application.
63. How does Elastic Beanstalk apply updates?
A. By having a duplicate ready with updates before swapping.
B. By updating on the instance while it is running
C. By taking the instance down in the maintenance window
D. Updates should be installed manually
Answer A.
Explanation: Elastic Beanstalk prepares a duplicate copy of the instance before
updating the original instance, and routes your traffic to the duplicate, so that
in case your updated application fails, traffic switches back to the original
instance and the users of your application experience no downtime.
64. How is AWS Elastic Beanstalk different from AWS OpsWorks?
AWS Elastic Beanstalk is an application management platform, while OpsWorks is a
configuration management platform. Elastic Beanstalk is an easy-to-use service for
deploying and scaling web applications developed with Java, .NET, PHP, Node.js,
Python, Ruby, Go and Docker. Customers upload their code and Elastic Beanstalk
automatically handles the deployment; the application is ready to use without any
infrastructure or resource configuration.
In contrast, AWS Opsworks is an integrated configuration management platform for IT
administrators or DevOps engineers who want a high degree of customization and
control over operations.
65. What happens if my application stops responding to requests in Elastic
Beanstalk?
AWS Elastic Beanstalk applications have a system in place for avoiding failures in
the underlying infrastructure. If an Amazon EC2 instance fails for any reason,
Beanstalk will use Auto Scaling to automatically launch a new instance. Beanstalk
can also detect if your application is not responding on the custom health-check
URL, even though the infrastructure appears healthy; this is logged as an
environment event (e.g. a bad version was deployed) so you can take appropriate
action.
66. How is AWS OpsWorks different from AWS CloudFormation?
OpsWorks and CloudFormation both support application modelling, deployment,
configuration, management and related activities. Both support a wide variety of
architectural patterns, from simple web applications to highly complex
applications. AWS OpsWorks and AWS CloudFormation differ in abstraction level and
areas of focus.
AWS CloudFormation is a building-block service which enables customers to manage
almost any AWS resource via a JSON-based domain-specific language. It provides
foundational capabilities for the full breadth of AWS, without prescribing a
particular model for development and operations. Customers define templates and use
them to provision and manage AWS resources, operating systems and application code.
In contrast, AWS OpsWorks is a higher level service that focuses on providing
highly productive and reliable DevOps experiences for IT administrators and ops-
minded developers. To do this, AWS OpsWorks employs a configuration management
model based on concepts such as stacks and layers, and provides integrated
experiences for key activities like deployment, monitoring, auto-scaling, and
automation. Compared to AWS CloudFormation, AWS OpsWorks supports a narrower range
of application-oriented AWS resource types including Amazon EC2 instances, Amazon
EBS volumes, Elastic IPs, and Amazon CloudWatch metrics.
67. I created a key in Oregon region to encrypt my data in North Virginia region
for security purposes. I added two users to the key and an external AWS account. I
wanted to encrypt an object in S3, so when I tried, the key that I just created was
not listed. What could be the reason?
A. External aws accounts are not supported.
B. AWS S3 cannot be integrated with KMS.
C. The Key should be in the same region.
D. New keys take some time to reflect in the list.
Answer C.
Explanation: The key created and the data to be encrypted should be in the same
region. Hence the approach taken here to secure the data is incorrect.
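A minimal boto3 sketch of the correct approach, creating the key in the same region
as the bucket; the region, bucket, object key, and description are placeholders:

import boto3

region = 'us-east-1'  # assumption: the bucket's region; both clients use it

kms = boto3.client('kms', region_name=region)
s3 = boto3.client('s3', region_name=region)

# Create the key in the same region as the data to be encrypted.
key_id = kms.create_key(Description='s3-encryption-key')['KeyMetadata']['KeyId']

# Server-side encrypt an object with that key.
s3.put_object(
    Bucket='my-virginia-bucket',   # placeholder bucket name
    Key='secret.txt',
    Body=b'sensitive data',
    ServerSideEncryption='aws:kms',
    SSEKMSKeyId=key_id,
)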
68. A company needs to monitor the read and write IOPS for their AWS MySQL RDS
instance and send real-time alerts to their operations team. Which AWS services can
accomplish this?
A. Amazon Simple Email Service
B. Amazon CloudWatch
C. Amazon Simple Queue Service
D. Amazon Route 53
Answer B.
Explanation: Amazon CloudWatch is a cloud monitoring tool, which makes it the right
service for this use case; a CloudWatch alarm can publish to an SNS topic to notify
the operations team in real time. The other options serve different purposes; Route
53, for example, provides DNS services. Therefore CloudWatch is the apt choice.
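For illustration, a boto3 sketch of such an alarm; the DB instance identifier,
threshold, and SNS topic ARN are placeholders (the SNS topic is what delivers the
real-time alert to the operations team):

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

# Alarm when average ReadIOPS on the RDS instance exceeds a threshold for
# three consecutive one-minute periods; a similar alarm covers WriteIOPS.
cloudwatch.put_metric_alarm(
    AlarmName='rds-high-read-iops',
    Namespace='AWS/RDS',
    MetricName='ReadIOPS',
    Dimensions=[{'Name': 'DBInstanceIdentifier', 'Value': 'my-mysql-db'}],
    Statistic='Average',
    Period=60,
    EvaluationPeriods=3,
    Threshold=1000.0,                  # placeholder threshold
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:111122223333:ops-team'],
)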
69. What happens when one of the resources in a stack cannot be created
successfully in AWS OpsWorks?
When an event like this occurs and the “automatic rollback on error” feature is
enabled, all the AWS resources that were created successfully up to the point where
the error occurred are deleted. This is helpful since it does not leave behind any
half-created state: stacks are either created fully or not created at all. It is
useful in events where you accidentally exceed your limit on the number of Elastic
IP addresses, or do not have access to an EC2 AMI that you are trying to run, etc.
70. What automation tools can you use to spin up servers?
Any of the following tools can be used:
• Roll your own scripts using the AWS API tools. Such scripts could be written
in Bash, Perl, or another language of your choice.
• Use a configuration management and provisioning tool like Puppet or Chef. You
can also use a tool like Scalr.
• Use a managed solution such as RightScale.
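As a sketch of the first approach, launching instances directly through the AWS API
with boto3; the AMI ID, key pair name, and tag value are placeholders:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Launch two instances from an AMI.
ec2.run_instances(
    ImageId='ami-0123456789abcdef0',   # placeholder AMI ID
    InstanceType='t2.micro',
    KeyName='my-key-pair',             # placeholder key pair
    MinCount=2,
    MaxCount=2,
    TagSpecifications=[{
        'ResourceType': 'instance',
        'Tags': [{'Key': 'Name', 'Value': 'scripted-server'}],
    }],
)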
These are some common AWS Solution Architect interview questions, to be answered by
an AWS Enterprise Solution Architect, in no particular order:
Interview Questions: Business Perspective | Recommended Answer
If an organization is facing a major change, what approach would you, as an AWS
Solution Architect, suggest to face it? What steps would you perform to resolve
this situation? This reveals whether the candidate for the AWS Solution Architect
position possesses an open interest in a future customer, understands their
business model, and recognizes actual changes and challenges.
From your point of view, what are the relevant responsibilities of an AWS Solution
Architect? Describe relevant responsibilities, duties, and challenges for an AWS
Solution Architect.
Refer to the job description above.
How do you normally take AWS architecture requirements to design? Describe
your procedures and methodology for establishing relationships and for
understanding business requirements from the customer.
What are the key considerations/guidelines when you’re going to make AWS
architecture recommendations? Demonstrate with some examples how you make
decisions and recommendations about AWS architecture topics.
How do you approach a pre-sales engagement as an AWS Solution Architect? How do you
establish a relationship with AWS salespeople? Please describe… This helps
interviewers understand how the candidate creates relationships and collaborates
with other AWS work teams.
What challenges are you looking for in this AWS Solution Architect position?
Discover and explain the candidate's purpose and objectives for this role within
the company.
How do you share (describe) your ideas and knowledge about AWS services/products
with customers or other people on your team? Please describe…
Could you please show us? This will reveal whether the candidate has excellent
communication and presentation skills and really enjoys sharing his/her expertise
and knowledge as an advocate.
Could you please describe a situation where you interacted with CxOs or other
business leaders? Understand whether the candidate has had communication and
relationships with C-level people, and how he/she has managed those relationships.
Please describe a successful project that reflects your design/implementation/
consulting experience with AWS Solution Architecture. Discover practical
experience based on projects executed before around AWS Solution Architecture.
What enterprise architecture and management frameworks do you know? And how have
you used them? Reveals the candidate's knowledge of enterprise architecture,
business architecture, and management frameworks, and how the candidate has used
them in practice.
Please describe a problem or issue you faced during your career as an AWS Solution
Architect. How did you handle it? Understand how the candidate handles issues and
problems.
What have you done to improve your AWS knowledge within the last year? Discover
whether the candidate has invested in his/her personal and professional growth.
What are the most important characteristics of an AWS Cloud solution that you need
to take into account when you design it? Understand whether the candidate uses the
AWS Well-Architected Framework and has a holistic view of a business solution.
Please describe or tell us about a special contribution you have made to your last
employer. Explain clearly what contributions the candidate made in the past, what
his/her contribution was to the success of the previous company and the
satisfaction of its customers. Share some past experiences.
Who are you? Please tell us about yourself. Describe your principal values and
characteristics as a human being. Explain why you’re the best candidate for the
job position and what differentiates you from others.
Table #1. Typical general AWS Solution Architect Interview Questions
Normally, the above questions are complemented with specific AWS technical
questions that evaluate whether the candidate has the required qualifications from
the AWS services and technology perspective, like the following:
Interview Questions: Technical Perspective | Recommended Answer
What is Cloud Computing?
What are its principal characteristics and benefits? Explain the meaning of
cloud computing; talk about characteristics such as flexibility, elasticity, and
pay-on-demand. Describe the different cloud models: IaaS, PaaS, and SaaS. Reflect
on the benefits and myths of the cloud.
What is AWS? Highlight AWS leadership in the cloud. Describe briefly some of
the AWS services with which you feel at ease, for example EC2, RDS, DynamoDB,
CloudFormation, etc.
Note that AWS has comprehensive security capabilities that support virtually any
cloud workload.
What is the AWS free tier?
What is included in it? Explain how the AWS Free Tier is designed to enable you to
get hands-on experience with AWS cloud services, which AWS services are freely
available for 12 months following your AWS sign-up date, and which additional
service offers do not automatically expire at the end of your 12-month AWS Free
Tier term.
What is an EC2 instance? How to protect and reuse it? Explain that EC2 is a web
service that provides resizable computing capacity in the cloud. Describe how to
create an AMI, take snapshots for backup, and reuse an EC2 instance.
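For illustration, a minimal boto3 sketch of creating an AMI from a running instance
so it can be protected and relaunched later; the instance ID and image name are
placeholders:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Create an AMI from a running instance for backup and reuse.
image = ec2.create_image(
    InstanceId='i-0123456789abcdef0',  # placeholder instance ID
    Name='web-server-backup',          # placeholder image name
    NoReboot=True,                     # don't stop the instance while imaging
)
print(image['ImageId'])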
What kind of instances does AWS offer? Describe the EC2 instance types. Each EC2
instance type comprises varying combinations of CPU, memory, storage, and
networking capacity, giving you the flexibility to choose the appropriate mix of
resources for your applications. For more information, refer to
https://aws.amazon.com/ec2/instance-types/
How to increase the availability of your applications? How to avoid bottlenecks in
the performance of your applications? Describe AWS load balancing solutions.
Remember that services like Elastic Load Balancing automatically distribute
incoming application traffic across multiple Amazon EC2 instances in the cloud,
enabling you to achieve greater levels of fault tolerance in your applications and
seamlessly providing the amount of load balancing capacity needed to distribute
application traffic.
Describe the ELB services and the difference between the Application and Classic
Load Balancers.
How to enable an automatic scaling solution according to user demand?
Explain the Auto Scaling features of AWS. Remember that Auto Scaling allows
you to scale your Amazon EC2 capacity up or down automatically according to
conditions you define, and it is particularly well suited for applications that
experience hourly, daily, or weekly variability in usage.
Describe how to create a launch configuration and an Auto Scaling group, including
common limits, how to monitor them using CloudWatch, and how to establish automatic
alerts and actions.
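A minimal boto3 sketch of creating a launch configuration and an Auto Scaling
group; the AMI ID, names, and Availability Zones are placeholders:

import boto3

autoscaling = boto3.client('autoscaling', region_name='us-east-1')

# The launch configuration describes what to launch.
autoscaling.create_launch_configuration(
    LaunchConfigurationName='web-lc',
    ImageId='ami-0123456789abcdef0',   # placeholder AMI ID
    InstanceType='t2.micro',
)

# The group describes where and how many: min/max limits and the
# Availability Zones to spread instances across.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='web-asg',
    LaunchConfigurationName='web-lc',
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    AvailabilityZones=['us-east-1a', 'us-east-1b'],
)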
How to create your own network resources in the AWS Cloud? Describe the Amazon VPC
service. Notice that Amazon Virtual Private Cloud (Amazon VPC) lets you provision a
logically isolated section of the AWS Cloud, where you can launch AWS resources in
a virtual network that you define. You have complete control over your virtual
networking environment, including the selection of your own IP address range, the
creation of subnets, and the configuration of route tables and network gateways.
Highlight VPC security settings using security groups and network ACLs for subnets.
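A minimal boto3 sketch of provisioning a VPC with one subnet; the CIDR ranges and
Availability Zone are placeholder choices:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# A logically isolated network with an IP range you choose.
vpc_id = ec2.create_vpc(CidrBlock='10.0.0.0/16')['Vpc']['VpcId']

# Carve out one subnet in a specific Availability Zone.
ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock='10.0.1.0/24',
    AvailabilityZone='us-east-1a',
)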
How could you implement a DNS service in AWS? How could you register a new domain
name? How could you implement low-latency, fault-tolerant architectures for
managing web application traffic? Explain services like Amazon Route 53, a highly
available and scalable Domain Name System (DNS) web service. You can use Amazon
Route 53 to configure DNS health checks to route traffic to healthy endpoints or to
independently monitor the health of your application and its endpoints. Amazon
Route 53 makes it possible for you to manage traffic globally through a variety of
routing types, including Latency Based Routing, Geo DNS, and Weighted Round Robin—
all of which can be combined with DNS Failover to enable a variety of low-latency,
fault-tolerant architectures. Don’t forget that Amazon Route 53 also offers Domain
Name Registration – you can purchase and manage domain names such as example.com
and Amazon Route 53 will automatically configure DNS settings for your domains.
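For illustration, a boto3 sketch of one weighted routing record; the hosted zone
ID, domain, and IP address are placeholders, and a second record with its own
SetIdentifier and weight would cover the other endpoint:

import boto3

route53 = boto3.client('route53')

# Send roughly 70% of traffic to this endpoint (Weight is relative).
route53.change_resource_record_sets(
    HostedZoneId='Z0123456789ABCDEFGHIJ',  # placeholder zone ID
    ChangeBatch={'Changes': [{
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'www.example.com',
            'Type': 'A',
            'SetIdentifier': 'primary',
            'Weight': 70,
            'TTL': 60,
            'ResourceRecords': [{'Value': '203.0.113.10'}],
        },
    }]},
)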
How to implement a private connection to AWS Services? AWS offers a service
called AWS Direct Connect that lets you establish a dedicated network connection
between your network and one of the AWS Direct Connect locations. This dedicated
connection can be partitioned into multiple virtual interfaces, each a VLAN. This
allows you to use the same connection to access public resources using public IP
address space, and private resources using private IP space while maintaining
network separation between the public and private environments.
Describe advantages and disadvantages of using private network connections.
What do you know about the Shared Responsibility Model established with AWS?
Could you please explain more about what is the responsibility of a customer?
Because you’re building systems on top of the AWS platform, the security
responsibilities will be shared. While AWS manages the security of the cloud,
security in the cloud is the responsibility of the customer. Customers retain
control of the security they choose to implement to protect their own content,
platform, applications, systems, and networks, no differently than they would have
for the applications in an on-site datacenter.
How to control the access to your resources located at AWS?
How could you protect your data at rest? There is a service called AWS Identity
and Access Management (IAM) that enables you to securely control access to AWS
services and resources for your users. Using IAM, you can create and manage AWS
users and groups and use permissions to allow and deny their access to AWS
resources.
For protecting your data at rest, there is AWS Key Management Service (KMS), a
managed service that makes it easy for you to create and control the encryption
keys used to encrypt your data.
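A minimal boto3 sketch of encrypting and decrypting a small payload with KMS; the
key alias is a placeholder, and large data would normally be encrypted locally
using a generated data key instead:

import boto3

kms = boto3.client('kms', region_name='us-east-1')

# Encrypt a small payload directly with a KMS key.
ciphertext = kms.encrypt(
    KeyId='alias/my-app-key',   # placeholder key alias
    Plaintext=b'account=12345',
)['CiphertextBlob']

# Decryption: KMS infers the key from the ciphertext metadata.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)['Plaintext']
assert plaintext == b'account=12345'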
What are the storage options provided by AWS? Describe in detail the storage
options provided by AWS, like EBS, S3, Glacier, etc. Remember that AWS offers many
different storage services, including Amazon S3, Amazon EBS, Amazon EFS, and Amazon
Glacier. Amazon S3 is an object storage service, Amazon EBS is a block storage
service, Amazon EFS is a file storage service, and Amazon Glacier is a long-term
archive storage service.
Depending on the scenario, recommend the best storage option.
What is the AWS Storage Gateway? The AWS Storage Gateway is a service connecting
an on-premises software appliance with cloud-based storage, to provide seamless and
secure integration between an organization’s on-premises IT environment and AWS
storage infrastructure.
Notice when to use it, and how to use it as a recovery or backup storage option.
How to deliver content faster? Describe in detail the service like Amazon
CloudFront which is a content delivery web service. It integrates with other AWS
services to give developers and businesses an easy way to distribute content to end
users with low latency, high data transfer speeds, and no minimum usage
commitments.
What are the managed database services provided by AWS?
What kind of SQL databases are supported by AWS? Answer with the Amazon
Relational Database Service (Amazon RDS). It is a web service that makes it easy to
set up, operate, and scale a relational database in the cloud. It provides cost-
efficient and resizable capacity while managing time-consuming database management
tasks, allowing you to focus on your applications and business.
It gives you access to the capabilities of MySQL, Oracle, SQL Server, or
PostgreSQL database engines running on your own Amazon RDS cloud-based database
instance, with high-availability configurations.
What is the difference between SQL and NoSQL databases in AWS? Explain the RDS
options and DynamoDB characteristics: their differences, benefits, and the purpose
of each related AWS service.
Which options exist to accelerate the performance of a web application?
Describe how to improve the performance of web applications by retrieving
information from a fast, managed, in-memory system instead of relying entirely on
slower disk-based databases. AWS offers a service called Amazon ElastiCache; it can
not only improve load and response times for user actions and queries but also
reduce the cost associated with scaling web applications.
Which AWS services are offered for business intelligence? Describe each related
AWS service; highlight Amazon Redshift as a fast, fully managed, petabyte-scale
data warehouse solution that makes it simple and cost-effective to efficiently
analyze all your data using your existing business intelligence tools.
From the end-user analytic point of view, there exists a service named Amazon
QuickSight which is a very fast, easy-to-use, and cloud-powered business
intelligence (BI) service. It makes it easy for all employees within an
organization to build visualizations, perform ad-hoc analysis, and quickly get
business insights from their data. Amazon QuickSight integrates automatically with
AWS data services, enables organizations to scale to hundreds of thousands of
users, and delivers fast and responsive query performance to them via the SPICE
engine.
What other AWS services do you use at the application level? Describe in detail
the application services provided by AWS, like SNS, SES, SQS, and SWF.
Remember that Amazon Simple Email Service (Amazon SES) is a highly scalable and
cost-effective email-sending service for businesses and developers. On the other
hand, Amazon Simple Notification Service (Amazon SNS) is a web service that makes
it easy to set up, operate, and send notifications from the cloud. It provides
developers with a highly scalable, flexible, and cost-effective capability to
publish messages from an application and immediately deliver them to subscribers or
other applications. Finally, Amazon Simple Queue Service offers a reliable, highly
scalable hosted queue for storing messages as they travel between computers. By
using Amazon SQS, developers can simply move data between distributed application
components performing different tasks, without losing messages or requiring each
component to be always available. Amazon SQS makes it easy to build an automated
workflow.
Don’t forget that Amazon Simple Workflow Service (Amazon SWF) is a web service that
makes it easy to coordinate work across distributed application components. Amazon
SWF enables applications for a range of use cases, including media processing, web
application back-ends, business process workflows, and analytics pipelines, to be
designed as a coordination of tasks.
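As an illustration of decoupling components with SQS, a minimal boto3 sketch; the
queue name and message body are placeholders:

import boto3

sqs = boto3.client('sqs', region_name='us-east-1')

queue_url = sqs.create_queue(QueueName='orders')['QueueUrl']

# Producer: one component drops work onto the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody='process order 42')

# Consumer: another component polls, does the work, then deletes the message.
messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    WaitTimeSeconds=10,   # long polling
).get('Messages', [])

for msg in messages:
    print(msg['Body'])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])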
How will you improve the deployment and management of AWS services? Describe how
AWS services such as AWS Elastic Beanstalk, AWS OpsWorks, and CloudFormation
contribute to improving the deployment and management of AWS services.
As an AWS Solution Architect, how could you implement Disaster recovery on AWS?
If a customer wants to enable faster disaster recovery of critical IT systems
without incurring the infrastructure expense of a second physical site, they should
use AWS services. Remember that the AWS platform supports many popular disaster
recovery (DR) architectures, from “pilot light” environments that are ready to
scale up at a moment’s notice to “hot standby” environments that enable rapid
failover and rapid recovery of your IT infrastructure and data.
Table #2. Typical technical AWS Solution Architect Interview Questions