Cloud Computing File - 2319185
On
Cloud Computing Lab (PE-CS-A402AL)
Submitted in partial fulfillment of the requirements for the
award of the degree of
Submitted To: -
Er. Devashish Gupta
Submitted By: -
Pratibha Bala
(2319185)
Teacher
Practical: 1
Aim: Give a service analysis of different cloud vendors in the industry.
Cloud service providers are vendors that provide Information Technology (IT) as a service
over the Internet. Cloud computing is a term used for storing and accessing data over
the internet; it doesn't store any data on the hard disk of your PC. Cloud companies help you
access your data from a remote server.
Earlier we used to store our data on hard drives in a computer. Cloud computing services have
replaced such hard-drive technology. A cloud computing service is nothing but the provision of
services like storage, databases, servers, networking and software through the Internet.
A few companies offer such computing services and are hence called "Cloud Computing Providers/
Companies". They charge their users for utilizing these services, and the charges are based on
usage.
AWS is Amazon's cloud web hosting platform, which offers fast, flexible, reliable and cost-
effective solutions. AWS is a comprehensive, easy-to-use computing platform offered by Amazon.
The platform is built from a combination of infrastructure as a service (IaaS), platform as
a service (PaaS) and packaged software as a service (SaaS) offerings.
Features:
Advantages:
• AWS allows organizations to use the already familiar programming models, operating
systems, databases, and architectures.
• It is a cost-effective service that allows you to pay only for what you use, without any
up-front or long-term commitments.
• You do not need to spend money on running and maintaining data centers.
• Offers fast deployments
• You can easily add or remove capacity.
• You get cloud access quickly, with virtually limitless capacity.
• Total Cost of Ownership is very low compared to any private/dedicated servers.
Disadvantages:
• If you need more immediate or intensive assistance, you’ll have to opt for paid support
packages.
• Amazon Web Services may have some common cloud computing issues when you
move to a cloud. For example, downtime, limited control, and backup protection.
• AWS sets default limits on resources which differ from region to region. These
resources consist of images, volumes, and snapshots.
• Hardware-level changes happen to your application, which may not offer the best
performance and usage for your applications.
Microsoft Azure is one of the fastest-growing clouds among them all. Azure was launched
years after the release of AWS and Google Cloud but is still knocking on the door to become
the top cloud services provider. Microsoft Azure recently won a $10 billion US government
contract.
Features:
Advantages:
Disadvantages:
• On-demand services.
• Broad network access.
• Resources pooling.
• Rapid elasticity.
• Measured service.
Advantages:
Disadvantages:
• Cloud hosting is more expensive than traditional hosting. But, to be honest, it’s worth
the peace of mind.
• Google Cloud DNS is still slower than CloudFlare, so we use the latter for DNS
resolution.
• No free support (as far as I know). But Google Cloud has extensive documentation, so
we haven’t had any problems implementing any features yet.
Oracle Cloud provides the compute, storage, networking, database, and platform services
you need to deliver robust business outcomes as you rethink your data center needs. Defense
in depth and security are key design principles within Oracle Cloud Infrastructure.
Features:
Advantages
Disadvantages:
Practical: 2
The AWS Cloud spans 84 Availability Zones within 26 geographic regions around the world,
with announced plans for 24 more Availability Zones and 8 more AWS Regions in Australia,
Canada, India, Israel, New Zealand, Spain, Switzerland, and United Arab Emirates (UAE).
Amazon provides a fully functional free account for one year for users to use and learn the
different components of AWS. You get access to AWS services like EC2, S3, DynamoDB, etc.
for free. However, there are certain limitations based on the resources consumed.
When you sign up for Amazon Web Services (AWS), your AWS account is
automatically signed up for all services in AWS, including Amazon Polly. You are charged
only for the services that you use. With Amazon Polly, you pay only for the resources you use.
If you are a new AWS customer, you can get started with Amazon Polly for free. For more
information, see AWS Free Usage Tier.
If you already have an AWS account, skip to the next step. If you don't have an AWS account,
perform the steps in the following procedure to create one.
Note: If you signed in to AWS recently, choose Sign in to the Console. If Create a new
AWS account isn't visible, first choose Sign in to a different account, and then
choose Create a new AWS account.
3. In Root user email address, enter your email address, edit the AWS account name, and
then choose Verify email address. An AWS verification email will be sent to this
address with a verification code.
Enter the code you receive, and then choose Verify. The code might take a few minutes to
arrive. Check your email and spam folder for the verification code email.
4. Choose Continue.
You receive an email to confirm that your account is created. You can sign into your new
account using the email address and password that you registered with. However, you can't use
AWS services until you finish activating your account.
On the Billing information page, enter the information about your payment method, and then
choose Verify and Add.
If you are signing up in India for an Amazon Internet Services Private Limited
(AISPL) account, then you must provide your CVV as part of the verification process. You
might also have to enter a one-time password, depending on your bank. AISPL charges your
payment method two Indian Rupees (INR), as part of the verification process. AISPL refunds
the two INR after the verification is complete.
If you want to use a different billing address for your AWS billing information, choose Use a
new address. Then, choose Verify and Continue.
Important: You can't proceed with the sign-up process until you add a valid payment method.
1. On the Confirm your identity page, select a contact method to receive a verification
code.
2. Select your phone number country or region code from the list.
3. Enter a mobile phone number where you can be reached in the next few minutes.
4. If presented with a CAPTCHA, enter the displayed code, and then submit.
On the Select a support plan page, choose one of the available Support plans. For a description
of the available Support plans and their benefits, see Compare AWS Support plans.
After you choose a Support plan, a confirmation page indicates that your account is being
activated. Accounts are usually activated within a few minutes, but the process might take up
to 24 hours.
You can sign in to your AWS account during this time. The AWS home page might display
a Complete Sign Up button during this time, even if you've completed all the steps in the sign-
up process.
When your account is fully activated, you receive a confirmation email. Check your email and
spam folder for the confirmation email. After you receive this email, you have full access to all
AWS services.
Account activation can sometimes be delayed. If the process takes more than 24 hours, check
the following:
• Finish the account activation process. You might have accidentally closed the window
for the sign-up process before you added all the necessary information. To finish the
sign-up process, open the registration page. Choose Sign in to an existing AWS
account, and then sign in using the email address and password you chose for the
account.
• Check the information associated with your payment method. Check Payment
Methods in the AWS Billing and Cost Management console. Fix any errors in the
information.
• Check your email for requests for additional information. Check your email and spam
folder to see if AWS needs any information from you to complete the activation
process.
• Contact AWS Support. Contact AWS Support for help. Be sure to mention any
troubleshooting steps that you already tried. Note: Don't provide sensitive information,
such as credit card numbers, in any correspondence with AWS.
After your AWS account is ready, you have to log in to the AWS Management Console:
• The AWS Management Console is a browser-based GUI for Amazon Web Services
(AWS). Through the console, a customer can manage their cloud computing, cloud
storage and other resources running on the Amazon Web Services infrastructure.
• The AWS Management Console also provides educational resources, including wizards
and workflows, to help users adapt to the cloud.
• An AWS user can also manage their account, including monitoring monthly spending.
A user can deploy new applications and monitor existing ones.
Services: It consists of a list of services to choose from and also provides information related
to our account, including:
• Elastic Compute Cloud: a web-based service that allows businesses to run application
programs in the AWS public cloud.
Search Bar:
• The search box in the navigation bar provides a unified search tool for tracking down
AWS services and features, service documentation, and AWS Marketplace.
• In the search box on the navigation bar of the AWS Management Console, enter all or part
of your search terms.
AWS CloudShell:
Notifications:
• The notifications feature in the Developer Tools console is a notifications manager for subscribing
to events in AWS CodePipeline.
• It has its own API, AWS CodeStar Notifications. You can use the notifications features to
quickly notify users about events in the repositories, build projects, deployment
applications, and pipelines that are most important to their work.
Support:
• AWS Support Center is the hub for managing your Support cases.
• The newly designed Support Center is moving to the AWS Management Console,
providing both federated access support and an improved case management experience.
Choosing a Region:
• For many services, you can choose an AWS Region that specifies where your resources
are managed. Regions are sets of AWS resources located in the same geographical area.
You don't need to choose a Region for the AWS Management Console or for some
services, such as AWS Identity and Access Management.
Account ID:
• Your AWS account identification number is an important value used to track your
account information with AWS. Your AWS ID is the twelve-digit number located
underneath the Account Settings section.
Service Quotas:
• The Service Quotas console provides quick access to the AWS default quota values
for your account, across all AWS Regions. When you select a service in the Service
Quotas console, you see the quotas and whether the quota is adjustable.
• The AWS Billing console contains features to organize and report your AWS cost and
usage based on user-defined methods and manage your billing and control costs.
Security Credentials:
• AWS uses your security credentials to authenticate and authorize your requests.
• For example, if you want to download a protected file from an Amazon Simple Storage
Service (Amazon S3) bucket, your credentials must allow that access.
Practical: 3
Aim: AWS EC2 - Basics, Instance Creation, Security Groups, IP Addressing, Launching an EC2
Instance.
• Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity
in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 eliminates your need
to invest in hardware up front, so you can develop and deploy applications faster.
• An Amazon EC2 instance is a virtual server in Amazon's Elastic Compute Cloud (EC2)
for running applications on the Amazon Web Services (AWS) infrastructure
• EC2 is a service that enables business subscribers to run application programs in the
computing environment. It can serve as a practically unlimited set of virtual machines
(VMs).
• Amazon provides various types of instances with different configurations of CPU,
memory, storage, and networking resources to suit user needs. Each type is available in
various sizes to address specific workload requirements.
• Login to your AWS account and go to the AWS Services tab at the top left corner.
• Here, you will see all of the AWS Services categorized as per their area viz. Compute,
Storage, Database, etc. For creating an EC2 instance, we have to choose Compute EC2
as in the next step.
• Once your desired Region is selected, come back to the EC2 Dashboard.
• Click on ‘Launch Instance’ button in the section of Create Instance.
• You will be asked to choose an AMI of your choice. (An AMI is an Amazon Machine
Image. It is a template basically of an Operating System platform which you can use as
a base to create your instance). Once you launch an EC2 instance from your preferred
AMI, the instance will automatically be booted with the desired OS.
What is AMI?
Amazon Machine Image offers an easy and visual mode of launching instances of
your virtual machine on the cloud platform. For example, you may want to launch
multiple and identical instances of the same virtual machine for your applications.
No. of instances: you can provision up to 20 instances at a time. Here we are launching
one instance.
On clicking launch, you will get a popup screen to select an existing key-pair or create a new
one.
• A key pair, consisting of a public key and a private key, is a set of security credentials
that you use to prove your identity when connecting to an Amazon EC2 instance.
Amazon EC2 stores the public key on your instance, and you store the private key.
• A key pair is a combination of a public key that is used to encrypt data and a private
key that is used to decrypt data.
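For reference, the same key pair and instance launch can also be sketched with the AWS CLI (assuming the CLI is installed and configured; the AMI ID, security group ID and key name below are placeholders, not values from this practical):
# Create a key pair and save the private key locally
aws ec2 create-key-pair --key-name MyKeyPair --query 'KeyMaterial' --output text > MyKeyPair.pem
chmod 400 MyKeyPair.pem
# Launch one t2.micro instance from a chosen AMI, using that key pair
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro \
  --count 1 --key-name MyKeyPair --security-group-ids sg-0123456789abcdef0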
As the heading says, in this screen you can either modify the root volume
settings or you can add New volume if required.
Practical-4
Aim: Identity and Access Management - Setup Configuration, Users, Groups & Roles.
The AWS Management Console requires your username and password so that the service can
determine whether you have permission to access its resources. However, we recommend that
you avoid accessing AWS using the credentials for your root AWS account; instead, we
recommend that you use AWS Identity and Access Management (IAM) to create an IAM user
and add the IAM user to an IAM group with administrative permissions. This grants the IAM
user administrative permissions. You then access the AWS Management Console using the
credentials for the IAM user.
If you signed up for AWS but have not created an IAM user for yourself, you can create one
using the IAM console.
Sign in to the IAM console as the account owner by choosing Root user and entering your
AWS account email address. On the next page, enter your password.
We strongly recommend that you adhere to the best practice of using the Administrator IAM
user that follows and securely lock away the root user credentials. Sign in as the root user only
to perform a few account and service management tasks.
2. In the navigation pane, choose Users and then choose Add users:
3. Enter Username:
4. Select the check box next to AWS Management Console access. Then select Custom
password, and then enter your new password in the text box.
5. (Optional) By default, AWS requires the new user to create a new password when first
signing in. You can clear the check box next to User must create a new password at next
sign-in to allow the new user to reset their password after they sign in.
6. Choose Next: Permissions:
9. In the Create group dialog box, for Group name enter Administrator.
10. Choose Filter policies, and then select AWS managed - job function to filter the table
contents.
11. In the policy list, select the check box for AdministratorAccess. Then choose Create
group.
12. Back in the list of groups, select the check box for your new group. Choose Refresh if
necessary to see the group in the list.
15. Choose Next: Review to see the list of group memberships to be added to the new user.
When you are ready to proceed, choose Create user.
• You can use this same process to create more groups and users and to give your
users access to your AWS account resources.
• To sign in as this new IAM user, sign out of the AWS Management Console, then
use the following URL, where your_aws_account_id is your AWS account number
without the hyphens (for example, if your AWS account number is 1234-5678-
9012, your AWS account ID is 123456789012):
• Enter the IAM user name and password that you just created.
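As a rough equivalent of the console steps above, the same administrator group and user can be sketched with the AWS CLI (the user name and password below are only examples):
# Create the group and attach the AWS managed AdministratorAccess policy
aws iam create-group --group-name Administrator
aws iam attach-group-policy --group-name Administrator \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# Create the user, give it a console password, and add it to the group
aws iam create-user --user-name testuser
aws iam create-login-profile --user-name testuser --password 'MyTempP@ssw0rd' --password-reset-required
aws iam add-user-to-group --group-name Administrator --user-name testuser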
Let's log in as the user that we created. On the sign-in page of AWS, select IAM user.
It'll ask for the Account ID; filling it in and clicking Next will pop up more options and ask for the
name of the user and the password that we set while creating the user.
After signing in, it'll prompt us to change the password (because it's our first time logging in with
this user and we've also set the require-password-reset option).
After confirming the password reset, AWS Console will open with that user account. We can
confirm that by checking the account name on the top right corner.
It's throwing a bunch of errors here because we have not authorized this user to use the IAM
service.
We have only given EC2 permissions, so this user can access the EC2 service.
• Confirm the deletion by typing delete into the text box and pressing Delete again.
• The selected users will be deleted now.
Practical 5
Aim: AWS S3 - Basics, Use of Buckets, Object Lifecycle, Permissions & Versioning.
Amazon S3
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers
industry-leading scalability, data availability, security, and performance. Customers of all sizes
and industries can use Amazon S3 to store and protect any amount of data for a range of use
cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise
applications, IoT devices, and big data analytics. Amazon S3 provides management features
so that you can optimize, organize, and configure access to your data to meet your specific
business, organizational, and compliance requirements.
Objects
These are data files, including documents, photos, and videos. Each object is identified by a
unique key within the S3 environment that differentiates it from other stored objects. You store
these objects in one or more buckets, and each object can be up to 5 TB in size.
Buckets
To upload your data (photos, videos, documents, etc.) to Amazon S3, you must first create an
S3 bucket in one of the AWS Regions.
A bucket is a container for objects stored in Amazon S3. You can store any number of objects
in a bucket and can have up to 100 buckets in your account.
Creating a bucket
On the AWS Home page navigate to services and click on the S3 Service under the category
of Storage, we can also search the service in the search bar.
If we click on the S3 service for the first time, it'll redirect to the Amazon S3 home page; if not,
it'll redirect to the bucket page where all the bucket information is shown.
We are on the home page of the Amazon S3 service; it gives us an overview of what Amazon's
S3 service is.
We can create the bucket by clicking on the Create bucket button on this page.
Clicking on Create bucket will lead us to the Amazon S3 bucket creation page. Here we can
configure every aspect of our bucket.
One thing to notice on the page here is that we can see the location as Global on the top bar of
our AWS Console.
S3 was designed this way to keep the namespace global, and other features of S3 were defined
with that uniqueness in mind. This simply means that an S3 bucket name is globally unique, the
namespace is shared by all AWS accounts, and we can access the bucket and its objects globally,
not limited to an AWS region.
The specific reasons for the global namespace aren't publicly stated, but almost certainly have
to do with the evolution of the service, backwards compatibility, and ease of adoption of
new regions.
We can see that if we try a generic name or something that’s already been used then it will say
that the name already exists, therefore we’ll have to change it to something unique.
We can compare this scenario to creating a new Gmail account: it asks for an email address,
and if we type an address that's already taken, it tells us to change it to something unique.
Here we gave it a unique name, tes011, which has not been used before (if it had been, AWS
would give a warning, as seen in the previous screenshot).
General Configuration
In this box we specified the name as tes011 and the region as Asia Pacific (Mumbai) ap-south-1.
We can also copy settings from a previous bucket, which is optional here.
Object Ownership
Here, we can specify the Object Ownership. It can be controlled through ACLs (Access Control
Lists), which basically let us decide who can have access to the objects written in the bucket.
ACLs disabled: The bucket owner automatically owns and has full control over every
object in the bucket. ACLs no longer affect permissions to data in the S3 bucket. The bucket
uses policies to define access control.
ACLs enabled
• Bucket owner preferred – The bucket owner owns and has full control over new
objects that other accounts write to the bucket with the bucket-owner-full-control
canned ACL.
• Object writer (default) – The AWS account that uploads an object owns the
object, has full control over it, and can grant other users access to it through ACLs.
Here I have blocked all the public access to this bucket so only I can access the bucket.
Bucket Versioning:
Versioning-enabled buckets can help you recover objects from accidental deletion or overwrite.
• For example, if you delete an object, Amazon S3 inserts a delete marker instead of
removing the object permanently.
• The delete marker becomes the current object version. If you overwrite an object, it
results in a new object version in the bucket. You can always restore the previous
version.
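Versioning can also be turned on later from the AWS CLI on an existing bucket (the bucket name below is just an example):
# Enable versioning on the bucket and then confirm its status
aws s3api put-bucket-versioning --bucket tes011 --versioning-configuration Status=Enabled
aws s3api get-bucket-versioning --bucket tes011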
Tags
Here, we can give this bucket some tags. We can consider them as some kind of category or
nicknames. I’ve not given this bucket any tags as this is optional.
Default Encryption
We can enable and disable the encryption on our objects on this option, I’ll be disabling this as
I don’t need any encryption for now.
Advanced Settings
Object Lock
Object Lock determines whether we want to lock the objects, which basically means keeping
your files read-only.
Finalizing
The final setup is to click on the Create bucket and our bucket will be created with our
configuration.
Our bucket is successfully created, which we can see in the list as there are currently 3 buckets
available here.
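For comparison, a bucket like this one could also be created from the AWS CLI, roughly as follows (assuming the CLI is configured for the same account; the name must still be globally unique):
# Create a bucket in the Mumbai region and then list all buckets in the account
aws s3 mb s3://tes011 --region ap-south-1
aws s3 ls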
Copy ARN: Amazon Resource Names (ARNs) are unique identifiers assigned to individual
AWS resources. A resource can be an EC2 instance, an EBS volume, an S3 bucket, a load balancer,
a VPC, a route table, etc. Therefore we can copy the bucket's ARN by clicking this button.
Empty: This option makes our bucket empty and deletes any objects stored in it.
Opening our bucket shows all the properties and operations that we can perform on the bucket.
Under the Objects tab we can upload our files and perform various actions on its objects.
Inside our bucket we can click on Upload button to go to the upload section where we can
upload our files or folder as follows:
After selecting the file or folder, it'll show the description of the file.
We can see here that it's showing the name, its type and size. More files can be added here.
Next is Destination of the bucket, we can access our bucket directly by typing the URL in our
browser, just make sure you are signed in to the AWS account the bucket has access to.
s3://test011
Next is Permissions, here we can modify the permissions of the object and Grant basic
read/write permissions to other AWS accounts by using Access control list (ACL).
All these different classes are made for different purposes, just like in the ec2 practical we have
seen the instance families, the concept is same here. We can see the description of each storage
class in the screenshot.
In the end click on upload to upload the selected files/folders in the bucket.
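The same upload can be sketched with the AWS CLI; the local file name below is just an example, and the storage-class flag is optional:
# Upload a local file to the bucket, choosing a storage class explicitly
aws s3 cp ./myphoto.jpg s3://tes011/myphoto.jpg --storage-class STANDARD_IA
# List the objects now stored in the bucket
aws s3 ls s3://tes011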
Here we can see the object and S3 URL along with other information about the file. If we try to open
the object with its URL, we’ll get the following error
This is because we had blocked public access while creating the bucket.
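Because public access is blocked, one way to still share the object over a plain URL is a presigned URL generated with credentials that do have access; a small sketch (object key as in the example upload above):
# Generate a temporary (1 hour) presigned URL for the private object
aws s3 presign s3://tes011/myphoto.jpg --expires-in 3600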
We can see the objects versions in the object’s version tab under the file as follows:
On the bucket page we can select our object and perform various operations such as Copy S3
URL, Copy URL, Download, Open, Delete, Create Folder and Actions shown in the drop-
down menu:
For deleting the object, simply select the object and click on Delete; it'll ask for confirmation
for deleting the object, in case the user has accidentally used this option.
If we want to delete our bucket, we can do it by selecting the bucket on the S3 service's bucket
page and pressing Delete. Just make sure the bucket has no objects in it and is empty.
Same as with deleting an object, we need to confirm the bucket deletion by typing the name of the
bucket and selecting Delete bucket.
Object Lifecycle
To manage your objects so that they are stored cost effectively throughout their lifecycle,
configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define
actions that Amazon S3 applies to a group of objects.
For example:
• If you upload periodic logs to a bucket, your application might need them for a week
or a month. After that, you might want to delete them.
• Some documents are frequently accessed for a limited period of time. After that, they
are infrequently accessed. At some point, you might not need real-time access to them,
but your organization or regulations might require you to archive them for a specific
period. After that, you can delete them.
• You might upload some types of data to Amazon S3 primarily for archival purposes.
For example, you might archive digital media, financial and healthcare records, raw
genomics sequence data, long-term database backups, and data that must be retained
for regulatory compliance.
With S3 Lifecycle configuration rules, you can tell Amazon S3 to transition objects to less-
expensive storage classes, or archive or delete them.
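A minimal sketch of such a rule, applied from the AWS CLI, could look like this (the prefix, transition and expiration values are only examples):
# Write an example lifecycle rule: move logs/ objects to Glacier after 30 days, delete after 365
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-then-expire-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF
# Apply the rule to the bucket
aws s3api put-bucket-lifecycle-configuration --bucket tes011 --lifecycle-configuration file://lifecycle.json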
Practical-6
A VPC gives you the ability to section off an area of AWS and basically create a virtual network.
This means you can do things like select IP ranges for your VPC and perform other networking-related
activities such as setting up subnets, network gateways and configuring route tables.
You can do cool things with subnets, like allowing internet access for one subnet
while restricting it for another. An example of where this might be useful is one
subnet could be used for API servers while another with no internet access could
be used for running databases on.
This allows for greater control over the security of your AWS account. You can
also setup network access control lists that prevent connections to your VPC
subnets unless they match a specific IP range. Useful if you want to restrict access
to your VPC.
Creating a VPC
First things first, we need to load up the VPC management console. It is located in the
Networking & Content Delivery section.
From here we can select Create VPC to create a VPC, which will take us to the create VPC
Wizard where we can configure our VPC.
We'll have two options here: one is VPC only and the other is VPC and more.
VPC only creates a simple VPC without any advanced options, whereas in the second option we
can configure our VPC to its full potential.
Further, we have to give a Name tag to our VPC; here I have named my VPC ADSR.
IP addresses enable resources in your VPC to communicate with each other, and with resources
over the internet.
• An IPv4 CIDR block has four groups of up to three decimal digits, 0-255, separated by
periods, followed by a slash and a number from 0 to 32. For example, 10.0.0.0/16.
• An individual IPv6 address is 128 bits, with 8 groups of 4 hexadecimal digits. For
example, 2001:0db8:85a3:0000:0000:8a2e:0370:7334.
• An IPv6 CIDR block has four groups of up to four hexadecimal digits, separated by
colons, followed by a double colon, followed by a slash and a number from 1 to 128.
For example, 2001:db8:1234:1a00::/56.
I have only kept an IPv4 CIDR block; we can also set an IPv6 CIDR block.
You can associate an Amazon-provided IPv6 CIDR block with the VPC. Amazon provides a
fixed size (/56) IPv6 CIDR block, and you cannot choose the range of IPv6 addresses yourself.
Alternatively, if you have imported your own IPv6 CIDRs into AWS, you can specify an IPv6
CIDR block from your address pool.
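As a small CLI sketch of the same idea, a VPC with an IPv4 CIDR and a Name tag could be created like this (the CIDR is just an example):
# Create a VPC with a /16 IPv4 CIDR block and tag it with the name ADSR
aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=ADSR}]'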
Tenancy determines who is the owner of a resource. It might be easiest to think of tenancy
in terms of housing. For instance, if you have a house then you could consider it a dedicated
tenant since only one family presumably lives there. However, if you have an apartment
building, there is a good chance that several families have rooms in a single building which
would be more like a shared tenancy model.
Default: ensures that instances launched in this VPC use the tenancy attribute specified at
launch. Choose this if you are creating a VPC for Outposts private connectivity.
Dedicated: to ensure that instances launched in this VPC are run on dedicated tenancy
instances regardless of the tenancy attribute specified at launch.
As we can see, we are currently in the US East (Ohio) – us-east-2 region, so we can set our
Number of Availability Zones (AZs) accordingly. Here I have selected 2 of the availability
zones; we can also customize them according to our needs.
The next step for creating our VPC is to configure the number of public and private subnets.
A subnet is a range of IP addresses in your VPC. You launch AWS resources, such as EC2
instances, into subnets. Use public subnets for web applications that need to be publicly
accessible over the internet.
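A couple of subnets could be carved out of that VPC from the CLI roughly as follows (the VPC ID is a placeholder for the value returned by the create-vpc call, and the CIDRs and zones are examples):
# One subnet per Availability Zone, each a /24 slice of the VPC's 10.0.0.0/16 range
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24 --availability-zone us-east-2a
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.2.0/24 --availability-zone us-east-2b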
I have selected 2 subnets each in the public and private categories; we can customize them both
if we expand the option Customize subnets CIDR blocks, as follows:
On the right side of each block, we can see the available IPs.
Nat Gateways: A NAT gateway is a Network Address Translation (NAT) service. NAT
gateways enable resources in private subnets to reach the internet. External services, however,
cannot initiate a connection with the resources in the private subnets.
If you choose to create a NAT gateway in your VPC, you are charged for each hour (We can
see a visual indication of this near the option as $ symbol) that your NAT gateway is
provisioned and available. You are also charged for the amount of data that passes through the
gateway.
VPC Endpoint:
• A VPC endpoint enables you to privately connect your VPC to supported AWS services
like Amazon S3.
• VPC endpoints enable you to create an isolated VPC that is closed from the public
internet. In addition, there is no additional charge for using gateway endpoints (which
helps avoid the costs associated with NAT gateways).
• The DNS hostnames attribute determines whether instances launched in the VPC
receive public DNS hostnames that correspond to their public IP addresses.
• The DNS resolution attribute determines whether DNS resolution through the Amazon DNS
server is supported for the VPC.
Before finalizing and clicking on Create VPC, we can see the preview in the preview section on
the right side, as follows:
In the final step we just have to click on the Create VPC button.
Finally, if there is no error in the VPC creation process, it will show the success message as
follows:
We can click on the View VPC button to see what we have created
If you create a new subnet in this VPC, it's automatically implicitly associated with the main
route table, which routes traffic to the virtual private gateway. If you set up the reverse
configuration (where the main route table has the route to the internet gateway, and the custom
route table has the route to the virtual private gateway), then a new subnet automatically has a
route to the internet gateway.
Here, opening the public routing table will lead us to its properties
We can see the routes association with the internet gateways here and in the next tab we can
see the Subnet associations, since we have 2 public subnets it’s showing us two subnet IDs
with their corresponding IPv4 CIDR
Practical-7
There are two types of databases that AWS currently supports: SQL and NoSQL.
SQL databases are primarily called Relational Databases (RDBMS); they contain tables with
fixed rows and columns.
We’ll be discussing the Transactional database services in this practical which are Amazon
RDS and Amazon DynamoDB.
Amazon RDS
Amazon Relational Database Service (Amazon RDS) is a collection of managed services that
makes it simple to set up, operate, and scale databases in the cloud.
How it works?
As RDS is a managed service provided by AWS, we can expect that like other AWS services
it will provide scalability, security and cost effectiveness to the various RDBMS it provides.
The database products available through AWS RDS are as listed below.
• MySQL
• MariaDB
• Oracle
• Microsoft SQL Server
• PostgreSQL
• Amazon Aurora
We can find the Amazon RDS service under the Database category
To create a database simply click on Create database button under the Create database widget.
Firstly, we'll be prompted whether we want a Standard create or an Easy create; in Standard create we
can configure all the options, while in the Easy create option AWS automatically suggests and applies
some default options, although we can change them later.
In next step we have to choose the database in this case I have chosen MySQL database with
the version 8.0.28.
Next is Templates; we can choose from three categories (some options might not be visible
depending on the MySQL version). I've chosen Free tier here.
Moving on, we have Availability and durability; since we have selected Free tier we cannot
change the settings, but the descriptions of these options are as follows:
• We can name our DB instance; in my case I have kept the default name given by AWS,
which is database-1.
• For the credentials settings I have used my name, aman, as the username, and I have
chosen to auto-generate a password (we can also set our own password by unchecking
this option). This will help us access our DB instance.
• Next is Instance Configuration.
Here we have the instance-type classes, just as discussed in the earlier EC2 practical; we have
the t3.micro family available in a free tier account, and we can also select other families for
faster performance of our database.
Here we can set the storage type and its settings. I have selected General Purpose SSD (gp2)
and the allocated storage for my database is 20 GiB, which is the minimum (the highest storage
we can allocate is 16,384 GiB).
We can also enable autoscaling which means that enabling this feature will allow the storage
to increase after the specified threshold is exceeded.
Starting with the network type in which I have selected IPv4, if you want IPv4 and IPv6
connectivity of the database then you can choose the dual stack mode.
The second option in the connectivity settings is the VPC in which we want our DB instance to
exist; here I have selected my custom VPC ADSR that I made in the previous practical. We
cannot change the VPC after the database is created.
Next is Availability Zone. Choose the Availability Zone from the current region in which you
want the DB instance created. Availability Zones improve high availability by isolating failures
from other Availability Zones, while supporting low-latency connectivity in the region.
The next widget to configure is Database authentication. In this we can specify how
exactly we want to perform authentication to access this database; there are various options
that we can see here, and I have selected password only.
Finally, we can select the Create database button to create the database
While the database is being created we can view our auto generated password by clicking on
the View credential button.
We'll get a success message after the database is created without any errors, and we'll be
redirected to the Databases section of Amazon RDS.
We can perform various actions by selecting the database such as Stop, Reboot, Delete, Create
read replica, Create Aurora read replica, Promote, Take snapshot, Restore to point in time,
Migrate snapshot.
Opening the database will open its properties, and we can also change some settings from there;
in the Connectivity & security section we can see the endpoint and the port number of our
database.
We can also explore and set other options in other tabs such as Monitoring, Logs & Events,
Configuration, Maintenance & backups, and Tags.
To connect to a DB instance, use any client for the MySQL DB engine. For example, you might
use the MySQL command-line client or MySQL Workbench.
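For example, a connection from the MySQL command-line client might look roughly like this (the endpoint below is a placeholder; the real one is shown in the Connectivity & security section, and the username is the one chosen earlier):
# Connect to the RDS endpoint on the default MySQL port; the password is prompted for
mysql -h database-1.abcdefghijkl.us-east-2.rds.amazonaws.com -P 3306 -u aman -p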
Amazon DynamoDB
With DynamoDB, you can create database tables that can store and retrieve any amount of data
and serve any level of request traffic. You can scale up or scale down your tables' throughput
capacity without downtime or performance degradation.
How it works?
Features Of DynamoDB
DynamoDB is a NoSQL database service. DynamoDB is designed in such a way that the user
can get high performance and run scalable applications that would not be possible with a
traditional database system. These additional features of DynamoDB can be seen under the
following categories:
Clicking on this will lead us to the start page / home page of this service.
Here we can click on the Create table button to start creating a new table
The table creation process will start with the Table Details
Here I have named the table ValorantPlayers and set the Partition key, which is the
table's primary key, to PlayerID, which is of number type. We can also choose a different primary
key and type, such as string or binary.
To keep things simple we can simply choose Default settings in the Settings widget.
We can see the default settings information below the Settings block.
Lastly, we can add tags, but I have skipped this option and have not given any tags to the table.
Opening the table by clicking on it will open a bunch of options, from here we
can do various things with our table
Clicking on the button of Explore table Items will open the table items as follows
To create a new item in the table we can click on the Create Item button
Going back we can see that there is a value 0 returned in the table
Moving back to creating items we can add further attributes and values as follows
I have added the new attributes such as “Name” and “Rank” with values “Aman” and “Bronze
3” which are of String type.
We can also choose the JSON format to add these values, the JSON format looks like this if
we switch to it.
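As a sketch, the same table and item can also be created from the AWS CLI, with the item written in DynamoDB JSON (the values mirror the ones used above; on-demand capacity is chosen here instead of the console default):
# Create the table with PlayerID (a number) as the partition key, using on-demand capacity
aws dynamodb create-table --table-name ValorantPlayers \
  --attribute-definitions AttributeName=PlayerID,AttributeType=N \
  --key-schema AttributeName=PlayerID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
# Insert one item with extra string attributes
aws dynamodb put-item --table-name ValorantPlayers \
  --item '{"PlayerID": {"N": "0"}, "Name": {"S": "Aman"}, "Rank": {"S": "Bronze 3"}}'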
Practical-8
Amazon SNS
Amazon Simple Notification Service (Amazon SNS) is a fully managed messaging service for
both application-to-application (A2A) and application-to-person (A2P) communication.
The A2A pub/sub functionality provides topics for high-throughput, push-based, many-to-
many messaging between distributed systems, microservices, and event-driven serverless
applications. Using Amazon SNS topics, your publisher systems can fanout messages to a large
number of subscriber systems, including Amazon SQS queues, AWS Lambda functions,
HTTPS endpoints, and Amazon Kinesis Data Firehose, for parallel processing. The A2P
functionality enables you to send messages to users at scale via SMS, mobile push, and email.
How it works?
Pub/sub
SMS
Mobile Push
• SNS & Email Messages: Amazon SNS provides the features to send text
messages and email (SMTP).
Opening the service will lead us to the start page / home page of this service.
Here we can enter the name of our topic and click on the Next step button to move further in the
process of making an SNS topic.
Type: The type that you choose for your topic is immutable, meaning it can't be changed once
the topic is created. FIFO topics are a better fit for use cases that require message ordering and
deduplication. Standard topics are better suited for use cases that require higher message
publish and delivery throughput rates. Note as well that the different types of topic support
different delivery protocols.
I have chosen the standard type, we can see the difference in the above screenshot between the
FIFO and Standard types.
The name that I gave it before is the same here, so I don't need to change that.
We can configure them if we want to deep dive into the advanced settings.
In the end of configuration, we can click the Create topic button for creating the topic
Now for creating the subscription we have to go to the subscriptions tab under the topic that
we just created and then click on the Create subscription
Here in the details the main thing that we need to select is the protocol that we need to use, I
have selected the email protocol. Following that, I need to specify the endpoint which is the
destination of our protocol that in my case is an email-address because I have selected E-mail
as my protocol.
By the way, these are the other protocols that we can choose from
Now before pushing any message to email, we have to confirm it, the confirmation mail should
arrive on your given mail for the topic
In the mail we can click on Confirm subscription to confirm this mail address for its usage.
Now, the status of the subscription will be confirmed, and then click on Topic name. After that
click on Publish Message.
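The same topic, subscription and publish flow can be sketched with the AWS CLI (the topic ARN below is a placeholder for the value create-topic would return, and the email address is an example):
# Create a standard topic
aws sns create-topic --name MyTestTopic
# Subscribe an email endpoint to the topic (the address must confirm the subscription)
aws sns subscribe --topic-arn arn:aws:sns:us-east-2:123456789012:MyTestTopic \
  --protocol email --notification-endpoint someone@example.com
# Publish a test message to all confirmed subscribers
aws sns publish --topic-arn arn:aws:sns:us-east-2:123456789012:MyTestTopic \
  --subject "SNS test" --message "This is just a message for testing."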
There is also an option to set a Time to Live (TTL): Amazon SNS provides support for setting a
TTL message attribute for mobile push notification messages.
In addition to allowing you to set a TTL value within the Amazon SNS message body for
supported mobile push notification services, Amazon SNS also lets you set a TTL message
attribute for mobile push notification messages.
There are two options for the message structure, Identical payload and custom payload.
Identical one will send the same payload to all the protocol and in the custom one we can
specify different payloads to the endpoints subscribed to the topic based on their delivery
protocol.
And this is the message that I will broadcast in the message body to the subscribers
“This is just a message for testing, if the SNS service of AWS is working or not, I am also
going to add an emoji just for fun 🫠”
Next is Message attributes which is optional, Amazon SNS supports delivery of message
attributes which let you provide structured metadata items (such as timestamps, geospatial data,
signatures, and identifiers) for a message.
Message attributes are sent along with the message body but are optional and separate from it.
The receiver of the message can use this information to decide how to handle the message
without having to first process the message body. Each message can have up to 10 attributes.
Finishing up we can click the Publish Message button for publishing this message to the
subscribers.
If published successfully, we will get the success message as follows, with the message
and request ID.
CloudWatch
How it works?
CloudWatch collects monitoring and operational data in the form of logs, metrics,
and events, and visualizes it using automated dashboards so you can get a unified
view of your AWS resources, applications, and services that run on AWS and on
premises. You can visualize the experience of your application end users and
validate design choices through experimentation. Correlate your metrics and logs
to better understand the health and performance of your resources. Create alarms
based on metric value thresholds you specify, or alarms that can watch for
anomalous metric behavior based on ML algorithms. For example, set up
automated actions to notify you if an alarm is triggered and automatically start
auto scaling to help reduce mean time to resolution (MTTR). You can also dive
deep and analyze your metrics, logs, and traces to better understand how to
improve application performance.
CloudWatch Dashboard
We can also create our own alarms, opening any one of these alarms will give us more detailed
monitoring status.
CloudWatch Metrics
Metrics are data about the performance of your systems. By default, many services provide
free metrics for resources (such as Amazon EC2 instances, Amazon EBS volumes, and
Amazon RDS DB instances). You can also enable detailed monitoring for some resources, such
as your Amazon EC2 instances, or publish your own application metrics. Amazon CloudWatch
can load all the metrics in your account (both AWS resource metrics and application metrics
that you provide) for search, graphing, and alarms.
Metric data is kept for 15 months, enabling you to view both up-to-the-minute data and
historical data.
To graph metrics in the console, you can use CloudWatch Metrics Insights, a high-performance
SQL query engine that you can use to identify trends and patterns within all your metrics in
real time.
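As a small sketch of the alarm workflow described above, a CPU alarm on a single EC2 instance that notifies an SNS topic could be created like this (the instance ID and topic ARN are placeholders):
# Alarm when average CPUUtilization stays above 70% for two 5-minute periods
aws cloudwatch put-metric-alarm --alarm-name high-cpu \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 70 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-2:123456789012:MyTestTopic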
CloudTrail
AWS CloudTrail is a service that helps us to monitor, survey, and perform operation auditing
along with risk monitoring of the AWS account the user uses. With AWS CloudTrail, the user
will be able to log, ceaselessly monitor, and retain account activity associated with actions
across the AWS infrastructure.
CloudTrail provides the complete account activity of Amazon Web Services. CloudTrail
also records the actions performed with the help of the AWS Management Console,
command line tools, AWS SDKs, and various AWS services.
This event history simplifies security analysis, resource amendment trailing, and
troubleshooting.
Download events
You can download a CSV or JSON file containing up to the past 90 days of CloudTrail
events for your AWS account.
Create a trail
A trail enables CloudTrail to deliver log files to your Amazon S3 bucket. By default,
when you create a trail in the console, the trail applies to all regions. The trail logs
events from all regions in the AWS partition and delivers the log files to the S3 bucket
that you specify.
CloudTrail is enabled on your AWS account when you create the account. When activity occurs
in any AWS service that supports CloudTrail, that activity is recorded in a CloudTrail event
along with other AWS service events in Event history. In other words, you can view, search,
and download recent events in your AWS account before creating a trail, though creating a trail
is important for long-term records and auditing of your AWS account activity. Unlike a trail,
Event history only shows events that have occurred over the last 90 days.
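The same Event history can also be queried from the AWS CLI; for example, a small sketch that looks up recent console sign-in events:
# Look up the ten most recent ConsoleLogin events recorded by CloudTrail
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=ConsoleLogin \
  --max-results 10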
• Sign in to the AWS Management Console using the IAM user you configured for
CloudTrail administration.
• Review the information in your dashboard about the most recent events that have
occurred in your AWS account. A recent event should be a ConsoleLogin event,
showing that you just signed in to the AWS Management Console.
• In the navigation pane, choose Event history. You see a filtered list of events, with the
most recent events showing first. The default filter for events is Read only, set to false.
You can clear that filter by choosing X at the right of the filter.
• Many more events are shown without the default filter. You can filter events in many
ways. For example, to view all console login events, you could choose the Event name
filter, and specify ConsoleLogin. The choice of filters is up to you.
• You can save event history by downloading it as a file in CSV or JSON format.
Downloading your event history can take a few minutes.
Practical-9
Aim: Load-balancing, Elasticity, and Scalability in AWS - Basics, Auto-scaling, Route53.
Load Balancing
Amazon provides its own service for load balancing, known as Elastic Load Balancing.
Elastic Load Balancing automatically distributes your incoming traffic across multiple targets,
such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It
monitors the health of its registered targets, and routes traffic only to the healthy targets. Elastic
Load Balancing scales your load balancer capacity automatically in response to changes in
incoming traffic.
The Application Load Balancer distributes incoming HTTP and HTTPS traffic across multiple
targets such as Amazon EC2 instances, microservices, and containers, based on request
attributes. When the load balancer receives a connection request, it evaluates the listener rules
in priority order to determine which rule to apply, and if applicable, it selects a target from the
target group for the rule action.
The Network Load Balancer distributes incoming TCP and UDP traffic across multiple targets
such as Amazon EC2 instances, microservices, and containers. When the load balancer receives
a connection request, it selects a target based on the protocol and port that are specified in the
listener configuration, and the routing rule specified as the default action.
• Launch the EC2 instances that you plan to register with your load balancer. Ensure that
the security groups for these instances allow HTTP access on port 80.
• Install a web server, such as Apache or Internet Information Services (IIS), on each
instance, enter its DNS name into the address field of an internet-connected web
browser, and verify that the browser displays the default page of the server.
From the left pane of EC2 Service under Load Balancing click on Load Balancers. Here we
can find four types of Load Balancers that are Application Load Balancer, Network Load
Balancer, Gateway Load Balancer, and Classic Load Balancer (previous generation). We will
create an Application Load Balancer for our practical.
Here we will provide the name of our Load Balancer i.e. My-Test-ALB. We will keep it to be
Internet-facing as we want our load balancer to route requests from clients over the internet to
target that in our case will be EC2 instances. We can select the type of IP addresses that our
subnets will use, for now, we will leave it to IPV4.
Here we will select at least two Availability Zones and one subnet per zone so that the load
balancer will route traffic to targets in these Availability Zones only. We will select us-east-
2a, us-east-2b, and us-east-2c and one subnet in them accordingly.
Here we can either create a new Security Group or choose from the existing ones. We will use
the same Security Group i.e., launch-wizard-1 which we created while configuring our EC2
instance.
A listener is a process that checks for connection requests, using the protocol and port we
configure while creating our AWS Application Load Balancer. Traffic received by the listener
is then routed per our specification. Here we can either create a new Target Group or choose
from the existing ones.
For creating a new Target Group, we will click on Create a target group. Note that here traffic
on port 80 will be forwarded to the Target Group created.
First, we will specify group details. Note that our load balancer will route requests to the targets
in a target group and perform health checks on the targets as well. Targets can be of different
types such as instances, IP addresses, a Lambda function, and even an AWS Application Load
Balancer. We will keep our target type as Instances, which we created above.
Here we will provide the name of our Target Group. In our case, we will keep it
to My-Test-Target-Group.
We will keep the remaining configurations as it is. Note that here we can also set the value of
Unhealthy threshold, Timeout, and Interval for health checks according to our
requirements. Currently, we will keep them to default values. Now we will click on Next.
Now the next step in the creation of Target Group is to register targets. We must register our
targets to ensure that our AWS Application Load Balancer routes traffic to this target group.
Here we can see the instances we created initially. Now we will click on Include as pending
below to register them.
Now our Target group can be viewed in the list of available Target Groups.
We will again get back to the configuration of our AWS Application Load Balancer and select
the Target Group created.
We can also add tags to our load balancer like we did while creating our EC2 instance but we
will leave it for now. Tags enable us to categorize our AWS resources so we can more easily
manage them.
Step-6: Summary
Here we can review and confirm configurations of our load balancer and then click on Create
load balancer.
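For reference, an equivalent Application Load Balancer setup can be sketched with the AWS CLI; every ID and ARN below is a placeholder for values from your own account:
# Target group for HTTP traffic in the VPC, then register the two web instances
aws elbv2 create-target-group --name My-Test-Target-Group --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 --target-type instance
aws elbv2 register-targets --target-group-arn <target-group-arn> \
  --targets Id=i-0aaaaaaaaaaaaaaaa Id=i-0bbbbbbbbbbbbbbbb
# Internet-facing load balancer across two subnets, plus a listener that forwards to the target group
aws elbv2 create-load-balancer --name My-Test-ALB --scheme internet-facing \
  --subnets subnet-0aaaaaaaaaaaaaaaa subnet-0bbbbbbbbbbbbbbbb --security-groups sg-0123456789abcdef0
aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>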
Now our AWS Application Load Balancer can be seen in the list of available load balancers.
Here we can view all the details related to the load balancer like Description, Listeners,
Monitoring, Integrated Services, and Tags. Note that our load balancer is currently in the
Provisioning state.
After a while, we can see that the state has changed to Active.
Now we will copy the DNS name of AWS Application Load Balancer from the description
and enter it in the browser to see the magic happening!
Now if we keep reloading the page, we can see that it is switching between the EC2 instances, which
indicates that load balancing is working. You can see how the load balancer diverts the traffic to
different servers to service the requests from users.
The code that we deployed on our EC2 instances was the following:
#!/bin/bash
# Update packages and install the Apache web server
yum update -y
yum install -y httpd
# Start Apache and enable it to run on boot
systemctl start httpd
systemctl enable httpd
# Serve a page that shows the instance's hostname
echo "<h1> Hello World from $(hostname -f) </h1>" > /var/www/html/index.html
AWS scalability
• The ability to increase the size of the workload either software or hardware in your
existing infrastructure and at the same time making sure that the performance is not
impacted is known as scalability in AWS.
• The ability to increase or decrease the resources quickly based on the need and to make
sure that it doesn’t affect the performance of the application.
• You can use Amazon CloudWatch to avail the autoscaling feature.
Elasticity
• In AWS, the process of getting the resources dynamically when you actually require
them and then release the resources when you are done and do not need them is known
as elasticity.
• In another way, growing or shrinking the resources dynamically when needed is known
as Elasticity.
• Increasing or decreasing the number of resources automatically based on the need is
known as Elasticity in AWS.
Elasticity vs Scalability
Elasticity: In AWS, the process of getting the resources dynamically when you actually require them and then releasing the resources when you are done and do not need them is known as elasticity.
Scalability: The ability to increase the size of the workload, either software or hardware, in your existing infrastructure while making sure that the performance is not impacted is known as scalability in AWS.
AWS Auto Scaling
• It is quite easy to set up auto scaling for your application. It monitors your application automatically, adjusts the capacity in terms of resources and instances, and makes sure that your application performs well.
• At a very low cost, you can optimize the performance of your application by utilizing the AWS Auto Scaling feature.
• If you want balanced resources for your application at the right time, then AWS Auto Scaling is the perfect choice.
• You can use the AWS Management Console or the SDKs to quickly set up the auto-scaling feature; an equivalent CLI sketch is shown below.
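The sketch below shows the CLI route: a target-tracking policy that keeps average CPU around 50% for the same placeholder Auto Scaling group my-asg. AWS then adds or removes instances automatically.
# Target-tracking policy: Auto Scaling adjusts capacity to hold average CPU near 50%.
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name my-asg \
    --policy-name keep-cpu-near-50 \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{
        "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" },
        "TargetValue": 50.0
    }'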
Route 53
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web
service. It is designed to give developers and businesses an extremely reliable and cost-
effective way to route end users to Internet applications by translating names like
www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect
to each other. Amazon Route 53 is fully compliant with IPv6 as well.
Features of Route 53
The routing policies and route table make it easier for route 53 to serve as a DNS service.
However, the features of Amazon route 53 drive its popularity. Let us take a look at the crucial
features of route 53 on AWS as follows:
Resolver
The “Resolver” feature of Route 53 helps in obtaining recursive DNS for Amazon VPC and on-premises networks. It also helps in the creation of conditional forwarding rules and DNS endpoints. Resolver is particularly useful for resolving custom names defined in private hosted zones of Route 53 or on on-premises DNS servers.
Traffic flow
The ease of use and cost-effectiveness with route 53 for global traffic management is one of its
commendable features. As discussed above, route 53 helps in routing end-users to the best
endpoint for an application. The routing policies provide control for choosing the criteria for
routing traffic to end-users.
Domain registration
The facility of domain registration services is the core of Amazon route 53. Users could search
for available domain names and register a domain name according to their choice. Furthermore,
users also have the option of transferring in existing domain names for management by route
53.
Zone apex support
Route 53 can route traffic to the zone apex, that is, the “naked” domain. For instance, visitors could access the website as xyz.com rather than www.xyz.com. The integration of Route 53 with Elastic Load Balancing (ELB) is a promising feature for routing such traffic.
Management Console
The compatibility of Amazon route 53 with the AWS Management Console is a reliable
indicator of its ease of use. The Management Console can help in the management of route 53
without having to write a single line of code. The Management Console is web-based and has
a point-and-click, graphical user interface, thereby improving ease of use for route 53.
Steps to configure Amazon Route 53
1. Open the Route 53 service from the AWS Management Console.
2. Find the option of “Create Hosted Zone” on the top left side of the navigation bar and click on it.
3. You would find a form page after completing the previous step. In this step, you have
to provide important details such as domain names and comments. After entering the
required information, click on the “Create” button.
4. Now, you have a hosted zone for the domain. You can find four DNS endpoints known
as delegation set. You have to update the endpoints in the Nameserver settings of
domain name.
5. Open the domain’s control panel at your domain hosting service and update the nameservers to the Amazon Route 53 DNS endpoints. Delete the remaining default values; the update usually takes two to three minutes to propagate.
6. Now, return to the Route 53 console and select the “Go to Record Sets” option. You will find a list of record sets; by default it contains records of the NS and SOA types.
7. To create a record set, click on the “Create Record Set” option. Fill in the important details such as Name, Routing Policy, Alias, Type, TTL (seconds), and Value, and then click on the “Save Record Set” button.
8. In the final step, create another record set for another region. This gives you two record sets with the same domain name pointing to different IP addresses according to the chosen routing policy.
9. After completing the configuration of Amazon Route 53, the routing of user requests will follow the chosen routing policy; an equivalent CLI sketch is shown below.
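For reference, steps 2 to 7 roughly map to the following CLI sketch, using the placeholder domain example.com and the documentation-only IP address 192.0.2.1.
# Create the hosted zone and capture its ID.
ZONE_ID=$(aws route53 create-hosted-zone \
    --name example.com \
    --caller-reference "my-zone-$(date +%s)" \
    --query 'HostedZone.Id' --output text)

# Print the delegation set (the NS endpoints to copy into the registrar's settings).
aws route53 get-hosted-zone --id "$ZONE_ID" --query 'DelegationSet.NameServers'

# Create an A record set pointing www.example.com at a placeholder address.
aws route53 change-resource-record-sets \
    --hosted-zone-id "$ZONE_ID" \
    --change-batch '{
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{ "Value": "192.0.2.1" }]
            }
        }]
    }'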
Practical: 10
Aim:- Create, test, and monitor a serverless function using AWS Lambda.
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software-as-a-service (SaaS) applications, and you pay only for what you use.
Use cases
Process data at scale: Execute code at the capacity you need, as you need it. Scale to match
your data volume automatically and enable custom event triggers.
Run interactive web and mobile backends: Combine AWS Lambda with other AWS
services to create secure, stable, and scalable online experiences.
Enable powerful ML insights: Preprocess data before feeding it to your machine learning
(ML) model. With Amazon Elastic File System (EFS) access, AWS Lambda handles
infrastructure management and provisioning to simplify scaling.
First of all, we have to open the Lambda service from the Services menu on the AWS console.
Blueprints provide example code to do some minimal processing. Most blueprints process
events from specific event sources, such as Amazon S3, DynamoDB, or a custom application.
Note: The console shows this page only if you do not have any Lambda functions created. If
you have created functions already, you will see the Lambda > Functions page. On the list
page, choose Create a function to go to the Create function page.
b. Select Blueprints.
c. In the Filter box, type in hello-world-python and select the hello-world-python blueprint.
d. Then click Configure.
A Lambda function consists of code you provide, associated dependencies, and configuration.
The configuration information you provide includes the compute resources you want to allocate
(for example, memory), execution timeout, and an IAM role that AWS Lambda can assume to
execute your Lambda function on your behalf.
a. You will now enter Basic Information about your Lambda function.
Basic Information:
• Name: You can name your Lambda function here. For this tutorial, enter hello-world-
python.
• Role: You will create an IAM role (referred to as the execution role) with the necessary permissions that AWS Lambda can assume to invoke your Lambda function on your behalf. Select Create new role from template(s).
• Role name: type lambda_basic_execution
b. Lambda function code: In this section, you can review the example code authored in Python.
c. Runtime: Currently, you can author your Lambda function code in Java, Node.js, C#, Go, or Python. For this tutorial, leave this on Python 2.7 as the runtime (note that Python 2.7 has since been deprecated; for new functions, a Python 3.x runtime is recommended).
d. Handler: You can specify a handler (a method/function in your code) where AWS Lambda
can begin executing your code. AWS Lambda provides event data as input to this handler,
which processes the event.
In this example, Lambda identifies this from the code sample and this should be pre-populated
with lambda_function.lambda_handler.
e. Scroll down to configure your memory, timeout, and VPC settings. For this tutorial, leave the default Lambda function configuration values. An equivalent CLI sketch for creating the same function is shown below.
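As a sketch, the same function can be created with the AWS CLI. The role name matches the tutorial, but the account ID in the role ARN is a placeholder, and the runtime is set to python3.9 because Python 2.7 can no longer be used for new functions.
# Package the handler file and create the function (placeholder account ID in the role ARN).
zip function.zip lambda_function.py

aws lambda create-function \
    --function-name hello-world-python \
    --runtime python3.9 \
    --handler lambda_function.lambda_handler \
    --role arn:aws:iam::111122223333:role/lambda_basic_execution \
    --memory-size 128 \
    --timeout 3 \
    --zip-file fileb://function.zip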
The console shows the hello-world-python Lambda function - you can now test the function,
verify results, and review the logs.
a. Select Configure Test Event from the drop-down menu called "Select a test event...".
• Choose Hello World from the Sample event template list on the Input test event page.
• Type in an event name like HelloWorldEvent.
• You can change the values in the sample JSON, but don’t change the event structure. For this tutorial, replace value1 with hello, world!.
b. Select Create.
c. Select Test. An equivalent CLI invocation is sketched below.
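The function can also be invoked from the CLI with the same test payload (the --cli-binary-format flag is needed on AWS CLI v2 so the JSON is passed through as-is).
# Invoke the function with the HelloWorldEvent payload and print the returned value.
aws lambda invoke \
    --function-name hello-world-python \
    --cli-binary-format raw-in-base64-out \
    --payload '{"key1": "hello, world!", "key2": "value2", "key3": "value3"}' \
    response.json

cat response.json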
AWS Lambda automatically monitors Lambda functions and reports metrics through Amazon
CloudWatch. To help you monitor your code as it executes, Lambda automatically tracks the
number of requests, the latency per request, and the number of requests resulting in an error
and publishes the associated metrics.
a. Invoke the Lambda function a few more times by repeatedly clicking the Test button. This
will generate the metrics that can be viewed in the next step.
b. Select Monitoring to view the results.
c. Scroll down to view the metrics for your Lambda function. Lambda metrics are reported
through Amazon CloudWatch. You can leverage these metrics to set custom alarms.
The Monitoring tab will show six CloudWatch metrics: Invocation count, Invocation duration,
Invocation errors, Throttled invocations, Iterator age, and DLQ errors.
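The same invocation count can also be pulled from CloudWatch with the CLI, for example for the last hour (the date commands below assume GNU date).
# Sum the Lambda invocation count over the last hour in 5-minute buckets.
aws cloudwatch get-metric-statistics \
    --namespace AWS/Lambda \
    --metric-name Invocations \
    --dimensions Name=FunctionName,Value=hello-world-python \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 \
    --statistics Sum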
While you are not charged for simply keeping your Lambda function (you pay only when it runs), you can easily delete it from the AWS Lambda console or from the CLI, as sketched below.
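The CLI equivalent of this clean-up step:
# Delete the function once the practical is finished.
aws lambda delete-function --function-name hello-world-python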