Unit 2
• Users who need computing power are otherwise expected to invest heavily in computing
resources such as hardware, software, networking, and storage; buying all of these resources
up front requires a large outlay of money, which adds a huge expenditure for traditional
academic institutions and individuals alike.
• On the other hand, it is easy and convenient to obtain the required computing power and resources
from a provider (or supplier) as and when needed, and pay only for that usage. This
costs only a modest, recurring amount compared to the huge investment required to buy the
entire computing infrastructure. This trade-off can be viewed as capital
expenditure versus operational expenditure.
• One can thus compare the large lump sum needed to buy the computing infrastructure outright
with the much smaller amount needed to hire it only for the time it is actually required, with no
commitment for the rest of the time.
An added benefit of this model is that even if we lose our laptop, or our personal computer or
desktop system gets damaged in some crisis, our data and files remain safe and secure, because
they are not stored on our local machine but on the provider's remotely located machines.
Cloud storage in particular is a fast-growing solution, especially among individuals and
small- and medium-sized enterprises (SMEs).
Thus, cloud computing comes into focus as a much-needed subscription-based or pay-per-use
service model of offering computing to end users or customers over the Internet, thereby
extending IT's existing capabilities.
Figure 2.1 shows several cloud computing applications; the cloud represents Internet-based computing
resources, which are accessed over some form of secure connectivity.
NIST Definition of Cloud Computing
The formal definition of cloud computing comes from the National Institute of Standards and
Technology (NIST): “Cloud computing is a model for enabling ubiquitous, convenient, on-demand
network access to a shared pool of configurable computing resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly provisioned and released with minimal management
effort or service provider interaction.”
According to this definition, the cloud model is composed of:
(a) Five essential characteristics of cloud computing,
(b) Four deployment models that describe the cloud computing opportunities for customers
when looking at architectural models, and
(c) Three important and basic service offering models of cloud computing.
Cloud computing has five essential characteristics, which are shown in Figure 2.2. Readers can note the
word essential, which means that if any of these characteristics is missing, then it is not cloud
computing:
1. On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server
time and network storage, as needed automatically, without requiring human interaction with each
service provider.
2. Broad network access: Capabilities are available over the network and accessed through standard
mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones,
laptops, and personal digital assistants [PDAs]).
3. Elastic resource pooling: The provider’s computing resources are pooled to serve multiple consumers
using a multitenant model, with different physical and virtual resources dynamically assigned and
reassigned according to consumer demand. There is a sense of location independence in that the
customer generally has no control or knowledge over the exact location of the provided resources but
may be able to specify the location at a higher level of abstraction (e.g.,country, state, or data center).
Examples of resources include storage,processing, memory, and network bandwidth.
4. Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to
quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for
provisioning often appear to be unlimited and can be purchased in any quantity at any time.
5. Measured service: Cloud systems automatically control and optimize resource use by leveraging a
metering capability at some level of abstraction appropriate to the type of service (e.g., storage,
processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and
reported, providing transparency for both the provider and consumer of the utilized service.
Deployment models are also called types of cloud. These deployment models describe the ways in
which cloud services can be deployed or made available to customers, depending on the
organizational structure and the provisioning location. One can also understand it this way: cloud
(Internet)-based computing resources, that is, the locations where data and services are acquired and
provisioned to customers, can take various forms. Four deployment models are usually
distinguished, namely:
(1) Public
(2) Private
(3) Hybrid
(4) Community
Public Cloud
The public cloud makes it possible for anybody to access systems and services. The public cloud
may be less secure because it is open to everyone. In the public cloud, infrastructure services
are provided over the internet to the general public or to major industry groups.
Private Cloud
The private cloud deployment model is the exact opposite of the public cloud deployment
model. It is a dedicated environment for a single user (customer), so there is no need to share
your hardware with anyone else. The distinction between private and public clouds lies in who
handles all of the hardware. It is also called the "internal cloud" and refers to the ability to
access systems and services within a given boundary or organization.
Hybrid Cloud
By bridging the public and private worlds with a layer of proprietary software, hybrid cloud
computing gives the best of both worlds. With a hybrid solution, you may host the app in a safe
environment while taking advantage of the public cloud’s cost savings. Organizations can move
data and applications between different clouds using a combination of two or more cloud
deployment methods, depending on their needs.
Community Cloud
A community cloud is like a shared space in the digital world where multiple organizations or
groups with similar interests or needs can come together and use the same cloud computing
resources. In a community cloud, these organizations share the same infrastructure, like servers
and storage, but they have their own separate areas within that shared space where they can
store and manage their data and applications. This allows them to collaborate and benefit from
shared resources while still maintaining some level of privacy and control over their own data.
(i) Infrastructure as a Service (IaaS): Infrastructure as a Service is a service model that delivers
computer infrastructure on an outsourced basis to support various operations. Typically, IaaS provides
infrastructure such as networking equipment, devices, databases, and web servers to enterprises as an
outsourced service. It is also known as Hardware as a Service (HaaS). IaaS customers pay on a
per-use basis, typically by the hour, week, or month. Some providers also charge customers based on
the amount of virtual machine space they use.
(ii) Platform as a Service (PaaS): It provides the underlying operating systems, security, networking, and
servers for developing applications and services, along with development tools, databases, and so on.
PaaS is a category of cloud computing that provides a platform and environment to allow developers to
build applications and services over the internet. PaaS services are hosted in the cloud and accessed by
users simply via their web browser.
A PaaS provider hosts the hardware and software on its own infrastructure. As a result, PaaS frees users
from having to install in-house hardware and software to develop or run a new application. Thus, the
development and deployment of the application take place independent of the hardware.
(iii) Software as a Service (SaaS):
Software as a Service (SaaS) is a category of cloud computing that provides software applications over
the internet, eliminating the need for users to install or maintain software on their local devices. SaaS
applications are hosted in the cloud and accessed by users via a web browser. A SaaS provider manages
the infrastructure, security, maintenance, and updates, ensuring that users can focus on utilizing the
software without worrying about backend management. This model enables businesses and individuals
to use applications on a subscription or pay-per-use basis, reducing upfront costs and simplifying
software deployment.
AWS Case Study
Organizations used to have a difficult time finding, storing, and managing all of their data. Running
applications, delivering content to customers, hosting high-traffic websites, or backing up emails and
other files required a lot of storage, and maintaining the organization's repository was expensive and
time-consuming. AWS addresses these challenges with services such as the following:
• AWS's core storage service is Amazon S3 ("Simple Storage Service"), one of its most powerful and
commonly used services. S3 enables users to store and retrieve any amount of data at any time
and from any place, giving developers access to highly scalable, reliable, fast, and inexpensive
data storage. Designed for 99.999999999 percent durability, S3 also provides easy management
features to organize data for websites, mobile applications, backup and restore, and many other
applications (a short usage sketch appears after this list).
• Another example of a cloud application is web-based e-mail (e.g., Gmail, Yahoo Mail); here the
e-mail user relies on the cloud, since all of the emails in their inbox are stored on servers at
remote locations at the e-mail service provider.
• However, there are many other services that use the cloud in different ways. Here is yet
another example: Dropbox is a cloud storage service that lets us easily store and share files with
other people and access files from a mobile device as well.
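To make the storage examples above concrete, here is a minimal boto3 sketch of working with S3; the bucket name, object keys, and file names are placeholders, and standard AWS credentials are assumed:

import boto3

# Create an S3 client using the default AWS credential chain
s3 = boto3.client("s3")

BUCKET = "my-example-bucket"  # placeholder: bucket names must be globally unique

# Upload a local file as an object under the "backups/" prefix
s3.upload_file("report.pdf", BUCKET, "backups/report.pdf")

# Download the object back to a local file
s3.download_file(BUCKET, "backups/report.pdf", "report-copy.pdf")

# List the objects stored under the "backups/" prefix
response = s3.list_objects_v2(Bucket=BUCKET, Prefix="backups/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])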
Elastic resource pooling
Elastic resource pooling is a fundamental concept in cloud computing that refers to the ability to
dynamically allocate and share computing resources among multiple users or applications based on
demand. It enables efficient utilization of resources by allowing them to be flexibly provisioned and
scaled according to workload fluctuations. This concept is central to the scalability, efficiency, and cost-
effectiveness of cloud computing environments.
1. Dynamic Resource Allocation: In a traditional, statically provisioned environment, resources are
allocated up front and often sit idle. Elastic resource pooling, on the other hand, enables dynamic
allocation of resources, allowing them to be provisioned and de-provisioned automatically in response
to changing workload requirements.
2. Shared Resource Pool: In a cloud computing environment, resources are pooled together into a
shared infrastructure that can be accessed by multiple users or applications simultaneously. This shared
resource pool includes a variety of resources such as virtual machines (e.g., Amazon EC2 instances),
storage (e.g., Amazon S3, Azure Blob Storage), databases (e.g., Amazon RDS, Azure SQL Database), and
networking resources (e.g., Amazon VPC, Azure Virtual Network). Users or applications can access these
resources on-demand, without having to provision or manage the underlying physical infrastructure.
3. Scalability: Elastic resource pooling enables horizontal and vertical scalability, allowing resources to
be scaled out (adding more instances) or scaled up (increasing the capacity of existing instances) to meet
changing workload demands. This scalability is essential for handling sudden spikes in traffic or
processing-intensive tasks without experiencing performance degradation or downtime.
4. Pay-per-Use Model: Elastic resource pooling is often associated with a pay-per-use or pay-as-you-go
pricing model, where users are charged based on their actual resource consumption rather than a fixed
upfront cost. This consumption-based pricing model aligns costs with usage, allowing organizations to
scale their infrastructure cost-effectively and pay only for the resources they consume.
5. Resilience and High Availability: Elastic resource pooling enhances resilience and high availability by
distributing workloads across multiple redundant resources within the shared pool. In the event of
hardware failures or infrastructure issues, workloads can be automatically migrated to alternate
resources without impacting performance or availability.
Amazon EC2 (Elastic Compute Cloud)
An EC2 instance is a virtual server in the AWS cloud, built from the following pieces.
Virtual Hardware: It includes virtual processors, memory (RAM), and storage. You can choose the type
and size of this virtual hardware based on your needs. For example, you might need more processing
power for running complex calculations or more storage for storing large files.
Operating System: You can choose the operating system you want to run on your EC2 instance, such as
Amazon Linux, Ubuntu, Windows Server, etc. This is just like choosing the operating system for your
personal computer.
Networking: Each EC2 instance has its own network settings, including an IP address, security groups
(firewall rules), and network interfaces. This allows your instance to communicate with other resources
in the AWS cloud and the internet.
Purpose: You can use EC2 instances for various purposes, such as hosting a website, running
applications, processing data, or even for machine learning tasks. It's like having a versatile computer
that you can use for different jobs.
1. Instances: Think of these as virtual computers in the cloud. You can create as many as you need.
2. Tags: These are like sticky notes you can attach to your virtual resources to help you organize
and identify them.
3. Amazon Machine Images (AMIs): These are like pre-made setups for your virtual computers.
They include everything your computer needs to run, like the operating system and any extra
software.
4. Instance Types: Imagine these as different models of virtual computers, with varying levels of
power and storage space.
5. Key Pairs: It's like having a set of keys to lock and unlock your virtual computer securely. AWS
keeps one key (the public key), and you keep the other (the private key).
Note:-Anyone who possesses your private key can connect to your instances, so it's important that
you store your private key in a secure place.
6. Virtual Private Clouds (VPCs): Imagine this as your own private corner of the cloud, where you
can set up your virtual computers and connect them to your own network. It's like having your
own little piece of the cloud just for you.
7. Regions and Availability Zones: Think of these as different neighborhoods and houses in the
cloud. You can choose where to place your virtual computers and storage.
8. Amazon EBS Volumes: This is like having a hard drive in the cloud. You can store your data here
permanently, even if you turn off your virtual computer.
9. Security Groups: These act like a virtual fence around your virtual computers. You can decide
who can come in and what they can do.
10. Elastic IP Addresses: It's like having a permanent address for your virtual computer, even if you
move it around or turn it off.
1. Navigate to EC2:
-In the AWS Management Console,
go to the "Services" dropdown.
- Under "Compute," select "EC2.“
2. Launch an Instance:
- Click on the "Instances" link in the left panel bar.
- Click the "Launch Instances" button.
Create a new key pair by giving it a name, then click on "Create key pair".
8. Add Storage:
- Configure the storage settings for your instance. You can increase the default size if needed.
9. Launch Instances:
10. Before launching the instance, double-check all configurations and the number of instances you want.
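As a programmatic alternative to the console walkthrough above, the same launch can be scripted with boto3. This is a minimal sketch; the region, AMI ID, key pair name, and security group ID are placeholders that must be replaced with real values from your account:

import boto3

ec2 = boto3.resource("ec2", region_name="ap-south-1")  # placeholder region

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder AMI ID
    InstanceType="t2.micro",
    KeyName="my-key-pair",                      # placeholder key pair name
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
    MinCount=1,
    MaxCount=1,
)

# Wait for the instance to reach the running state and print its details
instance = instances[0]
instance.wait_until_running()
instance.reload()
print("Launched:", instance.id, instance.public_ip_address)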
Elastic Block Store (EBS) is like having a virtual hard drive for your computer in the cloud.
Imagine you have a computer where you can store all your files and data.
In the cloud, your computer is virtual, but you still need a place to store your stuff. That's where EBS
comes in.
The "elastic" part of EBS means that you can easily adjust the size of your storage volumes as needed. If
you need more space, you can simply increase the size of your EBS volume. And if you no longer need as
much space, you can decrease it. This flexibility makes it easy to scale your storage up or down
depending on your needs.
First, let's understand the terminology of vertical and horizontal scaling, and then we'll dive into Amazon
Elastic Block Store (EBS) in AWS.
1. Vertical Scaling:
Vertical scaling, also known as scaling up or scaling vertically, involves increasing the capacity of a
single server or instance by adding more resources such as CPU, memory, or storage. In this approach,
you typically upgrade the existing hardware components of the server to handle increased workload or
resource requirements. For example, you might add more RAM to a server to improve its performance.
Vertical scaling has its limitations, as there's only so much you can upgrade a single server before
reaching its maximum capacity. It also introduces potential points of failure, as all resources are
concentrated in a single instance.
2. Horizontal Scaling:
Horizontal scaling, also known as scaling out or scaling horizontally, involves adding more servers or
instances to distribute the workload across multiple machines. Instead of upgrading a single server, you
add more servers to your infrastructure to handle increased demand. Each server operates
independently and can share the workload with other instances.
Horizontal scaling offers greater scalability compared to vertical scaling, as it allows you to add more
resources as needed without being limited by the capacity of a single server. It also provides redundancy
and fault tolerance, as multiple instances can continue to operate even if one or more servers fail.
Key features and benefits of Amazon EBS:
1. Elasticity: With Amazon EBS, you can easily scale your storage capacity up or down based on your
application's needs. You can create EBS volumes of different sizes and types (such as General Purpose
SSD, Provisioned IOPS SSD, and Throughput Optimized HDD) and attach them to your EC2 instances as
needed.
2. Durability and Availability: Data stored on EBS volumes is replicated within a single Availability Zone
(AZ) to protect against component failure. You can also create snapshots of your EBS volumes to back up
your data and restore it in case of data loss or corruption.
3. Performance: Amazon EBS offers different types of storage volumes optimized for different
performance requirements. For example, General Purpose SSD volumes provide a balance of price and
performance for a wide range of workloads, while Provisioned IOPS SSD volumes offer predictable
performance for I/O-intensive applications.
4. Snapshots and Backup: You can create point-in-time snapshots of your EBS volumes, which are stored
in Amazon S3. These snapshots can be used to back up your data, migrate volumes between Availability
Zones, or create new volumes.
Vertical Scaling
Amazon EBS (Elastic Block Store): Amazon EBS is a high-performance block storage service
designed for use with EC2 instances. It provides persistent storage for virtual machines, meaning
data remains available even after the instance is stopped or terminated.
EBS volumes can be attached to EC2 instances and support features like snapshots, encryption, and
scalability. Scale Up and Scale Down.
Definition: Scale up, also known as vertical scaling, involves increasing the resources (such as CPU,
RAM, or disk space) of an existing single server or virtual machine to handle increased workload or
performance requirements.
EBS equivalent: increasing the size or performance specifications of an existing EBS volume.
Procedure: increasing/decreasing the CPU and RAM of the instance.
- In the "Add Storage" step, you can add additional volumes. Specify the size and other attributes for
these volumes.
After attaching the new volume to the instance, partition, format, and mount it from the instance's shell
(the device name /dev/xvdf and the mount point /mnt/archana follow this example; yours may differ):
1. Partition the new device:
fdisk /dev/xvdf
2. Verify the partitions and file systems:
lsblk -fs
3. Create a mount point directory:
mkdir /mnt/archana
4. Mount the partition (format it first if it is new, e.g. with mkfs):
mount /dev/xvdf1 /mnt/archana
5. Verify the mounted file system:
df -h
6. To make the mount persistent across reboots, add an entry in /etc/fstab:
nano /etc/fstab
Note: to retrieve the files later, you need to create the directory and mount the volume on it.
a. Use the region selector in the top-right corner of the AWS Management Console to switch to the
destination region where you copied the snapshot.
We have created a snapshot of an EBS volume, copied it to another region, created a volume from the
snapshot, and attached it to an instance in the destination region.
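The same snapshot workflow can be scripted with boto3; in this sketch the source/destination regions, snapshot ID, and availability zone are placeholders:

import boto3

SOURCE_REGION = "ap-south-1"              # region where the snapshot exists (placeholder)
DEST_REGION = "us-east-1"                 # destination region (placeholder)
SNAPSHOT_ID = "snap-0123456789abcdef0"    # placeholder snapshot ID

# The copy is requested from the destination region
dest_ec2 = boto3.client("ec2", region_name=DEST_REGION)
copy = dest_ec2.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=SNAPSHOT_ID,
    Description="Cross-region copy of EBS snapshot",
)
new_snapshot_id = copy["SnapshotId"]

# Wait until the copied snapshot is complete, then create a volume from it
dest_ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[new_snapshot_id])
volume = dest_ec2.create_volume(
    SnapshotId=new_snapshot_id,
    AvailabilityZone="us-east-1a",        # placeholder AZ in the destination region
)
print("New volume in", DEST_REGION, ":", volume["VolumeId"])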
- Security Group: Create a new security group (SG), add an inbound NFS rule (port 2049), and allow access from anywhere.
8. Repeat the above steps to launch another instance named EFS-2 in Subnet-1b with the same
configuration.
3. Specify details:
- Name: Optional
- VPC: Default
Click on "Next".
Step 3: Accessing the two EC2 instances named EFS-1 & EFS-2 in two different PowerShell
sessions and performing the specified tasks:
- Use the SSH command to connect to the EFS-1 instance. Replace `[instance-public-
ip]` with the public IP address of the instance:
sudo su
4. Create a Directory:
mkdir efs
6. List Files:
ls
- Optionally, you can verify the installation of the Amazon EFS utilities by checking the
version:
efs-utils --version
Repeat Steps 2-7 in the PowerShell Session Corresponding to the EFS-2 Instance.
By following these steps, you'll have accessed each instance (EFS-1 and EFS-2) in a separate
PowerShell session, switched to the root user, created a directory named "efs," installed the
Amazon EFS utilities, and listed files in the directory. This setup allows you to configure and
manage each instance individually as needed.
7. Paste and execute the copied command in the terminal to mount the EFS file system onto
the instance.
3. Verify that the file automatically synchronizes and appears on the other instance.
By following the above steps, we have successfully created EC2 instances, configured them, created an
EFS file system, and attached it to the instances in the same availability zone in the Mumbai region.
Now the instances can seamlessly access and share files stored in the EFS file system.
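For reference, the EFS file system and its mount targets can also be created programmatically with boto3; the region, subnet IDs, and security group ID below are placeholders:

import boto3
import time

efs = boto3.client("efs", region_name="ap-south-1")        # placeholder region (Mumbai)

# Create the file system; the creation token makes the call idempotent
fs = efs.create_file_system(CreationToken="shared-efs-demo")
fs_id = fs["FileSystemId"]

# Poll until the file system reaches the "available" state
while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0]["LifeCycleState"] != "available":
    time.sleep(5)

# Create one mount target per subnet so instances in both AZs can mount the file system
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:   # placeholder subnet IDs
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],           # placeholder SG that allows NFS (port 2049)
    )

print("File system ready:", fs_id)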
Object Storage:
At its core, S3 is an object storage service. Unlike traditional file systems that store data in a hierarchical
structure (folders and directories), S3 organizes data as objects within buckets. Each object consists of
the following components:
i. Key: A unique identifier for the object within the bucket, similar to a file path.
ii. Value: The actual data or content of the object, which can be anything from a document, image,
video, to application data.
iii. Metadata: Key-value pairs associated with the object, providing additional information such as
content type, creation date, or custom attributes.
Buckets:
Buckets are containers for storing objects within S3. They serve as the top-level namespace for
organizing and managing objects. When you create an S3 bucket, you must choose a globally unique
name, as bucket names must be unique across all of AWS. Each AWS account can have multiple buckets,
allowing you to segregate and manage data according to your organizational needs.
1. Durability and Availability: S3 is designed for high durability, with objects replicated across
multiple data centers within a region to ensure data availability and resilience to hardware
failures. Amazon S3 is designed for 99.999999999% (11 9's) of durability
2. Scalability: S3 is highly scalable and can accommodate virtually unlimited amounts of data,
making it suitable for small-scale applications and large enterprises alike.
3. Security: S3 provides robust security features, including encryption of data at rest and in transit,
access control through IAM policies, bucket policies, and ACLs, and integration with AWS
Identity and Access Management (IAM) for fine-grained access control.
4. Lifecycle Management: You can define lifecycle policies to automate data management tasks
such as transitioning objects to lower-cost storage tiers (e.g., S3 Standard-IA, S3 One Zone-IA, S3
Glacier) or deleting expired data.
i. S3 Standard
S3 Standard is the default storage class for Amazon S3. It is designed for frequently accessed
data that requires high durability, availability, and low latency.
ii. S3 Standard-IA (Infrequent Access):
S3 Standard-IA is a storage class designed for data that is accessed less frequently but
requires rapid access when needed.
It offers the same durability, availability, and low latency as S3 Standard, but at a lower
storage cost.
Objects stored in S3 Standard-IA are ideal for long-term storage and periodic access
scenarios.
iii. S3 One Zone-IA (Infrequent Access):
S3 One Zone-IA is similar to S3 Standard-IA but stores data in a single AWS Availability Zone
rather than across multiple zones.
It provides cost savings compared to S3 Standard-IA, making it an attractive option for
workloads that can tolerate data loss in the event of an Availability Zone failure.
iv. S3 Glacier:
S3 Glacier is a low-cost storage class designed for long-term data archival and digital
preservation.
It offers significantly lower storage costs compared to S3 Standard and S3 Standard-IA, with
the trade-off of longer retrieval times.
Objects stored in S3 Glacier are ideal for data that is rarely accessed and can tolerate
retrieval times ranging from minutes to several hours.
5. Versioning: Versioning in Amazon S3 is a feature that allows you to keep multiple versions of
an object in the same bucket. Whenever you upload a new version of an object with the same
key (name) as an existing object, S3 doesn't overwrite the existing object but instead creates a
new version of it. This means you can retain and access previous versions of objects, providing
an additional layer of protection against accidental deletions or overwrites (a short sketch appears after this list).
i. Accidental deletion protection: With versioning enabled, if you accidentally delete
an object, you can still retrieve previous versions of it, preventing data loss.
ii. Accidental overwrite protection: Versioning prevents accidental overwrites of
objects. Even if you upload a new version of an object with the same name, the
previous versions are preserved.
7. Event Notifications: S3 can trigger events (e.g., object creation, deletion) that can be routed to
AWS Lambda, SNS, or SQS for processing.
8. Integration: S3 seamlessly integrates with other AWS services such as AWS Lambda, Amazon
CloudFront, AWS Glue, Amazon Athena, and Amazon EMR, enabling you to build powerful data
processing and analytics workflows.
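As a short boto3 sketch of the versioning feature mentioned above (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"   # placeholder bucket name

# Turn on versioning for the bucket
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Upload the same key twice; S3 keeps both versions instead of overwriting
s3.put_object(Bucket=BUCKET, Key="notes.txt", Body=b"first draft")
s3.put_object(Bucket=BUCKET, Key="notes.txt", Body=b"second draft")

# List all stored versions of the object
versions = s3.list_object_versions(Bucket=BUCKET, Prefix="notes.txt")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])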
Amazon VPC (Virtual Private Cloud)
It allows you to provision a logically isolated section of the AWS cloud where you can launch AWS
resources in a virtual network that you define.
VPCs give you control over your network configuration, including IP address ranges, subnets, routing
tables, network gateways, and security settings
CIDR Block:
CIDR is a method used to represent IP addresses and specify address ranges in a more flexible and
efficient way than traditional IP address classes (Class A, B, and C).
- With CIDR, an IP address is followed by a slash ("/") and a number, known as the prefix length or
subnet mask.
- The prefix length indicates the number of bits used for the network portion of the address.
- For example, in the CIDR notation "192.168.1.0/24", the "/24" indicates that the first 24 bits
represent the network portion of the address, leaving 8 bits for the host portion.
- For example, you might define a CIDR block of 10.0.0.0/16, which allows for up to 65,536 IP
addresses.
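The address arithmetic in these examples can be checked with Python's standard ipaddress module:

import ipaddress

vpc_block = ipaddress.ip_network("10.0.0.0/16")
subnet_block = ipaddress.ip_network("10.0.1.0/24")

print(vpc_block.num_addresses)            # 65536 addresses in the /16 block
print(subnet_block.num_addresses)         # 256 addresses in the /24 block
print(subnet_block.subnet_of(vpc_block))  # True: the /24 subnet lies inside the /16 VPC block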
Subnets:
- Subnets are segments of the VPC's IP address range where you can place groups of resources, such as
EC2 instances, RDS databases, and Lambda functions.
- Each subnet resides in a specific Availability Zone (AZ) and has its own CIDR block, which is a subset of
the VPC's CIDR block.
Steps to create a VPC (Virtual Private Cloud) network in AWS with one public server and another
private server:
- IPv4 CIDR block: Define the IP address range for your VPC (e.g., 10.0.0.0/16).
5. Click on "Create."
- IPv4 CIDR block: Define the IP address range for the subnet (e.g., 10.0.1.0/24 for a public subnet).
4. Click on "Create."
Select the subnet ( 10.0.1.0/24 ) --> Actions --> Modify Auto-Assign IP Settings
Repeat the above steps to create another subnet for the private server (e.g., PrivateSubnet with an IP
address range like 10.0.2.0/24).
1. In the VPC dashboard, click on "Internet Gateways" in the left-hand navigation pane.
1. In the VPC dashboard, click on "Route Tables" in the left-hand navigation pane.
Select the route table ( InternetRT ) ---> Subnet Associations tab ---> Edit Subnet Associations --->
Select the subnet ( 10.0.1.0/24 ) -- Save
4. Add a route with destination `0.0.0.0/0` and target the internet gateway created in Step 3.
- Configure security groups to allow inbound traffic on port 80 (HTTP) or other necessary ports.
- Ensure the security group only allows necessary inbound traffic from the public server or other
trusted sources.
- The public server will have a public IP address and can be accessed directly over the internet.
- The private server will not have a public IP address and can only be accessed from within the VPC or
through a VPN connection.
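The same public/private layout can also be built programmatically with boto3. The sketch below uses the Mumbai region and its availability zones as placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")   # placeholder region

# 1. Create the VPC with a /16 block
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# 2. Create the public and private subnets
public_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                                  AvailabilityZone="ap-south-1a")["Subnet"]["SubnetId"]
private_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24",
                                   AvailabilityZone="ap-south-1b")["Subnet"]["SubnetId"]

# Auto-assign public IPs only in the public subnet
ec2.modify_subnet_attribute(SubnetId=public_subnet, MapPublicIpOnLaunch={"Value": True})

# 3. Create an internet gateway and attach it to the VPC
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# 4. Route table with a default route to the internet, associated with the public subnet only
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet)

print("VPC:", vpc_id, "public subnet:", public_subnet, "private subnet:", private_subnet)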
Amazon Lambda:
AWS offers various categories of computing services, including virtual servers (EC2), containers, and serverless functions (Lambda).
In Amazon Web Services (AWS), Lambda is a serverless computing service that allows you to run code
without provisioning or managing servers.
Lambda enables developers to execute backend tasks in response to events triggered by various sources,
thanks to its event-driven architecture and seamless integration with other services.
It lets you run your code in response to events, such as changes to data in an Amazon S3 bucket or an
Amazon DynamoDB table, or in response to HTTP requests using Amazon API Gateway, and it offers a
scalable and efficient way to execute background functions without user-interface involvement.
Traditional web hosting typically involves provisioning physical servers or dedicated servers from a
hosting provider. Here is how it generally works without relying on cloud computing:
1. Server Procurement: The first step is to acquire physical servers or dedicated servers from a hosting
provider. These servers are usually located in data centers managed by the provider.
2. Server Configuration: Once the servers are procured, they need to be set up and configured according
to the requirements of the website or application. This includes installing the operating system, web
server software (such as Apache or Nginx), database server software (such as MySQL or PostgreSQL),
and any other necessary software components.
4. Application Deployment: Once the servers are set up and configured, the website or application files
need to be deployed to the server. This involves transferring the files from the development
environment to the production server and configuring the web server to serve the application.
5. Monitoring and Maintenance: This includes tasks such as monitoring server performance, applying
software updates and patches, and troubleshooting any issues that arise.
Traditional web hosting without relying on virtual machines like EC2 requires significant upfront
investment in hardware and infrastructure setup, as well as ongoing maintenance and management
efforts to keep the servers running smoothly.
While EC2 virtual machines simplified many aspects of web hosting, what purpose does Lambda
serve?
Before getting into the significance of Lambda, let's explore the distinctions between EC2 and Lambda.
1. Infrastructure Setup:
- EC2: Requires setting up the complete infrastructure including servers, operating systems, runtime
environments, memory allocation, and network settings.
- Lambda: Requires only creating the Lambda function, and AWS handles the infrastructure
automatically. You don't need to worry about server setup, operating systems, or runtime
environments.
2. Software Installation:
- EC2: Requires installing the required software on the virtual machines manually.
- Lambda: Doesn't require any software installation as AWS manages the runtime environment for you.
3. Deployment:
- EC2: You deploy the complete codebase onto the virtual machines.
- Lambda: You deploy snippet code or functions, not the entire application.
4. Cost Model:
- EC2: You pay for the resources (compute, storage, etc.) regardless of whether the application is
running.
- Lambda: You pay only for the compute time and memory allocated to the function when it's running.
5. Management and Security:
- EC2: Requires manual management of firewalls, patches, and security settings. Continuous
monitoring is necessary.
- Lambda: AWS takes care of security and scaling automatically. You don't need to manage servers or
security patches.
Lambda Triggers:
Lambda functions are triggered by events from various AWS services. Some popular triggers are:
1. S3 (Simple Storage Service): Triggers Lambda functions when objects are created, modified, or
deleted in S3 buckets.
2. CloudWatch: Triggers Lambda functions based on events or metrics from CloudWatch, such as logs,
alarms, or scheduled events.
3. API Gateway: Triggers Lambda functions in response to HTTP requests made to API endpoints created
with API Gateway.
4. DynamoDB: Triggers Lambda functions when there are changes to data in DynamoDB tables.
1. Serverless Computing: With Lambda, you can execute your code in response to various events
without managing servers. AWS handles the infrastructure provisioning, scaling, and maintenance for
you, allowing you to focus on writing code.
2. Event-Driven Architecture: Lambda functions are triggered by events from various AWS services, such
as changes to data in Amazon S3 buckets, updates to Amazon DynamoDB tables, messages from
Amazon SQS queues, or HTTP requests via Amazon API Gateway.
3. Automatic Scaling: AWS Lambda automatically scales your application by provisioning the required
infrastructure to handle incoming requests. It scales up or down based on the number of requests your
function receives.
4. Pay-Per-Use Pricing: With Lambda, you only pay for the compute time consumed by your code and
the number of requests processed, with no charges for idle time. This pay-per-use pricing model offers
cost-effective pricing, especially for low-traffic applications.
5. Support for Multiple Runtimes: Lambda supports multiple programming languages, including Python,
Node.js, Java, Go, Ruby, and .NET Core. This flexibility allows you to choose the runtime that best fits
your application requirements and development preferences.
6. Integration with AWS Ecosystem: Lambda seamlessly integrates with other AWS services, enabling
you to build complex workflows and applications using a combination of serverless services. For
example, you can trigger Lambda functions in response to events from Amazon S3, Amazon DynamoDB,
Amazon SNS, Amazon SQS, and more.
7. Built-in Monitoring and Logging: Lambda provides built-in monitoring and logging through Amazon
CloudWatch, allowing you to monitor function invocations, performance metrics, and errors in real-
time. You can use CloudWatch Logs to troubleshoot issues and gain insights into your application's
behavior.
step-by-step procedure for setting up an AWS Lambda function to trigger whenever an object is
uploaded to an S3 bucket and update a DynamoDB table:
1. Create an AWS Account: If you haven't already, sign up for an AWS account at
https://aws.amazon.com/.
2. Access AWS Management Console: Log in to the AWS Management Console using your AWS account
credentials.
3. Open AWS Lambda Console: Navigate to the Lambda service by searching for "Lambda" in the
AWS Management Console or selecting it from the "Services" menu.
- Under "Permissions", create a new role with basic Lambda permissions or choose an existing role that
has permissions to access S3 and DynamoDB.
5. Upload Code:
- Write the Python code for your Lambda function. This code will be triggered whenever an object is
uploaded to the specified S3 bucket. A corrected, runnable version of the handler is shown below; the
DynamoDB attribute names ('filename' and 'bucket') are illustrative and must match your table's key schema.

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource('dynamodb')

def lambda_handler(event, context):
    if 'Records' in event:
        # One notification can contain several records; handle each uploaded object
        for record in event['Records']:
            bucket_name = record['s3']['bucket']['name']
            object_key = record['s3']['object']['key']
            # Write one item per uploaded object
            # (attribute names are illustrative; 'filename' is assumed to be the partition key)
            dynamoTable = dynamodb.Table('newtable')
            dynamoTable.put_item(
                Item={'filename': object_key, 'bucket': bucket_name}
            )
    else:
        # Log a message indicating that the 'Records' key is missing from the event
        print("No 'Records' key found in the event.")
6. Configure Trigger:
- Configure the trigger to listen to the S3 bucket where you want to trigger the Lambda function.
- If your Lambda function requires any environment variables (e.g., credentials for accessing
DynamoDB), you can set them in the "Configuration" tab under "Environment variables".
- You can test your Lambda function manually by clicking on the "Test" button in the Lambda console.
- You can create a sample test event or use a custom test event to simulate the S3 event that triggers
your function.
- You can monitor the execution of your Lambda function in the Monitoring tab of the Lambda
console.
- You can also view logs to troubleshoot any issues that may arise during execution.
With this, you have successfully set up a Lambda function to trigger whenever an object is uploaded to an S3 bucket and
update a DynamoDB table accordingly.
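For a quick test of the handler above, a hand-written event that mimics the S3 notification structure can be used; the bucket name and object key are placeholders:

# A minimal fake "object created" event matching the structure the handler expects
sample_event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "my-example-bucket"},   # placeholder bucket name
                "object": {"key": "uploads/report.pdf"},   # placeholder object key
            }
        }
    ]
}

# Invoke the handler directly (requires AWS credentials and the DynamoDB table to exist)
lambda_handler(sample_event, None)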
What is Docker?
Docker is a platform which packages an application and all its dependencies together in the form of
containers. This containerization aspect ensures that the application works in any environment.
In the diagram, each and every application runs on separate containers and has its own set of
dependencies & libraries. This makes sure that each application is independent of other applications,
giving developers confidence that they can build applications that will not interfere with one another.
So a developer can build a container having different applications installed on it and give it to the QA
team. Then the QA team would only need to run the container to replicate the developer’s
environment.
Docker Commands
1. docker --version
This command displays the currently installed version of Docker.
2. docker pull
This command is used to pull (download) an image from the Docker registry (Docker Hub).
3. docker run
This command is used to create and start a container from an image.
4. docker ps
This command is used to list the running containers
5. docker ps -a
This command is used to show all the running and exited containers
7. docker stop
This command is used to stop a running container gracefully.
8. docker kill
This command kills the container by stopping its execution immediately. The difference between 'docker
kill' and 'docker stop' is that 'docker stop' gives the container time to shut down gracefully, while 'docker
kill' is used when the container is taking too much time to stop.
9. docker commit
This command creates a new image of an edited container on the local system
14. docker rm
This command is used to delete a stopped container.
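The same container lifecycle can also be driven from Python using the Docker SDK (installed with pip install docker); a minimal sketch, assuming the Docker daemon is running locally:

import docker

client = docker.from_env()   # connect to the local Docker daemon

# Pull an image and start a container from it (equivalent to docker pull + docker run -d)
container = client.containers.run("nginx:latest", detach=True, name="demo-nginx")

# List running containers (equivalent to docker ps)
for c in client.containers.list():
    print(c.name, c.status)

# Stop and remove the container (equivalent to docker stop + docker rm)
container.stop()
container.remove()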
Amazon Lex:
What is LEX
Amazon Lex is a fully managed artificial intelligence (AI) service with advanced natural language
models to design, build, test, and deploy conversational interfaces in applications.
Lex is a service that enables developers to build conversational interfaces, commonly known as chatbots
or conversational agents, using natural language understanding (NLU) and speech recognition
capabilities.
AWS Lex simplifies the process of creating, deploying, and managing chatbots by providing tools and
services that handle complex tasks such as language understanding and dialogue management.
AWS Lex Architecture
AWS Lex is a service for building conversational interfaces into any application using voice and text, and
it involves several key concepts, including:
• Bot
• Intent
• Utterances
• Initial Response
• Slot
• Slot type
• Confirmation
• Fulfillment
• Closing Response
Bot
A bot is the primary resource type in Amazon Lex. To enable conversations, you add one or
more languages to your bot. A language contains intents and slot types.
AWS-INTENT
• An intent is a goal that the user wants to accomplish through a conversation with a bot.
• You can have one or more related intents. Each intent has sample utterances that convey how
the user might express the intent.
• You can create your own intents or choose from a range of pre-defined built-in intents.
AWS-UTTERANCES
• Sample utterances are phrases that represent how a user might interact with an intent to
accomplish a goal.
• You provide sample utterances and Amazon Lex builds a language model to support interactions
through voice and text.
• You can add a slot name to an utterance by enclosing it in braces ({}). When you use a slot
name, that slot is filled with the value from the user's utterance.
AWS-SLOT
• A slot is information that Amazon Lex needs to fulfill an intent
• Each slot has a slot type.
• Information that a bot needs to fulfill the intent
AWS-SLOT TYPE
• A slot type contains the details that are necessary to fulfill a user's intent.
• Each slot has a slot type.
• We can create our own slot types or choose from a range of pre-defined slot types called built-in
slot types.
• Some built-in types are: Number, Alphanumeric, Date, etc.
INITIAL RESPONSE
The initial response is sent to the user after Amazon Lex V2 determines the intent and before it
starts to elicit slot values.
CLOSING RESPONSE
Configure the bot to respond after the fulfillment with a closing response.
AWS-FULFILLMENT
Use fulfillment messages to tell users the status of fulfilling their intent. You can define messages
for when fulfillment is successful and for when the intent can't be fulfilled.
AWS-CONFIRMATION
A confirmation prompt typically repeats back information for the user to confirm. If the user
confirms the intent, the bot fulfills the intent; if the user declines, the bot responds with a
decline response.
Note:
Integration with Other AWS Services:
- Lex seamlessly integrates with other AWS services, allowing developers to leverage additional
functionalities for their chatbots. For example, developers can use AWS Lambda functions to implement
custom business logic and fulfillment actions triggered by user intents.
Multi-Platform Support:
- Chatbots created with Lex can be deployed across various platforms and channels, including
websites, mobile apps, and messaging platforms (e.g., Facebook Messenger).
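Once a bot has been built and published to an alias, it can be tested from code with the Lex V2 runtime client; the bot ID, alias ID, locale, and sample utterance below are placeholders:

import boto3

lex = boto3.client("lexv2-runtime", region_name="ap-south-1")   # placeholder region

response = lex.recognize_text(
    botId="ABCDEFGHIJ",          # placeholder bot ID
    botAliasId="TSTALIASID",     # placeholder bot alias ID
    localeId="en_US",
    sessionId="demo-session-1",  # any stable ID for one user's conversation
    text="I want to book a hotel room",
)

# Print the bot's reply messages and the intent that Lex matched
for message in response.get("messages", []):
    print("Bot:", message["content"])
print("Matched intent:", response["sessionState"]["intent"]["name"])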