
AWS Lambda

What is AWS Lambda:


➢ AWS Lambda is Amazon's serverless computing service: it runs your code and
automatically manages the underlying compute resources (such as the EC2
capacity it runs on).
➢ It is an event-driven compute service. It lets you run code automatically in
response to many types of events, such as HTTP requests from Amazon API
Gateway, table updates in Amazon DynamoDB, and state transitions.
➢ It also lets you extend other AWS services with custom logic and even build
your own back-end services. For example, you just write the code and upload it
as a .zip file or a container image; the service then runs that code on
high-availability compute infrastructure.
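
A minimal Python handler illustrates this model. This is only a sketch; the function name, event shape, and response format below are illustrative assumptions (here, an API Gateway-style HTTP event), not something taken from this document:

    import json

    def lambda_handler(event, context):
        # 'event' carries the triggering payload, e.g. an API Gateway request;
        # 'context' exposes runtime metadata such as remaining execution time.
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

You upload this code (zipped or in a container image), and Lambda runs it whenever the configured event source fires.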

Why is AWS Lambda used:


➢ It runs your code on high-availability compute infrastructure and performs all
the administration of your compute resources for you
➢ It is a serverless, function-as-a-service (FaaS) platform
➢ Lambda has a pay-as-you-go pricing model
➢ Your code runs only when it is invoked
➢ It scales automatically

It performs all the administrative duties for those compute resources, such as capacity provisioning, operating-system maintenance, and monitoring and logging. Because a function runs only when it is invoked, a caller or an event source must trigger it; a small sketch of invoking a function from code follows.
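
A minimal invocation sketch using the AWS SDK for Python (boto3); the function name and payload below are hypothetical:

    import json
    import boto3

    lambda_client = boto3.client("lambda")

    # Synchronously invoke a (hypothetical) function and read its response.
    response = lambda_client.invoke(
        FunctionName="my-example-function",   # assumed function name
        InvocationType="RequestResponse",     # wait for the result
        Payload=json.dumps({"name": "Aryan"}),
    )
    print(json.loads(response["Payload"].read()))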
Use Cases of AWS (Amazon Web Services) Lambda Functions:
➢ File Processing: AWS Lambda can be triggered by Amazon Simple Storage
Service (S3). Whenever files are added to an S3 bucket, a Lambda function can
be triggered to process the data (see the sketch after this section).
➢ Web Applications: You can combine web applications with AWS Lambda so
that they scale up and down automatically based on the incoming traffic.
➢ IoT (Internet of Things) Applications: You can trigger AWS Lambda based on
certain conditions while processing the data from devices connected to your IoT
applications, and analyze the data those devices send.
➢ Stream Processing: Lambda functions can be integrated with Amazon Kinesis
to process real-time streaming data for application tracking, log filtering, and
so on.

AWS Lambda lets you focus more on your code than on the underlying
infrastructure; infrastructure maintenance is taken care of by AWS Lambda.
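
For the file-processing use case above, here is a sketch of an S3-triggered handler in Python; the processing step (just counting bytes) is an assumption made for illustration:

    import urllib.parse
    import boto3

    s3 = boto3.client("s3")

    def lambda_handler(event, context):
        # S3 delivers one or more records per invocation; each record
        # identifies the bucket and the object key that was created.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            obj = s3.get_object(Bucket=bucket, Key=key)
            size = len(obj["Body"].read())
            print(f"Processed s3://{bucket}/{key} ({size} bytes)")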

Features of AWS (Amazon Web Services) Lambda Functions:

➢ Auto Scaling and High Availability: AWS Lambda keeps your application highly
available to end users even under sudden spikes in incoming traffic, by scaling
the application automatically.
➢ Serverless Execution: There is no need to provision servers manually. AWS
Lambda provisions the underlying infrastructure based on the triggers you
define; for example, whenever a new file is uploaded to a particular S3 bucket,
the function is triggered automatically and the infrastructure is handled for you.
➢ Pay-per-use Pricing: AWS charges you only for the time your code is actually
executing; billing is based on the execution time of your code.
➢ Supports different programming languages: AWS lambda function will
support different programming languages. You can build the function with the
language at your convenience. Following are some languages supported by
AWS lambda:
○ Python
○ Node.js
○ Java
○ C#
○ PowerShell
○ Go
➢ Integrates with other AWS Services: AWS lambda can be integrated with
different AWS services like the following:
○ API Gateway
○ DynamoDB
○ S3
○ Step Functions
○ SNS
○ SQS
➢ Versioning and Deployment: AWS Lambda maintains different versions of your
function's code, so you can switch between versions without any disruption
based on how the application performs (see the sketch after this list).
➢ Security and Identity Management: AWS Lambda leverages AWS Identity and
Access Management (IAM) to control access to the functions you build with
Lambda. You can define fine-grained permissions and policies to secure your
functions and ensure that only authorized entities can invoke them.
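
A sketch of the versioning workflow mentioned above, using boto3; the function name and the "prod" alias are assumptions (the alias is assumed to already exist):

    import boto3

    lambda_client = boto3.client("lambda")

    # Publish a new immutable version from the current code and configuration.
    version = lambda_client.publish_version(
        FunctionName="my-example-function"
    )["Version"]

    # Point an existing alias at the new version; callers that invoke the alias
    # switch over without changing the name they use.
    lambda_client.update_alias(
        FunctionName="my-example-function",
        Name="prod",
        FunctionVersion=version,
    )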
AWS S3 - Simple Storage Service

What is S3
➢ Amazon S3 (Simple Storage Service) is an AWS service that stores files of
different types, such as photos, audio, and videos, as objects, with high
scalability and security.
➢ It allows users to store and retrieve any amount of data at any point in time
from anywhere on the web.
➢ It offers extremely high availability, strong security, and simple integration
with other AWS services.
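
A minimal sketch of storing and retrieving an object with boto3; the bucket and key names are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Upload a small text object, then read it back.
    s3.put_object(
        Bucket="my-example-bucket",
        Key="notes/hello.txt",
        Body=b"Hello, S3!",
    )

    obj = s3.get_object(Bucket="my-example-bucket", Key="notes/hello.txt")
    print(obj["Body"].read().decode("utf-8"))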

What is Amazon S3 Used for?


➢ Amazon S3 is used for many purposes in the cloud because of its robust
scaling and data-security features. It supports all kinds of use cases in fields
such as mobile/web applications, big data, machine learning, and many more.
The following are a few common uses of the Amazon S3 service:
➢ Data Storage: Amazon S3 is a strong option for both small and large storage
needs. It helps data-intensive applications store and retrieve data as needed,
in good time.
➢ Backup and Recovery: Many Organizations are using Amazon S3 to back
up their critical data and maintain the data durability and availability for
recovery needs.
➢ Hosting Static Websites: Amazon S3 facilitates storing HTML, CSS, and
other web content from Users/developers allowing them to host Static
Websites benefiting from low-latency access and cost-effectiveness.
➢ Data Archiving: Integration with Amazon S3 Glacier provides a cost-effective
solution for long-term storage of data that is accessed infrequently.
➢ Big Data Analytics: Amazon S3 is often considered a data lake because of
its capacity to store large amounts of both structured and unstructured data
offering seamless integration with other AWS Analytics and AWS Machine
Learning Services.
What is an Amazon S3 bucket?
➢ An Amazon S3 bucket is the fundamental storage container in the S3 service. It
provides a secure and scalable repository for storing objects such as text data,
images, audio, and video files in the AWS Cloud. Each S3 bucket name must be
globally unique, and access to the bucket can be controlled with an ACL
(Access Control List).
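
Creating a bucket is a single call; the name below is a placeholder and must be globally unique, and the region constraint is an assumption:

    import boto3

    s3 = boto3.client("s3", region_name="eu-west-1")

    # Bucket names are global, so this call fails if the name is already taken.
    s3.create_bucket(
        Bucket="my-globally-unique-example-bucket",
        CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    )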

What are the types of S3 Storage Classes?

➢ Standard: Suitable for frequently accessed data that needs to be highly
available and durable.
➢ Standard Infrequent Access (Standard-IA): A cheaper storage class that, as
the name suggests, is best suited for infrequently accessed data such as log
files or data archives. Note that there may be a per-GB data retrieval fee
associated with the Standard-IA class. (A sketch showing how to choose a
storage class at upload time follows this list.)
➢ Intelligent Tiering: This service class classifies your files automatically into
frequently accessed and infrequently accessed and stores the infrequently
accessed data in infrequent access storage to save costs. This is useful for
unpredictable data access to an S3 bucket.
➢ One Zone Infrequent Access (One Zone-IA): The other S3 storage classes keep
copies of your objects in a minimum of 3 Availability Zones; One Zone-IA
stores the data in a single Availability Zone. It is recommended only for
infrequently accessed, non-essential data. There may be a per-GB cost for data
retrieval.
➢ Reduced Redundancy Storage (RRS): All the other S3 classes are designed for
99.999999999% durability, whereas RRS provides only 99.99% durability. AWS
no longer recommends RRS because of this lower durability, but it can still be
used for non-essential data.
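
The storage class is chosen per object at upload time. A small sketch; the bucket, key, and class choice are illustrative:

    import boto3

    s3 = boto3.client("s3")

    # Store an infrequently accessed archive in the Standard-IA class.
    s3.put_object(
        Bucket="my-example-bucket",
        Key="archives/2023-logs.gz",
        Body=b"example archive bytes",
        StorageClass="STANDARD_IA",  # or INTELLIGENT_TIERING, ONEZONE_IA, GLACIER, ...
    )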
Limitations:

Reusing bucket names:

➢ If a bucket is empty, you can delete it. After a bucket is deleted, the name
becomes available for reuse. However, after you delete the bucket, you might
not be able to reuse the name for various reasons.
➢ For example, when you delete the bucket and the name becomes available for
reuse, another AWS account might create a bucket with that name. In addition,
some time might pass before you can reuse the name of a deleted bucket. If you
want to use the same bucket name, we recommend that you don't delete the
bucket.

Objects and bucket limitations:

➢ There is no maximum bucket size and no limit to the number of objects that
you can store in a bucket. You can store all of your objects in a single bucket,
or you can organize them across several buckets. However, you cannot create
a bucket inside another bucket.

Bucket operations:

➢ The high-availability engineering of Amazon S3 is focused on GET, PUT, LIST,
and DELETE operations. Because bucket operations work against a centralized,
global resource space, it is not recommended to create, delete, or configure
buckets on the high-availability code path of your application. It is better to
create, delete, or configure buckets in a separate initialization or setup routine
that you run less often, as sketched below.
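
A sketch of such a setup routine, creating the bucket only if it does not already exist; the bucket name and region are assumptions:

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3", region_name="eu-west-1")

    def ensure_bucket(name: str) -> None:
        # Run this once during deployment or initialization, not on every request.
        try:
            s3.head_bucket(Bucket=name)  # succeeds if the bucket exists and is accessible
        except ClientError:
            s3.create_bucket(
                Bucket=name,
                CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
            )

    ensure_bucket("my-example-bucket")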

Bucket naming and automatically created buckets:

➢ If your application automatically creates buckets, choose a bucket naming
scheme that is unlikely to cause naming conflicts. Ensure that your application
logic will choose a different bucket name if a bucket name is already taken.
AWS Batch
What is AWS Batch?

➢ AWS Batch is the batch-processing service offered by AWS, simplifying the
running of high-volume workloads on compute resources. In other words, you
can effectively plan, schedule, run, and scale batch computing workloads of
any size with AWS Batch. You can also quickly launch, run, and terminate
compute resources while working with AWS Batch; the compute resources
include Amazon EC2, AWS Fargate, and Spot Instances. AWS Batch splits
workloads into small pieces of work, or batch jobs, and runs those jobs
simultaneously across various Availability Zones in an AWS Region, which
drastically reduces job execution time.

Components of AWS Batch:

Compute Environment:

➢ An AWS Batch compute environment contains the compute resources used to
run batch jobs. There are two types of compute environments in AWS Batch:
managed and unmanaged. A managed compute environment is provisioned
and managed by AWS, whereas an unmanaged compute environment is
managed by the customer. In a managed compute environment you specify the
desired compute type for running workloads; for example, you can decide
whether you need Fargate or EC2, and you can specify Spot Instances as well.
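
A sketch of creating a managed, EC2-backed compute environment with boto3; the name, subnets, security group, and role ARNs are placeholders:

    import boto3

    batch = boto3.client("batch")

    batch.create_compute_environment(
        computeEnvironmentName="example-ec2-env",
        type="MANAGED",  # AWS provisions and scales the resources
        computeResources={
            "type": "EC2",
            "allocationStrategy": "BEST_FIT_PROGRESSIVE",
            "minvCpus": 0,
            "maxvCpus": 16,
            "instanceTypes": ["optimal"],
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroupIds": ["sg-0123456789abcdef0"],
            "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",
        },
        serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
    )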

Job Definitions:

➢ In AWS Batch, jobs are defined before they are submitted to job queues. Job
definitions include the details of batch jobs: Docker properties, associated
variables, CPU and memory requirements, other compute-resource
requirements, and further essential information. All of this information helps
to optimize the execution of batch jobs. AWS Batch also allows values defined
in a job definition to be overridden when a job is submitted. Essentially, job
definitions describe how batch jobs should run on the compute resources;
simply put, they act as the blueprint for running batch jobs.
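
A sketch of registering a container-based job definition; the image, command, and resource values are illustrative:

    import boto3

    batch = boto3.client("batch")

    batch.register_job_definition(
        jobDefinitionName="example-job-def",
        type="container",
        containerProperties={
            "image": "public.ecr.aws/amazonlinux/amazonlinux:latest",
            "command": ["echo", "hello from AWS Batch"],
            "resourceRequirements": [
                {"type": "VCPU", "value": "1"},
                {"type": "MEMORY", "value": "2048"},  # MiB
            ],
        },
    )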

Job Queues:

➢ Another significant component of AWS Batch is the job queue. Once you have
created job definitions, you submit jobs based on them to a job queue. You can
configure job queues with priorities; jobs wait in a queue until the job
scheduler schedules them, and the scheduler works in priority order, so jobs
with higher priority are scheduled for execution first. For example,
time-sensitive jobs are usually given high priority so they execute first, while
low-priority jobs can be executed at any time, mainly when compute resources
are cheaper. AWS Batch also comes with many other useful features.
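
A sketch of creating a prioritized queue and submitting a job to it; the names and the compute environment reference reuse the placeholders from the earlier sketches:

    import boto3

    batch = boto3.client("batch")

    batch.create_job_queue(
        jobQueueName="example-high-priority-queue",
        state="ENABLED",
        priority=100,  # higher numbers are scheduled before lower ones
        computeEnvironmentOrder=[
            {"order": 1, "computeEnvironment": "example-ec2-env"},
        ],
    )

    batch.submit_job(
        jobName="example-job",
        jobQueue="example-high-priority-queue",
        jobDefinition="example-job-def",
    )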

Features of AWS Batch:

Dynamic Resource Provisioning:

➢ In AWS Batch, you must set up a compute environment, job definitions, and a
job queue. After that, you do not need to manage a single compute resource in
the environment, because AWS Batch performs automated provisioning and
scaling of resources.

Integration with Fargate:

➢ This integration allows each batch job to use only the required amount of CPU
and memory, so you can significantly optimize the use of compute resources.
It also supports isolating compute resources per job, which improves the
security of the compute environment.
EC2 Launch Templates:

➢ With AWS Batch, you can customize compute resources with the help of EC2
launch templates. Using the templates, you can scale EC2 instances seamlessly
based on requirements; you can also add storage volumes, choose network
interfaces, and configure permissions. Above all, these templates help reduce
the number of steps needed to configure batch environments.

Workflow Engines:

➢ AWS Batch integrates easily with open-source workflow engines such as
Pegasus WMS, Nextflow, Luigi, Apache Airflow, and many others, and you can
model batch computing pipelines using their workflow languages.

Integration with Amazon EKS:

➢ AWS Batch allows running batch jobs on Amazon EKS clusters. You can easily
attach job queues to an EKS-enabled compute environment, and AWS Batch
scales Kubernetes nodes and places pods on them seamlessly.

HPC Workloads:

➢ Put simply, you can effectively run tightly coupled high-performance computing
(HPC) workloads with AWS Batch, since it supports running multi-node
parallel jobs. AWS Batch works with Elastic Fabric Adapter, a network interface
that lets you run applications demanding high levels of inter-node
communication.

Job-dependency Modeling:

➢ AWS Batch allows you to define dependencies between jobs efficiently.
Consider a batch job that consists of three stages and requires different types
of resources at each stage: you can create separate batch jobs for the different
stages, whatever the degree of dependency between them, and chain them
together (a sketch follows).
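
A sketch of chaining three stages with job dependencies; the job names, queue, and definition are the placeholders used in the earlier sketches:

    import boto3

    batch = boto3.client("batch")

    # Stage 1 runs first; stages 2 and 3 each wait on their predecessor.
    stage1 = batch.submit_job(
        jobName="stage-1-prepare",
        jobQueue="example-high-priority-queue",
        jobDefinition="example-job-def",
    )
    stage2 = batch.submit_job(
        jobName="stage-2-transform",
        jobQueue="example-high-priority-queue",
        jobDefinition="example-job-def",
        dependsOn=[{"jobId": stage1["jobId"]}],
    )
    batch.submit_job(
        jobName="stage-3-publish",
        jobQueue="example-high-priority-queue",
        jobDefinition="example-job-def",
        dependsOn=[{"jobId": stage2["jobId"]}],
    )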

Allocation Strategies:

➢ AWS Batch offers three strategies for allocating compute resources to running
jobs:
● Best fit
● Best fit progressive
● Spot capacity optimized
➢ With the best-fit strategy, AWS Batch allocates the instance types that best fit
the job requirements at the lowest cost; however, additional instances are not
added when needed, so jobs must wait until running jobs free up the compute
resources.
➢ With the best-fit progressive strategy, additional instance types can be added
based on the requirements of waiting jobs.
➢ With the spot-capacity-optimized strategy, Spot Instances are selected based
on job requirements from capacity pools that are less likely to be interrupted.

Integrated Monitoring and Logging:

➢ You can view crucial operational metrics through the AWS Management
Console, including the compute capacity of your resources as well as metrics
for batch jobs at their different stages. Logs are written to the console and to
Amazon CloudWatch.

Fine-grained Access Control:

➢ Access control is another feature of AWS Batch. AWS Batch uses AWS Identity
and Access Management (IAM) to control and monitor access to the compute
resources used for running batch jobs, including framing access policies for
different users.
➢ For instance, administrators can access any AWS Batch API operation, while
developers get limited permissions to configure compute environments and
register jobs, and end users may not be allowed to submit or delete jobs at all.
Use-cases of AWS Batch:

➢ Many sectors can benefit from AWS Batch. Here we briefly describe some of
the significant use cases of AWS Batch.

Finance:

➢ With AWS Batch, you can run financial analyses by batch processing high
volumes of financial data sets. A typical example is post-trade analysis, which
includes analysis of day-to-day transaction costs, market performance,
completion reports, and so on. You can automate such financial analysis with
AWS Batch, helping you predict business risks and make informed decisions to
improve business performance.

Life Science:

➢ Researchers can quickly screen libraries of molecules with AWS Batch. This
helps researchers gain a deeper understanding of biochemical processes,
which in turn allows them to design more effective drugs.
➢ Generally, researchers generate raw files from the primary analysis of genomic
sequences and then use AWS Batch for the secondary analysis. AWS Batch
mainly helps reduce errors caused by incorrect alignment between reference
and sample data.

Digital Media:

➢ AWS Batch plays a pivotal role in digital media. It offers tools with which you
can automate content-rendering jobs and reduce the human intervention they
require to a minimum. For example, AWS Batch speeds up batch transcoding
workloads with automated workflows.
➢ AWS Batch accelerates content creation and automates workflows in
asynchronous digital media processing, reducing manual intervention in that
processing.
➢ Beyond this, AWS Batch supports running disparate but mutually dependent
jobs at different stages of batch processing, because it can handle execution
dependencies as well as resource scheduling. With AWS Batch, you can compile
and process files, video content, graphics, and more.

Benefits of AWS Batch:

➢ You can run any volume of batch computing jobs without installing any
batch-processing software.
➢ You can prioritize the execution of jobs based on business needs.
➢ AWS Batch scales compute resources automatically.
➢ It speeds up the running of batch workloads while reducing the need for
frequent manual intervention.
➢ On top of all this, you can increase efficiency and reduce costs by optimizing
compute resources and workload distribution.
AWS Serverless Application Model
(AWS SAM)

What is AWS SAM?

➢ AWS SAM was launched to make the process of building serverless
applications simpler and easier. AWS SAM is released under an open-source
license, so anyone can contribute to its development. The code for the whole
serverless application is maintained in a single repository structure, which
helps in creating consistent development patterns.

➢ Code duplication and code complexity are problems that developers using the
AWS SAM framework do not need to worry about, which lets them focus on
creating better applications with neat and organized code.

There are two components of the AWS SAM framework:
➢ SAM templates
➢ SAM Command Line Interface (CLI)

AWS SAM Templates:

➢ Using YAML, developers build serverless applications backed by AWS SAM
templates. Since the templates provide shorthand syntax for APIs, functions,
and even databases, developers can express complex features in just a few
lines of configuration (see the sketch below).
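
As an illustration of that shorthand, here is a minimal SAM template sketch in YAML defining a single Python function behind an API endpoint; the resource names, paths, code location, and runtime are assumptions:

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31

    Resources:
      HelloFunction:
        Type: AWS::Serverless::Function   # shorthand for the function, its role, and event wiring
        Properties:
          CodeUri: hello/
          Handler: app.lambda_handler
          Runtime: python3.12
          Events:
            HelloApi:
              Type: Api                   # creates an API Gateway endpoint for the function
              Properties:
                Path: /hello
                Method: get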

AWS SAM CLI:

➢ The purpose of the AWS SAM CLI is to let developers get started with the
development and deployment of SAM-based applications. The SAM CLI
provides an environment with strong similarities to the Lambda environment,
in which users can develop, test, and debug applications.
➢ These processes are all done locally, and the applications are defined by SAM
templates or the AWS CDK. AWS SAM can deploy the built applications to
AWS, and it can also create CI/CD pipelines. AWS SAM and its CLI are licensed
under the Apache 2.0 license.
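
A typical local workflow with the SAM CLI looks roughly like the following; the function logical ID used here is a placeholder:

    sam init                          # scaffold a new SAM project from a starter template
    sam build                         # build the application and its dependencies
    sam local invoke HelloFunction    # run one function locally in a Lambda-like container
    sam local start-api               # serve the API locally for manual testing
    sam deploy --guided               # package and deploy through CloudFormation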

Benefits of AWS SAM:

Single-deployment Configuration:

➢ AWS SAM groups related components and resources so that they operate
under a single CloudFormation stack. The components and resources in the
stack are then deployed as a single, versioned entity with a configuration
shared by all resources, which ensures that all related resources are deployed
together.


AWS CloudFormation Extension:

➢ AWS SAM is an extension of AWS CloudFormation. SAM makes use of the
features available in AWS CloudFormation, including the full resource suite,
intrinsic functions, and more. AWS CloudFormation resources and AWS SAM
resources can both appear in the Resources section of an AWS SAM template.

➢ When comparing AWS SAM to plain CloudFormation for building serverless
applications, AWS SAM has the advantage: both kinds of resources are
available to it, and its templates are more concise and require less
configuration.

Local Debugging and Testing:

➢ As noted earlier, the CLI gives a local execution environment similar to
Lambda. With the help of this environment, an application built from SAM
templates can be tested and debugged locally. This lets developers experience
the Lambda environment on their local systems and helps ensure that the
application will run in the cloud without discrepancies.

➢ Testing the application locally reduces testing and debugging costs in the
cloud. AWS also provides toolkits for several IDEs; by using these toolkits it is
possible to find the issues developers might otherwise hit in the cloud and
troubleshoot them before deploying the application.


Deep Integration with Development Tools:

➢ AWS SAM can be combined with other AWS services and tools that make the
process of developing serverless applications easier. Newly built applications
can be published to the AWS Serverless Application Repository; to write and
test the desired application, we can use the AWS Cloud9 IDE; and for
continuous integration and deployment, we can use CodeBuild, CodeDeploy,
and CodePipeline.
