
PART IV Cloud Programming Paradigms

Chapter 14 Serverless Computing And Event Processing

Introduction
• Previous chapters describe algorithms, platforms, and technologies that can be used to create cloud-native
software systems.

• This chapter focuses on facilities cloud providers offer that enable programmers to create, deploy, and scale
applications quickly, easily, and at low cost.

Traditional Client-Server Architecture


• When application programs communicate over a network, they follow the client-server paradigm, which
divides applications into two categories:

o Server: an application that runs first and waits for contact


o Client: an application that contacts a server
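
The two roles above can be sketched with Python's standard socket library; this is a minimal illustration of the paradigm, not production code, and the loopback address and port number are arbitrary choices.

```python
# Minimal sketch of the client-server paradigm.
# The address 127.0.0.1 and port 9000 are arbitrary example values.
import socket

def run_server():
    # Server: runs first and waits for contact.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 9000))
    srv.listen()
    conn, _addr = srv.accept()          # block until a client connects
    request = conn.recv(1024)
    conn.sendall(b"echo: " + request)   # fulfill the client's request
    conn.close()
    srv.close()

def run_client():
    # Client: initiates contact with the waiting server.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", 9000))
    cli.sendall(b"hello")
    reply = cli.recv(1024)
    cli.close()
    return reply
```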

Scaling A Traditional Server To Handle Multiple Clients


• A server scales by using concurrent execution to handle multiple clients at the same time.

• The server repeatedly waits for a client to initiate communication and then creates a concurrent process to handle
the client.

• In addition to the processes created for each active client, a master server process waits for the clients to contact
it.

• Figure 14.2 illustrates a concurrent server handling three clients.

Figure 14.2 An example of traditional client-server communication.

• The application for a concurrent server integrates two aspects of a server into a single application program:
o Fulfilling the service being offered - The chief function of a server lies in interacting with clients to
fulfill clients’ requests.

o Replicating and scaling the server - A traditional concurrent server uses contact from a new client
to trigger the creation of a separate process to handle the client.

• In a traditional server, a single application contains code to replicate the server to handle multiple
simultaneous clients as well as code to interact with a given client and handle the client’s requests.
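
The two aspects described above can be sketched in one small program. Threads stand in for the per-client processes a traditional server creates, and the handler logic (uppercasing the request) is a placeholder for real service code; the port and client limit are arbitrary example values.

```python
# Sketch of a traditional concurrent server: a master loop accepts
# connections and replicates a handler (one thread per client).
import socket
import threading

def handle_client(conn):
    # Per-client code: interact with one client and fulfill its request.
    data = conn.recv(1024)
    conn.sendall(data.upper())
    conn.close()

def concurrent_server(port, max_clients):
    # Master code: waits for clients and creates a handler for each.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    threads = []
    for _ in range(max_clients):        # a real server loops forever
        conn, _addr = srv.accept()
        t = threading.Thread(target=handle_client, args=(conn,))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    srv.close()
```

Note that both aspects live in a single application program, which is precisely what the serverless approach later separates.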
Scaling A Server In A Cloud Environment
• A traditional concurrent server replicates copies of itself automatically to handle multiple clients simultaneously.
However, all copies must run on the same physical computer.

• The approach does not work well in a cloud environment because cloud systems achieve large scale by
replicating instances across many physical machines.

• Complex software systems must be used to run servers in a cloud data center. The systems handle instance
management by deploying copies as needed and must also handle network communication by arranging to
forward traffic from each client through a proxy to the correct instance.

• Figure 14.3 illustrates management software controlling a deployment.

Figure 14.3 Management software controlling a proxy and replicas of a server on multiple physical
machines.

The Economics Of Servers In The Cloud


• A large set of management technologies exist that can be used to deploy and operate services, including
orchestration systems, proxies, load balancers, and service mesh management software.

• Many of these technologies follow the open-source model, making them free. A customer only needs
basic server software, a set of leased VMs, and open-source software to handle deployment and
scaling at little extra cost.

• Unfortunately, two costs can be significant:

o Unused capacity - a customer must allocate sufficient VMs to handle the expected peak load and as a
result, they must pay for VMs that remain idle during off-peak times.

o Expertise and training – Although free, open-source management systems require a significant amount
of expertise to use effectively and safely. And, because cloud technologies continue to evolve, the
customer must pay for training to keep their staff up to date.

The Serverless Computing Approach


• Serverless computing allows cloud customers to build and run software to fulfill users’ requests without thinking
about the deployment and replication of servers, without configuring network names and addresses for servers,
and without leasing VMs to run the servers.

• Also known as Function as a Service (FaaS), the serverless computing approach allows a cloud customer
to avoid dealing with servers by paying a cloud provider to deploy and scale the customer’s servers.

• The approach avoids charges for unused capacity because the provider only charges for the time a server is actually used.
• A provider amortizes the cost of experts and their training over all customers, making it less expensive for each
customer.

• Some providers might charge a minimum monthly fee regardless of whether a service is utilized, but others
charge a customer nothing if the service remains unused over a given period (scale to zero).

• The serverless approach offers great flexibility when accommodating a large number of clients because it
provides arbitrary scale (scale to infinity), so a customer does not have to plan for a peak load.

Stateless Servers And Containers


• To make it feasible to deploy servers quickly, serverless technologies use containers (Chapter 6). To deploy and
manage a server, serverless systems use orchestration (Chapter 10), and to handle scale out, a serverless system
uses the controller-based approach (Chapter 13).

• Despite building on extant technologies, the serverless approach introduces two key features that distinguish
it from the traditional server approach:

o The use of stateless servers

o Adherence to an event-driven paradigm.

The use of stateless servers.


• The term state refers to data related to clients that a server stores internally.

• The term stateful server refers to a server that stores state information.

• The term stateless server refers to a server that does not store state information.

• Stateful servers store information for two reasons:

o Allows a server to provide continuity across multiple contacts by the client.

o Allows a server to share information among multiple clients.

• A stateful approach works well for a traditional server because the server runs on a single computer. Therefore,
a traditional server can use mechanisms such as shared memory to allow all instances of the server to access
and update the state information. Furthermore, because a traditional server has a long lifetime, state information
usually persists across many client contacts.

• The stateful approach does not work well for a server that runs in a data center and handles large scale.
Orchestration systems deploy instances on multiple physical computers, making shared memory impossible.
Furthermore, to handle microservices, containers are designed with a short lifetime: the container starts,
performs one function, and exits. Thus, state information does not persist for more than one client connection.

• To capture the idea that serverless computing focuses on running a single, stateless function in each container
(i.e., FaaS), some engineers say that serverless computing runs stateless functions.

• Because it uses containers and can run on multiple physical servers, a serverless computing system
requires server code to be stateless.

• Statefulness refers only to the information a server keeps in memory while it runs; when the server exits,
that information disappears. Stored data (e.g., a database, a file on NAS, or an object store) does not count
as state information because the data is not lost when the server exits.
• Although serverless computing requires servers to follow a stateless design, a server may store and
retrieve data from a database or persistent storage, such as a file on NAS or an object store.
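
The stateless design described above can be sketched as a handler that keeps nothing in memory between invocations; everything that must survive is read from and written back to external storage. Here a plain dict stands in for a real database or object store, and the visit-counting logic is an invented example.

```python
# Sketch of a stateless handler: each invocation is self-contained.
# `store` represents external persistent storage (a database, NAS file,
# or object store); the handler itself holds no state between calls.
def make_handler(store):
    def handle(request):
        # Load any needed data from external storage, act, persist, exit.
        count = store.get(request["user"], 0) + 1
        store[request["user"]] = count      # persist before returning
        return {"user": request["user"], "visits": count}
    return handle
```

Because the handler is stateless, a fresh container (a new `make_handler` call) attached to the same store continues exactly where the previous one left off.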

Adherence to an event-driven paradigm.


• Serverless computing adopts the event-driven paradigm that Kubernetes controllers use and generalizes it.

• The underlying cloud system generates events when changes occur (e.g., a physical server fails). In addition,
serverless systems count each server access as an event.

• Some serverless systems provide management interfaces or other programmatic interface components that
allow a human user to interact with the system or allow a computer program to use a protocol other than
HTTP. A contact through any of these interfaces counts as an event.

The Architecture Of A Serverless Infrastructure


• Serverless computing adopts the technology Kubernetes uses for controllers and follows the same general
architecture.

• The chief components include an event queue, a set of interface components that insert events into the queue,
and a dispatcher that repeatedly extracts an event and assigns a worker node to process the event. In the case
of serverless computing, worker nodes run the server code.

• Figure 14.4 illustrates the components in a serverless infrastructure.

Figure 14.4 The architecture that providers use for serverless computing.
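
The components listed above can be sketched in a few lines: interface components insert events into a queue, and a dispatcher repeatedly extracts an event and hands it to worker code. This sketch processes events sequentially for clarity; a real dispatcher assigns events to separate worker nodes, and the `None` sentinel is an invented shutdown convention.

```python
# Sketch of the serverless infrastructure architecture: an event queue
# plus a dispatcher that extracts events and runs the server code.
import queue

def dispatcher(event_queue, worker_fn, results):
    # Repeatedly extract an event and assign a worker to process it.
    # (A real system hands each event to a worker node; here the
    # worker function runs inline for simplicity.)
    while True:
        event = event_queue.get()
        if event is None:          # invented sentinel: shut down
            break
        results.append(worker_fn(event))

# An interface component would simply call event_queue.put(event)
# for each client contact or cloud-system change.
```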

An Example Of Serverless Processing


• Netflix uses the AWS Lambda event-driven facility for video transcoding, a step taken to prepare each new
video for customers to download. The arrangement has become a canonical example of serverless computing.

• Below are the basic steps taken to transcode a video.


1. A content provider uploads a new video.
2. A serverless function divides the new video into 5-minute video segments.
3. Each segment is given to a separate serverless function for processing.
4. The processed segments are collected, and the video is available for the customers to access.

• Events trigger each of the serverless processing steps.

o When a content provider uploads a new video, the system places the new video in an Amazon S3
bucket. The S3 object storage system generates an event that triggers a serverless function to divide the
video into segments that are each five minutes long.
o When a segment arrives in an S3 bucket, another event triggers a serverless function that processes and
transcodes the segment. Thus, transcoding can proceed in parallel.
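
The segment-transcoding step can be sketched as an AWS Lambda-style handler. The event layout follows the S3 notification format (records carrying a bucket name and object key); `transcode` is a hypothetical placeholder for the real transcoding work.

```python
# Sketch of a Lambda-style function triggered by an S3 event.
# The event structure mirrors the S3 notification format; transcode()
# is a hypothetical helper standing in for real transcoding logic.
def handler(event, context):
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(transcode(bucket, key))
    return results

def transcode(bucket, key):
    # Placeholder: a real implementation would fetch the segment from
    # S3, transcode it, and write the result to an output bucket.
    return f"transcoded s3://{bucket}/{key}"
```

Because each uploaded segment generates its own event, many copies of this function can run at once, which is what allows the transcoding to proceed in parallel.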

• Figure 14.6 illustrates how video data flows through the system and how events trigger processing.

Figure 14.6 The Netflix transcoding system. A new video event causes the video to be divided into segments;
a new segment event causes the segment to be transcoded.

Potential Disadvantages Of Serverless Computing


• Serverless computing offers three unbeatable advantages: the ability to scale arbitrarily, no need to manage
servers, and lower overall cost.

• Despite its advantages, serverless computing does have potential disadvantages.

o Serverless systems introduce latency. A traditional server can respond immediately because it starts
before any client initiates contact; in a serverless system, management software launches an instance
of the server only when it is needed, which delays the first response.

o Serverless systems can generate unexpected costs. When serverless computing is combined with
disaggregated microservices, each microservice runs as its own function, and the functions must be
orchestrated together. An error in one microservice can cascade into failures across many others, so
the customer pays not only a small cost for each disaggregated microservice invocation but also a cost
for each microservice that issues an alert, because the alert itself involves a function call.

Summary
• Serverless computing follows the traditional client-server paradigm in which one or more clients initiate contact
with a server.

• Unlike a traditional concurrent server that is limited to one computer, serverless computing uses cloud
technologies that allow it to scale arbitrarily and it separates server management from the core function of a
server, allowing cloud providers to offer services that deploy and operate servers for their customers.

• The chief motivation for serverless computing lies in its economic benefits. Because a cloud provider handles
the details of managing server deployment and scaling, a customer does not need to maintain staff with expertise.
Because a cloud provider only charges for the computation actually used, a customer does not pay for idle VMs
or servers.

• In terms of implementation, serverless computing adopts and extends the architecture used for Kubernetes
controller-based systems. Serverless systems use an event-based paradigm in which each change in the cloud
system and each contact from a client becomes an event that is added to a queue. A dispatcher repeatedly
extracts events and assigns them to worker nodes to handle.

• Despite all the advantages, serverless computing has potential disadvantages. Unlike a conventional server that
starts before clients initiate contact, serverless computing creates servers on demand, leading to a small delay.
Unexpectedly high costs can arise from cascades of events and from microservices that divide computation
into small functions.
