Serverless Architectures with Kubernetes
Create production-ready Kubernetes clusters and
run serverless applications on them
Onur Yılmaz
Sathsara Sarathchandra
Serverless Architectures with Kubernetes
Copyright © 2019 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system,
or transmitted in any form or by any means, without the prior written permission of the
publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of
the information presented. However, the information contained in this book is sold
without warranty, either express or implied. Neither the authors, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the
companies and products mentioned in this book by the appropriate use of capitals.
However, Packt Publishing cannot guarantee the accuracy of this information.
Authors: Onur Yılmaz and Sathsara Sarathchandra
Managing Editor: Snehal Tambe
Acquisitions Editor: Aditya Date
Production Editor: Samita Warang
Editorial Board: Shubhopriya Banerjee, Bharat Botle, Ewan Buckingham,
Megan Carlisle, Mahesh Dhyani, Manasa Kumar, Alex Mazonowicz, Bridget Neale,
Dominic Pereira, Shiny Poojary, Abhisekh Rane, Erol Staveley, Ankita Thakur,
Nitesh Thakur, and Jonathan Wray.
First Published: November 2019
Production Reference: 1281119
ISBN: 978-1-83898-327-7
Published by Packt Publishing Ltd.
Livery Place, 35 Livery Street
Birmingham B3 2PB, UK
Table of Contents
Preface i
Introduction 28
Serverless and the Cloud Evaluation Criteria 29
AWS Lambda 30
Exercise 4: Creating a Function in AWS Lambda and Invoking It via AWS API Gateway 32
Azure Functions 42
Exercise 5: Creating a Parameterized Function in Azure Functions 44
Google Cloud Functions 56
Exercise 6: Creating a Scheduled Function in GCF 58
Activity 2: Daily Stand-Up Meeting Reminder Function for Slack 70
Summary 73
Introduction 76
Fn Framework 77
Exercise 7: Getting Started with the Fn Framework 78
Exercise 8: Running Functions in the Fn Framework 83
The Serverless Framework 89
Exercise 9: Running Functions with the Serverless Framework 92
Activity 3: Daily Weather Status Function for Slack 105
Summary 108
Appendix 371
Index 453
Preface
About
This section briefly introduces the authors, the coverage of this book, the technical skills you'll need to get started, and the hardware and software requirements needed to complete all of the included activities and exercises.
Learning Objectives
By the end of this book, you will be able to:
• Deploy a Kubernetes cluster locally with Minikube
• Use AWS Lambda and Google Cloud Functions
• Create, build, and deploy a web page generated by the serverless functions in the
cloud
Audience
This book is for software developers and DevOps engineers who have basic or
intermediate knowledge about Kubernetes and want to learn how to create serverless
applications that run on Kubernetes. Those who want to design and create serverless
applications running on the cloud, or on-premise Kubernetes clusters, will also find this
book useful.
Approach
This book provides examples of engaging projects that have a direct correlation to
how serverless developers work in the real world with Kubernetes clusters. You'll build
example applications and tackle programming challenges that'll prepare you for large,
complex engineering problems. Each component is designed to engage and stimulate
you so that you can retain and apply what you learn in a practical context with the
maximum impact. By completing the book, you'll walk away feeling capable of tackling
real-world serverless Kubernetes applications development.
Hardware Requirements
For the optimal student experience, we recommend the following hardware
configuration:
• Processor: Intel Core i5 or equivalent
• Memory: 8 GB RAM (16 GB preferred)
• Hard disk: 10 GB available space
• Internet connection
Software Requirements
We also recommend that you have the following software installed in advance:
• Sublime Text (latest version), Atom IDE (latest version), or another similar text
editor application
• Git
Additional Requirements
• Azure account
• Google Cloud account
• AWS account
• Docker Hub account
• Slack account
Conventions
Code words in the text, database table names, folder names, filenames, file extensions,
pathnames, dummy URLs, user input, and Twitter handles are shown as follows:
"Write hello-from-lambda as the function name and Python 3.7 as the runtime."
New terms and important words are shown in bold. Words that you see on the screen,
for example, in menus or dialog boxes, appear in the text like this: "Open the AWS
Management Console, write Lambda in the Find Services search box, and click Lambda
- Run Code without Thinking about Servers."
A block of code is set as follows:
import json
Additional Resources
The code bundle for this book is also hosted on GitHub at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
1. Introduction to Serverless
Learning Objectives
In this chapter, we will explain the serverless architecture, then create our first serverless
function and package it as a container.
Introduction to Serverless
Cloud technology is in a state of constant transformation aimed at creating scalable, reliable, and robust environments. Every improvement in cloud technology aims to improve both the end user experience and the developer experience. End users demand fast and robust applications that are reachable from everywhere in the world. At the same time, developers demand a better environment in which to design, deploy, and maintain their applications. In the last decade, the journey of cloud technology started with cloud computing, where servers are provisioned in cloud data centers and applications are deployed on them. The transition to cloud data centers decreased costs and removed the responsibility of operating data centers. However, as billions of people are accessing the
internet and demanding more services, scalability has become a necessity. In order
to scale applications, developers have created smaller microservices that can scale
independently of each other. Microservices are packaged into containers as building
blocks of software architectures to better both the developer and end user experience.
Microservices enhance the developer experience by providing better maintainability
while offering high scalability to end users. However, the flexibility and scalability of
microservices cannot keep up with the enormous user demand. Today, for instance,
millions of banking transactions take place daily, and millions of business-to-business
requests are made to backend systems.
Finally, serverless started gaining attention for creating future-proof and ad
hoc-scalable applications. Serverless designs focus on creating even smaller services
than microservices and they are designed to last much longer into the future. These
nanoservices, or functions, help developers to create more flexible and easier-to-
maintain applications. On the other hand, serverless designs are ad hoc-scalable,
which means if you adopt a serverless design, your services are naturally scaled up
or down with the user requests. These characteristics of serverless have made it the
latest big trend in the industry, and it is now shaping the cloud technology landscape.
In this section, an introduction to serverless technology will be presented, looking at
serverless's evolution, origin, and use cases.
Before diving deeper into serverless design, let's understand the evolution of cloud
technology. In bygone days, the expected process of deploying applications started
with the procurement and deployment of hardware, namely servers. Following that,
operating systems were installed on the servers, and then application packages were
deployed. Finally, the actual code in application packages was executed to implement
business requirements. These four steps are shown in Figure 1.1:
VMs enable the running of multiple instances on the same hardware. However, using
VMs requires installing a complete operating system for every application. Even for a
basic frontend application, you need to install an operating system, which results in an
overhead of operating system management, leading to limited scalability. Application developers and the high-level usage of modern applications require faster and simpler solutions with better isolation than creating and managing VMs. Containerization
technology solves this issue by running multiple instances of "containerized"
applications on the same operating system. With this level of abstraction, problems
related to operating systems are also removed, and containers are delivered as
application packages, as illustrated in Figure 1.4. Containerization technology enables
a microservices architecture where software is designed as small and scalable services
that interact with each other.
However, the most definitive and complete explanation of serverless was presented in
2016, at the AWS developer conference, as the Serverless Compute Manifesto. It consists
of eight strict rules that define the core ideas behind serverless architecture:
Note
Although it was discussed in various talks at the AWS Summit 2016 conference,
the Serverless Compute Manifesto has no official website or documentation. A
complete list of what the manifesto details can be seen in a presentation by Dr.
Tim Wagner: https://www.slideshare.net/AmazonWebServices/getting-started-with-aws-lambda-and-the-serverless-cloud.
• Instrumentation: Logs of the functions and the metrics collected over the
function calls should be available to the developers. This makes it possible to
debug and solve problems related to functions. Since they are already running on
remote servers, instrumentation should not create any further burden in terms of
analyzing potential problems.
The original manifesto introduced some best practices and limitations; however, as
cloud technology evolves, the world of serverless applications evolves. This evolution
will make some rules from the manifesto obsolete and will add new rules. In the
following section, use cases of serverless applications are discussed to explain how
serverless is adopted in the industry.
These use cases illustrate that serverless architectures can be used to design any
modern application. It is also possible to move some parts of monolithic applications
and convert them into serverless functions. If your current online shop is a single Java
web application packaged as a JAR file, you can separate its business functions and
convert them into serverless components. The dissolution of giant monoliths into small
serverless functions helps to solve multiple problems at once. First of all, scalability
will never be an issue for the serverless components of your application. For instance,
if you cannot handle a high amount of payments during holidays, a serverless platform
will automatically scale up the payment functions with the usage levels. Secondly, you
do not need to limit yourself to the programming language of the monolith; you can
develop your functions in any programming language. For instance, if your database
clients are better implemented with Node.js, you can code the database operations of
your online shop in Node.js.
Finally, you can reuse the logic implemented in your monolith since now it is a shared
serverless service. For instance, if you separate the payment operations of your
online shop and create serverless payment functions, you can reuse these payment
functions in your next project. All these benefits make it appealing for start-ups as
well as large enterprises to adopt serverless architectures. In the following section,
serverless architectures will be discussed in more depth, looking specifically at some
implementations.
Possible answers:
• Applications with high latency
• When observability and metrics are critical for business
• When vendor lock-in and ecosystem dependencies are an issue
Microservices are deployed to the servers, which are still managed by the operations
teams. With the serverless architecture, the components are converted into third-party
services or functions. For instance, the security of the e-commerce website could be
handled by an Authentication-as-a-Service offering such as Auth0. AWS Relational
Database Service (RDS) can be used as the database of the system. The best option
for the backend logic is to convert it into functions and deploy them into a serverless
platform such as AWS Lambda or Google Cloud Functions. Finally, the frontend could
be served by storage services such as AWS Simple Storage Service (S3) or Google Cloud
Storage.
With a serverless design, you only need to define these services to have scalable, robust, and managed applications running in harmony, as shown in Figure 1.8:
Note
Auth0 is a platform for providing authentication and authorization for web,
mobile, and legacy applications. In short, it provides authentication and
authorization as a service, where you can connect any application written in
any language. Further details can be found on its official website: https://auth0.com.
Starting from a monolith architecture, first dissolving it into microservices, and then into serverless components is beneficial for multiple reasons:
• Cost: Serverless architecture helps to decrease costs in two critical ways. The first
is that the management of the servers is outsourced, and the second is that it only
costs money when the serverless applications are in use.
• Scalability: If an application is expected to grow, the current best choice is to
design it as a serverless application since that removes the scalability constraints
related to the infrastructure.
• Flexibility: When the scope of deployable units is decreased, serverless provides
more flexibility to innovate, choose better programming languages, and manage
with smaller teams.
These dimensions and how they vary between software architectures is visualized in
Figure 1.9:
When you start with a traditional software development architecture, the transition to
microservices increases scalability and flexibility. However, it does not directly decrease
the cost of running the applications since you are still dealing with the servers. Further
transition to serverless improves both scalability and flexibility while decreasing the
cost. Therefore, it is essential to learn about and implement serverless architectures
for future-proof applications. In the following section, the implementation of serverless
architecture, namely Function as a Service (FaaS), will be presented.
These properties of functions are covered by cloud providers' offerings, such as AWS
Lambda, Google Cloud Functions, and Azure Functions; and on-premises offerings,
such as Kubeless, Apache OpenWhisk, and OpenFaaS. With its high popularity, the term
FaaS is mostly used interchangeably with the term serverless. In the following exercise,
we will create a function to handle HTTP requests and illustrate how a serverless
function should be developed.
Note
The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise1.
To successfully complete the exercise, we need to ensure the following steps are
executed:
1. Create a file named function.go with the following content in your favorite text
editor:
package main

import (
	"fmt"
	"net/http"
)

func WelcomeServerless(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello Serverless World!")
}
In this file, we have created an actual function handler to respond when this
function is invoked.
2. Create a file named main.go with the following content:
package main

import (
	"fmt"
	"net/http"
)

func main() {
	fmt.Println("Starting the serverless environment..")
	http.HandleFunc("/", WelcomeServerless)
	fmt.Println("Function handlers are registered.")
	http.ListenAndServe(":8080", nil)
}
In this file, we have created the environment to serve this function. In general, this
part is expected to be handled by the serverless platform.
3. Start a Go development environment with the following command in your
terminal:
docker run -it --rm -p 8080:8080 -v "$(pwd)":/go/src --workdir=/go/src golang:1.12.5
With that command, a shell prompt will start inside a Docker container for
Go version 1.12.5. In addition, port 8080 of the host system is mapped to the
container, and the current working directory is mapped to /go/src. You will be
able to run commands inside the started Docker container:
4. Start the function handlers with the following command in the shell prompt
opened in step 3: go run *.go.
With the start of the applications, you will see the following lines:
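Starting the serverless environment..
Function handlers are registered.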
These lines indicate that the main function inside the main.go file is running as
expected.
5. Open http://localhost:8080 in your browser.
The message displayed on the web page reveals that the WelcomeServerless function is successfully invoked via the HTTP request and that the response is retrieved.
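You can also verify the function from a second terminal by using curl:
curl http://localhost:8080
This command should print Hello Serverless World!, which is the message written by the WelcomeServerless handler.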
6. Press Ctrl + C to exit the function handler and then write exit to stop the
container:
With this exercise, we demonstrated how we can create a simple function. In addition,
the serverless environment was presented to show how functions are served and
invoked. In the following section, an introduction to Kubernetes and the serverless
environment is given to connect the two cloud computing phenomena.
Cloud computing and deployment strategies are always evolving to create more
developer-friendly environments with lower costs. Kubernetes and containerization have already won over the market and the hearts of developers, to the extent that Kubernetes is likely to remain central to cloud computing for a long time to come. By providing
the same benefits, serverless architectures are gaining popularity; however, this does
not pose a threat to Kubernetes. On the contrary, serverless applications will make
containerization more accessible, and consequently, Kubernetes will profit. Therefore, it
is essential to learn how to run serverless architectures on Kubernetes to create future-
proof, cloud-native, scalable applications. In the following exercise, we will combine
functions and containers and package our functions as containers.
Possible answers:
• Serverless – data preparation
• Serverless – ephemeral API operations
• Kubernetes – databases
• Kubernetes – server-related operations
Note
The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise2.
To successfully complete the exercise, we need to ensure the following steps are
executed:
1. Create a file named Dockerfile in the same folder as the files from Exercise 1:
FROM golang:1.12.5-alpine3.9 AS builder
ADD . .
RUN go build *.go
FROM alpine:3.9
COPY --from=builder /go/function ./function
RUN chmod +x ./function
ENTRYPOINT ["./function"]
In this multi-stage Dockerfile, the function is built inside the golang:1.12.5-alpine3.9 container. Then, the binary is copied into the alpine:3.9 container as the final application package.
2. Build the Docker image with the following command in the terminal: docker build
. -t hello-serverless.
Each line of Dockerfile is executed sequentially, and finally, with the last step, the
Docker image is built and tagged: Successfully tagged hello-serverless:latest:
3. Start a Docker container from the hello-serverless image with the following
command in your Terminal: docker run -it --rm -p 8080:8080 hello-serverless.
With that command, an instance of the Docker image is instantiated with port 8080 of the host system mapped to the container. Furthermore, the --rm flag will remove the container when it exits. The log lines indicate that the container of the function is running as expected:
Note
The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise3.
To successfully complete the exercise, we need to ensure that the following steps are
executed:
1. Change the contents of function.go from Exercise 2 to the following, so that the function reads a name parameter from the query string:

package main

import (
	"fmt"
	"net/http"
)

func WelcomeServerless(w http.ResponseWriter, r *http.Request) {
	// Greet the caller by name when a name query parameter is provided,
	// and fall back to the generic message otherwise.
	names, ok := r.URL.Query()["name"]
	if ok && len(names) > 0 {
		fmt.Fprintf(w, "Hello %s!", names[0])
	} else {
		fmt.Fprintf(w, "Hello Serverless World!")
	}
}
2. Build the Docker image again by using the following command in the terminal: docker build . -t hello-serverless.
3. Start a Docker container from the hello-serverless image with the following command in the terminal: docker run -it --rm -p 8080:8080 hello-serverless.
With that command, the function handlers will start on port 8080 of the host system:
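You can check the new behavior from a second terminal by using curl, where Ada is just an example value for the name parameter:
curl http://localhost:8080
curl "http://localhost:8080?name=Ada"
The first call should return the default Hello Serverless World! message, while the second should return the personalized message Hello Ada!.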
In this exercise, we showed how a generic function can handle different parameters. Personal messages based on input values were returned by the single function that we deployed. In the following activity, a more complex function will be created and managed as a container to show how functions are implemented in real life.
Once completed, you will have a container running for the function:
When you query via an HTTP REST API, it should return sentences similar to the
following when bike points are found with available bikes:
When there are no bike points found or no bikes are available at those locations, the
function will return a response similar to the following:
Figure 1.25: Function response when a bike point is located but no bike is found
Note
The files main.go, function.go, and Dockerfile can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Activity1.
The solution for the activity can be found on page 372.
In this activity, we built the backend of a Twitter bot. We started by defining main and
FindBikes functions. Then we built and packaged this serverless backend as a Docker
container. Finally, we tested it with various inputs to find the closest bike station. With
this real-life example, the background operations of a serverless platform and how to
write serverless functions were illustrated.
Summary
In this chapter, we first described the journey from traditional to serverless software
development. We discussed how software development has changed over the years
to create a more developer-friendly environment. Following that, we presented the
origin of serverless technology and its official manifesto. Since serverless is a popular
term in the industry, defining some rules helps to design better serverless applications
that integrate easily into various platforms. We then listed use cases for serverless
technology to illustrate how serverless architectures can be used to create any modern
application.
Following an introduction to serverless, FaaS was explored as an implementation of
serverless architectures. We showed how applications are designed in traditional,
microservices, and serverless designs. In addition, the benefits of the transition to
serverless architectures were discussed in detail.
Finally, Kubernetes and serverless technologies were discussed to show how they
support each other. As the mainstream container management system, Kubernetes was
presented, which involved looking at the advantages of running serverless platforms
with it. Containerization and microservices are highly adopted in the industry, and
therefore running serverless workloads as containers was covered, with exercises.
Finally, a real-life example of functions as a backend for a Twitter bot was explored. In
this activity, functions were packaged as containers to show the relationship between
microservices-based, containerized, and FaaS-backed designs.
In the next chapter, we will be introducing serverless architecture in the cloud and
working with cloud services.
2. Introduction to Serverless in the Cloud
Learning Objectives
• Evaluate the criteria for choosing the best serverless FaaS provider
• Identify the supported languages, trigger types, and cost structure of major cloud service
providers
• Deploy serverless functions to cloud providers and integrate functions with other cloud
services
In this chapter, we will explain the serverless FaaS products of cloud providers, create our first
serverless functions in the cloud, and integrate with other cloud services.
Introduction
In the previous chapter, the architectural evolution of traditional architectures to
serverless designs was discussed. In addition, the origin and benefits of serverless were
presented to explain its high adoption and success in the industry. In this chapter,
the focus will be on the serverless platforms of cloud providers. Let's start with the
evolution of cloud technology offerings over the years.
At the start of cloud computing, the primary offering of cloud providers was their provisioned and ready-to-use hardware, namely the infrastructure. Cloud providers managed hardware and networking operations, and therefore, the product they were offering was Infrastructure-as-a-Service (IaaS), as illustrated in the following diagram.
All cloud providers are still offering IaaS products as their core functionality, such as
Amazon Elastic Compute Cloud (Amazon EC2) in AWS and Google Compute Engine in
GCP.
In the following years, cloud providers started to offer platforms where developers
could only run their applications. With this abstraction, manual server provisioning,
security updates, and server failures became the concerns of the cloud provider.
These offerings are known as Platform-as-a-Service (PaaS) since they only focus on
running applications and their data on their platforms. Heroku is the most popular
PaaS provider, although each cloud provider has its own PaaS products, such as AWS
Elastic Beanstalk or Google App Engine. Similar to IaaS, PaaS is still in use in software
development.
In the top-level abstraction, the functions of the applications operate as the unit of control in serverless architectures. This is known as Function-as-a-Service (FaaS), and it has been offered by all the significant cloud providers in recent years. The abstraction levels from
IaaS to PaaS, and finally to FaaS, can be seen in the following diagram:
Serverless and the Cloud Evaluation Criteria
When choosing a serverless FaaS provider, it helps to compare the offerings along the following criteria:
• Programming language support: Functions can only be implemented in one of the languages that are supported. It is one of the most significant decision factors since implementing the functions in another language is not feasible in most circumstances.
• Function triggers: Functions are designed to be triggered by cloud provider
services and external methods. The conventional techniques are scheduled calls,
on-demand calls, and integration with other cloud services, such as databases,
queues, and API gateways.
• Cost: The most compelling characteristic of the serverless architecture is its cost-
effectiveness and the mainstream way of calculating the price, that is, pay per
request. It is essential to calculate the actual and projected costs for the feasibility
of long-running projects.
AWS Lambda
AWS Lambda is the first FaaS offering, and it also created the serverless hype in
the industry. It was made public in 2014 and has been widely adopted in the cloud
computing world by all levels of organizations. It made it possible for start-ups to
create new products in a short amount of time. It also enabled large enterprises such
as Netflix to move event-based triggers to serverless functions. With the opportunity
of removing the server operation burden, AWS Lambda and serverless became the next
trend in the industry. In this section, we will discuss AWS Lambda for programming
language support, trigger types, and cost structure. In addition, our very first serverless
function will be deployed.
Note
The official website of AWS Lambda can be found here if you wish to find out more:
https://aws.amazon.com/lambda.
AWS Lambda supports the Java, Python, Node.js, C#, Ruby, and Go programming
languages when it comes to serverless functions. Furthermore, AWS Lambda provides
an API called AWS Lambda Runtime Interface to enable the integration of any language
as a custom runtime. Therefore, it could be stated that AWS Lambda natively supports
a rich set of popular languages while allowing an extension to other programming
languages.
AWS Lambda is designed to have event-triggered functions. This is where the functions
process the events that have been retrieved from event sources. Within the AWS
ecosystem, various services can be an event source, including the following:
• Amazon S3 file storage for instances when new files are added
• Amazon Alexa to implement new skills for voice assistance
• Amazon CloudWatch Events for the events that occur in the state changes of
cloud resources
• Amazon CodeCommit for when developers push new commits to the code
repository
In addition to these services, the essential AWS service for the serverless event source
is the Amazon API Gateway. It has the REST API ability to invoke Lambda functions
over HTTPS, and it permits the management of multiple Lambda functions for different
methods, such as GET, POST, PATCH, and DELETE. In other words, API Gateway creates a
layer between the serverless functions and the outside world. This layer also handles
the security of the Lambda functions by protecting them against Distributed Denial of
Service (DDoS) attacks and defining throttles. The trigger types and the environment
are highly configurable for AWS Lambda functions if you want to integrate with other
AWS services or make them public via the API Gateway.
For the pricing of AWS Lambda, there are two critical points to take note of: the first one is the request charges, and the second one is the compute charges. Request charges are based on the number of function invocations, while compute charges are calculated in GB-seconds. The compute charge is the multiplication of memory size and execution time:
• Memory Size (GB): This is the configured allocated memory for the functions.
• Execution time (ms): This is the realized execution time that the functions will be
running for.
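Put together, the monthly compute usage and charge can be estimated as follows:
Compute usage (GB-seconds) = Memory (GB) x Execution time (s) x Number of invocations
Compute charge = (Compute usage - Free tier) x Price per GB-second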
In addition, there is a free tier where the first 1 million requests and 400,000 GB-seconds of compute usage are waived monthly. A simple calculation, including the free tier, can show how cheap running a serverless function could be.
Let's assume that your function is called 30 million times in a month. You have allocated
128 MB of memory, and on average, the function runs for 200 ms:
Request charges:
Price: $0.20 per 1 M requests
Free tier: 1 M
Monthly request: 30 M
Monthly request charge: 29 M x $0.20 / M = $5.80
Compute charges:
Price: $0.0000166667 per GB-second
Free tier: 400,000 GB-seconds
Monthly compute: 30 M x 0.2 second x 128 MB / 1024 = 750,000 GB-seconds
Monthly compute charge: (750,000 - 400,000) x $0.0000166667 = $5.83
Monthly total cost: $5.80 + $5.83 = $11.63
Note
In order to complete this exercise, you will need to have an active Amazon Web
Services account. You can create an account at https://aws.amazon.com/.
Exercise 4: Creating a Function in AWS Lambda and Invoking It via AWS API Gateway
In this exercise, we will be creating our first AWS Lambda function and connecting it to AWS API Gateway so that we can invoke it over an HTTP endpoint.
To successfully complete this exercise, we need to ensure that the following steps are
executed:
1. Open the AWS Management Console, write Lambda in the Find Services search
box, and click Lambda - Run Code without Thinking about Servers. The console
will look as follows:
2. Click on Create function in the Lambda functions list, as shown in the following
screenshot:
3. Select Author from scratch in the Create function view. Write hello-from-lambda
as the function name and Python 3.7 as the runtime. Click Create function at the
bottom of the screen, as shown in the following screenshot:
4. You will be directed to the hello-from-lambda function view, which is where you can edit the function code and its configuration.
7. Click Save at the top of the screen, as shown in the following screenshot:
8. Open the Designer view and click Add trigger, as shown in the following
screenshot:
9. Select API Gateway from the triggers list, as shown in the following screenshot:
10. Select Create a new API for the API and Open for the Security configurations on
the trigger configuration screen, as shown in the following screenshot:
On this screen, a new API has been defined in the API Gateway with open security
for the hello-from-lambda function. This configuration ensures that an endpoint
will be created and that it will be accessible without any authentication.
12. Get the API Gateway endpoint from the API Gateway section, as shown in the
following screenshot:
13. Open the URL in a new tab to trigger the function and get the response, as shown
in the following screenshot:
This JSON response indicates that the AWS Lambda function is connected via the
API Gateway and working as expected.
14. Return to the Functions list from Step 2, select hello-from-lambda, and choose
Delete from Actions. Then, click Delete in the pop-up window to remove the
function from Lambda, as shown in the following screenshot:
In this exercise, the general flow of creating an AWS Lambda function and connecting it to AWS API Gateway for HTTP access was shown. In less than 10 steps, it is possible
to have running production-ready services in an AWS Lambda cloud environment. This
exercise has shown you how serverless platforms can make software development fast
and easy. In the following section, the analysis of cloud provider serverless platforms
will continue with Azure Functions by Microsoft.
Azure Functions
Microsoft announced Azure Functions in 2016 as the serverless platform in the
Microsoft Azure cloud. Azure Functions extends its cloud platform with event triggers
from Azure or external services to run serverless workloads. It differentiates itself by focusing on Microsoft-supported programming languages and tools that are highly prevalent in the industry. In this section, Azure Functions will be discussed in terms of
the supported programming languages, trigger types, and cost. Finally, we will deploy
a function that takes parameters from endpoints to Azure Functions to illustrate its
operational side.
Note
The official website of Azure Functions can be found here if you wish to find out
more: https://azure.microsoft.com/en-us/services/functions/.
The latest version of Azure Functions supports C#, JavaScript in the Node.js runtime, F#, Java, PowerShell, Python, and TypeScript, which is transpiled into JavaScript. In
addition, a language extensibility interface is provided for the communication between
the functions runtime and the worker processes over gRPC as a messaging layer. It is
valuable to check the generally available, experimental, and extendible programming
languages supported by Azure Functions before we start utilizing it.
Note
gRPC is a remote procedure call (RPC) system that was initially developed at
Google. It is an open source system that enables cross-platform communication
without language or platform limitations.
Azure Functions are designed to be triggered by various types, such as timers, HTTP,
file operations, queue messages, and events. In addition, input and output bindings
can be specified for functions. These bindings define the input arguments for the
functions and output values to send other services. For instance, it is possible to create
a scheduled function to read files from Blob Storage and create Cosmos DB documents
as outputs. In this example, the function could be defined with a timer trigger, Blob
Storage input binding, and Cosmos DB output binding. Triggers and bindings make it easy to integrate Azure Functions with Azure services and the external world.
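As a rough sketch of how such a configuration might look, the scheduled function from this example could declare its trigger and input binding in a function.json file similar to the following; the path, schedule, and connection values are illustrative, and exact property names can vary between runtime versions:

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 * * * *"
    },
    {
      "name": "inputBlob",
      "type": "blob",
      "direction": "in",
      "path": "input-container/data.json",
      "connection": "AzureWebJobsStorage"
    }
  ]
}

Here, the timer fires at minute zero of every hour, and the contents of the specified blob are passed to the function as an input argument; an output binding for Cosmos DB would be declared in the same list with its direction set to out.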
There are two differences between the cost calculation method and the current prices
of Azure Functions compared to AWS Lambda. The first difference is that the current
computation price of Azure Functions is slightly cheaper, at $0.000016/GB per second.
The second difference is that Azure Functions calculates using observed memory
consumption while the memory limit is preconfigured in AWS Lambda.
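For a rough comparison, we can repeat the AWS example for Azure Functions, assuming the consumption plan's commonly published rates of $0.20 per million executions, a monthly free grant of 1 million executions and 400,000 GB-seconds, and observed memory consumption matching the 128 MB and 200 ms per call of the AWS example over 30 million calls:
Request charges: 29 M x $0.20 / M = $5.80
Compute usage: 30 M x 0.2 second x 128 MB / 1024 = 750,000 GB-seconds
Compute charges: (750,000 - 400,000) x $0.000016 = $5.60
Monthly total cost: $5.80 + $5.60 = $11.40
This lands in the same range as the AWS Lambda example, which is why both are referred to later in this chapter as costing around $11 per month.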
In the following exercise, the very first serverless function will be deployed to Azure
Functions and will be invoked to show the operational view of the platform.
Note
In order to complete this exercise, you need to have an active Azure account. You
can create an account at https://signup.azure.com/.
2. Click on Create Function App from the Function App list, as shown in the
following screenshot:
3. Give the app a unique name, such as hello-from-azure, and select Node.js as
the Runtime Stack. Click on Create at the bottom of the page, as shown in the
following screenshot:
4. You will be redirected to the Function App list view. Check for a notification at the
top of the menu. You will see Deployment to resource group 'hello-from-azure' is
in progress, as shown in the following screenshot:
6. Select In-portal for function creation inside the Azure web portal as a
development environment and click Continue, as shown in the following
screenshot:
7. Select Webhook + API and click Create, as shown in the following screenshot:
8. Change the code in index.js to the following so that the function reads a name parameter from the query string or the request body (this mirrors the default HTTP trigger template):

module.exports = async function (context, req) {
    if (req.query.name || (req.body && req.body.name)) {
        context.res = {
            status: 200,
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body."
        };
    }
};
This code exports a function that accepts parameters from the request. The
function creates a personalized message and sends it as output to the users.
The code should be inserted into the code editor, as shown in the following
screenshot:
9. Click on Get function URL and copy the URL inside the popup, as shown in the following screenshot:
11. Open the URL you copied in Step 9 into a new tab in the browser, as shown in the
following screenshot:
Add &name= and your name to the end of the URL and reload the tab, for example, https://hello-from-azure.azurewebsites.net/api/HttpTrigger?code=nNrck...&name=Onur, as shown in the following screenshot:
Type the name of the function into the pop-up view and click Delete to delete all
the resources. In the confirmation view, a warning indicates that deletion of the
function application is irreversible, as you can see in the following screenshot:
In the following section, Google Cloud Functions will be discussed in a similar way, and
a more complicated function will be deployed to the cloud provider.
Google Cloud Functions
Note
The official website of Google Cloud Functions can be found here if you wish to
find out more: https://cloud.google.com/functions/.
Google Cloud Functions (GCF) can be developed in Node.js, Python, and Go. Compared
to the other major cloud providers, GCF supports a small subset of languages. In addition, there are no publicly available language extensions or APIs supported by GCF.
Thus, it is essential to evaluate whether the languages supported by GCF are feasible for
the functions you will develop.
Google Cloud Functions are designed to be associated with triggers and events. Events
happen within your cloud services, such as database changes, new files in the storage
system, or when provisioning new virtual machines. Triggers are the declaration of the
services and related events as inputs to functions. It is possible to create triggers as
HTTP endpoints, Cloud Pub/Sub queue messages, or storage services such as Cloud
Storage and Cloud Firestore. In addition, functions can be connected to the big data
and machine learning services that are provided in the Google Cloud Platform.
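Although the following exercise uses the Cloud Console, the same deployment can be performed with the gcloud CLI. As a rough sketch, assuming a Go function named HelloWorld in the current directory, a deployment command could look as follows:

gcloud functions deploy HelloWorld --runtime go111 --trigger-http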
The cost calculation of Google Cloud Functions is slightly more complex compared to other cloud providers. This is because it takes invocations, computation time, and outbound network data into consideration, while other cloud providers focus only on invocations and compute time:
• Invocations: Function invocations are charged $0.40 for every one million
requests.
• Compute time: The computation times of the functions are measured from the
time of invocation to their completion in 100 ms increments. For instance, if your
function takes 240 ms to complete, you will be charged for 300 ms of computation
time. There are two units that are used in this calculation: GB-seconds and GHz-seconds. One GB-second corresponds to 1 GB of memory provisioned for a function running for 1 second, and its price is $0.0000025. Similarly, one GHz-second corresponds to 1 GHz of CPU provisioned for a function running for 1 second, and its price is $0.0000100.
• Outbound network data: Data that's transferred from the function to the outside
is measured in GB and charged at $0.12 for every GB of data.
GCF's free tier provides 2 million invocations, 400,000 GB-seconds and 200,000 GHz-seconds of computation time, and 5 GB of outbound network traffic per month. Compared to AWS or Azure, GCF will cost slightly more since it has higher prices and more sophisticated calculation methods.
Let's assume that your function is called 30 million times in a month. You have allocated
128 MB of memory, 200 MHz CPU, and on average, the function runs for 200 ms, similar
to the example for AWS Lambda:
Request charges:
Price: $0.40 per 1 M requests
Free tier: 2 M
Monthly requests: 30 M
Monthly request charge: 28 M x $0.40 / M = $11.20
Compute charges - Memory:
Price: $0.0000025 per GB-second
Free tier: 400,000 GB-seconds
Monthly compute: 30 M x 0.2 second x 128 MB / 1024 = 750,000 GB-seconds
Monthly memory charge: (750,000 - 400,000) x $0.0000025 = $0.875
Compute charges - CPU:
Price: $0.0000100 per GHz-second
Free tier: 200,000 GHz-seconds
Monthly compute: 30 M x 0.2 second x 200 MHz / 1000 = 1,200,000 GHz-seconds
Monthly CPU charge: (1,200,000 - 200,000) x $0.0000100 = $10
Monthly total cost: $11.20 + $0.875 + $10 = $22.075
Since the unit prices are slightly higher than AWS and Azure, the total monthly cost of
running the same function is more than $22 in GCP, while it was around $11 for AWS and
Azure. Also, any outbound network from the functions to the outside world is critical
when it comes to potential extra costs. Therefore, pricing methods and unit prices
should be analyzed in depth before you choose a serverless cloud platform.
In the following exercise, our very first serverless function will be deployed to GCF and
will be invoked by a scheduled trigger to show the operational view of the platform.
Note
In order to complete this exercise, you need to have an active Google account. You
can create an account at https://console.cloud.google.com/start.
2. Click on Create function on the Cloud Functions page, as shown in the following
screenshot:
3. In the function creation form, change the function name to HelloWorld and select
128 MB for the memory allocation. Ensure that HTTP is selected as the trigger
method and that Go 1.11 is selected as the runtime, as shown in the following
screenshot:
4. Change function.go using the inline editor inside the browser so that it has the
following content:
package p

import (
	"fmt"
	"net/http"
)

// HelloWorld prints a static message to the output.
func HelloWorld(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello World!")
}
This code segment creates a HelloWorld function with a static message printed to
the output. The code should be inserted into function.go in the code editor, as
shown in the following screenshot:
5. Copy the URL in the form below the Trigger selection box to invoke the function,
as shown in the following screenshot:
6. Click on the Create button at the end of the form. With this configuration, the
code from step 4 will be packaged and deployed to Google Cloud Platform. In
addition, a trigger URL will be assigned to the function to be reachable from
outside, as shown in the following screenshot:
Wait a couple of minutes until the HelloWorld function in the function list has a
green check icon next to it, as shown in the following screenshot:
7. Open the URL you copied in step 5 into a new tab in your browser, as shown in the
following screenshot:
The response shows that the function has been successfully deployed and is
running as expected.
64 | Introduction to Serverless in the Cloud
8. Click on Cloud Scheduler in the left menu, under TOOLS, as shown in the
following screenshot:
9. Click on Create job on the Cloud Scheduler page, as shown in the following
screenshot:
10. Select a region if you are using Cloud Scheduler for the first time in your Google
Cloud project and click Next, as shown in the following screenshot:
11. Set the job name as HelloWorldEveryMinute and the frequency as * * * * *, which
means the job will be triggered every minute. Select HTTP as the target and
paste the URL you copied in step 5 into the URL box, as shown in the following
screenshot:
12. You will be redirected to the Cloud Scheduler list, as shown in the following
screenshot:
Wait for a couple of minutes and click the Refresh button. The list will show the
Last run timestamp and its result for HelloWorldEveryMinute, as shown in the
following screenshot:
This indicates that the cloud scheduler triggered our function at Aug 13, 2019,
3:44:00 PM and that the result was successful.
13. Return to the function list from step 7 and click … for the HelloWorld function.
Then, click Logs, as shown in the following screenshot:
You will be redirected to the logs of the function, where you will see that, every
minute, Function execution started and the corresponding success logs are listed,
as shown in the following screenshot:
As you can see, the cloud scheduler is invoking the function as planned and that
the function is running successfully.
14. Return to the Cloud Scheduler page from Step 13, choose HelloWorldEveryMinute,
click Delete on the menu, and then confirm this in the popup, as shown in the
following screenshot:
15. Return to the Cloud Functions page from step 7, choose HelloWorld, click Delete
on the menu, and then confirm this in the popup, as shown in the following
screenshot:
In this exercise, we created a Hello World function and deployed it to GCF. In addition, a Cloud Scheduler job was created to trigger the function at specific intervals, such as every minute. The function is now connected to another cloud service, which invokes it on a schedule. It is essential to integrate functions with other cloud services and to evaluate these integration capabilities prior to choosing a cloud FaaS provider.
In the following activity, you will develop a real-life daily stand-up reminder function. You will connect the function to a trigger service that invokes it at your specific stand-up meeting time. In addition, the reminder will send a specific message to a cloud-based collaboration tool, namely Slack.
Activity 2: Daily Stand-Up Meeting Reminder Function for Slack
Note
In order to complete this activity, you need access to a Slack workspace. You can use your existing Slack workspace or create a new one for free at https://slack.com/create.
Once completed, you will have deployed a daily stand-up reminder function to GCF, as
shown in the following screenshot:
In addition, you will need an integration environment for invoking the function at
specified meeting times. Stand-up meetings generally take place at a specific time on
workdays. Thus, a scheduler job will be connected to trigger your function according to
your meeting time, as shown in the following screenshot:
Finally, when the scheduler invokes the function, you will have reminder messages in
your Slack channel, as shown in the following screenshot:
Note
In order to complete this activity, you should configure Slack by following the Slack
Setup steps.
Slack Setup
Execute the following steps to configure Slack:
1. In the Slack workspace, click on your username and select Customize Slack.
2. Click on Configure apps in the open window.
3. Click on Browse the App Directory to add a new application from the directory.
4. Find Incoming WebHooks from the search box in App Directory.
5. Click on Add Configuration for the Incoming WebHooks application.
6. Fill in the configuration for the incoming webhook with your specific channel
name and icon.
7. Open your Slack workspace and channel. You will see an integration message.
Note
Detailed screenshots of the Slack setup steps can be found on page 376.
1. Create a function in GCF that posts a reminder message to your Slack channel through the incoming webhook. One possible implementation is as follows; the webhook URL is a placeholder that you should replace with the URL generated during the Slack setup steps:

package p

import (
	"bytes"
	"net/http"
)

// Reminder posts a stand-up reminder message to the Slack incoming webhook.
func Reminder(w http.ResponseWriter, r *http.Request) {
	// Placeholder: replace with your own incoming webhook URL.
	url := "https://hooks.slack.com/services/<your-webhook-path>"
	message := []byte(`{"text": "Time for the daily stand-up meeting!"}`)

	req, err := http.NewRequest("POST", url, bytes.NewBuffer(message))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")

	client := &http.Client{}
	_, err = client.Do(req)
	if err != nil {
		panic(err)
	}
}
2. Create a scheduler job in GCP with the trigger URL of the function and specify the
schedule based on your stand-up meeting times.
Check the Slack channel for the reminder message when the time defined in the schedule arrives.
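For instance, since stand-up meetings usually happen at a fixed time on workdays, a cron schedule such as 0 9 * * 1-5 would invoke the reminder function at 9:00 AM from Monday to Friday; adjust it to your own meeting time.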
3. Delete the scheduler job and the function from the cloud provider.
Note
The solution to this activity can be found on page 376.
Summary
In this chapter, we described the evolution of cloud technology offerings, including
how the cloud products have changed over the years and how responsibilities are
distributed among organizations, starting with IaaS and PaaS and, finally, FaaS.
Following that, criteria were presented for evaluating serverless cloud offerings.
Programming language support, function triggers, and the cost structure of serverless
products were listed so that we could compare the various cloud providers, that is,
AWS Lambda, Azure Functions, and GCF. In addition, we deployed a serverless function
to all three cloud providers. This showed you how cloud functions can be integrated
with other cloud services, such as the AWS API Gateway for REST API operations.
Furthermore, a parameterized function was deployed to Azure Functions to show how
we can process inputs from users or other systems. Finally, we deployed a scheduled
function to GCF to show integration with other cloud services. At the end of this
chapter, we implemented a real-life Slack reminder using serverless functions and
cloud schedulers.
In the next chapter, we will cover serverless frameworks and learn how to work with
them.
3. Introduction to Serverless Frameworks
Learning Objectives
• Create a real-life serverless application and run it on multiple cloud platforms in the
future
In this chapter, we will explain serverless frameworks, create our first serverless functions using
these frameworks, and deploy them to various cloud providers.
Introduction
Let's imagine that you are developing a complex application with many functions in
one cloud provider. It may not be feasible to move to another cloud provider, even if
the new one is cheaper, faster, or more secure. This situation of vendor dependency
is known as vendor lock-in in the industry, and it is a very critical decision factor in
the long run. Fortunately, serverless frameworks are a simple and efficient solution to
vendor lock-in.
In the previous chapter, all three major cloud providers and their serverless products
were discussed. These products were compared based on their programming language
support, trigger capabilities, and cost structure. However, there is still one unseen
critical difference between all three products: operations. Creating functions, deploying
them to cloud providers, and their management are all different for each cloud
provider. In other words, you cannot use the same function in AWS Lambda, Google
Cloud Functions, and Azure Functions. Various changes are required so that we can
fulfil the requirements of cloud providers and their runtime.
Serverless frameworks are open source, cloud-agnostic platforms for running serverless applications. The first difference between cloud providers' serverless products and serverless frameworks is that the frameworks are open source and public. They are free to install on the cloud or on on-premise systems and to operate on your own. The second characteristic is that serverless frameworks are cloud agnostic.
possible to run the same serverless functions on different cloud providers or your own
systems. In other words, the cloud provider where the functions will be executed is just
a configuration parameter in serverless frameworks. All cloud providers are equalized
behind a shared API so that cloud-agnostic functions can be developed and deployed by
serverless frameworks.
Cloud serverless platforms such as AWS Lambda increased the hype of serverless
architectures and empowered their adoption in the industry. In the previous chapter,
the evolution of cloud technology offerings over the years and significant cloud
serverless platforms were discussed in depth. In this chapter, we will discuss open
source serverless frameworks and talk about their featured characteristics and
functionalities. There are many popular and upcoming serverless frameworks on
the market. However, we will focus on two prominent frameworks with differences
in terms of priorities and architecture. In this chapter, a container-native serverless
framework, namely Fn, will be presented. Following that, a more comprehensive
framework with multiple cloud provider support, namely, the Serverless Framework,
will be discussed in depth. Although both frameworks create a cloud-agnostic and open
source environment for running serverless applications, their differences in terms of
implementation and developer experience will be illustrated.
Fn Framework
Fn was announced by Oracle at the JavaOne 2017 conference as an event-driven and open source Function-as-a-Service (FaaS) platform. The key characteristics of the
framework are as follows:
• Open source: All the source code of the Fn project is publicly available at https://github.com/fnproject/fn, and the project is hosted at https://fnproject.io. It has an active community on GitHub, with more than 3,300 commits and 1,100 releases, as shown in the following screenshot:
Note
Docker 17.10.0-ce or later should be installed and running on your computer
before you start the next exercise, since this is a prerequisite for Fn.
3. Check the client and server version by using the following command in your
Terminal:
fn version
The output should be as follows:
This output shows that both the client and server side are running and interacting
with each other.
4. Set the current context and update its registry value by using the following commands in your Terminal:
fn use context default
fn update context registry serverless
As the output indicates, the default context is set, and the registry is updated to
serverless.
5. Start the Fn dashboard by using the following command in your Terminal:
docker run -d --link fnserver:api -p 4000:4000 -e "FN_API_URL=http://api:8080" fnproject/ui
This command downloads the fnproject/ui image and starts it in detached mode. In addition, it links the fnserver container under the alias api and publishes port 4000, as shown in the following screenshot:
With this exercise, we have installed the Fn framework, along with its client, server, and
dashboard. Since Fn is a cloud-agnostic framework, it can be installed on any cloud
or on-premise system with the illustrated steps. We will continue discussing the Fn
framework in terms of how functions are configured and deployed.
The Fn framework is designed to work with applications, where each application is a
group of functions with their own route mappings. For instance, let's assume you have
grouped your functions into a folder, as follows:
- app.yaml
- func.yaml
- func.go
- go.mod
- products/
  - func.yaml
  - func.js
- suppliers/
  - func.yaml
  - func.rb
In each folder, there is a func.yaml file that defines the function with the corresponding
implementation in Ruby, Node.js, or any other supported language. In addition, there is
an app.yaml file in the root folder to define the application.
Let's start by checking the content of app.yaml:
name: serverless-app
app.yaml is used to define the root of the serverless application and includes the name
of the application. There are also three additional files for the function in the root
folder:
• func.go: Go implementation code
• go.mod: Go dependency definitions
• func.yaml: Function definition and trigger information
For a function with an HTTP trigger and Go runtime, the following func.yaml file is
defined:
name: serverless-app
version: 0.0.1
runtime: go
entrypoint: ./func
triggers:
- name: serverless-app
  type: http
  source: /serverless-app
When you deploy all of these functions to Fn, they will be accessible via the following
URLs:
http://serverless-kubernetes.io/ -> root function
http://serverless-kubernetes.io/products -> function in products/ directory
http://serverless-kubernetes.io/suppliers -> function in suppliers/ directory
In the following exercise, the content of the app.yaml and func.yaml files, as well as their
function implementation, will be illustrated with a real-life example.
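1. Run the following commands in your Terminal to create the application folder and
its definition file; a minimal sketch, assuming a Unix-like shell, is:
mkdir serverless-app
cd serverless-app
echo "name: serverless-app" > app.yaml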
These commands create a folder called serverless-app and then change the working
directory into this folder. Finally, a file called app.yaml is created with the
content name: serverless-app, which is used to define the root of the application.
2. Run the following command in your Terminal to create a root function that's
available at the "/" of the application URL:
fn init --runtime ruby --trigger http
This command will create a Ruby function with an HTTP trigger at the root of the
application, as shown in the following screenshot:
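3. Run the following command in your Terminal to create a second function in a
subfolder; assuming the Go runtime that appears later in this exercise, the command
would be:
fn init --runtime go --trigger http hello-world
This command creates a hello-world folder containing a Go function with an HTTP
trigger, which serves as the application's second endpoint.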
4. Check the directory of the application by using the following command in your
Terminal:
ls -l ./*
This command lists the files in the root and child folders, as shown in the following
screenshot:
As expected, there is a Ruby function in the root folder with three files: func.rb for
the implementation, func.yaml for the function definition, and Gemfile to define
Ruby function dependencies.
Similarly, there is a Go function in the hello-world folder with three files: func.go
for the implementation, func.yaml for the function definition, and go.mod for Go
dependencies.
5. Deploy the entire application by using the following command in your Terminal:
fn deploy --create-app --all --local
This command deploys all the functions by creating the app and using a local
development environment, as shown in the following screenshot:
Firstly, the function for serverless-app is built, and then the function and trigger
are created. Similarly, the hello-world function is built and deployed with the
corresponding function and trigger.
6. List the triggers of the application with the following command and copy the
Endpoints for serverless-app-trigger and hello-world-trigger:
fn list triggers serverless-app
This command lists the triggers of serverless-app, along with function, type,
source, and endpoint information, as shown in the following screenshot:
Note
For the curl commands, do not forget to use the endpoints that we copied in Step 6.
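7. Invoke the serverless-app trigger with a payload by using curl in your Terminal;
assuming the JSON payload format of the generated Ruby function, the call would be:
curl -d '{"name":"Ece"}' http://localhost:8080/t/serverless-app/serverless-app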
This command will invoke the serverless-app trigger located at the root of the
application. Since it was triggered with the name payload, it responded with a
personal message: Hello Ece!:
curl http://localhost:8080/t/serverless-app/hello-world
This command will invoke the hello-world trigger without any payload and, as
expected, it responded with Hello World, as shown in the following screenshot:
8. Check the application and function statistics from the Fn Dashboard by opening
http://localhost:4000 in your browser.
On the home screen, your applications and their overall statistics can be seen,
along with auto-refreshed charts, as shown in the following screenshot:
Click on serverless-app from the applications list to view more information about
the functions of the application, as shown in the following screenshot:
The Serverless Framework
The Serverless Framework is open source, and its source code is available on GitHub:
https://github.com/serverless/serverless. It is a very popular repository with more
than 31,000 stars, as shown in the following screenshot:
These four characteristics of the Serverless Framework make it the most well-known
framework for creating serverless applications in the cloud. In addition, the framework
focuses on the management of the complete life cycle of serverless applications:
• Develop: It is possible to develop apps locally and reuse open source plugins via
the framework CLI.
• Deploy: The Serverless Framework can deploy to multiple cloud platforms and roll
out and roll back versions from development to production.
• Test: The framework supports testing the functions out of the box by using the
command-line client functions.
• Secure: The framework handles secrets for running the functions and cloud-
specific authentication keys for deployments.
• Monitor: The metrics and logs of the serverless applications are available with the
serverless runtime and client tools.
Note
The Serverless Framework can be downloaded and installed on a local computer
with npm. A Docker container that includes the Serverless Framework installation
will be used in the following exercise so that we have a fast and reproducible setup.
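If you prefer a local setup, the framework is distributed as an npm package and can be
installed globally and verified as follows:
npm install -g serverless
serverless --version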
In the following exercise, the hello-world function will be deployed to AWS Lambda
using the Serverless Framework. In order to complete this exercise, you need to have
an active Amazon Web Services account. You can create an account at
https://aws.amazon.com/.
Press Y to create a new project and choose AWS Node.js from the dropdown, as
shown in the following screenshot:
4. Set the name of the project to hello-world and press Enter. The output is as
follows:
5. Press Y for the AWS credential setup question, and then press Y again for the Do
you have an AWS account? question. The output will be as follows:
You now have a URL for creating a serverless user. Copy and save the URL; we'll
need it later.
6. Open the URL from Step 5 in your browser to start adding a user in the AWS
console. The URL will open the Add user screen with predefined selections. Click
Next: Permissions at the end of the screen, as shown in the following screenshot:
8. If you want to tag your users, you can add optional tags in this view. Click Next:
Review, as shown in the following screenshot:
9. This view shows the summary of the new user. Click Create User, as shown in the
following screenshot:
You will be redirected to a success page with an Access Key ID and secret, as
shown in the following screenshot:
10. Copy the key ID and secret access key so that you can use them in the following
steps of this exercise and in the activity for this chapter. You need to click Show to
reveal the secret access key.
11. Return to your Terminal and press Enter, then provide the key ID and secret
access key, as shown in the following screenshot:
12. Press Y for the Serverless account enable question and select register from the
dropdown, as shown in the following screenshot:
13. Write your email and a password to create a Serverless Framework account, as
shown in the following screenshot:
14. Run the following commands to change the directory and deploy the function:
cd hello-world
serverless deploy -v
These commands will make the Serverless Framework deploy the function into
AWS, as shown in the following screenshot:
Note
The output logs start by packaging the service and creating AWS resources for the
source code, artifacts, and functions. After all the resources have been created, the
Service Information section will provide a summary of the functions and URLs.
At the end of the screen, you will find the Serverless Dashboard URL for the
deployed function, as shown in the following screenshot:
Copy the dashboard URL so that you can check the function metrics in the
upcoming steps.
15. Invoke the function by using the following command in your Terminal:
serverless invoke --function hello
This command invokes the deployed function and prints out the response, as
shown in the following screenshot:
As the output shows, statusCode is 200, and the body of the response indicates that
the function has responded successfully.
16. Open the Serverless Dashboard URL you copied at the end of Step 14 in your
browser, as shown in the following screenshot:
17. Log in with the email and password you created in Step 13.
You will be redirected to the application list. Expand hello-world-app and click on
the successful deployment line, as shown in the following screenshot:
In the function view, all the runtime information, including API endpoints,
variables, alerts, and metrics, are available. Scroll down to see the number of
invocations. The output should be as follows:
Since we have only invoked the function once, you will only see 1 in the charts.
18. Return to your Terminal and delete the function with the following command:
serverless remove
This command will remove the deployed function and all its dependencies, as
shown in the following screenshot:
In this exercise, we have created, configured, and deployed a serverless function using
the Serverless Framework. Furthermore, the function was invoked via the CLI, and its
metrics were checked from the Serverless Dashboard. The Serverless Framework creates
a comprehensive abstraction over cloud providers, so that the cloud provider is reduced
to a set of credentials passed to the platform. In other words, where to deploy is just a
matter of configuration with the help of serverless frameworks.
In the following activity, a real-life serverless daily weather application will be
developed. You will create a serverless framework application with an invocation
schedule and deploy it to a cloud provider. In addition, the weather status messages will
be sent to a cloud-based collaboration tool known as Slack.
Note
In order to complete the following activity, you need to be able to access a Slack
workspace. You can use your existing Slack workspace or create a new one for free
at https://slack.com/create.
Finally, when the scheduler invokes the function, or when you invoke it manually, you
will get messages regarding the current weather status in your Slack channel:
Note
In order to complete this activity, you should configure Slack by following the Slack
setup steps.
Slack Setup
Execute the following steps to configure Slack:
1. In your Slack workspace, click your username and select Customize Slack.
2. Click Configure apps in the opened window.
3. Click on Browse the App Directory to add a new application from the directory.
Note
Detailed screenshots of the Slack setup steps can be found on page 387.
Note
The solution to this activity can be found on page 387.
Summary
In this chapter, we provided an overview of serverless frameworks by discussing the
differences between the serverless products of cloud providers. Following that, one
container-native and one cloud-native serverless framework were discussed in depth.
Firstly, the Fn framework was discussed, which is an open source, container-native, and
cloud-agnostic platform. Secondly, the Serverless Framework was presented, which is
a more cloud-focused and comprehensive framework. Furthermore, both frameworks
were installed and configured locally. Serverless applications were created, deployed,
and run in both serverless frameworks. The functions were invoked with the capabilities
of the serverless frameworks, and the necessary metrics were checked for further analysis.
At the end of this chapter, a real-life, daily weather Slack bot was implemented as a
cloud-agnostic, explicitly defined application using serverless frameworks. Serverless
frameworks are essential for the serverless development world with their cloud-
agnostic and developer-friendly characteristics.
Kubernetes Deep Dive
4
Learning Objectives
In this chapter, we will explain the basics of the Kubernetes architecture, the methods of
accessing the Kubernetes API, and fundamental Kubernetes resources. In addition to that, we
will deploy a real-life application into Kubernetes.
Introduction to Kubernetes
In the previous chapter, we studied serverless frameworks, created serverless
applications using these frameworks, and deployed these applications to the major
cloud providers.
As we have seen in the previous chapters, Kubernetes and serverless architectures
started to gain traction in the industry at the same time. Kubernetes gained a high
level of adoption and became the de facto container management system with its
design principles based on scalability, high availability, and portability. For serverless
applications, Kubernetes provides two essential benefits: removal of vendor lock-in
and reuse of services.
Kubernetes creates an infrastructure layer of abstraction to remove vendor lock-in.
Vendor lock-in is a situation where transition from one service provider to another
is very difficult or even infeasible. In the previous chapter, we studied how serverless
frameworks make it easy to develop cloud-agnostic serverless applications. Let's
assume you are running your serverless framework on an AWS EC2 instance and
want to move to Google Cloud. Although your serverless framework creates a layer
between the cloud provider and serverless applications, you are still deeply attached
to the cloud provider for the infrastructure. Kubernetes breaks this connection by
creating an abstraction between the infrastructure and the cloud provider. In other
words, serverless frameworks running on Kubernetes are unaware of the underlying
infrastructure. If your serverless framework runs on Kubernetes in AWS, it is expected
to run on Google Cloud Platform (GCP) or Azure.
As the de facto container management system, Kubernetes manages most microservices
applications in the cloud and in on-premise systems. Let's assume you have already
converted your big monolith application to cloud-native microservices and you're
running them on Kubernetes. And now you've started developing serverless applications
or turning some of your microservices into serverless nanoservices. At this stage, your
serverless applications will need to access the data and other services. If you can run
your serverless applications in your Kubernetes clusters, you will have the chance to
reuse the services and be close to your data. Besides, it will be easier to manage and
operate both microservices and serverless applications.
As a solution to vendor lock-in, and for potential reuse of data and services, it is
crucial to learn how to run serverless architectures on Kubernetes. In this chapter,
a Kubernetes recap is presented to introduce the origin and design of Kubernetes.
Following that, we will install a local Kubernetes cluster, and you will be able to access
the cluster by using a dashboard or a client tool such as kubectl. In addition to that, we
will discuss the building blocks of Kubernetes applications, and finally, we'll deploy a
real-life application to the cluster.
Servers with the node role are responsible for running the workload in Kubernetes.
Therefore, there are two essential Kubernetes components required in every node:
• kubelet: kubelet is the management gateway of the control plane in the nodes.
kubelet communicates with the API server and implements actions needed on the
nodes. For instance, when a new workload is assigned to a node, kubelet creates
the container by interacting with the container runtime, such as Docker.
• kube-proxy: Containers run on separate server nodes, but they interact with each
other as if they were running in a unified networking setup. kube-proxy makes it
possible for containers to communicate, although they are running on different nodes.
The control plane and the roles, such as master and node, are logical groupings of
components. However, it is recommended to have a highly available control plane with
multiple master role servers. Besides, servers with node roles are connected to the
control plane to create a scalable and cloud-native environment. The relationship and
interaction of the control plane and the master and node servers are presented in the
following figure:
Figure 4.2: The control plane and the master and node servers in a Kubernetes cluster
In the following exercise, a Kubernetes cluster will be created locally, and Kubernetes
components will be checked. Kubernetes clusters are sets of servers with master or
worker nodes. On these nodes, both control plane components and user applications
are running in a scalable and highly available way. With the help of local Kubernetes
cluster tools, it is possible to create single-node clusters for development and testing.
minikube is the officially supported and maintained local Kubernetes solution, and it will
be used in the following exercise.
Note
You will use minikube in the following exercise as the official local Kubernetes
solution, and it runs the Kubernetes components on a hypervisor. Hence, you must
install a hypervisor such as VirtualBox, Parallels, VMware Fusion, HyperKit,
or VMware. Refer to this link for more information:
https://kubernetes.io/docs/tasks/tools/install-minikube/#install-a-hypervisor
5. Check for the four control-plane components with the following command:
pgrep -l etcd && pgrep -l kube-apiserver && pgrep -l kube-scheduler && pgrep -l controller
This command lists the processes matching the mentioned command names. There are
a total of four lines, corresponding to each control-plane component and its process
ID, as depicted in the following figure:
Exercise 11: Accessing Kubernetes Clusters Using the Client Tool: kubectl
In this exercise, we aim to access the Kubernetes API using kubectl and explore its
capabilities.
To complete the exercise, we need to ensure the following steps are executed:
1. Download the kubectl executable by running these commands in the Terminal:
# Linux
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl
# MacOS
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
These commands download the binary of kubectl, make it executable, and move it
into the bin folder for Terminal access.
2. Configure kubectl to connect to the minikube cluster:
kubectl config use-context minikube
This command configures kubectl to use the minikube context, which is the set
of credentials used to connect to the minikube cluster, as shown in the following
figure:
4. Get more information about the minikube node with the following command:
kubectl describe node minikube
This command lists all the information about the node, starting with its metadata,
such as Roles, Labels, and Annotations. The role of this node is specified as master
in the Roles section, as shown in the following figure:
Following the metadata, Conditions lists the health status of the node. It is possible
to check whether the node has sufficient memory, disk, and process IDs in tabular
form, as shown in the following figure.
Then, available and allocatable capacity and system information are listed, as
shown in the following figure:
Finally, the running workload on the node and allocated resources are listed, as
shown in the following figure:
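5. List all the resource types available in the cluster; the command assumed here is
kubectl's built-in discovery command:
kubectl api-resources -o name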
This command lists all the resources supported by the Kubernetes cluster. The length
of the list indicates the power and comprehensiveness of Kubernetes in the sense of
application management. In this exercise, the official Kubernetes client tool was
installed, configured, and explored. In the following section, the core building block
resources from the resource list will be presented.
Kubernetes Resources
Kubernetes comes with a rich set of resources to define and manage cloud-
native applications as containers. In the Kubernetes API, every container, secret,
configuration, or custom definition is defined as a resource. The control plane
manages these resources while the node components try to achieve the desired state
of the applications. The desired state could be running 10 instances of the application
or mounting disk volumes to database applications. The control plane and node
components work in harmony to make all resources in the cluster reach their desired
state.
In this section, we will study the fundamental Kubernetes resources used to run
serverless applications.
Pod
The pod is the building block resource for computation in Kubernetes. A pod consists
of containers that are scheduled onto the same node to run as a single application.
Containers in the same pod share resources, such as network and memory resources. In
addition, the containers in the pod share life cycle events such as scaling up or down. A
pod can be defined with an ubuntu image and the echo command as follows:
apiVersion: v1
kind: Pod
metadata:
  name: echo
spec:
  containers:
  - name: main
    image: ubuntu
    command: ['sh', '-c', 'echo Serverless World! && sleep 3600']
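Assuming the definition above is saved as echo-pod.yaml, the pod can be created and
verified with kubectl; the logs command prints Serverless World! once the container
has started:
kubectl apply -f echo-pod.yaml
kubectl logs echo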
When the echo pod is created in Kubernetes API, the scheduler will assign it to an
available node. Then the kubelet in the corresponding node will create a container
and attach networking to it. Finally, the container will start to run the echo and sleep
commands. Pods are the essential Kubernetes resource for creating applications,
and Kubernetes uses them as building blocks for more complex resources. In the
following resources, the pod will be encapsulated to create more complex cloud-native
applications.
Deployment
Deployments are the most commonly used Kubernetes resource to manage highly
available applications. Deployments enhance pods by making it possible to scale up,
scale down, or roll out new versions. The deployment definition looks similar to a pod
with two important additions: labels and replicas.
Consider the following code:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
The deployment named webserver defines five replicas of the application running
with the label app:server. In the template section, the application is defined with the
exact same label and one nginx container. The deployment controller in the control
plane ensures that five instances of this application are running inside the cluster.
Let's assume you have three nodes, A, B, and C, with one, two, and two instances of
webserver application running, respectively. If node C goes offline, the deployment
controller will ensure that the two lost instances will be recreated in nodes A and B.
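Scaling and rollouts are single commands against a deployment. For example, assuming
the webserver deployment above, the following commands change the replica count and
then roll out a new container image version:
kubectl scale deployment webserver --replicas=10
kubectl set image deployment webserver nginx=nginx:1.9.1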
Kubernetes ensures that scalable and highly available applications are running reliably
as deployments. In the following section, Kubernetes resources for stateful applications
such as databases will be presented.
StatefulSet
Kubernetes supports running both stateless ephemeral applications and stateful
applications. In other words, it is possible to run database applications or disk-oriented
applications in a scalable way inside your clusters. The StatefulSet definition is similar
to a deployment, with volume-related additions.
Consider the following code snippet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
The mysql StatefulSet creates a MySQL database with a 1 GB data volume. The
volume is created by Kubernetes and attached to the container at /var/lib/mysql. With
the StatefulSet controllers, it is possible to create applications that need disk access
in a scalable and reliable way. In the following section, we'll discuss how to connect
applications in a Kubernetes cluster.
Service
In Kubernetes, multiple applications run in the same cluster and connect to each
other. Since each application has multiple pods running on different nodes, it is not
straightforward to connect applications. In Kubernetes, Service is the resource used
to define a set of pods, and you access them by using the name of the Service. Service
resources are defined using the labels of the pods.
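For instance, a minimal Service for the mysql StatefulSet above could be defined as
follows; the name my-database is an assumption matching the exercise later in this
chapter:
apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306
With this definition, other applications in the cluster can reach the database by using
the hostname my-database instead of individual pod IP addresses.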
Job and CronJob
Kubernetes also provides the Job resource for running one-time tasks as containers.
An echo Job can be defined as follows:
apiVersion: batch/v1
kind: Job
metadata:
  name: echo
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: echo
        image: busybox
        args:
        - /bin/sh
        - -c
        - echo Hello from the echo Job!
When the echo Job is created, Kubernetes will create a pod, schedule it, and run it.
When the container terminates after the echo command, Kubernetes will not try to
restart it or keep it running.
In addition to one-time tasks, it is possible to run scheduled jobs using the CronJob
resource, as shown in the following code snippet:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hourly-echo
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo It is time to say echo!
With the hourly-echo CronJob, an additional schedule parameter is provided. With the
schedule of "0 * * * *", Kubernetes will create a new Job instance of this CronJob
and run it every hour. Jobs and CronJobs are Kubernetes-native ways of handling
manual and automated tasks required for your applications. In the following exercise,
Kubernetes resources will be explored using kubectl and a local Kubernetes cluster.
1. Create a file named mysql.yaml, reusing the mysql StatefulSet definition from the
previous section. The end of the file, including the port and volume configuration,
looks as follows:
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
Note
mysql.yaml is available on GitHub at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson04/Exercise12/mysql.yaml.
2. Deploy the StatefulSet MySQL database with the following command in your
Terminal:
kubectl apply -f mysql.yaml
This command submits the mysql.yaml file, which includes a StatefulSet called
mysql and a 1 GB volume claim. The output will look like this:
Note
If the pod status is Pending, wait a couple of minutes until it becomes Running
before continuing to the next step.
Note
service.yaml is available on GitHub at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson04/Exercise12/service.yaml.
6. Deploy the my-database service with the following command in your Terminal:
kubectl apply -f service.yaml
This command submits the Service named my-database to group pods with the
label app:mysql:
Note
create-table.yaml is available on GitHub at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson04/Exercise12/create-table.yaml.
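9. Deploy the create-table Job with the following command in your Terminal:
kubectl apply -f create-table.yaml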
This command submits the Job named create-table and within a couple of
minutes, the pod will be created to run the CREATE TABLE command, as shown in
the following figure:
Note
If the pod status is Pending or Running, wait a couple of minutes until it
becomes Completed before continuing to the next step.
10. Run the following command to check the tables in the MySQL database:
kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never \
-- mysql -h my-database -u user -ppassword db -e "show tables;"
This command runs a temporary instance of the mysql:5.7 image and runs the
mysql command, as shown in the following figure:
In the MySQL database, a table with the name messages is available, as shown
in the preceding output. This shows that the MySQL StatefulSet is up and running
the database successfully. In addition, the create-table Job has created a pod,
connected to the database using the service, and created the table.
11. Clean the resources by running the following command:
kubectl delete -f create-table.yaml,service.yaml,mysql.yaml
You should see the output shown in the following figure:
In the following activity, the database will be filled with the information retrieved by
automated tasks in Kubernetes.
Note
You will need a Docker Hub account to push the images into the registry in the
following activity. Docker Hub is a free service, and you can sign up to it at
https://hub.docker.com/signup.
Note
In order to complete the following activity, you need to have a CurrencyLayer API
access key. It is a free currency and exchange rate service, and you can sign up to it
on the official website.
Finally, with each run of the Kubernetes Job, you will have a real-time gold price in the
database:
func main() {
    db, err := sql.Open("mysql", ...
    r, err := http.Get(fmt.Sprintf("http://apilayer.net/api/...
    stmt, err := db.Prepare("INSERT INTO GoldPrices(price) VALUES(?)")
    _, err = stmt.Exec(target.Quotes.USDXAU)
    log.Printf("Successfully inserted the price: %v", target.Quotes.USDXAU)
}
In the main function, first you need to connect to the database, and then retrieve
the price from CurrencyLayer. Then you need to create a SQL statement and
execute it on the database connection. The complete code for main.go can be found
here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson04/Activity4/main.go.
2. Build the application as a Docker container.
3. Push the Docker container to the Docker registry.
4. Deploy the MySQL database into the Kubernetes cluster.
5. Deploy a Kubernetes service to expose the MySQL database.
6. Deploy a CronJob to run every minute.
7. Wait for a couple of minutes and check the instances of CronJob.
8. Connect to the database and check for the entries.
9. Clean the database and automated tasks from Kubernetes.
Note
The solution of the activity can be found on page 403.
Summary
In this chapter, we first described the origins and characteristics of Kubernetes.
Following that, we studied the Kubernetes design and components with the details of
master and node components. Then, we installed a local single-node Kubernetes cluster
and checked the Kubernetes components. Following the cluster setup, we studied the
official Kubernetes client tool, kubectl, which is used to connect to a cluster. We also
saw how kubectl is used to manage clusters and the life cycle of applications. Finally, we
discussed the fundamental Kubernetes resources for serverless applications, including
pods, deployments, and StatefulSets. In addition to that, we also studied how to
connect applications in a cluster using services. Kubernetes resources for one-time and
automated tasks were presented using Jobs and CronJobs. At the end of this chapter, we
developed a real-time data collection function using Kubernetes-native resources.
In the next chapter, we will be studying the features of Kubernetes clusters and using a
popular cloud platform to deploy them.
Production-Ready Kubernetes Clusters
5
Learning Objectives
In this chapter, we will learn about the key considerations for setting up Kubernetes. Following
that, we will study the different Kubernetes platform options. Then, we will move on to creating
a production-ready Kubernetes cluster on a cloud platform and performing administrative tasks on it.
Introduction
In the previous chapter, we created Kubernetes clusters for the development
environment and installed applications into it. In this chapter, the focus will be
on production-ready Kubernetes clusters and how to administer them for better
availability, reliability, and cost optimization.
Kubernetes is the de facto system for managing microservices running as containers in
the cloud. It is widely adopted in the industry by both start-ups and large enterprises
for running various kinds of applications, including data analysis tools, serverless
apps, and databases. Scalability, high availability, reliability, and security are the key
features of Kubernetes that enable its adoption. Let's assume that you have decided
to use Kubernetes, and hence you need a reliable and observable cluster setup for
development and production. There are critical considerations that depend on your
requirements, budget, and team before choosing a Kubernetes provider and how to
operate the applications. There are four key considerations to analyze:
• Service Quality: Kubernetes runs microservices in a highly available and reliable
way. However, it is critical to install and operate Kubernetes reliably and robustly.
Let's assume you have installed the Kubernetes control plane into a single node
in the cluster, and it was disconnected due to a network problem. Since you have
lost the Kubernetes API server connectivity, you will not be able to check the
status of your applications and operate them. Therefore, it is essential to evaluate
the service quality of the Kubernetes cluster you need for your production
environment.
• Monitoring: Kubernetes runs containers that are distributed to the nodes and
enables checking their logs and statuses. Let's assume that you rolled out a new
version of your application yesterday. Today, you want to check how the latest
version is working for any errors, crashes, and response time. Therefore, you need
a monitoring system integrated into your Kubernetes cluster to capture logs and
metrics. The collected data is essential for troubleshooting and diagnosis in a
production-ready cluster.
• Security: Kubernetes components and client tools work in a secure way to manage
the applications running in the cluster. However, you need to have specific roles
and authorization levels defined for your organization to operate Kubernetes
clusters securely. Hence, it is essential to choose a Kubernetes provider platform
that you can securely connect to and share with your customers and colleagues.
In order to decide how to install and operate your Kubernetes clusters, these
considerations will be discussed for the Kubernetes platform options in this chapter.
Kubernetes Setup
Kubernetes is a flexible system that can be installed on various platforms from
Raspberry Pi to high-end servers in data centers. Each platform comes with its
advantages and disadvantages in terms of service quality, monitoring, security, and
operations. Kubernetes manages applications as containers and creates an abstraction
layer on the infrastructure. Let's imagine that you set up Kubernetes on the three old
servers in your basement and then install the Proof of Concept (PoC) of your new
project. When the project becomes successful, you want to scale your application and
move to a cloud provider such as Amazon Web Services (AWS). Since your application
is designed to run on Kubernetes and does not depend on the infrastructure, porting to
another Kubernetes installation is straightforward.
In the previous chapter, we studied the development environment setup using minikube,
the official method of Kubernetes. In this section, production-level Kubernetes
platforms will be presented. The Kubernetes platforms for production can be grouped
into three categories, with the following abstraction layers:
Managed Platforms
Managed platforms provide Kubernetes as a Service, and all underlying services run
under the control of cloud providers. It is easy to set up and scale these clusters since
the cloud providers handle all infrastructural operations. Leading cloud providers such
as GCP, AWS, and Microsoft Azure have managed Kubernetes solutions, each intended
to integrate with other cloud services such as container registries, identity
services, and storage services. The most popular managed Kubernetes solutions are as
follows:
• Google Kubernetes Engine (GKE): GKE is the most mature managed service on the
market, and Google provides it as a part of GCP.
• Azure Kubernetes Service (AKS): AKS is the Kubernetes solution provided by
Microsoft as a part of the Azure platform.
• Amazon Elastic Container Service for Kubernetes (EKS): EKS is the managed
Kubernetes of AWS.
Turnkey Platforms
Turnkey solutions focus on installing and operating the Kubernetes control plane in
the cloud or in on-premise systems. Users of turnkey platforms provide information
about the infrastructure, and the turnkey platforms handle the Kubernetes setup.
Turnkey platforms offer better flexibility in setup configurations and infrastructure
options. These platforms are mostly designed by organizations with rich experience in
Kubernetes and cloud systems such as Heptio or CoreOS.
If turnkey platforms are installed on cloud providers such as AWS, the infrastructure
is managed by the cloud provider, and the turnkey platform manages Kubernetes.
However, when the turnkey platform is installed on on-premise systems, in-house
teams should handle the infrastructure operations.
Custom Platforms
Custom installation of Kubernetes is possible if your use case does not fit into any
managed or turnkey solutions. For instance, you can use Gardener (https://gardener.cloud)
or OpenShift (https://www.openshift.com) to install Kubernetes clusters on
cloud providers, on-premise data centers, on-premise virtual machines (VMs), or bare-
metal servers. While the custom platforms offer more flexible Kubernetes installations,
they also come with special operations and maintenance efforts.
In the following sections, we will create a managed Kubernetes cluster in GKE and
administer it. GKE offers the most mature platform and the superior customer
experience on the market.
If your application requires scalability with higher usage and you need 10 servers
instead of 2, the cost will also scale linearly, since 10 instances running 730 hours
each add up to 7,300 hours:
7,300 total hours per month
Instance type: n1-standard-1
GCE Instance Cost: USD 242.72
Kubernetes Engine Cost: USD 0.00
Estimated Component Cost: USD 242.72 per 1 month
This calculation shows that GKE does not charge for the Kubernetes control plane and
provides a reliable, scalable, and robust Kubernetes API for every cluster. In addition,
the cost linearly increases for scaling clusters, which makes it easier to plan and
operate Kubernetes clusters.
In the following exercise, you will create a managed Kubernetes cluster in GKE and
connect to it.
Note
In order to complete this exercise, you need to have an active GCP account. You
can create an account on its official website: https://console.cloud.google.com/start.
To complete the exercise, we need to ensure the following steps are executed:
1. Click Kubernetes Engine in the left menu under Compute on the Google Cloud
Platform home page, as shown in the following figure:
2. Click Create Cluster on the Clusters page, as shown in the following figure:
3. Select Your first cluster from the Cluster templates list on the left and enter
serverless as the name. Click Create at the end of the page, as shown in the following figure:
4. Wait a couple of minutes until the cluster icon becomes green and then click the
Connect button, as you can see in the following figure:
5. Click Run in Cloud Shell in the Connect to the cluster window, as shown in the
following figure:
6. Wait until the cloud shell is open and available and press Enter when the command
is shown, as you can see in the following figure:
The output shows that the authentication data for the cluster is fetched, and the
kubeconfig entry is ready to use.
7. Check the nodes with the following command in the cloud shell:
kubectl get nodes
Since the cluster is created with a single node pool of one node, there is only one
node connected to the cluster, as you can see in the following figure:
8. Check for the pods running in the cluster with the following command in the
cloud shell:
kubectl get pods --all-namespaces
Since GKE manages the control plane, there are no pods for api-server, etcd, or
scheduler in the kube-system namespace. There are only networking and metrics
pods running in the cluster, as shown in the following screenshot:
With this exercise, you have created a production-ready Kubernetes cluster on GKE.
Within a couple of minutes, GKE created a managed Kubernetes control plane and
connected the servers to the cluster. In the following sections, administrating the
clusters for production environments will be discussed, and the Kubernetes cluster
from this exercise will be expanded.
To successfully complete the exercise, we need to ensure the following steps are
executed:
1. Install nginx in the cluster by running the following command in the cloud shell:
kubectl create deployment workload --image=nginx
This command creates a deployment named workload from the nginx image, as
depicted in the following figure:
4. Enable autoscaling for the node pool of the cluster using the following command:
gcloud container clusters update serverless --enable-autoscaling \
--min-nodes 1 --max-nodes 10 --zone us-central1-a \
--node-pool pool-1
Note
Change the zone parameter if your cluster is running in another zone.
This command enables autoscaling for the Kubernetes cluster with a minimum of 1
and a maximum of 10 nodes, as shown in the following figure:
This command can take a couple of minutes to create the required resources with
the Updating serverless... prompt.
5. Wait a couple of minutes and check for the number of nodes by using the
following command:
kubectl get nodes
With autoscaling enabled, GKE ensures that there are enough nodes to run the
workload in the cluster. The node pool is scaled up to four nodes, as shown in the
following figure:
8. Disable autoscaling for the node pool of the cluster by using the following
command:
gcloud container clusters update serverless --no-enable-autoscaling \
--node-pool pool-1 --zone us-central1-a
Note
Change the zone parameter if your cluster is running in another zone.
In this exercise, we saw the GKE cluster autoscaler in action. When the autoscaler
is enabled, it increases the number of servers when the cluster is out of capacity for
the current workload. Although it seems straightforward, it is a compelling feature
of Kubernetes platforms. It removes the burden of manually checking your cluster
utilization and taking action. It is even more critical for serverless applications
where user demand is highly variable.
Let's assume you have deployed a serverless function to your Kubernetes cluster with
autoscaling enabled. The cluster autoscaler will automatically increase the number of
nodes when your functions are called frequently and then delete the nodes when your
functions are not invoked. Therefore it is essential to check the autoscaling capability of
the Kubernetes platform for serverless applications. In the following section, migrating
applications in production environments will be discussed, as it is another important
cluster administration task.
For instance, if you only want to run database instances on your nodes with SSDs, you
first need to taint those nodes:
kubectl taint nodes disk-node-1 ssd=true:NoSchedule
With this command, disk-node-1 will only accept pods that have the following
tolerations in their definition:
tolerations:
- key: "ssd"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
Taints and tolerations work in harmony to assign pods to specific nodes as a part of the
Kubernetes scheduler. In addition, Kubernetes supports securely removing the servers
from the cluster by using the kubectl drain command. It is particularly helpful if you
want to take some nodes for maintenance or retirement. In the following exercise, an
application running in the Kubernetes cluster will be migrated to a particular set of new
nodes.
3. Check the number of running pods and their nodes with the following command:
kubectl get pods -o wide
All 10 replicas of the deployment are running successfully on the 4 nodes, as you
can see in the following figure:
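4. Create a new node pool with high-memory servers by using the following command
in your Terminal; the flags are assumed from the pool described below:
gcloud container node-pools create high-memory-pool --cluster serverless \
--machine-type n1-highmem-2 --num-nodes 2 --zone us-central1-a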
Note
Change the zone parameter if your cluster is running in another zone.
This command creates a new node pool named high-memory-pool in the serverless
cluster with the machine type n1-highmem-2 and two servers, as you can see in the
following figure:
This command can take a couple of minutes to create the required resources with
the Creating node pool high-memory-pool prompt.
5. Wait for a couple of minutes and check the nodes in the cluster:
kubectl get nodes
This command lists the nodes in the cluster, and we expect to see two extra high-
memory nodes, as shown in the following figure:
6. Drain the old nodes so that Kubernetes will migrate applications to new nodes:
kubectl drain -l cloud.google.com/gke-nodepool=pool-1
This command removes the workloads from all nodes with the label cloud.google.
com/gke-nodepool=pool-1, as shown in the following figure:
7. Check the running pods and their nodes with the following command:
kubectl get pods -o wide
All 10 replicas of the deployment are running successfully on the new high-memory
node, as shown in the following figure:
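8. Delete the old node pool by using the following command in your Terminal,
assuming the pool and zone names used earlier in this exercise:
gcloud container node-pools delete pool-1 --cluster serverless --zone us-central1-a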
Note
Change the zone parameter if your cluster is running in another zone.
This command deletes the old node pool, which is not being used, as you can see
in the following figure:
In this exercise, we have migrated the running application to new nodes with better
technical specs. Using the Kubernetes primitives and GKE node pools, it is possible to
migrate applications to a particular set of nodes without downtime. In the following
activity, you will use autoscaling and Kubernetes taints to run serverless functions while
minimizing the cost.
At the end of the activity, you will have functions connecting to the backend instances,
as shown in the following figure:
Backend instances will run on high-memory nodes and function instances will run on
preemptible servers, as shown in the following figure:
Note
In order to complete the activity, you should use the cluster from Exercise 15 with
backend deployments running.
Note
The solution to the activity can be found on page 412.
Summary
In this chapter, we first described the four key considerations to analyze the
requirements for the Kubernetes cluster setup. Then we studied the three groups of
Kubernetes platforms: managed, turnkey, and custom. Each Kubernetes platform has
been explained, along with their responsibility levels on infrastructure, Kubernetes,
and applications. Following that, we created a production-ready Kubernetes cluster on
GKE. Since Kubernetes is designed to run scalable applications, we studied how to deal
with increasing or decreasing workload by autoscaling. Furthermore, we also looked
at application migration without downtime in production clusters to illustrate how
to move your applications to the servers with higher memory. Finally, we performed
autoscaling and migration activities with a serverless function running in a production
cluster to minimize the costs. Kubernetes and serverless applications work together to
create reliable, robust, and scalable future-proof environments. Therefore, it is essential
to know how to install and operate Kubernetes clusters for production.
In the next chapter, we will be studying the upcoming serverless features in Kubernetes.
We will also study virtual kubelets in detail and deploy stateless containers on GKE.
Upcoming Serverless Features in Kubernetes
6
Learning Objectives
This chapter covers Knative, Google Cloud Run, and Virtual Kubelet, which offer the advantages
of serverless on top of a Kubernetes cluster.
Introduction to Knative
Knative is an open source project started by Google with contributions from over 50
other companies, including Pivotal, Red Hat, IBM, and SAP. Knative extends Kubernetes
by introducing a set of components to build and run serverless applications on top of it.
This framework is great for application developers who are already using Kubernetes.
Knative provides tools for them to focus on their code without worrying about the
underlying architecture of Kubernetes. It introduces features such as automated
container builds, autoscaling, scale to zero, and an eventing framework, which allows
developers to get the benefits of serverless on top of Kubernetes.
Note
The Build component has been deprecated in favor of Tekton Pipelines in the latest
version of Knative. The final release of the Knative Build component is available in
version 0.7.
Build is the process of building the container images from the source code and
running them on a Kubernetes cluster. The Knative Serving component allows the
deployment of serverless applications and functions. This enables serving traffic to
containers and autoscaling based on the number of requests. The serving component
is also responsible for taking snapshots of the code and configurations whenever a
change is made to them. The Knative Eventing component helps us to build event-
driven applications. This component allows the applications to produce events for and
consume events from event streams.
The following diagram illustrates a Knative framework with its dependencies and the
stakeholders of each component:
The bottom layer represents the Kubernetes framework, which is used as the container
orchestration layer by the Knative framework. Kubernetes can be deployed on any
infrastructure, such as Google Cloud Platform or an on-premises system. Next, we
have the Istio service mesh layer, which manages network routing within the cluster.
This layer provides many benefits, including traffic management, observability, and
security. At the top layer, Knative runs on top of a Kubernetes cluster with Istio. In the
Knative layer, at one end we can see contributors who contribute code to the Knative
framework through the GitHub project, and at the other end we can see the application
developers who build and deploy applications on top of the Knative framework.
Note
For more information on Istio, please refer to https://istio.io/.
Now that we have this understanding of Knative, let's look at how to install Knative on a
Kubernetes cluster in the following section.
First, we need to set a few environment variables that we will be using with the gcloud
CLI. You should update <your-gcp-project-name> with the name of your GCP project. We
will be using us-central1-a as the GCP zone. Execute the following commands in your
terminal window to set the required environment variables:
$ export GCP_PROJECT=<your-gcp-project-name>
$ export GCP_ZONE=us-central1-a
$ export GKE_CLUSTER=knative-cluster
The output should be as follows:
Set our GCP project as the default project to be used by the gcloud CLI commands:
$ gcloud config set core/project $GCP_PROJECT
Now we can create the GKE cluster using the gcloud command. Knative requires
a Kubernetes cluster with version 1.11 or newer. We will be using the Istio plugin
provided by GKE for this cluster. The following is the recommended configuration for a
Kubernetes cluster to run Knative components:
• Kubernetes version 1.11 or newer
• Kubernetes nodes with four vCPUs (n1-standard-4)
• Node autoscaling enabled for up to 10 nodes
• API scopes for cloud-platform
Execute the following command to create a GKE cluster compatible with these
requirements:
$ gcloud beta container clusters create $GKE_CLUSTER \
--zone=$GCP_ZONE \
--machine-type=n1-standard-4 \
--cluster-version=latest \
--addons=HorizontalPodAutoscaling,HttpLoadBalancing,Istio \
--enable-stackdriver-kubernetes \
--enable-ip-alias \
--enable-autoscaling --min-nodes=1 --max-nodes=10 \
--enable-autorepair \
--scopes cloud-platform
It may take a few minutes to set up the Kubernetes cluster. Once the cluster is ready,
we will use the command gcloud container clusters get-credentials to fetch the
credentials of the new cluster and configure the kubectl CLI as you can see in the
following code snippet:
$ gcloud container clusters get-credentials $GKE_CLUSTER --zone $GCP_ZONE --project $GCP_PROJECT
Now you have successfully created the GKE cluster with Istio and configured kubectl to
access the newly created cluster. We can now proceed with the next step of installing
Knative. We will be installing Knative version 0.8, which is the latest available version at
the time of writing this book.
We will use the kubectl CLI to apply the Knative components to the Kubernetes cluster.
First, run the kubectl apply command with the -l knative.dev/crd-install=true flag to
prevent race conditions during the installation process:
$ kubectl apply --selector knative.dev/crd-install=true \
-f https://github.com/knative/serving/releases/download/v0.8.0/serving.yaml \
-f https://github.com/knative/eventing/releases/download/v0.8.0/release.yaml \
-f https://github.com/knative/serving/releases/download/v0.8.0/monitoring.yaml
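Then run the apply command a second time without the selector flag so that the
remaining Knative components are created; this second pass follows the standard
Knative 0.8 installation procedure:
$ kubectl apply \
-f https://github.com/knative/serving/releases/download/v0.8.0/serving.yaml \
-f https://github.com/knative/eventing/releases/download/v0.8.0/release.yaml \
-f https://github.com/knative/serving/releases/download/v0.8.0/monitoring.yaml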
Once the command is completed, execute the following commands to check the status
of the installation. Make sure that all pods have a status of Running:
$ kubectl get pods --namespace knative-serving
$ kubectl get pods --namespace knative-eventing
$ kubectl get pods --namespace knative-monitoring
The output should be as follows:
At this stage, you have set up a Kubernetes cluster on GKE and installed Knative. Now
we are ready to deploy our first application on Knative.
Note
Knative service objects and Kubernetes Service objects are two different types.
1. Create a file named hello-world.yaml with the following content. This Knative
service object defines values such as the namespace to deploy this service in, the
Docker image to use for the container, and any environment variables:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-nodejs
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/knative-samples/helloworld-nodejs
            env:
            - name: TARGET
              value: "Knative NodeJS App"
2. Once the hello-world.yaml file is ready, we can deploy our application with the
kubectl apply command:
$ kubectl apply -f hello-world.yaml
The output should be as follows:
3. The previous command will create multiple objects, including the Knative service,
configuration, revision, route, and Kubernetes Deployment. We can verify the
application by listing the newly created objects as in the following commands:
$ kubectl get ksvc
$ kubectl get configuration
$ kubectl get revision
$ kubectl get route
$ kubectl get deployments
The output should be as follows:
Next, we need to find the host URL of the helloworld-nodejs application. Execute
the following command and take note of the value of the URL column. This URL
takes the form http://<application-name>.<namespace>.example.com:
$ kubectl get route helloworld-nodejs
The output should be as follows:
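The application is exposed through the Istio ingress gateway, so we also need its
external IP address. Assuming the GKE setup from earlier in this chapter, it can be
retrieved as follows; take note of the EXTERNAL-IP column value, which is used as
${EXTERNAL_IP} in the next step:
$ kubectl get svc istio-ingressgateway --namespace istio-system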
5. Now we can invoke our application using the EXTERNAL_IP and URL values that we
noted in the earlier steps. Let's make a curl request with the following command:
$ curl -H "Host: helloworld-nodejs.default.example.com" http://${EXTERNAL_
IP}
The output should be as follows:
You should receive the expected output as Hello Knative NodeJS App!. This indicates
that we have successfully deployed and invoked our first application on the Knative
platform.
The following diagram illustrates the relationship between each of these components:
Figure 6.12: Relationship between Knative services, routes, configurations, and revisions
The configuration is used to define the desired state of the application. This will define
the container image used for the application and any other configuration parameters
that are required. A new Revision will be created each time a Configuration is updated.
Revision refers to a snapshot of the code and the Configuration. This is used to
record the history of Configuration changes. A Route is used to define the traffic
routing policy of the application and provides an HTTP endpoint for the application. By
default, the Route will send traffic to the latest Revision created by the Configuration.
The Route can also be configured for more advanced scenarios, including sending
traffic to a specific Revision or splitting traffic to different revisions based on defined
percentages. Service objects are used to manage the whole life cycle of the application. When deploying a new application without a Service, we would have to create the Configuration and Route objects manually; the Service simplifies this by creating and managing those objects automatically.
In the following section, we will be using canary deployment to deploy applications with
Knative. Let's first understand what exactly canary deployment is.
Canary Deployment
Canary deployment is a deployment strategy used when rolling out a new version of
code to a production environment. This is a fail-safe process of deploying a new version
of code into a production environment and switching a small percentage of traffic to
the new version. This way, the development and deployment teams can verify the new
version of the code with minimal impact on production traffic. Once the verifications
are done, all traffic will be switched to the new version. In addition to canary
deployments, there are several other deployment types, such as big bang deployments,
rolling deployments, and blue-green deployments.
In the helloworld-nodejs application that we deployed in Exercise 16, Deploying a
Sample App on Knative, we used the Service object with the spec.runLatest field, which
directs all traffic to the latest available revision. In the following exercise, we will be
using separate configuration and route objects instead of the service object.
Note
For more information on the canary deployment technique, refer to https://dev.to/mostlyjason/intro-to-deployment-strategies-blue-green-canary-and-more-3a3.
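For reference, the canary-deployment configuration used in the next steps can be sketched as follows. This is an assumption reconstructed from the outputs shown later in this exercise (the same helloworld-nodejs sample image, with the TARGET environment variable set for version v1); the exact file from the exercise is not reproduced here:
apiVersion: serving.knative.dev/v1alpha1
kind: Configuration
metadata:
  name: canary-deployment
  namespace: default
spec:
  revisionTemplate:
    spec:
      container:
        image: gcr.io/knative-samples/helloworld-nodejs
        env:
          - name: TARGET
            value: "This is the first version - v1"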
3. Let's get the revision name created by this configuration as we need this value in
the next step. Execute the kubectl get configurations command and retrieve the
value of the latestCreatedRevisionName field:
$ kubectl get configurations canary-deployment -o=jsonpath='{.status.latestCreatedRevisionName}'
The output should be as follows:
For me, the value returned from the preceding command is canary-deployment-xgvl8. Note that your value will be different.
4. The next step is to create the route object. Let's create a file named canary-deployment-route.yaml with the following content (please remember to replace canary-deployment-xgvl8 with the revision name that you noted in the previous step). Under the spec.traffic section, you can see that 100% of traffic is routed to the revision that we created previously:
apiVersion: serving.knative.dev/v1alpha1
kind: Route
metadata:
  name: canary-deployment
  namespace: default
spec:
  traffic:
    - revisionName: canary-deployment-xgvl8
      percent: 100
5. Create the route object with the kubectl apply command:
$ kubectl apply -f canary-deployment-route.yaml
The output should be as follows:
6. Make a request to the application and observe the expected output of Hello This
is the first version - v1!:
$ curl -H "Host: canary-deployment.default.example.com"
"http://${EXTERNAL_IP}"
The output should be as follows:
9. Now we can check the revisions created, while updating the configuration, using
the kubectl get revisions command:
$ kubectl get revisions
The output should be as follows:
10. Let's get the latest revision created by the canary-deployment configuration:
$ kubectl get configurations canary-deployment -o=jsonpath='{.status.latestCreatedRevisionName}'
The output should be as follows:
11. Now it's time to send some traffic to our new version of the application. Update
the spec.traffic section of canary-deployment-route.yaml to send 50% of the
traffic to the old revision and 50% to the new revision:
apiVersion: serving.knative.dev/v1alpha1
kind: Route
metadata:
  name: canary-deployment
  namespace: default
spec:
  traffic:
    - revisionName: canary-deployment-xgvl8
      percent: 50
    - revisionName: canary-deployment-8pp4s
      percent: 50
13. Now we can invoke the application multiple times to observe how traffic splits
between two revisions:
$ curl -H "Host: canary-deployment.default.example.com"
"http://${EXTERNAL_IP}"
14. Once we have successfully verified version 2 of the application, we can update canary-deployment-route.yaml to route 100% of the traffic to the latest revision:
apiVersion: serving.knative.dev/v1alpha1
kind: Route
metadata:
  name: canary-deployment
  namespace: default
spec:
  traffic:
    - revisionName: canary-deployment-xgvl8
      percent: 0
    - revisionName: canary-deployment-8pp4s
      percent: 100
15. Apply the changes to the route using the following command:
$ kubectl apply -f canary-deployment-route.yaml
The output should be as follows:
16. Now invoke the application multiple times to verify that all traffic goes to version 2
of the application:
$ curl -H "Host: blue-green-deployment.default.example.com"
"http://${EXTERNAL_IP}"
In this exercise, we have successfully used configuration and route objects to perform a
canary deployment with Knative.
Knative Monitoring
Knative comes with Grafana pre-installed, which is an open source metric analytics and
visualization tool. The Grafana pod is available in the knative-monitoring namespace
and can be listed with the following command:
$ kubectl get pods -l app=grafana -n knative-monitoring
We can expose the Grafana UI with the kubectl port-forward command, which will
forward local port 3000 to the port 3000 of the Grafana pod. Open a new terminal and
execute the following command:
$ kubectl port-forward $(kubectl get pod -n knative-monitoring -l app=grafana -o jsonpath='{.items[0].metadata.name}') -n knative-monitoring 3000:3000
Now we can navigate to the Grafana UI from our web browser at http://127.0.0.1:3000.
The output should be as follows:
Knative's Grafana installation comes with multiple pre-built dashboards for monitoring Knative and cluster metrics.
Knative Autoscaler
Knative has a built-in autoscaling feature that automatically scales the application
pods based on the number of HTTP requests it receives. This will increase the pod
count when there is increased demand and decrease the pod count when the demand
decreases. The pod count will scale to zero when pods are idle and there are no
incoming requests.
Knative uses two components, the autoscaler and the activator, to achieve the previously mentioned functionality. These components are deployed as pods in the knative-serving namespace, as you can see in the following snippet:
NAME READY STATUS RESTARTS AGE
activator-7c8b59d78-9kgk5 2/2 Running 0 15h
autoscaler-666c9bfcc6-vwrj6 2/2 Running 0 15h
controller-799cd5c6dc-p47qn 1/1 Running 0 15h
webhook-5b66fdf6b9-cbllh 1/1 Running 0 15h
The activator component is responsible for collecting information about the number
of concurrent requests to a revision and reporting these values to the autoscaler. The
autoscaler component will increase or decrease the number of pods based on the
metrics reported by the activator. By default, the autoscaler will try to maintain 100
concurrent requests per pod by scaling pods up or down. All Knative autoscaler-related
configurations are stored in a configuration map named config-autoscaler in the
knative-serving namespace. Knative can also be configured to use the Horizontal Pod
Autoscaler (HPA), which is provided by Kubernetes. HPA will autoscale pods based on
CPU usage.
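As a sketch (assuming the CPU metric and a target value of 70; the exact values depend on your workload), switching a revision template to the HPA class could look like the following annotations:
configuration:
  revisionTemplate:
    metadata:
      annotations:
        # Use the Kubernetes HPA instead of the default Knative autoscaler
        autoscaling.knative.dev/class: hpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: cpu
        autoscaling.knative.dev/target: "70"
You can inspect the autoscaler defaults with the kubectl get configmap config-autoscaler -n knative-serving -o yaml command.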
1. Create a file named autoscale-app.yaml with the following content (a sketch; the manifest head and service name are assumed from the commands that follow). The autoscaling.knative.dev/target annotation tells the autoscaler to maintain 10 concurrent requests per pod:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: autoscale-app
spec:
  runLatest:
    configuration:
      revisionTemplate:
        metadata:
          annotations:
            autoscaling.knative.dev/target: "10"
        spec:
          container:
            image: "gcr.io/knative-samples/autoscale-go:0.1"
2. Apply the service definition with the kubectl apply command:
$ kubectl apply -f autoscale-app.yaml
The output should be as follows:
4. Add execution permission to the hey binary and move it into the /usr/local/bin/
path:
$ chmod +x hey
$ sudo mv hey /usr/local/bin/
The output should be as follows:
5. Now we are ready to generate a load with the hey tool. The hey tool supports
multiple options when generating a load. For this scenario, we will use a load with
a concurrency of 50 (with the -c flag) for a duration of 60 seconds (with the -z
flag):
$ hey -z 60s -c 50 \
-host "autoscale-app.default.example.com" \
"http://${EXTERNAL_IP?}?sleep=1000"
6. In a separate terminal, watch for the number of pods created during the load:
$ kubectl get pods --watch
You will see output similar to the following:
NAME                                             READY   STATUS    RESTARTS   AGE
autoscale-app-7jt29-deployment-9c9c4b474-4ttl2   3/3     Running   0          58s
autoscale-app-7jt29-deployment-9c9c4b474-6pmjs   3/3     Running   0          60s
autoscale-app-7jt29-deployment-9c9c4b474-7j52p   3/3     Running   0          63s
autoscale-app-7jt29-deployment-9c9c4b474-dvcs6   3/3     Running   0          56s
autoscale-app-7jt29-deployment-9c9c4b474-hmkzf   3/3     Running   0          62s
We have successfully configured Knative's autoscaler and observed autoscaling with the
Grafana dashboard.
6. Click on the URL link displayed to run the container. Note that the URL will be
different for every new instance:
7. Next, we are going to deploy a new revision of the application by updating the
TARGET environment variable. Navigate back to the GCP console and click on the
DEPLOY NEW REVISION button.
8. From the Deploy revision to hello-world (us-central1) form, click on the SHOW OPTIONAL REVISION SETTINGS link, which will take us to the additional settings section:
We have successfully deployed a pre-built Docker image on the Google Cloud Run
platform.
The following figure displays a Kubernetes cluster with standard and virtual kubelets:
Figure 6.39: Kubernetes cluster with standard kubelets and Virtual Kubelets
Virtual Kubelet will appear as a traditional kubelet from the viewpoint of the Kubernetes
API. This will run in the existing Kubernetes cluster and register itself as a node within
the Kubernetes API. Virtual Kubelet will run and manage the pods in the same way a
kubelet does. But in contrast to the kubelet, which runs pods within the nodes, Virtual
Kubelet will utilize external services to run the pods. This connects the Kubernetes
cluster to other services such as serverless container platforms. Virtual Kubelet
supports a growing number of providers, including the following:
• Alibaba Cloud Elastic Container Instance (ECI)
• AWS Fargate
• Azure Batch
• Azure Container Instances (ACI)
• Kubernetes Container Runtime Interface (CRI)
• Huawei Cloud Container Instance (CCI)
• HashiCorp Nomad
• OpenStack Zun
Running pods on these platforms comes with the benefits of the serverless world. We do not have to worry about the infrastructure, as it is managed by the cloud provider. Pods will scale up and down automatically based on the number of requests received. Also, we only have to pay for the utilized resources.
We will be using Azure Cloud Shell, which has all the previously mentioned CLIs
pre-installed:
1. Navigate to https://shell.azure.com/ to open Cloud Shell in a browser window.
Select Bash from the Welcome to Azure Cloud Shell window:
2. Click on the Create storage button to create a storage account for Cloud Shell. Note that this is a one-time task, required only when using Cloud Shell for the first time:
3. Once Cloud Shell is ready, we can start creating the AKS cluster.
First, we need to create an Azure resource group that allows us to group related
Azure resources logically. Execute the following command to create a resource
group named serverless-kubernetes-group in the West US (westus) region:
$ az group create --name serverless-kubernetes-group --location westus
The output should be as follows:
5. Next, we will create an Azure Kubernetes cluster. The following command will
create an AKS cluster named virtual-kubelet-cluster with one node. This
command will take a few minutes to execute:
$ az aks create --resource-group serverless-kubernetes-group \
  --name virtual-kubelet-cluster \
  --node-count 1 \
  --node-vm-size Standard_D2 \
  --network-plugin azure \
  --generate-ssh-keys
Once AKS cluster creation is successful, the preceding command will return some
JSON output with the details of the cluster:
6. Next, we need to configure the kubectl CLI to communicate with the newly
created AKS cluster. Execute the az aks get-credentials command to download
the credentials and configure the kubectl CLI to work with the virtual-kubelet-
cluster cluster with the following command:
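$ az aks get-credentials --resource-group serverless-kubernetes-group \
  --name virtual-kubelet-cluster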
Note
We are not required to install the kubectl CLI because Cloud Shell comes with
kubectl pre-installed.
7. Now we can verify the connection to the cluster from Cloud Shell by executing the
kubectl get nodes command, which will list the nodes available in the AKS cluster:
$ kubectl get nodes
The output should be as follows:
8. If this is the first time you are using the ACI service, you need to register the
Microsoft.ContainerInstance provider with your subscription. We can check the
registration state of the Microsoft.ContainerInstance provider with the following
command:
$ az provider list --query "[?contains(namespace,'Microsoft.ContainerInstance')]" -o table
The output should be as follows:
10. The next step is to create the necessary ServiceAccount and ClusterRoleBinding objects for tiller. Create a file named tiller-rbac.yaml with the following code:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
11. Then execute the kubectl apply command to create the necessary ServiceAccount
and ClusterRoleBinding objects:
$ kubectl apply -f tiller-rbac.yaml
The output should be as follows:
12. Now we can configure Helm to use the tiller service account that we created in
the previous step:
$ helm init --service-account tiller
13. Once all configurations are done, we can install Virtual Kubelet using the az aks
install-connector command. We will be deploying both Linux and Windows
connectors with the following command:
$ az aks install-connector \
--resource-group serverless-kubernetes-group \
--name virtual-kubelet-cluster \
--connector-name virtual-kubelet \
--os-type Both
14. Once the installation is complete, we can verify it by listing the Kubernetes nodes.
There will be two new nodes, one for Windows and one for Linux:
$ kubectl get nodes
The output should be as follows:
15. Now we have Virtual Kubelet installed in the AKS cluster. We can deploy an
application to a new node introduced by Virtual Kubelet. We will be creating a
Kubernetes Deployment named hello-world with the microsoft/aci-helloworld
Docker image.
We need to add a nodeSelector to assign this pod specifically to the Virtual Kubelet node. Note that Virtual Kubelet nodes are tainted by default to prevent unexpected pods from running on them. We need to add tolerations to the pods to allow them to be scheduled on these nodes.
Let's create a file named hello-world.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: microsoft/aci-helloworld
          ports:
            - containerPort: 80
      nodeSelector:
        kubernetes.io/role: agent
        type: virtual-kubelet
        beta.kubernetes.io/os: linux
      tolerations:
        - key: virtual-kubelet.io/provider
          operator: Equal
          value: azure
          effect: NoSchedule
16. Deploy the hello-world application with the kubectl apply command:
$ kubectl apply -f hello-world.yaml
The output should be as follows:
17. Execute the kubectl get pods command with the -o wide flag to output a list of
pods and their respective nodes. Note that the hello-world-57f597bc59-q9w9k pod
has been scheduled on the virtual-kubelet-virtual-kubelet-linux-westus node:
$ kubectl get pods -o wide
The output should be as follows:
Thus, we have successfully configured Virtual Kubelet on AKS with ACI and have
deployed a pod in the Virtual Kubelet node.
Let's now complete an activity where we will be deploying a containerized application
in a serverless environment.
if ( !isset ( $_GET['timezone'] ) ) {
    // Returns error if the timezone parameter is not provided
    $output_message = "Error: Timezone not provided";
} else if ( empty ( $_GET['timezone'] ) ) {
    // Returns error if the timezone parameter value is empty
    $output_message = "Error: Timezone cannot be empty";
} else {
    // Save the timezone parameter value to a variable
    $timezone = $_GET['timezone'];
    try {
        // Generates the current time for the provided timezone
        $date = new DateTime("now", new DateTimeZone($timezone));
        $formatted_date_time = $date->format('Y-m-d H:i:s');
        $output_message = "Current date and time for $timezone is $formatted_date_time";
    } catch (Exception $e) {
        // Returns error if the timezone is invalid
        $output_message = "Error: Invalid timezone value";
    }
}
# Replace port 80 with the value from the PORT environment variable in the apache2 configuration files
RUN sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf
Note
The solution to the activity can be found on page 417.
Summary
In this chapter, we discussed the advantages of using serverless on Kubernetes. We
discussed three technologies that offer the benefits of serverless on top of a Kubernetes
cluster. These are Knative, Google Cloud Run, and Virtual Kubelet.
First, we created a GKE cluster with Istio and deployed Knative on top of it. Then
we learned how to deploy an application on Knative. Next, we discussed the serving
component of Knative and how to perform a canary deployment with configuration
and route objects. Then we discussed monitoring on Knative and observed how Knative
autoscaling works based on the number of requests received.
We also discussed Google Cloud Run, which is a fully managed platform, built on the
Knative project, to run stateless HTTP-driven containers. Then we learned how to
deploy an application with the Cloud Run service.
In the final section, we studied Virtual Kubelet, which is an open source implementation
of Kubernetes' kubelet. We learned the differences between normal kubelets and
Virtual Kubelet. Finally, we deployed Virtual Kubelet on an AKS cluster and deployed an
application to a Virtual Kubelet node.
In the next three chapters, we will be focusing on three different Kubernetes serverless
frameworks, namely Kubeless, OpenWhisk, and OpenFaaS.
7
Kubernetes Serverless with Kubeless
Learning Objectives
In this chapter, we will first learn about the Kubeless architecture. Then, we'll create our first
Kubeless function, deploy it, and invoke it. You'll also learn how to debug a Kubeless function in
the case of a failure.
Introduction to Kubeless
Kubeless is an open source and Kubernetes-native serverless framework that runs on
top of Kubernetes. This allows software developers to deploy code into a Kubernetes
cluster without worrying about the underlying infrastructure. Kubeless is a project by
Bitnami, who is a provider of packaged applications for any platform. Bitnami provides
software installers for over 130 applications, which allow you to quickly and efficiently
deploy these software applications to any platform.
Kubeless functions support multiple programming languages, including Python, PHP,
Ruby, Node.js, Golang, Java, .NET, Ballerina, and custom runtimes. These functions can
be invoked with HTTP(S) calls as well as event triggers with Kafka or NATS messaging
systems. Kubeless also supports Kinesis triggers to associate functions with the AWS
Kinesis service, which is a managed data-streaming service by AWS. Kubeless functions
can even be invoked at specified intervals using scheduled triggers.
Kubeless comes with its own Command-Line Interface (CLI) named kubeless, which is
similar to the kubectl CLI offered by Kubernetes. We can create, deploy, list, and delete
Kubeless functions using this kubeless CLI. Kubeless also has a graphical user interface,
which makes the management of the functions much easier.
In this chapter, we will create our first serverless function on Kubernetes using Kubeless. Then, we will invoke this function with multiple mechanisms, including HTTP and PubSub triggers. Once we are familiar with the basics of Kubeless, we will create a more advanced function that can post messages to Slack.
Kubeless Architecture
The Kubeless framework is an extension of the Kubernetes framework, leveraging
native Kubernetes concepts such as Custom Resource Definitions (CRDs) and custom
controllers. Since Kubeless is built on top of Kubernetes, it can take advantage of all the
great features available in Kubernetes, such as self-healing, autoscaling, load balancing,
and service discovery.
Note
Custom resources are extensions of the Kubernetes API. You can find more about Kubernetes' custom resources in the official Kubernetes documentation at https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/.
Let's take a look at the Kubeless architecture in order to understand the core concepts behind it:
Functions represent the code blocks executed by the Kubeless framework. During
the installation, a CRD named functions.kubeless.io will be created to represent the
Kubeless functions.
Triggers represent the invocation mechanism of the function. A Kubeless function will
be invoked whenever it receives a trigger. A single trigger can be associated with one
or many functions. Functions deployed on Kubeless can be triggered using five possible
mechanisms:
• HTTP trigger: This executes through HTTP(S)-based invocations such as HTTP
GET or POST requests.
• CronJob trigger: This executes through a predefined schedule (see the example after this list).
• Kafka trigger: This executes when a message gets published to the Kafka topics.
• NATS trigger: This executes when a message gets published to the NATS topics.
• Kinesis trigger: This executes when records get published to AWS Kinesis data
streams.
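For example, a CronJob trigger that invokes an already deployed function named hello every five minutes could be created as follows (a sketch; the trigger and function names here are placeholders):
$ kubeless trigger cronjob create hello-every-five-minutes \
  --function hello \
  --schedule "*/5 * * * *"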
Runtimes represent the different programming languages that can be used to write and execute Kubeless functions. A single programming language is further divided into multiple runtimes based on its version. As an example, Python 2.7, Python 3.4, Python 3.6, and Python 3.7 are the runtimes for the Python programming language.
Kubeless supports runtimes in both the stable and incubator stage. A runtime is
considered stable once it meets certain technical requirements specified by Kubeless.
Incubator runtimes are considered to be in the development stage. Once the specified technical requirements are fulfilled, runtime maintainers can create a pull request in the Kubeless GitHub repository to move the runtime from the incubator stage to the
stable stage. At the time of writing this book, Ballerina, .NET, Golang, Java, Node.js, PHP,
and Python runtimes are available in the stable stage and JVM and Vertx runtimes are
available in the incubator stage.
Note
The following document defines the technical requirements for a stable runtime: https://github.com/kubeless/runtimes/blob/master/DEVELOPER_GUIDE.md#runtime-image-requirements.
Note
VirtualBox can be installed on Ubuntu 18.04 with the APT package manager by
executing the following command in the terminal:
$ sudo apt install virtualbox -y
3. Install minikube.
Now, we are going to install Minikube version 1.2.0, which is the latest version
available at the time of writing this book. First, download the minikube binaries to
your local machine:
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.2.0/minikube-linux-amd64
The output will be as follows:
Now, in the VirtualBox Manager window, you can see a VM named minikube in the
running state:
8. Install kubectl.
Now, we are going to install kubectl version 1.15.0, which is the latest version
available at the time of writing this book. First, download the kubectl binaries to
your local machine:
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl
This will show the following output:
10. Finally, move the kubectl binary to the /usr/local/bin/ path location:
$ sudo mv kubectl /usr/local/bin/kubectl
The output is as follows:
12. Verify that the kubectl CLI is correctly pointed to the Minikube cluster:
$ kubectl get pods
You should see the following output:
Installing Kubeless
Once the Minikube Kubernetes environment is ready, we can install Kubeless on top of
the Kubernetes cluster. Installing Kubeless consists of installing three components:
• The Kubeless framework
• The Kubeless CLI
• The Kubeless UI
The Kubeless framework will install all the extensions on top of Kubernetes to
support Kubeless features. This includes CRDs, custom controllers, and deployments.
The Kubeless CLI is used to interact with the Kubeless framework for tasks such as
deploying functions, invoking functions, and creating triggers. The Kubeless UI is a GUI
for the Kubeless framework, which will help you to view, edit, and run functions.
In the next step, we will install the Kubeless framework. We will be using one of the YAML manifests provided by Kubeless to install the framework. There are multiple YAML files provided by Kubeless, and we have to choose the correct one based on the Kubernetes environment (for example, rbac, non-rbac, or openshift):
$ kubectl create -f https://github.com/kubeless/kubeless/releases/download/v1.0.3/kubeless-v1.0.3.yaml
The preceding step will create multiple Kubernetes objects in the kubeless namespace, including the functions.kubeless.io Custom Resource Definition and the Kubeless controller as a deployment. You can verify that these objects are up and running by executing the following commands:
$ kubectl get pods -n kubeless
$ kubectl get deployment -n kubeless
$ kubectl get customresourcedefinition
You will see the following on your screen:
Now, we have completed the installation of the Kubeless framework successfully. In the
next section, we will install the Kubeless CLI.
Now, we have successfully installed the Kubeless CLI. We can verify this by running the
following command:
$ kubeless version
The Kubeless UI
The Kubeless UI is the GUI for Kubeless. It allows you to create, edit, delete, and
execute Kubeless functions with an easy-to-use UI. Execute the following command to
install the Kubeless UI in the Kubernetes cluster:
$ kubectl create -f https://raw.githubusercontent.com/kubeless/kubeless-ui/
master/k8s.yaml
Once the installation is successful, execute the following command to open the
Kubeless UI in a browser window. You can reload the browser window if the Kubeless
UI doesn't show up, since creating the service can take a few minutes:
$ minikube service ui --namespace kubeless
We've just completed the installation of the Kubeless UI, which can be used to create, edit, delete, and execute Kubeless functions, similar to the Kubeless CLI.
Kubeless Functions
Once Kubeless is successfully installed, you can now forget about the underlying
infrastructure, including VMs and containers, and focus only on your function logic.
Kubeless functions are code snippets written in one of the supported languages. As we
discussed previously, Kubeless supports multiple programming languages and versions.
You can execute the kubeless get-server-config command to get a list of language
runtimes supported by your Kubeless version:
$ kubeless get-server-config
In the following sections, we are going to create, deploy, list, invoke, update, and delete
a Kubeless function.
Let's break this command up into a few pieces in order to understand what each part of
the command does:
• kubeless function deploy hello: This tells Kubeless to register a new function
named hello. We can use this name to invoke this function later.
• --runtime python3.7: This tells Kubeless to use the Python 3.7 runtime to run this
function.
• --from-file hello.py: This tells Kubeless to use the code available in the hello.py file to create the hello function. If the file is not in your current directory when executing the command, you need to specify the full file path. (A sketch of this file appears after this list.)
• --handler hello.main: This specifies the name of the code file and the method
to execute when this function is invoked. This should be in the format of <file-
name>.<function-name>. In our case, the filename is hello and the function name
inside the file is main.
You can find the other options that are available when deploying a function by
executing the kubeless function deploy --help command.
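For reference, a minimal hello.py matching this handler could look like the following sketch (based on the expected response mentioned later in this section; the exact sample file is not reproduced here):
def main(event, context):
    return "Welcome to Kubeless World"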
Figure 7.25: Listing the Kubeless functions with the Kubeless CLI
Note
The same can be achieved using the kubeless function ls command.
If you wish to obtain more detailed information about a specific function, you can use
the kubeless function describe command:
$ kubeless function describe hello
Since a Kubeless function is created as a Kubernetes object (that is, a custom resource), you can also use the kubectl CLI to get information about the available functions. The following is the output from the kubectl get functions command:
$ kubectl get functions
Figure 7.27: Listing the Kubeless functions with the kubectl CLI
You can also invoke Kubeless functions with the Kubeless UI. Once you open the
Kubeless UI, you can see the list of functions available on the left-hand side. You can
click on the hello function to open it. Then, click on the Run function button to execute
the function. You can see the expected response of Welcome to Kubeless World
underneath the Response section:
Note
Kubeless functions can also be updated or deleted using the Kubeless UI.
You can then execute the kubeless function update command to update the hello
function that we created earlier:
$ kubeless function update hello --from-file hello.py
Now you have to pass the required data when invoking the hello function:
$ kubeless function call hello --data '{"name":"Kubeless World!"}'
You should be able to see Hello Kubeless World! as the output of the preceding
command.
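For this to work, the updated hello.py presumably reads the name from the event payload, along the following lines (a sketch):
def main(event, context):
    name = event['data']['name']
    return "Hello " + name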
Once the function is deleted, try listing the function again. It should throw an error, as
follows:
$ kubeless function list hello
The preceding kubeless function delete command will remove not only the Kubeless function itself, but also the Kubernetes objects (such as the pod and deployment) that the framework created alongside it. You can verify this with the following command:
$ kubectl get pods -l function=hello
Now we have learned how to create, deploy, list, invoke, update, and delete Kubeless
functions. Let's move on to an exercise about creating your first Kubeless function.
Note
The code files for this exercise can be found at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson07/Exercise21.
2. Create the lesson-7 namespace and deploy the my-function.py file created
previously:
$ kubectl create namespace lesson-7
In this exercise, we first created a simple Python function that returns the Welcome to Serverless Architectures with Kubernetes string as its output, and deployed it to Kubeless. We then listed the function to make sure it was created successfully, invoked my-function, and received the expected response of Welcome to Serverless Architectures with Kubernetes. Finally, we cleaned up by deleting the function.
In order to use HTTP triggers, your Kubernetes cluster must have a running ingress controller. Once the ingress controller is running in the Kubernetes cluster, you can use the kubeless trigger http create command to create an HTTP trigger:
$ kubeless trigger http create <trigger-name> --function-name <function-name>
The --function-name flag is used to specify the name of the function that will be associated with the HTTP trigger.
Note
There are a number of ingress controller add-ons available for Kubernetes, including NGINX, Kong, Traefik, F5, Contour, and more. You can find them at https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/.
Note
The code files for this exercise can be found at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson07/Exercise22.
2. After a couple of minutes, you should be able to see that the nginx-ingress-controller container has been created in the kube-system namespace, which is the namespace for objects created by the Kubernetes system:
$ kubectl get pod -n kube-system -l app.kubernetes.io/name=nginx-ingress-controller
It shows the following:
return greetingMessage
4. Create the lesson-7 namespace and deploy the greetings.py file created earlier:
$ kubectl create namespace lesson-7
5. Invoke the function and verify that the function is providing the expected output:
$ kubeless function call greetings --namespace lesson-7
Once invoked, the screen will display the following:
6. Now we can create the HTTP trigger for the greetings function:
$ kubeless trigger http create greetings \
--function-name greetings \
--namespace lesson-7
The result is as follows:
7. List the HTTP triggers; you should be able to see the HTTP trigger for the greetings function:
$ kubeless trigger http list --namespace lesson-7
The list will look something like this:
8. This will create an ingress object in the Kubernetes layer. We can list the ingress
objects with the kubectl CLI:
$ kubectl get ingress --namespace lesson-7
This will return the following:
9. You can see the hostname with the .nip.io domain, which we can use to access
the greetings function over HTTP.
In this case, the hostname is greetings.192.168.99.100.nip.io. Once you open this
hostname in a web browser, you should be able to see the greeting message in the
browser window (note that your output may be different depending on your local
time):
Figure 7.47: Invoking the function with the HTTP GET request
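Alternatively, you can invoke the same endpoint from the terminal with curl (the hostname below is from this example and will differ in your environment):
$ curl http://greetings.192.168.99.100.nip.io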
In order to create PubSub triggers in Kubeless, we need to have a running Kafka cluster
or NATS cluster. Once the Kafka or NATS cluster is ready, we can use kubeless trigger
kafka create to create a Kafka trigger or kubeless trigger nats create to create a
NATS trigger and associate our PubSub function with the new trigger:
$ kubeless trigger <trigger-type> create <trigger-name> \
--function-selector <label-query> \
--trigger-topic <topic-name>
Let's discuss what each piece of the command does:
• kubeless trigger <trigger-type> create <trigger-name>: This tells Kubeless to
create a PubSub trigger with the provided name and trigger type. Valid trigger
types are kafka and nats.
• --function-selector <label-query>: This tells us which function should be
associated with this trigger. Kubernetes labels are used to define this relationship
(for example, --function-selector key1=value1,key2=value2).
• --trigger-topic <topic-name>: The Kafka broker will listen to this topic and the
function will be triggered when a message is published to it.
The topic is where messages from the producers get published. The Kubeless CLI allows us to manage topics using the kubeless topic command, which makes it easy to create, delete, and list topics, as well as publish messages to them.
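For example, the basic topic workflow looks like this (these commands assume a running Kafka setup, such as the one we deploy in the following steps; my-topic is a placeholder name):
$ kubeless topic create my-topic
$ kubeless topic publish --topic my-topic --data "test message"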
Let's invoke a Kubeless function with the PubSub mechanism using Kafka:
1. First, we are going to deploy Kafka and Zookeeper to our Kubernetes cluster:
$ kubectl create -f https://github.com/kubeless/kafka-trigger/releases/download/v1.0.2/kafka-zookeeper-v1.0.2.yaml
The output will look like the following:
2. Verify that the two statefulsets named kafka and zoo are running in the kubeless namespace for Kafka and Zookeeper:
$ kubectl get statefulset -n kubeless
$ kubectl get services -n kubeless
$ kubectl get deployment -n kubeless
The following output is seen:
3. Once our Kafka and Zookeeper deployment is ready, we can create and deploy the
function to be triggered by PubSub triggers. Create a file named pubsub.py and add
the following content:
def main(event, context):
    return "Invoked with Kubeless PubSub Trigger"
4. Let's deploy our function now:
$ kubeless function deploy pubsub --runtime python3.7 \
--from-file pubsub.py \
--handler pubsub.main
The deployment will yield the following:
5. Once the function is deployed, we can verify the function is successful by listing
the function:
$ kubeless function list pubsub
The listed function will be as follows:
6. Now, let's create the kafka trigger with the kubeless trigger kafka create
command and associate our pubsub function with the new trigger:
$ kubeless trigger kafka create my-trigger \
--function-selector function=pubsub \
--trigger-topic pubsub-topic
Figure 7.52: Creating the kafka trigger for the pubsub function
7. Now we need a Kubeless topic to publish the messages to. Let's create a topic with the kubeless topic create command. We need to make sure that the topic name matches the one we provided as the --trigger-topic while creating the kafka trigger in the previous step:
$ kubeless topic create pubsub-topic
8. Okay. Now it's time to test our pubsub function by publishing events to pubsub-topic:
$ kubeless topic publish --topic pubsub-topic --data "My first message"
9. Check the function logs to verify that the pubsub function has been successfully invoked:
$ kubectl logs -l function=pubsub
You should see the published message in the output logs:
...
My first message
...
To understand this better, check out the following descriptions of the output fields:
• METHOD: The HTTP method type (for example, GET/POST) when invoking the
function
• TOTAL_CALLS: The total number of invocations
• TOTAL_FAILURES: The number of function failures
• TOTAL_DURATION_SECONDS: The total number of seconds this function has executed
• AVG_DURATION_SECONDS: The average number of seconds this function has executed
• MESSAGE: Any other messages
The following is the kubeless function top output for the hello function:
$ kubeless function top hello
Now that we've monitored the function, it's time to move toward debugging it.
This will result in an Invalid runtime error, and Kubeless will display the supported runtimes. Upon further inspection, we can see that there is a typo in the --runtime parameter of the kubeless function deploy command.
The resulting output would look like this:
Let's correct this typo and rerun the kubeless function deploy command with the
python3.7 runtime:
$ kubeless function deploy debug --runtime python3.7 \
--from-file debug.py \
--handler debug.main
This time, the function will be successfully deployed into the Kubeless environment. It
should look like the following:
Error Scenario 02
Now, let's check the status of the function using the kubeless function ls command:
$ kubeless function ls debug
You can see that the status is 0/1 NOT READY. Now, let's check the status of the debug
pod using the kubectl get pods command:
$ kubectl get pods -l function=debug
Here, the debug pod is in the CrashLoopBackOff status. This error commonly occurs due to either a syntax error in the function or an issue with the dependencies that we specify.
On closer inspection, we could identify that a colon (:) to mark the end of the function
header is missing.
Let's correct this and update our function.
Open the debug.py file and add a colon at the end of the function header:
def main(event, context):
    name = event['data']['name']
    return "Hello " + name
We will now execute the kubeless function update command to update the function
with the new code file:
$ kubeless function update debug --from-file debug.py
When you execute the kubeless function ls debug again, you should be able to see that
the function is now ready with the 1/1 READY status:
Error Scenario 03
Let's create an example error scenario with our debug function. For this, you can call the debug function by replacing the key name of the data section with username:
$ kubeless function call debug --data '{"username":"Kubeless"}'
In order to find the possible cause of this failure, we need to check the function logs. You can execute the kubeless function logs command to view the logs of the debug function:
$ kubeless function logs debug
The first few lines of the output show lines similar to the following code block, which
are internal health checks. As per the logs, we can see that all the calls to the /healthz
endpoint have been successful with the 200 HTTP success response code:
10.56.0.1 - - [03/Jul/2019:13:36:17 +0000] "GET /healthz HTTP/1.1" 200 2 "" "kube-probe/1.12+" 0/120
Next, you can see a stack trace of the error messages, as follows, with the possible
cause being the KeyError: 'name' error. The function was expecting a 'name' key, which
was not found during the function execution:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/bottle.py", line 862, in _handle
    return route.call(**args)
  File "/usr/local/lib/python3.7/dist-packages/bottle.py", line 1740, in wrapper
    rv = callback(*a, **ka)
  File "/kubeless.py", line 86, in handler
    raise res
KeyError: 'name'
The last line of the error message indicates that HTTP error 500 was returned for the
function call:
10.56.0.1 - - [03/Jul/2019:13:37:29 +0000] "POST / HTTP/1.1" 500 739 "" "kubeless/v0.0.0 (linux/amd64) kubernetes/$Format" 0/10944
Note
HTTP 500 is the error code returned by the HTTP protocol, which indicates an
Internal Server Error. This means that the server was unable to fulfill the
request due to unexpected conditions.
Apart from kubeless function logs, you can also use the kubectl logs command, which will return a similar output. You need to pass the -l parameter, which indicates a label, in order to only get the logs for a specific function:
$ kubectl logs -l function=debug
Use the kubectl get functions --show-labels command to see the labels associated
with the Kubeless functions.
This will yield the following:
Let's correct our mistake and pass the correct argument to the debug function:
$ kubeless function call debug --data '{"name":"Kubeless"}'
Now our function has run successfully and has generated Hello Kubeless as its output:
Before we start installing the Serverless Framework, we need to have Node.js version 6.5.0 or later installed as a prerequisite. So, first, let's install Node.js:
$ curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
Once installed, verify the Node.js version by executing the following command:
$ nodejs -v
Once the Node.js installation is successful, we will then install the Serverless
Framework by executing the following command:
$ sudo npm install -g serverless
Running the serverless create command with the Kubeless template will create a directory named my-kubeless-project and several files within this directory. First, let's move to the my-kubeless-project directory by executing the following command:
$ cd my-kubeless-project
The handler.py file contains a sample Python function, as follows. This is a simple function that returns a JSON object with a status code of 200 (the function is abbreviated here to its essential structure; the sample message body in the generated file may differ):
import json

def hello(event, context):
    body = {
        "message": "Your function executed successfully!"
    }
    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }
    return response
It also creates a serverless.yml file, which tells the serverless framework to execute the
hello function inside the handler.py file. In the provider section, it is mentioned that
this is a Kubeless function with a python2.7 runtime. In the plugins section, it defines
the custom plugins required, such as the serverless-kubeless plugin:
# Welcome to Serverless!
#
# For full config options, check the kubeless plugin docs:
# https://github.com/serverless/serverless-kubeless
#
# Update the service name below with your own service name
service: my-kubeless-project

provider:
  name: kubeless
  runtime: python2.7

plugins:
  - serverless-kubeless

functions:
  hello:
    handler: handler.hello
Finally, the package.json file contains the npm packaging information, such as
dependencies:
{
  "name": "my-kubeless-project",
  "version": "1.0.0",
  "description": "Sample Kubeless Python serverless framework service.",
  "dependencies": {
    "serverless-kubeless": "^0.4.0"
  },
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "serverless",
    "kubeless"
  ],
  "author": "The Kubeless Authors",
  "license": "Apache-2.0"
}
You can update these files as required to match your business requirements. We are not
going to change these files in this example.
Now, we are going to execute the npm install command, which installs all the npm dependencies, such as the serverless-kubeless plugin:
$ npm install
When the function is successfully deployed, we can invoke the function with the
serverless invoke command:
$ serverless invoke --function hello -l
You can also use the kubeless function call command to invoke this function:
$ kubeless function call hello
Figure 7.74: Using the kubeless function call to invoke the function
Once you are done with the function, use serverless remove to delete the function:
$ serverless remove
Note
Execute the serverless logs -f hello command if you encounter any errors
while invoking the function.
Note
The detailed steps on creating a Slack workspace with incoming webhook
integration, along with the corresponding screenshots, are available on page 422.
Now we are ready to start the activity. Execute the following steps to complete
this activity:
Activity Solution
1. Create a function in any language (supported by Kubeless) that can post messages
to Slack. In this activity, we will write a Python function that performs the
following steps.
2. Use the requests library as a dependency.
3. Send a POST request to the incoming webhook (created in step 2) with an input
message.
4. Print the response of the POST request.
5. Deploy the function to the Kubeless framework.
6. Invoke the function.
7. Go to your Slack workspace and verify that the message was successfully posted
to the Slack channel. The final output should look like this:
Note
The solution to the activity can be found on page 422.
Summary
In this chapter, we learned how to deploy a single-node Kubernetes cluster with
Minikube. Then, we installed the Kubeless framework, Kubeless CLI, and Kubeless UI
on top of our Minikube cluster. Once the Kubernetes cluster and Kubeless framework
were ready, we created our first Kubeless function with Python and deployed it to
Kubeless. Then, we discussed multiple ways of invoking Kubeless functions, namely
with the Kubeless CLI, the Kubeless UI, HTTP triggers, scheduled triggers, and PubSub
triggers. Next, we discussed how to debug common error scenarios that we encounter
while deploying Kubeless functions. Then, we discussed how we can use the serverless
framework to deploy a Kubeless function. Finally, in the activity, we learned how we can
use a Kubeless function to send messages to a Slack channel.
In the next chapter, we shall introduce OpenWhisk, and cover OpenWhisk actions and
triggers.
8
Introduction to Apache OpenWhisk
Learning Objectives
This chapter covers Apache OpenWhisk and how to work with its actions, triggers, and packages.
Introduction to OpenWhisk
Until now in this book, we have learned about the Kubeless framework, which is an
open source Kubernetes-native serverless framework. We discussed the Kubeless
architecture, and created and worked with the Kubeless functions and triggers. In
this chapter, we shall be learning about OpenWhisk, which is another open source
serverless framework that can be deployed on top of Kubernetes.
OpenWhisk is an open source serverless framework that is part of the Apache Software
Foundation. This was originally developed at IBM with the project code name of Whisk,
and later branded as OpenWhisk once the source code was open sourced. Apache
OpenWhisk supports many programming languages, including Ballerina, Go, Java,
JavaScript, PHP, Python, Ruby, Swift, and .NET Core. It allows us to invoke functions
written in these programming languages in response to events. OpenWhisk supports
many deployment options, such as on-premises and cloud infrastructure.
There are four core components of OpenWhisk:
• Actions: These contain application logic written in one of the supported languages
that will be executed in response to events.
• Sequences: These link multiple actions together to create more complex
processing pipelines.
• Triggers and rules: These automate the invocation of actions by binding them to
external event sources.
• Packages: These combine related actions together for distribution.
The following diagram illustrates how these components interact with each other:
In the next section, we will learn how to run Apache OpenWhisk with IBM Cloud
Functions.
Note
A credit card is not required to register with IBM Cloud.
2. At this point, you will receive an email with an activation link. Click on the Confirm account button to activate your account, as shown in the following figure:
3. When you click on the Confirm account button in the email, you will be taken to the IBM Cloud welcome screen. Click on the Log in button to log in with the credentials used to register with IBM Cloud, as shown in the following figure:
4. Acknowledge the privacy data by clicking on the Proceed button, as shown in the
following figure:
5. You can skip the introduction video and proceed to the home page. Now you can
click the hamburger icon ( ) in the top-left corner of the screen and select
Functions from the menu, as shown in the following figure:
6. This will take you to the IBM Cloud Functions page (https://cloud.ibm.com/functions/), as shown in the following figure:
OpenWhisk offers a CLI named wsk to create and manage OpenWhisk entities. Next,
we will install the OpenWhisk CLI, which will be used to interact with the OpenWhisk
platform.
2. Next, we will extract the tar.gz file using the tar command as follows:
$ tar zxvf ibm-cli.tar.gz
The output should be as follows:
3. Then move the ibmcloud executable file to the /usr/local/bin/ path, as shown in
the following command:
$ sudo mv Bluemix_CLI/bin/ibmcloud /usr/local/bin/ibmcloud
The output should be as follows:
4. Now we will log in to IBM Cloud using the IBM Cloud CLI. Execute the following
command, replacing <YOUR_EMAIL> with the email address used when registering
to IBM Cloud. Provide the email and password used during the registration phase
when prompted and set the region number as 5 (us-south), as you can see in the
following command:
$ ibmcloud login -a cloud.ibm.com -o "<YOUR_EMAIL>" -s "dev"
The output should be as follows:
5. Now we will install the Cloud Functions plugin using the ibmcloud CLI, as shown in
the following command. This plugin will be used when we work with OpenWhisk
entities:
$ ibmcloud plugin install cloud-functions
6. Next, we will provide the target organization (the organization name is your email
address) and the space (which defaults to dev) using the following command:
$ ibmcloud target -o <YOUR_EMAIL> -s dev
The output should be as follows:
7. Now the configurations are done. We can use ibmcloud wsk to interact with
OpenWhisk entities, as shown in the following command:
$ ibmcloud wsk action list
Note
In this book, we will be using the wsk command to manage OpenWhisk entities
instead of the ibmcloud wsk command provided by IBM Cloud Functions. Both of
them provide the same functionality. The only difference is that wsk is the standard
CLI for OpenWhisk and ibmcloud fn is from the IBM Cloud Functions plugin.
8. Let's create a Linux alias, wsk="ibmcloud wsk". First, open the ~/.bashrc file with
your favorite text editor. In the following command, we will be using the vim text
editor to open the file:
$ vim ~/.bashrc
Add the following line at the end of the file:
alias wsk="ibmcloud wsk"
9. Source the ~/.bashrc file to apply the changes, as shown in the following
command:
$ source ~/.bashrc
The output should be as follows:
10. Now we should be able to invoke OpenWhisk with the wsk command. Execute the
following command to verify the installation:
$ wsk --help
This will print the help page of the wsk command, as shown in the following figure:
OpenWhisk Actions
In OpenWhisk, actions are code snippets written by developers that will be executed
in response to events. These actions can be written in any programming language
supported by OpenWhisk:
• Ballerina
• Go
• Java
• JavaScript
• PHP
• Python
• Ruby
• Swift
• .NET Core
Also, we can use a custom Docker image if our preferred language runtime is not
supported by OpenWhisk yet. These actions will receive a JSON object as input, then
perform the necessary processing within the action, and finally return a JSON object
with the processed results. In the following sections, we will focus on how to write,
create, list, invoke, update, and delete OpenWhisk actions using the wsk CLI.
Note
In this chapter, we will be mainly using JavaScript to create the function code.
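For instance, the randomNumber action used throughout this section is created from a JavaScript file. A minimal sketch of such a file is shown below; the book's exact sample is not reproduced here, but the invocation output later in this section shows that it returns a random number:
function main(params) {
    // OpenWhisk JavaScript actions export a main function that
    // receives a JSON object and returns a JSON object
    return { number: Math.random() };
}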
As we can see in the output, whenever an action is successfully created, the CLI informs us of the status of the action.
The OpenWhisk framework will determine the runtime to execute the action based on
the extension of the source code file. In the preceding scenario, the Node.js 10 runtime
will be selected for the provided .js file. You can use the --kind flag with the wsk action
create command if you want to override the default runtime selected by the OpenWhisk
framework:
$ wsk action create secondRandomNumber random-number.js --kind nodejs:8
From the preceding output, we can see the two actions we created earlier with the
names randomNumber and secondRandomNumber. The wsk action list command lists the
actions and the runtime of these actions, such as nodejs:8 or nodejs:10. By default, the
action list will be sorted based on the last update time, so the most recently updated
action will be at the top of the list. If we want the list to be sorted alphabetically, we can
use the --name-sort (or -n) flag, as shown in the following command:
$ wsk action list --name-sort
The request-response method is synchronous; the action invocation will wait until the
results are available. On the other hand, the fire-and-forget method is asynchronous.
This will return an ID called the activation ID, which can be used later to get the results.
Here is the standard format of the wsk command to invoke the action:
$ wsk action invoke <action-name>
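For example (a sketch; randomNumber is the action created earlier), invoking without any flags uses the fire-and-forget style, while the --blocking (or -b) flag requests the synchronous request-response style:
$ wsk action invoke randomNumber
$ wsk action invoke randomNumber --blocking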
The preceding command will return the following output in the terminal, which
contains the result returned from the action along with other metadata about the
invocation:
ok: invoked /_/randomNumber with id 002738b1acee4abba738b1aceedabb60
{
    "activationId": "002738b1acee4abba738b1aceedabb60",
    "annotations": [
        {
            "key": "path",
            "value": "your_email_address_dev/randomNumber"
        },
        {
            "key": "waitTime",
            "value": 79
        },
        {
            "key": "kind",
            "value": "nodejs:10"
        },
        {
            "key": "timeout",
            "value": false
        },
        {
            "key": "limits",
            "value": {
                "concurrency": 1,
                "logs": 10,
                "memory": 256,
                "timeout": 60000
            }
        },
        {
            "key": "initTime",
            "value": 39
        }
    ],
    "duration": 46,
    "end": 1564829766237,
    "logs": [],
    "name": "randomNumber",
    "namespace": "your_email_address_dev",
    "publish": false,
    "response": {
        "result": {
            "number": 0.6488215545330562
        },
        "status": "success",
        "success": true
    },
    "start": 1564829766191,
    "subject": "your_email_address",
    "version": "0.0.1"
}
We can see the output ("number": 0.6488215545330562) returned by the main function
within the response section of the returned JSON object. This is the random number
generated by the JavaScript function that we wrote previously. The returned JSON
object contains an activation ID ("activationId": "002738b1acee4abba738b1aceedabb60"),
which we can use to get the results later. This output includes other important values,
such as the action invocation status ("status": "success"), the start time ("start":
1564829766191), the end time ("end": 1564829766237), and the execution duration
("duration": 46) of this action.
Note
We will discuss how to get the activation results using activationId in the Fire-
and-Forget Invocation Method section.
We can use the --result (or -r) flag if we need to get the result of the action without
the other metadata, as shown in the following code:
$ wsk action invoke randomNumber --result
Figure 8.22: Invoking the randomNumber action using the request-and-response method
Figure 8.23: Invoking the randomNumber action using the fire-and-forget method
You need to replace <activation_id> with the value returned when you invoked the
function using the wsk action invoke command:
$ wsk activation get 2b90ade473e443bc90ade473e4b3bcff
ok: got activation 2b90ade473e443bc90ade473e4b3bcff
{
    "namespace": "[email protected]_dev",
    "name": "randomNumber",
    "version": "0.0.2",
    "subject": "[email protected]",
    "activationId": "2b90ade473e443bc90ade473e4b3bcff",
    "start": 1564832684116,
    "end": 1564832684171,
    "duration": 55,
    "statusCode": 0,
    "response": {
        "status": "success",
        "statusCode": 0,
        "success": true,
        "result": {
            "number": 0.05105974715780626
        }
    },
    "logs": [],
    "annotations": [
        {
            "key": "path",
            "value": "[email protected]_dev/randomNumber"
        },
        {
            "key": "waitTime",
            "value": 126
        },
        {
            "key": "kind",
            "value": "nodejs:10"
        },
        {
            "key": "timeout",
            "value": false
        },
        {
            "key": "limits",
            "value": {
                "concurrency": 1,
                "logs": 10,
                "memory": 256,
                "timeout": 60000
            }
        },
        {
            "key": "initTime",
            "value": 41
        }
    ],
    "publish": false
}
If you would prefer to retrieve only a summary of the activation, the --summary (or -s)
flag should be provided with the wsk activation get command:
$ wsk activation get <activation-id> --summary
The output from the preceding command will print a summary of the activation details,
as shown in the following screenshot:
The wsk activation result command returns only the results of the action, omitting
any metadata:
$ wsk activation result <activation-id>
The wsk activation list command can be used to list all the activations:
$ wsk activation list
The preceding command returns a list of activations sorted by the datetime of the
activation's invocation. The following table describes the information provided by each
column:
We already have an action that prints a random number, which is defined in the
random-number.js file. This function returns a value between 0 and 1, but what if we
want to print a random number between 1 and 100? This can be done using the
following code:
function main() {
    var randomNumber = Math.floor((Math.random() * 100) + 1);
    return { number: randomNumber };
}
Then, we can execute the wsk action update command to update the randomNumber
action:
$ wsk action update randomNumber random-number.js
Now we can verify the result of the updated action by executing the following
command:
$ wsk action invoke randomNumber --result
As we can see, the randomNumber action has returned a number between 1 and 100. We
can invoke the randomNumber action multiple times to verify that it returns an
output number between 1 and 100 each time.
Let's execute the wsk action delete command to delete the randomNumber and
secondRandomNumber actions we created in the preceding sections:
$ wsk action delete randomNumber
$ wsk action delete secondRandomNumber
The output should be as follows:
Now we have learned how to write, create, list, invoke, update, and delete OpenWhisk
actions. Let's move on to an exercise in which you will create your first OpenWhisk
action.
Next, we will create an action named examResult in the OpenWhisk framework with
the previously mentioned JavaScript function code. Then, we will invoke the action to
verify that it returns the results as expected. Once the action response is verified, we
will update the action to return the exam grade with the results based on the following
criteria:
• Return Pass with grade A if marks are equal to or above 80.
• Return Pass with grade B if marks are equal to or above 70.
• Return Pass with grade C if marks are equal to or above 60.
• Return Fail if marks are below 60.
Again, we will invoke the action to verify the results and finally delete the action.
Note
The code files for this exercise can be found at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson08/Exercise26.
1. Create a file named exam-result.js with the following function, which returns Pass
for marks greater than or equal to 60 and Fail otherwise:
function main(params) {
    var examResult = '';
    // initial version: a simple pass/fail check on the examMarks parameter
    if (params.examMarks >= 60) {
        examResult = 'Pass';
    } else {
        examResult = 'Fail';
    }
    return { result: examResult };
}
2. Now, let's create the OpenWhisk action named examResult from the exam-result.
js file created in step 1:
$ wsk action create examResult exam-result.js
The output should be as follows:
3. Once the action creation is successful, we can invoke the examResult action by
sending a value between 0 and 100 as the examMarks parameter:
$ wsk action invoke examResult --param examMarks 72 --result
The output should be as follows:
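Steps 4 and 5 are omitted from this extract. Based on the grading criteria listed earlier, the updated exam-result.js would look something like the following sketch, after which the action is updated with wsk action update:
function main(params) {
    var examResult = '';
    // grade boundaries taken from the criteria listed above
    if (params.examMarks >= 80) {
        examResult = 'Pass with grade A';
    } else if (params.examMarks >= 70) {
        examResult = 'Pass with grade B';
    } else if (params.examMarks >= 60) {
        examResult = 'Pass with grade C';
    } else {
        examResult = 'Fail';
    }
    return { result: examResult };
}
$ wsk action update examResult exam-result.js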
6. Once the action is updated, we can invoke the action multiple times with different
exam marks as parameters to verify the functionality:
$ wsk action invoke examResult --param examMarks 150 --result
$ wsk action invoke examResult --param examMarks 75 --result
$ wsk action invoke examResult --param examMarks 42 --result
The output should be as follows:
Figure 8.34: Invoking the examResult action with different parameter values
In this exercise, we learned how to create a JavaScript function that follows the
standards for OpenWhisk actions. Then we created the action and invoked it with the
wsk CLI. After that, we changed the logic of the function code and updated the action
with the latest function code. Finally, we performed a cleanup by deleting the action.
OpenWhisk Sequences
In OpenWhisk, and in general with programming, functions (known as actions in
OpenWhisk) are expected to perform a single focused task. This will help to reduce
code duplication by reusing the same function code. But creating complex applications
requires connecting multiple actions together to achieve the desired result. OpenWhisk
sequences are used to chain multiple OpenWhisk actions (which can be in different
programming language runtimes) together and create more complex processing
pipelines.
The following diagram illustrates how a sequence can be constructed by chaining
multiple actions:
We can pass parameters (if any) to the sequence, which will be used as the input for
the first action. Then, the output of each action will be the input for the next action,
and the final action of the sequence will return its result as the output of the sequence.
Actions written in different programming languages can also be chained together with
sequences.
Sequences can be created using the wsk action create command with the --sequence
flag to provide a comma-separated list of actions to invoke:
$ wsk action create <sequence-name> --sequence <action-01>,<action-02>
Note
Authentication is verifying the user's identity, and authorization is granting the
required level of access to the system.
First, let's create the authentication.js function. This function will receive two
parameters, named username and password. If the username and password match
the hardcoded values of admin (for the username parameter) and openwhisk (for the
password parameter), the function will return authenticationResult as true. Otherwise,
authenticationResult will be false:
function main(params) {
    var authenticationResult = '';
    // true only for the hardcoded admin/openwhisk credential pair
    if (params.username == 'admin' && params.password == 'openwhisk') {
        authenticationResult = true;
    } else {
        authenticationResult = false;
    }
    return { authenticationResult: authenticationResult };
}
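The matching authorization.js function is omitted from this extract. A minimal sketch, assuming it simply checks the authenticationResult passed along by the sequence, would be:
function main(params) {
    // the authenticationResult field is produced by the previous action
    if (params.authenticationResult) {
        return { message: 'User is authorized to access the system' };
    }
    return { message: 'Authentication failure' };
}
Both files are then registered as actions in the usual way:
$ wsk action create authentication authentication.js
$ wsk action create authorization authorization.js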
Now both authentication and authorization actions are ready. Let's create a sequence
named login by combining authentication and authorization actions:
$ wsk action create login --sequence authentication,authorization
Now it's time to test the login sequence. First, we will invoke the login sequence by
sending the correct credentials (username = admin and password = openwhisk):
$ wsk action invoke login --param username admin --param password openwhisk --result
The expected result for a successful login is shown in the preceding screenshot. Now,
let's invoke the login sequence by sending incorrect credentials (username = hacker and
password = hacker). This time we expect to receive an authentication failure message:
$ wsk action invoke login --param username hacker --param password hacker --result
Note
The code files for this exercise can be found at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson08/Exercise27.
5. Create the second function (show-result.js), which returns the exam result (Pass
or Fail) based on the average marks. The logic is as follows: marks less than 0 or
greater than 100 return an Error; marks greater than or equal to 60 return Pass;
otherwise, the result is Fail.
The code would be as follows:
function main(params) {
    var examResult = '';
    // the averageMarks parameter name is assumed here; adjust it to
    // match the output field produced by the first function
    if (params.averageMarks < 0 || params.averageMarks > 100) {
        examResult = 'Error';
    } else if (params.averageMarks >= 60) {
        examResult = 'Pass';
    } else {
        examResult = 'Fail';
    }
    return { result: examResult };
}
OpenWhisk Web Actions
Web actions are OpenWhisk actions that can be invoked over HTTP without requiring
authentication. Let's create a file named web-action.js containing a function that
returns a greeting based on the name parameter:
function main(params) {
    var helloMessage = '';
    if (params.name) {
        helloMessage = 'Hello, ' + params.name;
    } else {
        helloMessage = 'Hello, Stranger';
    }
    return { result: helloMessage };
}
Now we can create a web action by sending the --web true flag with the wsk action
create command:
$ wsk action create myWebAction web-action.js --web true
Then, we can invoke the created web action using the web action URL. The general
format of a web action URL is as follows:
https://{APIHOST}/api/v1/web/{QUALIFIED_ACTION_NAME}.{EXT}
We can use the --url flag with the wsk action get command to retrieve the URL of a
web action:
$ wsk action get myWebAction --url
We need to append .json as an extension to the preceding URL since our web action is
responding with a JSON payload. Now we can either open this URL in a web browser or
use the curl command to retrieve the output.
Figure 8.49: Invoking myWebAction from a web browser without the name parameter
Hello, Stranger is the expected response because we did not pass a value for the name
parameter in the query.
Now, let's invoke the same URL by appending ?name=OpenWhisk at the end of the URL:
https://us-south.functions.cloud.ibm.com/api/v1/web/sathsara89%40gmail.com_dev/default/myWebAction.json?name=OpenWhisk
The output should be as follows:
Figure 8.50: Invoking myWebAction from a web browser with the name parameter
We can invoke the same URL as a curl request with the following command:
$ curl https://us-south.functions.cloud.ibm.com/api/v1/web/sathsara89%40gmail.com_dev/default/myWebAction.json?name=OpenWhisk
Figure 8.51: Invoking myWebAction as a curl command with the name parameter
This command will produce the same output as we saw in the web browser.
As we discussed previously, OpenWhisk web actions can be configured to return
additional information including HTTP headers, HTTP status codes, and body content
of different types using one or more of the following fields in the JSON response:
• headers: This field is used to send HTTP headers in the response. An example
would be to send Content-Type as text/html.
• statusCode: This will send a valid HTTP response code. The status code of 200 OK
will be sent unless specified explicitly.
• body: This contains the response content, which is either plain text, a JSON object
or array, or a base64-encoded string for binary data.
Now we will update the web-action.js function to send the response in the format we
discussed earlier:
function main(params) {
    var username = '';
    var httpResponseCode;
    if (params.name) {
        username = params.name;
        httpResponseCode = 200;
    } else {
        username = 'Stranger';
        httpResponseCode = 400;
    }
    var htmlMessage = '<html><body><h3>Hello, ' + username + '</h3></body></html>';
    return {
        headers: {
            'Set-Cookie': 'Username=' + username + '; Max-Age=3600',
            'Content-Type': 'text/html'
        },
        statusCode: httpResponseCode,
        body: htmlMessage
    };
}
Then, we will update the myWebAction action with the latest function code:
$ wsk action update myWebAction web-action.js
Let's invoke the updated action with the following curl command. We will provide
name=OpenWhisk as a query parameter in the URL. Note that the URL now ends with the
.http extension instead of .json, since the action now returns an HTTP response with
an HTML body rather than a JSON payload. Also, the -v option is used to print
verbose output, which will help us to verify the fields we added to the response:
$ curl https://us-south.functions.cloud.ibm.com/api/v1/web/sathsara89%40gmail.com_dev/default/myWebAction.http?name=OpenWhisk -v
>
< HTTP/1.1 200 OK
< Date: Sun, 04 Aug 2019 16:32:56 GMT
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Set-Cookie: __cfduid=d1cb4dec494fb11bd8b60a225c218b3101564936375; expires=Mon, 03-Aug-20 16:32:55 GMT; path=/; domain=.functions.cloud.ibm.com; HttpOnly
< X-Request-ID: 7dbce6e92b0a90e313d47e0c2afe203b
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: OPTIONS, GET, DELETE, POST, PUT, HEAD, PATCH
< Access-Control-Allow-Headers: Authorization, Origin, X-Requested-With, Content-Type, Accept, User-Agent
< x-openwhisk-activation-id: f86aad67a9674aa1aaad67a9674aa12b
< Set-Cookie: Username=OpenWhisk; Max-Age=3600
< IBM_Cloud_Functions: OpenWhisk
< Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
< Server: cloudflare
< CF-RAY: 5011ee17db5d7f2f-CMB
<
* Connection #0 to host us-south.functions.cloud.ibm.com left intact
<html><body><h3>Hello, OpenWhisk</h3></body></html>
As expected, we have received HTTP/1.1 200 OK as the HTTP response code, Content-
Type: text/html as a header, a cookie, and <html><body><h3>Hello, OpenWhisk</h3></
body></html> as the body of the response.
Now, let's invoke the same curl request without the name=OpenWhisk query parameter.
This time, the expected response code is HTTP/1.1 400 Bad Request because we did
not pass a value for the query parameter. Also, the curl command will respond with
<html><body><h3>Hello, Stranger</h3></body></html> as the HTTP response body:
$ curl https://us-south.functions.cloud.ibm.com/api/v1/web/sathsara89%40gmail.com_dev/default/myWebAction.http -v
In this section, we introduced OpenWhisk web actions and discussed the differences
between standard actions and web actions. Then, we created a web action using the wsk
CLI. Next, we learned about the format of the URL exposed by web actions. We invoked
the web action with both web browser and curl commands. Then, we discussed the
additional information that can be returned with web actions. Finally, we updated our
web action to include headers, statusCode, and the body in the response and invoked
the web action using the curl command to verify the response.
OpenWhisk Feeds, Triggers, and Rules
Triggers are different types of events sent from event sources. These triggers can
be fired either manually with the wsk CLI or automatically from events occurring in
external event sources. Some examples of event sources are a Git repository, an email
account, or a Slack channel. As illustrated in the preceding diagram, feeds are used to
connect the triggers to external event sources. Examples of events delivered through
feeds are as follows:
• A commit is made to a Git repository.
• Incoming email messages to a particular account.
• Message received by a Slack channel.
As illustrated, the rule is the component that connects triggers with actions. A rule will
connect one trigger with one action. Once this link is created, every invocation of the
trigger will execute the associated action. The following scenarios are also possible by
creating an appropriate set of rules:
• A single trigger to execute multiple actions
• A single action to be executed in response to multiple triggers
Let's start by creating a simple action to be invoked with triggers and rules. Create a file
named triggers-rules.js and add the following JavaScript function:
function main(params) {
    var helloMessage = 'Invoked with triggers and rules';
    return { result: helloMessage };
}
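The action itself is created from this file in the usual way (the command is implied by the rule created below):
$ wsk action create triggersAndRules triggers-rules.js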
Now it's time to create our first trigger. We will use the wsk trigger create command to
create the trigger using the wsk CLI:
$ wsk trigger create <trigger-name>
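For our example, let's create a trigger named myTrigger:
$ wsk trigger create myTrigger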
We can list the available triggers to make sure that myTrigger has been created
successfully:
$ wsk trigger list
Triggers are useless until we connect them with actions through a rule. Now we will
be creating an OpenWhisk rule with the wsk rule create command, which has the
following format:
$ wsk rule create <rule-name> <trigger-name> <action-name>
Let's create a rule named myRule to connect the myTrigger trigger with the
triggersAndRules action:
$ wsk rule create myRule myTrigger triggersAndRules
Figure 8.56: Creating myRule to connect myTrigger with the triggersAndRules action
We can get the details about myRule, which shows the trigger and action associated with
the rule:
$ wsk rule get myRule
This command will print detailed output about myRule, as shown in the following
screenshot, which includes the namespace, version, status, and the associated trigger
and action of the rule.
It's time to see triggers in action once the action, trigger, and rule are ready. Let's fire
the trigger using the wsk trigger fire command:
$ wsk trigger fire myTrigger
In the preceding screenshot, we can see that the myTrigger trigger activation is
recorded, followed by the triggersAndRules action activation.
We can print the result of the triggersAndRules action activation to make sure that the
action was invoked properly by the trigger:
$ wsk activation get 85d9d7e50891468299d7e50891d68224 --summary
In this section, we discussed how to automate action invocation with feeds, triggers,
and rules. We created an action, a trigger, and then a rule to connect them. Finally, we
invoked the action by firing the trigger.
In the following exercise, let's learn how to create a cron job-based trigger.
Note
The code files for this exercise can be found at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson08/Exercise28.
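The steps that create the dateTimeAction action and the cron trigger are omitted from this extract. The trigger uses the alarm feed from the built-in /whisk.system/alarms package; a plausible creation command, assuming a fire-every-minute cron expression, is:
$ wsk trigger create dateTimeCronTrigger --feed /whisk.system/alarms/alarm --param cron "* * * * *"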
Here is the response for the wsk trigger create command. Make sure there is ok:
created trigger dateTimeCronTrigger at the end of the output, which indicates the
successful creation of dateTimeCronTrigger:
ok: invoked /whisk.system/alarms/alarm with id 06f8535f9d364882b8535f9d368882cd
{
    "activationId": "06f8535f9d364882b8535f9d368882cd",
    "annotations": [
        {
            "key": "path",
            "value": "whisk.system/alarms/alarm"
        },
        {
            "key": "waitTime",
            "value": 85
        },
        {
            "key": "kind",
            "value": "nodejs:10"
        },
        {
            "key": "timeout",
            "value": false
        },
        {
            "key": "limits",
            "value": {
                "concurrency": 1,
                "logs": 10,
                "memory": 256,
                "timeout": 60000
            }
        },
        {
            "key": "initTime",
            "value": 338
        }
    ],
    "duration": 594,
    "end": 1565083299218,
    "logs": [],
    "name": "alarm",
    "namespace": "[email protected]_dev",
    "publish": false,
    "response": {
        "result": {
            "status": "success"
        },
        "status": "success",
        "success": true
    },
    "start": 1565083298624,
    "subject": "[email protected]",
    "version": "0.0.152"
}
ok: created trigger dateTimeCronTrigger
3. Create the rule (dateTimeRule) to connect the action (dateTimeAction) with the
trigger (dateTimeCronTrigger):
$ wsk rule create dateTimeRule dateTimeCronTrigger dateTimeAction
The output should be as follows:
4. This action will now be triggered every minute. Allow the cron job trigger to
run for around 5 minutes. We can list the last 6 activations with the following
command:
$ wsk activation list --limit 6
5. List the summary of the activations of dateTimeAction to make sure it has printed
the current datetime every minute:
$ wsk activation get 04012f4f3e6044ed812f4f3e6054edc4 --summary
Check the value of the currentDateTime field, printed for each invocation to verify that
this action was invoked every minute as scheduled. In the preceding screenshot, we
can see that the action was invoked at 09:37:02, then again at 09:38:03, and finally at
09:39:03.
In this exercise, we created a simple function that prints the current date and time.
Then, we created a cron job trigger to invoke this action every minute.
OpenWhisk Packages
OpenWhisk packages allow us to organize our actions by bundling the related actions
together. As an example, consider that we have multiple actions, such as createOrder,
processOrder, dispatchOrder, and refundOrder. These actions will perform the relevant
application logic when an application user creates an order, processes an order,
dispatches an order, and refunds an order respectively. In this case, we can create a
package named order to group all order-related actions together.
As we learned previously, action names should be unique. Packages help to prevent
naming conflicts because we can create multiple actions with the same name by placing
them in different packages. As an example, the retrieveInfo action from the order
package may retrieve information about an order, but the retrieveInfo action from the
customer package can retrieve information about a customer.
So far, we have created many actions without worrying about packages. How was this
possible? This is because OpenWhisk places actions into the default package if we do
not specify a package during action creation.
There are two types of packages in OpenWhisk:
• Built-in packages (packages that come with OpenWhisk)
• User-defined packages (packages created by users)
All the packages available in a namespace can be retrieved with the wsk package list
<namespace> command.
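For example, the built-in packages reside in the whisk.system namespace and can be listed as follows:
$ wsk package list /whisk.system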
The output should be as follows:
In this section, we introduced the concept of packages and discussed the built-in
packages and user-defined packages of OpenWhisk. In the next exercise, we will create
a package and add an action to the newly created package.
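Step 1, omitted in this extract, creates the package itself:
$ wsk package create arithmetic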
2. Now we are going to create an action that will be added to our arithmetic
package. Create a file named add.js with the following content:
function main(params) {
    var result = params.firstNumber + params.secondNumber;
    return { result: result };
}
3. We can create the action and add it to the arithmetic package simultaneously
with the wsk action create command. This will only require us to prefix the action
name with the package name. Execute the following command:
$ wsk action create arithmetic/add add.js
In the output, we can see that the action has been successfully created in the
arithmetic package.
The output should be as follows:
4. Now we can verify that our add action has been placed in the arithmetic package
using the wsk action list command.
$ wsk action list --limit 2
The output should be as follows:
5. The wsk package get command will return JSON output that describes the
package:
$ wsk package get arithmetic
6. We can use the --summary flag if we want to see a summary of the package
description, which lists the actions within the package:
$ wsk package get arithmetic --summary
The output should be as follows:
Once the key is generated, copy the API key and save it somewhere safe as you will
see this key only once.
Note
Detailed steps on creating an OpenWeather account and a SendGrid account are
available in the Appendix section on page 432.
Now we are ready to start the activity. Execute the following steps to complete
this activity:
3. Create a function in any language that you are familiar with (and supported by the
OpenWhisk framework) that will take the city name as a parameter and return a
JSON object with weather information retrieved from the OpenWeather API.
Note
For this solution, we will be using functions written in JavaScript. However, you can
use any language that you are familiar with to write the functions.
5. Create a third function (in any language that you are familiar with and is supported
by the OpenWhisk framework) that will take the JSON object with the weather
data and format it as a string message to be sent as the email body.
Here is an example function written in JavaScript:
function main(params) {
    return new Promise(function(resolve, reject) {
        if (!params.weatherData) {
            reject("Weather data not provided");
        }
        // ... build weatherMessage from params.weatherData here
        // (the formatting logic is elided in this extract) ...
        resolve({message: weatherMessage});
    });
}
6. Next, create a sequence connecting all three actions.
7. Finally, create the trigger and rule to invoke the sequence daily at 8.00 AM.
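As a rough sketch (the action, trigger, and rule names here are hypothetical; the full solution is in the Appendix), steps 6 and 7 could take the following shape:
$ wsk action create weatherMailer --sequence getWeather,formatMessage,sendEmail
$ wsk trigger create dailyWeatherTrigger --feed /whisk.system/alarms/alarm --param cron "0 8 * * *"
$ wsk rule create weatherMailRule dailyWeatherTrigger weatherMailer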
Note
The solution to the activity can be found on page 432.
Summary
In this chapter, we first learned about the history and the core concepts of Apache
OpenWhisk. Then, we learned how to set up IBM Cloud Functions and its CLI to run our
serverless functions. After that, OpenWhisk actions were introduced, which are the
code snippets written in one of the languages supported by OpenWhisk. We discussed
how to write, create, list, invoke, update, and delete OpenWhisk actions using the wsk
CLI. Next, we went over OpenWhisk sequences, which are used to combine multiple
actions together to create a more complex processing pipeline. Going forward, we
learned how to expose actions publicly using a URL with web actions. We discussed
how web actions allow us to return additional information from the action, such as
HTTP headers and non-JSON payloads, including HTML and binary data. The next
section was on feeds, triggers, and rules that automate action invocation using events
from external event sources. Finally, OpenWhisk packages were discussed, which are
used to organize related actions by bundling them together.
In the next and final chapter, we shall learn about OpenFaaS and work with an
OpenFaaS function.
9
Going Serverless with OpenFaaS
Learning Objectives
• Create, build, deploy, list, invoke, and delete functions with the OpenFaaS CLI
In this chapter, we aim to set up the OpenFaaS framework on top of a Minikube cluster and
study how we can work with OpenFaaS functions, using both the OpenFaaS CLI and OpenFaaS
portal. We will also look into features such as observability and autoscaling with OpenFaaS.
Introduction to OpenFaaS
In the previous chapter, we learned about OpenWhisk, an open source serverless
framework, which is part of the Apache Software Foundation. We learned how to create,
list, invoke, update, and delete OpenWhisk actions. We also discussed how to automate
the action invocation with feeds, triggers, and rules.
In this chapter, we will be studying OpenFaaS, another open source framework used to
build and deploy serverless functions on top of containers. It was started as a proof-
of-concept project by Alex Ellis in October 2016, and the first version of the framework,
written in Golang, was committed to GitHub in December 2016.
OpenFaaS was originally designed to work with Docker Swarm, which is the clustering
and scheduling tool for Docker containers. Later, the OpenFaaS framework was
rearchitected to support the Kubernetes framework, too.
OpenFaaS comes with a built-in UI named OpenFaaS Portal, which can be used to
create and invoke functions from the web browser. The framework also offers a CLI
named faas-cli that allows us to manage functions through the command line.
The OpenFaaS framework has built-in support for autoscaling. This will scale up
the function when there is increased demand, and it will scale down when demand
decreases, or even scale down to zero when the function is idle.
Now, let's take a look at the components of the OpenFaaS framework:
OpenFaaS consists of the following components that are running on the underlying
Kubernetes or Docker Swarm:
• API Gateway:
The API Gateway is the entry point to the OpenFaaS framework, which exposes
the functions externally. It is also responsible for collecting the function metrics
such as function invocation count, function execution duration, and number of
function replicas. The API Gateway also handles function autoscaling by increasing
or decreasing function replicas based on demand.
• Prometheus:
Prometheus, which is an open source monitoring and alerting tool, comes bundled
with the OpenFaaS framework. This is used to store the information about the
function metrics collected by the API Gateway.
• Function Watchdog:
The Function Watchdog is a tiny Golang web server running alongside each
function container. This component is placed between the API Gateway and your
function and is responsible for converting message formats between the API
Gateway and the function. It converts the HTTP messages sent by the API Gateway
to the "standard input" (stdin) messages, which the function can understand.
This also handles the response path by converting the "standard output" (stdout)
response sent by the function to an HTTP response.
The following is an illustration of a function watchdog:
Docker Swarm or Kubernetes can be used as the container orchestration tool with the
OpenFaaS framework, which manages the containers running on the underlying Docker
framework.
Once these prerequisites are ready, we can continue to install OpenFaaS. The
installation of OpenFaas can be broadly classified into three steps, as follows:
1. Installing the OpenFaaS CLI
2. Installing the OpenFaaS framework (on a Minikube cluster)
3. Setting up an environment variable
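The detailed commands for these steps are omitted from this extract. The commonly used ones, taken from the official OpenFaaS documentation (treat the exact URLs as assumptions if they have changed since publication), are:
$ curl -sL https://cli.openfaas.com | sudo sh
$ helm repo add openfaas https://openfaas.github.io/faas-netes/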
Once the installation is complete, we can verify installation with the faas-cli version
command:
$ faas-cli version
As you can see, we have installed the faas-cli utility and can check its version number.
Now we are going to create the Kubernetes secret, which is required to enable basic
authentication for the OpenFaaS gateway. First, we will create a random string that will
be used as the password. Once the password is generated, we will echo the generated
password and save it in a secure place as we need it to log in to the API Gateway later
on. Run the following commands to generate the password:
$ PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d' ' -f1)
$ echo $PASSWORD
After generating the password, we will create a Kubernetes secret object to store the
password.
Note:
A Kubernetes secret object is used to store sensitive data such as a password.
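A command of the following shape creates the secret expected by the OpenFaaS helm chart (the secret and key names follow the chart's documentation):
$ kubectl -n openfaas create secret generic basic-auth \
    --from-literal=basic-auth-user=admin \
    --from-literal=basic-auth-password="$PASSWORD"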
We can now deploy the OpenFaaS framework from the helm chart. The helm upgrade
openfaas command starts the deployment of OpenFaaS and will start deploying the
OpenFaaS framework on your local Minikube cluster. This will take between 5 and
15 minutes depending on the network speed. Run the following commands to install
OpenFaaS:
$ helm upgrade openfaas \
--install openfaas/openfaas \
--namespace openfaas \
--set functionNamespace=openfaas-fn \
--set basic_auth=true
The preceding command prints a lengthy output, and, at the bottom, it provides a
command to verify the installation, as you can see in the following screenshot:
You can verify the deployment state from the following command:
$ kubectl --namespace=openfaas get deployments -l "release=openfaas,
app=openfaas"
Once the installation has been successfully completed and all services are running, we
then have to log in to the OpenFaaS gateway with the credentials we created in the
preceding steps. Run the following command to log in to the OpenFaas gateway:
$ faas-cli login --username admin --password $PASSWORD
Open the ~/.bashrc file with your favorite text editor and add the following two lines
at the end of the file. Replace <your-docker-id> with your Docker ID in the following
commands:
export OPENFAAS_URL=$(minikube ip):31112
export OPENFAAS_PREFIX=<your-docker-id>
Then, you need to source the ~/.bashrc file to reload the newly configured environment
variables, as shown in the following command:
$ source ~/.bashrc
OpenFaaS Functions
OpenFaaS functions can be written in any language supported by Linux or Windows,
and they can then be converted to a serverless function using Docker containers.
This is a major advantage of the OpenFaaS framework compared to other serverless
frameworks that support only predefined languages and runtimes.
OpenFaaS functions can be deployed with either faas-cli or the OpenFaaS portal.
In the following sections, we are first going to discuss how we can build, deploy, list,
invoke, and delete OpenFaaS functions using the faas-cli command-line tool. Then, we
will discuss how to deploy and invoke functions with the OpenFaaS portal.
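Before creating a function, the official function templates need to be fetched; the standard command (omitted in this extract) is:
$ faas-cli template pull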
Let's check the folder structure with the tree -L 2 command that will print the folder
tree with two levels of depth, as you can see in the following screenshot:
Within the template folder, we can see 17 folders each for a specific language template.
Now, we can use the faas-cli new command to create the structure and files for a new
function using the downloaded templates as follows:
$ faas-cli new <function-name> --lang=<function-language>
Let's create our first OpenFaaS function named hello with the go language template
using the following command:
$ faas-cli new hello --lang=go
As per the output, the preceding command will create multiple files and directories
inside the current folder. Let's execute the tree -L 2 command again to identify the
newly created files and directories:
We can see a file named hello.yml, a folder named hello, and a handler.go file inside the
hello folder.
First, we will look into the hello.yml file, which is called the function definition file:
version: 1.0
provider:
  name: openfaas
  gateway: http://192.168.99.100:31112
functions:
  hello:
    lang: go
    handler: ./hello
    image: sathsarasa/hello:latest
This file has three top-level sections named version, provider, and functions.
Inside the provider section, there is a name tag, which defines the provider name
(openfaas in the generated file above). The next one is the gateway tag, which points
to the URL where the API Gateway is running. This value can be overridden at
deployment time with the --gateway flag or the OPENFAAS_URL environment variable.
Next is the functions section, which is used to define one or more functions to be
deployed with the OpenFaaS CLI. In the preceding code, the hello.yml file has a single
function named hello written in the Go language (lang: go). The handler of the function
is defined with the handler: ./hello section, which points to the folder where the source
code of the hello function (hello/handler.go) resides. Finally, there is the image tag that
specifies the name of the output Docker image. The Docker image name is prefixed
with your Docker ID, which is configured using the OPENFAAS_PREFIX environment variable.
Next, we will discuss the handler.go file that was created inside the hello folder. This
file contains the source code of the function written in the Go language. This function
accepts a string parameter and returns the string by prepending it with Hello, Go. You
said:, as displayed in the following code snippet:
package function

import (
	"fmt"
)

// Handle a serverless request (the function body below is reconstructed
// from the description above; the signature follows the OpenFaaS go template)
func Handle(req []byte) string {
	return fmt.Sprintf("Hello, Go. You said: %s", string(req))
}
This is just a sample function generated by the template. We can update it with our
own function logic.
This initiates the process of building the Docker image and will invoke the docker build
command internally. A new folder named build will be created during this step with all
the files required for the build process.
Now, let's build the hello function that we created in the previous section:
$ faas-cli build -f hello.yml
Note
Docker Hub is a free service for storing and sharing Docker images.
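The image is pushed to Docker Hub with the faas-cli push command:
$ faas-cli push -f hello.yml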
We can verify that the image is pushed successfully by visiting the Docker Hub page at
https://hub.docker.com/.
Thus, we have successfully pushed the function's Docker image to Docker Hub.
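Next, the function is deployed to the OpenFaaS framework with the faas-cli deploy command:
$ faas-cli deploy -f hello.yml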
We will receive a 202 Accepted output along with the function URL, which we can use
to invoke the function.
At this step, there will be a number of Kubernetes objects, including pods, services,
deployments, and replica sets created in the openfaas-fn namespace. We can view all
these Kubernetes objects with the following command:
$ kubectl get all -n openfaas-fn
Hence, we have successfully deployed the hello function to the OpenFaaS framework.
The output of the faas-cli list command will include the following columns:
• Function – The name of the function
• Invocations – The number of times the function has been invoked
• Replicas – The number of Kubernetes pod replicas of the function
The value of the Invocations column will increase each time we invoke the function.
The value of the Replicas column will increase automatically if the invocation rate
increases.
The --verbose flag can be used with faas-cli list if you want to get an additional
column named Image, which lists the Docker image used to deploy the function, as
shown in the following command:
$ faas-cli list --verbose
Figure 9.24: Listing the OpenFaaS functions with the verbose output
If we want to get details about a specific function, we can use the faas-cli describe CLI
command:
$ faas-cli describe hello
Now, let's invoke the hello function we deployed in the previous step.
Run the following command to invoke the hello function:
$ faas-cli invoke hello
Once the function is invoked, it will ask you to enter the input parameters and press
Ctrl + D to stop reading from the standard input. The output should be as follows:
We can also send the input data to the function, as shown in the following command:
$ echo "Hello with echo" | faas-cli invoke hello
Figure 9.27: Invoking the hello function with piping the input
The curl command can also be used to invoke the functions, as follows:
$ curl http://192.168.99.100:31112/function/hello -d "Hello from curl"
Hence, we have successfully invoked the hello function using both the faas-cli invoke
command and the curl command.
We can remove the hello function we created earlier with the following command:
$ faas-cli remove hello
In these sections, we learned to create, deploy, list, invoke, and delete OpenFaaS
functions using the faas-cli command line. Now, let's move on to an exercise where we
will be creating our first OpenFaaS function.
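Step 1, omitted in this extract, scaffolds the function from the python3 template (the template name is confirmed by the build output below):
$ faas-cli new ip-info --lang=python3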
2. Update the ip-info/requirements.txt file to add the requests pip module, which
we need to invoke HTTP requests from our function:
requests
3. Update the ip-info/handler.py file to invoke the https://httpbin.org/ip endpoint.
This endpoint is a simple service that will return the IP of the originating request.
The following code will send an HTTP GET request to the https://httpbin.org/ip
endpoint and return the origin IP address:
import requests
import json

def handle(req):
    api_response = requests.get('https://httpbin.org/ip')
    json_object = api_response.json()
    origin_ip = json_object["origin"]
    # the return statement is missing from this extract; returning the
    # origin IP as a JSON string is an assumed completion
    return json.dumps({"ip": origin_ip})
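The function is then built, pushed, and deployed in one step; the command producing the output below is:
$ faas-cli up -f ip-info.yml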
The faas-cli up command will print the following output, which lists the steps of
building, pushing, and deploying the ip-info function:
[0] > Building ip-info.
Clearing temporary build folder: ./build/ip-info/
Preparing ./ip-info/ ./build/ip-info//function
Building: sathsarasa/ip-info:latest with python3 template. Please wait..
Sending build context to Docker daemon 9.728kB
...
Successfully built 1b86554ad3a2
Successfully tagged sathsarasa/ip-info:latest
Image: sathsarasa/ip-info:latest built.
[0] < Building ip-info done.
[0] worker done.
Deploying: ip-info.
WARNING! Communication is not secure, please consider using HTTPS.
Letsencrypt.org offers free SSL/TLS certificates.
That's all you need to do! This will deploy the Figlet function into our existing OpenFaaS
cluster. Now, you will be able to see a new function named figlet in the left-hand
sidebar of the OpenFaaS portal, as shown in the following figure:
Let's invoke the function from the OpenFaaS portal. You need to click on the function
name, and then the right-hand panel of the screen will display information about the
function, including the function status, invocation count, replica count, function image,
and the function URL:
We can invoke this function by clicking on the INVOKE button available under the
Invoke function section. If the function requires an input value, you can provide it
under the Request Body section before invoking the function.
Let's invoke the figlet function by providing the OpenFaaS string as the request body, as
shown in the following figure:
Now, you can see the expected output of the function. This will be the ASCII logo
for the input value we provided when invoking the function. Additionally, the UI will
provide you with the response status code and the execution duration for the function
invocation.
Deploying a Custom Function
Now, let's deploy a custom function named hello using the Docker image that we built
previously. Before deploying the functions from the OpenFaaS portal, we should have
our functions written, and the Docker images built and pushed using the faas-cli
command.
Click on the Deploy New Function button again, and, this time, select the CUSTOM
tab from the dialog box. Now, we need to provide the Docker image name and function
name as mandatory fields. Let's provide the hello Docker image we built previously
(<your-docker-id>/hello) and provide hello-portal as the function name and click on
the DEPLOY button:
Then, you will see the hello-portal function added to the left-side menu of the
OpenFaaS portal:
Now, you can follow similar steps to the ones that we discussed previously to invoke the
hello-portal function.
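First, a new PHP function is scaffolded from the php7 template (this step is omitted in the extract but implied by the generated Handler.php below):
$ faas-cli new html-output --lang=php7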
Then, we will update the generated Handler.php file to return a hardcoded HTML string.
Open the html-output/src/Handler.php file using your favorite text editor. The following
command will open this file with the vi editor:
$ vi html-output/src/Handler.php
Add the following content to the file. This is a simple PHP code that will return the text,
OpenFaaS HTML Output, formatted as HTML header text:
<?php

namespace App;

/**
 * Class Handler
 * @package App
 */
class Handler
{
    /**
     * @param $data
     * @return
     */
    public function handle($data) {
        $htmlOutput = "<html><h1>OpenFaaS HTML Output</h1></html>";
        return $htmlOutput;
    }
}
Now, the PHP function is ready with the HTML output. The next step is to configure
the Content-Type of the function as text/html. This can be done by updating the
environment section of the function definition file. Let's update the html-output.yml
file with content_type: text/html inside the environment section, as shown in the
following code:
$ vi html-output.yml
provider:
  name: faas
  gateway: http://192.168.99.100:31112
functions:
  html-output:
    lang: php7
    handler: ./html-output
    image: sathsarasa/html-output:latest
    environment:
      content_type: text/html
Now, let's build, push, and deploy the html-output function with the faas-cli up
command:
$ faas-cli up -f html-output.yml
Once the preceding command is executed, we will receive an output similar to the
following:
[0] > Building html-output.
Clearing temporary build folder: ./build/html-output/
Preparing ./html-output/ ./build/html-output//function
Building: sathsarasa/html-output:latest with php7 template. Please wait..
Sending build context to Docker daemon 13.31kB
...
Successfully built db79bcf55f33
Successfully tagged sathsarasa/html-output:latest
Image: sathsarasa/html-output:latest built.
[0] < Building html-output done.
[0] worker done.
eb2c5ec03df0: Pushed
3b051c6cbb79: Pushed
99abb9ea3d15: Mounted from sathsarasa/php7
be22007b8d1b: Mounted from sathsarasa/php7
83a68ffd9f11: Mounted from sathsarasa/php7
1bfeebd65323: Mounted from sathsarasa/php7
latest: digest:
sha256:ec5721288a325900252ce928f8c5f8726c6ab0186449d9414baa04e4fac4dfd0
size: 4296
[0] < Pushing html-output [sathsarasa/html-output:latest] done.
[0] worker done.
Deploying: html-output.
WARNING! Communication is not secure, please consider using HTTPS.
Letsencrypt.org offers free SSL/TLS certificates.
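Step 1, omitted above, scaffolds the function from the php7 template:
$ faas-cli new serverless-website --lang=php7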
2. Create the HTML folder inside serverless-website to store all the HTML files:
$ mkdir serverless-website/src/html
3. Create the first HTML file for the home page (serverless-website/src/html/home.
html) with the following code. This HTML page will output the text, Welcome to
OpenFaaS Home Page, as the page header, and OpenFaaS Home as the page title:
<!DOCTYPE html>
<html>
<head>
<title>OpenFaaS Home</title>
</head>
<body>
<h1>Welcome to OpenFaaS Home Page</h1>
</body>
</html>
4. Create the second HTML file for the login page (serverless-website/src/html/
login.html). This HTML page will output a simple login form with two fields for
username and password and a Login button to submit the form:
<!DOCTYPE html>
<html>
<head>
<title>OpenFaaS Login</title>
</head>
<body>
<h1>OpenFaaS Login Page</h1>
<form id="contact_us_form">
<label for="username">Username:</label>
<input type="text" name="username" required>
<label for="password">Password:</label>
<input type="text" name="password" required>
<input type="submit" value="Login">
</form>
</body>
</html>
5. Update the handler file (serverless-website/src/Handler.php) to return the
appropriate HTML file based on the path parameters of the function URL with
the following code. This function will receive either home or login as the path
parameter while invoking. It will then read the path parameter and set the HTML
page name accordingly based on the path parameter provided. The next step is to
open the HTML file, read the content of the file, and finally return the content of
the file as the function response:
<?php

namespace App;

class Handler
{
    public function handle($data) {
        // Retrieve page name from path params
        $path_params = getenv('Http_Path');
        $path_params_array = explode('/', $path_params);
        $last_index = count($path_params_array);
        $page_name = $path_params_array[$last_index - 1];

        // Set the page name
        $current_dir = __DIR__;
        $html_file_path = $current_dir . "/html/" . $page_name . ".html";

        // Read the file
        $html_file = fopen($html_file_path, "r") or die("Unable to open HTML file!");
        $html_output = fread($html_file, filesize($html_file_path));
        fclose($html_file);

        // Return file content
        return $html_output;
    }
}
6. Set content_type as text/html in serverless-website.yml:
version: 1.0
provider:
  name: openfaas
  gateway: http://192.168.99.100:31112
functions:
  serverless-website:
    lang: php7
    handler: ./serverless-website
    image: sathsarasa/serverless-website:latest
    environment:
      content_type: text/html
7. Build, push, and deploy the serverless-website function using the following
command:
$ faas-cli up -f serverless-website.yml
The following is the output of the preceding command:
[0] > Building serverless-website.
Clearing temporary build folder: ./build/serverless-website/
Preparing ./serverless-website/ ./build/serverless-website//function
Building: sathsarasa/serverless-website:latest with php7 template. Please
wait..
Sending build context to Docker daemon 16.38kB
...
Deploying: serverless-website.
WARNING! Communication is not secure, please consider using HTTPS.
Letsencrypt.org offers free SSL/TLS certificates.
Figure 9.43: Invoking the home page of the serverless website function
Figure 9.44: Invoking the login page of the serverless website function
First, we need to expose the Prometheus deployment created during the installation.
Execute the following command to expose Prometheus as a NodePort service:
$ kubectl expose deployment prometheus -n openfaas --type=NodePort
--name=prometheus-ui
This will expose the Prometheus deployment on a random port above 30,000. Execute
the following commands to get the URL of the Prometheus UI:
$ MINIKUBE_IP=$(minikube ip)
$ PROMETHEUS_PORT=$(kubectl get svc prometheus-ui -n openfaas -o jsonpath="{.spec.ports[0].nodePort}")
$ PROMETHEUS_URL=http://$MINIKUBE_IP:$PROMETHEUS_PORT/graph
$ echo $PROMETHEUS_URL
Note
Invoke the available functions multiple times so that we can view the statistics of
these invocations from the Prometheus dashboard.
In addition to the Prometheus dashboards that we discussed, we can also use Grafana
to visualize the metrics stored in Prometheus. Grafana is an open source tool used to
analyze and visualize metrics over a period of time. It can be integrated with multiple
data sources such as Prometheus, ElasticSearch, Influx DB, or MySQL. In the next
exercise, we are going to learn how to set up Grafana with OpenFaaS and create
dashboards to monitor the metrics stored in the Prometheus data source.
3. Find the URL of the grafana dashboard using the following commands:
$ MINIKUBE_IP=$(minikube ip)
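# GRAFANA_PORT is set in an earlier, omitted step; assuming a NodePort
# service named grafana in the openfaas namespace, it can be obtained with:
$ GRAFANA_PORT=$(kubectl get svc grafana -n openfaas -o jsonpath="{.spec.ports[0].nodePort}")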
$ GRAFANA_URL=http://$MINIKUBE_IP:$GRAFANA_PORT/dashboard/db/openfaas
$ echo $GRAFANA_URL
The output should be as follows:
4. Navigate to the grafana URL using the URL printed in the previous step:
5. Log in to Grafana using the default credentials (the username is admin and the
password is admin). The output should be as follows:
6. From the Grafana menu in the top-left corner, select Dashboards > Import.
Provide the ID 3434 in the Grafana.com Dashboard input box and wait for a few
seconds to load the dashboard data:
7. From this screen, select faas as the Prometheus data source and click on Import,
as shown in the following figure:
Thus, we have successfully set up Grafana dashboards to visualize the metrics stored in
Prometheus.
Now we can put a load on the figlet function by invoking it 1,000 times, as shown in
the following code. The following script will invoke the figlet function 1,000 times by
providing the OpenFaaS string as the input for the function and sleeps for 0.1 seconds in
between each invocation:
for i in {1..1000}
do
echo "Invocation $i"
echo OpenFaaS | faas-cli invoke figlet
sleep 0.1
done
Navigate to the Grafana portal and observe the increasing number of replicas for the
figlet function. Once the load completes, the replica count will start scaling down and
go back to the com.openfaas.scale.min count of 1 function replica.
The output should be as follows:
resize: vertical
}
/** Style the submit button */
input[type=submit] {
color: white;
background-color: #5a91e8;
padding: 10px 20px;
border: none;
border-radius: 4px;
cursor: pointer;
}
/** Change submit button color for mouse hover */
input[type=submit]:hover {
background-color: #2662bf;
}
/** Add padding around the form */
.container {
    padding: 20px;
    border-radius: 5px;
}
/** Bold font for response and add margin */
#response {
font-weight: bold;
margin-bottom: 20px;
}
</style>
</head>
<body>
<h1>OpenFaaS Contact Form</h1>
<div class="container">
<!-- Placeholder for the response -->
<div id='response'></div>
<form id="contact_us_form">
<label for="name">Name:</label>
<input type="text" id="name" name="name" required>
<label for="email">Email:</label>
<input type="email" id="email" name="email" required>
<label for="message">Message:</label>
<textarea id="message" name="message" required></textarea>
<input type="submit" value="Send Message">
</form>
</div>
<script src="http://code.jquery.com/jquery-3.4.1.min.js"></script>
<script>
$(document).ready(function(){
$('#contact_us_form').on('submit', function(e){
// prevent form from submitting.
e.preventDefault();
$('#response').html('Sending message...');
// retrieve values from the form fields
var name = $('#name').val();
var email = $('#email').val();
var message = $('#message').val();
var formData = {
    name: name,
    email: email,
    message: message
};
// send the ajax POST request
$.ajax({
    type: "POST",
    url: './form-processor',
    data: JSON.stringify(formData)
})
.done(function(data) {
    $('#response').html(data);
})
.fail(function(data) {
    $('#response').html(data);
});
});
});
</script>
</body>
</html>
3. Create the form-processor function, which takes the form values from the Contact
Us form and sends an email to a specified email address with the information
provided.
4. Invoke the Contact Us form function using a web browser and verify the email
delivery.
The contact form should look as shown in the following figure:
The email received from the contact form should look as shown in the following
screenshot:
Note
The solution to the activity can be found on page 444.
Summary
We started this chapter with an introduction to the OpenFaaS framework and
continued with an overview of the components available with the OpenFaaS framework.
Next, we looked at how to install faas-cli and the OpenFaaS framework on a local
Minikube cluster.
Then, we started looking at OpenFaaS functions. We discussed how we can use
faas-cli to create the function templates, build and push the function Docker image,
and deploy the function to the OpenFaaS framework. Then, we learned how to invoke
the deployed functions with the faas-cli command and curl command. Next, we introduced the
OpenFaaS portal, which is the built-in UI for the OpenFaaS framework.
We also learned how we can set up an OpenFaaS function to return HTML content and
return different content based on provided parameters. We configured the Prometheus
and Grafana dashboards to visualize the function metrics, including invocation count,
invocation duration, and replica counts. Then, we discussed the function autoscaling
feature, which scales up or scales down function replicas based on demand. We
performed a load test on a function and observed autoscaling in action with Grafana
dashboards.
Finally, in the activity, we built the frontend and backend of a Contact Us form of a
website using the OpenFaaS framework.
Through the concepts and the various exercises and activities presented in this book,
we have equipped you with all the skills you need to use serverless architectures and
the state-of-art container management system, Kubernetes.
We are confident that you will be able to apply this knowledge toward building more
robust and effective systems and host them on cloud providers such as AWS Lambda,
Google Cloud Functions, and more. You will also be able to use the highly effective
features of best-in-class frameworks such as OpenFaaS, OpenWhisk, Kubeless, and
more.
Appendix
About
This section is included to assist the students to perform the activities in the book.
It includes detailed steps that are to be performed by the students to achieve the objectives of
the activities.
1. Create a main.go file to register the function handler and start the HTTP server:
package main

import (
	"fmt"
	"net/http"
)

func main() {
	fmt.Println("Starting the 🚲 finder..")
	http.HandleFunc("/", FindBikes)
	fmt.Println("Function handlers are registered.")
	http.ListenAndServe(":8080", nil)
}
2. Create a function.go file for the FindBikes function:
...
...
...
...
if bikeAmount == 0 {
w.Write([]byte(fmt.Sprintf(RESPONSE_NO_AVAILABLE_BIKE,
bikePoint.CommonName, url)))
return
} else {
w.Write([]byte(fmt.Sprintf(DEFAULT_RESPONSE, bikePoint.
CommonName, bikeAmount, url)))
return
}
...
Note
The files required for the activity can be found at: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Activity1.
In this file, the actual function and its helpers should be implemented. FindBikes is
responsible for getting data from the TFL Unified API for the bike point locations
and then the number of available bikes. According to the collected information,
this function returns complete sentences to be used as Twitter responses.
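A minimal sketch of what function.go implements is shown below. The TfL BikePoint search endpoint is real, but the query parameter name, the response handling, and the response messages here are assumptions; the actual implementation is in the repository linked above:
package main

import (
    "encoding/json"
    "net/http"
    neturl "net/url"
)

// FindBikes sketch: queries TfL's BikePoint search API for the given street
// name. The "query" parameter and the messages below are assumptions.
func FindBikes(w http.ResponseWriter, r *http.Request) {
    query := r.URL.Query().Get("query")
    resp, err := http.Get("https://api.tfl.gov.uk/BikePoint/Search?query=" + neturl.QueryEscape(query))
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer resp.Body.Close()
    // The search response is a JSON array of bike points.
    var bikePoints []struct {
        CommonName string `json:"commonName"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&bikePoints); err != nil || len(bikePoints) == 0 {
        w.Write([]byte("No bike point found around " + query + "."))
        return
    }
    // A follow-up request would fetch the number of available bikes for the
    // first bike point and fill the response templates described above.
    w.Write([]byte("Found bike point: " + bikePoints[0].CommonName))
}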
3. Create a Dockerfile for building and packaging the function, as in Exercise 2:
FROM golang:1.12.5-alpine3.9 AS builder
ADD . .
RUN go build *.go
FROM alpine:3.9
RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
RUN update-ca-certificates
COPY --from=builder /go/function ./bikes
RUN chmod +x ./bikes
ENTRYPOINT ["./bikes"]
In this Dockerfile, the application is built in the first container and packaged in
the second container for delivery.
4. Build the container image with Docker commands: docker build . -t find-bikes.
It should look something like this:
5. Run the container image as a Docker container and make the ports available on
the host system: docker run -it --rm -p 8080:8080 find-bikes.
Things should look as shown in the following screenshot:
6. Test the function's HTTP endpoint with different queries, such as Oxford, Abbey,
or Diagon Alley.
We expect to get real responses for London streets and failure responses for
imaginary streets from literature:
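For a quick check from a second Terminal, the endpoint can be queried with curl. The query parameter name here is an assumption; use whatever parameter your FindBikes implementation reads:
curl "http://localhost:8080/?query=Oxford"
curl "http://localhost:8080/?query=Diagon%20Alley"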
Chapter 02: Introduction to Serverless in the Cloud
2. Click on Configure apps in the open window, as shown in the following screenshot:
3. Click on Browse the App Directory to add a new application from the directory, as
shown in the following screenshot:
4. Find Incoming WebHooks from the search box in App Directory, as shown in the
following screenshot:
6. Fill in the configuration for the incoming webhook by specifying your specific
channel name and icon, as shown in the following screenshot:
Copy the Webhook URL and click Save Settings, as shown in the preceding
screenshot.
7. Open the Slack workspace and channel we mentioned in step 6. You will see an
integration message:
Activity Solution
Execute the following steps to complete this activity:
1. Create a new function to call the Slack webhook when the function is invoked.
In GCF, it can be defined with the name StandupReminder, 128 MB memory, and an
HTTP trigger.
import (
    "bytes"
    "net/http"
)

// StandupReminder is the HTTP-triggered entry point. This is an abridged
// sketch; the complete function.go is linked in the note below.
func StandupReminder(w http.ResponseWriter, r *http.Request) {
    url := "https://hooks.slack.com/services/..."
    data := bytes.NewBuffer([]byte(`{"text": "Time for the stand-up meeting!"}`))
    req, err := http.NewRequest("POST", url, data)
    if err != nil {
        panic(err)
    }
    req.Header.Set("Content-Type", "application/json")
    client := &http.Client{}
    _, err = client.Do(req)
    if err != nil {
        panic(err)
    }
}
Note
Do not forget to change the url value with the Slack URL for the incoming webhook configuration from step 6.
You can find the complete function.go file in the activity solutions of this book's GitHub repository: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson02/Activity2/function.go.
2. Create a scheduler job with the trigger URL of the function and specify the schedule based on your stand-up meeting times.
The scheduler can be defined in Google Cloud Scheduler with the name
StandupReminder and the URL of the function, as shown in the following
screenshot:
With the schedule of 0 9 * * 1-5, the reminder will invoke the function at 09:00
on every weekday, Monday through Friday.
3. Check the Slack channel for the reminder message when the time defined in the
schedule arrives.
For the schedule of 0 9 * * 1-5, you will see a message on your selected Slack
channel at 09:00 on workdays, as shown in the following screenshot:
4. Delete the schedule job and function from the cloud provider, as shown in the
following screenshot:
In this activity, we've built the backend of a Slack application using functions. We
started by configuring Slack for incoming webhooks and then created a function to
send data to the webhook. Since our function should be invoked at predefined times, we
used the cloud scheduler services to invoke the function. The successful reminder
message in Slack illustrated the integration of functions with other cloud services
and external services.
Chapter 03: Introduction to Serverless Frameworks
4. Click on Browse the App Directory to add a new application from the directory:
7. Choose a channel for posting joke messages and click on the Add Incoming
WebHooks integration:
8. Fill in the configuration for the incoming webhook with your specific channel
name and icon:
9. Open your Slack workspace and the channel you configured in Step 6 to check the
integration message:
Activity Solution
Execute the following steps to complete this activity:
2. In your Terminal, start the Serverless Framework development environment:
docker run -it --entrypoint=bash onuryilmaz/serverless
This command will start a Docker container in interactive mode. In the upcoming
steps, actions will be taken inside this Docker container:
Note
nano and vim are installed as text editors in the Serverless Framework development environment Docker container.
4. Create a serverless.yaml file with the following content and replace the value
of SLACK_WEBHOOK_URL with the URL you copied from Step 6 of the Slack
Setup. Furthermore, update the CITY environment variable with the current office
location to get the correct weather information. In addition, you can change the
schedule section, which is currently triggering the function every workday at
08:00:
service: daily-weather
provider:
  name: aws
  runtime: nodejs8.10
functions:
  weather:
    handler: handler.weather
    events:
      - schedule: cron(0 8 ? * MON-FRI *)
    environment:
      CITY: Berlin
      SLACK_WEBHOOK_URL: https://hooks.slack.com/services/.../.../...
Note
serverless.yaml is available at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson03/Activity3/serverless.yaml.
Note
package.json is available at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson03/Activity3/package.json.
console.log(weatherURL)
fetch(weatherURL)
  .then(response => response.text())
  .then(data => {
    slack.webhook({
      text: "Current weather status is " + data
    }, function(err, response) {
      if (err) {
        console.log("======== ERROR ========")
        console.error(err);
        console.log("======== ERROR ========")
        return callback(null, {statusCode: 500, body: JSON.stringify({ error: err }) });
      }
      console.log("======== SLACK SEND STATUS ========")
      console.error(response.status);
      console.log("======== SLACK SEND STATUS ========")
      return callback(null, {statusCode: 200, body: "ok" });
    });
  }).catch((error) => {
    console.log("======== ERROR ========")
    console.error(error);
    console.log("======== ERROR ========")
    return callback(null, {statusCode: 500, body: JSON.stringify({ error }) });
  });
};
Note
handler.js is available at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson03/Activity3/handler.js.
7. At the end of the file's creation, you will see the following file structure, with three
files:
ls -l
The output should be as follows:
8. Install the required Node.js dependencies for the serverless application. Run the
following command to install the dependencies:
npm install
The output should be as follows:
9. Export the AWS credentials from Exercise xx as the following environment
variables:
export AWS_ACCESS_KEY_ID=AKIASVTPHRZR33BS256U
export AWS_SECRET_ACCESS_KEY=B***************************R
The output should be as follows:
10. Deploy the serverless application to AWS using the Serverless Framework. Run the
following commands to deploy the function:
serverless deploy
This command makes the Serverless Framework deploy the function into
AWS. The output logs start by packaging the service and creating AWS resources
for source code, artifacts, and functions. After all the resources have been created,
the Service Information section provides a summary of the complete stack as you
can see in the following figure:
11. Check AWS Lambda for the deployed functions in the AWS Console as shown in
the following figure:
12. Invoke the function with the Serverless Framework's client tools. Run the following command in your Terminal:
serverless invoke --function weather
This command invokes the deployed function and prints out the response as you
can see in the following figure:
As we can see, statusCode is 200, and the body of the response also indicates that
the function has responded successfully.
13. Check the Slack channel for the posted weather status:
14. Return to your Terminal and delete the function with the Serverless Framework.
Run the following command in your Terminal:
serverless remove
This command will remove the deployed function, along with all its dependencies:
15. Exit the Serverless Framework development environment container. Run exit in
your Terminal:
In this activity, we have built the backend of a Slack application using the
Serverless Framework. We started by configuring Slack for incoming webhooks
and then created a serverless application to send data to the webhook. In order
to invoke the function at predefined times, the scheduling configuration of the
Serverless Framework was used instead of cloud-specific schedulers. Since the
Serverless Framework creates an abstraction over the cloud providers, the
serverless application that we developed in this activity is suitable for
multi-cloud deployments.
Chapter 04: Kubernetes Deep Dive
func main() {
The main function starts with the database connection, followed by price
retrieval from CurrencyLayer. It then continues by creating a SQL statement and
executing it on the database connection; a sketch follows the note below.
Note
main.go is available at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson04/Activity4/main.go.
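As a rough sketch of that flow, main.go looks roughly like the following. The environment variable names come from insert-gold-price.yaml later in this activity; the CurrencyLayer response fields and the GoldPrices table schema are assumptions, so refer to the repository file for the real implementation:
package main

import (
    "database/sql"
    "encoding/json"
    "fmt"
    "net/http"
    "os"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    // Connection parameters come from the environment variables set in
    // insert-gold-price.yaml later in this activity.
    dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s", os.Getenv("MYSQL_USER"),
        os.Getenv("MYSQL_PASSWORD"), os.Getenv("MYSQL_ADDRESS"), os.Getenv("MYSQL_DATABASE"))
    db, err := sql.Open("mysql", dsn)
    if err != nil {
        panic(err)
    }
    defer db.Close()

    // Retrieve the current gold price (XAU) from CurrencyLayer. The exact
    // response fields are an assumption; see main.go in the repository.
    resp, err := http.Get("http://apilayer.net/api/live?currencies=XAU&access_key=" +
        os.Getenv("API_KEY"))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    var result struct {
        Quotes map[string]float64 `json:"quotes"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
        panic(err)
    }

    // Insert the retrieved price into the GoldPrices table (schema assumed).
    if _, err := db.Exec("INSERT INTO GoldPrices (price) VALUES (?)", result.Quotes["USDXAU"]); err != nil {
        panic(err)
    }
}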
Note
Dockerfile is available at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson04/Activity4/Dockerfile.
Note
Do not forget to change <USERNAME> to your Docker Hub username.
4. Push the Docker container to the Docker registry. Run the following command in
your Terminal:
docker push <USERNAME>/gold-price-to-mysql
This command uploads the container image to the Docker Hub, as shown in the
following figure:
Note
Do not forget to change <USERNAME> to your Docker Hub username.
5. Deploy the MySQL database into the Kubernetes cluster. Create a mysql.yaml file
with the MySQL StatefulSet definition:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        - name: MYSQL_DATABASE
          value: "db"
        - name: MYSQL_USER
          value: "user"
        - name: MYSQL_PASSWORD
          value: "password"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
Note
mysql.yaml is available at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson04/Activity4/mysql.yaml.
Note
service.yaml is available at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson04/Activity4/service.yaml.
jobTemplate:
  spec:
    template:
      spec:
        restartPolicy: OnFailure
        containers:
        - name: insert
          image: <USERNAME>/gold-price-to-mysql
          env:
          - name: MYSQL_ADDRESS
            value: "gold-price-db:3306"
          - name: MYSQL_DATABASE
            value: "db"
          - name: MYSQL_USER
            value: "user"
          - name: MYSQL_PASSWORD
            value: "password"
          - name: API_KEY
            value: "<API-KEY>"
Note
insert-gold-price.yaml is available at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson04/Activity4/insert-gold-price.yaml.
Do not forget to change <USERNAME> to your Docker Hub username and <API-KEY>
to your CurrencyLayer API key.
10. Deploy the CronJob with the following command in your Terminal:
kubectl apply -f insert-gold-price.yaml
This command submits the file to Kubernetes and creates the gold-price-to-mysql
CronJob, as shown in the following figure:
11. Wait for a couple of minutes and check the instances of CronJob. Check the
running pods with the following command in your Terminal:
kubectl get pods
This command lists the pods, and you should see a couple of instances whose
names start with gold-price-to-mysql and with a STATUS of Completed, as shown
in the following figure:
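Step 12, verifying the collected data, is elided above; one way to inspect it (a sketch, assuming the table is named GoldPrices as mentioned below) is to run a temporary MySQL client pod against the gold-price-db service:
kubectl run mysql-client --image=mysql:5.7 -it --rm --restart=Never -- mysql -h gold-price-db -u user -ppassword db -e "SELECT * FROM GoldPrices;"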
In the GoldPrices MySQL table, there is price data collected every minute. This
shows that the MySQL StatefulSet is up and the database is running successfully.
In addition, the CronJob has been creating pods every minute and they are
completing successfully.
13. Clean the database and automated tasks from Kubernetes. Clean the resources
with the following command in your Terminal:
kubectl delete -f insert-gold-price.yaml,service.yaml,mysql.yaml
You should see the output shown in the following figure:
Note
Change the zone parameter if your cluster is running in another zone.
This command creates a new node pool named preemptible, autoscaled between a
minimum of 1 node and a maximum of 10 nodes, as shown in the following
figure:
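The node pool creation command itself is not shown above; with gcloud it would look roughly like the following (the cluster name and zone are assumptions, so adjust them to your setup):
gcloud container node-pools create preemptible --cluster <CLUSTER-NAME> --zone us-central1-a --preemptible --enable-autoscaling --num-nodes 1 --min-nodes 1 --max-nodes 10 --node-taints preemptible=true:NoSchedule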
4. Create a CronJob to connect to the backend service every minute. The CronJob
definition should have tolerations to run on preemptible servers.
Create a CronJob definition with the following content inside a file named cronjob.yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: backend-checker
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: checker
            image: appropriate/curl
            args:
            - curl
            - -I
            - backend
          nodeSelector:
            cloud.google.com/gke-nodepool: "preemptible"
          tolerations:
          - key: preemptible
            operator: Equal
            value: "true"
            effect: NoSchedule
          restartPolicy: OnFailure
The file has a CronJob definition that runs the curl -I backend command every
minute. nodeSelector indicates that the scheduler will place the pods on nodes
with the label key cloud.google.com/gke-nodepool and a value of preemptible.
However, since there are taints on the preemptible nodes, tolerations are also
added.
Note
cronjob.yaml is available on GitHub: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson05/Activity5/cronjob.yaml.
Note
Replace <ID> with a pod name from Step 5.
The output shows the log of curl connecting to the nginx instance, as shown in
the following figure:
Note
Change the zone parameter in the command if your cluster is running in another
zone.
This command deletes the cluster from GKE, as shown in the following figure:
Chapter 06: Upcoming Serverless Features in Kubernetes
5. Now that we have a Docker image created and pushed to the registry, navigate
to the GCP console and open the Cloud Run page. Click on the CREATE SERVICE
button to create a new service with the following information:
Container Image URL: gcr.io/<your-gcp-project-id>/clock:v1.0
Deployment platform: Cloud Run (fully managed)
Location: Select any region you prefer from the available options
Service name: clock
Authentication: Allow unauthenticated invocations
6. Click on the CREATE button and you will be navigated to the Service details page:
7. Open the provided URL from the Service details page. For me, this URL is https://clock-awsve2jaoa-uc.a.run.app/, but your URL will be different:
8. We are receiving this error as we have not provided the timezone parameter.
9. Let's invoke the URL again with the timezone parameter: https://clock-awsve2jaoa-uc.a.run.app/?timezone=Europe/London
Chapter 07: Kubernetes Serverless with Kubeless
2. Now, you will receive a six-digit confirmation code to the email that you entered
on the previous page. Enter the received code on the following page:
4. Add a suitable name here. This will be your Slack channel name:
5. Now your Slack channel is ready. Click on See Your Channel in Slack, as shown in
the following screenshot:
6. Now we are going to add an Incoming Webhook app to our Slack workspace. From the left
menu, select Add apps under the Apps section:
7. Enter Incoming Webhooks in the search field and click on Install for the Incoming
Webhook app:
10. Save the webhook URL. We will need this when we are writing the Kubeless function.
11. Now, let's create the function and deploy it. First, we need to create the
requirements.txt file, which specifies the dependencies we need to install for the
function's runtime. These are the additional modules we need in order to run our
function successfully. We will be using the requests package to send the HTTP POST
request to the Slack webhook endpoint:
requests==2.22.0
Activity Solution
1. Create the function in a file named slack.py, as follows:
import json
import requests

def main(event, context):
    webhook_url = 'YOUR_INCOMING_WEBHOOK_URL'
    response = requests.post(
        webhook_url, data=json.dumps(event['data']),
        headers={'Content-Type': 'application/json'}
    )
    if response.status_code == 200:
        return "Your message successfully sent to Slack"
    else:
        # Slack returns the error message in the response body.
        return "Error while sending your message to Slack: " + response.text
2. Deploy the function:
$ kubeless function deploy slack --runtime python3.6 \
--from-file slack.py \
--handler slack.main \
--dependencies requirements.txt
Deploying the function will yield the following output:
We are passing the requirements.txt file that we created in the previous step as a
dependency while deploying the slack function. This will ensure that the Kubeless
runtime contains the required Python packages for function execution.
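The invocation step itself is elided above; with the kubeless CLI, it can be done as follows (the message text is just an example). Since the function posts event['data'] to the webhook, the payload should carry the text field that Slack expects:
$ kubeless function call slack --data '{"text": "Hello from Kubeless!"}'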
4. Go to your Slack workspace and verify that the message was successfully posted
to the Slack channel:
In this activity, we created a Slack workspace and an incoming webhook. Next, we
created and deployed a Kubeless function that can post messages to the Slack channel.
Chapter 08: Introduction to Apache OpenWhisk
2. Once you have signed up to OpenWeather, an API key will be generated
automatically for you. Go to the API keys tab (https://home.openweathermap.org/api_keys)
and save the API key because this key is required to fetch the data from the
OpenWeather API:
Note
It may take a few minutes for your API key to be activated. Wait for a few minutes
and retry if you receive the 'Invalid API key. Please see http://openweathermap.org/faq#error401
for more info.' error while invoking the URL.
5. Go to Settings > API Keys and click on the Create API Key button:
6. Provide a name in the API Key Name field, select the Full Access radio button, and
click on the Create & View button to create an API key with full access:
7. Once the key is generated, copy the API key and save it somewhere safe because
you will see this key only once:
Activity Solution
1. Create the get-weather.js function with the function code provided in step 3.
Replace <OPEN_WEATHER_API_KEY> with the API key saved in step 1.
2. Create the action named getWeather with the get-weather.js function created
in the preceding step and provide the default value of the cityName parameter as
London:
$ wsk action create getWeather get-weather.js --param cityName London
The output should be as follows:
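The intermediate step, testing the action, is elided above; a blocking invocation like the following would verify it (a sketch):
$ wsk action invoke getWeather --blocking --result
This should print the weather data for the default city, London.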
4. Now we can create the action to send emails (we will be using the API key
generated with SendGrid). We will be using the sendgrid module for this function.
First, we need to create a directory to store the function code and the dependencies:
$ mkdir send-email
$ cd send-email
The output should be as follows:
6. Install the sendgrid npm package, which is required for the function:
$ npm install sendgrid --save
The output should be as follows:
7. Create the index.js file with the function code provided in step 4. Replace
<SEND_GRID_API_KEY> with the key that was saved when creating the SendGrid
account. Similarly, replace <TO_EMAIL> with the email address that will receive
the weather data and <FROM_EMAIL> with the email address that will send it.
8. Compress the code with all the dependencies:
$ zip -r send-email.zip *
9. Now we can create an action named sendEmail using send-email.zip:
$ wsk action create sendEmail send-email.zip --kind nodejs:default
The output should be as follows:
Note
Make sure to check your spam folder because the email client might have categorized this as a spam email.
11. Create the format-weather-data.js function with the function code provided in
step 5.
12. Create the action named formatWeatherData with the format-weather-data.js function created in the preceding step:
$ wsk action create formatWeatherData format-weather-data.js
The output should be as follows:
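Steps 13 and 14, which combine the three actions into a sequence and test it, are elided above. With the wsk CLI, the sequence (its name, weatherMailSender, is inferred from the trigger and rule names below) would be created and invoked as follows:
$ wsk action create weatherMailSender --sequence getWeather,formatWeatherData,sendEmail
$ wsk action invoke weatherMailSender --blocking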
15. Check the mail account that you added as <TO_EMAIL> (check the spam folder).
Check the status of email delivery at https://app.sendgrid.com/email_activity.
The output should be as follows:
16. Finally, we need to create the trigger and rule to invoke the sequence every day at
8 AM. First, we will create weatherMailSenderCronTrigger, which will be triggered
daily at 8.00 AM:
$ wsk trigger create weatherMailSenderCronTrigger \
--feed /whisk.system/alarms/alarm \
--param cron "0 8 * * *"
ok: invoked /whisk.system/alarms/alarm with id cf1af9989a7a46a29af9989a7ad6a28c
{
    "activationId": "cf1af9989a7a46a29af9989a7ad6a28c",
    "annotations": [
        {
            "key": "path",
            "value": "whisk.system/alarms/alarm"
        },
        {
            "key": "waitTime",
            "value": 66
        },
        {
            "key": "kind",
            "value": "nodejs:10"
        },
        {
            "key": "timeout",
            "value": false
        },
        {
            "key": "limits",
            "value": {
                "concurrency": 1,
                "logs": 10,
                "memory": 256,
                "timeout": 60000
            }
        }
    ],
    "duration": 162,
    "end": 1565457634929,
    "logs": [],
    "name": "alarm",
    "namespace": "[email protected]_dev",
    "publish": false,
    "response": {
        "result": {
            "status": "success"
        },
        "status": "success",
        "success": true
    },
    "start": 1565457634767,
    "subject": "[email protected]",
    "version": "0.0.152"
}
ok: created trigger weatherMailSenderCronTrigger
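Finally, a rule binds the trigger to the sequence; this step is elided above, but with the wsk CLI it would look as follows (the rule name is an assumption):
$ wsk rule create weatherMailSenderRule weatherMailSenderCronTrigger weatherMailSender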
Once the preceding steps are completed, you should receive an email daily at 8.00 AM
to the specified email address with the weather data for the requested city.
Chapter 09: Going Serverless with OpenFaaS
3. Create a new directory named html inside the contact-form directory to store the
HTML files:
$ mkdir contact-form/html
The output should be as follows:
4. Create the contact-us.html file inside the contact-form/html folder with the code
provided in step 2.
5. Update the handler.py Python file inside the contact-form folder. This Python
function will read the content of the contact-us.html file and return it as the
function response:
import os

def handle(req):
    # Locate the HTML file relative to this handler.
    current_directory = os.path.dirname(__file__)
    html_file_path = os.path.join(current_directory, 'html', 'contact-us.html')
    # Return the file content as the response (a completion of the printed
    # excerpt, following the description above).
    with open(html_file_path) as html_file:
        return html_file.read()
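The build-push-deploy steps for contact-form are elided above; as with form-processor later, faas-cli can perform them in one go (the stack file name is an assumption):
$ faas-cli up -f contact-form.yml
The output ends as follows: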
Deploying: contact-form.
WARNING! Communication is not secure, please consider using HTTPS.
Letsencrypt.org offers free SSL/TLS certificates.
9. Update the handler.py Python file inside the form-processor folder. This Python
function receives the email, name, and message parameters entered into the
Contact Us form, formats the email body, sends the email using SendGrid, and
returns the sending status as the function response.
10. Replace <SEND_GRID_API_KEY> with the SendGrid API key saved in step 1, and
<TO_EMAIL> with the email address to receive the Contact Us form data:
import json
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

def handle(req):
    SENDGRID_API_KEY = '<SEND_GRID_API_KEY>'
    TO_EMAIL = '<TO_EMAIL>'
    EMAIL_SUBJECT = 'New Message from OpenFaaS Contact Form'
    json_req = json.loads(req)
    email = json_req["email"]
    name = json_req["name"]
    message = json_req["message"]
    email_body = '<strong>Name: </strong>' + name + '<br><br> <strong>Email: </strong>' + email + '<br><br> <strong>Message: </strong>' + message
    email_object = Mail(
        from_email=email,
        to_emails=TO_EMAIL,
        subject=EMAIL_SUBJECT,
        html_content=email_body)
    try:
        sg = SendGridAPIClient(SENDGRID_API_KEY)
        response = sg.send(email_object)
        sendingStatus = "Message sent successfully"
    except Exception as e:
        sendingStatus = "Message sending failed"
    return sendingStatus
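The build, push, and deploy steps for form-processor are elided above; with faas-cli they typically reduce to a single command against the stack file (the file name is an assumption), the tail end of whose output is shown next:
$ faas-cli up -f form-processor.yml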
sha256:c700592a3a7f16875c2895dbfa41bd269631780d9195290141c245bec93a2257
size: 4286
[0] < Pushing form-processor [sathsarasa/form-processor:latest] done.
[0] worker done.
Deploying: form-processor.
WARNING! Communication is not secure, please consider using HTTPS.
Letsencrypt.org offers free SSL/TLS certificates.
15. Fill in the form and then submit the form, as shown in the following figure:
16. Check the email account you provided as <TO_EMAIL> in step 9 to verify the email
delivery: