Practical Spring Cloud Function: Developing Cloud-Native Functions for Multi-Cloud and Hybrid-Cloud Environments
Banu Parasuraman
Frisco, TX, USA
About the Author
Banu Parasuraman is a cloud-native
technologist and a Customer Success Manager
(CSM) at IBM, with over 30 years of experience
in the IT industry. He provides expert advice
to clients who are looking to move to the cloud
or implement cloud-native platforms such
as Kubernetes, Cloud Foundry, and the like.
He has engaged over 25 select companies
spread across different sectors (including
retail, healthcare, logistics, banking, manufacturing, automotive, oil
and gas, pharmaceuticals, and media and entertainment) in the United
States, Europe, and Asia. He is experienced in most of the popular cloud
platforms, including VMware VCF, Pivotal PCF, IBM OCP, Google GCP,
Amazon AWS, and Microsoft Azure. Banu has taken part in external
speaking engagements targeted at CXOs and engineers, including at
VMworld, SpringOne, Spring Days, and Spring Developer Forum Meetups.
His internal speaking engagements include developer workshops on
cloud-native architecture and development, customer workshops on
Pivotal Cloud Foundry, and enabling cloud-native sales plays and
strategies for sales teams. Lastly, Banu has written numerous blog posts
on platforms such as Medium and LinkedIn, where he promotes the
adoption of cloud-native architecture.
About the Technical Reviewer
Manuel Jordan Elera is an autodidactic
developer and researcher who enjoys learning
new technologies for his own experiments and
creating new integrations. Manuel won the
Springy Award Community Champion and
Spring Champion in 2013. In his little free time,
he reads the Bible and composes music on his
guitar. Manuel is known as dr_pompeii. He
has tech-reviewed numerous books, including
Pro Spring MVC with Webflux (Apress, 2020),
Pro Spring Boot 2 (Apress, 2019), Rapid Java Persistence and Microservices
(Apress, 2019), Java Language Features (Apress, 2018), Spring Boot 2
Recipes (Apress, 2018), and Java APIs, Extensions, and Libraries (Apress,
2018). You can read his detailed tutorials on Spring technologies and
contact him through his blog at www.manueljordanelera.blogspot.com.
You can follow Manuel on his Twitter account, @dr_pompeii.
Acknowledgments
It has been a great privilege to write this book and help you understand
real-world implementations of Spring Cloud Function. Thank you for
reading it.
After my presentation at SpringOne 2020, I received a message on
LinkedIn from Steve Anglin at Apress. Steve asked me if I would be willing
to write a book about Spring Cloud Function. I was a bit hesitant at first,
given that I was occupied with many client engagements, which were
taking up most of my work hours. I was worried that I would not do the
subject justice, due to my preoccupation with my clients. But after a long
contemplation and a heartfelt discussion with my family, I decided to
take it on.
I want to thank Steve Anglin, Associate Editorial Director, for reaching
out to me and providing me this opportunity to write a book on Spring
Cloud Function.
Mark Powers, the Editorial Operations Manager, was instrumental
in helping me bring this book to a close. With his incessant prodding and
nudging, he helped me reach the finish line. Thanks, Mark.
Manuel Jordan, the technical reviewer, was immensely helpful. His
comments kept me honest and prevented me from cutting corners. He
helped improve the quality of the solutions that I present in this book.
Thanks, Manuel.
I also want to thank Nirmal Selvaraj and others at Apress, who worked
to bring this book to fruition.
This book would not have been possible without the help of my wife Vijaya
and daughters Pooja and Deepika, who provided the much-needed emotional
support through this journey.
Introduction
I entered the field of Information Technology (IT) 25 years ago, after
spending time in sales and marketing. I was an average programmer and
was never into hardcore programming. During my early life in IT, I worked
as part of a team that built a baseball simulator for the Detroit Tigers. I
helped build a video capture driver for that simulator using C++. Even
though this was a great project with a lot of visibility, being a hardcore
programmer was never my real passion.
I soon gravitated toward solution architecture, which seemed to
perfectly tie my marketing skills to my technology skills. I began looking
at solutions through a marketing lens. This approach formed the basis for
writing this book, because what good is a technology if we do not know
how to apply it in real life?
Functional programming was an emerging technology. Cloud
providers such as AWS, Google, and Azure created serverless
environments, with innovations such as Firecracker virtualization,
that allowed infrastructure to scale down to zero. This allowed
customers to derive huge cost savings by not paying for resources that were
not in use and by subscribing to a pay-per-use model.
Initially, functions that ran on these serverless environments were
built on either Node.js or Python, and they were vendor-specific. Spring.io
developed the Spring Cloud Function framework, which allows functions
to run in a cloud-agnostic environment. The focus was on the "write once,
deploy anywhere" concept. This was a game changer in the cloud
functions world.
CHAPTER 1
Why Use Spring Cloud Function
compute time. They can also be scaled to fit exact demand, because billing
focuses on the number of invocations rather than on uptime.
FaaS has two parts, as shown in Figure 1-1.
1.1.1. Implementation of an Enterprise Application
Imagine all the infrastructure needed to run a single payroll application
on the cloud. This application may consume only 16GB of RAM and eight
vCPUs, but you are charged continuously for the entire duration that the
application is active. Using a simple AWS pricing formula, this works out
to around $1,000 per year. This cost accrues for the whole time the application
is active.
If you calculate the billing cost using an AWS cost calculator, you see
that you will spend $1 million per year. This spend is for applications
that are critical and highly utilized, as well as for applications that are
minimally utilized. This cost is due to the fact that the cloud providers
charge for the entire duration the application is active and consuming
the infrastructure. The key here is that the infrastructure is fully allocated
for the application’s life. Imagine how much you could save if the
infrastructure was allocated only for the duration that the application
was active and serving. This would be a great cost and resource saving
approach. Cloud providers have thought through this because they also
faced the pressure of finite infrastructure and considered the time needed
to provision additional infrastructure.
come up and another 15 minutes for the application to come down. This
would not fly. Imagine if you deployed this application to an AWS Lambda
serverless function. Thirty minutes of downtime for each invocation?
That will not work; your users would abandon the application entirely. As
you can see, large applications cannot benefit from resource release and
resource reassignment, because they take a long time to start up and shut
down, which would impact the general operation of the application.
Can you make this large payroll application behave in an expected way
for serverless functions? The answer is yes. A lot of refactoring is required,
but it can be done.
You can see that the code and the containers both differ by provider
and are not easily portable.
In the discussions so far, you have seen the following issues related
to FaaS:
• Portability of code
This is where Spring Cloud Function comes in. The Spring.io team
started the Spring Cloud Function project with the following goals
(source: https://spring.io/projects/spring-cloud-function).
The key goals are decoupling from a specific runtime and supporting a
uniform programming model across serverless providers.
Here's how these goals are achieved. Figures 1-4 and 1-5 show how you
can deploy Spring Cloud Function. When Spring Cloud Function is bundled
with provider-specific libraries, it can be deployed to AWS Lambda, Google
Cloud Functions, or Azure Functions.
1.4.2. What Is Knative?
Knative is an extension of Kubernetes that enables serverless workloads
to run on Kubernetes clusters. Working with Kubernetes is a pain; the
amount of tooling required to help developers move their code from
the IDE to Kubernetes defeats the agility that Kubernetes professes to
bring to the environment. Knative automates the process of building
packages and deploying them to Kubernetes by providing operators that are
native to Kubernetes. Hence the name: "K" plus "Native".
Knative has two main components: Serving, which runs and autoscales
stateless workloads, and Eventing, which routes events between producers
and consumers.
Using this use case, the following sections explore how you can
leverage Spring Cloud Function to modernize and transform this
application into a highly efficient, scalable, and portable system.
Abbreviations: ROSA: Red Hat OpenShift Service on AWS; ARO: Azure Red Hat
OpenShift; EKS: Elastic Kubernetes Service; AKS: Azure Kubernetes Service;
GKE: Google Kubernetes Engine
Now you need to deploy the payroll system on AWS Lambda. The
deployment sequence is important: you need to deploy SAP ECC and
Oracle before integrating, and then configure the API and messaging for
the Spring Cloud Function to integrate with SAP. The Spring Cloud Function
can be created and tested with dummy information, but it must be deployed
after integration testing with SAP ECC:
Figure 1-18. Four-node VxRail P570F cluster for vSphere with Tanzu
and HAProxy
1.7. Summary
This chapter discussed FaaS environments, Spring Cloud Function, and
Knative. You saw that FaaS containers/environments provided by AWS,
Google, or Microsoft Azure are not portable: the underlying components
that host the FaaS environment do not share the same architecture, which
makes it difficult to move or migrate FaaS containers between cloud
providers. You also saw that Spring Cloud Function can abstract away the
provider-dependent AWS and Google libraries and provide a portable alternative.
Spring Cloud Function on Knative can improve developer productivity by
“writing once and deploying anywhere.” You saw how to apply Spring Cloud
Function and Knative to an enterprise payroll application and learned about
the various implementation approaches. The next chapter walks through the
deployments step-by-step. You will also see how to develop and deploy code
to various targets, such as AWS, GCP, Azure, OpenShift, and VMware Tanzu.
This will help you understand the power of Spring Cloud Function.
CHAPTER 2
Getting Started with Spring Cloud Function
Note that the code remains the same in all environments; the only
change you make is to the pom.xml file. There is also an introduction to
a FunctionInvoker<Message<String>, String> class for Azure Functions;
the Azure build looks for this class during the build process.
Another difference you will notice with Azure is that you have to use JDK
11 or above to get the Azure templates in the IDE.
The common tasks you will perform for Knative and Kubernetes
environments such as EKS, AKS, GKE, OpenShift, and Tanzu Kubernetes Grid
are the following:
Prerequisites:
• Maven or Gradle
As you can see from the Added Dependencies section in Figure 2-2, I
also included an H2 database. You can use the database of your choice.
Upon clicking the Finish button, the IDE creates the code bits and
takes you to the project screen. Alternatively, you can bring the code down
from GitHub by cloning the project.
Listing 2-1 shows the pom.xml file with Maven dependencies.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-rest</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-web</artifactId>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>runtime</scope>
</dependency>
spring.cloud.function.definition=employeeConsumer
spring.datasource.url=jdbc:h2:mem:employee
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.jpa.hibernate.ddl-auto=create
spring.h2.console.enabled=true
package com.kubeforce.payroll;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.function.context.FunctionalSpringApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class PayrollAwsApplication {

    public static void main(String[] args) {
        // For local runs, bootstrap with SpringApplication,
        // as discussed later in this section
        SpringApplication.run(PayrollAwsApplication.class, args);
    }

    @Bean
    public EmployeeConsumer employeeConsumer() {
        return new EmployeeConsumer();
    }

    @Bean
    public EmployeeSupplier employeeSupplier() {
        return new EmployeeSupplier();
    }
}
Once the scaffolding and code bits are completed, you are ready for the
next step, which is to create the model for your employee.
package com.kubeforce.payroll;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "employee")
public class Employee {

    @Id
    @GeneratedValue(generator = "UUID")
    private Long id;

    // Fields inferred from the Employee constructor call in EmployeeConsumer
    private String name;
    private Integer employeeIdentifier;
    private String email;
    private String salary;

    public Employee() {
    }

    public Employee(String name, Integer employeeIdentifier, String email, String salary) {
        this.name = name;
        this.employeeIdentifier = employeeIdentifier;
        this.email = email;
        this.salary = salary;
    }

    // Getters and setters omitted
}
package com.kubeforce.payroll;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.Map;
import java.util.function.Consumer;

// Class declaration and logger reconstructed to match the
// EmployeeSupplier listing that follows
@Component
public class EmployeeConsumer implements Consumer<Map<String, String>> {

    public static final Logger LOGGER = LoggerFactory.getLogger(EmployeeConsumer.class);

    @Autowired
    private EmployeeRepository employeeRepository;

    @Override
    public void accept(Map<String, String> map) {
        LOGGER.info("Creating the employee {}", map);
        Employee employee = new Employee(map.get("name"),
                Integer.parseInt(map.get("employeeIdentifier")),
                map.get("email"), map.get("salary"));
        employeeRepository.save(employee);
    }
}
package com.kubeforce.payroll;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.function.Supplier;

@Component
public class EmployeeSupplier implements Supplier<Employee> {

    public static final Logger LOGGER = LoggerFactory.getLogger(EmployeeSupplier.class);

    @Autowired
    private EmployeeRepository employeeRepository;

    @Override
    public Employee get() {
        List<Employee> employees = employeeRepository.findAll();
        LOGGER.info("Getting the employee of our choice - {}", employees);
        return employees.get(0);
    }
}
package com.kubeforce.payroll;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.Optional;
import java.util.function.Function;

// Class name is assumed; the function looks up an employee by ID
@Component
public class EmployeeFunction implements Function<Long, Employee> {

    @Autowired
    private EmployeeRepository employeeRepository;

    @Override
    public Employee apply(Long s) {
        Optional<Employee> employeeOptional = employeeRepository.findById(s);
        if (employeeOptional.isPresent()) {
            return employeeOptional.get();
        }
        return null;
    }
}
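The three classes above depend on an EmployeeRepository; a minimal sketch consistent with the calls they make (save, findAll, and findById) is a standard Spring Data JPA repository:

package com.kubeforce.payroll;

import org.springframework.data.jpa.repository.JpaRepository;

// Spring Data JPA generates the implementation at runtime
public interface EmployeeRepository extends JpaRepository<Employee, Long> {
}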
Once you have developed your code, you can run the function locally.
The key step to run it locally is to bootstrap the main class with

SpringApplication.run(PayrollApplication.class, args)

rather than

FunctionalSpringApplication.run(PayrollApplication.class, args)

This allows the function to be tested by tools such as Postman and curl.
{
    "name": "xxx",
    "employeeIdentifier": "2",
    "email": "[email protected]",
    "salary": "1000"
}
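For example, assuming the application runs on the default port 8080 and spring-cloud-function-web exposes the function under its bean name, a curl invocation might look like this (the URL path and payload values are illustrative):

curl -X POST http://localhost:8080/employeeConsumer \
  -H "Content-Type: application/json" \
  -d '{"name":"xxx","employeeIdentifier":"2","email":"[email protected]","salary":"1000"}'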
You can test all three interfaces here, but when you go to Lambda
or other environments, you will be restricted to just one interface.
You can also verify if the data has been added by going to the H2
console, as shown in Figure 2-5.
This completes the steps for creating, running, and testing the code of
a local function.
In the following sections, you’ll see how to use the same code with
little to no changes and deploy it to a serverless function container of
your choice.
• AWS account
Step 1: First follow Steps 1-6 outlined in Section 2.1. The code remains the
same for any serverless offerings from the cloud providers.
The change introduced at the code level will be in the main Spring
Boot class called PayrollApplication. You will change this:
SpringApplication.run(PayrollApplication.class, args )
to this:
FunctionalSpringApplication.run(PayrollApplication.class, args)
package com.kubeforce.payrollaws;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.function.context.FunctionalSpringApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class PayrollAwsApplication {

    public static void main(String[] args) {
        // For AWS Lambda, bootstrap with FunctionalSpringApplication,
        // per the change described above
        FunctionalSpringApplication.run(PayrollAwsApplication.class, args);
    }

    @Bean
    public EmployeeConsumer employeeConsumer() {
        return new EmployeeConsumer();
    }

    @Bean
    public EmployeeSupplier employeeSupplier() {
        return new EmployeeSupplier();
    }
}
The next step is to change the pom.xml file to add the AWS Lambda
adapter.
Step 2: Change POM to include AWS dependencies and plugins, as shown
in Listing 2-9.
<properties>
    <spring-cloud.version>2021.0.1</spring-cloud.version>
    <aws-lambda-events.version>3.9.0</aws-lambda-events.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-function-web</artifactId>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-core</artifactId>
        <version>1.2.1</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-function-adapter-aws</artifactId>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-events</artifactId>
        <version>3.9.0</version>
    </dependency>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.springdoc</groupId>
        <artifactId>springdoc-openapi-ui</artifactId>
        <version>1.6.11</version>
    </dependency>
    <dependency>
        <groupId>org.springdoc</groupId>
        <artifactId>springdoc-openapi-webflux-ui</artifactId>
        <version>1.6.11</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-deploy-plugin</artifactId>
            <configuration>
                <skip>true</skip>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <dependencies>
                <dependency>
                    <groupId>org.springframework.boot.experimental</groupId>
                    <artifactId>spring-boot-thin-layout</artifactId>
                    <version>1.0.28.RELEASE</version>
                </dependency>
            </dependencies>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.2.4</version>
            <configuration>
                <createDependencyReducedPom>false</createDependencyReducedPom>
                <shadedArtifactAttached>true</shadedArtifactAttached>
                <shadedClassifierName>aws</shadedClassifierName>
            </configuration>
        </plugin>
    </plugins>
</build>
</project>
Use the Upload From button, as shown in Figure 2-9, to access the
choices for uploading a file.
Figure 2-8. Upload the JAR file into the AWS Lambda dashboard
Set the handler as shown. The Lambda function handler is the method
in your function code that processes events; it indicates that the
Spring Cloud Function adapter will be invoked when the function is called.
I used Java 11; refer to the pom.xml.
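For reference, with the spring-cloud-function-adapter-aws dependency, the generic handler that recent versions of Spring Cloud Function provide is typically:

org.springframework.cloud.function.adapter.aws.FunctionInvoker::handleRequest

The function to invoke (employeeConsumer in this example) is then resolved from the spring.cloud.function.definition property.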
Once you click Save, you can start your testing process.
Step 5: Test the function. The AWS Portal allows developers to run the
tests against the function. Clicking the Test tab will take you to the testing
dashboard, as shown in Figure 2-10.
Figure 2-12 shows you the details of the test. Note that the test is
successful.
• Google account
Step 1: Follow Steps 1-6 outlined in Section 2.1. This step is the same as
the one you completed for AWS Lambda. The code is very similar to the AWS
code. See Listing 2-10.
package com.kubeforce.payrollgcp;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.function.context.FunctionalSpringApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class PayrollGcpApplication {

    public static void main(String[] args) {
        FunctionalSpringApplication.run(PayrollGcpApplication.class, args);
    }

    @Bean
    public EmployeeConsumer employeeConsumer() {
        return new EmployeeConsumer();
    }

    @Bean
    public EmployeeSupplier exampleSupplier() {
        return new EmployeeSupplier();
    }
}
<properties>
    <java.version>11</java.version>
    <spring-cloud-gcp.version>3.1.0</spring-cloud-gcp.version>
    <spring-cloud.version>2021.0.1</spring-cloud.version>
    <spring-cloud-function.version>4.0.0-SNAPSHOT</spring-cloud-function.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>com.google.cloud</groupId>
        <artifactId>spring-cloud-gcp-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-function-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-function-adapter-gcp</artifactId>
    </dependency>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>com.google.cloud</groupId>
            <artifactId>spring-cloud-gcp-dependencies</artifactId>
            <version>${spring-cloud-gcp.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<build>
    <plugins>
        <plugin>
            <artifactId>maven-deploy-plugin</artifactId>
            <configuration>
                <skip>true</skip>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <outputDirectory>target/deploy</outputDirectory>
            </configuration>
            <dependencies>
                <dependency>
                    <groupId>org.springframework.cloud</groupId>
                    <artifactId>spring-cloud-function-adapter-gcp</artifactId>
                    <version>3.2.2</version>
                </dependency>
            </dependencies>
        </plugin>
        <plugin>
            <groupId>com.google.cloud.functions</groupId>
            <artifactId>function-maven-plugin</artifactId>
            <version>0.9.1</version>
            <configuration>
                <functionTarget>org.springframework.cloud.function.adapter.gcp.GcfJarLauncher</functionTarget>
                <port>8080</port>
            </configuration>
        </plugin>
    </plugins>
</build>
</project>
Step 3: Build, package, and deploy to Google Cloud Functions. The Google
Cloud website for this activity is a bit clumsy: you have to zip up your entire
directory and upload it, which may not be allowed in some enterprises. I
therefore reverted to using the CLI to accomplish this task. See Figure 2-13.
This command must be run from the project root folder:
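A representative version of that command, following the Spring Cloud Function GCP adapter documentation (the function name payroll-gcp is an assumed example):

gcloud functions deploy payroll-gcp \
  --entry-point org.springframework.cloud.function.adapter.gcp.GcfJarLauncher \
  --runtime java11 \
  --trigger-http \
  --source target/deploy \
  --memory 512MB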
• Azure account
package com.kubeforce.payrollazure;

import com.microsoft.azure.functions.*;
import com.microsoft.azure.functions.annotation.*;
import org.springframework.cloud.function.adapter.azure.FunctionInvoker;

import java.util.Map;
import java.util.Optional;

// Class name, generic types, and method body are assumptions; the handler
// extends the adapter's FunctionInvoker so Azure delegates to the Spring function
public class EmployeeConsumerHandler extends FunctionInvoker<Map<String, String>, Void> {

    @FunctionName("employeeConsumer")
    public String execute(
            @HttpTrigger(name = "request",
                    methods = {HttpMethod.GET, HttpMethod.POST},
                    authLevel = AuthorizationLevel.ANONYMOUS)
            HttpRequestMessage<Optional<Map>> request,
            ExecutionContext context) {
        // Hand the request body to the employeeConsumer function
        handleRequest((Map<String, String>) request.getBody().orElse(null), context);
        return "Employee created";
    }
}
Step 2: Build and package the function. IntelliJ offers you the ability
to execute Maven commands through a Maven window, as shown in
Figure 2-16.
You have to run the maven clean command before you run the maven
package command; both commands are under the Lifecycle menu in
the Maven window. Run maven package to get the deployable JAR file.
The JAR files are stored in the target folder in your project directory.
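If you prefer a terminal over the Maven window, the equivalent is simply:

mvn clean package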
Step 3: Deploy the function in Azure. You will see some artifacts created in
the target folder. Payroll-azure-0.0.1-SNAPSHOT.jar is the file you are
interested in. This file needs to be deployed to the Azure Cloud.
IntelliJ has a feature/plugin for Azure that you can enable. Once you
enable it, you can perform Azure Functions-based activities from
the IDE.
In this case, to start the deploy process, choose Run Azure Functions ➤
Deploy from the Azure Functions list, as shown in Figure 2-18.
After the function is successfully deployed to the Azure Cloud, you can
use the URL provided to run tests in Postman or proceed to the Azure
Functions console to test.
The Azure Functions console is shown in Figure 2-19.
You will see that the function is deployed in Azure and is running.
Step 4: Test. Clicking the employeeConsumer function will take you to the
detail console, as shown in Figure 2-20. Here, you can conduct tests by
clicking the Code + Test link. The dashboard has an Input section, where
you can specify the JSON in the body, choose the HTTP method as POST
(in this example), and click the Run button. You can see the results of the
execution in the command console, as shown in Figure 2-21.
• Knative serving
1. Install Docker.
# (beginning of this install command reconstructed from the standard Docker install steps)
$sudo apt-get install -y apt-transport-https ca-certificates curl \
software-properties-common
$curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
$sudo apt-get update
$sudo apt-get install -y docker-ce docker-ce-cli containerd.io
$sudo usermod -aG docker $USER
Once you run these commands, you will have a running Docker
instance.
2. Install Kubectl.
3. Install KIND.
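Representative install commands for these two tools, from the kubernetes.io and kind.sigs.k8s.io documentation (the KIND version shown is an example):

# Install kubectl
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Install KIND
$ curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-amd64
$ chmod +x ./kind
$ sudo mv ./kind /usr/local/bin/kind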
Now you have the KIND bits that will allow you to install the cluster
and deploy Knative.
Step 2: Configure Knative. In order to configure Knative, you need a KIND
cluster. You will create a cluster with a custom configuration.
Create the cluster using a configuration file called clusterconfig.yaml.
Note that the name of the cluster is "knative". You can name it differently,
but you have to use that cluster to deploy Knative; see Listing 2-14.
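A minimal sketch of such a clusterconfig.yaml, assuming the Kourier NodePorts used in the service definition that follows (31080 and 31443) are mapped to the host:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: knative
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 31080
        hostPort: 80
      - containerPort: 31443
        hostPort: 443

You would then create the cluster with:

$ kind create cluster --config clusterconfig.yaml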
apiVersion: v1
kind: Service
metadata:
  name: kourier
  namespace: kourier-system
  labels:
    networking.knative.dev/ingress-provider: kourier
spec:
  ports:
    - name: http2
      port: 80
      protocol: TCP
      targetPort: 8080
      nodePort: 31080
    - name: https
      port: 443
      protocol: TCP
      targetPort: 8443
      nodePort: 31443
  selector:
    app: 3scale-kourier-gateway
  type: NodePort
4. Install Kourier.
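A representative pair of commands for this step, per the Knative and net-kourier documentation (the release version is an example):

$ kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.7.0/kourier.yaml
$ kubectl patch configmap/config-network --namespace knative-serving \
    --type merge --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'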
You can see in Figure 2-24 that all the components are up and running.
You can go to the next step of publishing your Spring Cloud Function app
on Knative.
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Step 4: Deploy the app to Knative. You need to create a YAML file for the
service. Notice the image that is used: I pushed a Docker image
tagged main that exposes employeeSupplier. You will see that I use
a different image when pushing to other cloud providers. This is to get
you acquainted with pushing different images with different exposed
endpoints. See Listing 2-17.
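A sketch of that service YAML, assuming the image lives in the same Docker Hub repository used later in this chapter (docker.io/banupkubeforce/springcloudfunctions) with the tag main:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: payroll
spec:
  template:
    spec:
      containers:
        - image: docker.io/banupkubeforce/springcloudfunctions:main
          ports:
            - containerPort: 8080

Apply it with kubectl apply -f payroll.yaml.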
A YAML execution gives you more control over the target environment.
Note this URL, as it is required for the testing step. The URL shown here for
example is https://payroll.default.127.0.0.1.sslip.io.
You can run the following command to get the URL and check if the
endpoint is ready for testing.
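With the kn CLI, for example:

$ kn service list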
Step 5: Test. Since employeeSupplier queries the database and gets the
records, you need to use a GET operation. See Figure 2-26.
Step 1: Set up a Kubernetes cluster with EKS. Before you run the command
in Listing 2-18, ensure that you are properly connected to AWS and
have the subscriptions and permissions to create a cluster. Additional
information can be found at https://docs.aws.amazon.com/eks/latest/
userguide/getting-started.html.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: payroll-clstr
  region: us-east-2
nodeGroups:
  - name: payroll-nodes
    instanceType: m5.large
    desiredCapacity: 3
    volumeSize: 80
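Assuming the configuration above is saved as cluster.yaml, the cluster is created with:

$ eksctl create cluster -f cluster.yaml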
The process takes 10 to 15 minutes. At the end of the process, you will
have created a cluster in EKS. You can verify it in the AWS EKS console in
the cloud. See Figure 2-27.
You will see that the dashboard shows the name of the cluster you
provided in Listing 2-18 and the three nodes set as desiredCapacity in
the listing.
Verify that the cluster is up and running (see Figure 2-28). You can
do that by running the following command:
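A typical check, once your kubeconfig points at the new cluster:

$ kubectl get nodes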
Step 2: Configure Knative on EKS. In this step, you configure Knative on the
EKS cluster you created. See Listing 2-19.
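A representative version of those commands, per the Knative installation documentation (the release version is an example):

$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.0/serving-crds.yaml
$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.0/serving-core.yaml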
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Step 4: Push to Docker Hub. The next step is to push the Docker image
to the Docker Hub repository, as shown in Figure 2-30. Be sure to log in
to Docker Hub at https://hub.docker.com/ and create a repository and
namespace. You will need them for the Docker push.
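The build and push follow the usual Docker workflow; a sketch using the repository and tag referenced in Listing 2-21:

$ docker build -t banupkubeforce/springcloudfunctions:payrollv2 .
$ docker login
$ docker push banupkubeforce/springcloudfunctions:payrollv2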
Step 5: Deploy the app to Knative on EKS. In this step, you create a YAML
file and set the location of the image that you will deploy to Knative.
Create the payroll.yaml file and set up the image to be deployed, as
shown in Listing 2-21.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: payroll
spec:
  template:
    spec:
      containers:
        - image: docker.io/banupkubeforce/springcloudfunctions:payrollv2
          ports:
            - containerPort: 80
          env:
            - name: TARGET
              value: "employeeconsumer"
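To deploy, apply the YAML (or use the kn CLI, whose output the next figure shows):

$ kubectl apply -f payroll.yaml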
Figure 2-32 shows the result of running the kn CLI. A YAML execution
gives you more control over the target environment. Note the URL, as it is
required for the testing step. The URL shown here for example is
https://payroll.default.13.58.221.247.sslip.io.
You can run the following command to get the URL and check if the
endpoint is ready for testing:
$ kn service list
Prerequisites:
• kubectl (https://kubernetes.io/docs/tasks/tools/)
Step 1: Set up a Kubernetes cluster with GKE. In this step, you create a GKE
cluster. Make sure you have sufficient permissions to create the cluster.
Additional information can be found at https://cloud.google.com/
kubernetes-engine/docs/deploy-app-cluster.
Before running the command, ensure that you have logged in to GCP
using the gcloud CLI. Run the command in Listing 2-22.
# The beginning of this command (cluster name and zone) is assumed
$ gcloud container clusters create payroll-clstr \
    --zone=us-central1-c \
    --scopes=service-control,service-management,compute-rw,storage-ro,cloud-platform,logging-write,monitoring-write,pubsub,datastore \
    --num-nodes=3
Step 2: Configure Knative on GKE. This step is similar to what you did
earlier with EKS or local Kubernetes:
Now that you have Knative configured and running, you can proceed
to the next step of pushing the app image to Knative.
Step 3: Containerize the app with Docker and push it to a repository
(optional).
This is an optional step, as you already deployed an image in Section
2.5. You can skip this step and go to Step 4. I used JDK 8 here, but you can
use the latest JDK by changing the FROM statement to FROM adoptopenjdk/
openjdk11:latest, as shown in Listing 2-23.
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Step 4: Push to Docker Hub. The next step is to push the Docker
image to the Docker Hub repository, as shown in Figure 2-40. Make sure to
log in to Docker Hub at https://hub.docker.com/ and create a repository
and namespace. You will need them for the Docker push.
Figure 2-42 shows the results of running the kn CLI. A YAML execution
gives you more control over the target environment. Note the URL, as it is
required for the testing step. The URL shown here for example is
https://payroll.default.34.69.156.24.sslip.io.
Step 6: Test. Use the DNS name provided in Step 5 to test against the
following URL:
https://payroll.default.34.69.156.24.sslip.io/employeeConsumer
Now you have successfully deployed the payroll app on GKE with
Knative.
• kubectl (https://kubernetes.io/docs/tasks/tools/)
Step 1: Set up a Kubernetes cluster with AKS. In this step, you create an
AKS cluster. Make sure you have sufficient permissions to create the
cluster. Additional information can be found at https://docs.microsoft.
com/en-us/azure/aks/learn/quick-kubernetes-deploy-cli.
Before running the following command, ensure that you have logged
in to Azure using the Azure CLI. Then run the command in Listing 2-24.
Configure your kubeconfig to use the AKS cluster that you created by
running the command in Listing 2-25.
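Representative versions of these two commands with the Azure CLI (the resource group and cluster names are assumed examples):

# Create the cluster
$ az group create --name payroll-rg --location eastus
$ az aks create --resource-group payroll-rg --name payroll-clstr \
    --node-count 3 --generate-ssh-keys

# Point kubectl at the new cluster
$ az aks get-credentials --resource-group payroll-rg --name payroll-clstr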
Step 2: Configure Knative on AKS. This step is similar to what you did
earlier with EKS or local Kubernetes.
Install Knative serving:
Now that you have Knative configured and running, you can
proceed to the next step of pushing the app image to Knative.
Step 3: Containerize the app with Docker and push it to a repository.
Follow Step 3 in Sections 2.5, 2.6, or 2.7 if you choose to create and deploy
an image to the Docker Hub repository.
Step 4: Deploy the app to Knative on AKS. Here again, I use the Knative CLI
kn to deploy the app, as it is convenient for this simple exercise.
Run the command in Listing 2-28.
Note the URL that is generated. You will use this URL for testing.
Step 5: Test. Use the DNS name provided in Step 4 to test against the
following URL:
https://payroll.default.20.121.248.sslip.io/employeeConsumer
• kubectl (https://kubernetes.io/docs/tasks/tools/)
Here are the steps to configure and deploy Spring Cloud Function on
VMware Tanzu:
Step 2: Create a cluster for your app. Here you create a cluster for the
payroll example. Run the following command:
You will get a message that the payroll cluster has been created, as
shown in Figure 2-49.
Step 3: Install Knative on the TKG cluster. Run the following command to
install knative-serving on the TKG cluster:
Step 5: Test. Follow Section 2.5, Step 5 and use Postman or curl to
test against the http://payroll.default.127.0.0.241.nip.io/employeeConsumer URL.
This completes the successful deployment to Tanzu Kubernetes
Grid (TKG).
2.10. Summary
This chapter explained how Spring Cloud Function can be developed and
deployed to various target platforms, including AWS Lambda, Google
Cloud Functions, Azure Functions, and Kubernetes environments such as
AWS EKS, Google's GKE, Azure AKS, VMware Tanzu, and the like. With
Kubernetes, you were able to use Knative to deploy the same image to
all the other Kubernetes flavors without changing the code. Knative
ensures that the code is portable across any Kubernetes platform. This is
critical, as it ensures movement to any cloud with minimal disruption. To
summarize, you should now have a good understanding of how to build
serverless functions with Spring Cloud Function. The next chapter looks at
how to automate the deployment process for Spring Cloud Function.
CHAPTER 3
CI/CD with Spring Cloud Function
3.1. GitHub Actions
This is a CI/CD platform tightly integrated with GitHub; it allows you to
create and trigger workflows from GitHub. So, if you are a fan of GitHub,
you will really like this feature. When you sign up for GitHub, GitHub
Actions is automatically integrated into your project, so you do not
have to use a separate tool like Jenkins or CircleCI. Of course, this means
that you are restricted to GitHub as your code repository. Creating a
workflow is quite straightforward. You can create a workflow directly on
the GitHub website by navigating to your project and clicking the New
Workflow button in the Actions tab, as shown in Figure 3-1.
Upon clicking New Workflow, as shown in Figure 3-2, you will be taken
to the workflow “marketplace,” where you can choose from the suggested
flows or set up a workflow yourself. Click the Set Up a Workflow Yourself
link to start creating a custom workflow.
As you can see in Figure 3-3, the workflow points to a main.yaml file
that is created in the .github/workflows directory under your root
project folder. You can also create the same file in your IDE, and it will show
up under the Actions tab once you commit the code. Listing 3-1 shows the
sample code created for AWS Lambda.
name: CI
on:
  push:
    branches: [ "master" ]
jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          java-version: '8'
          distribution: 'temurin'
          cache: maven
      - uses: aws-actions/setup-sam@v1
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Build with Maven
        run: mvn -B package --file pom.xml
      # sam package
      - run: sam package --template-file template.yaml --output-template-file packaged.yaml --s3-bucket payrollbucket
      # sam deploy
      - run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --stack-name payroll-aws --s3-bucket payrollbucket --capabilities CAPABILITY_IAM --region us-east-2
This sets up the GitHub Actions workflow and the triggers. Now,
every time you commit or push code to this project repository in GitHub,
this workflow will execute. Figure 3-5 shows a sample execution of this
workflow.
3.2. ArgoCD
While GitHub Actions can push code to serverless environments such as
Lambda, it lacks a good graphical representation of deployed code when
it comes to the Kubernetes environment. Kubernetes is an orchestrator of
containers and has a plethora of services that manage deployments. ArgoCD
was created for Kubernetes. ArgoCD is a declarative CD (Continuous
Delivery) tool, which means application definitions, configurations,
and environments can be version controlled. ArgoCD, similar to GitHub
Actions, uses Git repositories as a single source of truth. This is also known
as GitOps.
Declarative means configuration is guaranteed by a set of
facts instead of by a set of instructions.
Declarative GitOps allows the programmers who created
the application to control the configuration of the environment in which
the application will run. This means the programmers do not have to
rely on different teams, such as infrastructure or DevOps teams, to manage
the pieces of the application. The programmers are in control, and this is a
good thing.
ArgoCD setup is mostly programmatic and relies on the underlying
Kubernetes ConfigMaps. This, as you can see, is different from other tools
like Jenkins.
Here is how I set up the ArgoCD environment.
Prerequisites:
• A Kubernetes cluster
Now you can see that the ArgoCD services are up and running. Notice
that an external IP has not been associated with service/argocd-server.
Run the following command to attach a LoadBalancer to the
argocd-server:
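Per the ArgoCD getting-started documentation, that command is typically:

$ kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'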
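ArgoCD stores the initial admin password in a Kubernetes secret; a typical way to retrieve it, per the ArgoCD documentation, is:

$ kubectl -n argocd get secret argocd-initial-admin-secret \
    -o jsonpath="{.data.password}" | base64 -d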
The output of this command is the password for the "admin" user.
You can now log in using the web browser: navigate to the
20.119.112.240 URL to change the password.
Log in to ArgoCD with the following command:
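A sketch, using the external IP from above:

$ argocd login 20.119.112.240 --username admin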
Now that you have learned about GitHub Actions and ArgoCD, you can
move on to deploying your application and automating the CI/CD process.
Add these two files to the resources folder of the main project, as
shown in Figure 3-12.
spring.cloud.function.definition=employeeSupplier
spring.datasource.url=jdbc:h2:mem:employee
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.h2.console.enabled=true
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.jpa.defer-datasource-initialization=true
Figure 3-13 shows the flow when deploying Spring Cloud Function to a
Kubernetes environment with Knative configured.
The process steps are as follows:
• AWS account
name: CI
on:
  push:
    branches: [ "master" ]
jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          java-version: '11'
          distribution: 'temurin'
          cache: maven
      - uses: aws-actions/setup-sam@v1
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Build with Maven
        run: mvn -B package --file pom.xml
      # sam package
      - run: sam package --template-file template.yaml --output-template-file packaged.yaml --s3-bucket payrollbucket
      # sam deploy
      - run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --stack-name payroll-aws --s3-bucket payrollbucket --capabilities CAPABILITY_IAM --region us-east-2
The secrets for these two elements can be stored in GitHub secrets:
Figure 3-16 shows the place to store configuration secrets for GitHub
Actions. It is under the Settings tab.
on:
  push:
    branches: [ "master" ]
GitHub Actions executes the steps outlined in the YAML file and deploys
the function to AWS Lambda. Figure 3-17 shows a successful run.
Figure 3-19. Testing if the function was successful. See the JSON
response
In Figure 3-19, you can see that the test was successful, along with a JSON
result of what is in the database.
• Use gcloud-cli
on:
  push:
    branches: [ "master" ]
jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
Note that you will have to store your GCP_CREDENTIALS in the GitHub
Secrets dashboard.
As in the previous example with AWS Lambda, note that the steps to
check out, set up Java, and build with Maven are the same. For authentication
and deployment, you use the Google Cloud CLI. The Set up Cloud SDK
task downloads and sets up the Google CLI. You can use the same
command-line script that you used when you deployed from a laptop in
Chapter 2.
Step 3: Commit and push code to trigger the GitHub Actions. This
trigger is defined in the actions code. In this example, any push or commit
to the “master” branch will trigger the GitHub Actions.
on:
  push:
    branches: [ "master" ]
This can be done on the GitHub Actions website or in the IDE. You can
go to GitHub and commit a change by making a simple modification on the
"master" branch. This will start the GitHub Actions flow.
Once the actions successfully complete the job, you can go to the
Google Cloud Functions dashboard and test the function. Again, you
execute a simple GET against the EmployeeSupplier function.
Step 4: Test the function.
Before you test the function, ensure that the function allows
unauthenticated invocations so it can be called from a device such as your
laptop. Once you're done testing, remove that privilege to avoid unnecessary
invocations.
on:
  push:
    branches: [ "master" ]

# CONFIGURATION
# For help, go to https://github.com/Azure/Actions
#
# 1. Set up the following secrets in your repository:
#    AZURE_FUNCTIONAPP_PUBLISH_PROFILE
#
# 2. Change these variables for your configuration:
env:
  AZURE_FUNCTIONAPP_NAME: payroll-kubeforce-new  # set this to your function app name on Azure
  POM_XML_DIRECTORY: '.'                         # set this to the directory which contains the pom.xml file
  POM_FUNCTIONAPP_NAME: payroll-kubeforce-new    # set this to the function app name in your local development environment
  JAVA_VERSION: '11'                             # set this to the Java version to use

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    environment: dev
    steps:
      - name: 'Checkout GitHub Action'
        uses: actions/checkout@master
      # ... setup-java and Maven build steps appear here in the full workflow ...
      - name: 'Run the Azure Functions Action'  # step header reconstructed (Azure/functions-action assumed)
        uses: Azure/functions-action@v1
        with:
          app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }}
          # package: './${{ env.POM_XML_DIRECTORY }}/target/azure-functions/${{ env.POM_FUNCTIONAPP_NAME }}'
          package: './${{ env.POM_XML_DIRECTORY }}/target/azure-functions/${{ env.POM_FUNCTIONAPP_NAME }}'
          # package: '${{ env.POM_XML_DIRECTORY }}/target/azure-functions/${{ env.POM_FUNCTIONAPP_NAME }}'
          # package: 'target/azure-functions/${{ env.POM_FUNCTIONAPP_NAME }}'
          publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }}
on:
  push:
    branches: [ "master" ]
Once you have the prerequisites set up, you can begin configuring
an automated CI/CD pipeline. For this example implementation, you'll
use the code from GitHub at
https://github.com/banup-kubeforce/payroll-h2.git.
Step 1: Spring Cloud Function code in GitHub. Push your code to
GitHub. You can use the code for payroll-h2 in GitHub.
Step 2: Create a GitHub Action. Listing 3-8 shows the code for
the action.
on:
  push:
    branches:
      - 'main'
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - uses: actions/setup-java@v2  # step header reconstructed; java-version assumed
        with:
          java-version: '11'
          distribution: 'temurin'
          cache: maven
      - name: Login to DockerHub
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
This code creates a Docker image and pushes it to Docker Hub. You
can store the username and password as secrets on the GitHub site (see
Figure 3-31).
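The listing ends with the Docker Hub login; the actual build-and-push step is typically expressed with docker/build-push-action (the image tag is an assumed example):

      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: banupkubeforce/springcloudfunctions:payrollv2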
Step 3: Execute GitHub Actions to build and push the Docker image.
The execution of GitHub Actions can be triggered by a push/commit.
The trigger is defined in the GitHub Actions YAML file:
on:
  push:
    branches:
      - 'main'
Figure 3-36. Successful run of the sync showing the deployment flow
Step 7: Testing. The best way to get the URL for testing is to connect to the
cluster via the command line and retrieve it, as explained in Chapter 2.
Run $ kn service list to get the URL for testing, as shown in
Figure 3-38.
3.9. Summary
In this chapter, you learned how to set up some CI/CD tools to create an
automated deployment for your Spring Cloud Function.
You learned how to trigger the deployment of functions on Lambda,
Google Cloud Functions, and Azure Functions.
You also learned that you can combine builds of Docker images
stored in Docker Hub with ArgoCD to deploy the image to any Kubernetes
cluster that is running Knative.
If you want to achieve "write-once deploy-anywhere," you have to
look at using Kubernetes and Knative. Spring Cloud Function is truly a
portable function.
CHAPTER 4
Building Event-Driven Data Pipelines with Spring Cloud Function
Spring Cloud Function plays a critical role in the hybrid cloud or
on-premises/private cloud space. Building event/data pipelines that span
the local datacenter and the cloud increases complexity due to
firewall boundaries. Data pipelines play an important role when you want
to acquire, move, or transform data as it comes in (streaming) or when it's
offline (batch).
So, what are event data pipelines?
triggered by the collision event and begin sending data as streams into
the OnStar system. The OnStar system processes the data from numerous
other systems and triggers a response in the form of an OnStar operator
calling you. The path from the event that triggered the data processing
through to the output is the event data pipeline. Figure 4-1 illustrates this process.
4.1.1. Acquire Data
Acquiring data is the first step in the data pipeline process. Here, business
owners and data architects decide what data to use to fulfill the
requirements of a specific use case.
data needs to be acquired from sensors. This sensor data then needs to
be combined with data from internal and external partners, like finance,
towing partners, rental partners, and so on.
Table 4-1 shows the data, the type of data, and the sources for a vehicle
event-driven pipeline.
4.1.2. Store/Ingest Data
The data from various sources is acquired and stored as raw data into
the store of choice. The method of ingestion can be stream-based or
batch-based.
For example, in the case of OnStar, sensor data is streamed at regular
intervals or is event-based. Other data, such as rental info, can either be
batch based or on-demand query driven. The raw datastore can be an S3
object store hosted in the cloud or on-premises.
4.1.3. Transform Data
The data that is stored raw in the object store is then processed. The
process may include converting or transforming unstructured data into
structured data and storing it in an RDBMS database.
For example, in OnStar, partner data and internal data will be
combined and transformed into a common data model. The sensor data
will also be transformed into an RDBMS format.
4.1.4. Load Data
Once the data is transformed, it is then loaded into a single or multiple
databases with a common schema. This schema is based on a predefined
data model specific to the use case. The target datastore can be a data lake
or another RDBMS store. Here again, it depends on the type of analysis
that needs to be done.
For example, if this is an OLTP type of analysis, the data needs to be
processed and sent to requesting systems quickly. This would require an
RDBMS store. Data that needs to be available for reporting and research
can be stored in a data lake.
4.1.5. Analyze Data
During this sub-process, the data that is stored in a data lake or RDBMS
will be analyzed using tools such as Tableau, Power BI, or a dedicated web
page for reporting.
In the case of OnStar, data that is stored in the data lake will be analyzed
using Tableau or Power BI, while the data that needs immediate attention
will be analyzed by a custom dashboard or reporting interface on the web.
Spring Cloud Function plays an integral role in the whole process,
especially when combined with tools such as Spring Cloud Data Flow,
AWS Glue, Azure Data Factory, Google Dataflow, and so on. You will dive
deep into these tools in this chapter.
As you can see from the dashboard, you can build stream-based or
batch-based (task) data pipelines and manage them through a single
dashboard.
Unlike the other data pipeline tools available in the cloud, SCDF can
be deployed in a Kubernetes, Docker, or Cloud Foundry environment,
making it a portable tool for data pipeline development and deployment.
To run it, you need:
• Kubernetes or Docker
• A RabbitMQ cluster/instance
You can now access SCDF using the external IP. For example,
http://20.241.228.184:8080/dashboard
The next step is to add applications. Spring provides a standard set of
templates that you can use to build your pipeline.
Step 2: Add Applications to your SCDF Instance
Use the Add Application(s) button to create some starter apps that you
can use to create a data pipeline; see Figures 4-7 and 4-8.
Figure 4-8 shows the options for adding custom-built applications
through the Registering One or More Applications option. You can
import the application coordinates from an HTTP URI or use a properties
file. There is also an option to import prebuilt starters from Spring.
Figure 4-9 shows prebuilt templates with the Docker images URI. SCDF
is running on a Kubernetes environment, so the prebuilt images have a
Docker URI.
These prebuilt templates come in three categories—source, processor,
and sink. They allow you to wire up a data pipeline without the need for
coding. If you want a custom component, you can follow the examples in
https://dataflow.spring.io/docs/stream-developer-guides/.
The next step is to create a stream using the starter templates
you loaded.
1. Pick a source.
This example uses RabbitMQ as the source, so pick Rabbit from the
available options, as shown in Figure 4-11.
2. Pick a processor.
Pick the Transform component from the list, as shown in Figure 4-12.
3. Pick a sink.
Pick the Log component from the list, as shown in Figure 4-13.
Now you wire them by dragging from the output of the first component
to the input of the second component, as depicted in Figure 4-14.
4. Configure RabbitMQ.
In this case, the sink is a simple log, so no configuration is required. See
Figure 4-17.
Once everything is wired up, you have a data pipeline. The dashboard
also shows a Stream DSL (Domain-Specific Language) expression that you
can save and reuse using the SCDF shell or dashboard, as shown in
Figure 4-18.
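For reference, the DSL expression for this rabbit-to-transform-to-log
stream looks something like the following; the queue name and SpEL
expression here are assumptions, not the book's exact values:

rabbit --queues=VehicleInfo | transform --expression=payload.toUpperCase() | log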
You will create the following classes:
• SenderConfig
• QueueSender
• SenderFunction
Prerequisites:
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
spring.cloud.function.definition=senderFunction
spring.rabbitmq.host=20.40.208.246
spring.rabbitmq.port=5672
spring.rabbitmq.username=banup
spring.rabbitmq.password=pa55word
queue.name=VehicleInfo
This is a simple setter for the queue name. You can expand this to
include other RabbitMQ configurations. Listing 4-3 shows the
SenderConfig class.
package com.kubeforce.scdffunctiontigger;
import org.springframework.amqp.core.Queue;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class SenderConfig {

    @Value("${queue.name}")
    private String message;

    @Bean
    public Queue queue() {
        return new Queue(message, true);
    }
}
package com.kubeforce.scdftrigger;

import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class QueueSender {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @Autowired
    private Queue queue;

    // The send method and closing brace are missing from the extract;
    // this assumed completion forwards a message to the configured queue.
    public void send(String message) {
        rabbitTemplate.convertAndSend(this.queue.getName(), message);
    }
}
This example uses the queueSender to send the data (see Listing 4-5).
package com.kubeforce.scdftrigger;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.function.Function;
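The body of Listing 4-5 did not survive the extraction. A minimal sketch,
assuming the function simply forwards its payload to the queue through
the QueueSender, might look like this (the class name matches the
senderFunction definition in the application properties):

public class SenderFunction implements Function<String, String> {

    @Autowired
    private QueueSender queueSender;

    @Override
    public String apply(String payload) {
        // Forward the incoming payload to the RabbitMQ queue.
        queueSender.send(payload);
        return "Sent: " + payload;
    }
}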
You have seen how to create a data pipeline in SCDF that monitors
a topic in RabbitMQ. You created a Spring Cloud Function that posts
messages into the RabbitMQ topic. Spring Cloud Function can also be
deployed as a source in SCDF; more information on how to develop code
for SCDF is available on the SCDF site.
You can now connect and publish to the stream using your Spring
Cloud Function.
Now you have a job in AWS Glue that you can trigger manually or via a
function.
spring.cloud.function.definition=producerFunction
#use your aws credentials here
aws.access_key =
aws.secret_key =
aws.region = us-east-2
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>amazon-kinesis-client</artifactId>
<version>1.14.1</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>amazon-kinesis-producer</artifactId>
<version>0.13.1</version>
</dependency>
3: Create the model. This is a simple model for tracking a vehicle's details.
See Listing 4-8.
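Listing 4-8 itself did not survive the extraction. A minimal sketch of
such a model, with assumed field names (and Lombok's @Data for the
accessors, if Lombok is on your classpath), might be:

import lombok.Data;

// Hypothetical model: the field names are illustrative assumptions.
@Data
public class TrackDetail {
    private String vehicleId;
    private String latitude;
    private String longitude;
    private String timestamp;
}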
4: Create the Kinesis producer. An interface is nice to have here; see Listing 4-9.
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.kinesis.producer.*;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;
@Service
public class ProducerServiceImpl implements ProducerService {

    private static final Logger LOG =
            LoggerFactory.getLogger(ProducerServiceImpl.class);

    @Value(value = "${aws.stream_name}")
    private String streamName;

    @Value(value = "${aws.region}")
    private String awsRegion;

    @Value(value = "${aws.access_key}")
    private String awsAccessKey;

    @Value(value = "${aws.secret_key}")
    private String awsSecretKey;

    // The field declarations below are restored (assumed) from context;
    // the original page is missing them.
    private KinesisProducer kinesisProducer;
    private final AtomicLong completed = new AtomicLong(0);
    private final ExecutorService callbackThreadPool = Executors.newCachedThreadPool();

    public ProducerServiceImpl() {
        this.kinesisProducer = getKinesisProducer();
    }

    // The body of getKinesisProducer() is reconstructed from the standard
    // Kinesis Producer Library (KPL) configuration pattern.
    private KinesisProducer getKinesisProducer() {
        if (kinesisProducer == null) {
            BasicAWSCredentials credentials =
                    new BasicAWSCredentials(awsAccessKey, awsSecretKey);
            KinesisProducerConfiguration config = new KinesisProducerConfiguration();
            config.setRegion(awsRegion);
            config.setCredentialsProvider(new AWSStaticCredentialsProvider(credentials));
            kinesisProducer = new KinesisProducer(config);
        }
        return kinesisProducer;
    }
    // Parts of this method are missing from the extract; the callback and
    // record-submission plumbing are reconstructed (assumed) from the
    // standard KPL pattern. The partition key is a placeholder.
    @Override
    public void putDataIntoKinesis(String payload) throws Exception {
        FutureCallback<UserRecordResult> myCallback =
                new FutureCallback<UserRecordResult>() {
                    @Override
                    public void onFailure(Throwable t) {
                        if (t instanceof UserRecordFailedException) {
                            Attempt last = ((UserRecordFailedException) t)
                                    .getResult().getAttempts()
                                    .get(((UserRecordFailedException) t)
                                            .getResult().getAttempts().size() - 1);
                            LOG.error(String.format("Failed to put record - %s : %s.",
                                    last.getErrorCode(), last.getErrorMessage()));
                        }
                        LOG.error("Exception during put", t);
                    }

                    @Override
                    public void onSuccess(UserRecordResult result) {
                        completed.getAndIncrement();
                    }
                };

        ByteBuffer data = null;
        try {
            data = ByteBuffer.wrap(payload.getBytes("UTF-8"));
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        }

        ListenableFuture<UserRecordResult> future =
                kinesisProducer.addUserRecord(streamName, "partition-key", data);
        Futures.addCallback(future, myCallback, callbackThreadPool);
    }
    @Override
    public void stop() {
        if (kinesisProducer != null) {
            kinesisProducer.flushSync();
            kinesisProducer.destroy();
        }
    }
}
6: Create the producer function. This is the critical function that will be
exposed to post data into Kinesis; see Listing 4-11.
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.function.Function;
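The rest of Listing 4-11 is missing from this extract. A minimal sketch,
assuming the function serializes the model to JSON and hands it to the
ProducerService from Listing 4-10, might look like this:

public class ProducerFunction implements Function<TrackDetail, String> {

    @Autowired
    private ProducerService producerService;

    @Override
    public String apply(TrackDetail trackDetail) {
        ObjectMapper mapper = new ObjectMapper();
        try {
            // Serialize the tracking payload and push it into the Kinesis stream.
            String data = mapper.writeValueAsString(trackDetail);
            producerService.putDataIntoKinesis(data);
            return "Posted to Kinesis: " + data;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}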
10: Run Glue manually. From the Glue Studio, start the process by
clicking Run, as shown in Figure 4-24.
The job starts to run, as shown in Figure 4-28. Check the S3 bucket for
any data.
In this section, you learned how to create a Spring Cloud Function
that can post data into AWS Kinesis as part of the data pipeline. You
learned that you can publish data into Kinesis and trigger the AWS Glue
pipeline manually, but I also encourage you to explore other ways you
can implement Spring Cloud Function for AWS Glue, such as creating and
deploying triggers. More information on how to create AWS Glue triggers
in Spring is available at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-glue.html.
For the example in this section, you will create a dataflow that includes
cloud pub/sub:
Spring Cloud Function ➤ Dataflow {Cloud Pub/Sub ➤ Cloud Storage}
Prerequisites:
• Maven dependencies
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-web</artifactId>
</dependency>
<!-- [START pubsub_spring_boot_starter] -->
<!-- [START pubsub_spring_integration] -->
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>spring-cloud-gcp-starter-pubsub</artifactId>
<version>3.3.0</version>
</dependency>
<!-- [END pubsub_spring_boot_starter] -->
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-core</artifactId>
</dependency>
<!-- [END pubsub_spring_integration] -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
spring.cloud.function.definition=producerFunction
spring.cloud.gcp.project-id=springcf-348721
spring.cloud.gcp.credentials.location=file:C://Users//banua//Downloads//application_default_credentials.json
pubsub.topic=projects/springcf-348721/topics/vehicletopic
package com.kubeforce.googlepubsub;
import java.time.LocalDateTime;
package com.kubeforce.googlepubsub;
package com.kubeforce.googlepubsub;
import com.google.cloud.spring.pubsub.core.PubSubTemplate;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
@Component
public class PubSubPublisher {
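    // The remainder of this listing is missing from the extract. A minimal,
    // assumed completion that publishes a payload to the configured topic
    // via Spring's PubSubTemplate could look like this:
    @Value("${pubsub.topic}")
    private String topic;

    private final PubSubTemplate pubSubTemplate;

    public PubSubPublisher(PubSubTemplate pubSubTemplate) {
        this.pubSubTemplate = pubSubTemplate;
    }

    public void publish(String payload) {
        pubSubTemplate.publish(topic, payload);
    }
}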
package com.kubeforce.googlepubsub;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.beans.factory.annotation.Autowired;
import java.time.LocalDateTime;
import java.util.function.Function;
// The class declaration and the tail of this method are restored here
// (assumed) so the listing compiles; the original pages are missing.
public class ProducerFunction implements Function<TrackDetail, String> {

    @Autowired
    private PubSubPublisher pubSubPublisher;

    @Override
    public String apply(TrackDetail trackDetail) {
        ObjectMapper mapper = new ObjectMapper();
        String data = "";
        try {
            data = mapper.writeValueAsString(trackDetail);
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
        pubSubPublisher.publish(data);
        return "Published: " + data;
    }
}
4.5. Summary
This chapter explained how to create dataflows and data pipelines,
whether on-premises using SCDF or in the cloud. For the cloud, you can
use SCDF or cloud-native tools.
Spring Cloud Function is versatile and can be used in the context of
data pipelines as a trigger or as part of the flow.
With AWS Glue and Google Dataflow, you saw that you can use Spring
Cloud Function as a trigger for the flows. This requires some additional
coding: adding the relevant libraries and invoking the flow.
Upcoming chapters discuss other use cases of Spring Cloud Function.
CHAPTER 5
AI/ML Trained Serverless Endpoints with Spring Cloud Function
This chapter looks at how Spring Cloud Function can be leveraged in
AI/ML. You learn about the AI/ML process and where Spring Cloud
Function fits in that process. You also learn about some of the offerings
from the cloud providers, such as AWS, Google, and Azure.
Before delving into the details of Spring Cloud Function
implementation, you need to understand the AI/ML process. This will set
the stage for implementing Spring Cloud Function.
5.1. AI/ML in a Nutshell
AI/ML is gaining popularity, as it is being offered by almost all cloud
providers. For AI/ML to work properly, it is important to understand the
process behind it. See Figure 5-1.
Let’s dig deeper into the process depicted in Figure 5-1 and see what is
accomplished.
1) Gathering requirements
• Model requirements
• Data collection
• Data drift
• Concept drift
Table 5-1. Where to Use Spring Cloud Function in the AI/ML Process

AI/ML Process       | Human                                                 | Compute
Model requirements  | Human/manual process                                  | —
Collect data        | Integration triggers, data pipeline sources or sinks  | Data pipeline process, transformation
Data cleaning       | Integration triggers                                  | Transformation process
Data labeling       | Tagging discrete elements (updates, deletes)          | Bulk tagging
Feature engineering | Manual                                                | —
Train model         | Trigger for training                                  | Training process
Model evaluation    | Manual triggers for evaluation                        | Bulk evaluation
Deploy models       | Model serving, model storage                          | Bulk storage
Monitor models      | Monitoring alerts                                     | —
Historically, AI/ML required expensive hardware with many concurrent
processing units. This made the whole process costly, and proper AI/ML
activities were left to companies with deep pockets.
Today, with all the cloud providers offering some level of AI/ML
through an API or SaaS approach, and with the ability to pay per use or
pay as you go, companies small and big have begun to utilize AI/ML in
their compute activities.
Paradigms such as cloud functions make it even easier to take
advantage of a scalable platform offered by the cloud. Activities such as
model storage and retrieval can be done on demand with cloud functions.
Serving pre-trained models is easy through cloud functions and these
models can be made available to any client without the need to install
client libraries. Here are some of the advantages of cloud functions
in AI/ML:
• Scalable infrastructure
5.3.1. What Is DJL?
Deep Java Library (DJL; https://docs.djl.ai/) is a high-level,
engine-agnostic Java framework for deep learning. It allows you to
connect to any framework, like TensorFlow or PyTorch, and conduct
AI/ML activities from Java.
DJL also has great hooks into Spring Boot and can easily be invoked
through the Spring Framework. DJL acts as an abstraction layer across
frameworks and makes it easy to interact with those frameworks, as shown
in Figure 5-4.
There are many components in DJL that are useful to look at, but
djl-serving is particularly interesting.
Run the following commands to get the djl-serving bits. Then unzip
the file into your directory of choice and add the path to serving.bat
(located at ~\serving-0.19.0\bin\serving.bat) to your PATH. This
allows you to execute serving from anywhere on your machine.
curl -O https://publish.djl.ai/djl-serving/serving-0.19.0.zip
unzip serving-0.19.0.zip
C:\Users\banua>serving -m "resnet=https://tfhub.dev/tensorflow/
resnet_50/classification/1"
[INFO ] - Starting djl-serving: 0.19.0 ...
[INFO ] -
Model server home: C:\Users\banua
Current directory: C:\Users\banua
Temp directory: C:\Users\banua\AppData\Local\Temp\
Command line:
Number of CPUs: 16
Max heap size: 8114
Config file: N/A
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8080
Default job_queue_size: 1000
Default batch_size: 1
Default max_batch_delay: 300
Default max_idle_time: 60
Model Store: N/A
Initial Models: resnet=https://tfhub.dev/tensorflow/resnet_50/
classification/1
Initial Workflows: N/A
Netty threads: 0
Maximum Request Size: 67108864
[INFO ] - Initializing model: resnet=https://tfhub.dev/
tensorflow/resnet_50/classification/1
[INFO ] - Downloading https://publish.djl.ai/tensorflow-2.7.0/
win/cpu/api-ms-win-core-synch-l1-2-0.dll.gz ...
[INFO ] - Downloading https://publish.djl.ai/tensorflow-2.7.0/
win/cpu/api-ms-win-core-file-l1-2-0.dll.gz ...
[INFO ] - Downloading https://publish.djl.ai/tensorflow-2.7.0/
win/cpu/THIRD_PARTY_TF_JNI_LICENSES.gz ...
[INFO ] - Downloading https://publish.djl.ai/tensorflow-2.7.0/
win/cpu/api-ms-win-core-file-l1-1-0.dll.gz ...
[INFO ] - Downloading https://publish.djl.ai/tensorflow-2.7.0/
win/cpu/api-ms-win-crt-environment-l1-1-0.dll.gz ...
"resnet=https://tfhub.dev/tensorflow/resnet_50/
classification/1"
$curl -O https://resources.djl.ai/images/kitten.jpg
Next, run the following and you will see the output with probabilities.
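The command itself is missing from this extract; based on the
djl-serving documentation, it is typically:

curl -X POST http://localhost:8080/predictions/resnet -T kitten.jpg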
You provide the djl-serving instance that is running at http://
localhost:8080/predictions with the kitten image that is located
in the current directory, and you get a response shown in Figure 5-6,
which shows that the image is probably a tabby cat. The probability is
0.4107377231121063. This is close.
Next, you see how you can use DJL to create a Spring Cloud Function
to serve models.
• DJL libraries
• A model: https://djl-ai.s3.amazonaws.com/
resources/demo/pneumonia-detection-model/saved_
model.zip
Step 1: Create the Spring Cloud Function with the DJL framework. Add
the dependencies to the pom.xml file.
Add the highlighted DJL dependencies along with the spring-cloud-
function-web and GCP dependencies, as shown in Listing 5-2.
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-web</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/ai.djl/bom -->
<dependency>
<groupId>ai.djl</groupId>
<artifactId>bom</artifactId>
<version>0.12.0</version>
<type>pom</type>
</dependency>
<dependency>
<groupId>ai.djl</groupId>
<artifactId>api</artifactId>
<version>0.12.0</version>
</dependency>
<dependency>
<groupId>ai.djl.tensorflow</groupId>
<artifactId>tensorflow-api</artifactId>
<version>0.12.0</version>
</dependency>
<dependency>
<groupId>ai.djl.tensorflow</groupId>
<artifactId>tensorflow-engine</artifactId>
<version>0.12.0</version>
</dependency>
<dependency>
<groupId>ai.djl.tensorflow</groupId>
<artifactId>tensorflow-native-auto</artifactId>
<version>2.4.1</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
package com.kubeforce.djlxray;
import ai.djl.inference.Predictor;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.ImageFactory;
import ai.djl.modality.cv.translator.ImageClassificationTranslator;
import ai.djl.modality.cv.util.NDImageUtils;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ModelZoo; // import added; needed by the reconstructed body below
import ai.djl.repository.zoo.ZooModel;
import ai.djl.translate.Translator;
import lombok.SneakyThrows;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
The class declaration and the tail of this listing are missing from the
extract; the reconstruction below follows DJL's public pneumonia-detection
demo, so treat it as a sketch rather than the book's exact code.

public class XrayFunction implements Function<Map<String, String>, String> {

    private static final Logger LOGGER = LoggerFactory.getLogger(XrayFunction.class);

    private String imagePath;
    private String savedModelPath;

    @SneakyThrows
    @Override
    public String apply(Map<String, String> imageinput) {
        imagePath = imageinput.get("url");
        savedModelPath = imageinput.get("savedmodelpath");
        Image image;
        try {
            image = ImageFactory.getInstance().fromUrl(imagePath);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }

        Translator<Image, Classifications> translator =
                ImageClassificationTranslator.builder()
                        .addTransform(array -> NDImageUtils.resize(array, 224).div(255.0f))
                        .optSynset(Arrays.asList("Normal", "Pneumonia"))
                        .build();

        Criteria<Image, Classifications> criteria = Criteria.builder()
                .setTypes(Image.class, Classifications.class)
                .optModelUrls(savedModelPath)
                .optTranslator(translator)
                .build();

        try (ZooModel<Image, Classifications> model = ModelZoo.loadModel(criteria);
             Predictor<Image, Classifications> predictor = model.newPredictor()) {
            Classifications result = predictor.predict(image);
            LOGGER.info("Diagnose: {}", result);
            return result.toString();
        }
    }
}
Step 3: Test locally. Run the Spring Cloud Function and invoke the
endpoint http://localhost:8080/xrayFunction
Provide input:
{
  "url":"https://djl-ai.s3.amazonaws.com/resources/images/chest_xray.jpg",
  "savedmodelpath":"https://djl-ai.s3.amazonaws.com/resources/demo/pneumonia-detection-model/saved_model.zip"
}
Upon invoking the function, the model is downloaded and then loaded
into memory. This takes about a minute, after which the function returns
a successful message. The log reported a model-load time of roughly 80
seconds, and this is critical for your function calls, as you will have to
accommodate this model-loading time. See Figure 5-9.
You also learned how to use deep learning Java libraries in your
functions. You can deploy this Spring Cloud Function to any cloud, as
shown in Chapter 2.
5.4.1. TensorFlow
TensorFlow was developed by Google and is an open source platform for
machine learning. It is an interface for expressing and executing machine
learning algorithms. The beauty of TensorFlow is that a model expressed
in TensorFlow can be executed with minimal changes on mobile devices,
laptops, or large-scale systems with multiple GPUs and CPUs. TensorFlow
is flexible and can express a lot of algorithms, including training and
inference algorithms for deep neural networks, speech recognition,
robotics, drug discovery, and so on.
In Figure 5-9, you can see that TensorFlow can be deployed to multiple
platforms and has many language interfaces. Unfortunately, TensorFlow's
primary API is Python, so most of the models are written and deployed
in Python. This poses a unique challenge for enterprises that have
standardized on Java.
1 Source: https://blog.tensorflow.org/2019/01/whats-coming-in-tensorflow-2-0.html
As you can see from Table 5-2, cloud functions are recommended
for experimentation. Google recommends Compute Engine with TF
Serving, or its SaaS platform (AI Platform) for predictions for production
deployments.
The issue with this approach is that a function-based approach is more
than just an experimentation environment. Functions are a way of saving
on cost while exposing the serving capabilities for predictions through
APIs. It is a serverless approach, so enterprises do not have to worry about
scaling.
2 Source: https://cloud.google.com/blog/products/ai-machine-learning/how-to-serve-deep-learning-models-using-tensorflow-2-0-with-cloud-functions
Step 2: Create a project. Create a project called MNIST and then create a
main.py file with the code in Listing 5-4. I used PyCharm to run this code.
On your Mac, make sure to run this command before running the
code; otherwise you will get a certificate error. The code tries to download
packages from googleapis:
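The command itself is missing from the extract; on macOS, the usual fix
is to run the certificate installer that ships with Python (adjust the
version to match your install):

/Applications/Python\ 3.x/Install\ Certificates.command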
I ran main.py and the whole process took me 13 minutes. At the end of
it, I was able to get two model files as output.
import tensorflow as tf
# Imports below are restored (assumed); the listing uses these classes later.
from tensorflow.keras.layers import Conv2D, Dense, Flatten
from tensorflow.keras import Model

EPOCHS = 10
mnist = tf.keras.datasets.mnist
fashion_mnist = tf.keras.datasets.fashion_mnist
# Data loading and normalization restored (assumed) from the standard
# TensorFlow 2 quickstart; the original lines are missing in the extract.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train[..., tf.newaxis].astype("float32")
x_test = x_test[..., tf.newaxis].astype("float32")

train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
class CustomModel(Model):
    def __init__(self):
        super(CustomModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    # Forward pass restored (assumed) from the TF2 quickstart; it is
    # missing in the extract but required for the model to run.
    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)
model = CustomModel()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images)
        # Loss and gradient steps restored (assumed) from the TF2 quickstart:
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_loss(loss)
    train_accuracy(labels, predictions)
@tf.function
def test_step(images, labels):
    predictions = model(images)
    t_loss = loss_object(labels, predictions)

    test_loss(t_loss)
    test_accuracy(labels, predictions)
# Training loop restored (assumed) from the TF2 quickstart; without it,
# there is nothing to save.
for epoch in range(EPOCHS):
    for images, labels in train_ds:
        train_step(images, labels)
    for test_images, test_labels in test_ds:
        test_step(test_images, test_labels)

tf.saved_model.save(model, export_dir="c://Users//banua//Downloads/MNIST/models")
I created a bucket called mnist-soc. You will use the bucket name in
the Cloud Functions call. See Figure 5-13.
Name your bucket mnist-soc and leave the others set to the defaults;
then click Create.
Upload the savedmodel3.zip file to this folder by clicking Upload Files.
See Figure 5-14.
Click the file to get the details of the URL you need to connect to, as
shown in Figure 5-15.
• Google account
Modify the Spring Cloud Function to fit the Google Cloud Functions
environment. See Listing 5-5.
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-adapter-gcp</artifactId>
</dependency>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>com.google.cloud</groupId>
            <artifactId>spring-cloud-gcp-dependencies</artifactId>
            <version>3.3.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <outputDirectory>target/deploy</outputDirectory>
            </configuration>
            <dependencies>
                <dependency>
                    <groupId>org.springframework.cloud</groupId>
                    <artifactId>spring-cloud-function-adapter-gcp</artifactId>
                    <version>3.2.7</version>
                </dependency>
            </dependencies>
        </plugin>
        <plugin>
            <groupId>com.google.cloud.functions</groupId>
            <artifactId>function-maven-plugin</artifactId>
            <version>0.9.1</version>
            <configuration>
                <functionTarget>org.springframework.cloud.function.adapter.gcp.GcfJarLauncher</functionTarget>
                <port>8080</port>
            </configuration>
        </plugin>
    </plugins>
</build>
Once this runs successfully, you will get the output shown in
Figure 5-16.
You now test in the Cloud Function console by providing input (see
Figure 5-18). Note that you have to increase the memory to 4096MB with a
timeout set to 540s just to be safe:
{"url":"https://djl-ai.s3.amazonaws.com/resources/images/chest_
xray.jpg",
"savedmodelpath":"https://storage.googleapis.com/mnist-soc/
saved_model.zip"}
If you scroll down the test console, you get the execution times. This
shows that the function execution took 16392ms, as shown in Figure 5-19.
This is 16s for execution, which is phenomenal. This is faster because you
stored the saved model in Google Cloud Storage, which is closer to the
function.
You also found that you have to set the memory and timeout based on
the saved model size and store the model closer to the function, such as in
Google’s storage offerings.
• AWS account
Step 1: Prep your Lambda environment. Ensure that you have access
and a subscription to the AWS Lambda environment.
Step 2: Modify the Spring Cloud Function to fit the AWS Lambda
environment. You need to add the DJL dependencies to the pom.xml file
that you created in Chapter 2; see Listing 5-6.
<dependency>
<groupId>ai.djl</groupId>
<artifactId>api</artifactId>
<version>0.12.0</version>
</dependency>
<dependency>
<groupId>ai.djl.tensorflow</groupId>
<artifactId>tensorflow-api</artifactId>
<version>0.12.0</version>
</dependency>
<dependency>
<groupId>ai.djl.tensorflow</groupId>
<artifactId>tensorflow-engine</artifactId>
<version>0.12.0</version>
</dependency>
<dependency>
<groupId>ai.djl.tensorflow</groupId>
<artifactId>tensorflow-native-auto</artifactId>
<version>2.4.1</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
[Figure: the SageMaker workflow. Build: connect to other AWS services
and bring them together in SageMaker Notebooks. Label Data: set up and
manage labeling jobs. Train: use SageMaker's algorithms and frameworks
for training. Tune: SageMaker automatically tunes your model. Deploy:
deploy the model to SageMaker endpoints. Discover: use the AWS
Marketplace to find and buy model packages.]
SageMaker allows you to build and deploy models with Python as the
language of choice. When it comes to endpoints, though, there are Java
SDKs (much like with AWS Glue) that create prediction APIs or serve
models for further processing. You can leverage Lambda functions for
these APIs.
So, as you saw in TensorFlow, you have to work in Python and Java to
model and expose models for general-purpose use.
Let’s run through a typical example and see if you can then switch to
exposing APIs in Spring Cloud Function.
Note This example uses the same sample to build, train, and deploy
as in this hands-on AWS tutorial:
https://aws.amazon.com/getting-started/hands-on/build-train-deploy-machine-learning-model-sagemaker/
Step 2: Prepare the data. Use Python to prepare the data. This example
uses the XGBoost ML algorithm. See Figure 5-24.
As you can see from the list, most frameworks use Python. This
example uses conda_python3, as suggested in the AWS tutorial.
Copy and paste the Python code into the Jupyter notebook cell and run
it. You will get a “success” message, as shown in Figure 5-25.
Copy and paste the code to create the S3 bucket that stores your model, as
shown in Figure 5-26.
Now copy and paste the code to download data into a dataframe, as
shown in Figure 5-27.
You have to wait for Step 3 to finish before deploying the model; see
Figure 5-30.
Step 4: Deploy the model. Make a note of the compute sizes used. This will
impact your billing. See Figure 5-31.
Step 5: Make a note of the endpoints. See Figures 5-32 and 5-33.
Step 6: Create the Spring Cloud Function code to access the endpoint.
Listing 5-7 shows the POM dependencies.
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-sagemakerruntime</artifactId>
<version>1.11.979</version>
<exclusions>
<exclusion>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-core</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-core</artifactId>
<version>1.11.979</version>
</dependency>
Create a Supplier class to call and get the result from the SageMaker
endpoint. Unlike the function discussed in Section 5.3, the
SupplierFunction invokes an endpoint URL and returns the results. Here,
you use SageMaker's own model-serving capabilities; the Spring Cloud
Function acts as a client for SageMaker. See Figure 5-34.
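The class itself is not shown in this extract. A minimal sketch using the
v1 SageMaker Runtime SDK might look like the following; the endpoint
name and CSV payload are placeholders, not values from the book:

import com.amazonaws.services.sagemakerruntime.AmazonSageMakerRuntime;
import com.amazonaws.services.sagemakerruntime.AmazonSageMakerRuntimeClientBuilder;
import com.amazonaws.services.sagemakerruntime.model.InvokeEndpointRequest;
import com.amazonaws.services.sagemakerruntime.model.InvokeEndpointResult;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.function.Supplier;

public class SupplierFunction implements Supplier<String> {

    // Placeholders: substitute your deployed endpoint name and a real
    // feature row for the XGBoost model.
    private static final String ENDPOINT_NAME = "xgboost-endpoint";
    private static final String TEST_PAYLOAD = "35,2,0.5,1,0";

    @Override
    public String get() {
        AmazonSageMakerRuntime runtime =
                AmazonSageMakerRuntimeClientBuilder.defaultClient();
        InvokeEndpointRequest request = new InvokeEndpointRequest()
                .withEndpointName(ENDPOINT_NAME)
                .withContentType("text/csv")
                .withBody(ByteBuffer.wrap(TEST_PAYLOAD.getBytes(StandardCharsets.UTF_8)));
        InvokeEndpointResult result = runtime.invokeEndpoint(request);
        // The model's prediction comes back in the response body.
        return new String(result.getBody().array(), StandardCharsets.UTF_8);
    }
}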
5.7. Summary
As you learned in this chapter, you can serve models using Spring Cloud
Function. But you also learned that serving models using Spring Cloud
Function and Java can be a stretch, because most AI/ML models are
written in Python. While Python may be popular, it is also important to
note that in the enterprise, Java is king. Finding ways to leverage Java in
AI/ML is the key to having an integrated environment within your
enterprise. Cold starts of Python-based functions take a long time; this is
where using Java with frameworks such as GraalVM speeds up startup times.
The next chapter explores some real-world use cases of IoT and
conversational AI and explains how Spring Cloud Function can be used.
CHAPTER 6
Spring Cloud Function and IoT
This chapter covers Spring Cloud Function implementations with
IoT. You'll see some real-world examples from manufacturing and
logistics. You explore how Spring Cloud Function can operate with existing
IoT platforms in the cloud and in datacenters. You also explore some
specific implementations at Penske, which are interesting, as they can be
applied to nearly all IoT-related scenarios.
Before you explore the solutions, you need to dive a bit into IoT and
understand the status of the IoT market. You'll also look at some surveys
on why Java is the preferred enterprise language for IoT development.
With this growth potential, you can safely assume that the supporting
technologies will also grow. These technologies not only include hardware
sensors and IoT gateways, but also technologies such as microservices and
functions. The IoT industry is best suited to implement these technologies,
as they are highly distributed and rely heavily on the software components
to be small and efficient.
Serverless function-based environments that are triggered on demand
are perfectly suited to IoT, as they can save significant cost. Traditional
approaches to IoT relied on dedicated applications running 24/7. They
used up a lot of resources, adding to the cost of operation. With the nearly
ephemeral nature of serverless functions, this cost can be moved to the
“pay per use” approach.
Before you start working on some of the examples of IoT and Spring
Cloud Function, it is best to understand why you need to code in Java.
There are many alternative languages you can code in, but Java is the best
for IoT and here is why:
Let’s dig a little more into the IoT process; see Figure 6-2.
Figure 6-3. Manufacturing plant process flow with AWS and Spring
Cloud Function
Solution components:
You can build the solution using AWS IoT Greengrass and leverage
a Spring Cloud Function that is deployed on Lambda. The point of this
exercise is to understand the capabilities of Spring Cloud Function as a
component that is integral to the solution.
The AWS IoT Greengrass implementation extends the AWS Cloud to
on-premises environments. You can run Lambda functions at the
Greengrass edge.
Step 1: Install AWS IoT Greengrass. Install AWS IoT Greengrass on a device
(https://docs.aws.amazon.com/greengrass/v1/developerguide/
install-ggc.html). To test it, I installed the software on Windows. I
used the tutorial at https://aws.amazon.com/blogs/iot/aws-iot-
greengrass-now-supports-the-windows-operating-system/ to get my
first Greengrass implementation.
Once you have AWS IoT up and running, you need to create a function
to connect to devices and collect data. Let’s create a sample function.
Step 2: Spring Cloud Function to publish an MQTT message. You can
clone the project from GitHub at https://github.com/banup-kubeforce/
AWSIots3-V2.git.
package com.kubeforce.awsgreengrassiot;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;

import java.util.Map;
import java.util.function.Consumer;

// The class declaration is missing from the extract; the name below is an
// assumption. The stray Hibernate import and the shadowing local
// MqttPublish variable from the original listing have been removed.
public class MqttPublishConsumer implements Consumer<Map<String, String>> {

    private static final Logger LOGGER =
            LoggerFactory.getLogger(MqttPublishConsumer.class);

    @Autowired
    private MqttPublish mqttPublish;

    @Override
    public void accept(Map<String, String> map) {
        LOGGER.info("Adding device info: {}", map);
        mqttPublish.publish(map); // assumed publish method on MqttPublish
    }
}
import java.nio.ByteBuffer;
import java.util.Timer;
import java.util.TimerTask;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.greengrass.javasdk.IotDataClient;
import com.amazonaws.greengrass.javasdk.model.*;
class PublishDeviceInfo extends TimerTask {
private IotDataClient iotDataClient = new
IotDataClient();
--handler executable-name \
--role role-arn \
--zip-file fileb://Application_Name.zip \
--runtime arn:aws:greengrass:::runtime/function/executable
Step 4: Create a Spring Cloud Function to get data from the IoT core.
Create a class to subscribe to and get messages from MQTT.
If you want to access the data that you published, you can run
MqttSubscriber.java from the command line. It subscribes to a specific
topic (device/info, for example) and gets the messages. See Listing 6-3.
import software.amazon.awssdk.crt.CRT;
import software.amazon.awssdk.crt.CrtRuntimeException;
import software.amazon.awssdk.crt.mqtt.MqttClientConnection;
import software.amazon.awssdk.crt.mqtt.MqttClientConnectionEvents;
import software.amazon.awssdk.crt.mqtt.QualityOfService;
import software.amazon.awssdk.iot.iotjobs.model.RejectedError;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.atomic.AtomicReference;
/*
* When called during a CI run, throw an exception that
will escape and fail the exec:java task
* When called otherwise, print what went wrong (if
anything) and just continue (return from main)
*/
static void onApplicationFailure(Throwable cause) {
if (isCI) {
throw new RuntimeException("BasicPubSub execution
failure", cause);
} else if (cause != null) {
System.out.println("Exception encountered: " +
cause.toString());
}
}
@Override
public void onConnectionResumed(boolean
sessionPresent) {
System.out.println("Connection resumed: " +
(sessionPresent ? "existing session" : "clean
session"));
}
};
try {
onApplicationFailure(new RuntimeException("MQTT
connection creation failed!"));
}
subscribed.get();
countDownLatch.await();
return payload[0];
}
Create a class to upload the message to an S3 bucket. You can use the
S3Upload.java provided; see Listing 6-4.
package com.kubeforce.awsiots3;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.S3Client;
    // Method signature and client creation restored (assumed) from context;
    // the region matches the aws.region value used earlier in this chapter.
    public String upload(String bucketName, String key, String payload) {
        S3Client s3 = S3Client.builder().region(Region.US_EAST_2).build();
        s3.putObject(PutObjectRequest.builder().bucket(bucketName).key(key).build(),
                RequestBody.fromString(payload));
        s3.close();
        return ("success");
    }
Finally, create a Spring Cloud Function called Consumer that calls the
MqttSubscriber and S3Upload classes. See Listing 6-5.
import java.util.Map;
import java.util.function.Consumer;
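The body of Listing 6-5 is also missing from this extract. A minimal
sketch, assuming a static subscribe helper on MqttSubscriber and the
upload method shown above, might look like this (the class name and map
keys are illustrative assumptions):

public class IotS3Consumer implements Consumer<Map<String, String>> {

    @Override
    public void accept(Map<String, String> input) {
        // Pull the latest message from the MQTT topic.
        String payload = MqttSubscriber.subscribe(input.get("topic"));
        // Push the message to the S3 bucket using S3Upload.
        new S3Upload().upload(input.get("bucket"), input.get("key"), payload);
    }
}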
[Figure: sensors send information to the Azure IoT Hub; the data is then
stored in the Azure Blob Store.]
Step 1: Install an Azure IoT Edge device. You can follow the
instructions at https://docs.microsoft.com/en-us/azure/iot-edge/
quickstart-linux?view=iotedge-2020-11 to enable either a Windows or
Linux device.
Step 2: Connect the device to Azure IoT. Since you cannot deploy the
Azure Stack hub, it is best to use an Azure IoT hub on the web.
You can enable it on the Azure Portal. More information on how
to configure your edge devices to connect to the IoT hub is available at
https://docs.microsoft.com/en-us/azure/iot-edge/quickstart-linu
x?view=iotedge-2020-11.
You also have to register your device on the hub. See Figure 6-7.
Here is an example command line:
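The command itself did not survive the extraction; per the Microsoft
quickstart linked above, device registration typically looks like this
(the hub and device names are placeholders):

az iot hub device-identity create --hub-name my-iot-hub --device-id myEdgeDevice --edge-enabled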
Step 3: Create a Spring Cloud Function and deploy it on Azure IoT Edge.
This code is available at https://github.com/banup-kubeforce/
AzureIoTSimulator.git.
You can create a Spring Cloud Function that sends information from
the edge device to the IoT hub. Make sure you have the connection string
that you created in Step 1.
Dependencies:
<dependency>
<groupId>com.microsoft.azure.sdk.iot</groupId>
<artifactId>iot-device-client</artifactId>
<version>2.1.1</version>
</dependency>
import com.microsoft.azure.sdk.iot.device.DeviceClient;
import com.microsoft.azure.sdk.iot.device.IotHubClientProtocol;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.net.URISyntaxException;
@Configuration
public class IOTConfiguration {

    @Bean
    public DeviceClient deviceClient(
            @Value("${iot.connection.string}") String connString)
            throws URISyntaxException {
        return new DeviceClient(connString, IotHubClientProtocol.HTTPS);
    }
}
package com.kubeforce.azureiotsimulator;
import java.time.LocalDateTime;
package com.kubeforce.azureiotsimulator;
import com.microsoft.azure.sdk.iot.device.DeviceClient;
import com.microsoft.azure.sdk.iot.device.Message;
import com.microsoft.azure.sdk.iot.device.MessageSentCallback;
import com.microsoft.azure.sdk.iot.device.exceptions.IotHubClientException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import java.lang.invoke.MethodHandles;
import java.util.function.Function;
return null;
}
}
Step 4: Deploy the function into Azure Functions on the edge devices.
You will have to containerize the function, as discussed in Chapter 2.
Instead of pushing to Docker Hub, you have to push it to the Azure
Container Registry. Information on the Azure Container Registry is
available at https://azure.microsoft.com/en-us/products/container-registry/.
Use the VS Code features. Additional information is provided
on GitHub:
Figure 6-8. Azure function deployment on the IoT edge devices using
Azure IoT Edge runtime
Source: https://learn.microsoft.com/en-us/azure/iot-edge/media/tutorial-deploy-function/functions-architecture.png?view=iotedge-1.4
This section showed that you can take the same use case of the
manufacturing assembly plant and sensors and apply an Azure IoT
solution to it.
You created a Spring Cloud Function and deployed it to Azure IoT Edge
using Azure Functions. Detailed information is provided on GitHub at
https://github.com/banup-kubeforce/AzureIoTSimulator.git.
You also learned that you can use the Azure IoT Edge devices with
Azure IoT Edge runtime to collect data from sensors. This is very similar to
how it is done in AWS.
[Figure: fleet management on a Kubernetes platform (OpenShift or
VMware Tanzu).]
For this use case, a company wants to manage its fleet of vehicles
and provide data to its consumers about the vehicle's health, location,
maintenance, repair info, driver info, and so on, to enable visibility of its
fleet and cargo. The vehicle data needs to be collected and processed in its
own datacenter. See Table 6-1.
Solution:
Step 1: Contact your third-party provider to get the information about their
IoT hub. Since the work of acquiring the sensor data is offloaded to third-
party providers, you can assume that data will be accumulated and routed
to the third-party cloud from the vehicles.
Once the data is accumulated at the third-party gateway, it gets routed
to the company's datacenter via an invocation of a function.
You leveraged Spring Cloud Data Flow as the data pipeline to collect
and process the sensor data.
More information on the project is available at https://github.com/banup-kubeforce/onpremIoT.git.
6.6. Summary
As you learned in this chapter, you can build IoT-based solutions using
Spring Cloud Function, both in an on-premises environment and in AWS
and Azure.
Spring Cloud Function is one of the most versatile frameworks in
the Java world. It can be deployed on proprietary cloud-based serverless
environments and on Knative, making it a very portable component.
Whether it is a manufacturing plant walled off from the public Internet
or on the road with fleet management, you can build secure and reliable
solutions with Spring Cloud Function.
CHAPTER 7
Industry-Specific Examples with Spring Cloud Function
This chapter explores some of the industry-specific implementations of
Spring Cloud Function.
It leverages the IBM Cloud offering to demonstrate that Spring Cloud
Function is supported on any cloud offering.
Some of the examples in this chapter are real-world scenarios in
which functions play a critical role. For example, a function that sends
alarms about pipeline leaks and a chat function that helps customers solve
problems.
This solution leverages IBM Cloud and its IoT platform offerings
along with smart gateways to capture sensor data and provide analysis
and visualizations. You'll see how to deploy Spring Cloud Function and
integrate it with IBM Event streams to store the streaming data into IBM
Cloudant DB. See Figure 7-1.
[Figure 7-1: sensors on the gas pipeline send data over a cellular network
to an IoT gateway and on through cloud functions into the Cloudant
NoSQL DB.]
7.1.1. Sensors
There are multiple parameters that need to be measured and tracked when
monitoring the health of a pipeline. This data can be categorized into asset
and external data.
Asset data can include pressure, flow rate, wall thickness, and cracks.
External data can include temperature, humidity, pH levels, and soil
resistivity. Smart sensors can be installed along the length of the pipeline
to transmit information to a nearby IoT gateway.
7.1.2. IoT Gateway
These devices act as sensor data aggregators and send the acquired data to
receiving systems like the Watson IoT platform. They also allow the Watson
IoT platform to connect and manage sensor devices. There are many
cloud-ready gateways in the market.
7.1.7. IBM Cloudant DB
This is a fully managed distributed database that allows for storing data
from the IBM Watson IoT platform. You can find more information at
https://www.ibm.com/cloud/cloudant.
Now you'll see how to realize the flow shown in Figure 7-2, where
the IoT sensors send information to IBM Event Streams through the
message gateway. The message is then received from IBM Event Streams
through a Spring Cloud Function and stored in Cloudant DB. It is
suggested you explore the integration between the sensors and the message
gateway at https://www.ibm.com/docs/en/wip-mg/5.0.0.1?topic=os-imaserver-rpm.
Figure 7-2 shows the flow that this example plans to achieve.
If you are just trying this out, you can use the free tier (Lite). See
Figure 7-3.
You then stipulate where you want to run the event stream—on a
public cloud or satellite, as well as the cloud instance location. Then select
your pricing plan (the Lite option is free), give it a name, and assign a
resource group. You can also provide some identification tags. This will
create the event stream.
Step 4: Create a topic with the required parameters.
Within your event streams dashboard, click the Create Topic button
and enter the name of the topic, as well as any partitions and message
retention parameters. This will create a topic, which you can see on the
Event Streams page, as shown in Figure 7-7.
Once on the Kubernetes cluster page, pick the pricing plan and a
Kubernetes cluster will be created for you. With Lite, you do not have the
option to name the cluster. You can subscribe to the standard tier to get
more control. See Figure 7-8.
IBM Cloud will go through the process of creating the cluster and bring
you to the page shown in Figure 7-9.
Verify the current configuration. Then run the following command to set
the kubectl context to your Kubernetes cluster:
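The command is missing from the extract; with the IBM Cloud CLI and its
Kubernetes Service plugin, it is typically (the cluster name is a
placeholder):

ibmcloud ks cluster config --cluster mycluster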
Now install Knative serving. Run the following command to install the
Knative serving components:
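The commands are missing from the extract; the standard Knative Serving
install (substitute the release version you are targeting) is:

kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.0/serving-core.yaml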
Install Istio. Istio creates an ingress for the cluster. Run the following
code to install it:
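The code is missing from the extract; the standard Knative net-istio
install (again, adjust the version to match your Knative release) is:

kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.8.0/istio.yaml
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.8.0/net-istio.yaml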
Now install Magic DNS. It allows you to call the service from an FQDN
rather than an IP address.
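The command is missing from the extract; Knative's Magic DNS (sslip.io)
job is typically installed with:

kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.0/serving-default-domain.yaml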
Once the status is Running, you can deploy Spring Cloud Function.
Step 6: Create a Cloudant database. Navigate to your subscription on
the IBM Cloud and click Create Resource. Search for Cloudant.
You can use Create Database on the top of the screen. Provide the
name and partition information to create the database. Once this process
is complete, you can see the database, as shown in Figure 7-21.
To connect to Cloudant from the function, add the Cloudant client dependencies to the pom.xml file:
<dependencies>
    <dependency>
        <groupId>com.ibm.cloud</groupId>
        <artifactId>cloudant</artifactId>
        <version>0.3.0</version>
    </dependency>
    <dependency>
        <groupId>com.cloudant</groupId>
        <artifactId>cloudant-client</artifactId>
        <version>2.20.1</version>
    </dependency>
</dependencies>
Next, configure application.properties with the function definition, the Cloudant connection, the H2 datasource, and the Event Streams (Kafka) connection:
spring.cloud.function.definition=iotCloudantSupplier
cloudant.db=sensordb
cloudant.url="https://apikey-v2-w2z4fgix9dlpw626eihi4g4n9w20ntyekk7jknbyr1o:1d012fa20433d315b899a2ed90f3fefb@23204af3-2c33-4bfb-bbc4-f55bbe1902ea-bluemix.cloudantnosqldb.appdomain.cloud"
cloudant.apikey="Service credentials-1"
spring.datasource.url=jdbc:h2:mem:employee
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.h2.console.enabled=true
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.jpa.defer-datasource-initialization=true
#EventStreams
#Connection
spring.kafka.jaas.enabled=true
spring.kafka.jaas.login-module=org.apache.kafka.common.security.plain.PlainLoginModule
spring.kafka.jaas.options.username=token
spring.kafka.jaas.options.password=NhZ7i_IHpf3piG99jQpIMGtZTT3tRggmfj7UhaztdNFx
spring.kafka.bootstrap-servers=broker-2-2068cxqswxtbl1kv.kafka.svc09.us-south.eventstreams.cloud.ibm.com:9093,broker-1-2068cxqswxtbl1kv.kafka.svc09.us-south.eventstreams.cloud.ibm.com:9093,broker-0-2068cxqswxtbl1kv.kafka.svc09.us-south.eventstreams.cloud.ibm.com:9093,broker-5-2068cxqswxtbl1kv.kafka.svc09.us-south.eventstreams.cloud.ibm.com:9093,broker-3-2068cxqswxtbl1kv.kafka.svc09.us-south.eventstreams.cloud.ibm.com:9093,broker-4-2068cxqswxtbl1kv.kafka.svc09.us-south.eventstreams.cloud.ibm.com:9093
spring.kafka.properties.security.protocol=SASL_SSL
spring.kafka.properties.sasl.mechanism=PLAIN
#Producer
spring.kafka.template.default-topic=sensor-topic
spring.kafka.producer.client-id=event-streams-kafka
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
#Consumer
listener.topic=sensor-topic
spring.kafka.consumer.group-id=sensor-topic
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
You have to provide the device ID, protocol, data, and status of the device in a JSON object for Cloudant to store. See Listings 7-3 through 7-5.
Listing 7-3 shows the consumer that stores the sensor data in Cloudant; the account and API key placeholders are assumptions that you replace with your own service credentials.
package com.kubeforce.ibmiotv2;

import com.cloudant.client.api.ClientBuilder;
import com.cloudant.client.api.CloudantClient;
import com.cloudant.client.api.Database;
import org.json.JSONObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Map;
import java.util.function.Consumer;

/*
Expected message payload:
{
    "deviceid":"gas-sensor1",
    "protocol":"mqtt",
    "data":"{temp:25,pressure:35}",
    "status":"Warning"
}
*/
public class IotCloudantConsumer implements Consumer<Map<String, String>> {

    public static final Logger LOGGER =
            LoggerFactory.getLogger(IotCloudantConsumer.class);

    // Build the Cloudant client; <account> and <apikey> are placeholders
    // for your own service credentials
    private final CloudantClient client = ClientBuilder
            .account("<account>")
            .iamApiKey("<apikey>")
            .build();
    private final Database db = client.database("sensordb", false);

    @Override
    public void accept(Map<String, String> map) {
        LOGGER.info("Creating the IoT sensor info: {}", map);
        // Copy the incoming sensor fields into a JSON document
        JSONObject iotdata = new JSONObject();
        iotdata.put("deviceid", map.get("deviceid"));
        iotdata.put("protocol", map.get("protocol"));
        iotdata.put("data", map.get("data"));
        iotdata.put("status", map.get("status"));
        // Persist the document in the sensordb database
        db.save(iotdata.toMap());
    }
}
Listing 7-5 shows the supplier that publishes a sensor reading to the sensor-topic Event Streams topic and returns what it sent; the class name and method body here are a minimal sketch built around the configured function definition.
package com.kubeforce.ibmiotv2;

import org.json.JSONObject;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Supplier;

public class IotCloudantSupplier implements Supplier<List<String>> {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Override
    public List<String> get() {
        List<String> result = new CopyOnWriteArrayList<>();
        // Sample sensor reading; a real deployment would read from the device
        JSONObject iotdata = new JSONObject();
        iotdata.put("deviceid", "gas-sensor1");
        iotdata.put("protocol", "mqtt");
        iotdata.put("data", "{temp:25,pressure:35}");
        iotdata.put("status", "Warning");
        // Publish to the default topic (sensor-topic) from application.properties
        kafkaTemplate.sendDefault(iotdata.toString());
        result.add(iotdata.toString());
        return result;
    }
}
This section explored the world of IoT from an IBM Cloud and IBM Watson IoT perspective. Figure 7-24 shows the various components of the IoT platform that were covered. You also saw that you can build functions with Spring Cloud Function and deploy them on Kubernetes in IBM Cloud.
The use case here is that a patient walks into a hospital complaining of pain in the back, near the spine. The pain has been persistent, and the patient wants a quick diagnosis.
Translating this process into a cloud architecture view, you can see that Spring Cloud Function plays an important role at each step. It is therefore essential for the functions to be portable, so that the best cloud provider can be used in each case. See Figure 7-26.
AI and ML drive many retail use cases, including:
• Demand forecasting
• Visual searching
• Recommender systems
• Chatbots
• Augmented reality
• Brand management
Conversational AI solutions typically touch enterprise systems, web and mobile channels, 24/7 customer service, messaging apps, and external content sources.
7.3.1. Components of Conversational AI Solutions
1) Edge services
6) Db2 on Cloud
7) Cloudant DB
To expose an OpenAPI definition that Watson Assistant can call, add the springdoc dependency to the pom.xml file:
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-ui</artifactId>
    <version>1.6.11</version>
</dependency>
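With this dependency in place, the OpenAPI document is served at /v3/api-docs and the Swagger UI at /swagger-ui.html by default. These are the springdoc defaults, which you can override in application.properties; a minimal sketch:
# Optional overrides; the values shown are the springdoc defaults
springdoc.api-docs.path=/v3/api-docs
springdoc.swagger-ui.path=/swagger-ui.html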
Now you need to create the Spring Cloud Function code for the iMac inventory. The model allows Watson Assistant to connect to it and get a response. See Listing 7-7.
package com.kubeforce.watsonimacs;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "inventory")
public class Inventory {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    // The product and quantity columns are illustrative assumptions
    private String product;
    private Integer quantity;

    public Inventory() {
    }

    public Long getId() { return id; }
    public String getProduct() { return product; }
    public void setProduct(String product) { this.product = product; }
    public Integer getQuantity() { return quantity; }
    public void setQuantity(Integer quantity) { this.quantity = quantity; }
}
The consumer stores incoming inventory data; the class name and field mapping shown here are assumptions that follow the Inventory sketch above.
package com.kubeforce.watsonimacs;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.Map;
import java.util.function.Consumer;

public class InventoryConsumer implements Consumer<Map<String, String>> {

    public static final Logger LOGGER =
            LoggerFactory.getLogger(InventoryConsumer.class);

    @Autowired
    private InventoryRepository inventoryRepository;

    @Override
    public void accept(Map<String, String> map) {
        LOGGER.info("Creating the inventory: {}", map);
        // Map the incoming fields onto the entity and persist it
        Inventory inventory = new Inventory();
        inventory.setProduct(map.get("product"));
        inventory.setQuantity(Integer.valueOf(map.get("quantity")));
        inventoryRepository.save(inventory);
    }
}
The function looks up a single inventory row by its ID (the class name here is an assumption):
package com.kubeforce.watsonimacs;

import org.springframework.beans.factory.annotation.Autowired;
import java.util.Optional;
import java.util.function.Function;

public class InventoryFunction implements Function<Long, Inventory> {

    @Autowired
    private InventoryRepository inventoryRepository;

    @Override
    public Inventory apply(Long s) {
        Optional<Inventory> inventoryOptional = inventoryRepository.findById(s);
        // Return the matching row, or null when the ID is unknown
        return inventoryOptional.orElse(null);
    }
}
package com.kubeforce.watsonimacs;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.List;
import java.util.function.Supplier;

@Component
public class InventorySupplier implements Supplier<Inventory> {

    public static final Logger LOGGER =
            LoggerFactory.getLogger(InventorySupplier.class);

    @Autowired
    private InventoryRepository inventoryRepository;

    @Override
    public Inventory get() {
        List<Inventory> inventories = inventoryRepository.findAll();
        LOGGER.info("Getting the computer of our choice: {}", inventories);
        // Guard against an empty table instead of failing on get(0)
        return inventories.isEmpty() ? null : inventories.get(0);
    }
}
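With spring-cloud-function-web on the classpath, each function bean is exposed over HTTP at a path matching its bean name: GET for the supplier, POST for the function and the consumer. A quick smoke test, where the Knative route is a placeholder for your own:
# Fetch the first inventory row through the supplier
curl https://<your-knative-route>/inventorySupplier
# Look up inventory row 1 through the function
curl -H "Content-Type: text/plain" -d "1" https://<your-knative-route>/inventoryFunction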
Choose the name you want for your assistant and click
Next. See Figure 7-31.
5) Create an action.
Create an action that helps you interact with customers. You can do this by choosing Create Your First Action under the Create a Conversation section of the page. You can choose to use a template or create the action from scratch.
Choose the options highlighted in Figure 7-35 to create your action.
The Integrations button will take you to the Build Custom Integrations
dashboard, as shown in Figure 7-38.
Pick the function that you deployed. Once you provide the URL (with the springdoc defaults, the OpenAPI document is at https://<your-knative-route>/v3/api-docs), it will give you some options to choose from, as shown in Figure 7-40.
Figure 7-40. Add the OpenAPI URL for the deployed Spring Cloud Function
You can now retest your chat interface by going back to the chat that you created.
See GitHub at https://github.com/banup-kubeforce/Watson-imacs.git for additional information. Deploy the code in your environment and integrate it. This is a fun exercise.
7.4. Summary
This chapter looked at some real-world use cases and deployed Spring Cloud Function on IBM Cloud, demonstrating the true "write once, deploy anywhere" capability of Spring Cloud Function.
The chapter also looked at what IBM Cloud has to offer. IBM Cloud
is versatile and has many products that have been built and optimized
over the years. You saw how IBM Cloud can be used in the oil and gas,
healthcare, and retail markets.
There are many other use cases to which you can apply Spring Cloud Function; it can serve as an alternative to a microservices architecture or coexist with one. Before choosing functions over microservices, carefully analyze the cost, scalability, and performance implications of each approach.