Spring Cloud
Table of Contents
1. Features
2. Contributing
2.4. Checkstyle
1. Quick Start
3. Addressing an Instance
9. Configuration properties
1. Usage Documentation
2. Building
2.2. Documentation
3. Contributing
3.4. Checkstyle
1. Installation
1. Discovery
2. Single Sign On
3. Configuration
1.11. Endpoints
2.2. ServiceRegistry
4.1. Introduction
4.3. Configuration
5. CachedRandomPropertySource
6. Security
7. Configuration Properties
1. Quick Start
1. Introduction
2. Glossary
8. HttpHeadersFilters
2. Starters
1. Preface
6. Binders
9. Testing
Preface
3.1. Creating the Spring Task Project using Spring Initializr
Features
2. Configuration
Batch
2.1. Notes on Developing a Batch-partitioned application for the Kubernetes Platform
2.2. Notes on Developing a Batch-partitioned Application for the Cloud Foundry Platform
Appendices
16. Session token lifecycle management (renewal, re-login and revocation)
Spring Cloud provides tools for developers to quickly build some of the common
patterns in distributed systems (e.g. configuration management, service
discovery, circuit breakers, intelligent routing, micro-proxy, control bus).
Coordination of distributed systems leads to boilerplate patterns, and using
Spring Cloud developers can quickly stand up services and applications that
implement those patterns. They work well in any distributed environment,
including the developer’s own laptop, bare metal data centres, and managed
platforms such as Cloud Foundry.
1. Features
Spring Cloud focuses on providing a good out-of-the-box experience for typical use cases and an
extensibility mechanism to cover others.
• Distributed/versioned configuration
• Routing
• Service-to-service calls
• Load balancing
• Circuit Breakers
• Distributed messaging
spring-boot 2.4.6
spring-cloud-build 3.0.3
spring-cloud-bus 3.0.3
spring-cloud-circuitbreaker 2.0.2
spring-cloud-cli 3.0.3
spring-cloud-cloudfoundry 3.0.2
spring-cloud-commons 3.0.3
spring-cloud-config 3.0.4
spring-cloud-consul 3.0.3
spring-cloud-contract 3.0.3
spring-cloud-function 3.1.3
spring-cloud-gateway 3.0.3
spring-cloud-kubernetes 2.0.3
spring-cloud-netflix 3.0.3
spring-cloud-openfeign 3.0.3
spring-cloud-sleuth 3.0.3
spring-cloud-stream 3.1.3
spring-cloud-task 2.3.2
spring-cloud-vault 3.0.3
spring-cloud-zookeeper 3.0.3
Spring Cloud Build
Spring Cloud Build is a common utility project for Spring Cloud to use for plugin and dependency
management.
$ mvn deploy -DaltSnapshotDeploymentRepository=repo.spring.io::default::https://repo.spring.io/snapshot

$ mvn deploy -DaltReleaseDeploymentRepository=repo.spring.io::default::https://repo.spring.io/release

$ mvn deploy -DaltReleaseDeploymentRepository=bintray::default::https://api.bintray.com/maven/spring/jars/org.springframework.cloud:build
(the "central" profile is available for all projects in Spring Cloud and it sets up the gpg jar signing,
and the repository has to be specified separately for this project because it is a parent of the starter
parent which users in turn have as their own parent).
2. Contributing
Spring Cloud is released under the non-restrictive Apache 2.0 license and follows a very standard
GitHub development process, using the GitHub tracker for issues and merging pull requests into master.
If you want to contribute even something trivial, please do not hesitate, but follow the guidelines
below.
• Use the Spring Framework code format conventions. If you use Eclipse you can import
formatter settings using the eclipse-code-formatter.xml file from the Spring Cloud Build project.
If using IntelliJ, you can use the Eclipse Code Formatter Plugin to import the same file.
• Make sure all new .java files have a simple Javadoc class comment with at least an @author
tag identifying you, and preferably at least a paragraph on what the class is for.
• Add the ASF license header comment to all new .java files (copy from existing files in the
project)
• Add yourself as an @author to the .java files that you modify substantially (more than cosmetic
changes).
• Add some Javadocs and, if you change the namespace, some XSD doc elements.
• If no-one else is using your branch, please rebase it against the current master (or other target
branch in the main project).
• When writing a commit message, please follow these conventions. If you are fixing an existing
issue, please add Fixes gh-XXXX at the end of the commit message (where XXXX is the issue
number).
2.4. Checkstyle
Spring Cloud Build comes with a set of checkstyle rules. You can find them in the
spring-cloud-build-tools module. The most notable files under the module are:
spring-cloud-build-tools/
└── src
    ├── checkstyle
    │   └── checkstyle-suppressions.xml ③
    └── main
        └── resources
            ├── checkstyle-header.txt ②
            └── checkstyle.xml ①

① Default Checkstyle rules
② File header setup
③ Default suppression rules
Checkstyle rules are disabled by default. To add checkstyle to your project just define the
following properties and plugins.
pom.xml
<properties>
    <maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError> ①
    <maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation> ②
    <maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory> ③
</properties>
<build>
    <plugins>
        <plugin> ④
            <groupId>io.spring.javaformat</groupId>
            <artifactId>spring-javaformat-maven-plugin</artifactId>
        </plugin>
        <plugin> ⑤
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-checkstyle-plugin</artifactId>
        </plugin>
    </plugins>
</build>
<reporting>
    <plugins>
        <plugin> ⑤
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-checkstyle-plugin</artifactId>
        </plugin>
    </plugins>
</reporting>
① Fails the build upon Checkstyle errors
② Fails the build upon Checkstyle violations
③ Checkstyle also analyzes the test sources
④ Add the Spring Java Format plugin that will reformat your code to pass most of the Checkstyle
formatting rules
⑤ Add the Checkstyle plugin to your build and reporting phases
If you need to suppress some rules (e.g. a longer line length is needed), it’s enough for you to
define a file under ${project.root}/src/checkstyle/checkstyle-suppressions.xml with your
suppressions. Example:
projectRoot/src/checkstyle/checkstyle-suppressions.xml
<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC
"-//Puppy Crawl//DTD Suppressions 1.1//EN"
"https://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
<suppressions>
<suppress files=".*ConfigServerApplication\.java"
checks="HideUtilityClassConstructor"/>
<suppress files=".*ConfigClientWatch\.java" checks="LineLengthCheck"/>
</suppressions>
$ curl https://raw.githubusercontent.com/spring-cloud/spring-cloud-build/master/.editorconfig -o .editorconfig
$ touch .springformat
In order to set up IntelliJ you should import our coding conventions and inspection profiles and set up
the Checkstyle plugin. The following files can be found in the Spring Cloud Build project:
spring-cloud-build-tools/
└── src
├── checkstyle
│ └── checkstyle-suppressions.xml ③
└── main
└── resources
├── checkstyle-header.txt ②
├── checkstyle.xml ①
└── intellij
├── Intellij_Project_Defaults.xml ④
└── Intellij_Spring_Boot_Java_Conventions.xml ⑤
④ Project defaults for IntelliJ that apply most of the Checkstyle rules
⑤ Project style conventions for IntelliJ that apply most of the Checkstyle rules
Figure 1. Code style
Go to File → Settings → Editor → Code style. There, click on the icon next to the Scheme section.
There, click on the Import Scheme value and pick the IntelliJ IDEA code style XML option. Import
the spring-cloud-build-tools/src/main/resources/intellij/Intellij_Spring_Boot_Java_Conventions.xml file.
Figure 2. Inspection profiles
Go to File → Settings → Editor → Inspections. There, click on the icon next to the Profile section.
There, click on Import Profile and import the
spring-cloud-build-tools/src/main/resources/intellij/Intellij_Project_Defaults.xml file.
Checkstyle
To have IntelliJ work with Checkstyle, you have to install the Checkstyle plugin. It’s advisable to also
install the Assertions2Assertj plugin to automatically convert JUnit assertions.
Go to File → Settings → Other settings → Checkstyle. There, click on the + icon in the Configuration
file section. There, you’ll have to define where the Checkstyle rules should be picked from. In the
image above, we’ve picked the rules from the cloned Spring Cloud Build repository. However, you
can point to the Spring Cloud Build’s GitHub repository (e.g. for the checkstyle.xml:
raw.githubusercontent.com/spring-cloud/spring-cloud-build/master/spring-cloud-build-tools/src/main/resources/checkstyle.xml).
We need to provide the following variables:
Remember to set the Scan Scope to All sources since we apply checkstyle rules for
production and test sources.
① This plugin downloads and sets up all the Git information of the project
④ This plugin generates an adoc file with all the configuration properties from the classpath
⑥ This plugin is required to copy resources into their proper final destinations, to generate the main
README.adoc, and to assert that no files use unresolved links
⑦ This plugin ensures that the generated zip docs will get published
In order for the build to generate the adoc file with all your configuration properties, your docs
module should contain, on the classpath, all the dependencies that you would want to scan for
configuration properties. The file will be output to
${docsModule}/src/main/asciidoc/_configprops.adoc (configurable via the configprops.path
property).
If you want to modify which of the configuration properties are put in the table, you can tweak the
configprops.inclusionPattern pattern to include only a subset of the properties (e.g.
<configprops.inclusionPattern>spring.sleuth.*</configprops.inclusionPattern>).
Spring Cloud Build Docs comes with a set of attributes for asciidoctor that you can reuse.
<attributes>
    <docinfo>shared</docinfo>
    <allow-uri-read>true</allow-uri-read>
    <nofooter/>
    <toc>left</toc>
    <toc-levels>4</toc-levels>
    <sectlinks>true</sectlinks>
    <sources-root>${project.basedir}/src@</sources-root>
    <asciidoc-sources-root>${project.basedir}/src/main/asciidoc@</asciidoc-sources-root>
    <generated-resources-root>${project.basedir}/target/generated-resources@</generated-resources-root>
    <!-- Use this attribute to reference code from another module -->
    <!-- Note the @ at the end, lowering the precedence of the attribute -->
    <project-root>${maven.multiModuleProjectDirectory}@</project-root>
    <!-- It's mandatory for you to pass the docs.main property -->
    <github-repo>${docs.main}@</github-repo>
    <github-project>https://github.com/spring-cloud/${docs.main}@</github-project>
    <github-raw>https://raw.githubusercontent.com/spring-cloud/${docs.main}/${github-tag}@</github-raw>
    <github-code>https://github.com/spring-cloud/${docs.main}/tree/${github-tag}@</github-code>
    <github-issues>https://github.com/spring-cloud/${docs.main}/issues/@</github-issues>
    <github-wiki>https://github.com/spring-cloud/${docs.main}/wiki@</github-wiki>
    <github-master-code>https://github.com/spring-cloud/${docs.main}/tree/master@</github-master-code>
    <index-link>${index-link}@</index-link>
This means that the project contains 3 guides that would correspond to the following guides in
Spring Guides org.
• github.com/spring-guides/gs-guide1
• github.com/spring-guides/gs-guide2
• github.com/spring-guides/gs-guide3
If you deploy your project with the -Pguides profile, then for GA project versions we clone
gs-guide1, gs-guide2, and gs-guide3 and update their contents with the ones under your guides
project.
You can skip this by either not adding the guides profile, or passing the -DskipGuides system
property when the profile is turned on.
You can configure the project version passed to guides via the guides-project.version (defaults to
${project.version}). The phase at which guides get updated can be configured by
guides-update.phase (defaults to deploy).
Spring Cloud is released under the non-restrictive Apache 2.0 license. If you would
like to contribute to this section of the documentation or if you find an error,
please find the source code and issue trackers in the project at github.
1. Quick Start
Spring Cloud Bus works by adding Spring Boot autoconfiguration if it detects itself on the classpath.
To enable the bus, add spring-cloud-starter-bus-amqp or spring-cloud-starter-bus-kafka to your
dependency management. Spring Cloud takes care of the rest. Make sure the broker (RabbitMQ or
Kafka) is available and configured. When running on localhost, you need not do anything. If you
run remotely, use Spring Cloud Connectors or Spring Boot conventions to define the broker
credentials, as shown in the following example for Rabbit:
application.yml
spring:
  rabbitmq:
    host: mybroker.com
    port: 5672
    username: user
    password: secret
The bus currently supports sending messages to all nodes listening or all nodes for a particular
service (as defined by Eureka). The /bus/* actuator namespace has some HTTP endpoints.
Currently, two are implemented. The first, /bus/env, sends key/value pairs to update each node’s
Spring Environment. The second, /bus/refresh, reloads each application’s configuration, as though
they had all been pinged on their /refresh endpoint.
The Spring Cloud Bus starters cover Rabbit and Kafka, because those are the two
most common implementations. However, Spring Cloud Stream is quite flexible,
and the binder works with spring-cloud-bus.
2. Bus Endpoints
Spring Cloud Bus provides two endpoints, /actuator/busrefresh and /actuator/busenv that
correspond to individual actuator endpoints in Spring Cloud Commons, /actuator/refresh and
/actuator/env respectively.
To expose the /actuator/busrefresh endpoint, you need to add the following configuration to your
application:
management.endpoints.web.exposure.include=busrefresh
To expose the /actuator/busenv endpoint, you need to add the following configuration to your
application:
management.endpoints.web.exposure.include=busenv
The /actuator/busenv endpoint accepts POST requests with the following shape:
{
"name": "key1",
"value": "value1"
}
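For illustration, such a request could also be sent programmatically. The following is a minimal
sketch using RestTemplate; the localhost:8080 address and the key1/value1 pair are placeholder
assumptions, not part of the original example:

import java.util.HashMap;
import java.util.Map;

import org.springframework.web.client.RestTemplate;

public class BusEnvClient {

    public static void main(String[] args) {
        RestTemplate rest = new RestTemplate();
        Map<String, String> body = new HashMap<>();
        body.put("name", "key1");    // property key to update
        body.put("value", "value1"); // new value, broadcast to every node on the bus
        rest.postForEntity("http://localhost:8080/actuator/busenv", body, Void.class);
    }
}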
3. Addressing an Instance
Each instance of the application has a service ID, whose value can be set with spring.cloud.bus.id
and whose value is expected to be a colon-separated list of identifiers, in order from least specific to
most specific. The default value is constructed from the environment as a combination of the
spring.application.name and server.port (or spring.application.index, if set). The default value of
the ID is constructed in the form of app:index:id, where:
• app is the vcap.application.name, if it exists, or spring.application.name otherwise.
• index is the vcap.application.instance_index, if it exists, or spring.application.index,
local.server.port, server.port, or 0 (in that order) otherwise.
• id is the vcap.application.instance_id, if it exists, or a random value otherwise.
To learn more about how to customize the message broker settings, consult the Spring Cloud
Stream documentation.
The preceding trace shows that a RefreshRemoteApplicationEvent was sent from customers:9000,
broadcast to all services, and received (acked) by customers:9000 and stores:8081.
To handle the ack signals yourself, you could add an @EventListener for the
AckRemoteApplicationEvent and SentApplicationEvent types to your app (and enable tracing).
Alternatively, you could tap into the TraceRepository and mine the data from there.
Any Bus application can trace acks. However, sometimes, it is useful to do this in a
central service that can do more complex queries on the data or forward it to a
specialized tracing service.
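A minimal sketch of such a listener follows; the AckTracingConfiguration name and the println
calls are illustrative, while the event types are the ones named above:

package com.acme;

import org.springframework.cloud.bus.event.AckRemoteApplicationEvent;
import org.springframework.cloud.bus.event.SentApplicationEvent;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.EventListener;

@Configuration
public class AckTracingConfiguration {

    @EventListener
    public void onAck(AckRemoteApplicationEvent event) {
        // Record which instance acknowledged an event
        System.out.println("Ack from " + event.getOriginService());
    }

    @EventListener
    public void onSent(SentApplicationEvent event) {
        // Record that an event was sent
        System.out.println("Sent event " + event.getId());
    }
}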
Both the producer and the consumer need access to the class definition.
package com.acme;
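// The body of the class was not shown above. A minimal sketch follows; the
// MyCustomRemoteEvent name and the message payload are illustrative, and the
// exact superclass constructor signature varies between Bus versions.

import org.springframework.cloud.bus.event.Destination;
import org.springframework.cloud.bus.event.RemoteApplicationEvent;

public class MyCustomRemoteEvent extends RemoteApplicationEvent {

    private String message; // illustrative payload

    // A no-arg constructor is required for JSON deserialization
    public MyCustomRemoteEvent() {
    }

    public MyCustomRemoteEvent(Object source, String originService, Destination destination, String message) {
        super(source, originService, destination);
        this.message = message;
    }

    public String getMessage() {
        return this.message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}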
You can register that event with the deserializer in the following way:
package com.acme;
@Configuration
@RemoteApplicationEventScan
public class BusConfiguration {
...
}
Without specifying a value, the package of the class where @RemoteApplicationEventScan is used is
registered. In this example, com.acme is registered by using the package of BusConfiguration.
You can also explicitly specify the packages to scan by using the value, basePackages or
basePackageClasses properties on @RemoteApplicationEventScan, as shown in the following example:
package com.acme;
@Configuration
//@RemoteApplicationEventScan({"com.acme", "foo.bar"})
//@RemoteApplicationEventScan(basePackages = {"com.acme", "foo.bar", "fizz.buzz"})
@RemoteApplicationEventScan(basePackageClasses = BusConfiguration.class)
public class BusConfiguration {
...
}
All of the preceding examples of @RemoteApplicationEventScan are equivalent, in that the com.acme
package is registered by explicitly specifying the packages on @RemoteApplicationEventScan.
9. Configuration properties
To see the list of all Bus related configuration properties please check the Appendix page.
1. Usage Documentation
The Spring Cloud CircuitBreaker project contains implementations for Resilience4J and Spring
Retry. The APIs implemented in Spring Cloud CircuitBreaker live in Spring Cloud Commons. The
usage documentation for these APIs is located in the Spring Cloud Commons documentation.
There are two starters for the Resilience4J implementations, one for reactive applications and one
for non-reactive applications.
• org.springframework.cloud:spring-cloud-starter-circuitbreaker-resilience4j - non-reactive
applications
• org.springframework.cloud:spring-cloud-starter-circuitbreaker-reactor-resilience4j - reactive
applications
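For context, circuit breakers created by either implementation are used through the
CircuitBreakerFactory API from Spring Cloud Commons. The following is a minimal usage sketch,
not taken from the original text; the "slow" id, the URL, and the fallback value are illustrative:

import org.springframework.cloud.client.circuitbreaker.CircuitBreakerFactory;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class DemoService {

    private final CircuitBreakerFactory circuitBreakerFactory;
    private final RestTemplate rest = new RestTemplate();

    public DemoService(CircuitBreakerFactory circuitBreakerFactory) {
        this.circuitBreakerFactory = circuitBreakerFactory;
    }

    public String readData() {
        // Run the remote call through the "slow" circuit breaker, with a fallback
        return circuitBreakerFactory.create("slow")
                .run(() -> rest.getForObject("http://example.com/data", String.class),
                        throwable -> "fallback");
    }
}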
1.1.2. Auto-Configuration
To provide a default configuration for all of your circuit breakers, create a Customizer bean that is
passed a Resilience4JCircuitBreakerFactory or ReactiveResilience4JCircuitBreakerFactory. The
configureDefault method can be used to provide a default configuration.
@Bean
public Customizer<Resilience4JCircuitBreakerFactory> defaultCustomizer() {
    return factory -> factory.configureDefault(id -> new Resilience4JConfigBuilder(id)
            .timeLimiterConfig(TimeLimiterConfig.custom().timeoutDuration(Duration.ofSeconds(4)).build())
            .circuitBreakerConfig(CircuitBreakerConfig.ofDefaults())
            .build());
}
Reactive Example
@Bean
public Customizer<ReactiveResilience4JCircuitBreakerFactory> defaultCustomizer() {
    return factory -> factory.configureDefault(id -> new Resilience4JConfigBuilder(id)
            .circuitBreakerConfig(CircuitBreakerConfig.ofDefaults())
            .timeLimiterConfig(TimeLimiterConfig.custom().timeoutDuration(Duration.ofSeconds(4)).build())
            .build());
}
Similarly to providing a default configuration, you can create a Customizer bean that is passed a
Resilience4JCircuitBreakerFactory or ReactiveResilience4JCircuitBreakerFactory.
@Bean
public Customizer<Resilience4JCircuitBreakerFactory> slowCustomizer() {
    return factory -> factory.configure(builder -> builder
            .circuitBreakerConfig(CircuitBreakerConfig.ofDefaults())
            .timeLimiterConfig(TimeLimiterConfig.custom().timeoutDuration(Duration.ofSeconds(2)).build()),
            "slow");
}
In addition to configuring the circuit breaker that is created you can also customize the circuit
breaker after it has been created but before it is returned to the caller. To do this you can use the
addCircuitBreakerCustomizer method. This can be useful for adding event handlers to Resilience4J
circuit breakers.
@Bean
public Customizer<Resilience4JCircuitBreakerFactory> slowCustomizer() {
    return factory -> factory.addCircuitBreakerCustomizer(circuitBreaker -> circuitBreaker.getEventPublisher()
            .onError(normalFluxErrorConsumer)
            .onSuccess(normalFluxSuccessConsumer),
            "normalflux");
}
Reactive Example
@Bean
public Customizer<ReactiveResilience4JCircuitBreakerFactory> slowCustomizer() {
    return factory -> {
        factory.configure(builder -> builder
                .timeLimiterConfig(TimeLimiterConfig.custom().timeoutDuration(Duration.ofSeconds(2)).build())
                .circuitBreakerConfig(CircuitBreakerConfig.ofDefaults()),
                "slow", "slowflux");
        factory.addCircuitBreakerCustomizer(circuitBreaker -> circuitBreaker.getEventPublisher()
                .onError(normalFluxErrorConsumer)
                .onSuccess(normalFluxSuccessConsumer),
                "normalflux");
    };
}
You can configure CircuitBreaker and TimeLimiter instances in your application’s configuration
properties file. Property configuration has higher priority than Java Customizer configuration.
resilience4j.circuitbreaker:
  instances:
    backendA:
      registerHealthIndicator: true
      slidingWindowSize: 100
    backendB:
      registerHealthIndicator: true
      slidingWindowSize: 10
      permittedNumberOfCallsInHalfOpenState: 3
      slidingWindowType: TIME_BASED
      recordFailurePredicate: io.github.robwin.exception.RecordFailurePredicate

resilience4j.timelimiter:
  instances:
    backendA:
      timeoutDuration: 2s
      cancelRunningFuture: true
    backendB:
      timeoutDuration: 1s
      cancelRunningFuture: false
For more information on Resilience4j property configuration, see Resilience4J Spring Boot 2
Configuration.
If resilience4j-bulkhead is on the classpath, Spring Cloud CircuitBreaker will wrap all methods with
a Resilience4j Bulkhead. You can disable the Resilience4j Bulkhead by setting
spring.cloud.circuitbreaker.bulkhead.resilience4j.enabled to false.
@Bean
public Customizer<Resilience4jBulkheadProvider> slowBulkheadProviderCustomizer() {
    return provider -> provider.configure(builder -> builder
            .bulkheadConfig(BulkheadConfig.custom().maxConcurrentCalls(1).build())
            .threadPoolBulkheadConfig(ThreadPoolBulkheadConfig.ofDefaults()),
            "slowBulkhead");
}
In addition to configuring the Bulkhead that is created, you can also customize the bulkhead and
thread pool bulkhead after they have been created but before they are returned to the caller. To do this,
you can use the addBulkheadCustomizer and addThreadPoolBulkheadCustomizer methods.
Bulkhead Example
@Bean
public Customizer<Resilience4jBulkheadProvider> customizer() {
    return provider -> provider.addBulkheadCustomizer(bulkhead -> bulkhead.getEventPublisher()
            .onCallRejected(slowRejectedConsumer)
            .onCallFinished(slowFinishedConsumer),
            "slowBulkhead");
}
resilience4j.thread-pool-bulkhead:
  instances:
    backendA:
      maxThreadPoolSize: 1
      coreThreadPoolSize: 1

resilience4j.bulkhead:
  instances:
    backendB:
      maxConcurrentCalls: 10
For more information on the Resilience4j property configuration, see Resilience4J Spring Boot 2
Configuration.
Spring Cloud Circuit Breaker Resilience4j includes auto-configuration to set up metrics collection as
long as the right dependencies are on the classpath. To enable metrics collection you must include
org.springframework.boot:spring-boot-starter-actuator and
io.github.resilience4j:resilience4j-micrometer. For more information on the metrics that get
produced when these dependencies are present, see the Resilience4j documentation.
To provide a default configuration for all of your circuit breakers, create a Customizer bean that is
passed a SpringRetryCircuitBreakerFactory. The configureDefault method can be used to provide a
default configuration.
@Bean
public Customizer<SpringRetryCircuitBreakerFactory> defaultCustomizer() {
    return factory -> factory.configureDefault(id -> new SpringRetryConfigBuilder(id)
            .retryPolicy(new TimeoutRetryPolicy())
            .build());
}
Similarly to providing a default configuration, you can create a Customizer bean that is passed a
SpringRetryCircuitBreakerFactory.
@Bean
public Customizer<SpringRetryCircuitBreakerFactory> slowCustomizer() {
    return factory -> factory.configure(builder -> builder.retryPolicy(new SimpleRetryPolicy(1)).build(), "slow");
}
In addition to configuring the circuit breaker that is created you can also customize the circuit
breaker after it has been created but before it is returned to the caller. To do this you can use the
addRetryTemplateCustomizers method. This can be useful for adding event handlers to the
RetryTemplate.
@Bean
public Customizer<SpringRetryCircuitBreakerFactory> slowCustomizer() {
    return factory -> factory.addRetryTemplateCustomizers(retryTemplate -> retryTemplate.registerListener(new RetryListener() {

        @Override
        public <T, E extends Throwable> boolean open(RetryContext context, RetryCallback<T, E> callback) {
            return false;
        }

        @Override
        public <T, E extends Throwable> void close(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {
        }

        @Override
        public <T, E extends Throwable> void onError(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {
        }
    }));
}
2. Building
2.1. Basic Compile and Test
To build the source you will need to install JDK 1.8.
Spring Cloud uses Maven for most build-related activities, and you should be able to get off the
ground quite quickly by cloning the project you are interested in and typing
$ ./mvnw install
You can also install Maven (>=3.3.3) yourself and run the mvn command in place of
./mvnw in the examples below. If you do that you also might need to add -P spring
if your local Maven settings do not contain repository declarations for spring pre-
release artifacts.
Be aware that you might need to increase the amount of memory available to
Maven by setting a MAVEN_OPTS environment variable with a value like -Xmx512m
-XX:MaxPermSize=128m. We try to cover this in the .mvn configuration, so if you find
you have to do it to make a build succeed, please raise a ticket to get the settings
added to source control.
For hints on how to build the project look in .travis.yml if there is one. There should be a "script"
and maybe "install" command. Also look at the "services" section to see if any services need to be
running locally (e.g. mongo or rabbit). Ignore the git-related bits that you might find in
"before_install" since they’re related to setting git credentials and you already have those.
The projects that require middleware generally include a docker-compose.yml, so consider using
Docker Compose to run the middleware servers in Docker containers. See the README in the scripts
demo repository for specific instructions about the common cases of mongo, rabbit and redis.
If all else fails, build with the command from .travis.yml (usually ./mvnw install).
2.2. Documentation
The spring-cloud-build module has a "docs" profile, and if you switch that on it will try to build
asciidoc sources from src/main/asciidoc. As part of that process it will look for a README.adoc and
process it by loading all the includes, but not parsing or rendering it, just copying it to
${main.basedir} (defaults to ${basedir}, i.e. the root of the project). If there are any changes in the
README it will then show up after a Maven build as a modified file in the correct place. Just commit
it and push the change.
Spring Cloud projects require the 'spring' Maven profile to be activated to resolve the spring
milestone and snapshot repositories. Use your preferred IDE to set this profile to be active, or you
may experience build errors.
We recommend the m2eclipse eclipse plugin when working with eclipse. If you don’t already have
m2eclipse installed it is available from the "eclipse marketplace".
Older versions of m2e do not support Maven 3.3, so once the projects are imported
into Eclipse you will also need to tell m2eclipse to use the right profile for the
projects. If you see many different errors related to the POMs in the projects, check
that you have an up to date installation. If you can’t upgrade m2e, add the "spring"
profile to your settings.xml. Alternatively you can copy the repository settings
from the "spring" profile of the parent pom into your settings.xml.
If you prefer not to use m2eclipse you can generate eclipse project metadata using the following
command:
$ ./mvnw eclipse:eclipse
The generated eclipse projects can be imported by selecting import existing projects from the file
menu.
1. Installation
To install, make sure you have Spring Boot CLI (2.0.0 or better):
$ spring version
Spring CLI v2.2.3.RELEASE
$ mvn install
$ spring install org.springframework.cloud:spring-cloud-cli:2.2.0.RELEASE
Prerequisites: to use the encryption and decryption features you need the full-
strength JCE installed in your JVM (it’s not there by default). You can download the
"Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files"
from Oracle, and follow instructions for installation (essentially replace the 2
policy files in the JRE lib/security directory with the ones that you downloaded).
Each of these apps can be configured using a local YAML file with the same name (in the current
working directory or a subdirectory called "config" or in ~/.spring-cloud). E.g. in configserver.yml
you might want to do something like this to locate a local git repository for the backend:
configserver.yml
spring:
  profiles:
    active: git
  cloud:
    config:
      server:
        git:
          uri: file://${user.home}/dev/demo/config-repo
E.g. in the Stub Runner app you could fetch stubs from your local .m2 in the following way:
stubrunner.yml
stubrunner:
  workOffline: true
  ids:
    - com.example:beer-api-producer:+:9876
spring:
  cloud:
    launcher:
      deployables:
        source:
          coordinates: maven://com.example:source:0.0.1-SNAPSHOT
          port: 7000
        sink:
          coordinates: maven://com.example:sink:0.0.1-SNAPSHOT
          port: 7001
app.groovy
@EnableEurekaServer
class Eureka {}
which you can run from the command line like this:

$ spring run app.groovy
To include additional dependencies, it often suffices just to add the appropriate feature-enabling
annotation, e.g. @EnableConfigServer, @EnableOAuth2Sso or @EnableEurekaClient. To manually include
a dependency you can use a @Grab with the special "Spring Boot" short-style artifact co-ordinates, i.e.
with just the artifact ID (no need for group or version information), e.g. to set up a client app to
listen on AMQP for management events from the Spring Cloud Bus:
app.groovy
@Grab('spring-cloud-starter-bus-amqp')
@RestController
class Service {
@RequestMapping('/')
def home() { [message: 'Hello'] }
}
To use a key in a file (e.g. an RSA public key for encryption), prepend the key value with "@" and
provide the file path, e.g.
The spring-cloud-cloudfoundry-web project provides basic support for some enhanced features of
webapps in Cloud Foundry: binding automatically to single-sign-on services and optionally
enabling sticky routing for discovery.
1. Discovery
Here’s a Spring Cloud app with Cloud Foundry discovery:
app.groovy
@Grab('org.springframework.cloud:spring-cloud-cloudfoundry')
@RestController
@EnableDiscoveryClient
class Application {

    @Autowired
    DiscoveryClient client

    @RequestMapping('/')
    String home() {
        'Hello from ' + client.getLocalServiceInstance()
    }

}
The DiscoveryClient can list all the apps in a space, according to the credentials it is authenticated
with, where the space defaults to the one the client is running in (if any). If neither org nor space
is configured, they default per the user’s profile in Cloud Foundry.
2. Single Sign On
All of the OAuth2 SSO and resource server features moved to Spring Boot in
version 1.3. You can find documentation in the Spring Boot user guide.
This project provides automatic binding from CloudFoundry service credentials to the Spring Boot
features. If you have a CloudFoundry service called "sso", for instance, with credentials containing
"client_id", "client_secret" and "auth_domain", it will bind automatically to the Spring OAuth2 client
that you enable with @EnableOAuth2Sso (from Spring Boot). The name of the service can be
parameterized using spring.oauth2.sso.serviceId.
3. Configuration
To see the list of all Spring Cloud Cloud Foundry related configuration properties, please check the
Appendix page.
Many of those features are covered by Spring Boot, on which Spring Cloud builds. Some more
features are delivered by Spring Cloud as two libraries: Spring Cloud Context and Spring Cloud
Commons. Spring Cloud Context provides utilities and special services for the ApplicationContext of
a Spring Cloud application (bootstrap context, encryption, refresh scope, and environment
endpoints). Spring Cloud Commons is a set of abstractions and common classes used in different
Spring Cloud implementations (such as Spring Cloud Netflix and Spring Cloud Consul).
If you get an exception due to "Illegal key size" and you use Sun’s JDK, you need to install the Java
Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files. See the following links
for more information:
• Java 6 JCE
• Java 7 JCE
• Java 8 JCE
Extract the files into the JDK/jre/lib/security folder for whichever version of JRE/JDK x64/x86 you
use.
Spring Cloud is released under the non-restrictive Apache 2.0 license. If you would
like to contribute to this section of the documentation or if you find an error, you
can find the source code and issue trackers for the project at github.
The bootstrap context uses a different convention for locating external configuration than the main
application context. Instead of application.yml (or .properties), you can use bootstrap.yml, keeping
the external configuration for bootstrap and main context nicely separate. The following listing
shows an example:
Example 1. bootstrap.yml
spring:
  application:
    name: foo
  cloud:
    config:
      uri: ${SPRING_CONFIG_URI:http://localhost:8888}
If your application needs any application-specific configuration from the server, it is a good idea to
set the spring.application.name (in bootstrap.yml or application.yml). For the property
spring.application.name to be used as the application’s context ID, you must set it in
bootstrap.[properties | yml].
If you want to retrieve specific profile configuration, you should also set spring.profiles.active in
bootstrap.[properties | yml].
• “bootstrap”: If any PropertySourceLocators are found in the bootstrap context and if they have
non-empty properties, an optional CompositePropertySource appears with high priority. An
example would be properties from the Spring Cloud Config Server. See “Customizing the
Bootstrap Property Sources” for how to customize the contents of this property source.
• “applicationConfig: [classpath:bootstrap.yml]” (and related files if Spring profiles are active): If
you have a bootstrap.yml (or .properties), those properties are used to configure the bootstrap
context. Then they get added to the child context when its parent is set. They have lower
precedence than the application.yml (or .properties) and any other property sources that are
added to the child as a normal part of the process of creating a Spring Boot application. See
“Changing the Location of Bootstrap Properties” for how to customize the contents of these
property sources.
Because of the ordering rules of property sources, the “bootstrap” entries take precedence.
However, note that these do not contain any data from bootstrap.yml, which has very low
precedence but can be used to set defaults.
You can extend the context hierarchy by setting the parent context of any ApplicationContext you
create — for example, by using its own interface or with the SpringApplicationBuilder convenience
methods (parent(), child() and sibling()). The bootstrap context is the parent of the most senior
ancestor that you create yourself. Every context in the hierarchy has its own “bootstrap” (possibly
empty) property source to avoid promoting values inadvertently from parents down to their
descendants. If there is a config server, every context in the hierarchy can also (in principle) have a
different spring.application.name and, hence, a different remote property source. Normal Spring
application context behavior rules apply to property resolution: properties from a child context
override those in the parent, by name and also by property source name. (If the child has a
property source with the same name as the parent, the value from the parent is not included in the
child).
Note that the SpringApplicationBuilder lets you share an Environment amongst the whole hierarchy,
but that is not the default. Thus, sibling contexts (in particular) do not need to have the same
profiles or property sources, even though they may share common values with their parent.
Those properties behave like the spring.config.* variants with the same name. With
spring.cloud.bootstrap.location the default locations are replaced and only the specified ones are
used. To add locations to the list of default ones, spring.cloud.bootstrap.additional-location could
be used. In fact, they are used to set up the bootstrap ApplicationContext by setting those properties
in its Environment. If there is an active profile (from spring.profiles.active or through the
Environment API in the context you are building), properties in that profile get loaded as well, the
same as in a regular Spring Boot app — for example, from bootstrap-development.properties for a
development profile.
When adding custom BootstrapConfiguration, be careful that the classes you add
are not @ComponentScanned by mistake into your “main” application context, where
they might not be needed. Use a separate package name for boot configuration
classes and make sure that name is not already covered by your @ComponentScan or
@SpringBootApplication annotated configuration classes.
The bootstrap process ends by injecting initializers into the main SpringApplication instance (which
is the normal Spring Boot startup sequence, whether it runs as a standalone application or is
deployed in an application server). First, a bootstrap context is created from the classes found in
spring.factories. Then, all @Beans of type ApplicationContextInitializer are added to the main
SpringApplication before it is started.
@Override
public PropertySource<?> locate(Environment environment) {
    return new MapPropertySource("customProperty",
            Collections.<String, Object>singletonMap("property.from.sample.custom.source", "worked as intended"));
}
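For completeness, here is a sketch of the full locator class that such a method could live in. The
sample.custom.CustomPropertySourceLocator name matches the spring.factories entry below, and the
@Configuration annotation mirrors the pattern used by Spring Cloud’s own locators:

package sample.custom;

import java.util.Collections;

import org.springframework.cloud.bootstrap.config.PropertySourceLocator;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;
import org.springframework.core.env.MapPropertySource;
import org.springframework.core.env.PropertySource;

@Configuration
public class CustomPropertySourceLocator implements PropertySourceLocator {

    @Override
    public PropertySource<?> locate(Environment environment) {
        return new MapPropertySource("customProperty",
                Collections.<String, Object>singletonMap("property.from.sample.custom.source", "worked as intended"));
    }
}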
The Environment that is passed in is the one for the ApplicationContext about to be created — in
other words, the one for which we supply additional property sources. It already has its normal
Spring Boot-provided property sources, so you can use those to locate a property source specific to
this Environment (for example, by keying it on spring.application.name, as is done in the default
Spring Cloud Config Server property source locator).
If you create a jar with this class in it and then add a META-INF/spring.factories containing the
following setting, the customProperty PropertySource appears in any application that includes that
jar on its classpath:
org.springframework.cloud.bootstrap.BootstrapConfiguration=sample.custom.CustomPropertySourceLocator
For Spring Cloud to initialize logging configuration properly, you cannot use a
custom prefix. For example, using custom.logging.logpath is not recognized by
Spring Cloud when initializing the logging system.
Note that the Spring Cloud Config Client does not, by default, poll for changes in the Environment.
Generally, we would not recommend that approach for detecting changes (although you could set it
up with a @Scheduled annotation). If you have a scaled-out client application, it is better to broadcast
the EnvironmentChangeEvent to all the instances instead of having them poll for changes (for
example, by using the Spring Cloud Bus).
The EnvironmentChangeEvent covers a large class of refresh use cases, as long as you can actually
make a change to the Environment and publish the event (note that those APIs are public and part of
core Spring). You can verify that the changes are bound to @ConfigurationProperties beans by
visiting the /configprops endpoint (a standard Spring Boot Actuator feature). For instance, a
DataSource can have its maxPoolSize changed at runtime (the default DataSource created by Spring
Boot is a @ConfigurationProperties bean) and grow capacity dynamically. Re-binding
@ConfigurationProperties does not cover another large class of use cases, where you need more
control over the refresh and where you need a change to be atomic over the whole
ApplicationContext. To address those concerns, we have @RefreshScope.
Sometimes, it might even be mandatory to apply the @RefreshScope annotation on some beans that
can be initialized only once. If a bean is "immutable", you have to either annotate the bean with
@RefreshScope or specify the classname under the property key
spring.cloud.refresh.extra-refreshable.
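As a sketch, a refresh-scoped bean is typically declared like this; the MyService type and the
my.message property are illustrative assumptions:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RefreshScopeConfiguration {

    // Illustrative service type holding a configurable value
    public static class MyService {
        private final String message;

        public MyService(String message) {
            this.message = message;
        }

        public String getMessage() {
            return this.message;
        }
    }

    @Bean
    @RefreshScope
    public MyService myService(@Value("${my.message}") String message) {
        // Re-created with the latest property value after a refresh
        return new MyService(message);
    }
}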
Refresh scope beans are lazy proxies that initialize when they are used (that is, when a method is
called), and the scope acts as a cache of initialized values. To force a bean to re-initialize on the next
method call, you must invalidate its cache entry.
The RefreshScope is a bean in the context and has a public refreshAll() method to refresh all beans
in the scope by clearing the target cache. The /refresh endpoint exposes this functionality (over
HTTP or JMX). To refresh an individual bean by name, there is also a refresh(String) method.
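A sketch of driving this programmatically follows; the CacheInvalidator name is illustrative, and
RefreshScope is the org.springframework.cloud.context.scope.refresh.RefreshScope bean described
above:

import org.springframework.cloud.context.scope.refresh.RefreshScope;
import org.springframework.stereotype.Component;

@Component
public class CacheInvalidator {

    private final RefreshScope refreshScope;

    public CacheInvalidator(RefreshScope refreshScope) {
        this.refreshScope = refreshScope;
    }

    public void invalidateAll() {
        // Clears the target cache; every scoped bean re-initializes on its next call
        this.refreshScope.refreshAll();
    }

    public void invalidate(String beanName) {
        // Invalidates a single bean's cache entry by name
        this.refreshScope.refresh(beanName);
    }
}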
To expose the /refresh endpoint, you need to add the following configuration to your application:
management:
  endpoints:
    web:
      exposure:
        include: refresh
1.11. Endpoints
For a Spring Boot Actuator application, some additional management endpoints are available. You
can use:
• POST to /actuator/env to update the Environment and rebind @ConfigurationProperties and log
levels. To enable this endpoint, you must set management.endpoint.env.post.enabled=true.
• /actuator/refresh to reload the bootstrap context and refresh the @RefreshScope beans.
Spring Cloud provides both blocking and reactive service discovery clients by default. You can
easily disable the blocking and/or reactive clients by setting
spring.cloud.discovery.blocking.enabled=false or spring.cloud.discovery.reactive.enabled=false.
To completely disable service discovery, you just need to set spring.cloud.discovery.enabled=false.
By default, implementations of DiscoveryClient auto-register the local Spring Boot server with the
remote discovery server. This behavior can be disabled by setting autoRegister=false in
@EnableDiscoveryClient.
DiscoveryClientHealthIndicator
DiscoveryCompositeHealthContributor
The DiscoveryClient interface extends Ordered. This is useful when using multiple discovery clients, as it
allows you to define the order of the returned discovery clients, similar to how you can order the
beans loaded by a Spring application. By default, the order of any DiscoveryClient is set to 0. If you
want to set a different order for your custom DiscoveryClient implementations, you just need to
override the getOrder() method so that it returns the value that is suitable for your setup. Apart
from this, you can use properties to set the order of the DiscoveryClient implementations provided
by Spring Cloud, among others ConsulDiscoveryClient, EurekaDiscoveryClient and
ZookeeperDiscoveryClient. In order to do it, you just need to set the
spring.cloud.{clientIdentifier}.discovery.order (or eureka.client.order for Eureka) property to
the desired value.
2.1.3. SimpleDiscoveryClient
The information about the available instances should be passed via properties in the following
format: spring.cloud.discovery.client.simple.instances.service1[0].uri=http://s11:8080, where
spring.cloud.discovery.client.simple.instances is the common prefix, then service1 stands for the
ID of the service in question, while [0] indicates the index number of the instance (as visible in the
example, indexes start with 0), and then the value of uri is the actual URI under which the instance
is available.
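A sketch of consuming those instances through the common DiscoveryClient abstraction follows;
service1 matches the property example above, and the InstancePrinter name is illustrative:

import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;

@Component
public class InstancePrinter {

    private final DiscoveryClient discoveryClient;

    public InstancePrinter(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    public void printInstances() {
        // Resolves the instances registered under the service1 id
        for (ServiceInstance instance : discoveryClient.getInstances("service1")) {
            System.out.println(instance.getUri());
        }
    }
}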
2.2. ServiceRegistry
Commons now provides a ServiceRegistry interface that provides methods such as
register(Registration) and deregister(Registration), which let you provide custom registered
services. Registration is a marker interface.
If you are using the ServiceRegistry interface, you are going to need to pass the correct Registration
implementation for the ServiceRegistry implementation you are using.
By default, the ServiceRegistry implementation auto-registers the running service. To disable that
behavior, you can set:
• @EnableDiscoveryClient(autoRegister=false) to permanently disable auto-registration.
• spring.cloud.service-registry.auto-registration.enabled=false to disable the behavior through
configuration.
There are two events that will be fired when a service auto-registers. The first event, called
InstancePreRegisteredEvent, is fired before the service is registered. The second event, called
InstanceRegisteredEvent, is fired after the service is registered. You can register an
ApplicationListener(s) to listen to and react to these events.
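A sketch of such a listener (the RegistrationListener name is illustrative):

import org.springframework.cloud.client.discovery.event.InstancePreRegisteredEvent;
import org.springframework.cloud.client.discovery.event.InstanceRegisteredEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class RegistrationListener {

    @EventListener
    public void onPreRegistered(InstancePreRegisteredEvent event) {
        // Called just before the local instance is registered
    }

    @EventListener
    public void onRegistered(InstanceRegisteredEvent<?> event) {
        // Called after the local instance has been registered
    }
}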
Spring Cloud Commons provides a /service-registry actuator endpoint. This endpoint relies on a
Registration bean in the Spring Application Context. Calling /service-registry with GET returns the
status of the Registration. Using POST to the same endpoint with a JSON body changes the status of
the current Registration to the new value. The JSON body has to include the status field with the
preferred value. Please see the documentation of the ServiceRegistry implementation you use for
the allowed values when updating the status and the values returned for the status. For instance,
Eureka’s supported statuses are UP, DOWN, OUT_OF_SERVICE, and UNKNOWN.
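For illustration, updating the status could look like this with RestTemplate; the localhost:8080
address and the OUT_OF_SERVICE value are placeholders, and the endpoint sits under the actuator
base path:

import java.util.Collections;

import org.springframework.web.client.RestTemplate;

public class StatusUpdater {

    public static void main(String[] args) {
        RestTemplate rest = new RestTemplate();
        // POSTs a JSON body such as {"status": "OUT_OF_SERVICE"}
        rest.postForEntity("http://localhost:8080/actuator/service-registry",
                Collections.singletonMap("status", "OUT_OF_SERVICE"), Void.class);
    }
}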
@Configuration
public class MyConfiguration {
@LoadBalanced
@Bean
RestTemplate restTemplate() {
return new RestTemplate();
}
}
The URI needs to use a virtual host name (that is, a service name, not a host name). The
BlockingLoadBalancerClient is used to create a full physical address.
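A usage sketch follows; the stores service name is an illustrative service id that the load balancer
resolves at call time:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.client.RestTemplate;

public class MyClass {

    @Autowired
    private RestTemplate restTemplate;

    public String doOtherStuff() {
        // "stores" is a service id, resolved to a real host:port per request
        return restTemplate.getForObject("http://stores/stores", String.class);
    }
}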
@Configuration
public class MyConfiguration {
@Bean
@LoadBalanced
public WebClient.Builder loadBalancedWebClientBuilder() {
return WebClient.builder();
}
}
The URI needs to use a virtual host name (that is, a service name, not a host name). The Spring
Cloud LoadBalancer is used to create a full physical address.
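A usage sketch, again with an illustrative stores service name:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.reactive.function.client.WebClient;

import reactor.core.publisher.Mono;

public class MyClass {

    @Autowired
    private WebClient.Builder webClientBuilder;

    public Mono<String> doOtherStuff() {
        // "stores" is a service id, resolved by the load balancer at call time
        return webClientBuilder.build().get().uri("http://stores/stores")
                .retrieve().bodyToMono(String.class);
    }
}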
A load-balanced RestTemplate can be configured to retry failed requests. By default, this logic is
disabled. For the non-reactive version (with RestTemplate), you can enable it by adding Spring Retry
to your application’s classpath. For the reactive version (with WebClient), you need to set
spring.cloud.loadbalancer.retry.enabled=true.
If you would like to disable the retry logic with Spring Retry or Reactive Retry on the classpath, you
can set spring.cloud.loadbalancer.retry.enabled=false.
For the non-reactive implementation, if you would like to implement a BackOffPolicy in your
retries, you need to create a bean of type LoadBalancedRetryFactory and override the
createBackOffPolicy() method.
For the reactive implementation, you just need to enable it by setting
spring.cloud.loadbalancer.retry.backoff.enabled to true.
For the reactive implementation, you can also implement your own LoadBalancerRetryPolicy to
have more detailed control over the load-balanced call retries.
@Configuration
public class MyConfiguration {
@Bean
LoadBalancedRetryFactory retryFactory() {
return new LoadBalancedRetryFactory() {
@Override
public BackOffPolicy createBackOffPolicy(String service) {
return new ExponentialBackOffPolicy();
}
};
}
}
If you want to add one or more RetryListener implementations to your retry functionality, you need
to create a bean of type LoadBalancedRetryListenerFactory and return the RetryListener array you
would like to use for a given service, as the following example shows:
@Configuration
public class MyConfiguration {
@Bean
LoadBalancedRetryListenerFactory retryListenerFactory() {
return new LoadBalancedRetryListenerFactory() {
@Override
public RetryListener[] createRetryListeners(String service) {
return new RetryListener[]{new RetryListener() {
@Override
public <T, E extends Throwable> boolean open(RetryContext
context, RetryCallback<T, E> callback) {
//TODO Do your business...
return true;
}
@Override
public <T, E extends Throwable> void close(RetryContext
context, RetryCallback<T, E> callback, Throwable throwable) {
//TODO Do your business...
}
@Override
public <T, E extends Throwable> void onError(RetryContext
context, RetryCallback<T, E> callback, Throwable throwable) {
//TODO Do your business...
}
}};
}
};
}
}
@Configuration
public class MyConfiguration {

    @LoadBalanced
@Bean
RestTemplate loadBalanced() {
return new RestTemplate();
}
@Primary
@Bean
RestTemplate restTemplate() {
return new RestTemplate();
}
}
@Autowired
@LoadBalanced
private RestTemplate loadBalanced;
Notice the use of the @Primary annotation on the plain RestTemplate declaration in
the preceding example to disambiguate the unqualified @Autowired injection.
@Configuration
public class MyConfiguration {

    @LoadBalanced
@Bean
WebClient.Builder loadBalanced() {
return WebClient.builder();
}
@Primary
@Bean
WebClient.Builder webClient() {
return WebClient.builder();
}
}
@Autowired
@LoadBalanced
private WebClient.Builder loadBalanced;
2.7.1. Spring WebFlux WebClient with
ReactorLoadBalancerExchangeFilterFunction
You can configure WebClient to use the ReactiveLoadBalancer. If you add Spring Cloud LoadBalancer
starter to your project and if spring-webflux is on the classpath,
ReactorLoadBalancerExchangeFilterFunction is auto-configured. The following example shows how
to configure a WebClient to use a reactive load-balancer:
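A minimal sketch of such a setup follows; the stores service name is illustrative:
public class MyClass {

    @Autowired
    private ReactorLoadBalancerExchangeFilterFunction lbFunction;

    public Mono<String> doOtherStuff() {
        // The filter resolves the "stores" virtual host name to a physical address.
        return WebClient.builder().baseUrl("http://stores")
                .filter(lbFunction)
                .build()
                .get()
                .uri("/stores")
                .retrieve()
                .bodyToMono(String.class);
    }
}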
The URI needs to use a virtual host name (that is, a service name, not a host name). The
ReactorLoadBalancer is used to create a full physical address.
The URI needs to use a virtual host name (that is, a service name, not a host name). The
LoadBalancerClient is used to create a full physical address.
WARN: This approach is now deprecated. We suggest that you use WebFlux with reactive Load-
Balancer instead.
Example 2. application.yml
spring:
cloud:
inetutils:
ignoredInterfaces:
- docker0
- veth.*
You can also force the use of only specified network addresses by using a list of regular expressions,
as the following example shows:
Example 3. bootstrap.yml
spring:
cloud:
inetutils:
preferredNetworks:
- 192.168
- 10.0
You can also force the use of only site-local addresses, as the following example shows:
Example 4. application.yml
spring:
cloud:
inetutils:
useOnlySiteLocalInterfaces: true
Abstract features are features where an interface or abstract class is defined and that an
implementation creates, such as DiscoveryClient, LoadBalancerClient, or LockService. The
abstract class or interface is used to find a bean of that type in the context. The version displayed is
bean.getClass().getPackage().getImplementationVersion().
Named features are features that do not have a particular class they implement. These features
include “Circuit Breaker”, “API Gateway”, “Spring Cloud Bus”, and others. These features require a
name and a bean type.
Any module can declare any number of HasFeatures beans, as the following examples show:
@Bean
public HasFeatures commonsFeatures() {
return HasFeatures.abstractFeatures(DiscoveryClient.class,
LoadBalancerClient.class);
}
@Bean
public HasFeatures consulFeatures() {
return HasFeatures.namedFeatures(
new NamedFeature("Spring Cloud Bus", ConsulBusAutoConfiguration.class),
new NamedFeature("Circuit Breaker", HystrixCommandAspect.class));
}
@Bean
HasFeatures localFeatures() {
return HasFeatures.builder()
.abstractFeature(Something.class)
.namedFeature(new NamedFeature("Some Other Feature", Someother.class))
.abstractFeature(Somethingelse.class)
.build();
}
Example of a report
***************************
APPLICATION FAILED TO START
***************************
Description:
Your project setup is incompatible with our requirements due to following reasons:
- Spring Boot [2.1.0.RELEASE] is not compatible with this Spring Cloud release
train
Action:
- Change Spring Boot version to one of the following versions [1.2.x, 1.3.x] .
You can find the latest Spring Boot versions here
[https://spring.io/projects/spring-boot#learn].
If you want to learn more about the Spring Cloud Release train compatibility, you
can visit this page [https://spring.io/projects/spring-cloud#overview] and check
the [Release Trains] section.
For example, the following configuration can be passed via the @LoadBalancerClient annotation to
switch to using the RandomLoadBalancer:
public class CustomLoadBalancerConfiguration {

    @Bean
ReactorLoadBalancer<ServiceInstance> randomLoadBalancer(Environment environment,
LoadBalancerClientFactory loadBalancerClientFactory) {
String name =
environment.getProperty(LoadBalancerClientFactory.PROPERTY_NAME);
return new RandomLoadBalancer(loadBalancerClientFactory
.getLazyProvider(name, ServiceInstanceListSupplier.class),
name);
}
}
If you are using Caffeine, you can also override the default Caffeine Cache setup for the
LoadBalancer by passing your own Caffeine Specification in the
spring.cloud.loadbalancer.cache.caffeine.spec property.
WARN: Passing your own Caffeine specification will override any other LoadBalancerCache
settings, including General LoadBalancer Cache Configuration fields, such as ttl and capacity.
If you do not have Caffeine in the classpath, the DefaultLoadBalancerCache, which comes
automatically with spring-cloud-starter-loadbalancer, will be used. See the
LoadBalancerCacheConfiguration section for information on how to configure it.
You can set your own ttl value (the time after write after which entries should be expired),
expressed as Duration, by passing a String compliant with the Spring Boot String to Duration
converter syntax, as the value of the spring.cloud.loadbalancer.cache.ttl property. You can also set
your own LoadBalancer cache initial capacity by setting the value of the
spring.cloud.loadbalancer.cache.capacity property.
The default setup includes ttl set to 35 seconds and the default initialCapacity is 256.
You can also altogether disable loadBalancer caching by setting the value of
spring.cloud.loadbalancer.cache.enabled to false.
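For instance, a sketch of such a cache setup in application.properties (the values shown are
illustrative):
spring.cloud.loadbalancer.cache.ttl=2s
spring.cloud.loadbalancer.cache.capacity=512
# or switch the cache off entirely:
# spring.cloud.loadbalancer.cache.enabled=false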
You can also override DiscoveryClient-specific zone setup by setting the value of the
spring.cloud.loadbalancer.zone property.
For the time being, only Eureka Discovery Client is instrumented to set the
LoadBalancer zone. For other discovery clients, set the
spring.cloud.loadbalancer.zone property. More instrumentations are coming shortly.
To determine the zone of a retrieved ServiceInstance, we check the value under
the "zone" key in its metadata map.
The ZonePreferenceServiceInstanceListSupplier filters retrieved instances and only returns the ones
within the same zone. If the zone is null or there are no instances within the same zone, it returns
all the retrieved instances.
In order to use the zone-based load-balancing approach, you will have to instantiate a
ZonePreferenceServiceInstanceListSupplier bean in a custom configuration.
public class CustomLoadBalancerConfiguration {

    @Bean
public ServiceInstanceListSupplier discoveryClientServiceInstanceListSupplier(
ConfigurableApplicationContext context) {
return ServiceInstanceListSupplier.builder()
.withDiscoveryClient()
.withZonePreference()
.withCaching()
.build(context);
}
}
This supplier is also recommended for setups with a small number of instances
per service in order to avoid retrying calls on a failing instance.
If using any of the Service Discovery-backed suppliers, adding this health-check
mechanism is usually not necessary, as we retrieve the health state of the instances
directly from the Service Registry.
If you rely on the default path (/actuator/health), make sure you add spring-boot-
starter-actuator to your collaborator’s dependencies, unless you are planning to
add such an endpoint on your own.
In order to use the health-check scheduler approach, you will have to instantiate a
HealthCheckServiceInstanceListSupplier bean in a custom configuration.
public class CustomLoadBalancerConfiguration {

    @Bean
public ServiceInstanceListSupplier discoveryClientServiceInstanceListSupplier(
ConfigurableApplicationContext context) {
return ServiceInstanceListSupplier.builder()
.withDiscoveryClient()
.withHealthChecks()
.build(context);
}
}
public class CustomLoadBalancerConfiguration {

    @Bean
public ServiceInstanceListSupplier discoveryClientServiceInstanceListSupplier(
ConfigurableApplicationContext context) {
return ServiceInstanceListSupplier.builder()
.withDiscoveryClient()
.withSameInstancePreference()
.build(context);
}
}
public class CustomLoadBalancerConfiguration {

    @Bean
public ServiceInstanceListSupplier discoveryClientServiceInstanceListSupplier(
ConfigurableApplicationContext context) {
return ServiceInstanceListSupplier.builder()
.withDiscoveryClient()
.withRequestBasedStickySession()
.build(context);
}
}
For that functionality, it is useful to have the selected service instance (which can be different from
the one in the original request cookie if that one is not available) updated before sending the
request forward. To do that, set the value of spring.cloud.loadbalancer.sticky-session.add-
service-instance-cookie to true.
By default, the name of the cookie is sc-lb-instance-id. You can modify it by changing the value of
the spring.cloud.loadbalancer.instance-id-cookie-name property.
You can set a default hint for all services by setting the value of the
spring.cloud.loadbalancer.hint.default property. You can also set a specific value for any given
service by setting the value of the spring.cloud.loadbalancer.hint.[SERVICE_ID] property,
substituting [SERVICE_ID] with the correct ID of your service. If the hint is not set by the user,
default is used.
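For example, hypothetical hints could be configured as follows (the service ID and hint values are
illustrative):
spring.cloud.loadbalancer.hint.default=default-hint
spring.cloud.loadbalancer.hint.another-service=another-hint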
If no hint header has been added, HintBasedServiceInstanceListSupplier uses hint values from
properties to filter service instances.
If no hint is set, either by the header or by properties, all service instances provided by the delegate
are returned.
public class CustomLoadBalancerConfiguration {

    @Bean
public ServiceInstanceListSupplier discoveryClientServiceInstanceListSupplier(
ConfigurableApplicationContext context) {
return ServiceInstanceListSupplier.builder()
.withDiscoveryClient()
.withHints()
.withCaching()
.build(context);
}
}
@Bean
public LoadBalancerClientRequestTransformer transformer() {
return new LoadBalancerClientRequestTransformer() {
@Override
public ClientRequest transformRequest(ClientRequest request, ServiceInstance
instance) {
return ClientRequest.from(request)
.header("X-InstanceId", instance.getInstanceId())
.build();
}
};
}
If multiple transformers are defined, they are applied in the order in which Beans are defined.
Alternatively, you can use LoadBalancerRequestTransformer.DEFAULT_ORDER or
LoadBalancerClientRequestTransformer.DEFAULT_ORDER to specify the order.
Spring Cloud LoadBalancer starter includes Spring Boot Caching and Evictor.
3.12. Passing Your Own Spring Cloud LoadBalancer
Configuration
You can also use the @LoadBalancerClient annotation to pass your own load-balancer client
configuration, passing the name of the load-balancer client and the configuration class, as follows:
@Configuration
@LoadBalancerClient(value = "stores", configuration =
CustomLoadBalancerConfiguration.class)
public class MyConfiguration {
@Bean
@LoadBalanced
public WebClient.Builder loadBalancedWebClientBuilder() {
return WebClient.builder();
}
}
TIP
In order to make working on your own LoadBalancer configuration easier, we have added
a builder() method to the ServiceInstanceListSupplier class.
TIP
You can also use our alternative predefined configurations in place of the default ones by
setting the value of spring.cloud.loadbalancer.configurations property to zone-preference
to use ZonePreferenceServiceInstanceListSupplier with caching or to health-check to use
HealthCheckServiceInstanceListSupplier with caching.
The annotation value argument (stores in the example above) specifies the
service ID of the service to which we should send requests with the given custom
configuration.
You can also pass multiple configurations (for more than one load-balancer client) through the
@LoadBalancerClients annotation, as the following example shows:
@Configuration
@LoadBalancerClients({@LoadBalancerClient(value = "stores", configuration =
StoresLoadBalancerClientConfiguration.class), @LoadBalancerClient(value =
"customers", configuration = CustomersLoadBalancerClientConfiguration.class)})
public class MyConfiguration {
@Bean
@LoadBalanced
public WebClient.Builder loadBalancedWebClientBuilder() {
return WebClient.builder();
}
}
onStart(Request<RC> request) takes a Request object as a parameter. It contains data that is used to
select an appropriate instance, including the downstream client request and hint. onStartRequest
also takes the Request object and, additionally, the Response<T> object as parameters. On the other
hand, a CompletionContext object is provided to the onComplete(CompletionContext<RES, T, RC>
completionContext) method. It contains the LoadBalancer Response, including the selected service
instance, the Status of the request executed against that service instance and (if available) the
response returned to the downstream client, and (if an exception has occurred) the corresponding
Throwable.
In the preceding method calls, RC means RequestContext type, RES means client
response type, and T means returned server type.
3.14. Spring Cloud LoadBalancer Statistics
We provide a LoadBalancerLifecycle bean called MicrometerStatsLoadBalancerLifecycle, which uses
Micrometer to provide statistics for load-balanced calls.
In order to get this bean added to your application context, set the value of the
spring.cloud.loadbalancer.stats.micrometer.enabled property to true and have a MeterRegistry available (for
example, by adding Spring Boot Actuator to your project).
Additional information regarding the service instances, request data, and response data is added to
metrics via tags whenever available.
The meters are registered in the registry when at least one record is added for a
given meter.
You can further configure the behavior of those metrics (for example, add
publishing percentiles and histograms) by adding MeterFilters.
• Sentinel
• Spring Retry
@Service
public static class DemoControllerService {
private RestTemplate rest;
private CircuitBreakerFactory cbFactory;
The CircuitBreakerFactory.create API creates an instance of a class called CircuitBreaker. The run
method takes a Supplier and a Function. The Supplier is the code that you are going to wrap in a
circuit breaker. The Function is the fallback that is run if the circuit breaker is tripped. The function
is passed the Throwable that caused the fallback to be triggered. You can optionally exclude the
fallback if you do not want to provide one.
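As a minimal sketch of how such a service might use the factory (the "slow" circuit-breaker ID and
the /slow endpoint are illustrative assumptions):
@Service
public static class DemoControllerService {
    private RestTemplate rest;
    private CircuitBreakerFactory cbFactory;

    public DemoControllerService(RestTemplate rest, CircuitBreakerFactory cbFactory) {
        this.rest = rest;
        this.cbFactory = cbFactory;
    }

    public String slow() {
        // The Supplier is the guarded call; the Function is the fallback.
        return cbFactory.create("slow").run(
                () -> rest.getForObject("/slow", String.class),
                throwable -> "fallback");
    }
}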
If Project Reactor is on the class path, you can also use ReactiveCircuitBreakerFactory for your
reactive code. The following example shows how to do so:
@Service
public static class DemoControllerService {
private ReactiveCircuitBreakerFactory cbFactory;
private WebClient webClient;
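A minimal sketch of how such a reactive service might wrap a call (again, the "slow" ID and /slow
endpoint are illustrative):
@Service
public static class DemoControllerService {
    private ReactiveCircuitBreakerFactory cbFactory;
    private WebClient webClient;

    public Mono<String> slow() {
        // transform() routes the Mono through the reactive circuit breaker.
        return webClient.get().uri("/slow").retrieve().bodyToMono(String.class)
                .transform(it -> cbFactory.create("slow")
                        .run(it, throwable -> Mono.just("fallback")));
    }
}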
4.3. Configuration
You can configure your circuit breakers by creating beans of type Customizer. The Customizer
interface has a single method (called customize) that takes the Object to customize.
For detailed information on how to customize a given implementation, see the following
documentation:
• Resilience4J
• Sentinel
• Spring Retry
5. CachedRandomPropertySource
Spring Cloud Context provides a PropertySource that caches random values based on a key. Outside
of the caching functionality, it works the same as Spring Boot’s RandomValuePropertySource. This
random value might be useful in the case where you want a random value that is consistent even
after the Spring Application context restarts. The property value takes the form of
cachedrandom.[yourkey].[type] where yourkey is the key in the cache. The type value can be any type
supported by Spring Boot’s RandomValuePropertySource.
myrandom=${cachedrandom.appname.value}
6. Security
6.1. Single Sign On
All of the OAuth2 SSO and resource server features moved to Spring Boot in
version 1.3. You can find documentation in the Spring Boot user guide.
If your app is a user-facing OAuth2 client (i.e. has declared @EnableOAuth2Sso or @EnableOAuth2Client)
then it has an OAuth2ClientContext in request scope from Spring Boot. You can create your own
OAuth2RestTemplate from this context and an autowired OAuth2ProtectedResourceDetails, and then
the context will always forward the access token downstream, also refreshing the access token
automatically if it expires. (These are features of Spring Security and Spring Boot.)
If your app has @EnableResourceServer you might want to relay the incoming token downstream to
other services. If you use a RestTemplate to contact the downstream services then this is just a
matter of how to create the template with the right context.
If your service uses UserInfoTokenServices to authenticate incoming tokens (i.e. it is using the
security.oauth2.user-info-uri configuration), then you can simply create an OAuth2RestTemplate
using an autowired OAuth2ClientContext (it will be populated by the authentication process before it
hits the backend code). Equivalently (with Spring Boot 1.4), you could inject a
UserInfoRestTemplateFactory and grab its OAuth2RestTemplate in your configuration. For example:
MyConfiguration.java
@Bean
public OAuth2RestTemplate restTemplate(UserInfoRestTemplateFactory factory) {
return factory.getUserInfoRestTemplate();
}
This rest template will then have the same OAuth2ClientContext (request-scoped) that is used by the
authentication filter, so you can use it to send requests with the same access token.
If your app is not using UserInfoTokenServices but is still a client (i.e. it declares @EnableOAuth2Client
or @EnableOAuth2Sso), then with Spring Cloud Security any OAuth2RestOperations that the user
creates from an @Autowired OAuth2ClientContext will also forward tokens. This feature is implemented by
default as an MVC handler interceptor, so it only works in Spring MVC. If you are not using MVC
you could use a custom filter or AOP interceptor wrapping an AccessTokenContextRelay to provide
the same feature.
Here’s a basic example showing the use of an autowired rest template created elsewhere ("foo.com"
is a Resource Server accepting the same tokens as the surrounding app):
MyController.java
@Autowired
private OAuth2RestOperations restTemplate;
@RequestMapping("/relay")
public String relay() {
ResponseEntity<String> response =
restTemplate.getForEntity("https://foo.com/bar", String.class);
return "Success! (" + response.getBody() + ")";
}
If you don’t want to forward tokens (and that is a valid choice, since you might want to act as
yourself, rather than the client that sent you the token), then you only need to create your own
OAuth2ClientContext instead of autowiring the default one.
Feign clients will also pick up an interceptor that uses the OAuth2ClientContext if it is available, so
they should also do a token relay anywhere where a RestTemplate would.
7. Configuration Properties
To see the list of all Spring Cloud Commons-related configuration properties, please check the
Appendix page.
Spring Cloud Config
2020.0.3
Spring Cloud Config provides server-side and client-side support for externalized configuration in a
distributed system. With the Config Server, you have a central place to manage external properties
for applications across all environments. The concepts on both client and server map identically to
the Spring Environment and PropertySource abstractions, so they fit very well with Spring
applications but can be used with any application running in any language. As an application
moves through the deployment pipeline from dev to test and into production, you can manage the
configuration between those environments and be certain that applications have everything they
need to run when they migrate. The default implementation of the server storage backend uses git,
so it easily supports labelled versions of configuration environments as well as being accessible to a
wide range of tooling for managing the content. It is easy to add alternative implementations and
plug them in with Spring configuration.
1. Quick Start
This quick start walks through using both the server and the client of Spring Cloud Config Server.
$ cd spring-cloud-config-server
$ ../mvnw spring-boot:run
The server is a Spring Boot application, so you can run it from your IDE if you prefer to do so (the
main class is ConfigServerApplication).
The default strategy for locating property sources is to clone a git repository (at
spring.cloud.config.server.git.uri) and use it to initialize a mini SpringApplication. The mini-
application’s Environment is used to enumerate property sources and publish them at a JSON
endpoint.
/{application}/{profile}[/{label}]
/{application}-{profile}.yml
/{label}/{application}-{profile}.yml
/{application}-{profile}.properties
/{label}/{application}-{profile}.properties
For example:
curl localhost:8888/foo/development
curl localhost:8888/foo/development/master
curl localhost:8888/foo/development,db/master
curl localhost:8888/foo-development.yml
curl localhost:8888/foo-db.properties
curl localhost:8888/master/foo-db.properties
where application is injected as the spring.config.name in the SpringApplication (what is normally
application in a regular Spring Boot app), profile is an active profile (or comma-separated list of
profiles), and label is an optional git label (defaults to master).
Spring Cloud Config Server pulls configuration for remote clients from various sources. The
following example gets configuration from a git repository (which must be provided), as shown in
the following example:
spring:
cloud:
config:
server:
git:
uri: https://github.com/spring-cloud-samples/config-repo
Other sources are any JDBC-compatible database, Subversion, HashiCorp Vault, CredHub, and local
filesystems.
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>{spring-boot-docs-version}</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>{spring-cloud-version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
Now you can create a standard Spring Boot application, such as the following HTTP server:
@SpringBootApplication
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello World!";
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
When this HTTP server runs, it picks up the external configuration from the default local config
server (if it is running) on port 8888. To modify the startup behavior, you can change the location of
the config server by using application.properties as shown in the following example:
spring.config.import=optional:configserver:http://myconfigserver.com
By default, if no application name is set, application will be used. To modify the name, the following
property can be added to the application.properties file:
spring.application.name: myapp
The Config Server properties show up in the /env endpoint as a high-priority property source, as
shown in the following example.
$ curl localhost:8080/env
{
"activeProfiles": [],
{
"name": "servletContextInitParams",
"properties": {}
},
{
"name": "configserver:https://github.com/spring-cloud-samples/config-
repo/foo.properties",
"properties": {
"foo": {
"value": "bar",
"origin": "Config Server https://github.com/spring-cloud-samples/config-
repo/foo.properties:2:12"
}
}
},
...
}
A property source called configserver:<URL of remote repository>/<file name> contains the foo
property with a value of bar.
The URL in the property source name is the git repository, not the config server
URL.
If you use Spring Cloud Config Client, you need to set the spring.config.import
property in order to bind to Config Server. You can read more about it in the
Spring Cloud Config Reference Guide.
ConfigServer.java
@SpringBootApplication
@EnableConfigServer
public class ConfigServer {
public static void main(String[] args) {
SpringApplication.run(ConfigServer.class, args);
}
}
Like all Spring Boot applications, it runs on port 8080 by default, but you can switch it to the more
conventional port 8888 in various ways. The easiest, which also sets a default configuration
repository, is by launching it with spring.config.name=configserver (there is a configserver.yml in
the Config Server jar). Another is to use your own application.properties, as shown in the
following example:
application.properties
server.port: 8888
spring.cloud.config.server.git.uri: file://${user.home}/config-repo
On Windows, you need an extra "/" in the file URL if it is absolute with a drive
prefix (for example, file:///${user.home}/config-repo).
The following listing shows a recipe for creating the git repository in the preceding
example:
$ cd $HOME
$ mkdir config-repo
$ cd config-repo
$ git init .
$ echo info.foo: bar > application.properties
$ git add -A .
$ git commit -m "Add application.properties"
Using the local filesystem for your git repository is intended for testing only. You
should use a server to host your configuration repositories in production.
The initial clone of your configuration repository can be quick and efficient if you
keep only text files in it. If you store binary files, especially large ones, you may
experience delays on the first request for configuration or encounter out of
memory errors in the server.
• {application}, which maps to spring.application.name on the client side.
• {profile}, which maps to spring.profiles.active on the client (comma-separated list).
• {label}, which is a server side feature labelling a "versioned" set of config files.
Repository implementations generally behave like a Spring Boot application, loading configuration
files from a spring.config.name equal to the {application} parameter, and spring.profiles.active
equal to the {profiles} parameter. Precedence rules for profiles are also the same as in a regular
Spring Boot application: Active profiles take precedence over defaults, and, if there are multiple
profiles, the last one wins (similar to adding entries to a Map).
spring:
application:
name: foo
profiles:
active: dev,mysql
(As usual with a Spring Boot application, these properties could also be set by environment
variables or command line arguments).
If the repository is file-based, the server creates an Environment from application.yml (shared
between all clients) and foo.yml (with foo.yml taking precedence). If the YAML files have documents
inside them that point to Spring profiles, those are applied with higher precedence (in order of the
profiles listed). If there are profile-specific YAML (or properties) files, these are also applied with
higher precedence than the defaults. Higher precedence translates to a PropertySource listed earlier
in the Environment. (These same rules apply in a standalone Spring Boot application.)
You can set spring.cloud.config.server.accept-empty to false so that the server returns an HTTP 404
status if the application is not found. By default, this flag is set to true.
The default implementation of EnvironmentRepository uses a Git backend, which is very convenient
for managing upgrades and physical environments and for auditing changes. To change the
location of the repository, you can set the spring.cloud.config.server.git.uri configuration
property in the Config Server (for example in application.yml). If you set it with a file: prefix, it
should work from a local repository so that you can get started quickly and easily without a server.
However, in that case, the server operates directly on the local repository without cloning it (it does
not matter if it is not bare because the Config Server never makes changes to the "remote"
repository). To scale the Config Server up and make it highly available, you need to have all
instances of the server pointing to the same repository, so only a shared file system would work.
Even in that case, it is better to use the ssh: protocol for a shared filesystem repository, so that the
server can clone it and use a local working copy as a cache.
This repository implementation maps the {label} parameter of the HTTP resource to a git label
(commit id, branch name, or tag). If the git branch or tag name contains a slash (/), then the label in
the HTTP URL should instead be specified with the special string (_) (to avoid ambiguity with other
URL paths). For example, if the label is foo/bar, replacing the slash would result in the following
label: foo(_)bar. The inclusion of the special string (_) can also be applied to the {application}
parameter. If you use a command-line client such as curl, be careful with the brackets in the
URL — you should escape them from the shell with single quotes ('').
Skipping SSL Certificate Validation
The configuration server’s validation of the Git server’s SSL certificate can be disabled by setting
the git.skipSslValidation property to true (default is false).
spring:
cloud:
config:
server:
git:
uri: https://example.com/my/repo
skipSslValidation: true
You can configure the time, in seconds, that the configuration server will wait to acquire an HTTP
connection. Use the git.timeout property.
spring:
cloud:
config:
server:
git:
uri: https://example.com/my/repo
timeout: 4
Spring Cloud Config Server supports a git repository URL with placeholders for the {application}
and {profile} (and {label} if you need it, but remember that the label is applied as a git label
anyway). So you can support a “one repository per application” policy by using a structure similar
to the following:
spring:
cloud:
config:
server:
git:
uri: https://github.com/myorg/{application}
You can also support a “one repository per profile” policy by using a similar pattern but with
{profile}.
Additionally, using the special string "(_)" within your {application} parameters can enable support
for multiple organizations, as shown in the following example:
spring:
cloud:
config:
server:
git:
uri: https://github.com/{application}
Spring Cloud Config also includes support for more complex requirements with pattern matching
on the application and profile name. The pattern format is a comma-separated list of
{application}/{profile} names with wildcards (note that a pattern beginning with a wildcard may
need to be quoted), as shown in the following example:
spring:
cloud:
config:
server:
git:
uri: https://github.com/spring-cloud-samples/config-repo
repos:
simple: https://github.com/simple/config-repo
special:
pattern: special*/dev*,*special*/dev*
uri: https://github.com/special/config-repo
local:
pattern: local*
uri: file:/home/configsvc/config-repo
If {application}/{profile} does not match any of the patterns, it uses the default URI defined under
spring.cloud.config.server.git.uri. In the above example, for the “simple” repository, the pattern
is simple/* (it only matches one application named simple in all profiles). The “local” repository
matches all application names beginning with local in all profiles (the /* suffix is added
automatically to any pattern that does not have a profile matcher).
The “one-liner” short cut used in the “simple” example can be used only if the only
property to be set is the URI. If you need to set anything else (credentials, pattern,
and so on) you need to use the full form.
The pattern property in the repo is actually an array, so you can use a YAML array (or [0], [1], etc.
suffixes in properties files) to bind to multiple patterns. You may need to do so if you are going to
run apps with multiple profiles, as shown in the following example:
spring:
cloud:
config:
server:
git:
uri: https://github.com/spring-cloud-samples/config-repo
repos:
development:
pattern:
- '*/development'
- '*/staging'
uri: https://github.com/development/config-repo
staging:
pattern:
- '*/qa'
- '*/production'
uri: https://github.com/staging/config-repo
Spring Cloud guesses that a pattern containing a profile that does not end in *
implies that you actually want to match a list of profiles starting with this pattern
(so */staging is a shortcut for ["*/staging", "*/staging,*"], and so on). This is
common where, for instance, you need to run applications in the “development”
profile locally but also the “cloud” profile remotely.
Every repository can also optionally store config files in sub-directories, and patterns to search for
those directories can be specified as search-paths. The following example shows a config file at the
top level:
spring:
cloud:
config:
server:
git:
uri: https://github.com/spring-cloud-samples/config-repo
search-paths:
- foo
- bar*
In the preceding example, the server searches for config files in the top level and in the foo/ sub-
directory and also any sub-directory whose name begins with bar.
By default, the server clones remote repositories when configuration is first requested. The server
can be configured to clone the repositories at startup, as shown in the following top-level example:
spring:
cloud:
config:
server:
git:
uri: https://git/common/config-repo.git
repos:
team-a:
pattern: team-a-*
cloneOnStart: true
uri: https://git/team-a/config-repo.git
team-b:
pattern: team-b-*
cloneOnStart: false
uri: https://git/team-b/config-repo.git
team-c:
pattern: team-c-*
uri: https://git/team-a/config-repo.git
In the preceding example, the server clones team-a’s config-repo on startup, before it accepts any
requests. All other repositories are not cloned until configuration from the repository is requested.
Setting a repository to be cloned when the Config Server starts up can help to
identify a misconfigured configuration source (such as an invalid repository URI)
quickly, while the Config Server is starting up. With cloneOnStart not enabled for a
configuration source, the Config Server may start successfully with a
misconfigured or invalid configuration source and not detect an error until an
application requests configuration from that configuration source.
Authentication
To use HTTP basic authentication on the remote repository, add the username and password
properties separately (not in the URL), as shown in the following example:
spring:
cloud:
config:
server:
git:
uri: https://github.com/spring-cloud-samples/config-repo
username: trolley
password: strongpassword
If you do not use HTTPS and user credentials, SSH should also work out of the box when you store
keys in the default directories (~/.ssh) and the URI points to an SSH location, such as
[email protected]:configuration/cloud-configuration. It is important that an entry for the Git server
be present in the ~/.ssh/known_hosts file and that it is in ssh-rsa format. Other formats (such as
ecdsa-sha2-nistp256) are not supported. To avoid surprises, you should ensure that only one entry
is present in the known_hosts file for the Git server and that it matches the URL you provided to the
config server. If you use a hostname in the URL, you want to have exactly that (not the IP) in the
known_hosts file. The repository is accessed by using JGit, so any documentation you find on that
should be applicable. HTTPS proxy settings can be set in ~/.git/config or (in the same way as for
any other JVM process) with system properties (-Dhttps.proxyHost and -Dhttps.proxyPort).
If you do not know where your ~/.git directory is, use git config --global to
manipulate the settings (for example, git config --global http.sslVerify false).
JGit requires RSA keys in PEM format. Below is an example ssh-keygen (from openssh) command
that generates a key in the correct format:
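For instance (the output file name is illustrative; -m PEM forces the classic PEM container):
ssh-keygen -m PEM -t rsa -b 4096 -f ~/config_server_deploy_key.rsa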
Warning: When working with SSH keys, the expected ssh private key must begin with -----BEGIN
RSA PRIVATE KEY-----. If the key starts with -----BEGIN OPENSSH PRIVATE KEY----- then the RSA key
will not load when the spring-cloud-config server is started.
To correct this error, the RSA key must be converted to PEM format. The example ssh-keygen
command above generates a new key in the appropriate format.
Spring Cloud Config Server also supports AWS CodeCommit authentication. AWS CodeCommit uses
an authentication helper when using Git from the command line. This helper is not used with the
JGit library, so a JGit CredentialProvider for AWS CodeCommit is created if the Git URI matches the
AWS CodeCommit pattern. AWS CodeCommit URIs follow this pattern:
https://git-codecommit.${AWS_REGION}.amazonaws.com/v1/repos/${repo}.
If you provide a username and password with an AWS CodeCommit URI, they must be the AWS
accessKeyId and secretAccessKey that provide access to the repository. If you do not specify a
username and password, the accessKeyId and secretAccessKey are retrieved by using the AWS
Default Credential Provider Chain.
If your Git URI matches the CodeCommit URI pattern (shown earlier), you must provide valid AWS
credentials in the username and password or in one of the locations supported by the default
credential provider chain. AWS EC2 instances may use IAM Roles for EC2 Instances.
The aws-java-sdk-core jar is an optional dependency. If the aws-java-sdk-core jar is
not on your classpath, the AWS Code Commit credential provider is not created,
regardless of the git server URI.
Spring Cloud Config Server also supports authenticating against Google Cloud Source repositories.
If your Git URI uses the http or https protocol and the domain name is
source.developers.google.com, the Google Cloud Source credentials provider will be used. A Google
Cloud Source repository URI has the format source.developers.google.com/p/${GCP_PROJECT}/r/
${REPO}. To obtain the URI for your repository, click on "Clone" in the Google Cloud Source UI, and
select "Manually generated credentials". Do not generate any credentials, simply copy the displayed
URI.
The Google Cloud Source credentials provider will use Google Cloud Platform application default
credentials. See Google Cloud SDK documentation on how to create application default credentials
for a system. This approach will work for user accounts in dev environments and for service
accounts in production environments.
By default, the JGit library used by Spring Cloud Config Server uses SSH configuration files such as
~/.ssh/known_hosts and /etc/ssh/ssh_config when connecting to Git repositories by using an SSH
URI. In cloud environments such as Cloud Foundry, the local filesystem may be ephemeral or not
easily accessible. For those cases, SSH configuration can be set by using Java properties. In order to
activate property-based SSH configuration, the
spring.cloud.config.server.git.ignoreLocalSshSettings property must be set to true, as shown in
the following example:
spring:
cloud:
config:
server:
git:
uri: [email protected]:team/repo1.git
ignoreLocalSshSettings: true
hostKey: someHostKey
hostKeyAlgorithm: ssh-rsa
privateKey: |
-----BEGIN RSA PRIVATE KEY-----
MIIEpgIBAAKCAQEAx4UbaDzY5xjW6hc9jwN0mX33XpTDVW9WqHp5AKaRbtAC3DqX
IXFMPgw3K45jxRb93f8tv9vL3rD9CUG1Gv4FM+o7ds7FRES5RTjv2RT/JVNJCoqF
ol8+ngLqRZCyBtQN7zYByWMRirPGoDUqdPYrj2yq+ObBBNhg5N+hOwKjjpzdj2Ud
1l7R+wxIqmJo1IYyy16xS8WsjyQuyC0lL456qkd5BDZ0Ag8j2X9H9D5220Ln7s9i
oezTipXipS7p7Jekf3Ywx6abJwOmB0rX79dV4qiNcGgzATnG1PkXxqt76VhcGa0W
DDVHEEYGbSQ6hIGSh0I7BQun0aLRZojfE3gqHQIDAQABAoIBAQCZmGrk8BK6tXCd
fY6yTiKxFzwb38IQP0ojIUWNrq0+9Xt+NsypviLHkXfXXCKKU4zUHeIGVRq5MN9b
BO56/RrcQHHOoJdUWuOV2qMqJvPUtC0CpGkD+valhfD75MxoXU7s3FK7yjxy3rsG
EmfA6tHV8/4a5umo5TqSd2YTm5B19AhRqiuUVI1wTB41DjULUGiMYrnYrhzQlVvj
5MjnKTlYu3V8PoYDfv1GmxPPh6vlpafXEeEYN8VB97e5x3DGHjZ5UrurAmTLTdO8
+AahyoKsIY612TkkQthJlt7FJAwnCGMgY6podzzvzICLFmmTXYiZ/28I4BX/mOSe
pZVnfRixAoGBAO6Uiwt40/PKs53mCEWngslSCsh9oGAaLTf/XdvMns5VmuyyAyKG
ti8Ol5wqBMi4GIUzjbgUvSUt+IowIrG3f5tN85wpjQ1UGVcpTnl5Qo9xaS1PFScQ
xrtWZ9eNj2TsIAMp/svJsyGG3OibxfnuAIpSXNQiJPwRlW3irzpGgVx/AoGBANYW
dnhshUcEHMJi3aXwR12OTDnaLoanVGLwLnkqLSYUZA7ZegpKq90UAuBdcEfgdpyi
PhKpeaeIiAaNnFo8m9aoTKr+7I6/uMTlwrVnfrsVTZv3orxjwQV20YIBCVRKD1uX
VhE0ozPZxwwKSPAFocpyWpGHGreGF1AIYBE9UBtjAoGBAI8bfPgJpyFyMiGBjO6z
FwlJc/xlFqDusrcHL7abW5qq0L4v3R+FrJw3ZYufzLTVcKfdj6GelwJJO+8wBm+R
gTKYJItEhT48duLIfTDyIpHGVm9+I1MGhh5zKuCqIhxIYr9jHloBB7kRm0rPvYY4
VAykcNgyDvtAVODP+4m6JvhjAoGBALbtTqErKN47V0+JJpapLnF0KxGrqeGIjIRV
cYA6V4WYGr7NeIfesecfOC356PyhgPfpcVyEztwlvwTKb3RzIT1TZN8fH4YBr6Ee
KTbTjefRFhVUjQqnucAvfGi29f+9oE3Ei9f7wA+H35ocF6JvTYUsHNMIO/3gZ38N
CPjyCMa9AoGBAMhsITNe3QcbsXAbdUR00dDsIFVROzyFJ2m40i4KCRM35bC/BIBs
q0TY3we+ERB40U8Z2BvU61QuwaunJ2+uGadHo58VSVdggqAo0BSkH58innKKt96J
69pcVH/4rmLbXdcmNYGm6iu+MlPQk4BUZknHSmVHIFdJ0EPupVaQ8RHT
-----END RSA PRIVATE KEY-----
Spring Cloud Config Server also supports a search path with placeholders for the {application} and
{profile} (and {label} if you need it), as shown in the following example:
spring:
cloud:
config:
server:
git:
uri: https://github.com/spring-cloud-samples/config-repo
search-paths: '{application}'
The preceding listing causes the server to search the repository for files in a directory with the
same name as the application (as well as at the top level). Wildcards are also valid in a search path
with placeholders (any matching directory is included in the search).
As mentioned earlier, Spring Cloud Config Server makes a clone of the remote git repository in case
the local copy gets dirty (for example, folder content changes by an OS process) such that Spring
Cloud Config Server cannot update the local copy from the remote repository.
To solve this issue, there is a force-pull property that makes Spring Cloud Config Server force pull
from the remote repository if the local copy is dirty, as shown in the following example:
spring:
cloud:
config:
server:
git:
uri: https://github.com/spring-cloud-samples/config-repo
force-pull: true
If you have a multiple-repositories configuration, you can configure the force-pull property per
repository, as shown in the following example:
spring:
cloud:
config:
server:
git:
uri: https://git/common/config-repo.git
force-pull: true
repos:
team-a:
pattern: team-a-*
uri: https://git/team-a/config-repo.git
force-pull: true
team-b:
pattern: team-b-*
uri: https://git/team-b/config-repo.git
force-pull: true
team-c:
pattern: team-c-*
uri: https://git/team-a/config-repo.git
Because Spring Cloud Config Server keeps a clone of the remote git repository after checking out a
branch to the local repo (for example, when fetching properties by label), it keeps this branch
forever, or until the next server restart (which creates a new local repo). So there could be a case
where a remote branch is deleted but the local copy of it is still available for fetching. If a Spring
Cloud Config Server client service starts with --spring.cloud.config.label=deletedRemoteBranch,master,
it fetches properties from the deletedRemoteBranch local branch, but not from master.
You can control how often the config server will fetch updated configuration data from your Git
backend by using spring.cloud.config.server.git.refreshRate. The value of this property is
specified in seconds. By default the value is 0, meaning the config server will fetch updated
configuration from the Git repo every time it is requested.
With VCS-based backends (git, svn), files are checked out or cloned to the local
filesystem. By default, they are put in the system temporary directory with a prefix
of config-repo-. On linux, for example, it could be /tmp/config-repo-<randomid>.
Some operating systems routinely clean out temporary directories. This can lead to
unexpected behavior, such as missing properties. To avoid this problem, change
the directory that Config Server uses by setting
spring.cloud.config.server.git.basedir or
spring.cloud.config.server.svn.basedir to a directory that does not reside in the
system temp structure.
There is also a “native” profile in the Config Server that does not use Git but loads the config files
from the local classpath or file system (any static URL you want to point to with
spring.cloud.config.server.native.searchLocations). To use the native profile, launch the Config
Server with spring.profiles.active=native.
Remember to use the file: prefix for file resources (the default without a prefix is
usually the classpath). As with any Spring Boot configuration, you can embed ${}
-style environment placeholders, but remember that absolute paths in Windows
require an extra / (for example, file:///${user.home}/config-repo).
The search locations can contain placeholders for {application}, {profile}, and {label}. In this way,
you can segregate the directories in the path and choose a strategy that makes sense for you (such
as subdirectory per application or subdirectory per profile).
If you do not use placeholders in the search locations, this repository also appends the {label}
parameter of the HTTP resource to a suffix on the search path, so properties files are loaded from
each search location and a subdirectory with the same name as the label (the labelled properties
take precedence in the Spring Environment). Thus, the default behaviour with no placeholders is
the same as adding a search location ending with /{label}/. For example, file:/tmp/config is the
same as file:/tmp/config,file:/tmp/config/{label}. This behavior can be disabled by setting
spring.cloud.config.server.native.addLabelLocations=false.
Vault is a tool for securely accessing secrets. A secret is anything to which you want to
tightly control access, such as API keys, passwords, certificates, and other sensitive
information. Vault provides a unified interface to any secret while providing tight access
control and recording a detailed audit log.
For more information on Vault, see the Vault quick start guide.
To enable the config server to use a Vault backend, you can run your config server with the vault
profile. For example, in your config server’s application.properties, you can add
spring.profiles.active=vault.
By default, the config server assumes that your Vault server runs at 127.0.0.1:8200. It also assumes
that the name of backend is secret and the key is application. All of these defaults can be
configured in your config server’s application.properties. The following table describes
configurable Vault properties:
Name              Default Value
host              127.0.0.1
port              8200
scheme            http
backend           secret
defaultKey        application
profileSeparator  ,
kvVersion         1
skipSslValidation false
timeout           5
namespace         null
Vault 0.10.0 introduced a versioned key-value backend (k/v backend version 2) that
exposes a different API than earlier versions. It now requires a data/ between the
mount path and the actual context path and wraps secrets in a data object. Setting
spring.cloud.config.server.vault.kv-version=2 will take this into account.
Optionally, there is support for the Vault Enterprise X-Vault-Namespace header. To have it sent to
Vault, set the namespace property.
With your config server running, you can make HTTP requests to the server to retrieve values from
the Vault backend. To do so, you need a token for your Vault server.
First, place some data in your Vault, as shown in the following example:
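For instance (assuming the default secret backend; the key and value are illustrative):
$ vault kv put secret/application foo=bar baz=bam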
Second, make an HTTP request to your config server to retrieve the values, as shown in the
following example:
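A hypothetical request for an application named myapp might look like this (replace yourtoken
with a valid Vault token):
$ curl -X "GET" "http://localhost:8888/myapp/default" -H "X-Config-Token: yourtoken"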
The default way for a client to provide the necessary authentication to let Config Server talk to
Vault is to set the X-Config-Token header. However, you can instead omit the header and configure
the authentication in the server, by setting the same configuration properties as Spring Cloud Vault.
The property to set is spring.cloud.config.server.vault.authentication. It should be set to one of
the supported authentication methods. You may also need to set other properties specific to the
authentication method you use, by using the same property names as documented for
spring.cloud.vault but instead using the spring.cloud.config.server.vault prefix. See the Spring
Cloud Vault Reference Guide for more detail.
If you omit the X-Config-Token header and use a server property to set the
authentication, the Config Server application needs an additional dependency on
Spring Vault to enable the additional authentication options. See the Spring Vault
Reference Guide for how to add that dependency.
When using Vault, you can provide your applications with multiple properties sources. For
example, assume you have written data to the following paths in Vault:
secret/myApp,dev
secret/myApp
secret/application,dev
secret/application
Properties written to secret/application are available to all applications using the Config Server.
An application with the name, myApp, would have any properties written to secret/myApp and
secret/application available to it. When myApp has the dev profile enabled, properties written to all
of the above paths would be available to it, with properties in the first path in the list taking
priority over the others.
The configuration server can access a Git or Vault backend through an HTTP or HTTPS proxy. This
behavior is controlled for either Git or Vault by settings under proxy.http and proxy.https. These
settings are per repository, so if you are using a composite environment repository you must
configure proxy settings for each backend in the composite individually. If using a network which
requires separate proxy servers for HTTP and HTTPS URLs, you can configure both the HTTP and
the HTTPS proxy settings for a single backend.
The following table describes the proxy configuration properties for both HTTP and HTTPS proxies.
All of these properties must be prefixed by proxy.http or proxy.https.
Sharing configuration between all applications varies according to which approach you take, as
described in the following topics:
• File Based Repositories
• Vault Server
With file-based (git, svn, and native) repositories, resources with file names in application*
(application.properties, application.yml, application-*.properties, and so on) are shared between
all client applications. You can use resources with these file names to configure global defaults and
have them be overridden by application-specific files as necessary.
The property overrides feature can also be used for setting global defaults, with placeholders that
applications are allowed to override locally.
With the “native” profile (a local file system backend), you should use an explicit
search location that is not part of the server’s own configuration. Otherwise, the
application* resources in the default search locations get removed because they
are part of the server.
Vault Server
When using Vault as a backend, you can share configuration with all applications by placing
configuration in secret/application. For example, if you run the following Vault command, all
applications using the config server will have the properties foo and baz available to them:
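A sketch of such a command (the property values are illustrative):
$ vault kv put secret/application foo=bar baz=bam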
When using CredHub as a backend, you can share configuration with all applications by placing
configuration in /application/ or by placing it in the default profile for the application. For
example, if you run the following CredHub command, all applications using the config server will
have the properties shared.color1 and shared.color2 available to them:
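A sketch of such a command (the flags follow the CredHub CLI; the values are illustrative):
$ credhub set --name "/application/" --type json \
    --value '{"shared.color1": "blue", "shared.color2": "red"}'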
Spring Cloud Config Server supports JDBC (relational database) as a backend for configuration
properties. You can enable this feature by adding spring-jdbc to the classpath and using the jdbc
profile or by adding a bean of type JdbcEnvironmentRepository. If you include the right dependencies
on the classpath (see the user guide for more details on that), Spring Boot configures a data source.
The database needs to have a table called PROPERTIES with columns called APPLICATION, PROFILE, and
LABEL (with the usual Environment meaning), plus KEY and VALUE for the key and value pairs in
Properties style. All fields are of type String in Java, so you can make them VARCHAR of whatever
length you need. Property values behave in the same way as they would if they came from Spring
Boot properties files named {application}-{profile}.properties, including all the encryption and
decryption, which will be applied as post-processing steps (that is, not in the repository
implementation directly).
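As a sketch, the expected table could be created as follows (the column lengths are illustrative, and
KEY and VALUE may need quoting as reserved words, depending on your database dialect):
CREATE TABLE PROPERTIES (
    APPLICATION VARCHAR(128),
    PROFILE VARCHAR(128),
    LABEL VARCHAR(128),
    "KEY" VARCHAR(256),
    "VALUE" VARCHAR(2048)
);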
Spring Cloud Config Server supports Redis as a backend for configuration properties. You can
enable this feature by adding a dependency to Spring Data Redis.
pom.xml
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
</dependencies>
The following configuration uses Spring Data RedisTemplate to access Redis. We can use
spring.redis.* properties to override the default connection settings.
spring:
profiles:
active: redis
redis:
host: redis
port: 16379
The properties should be stored as fields in a hash. The name of the hash should be the same as the
spring.application.name property or a conjunction of spring.application.name and
spring.profiles.active[n].
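For example, a sample hash could be created with the Redis CLI as follows (the application name
and properties are illustrative):
HMSET sample-app server.port "8100" sample.topic.name "test" test.property1 "property1"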
After running the command visible above, the hash should contain the following keys with values:
HGETALL sample-app
{
"server.port": "8100",
"sample.topic.name": "test",
"test.property1": "property1"
}
Spring Cloud Config Server supports AWS S3 as a backend for configuration properties. You can
enable this feature by adding a dependency to the AWS Java SDK For Amazon S3.
pom.xml
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-s3</artifactId>
</dependency>
</dependencies>
The following configuration uses the AWS S3 client to access configuration files. We can use
spring.cloud.config.server.awss3.* properties to select the bucket where your configuration is stored.
spring:
profiles:
active: awss3
cloud:
config:
server:
awss3:
region: us-east-1
bucket: bucket1
It is also possible to specify an AWS URL to override the standard endpoint of your S3 service with
spring.cloud.config.server.awss3.endpoint. This allows support for beta regions of S3 and other
S3-compatible storage APIs.
Credentials are found using the Default AWS Credential Provider Chain. Versioned and encrypted
buckets are supported without further configuration.
Spring Cloud Config Server supports CredHub as a backend for configuration properties. You can
enable this feature by adding a dependency to Spring CredHub.
pom.xml
<dependencies>
<dependency>
<groupId>org.springframework.credhub</groupId>
<artifactId>spring-credhub-starter</artifactId>
</dependency>
</dependencies>
spring:
profiles:
active: credhub
cloud:
config:
server:
credhub:
url: https://credhub:8844
The properties should be stored as JSON, such as:
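A sketch of such an entry (the path is assumed to follow the {name}/{profile}/{label}/{key}
convention; the values are illustrative):
$ credhub set --name "/demo-app/default/master/toggles" --type json \
    --value '{"toggle.button": "blue", "toggle.link": "red", "marketing.enabled": true, "external.enabled": false}'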
All client applications with the name spring.cloud.config.name=demo-app will have the following
properties available to them:
{
    "toggle.button": "blue",
    "toggle.link": "red",
    "marketing.enabled": true,
    "external.enabled": false
}
When no profile is specified, default is used, and when no label is specified,
master is used as the default value. NOTE: Values added to application are
shared by all applications.
OAuth 2.0
pom.xml
<dependencies>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-config</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-oauth2-client</artifactId>
    </dependency>
</dependencies>
The following configuration uses OAuth 2.0 and UAA to access CredHub:
spring:
  profiles:
    active: credhub
  cloud:
    config:
      server:
        credhub:
          url: https://credhub:8844
          oauth2:
            registration-id: credhub-client
  security:
    oauth2:
      client:
        registration:
          credhub-client:
            provider: uaa
            client-id: credhub_config_server
            client-secret: asecret
            authorization-grant-type: client_credentials
        provider:
          uaa:
            token-uri: https://uaa:8443/oauth/token
In some scenarios, you may wish to pull configuration data from multiple environment
repositories. To do so, you can enable the composite profile in your configuration server’s
application properties or YAML file. If, for example, you want to pull configuration data from a
Subversion repository as well as two Git repositories, you can set the following properties for your
configuration server:
spring:
  profiles:
    active: composite
  cloud:
    config:
      server:
        composite:
        - type: svn
          uri: file:///path/to/svn/repo
        - type: git
          uri: file:///path/to/rex/git/repo
        - type: git
          uri: file:///path/to/walter/git/repo
Using this configuration, precedence is determined by the order in which repositories are listed
under the composite key. In the above example, the Subversion repository is listed first, so a value
found in the Subversion repository will override values found for the same property in one of the
Git repositories. A value found in the rex Git repository will be used before a value found for the
same property in the walter Git repository.
If you want to pull configuration data only from repositories that are each of distinct types, you can
enable the corresponding profiles, rather than the composite profile, in your configuration server’s
application properties or YAML file. If, for example, you want to pull configuration data from a
single Git repository and a single HashiCorp Vault server, you can set the following properties for
your configuration server:
spring:
profiles:
active: git, vault
cloud:
config:
server:
git:
uri: file:///path/to/git/repo
order: 2
vault:
host: 127.0.0.1
port: 8200
order: 1
Using this configuration, precedence can be determined by an order property. You can use the order
property to specify the priority order for all your repositories. The lower the numerical value of the
order property, the higher priority it has. The priority order of a repository helps resolve any
potential conflicts between repositories that contain values for the same properties.
If your composite environment includes a Vault server as in the previous example,
you must include a Vault token in every request made to the configuration server.
See Vault Backend.
Any type of failure when retrieving values from an environment repository results
in a failure for the entire composite environment.
In addition to using one of the environment repositories from Spring Cloud, you can also provide
your own EnvironmentRepository bean to be included as part of a composite environment. To do so,
your bean must implement the EnvironmentRepository interface. If you want to control the priority
of your custom EnvironmentRepository within the composite environment, you should also
implement the Ordered interface and override the getOrder method. If you do not implement the
Ordered interface, your EnvironmentRepository is given the lowest priority.
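A minimal sketch of such a bean, assuming a hypothetical repository that serves a single hard-coded property source:

public class CustomEnvironmentRepository implements EnvironmentRepository, Ordered {

    @Override
    public Environment findOne(String application, String profile, String label) {
        // build an Environment for the requested application and profile
        Environment environment = new Environment(application, profile);
        // hypothetical property source; a real implementation would query a backing store
        environment.add(new PropertySource("custom",
                Collections.singletonMap("demo.key", "demo-value")));
        return environment;
    }

    @Override
    public int getOrder() {
        // lowest priority within the composite environment
        return Ordered.LOWEST_PRECEDENCE;
    }

}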
The Config Server has an “overrides” feature that lets the operator provide configuration properties
to all applications. The overridden properties cannot be accidentally changed by the application
with the normal Spring Boot hooks. To declare overrides, add a map of name-value pairs to
spring.cloud.config.server.overrides, as shown in the following example:
spring:
  cloud:
    config:
      server:
        overrides:
          foo: bar
The preceding example causes all applications that are config clients to read foo=bar, independent
of their own configuration.
Normally, Spring environment placeholders with ${} can be escaped (and resolved
on the client) by using backslash (\) to escape the $ or the {. For example,
\${app.foo:bar} resolves to bar, unless the app provides its own app.foo.
In YAML, you do not need to escape the backslash itself. However, in properties
files, you do need to escape the backslash when you configure the overrides on
the server.
You can change the priority of all overrides in the client to be more like default values, letting
applications supply their own values in environment variables or System properties, by setting the
spring.cloud.config.overrideNone=true flag (the default is false) in the remote repository.
You can configure the Health Indicator to check more applications along with custom profiles and
custom labels, as shown in the following example:
spring:
  cloud:
    config:
      server:
        health:
          repositories:
            myservice:
              label: mylabel
            myservice-dev:
              name: myservice
              profiles: development
2.3. Security
You can secure your Config Server in any way that makes sense to you (from physical network
security to OAuth2 bearer tokens), because Spring Security and Spring Boot offer support for many
security arrangements.
To use the default Spring Boot-configured HTTP Basic security, include Spring Security on the
classpath (for example, through spring-boot-starter-security). The default is a username of user
and a randomly generated password. A random password is not useful in practice, so we
recommend you configure the password (by setting spring.security.user.password) and encrypt it
(see below for instructions on how to do that).
2.4. Encryption and Decryption
To use the encryption and decryption features you need the full-strength JCE
installed in your JVM (it is not included by default). You can download the “Java
Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files” from
Oracle and follow the installation instructions (essentially, you need to replace the
two policy files in the JRE lib/security directory with the ones that you
downloaded).
If the remote property sources contain encrypted content (values starting with {cipher}), they are
decrypted before sending to clients over HTTP. The main advantage of this setup is that the
property values need not be in plain text when they are “at rest” (for example, in a git repository).
If a value cannot be decrypted, it is removed from the property source and an additional property
is added with the same key but prefixed with invalid and a value that means “not applicable”
(usually <n/a>). This is largely to prevent cipher text being used as a password and accidentally
leaking.
If you set up a remote config repository for config client applications, it might contain an
application.yml similar to the following:
application.yml
spring:
  datasource:
    username: dbuser
    password: '{cipher}FKSAJDFGYOS8F7GLHAKERGFHLSAJ'
Encrypted values in an application.properties file must not be wrapped in quotes. Otherwise, the
value is not decrypted. The following example shows values that would work:
application.properties
spring.datasource.username: dbuser
spring.datasource.password: {cipher}FKSAJDFGYOS8F7GLHAKERGFHLSAJ
You can safely push this plain text to a shared git repository, and the secret password remains
protected.
The server also exposes /encrypt and /decrypt endpoints (on the assumption that these are secured
and only accessed by authorized agents). If you edit a remote config file, you can use the Config
Server to encrypt values by POSTing to the /encrypt endpoint, as shown in the following example:
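$ curl localhost:8888/encrypt -s -d mysecret
682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda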
Be sure not to include any of the curl command statistics in the encrypted value,
which is why the examples use the -s option to silence them. Outputting the value
to a file can help avoid this problem.
The inverse operation is also available through /decrypt (provided the server is configured with a
symmetric key or a full key pair), as shown in the following example:
$ curl localhost:8888/decrypt -s -d 682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
mysecret
Take the encrypted value and add the {cipher} prefix before you put it in the YAML or properties
file and before you commit and push it to a remote (potentially insecure) store.
The /encrypt and /decrypt endpoints also both accept paths in the form of
/*/{application}/{profiles}, which can be used to control cryptography on a per-application
(name) and per-profile basis when clients call into the main environment resource.
To control the cryptography in this granular way, you must also provide a @Bean of
type TextEncryptorLocator that creates a different encryptor per name and profiles.
The one that is provided by default does not do so (all encryptions use the same
key).
The spring command line client (with Spring Cloud CLI extensions installed) can also be used to
encrypt and decrypt, as shown in the following example:
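$ spring encrypt mysecret --key foo
682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
$ spring decrypt --key foo 682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
mysecret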
To use a key in a file (such as an RSA public key for encryption), prepend the key value with "@"
and provide the file path, as shown in the following example:
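$ spring encrypt mysecret --key @${HOME}/.ssh/id_rsa.pub
AQAjPgt3eFZQXwt8tsHAVv/QHiY5sI2dRcR+...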
To configure a symmetric key, you need to set encrypt.key to a secret String (or use the ENCRYPT_KEY
environment variable to keep it out of plain-text configuration files).
To configure an asymmetric key, use a keystore (for example, one created by the keytool utility that
comes with the JDK). The keystore properties are encrypt.keyStore.*, with * equal to:
Property Description
encrypt.keyStore.location Contains a Resource location
encrypt.keyStore.password Holds the password that unlocks the keystore
encrypt.keyStore.alias Identifies which key in the store to use
encrypt.keyStore.type The type of KeyStore to create. Defaults to jks.
The encryption is done with the public key, and a private key is needed for decryption. Thus, in
principle, you can configure only the public key in the server if you want to only encrypt (and are
prepared to decrypt the values yourself locally with the private key). In practice, you might not
want to do decrypt locally, because it spreads the key management process around all the clients,
instead of concentrating it in the server. On the other hand, it can be a useful option if your config
server is relatively insecure and only a handful of clients need the encrypted properties.
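For example, you can create such a keystore with keytool; the values here match the settings used in the configuration shown below:

$ keytool -genkeypair -alias mytestkey -keyalg RSA \
  -dname "CN=Web Server,OU=Unit,O=Organization,L=City,S=State,C=US" \
  -keypass changeme -keystore server.jks -storepass letmein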
When using JDK 11 or above you may get the following warning when using the
command above. In this case you probably want to make sure the keypass and
storepass values match.
Warning: Different store and key passwords not supported for PKCS12 KeyStores.
Ignoring user-specified -keypass value.
Put the server.jks file in the classpath (for instance) and then, in your bootstrap.yml, for the Config
Server, create the following settings:
encrypt:
  keyStore:
    location: classpath:/server.jks
    password: letmein
    alias: mytestkey
    secret: changeme
foo:
  bar: `{cipher}{key:testkey}...`
The locator looks for a key named "testkey". A secret can also be supplied by using a {secret:…}
value in the prefix. However, if it is not supplied, the default is to use the keystore password (which
is what you get when you build a keystore and do not specify a secret). If you do supply a secret,
you should also encrypt the secret using a custom SecretLocator.
When the keys are being used only to encrypt a few bytes of configuration data (that is, they are not
being used elsewhere), key rotation is hardly ever necessary on cryptographic grounds. However,
you might occasionally need to change the keys (for example, in the event of a security breach). In
that case, all the clients would need to change their source config files (for example, in git) and use
a new {key:…} prefix in all the ciphers. Note that the clients need to first check that the key alias is
available in the Config Server keystore.
If you want to let the Config Server handle all encryption as well as decryption, the
{name:value} prefixes can also be added as plain text posted to the /encrypt
endpoint.
The YAML and properties representations have an additional flag (provided as a boolean query
parameter called resolvePlaceholders) to signal that placeholders in the source documents (in the
standard Spring ${…} form) should be resolved in the output before rendering, where possible.
This is a useful feature for consumers that do not know about the Spring placeholder conventions.
There are limitations in using the YAML or properties formats, mainly in relation
to the loss of metadata. For example, the JSON is structured as an ordered list of
property sources, with names that correlate with the source. The YAML and
properties forms are coalesced into a single map, even if the origin of the values
has multiple sources, and the names of the original source files are lost. Also, the
YAML representation is not necessarily a faithful representation of the YAML
source in a backing repository either. It is constructed from a list of flat property
sources, and assumptions have to be made about the form of the keys.
After a resource is located, placeholders in the normal format (${…}) are resolved by using the
effective Environment for the supplied application name, profile, and label. In this way, the resource
endpoint is tightly integrated with the environment endpoints.
As with the source files for environment configuration, the profile is used to
resolve the file name. So, if you want a profile-specific file,
/*/development/*/logback.xml can be resolved by a file called logback-
development.xml (in preference to logback.xml).
If you do not want to supply the label and let the server use the default label, you
can supply a useDefaultLabel request parameter. Consequently, the preceding
example for the default profile could be
/sample/default/nginx.conf?useDefaultLabel.
At present, Spring Cloud Config can serve plaintext for git, SVN, native backends, and AWS S3. The
support for git, SVN, and native backends is identical. AWS S3 works a bit differently. The following
sections show how each one works:
• Git, SVN, and Native Backends
• AWS S3
4.1. Git, SVN, and Native Backends
For example, suppose that a repository contains the files application.yml and nginx.conf, where
nginx.conf looks like this:
nginx.conf
server {
    listen              80;
    server_name         ${nginx.server.name};
}
and application.yml looks like this:
application.yml
nginx:
  server:
    name: example.com
---
spring:
  profiles: development
nginx:
  server:
    name: develop.com
A request for the nginx.conf resource with the development profile then renders as:
server {
    listen              80;
    server_name         develop.com;
}
4.2. AWS S3
To enable serving plain text for AWS S3, the Config Server application needs to include a
dependency on Spring Cloud AWS. For details on how to set up that dependency, see the Spring
Cloud AWS Reference Guide. Then you need to configure Spring Cloud AWS, as described in the
Spring Cloud AWS Reference Guide.
Decrypting plain text files is only supported for YAML, JSON, and properties file
extensions.
If this feature is enabled and an unsupported file extension is requested, any encrypted values in
the file will not be decrypted.
If you use the bootstrap flag, the config server needs to have its name and
repository URI configured in bootstrap.yml.
To change the location of the server endpoints, you can (optionally) set
spring.cloud.config.server.prefix (for example, /config), to serve the resources under a prefix.
The prefix should start but not end with a /. It is applied to the @RequestMappings in the Config
Server (that is, underneath the Spring Boot server.servletPath and server.contextPath prefixes).
If you want to read the configuration for an application directly from the backend repository
(instead of from the config server), you basically want an embedded config server with no
endpoints. You can switch off the endpoints entirely by not using the @EnableConfigServer
annotation (set spring.cloud.config.server.bootstrap=true).
When the webhook is activated, the Config Server sends a RefreshRemoteApplicationEvent targeted
at the applications it thinks might have changed. The change-detection strategy can be customized. However,
by default, it looks for changes in files that match the application name (for example,
foo.properties is targeted at the foo application, while application.properties is targeted at all
applications). The strategy to use when you want to override the behavior is
PropertyPathNotificationExtractor, which accepts the request headers and body as parameters and
returns a list of file paths that changed.
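A sketch of a custom extractor, assuming the extract(headers, payload) contract from spring-cloud-config-monitor and a hypothetical webhook that sends changed paths in an X-Paths header:

public class HeaderPathNotificationExtractor implements PropertyPathNotificationExtractor {

    @Override
    public PropertyPathNotification extract(MultiValueMap<String, String> headers,
            Map<String, Object> payload) {
        // hypothetical header carrying a comma-separated list of changed file paths
        String paths = headers.getFirst("X-Paths");
        return (paths == null) ? null : new PropertyPathNotification(paths.split(","));
    }

}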
The default configuration works out of the box with Github, Gitlab, Gitea, Gitee, Gogs or Bitbucket.
In addition to the JSON notifications from Github, Gitlab, Gitee, or Bitbucket, you can trigger a
change notification by POSTing to /monitor with form-encoded body parameters in the pattern of
path={application}. Doing so broadcasts to applications matching the {application} pattern (which
can contain wildcards).
The default configuration also detects filesystem changes in local git repositories.
In that case, the webhook is not used. However, as soon as you edit a config file, a
refresh is broadcast.
application.properties
spring.config.import=optional:configserver:
This will connect to the Config Server at the default location of "http://localhost:8888". Removing the
optional: prefix will cause the Config Client to fail if it is unable to connect to Config Server. To
change the location of the Config Server, either set spring.cloud.config.uri or add the URL to the
spring.config.import statement, such as
spring.config.import=optional:configserver:http://myhost:8888. The location in the import
property takes precedence over the uri property.
A bootstrap file (properties or yaml) is not needed for the Spring Boot Config Data
method of import via spring.config.import.
Unless you are using config first bootstrap, you will need to have a
spring.config.import property in your configuration properties. For example,
spring.config.import=optional:configserver:.
If you use a DiscoveryClient implementation, such as Spring Cloud Netflix and Eureka Service
Discovery or Spring Cloud Consul, you can have the Config Server register with the Discovery
Service.
If you prefer to use DiscoveryClient to locate the Config Server, you can do so by setting
spring.cloud.config.discovery.enabled=true (the default is false). For example, with Spring Cloud
Netflix, you need to define the Eureka server address (for example, in
eureka.client.serviceUrl.defaultZone). The price for using this option is an extra network round
trip on startup, to locate the service registration. The benefit is that, as long as the Discovery Service
is a fixed point, the Config Server can change its coordinates. The default service ID is configserver,
but you can change that on the client by setting spring.cloud.config.discovery.serviceId (and on
the server, in the usual way for a service, such as by setting spring.application.name).
The discovery client implementations all support some kind of metadata map (for example, we
have eureka.instance.metadataMap for Eureka). Some additional properties of the Config Server may
need to be configured in its service registration metadata so that clients can connect correctly. If the
Config Server is secured with HTTP Basic, you can configure the credentials as user and password.
Also, if the Config Server has a context path, you can set configPath. For example, the following
YAML file is for a Config Server that is a Eureka client:
eureka:
  instance:
    ...
    metadataMap:
      user: osufhalskjrtl
      password: lviuhlszvaorhvlo5847
      configPath: /config
If you use the Eureka DiscoveryClient from Spring Cloud Netflix and also want to use WebClient
instead of Jersey or RestTemplate, you need to include WebClient on your classpath as well as set
eureka.client.webclient.enabled=true.
If you want to take full control of the retry behavior and are using legacy bootstrap,
add a @Bean of type RetryOperationsInterceptor with an ID of
configServerRetryInterceptor. Spring Retry has a RetryInterceptorBuilder that
supports creating one.
application-prod.properties
spring.config.import=configserver:http://configserver.example.com?fail-fast=true&max-attempts=10&max-interval=1500&multiplier=1.2&initial-interval=1100
This sets spring.cloud.config.fail-fast=true (notice the missing prefix above) and all the available
spring.cloud.config.retry.* configuration properties.
• "application" = ${spring.application.name}
• "label" = "master"
When setting the property ${spring.application.name} do not prefix your app
name with the reserved word application- to prevent issues resolving the correct
property source.
You can override all of them by setting spring.cloud.config.* (where * is name, profile or label). The
label is useful for rolling back to previous versions of configuration. With the default Config Server
implementation, it can be a git label, branch name, or commit ID. Label can also be provided as a
comma-separated list. In that case, the items in the list are tried one by one until one succeeds. This
behavior can be useful when working on a feature branch. For instance, you might want to align
the config label with your branch but make it optional (in that case, use
spring.cloud.config.label=myfeature,develop).
If you use HTTP basic security on your Config Server, it is currently possible to support per-Config
Server auth credentials only if you embed the credentials in each URL you specify under the
spring.cloud.config.uri property. If you use any other kind of security mechanism, you cannot
(currently) support per-Config Server authentication and authorization.
7.9. Security
If you use HTTP Basic security on the server, clients need to know the password (and username if it
is not the default). You can specify the username and password through the config server URI or via
separate username and password properties, as shown in the following example:
spring:
  cloud:
    config:
      uri: https://user:[email protected]
The following example shows an alternate way to pass the same information:
spring:
  cloud:
    config:
      uri: https://myconfig.mycompany.com
      username: user
      password: secret
If you deploy your apps on Cloud Foundry, the best way to provide the password is through service
credentials (such as in the URI, since it does not need to be in a config file). The following example
works locally and for a user-provided service on Cloud Foundry named configserver:
spring:
  cloud:
    config:
      uri: ${vcap.services.configserver.credentials.uri:http://user:password@localhost:8888}
If the config server requires a client-side TLS certificate, you can configure the certificate and
trust store via properties, as shown in the following example:
spring:
  cloud:
    config:
      uri: https://myconfig.myconfig.com
      tls:
        enabled: true
        key-store: <path-of-key-store>
        key-store-type: PKCS12
        key-store-password: <key-store-password>
        key-password: <key-password>
        trust-store: <path-of-trust-store>
        trust-store-type: PKCS12
        trust-store-password: <trust-store-password>
The spring.cloud.config.tls.enabled property needs to be true to enable client-side TLS for the
config client. When spring.cloud.config.tls.trust-store is omitted, the JVM default trust store is
used. The default value for spring.cloud.config.tls.key-store-type and
spring.cloud.config.tls.trust-store-type is PKCS12. When password properties are omitted, an
empty password is assumed.
If you use another form of security, you might need to provide a RestTemplate to the
ConfigServicePropertySourceLocator (for example, by grabbing it in the bootstrap context and
injecting it).
The Config Client supplies a Spring Boot Health Indicator that attempts to load configuration from
the Config Server. The health indicator can be disabled by setting health.config.enabled=false. The
response is also cached for performance reasons. The default cache time to live is 5 minutes. To
change that value, set the health.config.time-to-live property (in milliseconds).
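For example, to keep the indicator enabled but cache its result for one minute (a minimal sketch):

health:
  config:
    enabled: true
    time-to-live: 60000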
In some cases, you might need to customize the requests made to the config server from the client.
Typically, doing so involves passing special Authorization headers to authenticate requests to the
server. To provide a custom RestTemplate:
CustomConfigServiceBootstrapConfiguration.java
@Configuration
public class CustomConfigServiceBootstrapConfiguration {

    @Bean
    public ConfigServicePropertySourceLocator configServicePropertySourceLocator() {
        ConfigClientProperties clientProperties = configClientProperties();
        ConfigServicePropertySourceLocator configServicePropertySourceLocator =
                new ConfigServicePropertySourceLocator(clientProperties);
        configServicePropertySourceLocator.setRestTemplate(customRestTemplate(clientProperties));
        return configServicePropertySourceLocator;
    }

}
spring.factories
org.springframework.cloud.bootstrap.BootstrapConfiguration = com.my.config.client.CustomConfigServiceBootstrapConfiguration
7.9.3. Vault
When using Vault as a backend to your config server, the client needs to supply a token for the
server to retrieve values from Vault. This token can be provided within the client by setting
spring.cloud.config.token in bootstrap.yml, as shown in the following example:
spring:
  cloud:
    config:
      token: YourVaultToken
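Vault also supports nesting keys in a stored value, which you might write as follows (the myapp path and payload are illustrative):

echo -n '{"appA": {"secret": "appAsecret"}, "bar": "baz"}' | vault write secret/myapp -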
This command writes a JSON object to your Vault. To access these values in Spring, you would use
the traditional dot (.) notation, as shown in the following example:
@Value("${appA.secret}")
String name = "World";
The preceding code sets the value of the name variable to appAsecret.
This project provides Consul integrations for Spring Boot apps through autoconfiguration and
binding to the Spring Environment and other Spring programming model idioms. With a few
simple annotations, you can quickly enable and configure the common patterns inside your
application and build large distributed systems with Consul-based components. The patterns
provided include Service Discovery, Control Bus, and Configuration. Intelligent Routing,
Client-Side Load Balancing, and Circuit Breaker are provided by integration with other Spring
Cloud projects.
1. Quick Start
This quick start walks through using Spring Cloud Consul for Service Discovery and Distributed
Configuration.
First, run Consul Agent on your machine. Then you can access it and use it as a Service Registry and
Configuration source with Spring Cloud Consul.
1.1. Discovery Client Usage
To use these features in an application, you can build it as a Spring Boot application that depends
on spring-cloud-consul-core. The most convenient way to add the dependency is with a Spring Boot
starter: org.springframework.cloud:spring-cloud-starter-consul-discovery. We recommend using
dependency management and spring-boot-starter-parent. The following example shows a typical
Maven configuration:
pom.xml
<project>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>{spring-boot-version}</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-consul-discovery</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
The following example shows a typical Gradle setup:
build.gradle
plugins {
    id 'org.springframework.boot' version ${spring-boot-version}
    id 'io.spring.dependency-management' version ${spring-dependency-management-version}
    id 'java'
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.cloud:spring-cloud-starter-consul-discovery'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}
Now you can create a standard Spring Boot application, such as the following HTTP server:
@SpringBootApplication
@RestController
public class Application {

    @GetMapping("/")
    public String home() {
        return "Hello World!";
    }

}
When this HTTP server runs, it connects to the Consul Agent running at the default local port 8500.
To modify the startup behavior, you can change the location of the Consul Agent by using
application.properties, as shown in the following example:
spring:
  cloud:
    consul:
      host: localhost
      port: 8500
@Autowired
private DiscoveryClient discoveryClient;
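With the DiscoveryClient injected as shown above, you can, for example, look up the URI of an instance by its logical service name (a sketch; the STORES service name is illustrative):

public Optional<URI> serviceUrl() {
    // pick the first registered instance of the STORES service, if any
    return discoveryClient.getInstances("STORES").stream()
            .map(ServiceInstance::getUri)
            .findFirst();
}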
<project>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>{spring-boot-version}</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-consul-config</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
plugins {
    id 'org.springframework.boot' version ${spring-boot-version}
    id 'io.spring.dependency-management' version ${spring-dependency-management-version}
    id 'java'
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.cloud:spring-cloud-starter-consul-config'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}
Now you can create a standard Spring Boot application, such as the following HTTP server:
@SpringBootApplication
@RestController
public class Application {

    @GetMapping("/")
    public String home() {
        return "Hello World!";
    }

}
If you use Spring Cloud Consul Config, you need to set the spring.config.import
property in order to bind to Consul. You can read more about it in the Spring Boot
Config Data Import section.
2. Install Consul
Please see the installation documentation for instructions on how to install Consul.
3. Consul Agent
A Consul Agent client must be available to all Spring Cloud Consul applications. By default, the
Agent client is expected to be at localhost:8500. See the Agent documentation for specifics on how
to start an Agent client and how to connect to a cluster of Consul Agent Servers. For development,
after you have installed Consul, you may start a Consul Agent using the following command:
./src/main/bash/local_run_consul.sh
This starts an agent in server mode on port 8500, with the UI available at localhost:8500.
@RequestMapping("/")
public String home() {
return "Hello world";
}
(that is, an utterly normal Spring Boot app). If the Consul client is located somewhere other than
localhost:8500, configuration is required to locate the client. Example:
application.yml
spring:
  cloud:
    consul:
      host: localhost
      port: 8500
If you use Spring Cloud Consul Config, and you have set
spring.cloud.bootstrap.enabled=true or spring.config.use-legacy-processing=true
or use spring-cloud-starter-bootstrap, then the above values will need to be
placed in bootstrap.yml instead of application.yml.
The default service name, instance id and port, taken from the Environment, are
${spring.application.name}, the Spring Context ID and ${server.port} respectively.
To disable the Consul Discovery Client you can set spring.cloud.consul.discovery.enabled to false.
Consul Discovery Client will also be disabled when spring.cloud.discovery.enabled is set to false.
When the management server port is set to something different from the application port (by setting
the management.server.port property), the management service is registered as a separate service
from the application service. For example:
application.yml
spring:
  application:
    name: myApp
management:
  server:
    port: 4452
• Application Service:
ID: myApp
Name: myApp
• Management Service:
ID: myApp-management
Name: myApp-management
Management service will inherit its instanceId and serviceName from the application service. For
example:
application.yml
spring:
  application:
    name: myApp
  cloud:
    consul:
      discovery:
        instance-id: custom-service-id
        serviceName: myprefix-${spring.application.name}
management:
  server:
    port: 4452
• Application Service:
ID: custom-service-id
Name: myprefix-myApp
• Management Service:
ID: custom-service-id-management
Name: myprefix-myApp-management
/** Port to register the management service under (defaults to management port) */
spring.cloud.consul.discovery.management-port
The health check for a Consul instance defaults to "/actuator/health", which is the default location of
the health endpoint in a Spring Boot Actuator application. You need to change this, even for an
Actuator application, if you use a non-default context path or servlet path (e.g.
server.servletPath=/foo) or management endpoint path (e.g. management.server.servlet.context-
path=/admin).
The interval that Consul uses to check the health endpoint may also be configured. "10s" and "1m"
represent 10 seconds and 1 minute respectively.
application.yml
spring:
  cloud:
    consul:
      discovery:
        healthCheckPath: ${management.server.servlet.context-path}/actuator/health
        healthCheckInterval: 15s
You can disable the HTTP health check entirely by setting spring.cloud.consul.discovery.register-
health-check=false.
Applying Headers
Headers can be applied to health check requests. For example, if you’re trying to register a Spring
Cloud Config server that uses Vault Backend:
application.yml
spring:
  cloud:
    consul:
      discovery:
        health-check-headers:
          X-Config-Token: 6442e58b-d1ea-182e-cfa5-cf9cddef0722
According to the HTTP standard, each header can have more than one value, in which case an
array can be supplied:
application.yml
spring:
  cloud:
    consul:
      discovery:
        health-check-headers:
          X-Config-Token:
          - "6442e58b-d1ea-182e-cfa5-cf9cddef0722"
          - "Some other value"
If the service instance is a Spring Boot Actuator application, it may be provided with the following
Actuator health indicators:
• DiscoveryClientHealthIndicator
• ConsulHealthIndicator
By default, it retrieves the Consul leader node status and all registered services. In deployments
that have many registered services it may be costly to retrieve all services on every health check. To
skip the service retrieval and only check the leader node status set spring.cloud.consul.health-
indicator.include-services-query=false.
When the application runs in bootstrap context mode (the default), this indicator is
loaded into the bootstrap context and is not made available to the Actuator health
endpoint.
4.2.4. Metadata
Consul supports metadata on services. Spring Cloud's ServiceInstance has a Map<String, String>
metadata field, which is populated from a service's meta field. To populate the meta field, set values
on the spring.cloud.consul.discovery.metadata or spring.cloud.consul.discovery.management-metadata
properties.
application.yml
spring:
  cloud:
    consul:
      discovery:
        metadata:
          myfield: myvalue
          anotherfield: anothervalue
The above configuration will result in a service whose meta field contains myfield→myvalue and
anotherfield→anothervalue.
Generated Metadata
Key: 'group'
Value: the value of the spring.cloud.consul.discovery.instance-group property. This value is
generated only if instance-group is not empty.
Key: the value of spring.cloud.consul.discovery.default-zone-metadata-name (defaults to 'zone')
Value: the value of the spring.cloud.consul.discovery.instance-zone property. This value is
generated only if instance-zone is not empty.
By default, a Consul instance is registered with an ID that is equal to its Spring Application Context
ID. By default, the Spring Application Context ID is
${spring.application.name}:comma,separated,profiles:${server.port}. For most cases, this allows
multiple instances of one service to run on one machine. If further uniqueness is required, you can
use Spring Cloud to override this by providing a unique identifier in
spring.cloud.consul.discovery.instanceId. For example:
application.yml
spring:
  cloud:
    consul:
      discovery:
        instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}
With this metadata and multiple service instances deployed on localhost, the random value kicks in
there to make the instance unique. In Cloud Foundry, the vcap.application.instance_id is populated
automatically in a Spring Boot application, so the random value is not needed.
Spring Cloud has support for Feign (a REST client builder) and also Spring RestTemplate for looking
up services using the logical service names/ids instead of physical URLs. Both Feign and the
discovery-aware RestTemplate utilize Spring Cloud LoadBalancer for client-side load balancing.
If you want to access the STORES service by using the RestTemplate, simply declare:
@LoadBalanced
@Bean
public RestTemplate loadbalancedRestTemplate() {
    return new RestTemplate();
}
and use it like this (notice how we use the STORES service name/id from Consul instead of a fully
qualified domain name):
@Autowired
RestTemplate restTemplate;
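For example (a sketch; the /products/1 path is illustrative):

public String getFirstProduct() {
    // "STORES" is resolved via Consul and load-balanced on the client side
    return this.restTemplate.getForObject("https://STORES/products/1", String.class);
}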
If you have Consul clusters in multiple datacenters and you want to access a service in another
datacenter a service name/id alone is not enough. In that case you use property
spring.cloud.consul.discovery.datacenters.STORES=dc-west where STORES is the service name/id and
dc-west is the datacenter where the STORES service lives.
Spring Cloud now also offers support for Spring Cloud LoadBalancer.
@Autowired
private DiscoveryClient discoveryClient;
The watch uses a Spring TaskScheduler to schedule the call to consul. By default it is a
ThreadPoolTaskScheduler with a poolSize of 1. To change the TaskScheduler, create a bean of type
TaskScheduler named with the
ConsulDiscoveryClientConfiguration.CATALOG_WATCH_TASK_SCHEDULER_NAME constant.
config/testApp,dev/
config/testApp/
config/application,dev/
config/application/
The most specific property source is at the top, with the least specific at the bottom. Properties in
the config/application folder are applicable to all applications using consul for configuration.
Properties in the config/testApp folder are only available to the instances of the service named
"testApp".
Configuration is currently read on startup of the application. Sending an HTTP POST to /refresh
causes the configuration to be reloaded. Config Watch will also automatically detect changes and
reload the application context.
application.properties
spring.config.import=optional:consul:
This will connect to the Consul Agent at the default location of "http://localhost:8500". Removing the
optional: prefix will cause Consul Config to fail if it is unable to connect to Consul. To change the
connection properties of Consul Config, either set spring.cloud.consul.host and
spring.cloud.consul.port or add the host/port pair to the spring.config.import statement, such as
spring.config.import=optional:consul:myhost:8500. The location in the import property takes
precedence over the host and port properties.
Consul Config will try to load values from four automatic contexts based on
spring.cloud.consul.config.name (which defaults to the value of the spring.application.name
property) and spring.cloud.consul.config.default-context (which defaults to application). If you
want to specify the contexts rather than using the computed ones, you can add that information to
the spring.config.import statement.
application.properties
spring.config.import=optional:consul:myhost:8500/contextone;/context/two
This will optionally load configuration only from /contextone and /context/two.
A bootstrap file (properties or yaml) is not needed for the Spring Boot Config Data
method of import via spring.config.import.
5.3. Customizing
Consul Config may be customized using the following properties:
spring:
  cloud:
    consul:
      config:
        enabled: true
        prefix: configuration
        defaultContext: apps
        profileSeparator: '::'
• enabled setting this value to "false" disables Consul Config
• prefix sets the base folder for configuration values
• defaultContext sets the folder name used by all applications
• profileSeparator sets the value of the separator used to separate the profile name in property
sources with profiles
spring:
  cloud:
    consul:
      config:
        format: YAML
YAML must be set in the appropriate data key in Consul. Using the defaults above, the keys would
look like:
config/testApp,dev/data
config/testApp/data
config/application,dev/data
config/application/data
You could store a YAML document in any of the keys listed above.
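For example, the config/application/data key could hold a YAML document like the following (the keys shown are illustrative):

foo:
  bar: myvalue
my:
  other:
    prop: anothervalue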
spring:
  cloud:
    consul:
      config:
        format: FILES
Given the following keys in /config, the development profile and an application name of foo:
.gitignore
application.yml
bar.properties
foo-development.properties
foo-production.yml
foo.properties
master.ref
the following property sources would be used:
config/foo-development.properties
config/foo.properties
config/application.yml
The value of each key needs to be a properly formatted YAML or Properties file.
6. Consul Retry
If you expect that the consul agent may occasionally be unavailable when your app starts, you can
ask it to keep trying after a failure. You need to add spring-retry and spring-boot-starter-aop to
your classpath. The default behaviour is to retry 6 times with an initial backoff interval of 1000ms
and an exponential multiplier of 1.1 for subsequent backoffs. You can configure these properties
(and others) using spring.cloud.consul.retry.* configuration properties. This works with both
Spring Cloud Consul Config and Discovery registration.
To take full control of the retry, add a @Bean of type RetryOperationsInterceptor with
the ID "consulRetryInterceptor". Spring Retry has a RetryInterceptorBuilder that
makes it easy to create one.
See the Spring Cloud Bus documentation for the available actuator endpoints and how to send
custom messages.
pom.xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-netflix-turbine</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-consul-discovery</artifactId>
</dependency>
Notice that the Turbine dependency is not a starter. The turbine starter includes support for Netflix
Eureka.
application.yml
spring.application.name: turbine
applications: consulhystrixclient
turbine:
  aggregator:
    clusterConfig: ${applications}
  appConfig: ${applications}
The clusterConfig and appConfig sections must match, so it's useful to put the comma-separated list
of service IDs into a separate configuration property.
Turbine.java
@EnableTurbine
@SpringBootApplication
public class Turbine {

    public static void main(String[] args) {
        SpringApplication.run(Turbine.class, args);
    }

}
1. Introduction
Spring Cloud Function is a project with the following high-level goals:
• Decouple the development lifecycle of business logic from any specific runtime target so that
the same code can run as a web endpoint, a stream processor, or a task.
• Support a uniform programming model across serverless providers, as well as the ability to run
standalone (locally or in a PaaS).
It abstracts away all of the transport details and infrastructure, allowing the developer to keep all
the familiar tools and processes, and focus firmly on business logic.
Here’s a complete, executable, testable Spring Boot application (implementing a simple string
manipulation):
@SpringBootApplication
public class Application {

    @Bean
    public Function<Flux<String>, Flux<String>> uppercase() {
        return flux -> flux.map(value -> value.toUpperCase());
    }

}
It's just a Spring Boot application, so it can be built, run, and tested, locally and in a CI build, the
same way as any other Spring Boot application. The Function is from java.util.function and Flux is
a Reactive Streams Publisher from Project Reactor. The function can be accessed over HTTP or
messaging.
In a nutshell, Spring Cloud Function provides the following features:
• Wrappers for @Beans of type Function, Consumer and Supplier, exposing them to the outside
world as either HTTP endpoints and/or message stream listeners/publishers with RabbitMQ,
Kafka, and so on.
• Function composition and adaptation (e.g., composing imperative functions with reactive).
• Support for reactive function with multiple inputs and outputs allowing merging, joining and
other complex streaming operation to be handled by functions.
• Packaging functions for deployments, specific to the target platform (e.g., Project Riff, AWS
Lambda and more)
• Deploying a JAR file containing such an application context with an isolated classloader, so that
you can pack them together in a single JVM.
• Compiling strings which are Java function bodies into bytecode, and then turning them into @Beans
that can be wrapped as above.
• Adapters for AWS Lambda, Azure, Google Cloud Functions, Apache OpenWhisk and possibly other
"serverless" service providers.
Spring Cloud is released under the non-restrictive Apache 2.0 license. If you would
like to contribute to this section of the documentation or if you find an error,
please find the source code and issue trackers in the project at github.
2. Getting Started
Build from the command line (and "install" the samples):
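Assuming a checkout of the project with the standard Maven wrapper, that looks like this:

$ ./mvnw clean install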
This runs the app and exposes its functions over HTTP, so you can convert a string to uppercase,
like this:
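For example, assuming the sample app exposes an uppercase function on port 8080:

$ curl -H "Content-Type: text/plain" localhost:8080/uppercase -d Hello
HELLO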
You can convert multiple strings (a Flux<String>) by separating them with new lines.
(You can use Ctrl+Q Ctrl+J in a terminal to insert a new line in a literal string like that.)
3. Programming model
3.1. Function Catalog and Flexible Function Signatures
One of the main features of Spring Cloud Function is to adapt and support a range of type
signatures for user-defined functions, while providing a consistent execution model. That’s why all
user defined functions are transformed into a canonical representation by FunctionCatalog.
While users don’t normally have to care about the FunctionCatalog at all, it is useful to know what
kind of functions are supported in user code.
It is also important to understand that Spring Cloud Function provides first-class support for the
reactive API provided by Project Reactor, allowing reactive primitives such as Mono and Flux to be
used as types in user-defined functions and providing greater flexibility when choosing a
programming model for your function implementation. The reactive programming model also
enables functional support for features that would otherwise be difficult or impossible to implement
using an imperative programming style. For more on this, please read the Function Arity section.
3.2. Java 8 function support
Spring Cloud Function embraces and builds on top of the 3 core functional interfaces defined by
Java and available to us since Java 8.
• Supplier<O>
• Function<I, O>
• Consumer<I>
3.2.1. Supplier
@PollableSupplier(splittable = true)
public Supplier<Flux<String>> someSupplier() {
    return () -> {
        String v1 = String.valueOf(System.nanoTime());
        String v2 = String.valueOf(System.nanoTime());
        String v3 = String.valueOf(System.nanoTime());
        return Flux.just(v1, v2, v3);
    };
}
3.2.2. Function
A Function can also be written in either an imperative or a reactive way. Unlike Supplier and
Consumer, there are no special considerations for the implementor, other than understanding that,
when used within frameworks such as Spring Cloud Stream, a reactive function is invoked only once
to pass a reference to the stream (Flux or Mono), while an imperative function is invoked once per event.
3.2.3. Consumer
Consumer is a little bit special because it has a void return type, which implies blocking, at least
potentially. Most likely you will not need to write Consumer<Flux<?>>, but if you do need to do that,
remember to subscribe to the input flux.
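A minimal sketch of such a reactive Consumer:

@Bean
public Consumer<Flux<String>> sink() {
    // without subscribing, the incoming flux would never be consumed
    return flux -> flux.subscribe(value -> System.out.println("Received: " + value));
}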
This feature allows you to provide composition instruction in a declarative way using | (pipe) or ,
(comma) delimiter when providing spring.cloud.function.definition property.
For example
--spring.cloud.function.definition=uppercase|reverse
Spring Cloud Function also supports composing a Supplier with a Consumer or Function, as well as a
Function with a Consumer. What's important here is to understand the end product of such
definitions. Composing a Supplier with a Function still results in a Supplier, while composing a
Supplier with a Consumer effectively renders a Runnable. Following the same logic, composing a
Function with a Consumer results in a Consumer.
And, of course, you cannot compose uncomposable combinations, such as a Consumer followed by a
Function or a Consumer followed by a Supplier.
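For example, under these rules, composing a Supplier with a Function still yields a Supplier (a minimal sketch; the bean names are illustrative):

@Bean
public Supplier<String> greet() {
    return () -> "hello";
}

@Bean
public Function<String, String> uppercase() {
    return String::toUpperCase;
}

// spring.cloud.function.definition=greet|uppercase behaves as a Supplier<String>
// that yields "HELLO"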
The RoutingFunction is registered in FunctionCatalog under the name functionRouter. For simplicity
and consistency you can also refer to RoutingFunction.FUNCTION_NAME constant.
This function has the following signature:
MessageRoutingCallback
The MessageRoutingCallback is a strategy to assist with determining the name of the route-to
function definition.
public interface MessageRoutingCallback {

    /**
     * Determines the name of the function definition to route the incoming {@link Message}.
     *
     * @param message instance of the incoming {@link Message}
     * @return the name of the route-to function definition
     */
    String functionDefinition(Message<?> message);
}
All you need to do is implement it and register it as a bean. The framework will automatically
pick it up and use it for routing decisions. For example:
@Bean
public MessageRoutingCallback customRouter() {
    return new MessageRoutingCallback() {
        @Override
        public String functionDefinition(Message<?> message) {
            return (String) message.getHeaders().get("func_name");
        }
    };
}
In the preceding example you can see a very simple implementation of MessageRoutingCallback
which determines the function definition from func_name header of the incoming Message.
Message Headers
If the input argument is of type Message<?>, you can communicate routing instruction by setting one
of spring.cloud.function.definition or spring.cloud.function.routing-expression Message headers.
For more static cases you can use spring.cloud.function.definition header which allows you to
provide the name of a single function (e.g., …definition=foo) or a composition instruction (e.g., …
definition=foo|bar|baz). For more dynamic cases you can use spring.cloud.function.routing-
expression header which allows you to use Spring Expression Language (SpEL) and provide SpEL
expression that should resolve into definition of a function (as described above).
SpEL evaluation context’s root object is the actual input argument, so in the case of
Message<?> you can construct expression that has access to both payload and
headers (e.g., spring.cloud.function.routing-expression=headers.function_name).
Application Properties
1. MessageRoutingCallback (if the function is imperative, this takes over regardless of whether
anything else is defined)
Function Filtering
Filtering is a type of routing where there are only two paths: 'go' or 'discard'. In terms of functions,
it means you only want to invoke a certain function if some condition returns 'true'; otherwise, you
want to discard the input. However, when it comes to discarding input, there are many
interpretations of what that could mean in the context of your application. For example, you may
want to log it, or you may want to maintain a counter of discarded messages. You may also want to
do nothing at all. Because of these different paths, we do not provide a general configuration
option for how to deal with discarded messages. Instead, we simply recommend defining a simple
Consumer that signifies the 'discard' path:
@Bean
public Consumer<?> devNull() {
    // log, count, or simply drop the input
    return value -> { };
}
Now you can have routing expression that really only has two paths effectively becoming a filter.
For example:
--spring.cloud.function.routing-expression=headers.contentType.toString().equals('text/plain') ? 'echo' : 'devNull'
Every message that does not fit the criteria to go to the 'echo' function goes to 'devNull', where you
can simply do nothing with it. The Consumer<?> signature also ensures that no type conversion is
attempted, resulting in almost no execution overhead.
When dealing with reactive inputs (e.g., Publisher), routing instructions must only
be provided via Function properties. This is due to the nature of the reactive
functions which are invoked only once to pass a Publisher and the rest is handled
by the reactor, hence we can not access and/or rely on the routing instructions
communicated via individual values (e.g., Message).
You can always accomplish it via Function Composition. Such approach provides several benefits:
• It allows you to isolate this non-functional concern into a separate function which you can
compose with the business function as function definition.
• It provides you with complete freedom (and danger) as to what you can modify before incoming
message reaches the actual business function.
@Bean
public Function<Message<?>, Message<?>> enrich() {
    return message -> MessageBuilder.fromMessage(message).setHeader("foo", "bar").build();
}

@Bean
public Function<Message<?>, Message<?>> myBusinessFunction() {
    // do whatever the business function needs to do; a pass-through is shown here
    return message -> message;
}
And then compose your function by providing the following function definition
enrich|myBusinessFunction.
While the described approach is the most flexible, it is also the most involved as it requires you to
write some code, make it a bean or manually register it as a function before you can compose it
with the business function as you can see from the preceding example.
But what if modifications (enrichments) you are trying to make are trivial as they are in the
preceding example? Is there a simpler and more dynamic and configurable mechanism to
accomplish the same?
Since version 3.1.3, the framework allows you to provide SpEL expression to enrich individual
message headers. Let’s look at one of the tests as the example.
@Test
public void testInputHeaderMappingPropertyWithoutIndex() throws Exception {
    try (ConfigurableApplicationContext context = new SpringApplicationBuilder(
            SampleFunctionConfiguration.class).web(WebApplicationType.NONE).run(
                "--spring.cloud.function.configuration.echo.input-header-mapping-expression.key1='hello1'",
                "--spring.cloud.function.configuration.echo.input-header-mapping-expression.key2='hello2'",
                "--spring.cloud.function.configuration.echo.input-header-mapping-expression.foo=headers.contentType")) {
Here you see a property called input-header-mapping-expression preceded by the name of the
function (i.e., echo) and followed by the name of the message header key you want to set and the
value as SpEL expression. The first two expressions (for 'key1' and 'key2') are literal SpEL
expressions enclosed in single quotes, effectively setting 'key1' to value hello1 and 'key2' to value
hello2. The third one will map Message header ‘foo’ to the value of the current ‘contentType’
header.
If, for whatever reason, the provided expression fails to evaluate, the execution of
the function proceeds as if nothing had happened. However, you will see a WARN
message in your logs informing you about it.
In the event you are dealing with functions that have multiple inputs (next section), you can use
index immediately after input-header-mapping-expression
--spring.cloud.function.configuration.echo.input-header-mapping-expression[0].key1='hello1'
--spring.cloud.function.configuration.echo.input-header-mapping-expression[1].key2='hello2'
Let’s look at an example of such a function (full implementation details are available here),
@Bean
public Function<Flux<Integer>, Tuple2<Flux<String>, Flux<String>>> organise() {
    return flux -> ...;
}
Given that Project Reactor is a core dependency of SCF, we are using its Tuple library. Tuples give us
a unique advantage by communicating to us both cardinality and type information. Both are
extremely important in the context of SCSt. Cardinality lets us know how many input and output
bindings need to be created and bound to the corresponding inputs and outputs of a function.
Awareness of the type information ensures proper type conversion.
Also, this is where the ‘index’ part of the naming convention for binding names comes into play,
since, in this function, the two output binding names are organise-out-0 and organise-out-1.
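For illustration, here is a minimal sketch of a possible implementation of such a function; the even/odd splitting logic is an assumption for the example, not the actual sample's logic:

@Bean
public Function<Flux<Integer>, Tuple2<Flux<String>, Flux<String>>> organise() {
    return flux -> {
        // share the incoming stream between the two outputs
        Flux<Integer> shared = flux.publish().autoConnect(2);
        Flux<String> evens = shared.filter(i -> i % 2 == 0).map(i -> "EVEN: " + i);
        Flux<String> odds = shared.filter(i -> i % 2 != 0).map(i -> "ODD: " + i);
        return Tuples.of(evens, odds); // reactor.util.function.Tuples
    };
}

The first element of the tuple feeds the organise-out-0 binding and the second feeds organise-out-1.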
To better understand the mechanics and the necessity behind content-type negotiation, we take a
look at a very simple use case by using the following function as an example:
@Bean
public Function<Person, String> personFunction() {..}
The function shown in the preceding example expects a Person object as an argument and produces
a String type as an output. If such a function is invoked with an object of type Person, then all works
fine. But typically a function plays the role of a handler for incoming data, which most often comes
in a raw format such as byte[], JSON String etc. In order for the framework to succeed in passing the
incoming data as an argument to this function, it has to somehow transform the incoming data to a
Person type.
Spring Cloud Function relies on two mechanisms native to Spring to accomplish that:
1. MessageConverter - to convert from incoming Message data to a type declared by the function.
2. ConversionService - to convert from incoming non-Message data to a type declared by the function.
This means that depending on the type of the raw data (Message or non-Message), Spring Cloud
Function will apply one or the other mechanism.
For most cases, when dealing with functions that are invoked as part of some other request (e.g.,
HTTP, Messaging etc.), the framework relies on MessageConverters, since such requests are already
converted to a Spring Message. In other words, the framework locates and applies the appropriate
MessageConverter. To accomplish that, the framework needs some instructions from the user. One of
these instructions is already provided by the signature of the function itself (the Person type).
Consequently, in theory, that should be (and, in some cases, is) enough. However, for the majority of
use cases, in order to select the appropriate MessageConverter, the framework needs an additional
piece of information. That missing piece is the contentType header.
Such a header usually comes as part of the Message, where it is injected by the corresponding adapter
that created the Message in the first place. For example, an HTTP POST request will have its content-
type HTTP header copied to the contentType header of the Message.
For cases when such a header does not exist, the framework relies on the default content type,
application/json.
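As a sketch, this is what an incoming message with an explicit contentType header could look like if you were to construct one yourself (the payload and header values are purely illustrative):

Message<byte[]> input = MessageBuilder
        .withPayload("{\"name\":\"Bob\"}".getBytes(StandardCharsets.UTF_8))
        .setHeader(MessageHeaders.CONTENT_TYPE, "application/json")
        .build();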
As mentioned earlier, for the framework to select the appropriate MessageConverter, it requires
argument type and, optionally, content type information. The logic for selecting the appropriate
MessageConverter resides with the argument resolvers which trigger right before the invocation of
the user-defined function (which is when the actual argument type is known to the framework). If
the argument type does not match the type of the current payload, the framework delegates to the
stack of the pre-configured MessageConverters to see if any one of them can convert the payload.
The combination of contentType and argument type is the mechanism by which the framework
determines whether a message can be converted to a target type by locating the appropriate
MessageConverter. If no appropriate MessageConverter is found, an exception is thrown, which you
can handle by adding a custom MessageConverter (see User-defined Message Converters).
Do not expect Message to be converted into some other type based only on the
contentType. Remember that the contentType is complementary to the target type. It
is a hint, which MessageConverter may or may not take into consideration.
It is important to understand the contract of these methods and their usage, specifically in the
context of Spring Cloud Stream.
The fromMessage method converts an incoming Message to an argument type. The payload of the
Message could be any type, and it is up to the actual implementation of the MessageConverter to
support multiple types.
As mentioned earlier, the framework already provides a stack of MessageConverters to handle most
common use cases. The following list describes the provided MessageConverters, in order of
precedence (the first MessageConverter that works is used):
1. JsonMessageConverter: Supports conversion of the payload of the Message to/from POJO for cases
when contentType is application/json using Jackson or Gson libraries (DEFAULT).
When no appropriate converter is found, the framework throws an exception. When that happens,
you should check your code and configuration and ensure you did not miss anything (that is,
ensure that you provided a contentType by using a binding or a header). However, most likely, you
found some uncommon case (such as a custom contentType perhaps) and the current stack of
provided MessageConverters does not know how to convert. If that is the case, you can add a custom
MessageConverter. See User-defined Message Converters.
Spring Cloud Function exposes a mechanism to define and register additional MessageConverters. To
use it, implement org.springframework.messaging.converter.MessageConverter and configure it as a
@Bean. It is then appended to the existing stack of MessageConverters.
The following example shows how to create a message converter bean to support a new content
type called application/bar:
@SpringBootApplication
public static class SinkApplication {
    ...
    @Bean
    public MessageConverter customMessageConverter() {
        return new MyCustomMessageConverter();
    }
}

public class MyCustomMessageConverter extends AbstractMessageConverter {

    public MyCustomMessageConverter() {
        super(new MimeType("application", "bar"));
    }

    @Override
    protected boolean supports(Class<?> clazz) {
        return (Bar.class.equals(clazz));
    }

    @Override
    protected Object convertFromInternal(Message<?> message, Class<?> targetClass, Object conversionHint) {
        Object payload = message.getPayload();
        return (payload instanceof Bar ? payload : new Bar((byte[]) payload));
    }
}
3.7.5. Note on JSON options
In Spring Cloud Function we support both Jackson and Gson mechanisms to deal with JSON. For
your benefit, we have abstracted them under org.springframework.cloud.function.json.JsonMapper,
which is itself aware of the two mechanisms and will use the one selected by you or fall back on the
default rules. The default rules are as follows:
• Whichever library is on the classpath is the mechanism that is going to be used. So if you
have com.fasterxml.jackson.* on the classpath, Jackson is going to be used, and if you have
com.google.code.gson, then Gson will be used.
• If you have both, then Gson will be the default, or you can set the spring.cloud.function.preferred-
json-mapper property with either of two values: gson or jackson.
That said, the type conversion is usually transparent to the developer. However, given that
org.springframework.cloud.function.json.JsonMapper is also registered as a bean, you can easily
inject it into your code if needed.
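For example, a minimal sketch of injecting JsonMapper into a function bean (the Person type and the toJson usage here are assumptions for illustration):

@Bean
public Function<Person, String> asJson(JsonMapper mapper) {
    // JsonMapper delegates to Jackson or Gson per the rules described above
    return person -> new String(mapper.toJson(person));
}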
3.8. Kotlin Lambda support
@Bean
open fun kotlinSupplier(): () -> String {
return { "Hello from Kotlin" }
}
@Bean
open fun kotlinFunction(): (String) -> String {
return { it.toUpperCase() }
}
@Bean
open fun kotlinConsumer(): (String) -> Unit {
return { println(it) }
}
The above represents Kotlin lambdas configured as Spring beans. The signature of each maps to the
Java equivalent of Supplier, Function and Consumer, respectively, and these are thus signatures
supported/recognized by the framework. While the mechanics of Kotlin-to-Java mapping are outside
the scope of this documentation, it is important to understand that the same rules for signature
transformation outlined in the "Java 8 function support" section are applied here as well.
To enable Kotlin support, all you need to do is add the Kotlin SDK libraries to the classpath, which
will trigger the appropriate autoconfiguration and supporting classes.
4. Standalone Web Applications
Functions can be automatically exported as HTTP endpoints.
With the web configurations activated your app will have an MVC endpoint (on "/" by default, but
configurable with spring.cloud.function.web.path) that can be used to access the functions in the
application context where function name becomes part of the URL path. The supported content
types are plain text and JSON.
Method | Path        | Request                           | Response                                                | Status
POST   | /{consumer} | JSON object or text               | Mirrors input and pushes request body into consumer     | 202 Accepted
POST   | /{consumer} | JSON array or text with new lines | Mirrors input and pushes body into consumer one by one  | 202 Accepted
As the table above shows, the behaviour of the endpoint depends on the method and also on the type
of the incoming request data. When the incoming data is single valued, and the target function is
declared as obviously single valued (i.e. not returning a collection or Flux), then the response will
also contain a single value. For multi-valued responses the client can ask for a server-sent event
stream by sending `Accept: text/event-stream`.
Functions and consumers that are declared with input and output in Message<?> will see the request
headers on the input messages, and the output message headers will be converted to HTTP headers.
When POSTing text the response format might be different with Spring Boot 2.0 and older versions,
depending on the content negotiation (provide content type and accept headers for the best
results).
See Testing Functional Applications to see the details and example on how to test such application.
Composite functions can be addressed using pipes or commas to separate function names (pipes
are legal in URL paths, but a bit awkward to type on the command line). For example, curl -H
"Content-Type: text/plain" localhost:8080/uppercase,reverse -d hello.
For cases where there is more than a single function in the catalog, each function will be exported and
mapped with the function name being part of the path (e.g., localhost:8080/uppercase). In this scenario
you can still map a specific function or function composition to the root path by providing the
spring.cloud.function.definition property.
For example,
--spring.cloud.function.definition=foo|bar
The above property will compose the 'foo' and 'bar' functions and map the composed function to the "/"
path.
For example,
--spring.cloud.function.definition=foo;bar
This will only export function foo and function bar regardless of how many functions are available in
the catalog (e.g., localhost:8080/foo).
--spring.cloud.function.definition=foo|bar;baz
This will only export the function composition foo|bar and the function baz regardless of how many
functions are available in the catalog (e.g., localhost:8080/foo,bar).
The standard entry point is to add spring-cloud-function-deployer to the classpath. Once on the
classpath, the deployer kicks in and looks for some configuration to tell it where to find the function jar.
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-deployer</artifactId>
<version>${spring.cloud.function.version}</version>
</dependency>
Here is an example of deploying a JAR which contains an 'uppercase' function and invoking it:
@SpringBootApplication
public class DeployFunctionDemo {
    . . .
}
And here is an example using a Maven URI (taken from one of the tests in FunctionDeployerTests):
@SpringBootApplication
public class DeployFunctionDemo {
    . . .
    assertThat(function.apply("bob")).isEqualTo("BOB");
}
Keep in mind that Maven resources such as local and remote repositories, user, password and more
are resolved using the default MavenProperties, which effectively uses local defaults and will work for
the majority of cases. However, if you need to customize, you can simply provide a bean of type
MavenProperties where you can set additional properties (see the example below).
@Bean
public MavenProperties mavenProperties() {
MavenProperties properties = new MavenProperties();
properties.setLocalRepository("target/it/");
return properties;
}
6.1. Supported Packaging Scenarios
Currently Spring Cloud Function supports several packaging scenarios to give you the most
flexibility when it comes to deploying functions.
Simple JAR
This packaging option implies no dependency on anything related to Spring. For example, consider
that such a JAR contains the following class:
package function.example;
. . .
public class UpperCaseFunction implements Function<String, String> {
@Override
public String apply(String value) {
return value.toUpperCase();
}
}
All you need to do is specify the location and function-class properties when deploying such a package:
--spring.cloud.function.location=target/it/simplestjar/target/simplestjar-1.0.0.RELEASE.jar
--spring.cloud.function.function-class=function.example.UpperCaseFunction
It’s conceivable that in some cases you might want to package multiple functions together. For such
scenarios you can use the spring.cloud.function.function-class property to list several classes,
delimiting them by ;.
For example,
--spring.cloud.function.function-class=function.example.UpperCaseFunction;function.example.ReverseFunction
Here we are identifying two functions to deploy, which we can now access in the function catalog by
name (e.g., catalog.lookup("reverseFunction");).
For more details please reference the complete sample available here. You can also find a
corresponding test in FunctionDeployerTests.
Spring Boot JAR
This packaging option implies there is a dependency on Spring Boot and that the JAR was generated
as a Spring Boot JAR. That said, given that the deployed JAR runs in an isolated class loader, there
will not be any version conflict with the Spring Boot version used by the actual deployer. For
example, consider that such a JAR contains the following class (which could have some additional
Spring dependencies, provided Spring/Spring Boot is on the classpath):
package function.example;
. . .
public class UpperCaseFunction implements Function<String, String> {
@Override
public String apply(String value) {
return value.toUpperCase();
}
}
As before, all you need to do is specify the location and function-class properties when deploying
such a package:
--spring.cloud.function.location=target/it/simplestjar/target/simplestjar-1.0.0.RELEASE.jar
--spring.cloud.function.function-class=function.example.UpperCaseFunction
For more details please reference the complete sample available here. You can also find a
corresponding test in FunctionDeployerTests.
Spring Boot Application
This packaging option implies your JAR is a complete standalone Spring Boot application with
functions as managed Spring beans. As before, there is an obvious assumption that there is a
dependency on Spring Boot and that the JAR was generated as a Spring Boot JAR. That said, given that
the deployed JAR runs in an isolated class loader, there will not be any version conflict with the
Spring Boot version used by the actual deployer. For example, consider that such a JAR contains the
following class:
package function.example;
. . .
@SpringBootApplication
public class SimpleFunctionAppApplication {
@Bean
public Function<String, String> uppercase() {
return value -> value.toUpperCase();
}
}
Given that we’re effectively dealing with another Spring Application context and that functions are
Spring managed beans, in addition to the location property we also specify the definition property
instead of function-class.
--spring.cloud.function.location=target/it/bootapp/target/bootapp-1.0.0.RELEASE-exec.jar
--spring.cloud.function.definition=uppercase
For more details please reference the complete sample available here. You can also find a
corresponding test in FunctionDeployerTests.
This particular deployment option may or may not have Spring Cloud Function on
its classpath. From the deployer perspective this doesn’t matter.
@SpringBootApplication
public class DemoApplication {

    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }
}
Now for the functional beans: the user application code can be recast into "functional" form, like
this:
@SpringBootConfiguration
public class DemoApplication implements ApplicationContextInitializer<GenericApplicationContext> {

    public static void main(String[] args) {
        FunctionalSpringApplication.run(DemoApplication.class, args);
    }

    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }

    @Override
    public void initialize(GenericApplicationContext context) {
        context.registerBean("demo", FunctionRegistration.class,
                () -> new FunctionRegistration<>(uppercase())
                        .type(FunctionType.from(String.class).to(String.class)));
    }
}
• The SpringApplication from Spring Boot has been replaced with a FunctionalSpringApplication
from Spring Cloud Function (it’s a subclass).
The business logic beans that you register in a Spring Cloud Function app are of type
FunctionRegistration. This is a wrapper that contains both the function and information about the
input and output types. In the @Bean form of the application that information can be derived
reflectively, but in a functional bean registration some of it is lost unless we use a
FunctionRegistration.
@SpringBootConfiguration
public class DemoApplication implements Function<String, String>,
        ApplicationContextInitializer<GenericApplicationContext> {
    . . .
    @Override
    public String apply(String value) {
        return value.toUpperCase();
    }
}
It would also work if you add a separate, standalone class of type Function and register it with the
SpringApplication using an alternative form of the run() method. The main thing is that the generic
type information is available at runtime through the class declaration.
@Component
public class CustomFunction implements Function<Flux<Foo>, Flux<Bar>> {

    @Override
    public Flux<Bar> apply(Flux<Foo> flux) {
        return flux.map(foo -> new Bar("This is a Bar object from Foo value: " + foo.getValue()));
    }
}

And this is how you would register it:

@Override
public void initialize(GenericApplicationContext context) {
    context.registerBean("function", FunctionRegistration.class,
            () -> new FunctionRegistration<>(new CustomFunction()).type(CustomFunction.class));
}
@SpringBootApplication
public class SampleFunctionApplication {
@Bean
public Function<String, String> uppercase() {
return v -> v.toUpperCase();
}
}
Here is an integration test for the HTTP server wrapping this application:
@SpringBootTest(classes = SampleFunctionApplication.class,
webEnvironment = WebEnvironment.RANDOM_PORT)
public class WebFunctionTests {
@Autowired
private TestRestTemplate rest;
@Test
public void test() throws Exception {
ResponseEntity<String> result = this.rest.exchange(
RequestEntity.post(new URI("/uppercase")).body("hello"), String.class);
System.out.println(result.getBody());
}
}
@FunctionalSpringBootTest
public class WebFunctionTests {

    @Autowired
    private TestRestTemplate rest;

    @Test
    public void test() throws Exception {
        ResponseEntity<String> result = this.rest.exchange(
                RequestEntity.post(new URI("/uppercase")).body("hello"), String.class);
        System.out.println(result.getBody());
    }
}
This test is almost identical to the one you would write for the @Bean version of the same app - the
only difference is the @FunctionalSpringBootTest annotation, instead of the regular @SpringBootTest.
All the other pieces, like the @Autowired TestRestTemplate, are standard Spring Boot features.
And to help with correct dependencies, here is an excerpt from the POM:
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.2.2.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
. . . .
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-web</artifactId>
<version>3.0.1.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
Or you could write a test for a non-HTTP app using just the FunctionCatalog. For example:
@RunWith(SpringRunner.class)
@FunctionalSpringBootTest
public class FunctionalTests {
@Autowired
private FunctionCatalog catalog;
@Test
public void words() throws Exception {
Function<String, String> function = catalog.lookup(Function.class, "uppercase");
        assertThat(function.apply("hello")).isEqualTo("HELLO");
    }
}
9. Dynamic Compilation
There is a sample app that uses the function compiler to create a function from a configuration
property. The vanilla "function-sample" also has that feature. And there are some scripts that you
can run to see the compilation happening at run time. To run these examples, change into the
scripts directory:
cd scripts
./function-registry.sh
Register a Function:
./registerFunction.sh -n uppercase -f "f->f.map(s->s.toString().toUpperCase())"
Register a Supplier:
./registerSupplier.sh -n words -f "()->Flux.just(\"foo\",\"bar\")"
Register a Consumer:
./registerConsumer.sh -n print -t String -f "System.out::println"
Then start the source (supplier), processor (function), and sink (consumer) apps (in reverse order):
The output will appear in the console of the sink app (one message per second, converted to
uppercase):
MESSAGE-0
MESSAGE-1
MESSAGE-2
MESSAGE-3
MESSAGE-4
MESSAGE-5
MESSAGE-6
MESSAGE-7
MESSAGE-8
MESSAGE-9
...
The details of how to get started with AWS Lambda are out of scope of this document, so the
expectation is that the user has some familiarity with AWS and AWS Lambda and wants to learn what
additional value Spring provides.
One of the goals of the Spring Cloud Function framework is to provide the necessary infrastructure
elements to enable a simple function application to interact in a certain way in a particular
environment. A simple function application (in the context of Spring) is an application that contains
beans of type Supplier, Function or Consumer. So, with AWS it means that a simple function bean
should somehow be recognised and executed in the AWS Lambda environment.
@SpringBootApplication
public class FunctionConfiguration {

    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }
}
It shows a complete Spring Boot application with a function bean defined in it. What’s interesting is
that on the surface this is just another boot app, but in the context of AWS Adapter it is also a
perfectly valid AWS Lambda application. No other code or configuration is required. All you need to
do is package it and deploy it, so let’s look how we can do that.
To make things simpler we’ve provided a sample project ready to be built and deployed and you
can access it here.
You simply execute ./mvnw clean package to generate the JAR file. All the necessary Maven plugins
have already been set up to generate an appropriate AWS deployable JAR file. (You can read more
details about the JAR layout in Notes on JAR Layout.)
Then you have to upload the JAR file (via AWS dashboard or AWS CLI) to AWS.
That is all. Save and execute the function with some sample data, which for this function is expected
to be a String, which the function will uppercase and return back.
The adapter has a couple of generic request handlers that you can use. The most generic (and the
one we used in the Getting Started section) is
org.springframework.cloud.function.adapter.aws.FunctionInvoker, which is the implementation of
AWS’s RequestStreamHandler. The user doesn’t need to do anything other than specify it as 'handler' on
the AWS dashboard when deploying the function. It will handle most of the cases, including Kinesis,
streaming etc.
If your app has more than one @Bean of type Function etc., then you can choose the one to use by
configuring the spring.cloud.function.definition property or environment variable. The functions are
extracted from the Spring Cloud FunctionCatalog. In the event you don’t specify
spring.cloud.function.definition, the framework will attempt to find a default, following a search
order where it searches first for Function, then Consumer and finally Supplier.
One of the core features of Spring Cloud Function is routing - an ability to have one special function
to delegate to other functions based on the user provided routing instructions.
In AWS Lambda environment this feature provides one additional benefit, as it allows you to bind a
single function (Routing Function) as AWS Lambda and thus a single HTTP endpoint for API
Gateway. So in the end you only manage one function and one endpoint, while benefiting from
many functions that can be part of your application.
More details are available in the provided sample, yet a few general things are worth mentioning.
Routing capabilities will be enabled by default whenever there is more than one function in your
application, as org.springframework.cloud.function.adapter.aws.FunctionInvoker can not determine
which function to bind as AWS Lambda, so it defaults to RoutingFunction. This means that all you
need to do is provide routing instructions, which you can do using several mechanisms (see the sample
for more details).
Also, note that since AWS does not allow dots (.) or hyphens (-) in the name of an environment
variable, you can benefit from Boot support and simply substitute dots with underscores and
hyphens with camel case. So, for example, spring.cloud.function.definition becomes
spring_cloud_function_definition and spring.cloud.function.routing-expression becomes
spring_cloud_function_routingExpression.
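For example, a sketch of what a routing instruction could look like as an AWS environment variable; the header name used in the SpEL expression is an assumption for illustration:

spring_cloud_function_routingExpression=headers['function_name']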
10.1.4. Notes on JAR Layout
You don’t need the Spring Cloud Function Web or Stream adapter at runtime in Lambda, so you
might need to exclude those before you create the JAR you send to AWS. A Lambda application has
to be shaded, but a Spring Boot standalone application does not, so you can run the same app using
2 separate jars (as per the sample). The sample app creates 2 jar files, one with an aws classifier for
deploying in Lambda, and one executable (thin) jar that includes spring-cloud-function-web at
runtime. Spring Cloud Function will try and locate a "main class" for you from the JAR file manifest,
using the Start-Class attribute (which will be added for you by the Spring Boot tooling if you use
the starter parent). If there is no Start-Class in your manifest you can use an environment variable
or system property MAIN_CLASS when you deploy the function to AWS.
If you are not using the functional bean definitions but relying on Spring Boot’s auto-configuration,
then additional transformers must be configured as part of the maven-shade-plugin execution.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</dependency>
</dependencies>
<configuration>
<createDependencyReducedPom>false</createDependencyReducedPom>
<shadedArtifactAttached>true</shadedArtifactAttached>
<shadedClassifierName>aws</shadedClassifierName>
<transformers>
<transformer
implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
<resource>META-INF/spring.handlers</resource>
</transformer>
<transformer
implementation="org.springframework.boot.maven.PropertiesMergingResourceTransformer">
<resource>META-INF/spring.factories</resource>
</transformer>
<transformer
implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
<resource>META-INF/spring.schemas</resource>
</transformer>
</transformers>
</configuration>
</plugin>
In order to run Spring Cloud Function applications on AWS Lambda, you can leverage Maven or
Gradle plugins offered by the cloud platform provider.
Maven
In order to use the adapter plugin for Maven, add the plugin dependency to your pom.xml file:
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-adapter-aws</artifactId>
</dependency>
</dependencies>
As pointed out in the Notes on JAR Layout, you will need a shaded jar in order to upload it to AWS
Lambda. You can use the Maven Shade Plugin for that. The example of the setup can be found
above.
You can use the Spring Boot Maven Plugin to generate the thin jar.
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<dependencies>
<dependency>
<groupId>org.springframework.boot.experimental</groupId>
<artifactId>spring-boot-thin-layout</artifactId>
<version>${wrapper.version}</version>
</dependency>
</dependencies>
</plugin>
You can find the entire sample pom.xml file for deploying Spring Cloud Function applications to AWS
Lambda with Maven here.
Gradle
In order to use the adapter plugin for Gradle, add the dependency to your build.gradle file:
dependencies {
compile("org.springframework.cloud:spring-cloud-function-adapter-aws:${version}")
}
As pointed out in Notes on JAR Layout, you will need a shaded jar in order to upload it to AWS
Lambda. You can use the Gradle Shadow Plugin for that:
buildscript {
dependencies {
classpath "com.github.jengelman.gradle.plugins:shadow:${shadowPluginVersion}"
}
}
apply plugin: 'com.github.johnrengelman.shadow'
assemble.dependsOn = [shadowJar]
import com.github.jengelman.gradle.plugins.shadow.transformers.*
shadowJar {
classifier = 'aws'
dependencies {
exclude(
dependency("org.springframework.cloud:spring-cloud-function-
web:${springCloudFunctionVersion}"))
}
// Required for Spring
mergeServiceFiles()
append 'META-INF/spring.handlers'
append 'META-INF/spring.schemas'
append 'META-INF/spring.tooling'
transform(PropertiesFileTransformer) {
paths = ['META-INF/spring.factories']
mergeStrategy = "append"
}
}
You can use the Spring Boot Gradle Plugin and Spring Boot Thin Gradle Plugin to generate the thin
jar.
buildscript {
dependencies {
classpath("org.springframework.boot.experimental:spring-boot-thin-gradle-
plugin:${wrapperVersion}")
classpath("org.springframework.boot:spring-boot-gradle-
plugin:${springBootVersion}")
}
}
apply plugin: 'org.springframework.boot'
apply plugin: 'org.springframework.boot.experimental.thin-launcher'
assemble.dependsOn = [thinJar]
You can find the entire sample build.gradle file for deploying Spring Cloud Function applications to
AWS Lambda with Gradle here.
10.1.6. Upload
Build the sample under spring-cloud-function-samples/function-sample-aws and upload the -aws jar
file to Lambda. The handler can be example.Handler or
org.springframework.cloud.function.adapter.aws.SpringBootStreamHandler (FQN of the class, not a
method reference, although Lambda does accept method references).
The input type for the function in the AWS sample is a Foo with a single property called "value". So
you would need this to test it:
{
"value": "test"
}
Spring Cloud Function will attempt to transparently handle type conversion between the raw input
stream and types declared by your function.
For example, if your function signature is Function<Foo, Bar>, we will attempt to convert the
incoming stream event to an instance of Foo.
In the event the type is not known or can not be determined (e.g., Function<?, ?>), we will attempt to
convert an incoming stream event to a generic Map.
Raw Input
There are times when you may want to have access to the raw input. In this case all you need to do is
declare your function signature to accept InputStream, for example Function<InputStream, ?>. In
that case we will not attempt any conversion and will pass the raw input directly to the function.
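A minimal sketch of such a function follows; the uppercasing body is just an illustration:

@Bean
public Function<InputStream, String> raw() {
    return is -> {
        try {
            // no conversion is applied; the function reads the raw bytes itself
            return new String(is.readAllBytes(), StandardCharsets.UTF_8).toUpperCase();
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    };
}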
10.2. Microsoft Azure
The Azure adapter bootstraps a Spring Cloud Function context and channels function calls from the
Azure framework into the user functions, using Spring Boot configuration where necessary. Azure
Functions has quite a unique, but invasive programming model, involving annotations in user code
that are specific to the platform. The easiest way to use it with Spring Cloud is to extend a base class
and write a method in it with the @FunctionName annotation which delegates to a base class method.
This project provides an adapter layer for a Spring Cloud Function application onto Azure. You can
write an app with a single @Bean of type Function and it will be deployable in Azure if you get the
JAR file laid out right.
Example:
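As a sketch, assuming the adapter’s AzureSpringBootRequestHandler base class and a plain HTTP trigger (the class name and trigger settings are assumptions for illustration):

public class UppercaseHandler extends AzureSpringBootRequestHandler<String, String> {

    @FunctionName("uppercase")
    public String execute(
            @HttpTrigger(name = "req", methods = {HttpMethod.POST},
                    authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
            ExecutionContext context) {
        // handleRequest is inherited from the base class and invokes the function bean
        return handleRequest(request.getBody().orElse(""), context);
    }
}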
If your app has more than one @Bean of type Function etc. then you can choose the one to use by
configuring function.name. Or if you make the @FunctionName in the Azure handler method match the
function name it should work that way (also for function apps with multiple functions). The
functions are extracted from the Spring Cloud FunctionCatalog so the default function names are
the same as the bean names.
Sometimes there is a need to access the target execution context provided by the Azure runtime in the
form of com.microsoft.azure.functions.ExecutionContext. For example, one such need is logging,
so it can appear in the Azure console.
For that purpose we propagate ExecutionContext as a Message header under the executionContext name,
so all you need to do is have your function accept a Message and access this header. Spring Cloud
Function will also register ExecutionContext as a bean in the Application context, so it can be
injected into your function. For example:
@Bean
public Function<Message<Foo>, Bar> uppercase() {
    return message -> {
        // the header value must be cast, since message headers are untyped
        ExecutionContext targetContext = (ExecutionContext) message.getHeaders().get("executionContext");
        targetContext.getLogger().info("Invoking 'uppercase' on " + message.getPayload().getValue());
        return new Bar(message.getPayload().getValue().toUpperCase());
    };
}
With Message you will also have access to additional Azure meta information as Message headers
that come as part of your request.
You don’t need the Spring Cloud Function Web at runtime in Azure, so you can exclude this before
you create the JAR you deploy to Azure, but it won’t be used if you include it, so it doesn’t hurt to
leave it in. A function application on Azure is an archive generated by the Maven plugin. The
function lives in the JAR file generated by this project. The sample creates it as an executable jar,
using the thin layout, so that Azure can find the handler classes. If you prefer you can just use a
regular flat JAR file. The dependencies should not be included.
In order to run Spring Cloud Function applications on Microsoft Azure, you can leverage the Maven
plugin offered by the cloud platform provider.
In order to use the adapter plugin for Maven, add the plugin dependency to your pom.xml file:
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-adapter-azure</artifactId>
</dependency>
</dependencies>
Then, configure the plugin. You will need to provide Azure-specific configuration for your
application, specifying the resourceGroup, appName and other optional properties, and add the package
goal execution so that the function.json file required by Azure is generated for you. Full plugin
documentation can be found in the plugin repository.
<plugin>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-functions-maven-plugin</artifactId>
<configuration>
<resourceGroup>${functionResourceGroup}</resourceGroup>
<appName>${functionAppName}</appName>
</configuration>
<executions>
<execution>
<id>package-functions</id>
<goals>
<goal>package</goal>
</goals>
</execution>
</executions>
</plugin>
You will also have to ensure that the files to be scanned by the plugin can be found in the Azure
functions staging directory (see the plugin repository for more details on the staging directory and
its default location).
You can find the entire sample pom.xml file for deploying Spring Cloud Function applications to
Microsoft Azure with Maven here.
As of yet, only a Maven plugin is available; a Gradle plugin has not been created by the
cloud platform provider.
10.2.4. Build
You can build the sample and deploy it to Azure, using the Azure CLI and the Maven plugin:
$ az login
$ mvn azure-functions:deploy
On another terminal try this: curl <azure-function-url-from-the-log>/api/uppercase -d '{"value":
"hello foobar!"}'. Please ensure that you use the right URL for the function above. Alternatively
you can test the function in the Azure Dashboard UI (click on the function name, go to the right
hand side and click "Test" and to the bottom right, "Run").
The input type for the function in the Azure sample is a Foo with a single property called "value". So
you need this to test it with something like below:
{
"value": "foobar"
}
The Azure sample app is written in the "non-functional" style (using @Bean). The
functional style (with just Function or ApplicationContextInitializer) is much
faster on startup in Azure than the traditional @Bean style, so if you don’t need
@Beans (or @EnableAutoConfiguration) it’s a good choice. Warm starts are not
affected.
10.3. Google Cloud Functions
Start by adding the spring-cloud-function-adapter-gcp dependency to your project:
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-adapter-gcp</artifactId>
</dependency>
...
</dependencies>
In addition, add the spring-boot-maven-plugin which will build the JAR of the function to deploy.
Finally, add the Maven plugin provided as part of the Google Functions Framework for Java. This
allows you to test your functions locally via mvn function:run.
<plugin>
<groupId>com.google.cloud.functions</groupId>
<artifactId>function-maven-plugin</artifactId>
<version>0.9.1</version>
<configuration>
<functionTarget>org.springframework.cloud.function.adapter.gcp.GcfJarLauncher</functio
nTarget>
<port>8080</port>
</configuration>
</plugin>
A full example of a working pom.xml can be found in the Spring Cloud Functions GCP sample.
Google Cloud Functions supports deploying HTTP Functions, which are functions that are invoked
by HTTP request. The sections below describe instructions for deploying a Spring Cloud Function as
an HTTP Function.
Getting Started
Let’s start with a simple Spring Cloud Function:
@SpringBootApplication
public class CloudFunctionMain {

    public static void main(String[] args) {
        SpringApplication.run(CloudFunctionMain.class, args);
    }

    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }
}
Specify your configuration main class in resources/META-INF/MANIFEST.MF:
Main-Class: com.example.CloudFunctionMain
Then run the function locally. This is provided by the Google Cloud Functions function-maven-plugin
described in the project dependencies section.
mvn function:run
Deploy to GCP
In order to deploy your function to GCP, first package your application:
mvn package
If you added the custom spring-boot-maven-plugin configuration defined above, you should see the
resulting JAR in the target/deploy directory. This JAR is correctly formatted for deployment to Google
Cloud Functions.
Next, make sure that you have the Cloud SDK CLI installed.
From the project base directory run the following command to deploy.
gcloud functions deploy function-sample-gcp-http \
--entry-point org.springframework.cloud.function.adapter.gcp.GcfJarLauncher \
--runtime java11 \
--trigger-http \
--source target/deploy \
--memory 512MB
Google Cloud Functions also supports deploying Background Functions which are invoked
indirectly in response to an event, such as a message on a Cloud Pub/Sub topic, a change in a Cloud
Storage bucket, or a Firebase event.
The sections below describe the process for writing a Cloud Pub/Sub topic background function.
However, there are a number of different event types that can trigger a background function to
execute which are not discussed here; these are described in the Background Function triggers
documentation.
Getting Started
Let’s start with a simple Spring Cloud Function which will run as a GCF background function:
@SpringBootApplication
public class BackgroundFunctionMain {
@Bean
public Consumer<PubSubMessage> pubSubFunction() {
return message -> System.out.println("The Pub/Sub message data: " + message.getData());
}
}
In addition, create a PubSubMessage class in the project with the below definition. This class represents
the Pub/Sub event structure which gets passed to your function on a Pub/Sub topic event (getters and
setters omitted for brevity; the field set is based on the GCF Pub/Sub event shape):
public class PubSubMessage {
    private String data;
    private Map<String, String> attributes;
    private String messageId;
    private String publishTime;
}
Specify your configuration main class in resources/META-INF/MANIFEST.MF:
Main-Class: com.example.BackgroundFunctionMain
Then run the function locally. This is provided by the Google Cloud Functions function-maven-plugin
described in the project dependencies section.
mvn function:run
Deploy to GCP
In order to deploy your background function to GCP, first package your application.
mvn package
If you added the custom spring-boot-maven-plugin configuration defined above, you should see the
resulting JAR in the target/deploy directory. This JAR is correctly formatted for deployment to Google
Cloud Functions.
Next, make sure that you have the Cloud SDK CLI installed.
From the project base directory run the following command to deploy (the topic name my-functions-topic is illustrative):
gcloud functions deploy function-sample-gcp-background \
--entry-point org.springframework.cloud.function.adapter.gcp.GcfJarLauncher \
--runtime java11 \
--trigger-topic my-functions-topic \
--source target/deploy \
--memory 512MB
Google Cloud Function will now invoke the function every time a message is published to the topic
specified by --trigger-topic.
For a walkthrough on testing and verifying your background function, see the instructions for
running the GCF Background Function sample.
• The function-sample-gcp-http is an HTTP Function which you can test locally and try deploying.
This project provides an API Gateway built on top of the Spring Ecosystem, including: Spring 5,
Spring Boot 2 and Project Reactor. Spring Cloud Gateway aims to provide a simple, yet effective way
to route to APIs and provide cross-cutting concerns to them, such as security, monitoring/metrics,
and resiliency.
If you include the starter, but you do not want the gateway to be enabled, set
spring.cloud.gateway.enabled=false.
Spring Cloud Gateway is built on Spring Boot 2.x, Spring WebFlux, and Project
Reactor. As a consequence, many of the familiar synchronous libraries (Spring
Data and Spring Security, for example) and patterns you know may not apply
when you use Spring Cloud Gateway. If you are unfamiliar with these projects, we
suggest you begin by reading their documentation to familiarize yourself with
some of the new concepts before working with Spring Cloud Gateway.
Spring Cloud Gateway requires the Netty runtime provided by Spring Boot and
Spring Webflux. It does not work in a traditional Servlet Container or when built
as a WAR.
2. Glossary
• Route: The basic building block of the gateway. It is defined by an ID, a destination URI, a
collection of predicates, and a collection of filters. A route is matched if the aggregate predicate
is true.
• Predicate: This is a Java 8 Function Predicate. The input type is a Spring Framework
ServerWebExchange. This lets you match on anything from the HTTP request, such as headers or
parameters.
• Filter: These are instances of GatewayFilter that have been constructed with a specific factory.
Here, you can modify requests and responses before or after sending the downstream request.
3. How It Works
The following diagram provides a high-level overview of how Spring Cloud Gateway works:
[Spring Cloud Gateway Diagram] | spring_cloud_gateway_diagram.png
Clients make requests to Spring Cloud Gateway. If the Gateway Handler Mapping determines that a
request matches a route, it is sent to the Gateway Web Handler. This handler runs the request
through a filter chain that is specific to the request. The reason the filters are divided by the dotted
line is that filters can run logic both before and after the proxy request is sent. All “pre” filter logic
is executed. Then the proxy request is made. After the proxy request is made, the “post” filter logic
is run.
URIs defined in routes without a port get default port values of 80 and 443 for the
HTTP and HTTPS URIs, respectively.
The name and argument names are listed, as code, in the first sentence or two of each section.
The arguments are typically listed in the order that would be needed for the shortcut configuration.
application.yml
spring:
cloud:
gateway:
routes:
- id: after_route
uri: https://example.org
predicates:
- Cookie=mycookie,mycookievalue
The previous sample defines the Cookie Route Predicate Factory with two arguments: the cookie
name, mycookie, and the value to match, mycookievalue.
spring:
cloud:
gateway:
routes:
- id: after_route
uri: https://example.org
predicates:
- name: Cookie
args:
name: mycookie
regexp: mycookievalue
This is the fully expanded form of the shortcut Cookie predicate configuration shown above.
Example 5. application.yml
spring:
cloud:
gateway:
routes:
- id: after_route
uri: https://example.org
predicates:
- After=2017-01-20T17:42:47.789-07:00[America/Denver]
This route matches any request made after Jan 20, 2017 17:42 Mountain Time (Denver).
Example 6. application.yml
spring:
cloud:
gateway:
routes:
- id: before_route
uri: https://example.org
predicates:
- Before=2017-01-20T17:42:47.789-07:00[America/Denver]
This route matches any request made before Jan 20, 2017 17:42 Mountain Time (Denver).
Example 7. application.yml
spring:
cloud:
gateway:
routes:
- id: between_route
uri: https://example.org
predicates:
- Between=2017-01-20T17:42:47.789-07:00[America/Denver], 2017-01-
21T17:42:47.789-07:00[America/Denver]
This route matches any request made after Jan 20, 2017 17:42 Mountain Time (Denver) and before
Jan 21, 2017 17:42 Mountain Time (Denver). This could be useful for maintenance windows.
spring:
cloud:
gateway:
routes:
- id: cookie_route
uri: https://example.org
predicates:
- Cookie=chocolate, ch.p
This route matches requests that have a cookie named chocolate whose value matches the ch.p
regular expression.
Example 9. application.yml
spring:
cloud:
gateway:
routes:
- id: header_route
uri: https://example.org
predicates:
- Header=X-Request-Id, \d+
This route matches if the request has a header named X-Request-Id whose value matches the \d+
regular expression (that is, it has a value of one or more digits).
spring:
cloud:
gateway:
routes:
- id: host_route
uri: https://example.org
predicates:
- Host=**.somehost.org,**.anotherhost.org
This route matches if the request has a Host header with a value of www.somehost.org or
beta.somehost.org or www.anotherhost.org.
Host patterns may also contain URI template variables (such as {sub}.somehost.org). This predicate
extracts the URI template variables (such as sub in that example) as a map of names and values and
places it in the ServerWebExchange.getAttributes() with a key defined in
ServerWebExchangeUtils.URI_TEMPLATE_VARIABLES_ATTRIBUTE. Those values are then
available for use by GatewayFilter factories.
spring:
cloud:
gateway:
routes:
- id: method_route
uri: https://example.org
predicates:
- Method=GET,POST
This route matches if the request method was a GET or a POST.
spring:
cloud:
gateway:
routes:
- id: path_route
uri: https://example.org
predicates:
- Path=/red/{segment},/blue/{segment}
This route matches if the request path was, for example: /red/1 or /red/1/ or /red/blue or
/blue/green.
If matchTrailingSlash is set to false, then request path /red/1/ will not be matched.
This predicate extracts the URI template variables (such as segment, defined in the preceding
example) as a map of names and values and places it in the ServerWebExchange.getAttributes() with
a key defined in ServerWebExchangeUtils.URI_TEMPLATE_VARIABLES_ATTRIBUTE. Those values are then
available for use by GatewayFilter factories.
A utility method (called get) is available to make access to these variables easier. The following
example shows how to use the get method:
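A minimal sketch, assuming the standard ServerWebExchangeUtils helper:

Map<String, String> uriVariables = ServerWebExchangeUtils.getUriTemplateVariables(exchange);
String segment = uriVariables.get("segment");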
spring:
cloud:
gateway:
routes:
- id: query_route
uri: https://example.org
predicates:
- Query=green
The preceding route matches if the request contained a green query parameter.
application.yml
spring:
cloud:
gateway:
routes:
- id: query_route
uri: https://example.org
predicates:
- Query=red, gree.
The preceding route matches if the request contained a red query parameter whose value matched
the gree. regexp, so green and greet would match.
spring:
cloud:
gateway:
routes:
- id: remoteaddr_route
uri: https://example.org
predicates:
- RemoteAddr=192.168.1.1/24
This route matches if the remote address of the request was, for example, 192.168.1.10.
5.11. The Weight Route Predicate Factory
The Weight route predicate factory takes two arguments: group and weight (an int). The weights are
calculated per group. The following example configures a weight route predicate:
spring:
cloud:
gateway:
routes:
- id: weight_high
uri: https://weighthigh.org
predicates:
- Weight=group1, 8
- id: weight_low
uri: https://weightlow.org
predicates:
- Weight=group1, 2
This route would forward ~80% of traffic to weighthigh.org and ~20% of traffic to weightlow.org.
By default, the RemoteAddr route predicate factory uses the remote address from the incoming
request. This may not match the actual client IP address if Spring Cloud Gateway sits behind a
proxy layer.
You can customize the way that the remote address is resolved by setting a custom
RemoteAddressResolver. Spring Cloud Gateway comes with one non-default remote address resolver
that is based off of the X-Forwarded-For header, XForwardedRemoteAddressResolver.
For example, given an X-Forwarded-For header value of 0.0.0.1, 0.0.0.2, 0.0.0.3, the maxTrustedIndex
setting yields the following resolved remote addresses:
maxTrustedIndex | result
1               | 0.0.0.3
2               | 0.0.0.2
3               | 0.0.0.1
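A sketch of constructing such a resolver; a maxTrustedIndex of 1 trusts only the last (infrastructure-appended) X-Forwarded-For entry:

RemoteAddressResolver resolver = XForwardedRemoteAddressResolver
        .maxTrustedIndex(1);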
The following example shows how to achieve the same configuration with Java:
...
.route("direct-route",
    r -> r.remoteAddr("10.1.1.1", "10.10.1.1/24")
        .uri("https://downstream1"))
.route("proxied-route",
    r -> r.remoteAddr(resolver, "10.10.1.1", "10.10.1.1/24")
        .uri("https://downstream2"))
...
6. GatewayFilter Factories
Route filters allow the modification of the incoming HTTP request or outgoing HTTP response in
some manner. Route filters are scoped to a particular route. Spring Cloud Gateway includes many
built-in GatewayFilter Factories.
For more detailed examples of how to use any of the following filters, take a look
at the unit tests.
6.1. The AddRequestHeader GatewayFilter Factory
The AddRequestHeader GatewayFilter factory takes a name and value parameter. The following
example configures an AddRequestHeader GatewayFilter:
spring:
cloud:
gateway:
routes:
- id: add_request_header_route
uri: https://example.org
filters:
- AddRequestHeader=X-Request-red, blue
This listing adds X-Request-red:blue header to the downstream request’s headers for all matching
requests.
AddRequestHeader is aware of the URI variables used to match a path or host. URI variables may be
used in the value and are expanded at runtime. The following example configures an
AddRequestHeader GatewayFilter that uses a variable:
spring:
cloud:
gateway:
routes:
- id: add_request_header_route
uri: https://example.org
predicates:
- Path=/red/{segment}
filters:
- AddRequestHeader=X-Request-Red, Blue-{segment}
spring:
cloud:
gateway:
routes:
- id: add_request_parameter_route
uri: https://example.org
filters:
- AddRequestParameter=red, blue
This will add red=blue to the downstream request’s query string for all matching requests.
AddRequestParameter is aware of the URI variables used to match a path or host. URI variables may
be used in the value and are expanded at runtime. The following example configures an
AddRequestParameter GatewayFilter that uses a variable:
spring:
cloud:
gateway:
routes:
- id: add_request_parameter_route
uri: https://example.org
predicates:
- Host: {segment}.myhost.org
filters:
- AddRequestParameter=foo, bar-{segment}
spring:
cloud:
gateway:
routes:
- id: add_response_header_route
uri: https://example.org
filters:
- AddResponseHeader=X-Response-Red, Blue
This adds the X-Response-Red:Blue header to the downstream response’s headers for all matching
requests.
AddResponseHeader is aware of URI variables used to match a path or host. URI variables may be used
in the value and are expanded at runtime. The following example configures an AddResponseHeader
GatewayFilter that uses a variable:
spring:
cloud:
gateway:
routes:
- id: add_response_header_route
uri: https://example.org
predicates:
- Host: {segment}.myhost.org
filters:
- AddResponseHeader=foo, bar-{segment}
spring:
cloud:
gateway:
routes:
- id: dedupe_response_header_route
uri: https://example.org
filters:
- DedupeResponseHeader=Access-Control-Allow-Credentials Access-Control-Allow-Origin
This removes duplicate values of the Access-Control-Allow-Credentials and Access-Control-Allow-Origin
response headers in cases when both the gateway CORS logic and the downstream logic add them.
The DedupeResponseHeader filter also accepts an optional strategy parameter. The accepted values
are RETAIN_FIRST (default), RETAIN_LAST, and RETAIN_UNIQUE.
To enable the Spring Cloud CircuitBreaker filter, you need to place spring-cloud-starter-
circuitbreaker-reactor-resilience4j on the classpath. The following example configures a Spring
Cloud CircuitBreaker GatewayFilter:
spring:
cloud:
gateway:
routes:
- id: circuitbreaker_route
uri: https://example.org
filters:
- CircuitBreaker=myCircuitBreaker
To configure the circuit breaker, see the configuration for the underlying circuit breaker
implementation you are using.
• Resilience4J Documentation
The Spring Cloud CircuitBreaker filter can also accept an optional fallbackUri parameter. Currently,
only forward: schemed URIs are supported. If the fallback is called, the request is forwarded to the
controller matched by the URI. The following example configures such a fallback:
spring:
cloud:
gateway:
routes:
- id: circuitbreaker_route
uri: lb://backing-service:8088
predicates:
- Path=/consumingServiceEndpoint
filters:
- name: CircuitBreaker
args:
name: myCircuitBreaker
fallbackUri: forward:/inCaseOfFailureUseThis
- RewritePath=/consumingServiceEndpoint, /backingServiceEndpoint
@Bean
public RouteLocator routes(RouteLocatorBuilder builder) {
return builder.routes()
    .route("circuitbreaker_route", r -> r.path("/consumingServiceEndpoint")
        .filters(f -> f.circuitBreaker(c -> c.name("myCircuitBreaker")
                .fallbackUri("forward:/inCaseOfFailureUseThis"))
            .rewritePath("/consumingServiceEndpoint", "/backingServiceEndpoint"))
        .uri("lb://backing-service:8088"))
    .build();
}
This example forwards to the /inCaseOfFailureUseThis URI when the circuit breaker fallback is
called. Note that this example also demonstrates the (optional) Spring Cloud LoadBalancer load-
balancing (defined by the lb prefix on the destination URI).
The primary scenario is to use the fallbackUri to define an internal controller or handler within the
gateway application. However, you can also reroute the request to a controller or handler in an
external application, as follows:
Example 27. application.yml
spring:
cloud:
gateway:
routes:
- id: ingredients
uri: lb://ingredients
predicates:
- Path=/ingredients/**
filters:
- name: CircuitBreaker
args:
name: fetchIngredients
fallbackUri: forward:/fallback
- id: ingredients-fallback
uri: http://localhost:9994
predicates:
- Path=/fallback
In this example, there is no fallback endpoint or handler in the gateway application. However,
there is one in another application, registered under localhost:9994.
In case of the request being forwarded to fallback, the Spring Cloud CircuitBreaker Gateway filter
also provides the Throwable that has caused it. It is added to the ServerWebExchange as the
ServerWebExchangeUtils.CIRCUITBREAKER_EXECUTION_EXCEPTION_ATTR attribute that can be used when
handling the fallback within the gateway application.
For the external controller/handler scenario, headers can be added with exception details. You can
find more information on doing so in the FallbackHeaders GatewayFilter Factory section.
In some cases you might want to trip a circuit breaker based on the status code returned from the
route it wraps. The circuit breaker config object takes a list of status codes that, if returned, will
cause the circuit breaker to be tripped. When setting the status codes you want to trip the
circuit breaker, you can either use an integer with the status code value or the String representation
of the HttpStatus enumeration.
Example 28. application.yml
spring:
cloud:
gateway:
routes:
- id: circuitbreaker_route
uri: lb://backing-service:8088
predicates:
- Path=/consumingServiceEndpoint
filters:
- name: CircuitBreaker
args:
name: myCircuitBreaker
fallbackUri: forward:/inCaseOfFailureUseThis
statusCodes:
- 500
- "NOT_FOUND"
@Bean
public RouteLocator routes(RouteLocatorBuilder builder) {
return builder.routes()
    .route("circuitbreaker_route", r -> r.path("/consumingServiceEndpoint")
        .filters(f -> f.circuitBreaker(c -> c.name("myCircuitBreaker")
                .fallbackUri("forward:/inCaseOfFailureUseThis")
                .addStatusCode("INTERNAL_SERVER_ERROR"))
            .rewritePath("/consumingServiceEndpoint", "/backingServiceEndpoint"))
        .uri("lb://backing-service:8088"))
    .build();
}
spring:
cloud:
gateway:
routes:
- id: ingredients
uri: lb://ingredients
predicates:
- Path=/ingredients/**
filters:
- name: CircuitBreaker
args:
name: fetchIngredients
fallbackUri: forward:/fallback
- id: ingredients-fallback
uri: http://localhost:9994
predicates:
- Path=/fallback
filters:
- name: FallbackHeaders
args:
executionExceptionTypeHeaderName: Test-Header
In this example, after an execution exception occurs while running the circuit breaker, the request
is forwarded to the fallback endpoint or handler in an application running on localhost:9994. The
headers with the exception type, message and (if available) root cause exception type and message
are added to that request by the FallbackHeaders filter.
You can overwrite the names of the headers in the configuration by setting the values of the
following arguments (shown with their default values):
• executionExceptionTypeHeaderName ("Execution-Exception-Type")
• executionExceptionMessageHeaderName ("Execution-Exception-Message")
• rootCauseExceptionTypeHeaderName ("Root-Cause-Exception-Type")
• rootCauseExceptionMessageHeaderName ("Root-Cause-Exception-Message")
For more information on circuit breakers and the gateway see the Spring Cloud CircuitBreaker
Factory section.
spring:
cloud:
gateway:
routes:
- id: map_request_header_route
uri: https://example.org
filters:
- MapRequestHeader=Blue, X-Request-Red
This adds the X-Request-Red:<values> header to the downstream request, with values taken from the
incoming HTTP request's Blue header.
spring:
cloud:
gateway:
routes:
- id: prefixpath_route
uri: https://example.org
filters:
- PrefixPath=/mypath
This will prefix /mypath to the path of all matching requests. So a request to /hello would be sent to
/mypath/hello.
spring:
cloud:
gateway:
routes:
- id: preserve_host_route
uri: https://example.org
filters:
- PreserveHostHeader
The RequestRateLimiter GatewayFilter factory uses a RateLimiter implementation to determine
whether the current request is allowed to proceed. If it is not, a status of HTTP 429 - Too Many
Requests (by default) is returned.
This filter takes an optional keyResolver parameter and parameters specific to the rate limiter
(described later in this section).
keyResolver is a bean that implements the KeyResolver interface. In configuration, reference the
bean by name using SpEL. #{@myKeyResolver} is a SpEL expression that references a bean named
myKeyResolver. The following listing shows the KeyResolver interface:
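public interface KeyResolver {
    Mono<String> resolve(ServerWebExchange exchange);
}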
The KeyResolver interface lets pluggable strategies derive the key for limiting requests. In future
milestone releases, there will be some KeyResolver implementations.
By default, if the KeyResolver does not find a key, requests are denied. You can adjust this behavior
by setting the spring.cloud.gateway.filter.request-rate-limiter.deny-empty-key (true or false) and
spring.cloud.gateway.filter.request-rate-limiter.empty-key-status-code properties.
The RequestRateLimiter is not configurable with the "shortcut" notation. The
following example is invalid:
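# INVALID SHORTCUT CONFIGURATION
spring.cloud.gateway.routes[0].filters[0]=RequestRateLimiter=2, 2, #{@userkeyresolver}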
The Redis implementation is based off of work done at Stripe. It requires the use of the spring-boot-
starter-data-redis-reactive Spring Boot starter.
The redis-rate-limiter.replenishRate property defines how many requests per second a user is
allowed to make without any dropped requests. This is the rate at which the token bucket is
filled.
The redis-rate-limiter.burstCapacity property is the maximum number of requests a user is allowed
to make in a single second. This is the number of tokens the token bucket can hold. Setting this
value to zero blocks all requests.
The redis-rate-limiter.requestedTokens property defines how many tokens a request costs. This is
the number of tokens taken from the bucket for each request and defaults to 1.
A steady rate is accomplished by setting the same value in replenishRate and burstCapacity.
Temporary bursts can be allowed by setting burstCapacity higher than replenishRate. In this case,
the rate limiter needs to be allowed some time between bursts (according to replenishRate), as two
consecutive bursts will result in dropped requests (HTTP 429 - Too Many Requests). The following
listing configures a redis-rate-limiter:
Rate limits below 1 request/s are accomplished by setting replenishRate to the wanted number of
requests, requestedTokens to the timespan in seconds, and burstCapacity to the product of
replenishRate and requestedTokens. For example, setting replenishRate=1, requestedTokens=60, and
burstCapacity=60 results in a limit of 1 request/min.
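The same 1 request/min limit can also be defined programmatically. The following is a minimal
sketch, assuming the Redis implementation's RedisRateLimiter(replenishRate, burstCapacity,
requestedTokens) constructor:
@Bean
public RedisRateLimiter oncePerMinuteRateLimiter() {
    // replenishRate=1 (one token added per second), burstCapacity=60,
    // requestedTokens=60: each request costs 60 tokens, so the bucket
    // allows one request per minute.
    return new RedisRateLimiter(1, 60, 60);
}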
Example 36. application.yml
spring:
cloud:
gateway:
routes:
- id: requestratelimiter_route
uri: https://example.org
filters:
- name: RequestRateLimiter
args:
redis-rate-limiter.replenishRate: 10
redis-rate-limiter.burstCapacity: 20
redis-rate-limiter.requestedTokens: 1
@Bean
KeyResolver userKeyResolver() {
return exchange ->
Mono.just(exchange.getRequest().getQueryParams().getFirst("user"));
}
This defines a request rate limit of 10 per user. A burst of 20 is allowed, but, in the next second, only
10 requests are available. The KeyResolver is a simple one that gets the user request parameter (note
that this is not recommended for production).
You can also define a rate limiter as a bean that implements the RateLimiter interface. In
configuration, you can reference the bean by name using SpEL. #{@myRateLimiter} is a SpEL
expression that references a bean named myRateLimiter. The following listing defines a rate
limiter that uses the KeyResolver defined in the previous listing:
Example 38. application.yml
spring:
cloud:
gateway:
routes:
- id: requestratelimiter_route
uri: https://example.org
filters:
- name: RequestRateLimiter
args:
rate-limiter: "#{@myRateLimiter}"
key-resolver: "#{@userKeyResolver}"
spring:
cloud:
gateway:
routes:
- id: redirectto_route
uri: https://example.org
filters:
- RedirectTo=302, https://acme.org
This will send a status 302 with a Location:https://acme.org header to perform a redirect.
spring:
cloud:
gateway:
routes:
- id: removerequestheader_route
uri: https://example.org
filters:
- RemoveRequestHeader=X-Request-Foo
spring:
cloud:
gateway:
routes:
- id: removeresponseheader_route
uri: https://example.org
filters:
- RemoveResponseHeader=X-Response-Foo
This will remove the X-Response-Foo header from the response before it is returned to the gateway
client.
To remove any kind of sensitive header, you should configure this filter for any routes for which
you may want to do so. In addition, you can configure this filter once by using
spring.cloud.gateway.default-filters and have it applied to all routes.
spring:
cloud:
gateway:
routes:
- id: removerequestparameter_route
uri: https://example.org
filters:
- RemoveRequestParameter=red
spring:
cloud:
gateway:
routes:
- id: rewritepath_route
uri: https://example.org
predicates:
- Path=/red/**
filters:
- RewritePath=/red/?(?<segment>.*), /$\{segment}
For a request path of /red/blue, this sets the path to /blue before making the downstream request.
Note that the $ should be replaced with $\ because of the YAML specification.
spring:
cloud:
gateway:
routes:
- id: rewritelocationresponseheader_route
uri: http://example.org
filters:
- RewriteLocationResponseHeader=AS_IN_REQUEST, Location, ,
For example, for a request of POST api.example.com/some/object/name, the Location response header
value of object-service.prod.example.net/v2/some/object/id is rewritten as api.example.com/some/
object/id.
The stripVersionMode parameter has the following possible values: NEVER_STRIP, AS_IN_REQUEST
(default), and ALWAYS_STRIP.
• NEVER_STRIP: The version is not stripped, even if the original request path contains no version.
• AS_IN_REQUEST: The version is stripped only if the original request path contains no version.
• ALWAYS_STRIP: The version is always stripped, even if the original request path contains a version.
The hostValue parameter, if provided, is used to replace the host:port portion of the response
Location header. If it is not provided, the value of the Host request header is used.
The protocolsRegex parameter must be a valid regex String, against which the protocol name is
matched. If it is not matched, the filter does nothing. The default is http|https|ftp|ftps.
spring:
cloud:
gateway:
routes:
- id: rewriteresponseheader_route
uri: https://example.org
filters:
- RewriteResponseHeader=X-Response-Red, , password=[^&]+, password=***
For a header value of /42?user=ford&password=omg!what&flag=true, it is set to
/42?user=ford&password=***&flag=true after making the downstream request. You must use $\ to
mean $ because of the YAML specification.
spring:
cloud:
gateway:
routes:
- id: save_session
uri: https://example.org
predicates:
- Path=/foo/**
filters:
- SaveSession
If you integrate Spring Security with Spring Session and want to ensure security details have been
forwarded to the remote process, this is critical.
The following headers (shown with their default values) are added:
• X-Xss-Protection:1 (mode=block)
• Strict-Transport-Security (max-age=631138519)
• X-Frame-Options (DENY)
• X-Content-Type-Options (nosniff)
• Referrer-Policy (no-referrer)
• X-Permitted-Cross-Domain-Policies (none)
To change the default values, set the appropriate property in the
spring.cloud.gateway.filter.secure-headers namespace. The following properties are available:
• xss-protection-header
• strict-transport-security
• x-frame-options
• x-content-type-options
• referrer-policy
• content-security-policy
• x-download-options
• x-permitted-cross-domain-policies
spring.cloud.gateway.filter.secure-headers.disable=x-frame-options,strict-transport-security
The lowercase full name of the secure header needs to be used to disable it.
spring:
cloud:
gateway:
routes:
- id: setpath_route
uri: https://example.org
predicates:
- Path=/red/{segment}
filters:
- SetPath=/{segment}
For a request path of /red/blue, this sets the path to /blue before making the downstream request.
6.21. The SetRequestHeader GatewayFilter Factory
The SetRequestHeader GatewayFilter factory takes name and value parameters. The following listing
configures a SetRequestHeader GatewayFilter:
spring:
cloud:
gateway:
routes:
- id: setrequestheader_route
uri: https://example.org
filters:
- SetRequestHeader=X-Request-Red, Blue
This GatewayFilter replaces (rather than adding) all headers with the given name. So, if an
incoming request carries an X-Request-Red:1234 header, it is replaced with X-Request-Red:Blue,
which is what the downstream service receives.
SetRequestHeader is aware of URI variables used to match a path or host. URI variables may be used
in the value and are expanded at runtime. The following example configures a SetRequestHeader
GatewayFilter that uses a variable:
spring:
cloud:
gateway:
routes:
- id: setrequestheader_route
uri: https://example.org
predicates:
- Host={segment}.myhost.org
filters:
- SetRequestHeader=foo, bar-{segment}
spring:
cloud:
gateway:
routes:
- id: setresponseheader_route
uri: https://example.org
filters:
- SetResponseHeader=X-Response-Red, Blue
This GatewayFilter replaces (rather than adding) all headers with the given name. So, if the
downstream server responded with a X-Response-Red:1234, this is replaced with X-Response-
Red:Blue, which is what the gateway client would receive.
SetResponseHeader is aware of URI variables used to match a path or host. URI variables may be used
in the value and are expanded at runtime. The following example configures a
SetResponseHeader GatewayFilter that uses a variable:
spring:
cloud:
gateway:
routes:
- id: setresponseheader_route
uri: https://example.org
predicates:
- Host={segment}.myhost.org
filters:
- SetResponseHeader=foo, bar-{segment}
spring:
cloud:
gateway:
routes:
- id: setstatusstring_route
uri: https://example.org
filters:
- SetStatus=BAD_REQUEST
- id: setstatusint_route
uri: https://example.org
filters:
- SetStatus=401
You can configure the SetStatus GatewayFilter to return the original HTTP status code from the
proxied request in a header in the response. The header is added to the response if configured with
the following property:
spring:
cloud:
gateway:
set-status:
original-status-header-name: original-http-status
spring:
cloud:
gateway:
routes:
- id: nameRoot
uri: https://nameservice
predicates:
- Path=/name/**
filters:
- StripPrefix=2
When a request is made through the gateway to /name/blue/red, the request made to nameservice
looks like nameservice/red.
The Retry GatewayFilter factory supports the following parameters, among others:
• statuses: The HTTP status codes that should be retried, represented by using
org.springframework.http.HttpStatus.
• backoff: The configured exponential backoff for the retries. Retries are performed after a
backoff interval of firstBackoff * (factor ^ n), where n is the iteration. If maxBackoff is
configured, the maximum backoff applied is limited to maxBackoff. If basedOnPreviousValue is
true, the backoff is calculated by using prevBackoff * factor. By default, backoff is disabled.
spring:
cloud:
gateway:
routes:
- id: retry_test
uri: http://localhost:8080/flakey
predicates:
- Host=*.retry.com
filters:
- name: Retry
args:
retries: 3
statuses: BAD_GATEWAY
methods: GET,POST
backoff:
firstBackoff: 10ms
maxBackoff: 50ms
factor: 2
basedOnPreviousValue: false
When using the retry filter with a forward: prefixed URL, the target endpoint
should be written carefully so that, in case of an error, it does not do anything that
could result in a response being sent to the client and committed. For example, if
the target endpoint is an annotated controller, the target controller method should
not return ResponseEntity with an error status code. Instead, it should throw an
Exception or signal an error (for example, through a Mono.error(ex) return value),
which the retry filter can be configured to handle by retrying.
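For example, the following is a minimal sketch (the controller and path are hypothetical) of a
forward: target that signals an error rather than committing an error response, so that the retry
filter can retry it:
@RestController
public class RetryableFallbackController {

    @GetMapping("/fallback")
    public Mono<String> fallback() {
        // Signal the error instead of returning an error ResponseEntity:
        // the response is not committed, so the Retry filter can retry.
        return Mono.error(new IllegalStateException("downstream failure"));
    }
}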
When using the retry filter with any HTTP method with a body, the body will be
cached and the gateway will become memory constrained. The body is cached in a
request attribute defined by ServerWebExchangeUtils.CACHED_REQUEST_BODY_ATTR. The
type of the object is a org.springframework.core.io.buffer.DataBuffer.
spring:
cloud:
gateway:
routes:
- id: request_size_route
uri: http://localhost:8080/upload
predicates:
- Path=/upload
filters:
- name: RequestSize
args:
maxSize: 5000000
The RequestSize GatewayFilter factory sets the response status as 413 Payload Too Large with an
additional header errorMessage when the request is rejected due to size. The following example
shows such an errorMessage:
The default request size is set to five MB if not provided as a filter argument in the
route definition.
spring:
cloud:
gateway:
routes:
- id: set_request_host_header_route
uri: http://localhost:8080/headers
predicates:
- Path=/headers
filters:
- name: SetRequestHostHeader
args:
host: example.org
The SetRequestHostHeader GatewayFilter factory replaces the value of the host header with
example.org.
If the request has no body, the RewriteFilter is passed null. Return Mono.empty()
to assign a missing body in the request.
Spring Cloud Gateway can forward OAuth2 access tokens downstream to the services it is proxying.
To add this functionality to the gateway, you need to add the TokenRelayGatewayFilterFactory, like this:
App.java
@Bean
public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
return builder.routes()
.route("resource", r -> r.path("/resource")
.filters(f -> f.tokenRelay())
.uri("http://localhost:9000"))
.build();
}
or this
application.yaml
spring:
cloud:
gateway:
routes:
- id: resource
uri: http://localhost:9000
predicates:
- Path=/resource
filters:
- TokenRelay=
and it will (in addition to logging the user in and grabbing a token) pass the authentication token
downstream to the services (in this case /resource).
To enable this for Spring Cloud Gateway, add the following dependency:
• org.springframework.boot:spring-boot-starter-oauth2-client
How does it work? The filter extracts an access token from the currently authenticated user, and
puts it in a request header for the downstream requests.
To add a filter and apply it to all routes, you can use spring.cloud.gateway.default-filters. This
property takes a list of filters, as follows:
spring:
cloud:
gateway:
default-filters:
- AddResponseHeader=X-Response-Default-Red, Default-Blue
- PrefixPath=/httpbin
7. Global Filters
The GlobalFilter interface has the same signature as GatewayFilter. These are special filters that
are conditionally applied to all routes.
This interface and its usage are subject to change in future milestone releases.
As Spring Cloud Gateway distinguishes between “pre” and “post” phases for filter logic execution
(see How it Works), the filter with the highest precedence is the first in the “pre”-phase and the last
in the “post”-phase.
@Bean
public GlobalFilter customFilter() {
    return new CustomGlobalFilter();
}

public class CustomGlobalFilter implements GlobalFilter, Ordered {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        log.info("custom global filter");
        return chain.filter(exchange);
    }

    @Override
    public int getOrder() {
        return -1;
    }
}
spring:
cloud:
gateway:
routes:
- id: myRoute
uri: lb://service
predicates:
- Path=/service/**
Gateway supports all the LoadBalancer features. You can read more about them in
the Spring Cloud Commons documentation.
If the URI has a scheme prefix, such as lb:ws://serviceid, the lb scheme is stripped from the URI
and placed in the ServerWebExchangeUtils.GATEWAY_SCHEME_PREFIX_ATTR for use later in the filter
chain.
You can load-balance websockets by prefixing the URI with lb, such as lb:ws://serviceid.
If you use SockJS as a fallback over normal HTTP, you should configure a normal
HTTP route as well as the websocket Route.
spring:
cloud:
gateway:
routes:
# SockJS route
- id: websocket_sockjs_route
uri: http://localhost:3001
predicates:
- Path=/websocket/info/**
# Normal Websocket route
- id: websocket_route
uri: ws://localhost:3001
predicates:
- Path=/websocket/**
These metrics are then available to be scraped from /actuator/metrics/gateway.requests and can be
easily integrated with Prometheus to create a Grafana dashboard.
8. HttpHeadersFilters
HttpHeadersFilters are applied to requests before sending them downstream, such as in the
NettyRoutingFilter.
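Custom filters can participate in this chain. The following is a minimal sketch, assuming the
HttpHeadersFilter contract is a single filter(HttpHeaders input, ServerWebExchange exchange)
method and that beans of this type are picked up automatically; the X-Custom-Tag header is
hypothetical:
@Bean
public HttpHeadersFilter customHeadersFilter() {
    return (input, exchange) -> {
        HttpHeaders filtered = new HttpHeaders();
        filtered.putAll(input);
        // X-Custom-Tag is a hypothetical header, added purely for illustration.
        filtered.set("X-Custom-Tag", "gateway");
        return filtered;
    };
}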
The RemoveHopByHop Headers Filter removes the following hop-by-hop headers from forwarded requests:
• Connection
• Keep-Alive
• Proxy-Authenticate
• Proxy-Authorization
• TE
• Trailer
• Transfer-Encoding
• Upgrade
To change this, set the spring.cloud.gateway.filter.remove-hop-by-hop.headers property to the list
of header names to remove.
The creation of individual headers can be controlled by the following boolean properties (each
defaults to true):
• spring.cloud.gateway.x-forwarded.for-enabled
• spring.cloud.gateway.x-forwarded.host-enabled
• spring.cloud.gateway.x-forwarded.port-enabled
• spring.cloud.gateway.x-forwarded.proto-enabled
• spring.cloud.gateway.x-forwarded.prefix-enabled
Appending multiple headers can be controlled by the following boolean properties (each defaults to
true):
• spring.cloud.gateway.x-forwarded.for-append
• spring.cloud.gateway.x-forwarded.host-append
• spring.cloud.gateway.x-forwarded.port-append
• spring.cloud.gateway.x-forwarded.proto-append
• spring.cloud.gateway.x-forwarded.prefix-append
The gateway can listen for requests on HTTPS by following the usual Spring server configuration,
as the following example shows:
server:
ssl:
enabled: true
key-alias: scg
key-store-password: scg1234
key-store: classpath:scg-keystore.p12
key-store-type: PKCS12
You can route gateway routes to both HTTP and HTTPS backends. If you are routing to an HTTPS
backend, you can configure the gateway to trust all downstream certificates with the following
configuration:
spring:
cloud:
gateway:
httpclient:
ssl:
useInsecureTrustManager: true
Using an insecure trust manager is not suitable for production. For a production deployment, you
can configure the gateway with a set of known certificates that it can trust with the following
configuration:
spring:
cloud:
gateway:
httpclient:
ssl:
trustedX509Certificates:
- cert1.pem
- cert2.pem
If the Spring Cloud Gateway is not provisioned with trusted certificates, the default trust store is
used (which you can override by setting the javax.net.ssl.trustStore system property).
You can configure TLS handshake timeouts for the gateway's HTTP client (defaults shown), as follows:
spring:
cloud:
gateway:
httpclient:
ssl:
handshake-timeout-millis: 10000
close-notify-flush-timeout-millis: 3000
close-notify-read-timeout-millis: 0
10. Configuration
Configuration for Spring Cloud Gateway is driven by a collection of RouteDefinitionLocator
instances. The following listing shows the definition of the RouteDefinitionLocator interface:
Example 66. RouteDefinitionLocator.java
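public interface RouteDefinitionLocator {
    Flux<RouteDefinition> getRouteDefinitions();
}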
The earlier configuration examples all use a shortcut notation that uses positional arguments
rather than named ones. The following two examples are equivalent:
spring:
cloud:
gateway:
routes:
- id: setstatus_route
uri: https://example.org
filters:
- name: SetStatus
args:
status: 401
- id: setstatusshortcut_route
uri: https://example.org
filters:
- SetStatus=401
For some usages of the gateway, properties are adequate, but some production use cases benefit
from loading configuration from an external source, such as a database. Future milestone versions
will have RouteDefinitionLocator implementations based off of Spring Data Repositories, such as
Redis, MongoDB, and Cassandra.
You can configure additional parameters for each route by using metadata, as follows:
spring:
cloud:
gateway:
routes:
- id: route_with_metadata
uri: https://example.org
metadata:
optionName: "OptionValue"
compositeObject:
name: "value"
iAmNumber: 1
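The metadata can be read back from the exchange at runtime. The following is a minimal sketch
(assuming it runs as a GlobalFilter) of reading the metadata configured above:
@Bean
public GlobalFilter metadataFilter() {
    return (exchange, chain) -> {
        Route route = exchange.getAttribute(ServerWebExchangeUtils.GATEWAY_ROUTE_ATTR);
        if (route != null) {
            // Prints "OptionValue" for the route defined above.
            System.out.println(route.getMetadata().get("optionName"));
        }
        return chain.filter(exchange);
    };
}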
Connect and response timeouts can be configured for all routes through the gateway's HTTP client,
as follows:
spring:
cloud:
gateway:
httpclient:
connect-timeout: 1000
response-timeout: 5s
12.2. Per-route timeouts
To configure per-route timeouts:
connect-timeout must be specified in milliseconds.
response-timeout must be specified in milliseconds.
- id: per_route_timeouts
uri: https://example.org
predicates:
- name: Path
args:
pattern: /delay/{timeout}
metadata:
response-timeout: 200
connect-timeout: 200
import static org.springframework.cloud.gateway.support.RouteMetadataUtils.CONNECT_TIMEOUT_ATTR;
import static org.springframework.cloud.gateway.support.RouteMetadataUtils.RESPONSE_TIMEOUT_ATTR;
@Bean
public RouteLocator customRouteLocator(RouteLocatorBuilder routeBuilder){
return routeBuilder.routes()
.route("test1", r -> {
return r.host("*.somehost.org").and().path("/somepath")
.filters(f -> f.addRequestHeader("header1", "header-value-1"))
.uri("http://someuri")
.metadata(RESPONSE_TIMEOUT_ATTR, 200)
.metadata(CONNECT_TIMEOUT_ATTR, 200);
})
.build();
}
This style also allows for more custom predicate assertions. The predicates defined by
RouteDefinitionLocator beans are combined using logical and. By using the fluent Java API, you can
use the and(), or(), and negate() operators on the Predicate class.
By default, the gateway defines a single predicate and filter for routes created with a
DiscoveryClient.
The default predicate is a path predicate defined with the pattern /serviceId/**, where serviceId is
the ID of the service from the DiscoveryClient.
The default filter is a rewrite path filter with the regex /serviceId/?(?<remaining>.*) and the
replacement /${remaining}. This strips the service ID from the path before the request is sent
downstream.
If you want to customize the predicates or filters used by the DiscoveryClient routes, set
spring.cloud.gateway.discovery.locator.predicates[x] and
spring.cloud.gateway.discovery.locator.filters[y]. When doing so, you need to make sure to
include the default predicate and filter shown earlier, if you want to retain that functionality. The
following example shows what this looks like:
spring.cloud.gateway.discovery.locator.predicates[0].name: Path
spring.cloud.gateway.discovery.locator.predicates[0].args[pattern]: "'/'+serviceId+'/**'"
spring.cloud.gateway.discovery.locator.predicates[1].name: Host
spring.cloud.gateway.discovery.locator.predicates[1].args[pattern]: "'**.foo.com'"
spring.cloud.gateway.discovery.locator.filters[0].name: CircuitBreaker
spring.cloud.gateway.discovery.locator.filters[0].args[name]: serviceId
spring.cloud.gateway.discovery.locator.filters[1].name: RewritePath
spring.cloud.gateway.discovery.locator.filters[1].args[regexp]: "'/' + serviceId + '/?(?<remaining>.*)'"
spring.cloud.gateway.discovery.locator.filters[1].args[replacement]: "'/${remaining}'"
You can configure the logging system to have a separate access log file. The following example
creates a Logback configuration:
Example 71. logback.xml
spring:
cloud:
gateway:
globalcors:
cors-configurations:
'[/**]':
allowedOrigins: "https://docs.spring.io"
allowedMethods:
- GET
In the preceding example, CORS requests are allowed from requests that originate from
docs.spring.io for all GET requested paths.
To provide the same CORS configuration to requests that are not handled by some gateway route
predicate, set the spring.cloud.gateway.globalcors.add-to-simple-url-handler-mapping property to
true. This is useful when you try to support CORS preflight requests and your route predicate does
not evaluate to true because the HTTP method is OPTIONS.
15. Actuator API
The /gateway actuator endpoint lets you monitor and interact with a Spring Cloud Gateway
application. To be remotely accessible, the endpoint has to be enabled and exposed over HTTP or
JMX in the application properties. The following listing shows how to do so:
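management.endpoint.gateway.enabled=true # default value
management.endpoints.web.exposure.include=gateway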
[
  {
    "predicate": "(Hosts: [**.addrequestheader.org] && Paths: [/headers], match trailing slash: true)",
    "route_id": "add_request_header_test",
    "filters": [
      "[[AddResponseHeader X-Response-Default-Foo = 'Default-Bar'], order = 1]",
      "[[AddRequestHeader X-Request-Foo = 'Bar'], order = 1]",
      "[[PrefixPath prefix = '/httpbin'], order = 2]"
    ],
    "uri": "lb://testservice",
    "order": 0
  }
]
This feature is enabled by default. To disable it, set the following property:
spring.cloud.gateway.actuator.verbose.enabled=false
• Global Filters
• Route Filters
To retrieve the global filters applied to all routes, make a GET request to
/actuator/gateway/globalfilters. The resulting response is similar to the following:
{
  "org.springframework.cloud.gateway.filter.ReactiveLoadBalancerClientFilter@77856cc5": 10100,
  "org.springframework.cloud.gateway.filter.RouteToRequestUrlFilter@4f6fd101": 10000,
  "org.springframework.cloud.gateway.filter.NettyWriteResponseFilter@32d22650": -1,
  "org.springframework.cloud.gateway.filter.ForwardRoutingFilter@106459d9": 2147483647,
  "org.springframework.cloud.gateway.filter.NettyRoutingFilter@1fbd5e0": 2147483647,
  "org.springframework.cloud.gateway.filter.ForwardPathFilter@33a71d23": 0,
  "org.springframework.cloud.gateway.filter.AdaptCachedBodyGlobalFilter@135064ea": 2147483637,
  "org.springframework.cloud.gateway.filter.WebsocketRoutingFilter@23c05889": 2147483646
}
The response contains the details of the global filters that are in place. For each global filter, there is
a string representation of the filter object (for example,
org.springframework.cloud.gateway.filter.ReactiveLoadBalancerClientFilter@77856cc5) and the
corresponding order in the filter chain.
The response contains the details of the GatewayFilter factories applied to any particular route. For
each factory there is a string representation of the corresponding object (for example,
[SecureHeadersGatewayFilterFactory@fceab5d configClass = Object]). Note that the null value is due
to an incomplete implementation of the endpoint controller, because it tries to set the order of the
object in the filter chain, which does not apply to a GatewayFilter factory object.
The response contains the details of all the routes defined in the gateway. The following table
describes the structure of each element (each is a route) of the response:
16. Troubleshooting
This section covers common problems that may arise when you use Spring Cloud Gateway.
16.1. Log Levels
The following loggers may contain valuable troubleshooting information at the DEBUG and TRACE
levels:
• org.springframework.cloud.gateway
• org.springframework.http.server.reactive
• org.springframework.web.reactive
• org.springframework.boot.autoconfigure.web
• reactor.netty
• redisratelimiter
16.2. Wiretap
The Reactor Netty HttpClient and HttpServer can have wiretap enabled. When combined with
setting the reactor.netty log level to DEBUG or TRACE, it enables the logging of information, such as
headers and bodies sent and received across the wire. To enable wiretap, set
spring.cloud.gateway.httpserver.wiretap=true or spring.cloud.gateway.httpclient.wiretap=true for
the HttpServer and HttpClient, respectively.
To write a custom route predicate, you need to implement RoutePredicateFactory as a bean. There is
an abstract class called AbstractRoutePredicateFactory which you can extend, as follows:
public class MyRoutePredicateFactory extends AbstractRoutePredicateFactory<MyRoutePredicateFactory.Config> {

    public MyRoutePredicateFactory() {
        super(Config.class);
    }

    @Override
    public Predicate<ServerWebExchange> apply(Config config) {
        // grab configuration from Config object
        return exchange -> {
            // grab the request
            ServerHttpRequest request = exchange.getRequest();
            // take information from the request to see if it
            // matches configuration.
            return matches(config, request);
        };
    }
}
To write a GatewayFilter, you must implement GatewayFilterFactory as a bean. There is an abstract
class called AbstractGatewayFilterFactory that you can extend. The following shows a "pre" filter
factory:
public class PreGatewayFilterFactory extends AbstractGatewayFilterFactory<PreGatewayFilterFactory.Config> {

    public PreGatewayFilterFactory() {
        super(Config.class);
    }

    @Override
    public GatewayFilter apply(Config config) {
        // grab configuration from Config object
        return (exchange, chain) -> {
            // If you want to build a "pre" filter you need to manipulate the
            // request before calling chain.filter
            ServerHttpRequest.Builder builder = exchange.getRequest().mutate();
            // use builder to manipulate the request
            return chain.filter(exchange.mutate().request(builder.build()).build());
        };
    }
}
PostGatewayFilterFactory.java
public class PostGatewayFilterFactory extends AbstractGatewayFilterFactory<PostGatewayFilterFactory.Config> {

    public PostGatewayFilterFactory() {
        super(Config.class);
    }

    @Override
    public GatewayFilter apply(Config config) {
        // grab configuration from Config object
        return (exchange, chain) -> {
            return chain.filter(exchange).then(Mono.fromRunnable(() -> {
                ServerHttpResponse response = exchange.getResponse();
                // Manipulate the response in some way
            }));
        };
    }
}
For example, to reference a filter named Something in configuration files, the filter must be in a class
named SomethingGatewayFilterFactory.
The following examples show how to set up global pre and post filters, respectively:
@Bean
public GlobalFilter customGlobalFilter() {
    return (exchange, chain) -> exchange.getPrincipal()
        .map(Principal::getName)
        .defaultIfEmpty("Default User")
        .map(userName -> {
            // adds header to proxied request
            exchange.getRequest().mutate().header("CUSTOM-REQUEST-HEADER", userName).build();
            return exchange;
        })
        .flatMap(chain::filter);
}
@Bean
public GlobalFilter customGlobalPostFilter() {
    return (exchange, chain) -> chain.filter(exchange)
        .then(Mono.just(exchange))
        .map(serverWebExchange -> {
            // adds header to response
            serverWebExchange.getResponse().getHeaders().set("CUSTOM-RESPONSE-HEADER",
                HttpStatus.OK.equals(serverWebExchange.getResponse().getStatusCode())
                    ? "It worked" : "It did not work");
            return serverWebExchange;
        })
        .then();
}
Spring Cloud Gateway provides a utility object called ProxyExchange. You can use it inside a regular
Spring web handler as a method parameter. It supports basic downstream HTTP exchanges
through methods that mirror the HTTP verbs. With MVC, it also supports forwarding to a local
handler through the forward() method. To use the ProxyExchange, include the right module in your
classpath (either spring-cloud-gateway-mvc or spring-cloud-gateway-webflux).
The following MVC example proxies a request to /test downstream to a remote server:
@RestController
@SpringBootApplication
public class GatewaySampleApplication {

    @Value("${remote.home}")
    private URI home;

    @GetMapping("/test")
    public ResponseEntity<?> proxy(ProxyExchange<byte[]> proxy) throws Exception {
        return proxy.uri(home.toString() + "/image/png").get();
    }
}
The following example does the same thing with Webflux:
@RestController
@SpringBootApplication
public class GatewaySampleApplication {

    @Value("${remote.home}")
    private URI home;

    @GetMapping("/test")
    public Mono<ResponseEntity<?>> proxy(ProxyExchange<byte[]> proxy) throws Exception {
        return proxy.uri(home.toString() + "/image/png").get();
    }
}
Convenience methods on the ProxyExchange enable the handler method to discover and enhance the
URI path of the incoming request. For example, you might want to extract the trailing elements of a
path to pass them downstream:
@GetMapping("/proxy/path/**")
public ResponseEntity<?> proxyPath(ProxyExchange<byte[]> proxy) throws Exception {
String path = proxy.path("/proxy/path/");
return proxy.uri(home.toString() + "/foos/" + path).get();
}
All the features of Spring MVC and Webflux are available to gateway handler methods. As a result,
you can inject request headers and query parameters, for instance, and you can constrain the
incoming requests with declarations in the mapping annotation. See the documentation for
@RequestMapping in Spring MVC for more details of those features.
You can add headers to the downstream response by using the header() methods on ProxyExchange.
You can also manipulate response headers (and anything else you like in the response) by adding a
mapper to the get() method (and other methods). The mapper is a Function that takes the incoming
ResponseEntity and converts it to an outgoing one.
First-class support is provided for “sensitive” headers (by default, cookie and authorization), which
are not passed downstream, and for “proxy” (x-forwarded-*) headers.
2. Starters
Starters are convenient dependency descriptors you can include in your application. Include a
starter to get the dependencies and Spring Boot auto-configuration for a feature set. Starters that
begin with spring-cloud-starter-kubernetes-fabric8 provide implementations using the Fabric8
Kubernetes Java Client. Starters that begin with spring-cloud-starter-kubernetes-client provide
implementations using the Kubernetes Java Client.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-client</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-client-config</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-fabric8-all</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-client-all</artifactId>
</dependency>
This is something that you get for free by adding the following dependency inside your project:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes-fabric8</artifactId>
</dependency>
Kubernetes Java Client
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes-client</artifactId>
</dependency>
@SpringBootApplication
@EnableDiscoveryClient
public class Application {
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
}
Then you can inject the client in your code simply by autowiring it, as the following example shows:
@Autowired
private DiscoveryClient discoveryClient;
You can choose to enable DiscoveryClient from all namespaces by setting the following property in
application.properties:
spring.cloud.kubernetes.discovery.all-namespaces=true
To discover service endpoint addresses that are not marked as "ready" by the Kubernetes API
server, you can set the following property in application.properties (default: false):
spring.cloud.kubernetes.discovery.include-not-ready-addresses=true
This might be useful when discovering services for monitoring purposes, and
would enable inspecting the /health endpoint of not-ready service instances.
If your service exposes multiple ports, you will need to specify which port the DiscoveryClient
should use. The DiscoveryClient chooses the port by using the following logic:
1. If the service has a label primary-port-name, it uses the port with the name specified in the
label's value.
2. If no label is present, it uses the port name specified in the
spring.cloud.kubernetes.discovery.primary-port-name property, if set.
3. If neither of the above is specified, it uses the port named https.
4. If none of the above conditions are met, it uses the port named http.
5. As a last resort, it picks the first port in the list of ports.
The last option may result in non-deterministic behaviour. Please make sure to
configure your service and/or application accordingly.
By default all of the ports and their names will be added to the metadata of the ServiceInstance.
If, for any reason, you need to disable the DiscoveryClient, you can set the following property in
application.properties:
spring.cloud.kubernetes.discovery.enabled=false
Some Spring Cloud components use the DiscoveryClient in order to obtain information about the
local service instance. For this to work, you need to align the Kubernetes service name with the
spring.application.name property.
Spring Cloud Kubernetes can also watch the Kubernetes service catalog for changes and update the
DiscoveryClient implementation accordingly. In order to enable this functionality you need to add
@EnableScheduling on a configuration class in your application.
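For example, a minimal sketch of enabling the catalog watch on the application shown earlier:
@SpringBootApplication
@EnableDiscoveryClient
@EnableScheduling
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}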
The caller service then need only refer to names resolvable in a particular Kubernetes cluster. A
simple implementation might use a spring RestTemplate that refers to a fully qualified domain name
(FQDN), such as {service-name}.{namespace}.svc.{cluster}.local:{service-port}.
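For example, a minimal sketch (the service name service-a, namespace default, port 8080, and
/greeting endpoint are all hypothetical):
RestTemplate restTemplate = new RestTemplate();
// Resolves through Kubernetes DNS rather than the DiscoveryClient.
String greeting = restTemplate.getForObject(
        "http://service-a.default.svc.cluster.local:8080/greeting", String.class);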
5. Kubernetes PropertySource
implementations
The most common approach to configuring your Spring Boot application is to create an
application.properties or application.yaml or an application-profile.properties or application-
profile.yaml file that contains key-value pairs that provide customization values to your
application or Spring Boot starters. You can override these properties by specifying system
properties or environment variables.
However, more advanced configuration is possible where you can use multiple ConfigMap instances.
The spring.cloud.kubernetes.config.sources list makes this possible. For example, you could define
the following ConfigMap instances:
spring:
application:
name: cloud-k8s-app
cloud:
kubernetes:
config:
name: default-name
namespace: default-namespace
sources:
# Spring Cloud Kubernetes looks up a ConfigMap named c1 in namespace default-namespace
- name: c1
# Spring Cloud Kubernetes looks up a ConfigMap named default-name in namespace n2
- namespace: n2
# Spring Cloud Kubernetes looks up a ConfigMap named c3 in namespace n3
- namespace: n3
name: c3
The single exception to the aforementioned flow is when the ConfigMap contains a single key that
indicates the file is a YAML or properties file. In that case, the name of the key does NOT have to be
application.yaml or application.properties (it can be anything) and the value of the property is
treated correctly. This feature facilitates the use case where the ConfigMap was created by using
something like the following:
Assume that we have a Spring Boot application named demo that uses the following properties to
read its thread pool configuration.
• pool.size.core
• pool.size.max
Individual properties work fine for most cases. However, sometimes, embedded yaml is more
convenient. In this case, we use a single property named application.yaml to embed our yaml, as
follows:
kind: ConfigMap
apiVersion: v1
metadata:
name: demo
data:
application.yaml: |-
pool:
size:
core: 1
max: 16
kind: ConfigMap
apiVersion: v1
metadata:
name: demo
data:
custom-name.yaml: |-
pool:
size:
core: 1
max: 16
You can also configure Spring Boot applications differently depending on active profiles that are
merged together when the ConfigMap is read. You can provide different property values for different
profiles by using an application.properties or application.yaml property, specifying profile-specific
values, each in their own document (indicated by the --- sequence), as follows:
kind: ConfigMap
apiVersion: v1
metadata:
name: demo
data:
application.yml: |-
greeting:
message: Say Hello to the World
farewell:
message: Say Goodbye
---
spring:
profiles: development
greeting:
message: Say Hello to the Developers
farewell:
message: Say Goodbye to the Developers
---
spring:
profiles: production
greeting:
message: Say Hello to the Ops
In the preceding case, the configuration loaded into your Spring Application with the development
profile is as follows:
greeting:
message: Say Hello to the Developers
farewell:
message: Say Goodbye to the Developers
greeting:
message: Say Hello to the Ops
farewell:
message: Say Goodbye
If both profiles are active, the property that appears last within the ConfigMap overwrites any
preceding values.
Another option is to create a different ConfigMap per profile, and Spring Boot automatically
fetches it based on the active profiles:
kind: ConfigMap
apiVersion: v1
metadata:
name: demo
data:
application.yml: |-
greeting:
message: Say Hello to the World
farewell:
message: Say Goodbye
kind: ConfigMap
apiVersion: v1
metadata:
name: demo-development
data:
application.yml: |-
spring:
profiles: development
greeting:
message: Say Hello to the Developers
farewell:
message: Say Goodbye to the Developers
kind: ConfigMap
apiVersion: v1
metadata:
name: demo-production
data:
application.yml: |-
spring:
profiles: production
greeting:
message: Say Hello to the Ops
farewell:
message: Say Goodbye
To tell Spring Boot which profile should be enabled at bootstrap, you can pass the
SPRING_PROFILES_ACTIVE environment variable. To do so, define it in the PodSpec at the container
specification of your Deployment resource file, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-name
labels:
app: deployment-name
spec:
replicas: 1
selector:
matchLabels:
app: deployment-name
template:
metadata:
labels:
app: deployment-name
spec:
containers:
- name: container-name
image: your-image
env:
- name: SPRING_PROFILES_ACTIVE
value: "development"
You should check the security configuration section. To access config maps from
inside a pod you need to have the correct Kubernetes service accounts, roles and
role bindings.
Another option for using ConfigMap instances is to mount them into the Pod by running the Spring
Cloud Kubernetes application and having Spring Cloud Kubernetes read them from the file system.
This behavior is controlled by the spring.cloud.kubernetes.config.paths property. You can use it in
addition to or instead of the mechanism described earlier. You can specify multiple (exact) file
paths in spring.cloud.kubernetes.config.paths by using the , delimiter.
You have to provide the full exact path to each property file, because directories
are not being recursively parsed.
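For example (the mount locations are hypothetical):
spring.cloud.kubernetes.config.paths=/etc/config/application.yaml,/etc/other/application.properties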
Table 5. Properties:
Name | Type | Default | Description
spring.cloud.kubernetes.config.enabled | Boolean | true | Enable ConfigMaps PropertySource
spring.cloud.kubernetes.config.name | String | ${spring.application.name} | Sets the name of the ConfigMap to look up
spring.cloud.kubernetes.config.namespace | String | Client namespace | Sets the Kubernetes namespace where to look up
spring.cloud.kubernetes.config.paths | List | null | Sets the paths where ConfigMap instances are mounted
spring.cloud.kubernetes.config.enableApi | Boolean | true | Enable or disable consuming ConfigMap instances through APIs
When enabled, the Fabric8SecretsPropertySource looks up Kubernetes Secrets from the
following sources:
1. Reading recursively from secrets mounts
2. Named after the application (see spring.application.name)
3. Matching some labels
Note:
By default, consuming Secrets through the API (points 2 and 3 above) is not enabled for security
reasons. The permission 'list' on secrets allows clients to inspect secrets values in the specified
namespace. Further, we recommend that containers share secrets through mounted volumes.
If you enable consuming Secrets through the API, we recommend that you limit access to Secrets by
using an authorization policy, such as RBAC. For more information about risks and best practices
when consuming Secrets through the API refer to this doc.
If the secrets are found, their data is made available to the application.
Assume that we have a spring boot application named demo that uses properties to read its database
configuration. We can create a Kubernetes secret by using the following command:
kubectl create secret generic db-secret --from-literal=username=user --from-literal=password=p455w0rd
The preceding command would create the following secret (which you can see by using kubectl get
secrets db-secret -o yaml):
apiVersion: v1
data:
password: cDQ1NXcwcmQ=
username: dXNlcg==
kind: Secret
metadata:
creationTimestamp: 2017-07-04T09:15:57Z
name: db-secret
namespace: default
resourceVersion: "357496"
selfLink: /api/v1/namespaces/default/secrets/db-secret
uid: 63c89263-6099-11e7-b3da-76d6186905a8
type: Opaque
Note that the data contains Base64-encoded versions of the literal provided by the create command.
Your application can then use this secret — for example, by exporting the secret’s value as
environment variables:
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${project.artifactId}
spec:
template:
spec:
containers:
- env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-secret
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-secret
key: password
-Dspring.cloud.kubernetes.secrets.paths=/etc/secrets/db-secret,/etc/secrets/postgresql
If you have all the secrets mapped to a common root, you can set them like:
-Dspring.cloud.kubernetes.secrets.paths=/etc/secrets
-Dspring.cloud.kubernetes.secrets.name=db-secret
As is the case with ConfigMap, more advanced configuration is also possible, where you can use
multiple Secret instances. The spring.cloud.kubernetes.secrets.sources list makes this possible. For
example, you could define the following Secret instances:
spring:
application:
name: cloud-k8s-app
cloud:
kubernetes:
secrets:
name: default-name
namespace: default-namespace
sources:
# Spring Cloud Kubernetes looks up a Secret named s1 in namespace default-namespace
- name: s1
# Spring Cloud Kubernetes looks up a Secret named default-name in namespace n2
- namespace: n2
# Spring Cloud Kubernetes looks up a Secret named s3 in namespace n3
- namespace: n3
name: s3
In the preceding example, if spring.cloud.kubernetes.secrets.namespace had not been set, the Secret
named s1 would be looked up in the namespace that the application runs in.
Table 6. Properties:
Notes:
• Access to secrets through the API may be restricted for security reasons. The preferred way is to
mount secrets to the Pod.
You can find an example of an application that uses secrets (though it has not been updated to use
the new spring-cloud-kubernetes project) at spring-boot-camel-config
Some applications may need to detect changes on external property sources and update their
internal status to reflect the new configuration. The reload feature of Spring Cloud Kubernetes is
able to trigger an application reload when a related ConfigMap or Secret changes.
• shutdown: the Spring ApplicationContext is shut down to activate a restart of the container. When
you use this level, make sure that the lifecycle of all non-daemon threads is bound to the
ApplicationContext and that a replication controller or replica set is configured to restart the
pod.
Assuming that the reload feature is enabled with default settings (refresh mode), the following
bean is refreshed when the config map changes:
@Configuration
@ConfigurationProperties(prefix = "bean")
public class MyConfig {

    private String message = "a message that can be changed live";

    // getter and setters
}
To see that changes effectively happen, you can create another bean that prints the message
periodically, as follows:
@Component
public class MyBean {
@Autowired
private MyConfig config;
@Scheduled(fixedDelay = 5000)
public void hello() {
System.out.println("The message is: " + config.getMessage());
}
}
You can change the message printed by the application by using a ConfigMap, as follows:
apiVersion: v1
kind: ConfigMap
metadata:
name: reload-example
data:
application.properties: |-
bean.message=Hello World!
Any change to the property named bean.message in the ConfigMap associated with the pod is reflected
in the output. More generally speaking, changes associated to properties prefixed with the value
defined by the prefix field of the @ConfigurationProperties annotation are detected and reflected in
the application. Associating a ConfigMap with a pod is explained earlier in this chapter.
The reload feature supports two operating modes:
• Event (default): Watches for changes in config maps or secrets by using the Kubernetes API (web
socket). Any event produces a re-check on the configuration and, in case of changes, a reload. The
view role on the service account is required in order to listen for config map changes. A higher
level role (such as edit) is required for secrets (by default, secrets are not monitored).
• Polling: Periodically re-creates the configuration from config maps and secrets to see if it has
changed. You can configure the polling period by using the spring.cloud.kubernetes.reload.period
property, which defaults to 15 seconds. It requires the same role as the monitored property source.
This means, for example, that using polling on file-mounted secret sources does not require
particular privileges.
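A minimal sketch of enabling the reload feature in polling mode (the mode and period values shown
are illustrative):
spring.cloud.kubernetes.reload.enabled=true
spring.cloud.kubernetes.reload.mode=polling
spring.cloud.kubernetes.reload.period=5000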
Table 7. Properties:
Notes:
• You should not use properties under spring.cloud.kubernetes.reload in config maps or secrets.
Changing such properties at runtime may lead to unexpected results.
• Deleting a property or the whole config map does not restore the original state of the beans
when you use the refresh level.
To disable the integration with Kubernetes you can set spring.cloud.kubernetes.enabled to false.
Please be aware that when spring-cloud-kubernetes-config is on the classpath,
spring.cloud.kubernetes.enabled should be set in bootstrap.{properties|yml} (or the profile specific
one) otherwise it should be in application.{properties|yml} (or the profile specific one). Also note
that these properties: spring.cloud.kubernetes.config.enabled and
spring.cloud.kubernetes.secrets.enabled only take effect when set in bootstrap.{properties|yml}
The Istio awareness module uses me.snowdrop:istio-client to interact with Istio APIs, letting us
discover traffic rules, circuit breakers, and so on, making it easy for our Spring Boot applications to
consume this data to dynamically configure themselves according to the environment.
The Kubernetes health indicator (which is part of the core module) exposes the following info:
• Pod name, IP address, namespace, service account, node name, and its IP address
• A flag that indicates whether the Spring Boot application is internal or external to Kubernetes
8. Info Contributor
Spring Cloud Kubernetes includes an InfoContributor which adds Pod information to Spring Boot's
/info Actuator endpoint.
9. Leader Election
The Spring Cloud Kubernetes leader election mechanism implements the leader election API of
Spring Integration using a Kubernetes ConfigMap.
Multiple application instances compete for leadership, but leadership will only be granted to one.
When granted leadership, a leader application receives an OnGrantedEvent application event with
leadership Context. Applications periodically attempt to gain leadership, with leadership granted to
the first caller. A leader will remain a leader until either it is removed from the cluster, or it yields
its leadership. When leadership removal occurs, the previous leader receives OnRevokedEvent
application event. After removal, any instances in the cluster may become the new leader,
including the old leader.
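For example, a minimal sketch (assuming Spring Integration's OnGrantedEvent and OnRevokedEvent
leader events are published in the application context) of reacting to leadership changes:
@Component
public class LeadershipListener {

    @EventListener
    public void onGranted(OnGrantedEvent event) {
        // This instance has been granted leadership.
        System.out.println("Leadership granted for role: " + event.getRole());
    }

    @EventListener
    public void onRevoked(OnRevokedEvent event) {
        // This instance has lost leadership.
        System.out.println("Leadership revoked for role: " + event.getRole());
    }
}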
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-kubernetes-fabric8-leader</artifactId>
</dependency>
To specify the name of the ConfigMap used for leader election, use the following property:
spring.cloud.kubernetes.leader.config-map-name=leader
Fabric8 Implementation
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes-fabric8-loadbalancer</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes-client-loadbalancer</artifactId>
</dependency>
To enable load balancing based on the Kubernetes Service name, use the following property. The
load balancer then tries to call the application by using an address such as
service-a.default.svc.cluster.local:
spring.cloud.kubernetes.loadbalancer.mode=SERVICE
To enable load balancing across all namespaces, use the following property. The property from the
spring-cloud-kubernetes-discovery module is respected:
spring.cloud.kubernetes.discovery.all-namespaces=true
11. Security Configurations Inside
Kubernetes
11.1. Namespace
Most of the components provided in this project need to know the namespace. For Kubernetes
(1.3+), the namespace is made available to the pod as part of the service account secret and is
automatically detected by the client. For earlier versions, it needs to be specified as an environment
variable to the pod. A quick way to do this is as follows:
env:
- name: "KUBERNETES_NAMESPACE"
valueFrom:
fieldRef:
fieldPath: "metadata.namespace"
Depending on the requirements, you’ll need get, list and watch permission on the following
resources:
For development purposes, you can add cluster-reader permissions to your default service
account. On a production system you’ll likely want to provide more granular permissions.
The following Role and RoleBinding are an example for namespaced permissions for the default
account:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: YOUR-NAME-SPACE
name: namespace-reader
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["configmaps", "pods", "services", "endpoints", "secrets"]
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: namespace-reader-binding
namespace: YOUR-NAME-SPACE
subjects:
- kind: ServiceAccount
name: default
apiGroup: ""
roleRef:
kind: Role
name: namespace-reader
apiGroup: ""
However, Spring Boot does not automatically update those changes unless you restart the
application. Spring Cloud provides the ability to refresh the application context without restarting
the application by either hitting the actuator endpoint /refresh or by publishing a
RefreshRemoteApplicationEvent using Spring Cloud Bus.
To achieve this configuration refresh of a Spring Cloud app running on Kubernetes, you can deploy
the Spring Cloud Kubernetes Configuration Watcher controller into your Kubernetes cluster.
Spring Cloud Kubernetes Configuration Watcher can send refresh notifications to applications in
two ways.
1. Over HTTP, in which case the application being notified must have the /refresh actuator endpoint
exposed and accessible from within the cluster
2. Using Spring Cloud Bus, in which case you will need a message broker deployed to your cluster
for the application to use.
The Service Account and associated Role Binding is important for Spring Cloud Kubernetes
Configuration to work properly. The controller needs access to read data about ConfigMaps, Pods,
Services, Endpoints and Secrets in the Kubernetes cluster.
The labels Spring Cloud Kubernetes Configuration Watcher looks for on ConfigMaps and Secrets
can be changed by setting spring.cloud.kubernetes.configuration.watcher.configLabel and
spring.cloud.kubernetes.configuration.watcher.secretLabel respectively.
If a change is made to a ConfigMap or Secret with valid labels then Spring Cloud Kubernetes
Configuration Watcher will take the name of the ConfigMap or Secret and send a notification to the
application with that name.
13.3. HTTP Implementation
The HTTP implementation is used by default. When it is used and a change to a ConfigMap or Secret
occurs, Spring Cloud Kubernetes Configuration Watcher uses the Spring Cloud Kubernetes Discovery
Client to fetch all instances of the application that match the name of the ConfigMap or Secret and
sends an HTTP POST request to the application's actuator /refresh endpoint. By default, it sends the
POST request to /actuator/refresh using the port registered in the discovery client.
If the application is using a non-default actuator path and/or using a different port for the
management endpoints, the Kubernetes service for the application can add an annotation called
boot.spring.io/actuator and set its value to the path and port used by the application. For example:
apiVersion: v1
kind: Service
metadata:
labels:
app: config-map-demo
name: config-map-demo
annotations:
boot.spring.io/actuator: http://:9090/myactuator/home
spec:
ports:
- name: http
port: 8080
targetPort: 8080
selector:
app: config-map-demo
Another way you can choose to configure the actuator path and/or management port is by setting
spring.cloud.kubernetes.configuration.watcher.actuatorPath and
spring.cloud.kubernetes.configuration.watcher.actuatorPort.
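For example (values matching the annotation shown above):
spring.cloud.kubernetes.configuration.watcher.actuatorPath=/myactuator/home
spring.cloud.kubernetes.configuration.watcher.actuatorPort=9090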
When RabbitMQ is used as the message broker for Spring Cloud Bus, the connection is configured
with properties such as the following:
spring:
rabbitmq:
username: user
password: password
host: rabbitmq
When Kafka is used:
spring:
kafka:
producer:
bootstrap-servers: localhost:9092
14. Examples
Spring Cloud Kubernetes tries to make it transparent for your applications to consume Kubernetes
Native Services by following the Spring Cloud interfaces.
The following projects highlight the usage of these dependencies and demonstrate how you can use
these libraries from any Spring Boot application:
• Spring Cloud Kubernetes Examples: the ones located inside this repository.
◦ Minion
◦ Boss
• Spring Cloud Gateway with Spring Cloud Kubernetes Discovery and Config
• Spring Boot Admin with Spring Cloud Kubernetes Discovery and Config
15. Other Resources
This section lists other resources, such as presentations (slides) and videos about Spring Cloud
Kubernetes.
Please feel free to submit other resources through pull requests to this repository.
17. Building
17.1. Basic Compile and Test
To build the source you will need to install JDK 1.8.
Spring Cloud uses Maven for most build-related activities, and you should be able to get off the
ground quite quickly by cloning the project you are interested in and typing
$ ./mvnw install
You can also install Maven (>=3.3.3) yourself and run the mvn command in place of
./mvnw in the examples below. If you do that you also might need to add -P spring
if your local Maven settings do not contain repository declarations for spring pre-
release artifacts.
Be aware that you might need to increase the amount of memory available to
Maven by setting a MAVEN_OPTS environment variable with a value like -Xmx512m
-XX:MaxPermSize=128m. We try to cover this in the .mvn configuration, so if you find
you have to do it to make a build succeed, please raise a ticket to get the settings
added to source control.
For hints on how to build the project look in .travis.yml if there is one. There should be a "script"
and maybe "install" command. Also look at the "services" section to see if any services need to be
running locally (e.g. mongo or rabbit). Ignore the git-related bits that you might find in
"before_install" since they’re related to setting git credentials and you already have those.
The projects that require middleware generally include a docker-compose.yml, so consider using
Docker Compose to run the middleware servers in Docker containers. See the README in the scripts
demo repository for specific instructions about the common cases of mongo, rabbit and redis.
If all else fails, build with the command from .travis.yml (usually ./mvnw install).
17.2. Documentation
The spring-cloud-build module has a "docs" profile, and if you switch that on it will try to build
asciidoc sources from src/main/asciidoc. As part of that process it will look for a README.adoc and
process it by loading all the includes, but not parsing or rendering it, just copying it to
${main.basedir} (defaults to the root of the project). If there are any changes in the README it
will then show up after a Maven build as a modified file in the correct place. Just commit it and
push the change.
Spring Cloud projects require the 'spring' Maven profile to be activated to resolve the spring
milestone and snapshot repositories. Use your preferred IDE to set this profile to be active, or you
may experience build errors.
We recommend the m2eclipse eclipse plugin when working with eclipse. If you don’t already have
m2eclipse installed it is available from the "eclipse marketplace".
Older versions of m2e do not support Maven 3.3, so once the projects are imported
into Eclipse you will also need to tell m2eclipse to use the right profile for the
projects. If you see many different errors related to the POMs in the projects, check
that you have an up to date installation. If you can’t upgrade m2e, add the "spring"
profile to your settings.xml. Alternatively you can copy the repository settings
from the "spring" profile of the parent pom into your settings.xml.
If you prefer not to use m2eclipse you can generate eclipse project metadata using the following
command:
$ ./mvnw eclipse:eclipse
The generated eclipse projects can be imported by selecting import existing projects from the file
menu.
18. Contributing
Spring Cloud is released under the non-restrictive Apache 2.0 license, and follows a very standard
Github development process, using Github tracker for issues and merging pull requests into master.
If you want to contribute even something trivial please do not hesitate, but follow the guidelines
below.
• Use the Spring Framework code format conventions. If you use Eclipse you can import
formatter settings using the eclipse-code-formatter.xml file from the Spring Cloud Build project.
If using IntelliJ, you can use the Eclipse Code Formatter Plugin to import the same file.
• Make sure all new .java files have a simple Javadoc class comment with at least an @author
tag identifying you, and preferably at least a paragraph on what the class is for.
• Add the ASF license header comment to all new .java files (copy from existing files in the
project)
• Add yourself as an @author to the .java files that you modify substantially (more than cosmetic
changes).
• Add some Javadocs and, if you change the namespace, some XSD doc elements.
• If no-one else is using your branch, please rebase it against the current master (or other target
branch in the main project).
• When writing a commit message please follow these conventions, if you are fixing an existing
issue please add Fixes gh-XXXX at the end of the commit message (where XXXX is the issue
number).
18.4. Checkstyle
Spring Cloud Build comes with a set of checkstyle rules. You can find them in the spring-cloud-
build-tools module. The most notable files under the module are:
spring-cloud-build-tools/
└── src
    ├── checkstyle
    │   └── checkstyle-suppressions.xml ③
    └── main
        └── resources
            ├── checkstyle-header.txt ②
            └── checkstyle.xml ①

① Default Checkstyle rules
② File header setup
③ Default suppression rules
Checkstyle rules are disabled by default. To add checkstyle to your project just define the
following properties and plugins.
pom.xml
<properties>
    <maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError> ①
    <maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation> ②
    <maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory> ③
</properties>

<build>
    <plugins>
        <plugin> ④
            <groupId>io.spring.javaformat</groupId>
            <artifactId>spring-javaformat-maven-plugin</artifactId>
        </plugin>
        <plugin> ⑤
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-checkstyle-plugin</artifactId>
        </plugin>
    </plugins>
</build>

<reporting>
    <plugins>
        <plugin> ⑤
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-checkstyle-plugin</artifactId>
        </plugin>
    </plugins>
</reporting>
① Fails the build upon Checkstyle errors
② Fails the build upon Checkstyle violations
③ Checkstyle also analyzes the test sources
④ Add the Spring Java Format plugin that will reformat your code to pass most of the Checkstyle
formatting rules
⑤ Add the Checkstyle plugin to your build and reporting phases
If you need to suppress some rules (e.g. line length needs to be longer), then it’s enough for you to
define a file under ${project.root}/src/checkstyle/checkstyle-suppressions.xml with your
suppressions. Example:
projectRoot/src/checkstyle/checkstyle-suppressions.xml
<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC
        "-//Puppy Crawl//DTD Suppressions 1.1//EN"
        "https://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
<suppressions>
    <suppress files=".*ConfigServerApplication\.java" checks="HideUtilityClassConstructor"/>
    <suppress files=".*ConfigClientWatch\.java" checks="LineLengthCheck"/>
</suppressions>
To use the shared editor settings and enable project-specific formatting, run the following
commands in the project root:

$ curl https://raw.githubusercontent.com/spring-cloud/spring-cloud-build/master/.editorconfig -o .editorconfig
$ touch .springformat
In order to setup Intellij you should import our coding conventions, inspection profiles and set up
the checkstyle plugin. The following files can be found in the Spring Cloud Build project.
spring-cloud-build-tools/
└── src
    ├── checkstyle
    │   └── checkstyle-suppressions.xml ③
    └── main
        └── resources
            ├── checkstyle-header.txt ②
            ├── checkstyle.xml ①
            └── intellij
                ├── Intellij_Project_Defaults.xml ④
                └── Intellij_Spring_Boot_Java_Conventions.xml ⑤
④ Project defaults for Intellij that apply most of the Checkstyle rules
⑤ Project style conventions for Intellij that apply most of the Checkstyle rules
Figure 5. Code style
Go to File → Settings → Editor → Code style. There, click on the icon next to the Scheme section.
Then click on the Import Scheme value and pick the Intellij IDEA code style XML option. Import
the spring-cloud-build-tools/src/main/resources/intellij/Intellij_Spring_Boot_Java_Conventions.xml file.
Figure 6. Inspection profiles
Go to File → Settings → Editor → Inspections. There, click on the icon next to the Profile section.
Then click on Import Profile and import the
spring-cloud-build-tools/src/main/resources/intellij/Intellij_Project_Defaults.xml file.
Checkstyle
To have Intellij work with Checkstyle, you have to install the Checkstyle plugin. It is advisable to also
install the Assertions2Assertj plugin to automatically convert the JUnit assertions.

Go to File → Settings → Other settings → Checkstyle. There, click on the + icon in the Configuration
file section and define where the Checkstyle rules should be picked from. You can pick the rules
from a cloned Spring Cloud Build repository, or you can point to the Spring Cloud Build's GitHub
repository (e.g. for the checkstyle.xml:
raw.githubusercontent.com/spring-cloud/spring-cloud-build/master/spring-cloud-build-tools/src/main/resources/checkstyle.xml).
You also need to provide the variables that point to the Checkstyle header and suppressions files.
Remember to set the Scan Scope to All sources since we apply checkstyle rules for
production and test sources.
This project provides Netflix OSS integrations for Spring Boot apps through autoconfiguration and
binding to the Spring Environment and other Spring programming model idioms. With a few
simple annotations you can quickly enable and configure the common patterns inside your
application and build large distributed systems with battle-tested Netflix components. The patterns
provided include Service Discovery (Eureka).
1. Service Discovery: Eureka Clients
Service Discovery is one of the key tenets of a microservice-based architecture. Trying to hand-
configure each client or some form of convention can be difficult to do and can be brittle. Eureka is
the Netflix Service Discovery Server and Client. The server can be configured and deployed to be
highly available, with each server replicating state about the registered services to the others.
@SpringBootApplication
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello world";
    }

}
Note that the preceding example shows a normal Spring Boot application. By having spring-cloud-
starter-netflix-eureka-client on the classpath, your application automatically registers with the
Eureka Server. Configuration is required to locate the Eureka server, as shown in the following
example:
application.yml
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
In the preceding example, defaultZone is a magic string fallback value that provides the service URL
for any client that does not express a preference (in other words, it is a useful default).
The defaultZone property is case sensitive and requires camel case because the
serviceUrl property is a Map<String, String>. Therefore, the defaultZone property
does not follow the normal Spring Boot snake-case convention of default-zone.
The default application name (that is, the service ID), virtual host, and non-secure port (taken from
the Environment) are ${spring.application.name}, ${spring.application.name} and ${server.port},
respectively.
To disable the Eureka Discovery Client, you can set eureka.client.enabled to false. Eureka
Discovery Client will also be disabled when spring.cloud.discovery.enabled is set to false.
When the Eureka server requires a client-side certificate for authentication, the client-side certificate
and trust store can be configured via properties, as shown in the following example:
application.yml
eureka:
  client:
    tls:
      enabled: true
      key-store: <path-of-key-store>
      key-store-type: PKCS12
      key-store-password: <key-store-password>
      key-password: <key-password>
      trust-store: <path-of-trust-store>
      trust-store-type: PKCS12
      trust-store-password: <trust-store-password>
The eureka.client.tls.enabled property needs to be true to enable Eureka client-side TLS. When
eureka.client.tls.trust-store is omitted, a JVM default trust store is used. The default value for
eureka.client.tls.key-store-type and eureka.client.tls.trust-store-type is PKCS12. When
password properties are omitted, an empty password is assumed.
If you want to customize the RestTemplate used by the Eureka HTTP Client you may want to create
a bean of EurekaClientHttpRequestFactorySupplier and provide your own logic for generating a
ClientHttpRequestFactory instance.
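As a minimal sketch (assuming the supplier interface lives in org.springframework.cloud.netflix.eureka.http and exposes a single get(SSLContext, HostnameVerifier) method, as in recent Spring Cloud Netflix versions; the factory returned and its timeout are illustrative):

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLContext;

import org.springframework.cloud.netflix.eureka.http.EurekaClientHttpRequestFactorySupplier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.client.SimpleClientHttpRequestFactory;

@Configuration
public class EurekaHttpClientConfiguration {

    // Return whatever ClientHttpRequestFactory your infrastructure requires;
    // sslContext and hostnameVerifier may be null when TLS is not configured.
    @Bean
    public EurekaClientHttpRequestFactorySupplier eurekaClientHttpRequestFactorySupplier() {
        return (SSLContext sslContext, HostnameVerifier hostnameVerifier) -> {
            SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
            factory.setConnectTimeout(5000); // illustrative tuning
            return factory;
        };
    }

}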
If the application uses a non-default context path or servlet path for its endpoints, the status page
and health check URL paths need to be adjusted accordingly, as shown in the following example:

application.yml

eureka:
  instance:
    statusPageUrlPath: ${server.servletPath}/info
    healthCheckUrlPath: ${server.servletPath}/health
These links show up in the metadata that is consumed by clients and are used in some scenarios to
decide whether to send requests to your application, so it is helpful if they are accurate.
In Dalston it was also required to set the status and health check URLs when
changing that management context path. This requirement was removed
beginning in Edgware.
1.5. Registering a Secure Application
If your app wants to be contacted over HTTPS, you can set two flags in the
EurekaInstanceConfigBean:
• eureka.instance.[nonSecurePortEnabled]=[false]
• eureka.instance.[securePortEnabled]=[true]
Doing so makes Eureka publish instance information that shows an explicit preference for secure
communication. The Spring Cloud DiscoveryClient always returns a URI starting with https for a
service configured this way. Similarly, when a service is configured this way, the Eureka (native)
instance information has a secure health check URL.
Because of the way Eureka works internally, it still publishes a non-secure URL for the status and
home pages unless you also override those explicitly. You can use placeholders to configure the
eureka instance URLs, as shown in the following example:
application.yml
eureka:
instance:
statusPageUrl: https://${eureka.hostname}/info
healthCheckUrl: https://${eureka.hostname}/health
homePageUrl: https://${eureka.hostname}/
(Note that ${eureka.hostname} is a native placeholder only available in later versions of Eureka. You
could achieve the same thing with Spring placeholders as well — for example, by using
${eureka.instance.hostName}.)
If your application runs behind a proxy, and the SSL termination is in the proxy
(for example, if you run in Cloud Foundry or another platform as a service), then
you need to ensure that the proxy “forwarded” headers are intercepted and
handled by the application. If the Tomcat container embedded in a Spring Boot
application has explicit configuration for the 'X-Forwarded-*' headers, this
happens automatically. If the links rendered by your app to itself are wrong (the
wrong host, port, or protocol), that is a sign that you got this configuration wrong.
By default, Eureka uses the client heartbeat to determine if a client is up, and after successful
registration it always announces the application as being in the 'UP' state. You can alter this
behavior by enabling Eureka health checks, which propagate the application's health status to
Eureka:

eureka:
  client:
    healthcheck:
      enabled: true

If you require more control over the health checks, consider implementing your own
com.netflix.appinfo.HealthCheckHandler.
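A minimal sketch of such a handler (the class name is illustrative; the getStatus(InstanceStatus) method is the single method the Netflix interface defines, and a real implementation would consult application-specific checks rather than always reporting UP):

import com.netflix.appinfo.HealthCheckHandler;
import com.netflix.appinfo.InstanceInfo;

import org.springframework.stereotype.Component;

@Component
public class CustomHealthCheckHandler implements HealthCheckHandler {

    @Override
    public InstanceInfo.InstanceStatus getStatus(InstanceInfo.InstanceStatus currentStatus) {
        // Decide which status to report to Eureka based on your own checks.
        return InstanceInfo.InstanceStatus.UP;
    }

}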
Cloud Foundry has a global router so that all instances of the same app have the same hostname
(other PaaS solutions with a similar architecture have the same arrangement). This is not
necessarily a barrier to using Eureka. However, if you use the router (recommended or even
mandatory, depending on the way your platform was set up), you need to explicitly set the
hostname and port numbers (secure or non-secure) so that they use the router. You might also want
to use instance metadata so that you can distinguish between the instances on the client (for
example, in a custom load balancer). By default, the eureka.instance.instanceId is
vcap.application.instance_id, as shown in the following example:
application.yml
eureka:
  instance:
    hostname: ${vcap.application.uris[0]}
    nonSecurePort: 80
Depending on the way the security rules are set up in your Cloud Foundry instance, you might be
able to register and use the IP address of the host VM for direct service-to-service calls. This feature
is not yet available on Pivotal Web Services (PWS).
1.7.2. Using Eureka on AWS
If the application is planned to be deployed to an AWS cloud, the Eureka instance must be
configured to be AWS-aware. You can do so by customizing the EurekaInstanceConfigBean as
follows:
@Bean
@Profile("!default")
public EurekaInstanceConfigBean eurekaInstanceConfig(InetUtils inetUtils) {
    EurekaInstanceConfigBean bean = new EurekaInstanceConfigBean(inetUtils);
    AmazonInfo info = AmazonInfo.Builder.newBuilder().autoBuild("eureka");
    bean.setDataCenterInfo(info);
    return bean;
}
A vanilla Netflix Eureka instance is registered with an ID that is equal to its host name (that is, there
is only one service per host). Spring Cloud Eureka provides a sensible default, which is defined as
follows:
${spring.cloud.client.hostname}:${spring.application.name}:${spring.application.instance_id:${server.port}}
An example is myhost:myappname:8080.
By using Spring Cloud, you can override this value by providing a unique identifier in
eureka.instance.instanceId, as shown in the following example:
application.yml
eureka:
  instance:
    instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}
With the metadata shown in the preceding example and multiple service instances deployed on
localhost, the random value is inserted there to make the instance unique. In Cloud Foundry, the
vcap.application.instance_id is populated automatically in a Spring Boot application, so the
random value is not needed.
By default, EurekaClient uses Spring’s RestTemplate for HTTP communication. If you wish to use
Jersey instead, you need to add the Jersey dependencies to your classpath. The following example
shows the dependencies you need to add:
<dependency>
    <groupId>com.sun.jersey</groupId>
    <artifactId>jersey-client</artifactId>
</dependency>
<dependency>
    <groupId>com.sun.jersey</groupId>
    <artifactId>jersey-core</artifactId>
</dependency>
<dependency>
    <groupId>com.sun.jersey.contribs</groupId>
    <artifactId>jersey-apache-client4</artifactId>
</dependency>
1.11. Zones
If you have deployed Eureka clients to multiple zones, you may prefer that those clients use
services within the same zone before trying services in another zone. To set that up, you need to
configure your Eureka clients correctly.
First, you need to make sure you have Eureka servers deployed to each zone and that they are
peers of each other. See the section on zones and regions for more information.
Next, you need to tell Eureka which zone your service is in. You can do so by using the metadataMap
property. For example, if service 1 is deployed to both zone 1 and zone 2, you need to set the
following Eureka properties in service 1:
Service 1 in Zone 1
eureka.instance.metadataMap.zone = zone1
eureka.client.preferSameZoneEureka = true
Service 1 in Zone 2
eureka.instance.metadataMap.zone = zone2
eureka.client.preferSameZoneEureka = true
1.12. Refreshing Eureka Clients
By default, the EurekaClient bean is refreshable, meaning the Eureka client properties can be
changed and refreshed. When a refresh occurs, clients are unregistered from the Eureka server,
and there might be a brief moment when all instances of a given service are unavailable. One way
to prevent this is to disable the ability to refresh Eureka clients. To do so, set
eureka.client.refresh.enable=false.
If there is no other source of zone data, then a guess is made, based on the client configuration (as
opposed to the instance configuration). We take eureka.client.availabilityZones, which is a map
from region name to a list of zones, and pull out the first zone for the instance’s own region (that is,
the eureka.client.region, which defaults to "us-east-1", for compatibility with native Netflix).
If your project already uses Thymeleaf as its template engine, the Freemarker
templates of the Eureka server may not be loaded correctly. In this case it is
necessary to configure the template loader manually:
application.yml
spring:
  freemarker:
    template-loader-path: classpath:/templates/
    prefer-file-system-access: false
2.2. How to Run a Eureka Server
The following example shows a minimal Eureka server:
@SpringBootApplication
@EnableEurekaServer
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}
The server has a home page with a UI and HTTP API endpoints for the normal Eureka functionality
under /eureka/*.
The following links have some Eureka background reading: flux capacitor and google group
discussion.
Due to Gradle’s dependency resolution rules and the lack of a parent bom feature,
depending on spring-cloud-starter-netflix-eureka-server can cause failures on
application startup. To remedy this issue, add the Spring Boot Gradle plugin and
import the Spring cloud starter parent bom as follows:
build.gradle
buildscript {
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:{spring-boot-docs-version}")
    }
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:{spring-cloud-version}"
    }
}
By default, every Eureka server is also a Eureka client and requires (at least one) service URL to
locate a peer. If you do not provide it, the service runs and works, but it fills your logs with a lot of
noise about not being able to register with the peer.
server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
Notice that the serviceUrl is pointing to the same host as the local instance.
---
spring:
  profiles: peer1
eureka:
  instance:
    hostname: peer1
  client:
    serviceUrl:
      defaultZone: https://peer2/eureka/
---
spring:
  profiles: peer2
eureka:
  instance:
    hostname: peer2
  client:
    serviceUrl:
      defaultZone: https://peer1/eureka/
In the preceding example, we have a YAML file that can be used to run the same server on two
hosts (peer1 and peer2) by running it in different Spring profiles. You could use this configuration to
test the peer awareness on a single host (there is not much value in doing that in production) by
manipulating /etc/hosts to resolve the host names. In fact, the eureka.instance.hostname is not
needed if you are running on a machine that knows its own hostname (by default, it is looked up by
using java.net.InetAddress).
You can add multiple peers to a system and, as long as they are all connected to each other by at
least one edge, they synchronize the registrations amongst themselves. If the peers are physically
separated (inside a data center or between multiple data centers), then the system can, in principle,
survive “split-brain” type failures.
application.yml (Three Peer Aware Eureka Servers)
eureka:
  client:
    serviceUrl:
      defaultZone: https://peer1/eureka/,http://peer2/eureka/,http://peer3/eureka/
---
spring:
  profiles: peer1
eureka:
  instance:
    hostname: peer1
---
spring:
  profiles: peer2
eureka:
  instance:
    hostname: peer2
---
spring:
  profiles: peer3
eureka:
  instance:
    hostname: peer3
When you secure your Eureka server with Spring Security, Eureka clients generally do not possess
a valid cross-site request forgery (CSRF) token, so you need to disable this requirement for the
/eureka/** endpoints:

@EnableWebSecurity
class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().ignoringAntMatchers("/eureka/**");
        super.configure(http);
    }

}
A demo Eureka Server can be found in the Spring Cloud Samples repo.
The JAXB modules, which the Eureka server depends upon, were removed in JDK 11. If you intend
to use JDK 11 when running a Eureka server, you must include the following dependency in your
POM or Gradle file:

<dependency>
    <groupId>org.glassfish.jaxb</groupId>
    <artifactId>jaxb-runtime</artifactId>
</dependency>
3. Configuration properties
To see the list of all Spring Cloud Netflix related configuration properties please check the Appendix
page.
This project provides OpenFeign integrations for Spring Boot apps through autoconfiguration and
binding to the Spring Environment and other Spring programming model idioms.
@SpringBootApplication
@EnableFeignClients
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}

StoreClient.java

@FeignClient("stores")
public interface StoreClient {

    @RequestMapping(method = RequestMethod.GET, value = "/stores")
    List<Store> getStores();

}
In the @FeignClient annotation the String value ("stores" above) is an arbitrary client name, which
is used to create a Spring Cloud LoadBalancer client. You can also specify a URL using the url
attribute (absolute value or just a hostname). The name of the bean in the application context is the
fully qualified name of the interface. To specify your own alias value you can use the qualifiers
value of the @FeignClient annotation.
The load-balancer client above will want to discover the physical addresses for the "stores" service.
If your application is a Eureka client then it will resolve the service in the Eureka service registry. If
you don’t want to use Eureka, you can configure a list of servers in your external configuration
using SimpleDiscoveryClient.
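A configuration sketch for the SimpleDiscoveryClient (the property prefix is spring.cloud.discovery.client.simple.instances; the host names and ports here are illustrative):

spring:
  cloud:
    discovery:
      client:
        simple:
          instances:
            stores:
              - uri: http://store1:8080
              - uri: http://store2:8080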
Spring Cloud OpenFeign supports all the features available for the blocking mode of Spring Cloud
LoadBalancer. You can read more about them in the project documentation.
1.2. Overriding Feign Defaults
A central concept in Spring Cloud’s Feign support is that of the named client. Each feign client is
part of an ensemble of components that work together to contact a remote server on demand, and
the ensemble has a name that you give it as an application developer using the @FeignClient
annotation. Spring Cloud creates a new ensemble as an ApplicationContext on demand for each
named client using FeignClientsConfiguration. This contains (amongst other things) a
feign.Decoder, a feign.Encoder, and a feign.Contract. It is possible to override the name of that
ensemble by using the contextId attribute of the @FeignClient annotation.
Spring Cloud lets you take full control of the feign client by declaring additional configuration (on
top of the FeignClientsConfiguration) using @FeignClient. Example:
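A minimal sketch of such a declaration (with FooConfiguration being the additional configuration class referenced below):

@FeignClient(name = "stores", configuration = FooConfiguration.class)
public interface StoreClient {
    //..
}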
In this case the client is composed from the components already in FeignClientsConfiguration
together with any in FooConfiguration (where the latter will override the former).
Previously, using the url attribute did not require the name attribute. Using name is
now required.
Spring Cloud OpenFeign provides the following beans by default for feign (BeanType beanName:
ClassName):
• Decoder feignDecoder: ResponseEntityDecoder (which wraps a SpringDecoder)
The OkHttpClient, ApacheHttpClient, and ApacheHC5 Feign clients can be used by setting
feign.okhttp.enabled, feign.httpclient.enabled, or feign.httpclient.hc5.enabled to true,
respectively, and having them on the classpath. You can customize the HTTP client used by
providing a bean of org.apache.http.impl.client.CloseableHttpClient when using Apache,
okhttp3.OkHttpClient when using OK HTTP, or
org.apache.hc.client5.http.impl.classic.CloseableHttpClient when using Apache HC5.
Spring Cloud OpenFeign does not provide the following beans by default for feign, but still looks up
beans of these types from the application context to create the feign client:
• Logger.Level
• Retryer
• ErrorDecoder
• Request.Options
• Collection<RequestInterceptor>
• SetterFactory
• QueryMapEncoder
A bean of Retryer.NEVER_RETRY with the type Retryer is created by default, which will disable
retrying. Notice this retrying behavior is different from the Feign default one, where it will
automatically retry IOExceptions, treating them as transient network related exceptions, and any
RetryableException thrown from an ErrorDecoder.
Creating a bean of one of those types and placing it in a @FeignClient configuration (such as
FooConfiguration above) allows you to override each one of the beans described. Example:
@Configuration
public class FooConfiguration {

    @Bean
    public Contract feignContract() {
        return new feign.Contract.Default();
    }

    @Bean
    public BasicAuthRequestInterceptor basicAuthRequestInterceptor() {
        return new BasicAuthRequestInterceptor("user", "password");
    }

}
application.yml
feign:
  client:
    config:
      feignName:
        connectTimeout: 5000
        readTimeout: 5000
        loggerLevel: full
        errorDecoder: com.example.SimpleErrorDecoder
        retryer: com.example.SimpleRetryer
        defaultQueryParameters:
          query: queryValue
        defaultRequestHeaders:
          header: headerValue
        requestInterceptors:
          - com.example.FooRequestInterceptor
          - com.example.BarRequestInterceptor
        decode404: false
        encoder: com.example.SimpleEncoder
        decoder: com.example.SimpleDecoder
        contract: com.example.SimpleContract
        capabilities:
          - com.example.FooCapability
          - com.example.BarCapability
        metrics.enabled: false
application.yml
feign:
  client:
    config:
      default:
        connectTimeout: 5000
        readTimeout: 5000
        loggerLevel: basic
If we create both a @Configuration bean and configuration properties, the configuration properties
win: they override the @Configuration values. If you want to change the priority to @Configuration,
you can set feign.client.default-to-properties to false.

If you want to create multiple Feign clients with the same name or URL, so that they point to the
same server but each with a different custom configuration, you have to use the contextId attribute
of @FeignClient in order to avoid a name collision of these configuration beans.
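A sketch of two clients sharing a name but with distinct contextId values (the interface and configuration class names here are illustrative):

@FeignClient(contextId = "fooClient", name = "stores", configuration = FooConfiguration.class)
public interface FooClient {
    //..
}

@FeignClient(contextId = "barClient", name = "stores", configuration = BarConfiguration.class)
public interface BarClient {
    //..
}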
It is also possible to configure FeignClient not to inherit beans from the parent context. You can do
this by overriding the inheritParentConfiguration() in a FeignClientConfigurer bean to return
false:
@Configuration
public class CustomConfiguration {

    @Bean
    public FeignClientConfigurer feignClientConfigurer() {
        return new FeignClientConfigurer() {

            @Override
            public boolean inheritParentConfiguration() {
                return false;
            }

        };
    }

}
By default, Feign clients do not encode slash (/) characters. You can change this
behaviour by setting the value of feign.client.decodeSlash to false.
In the SpringEncoder that we provide, we set null charset for binary content types and UTF-8 for all
the other ones.
You can modify this behaviour to derive the charset from the Content-Type header charset instead
by setting the value of feign.encoder.charset-from-content-type to true.
• connectTimeout prevents blocking the caller due to the long server processing time.
• readTimeout is applied from the time of connection establishment and is triggered when
returning the response takes too long.
In case the server is not running or available, a packet results in connection refused.
The communication then ends either with an error message or in a fallback. This can
happen before the connectTimeout if it is set very low. The time needed to perform a
DNS lookup and to receive such a packet causes a significant part of this delay and
varies with the remote host involved.
@Import(FeignClientsConfiguration.class)
class FooController {

    private FooClient fooClient;

    private FooClient adminClient;

    @Autowired
    public FooController(Client client, Encoder encoder, Decoder decoder, Contract contract,
            MicrometerCapability micrometerCapability) {
        this.fooClient = Feign.builder().client(client)
                .encoder(encoder)
                .decoder(decoder)
                .contract(contract)
                .addCapability(micrometerCapability)
                .requestInterceptor(new BasicAuthRequestInterceptor("user", "user"))
                .target(FooClient.class, "https://PROD-SVC");

        this.adminClient = Feign.builder().client(client)
                .encoder(encoder)
                .decoder(decoder)
                .contract(contract)
                .addCapability(micrometerCapability)
                .requestInterceptor(new BasicAuthRequestInterceptor("admin", "admin"))
                .target(FooClient.class, "https://PROD-SVC");
    }

}
PROD-SVC is the name of the service the Clients will be making requests to.
The Feign Contract object defines what annotations and values are valid on
interfaces. The autowired Contract bean provides support for SpringMVC
annotations, instead of the default Feign native annotations.
You can also use the Feign.Builder to configure FeignClient not to inherit beans from the parent
context. You can do this by calling inheritParentContext(false) on the Builder.
@Configuration
public class FooConfiguration {

    @Bean
    @Scope("prototype")
    public Feign.Builder feignBuilder() {
        return Feign.builder();
    }

}
The circuit breaker name follows the pattern <feignClientName>_<calledMethod>. When calling a
@FeignClient with name foo where the called interface method is bar, the circuit breaker name
will be foo_bar.
To enable fallbacks for a given @FeignClient, set the fallback attribute to the class name that
implements the fallback. The fallback implementation also needs to be declared as a Spring bean:

@FeignClient(name = "test", url = "http://localhost:${server.port}/", fallback = Fallback.class)
protected interface TestClient {

    @RequestMapping(method = RequestMethod.GET, value = "/hello")
    Hello getHello();

    @RequestMapping(method = RequestMethod.GET, value = "/hellonotfound")
    String getException();

}

@Component
static class Fallback implements TestClient {

    @Override
    public Hello getHello() {
        throw new NoFallbackAvailableException("Boom!", new RuntimeException());
    }

    @Override
    public String getException() {
        return "Fixed response";
    }

}
If one needs access to the cause that made the fallback trigger, one can use the fallbackFactory
attribute inside @FeignClient.
@FeignClient(name = "testClientWithFactory", url = "http://localhost:${server.port}/",
        fallbackFactory = TestFallbackFactory.class)
protected interface TestClientWithFactory {

    @RequestMapping(method = RequestMethod.GET, value = "/hello")
    Hello getHello();

    @RequestMapping(method = RequestMethod.GET, value = "/hellonotfound")
    String getException();

}

@Component
static class TestFallbackFactory implements FallbackFactory<FallbackWithFactory> {

    @Override
    public FallbackWithFactory create(Throwable cause) {
        return new FallbackWithFactory();
    }

}
static class FallbackWithFactory implements TestClientWithFactory {

    @Override
    public Hello getHello() {
        throw new NoFallbackAvailableException("Boom!", new RuntimeException());
    }

    @Override
    public String getException() {
        return "Fixed response";
    }

}
UserService.java

public interface UserService {

    @RequestMapping(method = RequestMethod.GET, value = "/users/{id}")
    User getUser(@PathVariable("id") long id);

}

UserResource.java

@RestController
public class UserResource implements UserService {

}

UserClient.java

package project.user;

@FeignClient("users")
public interface UserClient extends UserService {

}
Feign request compression gives you settings similar to what you may set for your web server:
feign.compression.request.enabled=true
feign.compression.request.mime-types=text/xml,application/xml,application/json
feign.compression.request.min-request-size=2048
These properties allow you to be selective about the compressed media types and minimum request
threshold length.
For HTTP clients other than OkHttpClient, a default gzip decoder can be enabled to decode gzip
responses in UTF-8 encoding:
feign.compression.response.enabled=true
feign.compression.response.useGzipDecoder=true
application.yml
logging.level.project.user.UserClient: DEBUG
The Logger.Level object that you may configure per client tells Feign how much to log. Choices are:
• NONE, No logging (DEFAULT).
• BASIC, Log only the request method and URL and the response status code and execution time.
• HEADERS, Log the basic information along with request and response headers.
• FULL, Log the headers, body, and metadata for both requests and responses.
@Configuration
public class FooConfiguration {

    @Bean
    Logger.Level feignLoggerLevel() {
        return Logger.Level.FULL;
    }

}
1.11. Feign Capability support
The Feign capabilities expose core Feign components so that these components can be modified. For
example, the capabilities can take the Client, decorate it, and give the decorated instance back to
Feign. The support for metrics libraries is a good real-life example for this. See Feign metrics.
Creating one or more Capability beans and placing them in a @FeignClient configuration lets you
register them and modify the behavior of the involved client.
@Configuration
public class FooConfiguration {

    @Bean
    Capability customCapability() {
        return new CustomCapability();
    }

}
To disable Feign metrics, either for all Feign clients or only for a specific one, set one of the
following properties:

◦ feign.metrics.enabled=false
◦ feign.client.config.feignName.metrics.enabled=false

If you want to use a customized MicrometerCapability (for example, to supply a specific
MeterRegistry), register it as a bean:

@Configuration
public class FooConfiguration {

    @Bean
    public MicrometerCapability micrometerCapability(MeterRegistry meterRegistry) {
        return new MicrometerCapability(meterRegistry);
    }

}
Spring Cloud OpenFeign provides an equivalent @SpringQueryMap annotation, which is used to
annotate a POJO or Map parameter as a query parameter map. For example, the Params class
defines parameters param1 and param2:

// Params.java
public class Params {

    private String param1;

    private String param2;

    // getters and setters omitted

}
The following feign client uses the Params class by using the @SpringQueryMap annotation:
@FeignClient("demo")
public interface DemoTemplate {

    @GetMapping(path = "/demo")
    String demoEndpoint(@SpringQueryMap Params params);

}
If you need more control over the generated query parameter map, you can implement a custom
QueryMapEncoder bean.
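A minimal sketch of registering such a bean (here we simply use Feign's getter-based BeanQueryMapEncoder from feign.querymap rather than writing a full custom encoder; the configuration class name is illustrative):

import feign.QueryMapEncoder;
import feign.querymap.BeanQueryMapEncoder;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class QueryMapEncoderConfiguration {

    // This bean replaces the default QueryMapEncoder used by the Feign client.
    @Bean
    public QueryMapEncoder feignQueryMapEncoder() {
        return new BeanQueryMapEncoder();
    }

}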
When HATEOAS support is enabled, Feign clients are allowed to serialize and deserialize HATEOAS
representation models: EntityModel, CollectionModel and PagedModel.
@FeignClient("demo")
public interface DemoTemplate {

    @GetMapping(path = "/stores")
    CollectionModel<Store> getStores();

}
If a map is passed as the method argument, the @MatrixVariable path segment is created by joining
key-value pairs from the map with a =.
If a different object is passed, either the name provided in the @MatrixVariable annotation (if defined)
or the annotated variable name is joined with the provided method argument using =.
IMPORTANT: Even though, on the server side, Spring does not require the path segment
placeholder to be named the same as the matrix variable, this would be too ambiguous on the
client side. Spring Cloud OpenFeign therefore requires that you add a path segment placeholder
with a name matching either the name provided in the @MatrixVariable annotation (if defined) or
the annotated variable name.
For example:
@GetMapping("/objects/links/{matrixVars}")
Map<String, List<String>> getObjects(@MatrixVariable Map<String, List<String>> matrixVars);
Note that both variable name and the path segment placeholder are called matrixVars.
1.16. Feign CollectionFormat support
We support feign.CollectionFormat by providing the @CollectionFormat annotation. You can
annotate a Feign client method with it by passing the desired feign.CollectionFormat as the
annotation value.

In the following example, the CSV format is used instead of the default EXPLODED to process the
method.
@FeignClient(name = "demo")
protected interface PageableFeignClient {

    @CollectionFormat(feign.CollectionFormat.CSV)
    @GetMapping(path = "/page")
    ResponseEntity performRequest(Pageable page);

}
Set the CSV format while sending Pageable as a query parameter in order for it to be
encoded correctly.
Spring Cloud OpenFeign does not yet support reactive clients, such as Spring WebClient. Until that
is done, we recommend using feign-reactive for Spring WebClient support.
Depending on how you are using your Feign clients you may see initialization errors when starting
your application. To work around this problem you can use an ObjectProvider when autowiring
your client.
@Autowired
ObjectProvider<TestFeignClient> testFeignClient;
You may consider enabling Jackson Modules for the support of org.springframework.data.domain.Page
and org.springframework.data.domain.Sort decoding:

feign.autoconfiguration.jackson.enabled=true
1.19. Spring @RefreshScope Support
If Feign client refresh is enabled, each feign client is created with feign.Request.Options as a
refresh-scoped bean. This means properties such as connectTimeout and readTimeout can be
refreshed against any Feign client instance through POST /actuator/refresh.
By default, refresh behavior in Feign clients is disabled. Use the following property to enable
refresh behavior:
feign.client.refresh-enabled=true
2. Configuration properties
To see the list of all Spring Cloud OpenFeign related configuration properties please check the
Appendix page.
Documentation Overview: About the Documentation, Getting Help, First Steps, and more.
Using Spring Cloud Sleuth: Spring Cloud Sleuth usage examples and workflows.
Spring Cloud Sleuth Features: Span creation, context propagation, and more.

Spring Cloud Stream 3.1.3
1. Preface
1.1. A Brief History of Spring’s Data Integration Journey
Spring’s journey on Data Integration started with Spring Integration. With its programming model,
it provided a consistent developer experience to build applications that can embrace Enterprise
Integration Patterns to connect with external systems such as databases and message brokers,
among others.
Fast forward to the cloud era, where microservices have become prominent in the enterprise
setting. Spring Boot transformed the way developers built applications. With Spring’s
programming model and the runtime responsibilities handled by Spring Boot, it became seamless
to develop stand-alone, production-grade, Spring-based microservices.
To extend this to Data Integration workloads, Spring Integration and Spring Boot were put together
into a new project. Spring Cloud Stream was born.
• Port the business logic onto message brokers (such as RabbitMQ, Apache Kafka, Amazon
Kinesis).
• Rely on the framework’s automatic content-type support for common use-cases. Extending to
different data conversion types is possible.
We show you how to create a Spring Cloud Stream application that receives messages coming from
the messaging middleware of your choice (more on this later) and logs received messages to the
console. We call it LoggingConsumer. While not very practical, it provides a good introduction to some
of the main concepts and abstractions, making it easier to digest the rest of this user guide.
The three steps are as follows:
To get started, visit the Spring Initializr. From there, you can generate our LoggingConsumer
application. To do so:
1. In the Dependencies section, start typing stream. When the “Cloud Stream” option appears,
select it.
Basically, you choose the messaging middleware to which your application binds. We
recommend using the one you have already installed or feel more comfortable with installing
and running. Also, as you can see from the Initializr screen, there are a few other options you
can choose. For example, you can choose Gradle as your build tool instead of Maven (the
default).
The value of the Artifact field becomes the application name. If you chose RabbitMQ for the
middleware, your Spring Initializr should now be as follows:
Doing so downloads the zipped version of the generated project to your hard drive.
2. Unzip the file into the folder you want to use as your project directory.
Now you can import the project into your IDE. Keep in mind that, depending on the IDE and on
how the project was generated (Maven or Gradle), you may need to follow a specific import
procedure (for example, in Eclipse or STS, you need to use File → Import → Maven → Existing
Maven Project).
Once imported, the project must have no errors of any kind. Also, src/main/java should contain
com.example.loggingconsumer.LoggingConsumerApplication.
Technically, at this point, you can run the application’s main class. It is already a valid Spring Boot
application. However, it does not do anything, so we want to add some code.
@SpringBootApplication
public class LoggingConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(LoggingConsumerApplication.class, args);
    }

    @Bean
    public Consumer<Person> log() {
        return person -> {
            System.out.println("Received: " + person);
        };
    }

}
• We are using the functional programming model (see Spring Cloud Function support) to define a
single message handler as a Consumer.
• We are relying on framework conventions to bind such a handler to the input destination binding
exposed by the binder.

Doing so also lets you see one of the core features of the framework: it tries to automatically
convert incoming message payloads to type Person.

You now have a fully functional Spring Cloud Stream application that listens for messages.
From here, for simplicity, we assume you selected RabbitMQ in step one. Assuming you have
RabbitMQ installed and running, you can start the application by running its main method in your
IDE.
You should see the following output:
Go to the RabbitMQ management console or any other RabbitMQ client and send a message to
input.anonymous.CbMIwdkJSBO1ZoPDOtHtCg. The anonymous.CbMIwdkJSBO1ZoPDOtHtCg part represents the
group name and is generated, so it is bound to be different in your environment. For something
more predictable, you can use an explicit group name by setting
spring.cloud.stream.bindings.input.group=hello (or whatever name you like).
The contents of the message should be a JSON representation of the Person class, as follows:
{"name":"Sam Spade"}
You can also build and package your application into a boot jar (by using ./mvnw clean install) and
run the built JAR by using the java -jar command.
Now you have a working (albeit very basic) Spring Cloud Stream application.
• StreamBridge - for dynamic destinations. See Sending arbitrary data to an output (e.g. Foreign
event-driven sources) for more details.
• Multiple bindings with functions (multiple message handlers) - see Multiple functions in a
single application for more details.
• Functions with multiple inputs/outputs (single function that can subscribe or target multiple
destinations) - see Functions with multiple input and output arguments for more details.
• Native support for reactive programming - since v3.0.0 we no longer distribute the spring-cloud-
stream-reactive modules and instead rely on the native reactive support provided by Spring
Cloud Function. For backward compatibility you can still bring spring-cloud-stream-reactive
from previous versions.
• The original-content-type header references have been removed, after having been deprecated
in v2.0.
This section goes into more detail about how you can work with Spring Cloud Stream. It covers
topics such as creating and running stream applications.
@SpringBootApplication
public class SampleApplication {

    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }

}
@SpringBootTest(classes = SampleApplication.class)
@Import({TestChannelBinderConfiguration.class})
class BootTestStreamApplicationTests {

    @Autowired
    private InputDestination input;

    @Autowired
    private OutputDestination output;

    @Test
    void contextLoads() {
        input.send(new GenericMessage<byte[]>("hello".getBytes()));
        assertThat(output.receive().getPayload()).isEqualTo("HELLO".getBytes());
    }

}
4. Main Concepts
Spring Cloud Stream provides a number of abstractions and primitives that simplify the writing of
message-driven microservice applications. This section gives an overview of the following:
• Partitioning support
Spring Cloud Stream applications can be run in stand-alone mode from your IDE for testing. To run
a Spring Cloud Stream application in production, you can create an executable (or “fat”) JAR by
using the standard Spring Boot tooling provided for Maven or Gradle. See the Spring Boot
Reference Guide for more details.
Binder abstraction is also one of the extension points of the framework, which means you can
implement your own binder on top of Spring Cloud Stream. In the How to create a Spring Cloud
Stream Binder from scratch post, a community member documents in detail, with an example, the
set of steps necessary to implement a custom binder. The steps are also highlighted in the
Implementing Custom Binders section.
Spring Cloud Stream uses Spring Boot for configuration, and the Binder abstraction makes it
possible for a Spring Cloud Stream application to be flexible in how it connects to middleware. For
example, deployers can dynamically choose, at runtime, the mapping between the external
destinations (such as the Kafka topics or RabbitMQ exchanges) and inputs and outputs of the
message handler (such as input parameter of the function and its return argument). Such
configuration can be provided through external configuration properties and in any form
supported by Spring Boot (including application arguments, environment variables, and
application.yml or application.properties files). In the sink example from the Introducing Spring
Cloud Stream section, setting the spring.cloud.stream.bindings.input.destination application
property to raw-sensor-data causes it to read from the raw-sensor-data Kafka topic or from a queue
bound to the raw-sensor-data RabbitMQ exchange.
Spring Cloud Stream automatically detects and uses a binder found on the classpath. You can use
different types of middleware with the same code. To do so, include a different binder at build time.
For more complex use cases, you can also package multiple binders with your application and have
it choose the binder (and even use different binders for different bindings) at runtime.
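For instance, assuming both a Kafka binder and a RabbitMQ binder are on the classpath, a sketch pinning individual bindings to different binders might look like this (the binding names are illustrative and follow the functional naming convention described later):

spring:
  cloud:
    stream:
      bindings:
        uppercase-in-0:
          binder: kafka
        uppercase-out-0:
          binder: rabbit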
Data reported by sensors to an HTTP endpoint is sent to a common destination named raw-sensor-
data. From the destination, it is independently processed by a microservice application that
computes time-windowed averages and by another microservice application that ingests the raw
data into HDFS (Hadoop Distributed File System). In order to process the data, both applications
declare the topic as their input at runtime.
The publish-subscribe communication model reduces the complexity of both the producer and the
consumer and lets new applications be added to the topology without disruption of the existing
flow. For example, downstream from the average-calculating application, you can add an
application that calculates the highest temperature values for display and monitoring. You can then
add another application that interprets the same flow of averages for fault detection. Doing all
communication through shared topics rather than point-to-point queues reduces coupling between
microservices.
While the concept of publish-subscribe messaging is not new, Spring Cloud Stream takes the extra
step of making it an opinionated choice for its application model. By using native middleware
support, Spring Cloud Stream also simplifies use of the publish-subscribe model across different
platforms.
Spring Cloud Stream models this behavior through the concept of a consumer group. (Spring Cloud
Stream consumer groups are similar to and inspired by Kafka consumer groups.) Each consumer
binding can use the spring.cloud.stream.bindings.<bindingName>.group property to specify a group
name. For the consumers shown in the following figure, this property would be set as
spring.cloud.stream.bindings.<bindingName>.group=hdfsWrite or
spring.cloud.stream.bindings.<bindingName>.group=average.
All groups that subscribe to a given destination receive a copy of published data, but only one
member of each group receives a given message from that destination. By default, when a group is
not specified, Spring Cloud Stream assigns the application to an anonymous and independent
single-member consumer group that is in a publish-subscribe relationship with all other consumer
groups.
Prior to version 2.0, only asynchronous consumers were supported. A message is delivered as soon
as it is available and a thread is available to process it.
When you wish to control the rate at which messages are processed, you might want to use a
synchronous consumer.
4.5.1. Durability
Consistent with the opinionated application model of Spring Cloud Stream, consumer group
subscriptions are durable. That is, a binder implementation ensures that group subscriptions are
persistent and that, once at least one subscription for a group has been created, the group receives
messages, even if they are sent while all applications in the group are stopped.
Spring Cloud Stream provides a common abstraction for implementing partitioned processing use
cases in a uniform fashion. Partitioning can thus be used whether the broker itself is naturally
partitioned (for example, Kafka) or not (for example, RabbitMQ).
Partitioning is a critical concept in stateful processing, where it is critical (for either performance or
consistency reasons) to ensure that all related data is processed together. For example, in the time-
windowed average calculation example, it is important that all measurements from any given
sensor are processed by the same application instance.
To set up a partitioned processing scenario, you must configure both the data-
producing and the data-consuming ends.
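A configuration sketch for both ends (the binding names follow the functional naming convention covered in the Programming Model section; the destination, key expression, and counts are illustrative):

spring:
  cloud:
    stream:
      bindings:
        uppercase-out-0:
          destination: partitioned.topic
          producer:
            partition-key-expression: headers['partitionKey']
            partition-count: 5
        uppercase-in-0:
          destination: partitioned.topic
          consumer:
            partitioned: true
      instance-count: 5
      instance-index: 0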
5. Programming Model
To understand the programming model, you should be familiar with the following core concepts:
• Bindings: Bridge between the external messaging systems and application provided Producers
and Consumers of messages (created by the Destination Binders).
• Message: The canonical data structure used by producers and consumers to communicate with
Destination Binders (and thus other applications via external messaging systems).
Binders handle a lot of the boilerplate responsibilities that would otherwise fall on your shoulders.
However, to accomplish that, the binder still needs some help in the form of a minimalistic yet
required set of instructions from the user, which typically come in the form of some type of binding
configuration.

While it is out of the scope of this section to discuss all of the available binder and binding
configuration options (the rest of the manual covers them extensively), Binding, as a concept, does
require special attention. The next section discusses it in detail.
5.2. Bindings
As stated earlier, Bindings provide a bridge between the external messaging system (e.g., queue,
topic etc.) and application-provided Producers and Consumers.
The following example shows a fully configured and functioning Spring Cloud Stream application
that receives the payload of the message as a String type (see the Content Type Negotiation section),
logs it to the console, and sends it downstream after converting it to upper case.
@SpringBootApplication
public class SampleApplication {

    @Bean
    public Function<String, String> uppercase() {
        return value -> {
            System.out.println("Received: " + value);
            return value.toUpperCase();
        };
    }

}
The above example looks no different than any vanilla Spring Boot application. It defines a single
bean of type Function, and that is it. So how does it become a Spring Cloud Stream application? It
does so simply based on the presence of spring-cloud-stream and binder dependencies and
auto-configuration classes on the classpath, which effectively sets the context for your Boot
application as a Spring Cloud Stream application. In this context, beans of type Supplier, Function,
or Consumer are treated as de facto message handlers, triggering binding to destinations exposed
by the provided binder, following certain naming conventions and rules to avoid extra
configuration.
Binding is an abstraction that represents a bridge between sources and targets exposed by the
binder and user code. This abstraction has a name, and while we try to do our best to limit the
configuration required to run spring-cloud-stream applications, being aware of such name(s) is
necessary for cases where additional per-binding configuration is required.
Throughout this manual you will see examples of configuration properties such as
spring.cloud.stream.bindings.input.destination=myQueue. The input segment in this property name
is what we refer to as the binding name, and it can be derived via several mechanisms. The
following sub-sections describe the naming conventions and configuration elements used by
spring-cloud-stream to control binding names.
Unlike the explicit naming required by annotation-based support (legacy) used in the previous
versions of spring-cloud-stream, the functional programming model defaults to a simple
convention when it comes to binding names, thus greatly simplifying application configuration.
Let’s look at the first example:
@SpringBootApplication
public class SampleApplication {

    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }

}
In the preceding example, we have an application with a single function which acts as a message
handler. As a Function, it has an input and an output. The naming convention used to name input
and output bindings is as follows:

• input - <functionName> + -in- + <index>
• output - <functionName> + -out- + <index>

The in and out correspond to the type of binding (such as input or output). The index is the index of
the input or output binding. It is always 0 for a typical single input/output function, so it is only
relevant for Functions with multiple input and output arguments.
So, for example, if you want to map the input of this function to a remote destination (e.g., topic,
queue, etc.) called "my-topic", you would do so with the following property:
--spring.cloud.stream.bindings.uppercase-in-0.destination=my-topic
Note how uppercase-in-0 is used as a segment in property name. The same goes for uppercase-out-0.
Sometimes, to improve readability, you may want to give your binding a more descriptive name
(such as 'account', 'orders', etc.). Another way of looking at it is that you can map an implicit binding
name to an explicit binding name. You can do so with the
spring.cloud.stream.function.bindings.<binding-name> property. This property also provides a
migration path for existing applications that rely on custom interface-based bindings that require
explicit names.
For example,
--spring.cloud.stream.function.bindings.uppercase-in-0=input
In the preceding example you mapped and effectively renamed uppercase-in-0 binding name to
input. Now all configuration properties can refer to input binding name instead (e.g.,
--spring.cloud.stream.bindings.input.destination=my-topic).
While descriptive binding names may enhance the readability aspect of the
configuration, they also create another level of misdirection by mapping an
implicit binding name to an explicit binding name. And since all subsequent
configuration properties will use the explicit binding name you must always refer
to this 'bindings' property to correlate which function it actually corresponds to.
We believe that for most cases (with the exception of Functional Composition) it
may be overkill, so it is our recommendation to avoid using it altogether,
especially since not using it provides a clear path between binder destination and
binding name, such as spring.cloud.stream.bindings.uppercase-in-0.destination=sample-topic,
where you are clearly correlating the input of the uppercase function with the
sample-topic destination.
For more on properties and other configuration options please see Configuration Options section.
Since Spring Cloud Stream v2.1, another alternative for defining stream handlers and sources is to
use built-in support for Spring Cloud Function, where they can be expressed as beans of type
java.util.function.[Supplier/Function/Consumer].
To specify which functional bean to bind to the external destination(s) exposed by the bindings, you
must provide spring.cloud.function.definition property.
@SpringBootApplication
public class MyFunctionBootApp {

    @Bean
    public Function<String, String> toUpperCase() {
        return s -> s.toUpperCase();
    }

}
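Given the application above, you would select the toUpperCase bean as the function to bind like this (a usage sketch based on the property just described):

spring.cloud.function.definition=toUpperCase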
Below are examples of simple functional applications that support other semantics:

@SpringBootApplication
public static class SourceFromSupplier {

    @Bean
    public Supplier<Date> date() {
        return () -> new Date(12345L);
    }

}

@SpringBootApplication
public static class SinkFromConsumer {

    @Bean
    public Consumer<String> sink() {
        return System.out::println;
    }

}
Suppliers (Sources)
Function and Consumer are pretty straightforward when it comes to how their invocation is
triggered. They are triggered based on data (events) sent to the destination they are bound to. In
other words, they are classic event-driven components.
However, Supplier is in its own category when it comes to triggering. Since it is, by definition, the
source (the origin) of the data, it does not subscribe to any in-bound destination and, therefore, has
to be triggered by some other mechanism(s). There is also a question of Supplier implementation,
which could be imperative or reactive and which directly relates to the triggering of such suppliers.
@SpringBootApplication
public static class SupplierConfiguration {
@Bean
public Supplier<String> stringSupplier() {
return () -> "Hello from Supplier";
}
}
The preceding Supplier bean produces a string whenever its get() method is invoked. However,
who invokes this method and how often? The framework provides a default polling mechanism
(answering the question of "Who?") that will trigger the invocation of the supplier and by default it
will do so every second (answering the question of "How often?"). In other words, the above
configuration produces a single message every second and each message is sent to an output
destination that is exposed by the binder. To learn how to customize the polling mechanism, see
Polling Configuration Properties section.
@SpringBootApplication
public static class SupplierConfiguration {
@Bean
public Supplier<Flux<String>> stringSupplier() {
    return () -> Flux.fromStream(Stream.generate(new Supplier<String>() {
        @Override
        public String get() {
            try {
                Thread.sleep(1000);
            }
            catch (InterruptedException e) {
                // restore the interrupt flag; a value is still produced so get() returns on all paths
                Thread.currentThread().interrupt();
            }
            return "Hello from Supplier";
        }
    })).subscribeOn(Schedulers.elastic()).share();
}
}
The preceding Supplier bean adopts the reactive programming style. Typically, and unlike the
imperative supplier, it should be triggered only once, given that the invocation of its get() method
produces (supplies) the continuous stream of messages and not an individual message.
The framework recognizes the difference in the programming style and guarantees that such a
supplier is triggered only once.
However, imagine the use case where you want to poll some data source and return a finite stream
of data representing the result set. The reactive programming style is a perfect mechanism for such
a Supplier. However, given the finite nature of the produced stream, such Supplier still needs to be
invoked periodically.
Consider the following sample, which emulates such use case by producing a finite stream of data:
@SpringBootApplication
public static class SupplierConfiguration {
@PollableBean
public Supplier<Flux<String>> stringSupplier() {
return () -> Flux.just("hello", "bye");
}
}
The bean itself is annotated with the @PollableBean annotation (a subset of @Bean), thus signaling to the framework that although the implementation of such a supplier is reactive, it still needs to be polled.
As you have learned by now, unlike Function and Consumer, which are triggered by an event (they have input data), Supplier does not have any input and is thus triggered by a different mechanism - a poller, which may have an unpredictable threading mechanism. And while the details of the threading mechanism are, most of the time, not relevant to the downstream execution of the function, it may present an issue in certain cases, especially with integrated frameworks that may have certain expectations about thread affinity. For example, Spring Cloud Sleuth relies on tracing data stored in thread local. For those cases we have another mechanism via StreamBridge, where the user has more control over the threading mechanism. You can get more details in the Sending arbitrary data to an output (e.g. Foreign event-driven sources) section.
Consumer (Reactive)
A reactive Consumer is a little bit special because it has a void return type, leaving the framework with no reference to subscribe to. Most likely you will not need to write Consumer<Flux<?>>, and instead write it as a Function<Flux<?>, Mono<Void>>, invoking the then operator as the last operator on your stream.
For example:
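A minimal sketch of such a function follows (the processing steps here are illustrative):
@Bean
public Function<Flux<String>, Mono<Void>> sink() {
    return flux -> flux
            .map(String::toUpperCase)         // any stream processing
            .doOnNext(System.out::println)    // terminal side effect
            .then();                          // complete with Mono<Void>
}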
But if you do need to write an explicit Consumer<Flux<?>>, remember to subscribe to the incoming
Flux.
The following properties can be used to customize the default poller and must be prefixed with spring.cloud.stream.poller (e.g., spring.cloud.stream.poller.fixedDelay=2000):
fixedDelay
Fixed delay for default poller in milliseconds.
Default: 1000L.
maxMessagesPerPoll
Maximum messages for each polling event of the default poller.
Default: 1L.
cron
Cron expression value for the Cron Trigger.
Default: none.
initialDelay
Initial delay for periodic triggers.
Default: 0.
timeUnit
The TimeUnit to apply to delay values.
Default: MILLISECONDS.
There are cases where the actual source of data may be coming from an external (foreign) system that is not a binder. For example, the source of the data may be a classic REST endpoint. How do we bridge such a source with the functional mechanism used by spring-cloud-stream?
Spring Cloud Stream provides two mechanisms, so let’s look at them in more detail.
Here, for both samples, we’ll use a standard MVC endpoint method called delegateToSupplier bound to the root web context, delegating incoming requests to a stream via the StreamBridge mechanism.
@SpringBootApplication
@Controller
public class WebSourceApplication {
@Autowired
private StreamBridge streamBridge;
@RequestMapping
@ResponseStatus(HttpStatus.ACCEPTED)
public void delegateToSupplier(@RequestBody String body) {
System.out.println("Sending " + body);
streamBridge.send("toStream-out-0", body);
}
}
Here we autowire a StreamBridge bean, which allows us to send data to an output binding, effectively bridging a non-stream application with spring-cloud-stream. Note that the preceding example does not have any source functions defined (e.g., a Supplier bean), leaving the framework with no trigger to create source bindings, which would be typical for cases where configuration contains function beans. So to trigger the creation of a source binding, we use the spring.cloud.stream.source property, where you can declare the name of your sources. The provided name will be used as a trigger to create a source binding. So in the preceding example the name of the output binding will be toStream-out-0, which is consistent with the binding naming convention used by functions (see Binding and Binding names). You can use ; to signify multiple sources (e.g., --spring.cloud.stream.source=foo;bar).
Also, note that the streamBridge.send(..) method takes an Object for data. This means you can send a POJO or a Message to it, and it will go through the same routine when sending output as if it came from any Function or Supplier, providing the same level of consistency as with functions. This means that output type conversion, partitioning, and so on are honored as if the output was produced by functions.
StreamBridge can also be used for cases when output destination(s) are not known ahead of time, similar to the use cases described in the Routing FROM Consumer section.
@SpringBootApplication
@Controller
public class WebSourceApplication {
    @Autowired
    private StreamBridge streamBridge;

    @RequestMapping
    @ResponseStatus(HttpStatus.ACCEPTED)
    public void delegateToSupplier(@RequestBody String body) {
        System.out.println("Sending " + body);
        streamBridge.send("myDestination", body);
    }
}
As you can see, the preceding example is very similar to the previous one, with the exception that the explicit binding instruction (the spring.cloud.stream.source property) is not provided. Here we’re sending data to the myDestination name, which does not exist as a binding. Therefore such a name will be treated as a dynamic destination, as described in the Routing FROM Consumer section.
The following example shows one more variation of the same approach:
@SpringBootApplication
@Controller
public class WebSourceApplication {
@Autowired
private StreamBridge streamBridge;
@RequestMapping
@ResponseStatus(HttpStatus.ACCEPTED)
public void delegateToSupplier(@RequestBody String body) {
streamBridge.send("myBinidng", body);
}
}
As you can see, inside the delegateToSupplier method we’re using StreamBridge to send data to the myBinding binding. And here you’re also benefiting from the dynamic features of StreamBridge: if myBinding doesn’t exist, it will be created automatically; otherwise, the existing binding will be used.
By showing two examples, we want to emphasize that the approach will work with any type of foreign source.
You can also provide a specific content type, if necessary, with the following method signature: public boolean send(String bindingName, Object data, MimeType outputContentType). Or, if you send data as a Message, its content type will be honored.
Spring Cloud Stream supports multiple binder scenarios. For example, you may be receiving data from Kafka and sending it to RabbitMQ.
For more information on multiple binder scenarios, please see the Binders section, specifically Multiple Binders on the Classpath.
In the event you are planning to use StreamBridge and have more than one binder configured in your application, you must also tell StreamBridge which binder to use. For that, there are two more variations of the send method:
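The variations look along the following lines (a sketch of the StreamBridge API; annotations are omitted here):
public boolean send(String bindingName, String binderType, Object data)

public boolean send(String bindingName, String binderType, Object data, MimeType outputContentType)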
As you can see, there is one additional argument that you can provide - binderType - telling BindingService which binder to use when creating the dynamic binding.
Since Spring Cloud Function is built on top of Project Reactor, there isn’t much you need to do to benefit from the reactive programming model while implementing a Supplier, Function or Consumer.
For example:
@SpringBootApplication
public static class ReactiveUppercaseConfiguration {
@Bean
public Function<Flux<String>, Flux<String>> reactiveUpperCase() {
return flux -> flux.map(val -> val.toUpperCase());
}
}
Functional Composition
Using the functional programming model you can also benefit from functional composition, where you can dynamically compose complex handlers from a set of simple functions. As an example, let’s add the following function bean to the application defined above:
@Bean
public Function<String, String> wrapInQuotes() {
return s -> "\"" + s + "\"";
}
Then, to compose a new function from both existing functions, use the | (pipe) delimiter in the definition property:
--spring.cloud.function.definition=toUpperCase|wrapInQuotes
The result of a composition is a single function which, as you may guess, could have a very long and rather cryptic name (e.g., foo|bar|baz|xyz. . .), presenting a great deal of inconvenience when it comes to other configuration properties. This is where the descriptive binding names feature described in the Functional binding names section can help.
For example, if we want to give our toUpperCase|wrapInQuotes a more descriptive name, we can do so with the property spring.cloud.stream.function.bindings.toUpperCase|wrapInQuotes-in-0=quotedUpperCaseInput, allowing other configuration properties to refer to that binding name (e.g., spring.cloud.stream.bindings.quotedUpperCaseInput.destination=myDestination).
Function composition effectively allows you to address complexity by breaking it down into a set of simple and individually manageable/testable components that can still be represented as one at runtime. But that is not the only benefit.
You can also use composition to address certain cross-cutting non-functional concerns, such as
content enrichment. For example, assume you have an incoming message that may be lacking
certain headers, or some headers are not in the exact state your business function would expect.
You can now implement a separate function that addresses those concerns and then compose it
with the main business function.
@SpringBootApplication
public class DemoStreamApplication {
    @Bean
    public Function<Message<String>, Message<String>> enrich() {
        return message -> {
            Assert.isTrue(!message.getHeaders().containsKey("foo"), "Should NOT contain 'foo' header");
            return MessageBuilder.fromMessage(message).setHeader("foo", "bar").build();
        };
    }

    @Bean
    public Function<Message<String>, Message<String>> echo() {
        return message -> {
            Assert.isTrue(message.getHeaders().containsKey("foo"), "Should contain 'foo' header");
            System.out.println("Incoming message " + message);
            return message;
        };
    }
}
While trivial, this example demonstrates how one function enriches the incoming Message with additional header(s) (a non-functional concern), so the other function - echo - can benefit from it. The echo function stays clean and focused on business logic only. You can also see the usage of the spring.cloud.stream.function.bindings property to simplify the composed binding name.
Starting with version 3.0, spring-cloud-stream provides support for functions that have multiple inputs and/or multiple outputs (return values). What does this actually mean and what type of use cases is it targeting?
• Big Data: Imagine the source of data you’re dealing with is highly unorganized and contains various types of data elements (e.g., orders, transactions, etc.) and you effectively need to sort it out.
• Data aggregation: Another use case may require you to merge data elements from two or more incoming streams.
The above describes just a few use cases where you may need to use a single function to accept
and/or produce multiple streams of data. And that is the type of use cases we are targeting here.
Also, note a slightly different emphasis on the concept of streams here. The assumption is that such functions are only valuable if they are given access to the actual streams of data (not the individual elements). So for that we are relying on the abstractions provided by Project Reactor (i.e., Flux and Mono), which are already available on the classpath as part of the dependencies brought in by spring-cloud-function.
Another important aspect is the representation of multiple inputs and outputs. While Java provides a variety of abstractions to represent multiple of something, those abstractions are a) unbounded, b) lack arity and c) lack type information, all of which are important in this context. As an example, a Collection or an array only allows us to describe multiple of a single type or forces us to up-cast everything to an Object, affecting the transparent type conversion feature of spring-cloud-stream, and so on.
So to accommodate all these requirements, the initial support relies on signatures which utilize another abstraction provided by Project Reactor - Tuples. However, we are working on allowing more flexible signatures.
Please refer to Binding and Binding names section to understand the naming
convention used to establish binding names used by such application.
@SpringBootApplication
public class SampleApplication {
@Bean
public Function<Tuple2<Flux<String>, Flux<Integer>>, Flux<String>> gather() {
return tuple -> {
Flux<String> stringStream = tuple.getT1();
Flux<String> intStream = tuple.getT2().map(i -> String.valueOf(i));
return Flux.merge(stringStream, intStream);
};
}
}
The above example demonstrates a function which takes two inputs (the first of type String and the second of type Integer) and produces a single output of type String.
So, for the above example the two input bindings will be gather-in-0 and gather-in-1 and for
consistency the output binding also follows the same convention and is named gather-out-0.
Knowing that will allow you to set binding specific properties. For example, the following will
override content-type for gather-in-0 binding:
--spring.cloud.stream.bindings.gather-in-0.content-type=text/plain
@SpringBootApplication
public class SampleApplication {
@Bean
    public static Function<Flux<Integer>, Tuple2<Flux<String>, Flux<String>>> scatter() {
        return flux -> {
            Flux<Integer> connectedFlux = flux.publish().autoConnect(2);
            UnicastProcessor even = UnicastProcessor.create();
            UnicastProcessor odd = UnicastProcessor.create();
            Flux<Integer> evenFlux = connectedFlux.filter(number -> number % 2 == 0).doOnNext(number -> even.onNext("EVEN: " + number));
            Flux<Integer> oddFlux = connectedFlux.filter(number -> number % 2 != 0).doOnNext(number -> odd.onNext("ODD: " + number));
            // connect each output Flux on subscription and return both outputs as a Tuple2
            return Tuples.of(Flux.from(even).doOnSubscribe(x -> evenFlux.subscribe()), Flux.from(odd).doOnSubscribe(x -> oddFlux.subscribe()));
        };
    }
}
The above example is somewhat the opposite of the previous sample and demonstrates a function which takes a single input of type Integer and produces two outputs (both of type String).
So, for the above example the input binding is scatter-in-0 and the output bindings are scatter-out-0 and scatter-out-1.
The following test fragment validates that behavior:
int counter = 0;
for (int i = 0; i < 5; i++) {
    Message<byte[]> even = outputDestination.receive(0, 0);
    assertThat(even.getPayload()).isEqualTo(("EVEN: " + String.valueOf(counter++)).getBytes());
    Message<byte[]> odd = outputDestination.receive(0, 1);
    assertThat(odd.getPayload()).isEqualTo(("ODD: " + String.valueOf(counter++)).getBytes());
}
There may also be a need for grouping several message handlers in a single application. You would
do so by defining several functions.
@SpringBootApplication
public class SampleApplication {
@Bean
public Function<String, String> uppercase() {
return value -> value.toUpperCase();
}
@Bean
public Function<String, String> reverse() {
return value -> new StringBuilder(value).reverse().toString();
}
}
In the above example we have configuration which defines two functions, uppercase and reverse. So first, as mentioned before, we need to notice that there is a conflict (more than one function) and therefore we need to resolve it by providing the spring.cloud.function.definition property pointing to the actual function we want to bind. Except here we will use the ; delimiter to point to both functions (see the test case below).
@Test
public void testMultipleFunctions() {
    try (ConfigurableApplicationContext context = new SpringApplicationBuilder(
            TestChannelBinderConfiguration.getCompleteConfiguration(
                    ReactiveFunctionConfiguration.class))
            .run("--spring.cloud.function.definition=uppercase;reverse")) {
        Message<byte[]> inputMessage = MessageBuilder.withPayload("Hello".getBytes()).build();
        inputDestination.send(inputMessage, "uppercase-in-0");
        inputDestination.send(inputMessage, "reverse-in-0");
        // ... assertions on outputDestination omitted
    }
}
Batch Consumers
When using a MessageChannelBinder that supports batch listeners, and the feature is enabled for the
consumer binding, you can set spring.cloud.stream.bindings.<binding-name>.consumer.batch-mode to
true to enable the entire batch of messages to be passed to the function in a List.
@Bean
public Function<List<Person>, Person> findFirstPerson() {
return persons -> persons.get(0);
}
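For example, to enable batch mode for the findFirstPerson function above, you would set the following property (using the binding naming convention described earlier):
--spring.cloud.stream.bindings.findFirstPerson-in-0.consumer.batch-mode=true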
Batch Producers
You can also use the concept of batching on the producer side by returning a collection of Messages, which effectively provides the inverse effect: each message in the collection is sent individually by the binder.
@Bean
public Function<String, List<Message<String>>> batch() {
return p -> {
List<Message<String>> list = new ArrayList<>();
list.add(MessageBuilder.withPayload(p + ":1").build());
list.add(MessageBuilder.withPayload(p + ":2").build());
list.add(MessageBuilder.withPayload(p + ":3").build());
list.add(MessageBuilder.withPayload(p + ":4").build());
return list;
};
}
Each message in the returned list is sent individually, resulting in four messages sent to the output destination.
When you implement a function, you may have complex requirements that fit the category of
Enterprise Integration Patterns (EIP). These are best handled by using a framework such as Spring
Integration (SI), which is a reference implementation of EIP.
Thankfully, SI already provides support for exposing integration flows as functions via Integration flow as gateway. Consider the following sample:
@SpringBootApplication
public class FunctionSampleSpringIntegrationApplication {
@Bean
public IntegrationFlow uppercaseFlow() {
return IntegrationFlows.from(MessageFunction.class, "uppercase")
.<String, String>transform(String::toUpperCase)
.logAndReply(LoggingHandler.Level.WARN);
}
}
For those who are familiar with SI, you can see we define a bean of type IntegrationFlow where we declare an integration flow that we want to expose as a Function<String, String> (using the SI DSL) called uppercase. The MessageFunction interface lets us explicitly declare the types of the inputs and outputs for proper type conversion. See the Content Type Negotiation section for more on type conversion.
The resulting function is bound to the input and output destinations exposed by the target binder.
Please refer to Binding and Binding names section to understand the naming
convention used to establish binding names used by such application.
For more details on the interoperability of Spring Integration and Spring Cloud Stream, specifically around the functional programming model, you may find this post very interesting, as it dives a bit deeper into the various patterns you can apply by merging the best of Spring Integration and Spring Cloud Stream/Functions.
Overview
When using polled consumers, you poll the PollableMessageSource on demand. To define a binding for a polled consumer, you need to provide the spring.cloud.stream.pollable-source property.
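For example:
--spring.cloud.stream.pollable-source=myDestination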
The pollable-source name myDestination in the preceding example will result in a binding name of myDestination-in-0, staying consistent with the functional programming model.
Given the polled consumer in the preceding example, you might use it as follows:
@Bean
public ApplicationRunner poller(PollableMessageSource destIn, MessageChannel destOut)
{
return args -> {
while (someCondition()) {
try {
if (!destIn.poll(m -> {
String newPayload = ((String) m.getPayload()).toUpperCase();
destOut.send(new GenericMessage<>(newPayload));
})) {
Thread.sleep(1000);
}
}
catch (Exception e) {
// handle failure
}
}
};
}
A less manual and more Spring-like alternative would be to configure a scheduled task bean. For
example,
@Scheduled(fixedDelay = 5_000)
public void poll() {
    System.out.println("Polling...");
    this.source.poll(m -> {
        System.out.println(m.getPayload());
    });
}
Normally, the poll() method acknowledges the message when the MessageHandler exits. If the
method exits abnormally, the message is rejected (not re-queued), but see Handling Errors. You can
override that behavior by taking responsibility for the acknowledgment, as shown in the following
example:
@Bean
public ApplicationRunner poller(PollableMessageSource dest1In, MessageChannel
dest2Out) {
return args -> {
while (someCondition()) {
if (!dest1In.poll(m -> {
StaticMessageHeaderAccessor.getAcknowledgmentCallback(m).noAutoAck();
// e.g. hand off to another thread which can perform the ack
// or acknowledge(Status.REQUEUE)
})) {
Thread.sleep(1000);
}
}
};
}
You must ack (or nack) the message at some point, to avoid resource leaks.
Some messaging systems (such as Apache Kafka) maintain a simple offset in a log. If a delivery fails and is re-queued with StaticMessageHeaderAccessor.getAcknowledgmentCallback(m).acknowledge(Status.REQUEUE);, any later successfully ack’d messages are redelivered.
There is also an overloaded poll method, for which the definition is as follows:
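poll(MessageHandler handler, ParameterizedTypeReference<?> type)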
The type is a conversion hint that allows the incoming message payload to be converted, as shown
in the following example:
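A minimal sketch (assuming a pollableSource field of type PollableMessageSource and an application-defined Foo type):
boolean result = pollableSource.poll(received -> {
    Map<String, Foo> payload = (Map<String, Foo>) received.getPayload();
    // process the converted payload
}, new ParameterizedTypeReference<Map<String, Foo>>() { });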
Handling Errors
By default, an error channel is configured for the pollable source; if the callback throws an
exception, an ErrorMessage is sent to the error channel (<destination>.<group>.errors); this error
channel is also bridged to the global Spring Integration errorChannel.
You can subscribe to either error channel with a @ServiceActivator to handle errors; without a
subscription, the error will simply be logged and the message will be acknowledged as successful. If
the error channel service activator throws an exception, the message will be rejected (by default)
and won’t be redelivered. If the service activator throws a RequeueCurrentMessageException, the
message will be requeued at the broker and will be again retrieved on a subsequent poll.
Routing can be achieved by relying on the RoutingFunction available in Spring Cloud Function 3.0. All you need to do is enable it via the --spring.cloud.stream.function.routing.enabled=true application property or provide a spring.cloud.function.routing-expression property. Once enabled, RoutingFunction will be bound to the input destination, receiving all the messages and routing them to other functions based on the provided instruction.
For the purposes of binding, the name of the routing destination is functionRouter-in-0 (see RoutingFunction.FUNCTION_NAME and the binding naming convention in Functional binding names).
@SpringBootApplication
public class RoutingStreamApplication {
@Bean
public Consumer<String> even() {
return value -> {
System.out.println("EVEN: " + value);
};
}
@Bean
public Consumer<String> odd() {
return value -> {
System.out.println("ODD: " + value);
};
}
}
By sending a message to the functionRouter-in-0 destination exposed by the binder (i.e., Rabbit, Kafka), such a message will be routed to the appropriate ('even' or 'odd') Consumer.
@Bean
public Consumer<Integer> odd() {
return value -> System.out.println("ODD: " + value);
}
RoutingFunction is a Function and as such is treated no differently than any other function. Well. . . almost.
When RoutingFunction routes to another Function, its output is sent to the output binding of the RoutingFunction, which is functionRouter-out-0, as expected. But what if RoutingFunction routes to a Consumer? In other words, the result of the invocation of the RoutingFunction may not produce anything to be sent to the output binding, thus making it unnecessary to even have one. So, we do treat RoutingFunction a little bit differently when we create bindings. And even though it is transparent to you as a user (there is really nothing for you to do), being aware of some of the mechanics will help you understand its inner workings.
So, the rule is: we never create an output binding for the RoutingFunction, only an input one. So when you route to a Consumer, the RoutingFunction effectively becomes a Consumer by not having any output bindings. However, if RoutingFunction happens to route to another Function which produces output, the output binding for the RoutingFunction will be created dynamically, at which point RoutingFunction will act as a regular Function with regard to bindings (having both input and output bindings).
Aside from static destinations, Spring Cloud Stream lets applications send messages to dynamically
bound destinations. This is useful, for example, when the target destination needs to be determined
at runtime. Applications can do so in one of two ways.
BinderAwareChannelResolver
The following example demonstrates one of the common scenarios, where a REST controller uses a path variable to determine the target destination:
@SpringBootApplication
@Controller
public class SourceWithDynamicDestination {
@Autowired
private BinderAwareChannelResolver resolver;
@RequestMapping(value="/{target}")
@ResponseStatus(HttpStatus.ACCEPTED)
public void send(@RequestBody String body, @PathVariable("target") String target){
resolver.resolveDestination(target).send(new GenericMessage<String>(body));
}
}
Now consider what happens when we start the application on the default port (8080) and make the following requests with curl:
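(The request payloads below are illustrative.)
curl -H "Content-Type: application/json" -X POST -d "customer-1" http://localhost:8080/customers

curl -H "Content-Type: application/json" -X POST -d "order-1" http://localhost:8080/orders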
The destinations 'customers' and 'orders' are created in the broker (as an exchange for Rabbit or a topic for Kafka), and the data is published to the appropriate destinations.
spring.cloud.stream.sendto.destination
You can also delegate to the framework to dynamically resolve the output destination by setting the spring.cloud.stream.sendto.destination header to the name of the destination to be resolved.
@Bean
public Function<String, Message<String>> destinationAsPayload() {
    return value -> {
        return MessageBuilder.withPayload(value)
                .setHeader("spring.cloud.stream.sendto.destination", value).build();
    };
}
Albeit trivial, you can clearly see in this example that our output is a Message with the spring.cloud.stream.sendto.destination header set to the value of the input argument. The framework will consult this header and will attempt to create or discover a destination with that name and send the output to it.
If destination names are known in advance, you can configure the producer properties as with any
other destination. Alternatively, if you register a NewDestinationBindingCallback<> bean, it is
invoked just before the binding is created. The callback takes the generic type of the extended
producer properties used by the binder. It has one method:
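The method receives the destination name, the channel, and both the standard and the extended producer properties (a sketch; parameter names may vary):
void configure(String destinationName, MessageChannel channel, ProducerProperties producerProperties,
        T extendedProducerProperties);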
@Bean
public NewDestinationBindingCallback<RabbitProducerProperties> dynamicConfigurer() {
return (name, channel, props, extended) -> {
props.setRequiredGroups("bindThisQueue");
extended.setQueueNameGroupOnly(true);
extended.setAutoBindDlq(true);
extended.setDeadLetterQueueName("myDLQ");
};
}
If you need to support dynamic destinations with multiple binder types, use Object
for the generic type and cast the extended argument as needed.
Also, please see the Using StreamBridge section to see how yet another option (StreamBridge) can be utilized for similar cases.
5.5. Error Handling
In this section we’ll explain the general idea behind the error handling mechanisms provided by the framework. We’ll be using the Rabbit binder as an example, since individual binders define different sets of properties for certain supported mechanisms specific to the underlying broker capabilities (such as the Kafka binder).
Errors happen, and Spring Cloud Stream provides several flexible mechanisms to deal with them.
Note, the techniques are dependent on binder implementation and the capability of the underlying
messaging middleware as well as programming model (more on this later).
Whenever a Message handler (function) throws an exception, it is propagated back to the binder, and the binder subsequently propagates the error back to the messaging system. The framework then makes several attempts at re-trying the same message (3 by default) using the RetryTemplate provided by the Spring Retry library.
After that, depending on the capabilities of the messaging system, such a system may drop the message, re-queue the message for re-processing, or send the failed message to a DLQ. Both Rabbit and Kafka support these concepts. However, other binders may not, so refer to your individual binder’s documentation for details on supported error-handling options.
Keep in mind, however, that a reactive function does NOT qualify as a Message handler, since it does not handle individual messages and instead provides a way to connect the stream (i.e., Flux) provided by the framework with the one provided by the user. Another way of looking at it is: a Message handler (i.e., imperative function) is invoked for each Message, while a reactive function is invoked only once, during initialization, to connect two stream definitions, at which point the framework effectively hands off any and all control to the reactive API.
Why is this important? Because anything you read later in this section with regard to the RetryTemplate, dropping failed messages, retrying, DLQ, and the configuration properties that assist with all of it applies only to Message handlers (i.e., imperative functions).
The Reactive API provides a very rich library of its own operators and mechanisms to assist you with error handling specific to a variety of reactive use cases, which are far more complex than simple Message handler cases. So use them; for example, public final Flux<T> retryWhen(Retry retrySpec); which you can find in reactor.core.publisher.Flux:
@Bean
public Function<Flux<String>, Flux<String>> uppercase() {
return flux -> flux
.retryWhen(Retry.backoff(3, Duration.ofMillis(1000)))
.map(v -> v.toUpperCase());
}
By default, if no additional system-level configuration is provided, the messaging system drops the
failed message. While acceptable in some cases, for most cases, it is not, and we need some
recovery mechanism to avoid message loss.
Perhaps the most common mechanism, DLQ allows failed messages to be sent to a special
destination: the Dead Letter Queue.
When configured, failed messages are sent to this destination for subsequent re-processing or
auditing and reconciliation.
@SpringBootApplication
public class SimpleStreamApplication {
    @Bean
    public Function<Person, Person> uppercase() {
        return personIn -> {
            throw new RuntimeException("intentional");
        };
    }
}
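For a Rabbit-based setup, the relevant properties would look along these lines (a sketch; the destination and group names match the discussion below):
--spring.cloud.stream.bindings.uppercase-in-0.destination=uppercase
--spring.cloud.stream.bindings.uppercase-in-0.group=myGroup
--spring.cloud.stream.rabbit.bindings.uppercase-in-0.consumer.auto-bind-dlq=true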
As a reminder, in this example the uppercase-in-0 segment of the property corresponds to the name of the input destination binding. The consumer segment indicates that it is a consumer property.
When using DLQ, at least the group property must be provided for proper naming of the DLQ destination. However, group is often used together with the destination property, as in our example.
Aside from some standard properties, we also set auto-bind-dlq to instruct the binder to create and configure a DLQ destination for the uppercase-in-0 binding, which corresponds to the uppercase destination (see the corresponding property); this results in an additional Rabbit queue named uppercase.myGroup.dlq (see the Kafka documentation for Kafka-specific DLQ properties).
Once configured, all failed messages are routed to this destination preserving the original message
for further actions.
And you can see that the error message contains more information relevant to the original error, as
follows:
. . . .
x-exception-stacktrace: org.springframework.messaging.MessageHandlingException: nested
exception is
org.springframework.messaging.MessagingException: has an error,
failedMessage=GenericMessage [payload=byte[15],
headers={amqp_receivedDeliveryMode=NON_PERSISTENT,
amqp_receivedRoutingKey=input.hello, amqp_deliveryTag=1,
deliveryAttempt=3, amqp_consumerQueue=input.hello, amqp_redelivered=false,
id=a15231e6-3f80-677b-5ad7-d4b1e61e486e,
amqp_consumerTag=amq.ctag-skBFapilvtZhDsn0k3ZmQg, contentType=application/json,
timestamp=1522327846136}]
at org.spring...integ...han...MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:107)
at. . . . .
Payload: blah
You can also facilitate immediate dispatch to DLQ (without re-tries) by setting max-attempts to '1'.
For example,
--spring.cloud.stream.bindings.uppercase-in-0.consumer.max-attempts=1
The RetryTemplate is part of the Spring Retry library. While it is out of scope of this document to
cover all of the capabilities of the RetryTemplate, we will mention the following consumer
properties that are specifically related to the RetryTemplate:
maxAttempts
The number of attempts to process the message.
Default: 3.
backOffInitialInterval
The backoff initial interval on retry.
Default: 1000.
backOffMaxInterval
The maximum backoff interval.
Default: 10000.
backOffMultiplier
The backoff multiplier.
Default: 2.0.
defaultRetryable
Whether exceptions thrown by the listener that are not listed in the retryableExceptions are
retryable.
Default: true.
retryableExceptions
A map of Throwable class names in the key and a boolean in the value. Specify those exceptions
(and subclasses) that will or won’t be retried. Also see defaultRetriable. Example:
spring.cloud.stream.bindings.input.consumer.retryable-exceptions.java.lang.IllegalStateException=false.
Default: empty.
While the preceding settings are sufficient for the majority of customization requirements, they may not satisfy certain complex requirements, at which point you may want to provide your own instance of the RetryTemplate. To do so, configure it as a bean in your application configuration. The application-provided instance will override the one provided by the framework. Also, to avoid conflicts, you must qualify the instance of the RetryTemplate you want the binder to use as @StreamRetryTemplate. For example,
@StreamRetryTemplate
public RetryTemplate myRetryTemplate() {
return new RetryTemplate();
}
As you can see from the above example, you don’t need to annotate it with @Bean, since @StreamRetryTemplate is a qualified @Bean.
If you need to be more precise with your RetryTemplate, you can specify the bean by name in your
ConsumerProperties to associate the specific retry bean per binding.
spring.cloud.stream.bindings.<foo>.consumer.retry-template-name=<your-retry-template-bean-name>
6. Binders
Spring Cloud Stream provides a Binder abstraction for use in connecting to physical destinations at
the external middleware. This section provides information about the main concepts behind the
Binder SPI, its main components, and implementation-specific details.
6.1. Producers and Consumers
The following image shows the general relationship of producers and consumers:
A producer is any component that sends messages to a binding destination. The binding destination can be bound to an external message broker with a Binder implementation for that broker. When invoking the bindProducer() method, the first parameter is the name of the destination within the broker, the second parameter is the instance of the local destination to which the producer sends messages, and the third parameter contains properties (such as a partition key expression) to be used within the adapter that is created for that binding destination.
A consumer is any component that receives messages from the binding destination. As with a
producer, the consumer can be bound to an external message broker. When invoking the
bindConsumer() method, the first parameter is the destination name, and a second parameter
provides the name of a logical group of consumers. Each group that is represented by consumer
bindings for a given destination receives a copy of each message that a producer sends to that
destination (that is, it follows normal publish-subscribe semantics). If there are multiple consumer
instances bound with the same group name, then messages are load-balanced across those
consumer instances so that each message sent by a producer is consumed by only a single
consumer instance within each group (that is, it follows normal queueing semantics).
The key point of the SPI is the Binder interface, which is a strategy for connecting inputs and
outputs to external middleware. The following listing shows the definition of the Binder interface:
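public interface Binder<T, C extends ConsumerProperties, P extends ProducerProperties> {
    Binding<T> bindConsumer(String bindingName, String group, T inboundBindTarget, C consumerProperties);

    Binding<T> bindProducer(String bindingName, T outboundBindTarget, P producerProperties);
}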
• Extended consumer and producer properties, allowing specific Binder implementations to add
supplemental properties that can be supported in a type-safe manner.
• A Spring @Configuration class that creates a bean of type Binder along with the middleware
connection infrastructure.
A binder implementation registers its configuration classes in a META-INF/spring.binders file on the classpath, as shown in the following example for Kafka:
kafka:\
org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfiguration
As mentioned earlier, the Binder abstraction is also one of the extension points of the framework. So if you can’t find a suitable binder in the preceding list, you can implement your own binder on top of Spring Cloud Stream. In the How to create a Spring Cloud Stream Binder from scratch post, a community member documents, in detail and with an example, the set of steps necessary to implement a custom binder. The steps are also highlighted in the Implementing Custom Binders section.
By default, Spring Cloud Stream relies on Spring Boot’s auto-configuration to configure the binding
process. If a single Binder implementation is found on the classpath, Spring Cloud Stream
automatically uses it. For example, a Spring Cloud Stream project that aims to bind only to
RabbitMQ can add the following dependency:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-rabbit</artifactId>
</dependency>
For the specific Maven coordinates of other binder dependencies, see the documentation of that
binder implementation.
Similar files exist for the other provided binder implementations (such as Kafka), and custom
binder implementations are expected to provide them as well. The key represents an identifying
name for the binder implementation, whereas the value is a comma-separated list of configuration
classes that each contain one and only one bean definition of type
org.springframework.cloud.stream.binder.Binder.
When multiple binders are on the classpath, you can specify which binder each binding should use, as follows:
spring.cloud.stream.bindings.input.binder=kafka
spring.cloud.stream.bindings.output.binder=rabbit
The following example shows a typical configuration for a processor application that connects to
two RabbitMQ broker instances:
spring:
cloud:
stream:
bindings:
input:
destination: thing1
binder: rabbit1
output:
destination: thing2
binder: rabbit2
binders:
rabbit1:
type: rabbit
environment:
spring:
rabbitmq:
host: <host1>
rabbit2:
type: rabbit
environment:
spring:
rabbitmq:
host: <host2>
The environment property of a particular binder can also be used for any Spring Boot property, including spring.main.sources, which can be useful for adding additional configurations for particular binders, e.g. overriding auto-configured beans.
For example:
environment:
spring:
main:
sources: com.acme.config.MyCustomBinderConfiguration
To activate a specific profile for the particular binder environment, you should use a
spring.profiles.active property:
environment:
spring:
profiles:
active: myBinderProfile
6.6. Customizing binders in multi binder applications
When an application has multiple binders in it and wants to customize the binders, that can be achieved by providing a BinderCustomizer implementation. In the case of applications with a single binder, this special customizer is not necessary, since the binder context can access the customization beans directly. However, this is not the case in a multi-binder scenario, since various binders live in different application contexts. By providing an implementation of the BinderCustomizer interface, the binders, although they reside in different application contexts, will receive the customization. Spring Cloud Stream ensures that the customizations take place before the applications start using the binders. The user must check for the binder type and then apply the necessary customizations.
@Bean
public BinderCustomizer binderCustomizer() {
return (binder, binderName) -> {
if (binder instanceof KafkaMessageChannelBinder) {
((KafkaMessageChannelBinder) binder).setRebalanceListener(...);
}
else if (binder instanceof KStreamBinder) {
...
}
else if (binder instanceof RabbitMessageChannelBinder) {
...
}
};
}
Note that, when there is more than one instance of the same type of binder, the binder name can be used to filter customizations.
For example, look at the following fragment from one of the test cases. As you can see, we retrieve the BindingsLifecycleController from the Spring application context and execute individual methods to control the lifecycle of the echo-in-0 binding.
BindingsLifecycleController bindingsController = context.getBean(BindingsLifecycleController.class);
Binding binding = bindingsController.queryState("echo-in-0");
assertThat(binding.isRunning()).isTrue();
bindingsController.changeState("echo-in-0", State.STOPPED);
// Alternative way of changing state. For convenience we expose start/stop and pause/resume operations.
// bindingsController.stop("echo-in-0");
assertThat(binding.isRunning()).isFalse();
6.7.2. Actuator
Since actuator and web are optional, you must first add one of the web dependencies as well as add
the actuator dependency manually. The following example shows how to add the dependency for
the Web framework:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
The following example shows how to add the dependency for the WebFlux framework:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
To run Spring Cloud Stream 2.0 apps in Cloud Foundry, you must add spring-boot-
starter-web and spring-boot-starter-actuator to the classpath. Otherwise, the
application will not start due to health check failures.
You must also enable the bindings actuator endpoints by setting the following property:
--management.endpoints.web.exposure.include=bindings.
Once those prerequisites are satisfied, you should see the following in the logs when the application starts:
: Mapped "{[/actuator/bindings/{name}],methods=[POST]. . .
: Mapped "{[/actuator/bindings],methods=[GET]. . .
: Mapped "{[/actuator/bindings/{name}],methods=[GET]. . .
Alternatively, to see a single binding, access a URL similar to the following: http://<host>:<port>/actuator/bindings/<bindingName>
You can also stop, start, pause, and resume individual bindings by posting to the same URL while
providing a state argument as JSON, as shown in the following examples:
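For example (a sketch; substitute your own host, port, and binding name):
curl -d '{"state":"STOPPED"}' -H "Content-Type: application/json" -X POST http://<host>:<port>/actuator/bindings/myBindingName

curl -d '{"state":"STARTED"}' -H "Content-Type: application/json" -X POST http://<host>:<port>/actuator/bindings/myBindingName

curl -d '{"state":"PAUSED"}' -H "Content-Type: application/json" -X POST http://<host>:<port>/actuator/bindings/myBindingName

curl -d '{"state":"RESUMED"}' -H "Content-Type: application/json" -X POST http://<host>:<port>/actuator/bindings/myBindingName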
PAUSED and RESUMED work only when the corresponding binder and its underlying
technology supports it. Otherwise, you see the warning message in the logs.
Currently, only Kafka binder supports the PAUSED and RESUMED states.
type
The binder type. It typically references one of the binders found on the classpath — in particular,
a key in a META-INF/spring.binders file.
inheritEnvironment
Whether the configuration inherits the environment of the application itself.
Default: true.
environment
Root for a set of properties that can be used to customize the environment of the binder. When
this property is set, the context in which the binder is being created is not a child of the
application context. This setting allows for complete separation between the binder components
and the application components.
Default: empty.
defaultCandidate
Whether the binder configuration is a candidate for being considered a default binder or can be
used only when explicitly referenced. This setting allows adding binder configurations without
interfering with the default processing.
Default: true.
To get started with a custom binder implementation, add the spring-cloud-stream dependency to your project:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
<version>${spring.cloud.stream.version}</version>
</dependency>
public class FileMessageBinderProvisioner implements ProvisioningProvider<ConsumerProperties, ProducerProperties> {

    @Override
    public ProducerDestination provisionProducerDestination(
            final String name,
            final ProducerProperties properties) {
        return new FileMessageDestination(name);
    }

    @Override
    public ConsumerDestination provisionConsumerDestination(
            final String name,
            final String group,
            final ConsumerProperties properties) {
        return new FileMessageDestination(name);
    }

    private class FileMessageDestination implements ProducerDestination, ConsumerDestination {

        private final String destination;

        private FileMessageDestination(final String destination) {
            this.destination = destination;
        }

        @Override
        public String getName() {
            return destination.trim();
        }

        @Override
        public String getNameForPartition(int partition) {
            throw new UnsupportedOperationException("Partitioning is not implemented for file messaging.");
        }
    }
}
The MessageProducer is responsible for consuming events and handling them as messages to the
client application that is configured to consume such events.
public class FileMessageProducer extends MessageProducerSupport {

    private final ConsumerDestination destination;
    private String previousPayload;

    public FileMessageProducer(ConsumerDestination destination) {
        this.destination = destination;
    }

    @Override
    public void doStart() {
        receive();
    }

    private void receive() {
        ScheduledExecutorService executorService = Executors.newScheduledThreadPool(1);
        executorService.scheduleWithFixedDelay(() -> {
            String payload = getPayload();
            if (payload != null) {
                Message<String> receivedMessage = MessageBuilder.withPayload(payload).build();
                archiveMessage(payload);
                sendMessage(receivedMessage);
            }
        }, 0, 50, MILLISECONDS);
    }

    // returns the latest line of the file backing this destination, or null if it has not changed
    private String getPayload() {
        try {
            List<String> allLines = Files.readAllLines(Paths.get(destination.getName()));
            String currentPayload = allLines.get(allLines.size() - 1);
            if (!currentPayload.equals(previousPayload)) {
                previousPayload = currentPayload;
                return currentPayload;
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return null;
    }

    private void archiveMessage(String payload) {
        // archive the processed payload so it is not re-sent (implementation detail)
    }
}
When implementing a custom binder, this step is not strictly mandatory as you could always resort to using an already existing MessageProducer implementation!
@Override
public void handleMessage(Message<?> message) throws MessagingException {
//write message to file
}
When implementing a custom binder, this step is not strictly mandatory as you
could always resort to using an already existing MessageHandler implementation!
You are now able to provide your own implementation of the Binder abstraction. This can be easily done by extending AbstractMessageChannelBinder, e.g.:
public class FileMessageBinder extends AbstractMessageChannelBinder<ConsumerProperties, ProducerProperties, FileMessageBinderProvisioner> {

    public FileMessageBinder(
            String[] headersToEmbed,
            FileMessageBinderProvisioner provisioningProvider) {
        super(headersToEmbed, provisioningProvider);
    }

    @Override
    protected MessageHandler createProducerMessageHandler(
            final ProducerDestination destination,
            final ProducerProperties producerProperties,
            final MessageChannel errorChannel) throws Exception {
        return message -> {
            String fileName = destination.getName();
            String payload = new String((byte[]) message.getPayload()) + "\n";
            try {
                Files.write(Paths.get(fileName), payload.getBytes(), CREATE, APPEND);
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        };
    }

    @Override
    protected MessageProducer createConsumerEndpoint(
            final ConsumerDestination destination,
            final String group,
            final ConsumerProperties properties) throws Exception {
        return new FileMessageProducer(destination);
    }
}
It is strictly required that you create a Spring Configuration to initialize the bean for your binder
implementation (and all other beans that you might need):
@Configuration
public class FileMessageBinderConfiguration {

    @Bean
    @ConditionalOnMissingBean
    public FileMessageBinderProvisioner fileMessageBinderProvisioner() {
        return new FileMessageBinderProvisioner();
    }

    @Bean
    @ConditionalOnMissingBean
    public FileMessageBinder fileMessageBinder(FileMessageBinderProvisioner fileMessageBinderProvisioner) {
        return new FileMessageBinder(null, fileMessageBinderProvisioner);
    }
}
Finally, you must define your binder in a META-INF/spring.binders file on the classpath, specifying both the name of the binder and the fully qualified name of your Binder Configuration class:
myFileBinder:\
com.example.springcloudstreamcustombinder.config.FileMessageBinderConfiguration
7. Configuration Options
Spring Cloud Stream supports general configuration options as well as configuration for bindings
and binders. Some binders let additional binding properties support middleware-specific features.
Configuration options can be provided to Spring Cloud Stream applications through any
mechanism supported by Spring Boot. This includes application arguments, environment variables,
and YAML or .properties files.
spring.cloud.stream.instanceCount
The number of deployed instances of an application. Must be set for partitioning on the
producer side. Must be set on the consumer side when using RabbitMQ and with Kafka if
autoRebalanceEnabled=false.
Default: 1.
spring.cloud.stream.instanceIndex
The instance index of the application: A number from 0 to instanceCount - 1. Used for
partitioning with RabbitMQ and with Kafka if autoRebalanceEnabled=false. Automatically set in
Cloud Foundry to match the application’s instance index.
spring.cloud.stream.dynamicDestinations
A list of destinations that can be bound dynamically (for example, in a dynamic routing
scenario). If set, only listed destinations can be bound.
spring.cloud.stream.defaultBinder
The default binder to use, if multiple binders are configured. See Multiple Binders on the
Classpath.
Default: empty.
spring.cloud.stream.overrideCloudConnectors
This property is only applicable when the cloud profile is active and Spring Cloud Connectors are
provided with the application. If the property is false (the default), the binder detects a suitable
bound service (for example, a RabbitMQ service bound in Cloud Foundry for the RabbitMQ
binder) and uses it for creating connections (usually through Spring Cloud Connectors). When
set to true, this property instructs binders to completely ignore the bound services and rely on
Spring Boot properties (for example, relying on the spring.rabbitmq.* properties provided in the
environment for the RabbitMQ binder). The typical usage of this property is to be nested in a
customized environment when connecting to multiple systems.
Default: false.
spring.cloud.stream.bindingRetryInterval
The interval (in seconds) between retrying binding creation when, for example, the binder does
not support late binding and the broker (for example, Apache Kafka) is down. Set it to zero to
treat such conditions as fatal, preventing the application from starting.
Default: 30.
For example, given the following function:
@Bean
public Function<String, String> uppercase() {
return v -> v.toUpperCase();
}
there are two bindings named uppercase-in-0 for input and uppercase-out-0 for output. See Binding
and Binding names for more details.
To avoid repetition, Spring Cloud Stream supports setting values for all bindings, in the format of
spring.cloud.stream.default.<property>=<value> and
spring.cloud.stream.default.<producer|consumer>.<property>=<value> for common binding
properties.
When it comes to avoiding repetitions for extended binding properties, this format should be used -
spring.cloud.stream.<binder-type>.default.<producer|consumer>.<property>=<value>.
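For instance (a sketch; the property values are illustrative):
spring.cloud.stream.default.contentType=application/json
spring.cloud.stream.rabbit.default.consumer.autoBindDlq=true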
The following binding properties are available for both input and output bindings and must be
prefixed with spring.cloud.stream.bindings.<bindingName>. (for example,
spring.cloud.stream.bindings.uppercase-in-0.destination=ticktock).
destination
The target destination of a binding on the bound middleware (for example, the RabbitMQ
exchange or Kafka topic). If binding represents a consumer binding (input), it could be bound to
multiple destinations, and the destination names can be specified as comma-separated String
values. If not, the actual binding name is used instead. The default value of this property cannot
be overridden.
group
The consumer group of the binding. Applies only to inbound bindings. See Consumer Groups.
contentType
The content type of this binding. See Content Type Negotiation.
Default: application/json.
binder
The binder used by this binding. See Multiple Binders on the Classpath for details.
The following binding properties are available for input bindings only and must be prefixed with
spring.cloud.stream.bindings.<bindingName>.consumer. (for example,
spring.cloud.stream.bindings.input.consumer.concurrency=3).
Default values can be set by using the spring.cloud.stream.default.consumer prefix (for example,
spring.cloud.stream.default.consumer.headerMode=none).
autoStartup
Signals if this consumer needs to be started automatically
Default: true.
concurrency
The concurrency of the inbound consumer.
Default: 1.
partitioned
Whether the consumer receives data from a partitioned producer.
Default: false.
headerMode
When set to none, disables header parsing on input. Effective only for messaging middleware
that does not support message headers natively and requires header embedding. This option is
useful when consuming data from non-Spring Cloud Stream applications when native headers
are not supported. When set to headers, it uses the middleware’s native header mechanism.
When set to embeddedHeaders, it embeds headers into the message payload.
maxAttempts
If processing fails, the number of attempts to process the message (including the first). Set to 1 to
disable retry.
Default: 3.
backOffInitialInterval
The backoff initial interval on retry.
Default: 1000.
backOffMaxInterval
The maximum backoff interval.
Default: 10000.
backOffMultiplier
The backoff multiplier.
Default: 2.0.
defaultRetryable
Whether exceptions thrown by the listener that are not listed in the retryableExceptions are
retryable.
Default: true.
instanceCount
When set to a value greater than or equal to zero, it allows customizing the instance count of this
consumer (if different from spring.cloud.stream.instanceCount). When set to a negative value, it
defaults to spring.cloud.stream.instanceCount. See Instance Index and Instance Count for more
information.
Default: -1.
instanceIndex
When set to a value greater than or equal to zero, it allows customizing the instance index of this
consumer (if different from spring.cloud.stream.instanceIndex). When set to a negative value, it
defaults to spring.cloud.stream.instanceIndex. Ignored if instanceIndexList is provided. See
Instance Index and Instance Count for more information.
Default: -1.
instanceIndexList
Used with binders that do not support native partitioning (such as RabbitMQ); allows an
application instance to consume from more than one partition.
Default: empty.
retryableExceptions
A map of Throwable class names in the key and a boolean in the value. Specify those exceptions
(and subclasses) that will or won’t be retried. Also see defaultRetriable. Example:
spring.cloud.stream.bindings.input.consumer.retryable-exceptions.java.lang.IllegalStateException=false.
Default: empty.
useNativeDecoding
When set to true, the inbound message is deserialized directly by the client library, which must
be configured correspondingly (for example, setting an appropriate Kafka producer value
deserializer). When this configuration is being used, the inbound message unmarshalling is not
based on the contentType of the binding. When native decoding is used, it is the responsibility of
the producer to use an appropriate encoder (for example, the Kafka producer value serializer) to
serialize the outbound message. Also, when native encoding and decoding is used, the
headerMode=embeddedHeaders property is ignored and headers are not embedded in the message.
See the producer property useNativeEncoding.
Default: false.
multiplex
When set to true, the underlying binder will natively multiplex destinations on the same input
binding.
Default: false.
For advanced configuration of the underlying message listener container for message-driven
consumers, add a single ListenerContainerCustomizer bean to the application context. It will be
invoked after the above properties have been applied and can be used to set additional properties.
Similarly, for polled consumers, add a MessageSourceCustomizer bean.
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer>
containerCustomizer() {
return (container, dest, group) -> container.setAdviceChain(advice1, advice2);
}
@Bean
public MessageSourceCustomizer<AmqpMessageSource> sourceCustomizer() {
return (source, dest, group) ->
source.setPropertiesConverter(customPropertiesConverter);
}
The following binding properties are available for output bindings only and must be prefixed with
spring.cloud.stream.bindings.<bindingName>.producer. (for example,
spring.cloud.stream.bindings.func-out-0.producer.partitionKeyExpression=payload.id).
Default values can be set by using the prefix spring.cloud.stream.default.producer (for example,
spring.cloud.stream.default.producer.partitionKeyExpression=payload.id).
autoStartup
Signals if this producer needs to be started automatically.
Default: true.
partitionKeyExpression
A SpEL expression that determines how to partition outbound data. If set, outbound data on this
binding is partitioned. partitionCount must be set to a value greater than 1 to be effective. See
Partitioning Support.
Default: null.
partitionKeyExtractorName
The name of the bean that implements PartitionKeyExtractorStrategy. Used to extract a key used
to compute the partition id (see 'partitionSelector*'). Mutually exclusive with
'partitionKeyExpression'.
Default: null.
partitionSelectorName
The name of the bean that implements PartitionSelectorStrategy. Used to determine partition id
based on partition key (see 'partitionKeyExtractor*'). Mutually exclusive with
'partitionSelectorExpression'.
Default: null.
partitionSelectorExpression
A SpEL expression for customizing partition selection. If neither is set, the partition is selected as
the hashCode(key) % partitionCount, where key is computed through either the partitionKeyExpression
or the configured PartitionKeyExtractorStrategy.
Default: null.
partitionCount
The number of target partitions for the data, if partitioning is enabled. Must be set to a value
greater than 1 if the producer is partitioned. On Kafka, it is interpreted as a hint. The larger of
this and the partition count of the target topic is used instead.
Default: 1.
requiredGroups
A comma-separated list of groups to which the producer must ensure message delivery even if
they start after it has been created (for example, by pre-creating durable queues in RabbitMQ).
headerMode
When set to none, it disables header embedding on output. It is effective only for messaging
middleware that does not support message headers natively and requires header embedding.
This option is useful when producing data for non-Spring Cloud Stream applications when
native headers are not supported. When set to headers, it uses the middleware’s native header
mechanism. When set to embeddedHeaders, it embeds headers into the message payload.
useNativeEncoding
When set to true, the outbound message is serialized directly by the client library, which must be
configured correspondingly (for example, setting an appropriate Kafka producer value
serializer). When this configuration is being used, the outbound message marshalling is not
based on the contentType of the binding. When native encoding is used, it is the responsibility of
the consumer to use an appropriate decoder (for example, the Kafka consumer value
deserializer) to deserialize the inbound message. Also, when native encoding and decoding is used,
the headerMode=embeddedHeaders property is ignored and headers are not embedded in the
message. See the consumer property useNativeDecoding.
Default: false.
errorChannelEnabled
When set to true, if the binder supports asynchronous send results, send failures are sent to an
error channel for the destination. See Error Handling for more information.
Default: false.
1. To convert the contents of the incoming message to match the signature of the application-
provided handler.
2. To convert the contents of the outgoing message to the wire format.
The wire format is typically byte[] (that is true for the Kafka and Rabbit binders), but it is governed
by the binder implementation.
As a supplement to the details to follow, you may also want to read the following
blog post.
8.1. Mechanics
To better understand the mechanics and the necessity behind content-type negotiation, we take a
look at a very simple use case by using the following message handler as an example:
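The handler itself is not reproduced here, but a minimal sketch consistent with the description
that follows could look like this (the Person type and the personFunction name are illustrative):

@Bean
public Function<Person, String> personFunction() {
  // Accepts a Person argument and produces a String result.
  return person -> person.toString();
}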
For simplicity, we assume that this is the only handler function in the application
(we assume there is no internal pipeline).
The handler shown in the preceding example expects a Person object as an argument and produces
a String type as an output. In order for the framework to succeed in passing the incoming Message
as an argument to this handler, it has to somehow transform the payload of the Message type from
the wire format to a Person type. In other words, the framework must locate and apply the
appropriate MessageConverter. To accomplish that, the framework needs some instructions from the
user. One of these instructions is already provided by the signature of the handler method itself
(Person type). Consequently, in theory, that should be (and, in some cases, is) enough. However, for
the majority of use cases, in order to select the appropriate MessageConverter, the framework needs
an additional piece of information. That missing piece is contentType.
Spring Cloud Stream provides three mechanisms to define contentType (in order of precedence):
1. HEADER: The contentType can be communicated through the Message itself. By providing a
contentType header, you declare the content type to use to locate and apply the appropriate
MessageConverter.
2. BINDING: The contentType can be set per destination binding by setting the
spring.cloud.stream.bindings.input.content-type property.
The input segment in the property name corresponds to the actual name of the
destination (which is “input” in our case). This approach lets you declare, on a
per-binding basis, the content type to use to locate and apply the appropriate
MessageConverter.
3. DEFAULT: If contentType is not present in the Message header or the binding, the default
application/json content type is used to locate and apply the appropriate MessageConverter.
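For example, to declare application/json on a per-binding basis for a binding named input, you
could set the following property:

spring.cloud.stream.bindings.input.content-type=application/json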
As mentioned earlier, the preceding list also demonstrates the order of precedence in case of a tie.
For example, a header-provided content type takes precedence over any other content type. The
same applies for a content type set on a per-binding basis, which essentially lets you override the
default content type. However, it also provides a sensible default (which was determined from
community feedback).
Another reason for making application/json the default stems from the interoperability
requirements driven by distributed microservices architectures, where producer and consumer not
only run in different JVMs but can also run on different non-JVM platforms.
When the non-void handler method returns, if the return value is already a Message, that Message
becomes the payload. However, when the return value is not a Message, the new Message is
constructed with the return value as the payload while inheriting headers from the input Message
minus the headers defined or filtered by
SpringIntegrationProperties.messageHandlerNotPropagatedHeaders. By default, there is only one
header set there: contentType. This means that the new Message does not have contentType header
set, thus ensuring that the contentType can evolve. You can always opt to return a Message from
the handler method, in which case you can inject any header you wish.
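For illustration, here is a hedged sketch of a handler that returns a Message and injects its own
header (the function name and header are illustrative; MessageBuilder is
org.springframework.messaging.support.MessageBuilder):

@Bean
public Function<String, Message<String>> uppercase() {
  // Building the Message explicitly lets us set any headers we wish.
  return value -> MessageBuilder.withPayload(value.toUpperCase())
      .setHeader("originalPayload", value)
      .build();
}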
If there is an internal pipeline, the Message is sent to the next handler by going through the same
process of conversion. However, if there is no internal pipeline or you have reached the end of it,
the Message is sent back to the output destination.
As mentioned earlier, for the framework to select the appropriate MessageConverter, it requires
argument type and, optionally, content type information. The logic for selecting the appropriate
MessageConverter resides with the argument resolvers (HandlerMethodArgumentResolvers), which
trigger right before the invocation of the user-defined handler method (which is when the actual
argument type is known to the framework). If the argument type does not match the type of the
current payload, the framework delegates to the stack of the pre-configured MessageConverters to
see if any one of them can convert the payload. As you can see, the Object fromMessage(Message<?>
message, Class<?> targetClass); operation of the MessageConverter takes targetClass as one of its
arguments. The framework also ensures that the provided Message always contains a contentType
header. When no contentType header was already present, it injects either the per-binding
contentType header or the default contentType header. The combination of contentType and argument
type is the mechanism by which the framework determines whether a message can be converted to a target
type. If no appropriate MessageConverter is found, an exception is thrown, which you can handle by
adding a custom MessageConverter (see User-defined Message Converters).
But what if the payload type matches the target type declared by the handler method? In this case,
there is nothing to convert, and the payload is passed unmodified. While this sounds pretty
straightforward and logical, keep in mind handler methods that take a Message<?> or Object as an
argument. By declaring the target type to be Object (which is an instanceof everything in Java), you
essentially forfeit the conversion process.
Do not expect Message to be converted into some other type based only on the
contentType. Remember that the contentType is complementary to the target type. If
you wish, you can provide a hint, which MessageConverter may or may not take into
consideration.
It is important to understand the contract of these methods and their usage, specifically in the
context of Spring Cloud Stream.
The fromMessage method converts an incoming Message to an argument type. The payload of the
Message could be any type, and it is up to the actual implementation of the MessageConverter to
support multiple types. For example, some JSON converter may support the payload type as byte[],
String, and others. This is important when the application contains an internal pipeline (that is,
input → handler1 → handler2 →. . . → output) and the output of the upstream handler results in a
Message which may not be in the initial wire format.
However, the toMessage method has a more strict contract and must always convert Message to the
wire format: byte[].
So, for all intents and purposes (and especially when implementing your own converter) you
regard the two methods as having the following signatures:
Object fromMessage(Message<?> message, Class<?> targetClass);
Message<?> toMessage(Object payload, @Nullable MessageHeaders headers);
When no appropriate converter is found, the framework throws an exception. When that happens,
you should check your code and configuration and ensure you did not miss anything (that is,
ensure that you provided a contentType by using a binding or a header). However, most likely, you
found some uncommon case (such as a custom contentType perhaps) and the current stack of
provided MessageConverters does not know how to convert. If that is the case, you can add custom
MessageConverter. See User-defined Message Converters.
The following example shows how to create a message converter bean to support a new content
type called application/bar:
@SpringBootApplication
public static class SinkApplication {
...
@Bean
public MessageConverter customMessageConverter() {
return new MyCustomMessageConverter();
}
}
public class MyCustomMessageConverter extends AbstractMessageConverter {

public MyCustomMessageConverter() {
super(new MimeType("application", "bar"));
}
@Override
protected boolean supports(Class<?> clazz) {
return (Bar.class.equals(clazz));
}
@Override
protected Object convertFromInternal(Message<?> message, Class<?> targetClass,
Object conversionHint) {
Object payload = message.getPayload();
return (payload instanceof Bar ? payload : new Bar((byte[]) payload));
}
}
Spring Cloud Stream also provides support for Avro-based converters and schema evolution. See
[schema-evolution] for details.
Inter-Application Communication
• Partitioning
Time Source (that has the binding named output) would set the following property:
spring.cloud.stream.bindings.output.destination=ticktock
Log Sink (that has the binding named input) would set the following property:
spring.cloud.stream.bindings.input.destination=ticktock
When Spring Cloud Stream applications are deployed through Spring Cloud Data Flow, these
properties are configured automatically; when Spring Cloud Stream applications are launched
independently, these properties must be set correctly. By default, spring.cloud.stream.instanceCount
is 1, and spring.cloud.stream.instanceIndex is 0.
In a scaled-up scenario, correct configuration of these two properties is important for addressing
partitioning behavior (see below) in general, and the two properties are always required by certain
binders (for example, the Kafka binder) in order to ensure that data are split correctly across
multiple consumer instances.
8.6. Partitioning
Partitioning in Spring Cloud Stream consists of two tasks:
• Configuring an output binding to send partitioned data
• Configuring an input binding to receive partitioned data
You can configure an output binding to send partitioned data by setting one and only one of its
partitionKeyExpression or partitionKeyExtractorName properties, as well as its partitionCount
property.
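For example, a typical partitioned producer configuration might look like the following (the
func-out-0 binding name is illustrative):

spring.cloud.stream.bindings.func-out-0.producer.partitionKeyExpression=payload.id
spring.cloud.stream.bindings.func-out-0.producer.partitionCount=5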
Based on that example configuration, data is sent to the target partition by using the following logic.
A partition key’s value is calculated for each message sent to a partitioned output binding based on
the partitionKeyExpression. The partitionKeyExpression is a SpEL expression that is evaluated
against the outbound message for extracting the partitioning key.
If a SpEL expression is not sufficient for your needs, you can instead calculate the partition key
value by providing an implementation of
org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy and configuring it as a
bean (by using the @Bean annotation). If you have more than one bean of type
org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy available in the
Application Context, you can further filter it by specifying its name with the
partitionKeyExtractorName property, as shown in the following example:
--spring.cloud.stream.bindings.func-out-0.producer.partitionKeyExtractorName=customPartitionKeyExtractor
--spring.cloud.stream.bindings.func-out-0.producer.partitionCount=5
. . .
@Bean
public CustomPartitionKeyExtractorClass customPartitionKeyExtractor() {
return new CustomPartitionKeyExtractorClass();
}
In previous versions of Spring Cloud Stream, you could specify the implementation
of org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy by
setting the spring.cloud.stream.bindings.output.producer.partitionKeyExtractorClass
property. Since version 3.0, this property is removed.
Once the message key is calculated, the partition selection process determines the target partition
as a value between 0 and partitionCount - 1. The default calculation, applicable in most scenarios,
is based on the following formula: key.hashCode() % partitionCount. This can be customized on the
binding, either by setting a SpEL expression to be evaluated against the 'key' (through the
partitionSelectorExpression property) or by configuring an implementation of
org.springframework.cloud.stream.binder.PartitionSelectorStrategy as a bean (by using the @Bean
annotation). Similar to the PartitionKeyExtractorStrategy, you can further filter it by using the
spring.cloud.stream.bindings.output.producer.partitionSelectorName property when more than
one bean of this type is available in the Application Context, as shown in the following example:
--spring.cloud.stream.bindings.func-out-0.producer.partitionSelectorName=customPartitionSelector
. . .
@Bean
public CustomPartitionSelectorClass customPartitionSelector() {
return new CustomPartitionSelectorClass();
}
In previous versions of Spring Cloud Stream you could specify the implementation
of org.springframework.cloud.stream.binder.PartitionSelectorStrategy by setting
the spring.cloud.stream.bindings.output.producer.partitionSelectorClass
property. Since version 3.0, this property is removed.
An input binding (with the binding name uppercase-in-0) is configured to receive partitioned data
by setting its partitioned property, as well as the instanceIndex and instanceCount properties on the
application itself, as shown in the following example:
spring.cloud.stream.bindings.uppercase-in-0.consumer.partitioned=true
spring.cloud.stream.instanceIndex=3
spring.cloud.stream.instanceCount=5
The instanceCount value represents the total number of application instances between which the
data should be partitioned. The instanceIndex must be a unique value across the multiple instances,
with a value between 0 and instanceCount - 1. The instance index helps each application instance
to identify the unique partition(s) from which it receives data. It is required by binders using
technology that does not support partitioning natively. For example, with RabbitMQ, there is a
queue for each partition, with the queue name containing the instance index. With Kafka, if
autoRebalanceEnabled is true (default), Kafka takes care of distributing partitions across instances,
and these properties are not required. If autoRebalanceEnabled is set to false, the instanceCount and
instanceIndex are used by the binder to determine which partition(s) the instance subscribes to
(you must have at least as many partitions as there are instances). The binder allocates the
partitions instead of Kafka. This might be useful if you want messages for a particular partition to
always go to the same instance. When a binder configuration requires them, it is important to set
both values correctly in order to ensure that all of the data is consumed and that the application
instances receive mutually exclusive datasets.
While a scenario in which using multiple instances for partitioned data processing may be complex
to set up in a standalone case, Spring Cloud Data Flow can simplify the process significantly by
populating both the input and output values correctly and by letting you rely on the runtime
infrastructure to provide information about the instance index and instance count.
9. Testing
Spring Cloud Stream provides support for testing your microservice applications without
connecting to a messaging system.
While such a lightweight approach is sufficient for a lot of cases, it usually requires additional
integration testing with real binders (for example, Rabbit or Kafka). So we are effectively deprecating it.
To begin bridging the gap between unit and integration testing we’ve developed a new test binder
which uses Spring Integration framework as an in-JVM Message Broker essentially giving you the
best of both worlds - a real binder without the networking.
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
<version>${spring.cloud.stream.version}</version>
<type>test-jar</type>
<scope>test</scope>
<classifier>test-binder</classifier>
</dependency>
Or for build.gradle.kts
testImplementation("org.springframework.cloud:spring-cloud-stream") {
artifact {
name = "spring-cloud-stream"
extension = "jar"
type = "test-jar"
classifier = "test-binder"
}
}
@SpringBootTest
@RunWith(SpringRunner.class)
public class SampleStreamTests {
@Autowired
private InputDestination input;
@Autowired
private OutputDestination output;
@Test
public void testEmptyConfiguration() {
this.input.send(new GenericMessage<byte[]>("hello".getBytes()));
assertThat(output.receive().getPayload()).isEqualTo("HELLO".getBytes());
}
@SpringBootApplication
@Import(TestChannelBinderConfiguration.class)
public static class SampleConfiguration {
@Bean
public Function<String, String> uppercase() {
return v -> v.toUpperCase();
}
}
}
And if you need more control or want to test several configurations in the same test suite you can
also do the following:
@EnableAutoConfiguration
public static class MyTestConfiguration {
@Bean
public Function<String, String> uppercase() {
return v -> v.toUpperCase();
}
}
. . .
@Test
public void sampleTest() {
try (ConfigurableApplicationContext context = new SpringApplicationBuilder(
TestChannelBinderConfiguration.getCompleteConfiguration(
MyTestConfiguration.class))
.run("--spring.cloud.function.definition=uppercase")) {
InputDestination source = context.getBean(InputDestination.class);
OutputDestination target = context.getBean(OutputDestination.class);
source.send(new GenericMessage<byte[]>("hello".getBytes()));
assertThat(target.receive().getPayload()).isEqualTo("HELLO".getBytes());
}
}
For cases where you have multiple bindings and/or multiple inputs and outputs, or simply want to
be explicit about names of the destination you are sending to or receiving from, the send() and
receive() methods of InputDestination and OutputDestination are overloaded to allow you to
provide the name of the input and output destination.
@EnableAutoConfiguration
public static class SampleFunctionConfiguration {
@Bean
public Function<String, String> uppercase() {
return value -> value.toUpperCase();
}
@Bean
public Function<String, String> reverse() {
return value -> new StringBuilder(value).reverse().toString();
}
}
Message<byte[]> inputMessage =
MessageBuilder.withPayload("Hello".getBytes()).build();
inputDestination.send(inputMessage, "uppercase-in-0");
inputDestination.send(inputMessage, "reverse-in-0");
For cases where you have additional mapping properties such as destination you should use those
names. For example, consider a different version of the preceding test where we explicitly map
inputs and outputs of the uppercase function to myInput and myOutput binding names:
@Test
public void testMultipleFunctions() {
try (ConfigurableApplicationContext context = new SpringApplicationBuilder(
TestChannelBinderConfiguration.getCompleteConfiguration(
SampleFunctionConfiguration.class))
.run(
"--spring.cloud.function.definition=uppercase;reverse",
"--spring.cloud.stream.bindings.uppercase-in-0.destination=myInput",
"--spring.cloud.stream.bindings.uppercase-out-0.destination=myOutput"
)) {
Message<byte[]> inputMessage =
MessageBuilder.withPayload("Hello".getBytes()).build();
inputDestination.send(inputMessage, "myInput");
inputDestination.send(inputMessage, "reverse-in-0");
Spring Integration Test Binder also allows you to write tests when working with
PollableMessageSource (see Using Polled Consumers for more details).
The important thing to understand, though, is that polling is not event-driven and that
PollableMessageSource is a strategy that exposes an operation to produce (poll for) a Message
(singular). How often you poll, how many threads you use, and where you poll from (a message
queue or a file system) is entirely up to you. In other words, it is your responsibility to configure
the poller, the threads, and the actual source of the Message. Luckily, Spring has plenty of
abstractions to configure exactly that.
@Import(TestChannelBinderConfiguration.class)
@EnableAutoConfiguration
public static class SamplePolledConfiguration {
  @Bean
  public ApplicationRunner poller(PollableMessageSource polledMessageSource,
      StreamBridge output, TaskExecutor taskScheduler) {
    return args -> {
      taskScheduler.execute(() -> {
        for (int i = 0; i < 3; i++) {
          try {
            if (!polledMessageSource.poll(m -> {
              String newPayload = ((String) m.getPayload()).toUpperCase();
              output.send("myOutput", newPayload);
            })) {
              Thread.sleep(2000);
            }
          }
          catch (Exception e) {
            // handle failure
          }
        }
      });
    };
  }
}
The above (very rudimentary) example produces 3 messages in 2-second intervals, sending them
to the output destination of the Source, which this binder routes to OutputDestination, where we
retrieve them (for any assertions). Currently, it prints the following:
Message 1: POLLED DATA
Message 2: POLLED DATA
Message 3: POLLED DATA
As you can see, the data is the same. That is because this binder defines a default implementation of
the actual MessageSource - the source from which the Messages are polled by using the poll() operation.
While sufficient for most testing scenarios, there are cases where you may want to define your own
MessageSource. To do so, simply configure a bean of type MessageSource in your test configuration,
providing your own implementation of Message sourcing.
@Bean
public MessageSource<?> source() {
return () -> new GenericMessage<>("My Own Data " + UUID.randomUUID());
}
DO NOT name this bean messageSource as it is going to be in conflict with the bean
of the same name (different type) provided by Spring Boot for unrelated reasons.
To enable health checks, you first need to enable both "web" and "actuator" by including their
dependencies (see Binding visualization and control).
You can use Spring Boot actuator health endpoint to access the health indicator - /actuator/health.
By default, you will only receive the top level application status when you hit the above endpoint.
In order to receive the full details from the binder specific health indicators, you need to include
the property management.endpoint.health.show-details with the value ALWAYS in your application.
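For example:

management.endpoint.health.show-details=ALWAYS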
Health indicators are binder-specific and certain binder implementations may not necessarily
provide a health indicator.
If you want to completely disable all health indicators available out of the box and instead provide
your own health indicators, you can do so by setting property management.health.binders.enabled to
false and then provide your own HealthIndicator beans in your application. In this case, the health
indicator infrastructure from Spring Boot will still pick up these custom beans. Even if you are not
disabling the binder health indicators, you can still enhance the health checks by providing your
own HealthIndicator beans in addition to the out of the box health checks.
When you have multiple binders in the same application, health indicators are enabled by default
unless the application turns them off by setting management.health.binders.enabled to false. In this
case, if the user wants to disable health check for a subset of the binders, then that should be done
by setting management.health.binders.enabled to false in the multi-binder configuration’s
environment. See Connecting to Multiple Systems for details on how environment specific
properties can be provided.
If there are multiple binders present in the classpath but not all of them are used in the application,
this may cause some issues in the context of health indicators. There may be implementation
specific details as to how the health checks are performed. For example, a Kafka binder may decide
the status as DOWN if there are no destinations registered by the binder.
Let’s take a concrete situation. Imagine you have both the Kafka and Kafka Streams binders present in
the classpath but only use the Kafka Streams binder in the application code, that is, you only provide
bindings using the Kafka Streams binder. Since the Kafka binder is not used and it has specific checks
to see if any destinations are registered, the binder health check will fail. The top-level application
health check status will be reported as DOWN. In this situation, you can simply remove the
dependency for the Kafka binder from your application, since you are not using it.
11. Samples
For Spring Cloud Stream samples, see the spring-cloud-stream-samples repository on GitHub.
When configuring your binder connections, you can use the values from an environment variable
as explained on the dataflow Cloud Foundry Server docs.
• RabbitMQ
• Apache Kafka
• Amazon Kinesis
• Google PubSub (partner maintained)
As mentioned earlier, the Binder abstraction is also one of the extension points of the framework.
So, if you can’t find a suitable binder in the preceding list, you can implement your own binder on
top of Spring Cloud Stream. In the How to create a Spring Cloud Stream Binder from scratch post, a
community member documents in detail, with an example, the steps necessary to implement a
custom binder. The steps are also highlighted in the Implementing Custom Binders section.
Copies of this document may be made for your own use and for distribution to others, provided
that you do not charge any fee for such copies and further provided that each copy contains this
Copyright Notice, whether distributed in print or electronically.
Preface
This section provides a brief overview of the Spring Cloud Task reference documentation. Think of
it as a map for the rest of the document. You can read this reference guide in a linear fashion or you
can skip sections if something does not interest you.
2. Getting help
Having trouble with Spring Cloud Task? We would like to help!
3. First Steps
If you are just getting started with Spring Cloud Task or with 'Spring' in general, we suggest
reading the getting-started.pdf chapter.
• System Requirements
To follow the tutorial, read Developing Your First Spring Cloud Task Application
To run your example, read Running the Example
Getting started
If you are just getting started with Spring Cloud Task, you should read this section. Here, we answer
the basic “what?”, “how?”, and “why?” questions. We start with a gentle introduction to Spring
Cloud Task. We then build a Spring Cloud Task application, discussing some core principles as we
go.
2. System Requirements
You need to have Java installed (Java 8 or better). To build, you need to have Maven installed as
well.
• DB2
• H2
• HSQLDB
• MySql
• Oracle
• Postgres
• SqlServer
The spring.io web site contains many “Getting Started” guides that use Spring
Boot. If you need to solve a specific problem, check there first. You can shortcut the
following steps by going to the Spring Initializr and creating a new project. Doing
so automatically generates a new project structure so that you can start coding
right away. We recommend experimenting with the Spring Initializr to become
familiar with it.
To do so:
1. Visit the Spring Initializr site.
a. Create a new Maven project with a Group name of io.spring.demo and an Artifact name of
helloworld.
b. In the Dependencies text box, type task and then select the Cloud Task dependency.
c. In the Dependencies text box, type jdbc and then select the JDBC dependency.
d. In the Dependencies text box, type h2 and then select the H2. (or your favorite database)
2. Unzip the helloworld.zip file and import the project into your favorite IDE.
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
@EnableTask
public class HelloworldApplication {

  @Bean
  public CommandLineRunner commandLineRunner() {
    return new HelloWorldCommandLineRunner();
  }

  public static void main(String[] args) {
    SpringApplication.run(HelloworldApplication.class, args);
  }

  public static class HelloWorldCommandLineRunner implements CommandLineRunner {

    @Override
    public void run(String... strings) throws Exception {
      System.out.println("Hello, World!");
    }
  }
}
While it may seem small, quite a bit is going on. For more about Spring Boot specifics, see the
Spring Boot reference documentation.
Now we can open the application.properties file in src/main/resources. We need to configure two
properties in application.properties:
• application.name: To set the application name (which is translated to the task name)
• logging.level: To set the logging for Spring Cloud Task to DEBUG in order to get a view of what is
going on.
logging.level.org.springframework.cloud.task=DEBUG
spring.application.name=helloWorld
When you include the Spring Cloud Task Starter dependency, Task auto-configures all beans to bootstrap
its functionality. Part of this configuration registers the TaskRepository and the infrastructure for its
use.
In our demo, the TaskRepository uses an embedded H2 database to record the results of a task. This
H2 embedded database is not a practical solution for a production environment, since the H2 DB
goes away once the task ends. However, for a quick getting-started experience, we can use this in
our example as well as echoing to the logs what is being updated in that repository. In the
Configuration section (later in this documentation), we cover how to customize the configuration of
the pieces provided by Spring Cloud Task.
When our sample application runs, Spring Boot launches our HelloWorldCommandLineRunner and
outputs our “Hello, World!” message to standard out. The TaskLifecycleListener records the start of
the task and the end of the task in the repository.
The main method serves as the entry point to any Java application. Our main method delegates to
Spring Boot’s SpringApplication class.
Spring includes many ways to bootstrap an application’s logic. Spring Boot provides a convenient
method of doing so in an organized manner through its *Runner interfaces (CommandLineRunner or
ApplicationRunner). A well behaved task can bootstrap any logic by using one of these two runners.
The lifecycle of a task is considered from before the *Runner#run methods are executed to once they
are all complete. Spring Boot lets an application use multiple *Runner implementations, as does
Spring Cloud Task.
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.0.3.RELEASE)
A simple task application can be found in the samples module of the Spring Cloud
Task Project here.
Features
This section goes into more detail about Spring Cloud Task, including how to use it, how to
configure it, and the appropriate extension points.
While this functionality is useful in a cloud environment, the same issues can arise in a traditional
deployment model as well. When running Spring Boot applications with a scheduler such as cron, it
can be useful to be able to monitor the results of the application after its completion.
Spring Cloud Task takes the approach that a Spring Boot application can have a start and an end
and still be successful. Batch applications are one example of how processes that are expected to
end (and that are often short-lived) can be helpful.
Spring Cloud Task records the lifecycle events of a given task. Most long-running processes, typified
by most web applications, do not save their lifecycle events. The tasks at the heart of Spring Cloud
Task do.
The lifecycle consists of a single task execution. This is a physical execution of a Spring Boot
application configured to be a task (that is, it has the Spring Cloud Task dependencies).
Upon completion of all of the *Runner#run calls from Spring Boot or the failure of an
ApplicationContext (indicated by an ApplicationFailedEvent), the task execution is updated in the
repository with the results.
Field | Description
executionid | The unique ID for the task’s run.
exitCode | The exit code generated from an ExitCodeExceptionMapper implementation. If there is no exit code generated but an ApplicationFailedEvent is thrown, 1 is set. Otherwise, it is assumed to be 0.
taskName | The name for the task, as determined by the configured TaskNameResolver.
startTime | The time the task was started, as indicated by the SmartLifecycle#start call.
endTime | The time the task was completed, as indicated by the ApplicationReadyEvent.
exitMessage | Any information available at the time of exit. This can programmatically be set by a TaskExecutionListener.
errorMessage | If an exception is the cause of the end of the task (as indicated by an ApplicationFailedEvent), the stack trace for that exception is stored here.
arguments | A List of the string command line arguments as they were passed into the executable boot application.
1.2. Mapping Exit Codes
When a task completes, it tries to return an exit code to the OS. If we take a look at our original
example, we can see that we are not controlling that aspect of our application. So, if an exception is
thrown, the JVM returns a code that may or may not be of any use to you in debugging.
Consequently, Spring Boot provides an interface, ExitCodeExceptionMapper, that lets you map
uncaught exceptions to exit codes. Doing so lets you indicate, at the level of exit codes, what went
wrong. Also, by mapping exit codes in this manner, Spring Cloud Task records the returned exit
code.
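A minimal sketch of such a mapping follows (the exception type and exit code chosen here are
illustrative):

@Bean
public ExitCodeExceptionMapper exitCodeExceptionMapper() {
  // Map uncaught IllegalArgumentExceptions to exit code 64; anything else to 1.
  return exception -> (exception instanceof IllegalArgumentException) ? 64 : 1;
}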
If the task terminates with a SIG-INT or a SIG-TERM, the exit code is zero unless otherwise specified
within the code.
While the task is running, the exit code is stored as a null in the repository. Once
the task completes, the appropriate exit code is stored based on the guidelines
described earlier in this section.
2. Configuration
Spring Cloud Task provides a ready-to-use configuration, as defined in the DefaultTaskConfigurer
and SimpleTaskConfiguration classes. This section walks through the defaults and how to customize
Spring Cloud Task for your needs.
2.1. DataSource
Spring Cloud Task uses a datasource for storing the results of task executions. By default, we
provide an in-memory instance of H2 to provide a simple method of bootstrapping development.
However, in a production environment, you probably want to configure your own DataSource.
If your application uses only a single DataSource and that serves as both your business schema and
the task repository, all you need to do is provide any DataSource (the easiest way to do so is through
Spring Boot’s configuration conventions). This DataSource is automatically used by Spring Cloud
Task for the repository.
If your application uses more than one DataSource, you need to configure the task repository with
the appropriate DataSource. This customization can be done through an implementation of
TaskConfigurer.
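A minimal sketch of such a TaskConfigurer, assuming a dedicated DataSource exposed under a
hypothetical taskDataSource qualifier:

@Bean
public TaskConfigurer taskConfigurer(@Qualifier("taskDataSource") DataSource dataSource) {
  // Point the task repository and related infrastructure at the task DataSource.
  return new DefaultTaskConfigurer(dataSource);
}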
By using the spring.cloud.task.tablePrefix, a user assumes the responsibility to create task
tables that meet the criteria for the task table schema, with any modifications required for the
user’s business needs. You can utilize the Spring Cloud Task Schema DDL as a guide when
creating your own Task DDL, as seen here.
spring.cloud.task.initialize-enabled=false
It defaults to true.
In order to configure your Task to use a generated TaskExecutionId, add the following property:
spring.cloud.task.executionid=yourtaskId
spring.cloud.task.external-execution-id=<externalTaskId>
spring.cloud.task.parent-execution-id=<parentExecutionTaskId>
2.7. TaskConfigurer
The TaskConfigurer is a strategy interface that lets you customize the way components of Spring
Cloud Task are configured. By default, we provide the DefaultTaskConfigurer that provides logical
defaults: Map-based in-memory components (useful for development if no DataSource is provided)
and JDBC based components (useful if there is a DataSource available).
You can customize any of the components described in the preceding table by creating a custom
implementation of the TaskConfigurer interface. Typically, extending the DefaultTaskConfigurer
(which is provided if a TaskConfigurer is not found) and overriding the required getter is sufficient.
However, implementing your own from scratch may be required.
Users should not use getter methods from a TaskConfigurer directly unless
they are using it to supply implementations to be exposed as Spring Beans.
By default, Spring Cloud Task provides the SimpleTaskNameResolver, which uses the following options
(in order of precedence):
1. A Spring Boot property (configured in any of the ways Spring Boot allows) called
spring.cloud.task.name.
2. The application name as resolved using Spring Boot’s rules (obtained through
ApplicationContext#getId).
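For example:

spring.cloud.task.name=helloWorld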
2.9. Task Execution Listener
TaskExecutionListener lets you register listeners for specific events that occur during the task
lifecycle. To do so, create a class that implements the TaskExecutionListener interface. The class that
implements the TaskExecutionListener interface is notified of the following events:
• onTaskStartup: Prior to storing the TaskExecution entry in the TaskRepository.
• onTaskEnd: Prior to updating the TaskExecution entry in the TaskRepository and marking the final
state of the task.
• onTaskFailed: Prior to the onTaskEnd method being invoked when an unhandled exception is
thrown by the task.
Spring Cloud Task also lets you add TaskExecution Listeners to methods within a bean by using the
following method annotations:
• @BeforeTask: Prior to the storing of the TaskExecution entry in the TaskRepository.
• @AfterTask: Prior to the updating of the TaskExecution entry in the TaskRepository marking the
final state of the task.
• @FailedTask: Prior to the @AfterTask method being invoked when an unhandled exception is
thrown by the task.
public class MyBean {

@BeforeTask
public void methodA(TaskExecution taskExecution) {
}
@AfterTask
public void methodB(TaskExecution taskExecution) {
}
@FailedTask
public void methodC(TaskExecution taskExecution, Throwable throwable) {
}
}
If an exception is thrown by a TaskExecutionListener event handler, all listener processing for that
event handler stops. For example, if three onTaskStartup listeners have started and the first
onTaskStartup event handler throws an exception, the other two onTaskStartup methods are not
called. However, the other event handlers (onTaskEnd and onTaskFailed) for the
TaskExecutionListeners are called.
The exit code returned when an exception is thrown by a TaskExecutionListener event handler is the
exit code that was reported by the ExitCodeEvent. If no ExitCodeEvent is emitted, the Exception
thrown is evaluated to see if it is of type ExitCodeGenerator. If so, it returns the exit code from the
ExitCodeGenerator. Otherwise, 1 is returned.
In the case that an exception is thrown in an onTaskStartup method, the exit code for the application
will be 1. If an exception is thrown in either an onTaskEnd or onTaskFailed method, the exit code for
the application will be the one established using the rules enumerated above.
You can set the exit message for a task programmatically by using a TaskExecutionListener. This is
done by setting the TaskExecution’s exitMessage, which then gets passed into the
TaskExecutionListener. The following example shows a method that is annotated with the
@AfterTask annotation:
@AfterTask
public void afterMe(TaskExecution taskExecution) {
taskExecution.setExitMessage("AFTER EXIT MESSAGE");
}
An ExitMessage can be set at any of the listener events (onTaskStartup, onTaskFailed, and onTaskEnd).
The order of precedence for the three listeners follows:
1. onTaskEnd
2. onTaskFailed
3. onTaskStartup
For example, if you set an exitMessage for the onTaskStartup and onTaskFailed listeners and the task
ends without failing, the exitMessage from the onTaskStartup is stored in the repository. Otherwise, if
a failure occurs, the exitMessage from the onTaskFailed is stored. Also if you set the exitMessage with
an onTaskEnd listener, the exitMessage from the onTaskEnd supersedes the exit messages from both
the onTaskStartup and onTaskFailed.
spring.cloud.task.single-instance-enabled=true or false
To use this feature, you must add the following Spring Integration dependencies to your
application:
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-jdbc</artifactId>
</dependency>
The exit code for the application will be 1 if the task fails because this feature is
enabled and another task is running with the same task name.
@EnableAutoConfiguration(exclude={SimpleTaskAutoConfiguration.class})
Another case for closing the context is when the task execution completes but the application
does not terminate. In these cases, the context is held open because a thread has been allocated (for
example, if you are using a TaskExecutor). In these cases, set the
spring.cloud.task.closecontextEnabled property to true when launching your task. This closes
the application’s context once the task is complete, thus allowing the application to terminate.
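For example:

spring.cloud.task.closecontextEnabled=true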
Batch
This section goes into more detail about Spring Cloud Task’s integration with Spring Batch. Tracking
the association between a job execution and the task in which it was executed as well as remote
partitioning through Spring Cloud Deployer are covered in this section.
Spring Cloud Task achieves this functionality by using the TaskBatchExecutionListener. By default,
this listener is auto configured in any context that has both a Spring Batch Job configured (by
having a bean of type Job defined in the context) and the spring-cloud-task-batch jar on the
classpath. The listener is injected into all jobs that meet those conditions.
To only have the listener injected into particular jobs within the context, override the
batchTaskExecutionListenerBeanPostProcessor and provide a list of job bean IDs, as shown in the
following example:
@Bean
public TaskBatchExecutionListenerBeanPostProcessor batchTaskExecutionListenerBeanPostProcessor() {
  TaskBatchExecutionListenerBeanPostProcessor postProcessor =
      new TaskBatchExecutionListenerBeanPostProcessor();
  // Only inject the listener into these job beans (the names are illustrative).
  postProcessor.setJobNames(Arrays.asList("job1", "job2"));
  return postProcessor;
}
You can find a sample batch application in the samples module of the Spring Cloud
Task Project, here.
2. Remote Partitioning
Spring Cloud Deployer provides facilities for launching Spring Boot-based applications on most
cloud infrastructures. The DeployerPartitionHandler and DeployerStepExecutionHandler delegate the
launching of worker step executions to Spring Cloud Deployer.
@Bean
public PartitionHandler partitionHandler(TaskLauncher taskLauncher,
JobExplorer jobExplorer) throws Exception {
Resource resource =
MavenResource.parse(String.format("%s:%s:%s",
"io.spring.cloud",
"partitioned-batch-job",
"1.1.0.RELEASE"), mavenProperties);
DeployerPartitionHandler partitionHandler =
new DeployerPartitionHandler(taskLauncher, jobExplorer, resource,
"workerStep");
partitionHandler.setCommandLineArgsProvider(
new PassThroughCommandLineArgsProvider(commandLineArgs));
partitionHandler.setEnvironmentVariablesProvider(new
NoOpEnvironmentVariablesProvider());
partitionHandler.setMaxWorkers(2);
partitionHandler.setApplicationName("PartitionedBatchJobTask");
return partitionHandler;
}
Notice in the example above that we have set the maximum number of workers to 2. Setting the
maximum number of workers establishes the maximum number of partitions that should be running at
one time.
@Bean
public DeployerStepExecutionHandler stepExecutionHandler(JobExplorer jobExplorer) {
DeployerStepExecutionHandler handler =
new DeployerStepExecutionHandler(this.context, jobExplorer,
this.jobRepository);
return handler;
}
You can find a sample remote partition application in the samples module of the
Spring Cloud Task project, here.
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-deployer-kubernetes</artifactId>
</dependency>
• The application name for the task application and its partitions needs to match the following
regex pattern: [a-z0-9]([-a-z0-9]*[a-z0-9]). Otherwise, an exception is thrown.
• When configuring the partition handler, Cloud Foundry Deployment environment variables
need to be established so that the partition handler can start the partitions. The following list
shows the required environment variables:
◦ spring_cloud_deployer_cloudfoundry_url
◦ spring_cloud_deployer_cloudfoundry_org
◦ spring_cloud_deployer_cloudfoundry_space
◦ spring_cloud_deployer_cloudfoundry_domain
◦ spring_cloud_deployer_cloudfoundry_username
◦ spring_cloud_deployer_cloudfoundry_password
◦ spring_cloud_deployer_cloudfoundry_services
◦ spring_cloud_deployer_cloudfoundry_taskTimeout
An example set of deployment environment variables for a partitioned task that uses a mysql
database service might resemble the following:
spring_cloud_deployer_cloudfoundry_url=https://api.local.pcfdev.io
spring_cloud_deployer_cloudfoundry_org=pcfdev-org
spring_cloud_deployer_cloudfoundry_space=pcfdev-space
spring_cloud_deployer_cloudfoundry_domain=local.pcfdev.io
spring_cloud_deployer_cloudfoundry_username=admin
spring_cloud_deployer_cloudfoundry_password=admin
spring_cloud_deployer_cloudfoundry_services=mysql
spring_cloud_deployer_cloudfoundry_taskTimeout=300
This functionality uses a new CommandLineRunner that replaces the one provided by Spring Boot. By
default, it is configured with the same order. However, if you want to customize the order in which
the CommandLineRunner is run, you can set its order by setting the
spring.cloud.task.batch.commandLineRunnerOrder property. To have your task return the exit code
based on the result of the batch job execution, you need to write your own CommandLineRunner.
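For example, to run it at order 100:

spring.cloud.task.batch.commandLineRunnerOrder=100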
To obtain the starter for Maven, add the following to your build:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-single-step-batch-job</artifactId>
<version>2.3.0</version>
</dependency>
To obtain the starter for Gradle, add the following to your build:
compile "org.springframework.cloud:spring-cloud-starter-single-step-batch-job:2.3.0"
1. Defining a Job
You can use the starter to define as little as an ItemReader or an ItemWriter or as much as a full Job.
In this section, we define which properties are required to be defined to configure a Job.
1.1. Properties
To begin, the starter provides a set of properties that let you configure the basics of a Job with one
Step:
With the above properties configured, you have a job with a single, chunk-based step. This chunk-
based step reads, processes, and writes Map<String, Object> instances as the items. However, the
step does not yet do anything. You need to configure an ItemReader, an optional ItemProcessor, and
an ItemWriter to give it something to do. To configure one of these, you can either use properties and
configure one of the options that has provided autoconfiguration or you can configure your own
with the standard Spring configuration mechanisms.
If you configure your own, the input and output types must match the others in
the step. The ItemReader implementations and ItemWriter implementations in this
starter all use a Map<String, Object> as the input and the output item.
2.1. AmqpItemReader
You can read from a queue or topic with AMQP by using the AmqpItemReader. The autoconfiguration
for this ItemReader implementation is dependent upon two sets of configuration. The first is the
configuration of an AmqpTemplate. You can either configure this yourself or use the
autoconfiguration provided by Spring Boot. See the Spring Boot AMQP documentation. Once you
have configured the AmqpTemplate, you can enable the batch capabilities to support it by setting the
following properties:
2.2. FlatFileItemReader
FlatFileItemReader lets you read from flat files (such as CSVs and other file formats). To read from a
file, you can provide some components yourself through normal Spring configuration
(LineTokenizer, RecordSeparatorPolicy, FieldSetMapper, LineMapper, or SkippedLinesCallback). You can
also use the following properties to configure the reader:
2.3. JdbcCursorItemReader
The JdbcCursorItemReader runs a query against a relational database and iterates over the resulting
cursor (ResultSet) to provide the resulting items. This autoconfiguration lets you provide a
PreparedStatementSetter, a RowMapper, or both. You can also use the following properties to configure
a JdbcCursorItemReader:
2.4. KafkaItemReader
Ingesting a partition of data from a Kafka topic is useful and exactly what the KafkaItemReader can
do. To configure a KafkaItemReader, two pieces of configuration are required. First, configuring
Kafka with Spring Boot’s Kafka autoconfiguration is required (see the Spring Boot Kafka
documentation). Once you have configured the Kafka properties from Spring Boot, you can
configure the KafkaItemReader itself by setting the following properties:
3. ItemProcessor Configuration
The single-step batch job autoconfiguration accepts an ItemProcessor if one is available within the
ApplicationContext. If one is found of the correct type (ItemProcessor<Map<String, Object>,
Map<String, Object>>), it is autowired into the step.
4.1. AmqpItemWriter
To write to a RabbitMQ queue, you need two sets of configuration. First, you need an AmqpTemplate.
The easiest way to get this is by using Spring Boot’s RabbitMQ autoconfiguration. See the Spring
Boot RabbitMQ documentation. Once you have configured the AmqpTemplate, you can configure the
AmqpItemWriter by setting the following properties:
4.2. FlatFileItemWriter
To write a file as the output of the step, you can configure FlatFileItemWriter. Autoconfiguration
accepts components that have been explicitly configured (such as LineAggregator, FieldExtractor,
FlatFileHeaderCallback, or a FlatFileFooterCallback) and components that have been configured by
setting the following properties specified:
4.3. JdbcBatchItemWriter
To write the output of a step to a relational database, this starter provides the ability to
autoconfigure a JdbcBatchItemWriter. The autoconfiguration lets you provide your own
ItemPreparedStatementSetter or ItemSqlParameterSourceProvider and configuration options by
setting the following properties:
4.4. KafkaItemWriter
To write step output to a Kafka topic, you need KafkaItemWriter. This starter provides
autoconfiguration for a KafkaItemWriter by using facilities from two places. The first is Spring Boot’s
Kafka autoconfiguration. (See the Spring Boot Kafka documentation.) Second, this starter lets you
configure two properties on the writer.
For more about the configuration options for the KafkaItemWriter, see the KafkaItemWriter
documentation.
• applicationName: The name that is associated with the task. If no applicationName is set, the
TaskLaunchRequest generates a task name comprised of the following: Task-<UUID>.
• commandLineArguments: A list containing the command line arguments for the task.
• deploymentProperties: A map containing the properties that are used by the deployer to deploy
the task.
For example, a stream can be created that has a processor that takes in data from an HTTP source
and creates a GenericMessage that contains the TaskLaunchRequest and sends the message to its
output channel. The task sink would then receive the message from its input channel and then
launch the task.
To create a taskSink, you need only create a Spring Boot application that includes the
@EnableTaskLauncher annotation, as shown in the following example:
@SpringBootApplication
@EnableTaskLauncher
public class TaskSinkApplication {
public static void main(String[] args) {
SpringApplication.run(TaskSinkApplication.class, args);
}
}
The samples module of the Spring Cloud Task project contains a sample Sink and Processor. To
install these samples into your local maven repository, run a maven build from the spring-cloud-
task-samples directory with the skipInstall property set to false, as shown in the following
example:
The following example shows how to create a stream from the Spring Cloud Data Flow shell:
@SpringBootApplication
public class TaskEventsApplication {
@Configuration
public static class TaskConfiguration {
@Bean
public CommandLineRunner commandLineRunner() {
return new CommandLineRunner() {
@Override
public void run(String... args) throws Exception {
System.out.println("The CommandLineRunner was executed");
}
};
}
}
}
A sample task event application can be found in the samples module of the Spring
Cloud Task Project, here.
These listeners are autoconfigured into any AbstractJob when the appropriate beans (a Job and a
TaskLifecycleListener) exist in the context. Configuration to listen to these events is handled the
same way binding to any other Spring Cloud Stream channel is done. Our task (the one running the
batch job) serves as a Source, with the listening applications serving as either a Processor or a Sink.
An example could be to have an application listening to the job-execution-events channel for the
start and stop of a job. To configure the listening application, you would configure the input to be
job-execution-events as follows:
spring.cloud.stream.bindings.input.destination=job-execution-events
A sample batch event application can be found in the samples module of the
Spring Cloud Task Project, here.
spring.cloud.stream.bindings.step-execution-events.destination=my-step-execution-events
spring.cloud.task.batch.events.enabled=false
The following listing shows individual listeners that you can disable:
spring.cloud.task.batch.events.job-execution.enabled=false
spring.cloud.task.batch.events.step-execution.enabled=false
spring.cloud.task.batch.events.chunk.enabled=false
spring.cloud.task.batch.events.item-read.enabled=false
spring.cloud.task.batch.events.item-process.enabled=false
spring.cloud.task.batch.events.item-write.enabled=false
spring.cloud.task.batch.events.skip.enabled=false
spring.cloud.task.batch.events.job-execution-order=5
spring.cloud.task.batch.events.step-execution-order=5
spring.cloud.task.batch.events.chunk-order=5
spring.cloud.task.batch.events.item-read-order=5
spring.cloud.task.batch.events.item-process-order=5
spring.cloud.task.batch.events.item-write-order=5
spring.cloud.task.batch.events.skip-order=5
Appendices
1. Task Repository Schema
This appendix provides an ERD for the database schema used in the task repository.
Column Name | Required | Type | Field Length | Notes
END_TIME | FALSE | DATETIME | X | Spring Cloud Task Framework at app exit establishes the value.
TASK_NAME | FALSE | VARCHAR | 100 | Spring Cloud Task Framework at app startup will set this to "Application" unless the user establishes the name using spring.cloud.task.name, as discussed here.
EXIT_CODE | FALSE | INTEGER | X | Follows Spring Boot defaults unless overridden by the user, as discussed here.
ERROR_MESSAGE | FALSE | VARCHAR | 2500 | Spring Cloud Task Framework at app exit establishes the value.
TASK_EXECUTION_PARAMS
Stores the parameters used for a task execution
TASK_TASK_BATCH
Used to link the task execution to the batch execution.
TASK_LOCK
Used for the single-instance-enabled feature discussed here.
Column Name | Required | Type | Field Length | Notes
REGION | TRUE | VARCHAR | 100 | User can establish a group of locks using this field.
CLIENT_ID | TRUE | CHAR | 36 | The task execution id that contains the name of the app to lock.
CREATED_DATE | TRUE | DATETIME | X | The date that the entry was created.
The DDL for setting up tables for each database type can be found here.
Spring Cloud Vault Config provides client-side support for externalized configuration in a
distributed system. With HashiCorp’s Vault you have a central place to manage external secret
properties for applications across all environments. Vault can manage static and dynamic secrets
such as username/password for remote applications/resources and provide credentials for external
services such as MySQL, PostgreSQL, Apache Cassandra, Couchbase, MongoDB, Consul, AWS and
more.
2. Quick Start
Prerequisites
To get started with Vault and this guide you need a *NIX-like operating system that provides wget, openssl, and unzip (all are used by the setup steps below).
This guide explains Vault setup from a Spring Cloud Vault perspective for
integration testing. You can find a getting started guide directly on the Vault
project site: learn.hashicorp.com/vault
Install Vault
$ wget https://releases.hashicorp.com/vault/${vault_version}/vault_${vault_version}_${platform}.zip
$ unzip vault_${vault_version}_${platform}.zip
Next, create the SSL certificates that the listener configuration below references:
• a Root CA
• a server certificate for localhost, issued by that CA (work/ca/certs/localhost.cert.pem and its decrypted key)
backend "inmem" {
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_cert_file = "work/ca/certs/localhost.cert.pem"
tls_key_file = "work/ca/private/localhost.decrypted.key.pem"
}
disable_mlock = true
Vault starts listening on 0.0.0.0:8200, using the inmem storage backend and HTTPS. At startup,
Vault is sealed and not initialized.
If you want to run tests, leave Vault uninitialized. The tests will initialize Vault and
create a root token 00000000-0000-0000-0000-000000000000.
If you want to use Vault for your application or give it a try then you need to initialize it first.
$ export VAULT_ADDR="https://localhost:8200"
$ export VAULT_SKIP_VERIFY=true # Don't do this for production
$ vault init
You should see something like:
Key 1: 7149c6a2e16b8833f6eb1e76df03e47f6113a3288b3093faf5033d44f0e70fe701
Key 2: 901c534c7988c18c20435a85213c683bdcf0efcd82e38e2893779f152978c18c02
Key 3: 03ff3948575b1165a20c20ee7c3e6edf04f4cdbe0e82dbff5be49c63f98bc03a03
Key 4: 216ae5cc3ddaf93ceb8e1d15bb9fc3176653f5b738f5f3d1ee00cd7dccbe926e04
Key 5: b2898fc8130929d569c1677ee69dc5f3be57d7c4b494a6062693ce0b1c4d93d805
Initial Root Token: 19aefa97-cccc-bbbb-aaaa-225940e63d76
Vault does not store the master key. Without at least 3 keys,
your Vault will remain permanently sealed.
Vault will initialize and return a set of unsealing keys and the root token. Pick 3 keys and unseal
Vault. Store the Vault token in the VAULT_TOKEN environment variable.
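For example, using three of the keys and the root token from the sample output above (note that newer Vault versions use vault operator unseal instead of vault unseal):
$ vault unseal 7149c6a2e16b8833f6eb1e76df03e47f6113a3288b3093faf5033d44f0e70fe701
$ vault unseal 901c534c7988c18c20435a85213c683bdcf0efcd82e38e2893779f152978c18c02
$ vault unseal 03ff3948575b1165a20c20ee7c3e6edf04f4cdbe0e82dbff5be49c63f98bc03a03
$ export VAULT_TOKEN=19aefa97-cccc-bbbb-aaaa-225940e63d76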
Spring Cloud Vault accesses different resources. By default, the secret backend is enabled which
accesses secret config settings via JSON endpoints.
/secret/{application}/{profile}
/secret/{application}
/secret/{defaultContext}/{profile}
/secret/{defaultContext}
where the "application" is injected as the spring.application.name in the SpringApplication (i.e. what
is normally "application" in a regular Spring Boot app), "profile" is an active profile (or comma-
separated list of properties). Properties retrieved from Vault will be used "as-is" without further
prefixing of the property names.
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.4.0</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-vault-config</artifactId>
<version>3.0.3</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
Then you can create a standard Spring Boot application, like this simple HTTP server:
@SpringBootApplication
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello World!";
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
When it runs, it picks up the external configuration from the default local Vault server on port
8200 if that server is running. To modify the startup behavior, you can change the location of the
Vault server by using application.yml (or application.properties), for example:
spring.cloud.vault:
    host: localhost
    port: 8200
    scheme: https
    uri: https://localhost:8200
    connection-timeout: 5000
    read-timeout: 15000
spring.config.import: vault://
• host sets the hostname of the Vault host. The host name is used for SSL certificate validation.
• scheme setting the scheme to http will use plain HTTP. Supported schemes are http and https.
• uri configures the Vault endpoint with a URI. Takes precedence over host/port/scheme configuration.
• spring.config.import mounts Vault as a PropertySource using all enabled secret backends (key-value enabled by default).
The vault health indicator can be enabled or disabled through the property
management.health.vault.enabled (defaults to true).
With Spring Cloud Vault 3.0 and Spring Boot 2.4, the bootstrap context
initialization (bootstrap.yml, bootstrap.properties) of property sources was
deprecated. Instead, Spring Cloud Vault favors Spring Boot’s Config Data API which
allows importing configuration from Vault. With Spring Boot Config Data
approach, you need to set the spring.config.import property in order to bind to
Vault. You can read more about it in the Config Data Locations section. You can
enable the bootstrap context either by setting the configuration property
spring.cloud.bootstrap.enabled=true or by including the dependency
org.springframework.cloud:spring-cloud-starter-bootstrap.
3.1. Authentication
Vault requires an authentication mechanism to authorize client requests.
Spring Cloud Vault supports multiple authentication mechanisms to authenticate applications with
Vault.
For a quickstart, use the root token printed by the Vault initialization.
spring.cloud.vault:
token: 19aefa97-cccc-bbbb-aaaa-225940e63d76
spring.config.import: vault://
4. ConfigData API
Since version 2.4, Spring Boot provides a ConfigData API that allows the declaration of
configuration sources and imports them as property sources.
As of version 3.0, Spring Cloud Vault uses the ConfigData API to mount Vault's secret backends as
property sources. In previous versions, the Bootstrap context was used. The ConfigData API is much
more flexible, as it allows specifying which configuration systems to import and in which order.
You can enable the deprecated bootstrap context either by setting the
configuration property spring.cloud.bootstrap.enabled=true or by including the
dependency org.springframework.cloud:spring-cloud-starter-bootstrap.
Using the default location mounts property sources for all enabled Secret Backends. Without
further configuration, Spring Cloud Vault mounts the key-value backend at
/secret/${spring.application.name}. Each activated profile adds another context path following the
form /secret/${spring.application.name}/${profile}. Adding further modules to the classpath, such
as spring-cloud-config-databases, provides additional secret backend configuration options which
get mounted as property sources if enabled.
If you want to control which context paths are mounted from Vault as PropertySource, you can
either use a contextual location (vault:///my/context/path) or configure a VaultConfigurer.
Contextual locations are specified and mounted individually. Spring Cloud Vault mounts each
location as a unique PropertySource. You can mix the default locations with contextual locations (or
other config systems) to control the order of property sources. This approach is useful in particular
if you want to disable the default key-value path computation and mount each key-value backend
yourself instead.
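A minimal sketch, assuming two illustrative context paths that are mounted ahead of the default location:
spring.config.import: vault://secret/my-app/infra, vault://secret/shared, vault://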
You can customize the infrastructure used by Spring Cloud Vault by registering custom instances
using the Bootstrapper API:
InstanceSupplier<RestTemplateBuilder> builderSupplier = ctx -> RestTemplateBuilder
        .builder()
        .requestFactory(ctx.get(ClientFactoryWrapper.class).getClientHttpRequestFactory())
        .defaultHeader("X-Vault-Namespace", "my-namespace");

// Register the supplier through Spring Boot 2.4's Bootstrapper callback
// (MyApplication stands in for your application class)
SpringApplication application = new SpringApplication(MyApplication.class);
application.addBootstrapper(registry -> registry.register(RestTemplateBuilder.class, builderSupplier));
See also Customize which secret backends to expose as PropertySource and the source of
VaultConfigDataLoader for customization hooks.
5. Authentication methods
Different organizations have different requirements for security and authentication. Vault reflects
that need by shipping multiple authentication methods. Spring Cloud Vault supports a number of
them, including Token, AppId, AppRole, AWS (EC2 and IAM), Azure MSI, GCP (GCE and IAM),
Kubernetes, PCF, TLS certificate, and Cubbyhole authentication.
spring.cloud.vault:
authentication: TOKEN
token: 00000000-0000-0000-0000-000000000000
• authentication setting this value to TOKEN selects the Token authentication method
spring.cloud.vault:
    authentication: NONE
Setting authentication to NONE disables client-side authentication, which is useful when
authentication is handled externally (for example, by a Vault Agent acting as a proxy).
spring.cloud.vault:
authentication: APPID
app-id:
user-id: IP_ADDRESS
• authentication setting this value to APPID selects the AppId authentication method
• user-id sets the UserId method. Possible values are IP_ADDRESS, MAC_ADDRESS or a class name
implementing a custom AppIdUserIdMechanism
The corresponding command to generate the IP address UserId from a command line is:
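For example, assuming the instance IP address is 192.168.99.1:
$ echo -n 192.168.99.1 | sha256sum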
Including the line break of echo leads to a different hash value so make sure to
include the -n flag.
Mac address-based UserIds obtain their network device from the localhost-bound device. The
configuration also allows specifying a network-interface hint to pick the right device. The value of
network-interface is optional and can be either an interface name or interface index (0-based).
spring.cloud.vault:
authentication: APPID
app-id:
user-id: MAC_ADDRESS
network-interface: eth0
The corresponding command to generate the Mac address UserId from a command line is:
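For example, assuming the Mac address 0AFEDE1234AC:
$ echo -n 0AFEDE1234AC | sha256sum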
The Mac address is specified uppercase and without colons. Including the line
break of echo leads to a different hash value so make sure to include the -n flag.
The UserId generation is an open mechanism. You can set spring.cloud.vault.app-id.user-id to any
string and the configured value will be used as static UserId.
spring.cloud.vault:
authentication: APPID
app-id:
user-id: com.example.MyUserIdMechanism
Example 85. MyUserIdMechanism.java
public class MyUserIdMechanism implements AppIdUserIdMechanism {

    @Override
    public String createUserId() {
        String userId = ...; // compute a custom, stable UserId
        return userId;
    }
}
Spring Vault supports various AppRole scenarios (push/pull mode and wrapped).
RoleId and, optionally, SecretId must be provided by configuration; Spring Vault will not look
these up or create a custom SecretId.
spring.cloud.vault:
authentication: APPROLE
app-role:
role-id: bde2076b-cccb-3cf0-d57e-bca7b1e83a52
The following scenarios are supported, along with the required configuration details:
RoleId | SecretId | Supported
Provided | Provided | ✅
Provided | Pull | ✅
Provided | Wrapped | ✅
Provided | Absent | ✅
Pull | Provided | ✅
Pull | Pull | ✅
Pull | Wrapped | ❌
Pull | Absent | ❌
Wrapped | Provided | ✅
Wrapped | Pull | ❌
Wrapped | Wrapped | ✅
Wrapped | Absent | ❌
spring.cloud.vault:
authentication: APPROLE
app-role:
role-id: bde2076b-cccb-3cf0-d57e-bca7b1e83a52
secret-id: 1696536f-1976-73b1-b241-0b4213908d39
role: my-role
app-role-path: approle
• secret-id sets the SecretId. SecretId can be omitted if AppRole is configured without requiring
SecretId (See bind_secret_id).
spring.cloud.vault:
authentication: AWS_EC2
AWS-EC2 authentication enables nonce by default to follow the Trust On First Use (TOFU) principle.
Any unintended party that gains access to the PKCS#7 identity metadata can authenticate against
Vault.
During the first login, Spring Cloud Vault generates a nonce that is stored in the auth backend aside
the instance Id. Re-authentication requires the same nonce to be sent. Any other party does not
have the nonce and can raise an alert in Vault for further investigation.
The nonce is kept in memory and is lost during application restart. You can configure a static nonce
with spring.cloud.vault.aws-ec2.nonce.
AWS-EC2 authentication roles are optional and default to the AMI. You can configure the
authentication role by setting the spring.cloud.vault.aws-ec2.role property.
spring.cloud.vault:
authentication: AWS_EC2
aws-ec2:
role: application-server
spring.cloud.vault:
authentication: AWS_EC2
aws-ec2:
role: application-server
aws-ec2-path: aws-ec2
identity-document: http://...
nonce: my-static-nonce
• authentication setting this value to AWS_EC2 selects the AWS EC2 authentication method
• role sets the name of the role against which the login is being attempted.
• nonce used for AWS-EC2 authentication. An empty nonce defaults to nonce generation
The IAM role in which the application is running is calculated automatically. If you are running
your application on AWS ECS, the application uses the IAM role assigned to the ECS task of the
running container. If you are running your application directly on top of an EC2 instance, the IAM
role used is the one assigned to the EC2 instance.
When using the AWS-IAM authentication, you must create a role in Vault and assign it to your IAM
role. An empty role defaults to the friendly name of the current IAM role.
Example 91. application.yml with required AWS-IAM Authentication properties
spring.cloud.vault:
authentication: AWS_IAM
spring.cloud.vault:
authentication: AWS_IAM
aws-iam:
role: my-dev-role
aws-path: aws
server-name: some.server.name
endpoint-uri: https://sts.eu-central-1.amazonaws.com
• role sets the name of the role against which the login is being attempted. This should be bound
to your IAM role. If one is not supplied then the friendly name of the current IAM user will be
used as the vault role.
• server-name sets the value to use for the X-Vault-AWS-IAM-Server-ID header preventing certain
types of replay attacks.
• endpoint-uri sets the value to use for the AWS STS API used for the iam_request_url parameter.
spring.cloud.vault:
authentication: AZURE_MSI
azure-msi:
role: my-dev-role
spring.cloud.vault:
authentication: AZURE_MSI
azure-msi:
role: my-dev-role
azure-path: azure
metadata-service: http://169.254.169.254/metadata/instance…
identity-token-service: http://169.254.169.254/metadata/identity…
• role sets the name of the role against which the login is being attempted.
• metadata-service sets the URI at which to access the instance metadata service
• identity-token-service sets the URI at which to access the identity token service
Azure MSI authentication obtains environmental details about the virtual machine (subscription Id,
resource group, VM name) from the instance metadata service. The Vault server Resource Id
defaults to vault.hashicorp.com. To change this, set spring.cloud.vault.azure-msi.identity-token-service accordingly.
To use TLS certificate (CERT) authentication:
1. Set spring.cloud.vault.authentication to CERT
2. Configure a Java Keystore that contains the client certificate and the private key
spring.cloud.vault:
authentication: CERT
ssl:
key-store: classpath:keystore.jks
key-store-password: changeit
key-store-type: JKS
cert-auth-path: cert
spring.cloud.vault:
authentication: CUBBYHOLE
token: 397ccb93-ff6c-b17b-9389-380b01ca2645
GCP GCE (Google Compute Engine) authentication creates a signature in the form of a JSON Web
Token (JWT) for a service account. A JWT for a Compute Engine instance is obtained from the GCE
metadata service using Instance identification. This API creates a JSON Web Token that can be used
to confirm the instance identity.
Unlike most Vault authentication backends, this backend does not require first deploying or
provisioning security-sensitive credentials (tokens, username/password, client certificates, and so on).
Instead, it treats GCP as a Trusted Third Party and uses the cryptographically signed dynamic
metadata information that uniquely represents each GCP service account.
spring.cloud.vault:
authentication: GCP_GCE
gcp-gce:
role: my-dev-role
spring.cloud.vault:
authentication: GCP_GCE
gcp-gce:
gcp-path: gcp
role: my-dev-role
service-account: [email protected]
• role sets the name of the role against which the login is being attempted.
• service-account allows overriding the service account Id to a specific value. Defaults to the
default service account.
GCP IAM authentication creates a signature in the form of a JSON Web Token (JWT) for a service
account. A JWT for a service account is obtained by calling GCP IAM’s
projects.serviceAccounts.signJwt API. The caller authenticates against GCP IAM and proves thereby
its identity. This Vault backend treats GCP as a Trusted Third Party.
IAM credentials can be obtained from the runtime environment (specifically, the
GOOGLE_APPLICATION_CREDENTIALS environment variable), from the Google Compute metadata service, or
supplied externally, for example as JSON or base64-encoded content. JSON is the preferred form, as it
carries the project id and service account identifier required for calling projects.serviceAccounts.signJwt.
spring.cloud.vault:
authentication: GCP_IAM
gcp-iam:
role: my-dev-role
spring.cloud.vault:
authentication: GCP_IAM
gcp-iam:
credentials:
location: classpath:credentials.json
encoded-key: e+KApn0=
gcp-path: gcp
jwt-validity: 15m
project-id: my-project-id
role: my-dev-role
service-account-id: [email protected]
• role sets the name of the role against which the login is being attempted.
• credentials.location path to the credentials resource that contains Google credentials in JSON
format.
• credentials.encoded-key the base64 encoded contents of an OAuth2 account private key in the
JSON format.
• service-account-id allows overriding the service account Id to a specific value. Defaults to the
service account from the obtained credential.
GCP IAM authentication requires the Google Cloud Java SDK dependency (com.google.apis:google-
api-services-iam and com.google.auth:google-auth-library-oauth2-http) as the authentication
implementation uses Google APIs for credentials and JWT signing.
Google credentials require an OAuth 2 token maintaining the token lifecycle. All API is
synchronous; therefore, GcpIamAuthentication does not support AuthenticationSteps, which is
required for reactive usage.
A file containing a JWT token for a pod’s service account is automatically mounted at
/var/run/secrets/kubernetes.io/serviceaccount/token.
spring.cloud.vault:
authentication: KUBERNETES
kubernetes:
role: my-dev-role
kubernetes-path: kubernetes
service-account-token-file:
/var/run/secrets/kubernetes.io/serviceaccount/token
• role sets the name of the role against which the login is being attempted.
• service-account-token-file sets the location of the file containing the Kubernetes Service
Account Token. Defaults to /var/run/secrets/kubernetes.io/serviceaccount/token.
spring.cloud.vault:
authentication: PCF
pcf:
role: my-dev-role
spring.cloud.vault:
authentication: PCF
pcf:
role: my-dev-role
pcf-path: path
instance-certificate: /etc/cf-instance-credentials/instance.crt
instance-key: /etc/cf-instance-credentials/instance.key
• role sets the name of the role against which the login is being attempted.
• instance-certificate sets the path to the PCF instance identity certificate. Defaults to
${CF_INSTANCE_CERT} env variable.
• instance-key sets the path to the PCF instance identity key. Defaults to ${CF_INSTANCE_KEY} env
variable.
Operation | HTTP verb
create | POST/PUT
read | GET
update | POST/PUT
delete | DELETE
6.1. Authentication
Login: POST auth/$authMethod/login
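For example, an AppRole login sent directly against the HTTP API, reusing the sample RoleId and SecretId from the AppRole section above:
$ curl -X POST https://localhost:8200/v1/auth/approle/login \
    -d '{"role_id": "bde2076b-cccb-3cf0-d57e-bca7b1e83a52", "secret_id": "1696536f-1976-73b1-b241-0b4213908d39"}'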
6.3. SecretLeaseContainer
SecretLeaseContainer uses different paths depending on the configured lease endpoint.
• LeaseEndpoints.Legacy: the sys/renew and sys/revoke endpoints used by Vault versions before 0.8
• LeaseEndpoints.Leases (SysLeases): the sys/leases/renew and sys/leases/revoke endpoints used by Vault 0.8 and later
7.1. Key-Value Backend
Spring Cloud Vault supports the key-value secret backend, which mounts the following context paths:
/secret/{application}/{profile}
/secret/{application}
/secret/{default-context}/{profile}
/secret/{default-context}
The application name is determined by the first of the following properties that is set:
• spring.cloud.vault.kv.application-name
• spring.cloud.vault.application-name
• spring.application.name
The active profiles are determined by:
• spring.cloud.vault.kv.profiles
• spring.profiles.active
Secrets can be obtained from other contexts within the key-value backend by adding their paths to
the application name, separated by commas. For example, given the application name
usefulapp,mysql1,projectx/aws, each of these folders will be used:
• /secret/usefulapp
• /secret/mysql1
• /secret/projectx/aws
Spring Cloud Vault adds all active profiles to the list of possible context paths. If no profiles are
active, contexts with a profile name are skipped.
Properties are exposed as they are stored (that is, without additional prefixes).
Spring Cloud Vault adds the data/ context between the mount path and the actual
context path when the mount uses the versioned key-value backend.
spring.cloud.vault:
kv:
enabled: true
backend: secret
profile-separator: '/'
default-context: application
application-name: my-app
profiles: local, cloud
• enabled setting this value to false disables the secret backend config usage
• backend sets the path of the key-value mount to use
• default-context sets the context name used by all applications
• application-name overrides the application name for use in the key-value backend
• profiles overrides the active profiles for use in the key-value backend
• profile-separator separates the profile name from the context in property sources with profiles
The key-value secret backend can be operated in versioned (v2) and non-versioned
(v1) modes.
See also:
• Vault Documentation: Using the KV Secrets Engine - Version 1 (generic secret backend)
• Vault Documentation: Using the KV Secrets Engine - Version 2 (versioned key-value backend)
7.2. Consul
Spring Cloud Vault can obtain credentials for HashiCorp Consul. The Consul integration requires
the spring-cloud-vault-config-consul dependency.
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-vault-config-consul</artifactId>
<version>3.0.3</version>
</dependency>
</dependencies>
spring.cloud.vault:
consul:
enabled: true
role: readonly
backend: consul
token-property: spring.cloud.consul.token
• enabled setting this value to true enables the Consul backend config usage
• token-property sets the property name in which the Consul ACL token is stored
7.3. RabbitMQ
Spring Cloud Vault can obtain credentials for RabbitMQ. The RabbitMQ integration requires the
spring-cloud-vault-config-rabbitmq dependency and can be enabled by setting
spring.cloud.vault.rabbitmq.enabled=true (default false) and providing the role name with
spring.cloud.vault.rabbitmq.role=…. Username and password are available from the
spring.rabbitmq.username and spring.rabbitmq.password properties, so Spring Boot picks up the
generated credentials without further configuration.
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-vault-config-rabbitmq</artifactId>
<version>3.0.3</version>
</dependency>
</dependencies>
• enabled setting this value to true enables the RabbitMQ backend config usage
• username-property sets the property name in which the RabbitMQ username is stored
• password-property sets the property name in which the RabbitMQ password is stored
7.4. AWS
Spring Cloud Vault can obtain credentials for AWS.
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-vault-config-aws</artifactId>
<version>3.0.3</version>
</dependency>
</dependencies>
The integration can be enabled by setting spring.cloud.vault.aws.enabled=true (default false) and
providing the role name with spring.cloud.vault.aws.role=….
The following credential types (spring.cloud.vault.aws.credential-type) are supported:
• iam_user (default)
• assumed_role (STS)
• federation_token (STS)
The access key and secret key are stored in cloud.aws.credentials.accessKey and
cloud.aws.credentials.secretKey. So using Spring Cloud AWS will pick up the generated credentials
without further configuration.
For an STS security token, you can configure the property name by setting
spring.cloud.vault.aws.session-token-key-property. The security token is stored under
cloud.aws.credentials.sessionToken (the default).
Example: iam_user
spring.cloud.vault:
aws:
enabled: true
role: readonly
backend: aws
access-key-property: cloud.aws.credentials.accessKey
secret-key-property: cloud.aws.credentials.secretKey
Example: assumed_role (STS)
spring.cloud.vault:
aws:
enabled: true
role: sts-vault-role
backend: aws
credential-type: assumed_role
access-key-property: cloud.aws.credentials.accessKey
secret-key-property: cloud.aws.credentials.secretKey
session-token-key-property: cloud.aws.credentials.sessionToken
ttl: 3600s
role-arn: arn:aws:iam::${AWS_ACCOUNT}:role/sts-app-role
• enabled setting this value to true enables the AWS backend config usage
• access-key-property sets the property name in which the AWS access key is stored
• secret-key-property sets the property name in which the AWS secret key is stored
• session-token-key-property sets the property name in which the AWS STS security token is
stored.
• credential-type sets the aws credential type to use for this backend. Defaults to iam_user
• ttl sets the ttl for the STS token when using assumed_role or federation_token. Defaults to the ttl
specified by the vault role. Min/Max values are also limited to what AWS would support for STS.
• role-arn sets the IAM role to assume if more than one are configured for the vault role when
using assumed_role.
8. Database backends
Vault supports several database secret backends to generate database credentials dynamically
based on configured roles. This means services that need to access a database no longer need to
configure credentials: they can request them from Vault, and use Vault’s leasing mechanism to
more easily roll keys.
• Database
• Apache Cassandra
• Couchbase Database
• Elasticsearch
• MongoDB
• MySQL
• PostgreSQL
Using a database secret backend requires you to enable the backend in the configuration and to
include the spring-cloud-vault-config-databases dependency.
Since 0.7.1, Vault ships with a dedicated database secret backend that allows database integration via
plugins. You can use that specific backend by using the generic database backend. Make sure to
specify the appropriate backend path, e.g. spring.cloud.vault.mysql.role.backend=database.
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-vault-config-databases</artifactId>
<version>3.0.3</version>
</dependency>
</dependencies>
8.1. Database
While the database backend is a generic one, spring.cloud.vault.database specifically targets JDBC
databases. Username and password are available from spring.datasource.username and
spring.datasource.password properties so using Spring Boot will pick up the generated credentials
for your DataSource without further configuration. You can configure the property names by setting
spring.cloud.vault.database.username-property and spring.cloud.vault.database.password-
property.
spring.cloud.vault:
database:
enabled: true
role: readonly
backend: database
username-property: spring.datasource.username
password-property: spring.datasource.password
• enabled setting this value to true enables the Database backend config usage
• username-property sets the property name in which the Database username is stored
• password-property sets the property name in which the Database password is stored
Spring Cloud Vault does not support getting new credentials and configuring your
DataSource with them when the maximum lease time has been reached. That is, if
max_ttl of the Database role in Vault is set to 24h that means that 24 hours after
your application has started it can no longer authenticate with the database.
8.2. Apache Cassandra
Spring Cloud Vault can obtain credentials for Apache Cassandra. The integration can be enabled by
setting spring.cloud.vault.cassandra.enabled=true (default false) and providing the role name with
spring.cloud.vault.cassandra.role=….
Username and password are available from spring.data.cassandra.username and
spring.data.cassandra.password properties so using Spring Boot will pick up the generated
credentials without further configuration. You can configure the property names by setting
spring.cloud.vault.cassandra.username-property and spring.cloud.vault.cassandra.password-
property.
spring.cloud.vault:
cassandra:
enabled: true
role: readonly
backend: cassandra
username-property: spring.data.cassandra.username
password-property: spring.data.cassandra.password
• enabled setting this value to true enables the Cassandra backend config usage
• username-property sets the property name in which the Cassandra username is stored
• password-property sets the property name in which the Cassandra password is stored
8.3. Couchbase
Spring Cloud Vault can obtain credentials for Couchbase. The integration can be enabled by setting
spring.cloud.vault.couchbase.enabled=true (default false) and providing the role name with
spring.cloud.vault.couchbase.role=….
spring.cloud.vault:
couchbase:
enabled: true
role: readonly
backend: database
username-property: spring.couchbase.username
password-property: spring.couchbase.password
• enabled setting this value to true enables the Couchbase backend config usage
• username-property sets the property name in which the Couchbase username is stored
• password-property sets the property name in which the Couchbase password is stored
8.4. Elasticsearch
Since version 3.0, Spring Cloud Vault can obtain credentials for Elasticsearch. The integration can be
enabled by setting spring.cloud.vault.elasticsearch.enabled=true (default false) and providing the
role name with spring.cloud.vault.elasticsearch.role=….
spring.cloud.vault:
elasticsearch:
enabled: true
role: readonly
backend: database
username-property: spring.elasticsearch.rest.username
password-property: spring.elasticsearch.rest.password
• enabled setting this value to true enables the Elasticsearch database backend config usage
• username-property sets the property name in which the Elasticsearch username is stored
• password-property sets the property name in which the Elasticsearch password is stored
8.5. MongoDB
The mongodb backend has been deprecated in Vault 0.7.1 and it is recommended to
use the database backend and mount it as mongodb.
Spring Cloud Vault can obtain credentials for MongoDB. The integration can be enabled by setting
spring.cloud.vault.mongodb.enabled=true (default false) and providing the role name with
spring.cloud.vault.mongodb.role=….
spring.cloud.vault:
mongodb:
enabled: true
role: readonly
backend: mongodb
username-property: spring.data.mongodb.username
password-property: spring.data.mongodb.password
• enabled setting this value to true enables the MongoDB backend config usage
• username-property sets the property name in which the MongoDB username is stored
• password-property sets the property name in which the MongoDB password is stored
8.6. MySQL
The mysql backend has been deprecated in Vault 0.7.1 and it is recommended to
use the database backend and mount it as mysql. Configuration for
spring.cloud.vault.mysql will be removed in a future version.
Spring Cloud Vault can obtain credentials for MySQL. The integration can be enabled by setting
spring.cloud.vault.mysql.enabled=true (default false) and providing the role name with
spring.cloud.vault.mysql.role=….
• enabled setting this value to true enables the MySQL backend config usage
• username-property sets the property name in which the MySQL username is stored
• password-property sets the property name in which the MySQL password is stored
8.7. PostgreSQL
The postgresql backend has been deprecated in Vault 0.7.1 and it is recommended
to use the database backend and mount it as postgresql. Configuration for
spring.cloud.vault.postgresql will be removed in a future version.
Spring Cloud Vault can obtain credentials for PostgreSQL. The integration can be enabled by setting
spring.cloud.vault.postgresql.enabled=true (default false) and providing the role name with
spring.cloud.vault.postgresql.role=….
spring.cloud.vault:
postgresql:
enabled: true
role: readonly
backend: postgresql
username-property: spring.datasource.username
password-property: spring.datasource.password
• enabled setting this value to true enables the PostgreSQL backend config usage
• role sets the role name of the PostgreSQL role definition
• username-property sets the property name in which the PostgreSQL username is stored
• password-property sets the property name in which the PostgreSQL password is stored
You can register a VaultConfigurer for customization. Default key-value and discovered backend
registration is disabled if you provide a VaultConfigurer. You can however enable default
registration with SecretBackendConfigurer.registerDefaultKeyValueSecretBackends() and
SecretBackendConfigurer.registerDefaultDiscoveredSecretBackends().
public class CustomizationBean implements VaultConfigurer {
    // the class name is illustrative; register this configurer to customize backend registration

    @Override
    public void addSecretBackends(SecretBackendConfigurer configurer) {
        configurer.add("secret/my-application");
        configurer.registerDefaultKeyValueSecretBackends(false);
        configurer.registerDefaultDiscoveredSecretBackends(true);
    }
}
Custom secret backend integrations provide implementations of the following types:
• org.springframework.cloud.vault.config.VaultSecretBackendDescriptor
• org.springframework.cloud.vault.config.SecretBackendMetadataFactory
The discovery client implementations all support some kind of metadata map (e.g. for Eureka we
have eureka.instance.metadataMap). Some additional properties of the service may need to be
configured in its service registration metadata so that clients can connect correctly. Service
registries that do not provide details about transport layer security need to provide a scheme
metadata entry to be set either to https or http. If no scheme is configured and the service is not
exposed as secure service, then configuration defaults to spring.cloud.vault.scheme which is https
when it’s not set.
spring.cloud.vault.discovery:
enabled: true
service-id: my-vault-service
12. Vault Client Fail Fast
In some cases, it may be desirable to fail startup of a service if it cannot connect to the Vault Server.
If this is the desired behavior, set the bootstrap configuration property spring.cloud.vault.fail-
fast=true and the client will halt with an Exception.
spring.cloud.vault:
fail-fast: true
Vault Enterprise supports namespaces, which you can configure as follows:
spring.cloud.vault:
    namespace: my-namespace
Please note that namespaces are not supported by the Vault Community edition, in which case the
setting has no effect on Vault operations.
spring.cloud.vault:
ssl:
trust-store: classpath:keystore.jks
trust-store-password: changeit
trust-store-type: JKS
enabled-protocols: TLSv1.2,TLSv1.3
enabled-cipher-suites: TLS_AES_128_GCM_SHA256
• trust-store sets the resource for the trust-store. SSL-secured Vault communication will validate
the Vault SSL certificate with the specified trust-store.
• trust-store-type sets the trust-store type. Supported values are all supported KeyStore types
including PEM.
• enabled-cipher-suites sets the list of enabled SSL/TLS cipher suites (since 3.0.2).
Please note that configuring spring.cloud.vault.ssl.* can only be applied when either Apache Http
Components or the OkHttp client is on your class-path.
Vault promises that the data will be valid for the given duration, or Time To Live (TTL). Once the
lease is expired, Vault can revoke the data, and the consumer of the secret can no longer be certain
that it is valid.
Spring Cloud Vault maintains a lease lifecycle beyond the creation of login tokens and secrets. That
said, login tokens and secrets associated with a lease are scheduled for renewal just before the lease
expires until terminal expiry. Application shutdown revokes obtained login tokens and renewable
leases.
Secret service and database backends (such as MongoDB or MySQL) usually generate a renewable
lease so generated credentials will be disabled on application shutdown.
Lease renewal and revocation is enabled by default and can be disabled by setting
spring.cloud.vault.config.lifecycle.enabled to false. Doing so is not recommended, as leases can
then expire so that Spring Cloud Vault can no longer access Vault or services using generated
credentials, and valid credentials remain active after application shutdown.
spring.cloud.vault:
config.lifecycle:
enabled: true
min-renewal: 10s
expiry-threshold: 1m
lease-endpoints: Legacy
• enabled controls whether leases associated with secrets are considered to be renewed and
expired secrets are rotated. Enabled by default.
• min-renewal sets the duration that is at least required before renewing a lease. This setting
prevents renewals from happening too often.
• expiry-threshold sets the expiry threshold. A lease is renewed the configured period of time
before it expires.
• lease-endpoints sets the endpoints for renew and revoke. Legacy for vault versions before 0.8
and SysLeases for later.
Spring Cloud Vault maintains the session token lifecycle by default. Session tokens are obtained
lazily, so the actual login is deferred until the first session-bound use of Vault. Once Spring Cloud
Vault obtains a session token, it retains it until expiry. The next time a session-bound activity
occurs, Spring Cloud Vault logs in to Vault again and obtains a new session token. On application
shutdown, Spring Cloud Vault revokes the token if it was still active, to terminate the session.
spring.cloud.vault:
session.lifecycle:
enabled: true
refresh-before-expiry: 10s
expiry-threshold: 20s
• enabled controls whether session lifecycle management is enabled to renew session tokens.
Enabled by default.
• refresh-before-expiry controls the point in time when the session token gets renewed. The
refresh time is calculated by subtracting refresh-before-expiry from the token expiry time.
Defaults to 5 seconds.
• expiry-threshold sets the expiry threshold. The threshold represents a minimum TTL duration
to consider a session token as valid. Tokens with a shorter TTL are considered expired and are
not used anymore. Should be greater than refresh-before-expiry to prevent token expiry.
Defaults to 7 seconds.
See also: Vault Documentation: Token Renewal
1. Quick Start
This quick start walks through using Spring Cloud Zookeeper for Service Discovery and Distributed
Configuration.
First, run Zookeeper on your machine. Then you can access it and use it as a Service Registry and
Configuration source with Spring Cloud Zookeeper.
pom.xml
<project>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>{spring-boot-version}</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-zookeeper-discovery</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
plugins {
id 'org.springframework.boot' version ${spring-boot-version}
id 'io.spring.dependency-management' version ${spring-dependency-management-version}
id 'java'
}
repositories {
mavenCentral()
}
dependencies {
implementation 'org.springframework.cloud:spring-cloud-starter-zookeeper-discovery'
testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
dependencyManagement {
imports {
mavenBom "org.springframework.cloud:spring-cloud-
dependencies:${springCloudVersion}"
}
}
Depending on the version you are using, you might need to adjust Apache
Zookeeper version used in your project. You can read more about it in the Install
Zookeeper section.
Now you can create a standard Spring Boot application, such as the following HTTP server:
@SpringBootApplication
@RestController
public class Application {

    @GetMapping("/")
    public String home() {
        return "Hello World!";
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
When this HTTP server runs, it connects to Zookeeper, which runs on the default local port (2181).
To modify the startup behavior, you can change the location of Zookeeper by using
application.properties, as shown in the following example:
spring:
cloud:
zookeeper:
connect-string: localhost:2181
You can then inject the DiscoveryClient and use it to look up services, as shown in the following example:
@Autowired
private DiscoveryClient discoveryClient;
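A sketch of resolving the URI of a registered instance with that client (the STORES service name is illustrative):
public String serviceUrl() {
    // getInstances returns all instances registered under the given service id
    List<ServiceInstance> instances = discoveryClient.getInstances("STORES");
    if (instances != null && !instances.isEmpty()) {
        return instances.get(0).getUri().toString();
    }
    return null;
}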
<project>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>{spring-boot-version}</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-zookeeper-config</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
plugins {
id 'org.springframework.boot' version ${spring-boot-version}
id 'io.spring.dependency-management' version ${spring-dependency-management-version}
id 'java'
}
repositories {
mavenCentral()
}
dependencies {
implementation 'org.springframework.cloud:spring-cloud-starter-zookeeper-config'
testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
dependencyManagement {
imports {
mavenBom "org.springframework.cloud:spring-cloud-
dependencies:${springCloudVersion}"
}
}
Depending on the version you are using, you might need to adjust Apache
Zookeeper version used in your project. You can read more about it in the Install
Zookeeper section.
Now you can create a standard Spring Boot application, such as the following HTTP server:
@SpringBootApplication
@RestController
public class Application {

    @GetMapping("/")
    public String home() {
        return "Hello World!";
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
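The application can then consume values stored under its Zookeeper config contexts like any other Spring property. A minimal sketch, assuming a key named sample.prop exists under the application's config context (the key name and default value are illustrative):
@Value("${sample.prop:unset}")
private String sampleProp;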
2. Install Zookeeper
See the installation documentation for instructions on how to install Zookeeper.
Spring Cloud Zookeeper uses Apache Curator behind the scenes. While Zookeeper 3.5.x is still
considered "beta" by the Zookeeper development team, the reality is that it is used in production by
many users. However, Zookeeper 3.4.x is also used in production. Prior to Apache Curator 4.0, both
versions of Zookeeper were supported via two versions of Apache Curator. Starting with Curator
4.0 both versions of Zookeeper are supported via the same Curator libraries.
If you integrate with Zookeeper 3.4, you need to change the Zookeeper dependency that
comes shipped with Curator, and thus with spring-cloud-zookeeper. To do so, exclude that
dependency and add the 3.4.x version, as shown below.
maven
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-zookeeper-all</artifactId>
<exclusions>
<exclusion>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
<version>3.4.12</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
gradle
compile('org.springframework.cloud:spring-cloud-starter-zookeeper-all') {
exclude group: 'org.apache.zookeeper', module: 'zookeeper'
}
compile('org.apache.zookeeper:zookeeper:3.4.12') {
exclude group: 'org.slf4j', module: 'slf4j-log4j12'
}
3.1. Activating
Including a dependency on org.springframework.cloud:spring-cloud-starter-zookeeper-discovery
enables autoconfiguration that sets up Spring Cloud Zookeeper Discovery.
When working with version 3.4 of Zookeeper you need to change the way you
include the dependency as described here.
@RequestMapping("/")
public String home() {
return "Hello world";
}
If Zookeeper is located somewhere other than localhost:2181, the configuration must provide the
location of the server, as shown in the following example:
application.yml
spring:
cloud:
zookeeper:
connect-string: localhost:2181
If you use Spring Cloud Zookeeper Config, the values shown in the preceding
example need to be in bootstrap.yml instead of application.yml.
The default service name, instance ID, and port (taken from the Environment) are
${spring.application.name}, the Spring Context ID, and ${server.port}, respectively.
If you would like to disable the Zookeeper Discovery Client, you can set
spring.cloud.zookeeper.discovery.enabled to false.
If you were previously using the StickyRule in Zookeeper, its replacement in the
current stack is the SameInstancePreferenceServiceInstanceListSupplier in SC
LoadBalancer. You can read on how to set it up in the Spring Cloud Commons
documentation.
The ServiceInstanceRegistration class offers a builder() method to create a Registration object that
can be used by the ServiceRegistry, as shown in the following example:
@Autowired
private ZookeeperServiceRegistry serviceRegistry;
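A sketch of building and registering an instance through the injected registry (the address, port, and name values are illustrative):
public void register() {
    ServiceInstanceRegistration registration = ServiceInstanceRegistration.builder()
            .defaultUriSpec()
            .address("anyUrl")
            .port(10)
            .name("/services/anotherservice")
            .build();
    this.serviceRegistry.register(registration);
}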
6. Zookeeper Dependencies
The following topics cover how to work with Spring Cloud Zookeeper dependencies:
You can also use the Zookeeper Dependency Watchers functionality to control and monitor the state
of your dependencies.
application.yml
spring.application.name: yourServiceName
spring.cloud.zookeeper:
dependencies:
newsletter:
path: /path/where/newsletter/has/registered/in/zookeeper
loadBalancerType: ROUND_ROBIN
contentTypeTemplate: application/vnd.newsletter.$version+json
version: v1
headers:
header1:
- value1
header2:
- value2
required: false
stubs: org.springframework:foo:stubs
mailing:
path: /path/where/mailing/has/registered/in/zookeeper
loadBalancerType: ROUND_ROBIN
contentTypeTemplate: application/vnd.mailing.$version+json
version: v1
required: true
The next few sections go through each part of the dependency one by one. The root property name
is spring.cloud.zookeeper.dependencies.
6.3.1. Aliases
Below the root property, you have to represent each dependency as an alias. This is due to the
constraints of Spring Cloud LoadBalancer, which requires that the application ID be placed in the
URL. Consequently, you cannot pass any complex path (such as /myApp/myRoute/name). The alias is the
name you use instead of the serviceId for DiscoveryClient, Feign, or RestTemplate.
In the previous examples, the aliases are newsletter and mailing. The following example shows
Feign usage with a newsletter alias:
@FeignClient("newsletter")
public interface NewsletterService {
@RequestMapping(method = RequestMethod.GET, value = "/newsletter")
String getNewsletters();
}
6.3.2. Path
The path is represented by the path YAML property and is the path under which the dependency is
registered under Zookeeper. As described in the previous section, Spring Cloud LoadBalancer
operates on URLs. As a result, this path is not compliant with its requirement. That is why Spring
Cloud Zookeeper maps the alias to the proper path.
If you know what kind of load-balancing strategy has to be applied when calling this particular
dependency, you can provide it in the YAML file, and it is automatically applied. You can choose one
of the following load balancing strategies:
The Content-Type template and version are represented by the contentTypeTemplate and version
YAML properties.
If you version your API in the Content-Type header, you do not want to add this header to each of
your requests. Also, if you want to call a new version of the API, you do not want to roam around
your code to bump up the API version. That is why you can provide a contentTypeTemplate with a
special $version placeholder. That placeholder will be filled by the value of the version YAML
property. Consider the following example of a contentTypeTemplate:
application/vnd.newsletter.$version+json
With version set to v1, each request to the dependency then carries the following Content-Type value:
application/vnd.newsletter.v1+json
Sometimes, each call to a dependency requires setting up of some default headers. To not do that in
code, you can set them up in the YAML file, as shown in the following example headers section:
headers:
Accept:
- text/html
- application/xhtml+xml
Cache-Control:
- no-cache
That headers section results in adding the Accept and Cache-Control headers with appropriate list of
values in your HTTP request.
If one of your dependencies is required to be up when your application boots, you can set the
required: true property in the YAML file.
If your application cannot locate the required dependency during boot time, it throws an
exception, and the Spring Context fails to set up. In other words, your application cannot start if the
required dependency is not registered in Zookeeper.
You can read more about Spring Cloud Zookeeper Presence Checker later in this document.
6.3.7. Stubs
You can provide a colon-separated path to the JAR containing stubs of the dependency, as shown in
the following example:
stubs: org.springframework:myApp:stubs
where:
• org.springframework is the groupId of the stub JAR
• myApp is the artifactId
• stubs is the classifier
Because stubs is the default classifier, the preceding example is equal to the following example:
stubs: org.springframework:myApp
• spring.cloud.zookeeper.dependencies: If you do not set this property, you cannot use Zookeeper
Dependencies.
7.1. Activating
Spring Cloud Zookeeper Dependencies functionality needs to be enabled for you to use the
Dependency Watcher mechanism.
If you want to register a listener for a particular dependency, the dependencyName would be the
discriminator for your concrete implementation. newState provides you with information about
whether your dependency has changed to CONNECTED or DISCONNECTED.
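A sketch of that listener contract as described above:
public interface DependencyWatcherListener {
    void stateChanged(String dependencyName, DependencyState newState);
}
Implement this interface and register the implementation as a bean to receive state changes.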
1. If the dependency is marked as required and is not in Zookeeper, your application throws an
exception at boot time and shuts down.
• config/testApp,dev
• config/testApp
• config/application,dev
• config/application
The most specific property source is at the top, with the least specific at the bottom. Properties in
the config/application namespace apply to all applications that use zookeeper for configuration.
Properties in the config/testApp namespace are available only to the instances of the service named
testApp.
Configuration is currently read on startup of the application. Sending an HTTP POST request to
/refresh causes the configuration to be reloaded. Watching the configuration namespace (which
Zookeeper supports) is not currently implemented.
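For example (the host and port are illustrative):
$ curl -X POST http://localhost:8080/refresh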
8.1. Activating
Including a dependency on org.springframework.cloud:spring-cloud-starter-zookeeper-config
enables autoconfiguration that sets up Spring Cloud Zookeeper Config.
When working with version 3.4 of Zookeeper you need to change the way you
include the dependency as described here.
application.properties
spring.config.import=optional:zookeeper:
This will connect to Zookeeper at the default location of "localhost:2181". Removing the optional:
prefix will cause Zookeeper Config to fail if it is unable to connect to Zookeeper. To change the
connection properties of Zookeeper Config either set spring.cloud.zookeeper.connect-string or add
the connect string to the spring.config.import statement, such as
spring.config.import=optional:zookeeper:myhost:2181. The location in the import property has
precedence over the connect-string property.
Zookeeper Config will try to load values from four automatic contexts based on
spring.cloud.zookeeper.config.name (which defaults to the value of the spring.application.name
property) and spring.cloud.zookeeper.config.default-context (which defaults to application). If
you want to specify the contexts rather than using the computed ones, you can add that
information to the spring.config.import statement.
application.properties
spring.config.import=optional:zookeeper:myhost:2181/contextone;/context/two
This will optionally load configuration only from /contextone and /context/two.
A bootstrap file (properties or yaml) is not needed for the Spring Boot Config Data
method of import via spring.config.import.
8.3. Customizing
Zookeeper Config may be customized by setting the following properties:
spring:
cloud:
zookeeper:
config:
enabled: true
root: configuration
defaultContext: apps
profileSeparator: '::'
• enabled: Setting this value to false disables Zookeeper Config.
• root: Sets the base namespace for configuration values.
• defaultContext: Sets the name used by all applications.
• profileSeparator: Sets the value of the separator used to separate the profile name in property
sources with profiles.
@BootstrapConfiguration
public class CustomCuratorFrameworkConfig {

    @Bean
    public CuratorFramework curatorFramework() {
        // Build a client carrying digest auth info; the connect string and
        // retry policy are illustrative
        CuratorFramework curator = CuratorFrameworkFactory.builder()
                .connectString("localhost:2181")
                .retryPolicy(new ExponentialBackoffRetry(1000, 3))
                .authorization("digest", "user:password".getBytes())
                .build();
        return curator;
    }
}
Consult the ZookeeperAutoConfiguration class to see the CuratorFramework bean's default
configuration.
Alternatively, you can add your credentials from a class that depends on the existing
CuratorFramework bean, as shown in the following example:
@BootstrapConfiguration
public class DefaultCuratorFrameworkConfig {
The creation of this bean must occur during the bootstrapping phase. You can register configuration
classes to run during this phase by annotating them with @BootstrapConfiguration and including
them in a comma-separated list that you set as the value of the
org.springframework.cloud.bootstrap.BootstrapConfiguration property in the resources/META-
INF/spring.factories file, as shown in the following example:
resources/META-INF/spring.factories
org.springframework.cloud.bootstrap.BootstrapConfiguration=\
my.project.CustomCuratorFrameworkConfig,\
my.project.DefaultCuratorFrameworkConfig
Appendix: Compendium of Configuration Properties
Name | Default | Description
eureka.server.peer-eureka-nodes-update-interval-ms | 0 |
eureka.server.peer-eureka-status-refresh-time-interval-ms | 0 |
feign.client.config | `` |
feign.client.default-config | default |
feign.client.default-to-properties | true |
feign.httpclient.connection-timeout | 2000 |
feign.httpclient.connection-timer-repeat | 3000 |
feign.httpclient.disable-ssl-validation | false |
feign.httpclient.follow-redirects | true |
feign.httpclient.max-connections | 200 |
feign.httpclient.max-connections-per-route | 50 |
feign.httpclient.time-to-live | 900 |
feign.httpclient.time-to-live-unit | `` |
spring.cloud.cloudfoundry.skip-ssl-validation | false |
spring.cloud.consul.config.acl-token | `` |
spring.cloud.consul.config.default-context | application |
spring.cloud.consul.config.enabled | true |
spring.cloud.consul.config.format | `` |
spring.cloud.consul.config.name | `` | Alternative to spring.application.name to use in looking up values in consul KV.
spring.cloud.consul.config.prefix | `` |
spring.cloud.consul.config.prefixes | `` |
spring.cloud.consul.config.profile-separator | , |
spring.cloud.consul.discovery.acl-token | `` |
spring.cloud.consul.discovery.heartbeat.enabled | false |
spring.cloud.consul.discovery.heartbeat.interval-ratio | `` |
spring.cloud.consul.discovery.heartbeat.reregister-service-on-failure | false |
spring.cloud.consul.discovery.heartbeat.ttl | 30s |
spring.cloud.consul.discovery.lifecycle.enabled | true |
spring.cloud.discovery.client.health-indicator.enabled | true |
spring.cloud.discovery.client.health-indicator.include-description | false |
spring.cloud.discovery.client.simple.instances | `` |
spring.cloud.discovery.client.simple.order | `` |
spring.cloud.gateway.discovery.locator.filters | `` |
spring.cloud.gateway.discovery.locator.predicates | `` |
spring.cloud.gateway.filter.remove-hop-by-hop.headers | `` |
spring.cloud.gateway.filter.remove-hop-by-hop.order | `` |
spring.cloud.gateway.filter.secure-headers.content-type-options | nosniff |
spring.cloud.gateway.filter.secure-headers.disable | `` |
spring.cloud.gateway.filter.secure-headers.download-options | noopen |
spring.cloud.gateway.filter.secure-headers.frame-options | DENY |
spring.cloud.gateway.filter.secure-headers.permitted-cross-domain-policies | none |
spring.cloud.gateway.filter.secure-headers.referrer-policy | no-referrer |
spring.cloud.gateway.filter.secure-headers.strict-transport-security | max-age=631138519 |
spring.cloud.gateway.filter.secure-headers.xss-protection-header | 1 ; mode=block |
spring.cloud.gateway.globalcors.cors-configurations | `` |
spring.cloud.gateway.loadbalancer.use404 | false |
spring.cloud.gateway.redis-rate-limiter.config | `` |
spring.cloud.gateway.streaming-media-types | `` |
spring.cloud.hypermedia.refresh.fixed-delay | 5000 |
spring.cloud.hypermedia.refresh.initial-delay | 10000 |
spring.cloud.kubernetes.client.api-version | `` |
spring.cloud.kubernetes.client.ca-cert-data | `` |
spring.cloud.kubernetes.client.ca-cert-file | `` |
spring.cloud.kubernetes.client.client-cert-data | `` |
spring.cloud.kubernetes.client.client-cert-file | `` |
spring.cloud.kubernetes.client.client-key-algo | `` |
spring.cloud.kubernetes.client.client-key-data | `` |
spring.cloud.kubernetes.client.client-key-file | `` |
spring.cloud.kubernetes.client.client-key-passphrase | `` |
spring.cloud.kubernetes.client.connection-timeout | `` |
spring.cloud.kubernetes.client.http-proxy | `` |
spring.cloud.kubernetes.client.https-proxy | `` |
spring.cloud.kubernetes.client.logging-interval | `` |
spring.cloud.kubernetes.client.master-url | `` |
spring.cloud.kubernetes.client.no-proxy | `` |
spring.cloud.kubernetes.client.oauth-token | `` |
spring.cloud.kubernetes.client.proxy-password | `` |
spring.cloud.kubernetes.client.proxy-username | `` |
spring.cloud.kubernetes.client.request-timeout | `` |
spring.cloud.kubernetes.client.rolling-timeout | `` |
spring.cloud.kubernetes.client.service-account-namespace-path | /var/run/secrets/kubernetes.io/serviceaccount/namespace |
spring.cloud.kubernetes.client.trust-certs | `` |
spring.cloud.kubernetes.client.watch-reconnect-interval | `` |
spring.cloud.kubernetes.client.watch-reconnect-limit | `` |
spring.cloud.kubernetes.config.enable-api | true |
spring.cloud.kubernetes.config.name | `` |
spring.cloud.kubernetes.config.namespace | `` |
spring.cloud.kubernetes.config.paths | `` |
spring.cloud.kubernetes.config.sources | `` |
spring.cloud.kubernetes.discovery.order | `` |
spring.cloud.kubernetes.discovery.wait-cache-ready | true |
spring.cloud.kubernetes.loadbalancer.mode | `` | KubernetesLoadBalancerMode setting the load balancer server list with the ip of the pod or the service name. Default value is POD.
spring.cloud.kubernetes.secrets.enable-api | false |
spring.cloud.kubernetes.secrets.labels | `` |
spring.cloud.kubernetes.secrets.name | `` |
spring.cloud.kubernetes.secrets.namespace | `` |
spring.cloud.kubernetes.secrets.paths | `` |
spring.cloud.kubernetes.secrets.sources | `` |
spring.cloud.loadbalancer.health-check.path | `` |
spring.cloud.stream.function.batch-mode | false |
spring.cloud.stream.function.bindings | `` |
spring.cloud.vault.authentication | `` |
spring.cloud.zookeeper.config.enabled | true |
spring.cloud.zookeeper.config.name | `` | Alternative to spring.application.name to use in looking up values in zookeeper.
spring.cloud.zookeeper.dependency-configurations | `` |
spring.cloud.zookeeper.dependency-names | `` |
spring.cloud.zookeeper.discovery.enabled | true |
spring.sleuth.baggage.correlation-fields | `` |
spring.sleuth.baggage.local-fields | `` |
spring.sleuth.baggage.tag-fields | `` |
spring.sleuth.enabled | true |
spring.sleuth.reactor.instrumentation-type | `` |
spring.sleuth.web.skip-pattern | `` |
spring.zipkin.compression.enabled | false |
stubrunner.server-id | `` |
wiremock.reset-mappings-after-each-test | false |
wiremock.rest-template-ssl-enabled | false |
wiremock.server.files | [] |
wiremock.server.https-port | -1 |
wiremock.server.https-port-dynamic | false |
wiremock.server.port | 8080 |
wiremock.server.port-dynamic | false |
wiremock.server.stubs | [] |