Spring Kafka Reference
Gary Russell, Artem Bilan, Biju Kunjummen, Jay Bryant, Soby Chacko, Tomaz
Fernandes
Version 2.9.2
Table of Contents
1. Preface
2. What’s new?
3. Introduction
    3.1.1. Compatibility
4. Reference
    Factory Listeners
    Using KafkaTemplate
    Using RoutingKafkaTemplate
    Using DefaultKafkaProducerFactory
    Using ReplyingKafkaTemplate
    Message Listeners
    @KafkaListener Annotation
    @KafkaListener on a Class
    Rebalancing Listeners
    Filtering Messages
    Retrying Deliveries
    MessageListener Implementations
    Prototype Beans
    Event Consumption
    4.1.12. Monitoring
    Overview
    transactionIdPrefix
    Overview
    JSON
    DefaultErrorHandler
    Using Different Common Error Handlers for Record and Batch Listeners
    4.4.4. Using the Same Broker(s) for Multiple Test Classes
Copies of this document may be made for your own use and for distribution to others, provided
that you do not charge any fee for such copies and further provided that each copy contains this
Copyright Notice, whether distributed in print or electronically.
Chapter 1. Preface
The Spring for Apache Kafka project applies core Spring concepts to the development of Kafka-
based messaging solutions. We provide a “template” as a high-level abstraction for sending
messages. We also provide support for Message-driven POJOs.
Chapter 2. What’s new?
2.1. What’s New in 2.9 Since 2.8
This section covers the changes made from version 2.8 to version 2.9. For changes in earlier
versions, see Change History.
The DefaultErrorHandler can now be configured to pause the container for one poll and use the
remaining results from the previous poll, instead of seeking to the offsets of the remaining records.
See DefaultErrorHandler for more information.
The DefaultErrorHandler now has a BackOffHandler property. See Back Off Handlers for more
information.
interceptBeforeTx now works with all transaction managers (previously it was only applied when a
KafkaAwareTransactionManager was used). See [interceptBeforeTx].
A new container property pauseImmediate is provided which allows the container to pause the
consumer after the current record is processed, instead of after all the records from the previous
poll have been processed. See [pauseImmediate].
You can now configure which inbound headers should be mapped. Also available in version 2.8.8 or
later. See Message Headers for more information.
In 3.0, the futures returned by this class will be CompletableFuture s instead of ListenableFuture s.
See Using KafkaTemplate for assistance in transitioning when using this release.
The template now provides a method to wait for assignment on the reply container, to avoid a race
when sending a request before the reply container is initialized. Also available in version 2.8.8 or
later. See Using ReplyingKafkaTemplate.
In 3.0, the futures returned by this class will be CompletableFuture s instead of ListenableFuture s.
See Using ReplyingKafkaTemplate and Request/Reply with Message<?> s for assistance in transitioning
when using this release.
Chapter 3. Introduction
This first part of the reference documentation is a high-level overview of Spring for Apache Kafka
and the underlying concepts and some code snippets that can help you get up and running as
quickly as possible.
If you are not using Spring Boot, declare the spring-kafka jar as a dependency in your project.
Maven
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>2.9.2</version>
</dependency>
Gradle
compile 'org.springframework.kafka:spring-kafka:2.9.2'
When using Spring Boot (and you haven’t used start.spring.io to create your
project), omit the version and Boot automatically brings in the correct version
that is compatible with your Boot version:
Maven
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
Gradle
compile 'org.springframework.kafka:spring-kafka'
However, the quickest way to get started is to use start.spring.io (or the wizards in Spring Tool Suite
and IntelliJ IDEA) and create a project, selecting 'Spring for Apache Kafka' as a dependency.
3.1.1. Compatibility
The simplest way to get started is to use start.spring.io (or the wizards in Spring Tool Suite and
IntelliJ IDEA) and create a project, selecting 'Spring for Apache Kafka' as a dependency. Refer to the
Spring Boot documentation for more information about its opinionated auto configuration of the
infrastructure beans.
Example 1. Application
Java
@SpringBootApplication
public class Application {

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("topic1")
                .partitions(10)
                .replicas(1)
                .build();
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}
Kotlin
@SpringBootApplication
class Application {

    @Bean
    fun topic() = NewTopic("topic1", 10, 1)

}

fun main(args: Array<String>) = runApplication<Application>(*args)
Example 2. application.properties
spring.kafka.consumer.auto-offset-reset=earliest
The NewTopic bean causes the topic to be created on the broker; it is not needed if the topic already
exists.
Example 3. Application
Java
@SpringBootApplication
public class Application {

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("topic1")
                .partitions(10)
                .replicas(1)
                .build();
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            template.send("topic1", "test");
        };
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}
Kotlin
@SpringBootApplication
class Application {

    @Bean
    fun topic() = NewTopic("topic1", 10, 1)

    @Bean
    fun runner(template: KafkaTemplate<String?, String?>) =
        ApplicationRunner { template.send("topic1", "test") }

    companion object {
        @JvmStatic
        fun main(args: Array<String>) = runApplication<Application>(*args)
    }

}
With Java Configuration (No Spring Boot)
Spring for Apache Kafka is designed to be used in a Spring Application Context. For
example, if you create the listener container yourself outside of a Spring context,
not all functions will work unless you satisfy all of the …Aware interfaces that the
container implements.
Here is an example of an application that does not use Spring Boot; it has both a Consumer and
Producer.
Example 4. Without Boot
Java
@Configuration
@EnableKafka
public class Config {

    @Bean
    ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory(
            ConsumerFactory<Integer, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        return factory;
    }

    @Bean
    public ConsumerFactory<Integer, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerProps());
    }

    private Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // ...
        return props;
    }

    @Bean
    public Sender sender(KafkaTemplate<Integer, String> template) {
        return new Sender(template);
    }

    @Bean
    public Listener listener() {
        return new Listener();
    }

    @Bean
    public ProducerFactory<Integer, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(senderProps());
    }

    private Map<String, Object> senderProps() {
        // mirrors the Kotlin senderProps map below
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // ...
        return props;
    }

    @Bean
    public KafkaTemplate<Integer, String> kafkaTemplate(ProducerFactory<Integer, String> producerFactory) {
        return new KafkaTemplate<Integer, String>(producerFactory);
    }

}
Kotlin
class Sender(private val template: KafkaTemplate<Int, String>) {
    // send methods elided
}

class Listener {
    // @KafkaListener methods elided
}

@Configuration
@EnableKafka
class Config {

    @Bean
    fun kafkaListenerContainerFactory(consumerFactory: ConsumerFactory<Int, String>) =
        ConcurrentKafkaListenerContainerFactory<Int, String>().also { it.consumerFactory = consumerFactory }

    @Bean
    fun consumerFactory() = DefaultKafkaConsumerFactory<Int, String>(consumerProps)

    // mirrors the Java consumerProps() above
    val consumerProps = mapOf(
        ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG to "localhost:9092",
        ConsumerConfig.GROUP_ID_CONFIG to "group",
        ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG to IntegerDeserializer::class.java,
        ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG to StringDeserializer::class.java,
        ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to "earliest"
    )

    @Bean
    fun sender(template: KafkaTemplate<Int, String>) = Sender(template)

    @Bean
    fun listener() = Listener()

    @Bean
    fun producerFactory() = DefaultKafkaProducerFactory<Int, String>(senderProps)

    val senderProps = mapOf(
        ProducerConfig.BOOTSTRAP_SERVERS_CONFIG to "localhost:9092",
        ProducerConfig.LINGER_MS_CONFIG to 10,
        ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG to IntegerSerializer::class.java,
        ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG to StringSerializer::class.java
    )

    @Bean
    fun kafkaTemplate(producerFactory: ProducerFactory<Int, String>) =
        KafkaTemplate(producerFactory)

}
As you can see, you have to define several infrastructure beans when not using Spring Boot.
Chapter 4. Reference
This part of the reference documentation details the various components that comprise Spring for
Apache Kafka. The main chapter covers the core classes to develop a Kafka application with Spring.
Starting with version 2.5, each of these extends KafkaResourceFactory. This allows changing the
bootstrap servers at runtime by adding a Supplier<String> to their configuration:
setBootstrapServersSupplier(() -> …). This will be called for all new connections to get the list of
servers. Consumers and Producers are generally long-lived. To close existing Producers, call reset()
on the DefaultKafkaProducerFactory. To close existing Consumers, call stop() (and then start()) on
the KafkaListenerEndpointRegistry and/or stop() and start() on any other listener container beans.
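For example, a minimal sketch of a producer factory configured with such a supplier (discoverServers() is a
hypothetical method that returns the current comma-delimited server list):

@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // fallback
    configs.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configs.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(configs);
    // called for every new connection to obtain the current server list
    pf.setBootstrapServersSupplier(() -> discoverServers());
    return pf;
}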
For convenience, the framework also provides an ABSwitchCluster which supports two sets of
bootstrap servers; one of which is active at any time. Configure the ABSwitchCluster and add it to the
producer and consumer factories, and the KafkaAdmin, by calling setBootstrapServersSupplier().
When you want to switch, call primary() or secondary() and call reset() on the producer factory to
establish new connection(s); for consumers, stop() and start() all listener containers. When using
@KafkaListener s, stop() and start() the KafkaListenerEndpointRegistry bean.
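A minimal sketch (the server lists are illustrative, and producerConfigs() is assumed to be defined elsewhere);
because ABSwitchCluster implements Supplier<String>, the same bean can be given to the producer and consumer
factories and the KafkaAdmin:

@Bean
public ABSwitchCluster switcher() {
    return new ABSwitchCluster("primaryOne:9092,primaryTwo:9092", "backupOne:9092,backupTwo:9092");
}

@Bean
public DefaultKafkaProducerFactory<String, String> producerFactory(ABSwitchCluster switcher) {
    DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(producerConfigs());
    pf.setBootstrapServersSupplier(switcher);
    return pf;
}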
Factory Listeners
Producer Factory Listener
In each case, the id is created by appending the client-id property (obtained from the metrics()
after creation) to the factory beanName property, separated by a period (.).
These listeners can be used, for example, to create and bind a Micrometer KafkaClientMetrics
instance when a new client is created (and close it when the client is closed).
The framework provides listeners that do exactly that; see Micrometer Native Metrics.
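A sketch of registering such a listener yourself (the framework’s built-in Micrometer listeners already do
this; the example only illustrates the callback shape, and the producerFactory and meterRegistry references
are assumed to be available):

producerFactory.addListener(new ProducerFactory.Listener<String, String>() {

    private final Map<String, KafkaClientMetrics> metrics = new ConcurrentHashMap<>();

    @Override
    public void producerAdded(String id, Producer<String, String> producer) {
        KafkaClientMetrics clientMetrics = new KafkaClientMetrics(producer);
        clientMetrics.bindTo(meterRegistry);
        this.metrics.put(id, clientMetrics);
    }

    @Override
    public void producerRemoved(String id, Producer<String, String> producer) {
        KafkaClientMetrics clientMetrics = this.metrics.remove(id);
        if (clientMetrics != null) {
            clientMetrics.close();
        }
    }

});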
If you define a KafkaAdmin bean in your application context, it can automatically add topics to the
broker. To do so, you can add a NewTopic @Bean for each topic to the application context. Version 2.3
introduced a new class TopicBuilder to make creation of such beans more convenient. The
following example shows how to do so:
Java
@Bean
public KafkaAdmin admin() {
Map<String, Object> configs = new HashMap<>();
configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
return new KafkaAdmin(configs);
}
@Bean
public NewTopic topic1() {
return TopicBuilder.name("thing1")
.partitions(10)
.replicas(3)
.compact()
.build();
}
@Bean
public NewTopic topic2() {
return TopicBuilder.name("thing2")
.partitions(10)
.replicas(3)
.config(TopicConfig.COMPRESSION_TYPE_CONFIG, "zstd")
.build();
}
@Bean
public NewTopic topic3() {
return TopicBuilder.name("thing3")
.assignReplicas(0, Arrays.asList(0, 1))
.assignReplicas(1, Arrays.asList(1, 2))
.assignReplicas(2, Arrays.asList(2, 0))
.config(TopicConfig.COMPRESSION_TYPE_CONFIG, "zstd")
.build();
}
Kotlin
@Bean
fun admin() = KafkaAdmin(mapOf(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG to
"localhost:9092"))
@Bean
fun topic1() =
TopicBuilder.name("thing1")
.partitions(10)
.replicas(3)
.compact()
.build()
@Bean
fun topic2() =
TopicBuilder.name("thing2")
.partitions(10)
.replicas(3)
.config(TopicConfig.COMPRESSION_TYPE_CONFIG, "zstd")
.build()
@Bean
fun topic3() =
TopicBuilder.name("thing3")
.assignReplicas(0, Arrays.asList(0, 1))
.assignReplicas(1, Arrays.asList(1, 2))
.assignReplicas(2, Arrays.asList(2, 0))
.config(TopicConfig.COMPRESSION_TYPE_CONFIG, "zstd")
.build()
Starting with version 2.6, you can omit .partitions() and/or .replicas() and the broker defaults will
be applied to those properties. The broker version must be at least 2.4.0 to support this feature - see
KIP-464.
Java
@Bean
public NewTopic topic4() {
return TopicBuilder.name("defaultBoth")
.build();
}
@Bean
public NewTopic topic5() {
return TopicBuilder.name("defaultPart")
.replicas(1)
.build();
}
@Bean
public NewTopic topic6() {
return TopicBuilder.name("defaultRepl")
.partitions(3)
.build();
}
Kotlin
@Bean
fun topic4() = TopicBuilder.name("defaultBoth").build()
@Bean
fun topic5() = TopicBuilder.name("defaultPart").replicas(1).build()
@Bean
fun topic6() = TopicBuilder.name("defaultRepl").partitions(3).build()
Starting with version 2.7, you can declare multiple NewTopic s in a single KafkaAdmin.NewTopics bean
definition:
Java
@Bean
public KafkaAdmin.NewTopics topics456() {
return new NewTopics(
TopicBuilder.name("defaultBoth")
.build(),
TopicBuilder.name("defaultPart")
.replicas(1)
.build(),
TopicBuilder.name("defaultRepl")
.partitions(3)
.build());
}
Kotlin
@Bean
fun topics456() = KafkaAdmin.NewTopics(
TopicBuilder.name("defaultBoth")
.build(),
TopicBuilder.name("defaultPart")
.replicas(1)
.build(),
TopicBuilder.name("defaultRepl")
.partitions(3)
.build()
)
When using Spring Boot, a KafkaAdmin bean is automatically registered so you only
need the NewTopic (and/or NewTopics) @Bean s.
By default, if the broker is not available, a message is logged, but the context continues to load. You
can programmatically invoke the admin’s initialize() method to try again later. If you wish this
condition to be considered fatal, set the admin’s fatalIfBrokerNotAvailable property to true. The
context then fails to initialize.
If the broker supports it (1.0.0 or higher), the admin increases the number of
partitions if it is found that an existing topic has fewer partitions than the
NewTopic.numPartitions.
Starting with version 2.7, the KafkaAdmin provides methods to create and examine topics at runtime.
• createOrModifyTopics
• describeTopics
For more advanced features, you can use the AdminClient directly. The following example shows how to do so:
@Autowired
private KafkaAdmin admin;
...
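A minimal sketch of working with an AdminClient built from the KafkaAdmin’s configuration (here it simply
reads the cluster id; any admin operation can be used in its place):

public String clusterId() throws Exception {
    try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
        return client.describeCluster().clusterId().get(10, TimeUnit.SECONDS);
    }
}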
Using KafkaTemplate
Overview
The KafkaTemplate wraps a producer and provides convenience methods to send data to Kafka
topics. The following listing shows the relevant methods from KafkaTemplate:
ListenableFuture<SendResult<K, V>> sendDefault(V data);
void flush();
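The listing above is abridged; other commonly used methods have the following general shape (a sketch; see
the KafkaOperations Javadoc for the complete set and exact signatures):

ListenableFuture<SendResult<K, V>> send(String topic, V data);

ListenableFuture<SendResult<K, V>> send(String topic, K key, V data);

ListenableFuture<SendResult<K, V>> send(String topic, Integer partition, K key, V data);

ListenableFuture<SendResult<K, V>> send(String topic, Integer partition, Long timestamp, K key, V data);

ListenableFuture<SendResult<K, V>> send(ProducerRecord<K, V> record);

ListenableFuture<SendResult<K, V>> send(Message<?> message);

Map<MetricName, ? extends Metric> metrics();

List<PartitionInfo> partitionsFor(String topic);

<T> T execute(ProducerCallback<K, V, T> callback);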
In version 3.0, the methods that return ListenableFuture will be changed to return
CompletableFuture. To facilitate the migration, the 2.9 version has a method
.usingCompletableFuture() which will provide the same methods with
CompletableFuture return types.
KafkaOperations2<String, String> template = new KafkaTemplate<>(producerFactory)
        .usingCompletableFuture();

CompletableFuture<SendResult<String, String>> future = template.send(topic1, 0, 0, "buz")
        .whenComplete((sr, thrown) -> {
            ...
        });
The sendDefault API requires that a default topic has been provided to the template.
The API takes in a timestamp as a parameter and stores this timestamp in the record. How the user-
provided timestamp is stored depends on the timestamp type configured on the Kafka topic. If the
topic is configured to use CREATE_TIME, the user specified timestamp is recorded (or generated if not
specified). If the topic is configured to use LOG_APPEND_TIME, the user-specified timestamp is ignored
and the broker adds in the local broker time.
The metrics and partitionsFor methods delegate to the same methods on the underlying Producer.
The execute method provides direct access to the underlying Producer.
To use the template, you can configure a producer factory and provide it in the template’s
constructor. The following example shows how to do so:
@Bean
public ProducerFactory<Integer, String> producerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    // See https://kafka.apache.org/documentation/#producerconfigs for more properties
    return props;
}

@Bean
public KafkaTemplate<Integer, String> kafkaTemplate() {
    return new KafkaTemplate<Integer, String>(producerFactory());
}
Starting with version 2.5, you can now override the factory’s ProducerConfig properties to create
templates with different producer configurations from the same factory.
@Bean
public KafkaTemplate<String, String> stringTemplate(ProducerFactory<String, String> pf) {
    return new KafkaTemplate<>(pf);
}

@Bean
public KafkaTemplate<String, byte[]> bytesTemplate(ProducerFactory<String, byte[]> pf) {
    return new KafkaTemplate<>(pf,
            Collections.singletonMap(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class));
}
Note that a bean of type ProducerFactory<?, ?> (such as the one auto-configured by Spring Boot) can
be referenced with different narrowed generic types.
You can also configure the template by using standard <bean/> definitions.
Then, to use the template, you can invoke one of its methods.
When you use the methods with a Message<?> parameter, the topic, partition, and key information is
provided in a message header that includes the following items:
• KafkaHeaders.TOPIC
• KafkaHeaders.PARTITION
• KafkaHeaders.KEY
• KafkaHeaders.TIMESTAMP
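For example, a sketch of building such a message (the topic, partition, and key values are arbitrary):

Message<String> message = MessageBuilder.withPayload("something")
        .setHeader(KafkaHeaders.TOPIC, "someTopic")
        .setHeader(KafkaHeaders.PARTITION, 0)
        .setHeader(KafkaHeaders.KEY, 1)
        .build();
template.send(message);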
Optionally, you can configure the KafkaTemplate with a ProducerListener to get an asynchronous
callback with the results of the send (success or failure) instead of waiting for the Future to
complete. The following listing shows the definition of the ProducerListener interface:
public interface ProducerListener<K, V> {

    void onSuccess(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata);

    void onError(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata,
            Exception exception);

}
By default, the template is configured with a LoggingProducerListener, which logs errors and does
nothing when the send is successful.
For convenience, default method implementations are provided in case you want to implement
only one of the methods.
Notice that the send methods return a ListenableFuture<SendResult>. You can register a callback
with the listener to receive the result of the send asynchronously. The following example shows
how to do so:
ListenableFuture<SendResult<Integer, String>> future = template.send("topic", 1, "thing");
future.addCallback(new ListenableFutureCallback<SendResult<Integer, String>>() {

    @Override
    public void onSuccess(SendResult<Integer, String> result) {
        ...
    }

    @Override
    public void onFailure(Throwable ex) {
        ...
    }

});
SendResult has two properties, a ProducerRecord and RecordMetadata. See the Kafka API
documentation for information about those objects.
Starting with version 2.5, you can use a KafkaSendCallback instead of a ListenableFutureCallback,
making it easier to extract the failed ProducerRecord, avoiding the need to cast the Throwable:
ListenableFuture<SendResult<Integer, String>> future = template.send("topic", 1, "thing");
future.addCallback(new KafkaSendCallback<Integer, String>() {
@Override
public void onSuccess(SendResult<Integer, String> result) {
...
}
@Override
public void onFailure(KafkaProducerException ex) {
ProducerRecord<Integer, String> failed = ex.getFailedProducerRecord();
...
}
});
If you wish to block the sending thread to await the result, you can invoke the future’s get()
method; using the method with a timeout is recommended. You may wish to invoke flush() before
waiting or, for convenience, the template has a constructor with an autoFlush parameter that
causes the template to flush() on each send. Flushing is only needed if you have set the linger.ms
producer property and want to immediately send a partial batch.
Examples
Example 5. Non Blocking (Async)
public void sendToKafka(final MyOutputData data) {
    final ProducerRecord<Integer, String> record = createRecord(data); // user method that builds the record (not shown)

    ListenableFuture<SendResult<Integer, String>> future = template.send(record);
    future.addCallback(new KafkaSendCallback<Integer, String>() {

        @Override
        public void onSuccess(SendResult<Integer, String> result) {
            handleSuccess(data);
        }

        @Override
        public void onFailure(KafkaProducerException ex) {
            handleFailure(data, record, ex);
        }

    });
}
Blocking (Sync)
public void sendToKafka(final MyOutputData data) {
    final ProducerRecord<Integer, String> record = createRecord(data); // user method that builds the record (not shown)
    try {
        template.send(record).get(10, TimeUnit.SECONDS);
        handleSuccess(data);
    }
    catch (ExecutionException e) {
        handleFailure(data, record, e.getCause());
    }
    catch (TimeoutException | InterruptedException e) {
        handleFailure(data, record, e);
    }
}
Using RoutingKafkaTemplate
Starting with version 2.5, you can use a RoutingKafkaTemplate to select the producer at runtime,
based on the destination topic name.
The routing template does not support transactions, execute, flush, or metrics
operations because the topic is not known for those operations.
The following simple Spring Boot application provides an example of how to use the same template
to send to different topics, each using a different value serializer.
@SpringBootApplication
public class Application {
@Bean
public RoutingKafkaTemplate routingTemplate(GenericApplicationContext context,
ProducerFactory<Object, Object> pf) {
@Bean
public ApplicationRunner runner(RoutingKafkaTemplate routingTemplate) {
return args -> {
routingTemplate.send("one", "thing1");
routingTemplate.send("two", "thing2".getBytes());
};
}
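The routingTemplate bean body is elided above; a minimal sketch of what it might contain, assuming topic
"two" should use a ByteArraySerializer and every other topic the default factory:

@Bean
public RoutingKafkaTemplate routingTemplate(GenericApplicationContext context,
        ProducerFactory<Object, Object> pf) {

    // clone the default factory's configuration with a byte[] value serializer and register
    // the new factory as a bean so that it is destroyed on shutdown
    Map<String, Object> configs = new HashMap<>(pf.getConfigurationProperties());
    configs.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
    DefaultKafkaProducerFactory<Object, Object> bytesPF = new DefaultKafkaProducerFactory<>(configs);
    context.registerBean("bytesPF", DefaultKafkaProducerFactory.class, () -> bytesPF);

    Map<Pattern, ProducerFactory<Object, Object>> map = new LinkedHashMap<>();
    map.put(Pattern.compile("two"), bytesPF);
    map.put(Pattern.compile(".+"), pf); // default factory for everything else
    return new RoutingKafkaTemplate(map);
}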
The corresponding @KafkaListener s for this example are shown in Annotation Properties.
For another technique to achieve similar results, but with the additional capability of sending
different types to the same topic, see Delegating Serializer and Deserializer.
Using DefaultKafkaProducerFactory
When creating a DefaultKafkaProducerFactory, key and/or value Serializer classes can be picked up
from configuration by calling the constructor that only takes in a Map of properties (see example in
Using KafkaTemplate), or Serializer instances may be passed to the DefaultKafkaProducerFactory
constructor (in which case all Producer s share the same instances). Alternatively you can provide
Supplier<Serializer> s (starting with version 2.3) that will be used to obtain separate Serializer
instances for each Producer:
@Bean
public ProducerFactory<Integer, CustomValue> producerFactory() {
return new DefaultKafkaProducerFactory<>(producerConfigs(), null,
        () -> new CustomValueSerializer());
}
@Bean
public KafkaTemplate<Integer, CustomValue> kafkaTemplate() {
return new KafkaTemplate<Integer, CustomValue>(producerFactory());
}
Starting with version 2.5.10, you can now update the producer properties after the factory is
created. This might be useful, for example, if you have to update SSL key/trust store locations after
a credentials change. The changes will not affect existing producer instances; call reset() to close
any existing producers so that new producers will be created using the new properties. NOTE: You
cannot change a transactional producer factory to non-transactional, and vice-versa.
void updateConfigs(Map<String, Object> updates);
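For example, a sketch of rotating trust store credentials at runtime (the path and password are illustrative,
and the producerFactory reference is assumed to be available):

Map<String, Object> updates = new HashMap<>();
updates.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/etc/secrets/new-truststore.jks");
updates.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, newTrustStorePassword);
producerFactory.updateConfigs(updates);
producerFactory.reset(); // close existing producers; new ones use the updated properties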
Starting with version 2.8, if you provide serializers as objects (in the constructor or via the setters),
the factory will invoke the configure() method to configure them with the configuration properties.
Using ReplyingKafkaTemplate
Version 2.1.3 introduced a subclass of KafkaTemplate to provide request/reply semantics. The class is
named ReplyingKafkaTemplate and has two additional methods; the following shows the method
signatures:
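In outline, they look like this (a sketch; refer to the Javadoc for the exact generic parameters):

RequestReplyFuture<K, V, R> sendAndReceive(ProducerRecord<K, V> record);

RequestReplyFuture<K, V, R> sendAndReceive(ProducerRecord<K, V> record, Duration replyTimeout);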
The result is a ListenableFuture that is asynchronously populated with the result (or an exception,
for a timeout). The result also has a sendFuture property, which is the result of calling
KafkaTemplate.send(). You can use this future to determine the result of the send operation.
In version 3.0, the futures returned by these methods (and their sendFuture
properties) will be CompletableFuture s instead of ListenableFuture s. To assist in the
transition, using this release, you can convert these types to a CompletableFuture
by calling asCompletable() on the returned Future.
If the first method is used, or the replyTimeout argument is null, the template’s defaultReplyTimeout
property is used (5 seconds by default).
Starting with version 2.8.8, the template has a new method, waitForAssignment. This is useful if the
reply container is configured with auto.offset.reset=latest, to avoid sending a request before the reply
container is initialized (and, as a result, missing the reply). Also available in version 2.8.8 or later.
When using manual partition assignment (no group management), the duration
for the wait must be greater than the container’s pollTimeout property because the
notification will not be sent until after the first poll is completed.
The following Spring Boot application shows an example of how to use the feature:
@SpringBootApplication
public class KRequestingApplication {
@Bean
public ApplicationRunner runner(ReplyingKafkaTemplate<String, String, String>
template) {
return args -> {
    if (!template.waitForAssignment(Duration.ofSeconds(10))) {
        throw new IllegalStateException("Reply container did not initialize");
    }
    ProducerRecord<String, String> record = new ProducerRecord<>("kRequests", "foo");
    RequestReplyFuture<String, String, String> replyFuture = template.sendAndReceive(record);
    SendResult<String, String> sendResult = replyFuture.getSendFuture()
            .get(10, TimeUnit.SECONDS);
    System.out.println("Sent ok: " + sendResult.getRecordMetadata());
    ConsumerRecord<String, String> consumerRecord = replyFuture.get(10, TimeUnit.SECONDS);
    System.out.println("Return value: " + consumerRecord.value());
};
}
@Bean
public ReplyingKafkaTemplate<String, String, String> replyingTemplate(
        ProducerFactory<String, String> pf,
        ConcurrentMessageListenerContainer<String, String> repliesContainer) {

    return new ReplyingKafkaTemplate<>(pf, repliesContainer);
}

@Bean
public ConcurrentMessageListenerContainer<String, String> repliesContainer(
        ConcurrentKafkaListenerContainerFactory<String, String> containerFactory) {

    // a sketch of a possible body, using Boot's auto-configured container factory
    ConcurrentMessageListenerContainer<String, String> repliesContainer =
            containerFactory.createContainer("kReplies");
    repliesContainer.getContainerProperties().setGroupId("repliesGroup");
    repliesContainer.setAutoStartup(false);
    return repliesContainer;
}

@Bean
public NewTopic kRequests() {
return TopicBuilder.name("kRequests")
.partitions(10)
.replicas(2)
.build();
}
@Bean
public NewTopic kReplies() {
return TopicBuilder.name("kReplies")
.partitions(10)
.replicas(2)
.build();
}

}
Note that we can use Boot’s auto-configured container factory to create the reply container.
Starting with version 2.6.7, in addition to detecting DeserializationException s, the template will call
the replyErrorChecker function, if provided. If it returns an exception, the future will be completed
exceptionally.
Here is an example:
template.setReplyErrorChecker(record -> {
Header error = record.headers().lastHeader("serverSentAnError");
if (error != null) {
return new MyException(new String(error.value()));
}
else {
return null;
}
});
...
The template sets a header (named KafkaHeaders.CORRELATION_ID by default), which must be echoed
back by the server side.
@SpringBootApplication
public class KReplyingApplication {
@Bean
public NewTopic kRequests() {
return TopicBuilder.name("kRequests")
.partitions(10)
.replicas(2)
.build();
}
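The request handler itself is not shown above; a minimal sketch of a handler for this application, assuming
a simple upper-casing service:

@KafkaListener(id = "server", topics = "kRequests")
@SendTo  // use the default replyTo expression
public String handleRequest(String in) {
    System.out.println("Server received: " + in);
    return in.toUpperCase();
}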
The @KafkaListener infrastructure echoes the correlation ID and determines the reply topic.
See Forwarding Listener Results using @SendTo for more information about sending replies. The
template uses the default header KafkaHeaders.REPLY_TOPIC to indicate the topic to which the reply
goes.
Starting with version 2.2, the template tries to detect the reply topic or partition from the
configured reply container. If the container is configured to listen to a single topic or a single
TopicPartitionOffset, it is used to set the reply headers. If the container is configured otherwise, the
user must set up the reply headers. In this case, an INFO log message is written during initialization.
The following example uses KafkaHeaders.REPLY_TOPIC:
record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, "kReplies".getBytes()));
When you configure with a single reply TopicPartitionOffset, you can use the same reply topic for
multiple templates, as long as each instance listens on a different partition. When configuring with
a single reply topic, each instance must use a different group.id. In this case, all instances receive
each reply, but only the instance that sent the request finds the correlation ID. This may be useful
for auto-scaling, but with the overhead of additional network traffic and the small cost of
discarding each unwanted reply. When you use this setting, we recommend that you set the
template’s sharedReplyTopic to true, which reduces the logging level of unexpected replies to DEBUG
instead of the default ERROR.
The following is an example of configuring the reply container to use the same shared reply topic:
@Bean
public ConcurrentMessageListenerContainer<String, String> replyContainer(
ConcurrentKafkaListenerContainerFactory<String, String> containerFactory)
{
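    // A sketch of a possible body, assuming the shared reply topic "kReplies"; a random
    // group.id ensures every instance receives each reply.
    ConcurrentMessageListenerContainer<String, String> container =
            containerFactory.createContainer("kReplies");
    container.getContainerProperties().setGroupId(UUID.randomUUID().toString()); // unique per instance
    Properties consumerProps = new Properties();
    consumerProps.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // ignore replies sent before startup
    container.getContainerProperties().setKafkaConsumerProperties(consumerProps);
    return container;
}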
If you have multiple client instances and you do not configure them as discussed
in the preceding paragraph, each instance needs a dedicated reply topic. An
alternative is to set the KafkaHeaders.REPLY_PARTITION and use a dedicated partition
for each instance. The Header contains a four-byte int (big-endian). The server must
use this header to route the reply to the correct partition (@KafkaListener does
this). In this case, though, the reply container must not use Kafka’s group
management feature and must be configured to listen on a fixed partition (by
using a TopicPartitionOffset in its ContainerProperties constructor).
By default, 3 headers are used:

• KafkaHeaders.CORRELATION_ID: used to correlate the reply to a request
• KafkaHeaders.REPLY_TOPIC: used to tell the server where to reply
• KafkaHeaders.REPLY_PARTITION: (optional) used to tell the server which partition to reply to
These header names are used by the @KafkaListener infrastructure to route the reply.
Starting with version 2.3, you can customize the header names - the template has 3 properties
correlationHeaderName, replyTopicHeaderName, and replyPartitionHeaderName. This is useful if your
server is not a Spring application (or does not use the @KafkaListener).
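For example, a sketch of renaming the headers on the template (the names are arbitrary; the server side
must use the same names):

template.setCorrelationHeaderName("x-correlation-id");
template.setReplyTopicHeaderName("x-reply-topic");
template.setReplyPartitionHeaderName("x-reply-partition");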
Version 2.7 added methods to the ReplyingKafkaTemplate to send and receive spring-messaging 's
Message<?> abstraction:
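In outline (a sketch; see the Javadoc for the exact signatures):

RequestReplyMessageFuture<K, V> sendAndReceive(Message<?> message);

<P> RequestReplyTypedMessageFuture<K, V, P> sendAndReceive(Message<?> message,
        ParameterizedTypeReference<P> returnType);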
These use the template’s default replyTimeout; there are also overloaded versions that can take
a timeout in the method call.
In version 3.0, the futures returned by these methods (and their sendFuture
properties) will be CompletableFuture s instead of ListenableFuture s. To assist in the
transition, using this release, you can convert these types to a CompletableFuture
by calling asCompletable() on the returned Future.
Use the first method if the consumer’s Deserializer or the template’s MessageConverter can convert
the payload without any additional information, either via configuration or type metadata in the
reply message.
Use the second method if you need to provide type information for the return type, to assist the
message converter. This also allows the same template to receive different types, even if there is no
type metadata in the replies, such as when the server side is not a Spring application. The following
is an example of the latter:
Example 6. Template Bean
Java
@Bean
ReplyingKafkaTemplate<String, String, String> template(
        ProducerFactory<String, String> pf,
        ConcurrentKafkaListenerContainerFactory<String, String> factory) {

    // mirrors the Kotlin version below
    ConcurrentMessageListenerContainer<String, String> replyContainer =
            factory.createContainer("replies");
    replyContainer.getContainerProperties().setGroupId("request.replies");
    ReplyingKafkaTemplate<String, String, String> template =
            new ReplyingKafkaTemplate<>(pf, replyContainer);
    template.setMessageConverter(new ByteArrayJsonMessageConverter());
    template.setDefaultTopic("requests");
    return template;
}
Kotlin
@Bean
fun template(
pf: ProducerFactory<String?, String>?,
factory: ConcurrentKafkaListenerContainerFactory<String?, String?>
): ReplyingKafkaTemplate<String?, String, String?> {
val replyContainer = factory.createContainer("replies")
replyContainer.containerProperties.groupId = "request.replies"
val template = ReplyingKafkaTemplate(pf, replyContainer)
template.messageConverter = ByteArrayJsonMessageConverter()
template.defaultTopic = "requests"
return template
}
Example 7. Using the template
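A sketch of using the template (Thing is a hypothetical reply payload type); the ParameterizedTypeReference
passed to the second method tells the message converter what to produce:

RequestReplyTypedMessageFuture<String, String, Thing> future =
        template.sendAndReceive(MessageBuilder.withPayload("getAThing").build(),
                new ParameterizedTypeReference<Thing>() { });
SendResult<String, String> sendResult = future.getSendFuture().get(10, TimeUnit.SECONDS);
Thing thing = future.get(10, TimeUnit.SECONDS).getPayload();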
When the @KafkaListener returns a Message<?>, with versions before 2.5, it was necessary to
populate the reply topic and correlation id headers. In this example, we use the reply topic header
from the request:
@KafkaListener(id = "requestor", topics = "request")
@SendTo
public Message<?> messageReturn(String in) {
return MessageBuilder.withPayload(in.toUpperCase())
.setHeader(KafkaHeaders.TOPIC, replyTo)
.setHeader(KafkaHeaders.KEY, 42)
.setHeader(KafkaHeaders.CORRELATION_ID, correlation)
.build();
}
Starting with version 2.5, the framework will detect if these headers are missing and populate them
with the topic - either the topic determined from the @SendTo value or the incoming
KafkaHeaders.REPLY_TOPIC header (if present). It will also echo the incoming
KafkaHeaders.CORRELATION_ID and KafkaHeaders.REPLY_PARTITION, if present.
The template in Using ReplyingKafkaTemplate is strictly for a single request/reply scenario. For cases
where multiple receivers of a single message return a reply, you can use the
AggregatingReplyingKafkaTemplate. This is an implementation of the client-side of the Scatter-Gather
Enterprise Integration Pattern.
There is an additional property returnPartialOnTimeout (default false). When this is set to true,
instead of completing the future with a KafkaReplyTimeoutException, a partial result completes the
future normally (as long as at least one reply record has been received).
Starting with version 2.3.5, the predicate is also called after a timeout (if returnPartialOnTimeout is
true). The first argument is the current list of records; the second is true if this call is due to a
timeout. The predicate can modify the list of records.
AggregatingReplyingKafkaTemplate<Integer, String, String> template =
        new AggregatingReplyingKafkaTemplate<>(producerFactory, container,
                coll -> coll.size() == releaseSize);
...
RequestReplyFuture<Integer, String, Collection<ConsumerRecord<Integer, String>>> future =
        template.sendAndReceive(record);
future.getSendFuture().get(10, TimeUnit.SECONDS); // send ok
ConsumerRecord<Integer, Collection<ConsumerRecord<Integer, String>>> consumerRecord =
        future.get(30, TimeUnit.SECONDS);
Notice that the return type is a ConsumerRecord with a value that is a collection of ConsumerRecord s.
The "outer" ConsumerRecord is not a "real" record, it is synthesized by the template, as a holder for
the actual reply records received for the request. When a normal release occurs (release strategy
returns true), the topic is set to aggregatedResults; if returnPartialOnTimeout is true, and timeout
occurs (and at least one reply record has been received), the topic is set to
partialResultsAfterTimeout. The template provides constant static variables for these "topic" names:
/**
 * Pseudo topic name for the "outer" {@link ConsumerRecords} that has the aggregated
 * results in its value after a normal release by the release strategy.
 */
public static final String AGGREGATED_RESULTS_TOPIC = "aggregatedResults";

/**
 * Pseudo topic name for the "outer" {@link ConsumerRecords} that has the aggregated
 * results in its value after a timeout.
 */
public static final String PARTIAL_RESULTS_AFTER_TIMEOUT_TOPIC = "partialResultsAfterTimeout";
The real ConsumerRecord s in the Collection contain the actual topic(s) from which the replies are
received.
The listener container for the replies MUST be configured with AckMode.MANUAL or
AckMode.MANUAL_IMMEDIATE; the consumer property enable.auto.commit must be
false (the default since version 2.3). To avoid any possibility of losing messages,
the template only commits offsets when there are zero requests outstanding, i.e.
when the last outstanding request is released by the release strategy. After a
rebalance, it is possible for duplicate reply deliveries; these will be ignored for any
in-flight requests; you may see error log messages when duplicate replies are
received for already released replies.
Message Listeners
When you use a message listener container, you must provide a listener to receive data. There are
currently eight supported interfaces for message listeners. The following listing shows these
interfaces:
public interface MessageListener<K, V> { ①

    void onMessage(ConsumerRecord<K, V> data);

}

...

public interface BatchAcknowledgingConsumerAwareMessageListener<K, V> { ⑧

    void onMessage(List<ConsumerRecord<K, V>> data, Acknowledgment acknowledgment,
            Consumer<?, ?> consumer);

}
① Use this interface for processing individual ConsumerRecord instances received from the
Kafka consumer poll() operation when using auto-commit or one of the container-
managed commit methods.
② Use this interface for processing individual ConsumerRecord instances received from the
Kafka consumer poll() operation when using one of the manual commit methods.
③ Use this interface for processing individual ConsumerRecord instances received from the
Kafka consumer poll() operation when using auto-commit or one of the container-
managed commit methods. Access to the Consumer object is provided.
④ Use this interface for processing individual ConsumerRecord instances received from the
Kafka consumer poll() operation when using one of the manual commit methods. Access to
the Consumer object is provided.
⑤ Use this interface for processing all ConsumerRecord instances received from the Kafka
consumer poll() operation when using auto-commit or one of the container-managed
commit methods. AckMode.RECORD is not supported when you use this interface, since the
listener is given the complete batch.
⑥ Use this interface for processing all ConsumerRecord instances received from the Kafka
consumer poll() operation when using one of the manual commit methods.
⑦ Use this interface for processing all ConsumerRecord instances received from the Kafka
consumer poll() operation when using auto-commit or one of the container-managed
commit methods. AckMode.RECORD is not supported when you use this interface, since the
listener is given the complete batch. Access to the Consumer object is provided.
⑧ Use this interface for processing all ConsumerRecord instances received from the Kafka
consumer poll() operation when using one of the manual commit methods. Access to the
Consumer object is provided.
The Consumer object is not thread-safe. You must only invoke its methods on the
thread that calls the listener.
You should not execute any Consumer<?, ?> methods that affect the consumer’s
positions and/or committed offsets in your listener; the container needs to manage
such information.
• KafkaMessageListenerContainer
• ConcurrentMessageListenerContainer
The KafkaMessageListenerContainer receives all messages from all topics or partitions on a single
thread. The ConcurrentMessageListenerContainer delegates to one or more
KafkaMessageListenerContainer instances to provide multi-threaded consumption.
Starting with version 2.2.7, you can add a RecordInterceptor to the listener container; it will be
invoked before calling the listener, allowing inspection or modification of the record. If the
interceptor returns null, the listener is not called. Starting with version 2.7, it has additional
methods which are called after the listener exits (normally, or by throwing an exception). Also,
starting with version 2.7, there is now a BatchInterceptor, providing similar functionality for Batch
Listeners. In addition, the ConsumerAwareRecordInterceptor (and BatchInterceptor) provide access to
the Consumer<?, ?>. This might be used, for example, to access the consumer metrics in the
interceptor.
You should not execute any methods that affect the consumer’s positions and/or
committed offsets in these interceptors; the container needs to manage such
information.
If the interceptor mutates the record (by creating a new one), the topic, partition,
and offset must remain the same to avoid unexpected side effects such as record
loss.
By default, starting with version 2.8, when using transactions, the interceptor is invoked before the
transaction has started. You can set the listener container’s interceptBeforeTx property to false to
invoke the interceptor after the transaction has started instead. Starting with version 2.9, this will
apply to any transaction manager, not just KafkaAwareTransactionManager s. This allows, for example,
the interceptor to participate in a JDBC transaction started by the container.
Starting with versions 2.3.8 and 2.4.6, the ConcurrentMessageListenerContainer now supports Static
Membership when the concurrency is greater than one. The group.instance.id is suffixed with -n
with n starting at 1. This, together with an increased session.timeout.ms, can be used to reduce
rebalance events, for example, when application instances are restarted.
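A sketch of enabling static membership in the consumer configuration (the instance id and timeout are
illustrative, and consumerProps() is a map like the earlier examples; the container appends -n for each
concurrent consumer):

Map<String, Object> props = new HashMap<>(consumerProps());
props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "my-app-instance-1");
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 60000);
ConsumerFactory<Integer, String> cf = new DefaultKafkaConsumerFactory<>(props);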
Using KafkaMessageListenerContainer
It receives a ConsumerFactory and information about topics and partitions, as well as other
configuration, in a ContainerProperties object. ContainerProperties has the following constructors:
public ContainerProperties(TopicPartitionOffset... topicPartitions)

public ContainerProperties(String... topics)

public ContainerProperties(Pattern topicPattern)
The first constructor takes an array of TopicPartitionOffset arguments to explicitly instruct the
container about which partitions to use (using the consumer assign() method) and with an optional
initial offset. A positive value is an absolute offset by default. A negative value is relative to the
current last offset within a partition by default. A constructor for TopicPartitionOffset that takes an
additional boolean argument is provided. If this is true, the initial offsets (positive or negative) are
relative to the current position for this consumer. The offsets are applied when the container is
started. The second takes an array of topics, and Kafka allocates the partitions based on the
group.id property — distributing partitions across the group. The third uses a regex Pattern to select
the topics.
Note that when creating a DefaultKafkaConsumerFactory, using the constructor that just takes in the
properties as above means that key and value Deserializer classes are picked up from
configuration. Alternatively, Deserializer instances may be passed to the
DefaultKafkaConsumerFactory constructor for key and/or value, in which case all Consumers share
the same instances. Another option is to provide Supplier<Deserializer> s (starting with version 2.3)
that will be used to obtain separate Deserializer instances for each Consumer:
DefaultKafkaConsumerFactory<Integer, CustomValue> cf =
new DefaultKafkaConsumerFactory<>(consumerProps(), null,
() -> new CustomValueDeserializer());
KafkaMessageListenerContainer<Integer, String> container =
new KafkaMessageListenerContainer<>(cf, containerProps);
return container;
Refer to the Javadoc for ContainerProperties for more information about the various properties that
you can set.
Since version 2.1.1, a new property called logContainerConfig is available. When true and INFO
logging is enabled each listener container writes a log message summarizing its configuration
properties.
By default, logging of topic offset commits is performed at the DEBUG logging level. Starting with
version 2.1.2, a property in ContainerProperties called commitLogLevel lets you specify the log level
for these messages. For example, to change the log level to INFO, you can use
containerProperties.setCommitLogLevel(LogIfLevelEnabled.Level.INFO);.
Starting with version 2.2, a new container property called missingTopicsFatal has been added
(default: false since 2.3.4). This prevents the container from starting if any of the configured topics
are not present on the broker. It does not apply if the container is configured to listen to a topic
pattern (regex). Previously, the container threads looped within the consumer.poll() method
waiting for the topic to appear while logging many messages. Aside from the logs, there was no
indication that there was a problem.
As of version 2.8, a new container property authExceptionRetryInterval has been introduced. This
causes the container to retry fetching messages after getting any AuthenticationException or
AuthorizationException from the KafkaConsumer. This can happen when, for example, the configured
user is denied access to read a certain topic or credentials are incorrect. Defining
authExceptionRetryInterval allows the container to recover when proper permissions are granted.
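For example, a sketch of retrying every 10 seconds after such an exception (the topic is arbitrary):

ContainerProperties containerProps = new ContainerProperties("myTopic");
containerProps.setAuthExceptionRetryInterval(Duration.ofSeconds(10));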
Starting with version 2.8, when creating the consumer factory, if you provide deserializers as
objects (in the constructor or via the setters), the factory will invoke the configure() method to
configure them with the configuration properties.
Using ConcurrentMessageListenerContainer
The single constructor is similar to the KafkaMessageListenerContainer constructor. The following listing
shows the constructor’s signature:
public ConcurrentMessageListenerContainer(ConsumerFactory<K, V> consumerFactory,
ContainerProperties containerProperties)
For the first constructor, Kafka distributes the partitions across the consumers using its group
management capabilities.
When listening to multiple topics, the default partition distribution may not be
what you expect. For example, if you have three topics with five partitions each
and you want to use concurrency=15, you see only five active consumers, each
assigned one partition from each topic, with the other 10 consumers being idle.
This is because the default Kafka PartitionAssignor is the RangeAssignor (see its
Javadoc). For this scenario, you may want to consider using the RoundRobinAssignor
instead, which distributes the partitions across all of the consumers. Then, each
consumer is assigned one topic or partition. To change the PartitionAssignor, you
can set the partition.assignment.strategy consumer property
(ConsumerConfigs.PARTITION_ASSIGNMENT_STRATEGY_CONFIG) in the properties provided
to the DefaultKafkaConsumerFactory.
When using Spring Boot, you can set the strategy as follows:
spring.kafka.consumer.properties.partition.assignment.strategy=\
org.apache.kafka.clients.consumer.RoundRobinAssignor
If, say, six TopicPartitionOffset instances are provided and the concurrency is 3; each container gets
two partitions. For five TopicPartitionOffset instances, two containers get two partitions, and the
third gets one. If the concurrency is greater than the number of TopicPartitions, the concurrency is
adjusted down such that each container gets one partition.
The client.id property (if set) is appended with -n where n is the consumer
instance that corresponds to the concurrency. This is required to provide unique
names for MBeans when JMX is enabled.
Starting with version 1.3, the MessageListenerContainer provides access to the metrics of the
underlying KafkaConsumer. In the case of ConcurrentMessageListenerContainer, the metrics() method
returns the metrics for all the target KafkaMessageListenerContainer instances. The metrics are
grouped into the Map<MetricName, ? extends Metric> by the client-id provided for the underlying
KafkaConsumer.
Starting with version 2.3, the ContainerProperties provides an idleBetweenPolls option to let the
main loop in the listener container sleep between KafkaConsumer.poll() calls. The actual sleep
interval is selected as the minimum of the provided option and the difference between the
max.poll.interval.ms consumer config and the current records batch processing time.
Committing Offsets
Several options are provided for committing offsets. If the enable.auto.commit consumer property is
true, Kafka auto-commits the offsets according to its configuration. If it is false, the containers
support several AckMode settings (described in the next list). The default AckMode is BATCH. Starting
with version 2.3, the framework sets enable.auto.commit to false unless explicitly set in the
configuration. Previously, the Kafka default (true) was used if the property was not set.
The consumer poll() method returns one or more ConsumerRecords. The MessageListener is called for
each record. The following list describes the action taken by the container for each AckMode (when
transactions are not being used):
• RECORD: Commit the offset when the listener returns after processing the record.
• BATCH: Commit the offset when all the records returned by the poll() have been processed.
• TIME: Commit the offset when all the records returned by the poll() have been processed, as
long as the ackTime since the last commit has been exceeded.
• COUNT: Commit the offset when all the records returned by the poll() have been processed, as
long as ackCount records have been received since the last commit.
• COUNT_TIME: Similar to TIME and COUNT, but the commit is performed if either condition is true.
• MANUAL: The message listener is responsible for calling acknowledge() on the Acknowledgment. After that,
the same semantics as BATCH are applied.
• MANUAL_IMMEDIATE: Commit the offset immediately when the Acknowledgment.acknowledge() method is called
by the listener.
When using transactions, the offset(s) are sent to the transaction and the semantics are equivalent
to RECORD or BATCH, depending on the listener type (record or batch).
Depending on the syncCommits container property, the commitSync() or commitAsync() method on the
consumer is used. syncCommits is true by default; also see setSyncCommitTimeout. See
setCommitCallback to get the results of asynchronous commits; the default callback is the
LoggingCommitCallback which logs errors (and successes at debug level).
Because the listener container has its own mechanism for committing offsets, it prefers the Kafka
ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG to be false. Starting with version 2.3, it unconditionally
sets it to false unless specifically set in the consumer factory or the container’s consumer property
overrides.
The Acknowledgment has the following method:
void acknowledge();
This method gives the listener control over when offsets are committed.
Starting with version 2.3, the Acknowledgment interface has two additional methods nack(long sleep)
and nack(int index, long sleep). The first one is used with a record listener, the second with a
batch listener. Calling the wrong method for your listener type will throw an IllegalStateException.
If you want to commit a partial batch using nack() when using transactions, set
the AckMode to MANUAL; invoking nack() will send the offsets of the successfully
processed records to the transaction.
nack() can only be called on the consumer thread that invokes your listener.
With a record listener, when nack() is called, any pending offsets are committed, the remaining
records from the last poll are discarded, and seeks are performed on their partitions so that the
failed record and unprocessed records are redelivered on the next poll(). The consumer can be
paused before redelivery, by setting the sleep argument. This is similar functionality to throwing an
exception when the container is configured with a DefaultErrorHandler.
When using a batch listener, you can specify the index within the batch where the failure occurred.
When nack() is called, offsets will be committed for records before the index and seeks are
performed on the partitions for the failed and discarded records so that they will be redelivered on
the next poll().
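For example, a sketch of a batch listener that commits the offsets of the records before a failure and asks
for the failed record (and the rest of the batch) to be redelivered after a one-second pause (process() is a
hypothetical method, and the listener id, topic, and factory name are arbitrary):

@KafkaListener(id = "batchNack", topics = "myTopic", containerFactory = "batchFactory")
public void listen(List<String> data, Acknowledgment ack) {
    for (int i = 0; i < data.size(); i++) {
        try {
            process(data.get(i));
        }
        catch (Exception e) {
            ack.nack(i, 1000); // commit offsets before index i; redeliver from i after 1 second
            return;
        }
    }
    ack.acknowledge();
}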
The consumer is paused during the sleep so that we continue to poll the broker to
keep the consumer alive. The actual sleep time, and its resolution, depends on the
container’s pollTimeout which defaults to 5 seconds. The minimum sleep time is
equal to the pollTimeout and all sleep times will be a multiple of it. For small sleep
times or, to increase its accuracy, consider reducing the container’s pollTimeout.
The listener containers implement SmartLifecycle, and autoStartup is true by default. The
containers are started in a late phase (Integer.MAX_VALUE - 100). Other components that implement
SmartLifecycle, to handle data from listeners, should be started in an earlier phase. The - 100
leaves room for later phases to enable components to be auto-started after the containers.
@KafkaListener Annotation
The @KafkaListener annotation is used to designate a bean method as a listener for a listener
container. The bean is wrapped in a MessagingMessageListenerAdapter configured with various
features, such as converters to convert the data, if necessary, to match the method parameters.
You can configure most attributes on the annotation with SpEL by using #{…} or property
placeholders (${…}). See the Javadoc for more information.
Record Listeners
The @KafkaListener annotation provides a mechanism for simple POJO listeners. The following
example shows how to use it:
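A minimal sketch (the listener id and topic are arbitrary):

public class Listener {

    @KafkaListener(id = "myId", topics = "myTopic")
    public void listen(String data) {
        System.out.println(data);
    }

}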
This mechanism requires an @EnableKafka annotation on one of your @Configuration classes and a
listener container factory, which is used to configure the underlying
ConcurrentMessageListenerContainer. By default, a bean with name kafkaListenerContainerFactory is
expected. The following example shows how to use ConcurrentMessageListenerContainer:
@Configuration
@EnableKafka
public class KafkaConfig {

    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>>
                        kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(3);
        factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }

    @Bean
    public ConsumerFactory<Integer, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, embeddedKafka.getBrokersAsString());
        ...
        return props;
    }

}
Notice that, to set container properties, you must use the getContainerProperties() method on the
factory. It is used as a template for the actual properties injected into the container.
Starting with version 2.1.1, you can now set the client.id property for consumers created by the
annotation. The clientIdPrefix is suffixed with -n, where n is an integer representing the container
number when using concurrency.
Starting with version 2.2, you can now override the container factory’s concurrency and autoStartup
properties by using properties on the annotation itself. The properties can be simple values,
property placeholders, or SpEL expressions. The following example shows how to do so:
@KafkaListener(id = "myListener", topics = "myTopic",
autoStartup = "${listen.auto.start:true}", concurrency =
"${listen.concurrency:3}")
public void listen(String data) {
...
}
You can also configure POJO listeners with explicit topics and partitions (and, optionally, their
initial offsets). The following example shows how to do so:
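A sketch of such a listener follows (the topic names, partition numbers, and initial offset are
illustrative):

@KafkaListener(id = "thing2", topicPartitions =
        { @TopicPartition(topic = "topic1", partitions = { "0", "1" }),
          @TopicPartition(topic = "topic2", partitions = "0",
             partitionOffsets = @PartitionOffset(partition = "1", initialOffset = "100"))
        })
public void listen(ConsumerRecord<?, ?> record) {
    ...
}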
You can specify each partition in the partitions or partitionOffsets attribute but not both.
As with most annotation properties, you can use SpEL expressions; for an example of how to
generate a large list of partitions, see Manually Assigning All Partitions.
Starting with version 2.5.5, you can apply an initial offset to all assigned partitions:
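For example (a sketch; the topic and partitions are illustrative):

@KafkaListener(id = "thing3", topicPartitions =
        { @TopicPartition(topic = "topic1", partitions = { "0", "1" },
             partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")) })
public void listen(ConsumerRecord<?, ?> record) {
    ...
}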
The * wildcard represents all partitions in the partitions attribute. There must only be one
@PartitionOffset with the wildcard in each @TopicPartition.
In addition, when the listener implements ConsumerSeekAware, onPartitionsAssigned is now called,
even when using manual assignment. This allows, for example, any arbitrary seek operations at
that time.
Starting with version 2.6.4, you can specify a comma-delimited list of partitions, or partition ranges:
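For example (a sketch matching the result described next):

@KafkaListener(id = "pp", autoStartup = "false",
        topicPartitions = @TopicPartition(topic = "topic1",
                partitions = "0-5, 7, 10-15"))
public void process(String in) {
    ...
}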
The range is inclusive; the example above will assign partitions 0, 1, 2, 3, 4, 5, 7, 10, 11, 12,
13, 14, 15.
Manual Acknowledgment
When using manual AckMode, you can also provide the listener with the Acknowledgment. The
following example also shows how to use a different container factory.
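A sketch (the kafkaManualAckListenerContainerFactory bean name is an assumption; that factory must
be configured with a manual AckMode):

@KafkaListener(id = "cat", topics = "myTopic",
        containerFactory = "kafkaManualAckListenerContainerFactory")
public void listen(String data, Acknowledgment ack) {
    ...
    ack.acknowledge();
}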
Finally, metadata about the record is available from message headers. You can use the following
header names to retrieve the headers of the message:
• KafkaHeaders.OFFSET
• KafkaHeaders.RECEIVED_KEY
• KafkaHeaders.RECEIVED_TOPIC
• KafkaHeaders.RECEIVED_PARTITION
• KafkaHeaders.RECEIVED_TIMESTAMP
• KafkaHeaders.TIMESTAMP_TYPE
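For example, a record listener can bind these headers with @Header parameters (a sketch; the
required = false on the key allows for null keys, as discussed next):

@KafkaListener(id = "qux", topicPattern = "myTopic1")
public void listen(@Payload String foo,
        @Header(name = KafkaHeaders.RECEIVED_KEY, required = false) Integer key,
        @Header(KafkaHeaders.RECEIVED_PARTITION) int partition,
        @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
        @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long ts) {
    ...
}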
Starting with version 2.5, the RECEIVED_KEY header is not present if the incoming record has a null
key; previously, the header was populated with a null value. This change makes the framework
consistent with spring-messaging conventions, where null-valued headers are not present.
Starting with version 2.5, instead of using discrete headers, you can receive record metadata in a
ConsumerRecordMetadata parameter.
@KafkaListener(...)
public void listen(String str, ConsumerRecordMetadata meta) {
...
}
This contains all the data from the ConsumerRecord except the key and value.
Batch Listeners
Starting with version 1.1, you can configure @KafkaListener methods to receive the entire batch of
consumer records received from the consumer poll. To configure the listener container factory to
create batch listeners, you can set the batchListener property. The following example shows how to
do so:
@Bean
public KafkaListenerContainerFactory<?> batchFactory() {
ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setBatchListener(true); // <<<<<<<<<<<<<<<<<<<<<<<<<
return factory;
}
Starting with version 2.8, you can override the factory’s batchListener property
using the batch property on the @KafkaListener annotation. This, together with the
changes to Container Error Handlers, allows the same factory to be used for both
record and batch listeners.
The topic, partition, offset, and so on are available in headers that parallel the payloads. The
following example shows how to use the headers:
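A sketch of a batch listener binding these headers (each header is a List whose positions
correspond to the payload positions):

@KafkaListener(id = "listBatch", topics = "myTopic", containerFactory = "batchFactory")
public void listen(List<String> list,
        @Header(KafkaHeaders.RECEIVED_KEY) List<Integer> keys,
        @Header(KafkaHeaders.RECEIVED_PARTITION) List<Integer> partitions,
        @Header(KafkaHeaders.RECEIVED_TOPIC) List<String> topics,
        @Header(KafkaHeaders.OFFSET) List<Long> offsets) {
    ...
}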
Alternatively, you can receive a List of Message<?> objects with each offset and other details in each
message, but it must be the only parameter (aside from optional Acknowledgment, when using
manual commits, and/or Consumer<?, ?> parameters) defined on the method. The following example
shows how to do so:
@KafkaListener(id = "listMsg", topics = "myTopic", containerFactory =
"batchFactory")
public void listen14(List<Message<?>> list) {
...
}
You can also receive a list of ConsumerRecord<?, ?> objects, but it must be the only parameter (aside
from optional Acknowledgment, when using manual commits and Consumer<?, ?> parameters) defined
on the method. The following example shows how to do so:
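For example (a sketch):

@KafkaListener(id = "listCRs", topics = "myTopic", containerFactory = "batchFactory")
public void listen(List<ConsumerRecord<Integer, String>> list) {
    ...
}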
Starting with version 2.2, the listener can receive the complete ConsumerRecords<?, ?> object
returned by the poll() method, letting the listener access additional methods, such as partitions()
(which returns the TopicPartition instances in the list) and records(TopicPartition) (which gets
selective records). Again, this must be the only parameter (aside from optional Acknowledgment,
when using manual commits or Consumer<?, ?> parameters) on the method. The following example
shows how to do so:
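For example (a sketch):

@KafkaListener(id = "pollResults", topics = "myTopic", containerFactory = "batchFactory")
public void pollResults(ConsumerRecords<?, ?> records) {
    ...
}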
Annotation Properties
Starting with version 2.0, the id property (if present) is used as the Kafka consumer group.id
property, overriding the configured property in the consumer factory, if present. You can also set
groupId explicitly or set idIsGroup to false to restore the previous behavior of using the consumer
factory group.id.
You can use property placeholders or SpEL expressions within most annotation properties, as the
following example shows:
@KafkaListener(topics = "${some.property}")
@KafkaListener(topics = "#{someBean.someProperty}",
groupId = "#{someBean.someProperty}.group")
Starting with version 2.1.2, the SpEL expressions support a special token: __listener. It is a pseudo
bean name that represents the current bean instance within which this annotation exists.
@Bean
public Listener listener1() {
return new Listener("topic1");
}
@Bean
public Listener listener2() {
return new Listener("topic2");
}
Given the beans in the previous example, we can then use the following:
@KafkaListener(topics = "#{__listener.topic}",
groupId = "#{__listener.topic}.group")
public void listen(...) {
...
}
In the unlikely event that you have an actual bean called __listener, you can change the
expression token by using the beanRef attribute. The following example shows how to do so:
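(A sketch; __x is just an example alternative token.)

@KafkaListener(beanRef = "__x", topics = "#{__x.topic}", groupId = "#{__x.topic}.group")
public void listen(String in) {
    ...
}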
Starting with version 2.2.4, you can specify Kafka consumer properties directly on the annotation;
these override any properties with the same name configured in the consumer factory. You
cannot specify the group.id and client.id properties this way; they will be ignored; use the groupId
and clientIdPrefix annotation properties for those.
The properties are specified as individual strings with the normal Java Properties file format:
foo:bar, foo=bar, or foo bar.
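For example (a sketch; the property values are illustrative):

@KafkaListener(topics = "myTopic", groupId = "group", properties = {
        "max.poll.interval.ms:60000",
        ConsumerConfig.MAX_POLL_RECORDS_CONFIG + "=100"
})
public void listen(String data) {
    ...
}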
The following is an example of the corresponding listeners for the example in Using
RoutingKafkaTemplate.
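A sketch consistent with that example (the topic names and the ByteArrayDeserializer override via
the properties attribute are assumptions based on two differently serialized topics):

@KafkaListener(id = "one", topics = "one")
public void listen1(String in) {
    System.out.println("1: " + in);
}

@KafkaListener(id = "two", topics = "two",
        properties = "value.deserializer:org.apache.kafka.common.serialization.ByteArrayDeserializer")
public void listen2(byte[] in) {
    System.out.println("2: " + new String(in));
}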
When running the same listener code in multiple containers, it may be useful to be able to
determine which container (identified by its group.id consumer property) that a record came from.
You can call KafkaUtils.getConsumerGroupId() on the listener thread to do this. Alternatively, you
can access the group id in a method parameter.
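For example (a sketch):

@KafkaListener(id = "withGroupId", topics = "someTopic")
public void listen(String data, @Header(KafkaHeaders.GROUP_ID) String groupId) {
    ...
}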
This is available in record listeners and batch listeners that receive a List<?> of
records. It is not available in a batch listener that receives a ConsumerRecords<?, ?>
argument. Use the KafkaUtils mechanism in that case.
Container Thread Naming
Listener containers currently use two task executors, one to invoke the consumer and another that
is used to invoke the listener when the kafka consumer property enable.auto.commit is false. You
can provide custom executors by setting the consumerExecutor and listenerExecutor properties of
the container’s ContainerProperties. When using pooled executors, be sure that enough threads are
available to handle the concurrency across all the containers in which they are used. When using
the ConcurrentMessageListenerContainer, a thread from each is used for each consumer (
concurrency).
If you do not provide a consumer executor, a SimpleAsyncTaskExecutor is used. This executor creates
threads with names similar to <beanName>-C-1 (consumer thread). For the
ConcurrentMessageListenerContainer, the <beanName> part of the thread name becomes <beanName>-m,
where m represents the consumer instance. The numeric suffix after C- increments each time the
container is started. So, with a bean name of container, threads in this container will be named
container-0-C-1, container-1-C-1, etc., after the container is started the first time; container-0-C-2,
container-1-C-2, etc., after a stop and subsequent start.
Starting with version 2.2, you can now use @KafkaListener as a meta annotation. The following
example shows how to do so:
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@KafkaListener
public @interface MyThreeConsumersListener {

    @AliasFor(annotation = KafkaListener.class, attribute = "id")
    String id();

    @AliasFor(annotation = KafkaListener.class, attribute = "topics")
    String[] topics();

}
You must alias at least one of topics, topicPattern, or topicPartitions (and, usually, id or groupId
unless you have specified a group.id in the consumer factory configuration). The following example
shows how to do so:
@MyThreeConsumersListener(id = "my.group", topics = "my.topic")
public void listen1(String in) {
...
}
@KafkaListener on a Class
When you use @KafkaListener at the class-level, you must specify @KafkaHandler at the method level.
When messages are delivered, the converted message payload type is used to determine which
method to call. The following example shows how to do so:
@KafkaListener(id = "multi", topics = "myTopic")
static class MultiListenerBean {

    @KafkaHandler
    public void listen(String foo) {
        ...
    }

    @KafkaHandler
    public void listen(Integer bar) {
        ...
    }

    @KafkaHandler(isDefault = true)
    public void listenDefault(Object object) {
        ...
    }

}
Starting with version 2.1.3, you can designate a @KafkaHandler method as the default method that is
invoked if there is no match on other methods. At most, one method can be so designated. When
using @KafkaHandler methods, the payload must have already been converted to the domain object
(so the match can be performed). Use a custom deserializer, the JsonDeserializer, or the
JsonMessageConverter with its TypePrecedence set to TYPE_ID. See Serialization, Deserialization, and
Message Conversion for more information.
Due to some limitations in the way Spring resolves method arguments, a default
@KafkaHandler cannot receive discrete headers; it must use the
ConsumerRecordMetadata as discussed in Consumer Record Metadata.
For example:
@KafkaHandler(isDefault = true)
public void listenDefault(Object object, @Header(KafkaHeaders.RECEIVED_TOPIC)
String topic) {
...
}
This won’t work if the object is a String; the topic parameter will also get a reference to object.
If you need metadata about the record in a default method, use this:
@KafkaHandler(isDefault = true)
void listen(Object in, @Header(KafkaHeaders.RECORD_METADATA)
ConsumerRecordMetadata meta) {
String topic = meta.topic();
...
}
Starting with version 2.7.2, you can now programmatically modify annotation attributes before the
container is created. To do so, add one or more
KafkaListenerAnnotationBeanPostProcessor.AnnotationEnhancer to the application context.
AnnotationEnhancer is a BiFunction<Map<String, Object>, AnnotatedElement, Map<String, Object>>
and must return a map of attributes. The attribute values can contain SpEL and/or property
placeholders; the enhancer is called before any resolution is performed. If more than one enhancer
is present, and they implement Ordered, they will be invoked in order.
An example follows:
@Bean
public static AnnotationEnhancer groupIdEnhancer() {
return (attrs, element) -> {
attrs.put("groupId", attrs.get("id") + "." + (element instanceof Class
? ((Class<?>) element).getSimpleName()
: ((Method) element).getDeclaringClass().getSimpleName()
+ "." + ((Method) element).getName()));
return attrs;
};
}
The listener containers created for @KafkaListener annotations are not beans in the application
context. Instead, they are registered with an infrastructure bean of type
KafkaListenerEndpointRegistry. This bean is automatically declared by the framework and manages
the containers' lifecycles; it will auto-start any containers that have autoStartup set to true. All
containers created by all container factories must be in the same phase. See Listener Container Auto
Startup for more information. You can manage the lifecycle programmatically by using the registry.
Starting or stopping the registry will start or stop all the registered containers. Alternatively, you
can get a reference to an individual container by using its id attribute. You can set autoStartup on
the annotation, which overrides the default setting configured into the container factory. You can
get a reference to the bean from the application context (for example, by auto-wiring it) to manage
its registered containers. The following examples show how to do so:
@Autowired
private KafkaListenerEndpointRegistry registry;
...
this.registry.getListenerContainer("myContainer").start();
...
The registry only maintains the life cycle of containers it manages; containers declared as beans are
not managed by the registry and can be obtained from the application context. A collection of
managed containers can be obtained by calling the registry’s getListenerContainers() method.
Version 2.2.5 added a convenience method getAllListenerContainers(), which returns a collection
of all containers, including those managed by the registry and those declared as beans. The
collection returned will include any prototype beans that have been initialized, but it will not
initialize any lazy bean declarations.
Endpoints registered after the application context has been refreshed will start
immediately, regardless of their autoStartup property, to comply with the
SmartLifecycle contract, where autoStartup is only considered during application
context initialization. An example of late registration is a bean with a
@KafkaListener in prototype scope where an instance is created after the context is
initialized. Starting with version 2.8.7, you can set the registry’s
alwaysStartAfterRefresh property to false and then the container’s autoStartup
property will define whether or not the container is started.
Starting with version 2.2, it is now easier to add a Validator to validate @KafkaListener @Payload
arguments. Previously, you had to configure a custom DefaultMessageHandlerMethodFactory and add
it to the registrar. Now, you can add the validator to the registrar itself. The following code shows
how to do so:
@Configuration
@EnableKafka
public class Config implements KafkaListenerConfigurer {

    ...

    @Override
    public void configureKafkaListeners(KafkaListenerEndpointRegistrar registrar) {
        registrar.setValidator(new MyValidator());
    }

}
When you use Spring Boot with the validation starter, a LocalValidatorFactoryBean
is auto-configured, as the following example shows:
@Configuration
@EnableKafka
public class Config implements KafkaListenerConfigurer {

    @Autowired
    private LocalValidatorFactoryBean validator;
    ...

    @Override
    public void configureKafkaListeners(KafkaListenerEndpointRegistrar registrar) {
        registrar.setValidator(this.validator);
    }

}
public static class ValidatedClass {

    @Max(10)
    private int bar;

    // getters and setters...

}

@Bean
public KafkaListenerErrorHandler validationErrorHandler() {
    return (m, e) -> {
        ...
    };
}
Starting with version 2.5.11, validation now works on payloads for @KafkaHandler methods in a
class-level listener. See @KafkaListener on a Class.
Rebalancing Listeners
public interface ConsumerAwareRebalanceListener extends ConsumerRebalanceListener
{
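    // The method bodies are omitted in the extracted text; the interface defines
    // consumer-aware variants of the standard callbacks as default no-op methods
    // with these signatures:
    default void onPartitionsRevokedBeforeCommit(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
    }

    default void onPartitionsRevokedAfterCommit(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
    }

    default void onPartitionsLost(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
    }

    default void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
    }

}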
Notice that there are two callbacks when partitions are revoked. The first is called immediately. The
second is called after any pending offsets are committed. This is useful if you wish to maintain
offsets in some external repository, as the following example shows:
containerProperties.setConsumerRebalanceListener(new
ConsumerAwareRebalanceListener() {
@Override
public void onPartitionsRevokedBeforeCommit(Consumer<?, ?> consumer,
Collection<TopicPartition> partitions) {
// acknowledge any pending Acknowledgments (if using manual acks)
}
@Override
public void onPartitionsRevokedAfterCommit(Consumer<?, ?> consumer,
Collection<TopicPartition> partitions) {
// ...
store(consumer.position(partition));
// ...
}
    @Override
    public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
// ...
consumer.seek(partition, offsetTracker.getOffset() + 1);
// ...
}
});
Starting with version 2.4, a new method onPartitionsLost() has been added
(similar to a method with the same name in ConsumerRebalanceListener). The default
implementation on ConsumerRebalanceListener simply calls onPartitionsRevoked. The
default implementation on ConsumerAwareRebalanceListener does nothing. When
supplying the listener container with a custom listener (of either type), it is
important that your implementation not call onPartitionsRevoked from
onPartitionsLost. If you implement ConsumerRebalanceListener, you should override
the default method. This is because the listener container will call its own
onPartitionsRevoked from its implementation of onPartitionsLost after calling the
method on your implementation. If your implementation delegates to the default
behavior, onPartitionsRevoked will be called twice each time the Consumer calls that
method on the container’s listener.
Starting with version 2.0, if you also annotate a @KafkaListener with a @SendTo annotation and the
method invocation returns a result, the result is forwarded to the topic specified by the @SendTo.
When the @SendTo value is a runtime SpEL expression, it is evaluated against a root object that
includes:
◦ request: The inbound ConsumerRecord (or ConsumerRecords object for a batch listener)
Starting with versions 2.1.11 and 2.2.1, property placeholders are resolved within @SendTo values.
The result of the expression evaluation must be a String that represents the topic name. The
following examples show the various ways to use @SendTo:
@KafkaListener(topics = "annotated21")
@SendTo("!{request.value()}") // runtime SpEL
public String replyingListener(String in) {
...
}
@KafkaListener(topics = "${some.property:annotated22}")
@SendTo("#{myBean.replyTopic}") // config time SpEL
public Collection<String> replyingBatchListener(List<String> in) {
...
}
@KafkaHandler
public String foo(String in) {
...
}
@KafkaHandler
@SendTo("!{'annotated25reply2'}")
public String bar(@Payload(required = false) KafkaNull nul,
@Header(KafkaHeaders.RECEIVED_KEY) int key) {
...
}
In order to support @SendTo, the listener container factory must be provided with a
KafkaTemplate (in its replyTemplate property), which is used to send the reply. This
should be a KafkaTemplate and not a ReplyingKafkaTemplate which is used on the
client-side for request/reply processing. When using Spring Boot, boot will auto-
configure the template into the factory; when configuring your own factory, it
must be set as shown in the examples below.
Starting with version 2.2, you can add a ReplyHeadersConfigurer to the listener container factory.
This is consulted to determine which headers you want to set in the reply message. The following
example shows how to add a ReplyHeadersConfigurer:
@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, String>
kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(cf());
factory.setReplyTemplate(template());
factory.setReplyHeadersConfigurer((k, v) -> k.equals("cat"));
return factory;
}
You can also add more headers if you wish. The following example shows how to do so:
@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, String>
kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(cf());
factory.setReplyTemplate(template());
factory.setReplyHeadersConfigurer(new ReplyHeadersConfigurer() {
@Override
public boolean shouldCopy(String headerName, Object headerValue) {
return false;
}
@Override
public Map<String, Object> additionalHeaders() {
return Collections.singletonMap("qux", "fiz");
}
});
return factory;
}
When you use @SendTo, you must configure the ConcurrentKafkaListenerContainerFactory with a
KafkaTemplate in its replyTemplate property to perform the send.
Unless you use request/reply semantics, only the simple send(topic, value) method
is used, so you may wish to create a subclass to generate the partition or key. The
following example shows how to do so:
@Bean
public KafkaTemplate<String, String> myReplyingTemplate() {
    return new KafkaTemplate<String, String>(producerFactory()) {

        @Override
        public ListenableFuture<SendResult<String, String>> send(String topic, String data) {
            return super.send(topic, partitionForData(data), keyForData(data), data);
        }

        ...

    };
}
When using request/reply semantics, the target partition can be requested by the sender.
You can annotate a @KafkaListener method with @SendTo even if no result is
returned. This is to allow the configuration of an errorHandler that can forward
information about a failed message delivery to some topic. The following example
shows how to do so:
@KafkaListener(id = "voidListenerWithReplyingErrorHandler", topics = "someTopic",
        errorHandler = "voidSendToErrorHandler")
@SendTo("failures")
public void voidListenerWithReplyingErrorHandler(String in) {
    throw new RuntimeException("fail");
}

@Bean
public KafkaListenerErrorHandler voidSendToErrorHandler() {
    return (m, e) -> {
        return ... // some information about the failure and input data
    };
}
Filtering Messages
In certain scenarios, such as rebalancing, a message that has already been processed may be
redelivered. The framework cannot know whether such a message has been processed or not. That
is an application-level function. This is known as the Idempotent Receiver pattern and Spring
Integration provides an implementation of it.
The Spring for Apache Kafka project also provides some assistance by means of the
FilteringMessageListenerAdapter class, which can wrap your MessageListener. This class takes an
implementation of RecordFilterStrategy in which you implement the filter method to signal that a
message is a duplicate and should be discarded. This has an additional property called
ackDiscarded, which indicates whether the adapter should acknowledge the discarded record. It is
false by default.
When you use @KafkaListener, set the RecordFilterStrategy (and optionally ackDiscarded) on the
container factory so that the listener is wrapped in the appropriate filtering adapter.
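A sketch of configuring a filter on the factory (the filter logic and bean name here are
illustrative):

@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, String> filteringFactory(
        ConsumerFactory<Integer, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // return true from the strategy to discard (filter out) the record
    factory.setRecordFilterStrategy(record -> record.value().contains("ignored"));
    factory.setAckDiscarded(true);
    return factory;
}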
In addition, a FilteringBatchMessageListenerAdapter is provided, for when you use a batch message
listener.
Starting with version 2.8.4, you can override the listener container factory’s default
RecordFilterStrategy by using the filter property on the listener annotations.
Retrying Deliveries
See Non-Blocking Retries and the container error handlers (such as the DefaultErrorHandler) for
retrying failed deliveries.
Starting @KafkaListener s in Sequence
A common use case is to start a listener after another listener has consumed all the records in a
topic. For example, you may want to load the contents of one or more compacted topics into
memory before processing records from other topics. Starting with version 2.7.3, a new component
ContainerGroupSequencer has been introduced. It uses the @KafkaListener containerGroup property to
group containers together and start the containers in the next group, when all the containers in the
current group have gone idle.
@KafkaListener(id = "listen1", topics = "topic1", containerGroup = "g1",
concurrency = "2")
public void listen1(String in) {
}
@Bean
ContainerGroupSequencer sequencer(KafkaListenerEndpointRegistry registry) {
return new ContainerGroupSequencer(registry, 5000, "g1", "g2");
}
During application context initialization, the sequencer sets the autoStartup property of all the
containers in the provided groups to false. It also sets the idleEventInterval for any containers
(that do not already have one set) to the supplied value (5000ms in this case). Then, when the
sequencer is started by the application context, the containers in the first group are started. As
ListenerContainerIdleEvent s are received, each individual child container in each container is
stopped. When all child containers in a ConcurrentMessageListenerContainer are stopped, the parent
container is stopped. When all containers in a group have been stopped, the containers in the next
group are started. There is no limit to the number of groups or containers in a group.
By default, the containers in the final group (g2 above) are not stopped when they go idle. To modify
that behavior, set stopLastGroupWhenIdle to true on the sequencer.
Using KafkaTemplate to Receive
Starting with version 2.8, the template has four receive() methods:
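Their signatures are, in outline (a sketch of the KafkaOperations API):

ConsumerRecord<K, V> receive(String topic, int partition, long offset);

ConsumerRecord<K, V> receive(String topic, int partition, long offset, Duration pollTimeout);

ConsumerRecords<K, V> receive(Collection<TopicPartitionOffset> requested);

ConsumerRecords<K, V> receive(Collection<TopicPartitionOffset> requested, Duration pollTimeout);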
As you can see, you need to know the partition and offset of the record(s) you need to retrieve; a
new Consumer is created (and closed) for each operation.
With the last two methods, each record is retrieved individually and the results assembled into a
ConsumerRecords object. When creating the TopicPartitionOffset s for the request, only positive,
absolute offsets are supported.
Listener container property tables (Property / Default / Description) appear here; entries include:

Property Default Description
containerPaused n/a True if pause has been requested and the consumer has actually paused.
containerPaused n/a True if pause has been requested and all child containers' consumers have actually paused.
There are several techniques that can be used to create listener containers at runtime. This section
explores some of those techniques.
MessageListener Implementations
If you implement your own listener directly, you can simply use the container factory to create a
raw container for that listener:
Example 8. User Listener
Java
public class MyListener implements MessageListener<String, String> {

    @Override
    public void onMessage(ConsumerRecord<String, String> data) {
        // ...
    }

}
Prototype Beans
Containers for methods annotated with @KafkaListener can be created dynamically by declaring the
bean as prototype:
Example 9. Prototype
Java
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
MyPojo pojo(String id, String topic) {
return new MyPojo(id, topic);
}
Kotlin
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
fun pojo(id: String?, topic: String?): MyPojo {
return MyPojo(id, topic)
}
Listeners must have unique IDs. Starting with version 2.8.9, the
KafkaListenerEndpointRegistry has a new method
unregisterListenerContainer(String id) to allow you to re-use an id. Unregistering
a container does not stop() the container, you must do that yourself.
The following Spring application events are published by listener containers and their consumers:
The ListenerContainerIdleEvent has the following properties:
• container: The listener container or the parent listener container, if the source container is a
  child.
• idleTime: The time the container had been idle when the event was published.
• topicPartitions: The topics and partitions that the container was assigned at the time the event
was generated.
• consumer: A reference to the Kafka Consumer object. For example, if the consumer’s pause()
method was previously called, it can resume() when the event is received.
• paused: Whether the container is currently paused. See Pausing and Resuming Listener
Containers for more information.
The ListenerContainerNoLongerIdleEvent has the same properties, except idleTime and paused.
• container: The listener container or the parent listener container, if the source container is a
child.
• idleTime: The time partition consumption had been idle when the event was published.
• consumer: A reference to the Kafka Consumer object. For example, if the consumer’s pause()
method was previously called, it can resume() when the event is received.
• paused: Whether that partition consumption is currently paused for that consumer. See Pausing
and Resuming Listener Containers for more information.
The NonResponsiveConsumerEvent has the following properties:
• container: The listener container or the parent listener container, if the source container is a
  child.
• timeSinceLastPoll: The time just before the container last called poll().
• topicPartitions: The topics and partitions that the container was assigned at the time the event
was generated.
• consumer: A reference to the Kafka Consumer object. For example, if the consumer’s pause()
method was previously called, it can resume() when the event is received.
• paused: Whether the container is currently paused. See Pausing and Resuming Listener
Containers for more information.
• container: The listener container or the parent listener container, if the source container is a
child.
• container: The listener container or the parent listener container, if the source container is a
child.
• container: The listener container or the parent listener container, if the source container is a
child.
All containers (whether a child or a parent) publish ContainerStoppedEvent. For a parent container,
the source and container properties are identical.
• reason
◦ FENCED - the transactional producer was fenced and the stopContainerWhenFenced container
property is true.
◦ NO_OFFSET - there is no offset for a partition and the auto.offset.reset policy is none.
You can use this event to restart the container after such a condition:
if (event.getReason().equals(Reason.FENCED)) {
    event.getSource(MessageListenerContainer.class).start();
}
While efficient, one problem with asynchronous consumers is detecting when they are idle. You
might want to take some action if no messages arrive for some period of time.
You can configure the listener container to publish a ListenerContainerIdleEvent when some time
passes with no message delivery. While the container is idle, an event is published every
idleEventInterval milliseconds.
To configure this feature, set the idleEventInterval on the container. The following example shows
how to do so:
@Bean
public KafkaMessageListenerContainer<String, String> messageListenerContainer(
        ConsumerFactory<String, String> consumerFactory) {
    ContainerProperties containerProps = new ContainerProperties("topic1", "topic2");
    ...
    containerProps.setIdleEventInterval(60000L);
    ...
    KafkaMessageListenerContainer<String, String> container =
            new KafkaMessageListenerContainer<>(...);
    return container;
}
The following example shows how to set the idleEventInterval for a @KafkaListener:
@Bean
public ConcurrentKafkaListenerContainerFactory kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
...
factory.getContainerProperties().setIdleEventInterval(60000L);
...
return factory;
}
In each of these cases, an event is published once per minute while the container is idle.
If, for some reason, the consumer poll() method does not exit, no messages are received and idle
events cannot be generated (this was a problem with early versions of the kafka-clients when the
broker wasn’t reachable). In this case, the container publishes a NonResponsiveConsumerEvent if a poll
does not return within 3x the pollTimeout property. By default, this check is performed once every
30 seconds in each container. You can modify this behavior by setting the monitorInterval (default
30 seconds) and noPollThreshold (default 3.0) properties in the ContainerProperties when
configuring the listener container. The noPollThreshold should be greater than 1.0 to avoid getting
spurious events due to a race condition. Receiving such an event lets you stop the containers, thus
waking the consumer so that it can stop.
Starting with version 2.6.2, if a container has published a ListenerContainerIdleEvent, it will publish
a ListenerContainerNoLongerIdleEvent when a record is subsequently received.
Event Consumption
The next example combines @KafkaListener and @EventListener into a single class. You should
understand that the application listener gets events for all containers, so you may need to check the
listener ID if you want to take specific action based on which container is idle. You can also use the
@EventListener condition for this purpose.
The event is normally published on the consumer thread, so it is safe to interact with the Consumer
object.
public class Listener {

    @KafkaListener(id = "qux", topics = "annotated")
    public void listen(String data, Acknowledgment ack) {
        ...
    }

    @EventListener(condition = "event.listenerId.startsWith('qux-')")
    public void eventHandler(ListenerContainerIdleEvent event) {
        ...
    }

}
Event listeners see events for all containers. Consequently, in the preceding
example, we narrow the events received based on the listener ID. Since containers
created for the @KafkaListener support concurrency, the actual containers are
named id-n where the n is a unique value for each instance to support the
concurrency. That is why we use startsWith in the condition.
If you wish to use the idle event to stop the listener container, you should not call
container.stop() on the thread that calls the listener. Doing so causes delays and
unnecessary log messages. Instead, you should hand off the event to a different
thread that can then stop the container. Also, you should not stop() the container
instance if it is a child container; you should stop the concurrent container instead.
Note that you can obtain the current positions when idle is detected by implementing
ConsumerSeekAware in your listener. See onIdleContainer() in Seeking to a Specific Offset.
There are several ways to set the initial offset for a partition.
When manually assigning partitions, you can set the initial offset (if desired) in the configured
TopicPartitionOffset arguments (see Message Listener Containers). You can also seek to a specific
offset at any time.
When you use group management where the broker assigns partitions:
• For a new group.id, the initial offset is determined by the auto.offset.reset consumer property
(earliest or latest).
• For an existing group ID, the initial offset is the current offset for that group ID. You can,
however, seek to a specific offset during initialization (or at any time thereafter).
4.1.9. Seeking to a Specific Offset
In order to seek, your listener must implement ConsumerSeekAware, which has the following
methods:
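In outline, the interface defines these methods (signatures match their use in the examples below):

void registerSeekCallback(ConsumerSeekCallback callback);

void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback);

void onPartitionsRevoked(Collection<TopicPartition> partitions);

void onIdleContainer(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback);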
The registerSeekCallback is called when the container is started and whenever partitions are
assigned. You should use this callback when seeking at some arbitrary time after initialization. You
should save a reference to the callback. If you use the same listener in multiple containers (or in a
ConcurrentMessageListenerContainer), you should store the callback in a ThreadLocal or some other
structure keyed by the listener Thread.
When using group management, onPartitionsAssigned is called when partitions are assigned. You
can use this method, for example, for setting initial offsets for the partitions, by calling the callback.
You can also use this method to associate this thread’s callback with the assigned partitions (see the
example below). You must use the callback argument, not the one passed into registerSeekCallback.
Starting with version 2.5.5, this method is called, even when using manual partition assignment.
onPartitionsRevoked is called when the container is stopped or Kafka revokes assignments. You
should discard this thread’s callback and remove any associations to the revoked partitions.
The ConsumerSeekCallback passed to these methods has methods including:

void seek(String topic, int partition, long offset);

void seekRelative(String topic, int partition, long offset, boolean toCurrent);

For seekRelative:
• offset negative and toCurrent false - seek relative to the end of the partition.
• offset positive and toCurrent false - seek relative to the beginning of the partition.
• offset negative and toCurrent true - seek relative to the current position (rewind).
• offset positive and toCurrent true - seek relative to the current position (fast forward).
When seeking to the same timestamp for multiple partitions in the onIdleContainer
or onPartitionsAssigned methods, the second method is preferred because it is
more efficient to find the offsets for the timestamps in a single call to the
consumer’s offsetsForTimes method. When called from other locations, the
container will gather all timestamp seek requests and make one call to
offsetsForTimes.
You can also perform seek operations from onIdleContainer() when an idle container is detected.
See Detecting Idle and Non-Responsive Consumers for how to enable idle container detection.
The seekToBeginning method that accepts a collection is useful, for example, when
processing a compacted topic and you wish to seek to the beginning every time the
application is started:
public class MyListener implements ConsumerSeekAware {
...
@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
ConsumerSeekCallback callback) {
callback.seekToBeginning(assignments.keySet());
}
To arbitrarily seek at runtime, use the callback reference from the registerSeekCallback for the
appropriate thread.
Here is a trivial Spring Boot application that demonstrates how to use the callback; it sends 10
records to the topic; hitting <Enter> in the console causes all partitions to seek to the beginning.
@SpringBootApplication
public class SeekExampleApplication {
@Bean
public ApplicationRunner runner(Listener listener, KafkaTemplate<String,
String> template) {
return args -> {
IntStream.range(0, 10).forEach(i -> template.send(
new ProducerRecord<>("seekExample", i % 3, "foo", "bar")));
while (true) {
System.in.read();
listener.seekToStart();
}
};
}
@Bean
public NewTopic topic() {
return new NewTopic("seekExample", 3, (short) 1);
}
@Component
class Listener implements ConsumerSeekAware {
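        // Fields assumed by this example (not shown in the extracted text), inferred
        // from how they are used below:
        private final ThreadLocal<ConsumerSeekCallback> callbackForThread = new ThreadLocal<>();

        private final Map<TopicPartition, ConsumerSeekCallback> callbacks = new ConcurrentHashMap<>();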
@Override
public void registerSeekCallback(ConsumerSeekCallback callback) {
this.callbackForThread.set(callback);
}
@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
ConsumerSeekCallback callback) {
assignments.keySet().forEach(tp -> this.callbacks.put(tp, this
.callbackForThread.get()));
}
@Override
public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
partitions.forEach(tp -> this.callbacks.remove(tp));
this.callbackForThread.remove();
}
@Override
public void onIdleContainer(Map<TopicPartition, Long> assignments,
ConsumerSeekCallback callback) {
}
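        // seekToStart() as called by the runner above (an assumed implementation):
        public void seekToStart() {
            this.callbacks.forEach((tp, callback) ->
                    callback.seekToBeginning(tp.topic(), tp.partition()));
        }

    }

}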
To make things simpler, version 2.3 added the AbstractConsumerSeekAware class, which keeps track of
which callback is to be used for a topic/partition. The following example shows how to seek to the
last record processed, in each partition, each time the container goes idle. It also has methods that
allow arbitrary external calls to rewind partitions by one record.
public class SeekToLastOnIdleListener extends AbstractConsumerSeekAware {
@Override
public void onIdleContainer(Map<org.apache.kafka.common.TopicPartition, Long>
assignments,
ConsumerSeekCallback callback) {
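        // Body omitted in the extracted text; a sketch that rewinds each assigned
        // partition by one record so the last processed record is redelivered:
        assignments.keySet().forEach(tp ->
                callback.seekRelative(tp.topic(), tp.partition(), -1, true));
    }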
/**
* Rewind all partitions one record.
*/
public void rewindAllOneRecord() {
getSeekCallbacks()
.forEach((tp, callback) ->
callback.seekRelative(tp.topic(), tp.partition(), -1, true));
}
/**
* Rewind one partition one record.
*/
public void rewindOnePartitionOneRecord(String topic, int partition) {
getSeekCallbackFor(new org.apache.kafka.common.TopicPartition(topic,
partition))
.seekRelative(topic, partition, -1, true);
}
• seekToTimestamp(long time) - seeks all assigned partitions to the offset represented by that
timestamp.
Example:
public class MyListener extends AbstractConsumerSeekAware {

    @KafkaListener(...)
    void listen(...) {
        ...
    }

}

MyListener listener;
...
void someMethod() {
    this.listener.seekToTimestamp(System.currentTimeMillis() - 60_000);
}
Starting with version 2.2, you can use the same factory to create any
ConcurrentMessageListenerContainer. This might be useful if you want to create several containers
with similar properties or you wish to use some externally configured factory, such as the one
provided by Spring Boot auto-configuration. Once the container is created, you can further modify
its properties, many of which are set by using container.getContainerProperties(). The following
example configures a ConcurrentMessageListenerContainer:
@Bean
public ConcurrentMessageListenerContainer<String, String> myContainer(
        ConcurrentKafkaListenerContainerFactory<String, String> factory) {
    ConcurrentMessageListenerContainer<String, String> container =
            factory.createContainer("topic1", "topic2");
    // set the listener and further customize container.getContainerProperties() as needed
    ...
    return container;
}
Containers created this way are not added to the endpoint registry. They should be
created as @Bean definitions so that they are registered with the application
context.
Starting with version 2.3.4, you can add a ContainerCustomizer to the factory to further configure
each container after it has been created and configured.
@Bean
public KafkaListenerContainerFactory<?> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
...
factory.setContainerCustomizer(container -> { /* customize the container */ }
);
return factory;
}
When using a concurrent message listener container, a single listener instance is invoked on all
consumer threads. Listeners, therefore, need to be thread-safe, and it is preferable to use stateless
listeners. If it is not possible to make your listener thread-safe or adding synchronization would
significantly reduce the benefit of adding concurrency, you can use one of a few techniques:
• Use n containers with concurrency=1 with a prototype scoped MessageListener bean so that each
  container gets its own instance (this is not possible when using @KafkaListener).
• Keep the state in ThreadLocal<?> instances.
• Have the singleton listener delegate to a bean that is declared in SimpleThreadScope (or a similar
  scope).
To facilitate cleaning up thread state (for the second and third items in the preceding list), starting
with version 2.2, the listener container publishes a ConsumerStoppedEvent when each thread exits.
You can consume these events with an ApplicationListener or @EventListener method to remove
ThreadLocal<?> instances or remove() thread-scoped beans from the scope. Note that
SimpleThreadScope does not destroy beans that have a destruction interface (such as DisposableBean),
so you should destroy() the instance yourself.
4.1.12. Monitoring
Monitoring Listener Performance
Starting with version 2.3, the listener container will automatically create and update Micrometer
Timer s for the listener, if Micrometer is detected on the class path, and a single MeterRegistry is
present in the application context. The timers can be disabled by setting the ContainerProperty
micrometerEnabled to false.
Two timers are maintained - one for successful calls to the listener and one for failures.
The timers are named spring.kafka.listener and have the following tags:
You can add additional tags using the ContainerProperties micrometerTags property.
With the concurrent container, timers are created for each thread and the name tag
is suffixed with -n where n is 0 to concurrency-1.
Starting with version 2.5, the template will automatically create and update Micrometer Timer s for
send operations, if Micrometer is detected on the class path, and a single MeterRegistry is present in
the application context. The timers can be disabled by setting the template’s micrometerEnabled
property to false.
Two timers are maintained - one for successful calls to the listener and one for failures.
The timers are named spring.kafka.template and have the following tags:
You can add additional tags using the template’s micrometerTags property.
Starting with version 2.5, the framework provides Factory Listeners to manage a Micrometer
KafkaClientMetrics instance whenever producers and consumers are created and closed.
To enable this feature, simply add the listeners to your producer and consumer factories:
@Bean
public ConsumerFactory<String, String> myConsumerFactory() {
Map<String, Object> configs = consumerConfigs();
...
DefaultKafkaConsumerFactory<String, String> cf = new
DefaultKafkaConsumerFactory<>(configs);
...
cf.addListener(new MicrometerConsumerListener<String, String>(meterRegistry(),
Collections.singletonList(new ImmutableTag("customTag",
"customTagValue"))));
...
return cf;
}
@Bean
public ProducerFactory<String, String> myProducerFactory() {
    Map<String, Object> configs = producerConfigs();
    configs.put(ProducerConfig.CLIENT_ID_CONFIG, "myClientId");
    ...
    DefaultKafkaProducerFactory<String, String> pf =
            new DefaultKafkaProducerFactory<>(configs);
    ...
    pf.addListener(new MicrometerProducerListener<String, String>(meterRegistry(),
            Collections.singletonList(new ImmutableTag("customTag", "customTagValue"))));
    ...
    return pf;
}
The consumer/producer id passed to the listener is added to the meter’s tags with tag name
spring.id.
4.1.13. Transactions
This section describes how Spring for Apache Kafka supports transactions.
Overview
The 0.11.0.0 client library added support for transactions. Spring for Apache Kafka adds support in
the following ways:
• KafkaTransactionManager: Used with normal Spring transaction support
• Transactional KafkaMessageListenerContainer
• Local transactions with KafkaTemplate
• Transaction synchronization with other transaction managers
While transactions are supported with batch listeners, by default, zombie fencing
is not supported because a batch may contain records from multiple topics or
partitions. However, starting with version 2.3.2, zombie fencing is supported if you
set the container property subBatchPerPartition to true. In that case, the batch
listener is invoked once per partition received from the last poll, as if each poll
only returned records for a single partition. This is true by default since version
2.5 when transactions are enabled with EOSMode.ALPHA; set it to false if you are
using transactions but are not concerned about zombie fencing. Also see Exactly
Once Semantics.
Starting with version 2.5.8, you can now configure the maxAge property on the
producer factory. This is useful when using transactional producers that might lie
idle for longer than the broker’s transactional.id.expiration.ms. With current
kafka-clients, this can cause a ProducerFencedException without a rebalance. By
setting the maxAge to less than transactional.id.expiration.ms, the factory will
refresh the producer if it is past its max age.
Using KafkaTransactionManager
You can use the KafkaTransactionManager with normal Spring transaction support (@Transactional,
TransactionTemplate, and others). If a transaction is active, any KafkaTemplate operations performed
within the scope of the transaction use the transaction’s Producer. The manager commits or rolls
back the transaction, depending on success or failure. You must configure the KafkaTemplate to use
the same ProducerFactory as the transaction manager.
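A minimal sketch of declaring the transaction manager (the bean and parameter names are
illustrative; the producer factory is assumed to have a transactionIdPrefix configured):

@Bean
public KafkaTransactionManager<String, String> kafkaTransactionManager(
        ProducerFactory<String, String> producerFactory) {
    return new KafkaTransactionManager<>(producerFactory);
}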
Transaction Synchronization
This section refers to producer-only transactions (transactions not started by a listener container);
see Using Consumer-Initiated Transactions for information about chaining transactions when the
container starts the transaction.
If you want to send records to kafka and perform some database updates, you can use normal
Spring transaction management with, say, a DataSourceTransactionManager.
@Transactional
public void process(List<Thing> things) {
things.forEach(thing -> this.kafkaTemplate.send("topic", thing));
updateDb(things);
}
The interceptor for the @Transactional annotation starts the transaction and the KafkaTemplate will
synchronize a transaction with that transaction manager; each send will participate in that
transaction. When the method exits, the database transaction will commit followed by the Kafka
transaction. If you wish the commits to be performed in the reverse order (Kafka first), use nested
@Transactional methods, with the outer method configured to use the DataSourceTransactionManager,
and the inner method configured to use the KafkaTransactionManager.
See Examples of Kafka Transactions with Other Transaction Managers for examples of an
application that synchronizes JDBC and Kafka transactions in Kafka-first or DB-first configurations.
Starting with versions 2.5.17, 2.6.12, 2.7.9 and 2.8.0, if the commit fails on the
synchronized transaction (after the primary transaction has committed), the
exception will be thrown to the caller. Previously, this was silently ignored (logged
at debug). Applications should take remedial action, if necessary, to compensate
for the committed primary transaction.
The ChainedKafkaTransactionManager is now deprecated, since version 2.7; see the javadocs for its
super class ChainedTransactionManager for more information. Instead, use a KafkaTransactionManager
in the container to start the Kafka transaction and annotate the listener method with
@Transactional to start the other transaction.
See Examples of Kafka Transactions with Other Transaction Managers for an example application
that chains JDBC and Kafka transactions.
You can use the KafkaTemplate to execute a series of operations within a local transaction. The
following example shows how to do so:
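A sketch using executeInTransaction (the payloads are illustrative):

boolean result = template.executeInTransaction(t -> {
    t.sendDefault("thing1", "thing2");
    t.sendDefault("cat", "hat");
    return true;
});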
The argument in the callback is the template itself (this). If the callback exits normally, the
transaction is committed. If an exception is thrown, the transaction is rolled back.
transactionIdPrefix
As mentioned in the overview, the producer factory is configured with this property to build the
producer transactional.id property. There is a dichotomy when specifying this property in that,
when running multiple instances of the application with EOSMode.ALPHA, it must be the same on all
instances to satisfy fencing zombies (also mentioned in the overview) when producing records on a
listener container thread. However, when producing records using transactions that are not started
by a listener container, the prefix has to be different on each instance. Version 2.3 makes this
simpler to configure, especially in a Spring Boot application. In previous versions, you had to create
two producer factories and KafkaTemplate s - one for producing records on a listener container
thread and one for stand-alone transactions started by kafkaTemplate.executeInTransaction() or by
a transaction interceptor on a @Transactional method.
Now, you can override the factory’s transactionalIdPrefix on the KafkaTemplate and the
KafkaTransactionManager.
When using a transaction manager and template for a listener container, you would normally leave
this to default to the producer factory’s property. This value should be the same for all application
instances when using EOSMode.ALPHA. With EOSMode.BETA, it is no longer necessary to use the same
transactional.id, even for consumer-initiated transactions; in fact, it must be unique on each
instance, the same as for producer-initiated transactions. For transactions started by the template
(or the transaction manager for @Transactional), you should set the property on the template and
the transaction manager, respectively. This property must have a different value on each
application instance.
This problem (different rules for transactional.id) has been eliminated when
EOSMode.BETA is being used (with broker versions >= 2.5); see Exactly Once
Semantics.
When a listener fails while transactions are being used, the AfterRollbackProcessor is invoked to
take some action after the rollback occurs. When using the default AfterRollbackProcessor with a
record listener, seeks are performed so that the failed record will be redelivered. With a batch
listener, however, the whole batch will be redelivered because the framework doesn’t know which
record in the batch failed. See After-rollback Processor for more information.
When using a batch listener, version 2.4.2 introduced an alternative mechanism to deal with
failures while processing a batch; the BatchToRecordAdapter. When a container factory with
batchListener set to true is configured with a BatchToRecordAdapter, the listener is invoked with one
record at a time. This enables error handling within the batch, while still making it possible to stop
processing the entire batch, depending on the exception type. A default BatchToRecordAdapter is
provided, that can be configured with a standard ConsumerRecordRecoverer such as the
DeadLetterPublishingRecoverer. The following test case configuration snippet illustrates how to use
this feature:
public static class TestListener {
@Configuration
@EnableKafka
public static class Config {
@Bean
public TestListener test() {
return new TestListener();
}
@Bean
public ConsumerFactory<?, ?> consumerFactory() {
return mock(ConsumerFactory.class);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String>
kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory factory = new
ConcurrentKafkaListenerContainerFactory();
factory.setConsumerFactory(consumerFactory());
factory.setBatchListener(true);
factory.setBatchToRecordAdapter(new DefaultBatchToRecordAdapter<>((record,
ex) -> {
this.failed = record;
}));
return factory;
}
4.1.14. Exactly Once Semantics
Exactly once semantics means that, for a read→process→write sequence, it is guaranteed that the
sequence is completed exactly once. (The read and process have at-least-once semantics.)
Spring for Apache Kafka version 2.5 and later supports two EOS modes:
• V1 - aka transactional.id fencing (since version 0.11.0.0)
• V2 - aka fetch-offset-request fencing (since version 2.5)
With mode V1, the producer is "fenced" if another instance with the same transactional.id is
started. Spring manages this by using a Producer for each group.id/topic/partition; when a
rebalance occurs a new instance will use the same transactional.id and the old producer is fenced.
With mode V2, it is not necessary to have a producer for each group.id/topic/partition because
consumer metadata is sent along with the offsets to the transaction and the broker can determine if
the producer is fenced using that information instead.
To revert to the previous behavior, configure the container to use mode V1 (formerly ALPHA) by
setting the container property EOSMode to V1.
With V2 (default), your brokers must be version 2.5 or later; with kafka-clients version
3.0, the producer will no longer fall back to V1; if the broker does not support V2, an
exception is thrown. If your brokers are earlier than 2.5, you must set the EOSMode
to V1, leave the DefaultKafkaProducerFactory producerPerConsumerPartition set to
true and, if you are using a batch listener, you should set subBatchPerPartition to
true.
When your brokers are upgraded to 2.5 or later, you should switch the mode to V2, but the number
of producers will remain as before. You can then do a rolling upgrade of your application with
producerPerConsumerPartition set to false to reduce the number of producers; you should also no
longer set the subBatchPerPartition container property.
If your brokers are already 2.5 or newer, you should set the DefaultKafkaProducerFactory
producerPerConsumerPartition property to false, to reduce the number of producers needed.
When using V2 mode, it is no longer necessary to set the subBatchPerPartition to true; it will default
to false when the EOSMode is V2.
V1 and V2 were previously ALPHA and BETA; they have been changed to align the framework with KIP-
732.
Apache Kafka provides a mechanism to add interceptors to producers and consumers. These
objects are managed by Kafka, not Spring, and so normal Spring dependency injection won’t work
for wiring in dependent Spring Beans. However, you can manually wire in those dependencies
using the interceptor config() method. The following Spring Boot application shows how to do this
by overriding boot’s default factories to add some dependent bean into the configuration
properties.
@SpringBootApplication
public class Application {
@Bean
public ConsumerFactory<?, ?> kafkaConsumerFactory(SomeBean someBean) {
Map<String, Object> consumerProperties = new HashMap<>();
// consumerProperties.put(..., ...)
// ...
consumerProperties.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG,
MyConsumerInterceptor.class.getName());
consumerProperties.put("some.bean", someBean);
return new DefaultKafkaConsumerFactory<>(consumerProperties);
}
@Bean
public ProducerFactory<?, ?> kafkaProducerFactory(SomeBean someBean) {
    Map<String, Object> producerProperties = new HashMap<>();
    // producerProperties.put(..., ...)
    // ...
    producerProperties.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG,
            MyProducerInterceptor.class.getName());
    producerProperties.put("some.bean", someBean);
    DefaultKafkaProducerFactory<?, ?> factory =
            new DefaultKafkaProducerFactory<>(producerProperties);
    return factory;
}
@Bean
public SomeBean someBean() {
return new SomeBean();
}
@Bean
public ApplicationRunner runner(KafkaTemplate<String, String> template) {
return args -> template.send("kgh897", "test");
}
@Bean
public NewTopic kRequests() {
return TopicBuilder.name("kgh897")
.partitions(1)
.replicas(1)
.build();
}
public class MyProducerInterceptor implements ProducerInterceptor<String, String> {

    private SomeBean bean;

    @Override
    public void configure(Map<String, ?> configs) {
        this.bean = (SomeBean) configs.get("some.bean");
    }
@Override
public ProducerRecord<String, String> onSend(ProducerRecord<String, String>
record) {
this.bean.someMethod("producer interceptor");
return record;
}
@Override
public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
}
@Override
public void close() {
}
public class MyConsumerInterceptor implements ConsumerInterceptor<String, String> {

    private SomeBean bean;

    @Override
    public void configure(Map<String, ?> configs) {
        this.bean = (SomeBean) configs.get("some.bean");
    }
@Override
public ConsumerRecords<String, String> onConsume(ConsumerRecords<String,
String> records) {
this.bean.someMethod("consumer interceptor");
return records;
}
@Override
public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {
}
@Override
public void close() {
}
Result:
Version 2.1.3 added pause() and resume() methods to listener containers. Previously, you could
pause a consumer within a ConsumerAwareMessageListener and resume it by listening for a
ListenerContainerIdleEvent, which provides access to the Consumer object. While you could pause a
consumer in an idle container by using an event listener, in some cases, this was not thread-safe,
since there is no guarantee that the event listener is invoked on the consumer thread. To safely
pause and resume consumers, you should use the pause and resume methods on the listener
containers. A pause() takes effect just before the next poll(); a resume() takes effect just after the
current poll() returns. When a container is paused, it continues to poll() the consumer, avoiding a
rebalance if group management is being used, but it does not retrieve any records. See the Kafka
documentation for more information.
Starting with version 2.1.5, you can call isPauseRequested() to see if pause() has been called.
However, the consumers might not have actually paused yet. isConsumerPaused() returns true if all
Consumer instances have actually paused.
In addition (also since 2.1.5), ConsumerPausedEvent and ConsumerResumedEvent instances are published
with the container as the source property and the TopicPartition instances involved in the
partitions property.
Starting with version 2.9, a new container property pauseImmediate, when set to true, causes the
pause to take effect after the current record is processed. By default, the pause takes effect when all
of the records from the previous poll have been processed. See [pauseImmediate].
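For example, a sketch configuring this on a container factory (setPauseImmediate is assumed to be the corresponding ContainerProperties setter):

@Bean
ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.getContainerProperties().setPauseImmediate(true); // pause right after the current record
    return factory;
}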
The following simple Spring Boot application demonstrates by using the container registry to get a
reference to a @KafkaListener method’s container and pausing or resuming its consumers as well as
receiving the corresponding events:
@SpringBootApplication
public class Application implements ApplicationListener<KafkaEvent> {
@Override
public void onApplicationEvent(KafkaEvent event) {
System.out.println(event);
}
@Bean
public ApplicationRunner runner(KafkaListenerEndpointRegistry registry,
KafkaTemplate<String, String> template) {
return args -> {
template.send("pause.resume.topic", "thing1");
Thread.sleep(10_000);
System.out.println("pausing");
registry.getListenerContainer("pause.resume").pause();
Thread.sleep(10_000);
template.send("pause.resume.topic", "thing2");
Thread.sleep(10_000);
System.out.println("resuming");
registry.getListenerContainer("pause.resume").resume();
Thread.sleep(10_000);
};
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("pause.resume.topic")
.partitions(2)
.replicas(1)
.build();
}

    @KafkaListener(id = "pause.resume", topics = "pause.resume.topic")
    public void listen(String in) {
        System.out.println(in);
    }

}
partitions assigned: [pause.resume.topic-1, pause.resume.topic-0]
thing1
pausing
ConsumerPausedEvent [partitions=[pause.resume.topic-1, pause.resume.topic-0]]
resuming
ConsumerResumedEvent [partitions=[pause.resume.topic-1, pause.resume.topic-0]]
thing2
Since version 2.7, you can pause and resume the consumption of specific partitions assigned to the consumer by using the pausePartition(TopicPartition topicPartition) and resumePartition(TopicPartition topicPartition) methods on the listener containers. The pausing and resuming take place before and after the poll(), respectively, similar to the pause() and resume() methods. The isPartitionPauseRequested() method returns true if pause for that partition has been requested. The isPartitionPaused() method returns true if that partition has effectively been paused.
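For example, a sketch pausing and resuming a single partition (registry is the KafkaListenerEndpointRegistry, and "pause.resume" is the listener id used in the preceding example):

MessageListenerContainer container = registry.getListenerContainer("pause.resume");
TopicPartition partition = new TopicPartition("pause.resume.topic", 0);
container.pausePartition(partition);
...
if (container.isPartitionPaused(partition)) {
    container.resumePartition(partition);
}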
Overview
Apache Kafka provides a high-level API for serializing and deserializing record values as well as
their keys. It is present with the org.apache.kafka.common.serialization.Serializer<T> and
org.apache.kafka.common.serialization.Deserializer<T> abstractions with some built-in
implementations. Meanwhile, we can specify serializer and deserializer classes by using Producer or
Consumer configuration properties. The following example shows how to do so:
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class
);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.
class);
...
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
For more complex or particular cases, the KafkaConsumer (and, therefore, KafkaProducer) provides
overloaded constructors to accept Serializer and Deserializer instances for keys and values,
respectively.
When you use this API, the DefaultKafkaProducerFactory and DefaultKafkaConsumerFactory also
provide properties (through constructors or setter methods) to inject custom Serializer and
Deserializer instances into the target Producer or Consumer. Also, you can pass in
Supplier<Serializer> or Supplier<Deserializer> instances through constructors - these Supplier s
are called on creation of each Producer or Consumer.
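For example, a sketch passing serializer Supplier s to a producer factory (Thing and the config map are illustrative):

ProducerFactory<String, Thing> pf = new DefaultKafkaProducerFactory<>(producerConfigs,
        () -> new StringSerializer(), () -> new JsonSerializer<Thing>());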
String serialization
Since version 2.5, Spring for Apache Kafka provides ToStringSerializer and
ParseStringDeserializer classes that use String representation of entities. They rely on methods
toString and some Function<String> or BiFunction<String, Headers> to parse the String and
populate properties of an instance. Usually, this would invoke some static method on the class, such
as parse:
By default, the ToStringSerializer is configured to convey type information about the serialized
entity in the record Headers. You can disable this by setting the addTypeInfo property to false. This
information can be used by ParseStringDeserializer on the receiving side.
ParseStringDeserializer<Thing> deserializer = new ParseStringDeserializer<>((str, headers) -> {
    // determine the entity type, e.g. from a type header added by the serializer
    // ("entityType" is an illustrative header name)
    String entityType = new String(headers.lastHeader("entityType").value());
    if (entityType.contains("Thing")) {
        return Thing.parse(str);
    }
    else {
        // ...parsing logic for other types
        return null;
    }
});
You can configure the Charset used to convert String to/from byte[] with the default being UTF-8.
You can configure the deserializer with the name of the parser method using ConsumerConfig
properties:
• ParseStringDeserializer.KEY_PARSER
• ParseStringDeserializer.VALUE_PARSER
The properties must contain the fully qualified name of the class followed by the method name, separated by a period (.). The method must be static and have a signature of either (String, Headers) or (String).
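For example (com.example.Thing is an illustrative class with a static parse(String) method):

consumerProps.put(ParseStringDeserializer.VALUE_PARSER, "com.example.Thing.parse");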
JSON
Spring for Apache Kafka also provides JsonSerializer and JsonDeserializer implementations that
are based on the Jackson JSON object mapper. The JsonSerializer allows writing any Java object as
a JSON byte[]. The JsonDeserializer requires an additional Class<?> targetType argument to allow
the deserialization of a consumed byte[] to the proper target object. The following example shows
how to create a JsonDeserializer:
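A minimal sketch, assuming a Thing domain class:

JsonDeserializer<Thing> thingDeserializer = new JsonDeserializer<>(Thing.class);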
You can customize both JsonSerializer and JsonDeserializer with an ObjectMapper. You can also
extend them to implement some particular configuration logic in the configure(Map<String, ?>
configs, boolean isKey) method.
Starting with version 2.3, all the JSON-aware components are configured by default with a
JacksonUtils.enhancedObjectMapper() instance, which comes with the
MapperFeature.DEFAULT_VIEW_INCLUSION and DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES
features disabled. Also such an instance is supplied with well-known modules for custom data
types, such a Java time and Kotlin support. See JacksonUtils.enhancedObjectMapper() JavaDocs for
more information. This method also registers a
org.springframework.kafka.support.JacksonMimeTypeModule for org.springframework.util.MimeType
objects serialization into the plain string for inter-platform compatibility over the network. A
JacksonMimeTypeModule can be registered as a bean in the application context and it will be auto-
configured into the Spring Boot ObjectMapper instance.
Also starting with version 2.3, the JsonDeserializer provides TypeReference-based constructors for
better handling of target generic container types.
Starting with version 2.1, you can convey type information in record Headers, allowing the handling
of multiple types. In addition, you can configure the serializer and deserializer by using the
following Kafka properties. They have no effect if you have provided Serializer and Deserializer
instances for KafkaConsumer and KafkaProducer, respectively.
Configuration Properties
Starting with version 2.2, the type information headers (if added by the serializer) are removed by
the deserializer. You can revert to the previous behavior by setting the removeTypeHeaders property
to false, either directly on the deserializer or with the configuration property described earlier.
Mapping Types
Starting with version 2.2, when using JSON, you can now provide type mappings by using the
properties in the preceding list. Previously, you had to customize the type mapper within the
serializer and deserializer. Mappings consist of a comma-delimited list of token:className pairs. On
outbound, the payload’s class name is mapped to the corresponding token. On inbound, the token
in the type header is mapped to the corresponding class name.
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.
class);
senderProps.put(JsonSerializer.TYPE_MAPPINGS, "cat:com.mycat.Cat,
hat:com.myhat.hat");
...
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
JsonDeserializer.class);
consumerProps.put(JsonDeserializer.TYPE_MAPPINGS, "cat:com.yourcat.Cat,
hat:com.yourhat.hat");
If you use Spring Boot, you can provide these properties in the application.properties (or yaml)
file. The following example shows how to do so:
spring.kafka.producer.value-
serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.producer.properties.spring.json.type.mapping=cat:com.mycat.Cat,hat:co
m.myhat.Hat
You can perform only simple configuration with properties. For more advanced
configuration (such as using a custom ObjectMapper in the serializer and
deserializer), you should use the producer and consumer factory constructors that
accept a pre-built serializer and deserializer. The following Spring Boot example
overrides the default factories:
@Bean
public ConsumerFactory<String, Thing> kafkaConsumerFactory
(JsonDeserializer customValueDeserializer) {
Map<String, Object> properties = new HashMap<>();
// properties.put(..., ...)
// ...
return new DefaultKafkaConsumerFactory<>(properties,
new StringDeserializer(), customValueDeserializer);
}
@Bean
public ProducerFactory<String, Thing> kafkaProducerFactory(JsonSerializer customValueSerializer) {
    Map<String, Object> properties = new HashMap<>();
    // properties.put(..., ...)
    // ...
    return new DefaultKafkaProducerFactory<>(properties,
            new StringSerializer(), customValueSerializer);
}
Starting with version 2.2, you can explicitly configure the deserializer to use the supplied target
type and ignore type information in headers by using one of the overloaded constructors that have
a boolean useHeadersIfPresent (which is true by default). The following example shows how to do
so:
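A sketch of such a configuration (Cat1 and the props map are illustrative); the boolean constructor argument disables the use of type headers:

DefaultKafkaConsumerFactory<Integer, Cat1> cf = new DefaultKafkaConsumerFactory<>(consumerProps,
        new IntegerDeserializer(), new JsonDeserializer<>(Cat1.class, false));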
Starting with version 2.5, you can now configure the deserializer, via properties, to invoke a
method to determine the target type. If present, this will override any of the other techniques
discussed above. This can be useful if the data is published by an application that does not use the
Spring serializer and you need to deserialize to different types depending on the data, or other
headers. Set these properties to the method name - a fully qualified class name followed by the method name, separated by a period (.). The method must be declared as public static, have one of three signatures - (String topic, byte[] data, Headers headers), (byte[] data, Headers headers), or (byte[] data) - and return a Jackson JavaType.
• JsonDeserializer.KEY_TYPE_METHOD : spring.json.key.type.method
• JsonDeserializer.VALUE_TYPE_METHOD : spring.json.value.type.method
You can use arbitrary headers or inspect the data to determine the type.
Example
For more sophisticated data inspection, consider using JsonPath or similar, but the simpler the test to determine the type, the more efficient the process will be.
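A sketch of such a method and the corresponding consumer property (the class name, header name, and types are illustrative):

public static JavaType determineType(byte[] data, Headers headers) {
    // choose the target type based on a header (or by inspecting the data)
    Header typeHeader = headers.lastHeader("myTypeHeader");
    if (typeHeader != null && "thing".equals(new String(typeHeader.value()))) {
        return TypeFactory.defaultInstance().constructType(Thing.class);
    }
    return TypeFactory.defaultInstance().constructMapType(HashMap.class, String.class, Object.class);
}
...
consumerProps.put(JsonDeserializer.VALUE_TYPE_METHOD, "com.example.MyTypes.determineType");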
The following is an example of creating the deserializer programmatically (when providing the
consumer factory with the deserializer in the constructor):
...
Programmatic Construction
@Bean
public ProducerFactory<MyKeyType, MyValueType> pf() {
Map<String, Object> props = new HashMap<>();
// props.put(..., ...)
// ...
DefaultKafkaProducerFactory<MyKeyType, MyValueType> pf = new
DefaultKafkaProducerFactory<>(props,
new JsonSerializer<MyKeyType>()
.forKeys()
.noTypeInfo(),
new JsonSerializer<MyValueType>()
.noTypeInfo());
return pf;
}
@Bean
public ConsumerFactory<MyKeyType, MyValueType> cf() {
Map<String, Object> props = new HashMap<>();
// props.put(..., ...)
// ...
DefaultKafkaConsumerFactory<MyKeyType, MyValueType> cf = new
DefaultKafkaConsumerFactory<>(props,
new JsonDeserializer<>(MyKeyType.class)
.forKeys()
.ignoreTypeHeaders(),
new JsonDeserializer<>(MyValueType.class)
.ignoreTypeHeaders());
return cf;
}
To provide type mapping programmatically, similar to Using Methods to Determine Types, use the
typeFunction property.
Example
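A sketch using the fluent API (MyTypeUtils.determineJavaType is an illustrative static (byte[] data, Headers headers) -> JavaType method):

JsonDeserializer<Object> deserializer = new JsonDeserializer<>()
        .trustedPackages("*")
        .typeFunction(MyTypeUtils::determineJavaType);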
Alternatively, as long as you don’t use the fluent API to configure properties, or set them using
set*() methods, the factories will configure the serializer/deserializer using the configuration
properties; see Configuration Properties.
Using Headers
The DelegatingSerializer and DelegatingDeserializer select a delegate based on a selector header (DelegatingSerializer.KEY_SERIALIZATION_SELECTOR / VALUE_SERIALIZATION_SELECTOR). For incoming records, the deserializer uses the same headers to select the deserializer to use; if a match is not found or the header is not present, the raw byte[] is returned.
You can configure the map of selector to Serializer / Deserializer via a constructor, or you can
configure it via Kafka producer/consumer properties with the keys
DelegatingSerializer.VALUE_SERIALIZATION_SELECTOR_CONFIG and
DelegatingSerializer.KEY_SERIALIZATION_SELECTOR_CONFIG. For the serializer, the producer property
can be a Map<String, Object> where the key is the selector and the value is a Serializer instance, a
serializer Class or the class name. The property can also be a String of comma-delimited map
entries, as shown below.
For the deserializer, the consumer property can be a Map<String, Object> where the key is the
selector and the value is a Deserializer instance, a deserializer Class or the class name. The
property can also be a String of comma-delimited map entries, as shown below.
producerProps.put(DelegatingSerializer.VALUE_SERIALIZATION_SELECTOR_CONFIG,
"thing1:com.example.MyThing1Serializer, thing2:com.example.MyThing2Serializer
")
consumerProps.put(DelegatingDeserializer.VALUE_SERIALIZATION_SELECTOR_CONFIG,
"thing1:com.example.MyThing1Deserializer,
thing2:com.example.MyThing2Deserializer")
This technique supports sending different types to the same topic (or different topics).
Starting with version 2.5.1, it is not necessary to set the selector header, if the type
(key or value) is one of the standard types supported by Serdes (Long, Integer, etc).
Instead, the serializer will set the header to the class name of the type. It is not
necessary to configure serializers or deserializers for these types, they will be
created (once) dynamically.
For another technique to send different types to different topics, see Using RoutingKafkaTemplate.
By Type
@Bean
public ProducerFactory<Integer, Object> producerFactory(Map<String, Object>
config) {
return new DefaultKafkaProducerFactory<>(config,
null, new DelegatingByTypeSerializer(Map.of(
byte[].class, new ByteArraySerializer(),
Bytes.class, new BytesSerializer(),
String.class, new StringSerializer())));
}
Starting with version 2.8.3, you can configure the serializer to check whether the map key is assignable from the target object, which is useful when a delegate serializer can serialize subclasses. In this case, if there are ambiguous matches, an ordered Map, such as a LinkedHashMap, should be provided.
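A sketch, assuming the assignability check is enabled by a boolean constructor argument (SpecialThing and Thing are illustrative types):

Map<Class<?>, Serializer<?>> delegates = new LinkedHashMap<>();
delegates.put(SpecialThing.class, new JsonSerializer<SpecialThing>()); // most specific entries first
delegates.put(Thing.class, new JsonSerializer<Thing>());
DelegatingByTypeSerializer serializer = new DelegatingByTypeSerializer(delegates, true); // true = assignable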
By Topic
producerConfigs.put(DelegatingByTopicSerializer.VALUE_SERIALIZATION_TOPIC_CONFIG,
"topic[0-4]:" + ByteArraySerializer.class.getName()
+ ", topic[5-9]:" + StringSerializer.class.getName());
...
consumerConfigs.put(DelegatingByTopicDeserializer.VALUE_SERIALIZATION_TOPIC_CONFIG,
"topic[0-4]:" + ByteArrayDeserializer.class.getName()
+ ", topic[5-9]:" + StringDeserializer.class.getName());
@Bean
public ProducerFactory<Integer, Object> producerFactory(Map<String, Object>
config) {
return new DefaultKafkaProducerFactory<>(config,
null,
new DelegatingByTopicSerializer(Map.of(
Pattern.compile("topic[0-4]"), new ByteArraySerializer(),
Pattern.compile("topic[5-9]"), new StringSerializer())),
new JsonSerializer<Object>()); // default
}
You can specify a default serializer/deserializer to use when there is no pattern match using
DelegatingByTopicSerialization.KEY_SERIALIZATION_TOPIC_DEFAULT and
DelegatingByTopicSerialization.VALUE_SERIALIZATION_TOPIC_DEFAULT.
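For example, a sketch that falls back to JSON for values on topics with no matching pattern:

producerConfigs.put(DelegatingByTopicSerialization.VALUE_SERIALIZATION_TOPIC_DEFAULT,
        JsonSerializer.class.getName());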
Retrying Deserializer
Refer to the spring-retry project for configuration of the RetryTemplate with a retry policy, back off
policy, etc.
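A sketch of wrapping an unreliable delegate deserializer with a RetryingDeserializer (Thing and the config map are illustrative):

RetryTemplate retryTemplate = new RetryTemplate();
// configure retry and back off policies on the template as needed
ConsumerFactory<String, Thing> cf = new DefaultKafkaConsumerFactory<>(consumerConfigs,
        new StringDeserializer(),
        new RetryingDeserializer<>(new JsonDeserializer<>(Thing.class), retryTemplate));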
Although the Serializer and Deserializer API is quite simple and flexible from the low-level Kafka
Consumer and Producer perspective, you might need more flexibility at the Spring Messaging level,
when using either @KafkaListener or Spring Integration’s Apache Kafka Support. To let you easily
convert to and from org.springframework.messaging.Message, Spring for Apache Kafka provides a
MessageConverter abstraction with the MessagingMessageConverter implementation and its
JsonMessageConverter (and subclasses) customization. You can inject the MessageConverter into a
KafkaTemplate instance directly and by using AbstractKafkaListenerContainerFactory bean definition
for the @KafkaListener.containerFactory() property. The following example shows how to do so:
@Bean
public KafkaListenerContainerFactory<?> kafkaJsonListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setMessageConverter(new JsonMessageConverter());
return factory;
}
...
@KafkaListener(topics = "jsonData",
containerFactory = "kafkaJsonListenerContainerFactory")
public void jsonListener(Cat cat) {
...
}
When using Spring Boot, simply define the converter as a @Bean and Spring Boot auto configuration
will wire it into the auto-configured template and container factory.
When you use a @KafkaListener, the parameter type is provided to the message converter to assist
with the conversion.
This type inference can be achieved only when the @KafkaListener annotation is
declared at the method level. With a class-level @KafkaListener, the payload type is
used to select which @KafkaHandler method to invoke, so it must already have been
converted before the method can be chosen.
On the consumer side, you can configure a JsonMessageConverter; it can handle
ConsumerRecord values of type byte[], Bytes and String so should be used in
conjunction with a ByteArrayDeserializer, BytesDeserializer or
StringDeserializer. (byte[] and Bytes are more efficient because they avoid an
unnecessary byte[] to String conversion). You can also configure the specific
subclass of JsonMessageConverter corresponding to the deserializer, if you so wish.
Again, using byte[] or Bytes is more efficient because they avoid a String to byte[]
conversion.
For convenience, starting with version 2.3, the framework also provides a
StringOrBytesSerializer which can serialize all three value types so it can be used
with any of the message converters.
Starting with version 2.7.1, message payload conversion can be delegated to a spring-messaging
SmartMessageConverter; this enables conversion, for example, to be based on the
MessageHeaders.CONTENT_TYPE header.
When the default converter is used in the KafkaTemplate and listener container factory, you configure the SmartMessageConverter by calling setMessagingConverter() on the template and via the contentTypeConverter property on @KafkaListener methods.
Examples:
template.setMessagingConverter(mySmartConverter);
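And, a sketch of referencing a SmartMessageConverter bean from a listener (the bean name and Thing are illustrative):

@KafkaListener(id = "smart", topics = "someTopic", contentTypeConverter = "mySmartMessageConverter")
public void listen(Thing thing) {
    ...
}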
Starting with version 2.1.1, you can convert JSON to a Spring Data Projection interface instead of a
concrete type. This allows very selective, and low-coupled bindings to data, including the lookup of
values from multiple places inside the JSON document. For example the following interface can be
defined as message payload type:
interface SomeSample {

    @JsonPath({ "$.username", "$.user.name" })
    String getUsername();

}
Accessor methods will be used to lookup the property name as field in the received JSON document
by default. The @JsonPath expression allows customization of the value lookup, and even to define
multiple JSON Path expressions, to lookup values from multiple places until an expression returns
an actual value.
When used as the parameter to a @KafkaListener method, the interface type is automatically passed
to the converter as normal.
Using ErrorHandlingDeserializer
When a deserializer fails to deserialize a message, Spring has no way to handle the problem,
because it occurs before the poll() returns. To solve this problem, the ErrorHandlingDeserializer
has been introduced. This deserializer delegates to a real deserializer (key or value). If the delegate
fails to deserialize the record content, the ErrorHandlingDeserializer returns a null value and a
DeserializationException in a header that contains the cause and the raw bytes. When you use a
record-level MessageListener, if the ConsumerRecord contains a DeserializationException header for
either the key or value, the container’s ErrorHandler is called with the failed ConsumerRecord. The
record is not passed to the listener.
You can use the DefaultKafkaConsumerFactory constructor that takes key and value Deserializer
objects and wire in appropriate ErrorHandlingDeserializer instances that you have configured with
the proper delegates. Alternatively, you can use consumer configuration properties (which are used
by the ErrorHandlingDeserializer) to instantiate the delegates. The property names are
ErrorHandlingDeserializer.KEY_DESERIALIZER_CLASS and
ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS. The property value can be a class or class
name. The following example shows how to set these properties:
public class FailedFooProvider implements Function<FailedDeserializationInfo, Foo> {

    @Override
    public Foo apply(FailedDeserializationInfo info) {
        return new BadFoo(info); // BadFoo is a Foo subclass that captures the failure info
    }

}
...
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
ErrorHandlingDeserializer.class);
consumerProps.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS,
JsonDeserializer.class);
consumerProps.put(ErrorHandlingDeserializer.VALUE_FUNCTION, FailedFooProvider
.class);
...
@Bean
public ProducerFactory<String, Object> producerFactory() {
return new DefaultKafkaProducerFactory<>(producerConfiguration(), new
StringSerializer(),
new DelegatingByTypeSerializer(Map.of(byte[].class, new ByteArraySerializer(),
MyNormalObject.class, new JsonSerializer<Object>())));
}
@Bean
public KafkaTemplate<String, Object> kafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
When using an ErrorHandlingDeserializer with a batch listener, you must check for the
deserialization exceptions in message headers. When used with a DefaultBatchErrorHandler, you
can use that header to determine which record the exception failed on and communicate to the
error handler via a BatchListenerFailedException.
@KafkaListener(id = "kgh2036", topics = "kgh2036")
void listen(List<ConsumerRecord<String, Thing>> in) {
for (int i = 0; i < in.size(); i++) {
ConsumerRecord<String, Thing> rec = in.get(i);
if (rec.value() == null) {
DeserializationException deserEx = ListenerUtils
.getExceptionFromHeader(rec,
SerializationUtils.VALUE_DESERIALIZER_EXCEPTION_HEADER, this
.logger);
if (deserEx != null) {
logger.error(deserEx, "Record at offset " + rec.offset() + " could
not be deserialized");
throw new BatchListenerFailedException("Deserialization", deserEx,
i);
}
}
process(rec.value());
}
}
By default, the type for the conversion is inferred from the listener argument. If you configure the
JsonMessageConverter with a DefaultJackson2TypeMapper that has its TypePrecedence set to TYPE_ID
(instead of the default INFERRED), the converter uses the type information in headers (if present)
instead. This allows, for example, listener methods to be declared with interfaces instead of
concrete classes. Also, the type converter supports mapping, so the deserialization can be to a
different type than the source (as long as the data is compatible). This is also useful when you use
class-level @KafkaListener instances where the payload must have already been converted to
determine which method to invoke. The following example creates beans that use this method:
@Bean
public KafkaListenerContainerFactory<?> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setBatchListener(true);
factory.setMessageConverter(new BatchMessagingMessageConverter(converter()));
return factory;
}
@Bean
public JsonMessageConverter converter() {
return new JsonMessageConverter();
}
Note that, for this to work, the method signature for the conversion target must be a container
object with a single generic parameter type, such as the following:
@KafkaListener(topics = "blc1")
public void listen(List<Foo> foos, @Header(KafkaHeaders.OFFSET) List<Long>
offsets) {
...
}
If the batch converter has a record converter that supports it, you can also receive a list of messages
where the payloads are converted according to the generic type. The following example shows how
to do so:
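A sketch of such a listener signature (Foo is illustrative):

@KafkaListener(topics = "blc3", containerFactory = "kafkaJsonListenerContainerFactory")
public void listen(List<Message<Foo>> fooMessages) {
    ...
}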
ConversionService Customization
• org.springframework.core.convert.converter.Converter
• org.springframework.core.convert.converter.GenericConverter
• org.springframework.format.Formatter
This lets you further customize listener deserialization without changing the default configuration
for ConsumerFactory and KafkaListenerContainerFactory.
Starting with version 2.4.2, you can add your own HandlerMethodArgumentResolver and resolve custom method parameters. All you need to do is implement KafkaListenerConfigurer and use the setCustomMethodArgumentResolvers() method of the KafkaListenerEndpointRegistrar.
@Configuration
class CustomKafkaConfig implements KafkaListenerConfigurer {
@Override
public void configureKafkaListeners(KafkaListenerEndpointRegistrar registrar)
{
registrar.setCustomMethodArgumentResolvers(
new HandlerMethodArgumentResolver() {
@Override
public boolean supportsParameter(MethodParameter parameter) {
return CustomMethodArgument.class.isAssignableFrom(parameter
.getParameterType());
}
@Override
public Object resolveArgument(MethodParameter parameter, Message<
?> message) {
return new CustomMethodArgument(
message.getHeaders().get(KafkaHeaders.RECEIVED_TOPIC,
String.class)
);
}
}
);
    }

}
You can also completely replace the framework’s argument resolution by adding a custom
MessageHandlerMethodFactory to the KafkaListenerEndpointRegistrar bean. If you do this, and your
application needs to handle tombstone records, with a null value() (e.g. from a compacted topic),
you should add a KafkaNullAwarePayloadArgumentResolver to the factory; it must be the last resolver
because it supports all types and can match arguments without a @Payload annotation. If you are
using a DefaultMessageHandlerMethodFactory, set this resolver as the last custom resolver; the factory
will ensure that this resolver will be used before the standard PayloadMethodArgumentResolver,
which has no knowledge of KafkaNull payloads.
The 0.11.0.0 client introduced support for headers in messages. As of version 2.0, Spring for Apache
Kafka now supports mapping these headers to and from spring-messaging MessageHeaders.
Apache Kafka headers have a simple API, shown in the following interface definition:
String key();
byte[] value();
The KafkaHeaderMapper strategy is provided to map header entries between Kafka Headers and
MessageHeaders. Its interface definition is as follows:
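A sketch of the two mapping operations, matching the fromHeaders()/toHeaders() calls shown in the test example later in this section:

public interface KafkaHeaderMapper {

    void fromHeaders(MessageHeaders headers, Headers target);

    void toHeaders(Headers source, Map<String, Object> headers);

}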
The SimpleKafkaHeaderMapper maps raw headers as byte[], with configuration options for
conversion to String values.
The DefaultKafkaHeaderMapper maps the key to the MessageHeaders header name and, in order to
support rich header types for outbound messages, JSON conversion is performed. A “special”
header (with a key of spring_json_header_types) contains a JSON map of <key>:<type>. This header is
used on the inbound side to provide appropriate conversion of each header value to the original
type.
On the inbound side, all Kafka Header instances are mapped to MessageHeaders. On the outbound
side, by default, all MessageHeaders are mapped, except id, timestamp, and the headers that map to
ConsumerRecord properties.
You can specify which headers are to be mapped for outbound messages, by providing patterns to
the mapper. The following listing shows a number of example mappings:
public DefaultKafkaHeaderMapper() { ... } ①
public DefaultKafkaHeaderMapper(ObjectMapper objectMapper) { ... } ②
public DefaultKafkaHeaderMapper(String... patterns) { ... } ③
public DefaultKafkaHeaderMapper(ObjectMapper objectMapper, String... patterns) { ... } ④
① Uses a default Jackson ObjectMapper and maps most headers, as discussed before the
example.
② Uses the provided Jackson ObjectMapper and maps most headers, as discussed before the
example.
③ Uses a default Jackson ObjectMapper and maps headers according to the provided patterns.
④ Uses the provided Jackson ObjectMapper and maps headers according to the provided
patterns.
Patterns are rather simple and can contain a leading wildcard (*), a trailing wildcard, or both (for example, *.cat.*). You can negate patterns with a leading !. The first pattern that matches a header name (whether positive or negative) wins.
When you provide your own patterns, we recommend including !id and !timestamp, since these
headers are read-only on the inbound side.
By default, the mapper deserializes only classes in java.lang and java.util. You
can trust other (or all) packages by adding trusted packages with the
addTrustedPackages method. If you receive messages from untrusted sources, you
may wish to add only those packages you trust. To trust all packages, you can use
mapper.addTrustedPackages("*").
Mapping String header values in a raw form is useful when communicating with
systems that are not aware of the mapper’s JSON format.
Starting with version 2.2.5, you can specify that certain string-valued headers should not be
mapped using JSON, but to/from a raw byte[]. The AbstractKafkaHeaderMapper has new properties: mapAllStringsOut, which, when set to true, causes all string-valued headers to be converted to byte[] using the
charset property (default UTF-8). In addition, there is a property rawMappedHeaders, which is a map of
header name : boolean; if the map contains a header name, and the header contains a String value,
it will be mapped as a raw byte[] using the charset. This map is also used to map raw incoming
byte[] headers to String using the charset if, and only if, the boolean in the map value is true. If the
boolean is false, or the header name is not in the map with a true value, the incoming header is
simply mapped as the raw unmapped header.
@Test
public void testSpecificStringConvert() {
DefaultKafkaHeaderMapper mapper = new DefaultKafkaHeaderMapper();
Map<String, Boolean> rawMappedHeaders = new HashMap<>();
rawMappedHeaders.put("thisOnesAString", true);
rawMappedHeaders.put("thisOnesBytes", false);
mapper.setRawMappedHeaders(rawMappedHeaders);
Map<String, Object> headersMap = new HashMap<>();
headersMap.put("thisOnesAString", "thing1");
headersMap.put("thisOnesBytes", "thing2");
headersMap.put("alwaysRaw", "thing3".getBytes());
MessageHeaders headers = new MessageHeaders(headersMap);
Headers target = new RecordHeaders();
mapper.fromHeaders(headers, target);
assertThat(target).containsExactlyInAnyOrder(
new RecordHeader("thisOnesAString", "thing1".getBytes()),
new RecordHeader("thisOnesBytes", "thing2".getBytes()),
new RecordHeader("alwaysRaw", "thing3".getBytes()));
headersMap.clear();
mapper.toHeaders(target, headersMap);
assertThat(headersMap).contains(
entry("thisOnesAString", "thing1"),
entry("thisOnesBytes", "thing2".getBytes()),
entry("alwaysRaw", "thing3".getBytes()));
}
Both header mappers map all inbound headers, by default. Starting with version 2.8.8, the patterns can also be applied to inbound mapping. To create a mapper for inbound mapping, use one of the static methods on the respective mapper:
For example:
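A sketch, assuming the forInboundOnlyWithMatchers(String...) static factory method:

DefaultKafkaHeaderMapper inboundMapper =
        DefaultKafkaHeaderMapper.forInboundOnlyWithMatchers("!abc*", "*");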
This will exclude all headers beginning with abc and include all others.
With the batch converter, the converted headers are available in the
KafkaHeaders.BATCH_CONVERTED_HEADERS as a List<Map<String, Object>> where the map in a position
of the list corresponds to the data position in the payload.
If there is no converter (either because Jackson is not present or it is explicitly set to null), the
headers from the consumer record are provided unconverted in the KafkaHeaders.NATIVE_HEADERS
header. This header is a Headers object (or a List<Headers> in the case of the batch converter), where
the position in the list corresponds to the data position in the payload.
Certain types are not suitable for JSON serialization, and a simple toString()
serialization might be preferred for these types. The DefaultKafkaHeaderMapper has
a method called addToStringClasses() that lets you supply the names of classes that
should be treated this way for outbound mapping. During inbound mapping, they
are mapped as String. By default, only org.springframework.util.MimeType and
org.springframework.http.MediaType are mapped this way.
Starting with version 2.3, handling of String-valued headers is simplified. Such
headers are no longer JSON encoded, by default (i.e. they do not have enclosing "…
" added). The type is still added to the JSON_TYPES header so the receiving system
can convert back to a String (from byte[]). The mapper can handle (decode)
headers produced by older versions (it checks for a leading "); in this way an
application using 2.3 can consume records from older versions.
@Bean
MessagingMessageConverter converter() {
MessagingMessageConverter converter = new MessagingMessageConverter();
DefaultKafkaHeaderMapper mapper = new DefaultKafkaHeaderMapper();
mapper.setEncodeStrings(true);
converter.setHeaderMapper(mapper);
return converter;
}
If using Spring Boot, it will auto configure this converter bean into the auto-configured
KafkaTemplate; otherwise you should add this converter to the template.
When you use Log Compaction, you can send and receive messages with null payloads to identify
the deletion of a key.
You can also receive null values for other reasons, such as a Deserializer that might return null
when it cannot deserialize a value.
To send a null payload by using the KafkaTemplate, you can pass null into the value argument of the
send() methods. One exception to this is the send(Message<?> message) variant. Since spring-
messaging Message<?> cannot have a null payload, you can use a special payload type called
KafkaNull, and the framework sends null. For convenience, the static KafkaNull.INSTANCE is
provided.
When you use a message listener container, the received ConsumerRecord has a null value().
To configure the @KafkaListener to handle null payloads, you must use the @Payload annotation with
required = false. If it is a tombstone message for a compacted log, you usually also need the key so
that your application can determine which key was “deleted”. The following example shows such a
configuration:
@KafkaListener(id = "deletableListener", topics = "myTopic")
public void listen(@Payload(required = false) String value, @Header(KafkaHeaders
.RECEIVED_KEY) String key) {
// value == null represents key deletion
}
When you use a class-level @KafkaListener with multiple @KafkaHandler methods, some additional
configuration is needed. Specifically, you need a @KafkaHandler method with a KafkaNull payload.
The following example shows how to configure one:
@KafkaHandler
public void listen(String cat) {
...
}
@KafkaHandler
public void listen(Integer hat) {
...
}
@KafkaHandler
public void delete(@Payload(required = false) KafkaNull nul, @Header
(KafkaHeaders.RECEIVED_KEY) int key) {
...
}
This section describes how to handle various exceptions that may arise when you use Spring for
Apache Kafka.
Listener Error Handlers
Starting with version 2.0, the @KafkaListener annotation has a new attribute: errorHandler.
You can use the errorHandler to provide the bean name of a KafkaListenerErrorHandler
implementation. This functional interface has one method, as the following listing shows:
@FunctionalInterface
public interface KafkaListenerErrorHandler {
You have access to the spring-messaging Message<?> object produced by the message converter and
the exception that was thrown by the listener, which is wrapped in a
ListenerExecutionFailedException. The error handler can throw the original or a new exception,
which is thrown to the container. Anything returned by the error handler is ignored.
Starting with version 2.7, you can set the rawRecordHeader property on the
MessagingMessageConverter and BatchMessagingMessageConverter which causes the raw
ConsumerRecord to be added to the converted Message<?> in the KafkaHeaders.RAW_DATA header. This is
useful, for example, if you wish to use a DeadLetterPublishingRecoverer in a listener error handler.
It might be used in a request/reply scenario where you wish to send a failure result to the sender,
after some number of retries, after capturing the failed record in a dead letter topic.
@Bean
KafkaListenerErrorHandler eh(DeadLetterPublishingRecoverer recoverer) {
return (msg, ex) -> {
if (msg.getHeaders().get(KafkaHeaders.DELIVERY_ATTEMPT, Integer.class) >
9) {
recoverer.accept(msg.getHeaders().get(KafkaHeaders.RAW_DATA,
ConsumerRecord.class), ex);
return "FAILED";
}
throw ex;
};
}
There is also a sub-interface (ConsumerAwareListenerErrorHandler) that has access to the consumer object through the following method:
Object handleError(Message<?> message, ListenerExecutionFailedException exception,
        Consumer<?, ?> consumer);
In either case, you should NOT perform any seeks on the consumer, because the container would be unaware of them.
Starting with version 2.8, the legacy ErrorHandler and BatchErrorHandler interfaces have been superseded by a new CommonErrorHandler. These error handlers can handle errors for both record and batch listeners, allowing a single listener container factory to create containers for both types of listener. CommonErrorHandler implementations are provided to replace most legacy framework error handler implementations, and the legacy error handlers are deprecated. The legacy interfaces are still supported by listener containers and listener container factories; they will be deprecated in a future release.
When transactions are being used, no error handlers are configured, by default, so that the
exception will roll back the transaction. Error handling for transactional containers is handled by
the AfterRollbackProcessor. If you provide a custom error handler when using transactions, it must
throw an exception if you want the transaction rolled back.
This interface has a default method isAckAfterHandle() which is called by the container to
determine whether the offset(s) should be committed if the error handler returns without throwing
an exception; it returns true by default.
Typically, the error handlers provided by the framework will throw an exception when the error is
not "handled" (e.g. after performing a seek operation). By default, such exceptions are logged by the
container at ERROR level. All of the framework error handlers extend KafkaExceptionLogLevelAware
which allows you to control the level at which these exceptions are logged.
/**
* Set the level at which the exception thrown by this handler is logged.
* @param logLevel the level (default ERROR).
*/
public void setLogLevel(KafkaException.Level logLevel) {
...
}
You can specify a global error handler to be used for all listeners in the container factory. The
following example shows how to do so:
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer,
String>>
kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
...
factory.setCommonErrorHandler(myErrorHandler);
...
return factory;
}
By default, if an annotated listener method throws an exception, it is thrown to the container, and
the message is handled according to the container configuration.
The container commits any pending offset commits before calling the error handler.
If you are using Spring Boot, you simply need to add the error handler as a @Bean and Boot will add
it to the auto-configured factory.
Error handlers such as the DefaultErrorHandler use a BackOff to determine how long to wait before
retrying a delivery. Starting with version 2.9, you can configure a custom BackOffHandler. The
default handler simply suspends the thread until the back off time passes (or the container is
stopped). The framework also provides the ContainerPausingBackOffHandler which pauses the
listener container until the back off time passes and then resumes the container. This is useful
when the delays are longer than the max.poll.interval.ms consumer property. Note that the
resolution of the actual back off time will be affected by the pollTimeout container property.
DefaultErrorHandler
The fallback behavior for batch listeners (when an exception other than a BatchListenerFailedException is thrown) is the equivalent of Retrying Complete Batches.
Starting with version 2.9, the DefaultErrorHandler can be configured to provide the
same semantics as seeking the unprocessed record offsets as discussed below, but
without actually seeking. Instead, the records are retained by the listener
container and resubmitted to the listener after the error handler exits (and after
performing a single paused poll(), to keep the consumer alive; if Non-Blocking
Retries or a ContainerPausingBackOffHandler are being used, the pause may extend
over multiple polls). The error handler returns a result to the container that
indicates whether the current failing record can be resubmitted, or if it was
recovered and then it will not be sent to the listener again. To enable this mode, set
the property seekAfterError to false.
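For example, a sketch (recoverer is the configured ConsumerRecordRecoverer; setSeekAfterError is assumed to be the corresponding setter):

DefaultErrorHandler handler = new DefaultErrorHandler(recoverer, new FixedBackOff(2000L, 3L));
handler.setSeekAfterError(false);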
The error handler can recover (skip) a record that keeps failing. By default, after ten failures, the
failed record is logged (at the ERROR level). You can configure the handler with a custom recoverer
(BiConsumer) and a BackOff that controls the delivery attempts and delays between each. Using a
FixedBackOff with FixedBackOff.UNLIMITED_ATTEMPTS causes (effectively) infinite retries. The
following example configures recovery after three tries:
DefaultErrorHandler errorHandler =
new DefaultErrorHandler((record, exception) -> {
// recover after 3 failures, with no back off - e.g. send to a dead-letter
topic
}, new FixedBackOff(0L, 2L));
To configure the listener container with a customized instance of this handler, add it to the
container factory.
For example, with the @KafkaListener container factory, you can add DefaultErrorHandler as follows:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String>
kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory = new
ConcurrentKafkaListenerContainerFactory();
factory.setConsumerFactory(consumerFactory());
factory.getContainerProperties().setAckMode(AckMode.RECORD);
factory.setCommonErrorHandler(new DefaultErrorHandler(new FixedBackOff(1000L,
2L)));
return factory;
}
For a record listener, this will retry a delivery up to 2 times (3 delivery attempts) with a back off of
1 second, instead of the default configuration (FixedBackOff(0L, 9)). Failures are simply logged after
retries are exhausted.
As an example: if the poll returns six records (two from each of partitions 0, 1, and 2) and the listener
throws an exception on the fourth record, the container acknowledges the first three messages by
committing their offsets. The DefaultErrorHandler seeks to offset 1 for partition 1 and offset 0 for
partition 2. The next poll() returns the three unprocessed records.
If the AckMode was BATCH, the container commits the offsets for the first two partitions before calling
the error handler.
For a batch listener, the listener must throw a BatchListenerFailedException indicating which
records in the batch failed.
• If retries are not exhausted, perform seeks so that all the remaining records (including the
failed record) will be redelivered.
• If retries are exhausted, attempt recovery of the failed record (default log only) and perform
seeks so that the remaining records (excluding the failed record) will be redelivered. The
recovered record’s offset is committed.
• If retries are exhausted and recovery fails, seeks are performed as if retries are not exhausted.
Starting with version 2.9, the DefaultErrorHandler can be configured to provide the
same semantics as seeking the unprocessed record offsets as discussed above, but
without actually seeking. Instead, error handler creates a new ConsumerRecords<?,
?> containing just the unprocessed records which will then be submitted to the
listener (after performing a single paused poll(), to keep the consumer alive). To
enable this mode, set the property seekAfterError to false.
The default recoverer logs the failed record after retries are exhausted. You can use a custom
recoverer, or one provided by the framework such as the DeadLetterPublishingRecoverer.
When using a POJO batch listener (e.g. List<Thing>), and you don’t have the full consumer record to
add to the exception, you can just add the index of the record that failed:
@KafkaListener(id = "recovering", topics = "someTopic")
public void listen(List<Thing> things) {
for (int i = 0; i < things.size(); i++) {
try {
process(things.get(i));
}
catch (Exception e) {
throw new BatchListenerFailedException("Failed to process", i);
}
}
}
When the container is configured with AckMode.MANUAL_IMMEDIATE, the error handler can be
configured to commit the offset of recovered records; set the commitRecovered property to true.
The DefaultErrorHandler considers certain exceptions to be fatal, and retries are skipped for such
exceptions; the recoverer is invoked on the first failure. The exceptions that are considered fatal, by
default, are:
• DeserializationException
• MessageConversionException
• ConversionException
• MethodArgumentResolutionException
• NoSuchMethodException
• ClassCastException
You can add more exception types to the not-retryable category, or completely replace the map of
classified exceptions. See the Javadocs for DefaultErrorHandler.addNotRetryableException() and
DefaultErrorHandler.setClassifications() for more information, as well as those for the spring-
retry BinaryExceptionClassifier.
@Bean
public DefaultErrorHandler errorHandler(ConsumerRecordRecoverer recoverer) {
DefaultErrorHandler handler = new DefaultErrorHandler(recoverer);
handler.addNotRetryableExceptions(IllegalArgumentException.class);
return handler;
}
The error handler can be configured with one or more RetryListener s, receiving notifications of
retry and recovery progress. Starting with version 2.8.10, methods for batch listeners were added.
@FunctionalInterface
public interface RetryListener {
If the recoverer fails (throws an exception), the failed record will be included in
the seeks. If the recoverer fails, the BackOff will be reset by default and redeliveries
will again go through the back offs before recovery is attempted again. To skip
retries after a recovery failure, set the error handler’s resetStateOnRecoveryFailure
to false.
You can provide the error handler with a BiFunction<ConsumerRecord<?, ?>, Exception, BackOff> to
determine the BackOff to use, based on the failed record and/or the exception:
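For example, a sketch on a DefaultErrorHandler instance (MyTransientException is an illustrative exception type):

handler.setBackOffFunction((record, ex) -> {
    if (ex instanceof MyTransientException) {
        return new FixedBackOff(10_000L, 5L);
    }
    return null; // fall back to the handler's default BackOff
});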
If the function returns null, the handler’s default BackOff will be used.
Set resetStateOnExceptionChange to true and the retry sequence will be restarted (including the
selection of a new BackOff, if so configured) if the exception type changes between failures. By
default, the exception type is not considered.
Starting with version 2.8, batch listeners can now properly handle conversion errors, when using a
MessageConverter with a ByteArrayDeserializer, a BytesDeserializer or a StringDeserializer, as well
as a DefaultErrorHandler. When a conversion error occurs, the payload is set to null and a
deserialization exception is added to the record headers, similar to the ErrorHandlingDeserializer. A
list of ConversionException s is available in the listener so the listener can throw a
BatchListenerFailedException indicating the first index at which a conversion exception occurred.
Example:
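A sketch of such a listener (Thing and process() are illustrative):

@KafkaListener(id = "test", topics = "topic")
void listen(List<Thing> in,
        @Header(KafkaHeaders.CONVERSION_FAILURES) List<ConversionException> exceptions) {
    for (int i = 0; i < in.size(); i++) {
        Thing thing = in.get(i);
        if (thing == null && exceptions.get(i) != null) {
            throw new BatchListenerFailedException("Conversion error", exceptions.get(i), i);
        }
        process(thing);
    }
}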
This is now the fallback behavior of the DefaultErrorHandler for a batch listener where the listener
throws an exception other than a BatchListenerFailedException.
There is no guarantee that, when a batch is redelivered, the batch has the same number of records
and/or the redelivered records are in the same order. It is impossible, therefore, to easily maintain
retry state for a batch. The FallbackBatchErrorHandler takes the following approach. If a batch
listener throws an exception that is not a BatchListenerFailedException, the retries are performed
from the in-memory batch of records. In order to avoid a rebalance during an extended retry
sequence, the error handler pauses the consumer, polls it before sleeping for the back off, for each
retry, and calls the listener again. If/when retries are exhausted, the ConsumerRecordRecoverer is
called for each record in the batch. If the recoverer throws an exception, or the thread is
interrupted during its sleep, the batch of records will be redelivered on the next poll. Before exiting,
regardless of the outcome, the consumer is resumed.
While waiting for a BackOff interval, the error handler will loop with a short sleep until the desired
delay is reached, while checking to see if the container has been stopped, allowing the sleep to exit
soon after the stop() rather than causing a delay.
After the container stops, an exception that wraps the ListenerExecutionFailedException is thrown.
This is to cause the transaction to roll back (if transactions are enabled).
The CommonLoggingErrorHandler simply logs the exception; with a record listener, the remaining
records from the previous poll are passed to the listener. For a batch listener, all the records in the
batch are logged.
Using Different Common Error Handlers for Record and Batch Listeners
If you wish to use a different error handling strategy for record and batch listeners, the
CommonMixedErrorHandler is provided allowing the configuration of a specific error handler for each
listener type.
The framework provides the following CommonErrorHandler implementations:
• DefaultErrorHandler
• CommonContainerStoppingErrorHandler
• CommonDelegatingErrorHandler
• CommonLoggingErrorHandler
• CommonMixedErrorHandler
To replace any BatchErrorHandler implementation, you should implement handleBatch(). You should also implement handleOtherException() to handle exceptions that occur outside the scope of record processing (e.g. consumer errors).
After-rollback Processor
When using transactions, if the listener throws an exception (and an error handler, if present,
throws an exception), the transaction is rolled back. By default, any unprocessed records (including
the failed record) are re-fetched on the next poll. This is achieved by performing seek operations in
the DefaultAfterRollbackProcessor. With a batch listener, the entire batch of records is reprocessed
(the container has no knowledge of which record in the batch failed). To modify this behavior, you
can configure the listener container with a custom AfterRollbackProcessor. For example, with a
record-based listener, you might want to keep track of the failed record and give up after some
number of attempts, perhaps by publishing it to a dead-letter topic.
Starting with version 2.2, the DefaultAfterRollbackProcessor can now recover (skip) a record that
keeps failing. By default, after ten failures, the failed record is logged (at the ERROR level). You can
configure the processor with a custom recoverer (BiConsumer) and maximum failures. Setting the
maxFailures property to a negative number causes infinite retries. The following example
configures recovery after three tries:
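A sketch of such a configuration, mirroring the DefaultErrorHandler example shown earlier:

AfterRollbackProcessor<String, String> processor =
        new DefaultAfterRollbackProcessor<>((record, exception) -> {
            // recover after 3 failures, with no back off - e.g. send to a dead-letter topic
        }, new FixedBackOff(0L, 2L));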
When you do not use transactions, you can achieve similar functionality by configuring a
DefaultErrorHandler. See Container Error Handlers.
Recovery is not possible with a batch listener, since the framework has no
knowledge about which record in the batch keeps failing. In such cases, the
application listener must handle a record that keeps failing.
Starting with version 2.2.5, the DefaultAfterRollbackProcessor can be invoked in a new transaction
(started after the failed transaction rolls back). Then, if you are using the
DeadLetterPublishingRecoverer to publish a failed record, the processor will send the recovered
record’s offset in the original topic/partition to the transaction. To enable this feature, set the
commitRecovered and kafkaTemplate properties on the DefaultAfterRollbackProcessor.
If the recoverer fails (throws an exception), the failed record will be included in
the seeks. Starting with version 2.5.5, if the recoverer fails, the BackOff will be reset
by default and redeliveries will again go through the back offs before recovery is
attempted again. With earlier versions, the BackOff was not reset and recovery was
re-attempted on the next failure. To revert to the previous behavior, set the
processor’s resetStateOnRecoveryFailure property to false.
Starting with version 2.6, you can now provide the processor with a BiFunction<ConsumerRecord<?,
?>, Exception, BackOff> to determine the BackOff to use, based on the failed record and/or the
exception:
If the function returns null, the processor’s default BackOff will be used.
Starting with version 2.6.3, set resetStateOnExceptionChange to true and the retry sequence will be
restarted (including the selection of a new BackOff, if so configured) if the exception type changes
between failures. By default, the exception type is not considered.
Like the DefaultErrorHandler, the DefaultAfterRollbackProcessor considers certain exceptions to be fatal, and retries are skipped for such exceptions; the recoverer is invoked on the first failure. The exceptions that are considered fatal, by default, are:
• DeserializationException
• MessageConversionException
• ConversionException
• MethodArgumentResolutionException
• NoSuchMethodException
• ClassCastException
You can add more exception types to the not-retryable category, or completely replace the map of
classified exceptions. See the Javadocs for DefaultAfterRollbackProcessor.setClassifications() for
more information, as well as those for the spring-retry BinaryExceptionClassifier.
@Bean
public DefaultAfterRollbackProcessor errorHandler(BiConsumer<ConsumerRecord<?, ?>,
Exception> recoverer) {
DefaultAfterRollbackProcessor processor = new DefaultAfterRollbackProcessor
(recoverer);
processor.addNotRetryableExceptions(IllegalArgumentException.class);
return processor;
}
With current kafka-clients, the container cannot detect whether a
ProducerFencedException is caused by a rebalance or if the producer’s
transactional.id has been revoked due to a timeout or expiry. Because, in most
cases, it is caused by a rebalance, the container does not call the
AfterRollbackProcessor (because it’s not appropriate to seek the partitions because
we no longer are assigned them). If you ensure the timeout is large enough to
process each transaction and periodically perform an "empty" transaction (e.g. via
a ListenerContainerIdleEvent) you can avoid fencing due to timeout and expiry. Or,
you can set the stopContainerWhenFenced container property to true and the
container will stop, avoiding the loss of records. You can consume a
ConsumerStoppedEvent and check the Reason property for FENCED to detect this
condition. Since the event also has a reference to the container, you can restart the
container using this event.
Starting with version 2.7, while waiting for a BackOff interval, the error handler will loop with a
short sleep until the desired delay is reached, while checking to see if the container has been
stopped, allowing the sleep to exit soon after the stop() rather than causing a delay.
Starting with version 2.7, the processor can be configured with one or more RetryListener s,
receiving notifications of retry and recovery progress.
@FunctionalInterface
public interface RetryListener {
Starting with version 2.5, when using an ErrorHandler or AfterRollbackProcessor that implements
DeliveryAttemptAware, it is possible to enable the addition of the KafkaHeaders.DELIVERY_ATTEMPT
header (kafka_deliveryAttempt) to the record. The value of this header is an incrementing integer
starting at 1. When receiving a raw ConsumerRecord<?, ?> the integer is in a byte[4].
int delivery = ByteBuffer.wrap(record.headers()
.lastHeader(KafkaHeaders.DELIVERY_ATTEMPT).value())
.getInt();
To enable population of this header, set the container property deliveryAttemptHeader to true. It is
disabled by default to avoid the (small) overhead of looking up the state for each record and adding
the header.
In some cases, it is useful to be able to know which container a listener is running in.
Starting with version 2.8.4, you can now set the listenerInfo property on the listener container, or
set the info attribute on the @KafkaListener annotation. Then, the container will add this in the
KafkaListener.LISTENER_INFO header to all incoming messages; it can then be used in record
interceptors, filters, etc., or in the listener itself.
The header mappers also convert to String when creating MessageHeaders from the consumer
record and never map this header on an outbound record.
For POJO batch listeners, starting with version 2.8.6, the header is copied into each member of the
batch and is also available as a single String parameter after conversion.
@KafkaListener(id = "list2", topics = "someTopic", containerFactory =
"batchFactory",
info = "info for batch")
public void listen(List<Thing> list,
@Header(KafkaHeaders.RECEIVED_KEY) List<Integer> keys,
@Header(KafkaHeaders.RECEIVED_PARTITION) List<Integer> partitions,
@Header(KafkaHeaders.RECEIVED_TOPIC) List<String> topics,
@Header(KafkaHeaders.OFFSET) List<Long> offsets,
@Header(KafkaHeaders.LISTENER_INFO) String info) {
...
}
If the batch listener has a filter and the filter results in an empty batch, you will
need to add required = false to the @Header parameter because the info is not
available for an empty batch.
If the returned TopicPartition has a negative partition, the partition is not set in the ProducerRecord,
so the partition is selected by Kafka. Starting with version 2.2.4, any
ListenerExecutionFailedException (thrown, for example, when an exception is detected in a
@KafkaListener method) is enhanced with the groupId property. This allows the destination resolver
to use this, in addition to the information in the ConsumerRecord, to select the dead letter topic.
DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template,
        (r, e) -> {
            if (e instanceof FooException) {
                return new TopicPartition(r.topic() + ".Foo.failures", r.partition());
            }
            else {
                return new TopicPartition(r.topic() + ".other.failures", r.partition());
            }
        });
CommonErrorHandler errorHandler = new DefaultErrorHandler(recoverer, new FixedBackOff(0L, 2L));
The record sent to the dead-letter topic is enhanced with headers describing the original record
(topic, partition, offset, and timestamp) and the exception (class name, message, and stack trace).
There are two mechanisms to add more headers:
1. Subclass the recoverer and override createProducerRecord() - call super.createProducerRecord()
and add more headers.
2. Provide a BiFunction to receive the consumer record and exception, returning a Headers object;
headers from there will be copied to the final producer record; also see Managing Dead Letter
Record Headers. Use setHeadersFunction() to set the BiFunction.
The second is simpler to implement but the first has more information available, including the
already assembled standard headers.
Starting with version 2.3, when used in conjunction with an ErrorHandlingDeserializer, the
publisher will restore the record value(), in the dead-letter producer record, to the original value
that failed to be deserialized. Previously, the value() was null and user code had to decode the
DeserializationException from the message headers. In addition, you can provide multiple
KafkaTemplate s to the publisher; this might be needed, for example, if you want to publish the
byte[] from a DeserializationException, as well as values using a different serializer from records
that were deserialized successfully. Here is an example of configuring the publisher with
KafkaTemplate s that use a String and byte[] serializer:
@Bean
public DeadLetterPublishingRecoverer publisher(KafkaTemplate<?, ?> stringTemplate,
        KafkaTemplate<?, ?> bytesTemplate) {

    Map<Class<?>, KafkaTemplate<?, ?>> templates = new LinkedHashMap<>();
    templates.put(String.class, stringTemplate);
    templates.put(byte[].class, bytesTemplate);
    return new DeadLetterPublishingRecoverer(templates);
}
The publisher uses the map keys to locate a template that is suitable for the value() about to be
published. A LinkedHashMap is recommended so that the keys are examined in order.
When publishing null values, when there are multiple templates, the recoverer will look for a
template for the Void class; if none is present, the first template from the values().iterator() will
be used.
Since 2.7 you can use the setFailIfSendResultIsError method so that an exception is thrown when
message publishing fails. You can also set a timeout for the verification of the sender success with
setWaitForSendResultTimeout.
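For example (a minimal sketch; the Duration-based timeout parameter is an assumption, check the setter's Javadocs):

DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
// Throw an exception if the dead-letter publication itself fails ...
recoverer.setFailIfSendResultIsError(true);
// ... and wait up to this long for the send result before deciding
recoverer.setWaitForSendResultTimeout(Duration.ofSeconds(10));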
If the recoverer fails (throws an exception), the failed record will be included in
the seeks. Starting with version 2.5.5, if the recoverer fails, the BackOff will be reset
by default and redeliveries will again go through the back offs before recovery is
attempted again. With earlier versions, the BackOff was not reset and recovery was
re-attempted on the next failure. To revert to the previous behavior, set the error
handler’s resetStateOnRecoveryFailure property to false.
Starting with version 2.6.3, set resetStateOnExceptionChange to true and the retry sequence will be
restarted (including the selection of a new BackOff, if so configured) if the exception type changes
between failures. By default, the exception type is not considered.
Starting with version 2.3, the recoverer can also be used with Kafka Streams - see Recovery from
Deserialization Exceptions for more information.
If incoming records are dependent on each other, but may arrive out of order, it may be useful to
republish a failed record to the tail of the original topic (for some number of times), instead of
sending it directly to the dead letter topic. See this Stack Overflow Question for an example.
@Bean
public CommonErrorHandler eh(KafkaOperations<String, String> template) {
    return new DefaultErrorHandler(new DeadLetterPublishingRecoverer(template,
        (rec, ex) -> {
            org.apache.kafka.common.header.Header retries = rec.headers().lastHeader("retries");
            if (retries == null) {
                retries = new RecordHeader("retries", new byte[] { 1 });
                rec.headers().add(retries);
            }
            else {
                retries.value()[0]++;
            }
            return retries.value()[0] > 5
                    ? new TopicPartition("topic.DLT", rec.partition())
                    : new TopicPartition("topic", rec.partition());
        }), new FixedBackOff(0L, 0L));
}
Starting with version 2.7, the recoverer checks that the partition selected by the destination
resolver actually exists. If the partition is not present, the partition in the ProducerRecord is set to
null, allowing the KafkaProducer to select the partition. You can disable this check by setting the
verifyPartition property to false.
The DeadLetterPublishingRecoverer has two properties, appendOriginalHeaders and
stripPreviousExceptionHeaders, used to manage headers when those headers already exist (such as
when reprocessing a dead letter record that failed, including when using Non-Blocking Retries).
Apache Kafka supports multiple headers with the same name; to obtain the "latest" value, you can
use headers.lastHeader(headerName); to get an iterator over multiple headers, use
headers.headers(headerName).iterator().
When repeatedly republishing a failed record, these headers can grow (and eventually cause
publication to fail due to a RecordTooLargeException); this is especially true for the exception
headers and particularly for the stack trace headers.
The reason for the two properties is that, while you might want to retain only the last exception
information, you might want to retain the history of which topic(s) the record passed through for
each failure.
Starting with version 2.8.4, you can now control which of the standard headers will be added to the
output record. See the HeadersToAdd enum for the generic names of the (currently) 10 standard
headers that are added by default (these are not the actual header names, just an abstraction; the
actual header names are set up by the getHeaderNames() method, which subclasses can override).
To exclude headers, use the excludeHeaders() method; for example, to suppress adding the
exception stack trace in a header, use:
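A minimal sketch follows; the exact enum constant and its nesting (EX_STACKTRACE under HeadersToAdd) should be verified against the HeadersToAdd Javadocs:

DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
// Suppress the (potentially large) exception stack trace header on published dead-letter records
recoverer.excludeHeaders(DeadLetterPublishingRecoverer.HeaderNames.HeadersToAdd.EX_STACKTRACE);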
In addition, you can completely customize the addition of exception headers by adding an
ExceptionHeadersCreator; this also disables all standard exception headers.
Also starting with version 2.8.4, you can now provide multiple headers functions, via the
addHeadersFunction method. This allows additional functions to apply, even if another function has
already been registered, for example, when using Non-Blocking Retries.
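For example, given a DeadLetterPublishingRecoverer named recoverer, the following sketch registers an additional function that contributes one custom header (the header name is illustrative):

recoverer.addHeadersFunction((rec, ex) -> {
    // Headers returned here are copied to the outgoing dead-letter record
    Headers extra = new RecordHeaders();
    extra.add("custom.failure.marker", ex.getClass().getSimpleName().getBytes(StandardCharsets.UTF_8));
    return extra;
});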
Also see Failure Header Management with Non-Blocking Retries.
ExponentialBackOffWithMaxRetries Implementation
@Bean
DefaultErrorHandler handler() {
ExponentialBackOffWithMaxRetries bo = new ExponentialBackOffWithMaxRetries(6);
bo.setInitialInterval(1_000L);
bo.setMultiplier(2.0);
bo.setMaxInterval(10_000L);
return new DefaultErrorHandler(myRecoverer, bo);
}
This will retry after 1, 2, 4, 8, 10, 10 seconds, before calling the recoverer.
Starting with version 2.0, a KafkaJaasLoginModuleInitializer class has been added to assist with
Kerberos configuration. You can add this bean, with the desired configuration, to your application
context. The following example configures such a bean:
@Bean
public KafkaJaasLoginModuleInitializer jaasConfig() throws IOException {
    KafkaJaasLoginModuleInitializer jaasConfig = new KafkaJaasLoginModuleInitializer();
    jaasConfig.setControlFlag("REQUIRED");
    Map<String, String> options = new HashMap<>();
    options.put("useKeyTab", "true");
    options.put("storeKey", "true");
    options.put("keyTab", "/etc/security/keytabs/kafka_client.keytab");
    options.put("principal", "kafka-client-1@EXAMPLE.COM");
    jaasConfig.setOptions(options);
    return jaasConfig;
}
Starting with versions 2.7.12, 2.8.4, you can determine how these records will be rendered in debug
logs, etc.
Version 2.9 changed the mechanism to bootstrap infrastructure beans; see Configuration for the
two mechanisms that are now required to bootstrap the feature.
After these changes, we are intending to remove the experimental designation, probably in version
3.0.
Achieving non-blocking retry / dlt functionality with Kafka usually requires setting up extra topics
and creating and configuring the corresponding listeners. Since 2.7 Spring for Apache Kafka offers
support for that via the @RetryableTopic annotation and RetryTopicConfiguration class to simplify
that bootstrapping.
If message processing fails, the message is forwarded to a retry topic with a back off timestamp.
The retry topic consumer then checks the timestamp and if it’s not due it pauses the consumption
for that topic’s partition. When it is due the partition consumption is resumed, and the message is
consumed again. If the message processing fails again the message will be forwarded to the next
retry topic, and the pattern is repeated until a successful processing occurs, or the attempts are
exhausted, and the message is sent to the Dead Letter Topic (if configured).
To illustrate, if you have a "main-topic" topic, and want to setup non-blocking retry with an
exponential backoff of 1000ms with a multiplier of 2 and 4 max attempts, it will create the main-
topic-retry-1000, main-topic-retry-2000, main-topic-retry-4000 and main-topic-dlt topics and
configure the respective consumers. The framework also takes care of creating the topics and
setting up and configuring the listeners.
By using this strategy you lose Kafka’s ordering guarantees for that topic.
You can set the AckMode you prefer; RECORD is suggested.
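For example, a container factory used for retryable topics might set it like this (the factory below is illustrative):

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> retryTopicListenerContainerFactory(
        ConsumerFactory<String, Object> consumerFactory) {

    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Commit each record individually, as suggested above
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.RECORD);
    return factory;
}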
4.2.2. Back Off Delay Precision
All message processing and backing off is handled by the consumer thread, and, as such, delay
precision is guaranteed on a best-effort basis. If one message’s processing takes longer than the
next message’s back off period for that consumer, the next message’s delay will be higher than
expected. Also, for short delays (about 1s or less), the maintenance work the thread has to do, such
as committing offsets, may delay the message processing execution. The precision can also be
affected if the retry topic’s consumer is handling more than one partition, because we rely on
waking up the consumer from polling and having full pollTimeouts to make timing adjustments.
That being said, for consumers handling a single partition the message’s processing should occur
approximately at its exact due time for most situations.
It is guaranteed that a message will never be processed before its due time.
4.2.3. Configuration
Starting with version 2.9, for default configuration, the @EnableKafkaRetryTopic annotation should
be used in a @Configuration annotated class. This enables the feature to bootstrap properly and
gives access to injecting some of the feature’s components to be looked up at runtime.
It is not necessary to also add @EnableKafka, if you add this annotation, because
@EnableKafkaRetryTopic is meta-annotated with @EnableKafka.
Also, starting with that version, for more advanced configuration of the feature’s components and
global features, the RetryTopicConfigurationSupport class should be extended in a @Configuration
class, and the appropriate methods overridden. For more details refer to Configuring Global
Settings and Features.
Only one of the above techniques can be used, and only one @Configuration class
can extend RetryTopicConfigurationSupport.
To configure the retry topic and dlt for a @KafkaListener annotated method, you just have to add the
@RetryableTopic annotation to it and Spring for Apache Kafka will bootstrap all the necessary topics
and consumers with the default configurations.
@RetryableTopic(kafkaTemplate = "myRetryableTopicKafkaTemplate")
@KafkaListener(topics = "my-annotated-topic", groupId = "myGroupId")
public void processMessage(MyPojo message) {
// ... message processing
}
You can specify a method in the same class to process the dlt messages by annotating it with the
@DltHandler annotation. If no DltHandler method is provided a default consumer is created which
only logs the consumption.
@DltHandler
public void processMessage(MyPojo message) {
// ... message processing, persistence, etc
}
You can also configure the non-blocking retry support by creating RetryTopicConfiguration beans in
a @Configuration annotated class.
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<String, Object>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.create(template);
}
This will create retry topics and a dlt, as well as the corresponding consumers, for all topics in
methods annotated with '@KafkaListener' using the default configurations. The KafkaTemplate
instance is required for message forwarding.
To achieve more fine-grained control over how to handle non-blocking retries for each topic, more
than one RetryTopicConfiguration bean can be provided.
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<String, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.fixedBackoff(3000)
.maxAttempts(5)
.includeTopics("my-topic", "my-other-topic")
.create(template);
}
@Bean
public RetryTopicConfiguration myOtherRetryTopic(KafkaTemplate<String,
MyOtherPojo> template) {
return RetryTopicConfigurationBuilder
.newInstance()
.exponentialBackoff(1000, 2, 5000)
.maxAttempts(4)
.excludeTopics("my-topic", "my-other-topic")
.retryOn(MyException.class)
.create(template);
}
The retry topics' and dlt’s consumers will be assigned to a consumer group with a
group id that is the combination of the one with you provide in the groupId
parameter of the @KafkaListener annotation with the topic’s suffix. If you don’t
provide any they’ll all belong to the same group, and rebalance on a retry topic
will cause an unnecessary rebalance on the main topic.
When this feature is used together with an ErrorHandlingDeserializer, the template used to forward
records should be able to publish both the raw byte[] recovered from a DeserializationException and
regular values; this can be done with a DelegatingByTypeSerializer, as the following example shows:
@Bean
public ProducerFactory<String, Object> producerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfiguration(), new StringSerializer(),
            new DelegatingByTypeSerializer(Map.of(byte[].class, new ByteArraySerializer(),
                    MyNormalObject.class, new JsonSerializer<Object>())));
}

@Bean
public KafkaTemplate<String, Object> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}
Multiple @KafkaListener annotations can be used for the same topic with or
without manual partition assignment along with non-blocking retries, but only
one configuration will be used for a given topic. It’s best to use a single
RetryTopicConfiguration bean for configuration of such topics; if multiple
@RetryableTopic annotations are being used for the same topic, all of them should
have the same values, otherwise one of them will be applied to all of that topic’s
listeners and the other annotations' values will be ignored.
Since 2.9, the previous bean overriding approach for configuring components has been removed
(without deprecation, due to the aforementioned experimental nature of the API). This does not
change the RetryTopicConfiguration beans approach - only infrastructure components'
configurations. Now the RetryTopicConfigurationSupport class should be extended in a (single)
@Configuration class, and the proper methods overridden. An example follows:
@EnableKafka
@Configuration
public class MyRetryTopicConfiguration extends RetryTopicConfigurationSupport {

    @Override
    protected void configureBlockingRetries(BlockingRetriesConfigurer blockingRetries) {
        blockingRetries
                .retryOn(MyBlockingRetriesException.class, MyOtherBlockingRetriesException.class)
                .backOff(new FixedBackOff(3000, 3));
    }

    @Override
    protected void manageNonBlockingFatalExceptions(List<Class<? extends Throwable>> nonBlockingFatalExceptions) {
        nonBlockingFatalExceptions.add(MyNonBlockingException.class);
    }

    @Override
    protected void configureCustomizers(CustomizersConfigurer customizersConfigurer) {
        // Use the new 2.9 mechanism to avoid re-fetching the same records after a pause
        customizersConfigurer.customizeErrorHandler(eh -> eh.setSeekAfterError(false));
    }

}
When autoCreateTopics is true, the main and retry topics will be created with the specified number
of partitions and replication factor. To override these values for a particular topic (e.g. the main
topic or DLT), simply add a NewTopic @Bean with the required properties; that will override the auto
creation properties.
By default, records are published to the retry topic(s) using the original partition of
the received record. If the retry topics have fewer partitions than the main topic,
you should configure the framework appropriately; an example follows.
@EnableKafka
@Configuration
public class Config extends RetryTopicConfigurationSupport {
@Override
protected Consumer<DeadLetterPublishingRecovererFactory>
configureDeadLetterPublishingContainerFactory() {
return dlprf -> dlprf.setPartitionResolver((cr, nextTopic) -> null);
}
...
The parameters to the function are the consumer record and the name of the next topic. You can
return a specific partition number, or null to indicate that the KafkaProducer should determine the
partition.
4.2.4. Features
Most of the features are available both for the @RetryableTopic annotation and the
RetryTopicConfiguration beans.
BackOff Configuration
The BackOff configuration relies on the BackOffPolicy interface from the Spring Retry project.
It includes, among others:
• No Back Off
@RetryableTopic(attempts = 5,
backoff = @Backoff(delay = 1000, multiplier = 2, maxDelay = 5000))
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
// ... message processing
}
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<String, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.fixedBackoff(3000)
.maxAttempts(4)
.build();
}
You can also provide a custom implementation of Spring Retry’s SleepingBackOffPolicy interface:
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<String, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.customBackOff(new MyCustomBackOffPolicy())
.maxAttempts(5)
.build();
}
The first attempt counts against maxAttempts, so if you provide a maxAttempts value
of 4 there’ll be the original attempt plus 3 retries.
If you’re using fixed delay policies such as FixedBackOffPolicy or NoBackOffPolicy you can use a
167
single topic to accomplish the non-blocking retries. This topic will be suffixed with the provided or
default suffix, and will not have either the index or the delay values appended.
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<String, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.fixedBackoff(3000)
.maxAttempts(5)
.useSingleTopicForFixedDelays()
.build();
}
The default behavior is creating separate retry topics for each attempt, appended
with their index value: retry-0, retry-1, …
Global timeout
You can set the global timeout for the retrying process. If that time is reached, the next time the
consumer throws an exception the message goes straight to the DLT, or just ends the processing if
no DLT is available.
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<String, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.fixedBackoff(2000)
.timeoutAfter(5000)
.build();
}
The default is having no timeout set, which can also be achieved by providing -1 as
the timeout value.
Exception Classifier
You can specify which exceptions you want to retry on and which not to. You can also set it to
traverse the causes to lookup nested exceptions.
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<String, MyOtherPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.notRetryOn(MyDontRetryException.class)
.create(template);
}
The default behavior is retrying on all exceptions and not traversing causes.
Since 2.8.3 there’s a global list of fatal exceptions which will cause the record to be sent to the DLT
without any retries. See DefaultErrorHandler for the default list of fatal exceptions. You can add or
remove exceptions to and from this list by overriding the configureNonBlockingRetries method in a
@Configuration class that extends RetryTopicConfigurationSupport. See Configuring Global Settings
and Features for more information.
@Override
protected void manageNonBlockingFatalExceptions(List<Class<? extends Throwable>> nonBlockingFatalExceptions) {
    nonBlockingFatalExceptions.add(MyNonBlockingException.class);
}
You can decide which topics will and will not be handled by a RetryTopicConfiguration bean via the
.includeTopic(String topic), .includeTopics(Collection<String> topics), .excludeTopic(String topic), and
.excludeTopics(Collection<String> topics) methods.
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<Integer, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.includeTopics(List.of("my-included-topic", "my-other-included-topic"
))
.create(template);
}
@Bean
public RetryTopicConfiguration myOtherRetryTopic(KafkaTemplate<Integer, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.excludeTopic("my-excluded-topic")
.create(template);
}
Topics AutoCreation
Unless otherwise specified the framework will auto create the required topics using NewTopic beans
that are consumed by the KafkaAdmin bean. You can specify the number of partitions and the
replication factor with which the topics will be created, and you can turn this feature off.
Note that if you’re not using Spring Boot you’ll have to provide a KafkaAdmin bean
in order to use this feature.
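A minimal KafkaAdmin bean might look like the following (the bootstrap server address is an assumption for this example):

@Bean
public KafkaAdmin kafkaAdmin() {
    Map<String, Object> configs = new HashMap<>();
    // Point the admin client at your broker(s)
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    return new KafkaAdmin(configs);
}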
@RetryableTopic(numPartitions = 2, replicationFactor = 3)
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
// ... message processing
}
@RetryableTopic(autoCreateTopics = false)
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
// ... message processing
}
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<Integer, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.autoCreateTopicsWith(2, 3)
.create(template);
}
@Bean
public RetryTopicConfiguration myOtherRetryTopic(KafkaTemplate<Integer, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.doNotAutoCreateRetryTopics()
.create(template);
}
By default the topics are autocreated with one partition and a replication factor of
one.
When considering how to manage failure headers (original headers and exception headers), the
framework delegates to the DeadLetterPublishingRecoverer to decide whether to append or replace the
headers.
This means that only the first "original" and last exception headers are retained with the default
configuration. This is to avoid creation of excessively large messages (due to the stack trace header,
for example) when many retry steps are involved.
See Managing Dead Letter Record Headers for more information.
To reconfigure the framework to use different settings for these properties, configure a
DeadLetterPublishingRecoverer customizer by overriding the configureCustomizers method in a
@Configuration class that extends RetryTopicConfigurationSupport. See Configuring Global Settings
and Features for more details.
@Override
protected void configureCustomizers(CustomizersConfigurer customizersConfigurer) {
customizersConfigurer.customizeDeadLetterPublishingRecoverer(dlpr -> {
dlpr.setAppendOriginalHeaders(true);
dlpr.setStripPreviousExceptionHeaders(false);
});
}
Starting with version 2.8.4, if you wish to add custom headers (in addition to the retry information
headers added by the factory), you can add a headersFunction to the factory:
factory.setHeadersFunction((rec, ex) -> { ... })
Starting in 2.8.4 you can configure the framework to use both blocking and non-blocking retries in
conjunction. For example, you can have a set of exceptions that would likely trigger errors on the
next records as well, such as DatabaseAccessException, so you can retry the same record a few times
before sending it to the retry topic, or straight to the DLT.
@Override
protected void configureBlockingRetries(BlockingRetriesConfigurer blockingRetries)
{
blockingRetries
.retryOn(MyBlockingRetryException.class,
MyOtherBlockingRetryException.class)
.backOff(new FixedBackOff(3000, 5));
}
In combination with the global retryable topic’s fatal exceptions classification, you
can configure the framework for any behavior you’d like, such as having some
exceptions trigger both blocking and non-blocking retries, trigger only one kind or
the other, or go straight to the DLT without retries of any kind.
Here’s an example with both configurations working together:
@Override
protected void configureBlockingRetries(BlockingRetriesConfigurer blockingRetries)
{
blockingRetries
.retryOn(ShouldRetryOnlyBlockingException.class,
ShouldRetryViaBothException.class)
.backOff(new FixedBackOff(50, 3));
}
@Override
protected void manageNonBlockingFatalExceptions(List<Class<? extends Throwable>>
nonBlockingFatalExceptions) {
nonBlockingFatalExceptions.add(ShouldSkipBothRetriesException.class);
}
In this example:
• ShouldRetryOnlyBlockingException.class would retry only via blocking and, if all retries fail,
would go straight to the DLT.
• ShouldRetryViaBothException.class would retry via blocking, and if all blocking retries fail
would be forwarded to the next retry topic for another set of attempts.
Note that the blocking retries behavior is an allowlist: you add the exceptions you do
want to retry that way. The non-blocking retries classification, by contrast, is geared
towards FATAL exceptions and is therefore a denylist: you add the exceptions you
don't want to retry non-blockingly, but to send directly to the DLT instead.
Retry topics and the DLT are named by suffixing the main topic with a provided or default value,
appended by either the delay or index for that topic. For example, with the default suffixes,
"my-topic" yields retry topics such as my-topic-retry-0, my-topic-retry-1, … (index suffixing) or
my-topic-retry-1000, my-topic-retry-2000, … (delay suffixing), plus my-topic-dlt.
Retry Topics and Dlt Suffixes
You can specify the suffixes that will be used by the retry and dlt topics.
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<String, MyOtherPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.retryTopicSuffix("-my-retry-suffix")
.dltTopicSuffix("-my-dlt-suffix")
.create(template);
}
The default suffixes are "-retry" and "-dlt", for retry topics and dlt respectively.
You can either append the topic’s index or delay values after the suffix.
@RetryableTopic(topicSuffixingStrategy = TopicSuffixingStrategy
.SUFFIX_WITH_INDEX_VALUE)
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
// ... message processing
}
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<String, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.suffixTopicsWithIndexValues()
.create(template);
}
The default behavior is to suffix with the delay values, except for fixed delay
configurations with multiple topics, in which case the topics are suffixed with the
topic’s index.
More complex naming strategies can be accomplished by registering a bean that implements
RetryTopicNamesProviderFactory. The default implementation is
SuffixingRetryTopicNamesProviderFactory and a different implementation can be registered in the
following way:
@Override
protected RetryTopicComponentFactory createComponentFactory() {
return new RetryTopicComponentFactory() {
@Override
public RetryTopicNamesProviderFactory retryTopicNamesProviderFactory() {
return new CustomRetryTopicNamesProviderFactory();
}
};
}
As an example, the following implementation, in addition to the standard suffix, adds a prefix to
retry/DLT topic names:
public class CustomRetryTopicNamesProviderFactory implements RetryTopicNamesProviderFactory {

    @Override
    public RetryTopicNamesProvider createRetryTopicNamesProvider(DestinationTopic.Properties properties) {

        if (properties.isMainEndpoint()) {
            return new SuffixingRetryTopicNamesProvider(properties);
        }
        else {
            return new SuffixingRetryTopicNamesProvider(properties) {

                @Override
                public String getTopicName(String topic) {
                    return "my-prefix-" + super.getTopicName(topic);
                }

            };
        }
    }

}
The framework provides a few strategies for working with DLTs. You can provide a method for DLT
processing, use the default logging method, or have no DLT at all. Also you can choose what
happens if DLT processing fails.
You can specify the method used to process the Dlt for the topic, as well as the behavior if that
processing fails.
To do that you can use the @DltHandler annotation in a method of the class with the @RetryableTopic
annotation(s). Note that the same method will be used for all the @RetryableTopic annotated
methods within that class.
@RetryableTopic
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
// ... message processing
}
@DltHandler
public void processMessage(MyPojo message) {
// ... message processing, persistence, etc
}
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<Integer, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.dltProcessor("myCustomDltProcessor", "processDltMessage")
.create(template);
}
@Component
public class MyCustomDltProcessor {

    public void processDltMessage(MyPojo message) {
        // ... message processing, persistence, etc
    }

}
Starting with version 2.8, if you don’t want to consume from the DLT in this application at all,
including by the default handler (or you wish to defer consumption), you can control whether or
not the DLT container starts, independent of the container factory’s autoStartup property.
When using the @RetryableTopic annotation, set the autoStartDltHandler property to false; when
using the configuration builder, use .autoStartDltHandler(false) .
You can later start the DLT handler via the KafkaListenerEndpointRegistry.
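For example (a sketch; the listener id "my-listener-dlt" is illustrative - look up the actual id of the DLT container via the registry's getListenerContainerIds() method):

@Autowired
private KafkaListenerEndpointRegistry registry;

public void startDltHandler() {
    // Start the deferred DLT container on demand
    MessageListenerContainer dltContainer = this.registry.getListenerContainer("my-listener-dlt");
    if (dltContainer != null && !dltContainer.isRunning()) {
        dltContainer.start();
    }
}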
Should the DLT processing fail, there are two possible behaviors available: ALWAYS_RETRY_ON_ERROR
and FAIL_ON_ERROR.
In the former the record is forwarded back to the DLT topic so it doesn’t block other DLT records'
processing. In the latter the consumer ends the execution without forwarding the message.
@RetryableTopic(dltProcessingFailureStrategy =
DltStrategy.FAIL_ON_ERROR)
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
// ... message processing
}
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<Integer, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.dltProcessor(MyCustomDltProcessor.class, "processDltMessage")
.doNotRetryOnDltFailure()
.create(template);
}
Starting with version 2.8.3, ALWAYS_RETRY_ON_ERROR will NOT route a record back to
the DLT if the record causes a fatal exception to be thrown, such as a
DeserializationException because, generally, such exceptions will always be
thrown.
• DeserializationException
• MessageConversionException
• ConversionException
• MethodArgumentResolutionException
• NoSuchMethodException
• ClassCastException
You can add exceptions to and remove exceptions from this list using methods on the
DestinationTopicResolver bean.
Configuring No DLT
The framework also provides the possibility of not configuring a DLT for the topic. In this case, after
retries are exhausted, the processing simply ends.
@RetryableTopic(dltProcessingFailureStrategy =
DltStrategy.NO_DLT)
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
// ... message processing
}
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<Integer, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.doNotConfigureDlt()
.create(template);
}
By default the RetryTopic configuration will use the provided factory from the @KafkaListener
annotation, but you can specify a different one to be used to create the retry topic and dlt listener
containers.
For the @RetryableTopic annotation you can provide the factory’s bean name, and using the
RetryTopicConfiguration bean you can either provide the bean name or the instance itself.
@RetryableTopic(listenerContainerFactory = "my-retry-topic-factory")
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
// ... message processing
}
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<Integer, MyPojo>
template,
ConcurrentKafkaListenerContainerFactory<Integer, MyPojo> factory) {
return RetryTopicConfigurationBuilder
.newInstance()
.listenerFactory(factory)
.create(template);
}
@Bean
public RetryTopicConfiguration myOtherRetryTopic(KafkaTemplate<Integer, MyPojo>
template) {
return RetryTopicConfigurationBuilder
.newInstance()
.listenerFactory("my-retry-topic-factory")
.create(template);
}
Since 2.8.3 you can use the same factory for retryable and non-retryable topics.
If you need to revert the factory configuration behavior to prior 2.8.3, you can override the
configureRetryTopicConfigurer method of a @Configuration class that extends
RetryTopicConfigurationSupport as explained in Configuring Global Settings and Features and set
useLegacyFactoryConfigurer to true, such as:
@Override
protected Consumer<RetryTopicConfigurer> configureRetryTopicConfigurer() {
return rtc -> rtc.useLegacyFactoryConfigurer(true);
}
Since 2.9, you can access information regarding the topic chain at runtime by injecting the provided
DestinationTopicContainer bean. This interface provides methods to look up the next topic in the
chain or the DLT for a topic if configured, as well as useful properties such as the topic's name,
delay, and type.
As a real-world example, you can use such information so that a console application can resend a
record from the DLT to the first retry topic in the chain after the cause of the failed processing
(for example, a bug or inconsistent state) has been resolved.
When a message in the retry topic is not due for consumption, a KafkaBackOffException is thrown.
Such exceptions are logged by default at DEBUG level, but you can change this behavior by setting an
error handler customizer in the ListenerContainerFactoryConfigurer in a @Configuration class.
For example, to change the logging level to WARN you might add:
@Override
protected void configureCustomizers(CustomizersConfigurer customizersConfigurer) {
    customizersConfigurer.customizeErrorHandler(defaultErrorHandler ->
            defaultErrorHandler.setLogLevel(KafkaException.Level.WARN));
}
4.3.1. Basics
The reference Apache Kafka Streams documentation suggests the following way of using the API:
// Use the builders to define the actual processing topology, e.g. to specify
// from which input topics to read, which stream operations (filter, map, etc.)
// should be called, and so on.

// Use the configuration to tell your application where the Kafka cluster is,
// which serializers/deserializers to use by default, to specify security settings,
// and so on.
StreamsConfig config = ...;
To simplify using Kafka Streams from the Spring application context perspective and to use the
lifecycle management through a container, Spring for Apache Kafka introduces
StreamsBuilderFactoryBean. This is an AbstractFactoryBean implementation that exposes a
StreamsBuilder singleton instance as a bean. The following example creates such a bean:
@Bean
public FactoryBean<StreamsBuilder> myKStreamBuilder(KafkaStreamsConfiguration
streamsConfig) {
return new StreamsBuilderFactoryBean(streamsConfig);
}
@Bean
public KStream<?, ?> kStream(StreamsBuilder kStreamBuilder) {
KStream<Integer, String> stream = kStreamBuilder.stream(STREAMING_TOPIC1);
// Fluent KStream API
return stream;
}
If you would like to control the lifecycle manually (for example, stopping and starting by some
condition), you can reference the StreamsBuilderFactoryBean bean directly by using the factory bean
(&) prefix. Since StreamsBuilderFactoryBean uses its internal KafkaStreams instance, it is safe to stop
and restart it. A new KafkaStreams is created on each start(). You might also consider using
different StreamsBuilderFactoryBean instances if you would like to control the lifecycles of KStream
instances separately.
@Bean
public StreamsBuilderFactoryBean myKStreamBuilder(KafkaStreamsConfiguration
streamsConfig) {
return new StreamsBuilderFactoryBean(streamsConfig);
}
...
@Autowired
private StreamsBuilderFactoryBean myKStreamBuilderFactoryBean;
Alternatively, you can add @Qualifier for injection by name if you use interface bean definition. The
following example shows how to do so:
@Bean
public FactoryBean<StreamsBuilder> myKStreamBuilder(KafkaStreamsConfiguration
streamsConfig) {
return new StreamsBuilderFactoryBean(streamsConfig);
}
...
@Autowired
@Qualifier("&myKStreamBuilder")
private StreamsBuilderFactoryBean myKStreamBuilderFactoryBean;
Starting with version 2.4.1, the factory bean has a new property infrastructureCustomizer with type
KafkaStreamsInfrastructureCustomizer; this allows customization of the StreamsBuilder (e.g. to add a
state store) and/or the Topology before the stream is created.
Default no-op implementations are provided to avoid having to implement both methods if one is
not required.
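The following sketch registers a customizer via a configurer bean, assuming the customizer's configureBuilder callback; the store name and serdes are illustrative:

@Bean
public StreamsBuilderFactoryBeanConfigurer infrastructureConfigurer() {
    return factoryBean -> factoryBean.setInfrastructureCustomizer(new KafkaStreamsInfrastructureCustomizer() {

        @Override
        public void configureBuilder(StreamsBuilder builder) {
            // Register a state store before the topology is built
            builder.addStateStore(Stores.keyValueStoreBuilder(
                    Stores.persistentKeyValueStore("my-store"), Serdes.String(), Serdes.String()));
        }

    });
}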
4.3.3. KafkaStreams Micrometer Support
You can add a KafkaStreamsMicrometerListener to the factory bean to register Micrometer meters
for the KafkaStreams object it manages:
streamsBuilderFactoryBean.addListener(new KafkaStreamsMicrometerListener(meterRegistry,
        Collections.singletonList(new ImmutableTag("customTag", "customTagValue"))));
For serializing and deserializing data when reading or writing to topics or state stores in JSON
format, Spring for Apache Kafka provides a JsonSerde implementation that uses JSON, delegating to
the JsonSerializer and JsonDeserializer described in Serialization, Deserialization, and Message
Conversion. The JsonSerde implementation provides the same configuration options through its
constructor (target type or ObjectMapper). In the following example, we use the JsonSerde to serialize
and deserialize the Cat payload of a Kafka stream (the JsonSerde can be used in a similar fashion
wherever an instance is required):
stream.through(new JsonSerde<>(MyKeyType.class)
.forKeys()
.noTypeInfo(),
new JsonSerde<>(MyValueType.class)
.noTypeInfo(),
"myTypes");
The KafkaStreamBrancher class introduces a more convenient way to build conditional branches on
top of KStream. Consider the following example, which does not use KafkaStreamBrancher:
KStream<String, String>[] branches = builder.stream("source").branch(
(key, value) -> value.contains("A"),
(key, value) -> value.contains("B"),
(key, value) -> true
);
branches[0].to("A");
branches[1].to("B");
branches[2].to("C");
4.3.6. Configuration
To avoid boilerplate code for most cases, especially when you develop microservices, Spring for
Apache Kafka provides the @EnableKafkaStreams annotation, which you should place on a
@Configuration class. All you need is to declare a KafkaStreamsConfiguration bean named
defaultKafkaStreamsConfig. A StreamsBuilderFactoryBean bean, named defaultKafkaStreamsBuilder, is
automatically declared in the application context. You can declare and use any additional
StreamsBuilderFactoryBean beans as well. You can perform additional customization of that bean, by
providing a bean that implements StreamsBuilderFactoryBeanConfigurer. If there are multiple such
beans, they will be applied according to their Ordered.order property.
By default, when the factory bean is stopped, the KafkaStreams.cleanUp() method is called. Starting
with version 2.1.2, the factory bean has additional constructors, taking a CleanupConfig object that
has properties to let you control whether the cleanUp() method is called during start() or stop() or
neither. Starting with version 2.7, the default is to never clean up local state.
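For example, the following sketch assumes a CleanupConfig(cleanupOnStart, cleanupOnStop) constructor; check the Javadocs for the exact signature:

@Bean
public StreamsBuilderFactoryBean myKStreamBuilder(KafkaStreamsConfiguration streamsConfig) {
    // No cleanup on start, clean up local state on stop
    return new StreamsBuilderFactoryBean(streamsConfig, new CleanupConfig(false, true));
}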
4.3.7. Header Enricher
Version 2.3 added the HeaderEnricher implementation of Transformer. This can be used to add
headers within the stream processing; the header values are SpEL expressions; the root object of
the expression evaluation has three properties: context (the ProcessorContext, giving access to the
current record metadata), key, and value (the current record's key and value).
The expressions must return a byte[] or a String (which will be converted to byte[] using UTF-8).
The transformer does not change the key or value; it simply adds headers.
If your stream is multi-threaded, you need a new instance for each record.
Here is a simple example, adding one literal header and one variable:
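A minimal sketch, assuming the HeaderEnricher constructor takes a map of header names to SpEL Expression objects (header names, topics, and expressions are illustrative):

// One literal header and one header computed from the record value
Map<String, Expression> headersToAdd = new HashMap<>();
headersToAdd.put("source", new LiteralExpression("my-stream-app"));
headersToAdd.put("valueLength",
        new SpelExpressionParser().parseExpression("T(String).valueOf(value.length())"));

KStream<String, String> enriched = builder.<String, String>stream("input")
        // supply a new enricher instance per transformer; it only adds headers
        .transform(() -> new HeaderEnricher<String, String>(headersToAdd));
enriched.to("output");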
4.3.8. MessagingTransformer
Version 2.3 added the MessagingTransformer. This allows a Kafka Streams topology to interact with a
Spring Messaging component, such as a Spring Integration flow. The transformer requires an
implementation of MessagingFunction.
@FunctionalInterface
public interface MessagingFunction {
Version 2.3 introduced the RecoveringDeserializationExceptionHandler which can take some action
when a deserialization exception occurs. Refer to the Kafka documentation about
DeserializationExceptionHandler, of which the RecoveringDeserializationExceptionHandler is an
implementation. The RecoveringDeserializationExceptionHandler is configured with a
ConsumerRecordRecoverer implementation. The framework provides the
DeadLetterPublishingRecoverer which sends the failed record to a dead-letter topic. See Publishing
Dead-letter Records for more information about this recoverer.
To configure the recoverer, add the following properties to your streams configuration:
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
public KafkaStreamsConfiguration kStreamsConfigs() {
    Map<String, Object> props = new HashMap<>();
    ...
    props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
            RecoveringDeserializationExceptionHandler.class);
    props.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER, recoverer());
    ...
    return new KafkaStreamsConfiguration(props);
}
@Bean
public DeadLetterPublishingRecoverer recoverer() {
return new DeadLetterPublishingRecoverer(kafkaTemplate(),
(record, ex) -> new TopicPartition("recovererDLQ", -1));
}
4.3.10. Kafka Streams Example
The following example combines all the topics we have covered in this chapter:
@Configuration
@EnableKafka
@EnableKafkaStreams
public static class KafkaStreamsConfig {
@Bean(name = KafkaStreamsDefaultConfiguration.
DEFAULT_STREAMS_CONFIG_BEAN_NAME)
public KafkaStreamsConfiguration kStreamsConfigs() {
Map<String, Object> props = new HashMap<>();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "testStreams");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.Integer()
.getClass().getName());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String()
.getClass().getName());
props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
WallclockTimestampExtractor.class.getName());
return new KafkaStreamsConfiguration(props);
}
@Bean
public StreamsBuilderFactoryBeanConfigurer configurer() {
return fb -> fb.setStateListener((newState, oldState) -> {
System.out.println("State transition from " + oldState + " to " +
newState);
});
}
@Bean
public KStream<Integer, String> kStream(StreamsBuilder kStreamBuilder) {
KStream<Integer, String> stream = kStreamBuilder.stream("streamingTopic1"
);
stream
.mapValues((ValueMapper<String, String>) String::toUpperCase)
.groupByKey()
.windowedBy(TimeWindows.of(Duration.ofMillis(1000)))
.reduce((String value1, String value2) -> value1 + value2,
Named.as("windowStore"))
.toStream()
.map((windowedId, value) -> new KeyValue<>(windowedId.key(),
value))
.filter((i, s) -> s.length() > 40)
.to("streamingTopic2");
stream.print(Printed.toSysOut());
return stream;
    }

}
4.4. Testing Applications
The spring-kafka-test jar contains some useful utilities to assist with testing your applications.
4.4.1. KafkaTestUtils
4.4.2. JUnit
/**
* Set up test properties for an {@code <Integer, String>} consumer.
* @param group the group id.
* @param autoCommit the auto commit.
* @param embeddedKafka a {@link EmbeddedKafkaBroker} instance.
* @return the properties.
*/
public static Map<String, Object> consumerProps(String group, String autoCommit,
EmbeddedKafkaBroker embeddedKafka) { ... }
/**
* Set up test properties for an {@code <Integer, String>} producer.
* @param embeddedKafka a {@link EmbeddedKafkaBroker} instance.
* @return the properties.
*/
public static Map<String, Object> producerProps(EmbeddedKafkaBroker embeddedKafka)
{ ... }
A JUnit 4 @Rule wrapper for the EmbeddedKafkaBroker is provided to create an embedded Kafka and
an embedded Zookeeper server. (See @EmbeddedKafka Annotation for information about using
@EmbeddedKafka with JUnit 5). The following listing shows the signatures of those methods:
/**
* Create embedded Kafka brokers.
* @param count the number of brokers.
* @param controlledShutdown passed into TestUtils.createBrokerConfig.
* @param topics the topics to create (2 partitions per).
*/
public EmbeddedKafkaRule(int count, boolean controlledShutdown, String... topics)
{ ... }
/**
*
* Create embedded Kafka brokers.
* @param count the number of brokers.
* @param controlledShutdown passed into TestUtils.createBrokerConfig.
* @param partitions partitions per topic.
* @param topics the topics to create.
*/
public EmbeddedKafkaRule(int count, boolean controlledShutdown, int partitions,
String... topics) { ... }
The EmbeddedKafkaBroker class has a utility method that lets you consume from all the topics it created.
The following example shows how to use it:
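A sketch of typical usage, assuming the consumeFromAllEmbeddedTopics utility method and consumer properties similar to the earlier consumerProps example:

Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("testGroup", "true", embeddedKafka);
DefaultKafkaConsumerFactory<Integer, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
Consumer<Integer, String> consumer = cf.createConsumer();
// Subscribe the consumer to every topic the embedded broker created
embeddedKafka.consumeFromAllEmbeddedTopics(consumer);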
The KafkaTestUtils has some utility methods to fetch results from the consumer. The following
listing shows those method signatures:
/**
* Poll the consumer, expecting a single record for the specified topic.
* @param consumer the consumer.
* @param topic the topic.
* @return the record.
* @throws org.junit.ComparisonFailure if exactly one record is not received.
*/
public static <K, V> ConsumerRecord<K, V> getSingleRecord(Consumer<K, V> consumer,
String topic) { ... }
/**
* Poll the consumer for records.
* @param consumer the consumer.
* @return the records.
*/
public static <K, V> ConsumerRecords<K, V> getRecords(Consumer<K, V> consumer) {
... }
...
template.sendDefault(0, 2, "bar");
ConsumerRecord<Integer, String> received = KafkaTestUtils.getSingleRecord(
consumer, "topic");
...
When the embedded Kafka and embedded Zookeeper server are started by the EmbeddedKafkaBroker,
a system property named spring.embedded.kafka.brokers is set to the address of the Kafka brokers
and a system property named spring.embedded.zookeeper.connect is set to the address of Zookeeper.
Convenient constants (EmbeddedKafkaBroker.SPRING_EMBEDDED_KAFKA_BROKERS and
EmbeddedKafkaBroker.SPRING_EMBEDDED_ZOOKEEPER_CONNECT) are provided for this property.
The following example configuration creates topics called cat and hat with five partitions, a topic
called thing1 with 10 partitions, and a topic called thing2 with 15 partitions:
public class MyTests {

    @ClassRule
    public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, false, 5, "cat", "hat");

    @Test
    public void test() {
        embeddedKafka.getEmbeddedKafka()
                .addTopics(new NewTopic("thing1", 10, (short) 1), new NewTopic("thing2", 15, (short) 1));
        ...
    }

}
By default, addTopics will throw an exception when problems arise (such as adding a topic that
already exists). Version 2.6 added a new version of that method that returns a Map<String,
Exception>; the key is the topic name and the value is null for success, or an Exception for a failure.
There is no built-in support for doing so, but you can use the same broker for multiple test classes
with something similar to the following:
public final class EmbeddedKafkaHolder {
private EmbeddedKafkaHolder() {
super();
}
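The holder shown above is abbreviated; a fuller sketch of such a holder might look like the following (the lazy-start logic and the Boot property name are assumptions based on the surrounding text):

public final class EmbeddedKafkaHolder {

    private static final EmbeddedKafkaBroker embeddedKafka = new EmbeddedKafkaBroker(1, false)
            .brokerListProperty("spring.kafka.bootstrap-servers");

    private static boolean started;

    public static EmbeddedKafkaBroker getEmbeddedKafka() {
        if (!started) {
            try {
                // Start the broker lazily, the first time any test class asks for it
                embeddedKafka.afterPropertiesSet();
            }
            catch (Exception e) {
                throw new KafkaException("Embedded broker failed to start", e);
            }
            started = true;
        }
        return embeddedKafka;
    }

    private EmbeddedKafkaHolder() {
        super();
    }

}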
This assumes a Spring Boot environment and the embedded broker replaces the bootstrap servers
property.
Then, in each test class, you can use something similar to the following:
static {
EmbeddedKafkaHolder.getEmbeddedKafka().addTopics("topic1", "topic2");
}
If you are not using Spring Boot, you can obtain the bootstrap servers using
broker.getBrokersAsString().
The preceding example provides no mechanism for shutting down the broker(s)
when all tests are complete. This could be a problem if, say, you run your tests in a
Gradle daemon. You should not use this technique in such a situation, or you
should use something to call destroy() on the EmbeddedKafkaBroker when your tests
are complete.
We generally recommend that you use the rule as a @ClassRule to avoid starting and stopping the
broker between tests (and use a different topic for each test). Starting with version 2.0, if you use
Spring’s test application context caching, you can also declare a EmbeddedKafkaBroker bean, so a
single broker can be used across multiple test classes. For convenience, we provide a test class-level
annotation called @EmbeddedKafka to register the EmbeddedKafkaBroker bean. The following example
shows how to use it:
@RunWith(SpringRunner.class)
@DirtiesContext
@EmbeddedKafka(partitions = 1,
topics = {
KafkaStreamsTests.STREAMING_TOPIC1,
KafkaStreamsTests.STREAMING_TOPIC2 })
public class KafkaStreamsTests {
@Autowired
private EmbeddedKafkaBroker embeddedKafka;
@Test
public void someTest() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps(
"testGroup", "true", this.embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
ConsumerFactory<Integer, String> cf = new DefaultKafkaConsumerFactory<>
(consumerProps);
Consumer<Integer, String> consumer = cf.createConsumer();
this.embeddedKafka.consumeFromAnEmbeddedTopic(consumer, KafkaStreamsTests
.STREAMING_TOPIC2);
ConsumerRecords<Integer, String> replies = KafkaTestUtils.getRecords
(consumer);
assertThat(replies.count()).isGreaterThanOrEqualTo(1);
}
    @Configuration
    @EnableKafkaStreams
    public static class KafkaStreamsConfig {

        @Value("${" + EmbeddedKafkaBroker.SPRING_EMBEDDED_KAFKA_BROKERS + "}")
        private String brokerAddresses;

        @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
        public KafkaStreamsConfiguration kStreamsConfigs() {
            Map<String, Object> props = new HashMap<>();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "testStreams");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, this.brokerAddresses);
            return new KafkaStreamsConfiguration(props);
        }

    }

}
Starting with version 2.2.4, you can also use the @EmbeddedKafka annotation to specify the Kafka
ports property.
The following example sets the topics, brokerProperties, and brokerPropertiesLocation attributes of
@EmbeddedKafka; these attributes support property placeholder resolution:
@TestPropertySource(locations = "classpath:/test.properties")
@EmbeddedKafka(topics = { "any-topic", "${kafka.topics.another-topic}" },
        brokerProperties = { "log.dir=${kafka.broker.logs-dir}",
                "listeners=PLAINTEXT://localhost:${kafka.broker.port}",
                "auto.create.topics.enable=${kafka.broker.topics-enable:true}" },
        brokerPropertiesLocation = "classpath:/broker.properties")
Starting with version 2.3, there are two ways to use the @EmbeddedKafka annotation with JUnit 5.
When used with the @SpringJUnitConfig annotation, the embedded broker is added to the test
application context. You can auto wire the broker into your test, at the class or method level, to get
the broker address list.
When not using the spring test context, the EmbeddedKafkaCondition creates a broker; the condition
includes a parameter resolver, so you can access the broker in your test method:
@EmbeddedKafka
public class EmbeddedKafkaConditionTests {
@Test
public void test(EmbeddedKafkaBroker broker) {
String brokerList = broker.getBrokersAsString();
...
}
A stand-alone (not Spring test context) broker will be created if the class annotated with
@EmbeddedKafka is not also annotated (or meta-annotated) with
@ExtendWith(SpringExtension.class). @SpringJUnitConfig and @SpringBootTest are so meta-annotated,
and the context-based broker will be used when either of those annotations is also
present.
When there is a Spring test application context available, the topics and broker
properties can contain property placeholders, which will be resolved as long as the
property is defined somewhere. If there is no Spring context available, these
placeholders won’t be resolved.
Spring Initializr now automatically adds the spring-kafka-test dependency in test scope to the
project configuration.
If your application uses the Kafka binder in spring-cloud-stream and if you want to
use an embedded broker for tests, you must remove the spring-cloud-stream-test-
support dependency, because it replaces the real binder with a test binder for test
cases. If you wish some tests to use the test binder and some to use the embedded
broker, tests that use the real binder need to disable the test binder by excluding
the binder auto configuration in the test class. The following example shows how
to do so:
@RunWith(SpringRunner.class)
@SpringBootTest(properties = "spring.autoconfigure.exclude="
        + "org.springframework.cloud.stream.test.binder.TestSupportBinderAutoConfiguration")
public class MyApplicationTests {
    ...
}
There are several ways to use an embedded broker in a Spring Boot application test, including a
JUnit4 class rule and the @EmbeddedKafka annotation, both shown below.
The following example shows how to use a JUnit4 class rule to create an embedded broker:
@RunWith(SpringRunner.class)
@SpringBootTest
public class MyApplicationTests {

    @ClassRule
    public static EmbeddedKafkaRule broker = new EmbeddedKafkaRule(1, false, "someTopic")
            .brokerListProperty("spring.kafka.bootstrap-servers");

    @Autowired
    private KafkaTemplate<String, String> template;

    @Test
    public void test() {
        ...
    }

}
Notice that, since this is a Spring Boot application, we override the broker list property to set Boot’s
property.
The following example shows how to use an @EmbeddedKafka Annotation to create an embedded
broker:
@RunWith(SpringRunner.class)
@EmbeddedKafka(topics = "someTopic",
        bootstrapServersProperty = "spring.kafka.bootstrap-servers")
public class MyApplicationTests {

    @Autowired
    private KafkaTemplate<String, String> template;

    @Test
    public void test() {
        ...
    }

}
4.4.8. Hamcrest Matchers
/**
* @param key the key
* @param <K> the type.
* @return a Matcher that matches the key in a consumer record.
*/
public static <K> Matcher<ConsumerRecord<K, ?>> hasKey(K key) { ... }
/**
* @param value the value.
* @param <V> the type.
* @return a Matcher that matches the value in a consumer record.
*/
public static <V> Matcher<ConsumerRecord<?, V>> hasValue(V value) { ... }
/**
* @param partition the partition.
* @return a Matcher that matches the partition in a consumer record.
*/
public static Matcher<ConsumerRecord<?, ?>> hasPartition(int partition) { ... }
/**
* Matcher testing the timestamp of a {@link ConsumerRecord} assuming the topic
has been set with
* {@link org.apache.kafka.common.record.TimestampType#CREATE_TIME CreateTime}.
*
* @param ts timestamp of the consumer record.
* @return a Matcher that matches the timestamp in a consumer record.
*/
public static Matcher<ConsumerRecord<?, ?>> hasTimestamp(long ts) {
return hasTimestamp(TimestampType.CREATE_TIME, ts);
}
/**
* Matcher testing the timestamp of a {@link ConsumerRecord}
* @param type timestamp type of the record
* @param ts timestamp of the consumer record.
* @return a Matcher that matches the timestamp in a consumer record.
*/
public static Matcher<ConsumerRecord<?, ?>> hasTimestamp(TimestampType type, long
ts) {
return new ConsumerRecordTimestampMatcher(type, ts);
}
4.4.9. AssertJ Conditions
/**
* @param key the key
* @param <K> the type.
* @return a Condition that matches the key in a consumer record.
*/
public static <K> Condition<ConsumerRecord<K, ?>> key(K key) { ... }
/**
* @param value the value.
* @param <V> the type.
* @return a Condition that matches the value in a consumer record.
*/
public static <V> Condition<ConsumerRecord<?, V>> value(V value) { ... }
/**
* @param key the key.
* @param value the value.
* @param <K> the key type.
* @param <V> the value type.
* @return a Condition that matches the key in a consumer record.
* @since 2.2.12
*/
public static <K, V> Condition<ConsumerRecord<K, V>> keyValue(K key, V value) { ... }
/**
* @param partition the partition.
* @return a Condition that matches the partition in a consumer record.
*/
public static Condition<ConsumerRecord<?, ?>> partition(int partition) { ... }
/**
* @param value the timestamp.
* @return a Condition that matches the timestamp value in a consumer record.
*/
public static Condition<ConsumerRecord<?, ?>> timestamp(long value) {
return new ConsumerRecordTimestampCondition(TimestampType.CREATE_TIME, value);
}
/**
* @param type the type of timestamp
* @param value the timestamp.
* @return a Condition that matches the timestamp value in a consumer record.
*/
public static Condition<ConsumerRecord<?, ?>> timestamp(TimestampType type, long
value) {
return new ConsumerRecordTimestampCondition(type, value);
}
4.4.10. Example
The following example brings together most of the topics covered in this chapter:
public class KafkaTemplateTests {

    // Topic used by this test; the constant value is assumed here so the snippet is self-contained.
    private static final String TEMPLATE_TOPIC = "templateTopic";

    @ClassRule
    public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, TEMPLATE_TOPIC);

    @Test
    public void testTemplate() throws Exception {
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("testT", "false",
                embeddedKafka.getEmbeddedKafka());
        DefaultKafkaConsumerFactory<Integer, String> cf =
                new DefaultKafkaConsumerFactory<Integer, String>(consumerProps);
        ContainerProperties containerProperties = new ContainerProperties(TEMPLATE_TOPIC);
        KafkaMessageListenerContainer<Integer, String> container =
                new KafkaMessageListenerContainer<>(cf, containerProperties);
        final BlockingQueue<ConsumerRecord<Integer, String>> records = new LinkedBlockingQueue<>();
        container.setupMessageListener(new MessageListener<Integer, String>() {

            @Override
            public void onMessage(ConsumerRecord<Integer, String> record) {
                System.out.println(record);
                records.add(record);
            }

        });
        container.setBeanName("templateTests");
        container.start();
        ContainerTestUtils.waitForAssignment(container,
                embeddedKafka.getEmbeddedKafka().getPartitionsPerTopic());
        Map<String, Object> producerProps =
                KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka());
        ProducerFactory<Integer, String> pf =
                new DefaultKafkaProducerFactory<Integer, String>(producerProps);
        KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf);
        template.setDefaultTopic(TEMPLATE_TOPIC);
        template.sendDefault("foo");
        assertThat(records.poll(10, TimeUnit.SECONDS), hasValue("foo"));
        template.sendDefault(0, 2, "bar");
        ConsumerRecord<Integer, String> received = records.poll(10, TimeUnit.SECONDS);
        assertThat(received, hasKey(2));
        assertThat(received, hasPartition(0));
        assertThat(received, hasValue("bar"));
        template.send(TEMPLATE_TOPIC, 0, 2, "baz");
        received = records.poll(10, TimeUnit.SECONDS);
        assertThat(received, hasKey(2));
        assertThat(received, hasPartition(0));
        assertThat(received, hasValue("baz"));
    }

}
The preceding example uses the Hamcrest matchers. With AssertJ, the final part looks like the
following code:
assertThat(records.poll(10, TimeUnit.SECONDS)).has(value("foo"));
template.sendDefault(0, 2, "bar");
ConsumerRecord<Integer, String> received = records.poll(10, TimeUnit.SECONDS);
// using individual assertions
assertThat(received).has(key(2));
assertThat(received).has(value("bar"));
assertThat(received).has(partition(0));
template.send(TEMPLATE_TOPIC, 0, 2, "baz");
received = records.poll(10, TimeUnit.SECONDS);
// using allOf()
assertThat(received).has(allOf(keyValue(2, "baz"), partition(0)));
Chapter 5. Tips, Tricks and Examples
5.1. Manually Assigning All Partitions
When you want to always read all records from all partitions (for example, when using a compacted
topic to load a distributed cache), it can be useful to manually assign the partitions rather than use
Kafka’s group management. Doing so can be unwieldy when there are many partitions, because you
have to list them all, and it becomes a maintenance problem if the number of partitions changes over
time, because you would have to recompile your application each time the partition count changes.
The following example shows how to use a SpEL expression to create the partition list dynamically
when the application starts:
@Bean
public PartitionFinder finder(ConsumerFactory<String, String> consumerFactory) {
    return new PartitionFinder(consumerFactory);
}
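The following is a sketch of how such a PartitionFinder bean might be implemented and then used from a @KafkaListener via a SpEL expression; the topic name compacted, the wildcard partition, and the initial offset of 0 are illustrative assumptions, not prescribed values:

public static class PartitionFinder {

    private final ConsumerFactory<String, String> consumerFactory;

    public PartitionFinder(ConsumerFactory<String, String> consumerFactory) {
        this.consumerFactory = consumerFactory;
    }

    // Queries the broker for the topic's partitions and returns them as strings,
    // suitable for the @TopicPartition 'partitions' attribute.
    public String[] partitions(String topic) {
        try (Consumer<String, String> consumer = this.consumerFactory.createConsumer()) {
            return consumer.partitionsFor(topic).stream()
                    .map(pi -> "" + pi.partition())
                    .toArray(String[]::new);
        }
    }

}

// 'compacted' is a hypothetical topic; the SpEL expression calls the 'finder' bean at startup,
// and the wildcard @PartitionOffset (available since 2.5.5) resets every partition to offset 0.
@KafkaListener(topicPartitions = @TopicPartition(topic = "compacted",
        partitions = "#{@finder.partitions('compacted')}",
        partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")))
public void listen(@Payload(required = false) String payload) {
    // process the record; null payloads indicate tombstones on a compacted topic
}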
Using this in conjunction with ConsumerConfig.AUTO_OFFSET_RESET_CONFIG=earliest will load all
records each time the application is started. You should also set the container’s AckMode to MANUAL to
prevent the container from committing offsets for a null consumer group. However, starting
with version 2.5.5, as shown above, you can apply an initial offset to all partitions; see Explicit
Partition Assignment for more information.
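As a rough sketch of those two settings (the bean names and the consumer property wiring are assumptions, not part of the original example; Spring Boot users would typically set spring.kafka.consumer.auto-offset-reset=earliest instead of building the consumer factory by hand):

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {

    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // MANUAL ack mode prevents the container from committing offsets for the
    // (non-existent) consumer group when partitions are assigned manually.
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    return factory;
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    // Start from the beginning of each partition so all records are (re)loaded on startup.
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return new DefaultKafkaConsumerFactory<>(props);
}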
5.2. Examples of Kafka Transactions with Other Transaction Managers
The following Spring Boot application is an example of using Kafka transactions together with another transaction manager (a DataSourceTransactionManager for a database):
@SpringBootApplication
public class Application {

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> template.executeInTransaction(t -> t.send("topic1", "test"));
    }

    @Bean
    public DataSourceTransactionManager dstm(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    @Component
    public static class Listener {
    }

    @Bean
    public NewTopic topic1() {
        return TopicBuilder.name("topic1").build();
    }

    @Bean
    public NewTopic topic2() {
        return TopicBuilder.name("topic2").build();
    }

}
spring.datasource.url=jdbc:mysql://localhost/integration?serverTimezone=UTC
spring.datasource.username=root
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.properties.isolation.level=read_committed
spring.kafka.producer.transaction-id-prefix=tx-
#logging.level.org.springframework.transaction=trace
#logging.level.org.springframework.kafka.transaction=debug
#logging.level.org.springframework.jdbc=debug
@Transactional("dstm")
public void someMethod(String in) {
this.kafkaTemplate.send("topic2", in.toUpperCase());
this.jdbcTemplate.execute("insert into mytable (data) values ('" + in + "')");
}
The KafkaTemplate will synchronize its transaction with the DB transaction, and the Kafka
commit/rollback occurs after the database transaction.
If you wish to commit the Kafka transaction first, and only commit the DB transaction if the Kafka
transaction is successful, use nested @Transactional methods:
@Transactional("dstm")
public void someMethod(String in) {
this.jdbcTemplate.execute("insert into mytable (data) values ('" + in + "')");
sendToKafka(in);
}
@Transactional("kafkaTransactionManager")
public void sendToKafka(String in) {
this.kafkaTemplate.send("topic2", in.toUpperCase());
}
5.3. Customizing the JsonSerializer and JsonDeserializer
If you want to configure the JSON serializer or deserializer through properties but use a customized
ObjectMapper, create a subclass and pass the custom mapper into the super constructor, as the
following constructor shows:

public CustomJsonSerializer() {
    super(customizedObjectMapper());
}
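A minimal sketch of such a subclass follows; the customizedObjectMapper() helper comes from the constructor above, and the specific mapper customizations shown here are illustrative only:

public class CustomJsonSerializer extends JsonSerializer<Object> {

    public CustomJsonSerializer() {
        super(customizedObjectMapper());
    }

    // Illustrative helper: builds the ObjectMapper the serializer should use.
    private static ObjectMapper customizedObjectMapper() {
        ObjectMapper mapper = JacksonUtils.enhancedObjectMapper();
        mapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
        return mapper;
    }

}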
Chapter 6. Other Resources
In addition to this reference documentation, we recommend a number of other resources that may
help you learn about Spring and Apache Kafka.
Appendix A: Override Spring Boot
Dependencies
When using Spring for Apache Kafka in a Spring Boot application, the Apache Kafka dependency
versions are determined by Spring Boot’s dependency management. If you wish to use a different
version of kafka-clients or kafka-streams, and use the embedded Kafka broker for testing, you need
to override the versions used by Spring Boot’s dependency management and add two test artifacts
for Apache Kafka.
Maven
<properties>
    <kafka.version>3.2.3</kafka.version>
</properties>

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>

<!-- optional - only needed when using kafka-streams -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
</dependency>

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka-test</artifactId>
    <scope>test</scope>
</dependency>
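The two Apache Kafka test artifacts mentioned above appear in the Gradle example below but not in the Maven snippet; a sketch of the corresponding Maven dependencies, mirroring the Gradle coordinates, might look like this:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>${kafka.version}</version>
    <classifier>test</classifier>
    <scope>test</scope>
</dependency>

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.13</artifactId>
    <version>${kafka.version}</version>
    <classifier>test</classifier>
    <scope>test</scope>
</dependency>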
Gradle
ext['kafka.version'] = '3.2.3'

dependencies {
    implementation 'org.springframework.kafka:spring-kafka'
    implementation "org.apache.kafka:kafka-streams" // optional - only needed when using kafka-streams
    testImplementation ('org.springframework.kafka:spring-kafka-test') {
        // needed if downgrading to Apache Kafka 2.8.1
        exclude group: 'org.apache.zookeeper', module: 'zookeeper'
    }
    testImplementation "org.apache.kafka:kafka-clients:${kafka.version}:test"
    testImplementation "org.apache.kafka:kafka_2.13:${kafka.version}:test"
}
The test scope dependencies are only needed if you are using the embedded Kafka broker in tests.
Appendix B: Change History
B.1. What’s New in 2.8 Since 2.7
This section covers the changes made from version 2.7 to version 2.8. For changes in earlier
versions, see Change History.
Classes and interfaces related to type mapping have been moved from …support.converter to …support.mapping.
• AbstractJavaTypeMapper
• ClassMapper
• DefaultJackson2JavaTypeMapper
• Jackson2JavaTypeMapper
The listener container can now be configured to accept manual offset commits out of order (usually
asynchronously). The container will defer the commit until the missing offset is acknowledged. See
Manually Committing Offsets for more information.
It is now possible to specify whether the listener method is a batch listener on the method itself.
This allows the same container factory to be used for both record and batch listeners.
See Conversion Errors with Batch Error Handlers for more information.
RecordFilterStrategy, when used with batch listeners, can now filter the entire batch in one call.
See the note at the end of Batch Listeners for more information.
The @KafkaListener annotation now has the filter attribute, to override the container factory’s
RecordFilterStrategy for just this listener.
The @KafkaListener annotation now has the info attribute; this is used to populate the new listener
container property listenerInfo. This is then used to populate a KafkaHeaders.LISTENER_INFO header
in each record which can be used in RecordInterceptor, RecordFilterStrategy, or the listener itself.
See Listener Info Header and Abstract Listener Container Properties for more information.
You can now receive a single record, given the topic, partition and offset. See Using KafkaTemplate to
Receive for more information.
The legacy GenericErrorHandler and its sub-interface hierarchies for record and batch listeners have
been replaced by a new single interface CommonErrorHandler with implementations corresponding to
most legacy implementations of GenericErrorHandler. See Container Error Handlers and Migrating
Custom Legacy Error Handler Implementations to CommonErrorHandler for more information.
See Using KafkaMessageListenerContainer and Listener Container Properties for more information.
There are now several techniques to customize which headers are added to the output record.
Now you can use the same factory for retryable and non-retryable topics. See Specifying a
ListenerContainerFactory for more information.
There’s now a manageable global list of fatal exceptions that will make the failed record go straight
to the DLT. Refer to Exception Classifier to see how to manage it.
You can now use blocking and non-blocking retries in conjunction. See Combining Blocking and
Non-Blocking Retries for more information.
The KafkaBackOffException thrown when using the retryable topics feature is now logged at
DEBUG level. See Changing KafkaBackOffException Logging Level if you need to change the logging
level back to WARN or set it to any other level.
This version requires the 2.7.0 kafka-clients. It is also compatible with the 2.8.0 clients, since
version 2.7.1; see Override Spring Boot Dependencies.
This significant new feature was added in this release. When strict ordering is not important, failed
deliveries can be sent to another topic to be consumed later. A series of such retry topics can be
configured, with increasing delays. See Non-Blocking Retries for more information.
Error handlers that use a BackOff between delivery attempts (e.g. SeekToCurrentErrorHandler and
DefaultAfterRollbackProcessor) will now exit the back off interval soon after the container is
stopped, rather than delaying the stop.
Error handlers and after rollback processors that extend FailedRecordProcessor can now be
configured with one or more RetryListener s to receive information about retry and recovery
progress.
The RecordInterceptor now has additional methods called after the listener returns (normally, or by
throwing an exception). It also has a sub-interface ConsumerAwareRecordInterceptor. In addition,
there is now a BatchInterceptor for batch listeners. See Message Listener Containers for more
information.
You can now validate the payload parameter of @KafkaHandler methods (class-level listeners). See
@KafkaListener @Payload Validation for more information.
You can now set the rawRecordHeader property on the MessagingMessageConverter and
BatchMessagingMessageConverter which causes the raw ConsumerRecord to be added to the converted
Message<?>. This is useful, for example, if you wish to use a DeadLetterPublishingRecoverer in a
listener error handler. See Listener Error Handlers for more information.
You can now modify @KafkaListener annotations during application initialization. See
@KafkaListener Attribute Modification for more information.
B.2.5. DeadLetterPublishingRecoverer Changes
Now, if both the key and value fail deserialization, the original values are published to the DLT.
Previously, the value was populated but the key DeserializationException remained in the headers.
This is a breaking API change if you subclassed the recoverer and overrode the
createProducerRecord method.
In addition, the recoverer verifies that the partition selected by the destination resolver actually
exists before publishing to it.
There is now a mechanism to examine a reply and fail the future exceptionally if some condition
exists.
Support for sending and receiving spring-messaging Message<?> s has been added.
By default, the StreamsBuilderFactoryBean is now configured to not clean up local state. See
Configuration for more information.
New methods createOrModifyTopics and describeTopics have been added. KafkaAdmin.NewTopics has
been added to facilitate configuring multiple topics in a single bean. See Configuring Topics for
more information.
B.2.12. ExponentialBackOffWithMaxRetries
A new BackOff implementation is provided, making it more convenient to configure the max retries.
See ExponentialBackOffWithMaxRetries Implementation for more information.
B.2.13. Conditional Delegating Error Handlers
These new error handlers can be configured to delegate to different error handlers, depending on
the exception type. See Delegating Error Handler for more information.
The default EOSMode is now BETA. See Exactly Once Semantics for more information.
You can now configure an adviceChain in the container properties. See Listener Container
Properties for more information.
When using manual partition assignment, you can now specify a wildcard for determining which
partitions should be reset to the initial offset. In addition, if the listener implements
ConsumerSeekAware, onPartitionsAssigned() is called after the manual assignment. (Also added in
version 2.5.5). See Explicit Partition Assignment for more information.
Convenience methods have been added to AbstractConsumerSeekAware to make seeking easier. See
Seeking to a Specific Offset for more information.
You can now set a maximum age for producers after which they will be closed and recreated. See
Transactions for more information.
You can now update the configuration map after the DefaultKafkaProducerFactory has been created.
This might be useful, for example, if you have to update SSL key/trust store locations after a
credentials change. See Using DefaultKafkaProducerFactory for more information.
The default consumer and producer factories can now invoke a callback whenever a consumer or
producer is created or closed. Implementations for native Micrometer metrics are provided. See
Factory Listeners for more information.
You can now change bootstrap server properties at runtime, enabling failover to another Kafka
cluster. See Connecting to Kafka for more information.
The factory bean can now invoke a callback whenever a KafkaStreams is created or destroyed. An
implementation for native Micrometer metrics is provided. See KafkaStreams Micrometer Support
for more information.
There is now an option to add a header that tracks delivery attempts when using certain error
handlers and after rollback processors. See Delivery Attempts Header for more information.
Default reply headers will now be populated automatically if needed when a @KafkaListener return
type is Message<?>. See Reply Type Message<?> for more information.
The KafkaHeaders.RECEIVED_MESSAGE_KEY is no longer populated with a null value when the incoming
record has a null key; the header is omitted altogether.
B.4.7. Listener Container Changes
The subBatchPerPartition container property is now true by default when using transactions. See
Transactions for more information.
Static group membership is now supported. See Message Listener Containers for more information.
The default error handler is now the SeekToCurrentErrorHandler for record listeners and
RecoveringBatchErrorHandler for batch listeners. See Container Error Handlers for more
information.
You can now control the level at which exceptions intentionally thrown by standard error handlers
are logged. See Container Error Handlers for more information.
The getAssignmentsByClientId() method has been added, making it easier to determine which
consumers in a concurrent container are assigned which partition(s). See Listener Container
Properties for more information.
You can now suppress logging of entire ConsumerRecord s in error and debug logs. See
onlyLogRecordMetadata in Listener Container Properties.
The KafkaTemplate can now maintain micrometer timers. See Monitoring for more information.
The KafkaTemplate can now be configured with ProducerConfig properties to override those in the
producer factory. See Using KafkaTemplate for more information.
A RoutingKafkaTemplate has now been provided. See Using RoutingKafkaTemplate for more
information.
You can now use KafkaSendCallback instead of ListenableFutureCallback to get a narrower exception,
making it easier to extract the failed ProducerRecord. See Using KafkaTemplate for more information.
B.4.10. JsonDeserializer
The JsonDeserializer now has more flexibility to determine the deserialization type. See Using
Methods to Determine Types for more information.
The DelegatingSerializer can now handle "standard" types, when the outbound record has no
header. See Delegating Serializer and Deserializer for more information.
This version requires the 2.4.0 kafka-clients or higher and supports the new incremental
rebalancing feature.
B.5.2. ConsumerAwareRebalanceListener
See the IMPORTANT note at the end of Rebalancing Listeners for more information.
B.5.3. GenericErrorHandler
B.5.4. KafkaTemplate
B.5.5. AggregatingReplyingKafkaTemplate
The releaseStrategy is now a BiConsumer. It is now called after a timeout (as well as when records
arrive); the second parameter is true in the case of a call after a timeout.
B.5.6. Listener Container
B.5.7. @KafkaListener
The @KafkaListener annotation has a new property splitIterables; default true. When a replying
listener returns an Iterable, this property controls whether the result is sent as a single
record or as a record for each element. See Forwarding Listener Results using @SendTo for more
information.
Batch listeners can now be configured with a BatchToRecordAdapter; this allows, for example, the
batch to be processed in a transaction while the listener gets one record at a time. With the default
implementation, a ConsumerRecordRecoverer can be used to handle errors within the batch, without
stopping the processing of the entire batch - this might be useful when using transactions. See
Transactions with Batch Listeners for more information.
A new chapter Tips, Tricks and Examples has been added. Please submit GitHub issues and/or pull
requests for additional entries in that chapter.
Starting with version 2.3.4, the missingTopicsFatal container property is false by default. When this
is true, the application fails to start if the broker is down; many users were affected by this change;
given that Kafka is a high-availability platform, we did not anticipate that starting an application
with no active brokers would be a common use case.
B.6.5. Producer and Consumer Factory Changes
The DefaultKafkaProducerFactory can now be configured to create a producer per thread. You can
also provide Supplier<Serializer> instances in the constructor as an alternative to either
configured classes (which require no-arg constructors), or constructing with Serializer instances,
which are then shared between all Producers. See Using DefaultKafkaProducerFactory for more
information.
Because the listener container has its own mechanism for committing offsets, it prefers the Kafka
ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG to be false. It now sets it to false automatically unless
specifically set in the consumer factory or the container’s consumer property overrides.
It is now possible to obtain the consumer’s group.id property in the listener method. See Obtaining
the Consumer group.id for more information.
The container has a new property recordInterceptor allowing records to be inspected or modified
before invoking the listener. A CompositeRecordInterceptor is also provided in case you need to
invoke multiple interceptors. See Message Listener Containers for more information.
The ConsumerSeekAware has new methods allowing you to perform seeks relative to the beginning,
end, or current position and to seek to the first offset greater than or equal to a time stamp. See
Seeking to a Specific Offset for more information.
The ContainerProperties provides an idleBetweenPolls option that lets the main loop in the listener
container sleep between KafkaConsumer.poll() calls. See its JavaDocs and Using
KafkaMessageListenerContainer for more information.
When using AckMode.MANUAL (or MANUAL_IMMEDIATE) you can now cause a redelivery by calling nack on
the Acknowledgment. See Committing Offsets for more information.
Listener performance can now be monitored using Micrometer Timer s. See Monitoring for more
information.
The containers now publish additional consumer lifecycle events relating to startup. See
Application Events for more information.
Transactional batch listeners can now support zombie fencing. See Transactions for more
information.
The listener container factory can now be configured with a ContainerCustomizer to further
configure each container after it has been created and configured. See Container factory for more
information.
The SeekToCurrentErrorHandler now treats certain exceptions as fatal and disables retry for those,
invoking the recoverer on first failure.
Starting with version 2.3.2, recovered records' offsets will be committed when the error handler
returns after recovering a failed record.
B.6.8. TopicBuilder
A new class TopicBuilder is provided for more convenient creation of NewTopic @Bean s for automatic
topic provisioning. See Configuring Topics for more information.
The HeaderEnricher transformer has been provided, using SpEL to generate the header values. See
Header Enricher for more information.
The MessagingTransformer has been provided. This allows a Kafka Streams topology to interact with
a spring-messaging component, such as a Spring Integration flow. See MessagingTransformer and
[Calling a Spring Integration Flow from a KStream] for more information.
Now all the JSON-aware components are configured by default with a Jackson ObjectMapper
produced by JacksonUtils.enhancedObjectMapper(). The JsonDeserializer now provides
TypeReference-based constructors for better handling of target generic container types. Also, a
JacksonMimeTypeModule has been introduced for serialization of org.springframework.util.MimeType
to plain string. See its JavaDocs and Serialization, Deserialization, and Message Conversion for
more information.
A ByteArrayJsonMessageConverter has been provided as well as a new super class for all Json
converters, JsonMessageConverter. Also, a StringOrBytesSerializer is now available; it can serialize
byte[], Bytes and String values in ProducerRecord s. See Spring Messaging Message Conversion for
more information.
The JsonSerializer, JsonDeserializer and JsonSerde now have fluent APIs to make programmatic
configuration simpler. See the javadocs, Serialization, Deserialization, and Message Conversion,
and Streams JSON Serialization and Deserialization for more information.
B.6.11. ReplyingKafkaTemplate
When a reply times out, the future is completed exceptionally with a KafkaReplyTimeoutException
instead of a KafkaException.
Also, an overloaded sendAndReceive method is now provided that allows specifying the reply
timeout on a per message basis.
B.6.12. AggregatingReplyingKafkaTemplate
Extends the ReplyingKafkaTemplate by aggregating replies from multiple receivers. See Aggregating
Multiple Replies for more information.
You can now override the producer factory’s transactionIdPrefix on the KafkaTemplate and
KafkaTransactionManager. See transactionIdPrefix for more information.
The framework now provides a delegating Serializer and Deserializer, utilizing a header to enable
producing and consuming records with multiple key/value types. See Delegating Serializer and
Deserializer for more information.
B.7.1. Kafka Client Version
You can now use the ConcurrentKafkaListenerContainerFactory to create and configure any
ConcurrentMessageListenerContainer, not only those for @KafkaListener annotations. See Container
factory for more information.
A ConsumerStoppedEvent is now emitted when a consumer stops. See Thread Safety for more
information.
Batch listeners can optionally receive the complete ConsumerRecords<?, ?> object instead of a
List<ConsumerRecord<?, ?>>. See Batch Listeners for more information.
Starting with version 2.2.4, the consumer’s group ID can be used while selecting the dead letter
topic name.
The ConsumerStoppingEvent has been added. See Application Events for more information.
The SeekToCurrentErrorHandler can now be configured to commit the offset of a recovered record
when the container is configured with AckMode.MANUAL_IMMEDIATE (since 2.2.4).
B.7.6. @KafkaListener Changes
You can now override the concurrency and autoStartup properties of the listener container factory
by setting properties on the annotation. You can now add configuration to determine which
headers (if any) are copied to a reply message. See @KafkaListener Annotation for more information.
You can now use @KafkaListener as a meta-annotation on your own annotations. See @KafkaListener
as a Meta Annotation for more information.
It is now easier to configure a Validator for @Payload validation. See @KafkaListener @Payload
Validation for more information.
You can now specify kafka consumer properties directly on the annotation; these will override any
properties with the same name defined in the consumer factory (since version 2.2.4). See
Annotation Properties for more information.
Headers of type MimeType and MediaType are now mapped as simple strings in the RecordHeader value,
for interoperability. Previously, they were mapped as JSON, and only MimeType was decoded;
MediaType could not be decoded.
Also, the DefaultKafkaHeaderMapper has a new addToStringClasses method, allowing the specification
of types that should be mapped by using toString() instead of JSON. See Message Headers for more
information.
The KafkaEmbedded class and its KafkaRule interface have been deprecated in favor of the
EmbeddedKafkaBroker and its JUnit 4 EmbeddedKafkaRule wrapper. The @EmbeddedKafka annotation now
populates an EmbeddedKafkaBroker bean instead of the deprecated KafkaEmbedded. This change allows
the use of @EmbeddedKafka in JUnit 5 tests. The @EmbeddedKafka annotation now has the attribute ports
to specify the port that populates the EmbeddedKafkaBroker. See Testing Applications for more
information.
You can now provide type mapping information by using producer and consumer properties.
New constructors are available on the deserializer to allow overriding the type header information
with the supplied target type.
You can now configure the JsonDeserializer to ignore type information headers by using a Kafka
property (since 2.2.3).
B.7.10. Kafka Streams Changes
The KafkaStreamBrancher has been introduced for a better end-user experience when conditional
branches are built on top of a KStream instance.
See Apache Kafka Streams Support and Configuration for more information.
B.7.11. Transactional ID
When a transaction is started by the listener container, the transactional.id is now the
transactionIdPrefix appended with <group.id>.<topic>.<partition>. This change allows proper
fencing of zombies, as described here.
The StringJsonMessageConverter and JsonSerializer now add type information in Headers, letting the
converter and JsonDeserializer create specific types on reception, based on the message itself
rather than a fixed configured type. See Serialization, Deserialization, and Message Conversion for
more information.
Container error handlers are now provided for both record and batch listeners that treat any
exceptions thrown by the listener as fatal. They stop the container. See Handling Exceptions for
more information.
The listener containers now have pause() and resume() methods (since version 2.1.3). See Pausing
and Resuming Listener Containers for more information.
Starting with version 2.1.3, you can configure stateful retry. See Stateful Retry for more
information.
B.8.6. Client ID
Starting with version 2.1.1, you can now set the client.id prefix on @KafkaListener. Previously, to
customize the client ID, you needed a separate consumer factory (and container factory) per
listener. The prefix is suffixed with -n to provide unique client IDs when you use concurrency.
By default, logging of topic offset commits is performed with the DEBUG logging level. Starting with
version 2.1.2, a new property in ContainerProperties called commitLogLevel lets you specify the log
level for these messages. See Using KafkaMessageListenerContainer for more information.
Starting with version 2.1.3, you can designate one of the @KafkaHandler annotations on a class-level
@KafkaListener as the default. See @KafkaListener on a Class for more information.
B.8.9. ReplyingKafkaTemplate
B.8.10. ChainedKafkaTransactionManager
The Spring for Apache Kafka project now requires Spring Framework 5.0 and Java 8.
You can now annotate @KafkaListener methods (and classes and @KafkaHandler methods) with
@SendTo. If the method returns a result, it is forwarded to the specified topic. See Forwarding
Listener Results using @SendTo for more information.
Message listeners can now be aware of the Consumer object. See Message Listeners for more
information.
B.9.4. Using ConsumerAwareRebalanceListener
Rebalance listeners can now access the Consumer object during rebalance notifications. See
Rebalancing Listeners for more information.
The 0.11.0.0 client library added support for transactions. The KafkaTransactionManager and other
support for transactions have been added. See Transactions for more information.
The 0.11.0.0 client library added support for message headers. These can now be mapped to and
from spring-messaging MessageHeaders. See Message Headers for more information.
The 0.11.0.0 client library provides an AdminClient, which you can use to create topics. The
KafkaAdmin uses this client to automatically add topics defined as @Bean instances.
KafkaTemplate now supports an API to add records with timestamps. New KafkaHeaders have been
introduced regarding timestamp support. Also, new KafkaConditions.timestamp() and
KafkaMatchers.hasTimestamp() testing utilities have been added. See Using KafkaTemplate,
@KafkaListener Annotation, and Testing Applications for more details.
You can now configure a KafkaListenerErrorHandler to handle exceptions. See Handling Exceptions
for more information.
By default, the @KafkaListener id property is now used as the group.id property, overriding the
property configured in the consumer factory (if present). Further, you can explicitly configure the
groupId on the annotation. Previously, you would have needed a separate container factory (and
consumer factory) to use different group.id values for listeners. To restore the previous behavior of
using the factory configured group.id, set the idIsGroup property on the annotation to false.
Support for configuring Kerberos is now provided. See JAAS and Kerberos for more information.
B.11. Changes Between 1.1 and 1.2
This version uses the 0.10.2.x client.
Listeners can be configured to receive the entire batch of messages returned by the consumer.poll()
operation, rather than one at a time.
Null payloads are used to “delete” keys when you use log compaction.
When explicitly assigning partitions, you can now configure the initial offset relative to the current
position for the consumer group, rather than absolute or relative to the current end.
B.12.5. Seek
You can now seek the position of each topic or partition. You can use this to set the initial position
during initialization when group management is in use and Kafka assigns the partitions. You can
also seek when an idle container is detected or at any arbitrary point in your application’s
execution. See Seeking to a Specific Offset for more information.