1. Introduction
Did you ever start reading a book, only to realize that something’s off and the story has become too complex? That awkward feeling is not unlike what happens when an application experiences unscheduled hiccups while processing messages from Kafka topics. Delivery semantics come as saviours in times like these; they remember exactly where your consumer paused its journey – this bookmarking feature is known as the “consumer offset”. Let us take a closer look at Kafka delivery semantics in Spring Boot!
2. Apache Kafka Delivery Semantics Types
Delivery semantics in Apache Kafka refer to the guarantees that a message or event will be reliably delivered between two or more systems. These guarantees are known as at most once, at least once, and exactly once delivery.
2.1. At Most Once Semantics

With at most once delivery, the offset is committed immediately, and the message is delivered only once. If something goes wrong during processing, the message will be lost and not reprocessed.
This is the type Kafka consumers adopt by default, but it should only be used when losing a message is acceptable while high throughput and low latency are needed.
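To make the trade-off concrete, here is a minimal sketch using the plain Kafka consumer API rather than our Spring setup (the properties and the process method are illustrative assumptions): the offset is committed right after polling, before the records are handled, so a crash in the middle of processing loses those records instead of reprocessing them.

var props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "default-spring-consumer");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

try (var consumer = new KafkaConsumer<String, String>(props)) {
    consumer.subscribe(List.of("events"));
    while (true) {
        var records = consumer.poll(Duration.ofMillis(500));
        consumer.commitSync();               // commit the polled offsets first...
        for (var record : records) {
            process(record.value());         // ...then process; a failure here loses the record
        }
    }
}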
2.2. At Least Once Semantics

With at least once delivery, the offset commit happens after the message has been delivered and processed; if something goes wrong, the consumer will reprocess the message. This requires an idempotent consumer (reprocessing a message that has already been processed won’t impact the system). While throughput and latency may take a hit from reprocessing, no message will be lost because we commit the consumer offset manually only after successful processing.
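As a minimal sketch of the idempotency part, the consumer could remember the IDs of events it has already handled (an in-memory set here purely for illustration; a real system would use a durable store such as a database table):

private final Set<Long> processedIds = ConcurrentHashMap.newKeySet();

public void process(Event event) {
    // Set.add returns false when the ID is already present, so a redelivered
    // message is simply skipped and has no further effect on the system
    if (!processedIds.add(event.getId())) {
        return;
    }
    // ... actual business logic goes here ...
}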
2.3. Exactly Once Semantics
With exactly once delivery, the message is processed exactly once and should never be lost. It is hard to create consumers for this semantic because we still need to think about idempotency and, in the event of an exception, about atomic transactions where either everything or nothing happens, similar to the commit-rollback strategy in a database.
Exactly once semantics will typically come with lower throughput and higher latency than the other two.
3. Prerequisites and Setup
You may skip this section if you are not following the tutorial step by step and only want to look at the code examples.
If you are following along in your IDE, I will assume you already have Apache Kafka running in Docker. If you don’t, you may want to learn how to run Kafka in Docker first.
Also, if you don’t know much about the Kafka broker and where it belongs in a typical distributed system architecture, I recommend reading this article.
3.1. Create a topic with data
First, we will insert data into the topic manually. Let’s get onto the broker container and create a topic with a single partition, then produce the following JSON data by executing the steps below in order:
[1] docker exec -it broker bash
[2] kafka-topics --bootstrap-server broker:9092 --create --topic events --partitions 1
[3] kafka-console-producer --broker-list broker:9092 --topic events
>{"id":1,"name":"football game","date":"2020-12-03T10:15:00"}
>{"id":2,"name":"cinema","date":"2020-12-10T11:00:00"}
[4] kafka-console-consumer --bootstrap-server broker:9092 --topic events --from-beginning
Step four verifies the inserted data in the events Kafka topic.
3.2. Generate Project Template
Now that we have the events topic with data, we can generate a blank project using Spring Initializr with all the required dependencies. Click generate and import it into your IDE.

3.3. Add Project Configuration
We are going to start by adding some Gradle dependencies to the build.gradle file for Jackson so we can deserialize the JSON into POJO classes, followed by creating the application.yml config file inside the resources folder.
implementation 'com.fasterxml.jackson.core:jackson-databind'
implementation 'com.fasterxml.jackson.core:jackson-annotations'
spring:
  kafka:
    consumer:
      group-id: default-spring-consumer
      auto-offset-reset: earliest
Next, we must define our ObjectMapper bean to be injected anywhere in our application.
@Configuration
public class JsonConfig {

    @Bean
    public ObjectMapper objectMapper() {
        var objectMapper = new ObjectMapper();
        objectMapper.findAndRegisterModules();
        return objectMapper;
    }
}
Finally, here is the Event Java class to deserialize the JSON into.
@Data
@AllArgsConstructor
@NoArgsConstructor
public class Event {

    private Long id;
    private String name;
    private String date;
}
4. Delivery Semantics in Spring Kafka Consumer
4.1. At Most Once Delivery Semantics
Let’s start by implementing the default at most once delivery semantics.
Having already added the configuration, we need to add a Kafka consumer using the snippet below.
@Log4j2
@Service
public class EventConsumer {

    private final ObjectMapper objectMapper;

    public EventConsumer(ObjectMapper objectMapper) {
        this.objectMapper = objectMapper;
    }

    @KafkaListener(topics = "events")
    public void listenAll(String message) throws JsonProcessingException {
        var event = objectMapper.readValue(message, Event.class);
        var eventTime = LocalDateTime.parse(event.getDate());
        if (eventTime.isAfter(LocalDateTime.now())) {
            throw new IllegalArgumentException("Time in the future");
        }
        log.info("Successfully processed event: {}", event);
    }
}
If we run the Spring application, we should see log messages [1] indicating successful processing. Since two records have been processed, describing the consumer group [2] should show 2 in the current offset column:
[1] Successfully processed event: Event(id=1, name=football game, date=2020-12-03T10:15:00)
    Successfully processed event: Event(id=2, name=cinema, date=2020-12-10T11:00:00)
[2] kafka-consumer-groups --bootstrap-server broker:9092 --describe --group default-spring-consumer
Let’s add one more message that will cause an exception. We should get an exception saying that the event time is in the future.
kafka-console-producer --broker-list broker:9092 --topic events
>{"id":3,"name":"shopping","date":"2030-12-03T10:15:00"}
The consumer will retry several times, based on the back-off max attempts setting, until the retries are exhausted and the message is skipped. Even though the message wasn’t processed successfully, the consumer offset has been incremented and, in this case, updated to 3:
kafka-consumer-groups --bootstrap-server broker:9092 --describe --group default-spring-consumer
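The number of attempts and the delay between them can be tuned. As a rough sketch, assuming a recent Spring Kafka version where DefaultErrorHandler is available and Spring Boot picks it up as the common error handler for the listener container, two retries one second apart could be configured like this:

@Configuration
public class KafkaErrorHandlingConfig {

    @Bean
    public DefaultErrorHandler errorHandler() {
        // retry a failed record twice, one second apart, before giving up on it
        return new DefaultErrorHandler(new FixedBackOff(1000L, 2L));
    }
}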
4.2. At Least Once Delivery Semantics
To achieve at least once delivery semantics, we need to update our application.yml file.
We are setting the auto-commit mode to false so the consumer offset won’t be updated automatically, and the ack mode on the listener to manual immediate, since processing may sometimes take a long time depending on the setup.
spring:
  kafka:
    consumer:
      group-id: default-spring-consumer
      auto-offset-reset: earliest
      enable-auto-commit: false
    listener:
      ack-mode: manual_immediate
Next, we update our listener method to accept an acknowledgement parameter. It allows us to manually acknowledge that the message has been processed, updating the consumer offset.
@KafkaListener(topics = "events")
public void listenAll(String message, Acknowledgment ack) throws JsonProcessingException {
    var event = objectMapper.readValue(message, Event.class);
    var eventTime = LocalDateTime.parse(event.getDate());
    if (eventTime.isAfter(LocalDateTime.now())) {
        throw new IllegalArgumentException("Event time in the future");
    }
    log.info("Successfully processed event: {}", event);
    ack.acknowledge();
}
Add the below data to the events topic and restart the application for the above changes to take effect:
kafka-console-producer --broker-list broker:9092 --topic events
>{"id":4,"name":"fishing","date":"2020-12-03T10:20:00"}
>{"id":5,"name":"restaurant","date":"2025-11-01T08:00:00"}
The first message will be processed, but the second will throw an exception. If we look at the consumer offset, it will be set to 4 instead of 5 since we didn’t manage to acknowledge the failed message manually:
kafka-consumer-groups --bootstrap-server broker:9092 --describe --group default-spring-consumer
The message will be available on the topic to be reprocessed.
4.3. Exactly Once Delivery Semantics
The complexity of exactly once delivery semantics goes beyond what we can present with code samples, since distributed systems consist of more than one application, and those can fail independently. However, let’s explore some factors that may cause failures. While some may seem unlikely, at scale they may occur relatively often.
Producer-to-broker communication failure – when a producer sends a message, the broker is expected to acknowledge it. The broker may crash before or after writing the message to the topic, but before sending the acknowledgement.
Since there is no way for the producer to know the exact error, it will assume an unsuccessful write. The message will be retried, causing duplicates in the topic and consumer. The idempotent consumer is an excellent solution to the above problem.
Broker failure – while Kafka is a highly available, durable system, it uses a replication factor to specify the number of replicas of a given partition. With a replication factor of n, it can tolerate n-1 broker failures, so as long as at least one replica is still available, the data on the partition will not be lost.
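For illustration only, here is how a topic with three replicas could be created with the Kafka Admin client (our single-broker Docker setup from earlier only supports a replication factor of 1, so the topic name and factor below are purely hypothetical):

var props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");

try (var admin = Admin.create(props)) {
    // 1 partition, replicated to 3 brokers: the partition stays available
    // as long as at least one of the three replicas is still up
    var topic = new NewTopic("events-replicated", 1, (short) 3);
    admin.createTopics(List.of(topic)).all().get();
}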
Transactions across multiple partitions – we want to be able to send batches of messages to multiple Kafka partitions where either all messages get published or none. The transactions API supports such atomic operations; it controls when to begin and commit a transaction, including all the steps in between.
The producer is required to set the transactional.id config to provide state continuity across application restarts.
On the consumer side, the isolation level determines how transactional messages are read: the consumer can either read only messages from committed transactions (read_committed) or read everything in offset order, regardless of whether the commit has already happened (read_uncommitted).
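A rough sketch of the plain producer transactions API, with illustrative topic names and payloads, could look like the snippet below; a read_committed consumer only sees these messages once commitTransaction succeeds.

var props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "events-tx-1"); // stable id across restarts
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

try (var producer = new KafkaProducer<String, String>(props)) {
    producer.initTransactions();
    try {
        producer.beginTransaction();
        producer.send(new ProducerRecord<>("events", "first message"));
        producer.send(new ProducerRecord<>("other-topic", "second message"));
        producer.commitTransaction();          // both messages become visible atomically
    } catch (Exception e) {
        producer.abortTransaction();           // neither message becomes visible
        throw e;
    }
}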
5. Summary
In this article, we have evaluated Kafka delivery semantics and explored why Kafka’s at least once semantics is so popular. While avoiding the message loss of at most once semantics is often preferable, exactly once semantics can seem like an unimaginable feat – one we’d all love to crack!