Apache Kafka is a widely-used platform for building real-time data pipelines, messaging systems, and event-driven architectures. It handles massive streams of data efficiently—but getting it running with a Spring Boot application can seem like a lot at first.
Don’t worry. In this guide, we’ll simplify things by showing how to set up Kafka using Docker and configure it in Spring Boot using application.properties. You’ll learn how to create separate setups for development and production, including Kubernetes support using environment variables.
Why Use Kafka with Docker?
Kafka requires ZooKeeper and some additional configuration to run properly. Docker makes it easy to spin up everything you need without cluttering your system, and it keeps your development environment portable and easy to reset.
Step 1: Run Kafka with Docker for Development
Here’s a quick Docker Compose file to set up Kafka and ZooKeeper locally:

version: '2'
services:
  zookeeper:
    image: bitnami/zookeeper:latest
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka:latest
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
Start the services with:
docker-compose up -d
This launches both ZooKeeper and Kafka with default settings, with the broker accessible via localhost:9092.
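Once the containers are up, you can sanity-check the broker by creating a test topic with the kafka-topics.sh script that ships in the Bitnami image (the service name "kafka" matches the Compose file above; the topic name is just an example):

```shell
# Confirm both containers are running
docker-compose ps

# Create a test topic inside the Kafka container
docker-compose exec kafka kafka-topics.sh \
  --bootstrap-server localhost:9092 \
  --create --topic demo-topic --partitions 1 --replication-factor 1
```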
Step 2: Add Kafka Dependencies to Your Project
Add Spring Kafka support in your pom.xml:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
This dependency allows your Spring Boot app to both produce and consume Kafka messages. No version tag is needed here, because Spring Boot’s dependency management supplies one that matches your Boot version.
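To give a flavor of what this enables, here is a minimal producer/consumer sketch. The class, topic, and message names are hypothetical; with the starter on the classpath, Spring Boot auto-configures the KafkaTemplate from the properties we define next:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class GreetingService {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public GreetingService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Publish a message to a (hypothetical) "greetings" topic
    public void send(String message) {
        kafkaTemplate.send("greetings", message);
    }

    // Consume from the same topic; the group id comes from
    // spring.kafka.consumer.group-id in the active properties file
    @KafkaListener(topics = "greetings")
    public void listen(String message) {
        System.out.println("Received: " + message);
    }
}
```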
Step 3: Configure application-dev.properties for Local Development
Now, create a file named application-dev.properties in your resources folder. Define the Kafka connection settings as follows:
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=my-dev-group
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
This setup allows your application to publish messages to, and consume messages from, your locally running Kafka instance.
To activate the development profile, use this environment variable when starting your app:
SPRING_PROFILES_ACTIVE=dev
Step 4: Configure Kafka for Production in Kubernetes
In production environments, you don’t want to hardcode settings like server addresses or credentials. Kubernetes offers a clean way to manage this using ConfigMaps and environment variables.
Step 4.1: Create a Kafka ConfigMap in Kubernetes
Here’s a sample ConfigMap that holds Kafka broker information:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-config
  namespace: your-namespace
data:
  kafka_bootstrap_servers: kafka-service:9092
For sensitive information such as SASL credentials, use a Secret instead of a ConfigMap.
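As an illustration, a Secret holding SASL credentials might look like this (names and values are placeholders; stringData keeps the example readable, while Kubernetes stores the values base64-encoded under data):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kafka-credentials
  namespace: your-namespace
type: Opaque
stringData:
  kafka_sasl_username: my-user
  kafka_sasl_password: changeme
```

You would then inject these with secretKeyRef entries, mirroring the configMapKeyRef pattern in the next step.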
Step 4.2: Inject Environment Variables in Deployment YAML
Update your deployment to inject environment variables into your app container:
containers:
  - name: your-app
    image: your-app-image
    env:
      - name: KAFKA_BOOTSTRAP_SERVERS
        valueFrom:
          configMapKeyRef:
            name: kafka-config
            key: kafka_bootstrap_servers
      - name: SPRING_PROFILES_ACTIVE
        value: prod
Step 5: Configure application-prod.properties for Kubernetes
Create a separate application-prod.properties file for your production environment. Use placeholders to read values from environment variables:
spring.kafka.bootstrap-servers=${KAFKA_BOOTSTRAP_SERVERS}
spring.kafka.consumer.group-id=my-prod-group
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
Spring Boot will automatically replace the ${...} placeholders with values provided by Kubernetes at runtime.
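If you want the same file to keep working outside Kubernetes, Spring’s placeholder syntax also accepts a default value after a colon:

```properties
# Falls back to localhost:9092 when KAFKA_BOOTSTRAP_SERVERS is not set
spring.kafka.bootstrap-servers=${KAFKA_BOOTSTRAP_SERVERS:localhost:9092}
```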
Step 6: Switching Profiles Between Dev and Prod
Spring Boot allows seamless profile switching using the SPRING_PROFILES_ACTIVE variable: set it to dev for local development and to prod in your Kubernetes Deployment. This keeps your configurations environment-specific and cleanly separated.
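For instance, assuming your app is packaged as app.jar, the two environments could be started like this:

```shell
# Local development: loads application-dev.properties
SPRING_PROFILES_ACTIVE=dev java -jar app.jar

# Production: Kubernetes sets SPRING_PROFILES_ACTIVE=prod via the
# Deployment env block, so application-prod.properties is loaded
```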