Apache Kafka is the backbone of many modern data streaming platforms. Whether you're processing logs, events, or real-time messages, Kafka helps you build reliable and scalable pipelines. However, configuring Kafka in a Spring Boot project can seem intimidating—especially when transitioning between development and production environments.
This guide shows you how to set up Kafka using Docker and configure it in Spring Boot using application.yml. We'll walk through settings for both local development and production (running in Kubernetes), with best practices for managing configuration values securely and cleanly.
Why Use Docker for Kafka?
Kafka can be a bit tricky to install manually because it depends on Zookeeper and requires some configuration. Docker simplifies that entire process, allowing you to spin up a working Kafka environment in minutes—perfect for local development and testing.
Step 1: Run Kafka with Docker for Local Development
One of the easiest ways to start Kafka locally is with Docker Compose. Here’s a basic
docker-compose.yml
to run Kafka and Zookeeper:
version: '2'
services:
  zookeeper:
    image: bitnami/zookeeper:latest
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka:latest
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
To start the Kafka stack, run:
docker-compose up -d
This gives you a fully functional Kafka broker on your local machine, accessible on port 9092.
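To verify the broker is up, you can create and list a topic from inside the Kafka container. A quick smoke test, assuming you run it from the directory containing the compose file above (the topic name demo-topic is illustrative; the Bitnami image puts kafka-topics.sh on the PATH):

```shell
# Create a test topic on the local broker
docker compose exec kafka kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic demo-topic --partitions 1 --replication-factor 1

# List topics to confirm the broker is reachable
docker compose exec kafka kafka-topics.sh --list \
  --bootstrap-server localhost:9092
```

If the second command prints demo-topic, your local Kafka is ready for the Spring Boot steps below.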
Step 2: Add Kafka Dependency to Spring Boot
To connect your Spring Boot application with Kafka, add the following Maven dependency:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
This brings in all the tools needed to produce and consume Kafka messages.
Step 3: Configure application-dev.yml for Local Kafka
Now, set up your application-dev.yml to point to the locally running Kafka broker:
spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: my-dev-group
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
This configuration enables Spring Boot to connect to Kafka, subscribe to topics, and produce messages during development.
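With this in place, producing and consuming messages needs very little code. A minimal sketch, assuming the String serializers and group id configured above; the class name and the topic demo-topic are illustrative:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// Minimal producer/consumer sketch. Topic and class names are
// illustrative; serializers come from application-dev.yml above.
@Component
public class GreetingMessaging {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public GreetingMessaging(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Publish a message to the demo-topic topic
    public void send(String message) {
        kafkaTemplate.send("demo-topic", message);
    }

    // Spring creates a listener container that invokes this method
    // for every record consumed from demo-topic
    @KafkaListener(topics = "demo-topic", groupId = "my-dev-group")
    public void listen(String message) {
        System.out.println("Received: " + message);
    }
}
```

Spring Boot auto-configures the KafkaTemplate and the listener container from the YAML, so no extra @Configuration class is required for this basic case.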
To activate this profile locally, run your application with:
SPRING_PROFILES_ACTIVE=dev
Step 4: Set Up Kafka for Production in Kubernetes
For a production-ready setup—especially in a Kubernetes cluster—you should avoid hardcoding sensitive values. Use environment variables, injected via ConfigMaps or Secrets, to control Kafka connection details securely.
Step 4.1: Create a ConfigMap
Here’s an example of a Kubernetes ConfigMap holding your Kafka settings:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-config
  namespace: your-namespace
data:
  kafka_bootstrap_servers: kafka-service:9092
You can also use a Secret if authentication credentials are needed.
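For example, SASL credentials could live in a Secret instead of the ConfigMap. A sketch with illustrative names and keys:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kafka-credentials
  namespace: your-namespace
type: Opaque
stringData:
  kafka_username: my-user
  kafka_password: my-password
```

You would then reference these keys from the deployment with secretKeyRef, exactly as configMapKeyRef is used below.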
Step 4.2: Pass Values into Your App Deployment
Update your Kubernetes deployment to pass these values as environment variables:
containers:
  - name: your-app
    image: your-app-image
    env:
      - name: KAFKA_BOOTSTRAP_SERVERS
        valueFrom:
          configMapKeyRef:
            name: kafka-config
            key: kafka_bootstrap_servers
      - name: SPRING_PROFILES_ACTIVE
        value: prod
Step 5: Configure application-prod.yml for Kubernetes
In production, your Spring Boot app can read the environment variable using placeholders:
spring:
  kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS}
    consumer:
      group-id: my-prod-group
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
When your app starts, Spring Boot automatically replaces the ${...} placeholders with the matching environment variables.
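If the variable might be unset (for example, when running the prod build locally), Spring's placeholder syntax also accepts a default value after a colon:

```yaml
spring:
  kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS:localhost:9092}
```

Here everything after the first colon is the fallback, so the app connects to localhost:9092 whenever KAFKA_BOOTSTRAP_SERVERS is not defined.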
Step 6: Using Spring Profiles
Using Spring profiles helps you cleanly separate development and production environments.
- application-dev.yml: for local development with Docker
- application-prod.yml: for production deployment in Kubernetes
Control which profile is active via:
- Local terminal: SPRING_PROFILES_ACTIVE=dev
- Kubernetes deployment YAML:
  env:
    - name: SPRING_PROFILES_ACTIVE
      value: prod
This ensures the right configuration file is loaded in each environment without code changes.