When you're building real-time systems or event-driven services with Dropwizard, integrating Apache Kafka can unlock powerful messaging capabilities. This guide walks you through setting up Kafka in your Dropwizard project — using Docker locally for development and Kubernetes for production — with configurations kept clean and environment-specific.
1. Add the Right Dependencies
Start by updating your `pom.xml` to include these important libraries:
```xml
<dependencies>
  <!-- Dropwizard core -->
  <dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-core</artifactId>
    <version>${dropwizard.version}</version>
  </dependency>
  <!-- Kafka client for messaging -->
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>${kafka.version}</version>
  </dependency>
  <!-- Kubernetes client -->
  <dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-client</artifactId>
    <version>${fabric8.version}</version>
  </dependency>
</dependencies>
```
These dependencies give you everything you need: the Dropwizard web framework, Kafka messaging support, and optional Kubernetes integration for production.
2. Run Kafka with Docker for Development
For development, spin up Kafka and ZooKeeper using Docker Compose. Create a `docker-compose.yml`:
```yaml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      # Needed for a single-broker setup; the Confluent image defaults to 3
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```
Then run:

```shell
docker-compose up -d
```

This gives you a local Kafka broker at `localhost:9092`.
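To confirm the broker is reachable, you can use the `kafka-topics` CLI that ships inside the Confluent image (the service name `kafka` and topic name `dev-topic` below match this guide's setup; adjust them if yours differ):

```shell
# Create the dev topic explicitly and list all topics on the broker
docker-compose exec kafka kafka-topics --bootstrap-server localhost:9092 \
  --create --topic dev-topic --partitions 1 --replication-factor 1
docker-compose exec kafka kafka-topics --bootstrap-server localhost:9092 --list
```

If the topic shows up in the list, the broker is up and accepting connections.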
3. Configuration for Development: config-dev.yml
Create a configuration file tailored for development:

```yaml
server:
  applicationConnectors:
    - type: http
      port: 8080

kafka:
  bootstrapServers: localhost:9092
  clientId: dw-dev-client
  topic: dev-topic
```
In your Dropwizard config class:

```java
public class KafkaConfig {
    @JsonProperty
    public String bootstrapServers;

    @JsonProperty
    public String clientId;

    @JsonProperty
    public String topic;
}

public class AppConfig extends Configuration {
    @JsonProperty("kafka")
    public KafkaConfig kafka;
}
```
Then initialize the Kafka producer in your `Application` subclass:
```java
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, config.kafka.bootstrapServers);
props.put(ProducerConfig.CLIENT_ID_CONFIG, config.kafka.clientId);
// String keys and values for this example; pass serializer class names
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

Producer<String, String> producer = new KafkaProducer<>(props);
```
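The producer holds network connections and buffered records, so it should be closed cleanly on shutdown. One way to do this is a sketch using Dropwizard's `Managed` lifecycle interface (the class name `ManagedKafkaProducer` is our own; adapt package names to your Dropwizard version):

```java
import io.dropwizard.lifecycle.Managed;
import org.apache.kafka.clients.producer.Producer;

// Closes the Kafka producer when Dropwizard shuts down
public class ManagedKafkaProducer implements Managed {
    private final Producer<String, String> producer;

    public ManagedKafkaProducer(Producer<String, String> producer) {
        this.producer = producer;
    }

    @Override
    public void start() {
        // Producer is created eagerly in run(); nothing to do here
    }

    @Override
    public void stop() {
        // Flushes any buffered records, then releases sockets and threads
        producer.close();
    }
}
```

Register it in `run()` with `environment.lifecycle().manage(new ManagedKafkaProducer(producer));` so Dropwizard invokes `stop()` during shutdown.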
Run your app with:

```shell
java -jar target/myapp.jar server config-dev.yml
```
4. Prepare Kubernetes for Production
In production, you should avoid hardcoding values. Use Kubernetes ConfigMaps and Secrets to manage Kafka connection details and topics.
```shell
kubectl create configmap kafka-config \
  --from-literal=KAFKA_BOOTSTRAP_SERVERS=kafka-service:9092 \
  --from-literal=KAFKA_TOPIC=prod-topic

kubectl create secret generic kafka-secret \
  --from-literal=KAFKA_CLIENT_ID=dw-prod-client
```
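Before deploying, you can confirm the resources exist and hold the expected keys (note that secret values in the output are base64-encoded):

```shell
# Inspect the ConfigMap and Secret created above
kubectl get configmap kafka-config -o yaml
kubectl get secret kafka-secret -o yaml
```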
5. Production Configuration: config-prod.yml
Use environment variables in your production configuration:

```yaml
server:
  applicationConnectors:
    - type: http
      port: 8080

kafka:
  bootstrapServers: ${KAFKA_BOOTSTRAP_SERVERS}
  clientId: ${KAFKA_CLIENT_ID}
  topic: ${KAFKA_TOPIC}
```

Note that Dropwizard does not substitute environment variables out of the box: the `${...}` placeholders are only resolved if a substituting configuration source provider is installed at bootstrap.
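Dropwizard resolves `${...}` placeholders only if you install a substituting source provider in your application's `initialize()` method. A minimal sketch, using the `SubstitutingSourceProvider` and `EnvironmentVariableSubstitutor` classes from Dropwizard's configuration module:

```java
import io.dropwizard.configuration.EnvironmentVariableSubstitutor;
import io.dropwizard.configuration.SubstitutingSourceProvider;

@Override
public void initialize(Bootstrap<AppConfig> bootstrap) {
    // Wrap the default source provider so ${VAR} placeholders in the YAML
    // are replaced with environment-variable values at startup
    bootstrap.setConfigurationSourceProvider(
        new SubstitutingSourceProvider(
            bootstrap.getConfigurationSourceProvider(),
            // false = non-strict: missing variables are left as-is
            // instead of failing startup
            new EnvironmentVariableSubstitutor(false)));
}
```

The substitutor also supports fallback defaults in the YAML, e.g. `${KAFKA_TOPIC:-dev-topic}`, which is handy for local runs.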
6. Kubernetes Deployment Example
Make sure your Kubernetes deployment injects the variables into the container:

```yaml
env:
  - name: KAFKA_BOOTSTRAP_SERVERS
    valueFrom:
      configMapKeyRef:
        name: kafka-config
        key: KAFKA_BOOTSTRAP_SERVERS
  - name: KAFKA_TOPIC
    valueFrom:
      configMapKeyRef:
        name: kafka-config
        key: KAFKA_TOPIC
  - name: KAFKA_CLIENT_ID
    valueFrom:
      secretKeyRef:
        name: kafka-secret
        key: KAFKA_CLIENT_ID
```
Ensure you run your app with:

```shell
java -jar myapp.jar server config-prod.yml
```
7. Testing a Message
You can validate sending a message with a simple resource:

```java
@Path("/produce")
@Produces(MediaType.TEXT_PLAIN)
public class KafkaResource {
    private final Producer<String, String> producer;
    private final AppConfig config;

    @Inject
    public KafkaResource(Producer<String, String> producer, AppConfig config) {
        this.producer = producer;
        this.config = config;
    }

    @POST
    public String produce(String msg) {
        // send() is asynchronous; pass a callback if you need delivery confirmation
        producer.send(new ProducerRecord<>(config.kafka.topic, msg));
        return "Message sent to topic " + config.kafka.topic;
    }
}
```
Call `POST /produce` with a message body and check your Kafka logs or a console consumer.
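For example, assuming the dev setup above (service on port 8080, topic `dev-topic`), you could exercise the endpoint with `curl` and read the message back with the console consumer bundled in the Kafka container:

```shell
# Post a message through the Dropwizard resource
curl -X POST -H "Content-Type: text/plain" -d "hello kafka" \
  http://localhost:8080/produce

# Read it back from the topic to confirm delivery
docker-compose exec kafka kafka-console-consumer \
  --bootstrap-server localhost:9092 --topic dev-topic \
  --from-beginning --max-messages 1
```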
Why This Setup Matters
- Local development is fast and isolated with Docker and a dev config.
- Production stays secure: Kubernetes ConfigMaps and Secrets keep connection details and credentials out of source code.
- Clear separation between `config-dev.yml` and `config-prod.yml` improves maintainability.
- Dropwizard's YAML support makes injecting config and managing services straightforward.
By combining Docker for local Kafka development and Kubernetes for production deployment — all backed by Dropwizard’s configuration system — you get a robust, secure, and maintainable setup for real-time messaging. Whether you're pushing events or handling data streams, this setup lays a solid foundation that you can build on confidently.