An open-source project by Conduktor

This project is sponsored by Conduktor.io, the Enterprise Data Management Platform for Streaming.

Once you have started your cluster, you can use Conduktor to easily manage it. Just connect against localhost:9092. If you are on Mac or Windows and want to connect from another container, use host.docker.internal:29092.

kafka-stack-docker-compose

This project replicates, as closely as possible, real deployment configurations, where your Zookeeper servers and Kafka brokers run as distinct services. It solves all the networking hurdles that come with Docker and Docker Compose, and is compatible cross-platform.

Stack

  • Conduktor Platform
  • Zookeeper version
  • Kafka version
  • Kafka Schema Registry
  • Kafka Rest Proxy
  • Kafka Connect
  • ksqlDB Server
  • Zoonavigator

For a UI tool to access your local Kafka cluster, use the free version of Conduktor.

Requirements

Kafka will be exposed on 127.0.0.1 (or on $DOCKER_HOST_IP if that environment variable is set).
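
For example, to expose the brokers on a specific host IP instead of 127.0.0.1, export DOCKER_HOST_IP before starting a stack (a minimal sketch; 192.168.1.42 is a placeholder for your machine's IP):

  # Placeholder IP; replace with your host's address
  export DOCKER_HOST_IP=192.168.1.42
  docker compose -f zk-single-kafka-single.yml up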

Apple M4 Support

At the time of writing, there is an issue with Apple M4 machines running certain Java-based Docker images.

Modify the conduktor.yml file and uncomment the environment variable CONSOLE_JAVA_OPTS: "-XX:UseSVE=0".
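
Once uncommented, the relevant part of conduktor.yml looks roughly like this (a sketch; the service name conduktor-console is an assumption, check your conduktor.yml for the exact name):

  conduktor-console:
    environment:
      # Works around the JVM issue on Apple M4 chips
      CONSOLE_JAVA_OPTS: "-XX:UseSVE=0"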

Full stack

To ease your journey with Kafka, just connect to localhost:8080.

  • Conduktor Platform: $DOCKER_HOST_IP:8080
  • Single Zookeeper: $DOCKER_HOST_IP:2181
  • Single Kafka: $DOCKER_HOST_IP:9092
  • Kafka Schema Registry: $DOCKER_HOST_IP:8081
  • Kafka Rest Proxy: $DOCKER_HOST_IP:8082
  • Kafka Connect: $DOCKER_HOST_IP:8083
  • KSQL Server: $DOCKER_HOST_IP:8088
  • (experimental) JMX port at $DOCKER_HOST_IP:9001

Run with:

docker compose -f full-stack.yml up
docker compose -f full-stack.yml down

Single Zookeeper / Single Kafka

This configuration fits most development requirements.

  • Zookeeper will be available at $DOCKER_HOST_IP:2181
  • Kafka will be available at $DOCKER_HOST_IP:9092
  • (experimental) JMX port at $DOCKER_HOST_IP:9999

Run with:

docker compose -f zk-single-kafka-single.yml up
docker compose -f zk-single-kafka-single.yml down
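
To verify the broker is reachable, you can create a topic and round-trip a message with the console tools bundled in the broker image (a sketch; kafka1 and the internal listener port 19092 are taken from the compose files):

  # Create a throwaway topic on the single broker
  docker exec kafka1 kafka-topics --bootstrap-server kafka1:19092 --create --topic smoke-test --partitions 1 --replication-factor 1
  # Produce one message, then read it back
  echo "hello" | docker exec -i kafka1 kafka-console-producer --bootstrap-server kafka1:19092 --topic smoke-test
  docker exec kafka1 kafka-console-consumer --bootstrap-server kafka1:19092 --topic smoke-test --from-beginning --max-messages 1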

Single Zookeeper / Multiple Kafka

Use this setup if you want to have three brokers and experiment with Kafka replication / fault-tolerance.

  • Zookeeper will be available at $DOCKER_HOST_IP:2181
  • Kafka will be available at $DOCKER_HOST_IP:9092, $DOCKER_HOST_IP:9093, $DOCKER_HOST_IP:9094

Run with:

docker compose -f zk-single-kafka-multiple.yml up
docker compose -f zk-single-kafka-multiple.yml down
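
With three brokers up, you can create a topic with replication factor 3 and inspect where its partitions and replicas land (a sketch; kafka1 and the internal listener port 19092 are taken from the compose files):

  docker exec kafka1 kafka-topics --bootstrap-server kafka1:19092 --create --topic replicated-demo --partitions 3 --replication-factor 3
  # Shows the leader and replica brokers for each partition
  docker exec kafka1 kafka-topics --bootstrap-server kafka1:19092 --describe --topic replicated-demo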

Multiple Zookeeper / Single Kafka

Use this setup if you want to have three Zookeeper nodes and experiment with Zookeeper fault-tolerance.

  • Zookeeper will be available at $DOCKER_HOST_IP:2181, $DOCKER_HOST_IP:2182, $DOCKER_HOST_IP:2183
  • Kafka will be available at $DOCKER_HOST_IP:9092
  • (experimental) JMX port at $DOCKER_HOST_IP:9999

Run with:

docker compose -f zk-multiple-kafka-single.yml up
docker compose -f zk-multiple-kafka-single.yml down
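
To check that the broker registered correctly with the Zookeeper ensemble, you can query it from inside the broker container (a sketch; the kafka1 and zoo1 service names are taken from the compose file):

  # Lists the broker IDs currently registered in Zookeeper
  docker exec kafka1 zookeeper-shell zoo1:2181 ls /brokers/ids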

Multiple Zookeeper / Multiple Kafka

Use this setup if you want to have three Zookeeper nodes and three Kafka brokers to experiment with a production-like setup.

  • Zookeeper will be available at $DOCKER_HOST_IP:2181, $DOCKER_HOST_IP:2182, $DOCKER_HOST_IP:2183
  • Kafka will be available at $DOCKER_HOST_IP:9092, $DOCKER_HOST_IP:9093, $DOCKER_HOST_IP:9094

Run with:

docker compose -f zk-multiple-kafka-multiple.yml up
docker compose -f zk-multiple-kafka-multiple.yml down
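
A quick way to see fault tolerance in action is to stop one broker and check that partitions elect new leaders (a sketch; kafka1, kafka2, the port 19092, and the topic name follow the compose file's service names):

  # Create a replicated topic, stop one broker, then watch leadership move
  docker exec kafka1 kafka-topics --bootstrap-server kafka1:19092 --create --topic failover-demo --partitions 3 --replication-factor 3
  docker compose -f zk-multiple-kafka-multiple.yml stop kafka2
  docker exec kafka1 kafka-topics --bootstrap-server kafka1:19092 --describe --topic failover-demo
  docker compose -f zk-multiple-kafka-multiple.yml start kafka2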

FAQ

Kafka

Q: Kafka's log is too verbose, how can I reduce it?

A: Add the following line to your docker compose environment variables: KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO". Full logging control can be found here: https://github.com/confluentinc/cp-docker-images/blob/master/debian/kafka/include/etc/confluent/docker/log4j.properties.template
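
In compose terms, that means adding the variable under a broker's environment block (a minimal sketch for the kafka1 service):

  kafka1:
    environment:
      # Quieten the noisiest Kafka loggers
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"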

Q: How do I delete data to start fresh?

A: Your data is persisted within the docker compose folder, so if you want, for example, to reset the data in the full-stack docker compose, run docker compose -f full-stack.yml down.

Q: Can I change the zookeeper ports?

A: Yes. Say you want to change the zoo1 port to 12181 (only relevant lines are shown):

  zoo1:
    ports:
      - "12181:12181"
    environment:
      ZOO_PORT: 12181

  kafka1:
    environment:
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:12181"

Q: Can I change the Kafka ports?

A: Yes. Say you want to change the kafka1 port to 12345 (only relevant lines are shown). Note that only LISTENER_DOCKER_EXTERNAL changes:

  kafka1:
    image: confluentinc/cp-kafka:7.2.1
    hostname: kafka1
    ports:
      - "12345:12345"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:19092,EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:12345,DOCKER://host.docker.internal:29092

Q: Kafka is using a lot of disk space for testing. Can I reduce it?

A: Yes, but this is for testing only! Reduce KAFKA_LOG_SEGMENT_BYTES to 16 MB and KAFKA_LOG_RETENTION_BYTES to 128 MB:

  kafka1:
    image: confluentinc/cp-kafka:7.2.1
    ...
    environment:
      ...
      # For testing: small segments of 16MB and retention of 128MB
      KAFKA_LOG_SEGMENT_BYTES: 16777216
      KAFKA_LOG_RETENTION_BYTES: 134217728

Q: How do I expose Kafka?

A: If you want to expose Kafka outside of your local machine, you must set KAFKA_ADVERTISED_LISTENERS to the IP of the machine so that Kafka is externally accessible. To achieve this, set LISTENER_DOCKER_EXTERNAL to the IP of the machine. For example, if the IP of your machine is 50.10.2.3, follow the sample mapping below:

  kafka1:
    image: confluentinc/cp-kafka:7.2.1
    ...
    environment:
      ...
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:19092,EXTERNAL://50.10.2.3:9092,DOCKER://host.docker.internal:29092

Q: How do I add connectors to Kafka connect?

A: Create a connectors directory and place your connectors there (usually in a subdirectory), e.g. connectors/example/my.jar.

The directory is automatically mounted by the kafka-connect Docker container.
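
The mount is declared on the kafka-connect service in the compose file; it looks roughly like this (a sketch; the container path /etc/kafka-connect/jars/ is an assumption here, check full-stack.yml for the exact path used):

  kafka-connect:
    volumes:
      # Host ./connectors directory made visible to Kafka Connect's plugin path
      - ./connectors:/etc/kafka-connect/jars/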

Alternatively, edit the bash command that pulls connectors at runtime:

confluent-hub install --no-prompt debezium/debezium-connector-mysql:latest
confluent-hub install

Q: How do I disable Confluent metrics?

A: Add this environment variable to your Kafka services:

KAFKA_CONFLUENT_SUPPORT_METRICS_ENABLE=false
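
In the compose files, which use the map form of environment, this would look like the following (a minimal sketch for the kafka1 service):

  kafka1:
    environment:
      # Opt out of Confluent's anonymous metrics reporting
      KAFKA_CONFLUENT_SUPPORT_METRICS_ENABLE: "false"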
