"I learned very early the difference between knowing the name of something and knowing something."
– Richard P. Feynman
This blog series captures the essence of Feynman's quote, emphasizing the distinction between superficial knowledge (knowing the name of something) and deep understanding (knowing something). If you want to learn beyond the name of Docker Compose, then read away 🙂
Pre-read
So far, I have shared the following stories:
- What is Docker? – https://medium.com/@ananya281294/beyond-names-labels-docker-47a5e446bda0
- Basic Docker Commands – https://medium.com/@ananya281294/beyond-names-labels-docker-commands-ce28d7793f1c
- Dockerfile – https://medium.com/@ananya281294/beyond-names-labels-dockerfile-58083bcc6c93
In all the examples in the above stories, we worked with only a single image and its container. However, in the real world, we often need multiple containers interacting with one another.
Multi-Container Apps
As the name suggests, a multi-container app has more than one container. A typical example is having a DB for your application.
You could deploy both services (API and DB) as part of one container.
However, this setup has the following disadvantages:
- You may need to scale each service differently.
- You cannot perform updates to each service in isolation.
- You may not want to dockerize DB services in production and instead use some managed DB service.
- A container has only one startup command, so you would need a complex process manager to start multiple services in one container.
Hence, the better approach is to go for separate containers for each service.
Multi-Container Setup
As we already know, containers are isolated. By default, a container cannot communicate with any service beyond its own scope. For multiple containers to talk to each other, we need to define networking.
There are 2 ways to achieve a multi-container setup:
- Using docker CLI
- Using docker-compose
Let's see each of these options in detail.
For our demo, we will set up a single-node Kafka broker with one zookeeper.
Apache Kafka is an open-source distributed event streaming platform. Producers are clients that write event messages to a Kafka topic. Kafka uses topics to store events. These events are then read and processed by consumers.
We will verify the setup by creating a topic and then producing and consuming messages.
Multi-Container Setup using Docker CLI
Step 1: Create Network
Docker containers are isolated. In order for different containers to talk to each other, they must be placed in the same network.
# Create a network named single-node-kafka
docker network create single-node-kafka
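Optionally, you can confirm the network exists before starting any containers:
# List networks and confirm single-node-kafka is present
docker network ls --filter name=single-node-kafka
# Inspect the network; its "Containers" section will be empty for now
docker network inspect single-node-kafka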
Step 2: Start zookeeper in the above network
# Start a zookeeper container in network single-node-kafka
docker run -d \
--network single-node-kafka --network-alias zookeeper-net \
--name zookeeper \
-e ZOOKEEPER_CLIENT_PORT=2182 \
-p 2182:2182 \
confluentinc/cp-zookeeper:latest
# zookeeper has a network alias of zookeeper-net. Other containers will use this name to connect to zookeeper
# The name of the container is provided with the --name flag
# Environment variables are provided with the -e flag
# Port mapping is provided to open port 2182
# confluentinc/cp-zookeeper:latest is the name of the zookeeper image
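Before moving on, a quick, optional sanity check that zookeeper actually came up:
# Confirm the container is running
docker ps --filter name=zookeeper
# Tail the logs; they should show zookeeper binding to client port 2182
docker logs zookeeper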
Step 3: Start Kafka broker in the same network
# Start a kafka broker container in network single-node-kafka
docker run -d \
--network single-node-kafka --network-alias broker-net \
--name kafka_broker \
-p 9094:9094 \
-e KAFKA_BROKER_ID=1 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper-net:2182 \
-e KAFKA_ADVERTISED_LISTENERS=INTERNAL://broker-net:9092,EXTERNAL://localhost:9094 \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT \
-e KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:latest
# kafka broker has a network alias of broker-net. Other containers will use this name to connect to the kafka broker
# The name of the container is provided with the --name flag
# Environment variables are provided with the -e flag
# The KAFKA_ZOOKEEPER_CONNECT env contains the host:port of zookeeper.
# Observe that the zookeeper network alias is used here.
# The KAFKA_ADVERTISED_LISTENERS env specifies the addresses clients use to connect to the kafka broker
# broker-net:9092 will be used internally by brokers, as indicated by KAFKA_INTER_BROKER_LISTENER_NAME
# localhost:9094 will be used to connect to kafka externally from the host system.
# Port mapping is provided to open port 9094
# confluentinc/cp-kafka:latest is the name of the kafka image
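At this point, both containers should be attached to the same network. A quick, optional way to confirm:
# zookeeper and kafka_broker should both appear in the "Containers" section
docker network inspect single-node-kafka
# Both containers should be listed as running
docker ps --filter network=single-node-kafka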
Step 4: Let's test the setup by creating a topic and producing and consuming messages.
We will use the docker exec command for this.
docker exec <containerId> <command>
# Create topic
docker exec kafka_broker kafka-topics --bootstrap-server broker-net:9092 --create --topic TestTopic
# Produce Message
docker exec -it kafka_broker kafka-console-producer --bootstrap-server broker-net:9092 --topic TestTopic
>T1
>T2
# Consume Message
docker exec -it kafka_broker kafka-console-consumer --bootstrap-server broker-net:9092 --topic TestTopic --from-beginning
T1
T2
# Observe how we used broker-net (the kafka broker network alias) to run commands inside the docker container.
Alternatively, we can also do the above operations from localhost:
# Assuming you have Kafka binaries on local
# Create Topic
bin/kafka-topics.sh --topic TestLocalTopic --bootstrap-server localhost:9094 --create
# Produce Message
bin/kafka-console-producer.sh --topic TestLocalTopic --bootstrap-server localhost:9094
>A1
>A2
>A3
# Consume Message
bin/kafka-console-consumer.sh --topic TestLocalTopic --bootstrap-server localhost:9094 --from-beginning
A1
A2
A3
# Observe how we used the 9094 (open port for external communication) to run command from localhost.
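Before moving to Compose, tear down the CLI-based setup. The Compose file below reuses the same container names (zookeeper and kafka_broker), so the old containers would otherwise conflict:
# Stop and remove the containers
docker stop kafka_broker zookeeper
docker rm kafka_broker zookeeper
# Remove the network
docker network rm single-node-kafka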
Introducing Compose
Compose is a utility provided by the docker platform for defining and running multi-container apps. Docker Compose uses a docker-compose.yml file to create multiple containers.
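Compose typically ships with Docker Desktop, so you can check it is already installed:
docker-compose --version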
Multi-Container Setup using Compose
Step 1: Create docker-compose.yml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2182
    ports:
      - 2182:2182
    restart: always
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka_broker
    depends_on:
      - zookeeper
    ports:
      - 9094:9094
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2182
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL://localhost:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
Let's understand this docker-compose file:
- Every docker-compose.yml file starts with a version.
- Each of the services in the multi-container app is added as a service. In our case, we have 2 services: zookeeper and kafka
- Each service contains information on how to create the docker container. In our case, the docker container is created by fetching an image as indicated by the image key. Alternatively, you can also specify a Dockerfile to create the image using the build key. https://docs.docker.com/compose/compose-file/compose-file-v3/#build
- The container_name key indicates the container name.
- Port mapping is specified by the ports key.
- Environment variables are specified using the environment key. Observe that the KAFKA_ZOOKEEPER_CONNECT is mentioned as zookeeper:2182. In docker-compose, there is no need to create any networking manually. The services are automatically added to a common network. The services can talk to each other by calling the service name and port.
- The depends_on key indicates that kafka depends on zookeeper. When starting the containers, kafka will be started only after zookeeper. Similarly, when stopping the containers, services are stopped in dependency order. Note that depends_on only waits for the dependency container to start, not for the service inside it to be ready. https://docs.docker.com/compose/compose-file/compose-file-v3/#depends_on
- restart specifies the restart policy of the container. There are four restart policies: always, on-failure, no, unless-stopped.
Step 2: Start the services by calling the docker-compose up command
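Run it from the directory containing docker-compose.yml:
# Start all services in the background (-d = detached)
docker-compose up -d
# Compose first creates a default network (named after the directory),
# then starts zookeeper, then kafka_broker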
Observe how docker-compose up created the network first and then the services, respecting the dependency.
Step 3: Let's test the setup by creating a topic and producing and consuming messages.
We will use the docker exec command for this.
docker exec <containerId> <command>
# Create topic
docker exec kafka_broker kafka-topics --bootstrap-server kafka:9092 --create --topic TestTopic
# Produce Message
docker exec -it kafka_broker kafka-console-producer --bootstrap-server kafka:9092 --topic TestTopic
>A1
>A2
# Consume Message
docker exec -it kafka_broker kafka-console-consumer --bootstrap-server kafka:9092 --topic TestTopic --from-beginning
A1
A2
# Observe how we used kafka (the service name) to run commands inside the docker container. With Compose, services reach each other by service name.
Alternatively, we can also do the above operations from localhost:
# Assuming you have kafka binaries on local
# Create Topic
bin/kafka-topics.sh --topic TestLocalTopic --bootstrap-server localhost:9094 --create
# Produce Message
bin/kafka-console-producer.sh --topic TestLocalTopic --bootstrap-server localhost:9094
>A1
>A2
>A3
# Consume Message
bin/kafka-console-consumer.sh --topic TestLocalTopic --bootstrap-server localhost:9094 --from-beginning
A1
A2
A3
# Observe how we used the 9094 (open port for external communication) to run command from localhost.
Step 4: Stop the Containers using docker-compose down
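From the same directory:
# Stop and remove the containers and the network created by compose
docker-compose down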
Observe how docker-compose down stopped the containers first (respecting the dependency) and then removed the network.
The basic docker client commands are also available with docker compose, as shown below:
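A few commonly used ones (these mirror the familiar docker commands):
docker-compose ps                # List containers managed by this compose file
docker-compose logs kafka        # View the logs of a service
docker-compose stop              # Stop services without removing containers
docker-compose start             # Start previously stopped services
docker-compose exec kafka bash   # Run a command inside a running service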
In summary, we saw how setting up multi-container apps with Docker Compose is a much easier and cleaner approach.
Thank you for giving me the gift of your precious time 🙂 Happy Reading!!