Docker EP 11 – Docker Networking & Docker Volumes

Alex is tasked with creating a new microservices-based web application for a growing e-commerce platform. The application needs to handle everything from user authentication to inventory management, so Alex decides to use Docker to containerize the different services.

Here is the code for each service, along with its Dockerfile:

Auth Service (auth-service)

This service handles user authentication, auth_service.py:


# auth_service.py
from flask import Flask, request, jsonify

app = Flask(__name__)

# Dummy user database
users = {
    "user1": "password1",
    "user2": "password2"
}

@app.route('/login', methods=['POST'])
def login():
    data = request.json
    username = data.get('username')
    password = data.get('password')
    
    if username in users and users[username] == password:
        return jsonify({"message": "Login successful"}), 200
    else:
        return jsonify({"message": "Invalid credentials"}), 401

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Dockerfile:

# Use the official Python image.
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install Flask, the only package this service needs
RUN pip install flask

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Define environment variable
ENV NAME=auth-service

# Run auth_service.py when the container launches
CMD ["python", "auth_service.py"]


Inventory Service (inventory-service)

This service manages inventory data, inventory_service.py:

from flask import Flask, request, jsonify

app = Flask(__name__)

# Dummy inventory database
inventory = {
    "item1": {"name": "Item 1", "quantity": 10},
    "item2": {"name": "Item 2", "quantity": 5}
}

@app.route('/inventory', methods=['GET'])
def get_inventory():
    return jsonify(inventory), 200

@app.route('/inventory/<item_id>', methods=['POST'])
def update_inventory(item_id):
    data = request.json
    if item_id in inventory:
        inventory[item_id]["quantity"] = data.get("quantity")
        return jsonify({"message": "Inventory updated"}), 200
    else:
        return jsonify({"message": "Item not found"}), 404

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)


Dockerfile:

# Use the official Python image.
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install Flask, the only package this service needs
RUN pip install flask

# Make port 5001 available to the world outside this container
EXPOSE 5001

# Define environment variable
ENV NAME=inventory-service

# Run inventory_service.py when the container launches
CMD ["python", "inventory_service.py"]

Dev Service (dev-service)

This is a simple service used during development for testing and managing files, dev_service.py:


from flask import Flask, request, jsonify
import os

app = Flask(__name__)

@app.route('/files', methods=['GET'])
def list_files():
    files = os.listdir('/app/data')
    return jsonify(files), 200

@app.route('/files/<filename>', methods=['GET'])
def read_file(filename):
    try:
        with open(f'/app/data/{filename}', 'r') as file:
            content = file.read()
        return jsonify({"filename": filename, "content": content}), 200
    except FileNotFoundError:
        return jsonify({"message": "File not found"}), 404

@app.route('/files/<filename>', methods=['POST'])
def write_file(filename):
    data = request.json.get("content", "")
    with open(f'/app/data/{filename}', 'w') as file:
        file.write(data)
    return jsonify({"message": f"File {filename} written successfully"}), 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5002)

Dockerfile:


# Use the official Python image.
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install Flask, the only package this service needs
RUN pip install flask

# Make port 5002 available to the world outside this container
EXPOSE 5002

# Define environment variable
ENV NAME=dev-service

# Run dev_service.py when the container launches
CMD ["python", "dev_service.py"]

Once built and running, the services expose the following endpoints:

Auth Service: http://localhost:5000/login (POST request with JSON {"username": "user1", "password": "password1"})

Inventory Service: http://localhost:5001/inventory (GET or POST request)

Dev Service:

  • List files: http://localhost:5002/files (GET request)
  • Read file: http://localhost:5002/files/<filename> (GET request)
  • Write file: http://localhost:5002/files/<filename> (POST request with JSON {"content": "Your content here"})
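Assuming each container is started with the corresponding -p port mapping (as shown for the auth-service in the next section), a quick smoke test from the host might look like the following; the payloads and filenames are only illustrative:

curl -X POST http://localhost:5000/login \
  -H "Content-Type: application/json" \
  -d '{"username": "user1", "password": "password1"}'

curl http://localhost:5001/inventory

curl http://localhost:5002/files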

The Lonely Container

You start by creating a simple Flask application for user authentication. After writing the code, you decide to containerize it using Docker.

docker build -t auth-service .
docker run -d -p 5000:5000 --name auth-service auth-service

The service is up and running, and you can access it at http://localhost:5000. But there’s one problem: it’s lonely. Your auth-service is a lone container in the vast sea of Docker networking. If you want to add more services, they need a way to communicate with each other.

  1. docker build -t auth-service .
    • Builds a Docker image from the Dockerfile in the current directory (.) and tags it as auth-service.

  2. docker run -d -p 5000:5000 --name auth-service auth-service
    • -d: Runs the container in detached mode (in the background).
    • -p 5000:5000: Maps port 5000 on the host to port 5000 in the container, making the Flask app accessible at http://localhost:5000.
    • --name auth-service: Names the container auth-service.
    • auth-service: The name of the image to run.
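Before moving on, you can confirm the container is actually up and inspect its output, for example:

docker ps --filter name=auth-service
docker logs auth-service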

The Bridge of Communication

You decide to expand the application by adding a new inventory service. But how will these two services talk to each other? Enter the bridge network, a magical construct that allows containers to communicate within their own private world.

You create a user-defined bridge network so that your containers can talk to each other by name rather than by IP address. (Container names must be unique, so remove the earlier standalone container with docker rm -f auth-service before re-running it on the new network.)

docker network create ecommerce-network
docker run -d --name auth-service --network ecommerce-network auth-service
docker run -d --name inventory-service --network ecommerce-network inventory-service

Now, your services are not lonely anymore. The auth-service can talk to the inventory-service simply by using its name, like calling a friend across the room. In your code, you can reference inventory-service by name to establish a connection.

docker network create ecommerce-network

  • Creates a user-defined bridge network called ecommerce-network. This network allows containers to communicate with each other using their container names as hostnames.

docker run -d --name auth-service --network ecommerce-network auth-service

  • Runs the auth-service container on the ecommerce-network. The container can now communicate with other containers on the same network using their names.

docker run -d --name inventory-service --network ecommerce-network inventory-service

  • Runs the inventory-service container on the ecommerce-network. The auth-service container can now communicate with the inventory-service using the name inventory-service.
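One quick way to see this name-based resolution in action, using nothing beyond the Python standard library already present in the image, is to call the inventory API from inside the auth-service container:

docker exec auth-service python -c "import urllib.request; print(urllib.request.urlopen('http://inventory-service:5001/inventory').read().decode())"

Note that no -p mapping is needed for this call: container-to-container traffic on a user-defined bridge network goes straight to the container’s own port (5001 here).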

The City of Services

As your project grows, you realize that your application will eventually need to scale. Some services will run on different servers, possibly in different data centers. How will they communicate? It’s time to build a city: a network that spans multiple hosts.

You decide to use Docker Swarm, a tool that lets you manage a cluster of Docker hosts. You create an overlay network, a mystical web that allows containers across different servers to communicate as if they were right next to each other.
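Overlay networks and swarm services only work once swarm mode is active, so on the manager node the first step would be something like the following (a one-time command; the defaults are fine for a single-node experiment):

docker swarm init
docker node ls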

docker network create -d overlay ecommerce-overlay
docker service create --name auth-service --network ecommerce-overlay auth-service
docker service create --name inventory-service --network ecommerce-overlay inventory-service

Now, no matter where your containers are running, they can still talk to each other. It’s like giving each container a magic phone that works anywhere in the world.

docker network create -d overlay ecommerce-overlay

  • Creates an overlay network named ecommerce-overlay. Overlay networks are used for multi-host communication between containers in a Docker Swarm cluster.

docker service create --name auth-service --network ecommerce-overlay auth-service

  • Deploys the auth-service as a service on the ecommerce-overlay network. Services are used in Docker Swarm to manage containers across multiple hosts.

docker service create --name inventory-service --network ecommerce-overlay inventory-service

  • Deploys the inventory-service as a service on the ecommerce-overlay network, allowing it to communicate with the auth-service even if they are running on different physical or virtual machines.

The Treasure Chest of Data

Your services are talking, but they need to remember things, like user data and inventory levels. Enter Docker volumes, the treasure chests where your containers can store their precious data.

For your inventory-service, you create a volume to store the inventory data and re-create the container with the volume mounted:

docker volume create inventory-data
docker run -d --name inventory-service --network ecommerce-network -v inventory-data:/app/data inventory-service

Now, even if your inventory-service container is destroyed and replaced, anything it writes to /app/data remains safe in the inventory-data volume. It’s like having a secure vault where you keep all your valuables.

docker volume create inventory-data

  • Creates a named Docker volume called inventory-data. Named volumes persist data even if the container is removed, and they can be shared between containers.

docker run -d --name inventory-service --network ecommerce-network -v inventory-data:/app/data inventory-service

  • -v inventory-data:/app/data: Mounts the inventory-data volume to the /app/data directory inside the container. Any data written to /app/data inside the container is stored in the inventory-data volume.
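A simple way to convince yourself of this persistence is to destroy the container, check that the volume is still there, and start a fresh container against the same volume; any files under /app/data carry straight over:

docker rm -f inventory-service
docker volume inspect inventory-data
docker run -d --name inventory-service --network ecommerce-network -v inventory-data:/app/data inventory-service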

The Hidden Pathway

Sometimes, you need to work directly with files on your host machine, like when debugging or developing. You create a bind mount, a secret pathway between your host and the container.

docker run -d --name dev-service --network ecommerce-network -p 5002:5002 -v $(pwd)/data:/app/data dev-service

Now, as you make changes to files in your host’s data directory, those changes are instantly reflected in your container. It’s like having a secret door in your house that opens directly into your office at work.

-v $(pwd)/data:/app/data:

  • This creates a bind mount, where the data directory in the current working directory on the host ($(pwd)/data) is mounted to /app/data inside the container. Changes made to files in the data directory on the host are reflected inside the container and vice versa. This is particularly useful for development, as it allows you to edit files on your host and see the changes immediately inside the running container.
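For example, with the dev-service container running and port 5002 published as above, you can write a file through the API and immediately see it appear in the host’s data directory (hello.txt is just an illustrative name):

curl -X POST http://localhost:5002/files/hello.txt \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello from the container"}'

cat data/hello.txt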

The Seamless City

As your application grows, Docker Compose comes into play. It’s like the city planner, helping you manage all the roads (networks) and buildings (containers) in your bustling metropolis. With a simple docker-compose.yml file, you define your entire application stack:

version: '3'
services:
  auth-service:
    image: auth-service
    networks:
      - ecommerce-network
  inventory-service:
    image: inventory-service
    networks:
      - ecommerce-network
    volumes:
      - inventory-data:/app/data

networks:
  ecommerce-network:

volumes:
  inventory-data:

  1. version: '3': Specifies the version of the Docker Compose file format.
  2. services:: Defines the services (containers) that make up your application.
    • auth-service:: Defines the auth-service container.
      • image: auth-service: Specifies the Docker image to use for this service.
      • networks:: Specifies the networks this service is connected to.
    • inventory-service:: Defines the inventory-service container.
      • volumes:: Specifies the volumes to mount. Here, the inventory-data volume is mounted to /app/data inside the container.
  3. networks:: Defines the networks used by the services. ecommerce-network is the custom bridge network created for communication between the services.
  4. volumes:: Defines the volumes used by the services. inventory-data is a named volume used by the inventory-service.

Now, you can start your entire city with a single command:

docker-compose up

Everything springs to life: services find each other, data is stored securely, and your city of containers runs like a well-oiled machine.
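In day-to-day use you would typically run the stack in the background, check on it, and tear it down when you are done, for example:

docker-compose up -d
docker-compose ps
docker-compose logs -f inventory-service
docker-compose down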

The Tale of the Magic Kitchen: A Docker Architecture

Setting the Scene

Once upon a time in the bustling town of Pettai, there was a famous restaurant called ArRahman Hotel. This wasn’t just any ordinary restaurant; it had a Magic Kitchen. The Magic Kitchen had the ability to create different kinds of meals almost instantly, without any confusion or delay. Let’s explore how the restaurant achieved this magical feat.

The Magic Kitchen (Docker Engine)

The heart of the restaurant was the Magic Kitchen, which had a unique power. Just like the Docker Engine, it could take orders and prepare meals consistently and efficiently. The kitchen knew how to handle everything without mixing ingredients or creating chaos, no matter how many orders came in.

The Docker Engine comprises dockerd (the Docker daemon), the Docker client (docker), a REST API, the container runtime, image management, networking, volume management, security mechanisms, and plugins & extensions.
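You can see several of these pieces on your own machine: docker version lists the client and the server-side components (Docker Engine, containerd, runc), and docker info summarizes the storage driver, networks, and plugins in use:

docker version
docker info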

The Recipe Book (Docker Images)

In the Magic Kitchen, the chefs relied on a special Recipe Book. This Recipe Book contained detailed instructions on how to prepare each dish. These instructions are like Docker Images.

Each image is a blueprint for creating a meal (or application), containing everything needed to cook it up: the ingredients, the cooking method, and any special tools required.

The Prepped Ingredients (Docker Containers)

When an order came in, the chefs would use the Recipe Book to gather prepped ingredients from the pantry. These ingredients are similar to Docker Containers. Containers are the actual meals created from the recipes, ready to be served. They are isolated from each other, ensuring that the flavors (or code) don’t mix and cause unwanted results.

The Pantry (Docker Registry)

The Magic Kitchen had a vast pantry where all the ingredients were stored. This pantry represents the Docker Registry. It holds all the images (recipes) that the kitchen might need, ready to be pulled and used whenever required. The registry ensures that every meal is prepared with the correct ingredients.

The Chefs (Docker Daemon)

The chefs in the Magic Kitchen are like the Docker Daemon. They are responsible for reading the recipes, pulling the necessary ingredients from the pantry, and preparing the meals in containers. They handle the entire process, ensuring everything runs smoothly and efficiently.

The Customers (Developers and Users)

The patrons of ArRahman’s are the Developers and Users. They come to the restaurant with various demands, and the kitchen satisfies them quickly and reliably. Developers can focus on creating new recipes without worrying about the cooking process, while users enjoy consistent and delicious meals every time.

The Special Dining Areas (Docker Networks and Volumes)

ArRahman also had special dining areas with unique themes and settings, ensuring each meal was experienced perfectly. These are like Docker Networks and Volumes. Networks allow different containers to communicate, much like guests chatting over dinner. Volumes store data persistently, like special utensils or decor kept for regular guests.

So, at last, the Docker architecture maps neatly onto the Magic Kitchen: the Docker Engine is the kitchen, images are the recipe book, containers are the prepared meals, the registry is the pantry, the daemon is the team of chefs, developers and users are the customers, and networks and volumes are the dining areas that tie the whole experience together.