
Postgres – Write-Ahead Logging (WAL) in PostgreSQL

16 November 2024 at 07:06

Write-Ahead Logging (WAL) is a fundamental feature of PostgreSQL, ensuring data integrity and facilitating critical functionalities like crash recovery, replication, and backup.

This series of experiments explores WAL in detail: its importance, how it works, and examples that demonstrate its usage.

What is Write-Ahead Logging (WAL)?

WAL is a logging mechanism where changes to the database are first written to a log file before being applied to the actual data files. This ensures that in case of a crash or unexpected failure, the database can recover and replay these logs to restore its state.

A fair question!

Why do we need WAL when we already take periodic backups?

Write-Ahead Logging (WAL) is critical even when periodic backups are in place because it complements backups to provide data consistency, durability, and flexibility in the following scenarios.

1. Crash Recovery

  • Why It’s Important: Periodic backups only capture the database state at specific intervals. If a crash occurs after the latest backup, all changes made since that backup would be lost.
  • Role of WAL: WAL ensures that any committed transactions not yet written to data files (due to PostgreSQL’s lazy-writing behavior) are recoverable. During recovery, PostgreSQL replays the WAL logs to restore the database to its last consistent state, bridging the gap between the last checkpoint and the crash.

Example:

  • Backup Taken: At 12:00 PM.
  • Crash Occurs: At 1:30 PM.
  • Without WAL: All changes after 12:00 PM are lost.
  • With WAL: All changes up to 1:30 PM are recovered.
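
To watch the WAL position advance as changes are committed, you can query a few built-in functions (available in PostgreSQL 10 and later); a quick sketch:

-- The LSN (Log Sequence Number) marks the current write position in the WAL
SHOW wal_level;                                -- 'replica' by default
SELECT pg_current_wal_lsn();                   -- current WAL write position
SELECT pg_walfile_name(pg_current_wal_lsn());  -- WAL segment file holding it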

2. Point-in-Time Recovery (PITR)

  • Why It’s Important: Periodic backups restore the database to the exact time of the backup. However, this may not be sufficient if you need to recover to a specific point, such as just before a mistake (e.g., accidental data deletion).
  • Role of WAL: WAL records every change, enabling you to replay transactions up to a specific time. This allows fine-grained recovery beyond what periodic backups can provide.

Example:

  • Backup Taken: At 12:00 AM.
  • Mistake Made: At 9:45 AM, an important table is accidentally dropped.
  • Without WAL: Restore only to 12:00 AM, losing 9 hours and 45 minutes of data.
  • With WAL: Restore to 9:44 AM, recovering all valid changes except the accidental drop.
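
A minimal sketch of the settings that make this possible; the archive path and target timestamp below are placeholders, and on PostgreSQL 12+ the recovery settings also live in postgresql.conf:

# postgresql.conf - enable WAL archiving on the primary
wal_level = replica
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'

# recovery settings - restore the 12:00 AM base backup, then replay
# archived WAL up to just before the accidental drop
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2024-11-16 09:44:00'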

3. Replication and High Availability

  • Why It’s Important: In a high-availability setup, replicas must stay synchronized with the primary database to handle failovers. Periodic backups cannot provide real-time synchronization.
  • Role of WAL: WAL enables streaming replication by transmitting logs to replicas, ensuring near real-time synchronization.

Example:

  • A primary database sends WAL logs to replicas as changes occur. If the primary fails, a replica can quickly take over without data loss.
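
On the primary, the built-in pg_stat_replication view lets you watch this in action; a quick sketch:

-- One row per connected replica; comparing sent_lsn with replay_lsn
-- shows how far each replica lags behind the primary
SELECT client_addr, state, sent_lsn, replay_lsn
FROM pg_stat_replication;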

4. Handling Incremental Changes

  • Why It’s Important: Periodic backups store complete snapshots of the database, which can be time-consuming and resource-intensive. They also do not capture intermediate changes.
  • Role of WAL: WAL allows incremental updates by recording only the changes made since the last backup or checkpoint. This is crucial for efficient data recovery and backup optimization.

5. Ensuring Data Durability

  • Why It’s Important: Even during normal operations, a database crash (e.g., power failure) can occur. Without WAL, transactions committed by users but not yet flushed to disk are lost.
  • Role of WAL: WAL ensures durability by logging all changes before acknowledging transaction commits. This guarantees that committed transactions are recoverable even if the system crashes before flushing the changes to data files.

6. Supporting Hot Backups

  • Why It’s Important: For large, active databases, taking a backup while the database is running can result in inconsistent snapshots.
  • Role of WAL: WAL ensures consistency by recording changes that occur during the backup process. When replayed, these logs synchronize the backup, ensuring it is valid and consistent.
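
This is the mechanism pg_basebackup builds on; a sketch of taking a consistent online backup (the destination path is a placeholder):

# Copies the data directory while the server keeps running; -X stream also
# streams the WAL generated during the copy, so the backup is self-consistent
pg_basebackup -D /mnt/backups/base -X stream -P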

7. Debugging and Auditing

  • Why It’s Important: Periodic backups are static snapshots and don’t provide a record of what happened in the database between backups.
  • Role of WAL: WAL contains a sequential record of all database modifications, which can help in debugging issues or auditing transactions.
Feature                 | Periodic Backups                  | Write-Ahead Logging
Crash Recovery          | Limited to the last backup        | Ensures full recovery to the crash point
Point-in-Time Recovery  | Restores only to the backup time  | Allows recovery to any specific point
Replication             | Not supported                     | Enables real-time replication
Efficiency              | Full snapshot                     | Incremental changes
Durability              | Relies on backup frequency        | Guarantees transaction durability

In upcoming sessions, we will experiment with each of these failure scenarios to build a deeper understanding.

uv- A faster alternative to pip and pip-tools

26 April 2024 at 17:52

Introduction

If you’re a Python developer, you’re probably familiar with pip and pip-tools, the go-to tools for managing Python packages. However, did you know that there’s a faster alternative that can save you time and improve your workflow?

Meet uv

uv is a package installer for Python, written in Rust 🦀, which makes installing packages blazingly fast compared to pip and pip-tools.

It is also free and open source, developed by astral-sh, and with around 11.3k stars on GitHub it has become a trending alternative package manager for Python.

pip vs uv

astral-sh claims that uv installs Python packages much faster than poetry (another Python package manager) and pip-compile.

Image courtesy: astral-sh ( Package Installation )

It can also create a virtual environment far faster than python3 -m venv or virtualenv.

Image courtesy: astral-sh ( Virtual Environment )

My experience and Benchmarks

Nowadays, I am using uv as the package manager for my side projects. It feels great for developing Python applications, and it is definitely useful when containerizing Python applications with Docker.

uv has made my life easier: its installation speed suits the repetitive build-and-deploy cycle of Docker containers and keeps those builds hassle-free.

Here is my comparison of pip and uv. Let's start with pip.

creating virtual environment with pip

The screenshot above shows that it takes 3.84 seconds, approximately 4 seconds, to create a virtual environment, whereas

creating Virtual environment using uv

uv takes just 0.01 seconds to create a virtual environment. Next, let's install packages such as fastapi and langchain at the same time, which pull in more dependencies than most projects I have worked with.

pip install fastapi langchain
Installation of fastapi and langchain using pip

This takes around 22.5 seconds, which counts as fast these days 😂, though installation can feel even slower at crucial moments. Now let's check uv.

Installation of langchain and fastapi using uv

uv installs langchain and fastapi together in a blazing 0.12 seconds. 🤯💥

That is what made me adopt uv as my Python package manager for my recent projects.

uv Installation and usage

First, run the installer command. On Linux:

curl --proto '=https' --tlsv1.2 -LsSf https://github.com/astral-sh/uv/releases/download/0.1.38/uv-installer.sh | sh

on Windows,

powershell -c "irm https://github.com/astral-sh/uv/releases/download/0.1.38/uv-installer.ps1 | iex"

For Mac users, download the official binaries provided by astral-sh, given here.

Virtual Environment creation

To create a virtual environment for Python, use

uv venv <environment-name>

replacing <environment-name> with any name you wish.

Package Installation

For package installation, use

uv pip install <package-name>
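
Putting the pieces together, a typical workflow might look like this (assuming uv is already installed; the environment name .venv is just a convention):

uv venv .venv                      # create a virtual environment
source .venv/bin/activate          # activate it (Linux/macOS)
uv pip install fastapi langchain   # install packages into it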

Conclusion

In this blog post, we learned about the uv package manager and how it makes Python workflows faster, speeds up container builds, and eases deployment.

To know more about me, check out my GitHub and LinkedIn.


uv- A faster alternative to pip and pip-tools was originally published in Towards Dev on Medium, where people are continuing the conversation by highlighting and responding to this story.

Docker: Creating and Uploading a Docker Image to Docker Hub

By: Hariharan
25 September 2024 at 20:00


In this post, I walk through everything I did, using what I learned in the Docker class, from building a Docker image to uploading it to Docker Hub.

Creating a Docker Hub account

Go to https://app.docker.com/signup and sign in with your Google account (you can also use the other login options).

After a successful login, go to https://hub.docker.com by clicking the Docker Hub link.

Click the Docker Hub Link

Before we can push an image to Docker Hub, we need to create a repository to push it to.

Once the repository is created, we can build the Docker image on our machine and then push it.

Building a Docker image on the local machine

Before building the image, I first stop and remove the old Docker containers to free up some memory (since I am short on memory).

docker rm $(docker ps -aq)

Then it's time to start writing the Dockerfile.

To build the Docker image, let's first pull the base image we depend on and get it ready.

Since I want my Docker image to be as small as possible, I am using python3-alpine.

https://hub.docker.com/r/activestate/python3-alpine

Verifying the installation

We can verify our installation using the commands above.

Writing the Dockerfile and building the Docker image

# Choose python3-alpine as the base image
FROM activestate/python3-alpine:latest
# Set the working directory
WORKDIR ./foss-event-aggregator
# Copy the project files into the container
COPY ./foss-event-aggregator ./foss-event-aggregator
# Dev dependencies (disabled for now)
# RUN ["pip3","install","-r","./foss-event-aggregator/dev-requirements.txt"]
# Install the system libraries needed to build the Python dependencies
RUN ["apk","add","libxml2-dev","libxslt-dev","python-dev"]
# Install the project's Python dependencies
RUN ["pip3","install","-r","./foss-event-aggregator/requirements.txt"]

CMD ["python","eventgator.py"]

While writing the Dockerfile, check whether all the required dependencies are installed correctly; whenever an error appears, adjust the Dockerfile as needed and rebuild.

The Docker image foss-event-aggregator was built successfully.

The built image has been tested; now it can be pushed to Docker Hub.

Pushing to Docker Hub

After testing the image, tag it with the repository name:

docker image tag foss-event-aggregator:v1 itzmrevil/foss-events-aggregator:v1

After tagging, Docker Hub will accept the push only if you log in to Docker from the CLI first.

To log in to Docker, run

docker login

and follow the steps shown in the terminal.

After logging in, run

docker push itzmrevil/foss-events-aggregator:v1

to push the Docker image.

The uploaded image can be viewed on Docker Hub at https://hub.docker.com/repository/docker/itzmrevil/foss-events-aggregator/general

Success! Success!! Success!!!

Docker: Virtual Machines

By: Hariharan
25 September 2024 at 16:54


To run a web application on a computer using four supporting applications, we must satisfy the hardware requirements those applications depend on, such as the processor (CPU), memory (RAM), and storage.

At times we also need to scale these requirements up according to the number of users and the level of usage.

If too many users are allowed onto a single machine, heavy usage risks bringing the websites down. If we avoid this by using separate machines, hardware beyond the actual need is left over and remains under-utilized.

For example, suppose that running the following four applications

  • Apache web server
  • GraphQL engine
  • PostgreSQL database system
  • Express web server

on one machine requires 4 GB of RAM, 2 CPU cores, and 250 GB of storage.

Suppose we have a machine with 16 GB of RAM, 16 CPU cores, and 1000 GB of storage; the virtual machine concept lets us set up two such installations on it and test them.

So, when one computer's hardware is partitioned according to requirements, turning a single computer into several computers for testing and use, those computers are what we call virtual machines.

Docker: Meet & Greet – Introductory Class

By: Hariharan
16 September 2024 at 18:48


Yesterday (15-09-2024), right after the KanchiLUG meetup, I joined the Docker class.

After everyone in the class introduced themselves, Jafer explained Docker's characteristics.

He then explained how to install Docker on Windows.

A few attendees ran into errors while installing Docker; we fixed those and wrapped up the class.

HAProxy EP 9: Load Balancing with Weighted Round Robin

11 September 2024 at 14:39

Load balancing helps distribute client requests across multiple servers to ensure high availability, performance, and reliability. Weighted Round Robin Load Balancing is an extension of the round-robin algorithm, where each server is assigned a weight based on its capacity or performance capabilities. This approach ensures that more powerful servers handle more traffic, resulting in a more efficient distribution of the load.

What is Weighted Round Robin Load Balancing?

Weighted Round Robin Load Balancing assigns a weight to each server. The weight determines how many requests each server should handle relative to the others. Servers with higher weights receive more requests compared to those with lower weights. This method is useful when backend servers have different processing capabilities or resources.

Step-by-Step Implementation with Docker

Step 1: Create the Flask Applications

We’ll use the same three Flask applications (app1.py, app2.py, and app3.py) as in previous examples.

  • Flask App 1 (app1.py):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 1!"

@app.route("/data")
def data():
    return "Data from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)

  • Flask App 2 (app2.py):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 2!"

@app.route("/data")
def data():
    return "Data from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)

  • Flask App 3 (app3.py):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 3!"

@app.route("/data")
def data():
    return "Data from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

Step 2: Create Dockerfiles for Each Flask Application

Create Dockerfiles for each of the Flask applications:

  • Dockerfile for Flask App 1 (Dockerfile.app1):

# Use the official Python image from Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the application file into the container
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

  • Dockerfile for Flask App 2 (Dockerfile.app2):

FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]

  • Dockerfile for Flask App 3 (Dockerfile.app3):

FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create the HAProxy Configuration File

Create an HAProxy configuration file (haproxy.cfg) to implement Weighted Round Robin Load Balancing


global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    server server1 app1:5001 weight 2 check
    server server2 app2:5002 weight 1 check
    server server3 app3:5003 weight 3 check

Explanation:

  • The balance roundrobin directive tells HAProxy to use the Round Robin algorithm; combined with the per-server weights below, it becomes Weighted Round Robin.
  • The weight option for each server specifies the weight associated with each server:
    • server1 (App 1) has a weight of 2.
    • server2 (App 2) has a weight of 1.
    • server3 (App 3) has a weight of 3.
  • Requests will be distributed based on these weights: App 3 will receive the most requests, App 2 the least, and App 1 will be in between.

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy):


# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80

Step 5: Create a docker-compose.yml File

To manage all the containers together, create a docker-compose.yml file

version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3


Explanation:

  • The docker-compose.yml file defines the services (app1, app2, app3, and haproxy) and their respective configurations.
  • HAProxy depends on the three Flask applications to be up and running before it starts.

Step 6: Build and Run the Docker Containers

Run the following command to build and start all the containers


docker-compose up --build

This command builds Docker images for all three Flask apps and HAProxy, then starts them.

Step 7: Test the Load Balancer

Open your browser or use curl to make requests to the HAProxy server


curl http://localhost/
curl http://localhost/data

Observation:

  • With Weighted Round Robin Load Balancing, you should see that requests are distributed according to the weights specified in the HAProxy configuration.
  • For example, App 3 should receive three times more requests than App 2, and App 1 should receive twice as many as App 2.
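
To verify the distribution empirically, here is a small sketch (not part of the original setup) that assumes the compose stack is running and the requests package is installed:

from collections import Counter

import requests

# Fire 60 requests through HAProxy and count which app answered each one;
# with weights 2/1/3 we expect roughly 20/10/30 responses respectively.
counts = Counter()
for _ in range(60):
    counts[requests.get("http://localhost/").text] += 1

for body, count in counts.most_common():
    print(f"{count:3d}  {body}")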

Conclusion

By implementing Weighted Round Robin Load Balancing with HAProxy, you can distribute traffic more effectively according to the capacity or performance of each backend server. This approach helps optimize resource utilization and ensures a balanced load across servers.

HAProxy EP 8: Load Balancing with Random Load Balancing

11 September 2024 at 14:23

Load balancing distributes client requests across multiple servers to ensure high availability and reliability. One of the simplest load balancing algorithms is Random Load Balancing, which selects a backend server randomly for each client request.

Although this approach does not consider server load or other metrics, it can be effective for less critical applications or when the goal is to achieve simplicity.

What is Random Load Balancing?

Random Load Balancing assigns incoming requests to a randomly chosen server from the available pool of servers. This method is straightforward and ensures that requests are distributed in a non-deterministic manner, which may work well for environments with equally capable servers and minimal concerns about server load or state.

Step-by-Step Implementation with Docker

Step 1: Create the Flask Applications

We’ll use the same three Flask applications (app1.py, app2.py, and app3.py) as in previous examples.

Flask App 1 – (app1.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 1!"

@app.route("/data")
def data():
    return "Data from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)


Flask App 2 – (app2.py)


from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 2!"

@app.route("/data")
def data():
    return "Data from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)

Flask App 3 – (app3.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 3!"

@app.route("/data")
def data():
    return "Data from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)


Step 2: Create Dockerfiles for Each Flask Application

Create Dockerfiles for each of the Flask applications:

  • Dockerfile for Flask App 1 (Dockerfile.app1):
# Use the official Python image from Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the application file into the container
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

  • Dockerfile for Flask App 2 (Dockerfile.app2):
FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]


  • Dockerfile for Flask App 3 (Dockerfile.app3):

FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create the HAProxy Configuration File

Create an HAProxy configuration file (haproxy.cfg):


global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance random(2)
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check

Explanation:

  • The balance random(2) directive tells HAProxy to use the Random load balancing algorithm.
  • The (2) argument makes HAProxy draw 2 servers at random for each request and choose the one with the fewer active connections. This adds a bit of load awareness to the random choice.
  • The server directives define the backend servers and their ports.
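
To build intuition for what the two draws buy you, here is a toy sketch of the “power of two choices” selection; it is purely illustrative and not HAProxy's actual implementation:

import random

# Pretend connection counts for the three backends
active_connections = {"app1": 0, "app2": 0, "app3": 0}

def pick_server():
    # Draw two distinct servers at random; keep the less-loaded one
    a, b = random.sample(list(active_connections), 2)
    return a if active_connections[a] <= active_connections[b] else b

for _ in range(5):
    server = pick_server()
    active_connections[server] += 1  # the chosen server gains a connection
    print(server, active_connections)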

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy):

# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80


Step 5: Create a docker-compose.yml File

To manage all the containers together, create a docker-compose.yml file:


version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3

Explanation:

  • The docker-compose.yml file defines the services (app1, app2, app3, and haproxy) and their respective configurations.
  • HAProxy depends on the three Flask applications to be up and running before it starts.

Step 6: Build and Run the Docker Containers

Run the following command to build and start all the containers:


docker-compose up --build

This command builds Docker images for all three Flask apps and HAProxy, then starts them.

Step 7: Test the Load Balancer

Open your browser or use curl to make requests to the HAProxy server:

curl http://localhost/
curl http://localhost/data

Observation:

  • With Random Load Balancing, each request should randomly hit one of the three backend servers.
  • Since the selection is random, you may not see a predictable pattern; however, the requests should be evenly distributed across the servers over a large number of requests.

Conclusion

By implementing Random Load Balancing with HAProxy, we’ve demonstrated a simple way to distribute traffic across multiple servers without relying on complex metrics or state information. While this approach may not be ideal for all use cases, it can be useful in scenarios where simplicity is more valuable than fine-tuned load distribution.

HAProxy EP 7: Load Balancing with Source IP Hash, URI – Consistent Hashing

11 September 2024 at 13:55

Load balancing helps distribute traffic across multiple servers, enhancing performance and reliability. One common strategy is Source IP Hash load balancing, which ensures that requests from the same client IP are consistently directed to the same server.

This method is particularly useful for applications requiring session persistence, such as shopping carts or user sessions. In this blog, we’ll implement Source IP Hash load balancing using Flask and HAProxy, all within Docker containers.

What is Source IP Hash Load Balancing?

Source IP Hash Load Balancing is a technique that uses a hash function on the client’s IP address to determine which server should handle the request. This guarantees that a particular client will always be directed to the same backend server, ensuring session persistence and stateful behavior.

Consistent Hashing: https://parottasalna.com/2024/06/17/why-do-we-need-to-maintain-same-hash-in-load-balancer/

Step-by-Step Implementation with Docker

Step 1: Create the Flask Applications

We'll create three separate Flask applications, one for each backend server.

Flask App 1 (app1.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)


Flask App 2 (app2.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)


Flask App 3 (app3.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

Each Flask app listens on a different port (5001, 5002, 5003).

Step 2: Dockerfiles for Each Flask Application

Dockerfile for Flask App 1 (Dockerfile.app1)

# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

Dockerfile for Flask App 2 (Dockerfile.app2)

FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]

Dockerfile for Flask App 3 (Dockerfile.app3)

FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create a configuration for HAProxy

global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance source
    hash-type consistent
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check

Explanation:

  • The balance source directive tells HAProxy to use Source IP Hashing as the load balancing algorithm.
  • The hash-type consistent directive ensures consistent hashing, which is essential for minimizing disruption when backend servers are added or removed.
  • The server directives define the backend servers and their ports.
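
As a mental model, here is a toy sketch of hashing a source IP to pick a backend; it is illustrative only. HAProxy's consistent variant goes further, arranging servers on a hash ring so that removing one server remaps only that server's clients:

import hashlib

servers = ["app1:5001", "app2:5002", "app3:5003"]

def server_for(client_ip: str) -> str:
    # Hash the client IP and map the digest onto the server list
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

for ip in ["10.0.0.1", "10.0.0.2", "10.0.0.1"]:
    print(ip, "->", server_for(ip))  # the same IP always lands on the same server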

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy)

# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80

Step 5: Create a docker-compose.yml File

To manage all the containers together, create a docker-compose.yml file

version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3

Explanation:

  • The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
  • Each Flask app is built from its respective Dockerfile and runs on its port.
  • HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.

Step 6: Build and Run the Docker Containers

Run the following commands to build and start all the containers:

# Build and run the containers
docker-compose up --build

This command will build Docker images for all three Flask apps and HAProxy and start them up in the background.

Step 7: Test the Load Balancer

Open your browser or use a tool like curl to make requests to the HAProxy server:

curl http://localhost

Observation:

  • With Source IP Hash load balancing, each unique IP address (e.g., your local IP) should always be directed to the same backend server.
  • If you access the HAProxy from different IPs (e.g., using different devices or by simulating different client IPs), you will see that requests are consistently sent to the same server for each IP.

For URI-based hashing, we only need to change the balancing directive:

global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance uri
    hash-type consistent
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check


Explanation:

  • The balance uri directive tells HAProxy to use URI Hashing as the load balancing algorithm.
  • The hash-type consistent directive ensures consistent hashing to minimize disruption when backend servers are added or removed.
  • The server directives define the backend servers and their ports.

HAProxy EP 6: Load Balancing With Least Connection

11 September 2024 at 13:32

Load balancing is crucial for distributing incoming network traffic across multiple servers, ensuring optimal resource utilization and improving application performance. In this blog, we’ll explore how to implement Least Connection load balancing using Flask as our backend application and HAProxy as our load balancer.

What is Least Connection Load Balancing?

Least Connection Load Balancing is a dynamic algorithm that distributes requests to the server with the fewest active connections at any given time. This method ensures that servers with lighter loads receive more requests, preventing any single server from becoming a bottleneck.

Step-by-Step Implementation with Docker

Step 1: Create the Flask Applications

We’ll create three separate Flask applications, one for each backend server.

Flask App 1 (app1.py) – slowness introduced by adding a sleep

from flask import Flask
import time

app = Flask(__name__)

@app.route("/")
def hello():
    time.sleep(5)
    return "Hello from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)


Flask App 2 (app2.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)


Flask App 3 (app3.py) – slowness introduced by adding a sleep

from flask import Flask
import time

app = Flask(__name__)

@app.route("/")
def hello():
    time.sleep(5)
    return "Hello from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

Each Flask app listens on a different port (5001, 5002, 5003).

Step 2: Dockerfiles for Each Flask Application

Dockerfile for Flask App 1 (Dockerfile.app1)

# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

Dockerfile for Flask App 2 (Dockerfile.app2)

FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]

Dockerfile for Flask App 3 (Dockerfile.app3)

FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create a configuration for HAProxy

global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance leastconn
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check

Explanation:

  • frontend http_front: Defines the entry point for incoming traffic. It listens on port 80.
  • backend servers: Specifies the servers HAProxy will distribute traffic across: the three Flask apps (app1, app2, app3). The balance leastconn directive selects the Least Connection load balancing algorithm.
  • server directives: Lists the backend servers with their IP addresses and ports. The check option allows HAProxy to monitor the health of each server.

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy)

# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80

Step 5: Create a docker-compose.yml File

To manage all the containers together, create a docker-compose.yml file

version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3

Explanation:

  • The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
  • Each Flask app is built from its respective Dockerfile and runs on its port.
  • HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.

Step 6: Build and Run the Docker Containers

Run the following commands to build and start all the containers:

# Build and run the containers
docker-compose up --build

This command will build Docker images for all three Flask apps and HAProxy and start them up in the background.

Once everything is up, HAProxy will route each new request to whichever backend currently has the fewest active connections.

Step 7: Test the Load Balancer

Open your browser or use a tool like curl to make requests to the HAProxy server:

curl http://localhost

With sequential requests, every server is idle when the next request arrives, so responses will still rotate between “Hello from Flask App 1!”, “Hello from Flask App 2!”, and “Hello from Flask App 3!”. The Least Connection strategy shows its effect under concurrent load, as in the sketch below.
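
Here is a small sketch of such a concurrent test (assuming the requests package is installed); the slow apps hold their connections open for 5 seconds, so most responses should come from App 2:

from collections import Counter
from concurrent.futures import ThreadPoolExecutor

import requests

def fetch(_):
    # Each worker holds a connection open for the duration of its request
    return requests.get("http://localhost/").text

# Fire 30 requests, 10 at a time, and count which app answered each one
with ThreadPoolExecutor(max_workers=10) as pool:
    counts = Counter(pool.map(fetch, range(30)))

for body, count in counts.most_common():
    print(f"{count:3d}  {body}")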

HAProxy EP 5: Load Balancing With Round Robin

11 September 2024 at 12:56

Load balancing is crucial for distributing incoming network traffic across multiple servers, ensuring optimal resource utilization and improving application performance. One of the simplest and most popular load balancing algorithms is Round Robin. In this blog, we’ll explore how to implement Round Robin load balancing using Flask as our backend application and HAProxy as our load balancer.

What is Round Robin Load Balancing?

Round Robin load balancing works by distributing incoming requests sequentially across a group of servers.

For example, the first request goes to Server A, the second to Server B, the third to Server C, and so on. Once all servers have received a request, the cycle repeats. This algorithm is simple and works well when all servers have similar capabilities.

Step-by-Step Implementation with Docker

Step 1: Create the Flask Applications

We’ll create three separate Flask applications, one for each backend server.

Flask App 1 (app1.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)


Flask App 2 (app2.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)


Flask App 3 (app3.py)


from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

Each Flask app listens on a different port (5001, 5002, 5003).

Step 2: Dockerfiles for Each Flask Application

Dockerfile for Flask App 1 (Dockerfile.app1)


# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

Dockerfile for Flask App 2 (Dockerfile.app2)


FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]

Dockerfile for Flask App 3 (Dockerfile.app3)


FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create a configuration for HAProxy

global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check

Explanation:

  • frontend http_front: Defines the entry point for incoming traffic. It listens on port 80.
  • backend servers: Specifies the servers HAProxy will distribute traffic across: the three Flask apps (app1, app2, app3). The balance roundrobin directive sets the Round Robin algorithm for load balancing.
  • server directives: Lists the backend servers with their IP addresses and ports. The check option allows HAProxy to monitor the health of each server.

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy)


# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80

Step 5: Create a docker-compose.yml File

To manage all the containers together, create a docker-compose.yml file


version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3

Explanation:

  • The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
  • Each Flask app is built from its respective Dockerfile and runs on its port.
  • HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.

Step 6: Build and Run the Docker Containers

Run the following commands to build and start all the containers:


# Build and run the containers
docker-compose up --build

This command will build Docker images for all three Flask apps and HAProxy and start them up in the background.

Once the stack is up and you start sending requests, you should see the responses alternating between “Hello from Flask App 1!”, “Hello from Flask App 2!”, and “Hello from Flask App 3!” as HAProxy uses the Round Robin algorithm to distribute requests.

Step 7: Test the Load Balancer

Open your browser or use a tool like curl to make requests to the HAProxy server:


curl http://localhost
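
A few sequential requests make the rotation visible; a tiny sketch assuming the requests package is installed:

import requests

# Six requests should cycle App 1 -> App 2 -> App 3 twice under Round Robin
for _ in range(6):
    print(requests.get("http://localhost/").text)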

HAProxy EP 4: Understanding ACL – Access Control List

10 September 2024 at 23:46

Imagine you are managing a busy highway with multiple lanes, and you want to direct specific types of vehicles to particular lanes: trucks to one lane, cars to another, and motorcycles to yet another. In the world of web traffic, this is similar to what Access Control Lists (ACLs) in HAProxy do—they help you direct incoming requests based on specific criteria.

Let’s dive into what ACLs are in HAProxy, why they are essential, and how you can use them effectively with some practical examples.

What are ACLs in HAProxy?

Access Control Lists (ACLs) in HAProxy are rules or conditions that allow you to define patterns to match incoming requests. These rules help you make decisions about how to route or manage traffic within your infrastructure.

Think of ACLs as powerful filters or guards that analyze incoming HTTP requests based on headers, IP addresses, URL paths, or other attributes. By defining ACLs, you can control how requests are handled—for example, sending specific traffic to different backends, applying security rules, or denying access under certain conditions.

Why Use ACLs in HAProxy?

Using ACLs offers several advantages:

  1. Granular Control Over Traffic: You can filter and route traffic based on very specific criteria, such as the content of HTTP headers, cookies, or request methods.
  2. Security: ACLs can block unwanted traffic, enforce security policies, and prevent malicious access.
  3. Performance Optimization: By directing traffic to specific servers optimized for certain types of content, ACLs can help balance the load and improve performance.
  4. Flexibility and Scalability: ACLs allow dynamic adaptation to changing traffic patterns or new requirements without significant changes to your infrastructure.

How ACLs Work in HAProxy

ACLs in HAProxy are defined in the configuration file (haproxy.cfg). The syntax is straightforward


acl <name> <criteria>

  • <name>: The name you give to your ACL rule, which you will use to reference it in further configuration.
  • <criteria>: The condition or match pattern, such as a path, header, method, or IP address.

When evaluated against a request, an ACL returns either true or false.

Examples of ACLs in HAProxy

Let’s look at some practical examples to understand how ACLs work.

Example 1: Routing Traffic Based on URL Path

Suppose you have a web application that serves both static and dynamic content. You want to route all requests for static files (like images, CSS, and JavaScript) to a server optimized for static content, while all other requests should go to a dynamic content server.

Configuration:


frontend http_front
    bind *:80
    acl is_static path_beg /static
    use_backend static_backend if is_static
    default_backend dynamic_backend

backend static_backend
    server static1 127.0.0.1:5001 check

backend dynamic_backend
    server dynamic1 127.0.0.1:5002 check

  • ACL Definition: acl is_static path_beg /static : checks if the request URL starts with /static.
  • Usage: use_backend static_backend if is_static routes the traffic to the static_backend if the ACL is_static matches. All other requests are routed to the dynamic_backend.

Example 2: Blocking Traffic from Specific IP Addresses

Let’s say you want to block traffic from a range of IP addresses that are known to be malicious.

Configuration:

frontend http_front
    bind *:80
    acl block_ip src 192.168.1.0/24
    http-request deny if block_ip
    default_backend web_backend

backend web_backend
    server web1 127.0.0.1:5003 check


ACL Definition: acl block_ip src 192.168.1.0/24 defines an ACL that matches any source IP from the range 192.168.1.0/24.

Usage: http-request deny if block_ip denies the request if it matches the block_ip ACL.

Example 3: Redirecting Traffic Based on Request Method

You might want to redirect all POST requests to a different backend for further processing.

Configuration:


frontend http_front
    bind *:80
    acl is_post_method method POST
    use_backend post_backend if is_post_method
    default_backend general_backend

backend post_backend
    server post1 127.0.0.1:5006 check

backend general_backend
    server general1 127.0.0.1:5007 check

Example 4: Redirect Traffic Based on User Agent

Imagine you want to serve a different version of your website to mobile users versus desktop users. You can achieve this by using ACLs that check the User-Agent header in the HTTP request.

Configuration:


frontend http_front
    bind *:80
    acl is_mobile_user_agent req.hdr(User-Agent) -i -m sub Mobile
    use_backend mobile_backend if is_mobile_user_agent
    default_backend desktop_backend

backend mobile_backend
    server mobile1 127.0.0.1:5008 check

backend desktop_backend
    server desktop1 127.0.0.1:5009 check

ACL Definition: acl is_mobile_user_agent req.hdr(User-Agent) -i -m sub Mobile checks if the User-Agent header contains the substring "Mobile" (case-insensitive).

Usage: use_backend mobile_backend if is_mobile_user_agent directs mobile users to mobile_backend and all other users to desktop_backend.

Example 5: Restrict Access to Admin Pages by IP Address

Let’s say you want to allow access to the /admin page only from a specific IP address or range, such as your company’s internal network.


frontend http_front
    bind *:80
    acl is_admin_path path_beg /admin
    acl is_internal_network src 192.168.10.0/24
    http-request deny if is_admin_path !is_internal_network
    default_backend web_backend

backend web_backend
    server web1 127.0.0.1:5015 check

Example with a Flask Application

Let’s see how you can use ACLs with a Flask application to enforce different rules.

Flask Application Setup

You have two Flask apps: app1.py for general requests and app2.py for special requests like form submissions.

app1.py

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return "Welcome to the main page!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5003)

app2.py:

from flask import Flask

app = Flask(__name__)

@app.route('/submit', methods=['POST'])
def submit_form():
    return "Form submitted successfully!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5004)


HAProxy Configuration with ACLs


frontend http_front
    bind *:80
    acl is_post_method method POST
    acl is_submit_path path_beg /submit
    use_backend post_backend if is_post_method is_submit_path
    default_backend general_backend

backend post_backend
    server app2 127.0.0.1:5004 check

backend general_backend
    server app1 127.0.0.1:5003 check

ACLs:

  • is_post_method checks for the POST method.
  • is_submit_path checks if the path starts with /submit.

Traffic Handling: Traffic is directed to post_backend if both ACLs match; otherwise, it goes to general_backend.
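
To exercise these rules, you can probe HAProxy with curl (assuming HAProxy and both Flask apps are running locally):

# Matches both is_post_method and is_submit_path -> routed to post_backend (app2)
curl -X POST http://localhost/submit

# A plain GET matches neither ACL -> falls through to general_backend (app1)
curl http://localhost/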

HAProxy EP 3: Sarah’s Adventure with L7 Load Balancing and HAProxy

10 September 2024 at 23:26

Meet Sarah, a backend developer at “BrightApps,” a fast-growing startup specializing in custom web applications. Recently, BrightApps launched a new service called “FitGuru,” a health and fitness platform that quickly gained traction. However, as the platform’s user base started to grow, the team noticed performance issues—page loads were slow, and users began to complain.

Sarah knew that simply scaling up their backend servers might not solve the problem. What they needed was a smarter way to handle incoming traffic and distribute it across their servers. That’s when she decided to dive into the world of Layer 7 (L7) load balancing with HAProxy.

Understanding L7 Load Balancing

Layer 7 load balancing operates at the Application Layer of the OSI model. Unlike Layer 4 (L4) load balancing, which only considers information from the Transport Layer (like IP addresses and ports), an L7 load balancer examines the actual content of the HTTP requests. This deeper inspection allows it to make more intelligent decisions on how to route traffic.

Here’s why Sarah chose L7 load balancing for “FitGuru”:

  1. Content-Based Routing: Sarah could route requests to different servers based on the URL path, HTTP headers, cookies, or even specific parameters in the request. For example, requests for video content could be directed to a server optimized for handling media, while API requests could go to a server focused on data processing.
  2. SSL Termination: The L7 load balancer could handle the SSL encryption and decryption, relieving the backend servers from this CPU-intensive task.
  3. Advanced Health Checks: Sarah could set up health checks that simulate real user traffic to ensure backend servers are actually serving content correctly, not just responding to pings.
  4. Enhanced Security: With L7, she could filter out malicious traffic more effectively by inspecting request contents, blocking suspicious patterns, and protecting the app from common web attacks.

Step 1: Sarah’s Plan with HAProxy as an HTTP Proxy

Sarah decided to configure HAProxy as an HTTP proxy. This way, it would operate at Layer 7 and provide advanced traffic management capabilities. She had a few objectives:

  • Route traffic based on the URL path to different servers.
  • Offload SSL termination to HAProxy.
  • Serve static files from specific backend servers and dynamic content from others.

Sarah started with a simple Flask application to test her configuration:

Flask Application Setup

Sarah created two basic Flask apps:

  1. Static Content Server (static_app.py):

from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route('/static/<path:filename>')
def serve_static(filename):
    return send_from_directory('static', filename)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)

This app served static content from a folder named static.

  2. Dynamic Content Server (dynamic_app.py):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Welcome to FitGuru!"

@app.route('/api/data')
def api_data():
    return {"data": "Some dynamic data"}

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5002)

This app handled dynamic requests like API endpoints and the home page.

Step 2: Configuring HAProxy for HTTP Proxy

Sarah then moved on to configure HAProxy. She created an HAProxy configuration file (haproxy.cfg) to route traffic based on URL paths


global
    log stdout format raw local0
    maxconn 4096

defaults
    mode http
    log global
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80

    acl is_static path_beg /static
    use_backend static_backend if is_static
    default_backend dynamic_backend

backend static_backend
    balance roundrobin
    server static1 127.0.0.1:5001 check

backend dynamic_backend
    balance roundrobin
    server dynamic1 127.0.0.1:5002 check

Explanation of the Configuration

  1. Frontend Configuration (http_front):
    • The frontend listens on port 80 (HTTP).
    • An ACL (is_static) is defined to identify requests for static content based on the URL path prefix /static.
    • Requests that match the is_static ACL are routed to the static_backend. All other requests are routed to the dynamic_backend.
  2. Backend Configuration:
    • The static_backend handles static content requests and uses a round-robin strategy to distribute traffic between the servers (in this case, just static1).
    • The dynamic_backend handles all other requests in a similar manner.

Step 3: Deploying HAProxy with Docker

Sarah decided to use Docker to deploy HAProxy quickly:

Dockerfile for HAProxy:


FROM haproxy:2.4

COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

Build and Run:


docker build -t haproxy-http .
docker run -d -p 80:80 -p 443:443 haproxy-http


This command runs HAProxy in a Docker container, listening on port 80.

Step 4: Testing the Setup

Now, it was time to test!

  1. Static Content Test:
    • Sarah visited http://localhost/static/logo.png. The L7 load balancer identified the /static path and routed the request to static_backend.
  2. Dynamic Content Test:
    • Visiting http://localhost or http://localhost/api/data confirmed that requests were routed to the dynamic_backend as expected.
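
The same checks can be scripted with curl (assuming a logo.png actually exists in the static folder):

curl http://localhost/static/logo.png   # matched by is_static -> static_backend
curl http://localhost/api/data          # no ACL match -> dynamic_backend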

The Result: A Smoother Experience for “FitGuru”

With L7 load balancing in place, “FitGuru” was now more responsive and could efficiently handle the incoming traffic surge:

  • Optimized Performance: Static content requests were efficiently served from servers dedicated to that purpose, while dynamic content was processed by more capable machines.
  • Enhanced Security: SSL termination was handled by HAProxy, and the backend servers were freed from CPU-intensive encryption tasks.
  • Flexible Traffic Management: Sarah could now easily add or modify rules to adapt to changing traffic patterns or requirements.

By implementing Layer 7 load balancing with HAProxy, Sarah provided “FitGuru” with a robust and scalable solution that ensured a seamless user experience, even during peak times. Now, she could confidently tackle the growing demands of their expanding user base, knowing the platform was built to handle whatever traffic came its way.

Layer 7 load balancing was more than just a tool; it was a strategy that allowed Sarah to understand, control, and optimize traffic in a way that best suited their application’s unique needs. And with HAProxy, she had all the flexibility and power she needed to keep “FitGuru” running smoothly.

HAProxy EP 2: TCP Proxy for Flask Application

10 September 2024 at 16:56

Meet Jafer, a backend engineer tasked with ensuring the new microservice they are building can handle high traffic smoothly. The microservice is a Flask application that needs to be accessed over TCP, and Jafer decided to use HAProxy to act as a TCP proxy to manage incoming traffic.

This guide will walk you through how Jafer sets up HAProxy to work as a TCP proxy for a sample Flask application.

Why Use HAProxy as a TCP Proxy?

HAProxy as a TCP proxy operates at Layer 4 (Transport Layer) of the OSI model. It forwards raw TCP connections from clients to backend servers without inspecting the contents of the packets. This is ideal for scenarios where:

  • You need to handle non-HTTP traffic, such as databases or other TCP-based applications.
  • You want to perform load balancing without application-level inspection.
  • Your services are using protocols other than HTTP/HTTPS.

At this layer, HAProxy cannot read the contents of the packets, but it can still identify connection-level details such as the client's IP address.

Step 1: Set Up a Sample Flask Application

First, Jafer created a simple Flask application that listens on a TCP port. Let’s create a file named app.py

from flask import Flask

app = Flask(__name__)

@app.route('/', methods=['GET'])
def home():
    return "Hello from Flask over TCP!"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)  # Run the app on port 5000


Step 2: Dockerize the Flask Application

To make the Flask app easy to deploy, Jafer decided to containerize it using Docker.

Create a Dockerfile

# Use an official Python runtime as a parent image
FROM python:3.9-slim

# Set the working directory
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install flask

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Run app.py when the container launches
CMD ["python", "app.py"]


To build and run the Docker container, use the following commands

docker build -t flask-app .
docker run -d -p 5000:5000 flask-app

This will start the Flask application on port 5000.

Step 3: Configure HAProxy as a TCP Proxy

Now, Jafer needs to configure HAProxy to act as a TCP proxy for the Flask application.

Create an HAProxy configuration file named haproxy.cfg

global
    log stdout format raw local0
    maxconn 4096

defaults
    mode tcp  # Operating in TCP mode
    log global
    option tcplog
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend tcp_front
    bind *:4000  # Bind to port 4000 for incoming TCP traffic
    default_backend flask_backend

backend flask_backend
    balance roundrobin  # Use round-robin load balancing
    server flask1 127.0.0.1:5000 check  # Proxy to Flask app running on port 5000

In this configuration:

  • Mode TCP: HAProxy is set to work in TCP mode.
  • Frontend: Listens on port 4000 and forwards incoming TCP traffic to the backend.
  • Backend: Contains a single server (flask1) where the Flask app is running.

Step 4: Run HAProxy with the Configuration

To start HAProxy with the above configuration, you can use Docker to run HAProxy in a container.

Create a Dockerfile for HAProxy

FROM haproxy:2.4

# Copy the HAProxy configuration file to the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

Build and run the HAProxy Docker container

docker build -t haproxy-tcp .
docker run -d -p 4000:4000 haproxy-tcp

This will start HAProxy on port 4000, configured to proxy TCP traffic to the Flask application on port 5000. Note that 127.0.0.1 inside the HAProxy container refers to the container itself; if the Flask app runs on the host (as above), start HAProxy with --network host (on Linux) or point the backend at an address the container can reach.

Step 5: Test the TCP Proxy Setup

To test the setup, open a web browser or use curl to send a request to the HAProxy server

curl http://localhost:4000/

You should see the response

Hello from Flask over TCP!

This confirms that HAProxy is successfully proxying TCP traffic to the Flask application.
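
Because HAProxy is running in TCP mode, you can also talk to it over a raw socket. Here is a minimal sketch using only Python's standard library (it assumes HAProxy is reachable on localhost:4000); HAProxy forwards the bytes to the Flask backend without parsing them as HTTP.

# tcp_probe.py - send a raw HTTP request over a plain TCP socket to HAProxy
import socket

with socket.create_connection(("localhost", 4000)) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

print(response.decode())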

Step 6: Scaling Up

If Jafer wants to scale the application to handle more traffic, he can add more backend servers to the haproxy.cfg file

backend flask_backend
    balance roundrobin
    server flask1 127.0.0.1:5000 check
    server flask2 127.0.0.1:5001 check

Jafer could run another instance of the Flask application on a different port (5001), and HAProxy would balance the TCP traffic between the two instances.
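
One way to run that second instance without duplicating the code is to make the port configurable. Below is a small variation on the earlier app.py; the PORT environment variable is an assumption for this sketch, not part of the original app.

# app.py - same app as before, with the port read from the environment so
# two instances can run side by side (PORT=5000 and PORT=5001)
import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Hello from Flask over TCP!"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=int(os.environ.get("PORT", 5000)))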

Conclusion

By configuring HAProxy as a TCP proxy, Jafer could efficiently manage and balance incoming traffic to their Flask application. This setup ensures scalability and reliability for any TCP-based service, not just HTTP-based ones.

HAProxy EP 1: Traffic Police for Web

9 September 2024 at 16:59

In the world of web applications, imagine you’re running a very popular pizza place. Every evening, customers line up for a delicious slice of pizza. But if your single cashier can’t handle all the orders at once, customers might get frustrated and leave.

What if you could have a system that ensures every customer gets served quickly and efficiently? Enter HAProxy, a tool that helps manage and balance the flow of web traffic so that no single server gets overwhelmed.

Here’s a straightforward guide to understanding HAProxy, installing it, and setting it up to make your web application run smoothly.

What is HAProxy?

HAProxy stands for High Availability Proxy. It’s like a traffic director for your web traffic. It takes incoming requests (like people walking into your pizza place) and decides which server (or pizza station) should handle each request. This way, no single server gets too busy, and everything runs more efficiently.

Why Use HAProxy?

  • Handles More Traffic: Distributes incoming traffic across multiple servers so no single one gets overloaded.
  • Increases Reliability: If one server fails, HAProxy directs traffic to the remaining servers.
  • Improves Performance: Ensures that users get faster responses because the load is spread out.

Installing HAProxy

Here’s how you can install HAProxy on a Linux system:

  1. Open a Terminal: You’ll need to access your command line interface to install HAProxy.
  2. Install HAProxy: Type the following command and hit enter

sudo apt-get update
sudo apt-get install haproxy

3. Check Installation: Once installed, you can verify that HAProxy is running by typing


sudo systemctl status haproxy

This command shows you the current status of HAProxy, ensuring it’s up and running.

Configuring HAProxy

HAProxy’s configuration file is where you set up how it should handle incoming traffic. This file is usually located at /etc/haproxy/haproxy.cfg. Let’s break down the main parts of this configuration file,

1. The global Section

The global section is like setting the rules for the entire pizza place. It defines general settings for HAProxy itself, such as how it should operate, what kind of logging it should use, and what resources it needs. Here’s an example of what you might see in the global section


global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660
    user haproxy
    group haproxy
    daemon

Let’s break it down line by line:

  • log /dev/log local0: This line tells HAProxy to send log messages to the system log at /dev/log and to use the local0 logging facility. Logs help you keep track of what’s happening with HAProxy.
  • log /dev/log local1 notice: Similar to the previous line, but it uses the local1 logging facility and sets the log level to notice, which is a type of log message indicating important events.
  • chroot /var/lib/haproxy: This line tells HAProxy to run in a restricted area of the file system (/var/lib/haproxy). It’s a security measure to limit access to the rest of the system.
  • stats socket /run/haproxy/admin.sock mode 660: This sets up a special socket (a kind of communication endpoint) for administrative commands. The mode 660 part defines the permissions for this socket, allowing specific users to manage HAProxy.
  • user haproxy: Specifies that HAProxy should run as the user haproxy. Running as a specific user helps with security.
  • group haproxy: Similar to the user directive, this specifies that HAProxy should run under the haproxy group.
  • daemon: This tells HAProxy to run as a background service, rather than tying up a terminal window.

2. The defaults Section

The defaults section sets up default settings for HAProxy’s operation and is like defining standard procedures for the pizza place. It applies default configurations to both the frontend and backend sections unless overridden. Here’s an example of a defaults section


defaults
    log     global
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

Here’s what each line means:

  • log global: Tells HAProxy to use the logging settings defined in the global section for logging.
  • option httplog: Enables HTTP-specific logging. This means HAProxy will log details about HTTP requests and responses, which helps with troubleshooting and monitoring.
  • option dontlognull: Prevents logging of connections that don’t generate any data (null connections). This keeps the logs cleaner and more relevant.
  • timeout connect 5000ms: Sets the maximum time HAProxy will wait when trying to connect to a backend server to 5000 milliseconds (5 seconds). If the connection takes longer, it will be aborted.
  • timeout client 50000ms: Defines the maximum time HAProxy will wait for data from the client to 50000 milliseconds (50 seconds). If the client doesn’t send data within this time, the connection will be closed.
  • timeout server 50000ms: Similar to timeout client, but it sets the maximum time to wait for data from the server to 50000 milliseconds (50 seconds).

3. Frontend Section

The frontend section defines how HAProxy listens for incoming requests. Think of it as the entrance to your pizza place.


frontend http_front
    bind *:80
    default_backend http_back

  • frontend http_front: This is a name for your frontend configuration.
  • bind *:80: Tells HAProxy to listen for traffic on port 80 (the standard port for web traffic).
  • default_backend http_back: Specifies where the traffic should be sent (to the backend section).

4. Backend Section

The backend section describes where the traffic should be directed. Think of it as the different pizza stations where orders are processed.


backend http_back
    balance roundrobin
    server app1 192.168.1.2:5000 check
    server app2 192.168.1.3:5000 check
    server app3 192.168.1.4:5000 check

  • backend http_back: This is a name for your backend configuration.
  • balance roundrobin: Distributes traffic evenly across servers.
  • server app1 192.168.1.2:5000 check: Specifies a server (app1) at IP address 192.168.1.2 on port 5000. The check option ensures HAProxy checks if the server is healthy before sending traffic to it.
  • server app2 and server app3: Additional servers to handle traffic (a hypothetical backend sketch follows below).
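
The guide doesn't show what runs on app1, app2, and app3. A hypothetical Flask backend like the sketch below, which reports its own hostname, makes it easy to see which server handled each request.

# app.py - hypothetical backend for app1/app2/app3; each instance returns
# its hostname so you can tell the servers apart
import socket
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return f"Hello from {socket.gethostname()}\n"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)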

Testing Your Configuration

After setting up your configuration, you’ll need to restart HAProxy to apply the changes:


sudo systemctl restart haproxy

To check if everything is working, you can use a web browser or a tool like curl to send requests to HAProxy and see if it correctly distributes them across your servers.
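
If the backends identify themselves (as in the hypothetical sketch earlier), a short loop makes the round-robin distribution visible. This sketch assumes HAProxy is reachable on port 80.

# round_robin_test.py - send a few requests through HAProxy and print
# which backend answered each one
import urllib.request

for i in range(6):
    with urllib.request.urlopen("http://localhost/") as resp:
        print(f"Request {i + 1}: {resp.read().decode().strip()}")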

Tasks – Docker

19 August 2024 at 10:09
  1. Install Docker on your local machine. Verify the installation by running the hello-world container.
  2. Pull the nginx image from Docker Hub and run it as a container. Map port 80 of the container to port 8080 of your host.
  3. Create a Dockerfile for a simple Node.js application that serves “Hello World” on port 3000. Build the Docker image with tag my-node-app and run a container. Below is the sample index.js file.

const express = require('express');
const app = express();

app.get('/', (req, res) => {
    res.send('Hello World');
});

app.listen(3000, () => {
    console.log('Server is running on port 3000');
});

4. Tag the Docker image my-node-app from Task 3 with a version tag v1.0.0.

5. Push the tagged image from Task 4 to your Docker Hub repository.

6. Run a container from the ubuntu image and start an interactive shell session inside it. You can run commands like ls, pwd, etc.

7. Create a Dockerfile for a Go application that uses multi-stage builds to reduce the final image size. The application should print “Hello Docker”. Sample Go code.


package main

import "fmt"

func main() {
    fmt.Println("Hello Docker")
}

8. Create a Docker volume and use it to persist data for a MySQL container. Verify that the data persists even after the container is removed. Try creating a test db.

9. Create a custom Docker network and run two containers (e.g., nginx and mysql) on that network. Verify that they can communicate with each other.

10. Create a docker-compose.yml file to define and run a multi-container Docker application with nginx as a web server and mysql as a database.

11. Scale the nginx service in the previous Docker Compose setup to run 3 instances.

12. Create a bind mount to share data between your host system and a Docker container running nginx. Modify a file on your host and see the changes reflected in the container.

13. Add a health check to a Docker container running a simple Node.js application. The health check should verify that the application is running and accessible.

Sample healthcheck API in Node.js:


const express = require('express');
const app = express();

// A simple route to return the status of the application
app.get('/health', (req, res) => {
    res.status(200).send('OK');
});

// Example main route
app.get('/', (req, res) => {
    res.send('Hello, Docker!');
});

// Start the server on port 3000
const port = 3000;
app.listen(port, () => {
    console.log(`App is running on http://localhost:${port}`);
});

14. Modify a Dockerfile to take advantage of Docker’s build cache, ensuring that layers that don’t change are reused.

15. Run a PostgreSQL database in a Docker container and connect to it using a database client from your host.
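
A hypothetical client sketch for this task (it assumes the container was started with docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres, and that psycopg2-binary is installed on the host):

# pg_client.py - connect to the containerized PostgreSQL from the host
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="postgres",
    user="postgres",
    password="secret",  # must match POSTGRES_PASSWORD above
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()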

16. Create a custom Docker network and run a Node.js application and a MongoDB container on the same network. The Node.js application should connect to MongoDB using the container name.


const mongoose = require('mongoose');

mongoose.connect('mongodb://mongodb:27017/mydatabase', {
  useNewUrlParser: true,
  useUnifiedTopology: true,
}).then(() => {
  console.log('Connected to MongoDB');
}).catch(err => {
  console.error('Connection error', err);
});

17. Create a docker-compose.yml file to set up a MEAN (MongoDB, Express.js, Angular, Node.js) stack with services for each component.

18. Use the docker stats command to monitor resource usage (CPU, memory, etc.) of running Docker containers.

docker run -d --name busybox1 busybox sleep 1000
docker run -d --name busybox2 busybox sleep 1000

19. Create a Dockerfile for a simple Python Flask application that serves “Hello World”.

20. Configure Nginx as a reverse proxy to forward requests to a Flask application running in a Docker container.

21. Use docker exec to run a command inside a running container.


docker run -d --name ubuntu-container ubuntu sleep infinity

22. Modify a Dockerfile to create and use a non-root user inside the container.

23. Use docker logs to monitor the output of a running container.

24. Use docker system prune to remove unused Docker data (e.g., stopped containers, unused networks).

25. Run a Docker container in detached mode and verify that it’s running in the background.

26. Configure a Docker container to use a different logging driver (e.g., json-file or syslog).

27. Use build arguments in a Dockerfile to customize the build process.


from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, Docker Build Arguments!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

28. Set CPU and memory limits for a Docker container (for busybox)

Docker Ep 12 – Cheatsheet

19 August 2024 at 01:27

Here’s a Docker cheat sheet that covers the most commonly used Docker commands, organized by categories.

Docker Basics

  • docker --version: Show the Docker version installed on your system.
  • docker info: Display system-wide information, including Docker version, number of containers, and images.
  • docker help: Get help on Docker commands.

Docker Images

  • docker images: List all Docker images on your system.
  • docker pull <image>: Download an image from a Docker registry (e.g., Docker Hub).
  • docker build -t <image_name> .: Build an image from a Dockerfile in the current directory and tag it with a name.
  • docker tag <image_id> <new_image_name>: Tag an image with a new name.
  • docker rmi <image>: Remove one or more images.
  • docker history <image>: Show the history of an image (layers).

Docker Containers

  • docker ps: List all running containers.
  • docker ps -a: List all containers (running and stopped).
  • docker run <image>: Run a container from an image.
  • docker run -d <image>: Run a container in detached mode (in the background).
  • docker run -it <image>: Run a container in interactive mode with a terminal.
  • docker run -p <host_port>:<container_port> <image>: Map a port from the host to the container.
  • docker stop <container>: Stop a running container.
  • docker start <container>: Start a stopped container.
  • docker restart <container>: Restart a running container.
  • docker rm <container>: Remove a stopped container.
  • docker logs <container>: View the logs of a container.
  • docker exec -it <container> <command>: Execute a command inside a running container (e.g., bash to open a shell).

Docker Networks

  • docker network ls: List all Docker networks.
  • docker network create <network_name>: Create a new Docker network.
  • docker network inspect <network_name>: View detailed information about a network.
  • docker network connect <network_name> <container>: Connect a container to a network.
  • docker network disconnect <network_name> <container>: Disconnect a container from a network.
  • docker network rm <network_name>: Remove a Docker network.

Docker Volumes

  • docker volume ls: List all Docker volumes.
  • docker volume create <volume_name>: Create a new Docker volume.
  • docker volume inspect <volume_name>: View detailed information about a volume.
  • docker volume rm <volume_name>: Remove a Docker volume.
  • docker run -v <volume_name>:<container_path> <image>: Mount a volume inside a container.

Docker Compose

  • docker-compose up: Start the services defined in a docker-compose.yml file.
  • docker-compose down: Stop and remove containers, networks, volumes, and images created by docker-compose up.
  • docker-compose build: Build or rebuild services defined in a docker-compose.yml file.
  • docker-compose ps: List containers managed by Docker Compose.
  • docker-compose logs: View logs for services managed by Docker Compose.
  • docker-compose exec <service> <command>: Execute a command in a running service.

Dockerfile Directives

  • FROM: Specifies the base image.
  • WORKDIR: Sets the working directory inside the container.
  • COPY: Copies files from the host to the container.
  • RUN: Executes a command in the container.
  • CMD: Specifies the command to run when the container starts.
  • EXPOSE: Specifies the port on which the container will listen.
  • ENV: Sets environment variables.
  • ENTRYPOINT: Configures the container to run as an executable.

Docker Cleanup Commands

  • docker system prune: Remove unused data (stopped containers, unused networks, dangling images, etc.).
  • docker container prune: Remove all stopped containers.
  • docker image prune: Remove unused images.
  • docker volume prune: Remove all unused volumes.
  • docker network prune: Remove all unused networks.

Miscellaneous

  • docker inspect <container_or_image>: Return low-level information on Docker objects (containers, images, volumes, etc.).
  • docker stats: Display a live stream of container(s) resource usage statistics.
  • docker top <container>: Display the running processes of a container.
  • docker cp <container>:<path> <local_path>: Copy files from a container to the host or vice versa.

Docker EP 11 – Docker Networking & Docker Volumes

19 August 2024 at 00:56

Alex is tasked with creating a new microservices-based web application for a growing e-commerce platform. The application needs to handle everything from user authentication to inventory management, and Alex decides to use Docker to containerize the different services.

Here are the services, each with its code and Dockerfile.

Auth Service (auth-service)

This service handles user authentication,


# auth_service.py
from flask import Flask, request, jsonify

app = Flask(__name__)

# Dummy user database
users = {
    "user1": "password1",
    "user2": "password2"
}

@app.route('/login', methods=['POST'])
def login():
    data = request.json
    username = data.get('username')
    password = data.get('password')
    
    if username in users and users[username] == password:
        return jsonify({"message": "Login successful"}), 200
    else:
        return jsonify({"message": "Invalid credentials"}), 401

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Dockerfile:

# Use the official Python image.
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install flask

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Define environment variable
ENV NAME auth-service

# Run app.py when the container launches
CMD ["python", "auth_service.py"]


Inventory Service (inventory-service)

This service manages inventory data. Save it as inventory_service.py

from flask import Flask, request, jsonify

app = Flask(__name__)

# Dummy inventory database
inventory = {
    "item1": {"name": "Item 1", "quantity": 10},
    "item2": {"name": "Item 2", "quantity": 5}
}

@app.route('/inventory', methods=['GET'])
def get_inventory():
    return jsonify(inventory), 200

@app.route('/inventory/<item_id>', methods=['POST'])
def update_inventory(item_id):
    data = request.json
    if item_id in inventory:
        inventory[item_id]["quantity"] = data.get("quantity")
        return jsonify({"message": "Inventory updated"}), 200
    else:
        return jsonify({"message": "Item not found"}), 404

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)


Dockerfile

# Use the official Python image.
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install flask

# Make port 5001 available to the world outside this container
EXPOSE 5001

# Define environment variable
ENV NAME inventory-service

# Run inventory_service.py when the container launches
CMD ["python", "inventory_service.py"]

Dev Service (dev-service)

This service could be a simple service used during development for testing or managing files. Save it as dev_service.py


from flask import Flask, request, jsonify
import os

app = Flask(__name__)

# Ensure the data directory (the volume/bind mount target) exists
os.makedirs('/app/data', exist_ok=True)

@app.route('/files', methods=['GET'])
def list_files():
    files = os.listdir('/app/data')
    return jsonify(files), 200

@app.route('/files/<filename>', methods=['GET'])
def read_file(filename):
    try:
        with open(f'/app/data/{filename}', 'r') as file:
            content = file.read()
        return jsonify({"filename": filename, "content": content}), 200
    except FileNotFoundError:
        return jsonify({"message": "File not found"}), 404

@app.route('/files/<filename>', methods=['POST'])
def write_file(filename):
    data = request.json.get("content", "")
    with open(f'/app/data/{filename}', 'w') as file:
        file.write(data)
    return jsonify({"message": f"File {filename} written successfully"}), 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5002)

Dockerfile


# Use the official Python image.
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install flask

# Make port 5002 available to the world outside this container
EXPOSE 5002

# Define environment variable
ENV NAME dev-service

# Run dev_service.py when the container launches
CMD ["python", "dev_service.py"]

Auth Service: http://localhost:5000/login (POST request with JSON {"username": "user1", "password": "password1"})

Inventory Service: http://localhost:5001/inventory (GET or POST request)

Dev Service:

  • List files: http://localhost:5002/files (GET request)
  • Read file: http://localhost:5002/files/<filename> (GET request)
  • Write file: http://localhost:5002/files/<filename> (POST request with JSON {"content": "Your content here"})
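
To exercise the login endpoint from the host, here is a minimal sketch using only Python's standard library; it assumes the auth-service container is running with port 5000 published, as in the docker run command below.

# login_test.py - POST credentials to the auth-service and print the result
import json
import urllib.request

payload = json.dumps({"username": "user1", "password": "password1"}).encode()
req = urllib.request.Request(
    "http://localhost:5000/login",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())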

The Lonely Container

You start by creating a simple Flask application for user authentication. After writing the code, you decide to containerize it using Docker.

docker build -t auth-service .
docker run -d -p 5000:5000 --name auth-service auth-service

The service is up and running, and you can access it at http://localhost:5000. But there’s one problem—it’s lonely. Your auth-service is a lone container in the vast sea of Docker networking. If you want to add more services, they need a way to communicate with each other.

  1. docker build -t auth-service .
  • This command builds a Docker image from the Dockerfile in the current directory (.) and tags it as auth-service.

2. docker run -d -p 5000:5000 --name auth-service auth-service

  • -d: Runs the container in detached mode (in the background).
  • -p 5000:5000: Maps port 5000 on the host to port 5000 in the container, making the Flask app accessible at http://localhost:5000.
  • --name auth-service: Names the container auth-service.
  • auth-service: The name of the image to run.

The Bridge of Communication

You decide to expand the application by adding a new inventory service. But how will these two services talk to each other? Enter the bridge network, a magical construct that allows containers to communicate within their own private world.

You create a user-defined bridge network to allow your containers to talk to each other by name rather than by IP address. (If the earlier auth-service container is still running, remove it first with docker rm -f auth-service, since container names must be unique.)

docker network create ecommerce-network
docker run -d --name auth-service --network ecommerce-network auth-service
docker run -d --name inventory-service --network ecommerce-network inventory-service

Now, your services are not lonely anymore. The auth-service can talk to the inventory-service simply by using its name, like calling a friend across the room. In your code, you can reference inventory-service by name to establish a connection, as the sketch after the command breakdown shows.

docker network create ecommerce-network

  • Creates a user-defined bridge network called ecommerce-network. This network allows containers to communicate with each other using their container names as hostnames.

docker run -d --name auth-service --network ecommerce-network auth-service

  • Runs the auth-service container on the ecommerce-network. The container can now communicate with other containers on the same network using their names.

docker run -d --name inventory-service --network ecommerce-network inventory-service

  • Runs the inventory-service container on the ecommerce-network. The auth-service container can now communicate with the inventory-service using the name inventory-service.
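
Inside the auth-service container, reaching the inventory service is now just a matter of using its container name as the hostname. Here is a minimal, hypothetical sketch (not part of the original service code); Docker's embedded DNS resolves inventory-service on the shared network.

# Hypothetical call from auth-service to inventory-service by container name
import json
import urllib.request

with urllib.request.urlopen("http://inventory-service:5001/inventory") as resp:
    print(json.loads(resp.read()))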

The City of Services

As your project grows, you realize that your application will eventually need to scale. Some services will run on different servers, possibly in different data centers. How will they communicate? It’s time to build a city—a network that spans multiple hosts.

You decide to use Docker Swarm, a tool that lets you manage a cluster of Docker hosts. You create an overlay network, a mystical web that allows containers across different servers to communicate as if they were right next to each other.

docker network create -d overlay ecommerce-overlay
docker service create --name auth-service --network ecommerce-overlay auth-service
docker service create --name inventory-service --network ecommerce-overlay inventory-service

Now, no matter where your containers are running, they can still talk to each other. It’s like giving each container a magic phone that works anywhere in the world.

docker network create -d overlay ecommerce-overlay

  • Creates an overlay network named ecommerce-overlay. Overlay networks are used for multi-host communication, typically in a Docker Swarm or Kubernetes environment.

docker service create --name auth-service --network ecommerce-overlay auth-service

  • Deploys the auth-service as a service on the ecommerce-overlay network. Services are used in Docker Swarm to manage containers across multiple hosts.

docker service create --name inventory-service --network ecommerce-overlay inventory-service

  • Deploys the inventory-service as a service on the ecommerce-overlay network, allowing it to communicate with the auth-service even if they are running on different physical or virtual machines.

The Treasure Chest of Data

Your services are talking, but they need to remember things, like user data and inventory levels. Enter Docker volumes, the treasure chests where your containers can store their precious data.

For your inventory-service, you create a volume to store all the inventory information,

docker volume create inventory-data
docker run -d --name inventory-service --network ecommerce-network -v inventory-data:/app/data inventory-service

Now, even if your inventory-service container is destroyed and replaced, the data remains safe in the inventory-data volume. It’s like having a secure vault where you keep all your valuables.

docker volume create inventory-data

  • Creates a named Docker volume called inventory-data. Named volumes persist data even if the container is removed, and they can be shared between containers.

docker run -d --name inventory-service --network ecommerce-network -v inventory-data:/app/data inventory-service

  • -v inventory-data:/app/data: Mounts the inventory-data volume to the /app/data directory inside the container. Any data written to /app/data inside the container is stored in the inventory-data volume.

The Hidden Pathway

Sometimes, you need to work directly with files on your host machine, like when debugging or developing. You create a bind mount, a secret pathway between your host and the container.

docker run -d --name dev-service --network ecommerce-network -v $(pwd)/data:/app/data dev-service

Now, as you make changes to files in your host’s data directory, those changes are instantly reflected in your container. It’s like having a secret door in your house that opens directly into your office at work.

-v $(pwd)/data:/app/data:

  • This creates a bind mount, where the data directory in the current working directory on the host ($(pwd)/data) is mounted to /app/data inside the container. Changes made to files in the data directory on the host are reflected inside the container and vice versa. This is particularly useful for development, as it allows you to edit files on your host and see the changes immediately inside the running container. A quick way to verify this from the host is sketched below.
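
One quick way to verify the pathway is through the dev-service file API. This sketch assumes you added -p 5002:5002 to the docker run command above so the service is reachable from the host; the written file also appears in ./data on the host thanks to the bind mount.

# files_demo.py - write a file through dev-service, then read it back
import json
import urllib.request

payload = json.dumps({"content": "hello from the host"}).encode()
req = urllib.request.Request(
    "http://localhost:5002/files/notes.txt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)

with urllib.request.urlopen("http://localhost:5002/files/notes.txt") as resp:
    print(resp.read().decode())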

The Seamless City

As your application grows, Docker Compose comes into play. It’s like the city planner, helping you manage all the roads (networks) and buildings (containers) in your bustling metropolis. With a simple docker-compose.yml file, you define your entire application stack,

version: '3'
services:
  auth-service:
    image: auth-service
    networks:
      - ecommerce-network
  inventory-service:
    image: inventory-service
    networks:
      - ecommerce-network
    volumes:
      - inventory-data:/app/data

networks:
  ecommerce-network:

volumes:
  inventory-data:

  1. version: '3': Specifies the version of the Docker Compose file format.
  2. services:: Defines the services (containers) that make up your application.
  • auth-service:: Defines the auth-service container.
    • image: auth-service: Specifies the Docker image to use for this service.
    • networks:: Specifies the networks this service is connected to.
  • inventory-service:: Defines the inventory-service container.
    • volumes:: Specifies the volumes to mount. Here, the inventory-data volume is mounted to /app/data inside the container.

3. networks:: Defines the networks used by the services. ecommerce-network is the custom bridge network created for communication between the services.

4. volumes:: Defines the volumes used by the services. inventory-data is a named volume used by the inventory-service.

Now, you can start your entire city with a single command,

docker-compose up

Everything springs to life: services find each other, data is stored securely, and your city of containers runs like a well-oiled machine.

Docker EP – 10: Let’s Dockerize a Flask Application

18 August 2024 at 11:49

Let’s develop a simple Flask application,

  1. Set up the project directory: Create a new directory for your Flask project.

mkdir flask-docker-app
cd flask-docker-app

2. Create a virtual environment (optional but recommended):


python3 -m venv venv
source venv/bin/activate

3. Install Flask


pip install Flask

4. Create a simple Flask app:

In the flask-docker-app directory, create a file named app.py with the following content,


from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, Dockerized Flask!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

5. Test the Flask app: Run the Flask application to ensure it’s working.

python app.py

Visit http://127.0.0.1:5000/ in your browser. You should see “Hello, Dockerized Flask!”.

Dockerize the Flask Application

  1. Create a Dockerfile: In the flask-docker-app directory, create a file named Dockerfile with the following content:

# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir Flask

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Define environment variable
ENV FLASK_APP=app.py

# Run app.py when the container launches
CMD ["python", "app.py"]

2. Create a .dockerignore file:

In the flask-docker-app directory, create a file named .dockerignore to ignore unnecessary files during the Docker build process:


venv
__pycache__
*.pyc
*.pyo

3. Build the Docker image:

In the flask-docker-app directory, run the following command to build your Docker image:


docker build -t flask-docker-app .

4. Run the Docker container:

Run the Docker container using the image you just built,

docker run -p 5000:5000 flask-docker-app

5. Access the Flask app in Docker: Visit http://localhost:5000/ in your browser. You should see “Hello, Dockerized Flask!” running in a Docker container.

You have successfully created a simple Flask application and Dockerized it. The Dockerfile allows you to package your app with its dependencies and run it in a consistent environment.

Maya : 5mb Storage on browser

17 August 2024 at 16:45

Maya, a developer working on a new website, was frustrated with slow load times caused by using cookies to store user data. One night, she discovered the Web Storage API, specifically Local Storage, which allowed her to store data directly in the browser without needing constant communication with the server.

She starts to explore,

The localStorage property of the window interface (the browser’s window object) allows you to access a storage object for the document’s origin; the stored data is saved across browser sessions.

  1. Data in local storage has no expiration date; it persists across browser sessions, even after the browser is closed, until the developer or the user explicitly clears it.
  2. Local storage only stores strings. So, if you intend to store objects, lists or arrays, you must convert them into a string using JSON.stringify().
  3. Local storage is available via the window.localStorage property.
  4. What’s interesting is that the data survives a page refresh (for sessionStorage) and even a full browser restart (for localStorage).

Excited by these characteristics, she decides to learn more about the localStorage methods.

setItem(key, value)

Functionality:

  • Add key and value to localStorage.
  • setItem returns undefined.
  • Every key, value pair stored is converted to a string. So it’s better to convert an object to string using JSON.stringify()

Examples:

  • For a simple key, value strings.
// setItem normal strings
window.localStorage.setItem("name", "maya");
  • In order to store objects, we need to convert them to JSON strings using JSON.stringify(); storing an object directly coerces it to the string "[object Object]".
// Storing an object without JSON.stringify (the pitfall)
const data = {
  "commodity": "apple",
  "price": 43
};
window.localStorage.setItem('commodity', data);
var result = window.localStorage.getItem('commodity');
console.log("Retrieved without stringify: " + result); // "[object Object]"

// Correct approach: stringify on write, parse on read
window.localStorage.setItem('commodity', JSON.stringify(data));
var parsed = JSON.parse(window.localStorage.getItem('commodity'));
console.log("Retrieved with stringify:", parsed);

getItem(key)

Functionality: This is how you get items from localStorage. If the given key is not present then it will return null.

Examples:

  • Get a simple key, value pair.

// setItem normal strings
window.localStorage.setItem("name", "goku");

// getItem 
const name = window.localStorage.getItem("name");
console.log("name from localstorage, "+name);

removeItem(key)

Functionality: Remove an item by key from localStorage. If the given key is not present, then it won’t do anything.

Examples:

  • After removing the value will be null.
// remove item 
window.localStorage.removeItem("commodity");
var result = window.localStorage.getItem('commodity');
console.log("Data after removing the key "+ result);

clear()

Functionality: Clears the entire localStorage
Examples:

  • Simple clear of localStorage
// clear
window.localStorage.clear();
console.log("length of local storage - after clear " + window.localStorage.length);

length

Functionality: length is a property (not a method); it returns the number of items stored.
Examples:

//length
console.log("length of local storage " + window.localStorage.length);

When to use Local Storage

  1. Data stored in Local Storage can be easily accessed by any third-party code running on the page.
  2. So it’s important that sensitive data is never stored in Local Storage.
  3. Local Storage can help in storing temporary data before it is pushed to the server.
  4. Always clear local storage once the operation is completed.

localStorage browser support

localStorage as a type of web storage is an HTML5 specification. It is supported by major browsers including IE8. To be sure the browser supports localStorage, you can check using the following snippet:

if (typeof(Storage) !== "undefined") {
    // Code for localStorage
} else {
    // No web storage support.
}

Maya is also curious to know where it gets stored.

Where is local storage saved?

Windows

  • Firefox: C:\Users\<username>\AppData\Roaming\Mozilla\Firefox\Profiles\<profile>\webappsstore.sqlite, or %APPDATA%\Mozilla\Firefox\Profiles\<profile>\webappsstore.sqlite
  • Chrome: %LocalAppData%\Google\Chrome\User Data\Default\Local Storage\

Linux

  • Firefox: ~/.mozilla/firefox/<profile>/webappsstore.sqlite
  • Chrome: ~/.config/google-chrome/Default/Local Storage/

Mac

  • Firefox: ~/Library/Application Support/Firefox/Profiles/<profile>/webappsstore.sqlite, or ~/Library/Mozilla/Firefox/Profiles/<profile>/webappsstore.sqlite
  • Chrome: ~/Library/Application Support/Google/Chrome/<profile>/Local Storage/, or ~/Library/Application Support/Google/Chrome/Default/Local Storage/

Simple Hands-On

https://syedjafer.github.io/localStrorage/index.html

Limitations:

  1. Do not store sensitive user information in localStorage.
  2. It is not a substitute for a server-based database, as information is only stored on the browser.
  3. localStorage is limited to 5MB across all major browsers.
  4. localStorage is quite insecure, as it has no form of data protection and can be accessed by any code on your web page.
  5. localStorage is synchronous, meaning each operation executes one after the other and can block the main thread.

Now, Maya is able to complete her quest of making her website faster, fully aware of its limitations!
