Locust is an excellent load testing tool, enabling developers to simulate concurrent user traffic on their applications. One of its powerful features is wait times, which simulate the realistic user think time between consecutive tasks. By customizing wait times, you can emulate user behavior more effectively, making your tests reflect actual usage patterns.
In this blog, we’ll cover,
What wait times are in Locust.
Built-in wait time options.
Creating custom wait times.
A full example with instructions to run the test.
What Are Wait Times in Locust?
In real-world scenarios, users don’t interact with applications continuously. After performing an action (e.g., submitting a form), they often pause before the next action. This pause is called a wait time in Locust, and it plays a crucial role in mimicking real-life user behavior.
Locust provides several ways to define these wait times within your test scenarios.
FastAPI App Overview
Here’s the FastAPI app that we’ll test,
from fastapi import FastAPI

# Create a FastAPI app instance
app = FastAPI()

# Define a route with a GET method
@app.get("/")
def read_root():
    return {"message": "Welcome to FastAPI!"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: str = None):
    return {"item_id": item_id, "q": q}
Locust Examples for FastAPI
1. Constant Wait Time Example
Here, we’ll simulate constant pauses between user requests
from locust import HttpUser, task, constant

class FastAPIUser(HttpUser):
    wait_time = constant(2)  # Wait for 2 seconds between requests

    @task
    def get_root(self):
        self.client.get("/")  # Simulates a GET request to the root endpoint

    @task
    def get_item(self):
        self.client.get("/items/42?q=test")  # Simulates a GET request with path and query parameters
2. Between Wait Time Example
Simulating random pauses between requests.
from locust import HttpUser, task, between

class FastAPIUser(HttpUser):
    wait_time = between(1, 5)  # Random wait time between 1 and 5 seconds

    @task(3)  # Weighted task: this runs 3 times more often
    def get_root(self):
        self.client.get("/")

    @task(1)
    def get_item(self):
        self.client.get("/items/10?q=locust")
3. Custom Wait Time Example
Using a custom wait time function to introduce more complex user behavior
import random
from locust import HttpUser, task

# Locust calls wait_time with the running user instance,
# so a custom wait time function must accept it as an argument.
def custom_wait(user):
    return max(1, random.normalvariate(3, 1))  # Normal distribution (mean: 3s, stddev: 1s)

class FastAPIUser(HttpUser):
    wait_time = custom_wait

    @task
    def get_root(self):
        self.client.get("/")

    @task
    def get_item(self):
        self.client.get("/items/99?q=custom")
Full Test Example
Combining all the above elements, here’s a complete Locust test for your FastAPI app.
from locust import HttpUser, task
import random

# Custom wait time function. Locust passes the running user instance,
# so the function must accept it as an argument.
def custom_wait(user):
    return max(1, random.uniform(1, 3))  # Random wait time between 1 and 3 seconds

class FastAPIUser(HttpUser):
    wait_time = custom_wait  # Use the custom wait time

    @task(3)
    def browse_homepage(self):
        """Simulates browsing the root endpoint."""
        self.client.get("/")

    @task(1)
    def browse_item(self):
        """Simulates fetching an item with ID and query parameter."""
        item_id = random.randint(1, 100)
        self.client.get(f"/items/{item_id}?q=test")
Running Locust for FastAPI
1. Run Your FastAPI App: Save the FastAPI app code in a file (e.g., main.py) and start the server:
uvicorn main:app --reload
By default, the app will run on http://127.0.0.1:8000.
2. Run Locust: Save the Locust file as locustfile.py and start Locust:
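A typical command, assuming the file is named locustfile.py and the FastAPI server is running locally, looks like this:
locust -f locustfile.py --host http://127.0.0.1:8000
Then open the Locust web UI (by default at http://localhost:8089) to set the number of users and start the test.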
The following proof of concept for a Tamil date/time parser requires the external dependency parse, which parses Python-style format strings with placeholders.
import parse
from date import TA_MONTHS
from date import datetime

# POC of a Tamil date/time parser
def strptime(format='{month}, {date} {year}', date_string="நவம்பர், 16 2024"):
    parsed = parse.parse(format, date_string)
    month = TA_MONTHS.index(parsed['month']) + 1
    date = int(parsed['date'])
    year = int(parsed['year'])
    return datetime(year, month, date)

print(strptime("{date}-{month}-{year}", "16-நவம்பர்-2024"))
# dt = datetime(2024,11,16)
# print(dt.strptime_ta("நவம்பர் , 16 2024","%m %d %Y"))
Django is a powerful and versatile web framework that helps you build web applications efficiently. Here’s how you can set up a Django project and create an application within it.
Step 1: Install Django
First, you’ll need to install Django on your machine. Open your terminal or command prompt and run:
pip install django
Step 2: Check Django Version
To confirm Django is installed, you can check the version by running:
python3 -m django --version
Step 3: Set Up Project Directory
Organize your workspace by creating a folder where your Django project will reside. In the terminal, run:
mkdir django_project
cd django_project
Step 4: Create a Django Project
Now, create a Django project within this folder. Use the following command:
django-admin startproject mysite
This will create a new Django project named mysite.
Project Structure Overview
Once the project is created, you’ll notice a few files and folders within the mysite directory. Here’s a quick overview:
manage.py: A command-line utility that lets you interact with this Django project in various ways (starting the server, creating applications, running migrations, etc.).
__init__.py: An empty file that tells Python to treat the directory as a Python package.
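For recent Django versions, the generated layout looks roughly like this; besides manage.py and the package's __init__.py, the inner mysite package also contains settings.py, urls.py, asgi.py, and wsgi.py:
mysite/
├── manage.py
└── mysite/
    ├── __init__.py
    ├── settings.py
    ├── urls.py
    ├── asgi.py
    └── wsgi.py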
Step 5: Run the Development Server
Now that the project is set up, you can test it by running Django’s development server:
python3 manage.py runserver
Visit http://127.0.0.1:8000/ in your browser, and you should see the Django welcome page, confirming your project is working.
Step 6: Create an Application
In Django, projects can contain multiple applications, each serving a specific function. To create an application, use the command:
python3 manage.py startapp firstapplication
This will create a folder named firstapplication inside your project directory, which will contain files essential for defining the app’s models, views, templates, and more.
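For recent Django versions, the new app folder looks roughly like this:
firstapplication/
├── __init__.py
├── admin.py
├── apps.py
├── migrations/
├── models.py
├── tests.py
└── views.py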
With this setup, you’re ready to start building features in Django by defining models, views, templates, and URLs. This foundation will help you build scalable and structured web applications efficiently.
TASKS:
1. Create a Django Application to display Hello World Message as response.
2. Create One Django Application with multiple views.
3. Create a Django Application to display Current Date and Time.
Open settings.py inside the project folder and add the application name (firstapplication) to the INSTALLED_APPS list.
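The entry looks like this; the other apps are Django's defaults, and only the last line is added:
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'firstapplication',
]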
Next, open views.py inside the application folder, import HttpResponse, and create the view functions.
from django.shortcuts import render
from django.http import HttpResponse
import datetime

# Create your views here.
def display(request):
    views_content = '<h2>HELLO WORLD!</h2>'
    return HttpResponse(views_content)

def display1(request):
    views_content = '<h2>WELCOME TO DJANGO</h2>'
    return HttpResponse(views_content)

def display2(request):
    views_content = '<h2>WELCOME TO MY APPLICATION</h2>'
    return HttpResponse(views_content)

def time_info_view(request):
    time = datetime.datetime.now()
    output = '<h1>Currently the time is ' + str(time) + '</h1>'
    return HttpResponse(output)
Next, open urls.py inside the project folder, import the views from the application (from firstapplication import views), and add a URL entry for each view to the urlpatterns list:
from django.contrib import admin
from django.urls import path
from firstapplication import views

urlpatterns = [
    path('admin/', admin.site.urls),
    path('welcome/', views.display),
    path('second/', views.display1),
    path('third/', views.display2),
    path('datetime/', views.time_info_view),
]
Now open http://127.0.0.1:8000 in your browser. The project serves five paths: admin (the default), plus the welcome, second, third, and datetime pages we created.
If you change the URL from 127.0.0.1:8000 to 127.0.0.1:8000/welcome/, you will see the Hello World page.
If you change it to 127.0.0.1:8000/datetime/, you will see the current date and time.
If you change it to 127.0.0.1:8000/second/, you will see the Welcome to Django page.
If you change it to 127.0.0.1:8000/third/, you will see the Welcome to My Application page.
These are the essential steps for creating a Django project as a beginner, helping you understand the basic setup and flow of a Django application.
If you’re a Python developer, you’re probably familiar with pip and pip-tools, the go-to tools for managing Python packages. However, did you know that there’s a faster alternative that can save you time and improve your workflow?
Meet UV
uv is a package installer that installs Python packages far faster than pip. It is written in Rust 🦀, which gives it blazing speed compared to pip and pip-tools.
It is also free and open source, developed by astral-sh, and with around 11.3k stars on GitHub it has become a trending alternative package manager for Python.
pip vs uv
astral-sh claims that uv installs Python packages much faster than poetry (another Python package manager) and pip-compile.
It can also create a virtual environment at blazing speed compared to python3 -m venv or virtualenv.
My experience and Benchmarks
Nowadays I use uv as the package manager for my side projects. It feels very good for developing Python applications and is definitely useful when containerizing them with Docker.
uv has made my life easier: its blazing-fast installs suit the repetitive package installation involved in building and deploying containers, making Docker builds hassle-free.
Here is my comparison of pip and uv. Let's start with pip.
The screenshot above shows that it takes almost 3.84 seconds (approximately 4 seconds) to create a virtual environment in Python, whereas
uv takes just 0.01 seconds to create a virtual environment. Now let's move on to installing packages such as fastapi and langchain at the same time, which pull in more dependencies than most packages I have worked with.
pip install fastapi langchain
This takes around 22.5 seconds, which counts as fast these days 😂, though at crucial times installation can be even slower. Now let's check uv.
uv installs langchain and fastapi together in a blazing 0.12 seconds. 🤯💥
That is why I have recently made uv my package manager of choice for Python when developing my projects.
uv Installation and usage
First, on Linux, copy and run the installation command:
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/astral-sh/uv/releases/download/0.1.38/uv-installer.sh | sh
For macOS users, download the official binaries provided by astral-sh, given here.
Virtual Environment creation
To create a virtual environment for Python, use:
uv venv <environment-name>
Replace <environment-name> with any name you wish.
Package Installation
To install a package, use:
uv pip install <package-name>
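For example, a typical workflow might look like this (the environment name .venv and the packages are just for illustration):
uv venv .venv
source .venv/bin/activate
uv pip install fastapi langchain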
Conclusion
In this blog post, we learned about the uv package manager and how it makes Python workflows faster, speeds up container builds, and eases deployment.
Python: A programming language which many consider beginner friendly. A language which some programmers would argue to other aspiring programmers not to learn it first. Python is widely used in artificial intelligence programs, machine learning, data science, ethical hacking and so much more.
Created by Guido van Rossum and first released on February 20, 1991, the language is said to have been named after the BBC TV show Monty Python's Flying Circus, as the creator was a big fan of the show. He reportedly wanted to create a programming language that was easier to read than other languages of the time, which were considered to have more difficult syntax.
Python is one of the most used programming languages today. Some of the important reasons are that it is open source and, many would argue, has a very large library for every field of programming. It also has a large community whose members create custom libraries, and it even has dedicated IDEs. It is also used in many Fortune 500 companies.
With all this, why is there a group of people who argue that this is a language beginners can learn, and sometimes even people from that same group who argue that beginners shouldn't start with it? The second group's main argument is that the language is too easy compared to other languages, so learners may struggle when they later try to pick up other languages.
There is no single answer to the question; it depends on the person learning it. The first programming language I properly learnt was Python, but learning other languages after that was easy because I had something to compare them to and I wasn't learning something complex without any context. In fact, learning other languages made me understand, at a deeper level, how Python might be working under the hood, something I was unaware of when I only knew Python, and I hope others who start with Python have a similar experience.
I think that Python is definitely a great programming language due to its simplicity when writing and reading code. It also has what some people would consider disadvantages, such as being further from the system than other languages, though others consider that same trait an advantage. So it all comes down to what a particular person wants, and Python often turns out to be a very good option.
Selenium in Python is a widely-used tool for automating web browsers. It’s particularly useful for tasks like automated testing, web scraping, and automating repetitive tasks on websites. Selenium allows you to interact with web elements, navigate through web pages, fill out forms, and much more, all programmatically.
Key Components of Selenium in Python
WebDriver: The WebDriver is the core component of Selenium. It acts as an interface between your Python code and the web browser. WebDriver can automate browser actions like clicking buttons, filling out forms, navigating between pages, and more. Each browser (Chrome, Firefox, Safari, etc.) has its own WebDriver.
Browser Drivers: To use Selenium with a specific browser, you need to have the corresponding browser driver installed. For example, for Chrome, you need ChromeDriver; for Firefox, you need GeckoDriver.
Locating Elements: Selenium provides various ways to locate elements on a web page. The most common methods include:
By.ID: Locate an element by its ID attribute.
By.NAME: Locate an element by its name attribute.
By.CLASS_NAME: Locate an element by its class name.
By.TAG_NAME: Locate an element by its tag name.
By.CSS_SELECTOR: Locate an element using a CSS selector.
By.XPATH: Locate an element using an XPath expression.
Interacting with Web Elements: Once you’ve located a web element, you can interact with it in various ways:
send_keys(): Enter text into an input field.
click(): Click a button or link.
submit(): Submit a form.
get_attribute(): Retrieve the value of an attribute.
Handling Alerts and Pop-ups: Selenium allows you to handle browser alerts, pop-ups, and confirmation dialogs.
Waiting for Elements: Web pages can take time to load, and elements might not be available immediately. Selenium provides ways to wait for elements to become available:
implicitly_wait(): Sets a default amount of time the driver keeps polling for elements before raising an error.
WebDriverWait: Explicitly waits for a specific condition to be met before proceeding (see the sketch after this list).
Taking Screenshots: Selenium can take screenshots of web pages, which is useful for debugging or visual confirmation.
Handling Multiple Windows/Tabs: Selenium can switch between different windows or tabs within a browser session.
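As an illustration of the explicit wait mentioned above, here is a minimal sketch; the URL and the element ID are placeholders:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")

# Wait up to 10 seconds for an element with the given ID to appear
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "content"))
)
print(element.text)
driver.quit()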
Basic Example: Automating a Google Search
Here’s a basic example of how to use Selenium in Python to automate a Google search:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
# Set up the WebDriver (Chrome in this case)
driver = webdriver.Chrome()
# Navigate to Google's homepage
driver.get("https://www.google.com")
# Find the search input element by its name attribute
search_box = driver.find_element(By.NAME, "q")
# Type in the search query and press Enter
search_box.send_keys("Selenium Python")
search_box.send_keys(Keys.RETURN)
# Wait for the search results page to load
driver.implicitly_wait(10)
# Capture the title of the first search result
first_result = driver.find_element(By.CSS_SELECTOR, "h3")
print(first_result.text)
# Close the browser
driver.quit()
Installing Selenium
To use Selenium in Python, you need to install the Selenium package and the corresponding browser driver.
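For example, the package itself installs with pip; note that recent Selenium releases (4.6+) include Selenium Manager, which can fetch a matching browser driver automatically, so a manual driver download is often unnecessary:
pip install selenium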
Advantages
Language Support: Works with many programming languages, including Python, Java, C#, and Ruby.
Community and Documentation: Extensive community support and documentation are available.
Disadvantages
Speed: Compared to other web scraping tools, Selenium might be slower because it actually loads the entire web page.
Resource-Intensive: Running browsers for automation can consume a lot of system resources.
Selenium is a powerful tool for automating web tasks, and with Python, it becomes even more flexible and easy to use. Would you like to explore any specific features or need help with a particular use case?
In this post, we will delve into a basic Python program that demonstrates the concept of multiplication using both explicit print statements and a while loop.
Part 1: Using Print Statements
#print num series 2 4 8 16 32
#using print statement
print(2)
print(4)
print(8)
print(16)
print(32)
In this section, we use individual print statements to display the sequence of numbers. Each number is a power of 2, starting from 2¹ and doubling with each subsequent number.
Part 2: Using a Variable
#print num series 2 4 8 16 32
#using a variable
no = 2
print(no)
no = no * 2
print(no)
no = no * 2
print(no)
no = no * 2
print(no)
no = no * 2
print(no)
Here, we start with the number 2 assigned to the variable no. We then repeatedly multiply no by 2 and print the result. This approach showcases the concept of updating a variable's value based on its previous state.
Part 3: Using a While Loop
#print num series 2 4 8 16 32
#using while loop
no = 2
while no <= 32:
    print(no)
    no = no * 2
In this final part, we use a while loop to achieve the same result as above. The loop continues to run as long as no is less than or equal to 32. Inside the loop, we print the current value of no and then double it. This approach is more efficient and scalable, as it reduces the need for repetitive code.
Conclusion
This simple Python program effectively demonstrates how to use basic arithmetic operations and loops to achieve a desired sequence of numbers. It’s a great starting point for beginners to understand variable manipulation and control flow in programming.
program:
#print num series 2 4 8 16 32
#using print statement
print(2)
print(4)
print(8)
print(16)
print(32)
print()#just for space in output
#print num series 2 4 8 16 32
#using a variable
no=2
print(no)
no=no*2
print(no)
no=no*2
print(no)
no=no*2
print(no)
no=no*2
print(no)
print()#just for space in output
#print num series 2 4 8 16 32
#using while loop
no=2
while no<=32:
    print(no)
    no=no*2
Description: This function moves files from the source folder to the destination folder based on the specified file format and user-provided date.
It first checks if the destination folder exists; if not, it creates it.
It then iterates over the files in the source folder, checking if each file matches the specified format and creation date.
If a match is found, the file is moved to the destination folder, and a message is printed indicating the file has been moved.
Main Function:
Function: main()
Description: This is the entry point of the script. It sets the paths for the source and destination folders and performs the following steps:
Verifies the existence of the source folder.
Retrieves the file format and date from the user.
Calls the function to move files based on the provided criteria.
Script Execution:
The script is executed by calling the main() function when the script is run directly.
Enhancements for Future Consideration:
User Input Validation: Ensure the file format and date inputs are valid.
Error Handling: Implement error handling for file operations and user inputs.
Logging: Add logging to keep track of the operations performed and any errors encountered.
Flexible Date Comparison: Allow for more flexible date comparisons, such as moving files created on or after a specified date.
By following these steps, the script efficiently organizes files based on their creation dates, making it a useful tool for managing large collections of files.
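A minimal sketch of the function described above might look like this; the function and parameter names are assumptions, the match is done on the file extension and the creation date reported by the operating system, and the MP3-specific version follows in the next section:
import os
import shutil
from datetime import date

def move_files(source_folder, destination_folder, file_format, target_date):
    """Move files of the given format created on target_date into destination_folder."""
    if not os.path.exists(destination_folder):
        os.makedirs(destination_folder)
    for name in os.listdir(source_folder):
        path = os.path.join(source_folder, name)
        if not os.path.isfile(path) or not name.lower().endswith(file_format.lower()):
            continue
        created = date.fromtimestamp(os.path.getctime(path))  # creation/change time, platform-dependent
        if created == target_date:
            shutil.move(path, os.path.join(destination_folder, name))
            print(f"Moved: {name}")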
Here’s a Python program to accomplish this task. The script will:
Go to a specified source folder.
List all files with the .mp3 extension.
Extract metadata date and compare it with the user-provided date.
Move files that match the criteria to a specified destination folder.
To achieve this, you’ll need to install the mutagen library for handling MP3 metadata. You can install it using pip install mutagen.
Here’s the Python script:
import os
import shutil
from datetime import datetime
from mutagen.mp3 import MP3

def get_mp3_date(file_path):
    # Note: the original post passed audio.info.pprint() (a description string) to
    # datetime.fromtimestamp(), which raises a TypeError. As a working assumption,
    # read the ID3 recording-date tag (TDRC) when present and fall back to the
    # file's modification time otherwise.
    try:
        audio = MP3(file_path)
        if audio.tags is not None and audio.tags.get("TDRC"):
            ts = audio.tags["TDRC"].text[0]  # ID3 timestamp with year/month/day fields
            return datetime(ts.year, ts.month or 1, ts.day or 1)
        return datetime.fromtimestamp(os.path.getmtime(file_path))
    except Exception as e:
        print(f"Error reading {file_path}: {e}")
    return None

def move_files_by_date(source_folder, destination_folder, user_date):
    if not os.path.exists(destination_folder):
        os.makedirs(destination_folder)
    for root, _, files in os.walk(source_folder):
        for file in files:
            if file.lower().endswith('.mp3'):
                file_path = os.path.join(root, file)
                file_date = get_mp3_date(file_path)
                if file_date and file_date.date() == user_date:
                    shutil.move(file_path, os.path.join(destination_folder, file))
                    print(f"Moved: {file}")

if __name__ == "__main__":
    source_folder = "path/to/source/folder"
    destination_folder = "path/to/destination/folder"
    user_date = datetime.strptime("2023-08-06", "%Y-%m-%d").date()
    move_files_by_date(source_folder, destination_folder, user_date)
Instructions to Run the Script:
Install Dependencies: Ensure you have Python installed. Install the mutagen library using pip install mutagen.
Update Paths and Date:
Replace "path/to/source/folder" with the path to your source folder containing the MP3 files.
Replace "path/to/destination/folder" with the path to your destination folder where you want to move the files.
Replace "2023-08-06" with the user-specified date you want to compare against.
Run the Script: Save the script as a .py file and run it using a Python interpreter.
The script will scan the source folder, check the metadata date of each MP3 file, and move files that match the user-specified date to the destination folder.
Objective:
Develop a Python program that:
Scans a specified source folder for MP3 files.
Extracts the metadata date from each MP3 file.
Compares this date to a user-provided date.
Moves files with matching dates to a destination folder.
Required Libraries:
os: To navigate the file system.
shutil: To move files between directories.
datetime: To handle date operations.
mutagen: To read metadata from MP3 files.
Step-by-Step Process:
Import Necessary Libraries:
os for navigating directories.
shutil for moving files.
datetime for handling date comparisons.
mutagen.mp3 for reading MP3 metadata.
Define Function to Extract MP3 Metadata Date:
Use mutagen.mp3.MP3 to read the MP3 file.
Extract the date from the metadata if available.
Return the date as a datetime object.
Define Function to Move Files:
Navigate the source directory to find MP3 files.
For each MP3 file, extract the metadata date.
Compare the metadata date with the user-provided date.
If dates match, move the file to the destination folder.
Main Execution Block:
Define the source and destination folders.
Define the user-provided date for comparison.
Call the function to move files based on the date comparison.
To install the mutagen library using pip, follow these steps:
Steps to Install mutagen Using pip:
Open a Command Prompt or Terminal:
On Windows: Press Win + R, type cmd, and press Enter.
On macOS/Linux: Open the Terminal from the Applications menu or use the shortcut Ctrl + Alt + T (Linux) or Cmd + Space and type “Terminal” (macOS).
Ensure pip is Installed:
Check if pip is installed by running:
pip --version
If pip is not installed, you can install it by following the official pip installation guide.
3. Install mutagen:
Run the following command to install the mutagen library:
pip install mutagen
Example: on Windows, macOS, or Linux, run the following in Command Prompt or a terminal:
pip install mutagen
Verifying the Installation:
After the installation is complete, you can verify that mutagen is installed by running a simple Python command:
import mutagen
print(mutagen.version)
You can run this command in a Python interpreter or save it in a .py file and execute it. If there are no errors and the version prints correctly, mutagen is installed successfully.
I tried the above steps but faced some errors, which I have discussed in relation to this particular program.
The random module in Python is used to generate pseudo-random numbers and perform random operations. It provides a variety of functions for generating random numbers, choosing random items, and shuffling sequences. Here are some commonly used functions from the random module:
random.random(): Returns a random float between 0.0 and 1.0.
import random
print(random.random()) # Example output: 0.37444887175646646
2. random.randint(a, b): Returns a random integer between a and b (inclusive).
print(random.randint(1, 10)) # Example output: 7
3. random.choice(seq): Returns a random element from the non-empty sequence seq.
fruits = ['apple', 'banana', 'cherry']
print(random.choice(fruits)) # Example output: 'banana'
4. random.shuffle(lst): Shuffles the elements of the list lst in place.
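For example (output will vary between runs):
numbers = [1, 2, 3, 4, 5]
random.shuffle(numbers)
print(numbers)  # Example output: [3, 1, 5, 2, 4]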
Load balancing helps distribute client requests across multiple servers to ensure high availability, performance, and reliability. Weighted Round Robin Load Balancing is an extension of the round-robin algorithm, where each server is assigned a weight based on its capacity or performance capabilities. This approach ensures that more powerful servers handle more traffic, resulting in a more efficient distribution of the load.
What is Weighted Round Robin Load Balancing?
Weighted Round Robin Load Balancing assigns a weight to each server. The weight determines how many requests each server should handle relative to the others. Servers with higher weights receive more requests compared to those with lower weights. This method is useful when backend servers have different processing capabilities or resources.
Step-by-Step Implementation with Docker
Step 1: Create Dockerfiles for Each Flask Application
We’ll use the same three Flask applications (app1.py, app2.py, and app3.py) as in previous examples.
Flask App 1 (app1.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 1!"

@app.route("/data")
def data():
    return "Data from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
Flask App 2 (app2.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 2!"

@app.route("/data")
def data():
    return "Data from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)
Flask App 3 (app3.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 3!"

@app.route("/data")
def data():
    return "Data from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)
Step 2: Create Dockerfiles for Each Flask Application
Create Dockerfiles for each of the Flask applications:
Dockerfile for Flask App 1 (Dockerfile.app1):
# Use the official Python image from Docker Hub
FROM python:3.9-slim
# Set the working directory inside the container
WORKDIR /app
# Copy the application file into the container
COPY app1.py .
# Install Flask inside the container
RUN pip install Flask
# Expose the port the app runs on
EXPOSE 5001
# Run the application
CMD ["python", "app1.py"]
Dockerfile for Flask App 2 (Dockerfile.app2):
FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]
Dockerfile for Flask App 3 (Dockerfile.app3):
FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]
Step 3: Create the HAProxy Configuration File
Create an HAProxy configuration file (haproxy.cfg) to implement Weighted Round Robin Load Balancing
global
    log stdout format raw local0
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    server server1 app1:5001 weight 2 check
    server server2 app2:5002 weight 1 check
    server server3 app3:5003 weight 3 check
Explanation:
The balance roundrobin directive tells HAProxy to use the Round Robin load balancing algorithm.
The weight option for each server specifies the weight associated with each server:
server1 (App 1) has a weight of 2.
server2 (App 2) has a weight of 1.
server3 (App 3) has a weight of 3.
Requests will be distributed based on these weights: App 3 will receive the most requests, App 2 the least, and App 1 will be in between.
Step 4: Create a Dockerfile for HAProxy
Create a Dockerfile for HAProxy (Dockerfile.haproxy):
# Use the official HAProxy image from Docker Hub
FROM haproxy:latest
# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
# Expose the port for HAProxy
EXPOSE 80
Step 5: Create a docker-compose.yml File
To manage all the containers together, create a docker-compose.yml file
The docker-compose.yml file defines the services (app1, app2, app3, and haproxy) and their respective configurations.
HAProxy depends on the three Flask applications to be up and running before it starts.
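A docker-compose.yml along these lines should work; the service names (app1, app2, app3, haproxy) are assumptions, but they must match the host names referenced in haproxy.cfg:
version: "3.8"
services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3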
Step 6: Build and Run the Docker Containers
Run the following command to build and start all the containers
docker-compose up --build
This command builds Docker images for all three Flask apps and HAProxy, then starts them.
Step 7: Test the Load Balancer
Open your browser or use curl to make requests to the HAProxy server
curl http://localhost/
curl http://localhost/data
Observation:
With Weighted Round Robin Load Balancing, you should see that requests are distributed according to the weights specified in the HAProxy configuration.
For example, App 3 should receive three times more requests than App 2, and App 1 should receive twice as many as App 2.
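To see the pattern, fire a batch of requests in a row and count the responses from each app, for example:
for i in $(seq 1 12); do curl -s http://localhost/; echo; done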
Conclusion
By implementing Weighted Round Robin Load Balancing with HAProxy, you can distribute traffic more effectively according to the capacity or performance of each backend server. This approach helps optimize resource utilization and ensures a balanced load across servers.
Load balancing distributes client requests across multiple servers to ensure high availability and reliability. One of the simplest load balancing algorithms is Random Load Balancing, which selects a backend server randomly for each client request.
Although this approach does not consider server load or other metrics, it can be effective for less critical applications or when the goal is to achieve simplicity.
What is Random Load Balancing?
Random Load Balancing assigns incoming requests to a randomly chosen server from the available pool of servers. This method is straightforward and ensures that requests are distributed in a non-deterministic manner, which may work well for environments with equally capable servers and minimal concerns about server load or state.
Step-by-Step Implementation with Docker
Step 1: Create Dockerfiles for Each Flask Application
We’ll use the same three Flask applications (app1.py, app2.py, and app3.py) as in previous examples.
Flask App 1 (app1.py)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 1!"

@app.route("/data")
def data():
    return "Data from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
Flask App 2 (app2.py)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 2!"

@app.route("/data")
def data():
    return "Data from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)
Flask App 3 (app3.py)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 3!"

@app.route("/data")
def data():
    return "Data from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)
Step 2: Create Dockerfiles for Each Flask Application
Create Dockerfiles for each of the Flask applications:
Dockerfile for Flask App 1 (Dockerfile.app1):
# Use the official Python image from Docker Hub
FROM python:3.9-slim
# Set the working directory inside the container
WORKDIR /app
# Copy the application file into the container
COPY app1.py .
# Install Flask inside the container
RUN pip install Flask
# Expose the port the app runs on
EXPOSE 5001
# Run the application
CMD ["python", "app1.py"]
Dockerfile for Flask App 2 (Dockerfile.app2):
FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]
Dockerfile for Flask App 3 (Dockerfile.app3):
FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]
Step 3: Create the HAProxy Configuration File
Create an HAProxy configuration file (haproxy.cfg) to implement Random Load Balancing:
global
    log stdout format raw local0
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    # The number of random draws is passed in parentheses
    balance random(2)
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check
Explanation:
The balance random(2) directive tells HAProxy to use the Random load balancing algorithm with two draws: for each request it picks 2 servers at random and sends the request to the less loaded of the two, which adds a bit of load awareness to the random choice.
The server directives define the backend servers and their ports.
Step 4: Create a Dockerfile for HAProxy
Create a Dockerfile for HAProxy (Dockerfile.haproxy):
# Use the official HAProxy image from Docker Hub
FROM haproxy:latest
# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
# Expose the port for HAProxy
EXPOSE 80
Step 5: Create a docker-compose.yml File
To manage all the containers together, create a docker-compose.yml file:
The docker-compose.yml file defines the services (app1, app2, app3, and haproxy) and their respective configurations.
HAProxy depends on the three Flask applications to be up and running before it starts.
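The same layout as in the weighted round robin example applies here; in condensed form (service names are assumptions that must match the host names in haproxy.cfg):
version: "3.8"
services:
  app1:
    build: { context: ".", dockerfile: Dockerfile.app1 }
  app2:
    build: { context: ".", dockerfile: Dockerfile.app2 }
  app3:
    build: { context: ".", dockerfile: Dockerfile.app3 }
  haproxy:
    build: { context: ".", dockerfile: Dockerfile.haproxy }
    ports: ["80:80"]
    depends_on: [app1, app2, app3]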
Step 6: Build and Run the Docker Containers
Run the following command to build and start all the containers:
docker-compose up --build
This command builds Docker images for all three Flask apps and HAProxy, then starts them.
Step 7: Test the Load Balancer
Open your browser or use curl to make requests to the HAProxy server:
curl http://localhost/
curl http://localhost/data
Observation:
With Random Load Balancing, each request should randomly hit one of the three backend servers.
Since the selection is random, you may not see a predictable pattern; however, the requests should be evenly distributed across the servers over a large number of requests.
Conclusion
By implementing Random Load Balancing with HAProxy, we’ve demonstrated a simple way to distribute traffic across multiple servers without relying on complex metrics or state information. While this approach may not be ideal for all use cases, it can be useful in scenarios where simplicity is more valuable than fine-tuned load distribution.
Imagine you are managing a busy highway with multiple lanes, and you want to direct specific types of vehicles to particular lanes: trucks to one lane, cars to another, and motorcycles to yet another. In the world of web traffic, this is similar to what Access Control Lists (ACLs) in HAProxy do—they help you direct incoming requests based on specific criteria.
Let’s dive into what ACLs are in HAProxy, why they are essential, and how you can use them effectively with some practical examples.
What are ACLs in HAProxy?
Access Control Lists (ACLs) in HAProxy are rules or conditions that allow you to define patterns to match incoming requests. These rules help you make decisions about how to route or manage traffic within your infrastructure.
Think of ACLs as powerful filters or guards that analyze incoming HTTP requests based on headers, IP addresses, URL paths, or other attributes. By defining ACLs, you can control how requests are handled—for example, sending specific traffic to different backends, applying security rules, or denying access under certain conditions.
Why Use ACLs in HAProxy?
Using ACLs offers several advantages:
Granular Control Over Traffic: You can filter and route traffic based on very specific criteria, such as the content of HTTP headers, cookies, or request methods.
Security: ACLs can block unwanted traffic, enforce security policies, and prevent malicious access.
Performance Optimization: By directing traffic to specific servers optimized for certain types of content, ACLs can help balance the load and improve performance.
Flexibility and Scalability: ACLs allow dynamic adaptation to changing traffic patterns or new requirements without significant changes to your infrastructure.
How ACLs Work in HAProxy
ACLs in HAProxy are defined in the configuration file (haproxy.cfg). The syntax is straightforward
acl <name> <criteria>
<name>: The name you give to your ACL rule, which you will use to reference it in further configuration.
<criteria>: The condition or match pattern, such as a path, header, method, or IP address.
It either returns True or False.
Examples of ACLs in HAProxy
Let’s look at some practical examples to understand how ACLs work.
Example 1: Routing Traffic Based on URL Path
Suppose you have a web application that serves both static and dynamic content. You want to route all requests for static files (like images, CSS, and JavaScript) to a server optimized for static content, while all other requests should go to a dynamic content server.
Configuration:
frontend http_front
    bind *:80
    acl is_static path_beg /static
    use_backend static_backend if is_static
    default_backend dynamic_backend

backend static_backend
    server static1 127.0.0.1:5001 check

backend dynamic_backend
    server dynamic1 127.0.0.1:5002 check
ACL Definition: acl is_static path_beg /static checks if the request URL starts with /static.
Usage: use_backend static_backend if is_static routes the traffic to the static_backend if the ACL is_static matches. All other requests are routed to the dynamic_backend.
Example 2: Blocking Traffic from Specific IP Addresses
Let’s say you want to block traffic from a range of IP addresses that are known to be malicious.
Configuration:
frontend http_front
    bind *:80
    acl block_ip src 192.168.1.0/24
    http-request deny if block_ip
    default_backend web_backend

backend web_backend
    server web1 127.0.0.1:5003 check
ACL Definition: acl block_ip src 192.168.1.0/24 defines an ACL that matches any source IP from the range 192.168.1.0/24.
Usage: http-request deny if block_ip denies the request if it matches the block_ip ACL.
Example 4: Redirecting Traffic Based on Request Method
You might want to redirect all POST requests to a different backend for further processing.
Configuration:
frontend http_front
    bind *:80
    acl is_post_method method POST
    use_backend post_backend if is_post_method
    default_backend general_backend

backend post_backend
    server post1 127.0.0.1:5006 check

backend general_backend
    server general1 127.0.0.1:5007 check
Example 5: Redirect Traffic Based on User Agent
Imagine you want to serve a different version of your website to mobile users versus desktop users. You can achieve this by using ACLs that check the User-Agent header in the HTTP request.
Configuration:
frontend http_front
    bind *:80
    acl is_mobile_user_agent req.hdr(User-Agent) -i -m sub Mobile
    use_backend mobile_backend if is_mobile_user_agent
    default_backend desktop_backend

backend mobile_backend
    server mobile1 127.0.0.1:5008 check

backend desktop_backend
    server desktop1 127.0.0.1:5009 check
ACL Definition: acl is_mobile_user_agent req.hdr(User-Agent) -i -m sub Mobile checks if the User-Agent header contains the substring "Mobile" (case-insensitive).
Usage: use_backend mobile_backend if is_mobile_user_agent directs mobile users to mobile_backend and all other users to desktop_backend.
Example 6: Restrict Access to Admin Pages by IP Address
Let’s say you want to allow access to the /admin page only from a specific IP address or range, such as your company’s internal network.
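A configuration for this could look like the following; the internal network range 192.168.10.0/24 is an assumption, so substitute your own:
frontend http_front
    bind *:80
    acl is_admin path_beg /admin
    acl internal_net src 192.168.10.0/24
    http-request deny if is_admin !internal_net
    default_backend web_backend

backend web_backend
    server web1 127.0.0.1:5003 check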
Let’s see how you can use ACLs with a Flask application to enforce different rules.
Flask Application Setup
You have two Flask apps: app1.py for general requests and app2.py for special requests like form submissions.
app1.py
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
return "Welcome to the main page!"
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5003)
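app2.py is not shown in the original post; a minimal sketch for the form-submission app could look like this (the /submit route and port 5004 are assumptions):
app2.py
from flask import Flask, request

app = Flask(__name__)

@app.route('/submit', methods=['POST'])
def submit():
    return f"Form received: {request.form.to_dict()}"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5004)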
In the world of web applications, imagine you’re running a very popular pizza place. Every evening, customers line up for a delicious slice of pizza. But if your single cashier can’t handle all the orders at once, customers might get frustrated and leave.
What if you could have a system that ensures every customer gets served quickly and efficiently? Enter HAProxy, a tool that helps manage and balance the flow of web traffic so that no single server gets overwhelmed.
Here’s a straightforward guide to understanding HAProxy, installing it, and setting it up to make your web application run smoothly.
What is HAProxy?
HAProxy stands for High Availability Proxy. It’s like a traffic director for your web traffic. It takes incoming requests (like people walking into your pizza place) and decides which server (or pizza station) should handle each request. This way, no single server gets too busy, and everything runs more efficiently.
Why Use HAProxy?
Handles More Traffic: Distributes incoming traffic across multiple servers so no single one gets overloaded.
Increases Reliability: If one server fails, HAProxy directs traffic to the remaining servers.
Improves Performance: Ensures that users get faster responses because the load is spread out.
Installing HAProxy
Here’s how you can install HAProxy on a Linux system:
Open a Terminal: You’ll need to access your command line interface to install HAProxy.
Install HAProxy: Type the following command and hit enter
sudo apt-get update
sudo apt-get install haproxy
3. Check Installation: Once installed, you can verify that HAProxy is running by typing
sudo systemctl status haproxy
This command shows you the current status of HAProxy, ensuring it’s up and running.
Configuring HAProxy
HAProxy’s configuration file is where you set up how it should handle incoming traffic. This file is usually located at /etc/haproxy/haproxy.cfg. Let’s break down the main parts of this configuration file,
1. The global Section
The global section is like setting the rules for the entire pizza place. It defines general settings for HAProxy itself, such as how it should operate, what kind of logging it should use, and what resources it needs. Here’s an example of what you might see in the global section
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660
    user haproxy
    group haproxy
    daemon
Let’s break it down line by line:
log /dev/log local0: This line tells HAProxy to send log messages to the system log at /dev/log and to use the local0 logging facility. Logs help you keep track of what’s happening with HAProxy.
log /dev/log local1 notice: Similar to the previous line, but it uses the local1 logging facility and sets the log level to notice, which is a type of log message indicating important events.
chroot /var/lib/haproxy: This line tells HAProxy to run in a restricted area of the file system (/var/lib/haproxy). It’s a security measure to limit access to the rest of the system.
stats socket /run/haproxy/admin.sock mode 660: This sets up a special socket (a kind of communication endpoint) for administrative commands. The mode 660 part defines the permissions for this socket, allowing specific users to manage HAProxy.
user haproxy: Specifies that HAProxy should run as the user haproxy. Running as a specific user helps with security.
group haproxy: Similar to the user directive, this specifies that HAProxy should run under the haproxy group.
daemon: This tells HAProxy to run as a background service, rather than tying up a terminal window.
2. The defaults Section
The defaults section sets up default settings for HAProxy’s operation and is like defining standard procedures for the pizza place. It applies default configurations to both the frontend and backend sections unless overridden. Here’s an example of a defaults section
defaults
    log global
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
Here’s what each line means:
log global: Tells HAProxy to use the logging settings defined in the global section for logging.
option httplog: Enables HTTP-specific logging. This means HAProxy will log details about HTTP requests and responses, which helps with troubleshooting and monitoring.
option dontlognull: Prevents logging of connections that don’t generate any data (null connections). This keeps the logs cleaner and more relevant.
timeout connect 5000ms: Sets the maximum time HAProxy will wait when trying to connect to a backend server to 5000 milliseconds (5 seconds). If the connection takes longer, it will be aborted.
timeout client 50000ms: Defines the maximum time HAProxy will wait for data from the client to 50000 milliseconds (50 seconds). If the client doesn’t send data within this time, the connection will be closed.
timeout server 50000ms: Similar to timeout client, but it sets the maximum time to wait for data from the server to 50000 milliseconds (50 seconds).
3. Frontend Section
The frontend section defines how HAProxy listens for incoming requests. Think of it as the entrance to your pizza place.
frontend http_front: This is a name for your frontend configuration.
bind *:80: Tells HAProxy to listen for traffic on port 80 (the standard port for web traffic).
default_backend http_back: Specifies where the traffic should be sent (to the backend section).
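Put together, the frontend block described above looks like this:
frontend http_front
    bind *:80
    default_backend http_back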
4. Backend Section
The backend section describes where the traffic should be directed. Think of it as the different pizza stations where orders are processed.
backend http_back
    balance roundrobin
    server app1 192.168.1.2:5000 check
    server app2 192.168.1.3:5000 check
    server app3 192.168.1.4:5000 check
backend http_back: This is a name for your backend configuration.
balance roundrobin: Distributes traffic evenly across servers.
server app1 192.168.1.2:5000 check: Specifies a server (app1) at IP address 192.168.1.2 on port 5000. The check option ensures HAProxy checks if the server is healthy before sending traffic to it.
server app2 and server app3: Additional servers to handle traffic.
Testing Your Configuration
After setting up your configuration, you’ll need to restart HAProxy to apply the changes:
sudo systemctl restart haproxy
To check if everything is working, you can use a web browser or a tool like curl to send requests to HAProxy and see if it correctly distributes them across your servers.
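For example, a few repeated requests should be spread across the backend servers:
curl http://localhost/
curl http://localhost/
curl http://localhost/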
The past 2 months went by with 3 weekly Python classes in Tamil, from the Kaniyam Foundation.
We got around 3500 participants across 3 WhatsApp groups. The initial days saw some 1000+ students.
As the classes were in Tamil and live-streamed, many participants found it easy to learn.
We asked them to learn, take notes, and write a blog post daily. Many of them started to write. You can see them all here – https://blogs.kaniyam.cloudns.nz/
I hope at least 20 students learned Python very well.
The project demo days in the final weeks proved that within 2 months, anyone can learn Python programming and do good projects. All we need is dedicated learning and practicing.
I thank Syed Jafer, who trained us in an easy way. Thanks to all participants for their great enthusiasm and hard work on learning.
I got the opportunity to handle a few classes and a few Q&A sessions. I enjoyed every discussion with the team. Happy to see the progress and to read all your blog posts daily. Continue the learning and writing; it is a lifelong process.
Special thanks to my ILUGC friend Asokan. He has been a trainer for 20+ years. He taught Python around 2005 in our Chennai Linux Users Group meetings. Happy to learn again from him in his special training sessions.
In our discussions, he explained how to teach Python to beginners. I learned about the importance of more good examples, how to explain the basics, and more.
We all marveled at the various methods to solve the FizzBuzz problem and the beauty of functional programming.
Thanks to Asokan for mentoring us and to TalentSprint.com for providing Zoom for the classes.
The feedback session was interesting. Captured the notes here on the things to improve on the next classes.
Feedback from participants –
go little slow
more basics and examples
first week , explain programming basics for beginners
teach flow charting methods for basics.
try teaching scratch
weekend sessions batch
make more conversations by participants
make sub groups
get cheerleaders within the team to make the classes interactive
more promotion needed
give better examples
more QA sessions are required
each one should talk
showing face in video can help to get some personal connections.
run mini hackathons
make more interactions and connections between the participants
ask to write blogs daily
encourage to give talks in class and other communities
A few more learnings
Don’t create whatsapp group for communications. It has 1024 members limit. Having multiple groups is a headache.
Telegram is fine for now. Try to explore mailing list too.
Mute the groups, if required, to avoid “hi,hello,good morning” messages.
Teach how to create a free blog in dev.to or wordpress.com
Don't spend much time explaining everything in the language. By the 5th or 6th class, participants should be writing code for a small project. Explain things as solutions for the project ideas or problem statements.
Insist on using names when addressing people, always. By habit, people will say sir/madam; avoid that in technical discussions. We are all equal.
Zoom is costly. Even though we invest time in training and money for Zoom, only around 50 people will complete the training. Check other platforms like Jitsi or Google Meet too.
Will try to implement these in our upcoming classes.
If you are interested in teaching any open source technology in tamil, write to us at KaniyamFoundation@gmail.com It can be some 30 min talk or few months trainings.
Thanks to all the people who are spreading knowledge openly. You are the backbone of life.
Meet Jafer, a talented developer (self-boast) working at a fast-growing tech company. His team is building an innovative app that fetches data from multiple third-party APIs in real time to provide users with up-to-date information.
Everything is going smoothly until one day, a spike in traffic causes their app to face a wave of “HTTP 500” and “Timeout” errors. Requests start failing left and right, and users are left staring at the dreaded “Data Unavailable” message.
Jafer realizes that he needs a way to make their app more resilient against these unpredictable network hiccups. That's when he discovers Tenacity, a powerful Python library designed to help developers handle retries gracefully.
Join Jafer as he dives into Tenacity and learns how to turn his app from fragile to robust with just a few lines of code!
Step 0: Mock Flask API
from flask import Flask, jsonify, make_response
import random
import time

app = Flask(__name__)

# Scenario 1: Random server errors
@app.route('/random_error', methods=['GET'])
def random_error():
    if random.choice([True, False]):
        return make_response(jsonify({"error": "Server error"}), 500)  # Simulate a 500 error randomly
    return jsonify({"message": "Success"})

# Scenario 2: Timeouts
@app.route('/timeout', methods=['GET'])
def timeout():
    time.sleep(5)  # Simulate a long delay that can cause a timeout
    return jsonify({"message": "Delayed response"})

# Scenario 3: 404 Not Found error
@app.route('/not_found', methods=['GET'])
def not_found():
    return make_response(jsonify({"error": "Not found"}), 404)

# Scenario 4: Rate-limiting (simulated with a fixed chance)
@app.route('/rate_limit', methods=['GET'])
def rate_limit():
    if random.randint(1, 10) <= 3:  # 30% chance to simulate rate limiting
        return make_response(jsonify({"error": "Rate limit exceeded"}), 429)
    return jsonify({"message": "Success"})

# Scenario 5: Empty response
@app.route('/empty_response', methods=['GET'])
def empty_response():
    if random.choice([True, False]):
        return make_response("", 204)  # Simulate an empty response with 204 No Content
    return jsonify({"message": "Success"})

if __name__ == '__main__':
    app.run(host='localhost', port=5000, debug=True)
To run the Flask app, use the command,
python mock_server.py
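Before wiring Tenacity in, it can help to confirm the mock endpoints behave as expected. The quick check below is a small sketch of mine (not part of the original post); it assumes the Flask app above is already running on http://localhost:5000.

import requests

# Smoke test for the mock server: hit each scenario endpoint once
# and print the HTTP status code it returns.
endpoints = ["/random_error", "/timeout", "/not_found", "/rate_limit", "/empty_response"]

for path in endpoints:
    try:
        response = requests.get(f"http://localhost:5000{path}", timeout=10)
        print(path, "->", response.status_code)
    except requests.exceptions.Timeout:
        print(path, "-> request timed out")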
Step 1: Introducing Tenacity
Jafer decides to start with the basics. He knows that Tenacity will allow him to retry failed requests without cluttering his codebase with complex loops and error handling. So, he installs the library,
pip install tenacity
With Tenacity ready, Jafer decides to tackle his first problem, retrying a request that fails due to server errors.
Step 2: Retrying on Exceptions
He writes a simple function that fetches data from an API and wraps it with Tenacity’s @retry decorator
import requests
import logging
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, wait_fixed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(3),
       wait=wait_fixed(2),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_random_error():
    response = requests.get('http://localhost:5000/random_error')
    response.raise_for_status()  # Raises an HTTPError for 4xx/5xx responses
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_random_error()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
This code will attempt the request up to 3 times, waiting 2 seconds between each try. Jafer feels confident that this will handle the occasional hiccup. However, he soon realizes that he needs more control over which exceptions trigger a retry.
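Before moving on, one small aside (my addition, not from the original example): when every attempt fails, Tenacity raises a RetryError that wraps the last exception. If Jafer prefers the original exception, such as requests.exceptions.HTTPError, to surface instead, the reraise flag does that. A minimal sketch:

import requests
from tenacity import retry, stop_after_attempt, wait_fixed

# reraise=True re-raises the last underlying exception once all
# attempts are exhausted, instead of wrapping it in a RetryError.
@retry(stop=stop_after_attempt(3), wait=wait_fixed(2), reraise=True)
def fetch_random_error_reraise():
    response = requests.get('http://localhost:5000/random_error')
    response.raise_for_status()
    return response.json()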
Step 3: Handling Specific Exceptions
Jafer’s app sometimes receives a “404 Not Found” error, which should not be retried because the resource doesn’t exist. He modifies the retry logic to handle only certain exceptions,
import requests
import logging
from tenacity import before_log, after_log
from requests.exceptions import HTTPError, Timeout
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_fixed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(3),
       wait=wait_fixed(2),
       retry=retry_if_exception_type((HTTPError, Timeout)),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_data():
    response = requests.get('http://localhost:5000/timeout', timeout=2)  # Set a short timeout to simulate failure
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_data()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
Now the function retries only when an HTTPError or Timeout is raised, rather than on every exception. One caveat: a "404" also raises HTTPError from raise_for_status(), so skipping retries for missing resources entirely needs a retry predicate that checks the status code, a refinement sketched below. Jafer's app is starting to feel more resilient!
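A sketch of that refinement (the is_retryable helper and fetch_not_found name are my own, not from the original post): tenacity's retry_if_exception accepts a predicate, so the retry decision can inspect the HTTP status code and decline to retry 404s.

import requests
from requests.exceptions import HTTPError, Timeout
from tenacity import retry, retry_if_exception, stop_after_attempt, wait_fixed

def is_retryable(exc):
    # Retry timeouts and HTTP errors, but never a 404 Not Found.
    if isinstance(exc, Timeout):
        return True
    if isinstance(exc, HTTPError) and exc.response is not None:
        return exc.response.status_code != 404
    return False

@retry(stop=stop_after_attempt(3),
       wait=wait_fixed(2),
       retry=retry_if_exception(is_retryable))
def fetch_not_found():
    response = requests.get('http://localhost:5000/not_found', timeout=2)
    response.raise_for_status()  # 404 raises HTTPError, but the predicate declines to retry it
    return response.json()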
Step 4: Implementing Exponential Backoff
A few days later, the team notices that they are still getting rate-limited by some APIs. Jafer recalls the concept of exponential backoff, a strategy where the wait time between retries increases exponentially, reducing the load on the server and preventing further rate limiting.
He decides to implement it,
import requests
import logging
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, wait_exponential

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(5),
       wait=wait_exponential(multiplier=1, min=2, max=10),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_rate_limit():
    response = requests.get('http://localhost:5000/rate_limit')
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_rate_limit()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
With this code, the wait time starts at 2 seconds and doubles with each retry, up to a maximum of 10 seconds. Jafer’s app is now much less likely to be rate-limited!
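To make the schedule concrete, here is a small illustration of my own (not from the original post). It mirrors, roughly, the wait_exponential formula: the wait is multiplier * 2 ** attempt_number, clamped between min and max.

# Rough illustration of the backoff schedule produced by
# wait_exponential(multiplier=1, min=2, max=10) with stop_after_attempt(5):
multiplier, min_wait, max_wait = 1, 2, 10

for attempt_number in range(1, 5):  # at most 4 sleeps before the 5th and final attempt
    wait = max(min_wait, min(multiplier * 2 ** attempt_number, max_wait))
    print(f"after attempt {attempt_number}: wait {wait}s")
# after attempt 1: wait 2s
# after attempt 2: wait 4s
# after attempt 3: wait 8s
# after attempt 4: wait 10s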
Step 5: Retrying Based on Return Values
Jafer encounters another issue: some APIs occasionally return an empty response (204 No Content). These cases should also trigger a retry. Tenacity makes this easy with the retry_if_result feature,
import requests
import logging
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, retry_if_result

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(retry=retry_if_result(lambda x: x is None),
       stop=stop_after_attempt(3),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_empty_response():
    response = requests.get('http://localhost:5000/empty_response')
    if response.status_code == 204:
        return None  # Simulate an empty response
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_empty_response()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
Now, the function retries when it receives an empty response, ensuring that users get the data they need.
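If Jafer would rather hand back a default value than an exception once retries run out, tenacity's retry_error_callback hook can supply one. The fallback below is my own addition (the function names and the placeholder message are assumptions, not part of the original example):

import requests
from tenacity import retry, retry_if_result, stop_after_attempt

def return_fallback(retry_state):
    # Called once all attempts are used up; its return value becomes
    # the result of the call instead of a RetryError being raised.
    return {"message": "No content available"}

@retry(retry=retry_if_result(lambda x: x is None),
       stop=stop_after_attempt(3),
       retry_error_callback=return_fallback)
def fetch_empty_response_with_fallback():
    response = requests.get('http://localhost:5000/empty_response')
    if response.status_code == 204:
        return None
    response.raise_for_status()
    return response.json()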
Step 6: Combining Multiple Retry Conditions
But Jafer isn’t done yet. Some situations require combining multiple conditions. He wants to retry on HTTPError, Timeout, or a None return value. With Tenacity’s retry_any feature, he can do just that,
import requests
import logging
from tenacity import before_log, after_log
from requests.exceptions import HTTPError, Timeout
from tenacity import retry_any, retry, retry_if_exception_type, retry_if_result, stop_after_attempt

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(retry=retry_any(retry_if_exception_type((HTTPError, Timeout)),
                       retry_if_result(lambda x: x is None)),
       stop=stop_after_attempt(3),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_data():
    response = requests.get("http://localhost:5000/timeout")
    if response.status_code == 204:
        return None
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_data()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
This approach covers all his bases, making the app even more resilient!
Step 7: Logging and Tracking Retries
As the app scales, Jafer wants to keep an eye on how often retries happen and why. He decides to add logging,
import logging
import requests
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, wait_fixed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(2), wait=wait_fixed(2),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_data():
    response = requests.get("http://localhost:5000/timeout", timeout=2)
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_data()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
This logs messages before and after each retry attempt, giving Jafer full visibility into the retry process. Now, he can monitor the app’s behavior in production and quickly spot any patterns or issues.
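Two related hooks are worth a look, shown here as a hedged sketch since they are not part of Jafer's original snippet: before_sleep_log emits a log line only when a retry is actually about to happen, and the statistics dictionary on the retry controller that tenacity attaches to the decorated function summarises how many attempts were made. The function name fetch_slow_endpoint is mine.

import logging
import requests
from tenacity import retry, stop_after_attempt, wait_fixed, before_sleep_log

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(2), wait=wait_fixed(2),
       before_sleep=before_sleep_log(logger, logging.WARNING))
def fetch_slow_endpoint():
    response = requests.get("http://localhost:5000/timeout", timeout=2)
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        fetch_slow_endpoint()
    except Exception as e:
        print("Failed to fetch data:", str(e))
    # Attempt counts and timing gathered by tenacity for this function
    print(fetch_slow_endpoint.retry.statistics)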
The Happy Ending
With Tenacity, Jafer has transformed his app into a resilient powerhouse that gracefully handles intermittent failures. Users are happy, the servers are humming along smoothly, and Jafer’s team has more time to work on new features rather than firefighting network errors.
By mastering Tenacity, Jafer has learned that handling network failures gracefully can turn a fragile app into a robust and reliable one. Whether it’s dealing with flaky APIs, network blips, or rate limits, Tenacity is his go-to tool for retrying operations in Python.
So, the next time your app faces unpredictable network challenges, remember Jafer's story and give Tenacity a try. You might just save the day!
Strings in Programming
a="Hello"
b="Avinash"
print(a,b)
a="My name is Avinash"
print(a)
a="""My name is Avinash.I am come to keeramangalam,str(age(19)"""
print(a)
a="Avinash"
print(a[4])
a="Avinash"
print(len(a))
txt="The best beauitiful in india"
print("India" in txt)
Modify a string: upper case
a = "Hello world"
print(a.upper())         # convert to upper case

Lower case
a = "Hello world"
print(a.lower())         # convert to lower case

Replace in a string
a = "Helllo world"
print(a.replace("h", "r"))   # there is no lowercase "h", so nothing is replaced (replace is case-sensitive)

Strip a string
a = "  Hello world  "
print(a.strip())         # strip() removes the leading and trailing spaces

String concatenation
a = "Hello"
b = "Avinash"
c = a + b                # + joins the strings with nothing in between
print(c)

Add two strings
a = "Hello"
b = "world"
print(a + "" + b)        # the empty string in the middle adds nothing, so the words are joined directly

age = 10
txt = f"My name is Avinash,Iam{age}"   # the f-string substitutes the value of age
print(txt)
Output:
Hello Avinash
My name is Avinash
My name is Avinash. I come from Keeramangalam. I am 19.
a
7
False
HELLO WORLD
hello world
Helllo world
Hello world
HelloAvinash
Helloworld
My name is Avinash,Iam10