This time, we're shifting gears from theory to practice with mini projects that will help you build real-world solutions. Study materials will be shared beforehand, and you'll work hands-on, solving practical problems and building actual projects that showcase your skills.
What's New?
Real-world mini projects
Task-based shortlisting process
Limited seats for focused learning
Dedicated WhatsApp group for discussions & mentorship
Live streaming of sessions for wider participation
Study materials, quizzes, surprise gifts, and more!
How to Join?
Fill the RSVP below. Open for 20 days (till March 2) only!
After RSVP closes, shortlisted participants will receive tasks via email.
Complete the tasks to get shortlisted.
Selected students will be added to an exclusive WhatsApp group for intensive training.
It's completely cost-free learning. All we ask is your time, effort, and support.
Don't miss this chance to level up your Python skills, cost-free, with hands-on projects and exciting rewards! RSVP now and be part of Python Learning 2.0!
type casting int : <class 'int'>
type casting float : <class 'float'>
type casting string : <class 'str'>
type casting boolean : <class 'bool'>
type casting list : <class 'list'>
type casting set : <class 'set'>
type casting tuple : <class 'tuple'>
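The lines above are the output of a type-casting demo; a minimal sketch that would produce it (the specific literal values are assumptions, since the original code is not shown):

```python
# Cast values to each built-in type and print the resulting class
print("type casting int :", type(int("10")))
print("type casting float :", type(float("1.5")))
print("type casting string :", type(str(10)))
print("type casting boolean :", type(bool(1)))
print("type casting list :", type(list((1, 2))))
print("type casting set :", type(set([1, 2])))
print("type casting tuple :", type(tuple([1, 2])))
```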
Delete Variable
Delete an existing variable using Python's del keyword.
temp = 1
print("temp variable is : ",temp)
del temp
# print(temp)  # would raise NameError: name 'temp' is not defined
Output
temp variable is : 1
Find Variable Memory Address
Use the id() function to find the memory address of a variable.
temp = "hi"
print("address of temp variable : ",id(temp))
Output
address of temp variable : 140710894284672
Constants
Python does not have a dedicated keyword for constants, but since namedtuple instances are immutable, they can be used to emulate constants.
from collections import namedtuple
const = namedtuple("const",["PI"])
math = const(3.14)
print("namedtuple PI = ",math.PI)
print("namedtuple type =",type(math))
Output
namedtuple PI = 3.14
namedtuple type = <class '__main__.const'>
Global Keyword
Before understanding the global keyword, understand function-scoped variables.
Variables defined inside a function are stored in stack memory.
A function can read an outside (global) variable directly, but it cannot reassign it without a declaration.
To reassign a global variable inside a function, use the global keyword.
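A minimal sketch of the difference (the counter variable name is an assumption):

```python
counter = 0  # global variable

def read_only():
    # Reading a global works without any declaration.
    print(counter)

def increment():
    global counter  # required before reassigning the global
    counter += 1

read_only()   # prints 0
increment()
read_only()   # prints 1
```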
Given a string s, find the length of the longest substring with all distinct characters.
Input: s = "geeksforgeeks"
Output: 7
Explanation: "eksforg" is the longest substring with all distinct characters.
Input: s = "abcdefabcbb"
Output: 6
Explanation: The longest substring with all distinct characters is "abcdef", which has a length of 6.
My Approach β Sliding Window
class Solution:
    def longestUniqueSubstr(self, s):
        char_index = {}
        max_length = 0
        start = 0
        for i, char in enumerate(s):
            if char in char_index and char_index[char] >= start:
                start = char_index[char] + 1  # crux: move the window past the repeated char
            char_index[char] = i
            max_length = max(max_length, i - start + 1)
        return max_length
Today, we faced a bug in our workflow caused by an implicit default value in a third-party API. In this blog, I will share my experience for future reference.
Understanding the Problem
Consider an API where some fields are optional, and a default value is used when those fields are not provided by the client. This design is common and seemingly harmless. However, problems arise when:
Unexpected Categorization: The default value influences logic, such as category assignment, in ways the client did not intend.
Implicit Assumptions: The API assumes a default value aligns with the clientβs intention, leading to misclassification or incorrect behavior.
Debugging Challenges: When issues occur, clients and developers spend significant time tracing the problem because the default behavior is not transparent.
Here's an example of how this might manifest:
POST /items
{
"name": "Sample Item",
"category": "premium"
}
If the category field is optional and a default value of "basic" is applied when it's omitted, the following request:
POST /items
{
"name": "Another Item"
}
might incorrectly classify the item as basic, even if the client intended it to be uncategorized.
Why This is a Code Smell
Implicit default handling for optional fields often signals poor design. Let's break down why:
Violation of the Principle of Least Astonishment: Clients may be unaware of default behavior, leading to unexpected outcomes.
Hidden Logic: The business logic embedded in defaults is not explicit in the API's contract, reducing transparency.
Coupling Between API and Business Logic: When defaults dictate core behavior, the API becomes tightly coupled to specific business rules, making it harder to adapt or extend.
Inconsistent Behavior: If the default logic changes in future versions, existing clients may experience breaking changes.
Best Practices to Avoid the Trap
1. Make Default Behavior Explicit
Clearly document default values in the API specification (though we still missed it in ours).
For example, use OpenAPI/Swagger to define optional fields and their default values explicitly.
2. Avoid Implicit Defaults
Instead of applying defaults server-side, require the client to explicitly provide values, even if they are defaults.
This ensures the client is fully aware of the data being sent and its implications.
3. Use Null or Explicit Indicators
Allow optional fields to be explicitly null or undefined, and handle these cases appropriately.
This way, the API can treat null as "no category specified" rather than applying a default.
4. Fail Fast with Validation
Use strict validation to reject ambiguous requests, encouraging clients to provide clear inputs.
{
"error": "Field 'category' must be provided explicitly."
}
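A minimal sketch of such fail-fast validation on a plain dict payload (the helper name and field handling are assumptions following the example above, not the actual API's code):

```python
def parse_item(payload: dict) -> dict:
    """Reject ambiguous requests instead of silently applying defaults."""
    for field in ("name", "category"):
        if field not in payload:
            raise ValueError(f"Field '{field}' must be provided explicitly.")
    # category may be explicitly None, meaning "no category specified"
    return {"name": payload["name"], "category": payload["category"]}

parse_item({"name": "Sample Item", "category": "premium"})  # accepted
parse_item({"name": "Another Item", "category": None})      # explicitly uncategorized
# parse_item({"name": "Another Item"})  # raises ValueError: ambiguous request
```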
5. Version Your API Thoughtfully
Document changes and provide clear migration paths for clients.
If you must change default behaviors, ensure backward compatibility through versioning.
Implicit default values for optional fields can lead to unintended consequences, obscure logic, and hard-to-debug issues. Recognizing this pattern as a code smell is the first step to building more robust APIs. By adopting explicitness, transparency, and rigorous validation, you can create APIs that are easier to use, understand, and maintain.
Today, I learnt about the Sidecar Pattern. It seems to be about offloading common functionalities (logging, networking, …) into a companion container within a pod, to be used by the other apps in the pod.
It's not only about pods but other deployments as well. In this blog, I am going to curate what I have learnt, for my future self. It's a pattern, not a strict rule.
What is a Sidecar?
Imagine you're riding a motorbike, and you attach a little sidecar to carry your friend or groceries. The sidecar isn't part of the motorbike's engine or core mechanism, but it helps you achieve your goals, whether that's carrying more stuff or having a buddy ride along.
In the software world, a sidecar is a similar concept. It's a separate process or container that runs alongside a primary application. Like the motorbike's sidecar, it supports the main application by offloading or enhancing certain tasks without interfering with its core functionality.
Why Use a Sidecar?
In traditional applications, all responsibilities (logging, communication, monitoring, etc.) are bundled into the main application. This approach can make the application complex and harder to manage. Sidecars address this by handling auxiliary tasks separately, so the main application can focus on its primary purpose.
Here are some key reasons to use a sidecar:
Modularity: Sidecars separate responsibilities, making the system easier to develop, test, and maintain.
Reusability: The same sidecar can be used across multiple services, and it's language-agnostic.
Scalability: You can scale the sidecar independently from the main application.
Isolation: Sidecars provide a level of isolation, reducing the risk of one part affecting the other.
Real-Life Analogies
To make the concept clearer, here are some real-world analogies:
Coffee Maker with a Milk Frother:
The coffee maker (main application) brews coffee.
The milk frother (sidecar) prepares frothed milk for your latte.
Both work independently but combine their outputs for a better experience.
Movie Subtitles:
The movie (main application) provides the visuals and sound.
The subtitles (sidecar) add clarity for those who need them.
You can watch the movie with or without subtitles; they're optional but enhance the experience.
A School with a Sports Coach:
The school (main application) handles education.
The sports coach (sidecar) focuses on physical training.
Both have distinct roles but contribute to the overall development of students.
Some Random Sidecar Ideas in Software
Let's look at how sidecars are used in actual software scenarios:
Service Meshes (e.g., Istio, Linkerd):
A service mesh helps microservices communicate with each other reliably and securely.
The sidecar (a proxy like Envoy) handles tasks like load balancing, encryption, and monitoring, so the main application doesn't have to.
Logging and Monitoring:
Instead of the main application generating and managing logs, a sidecar can collect, format, and send logs to a centralized system like Elasticsearch or Splunk.
Authentication and Security:
A sidecar can act as a gatekeeper, handling user authentication and ensuring that only authorized requests reach the main application.
Data Caching:
If an application frequently queries a database, a sidecar can serve as a local cache, reducing database load and speeding up responses.
Service Discovery:
Sidecars can aid in service discovery by automatically registering the main application with a registry service or load balancer, ensuring seamless communication in dynamic environments.
How Sidecars Work
In modern environments like Kubernetes, sidecars are often deployed as separate containers within the same pod as the main application. They share the same network and storage, making communication between the two seamless.
Here's a simplified workflow:
The main application focuses on its core tasks (e.g., serving a web page).
The sidecar handles auxiliary tasks (e.g., compressing and encrypting logs).
The two communicate over local connections within the pod.
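As a sketch, a Kubernetes pod with a log-shipping sidecar might look like this (the names and images are assumptions for illustration, not a tested manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  containers:
    - name: web                  # main application
      image: my-web-app:latest
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper          # sidecar: collects and ships the app's logs
      image: my-log-shipper:latest
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs                 # shared volume lets the sidecar read the logs
      emptyDir: {}
```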
Pros and Cons of Sidecars
Pros:
Simplifies the main application.
Encourages reusability and modular design.
Improves scalability and flexibility.
Enhances observability with centralized logging and metrics.
Facilitates experimentation: you can deploy or update sidecars independently.
Cons:
Adds complexity to deployment and orchestration.
Consumes additional resources (CPU, memory).
Requires careful design to avoid tight coupling between the sidecar and the main application.
Latency (you are adding another hop).
Do we always need to use sidecars?
No. Not at all.
a. When there is latency between the parent application and the sidecar, reconsider.
b. If your application is small, reconsider.
c. When the sidecar scales differently or independently from the parent application, reconsider.
Some other examples
1. Adding HTTPS to a Legacy Application
Consider a legacy web service that serves requests over unencrypted HTTP, with a requirement to serve requests over HTTPS in the future.
The legacy app is configured to serve requests exclusively on localhost, which means only services that share the local network with the server are able to access the legacy application. In addition to the main container (legacy app), we can add an Nginx sidecar container that runs in the same network namespace as the main container, so that it can access the service running on localhost.
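A sketch of what the Nginx sidecar's TLS-termination config could look like (the port numbers and certificate paths are assumptions):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/tls.crt;
    ssl_certificate_key /etc/nginx/certs/tls.key;

    location / {
        # Forward decrypted traffic to the legacy app listening on localhost
        proxy_pass http://127.0.0.1:8080;
    }
}
```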
2. For Logging (Image from ByteByteGo)
Sidecars are not just technical solutions; they embody the principle of collaboration and specialization. By dividing responsibilities, they empower the main application to shine while ensuring auxiliary tasks are handled efficiently. Next time you hear about sidecars, you'll know they're more than just cool attachments for motorbikes; they're an essential part of scalable, maintainable software systems.
Also, do you feel it's closely related to the Adapter and Ambassador patterns? I do.
Given an array arr[] of positive integers and another integer target, determine if there exist two distinct indices such that the sum of their elements equals the target.
Input: arr[] = [1, 2, 4, 3, 6], target = 11
Output: false
Explanation: None of the pair makes a sum of 11.
My Approach
Iterate through the array
For each element, check whether the remainder (target - element) has already been seen, using a supporting hashmap.
If the remainder is present, return True.
Else, save the element in the hashmap and move to the next element.
#User function Template for python3
class Solution:
    def twoSum(self, arr, target):
        maps = {}
        for item in arr:
            rem = target - item
            if maps.get(rem):
                return True
            maps[item] = True
        return False
You are given a 2D matrix mat[][] of size n x m. The task is to modify the matrix such that if mat[i][j] is 0, all the elements in the i-th row and j-th column are set to 0, doing it in constant space complexity.
Input: mat[][] = [[1, -1, 1],
[-1, 0, 1],
[1, -1, 1]]
Output: [[1, 0, 1],
[0, 0, 0],
[1, 0, 1]]
Explanation: mat[1][1] = 0, so all elements in row 1 and column 1 are updated to zeroes.
Input: mat[][] = [[0, 1, 2, 0],
[3, 4, 5, 2],
[1, 3, 1, 5]]
Output: [[0, 0, 0, 0],
[0, 4, 5, 0],
[0, 3, 1, 0]]
Explanation: mat[0][0] and mat[0][3] are 0s, so all elements in row 0, column 0 and column 3 are updated to zeroes.
My Approach
Iterate through the matrix and check whether mat[i][j] is zero. If it is zero, then row i and column j need to be zeroed.
Collect those rows and columns in sets.
Finally, iterate through the sets and update the matrix. (Note: this uses O(n + m) extra space, not strictly constant space.)
#User function Template for python3
class Solution:
    def setMatrixZeroes(self, mat):
        rows_to_zeros = set()
        cols_to_zeros = set()
        rows = len(mat)
        cols = len(mat[0])
        for i in range(rows):
            for j in range(cols):
                if mat[i][j] == 0:
                    rows_to_zeros.add(i)
                    cols_to_zeros.add(j)
        for row in rows_to_zeros:
            for itr in range(cols):
                mat[row][itr] = 0
        for col in cols_to_zeros:
            for itr in range(rows):
                mat[itr][col] = 0
        return mat
Given a strictly sorted 2D matrix mat[][] of size n x m and a number x, find whether the number x is present in the matrix or not.
Note: In a strictly sorted matrix, each row is sorted in strictly increasing order, and the first element of the i-th row (i != 0) is greater than the last element of the (i-1)-th row.
Input: mat[][] = [[1, 5, 9], [14, 20, 21], [30, 34, 43]], x = 14
Output: true
Explanation: 14 is present in the matrix, so output is true.
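Since a strictly sorted matrix reads like one flat sorted list, a common approach (sketched here, not taken from the original post) is a single binary search over the n*m virtual indices:

```python
def searchMatrix(mat, x):
    n, m = len(mat), len(mat[0])
    lo, hi = 0, n * m - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        val = mat[mid // m][mid % m]  # map the flat index back to (row, col)
        if val == x:
            return True
        elif val < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

print(searchMatrix([[1, 5, 9], [14, 20, 21], [30, 34, 43]], 14))  # True
```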
Given a row-wise sorted 2D matrix mat[][] of size n x m and an integer x, find whether element x is present in the matrix. Note: In a row-wise sorted matrix, each row is sorted in itself, i.e. for any i, j within bounds, mat[i][j] <= mat[i][j+1].
Input: mat[][] = [[3, 4, 9],[2, 5, 6],[9, 25, 27]], x = 9
Output: true
Explanation: 9 is present in the matrix, so the output is true.
Input: mat[][] = [[19, 22, 27, 38, 55, 67]], x = 56
Output: false
Explanation: 56 is not present in the matrix, so the output is false.
My Approach:
Today's problem is the same as yesterday's. But I got timed out, so instead of calculating len(arr) on every call (which is always the same), I stored it in a variable once and passed it in.
class Solution:
    def binary_search(self, arr, x, start, stop):
        if start > stop:
            return False
        mid = (start + stop) // 2
        if start == stop and arr[start] != x:
            return False
        if arr[mid] == x:
            return True
        elif arr[mid] > x:
            return self.binary_search(arr, x, start, mid)
        else:
            return self.binary_search(arr, x, mid + 1, stop)

    # Function to search a given number in a row-wise sorted matrix.
    def searchRowMatrix(self, mat, x):
        length = len(mat[0]) - 1
        for arr in mat:
            if self.binary_search(arr, x, 0, length):
                return True
        return False
Given a 2D integer matrix mat[][] of size n x m, where every row and column is sorted in increasing order, and a number x, the task is to find whether element x is present in the matrix.
Examples:
Input: mat[][] = [[3, 30, 38],[20, 52, 54],[35, 60, 69]], x = 62
Output: false
Explanation: 62 is not present in the matrix, so output is false.
Input: mat[][] = [[18, 21, 27],[38, 55, 67]], x = 55
Output: true
Explanation: 55 is present in the matrix.
My Approach
The question states that every row in the matrix is sorted in ascending order. So we can use the binary search to find the element inside each array.
So:
Iterate each array of the matrix.
Find the element in array using binary search.
#User function Template for python3
class Solution:
    def binary_search(self, arr, x, start, stop):
        if start > stop:
            return False
        mid = (start + stop) // 2
        if start == stop and arr[start] != x:
            return False
        if arr[mid] == x:
            return True
        elif arr[mid] > x:
            return self.binary_search(arr, x, start, mid)
        else:
            return self.binary_search(arr, x, mid + 1, stop)

    def matSearch(self, mat, x):
        for arr in mat:
            if self.binary_search(arr, x, 0, len(arr) - 1):
                return True
        return False
Locust is an excellent load testing tool, enabling developers to simulate concurrent user traffic on their applications. One of its powerful features is wait times, which simulate the realistic user think time between consecutive tasks. By customizing wait times, you can emulate user behavior more effectively, making your tests reflect actual usage patterns.
In this blog, we'll cover:
What wait times are in Locust.
Built-in wait time options.
Creating custom wait times.
A full example with instructions to run the test.
What Are Wait Times in Locust?
In real-world scenarios, users don't interact with applications continuously. After performing an action (e.g., submitting a form), they often pause before the next action. This pause is called a wait time in Locust, and it plays a crucial role in mimicking real-life user behavior.
Locust provides several ways to define these wait times within your test scenarios.
FastAPI App Overview
Here's the FastAPI app that we'll test:
from fastapi import FastAPI

# Create a FastAPI app instance
app = FastAPI()

# Define a route with a GET method
@app.get("/")
def read_root():
    return {"message": "Welcome to FastAPI!"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: str = None):
    return {"item_id": item_id, "q": q}
Locust Examples for FastAPI
1. Constant Wait Time Example
Here, we'll simulate constant pauses between user requests:
from locust import HttpUser, task, constant

class FastAPIUser(HttpUser):
    wait_time = constant(2)  # Wait for 2 seconds between requests

    @task
    def get_root(self):
        self.client.get("/")  # Simulates a GET request to the root endpoint

    @task
    def get_item(self):
        self.client.get("/items/42?q=test")  # Simulates a GET request with path and query parameters
2. Between Wait Time Example
Simulating random pauses between requests:
from locust import HttpUser, task, between

class FastAPIUser(HttpUser):
    wait_time = between(1, 5)  # Random wait time between 1 and 5 seconds

    @task(3)  # Weighted task: this runs 3 times more often
    def get_root(self):
        self.client.get("/")

    @task(1)
    def get_item(self):
        self.client.get("/items/10?q=locust")
3. Custom Wait Time Example
Using a custom wait time function to introduce more complex user behavior:
import random
from locust import HttpUser, task

def custom_wait():
    return max(1, random.normalvariate(3, 1))  # Normal distribution (mean: 3s, stddev: 1s)

class FastAPIUser(HttpUser):
    wait_time = custom_wait

    @task
    def get_root(self):
        self.client.get("/")

    @task
    def get_item(self):
        self.client.get("/items/99?q=custom")
Full Test Example
Combining all the above elements, here's a complete Locust test for your FastAPI app.
from locust import HttpUser, task
import random

# Custom wait time function
def custom_wait():
    return max(1, random.uniform(1, 3))  # Random wait time between 1 and 3 seconds

class FastAPIUser(HttpUser):
    wait_time = custom_wait  # Use the custom wait time

    @task(3)
    def browse_homepage(self):
        """Simulates browsing the root endpoint."""
        self.client.get("/")

    @task(1)
    def browse_item(self):
        """Simulates fetching an item with ID and query parameter."""
        item_id = random.randint(1, 100)
        self.client.get(f"/items/{item_id}?q=test")
Running Locust for FastAPI
1. Run Your FastAPI App: Save the FastAPI app code in a file (e.g., main.py) and start the server:
uvicorn main:app --reload
By default, the app will run on http://127.0.0.1:8000.
2. Run Locust: Save the Locust file as locustfile.py and start Locust.
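The command to start Locust (the host flag here assumes the FastAPI address above):

```shell
locust -f locustfile.py --host http://127.0.0.1:8000
```

Then open the Locust web UI (by default at http://localhost:8089) to configure and start the swarm.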
JavaScript is a versatile programming language primarily used to create dynamic and interactive features on websites. It is a scripting language that allows you to implement complex features on web pages. Browsers have interpreters that convert JavaScript code to machine code. Each browser has its own engine, for example:
Chrome → V8 engine
Edge → Chakra
JavaScript Identifiers:
var message; // message is the identifier (variable name)
message = "Javascript";
function sayHello() { console.log("Hello"); }
// sayHello is the identifier for this function.
// Variable, object, function, array, and class names are identifiers in JS.
SCOPE: In JavaScript, scope refers to the context in which variables and functions are accessible. It determines the visibility and lifetime of these variables and functions within your code. There are three main types of scope in JavaScript.
Global Scope:
Variables declared outside any function or block have global scope.
These variables are accessible from anywhere in the code.
Example:
let globalVar = "I'm global";

function test() {
  console.log(globalVar); // Accessible here
}

test();
console.log(globalVar); // Accessible here too
Function Scope
Variables declared within a function are local to that function.
They cannot be accessed from outside the function.
Example:
function test() {
  let localVar = "I'm local";
  console.log(localVar); // Accessible here
}

test();
console.log(localVar); // Error: localVar is not defined
Block Scope:
Introduced with ES6, variables declared with let or const within a block (e.g., inside {}) are only accessible within that block.
Example:
{
  let blockVar = "I'm block-scoped";
  console.log(blockVar); // Accessible here
}
console.log(blockVar); // Error: blockVar is not defined
Keywords | Reserved Words
Keywords are reserved words in JavaScript that cannot be used as variable or function names.
Variables
Variables store values in RAM; each variable gets its own memory, so we need a memory address to access the value.
Stores anything: a JavaScript variable can store a value of any type.
Declaring Variables:
* var
* let
* const
We can declare variables using the above keywords.
Initializing Variables:
Use the assignment operator to assign a value to a variable:
var text = "hello";
Load balancing helps distribute client requests across multiple servers to ensure high availability, performance, and reliability. Weighted Round Robin Load Balancing is an extension of the round-robin algorithm, where each server is assigned a weight based on its capacity or performance capabilities. This approach ensures that more powerful servers handle more traffic, resulting in a more efficient distribution of the load.
What is Weighted Round Robin Load Balancing?
Weighted Round Robin Load Balancing assigns a weight to each server. The weight determines how many requests each server should handle relative to the others. Servers with higher weights receive more requests compared to those with lower weights. This method is useful when backend servers have different processing capabilities or resources.
Step-by-Step Implementation with Docker
Step 1: Create the Flask Applications
We'll use the same three Flask applications (app1.py, app2.py, and app3.py) as in previous examples.
Flask App 1 (app1.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 1!"

@app.route("/data")
def data():
    return "Data from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
Flask App 2 (app2.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 2!"

@app.route("/data")
def data():
    return "Data from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)
Flask App 3 (app3.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 3!"

@app.route("/data")
def data():
    return "Data from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)
Step 2: Create Dockerfiles for Each Flask Application
Create Dockerfiles for each of the Flask applications:
Dockerfile for Flask App 1 (Dockerfile.app1):
# Use the official Python image from Docker Hub
FROM python:3.9-slim
# Set the working directory inside the container
WORKDIR /app
# Copy the application file into the container
COPY app1.py .
# Install Flask inside the container
RUN pip install Flask
# Expose the port the app runs on
EXPOSE 5001
# Run the application
CMD ["python", "app1.py"]
Dockerfile for Flask App 2 (Dockerfile.app2):
FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]
Dockerfile for Flask App 3 (Dockerfile.app3):
FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]
Step 3: Create the HAProxy Configuration File
Create an HAProxy configuration file (haproxy.cfg) to implement Weighted Round Robin Load Balancing:
global
    log stdout format raw local0
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    server server1 app1:5001 weight 2 check
    server server2 app2:5002 weight 1 check
    server server3 app3:5003 weight 3 check
Explanation:
The balance roundrobin directive tells HAProxy to use the Round Robin load balancing algorithm.
The weight option specifies each server's relative share of requests:
server1 (App 1) has a weight of 2.
server2 (App 2) has a weight of 1.
server3 (App 3) has a weight of 3.
Requests will be distributed based on these weights: App 3 will receive the most requests, App 2 the least, and App 1 will be in between.
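The distribution logic can be sketched in Python. (This is a naive weighted round robin that simply repeats each server by its weight; HAProxy's actual implementation is a smoother interleaved variant, but the per-cycle proportions are the same.)

```python
from itertools import cycle

def weighted_round_robin(servers):
    """servers: list of (name, weight) pairs; yields server names per their weights."""
    expanded = [name for name, weight in servers for _ in range(weight)]
    return cycle(expanded)

rr = weighted_round_robin([("app1", 2), ("app2", 1), ("app3", 3)])
first_cycle = [next(rr) for _ in range(6)]
print(first_cycle)  # per 6 requests: app1 twice, app2 once, app3 three times
```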
Step 4: Create a Dockerfile for HAProxy
Create a Dockerfile for HAProxy (Dockerfile.haproxy):
# Use the official HAProxy image from Docker Hub
FROM haproxy:latest
# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
# Expose the port for HAProxy
EXPOSE 80
Step 5: Create a docker-compose.yml File
To manage all the containers together, create a docker-compose.yml file
The docker-compose.yml file defines the services (app1, app2, app3, and haproxy) and their respective configurations.
HAProxy depends on the three Flask applications to be up and running before it starts.
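A sketch of what that docker-compose.yml might look like, based on the service and Dockerfile names used in the steps above (this exact file is not from the original post):

```yaml
version: "3.8"
services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    ports:
      - "80:80"          # expose HAProxy on the host
    depends_on:          # start the Flask apps before HAProxy
      - app1
      - app2
      - app3
```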
Step 6: Build and Run the Docker Containers
Run the following command to build and start all the containers:
docker-compose up --build
This command builds Docker images for all three Flask apps and HAProxy, then starts them.
Step 7: Test the Load Balancer
Open your browser or use curl to make requests to the HAProxy server:
curl http://localhost/
curl http://localhost/data
Observation:
With Weighted Round Robin Load Balancing, you should see that requests are distributed according to the weights specified in the HAProxy configuration.
For example, App 3 should receive three times as many requests as App 2, and App 1 twice as many as App 2.
Conclusion
By implementing Weighted Round Robin Load Balancing with HAProxy, you can distribute traffic more effectively according to the capacity or performance of each backend server. This approach helps optimize resource utilization and ensures a balanced load across servers.
Load balancing is crucial for distributing incoming network traffic across multiple servers, ensuring optimal resource utilization and improving application performance. In this blog, we'll explore how to implement Least Connection load balancing using Flask as our backend application and HAProxy as our load balancer.
What is Least Connection Load Balancing?
Least Connection Load Balancing is a dynamic algorithm that distributes requests to the server with the fewest active connections at any given time. This method ensures that servers with lighter loads receive more requests, preventing any single server from becoming a bottleneck.
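The selection rule can be sketched as follows (a toy model for intuition, not HAProxy's implementation):

```python
def pick_server(active_connections):
    """active_connections: dict of server name -> current open connections.
    Returns the server with the fewest active connections."""
    return min(active_connections, key=active_connections.get)

conns = {"app1": 4, "app2": 1, "app3": 2}
print(pick_server(conns))  # app2 has the fewest active connections
```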
Step-by-Step Implementation with Docker
Step 1: Create the Flask Applications
We'll create three separate Flask apps, one per file.
Flask App 1 (app1.py) - Introduced slowness by adding sleep:
from flask import Flask
import time

app = Flask(__name__)

@app.route("/")
def hello():
    time.sleep(5)
    return "Hello from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
Flask App 2 (app2.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)
Flask App 3 (app3.py) - Introduced slowness by adding sleep:
from flask import Flask
import time

app = Flask(__name__)

@app.route("/")
def hello():
    time.sleep(5)
    return "Hello from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)
Each Flask app listens on a different port (5001, 5002, 5003).
Step 2: Create Dockerfiles for Each Flask Application
Dockerfile for Flask App 1 (Dockerfile.app1):
# Use the official Python image from the Docker Hub
FROM python:3.9-slim
# Set the working directory inside the container
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY app1.py .
# Install Flask inside the container
RUN pip install Flask
# Expose the port the app runs on
EXPOSE 5001
# Run the application
CMD ["python", "app1.py"]
Dockerfile for Flask App 2 (Dockerfile.app2)
FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]
Dockerfile for Flask App 3 (Dockerfile.app3)
FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]
Step 3: Create a configuration for HAProxy
global
log stdout format raw local0
daemon
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http_front
bind *:80
default_backend servers
backend servers
balance leastconn
server server1 app1:5001 check
server server2 app2:5002 check
server server3 app3:5003 check
Explanation:
frontend http_front: Defines the entry point for incoming traffic. It listens on port 80.
backend servers: Specifies the three Flask apps (app1, app2, app3) that HAProxy will distribute traffic across. The balance leastconn directive selects the Least Connection algorithm for load balancing.
server directives: Lists the backend servers with their hostnames and ports. The check option tells HAProxy to monitor the health of each server.
Step 4: Create a Dockerfile for HAProxy
Create a Dockerfile for HAProxy (Dockerfile.haproxy)
# Use the official HAProxy image from Docker Hub
FROM haproxy:latest
# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
# Expose the port for HAProxy
EXPOSE 80
Step 5: Create a Docker Compose file
To manage all the containers together, create a docker-compose.yml file.
The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
Each Flask app is built from its respective Dockerfile and runs on its port.
HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.
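The compose file itself is not shown in the post; a minimal sketch consistent with the Dockerfiles and haproxy.cfg above could look like this (the service names app1, app2, and app3 must match the hostnames used in haproxy.cfg):

```yaml
version: "3"
services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3
```

Since HAProxy reaches the apps over the Compose network by service name, publishing the individual app ports to the host is optional.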
Step 6: Build and Run the Docker Containers
Run the following commands to build and start all the containers:
# Build and run the containers
docker-compose up --build
This command builds the Docker images for all three Flask apps and HAProxy and starts the containers (add the -d flag to run them in the background).
Step 7: Test the Load Balancer
Open your browser or use a tool like curl to make requests to the HAProxy server:
curl http://localhost
Because Apps 1 and 3 are slowed by time.sleep(5), they hold their connections longer; under the Least Connection strategy, HAProxy routes most new requests to App 2, so you should see "Hello from Flask App 2!" more often than the other two responses.
Load balancing is crucial for distributing incoming network traffic across multiple servers, ensuring optimal resource utilization and improving application performance. One of the simplest and most popular load balancing algorithms is Round Robin. In this blog, we'll explore how to implement Round Robin load balancing using Flask as our backend application and HAProxy as our load balancer.
What is Round Robin Load Balancing?
Round Robin load balancing works by distributing incoming requests sequentially across a group of servers.
For example, the first request goes to Server A, the second to Server B, the third to Server C, and so on. Once all servers have received a request, the cycle repeats. This algorithm is simple and works well when all servers have similar capabilities.
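The rotation can be sketched in a few lines of Python (illustrative only; HAProxy implements this internally when balance roundrobin is set):

```python
from itertools import cycle

# Rotate endlessly through the backends in order.
servers = cycle(["app1:5001", "app2:5002", "app3:5003"])

for _ in range(6):
    print(next(servers))
# Cycles through app1:5001, app2:5002, app3:5003, then repeats.
```

Each incoming request simply takes the next server in the rotation, which is why this algorithm works best when all backends have similar capacity.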
Step-by-Step Implementation with Docker
Step 1: Create the Flask Applications
We'll create three simple Flask apps, one per backend server.
Flask App 1 (app1.py)
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello from Flask App 1!"
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5001)
Flask App 2 (app2.py)
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello from Flask App 2!"
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5002)
Flask App 3 (app3.py)
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello from Flask App 3!"
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5003)
Each Flask app listens on a different port (5001, 5002, 5003).
Step 2: Create Dockerfiles for Each Flask Application
Dockerfile for Flask App 1 (Dockerfile.app1)
# Use the official Python image from the Docker Hub
FROM python:3.9-slim
# Set the working directory inside the container
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY app1.py .
# Install Flask inside the container
RUN pip install Flask
# Expose the port the app runs on
EXPOSE 5001
# Run the application
CMD ["python", "app1.py"]
Dockerfile for Flask App 2 (Dockerfile.app2)
FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]
Dockerfile for Flask App 3 (Dockerfile.app3)
FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]
Step 3: Create a configuration for HAProxy
global
log stdout format raw local0
daemon
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http_front
bind *:80
default_backend servers
backend servers
balance roundrobin
server server1 app1:5001 check
server server2 app2:5002 check
server server3 app3:5003 check
Explanation:
frontend http_front: Defines the entry point for incoming traffic. It listens on port 80.
backend servers: Specifies the three Flask apps (app1, app2, app3) that HAProxy will distribute traffic across. The balance roundrobin directive selects the Round Robin algorithm for load balancing.
server directives: Lists the backend servers with their hostnames and ports. The check option tells HAProxy to monitor the health of each server.
Step 4: Create a Dockerfile for HAProxy
Create a Dockerfile for HAProxy (Dockerfile.haproxy)
# Use the official HAProxy image from Docker Hub
FROM haproxy:latest
# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
# Expose the port for HAProxy
EXPOSE 80
Step 5: Create a Docker Compose file
To manage all the containers together, create a docker-compose.yml file.
The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
Each Flask app is built from its respective Dockerfile and runs on its port.
HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.
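Since the post doesn't include the compose file itself, here is a minimal sketch consistent with the Dockerfiles and haproxy.cfg above (the service names app1, app2, and app3 must match the hostnames in haproxy.cfg):

```yaml
version: "3"
services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3
```

Only HAProxy's port 80 needs to be published; the apps are reached internally over the Compose network.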
Step 6: Build and Run the Docker Containers
Run the following commands to build and start all the containers:
# Build and run the containers
docker-compose up --build
This command builds the Docker images for all three Flask apps and HAProxy and starts the containers (add the -d flag to run them in the background).
Step 7: Test the Load Balancer
Open your browser or use a tool like curl to make requests to the HAProxy server:
curl http://localhost
You should see the responses cycling between "Hello from Flask App 1!", "Hello from Flask App 2!", and "Hello from Flask App 3!" as HAProxy distributes requests in Round Robin order.
Meet Sarah, a backend developer at "BrightApps," a fast-growing startup specializing in custom web applications. Recently, BrightApps launched a new service called "FitGuru," a health and fitness platform that quickly gained traction. However, as the platform's user base grew, the team noticed performance issues: page loads were slow, and users began to complain.
Sarah knew that simply scaling up their backend servers might not solve the problem. What they needed was a smarter way to handle incoming traffic and distribute it across their servers. That's when she decided to dive into the world of Layer 7 (L7) load balancing with HAProxy.
Understanding L7 Load Balancing
Layer 7 load balancing operates at the Application Layer of the OSI model. Unlike Layer 4 (L4) load balancing, which only considers information from the Transport Layer (like IP addresses and ports), an L7 load balancer examines the actual content of the HTTP requests. This deeper inspection allows it to make more intelligent decisions on how to route traffic.
Here's why Sarah chose L7 load balancing for "FitGuru":
Content-Based Routing: Sarah could route requests to different servers based on the URL path, HTTP headers, cookies, or even specific parameters in the request. For example, requests for video content could be directed to a server optimized for handling media, while API requests could go to a server focused on data processing.
SSL Termination: The L7 load balancer could handle the SSL encryption and decryption, relieving the backend servers from this CPU-intensive task.
Advanced Health Checks: Sarah could set up health checks that simulate real user traffic to ensure backend servers are actually serving content correctly, not just responding to pings.
Enhanced Security: With L7, she could filter out malicious traffic more effectively by inspecting request contents, blocking suspicious patterns, and protecting the app from common web attacks.
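As a concrete illustration of the "Advanced Health Checks" point above, HAProxy can probe an application endpoint instead of merely opening a TCP connection. A hedged sketch (the /health route is an assumed endpoint the application would need to expose):

```
backend servers
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    server app1 127.0.0.1:5001 check inter 2s fall 3 rise 2
```

With option httpchk in place, the check on the server line performs an HTTP request every 2 seconds; the server is marked down after 3 failures and back up after 2 successes.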
Step 1: Sarah's Plan with HAProxy as an HTTP Proxy
Sarah decided to configure HAProxy as an HTTP proxy. This way, it would operate at Layer 7 and provide advanced traffic management capabilities. She had a few objectives:
Route traffic based on the URL path to different servers.
Offload SSL termination to HAProxy.
Serve static files from specific backend servers and dynamic content from others.
Sarah started with a simple Flask application to test her configuration:
Flask Application Setup
Sarah created two basic Flask apps:
Static Content Server (static_app.py):
from flask import Flask, send_from_directory
app = Flask(__name__)
@app.route('/static/<path:filename>')
def serve_static(filename):
return send_from_directory('static', filename)
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5001)
This app served static content from a folder named static.
Dynamic Content Server (dynamic_app.py):
from flask import Flask
app = Flask(__name__)
@app.route('/')
def home():
return "Welcome to FitGuru!"
@app.route('/api/data')
def api_data():
return {"data": "Some dynamic data"}
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5002)
This app handled dynamic requests like API endpoints and the home page.
Step 2: Configuring HAProxy for HTTP Proxy
Sarah then moved on to configure HAProxy. She created an HAProxy configuration file (haproxy.cfg) to route traffic based on URL paths:
global
log stdout format raw local0
maxconn 4096
defaults
mode http
log global
option httplog
option dontlognull
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http_front
bind *:80
acl is_static path_beg /static
use_backend static_backend if is_static
default_backend dynamic_backend
backend static_backend
balance roundrobin
server static1 127.0.0.1:5001 check
backend dynamic_backend
balance roundrobin
server dynamic1 127.0.0.1:5002 check
Explanation of the Configuration
Frontend Configuration (http_front):
The frontend listens on port 80 (HTTP).
An ACL (is_static) is defined to identify requests for static content based on the URL path prefix /static.
Requests that match the is_static ACL are routed to the static_backend. All other requests are routed to the dynamic_backend.
Backend Configuration:
The static_backend handles static content requests and uses a round-robin strategy to distribute traffic between the servers (in this case, just static1).
The dynamic_backend handles all other requests in a similar manner.
Step 3: Deploying HAProxy with Docker
Sarah decided to use Docker to deploy HAProxy quickly:
Dockerfile for HAProxy:
FROM haproxy:2.4
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Build the image and run the container (assuming the file above is saved as Dockerfile; the image name haproxy-l7 is arbitrary):
docker build -t haproxy-l7 .
docker run -d -p 80:80 haproxy-l7
This runs HAProxy in a Docker container, listening on port 80.
Step 4: Testing the Setup
Now, it was time to test!
Static Content Test:
Sarah visited http://localhost/static/logo.png (HAProxy listens on port 80). The L7 load balancer identified the /static path and routed the request to static_backend.
Dynamic Content Test:
Visiting http://localhost or http://localhost/api/data confirmed that requests were routed to the dynamic_backend as expected.
The Result: A Smoother Experience for "FitGuru"
With L7 load balancing in place, "FitGuru" was now more responsive and could efficiently handle the incoming traffic surge:
Optimized Performance: Static content requests were efficiently served from servers dedicated to that purpose, while dynamic content was processed by more capable machines.
Reduced Backend Load: SSL termination can be handled by HAProxy (an addition to the configuration shown above), freeing the backend servers from CPU-intensive encryption tasks.
Flexible Traffic Management: Sarah could now easily add or modify rules to adapt to changing traffic patterns or requirements.
By implementing Layer 7 load balancing with HAProxy, Sarah provided "FitGuru" with a robust and scalable solution that ensured a seamless user experience, even during peak times. Now, she could confidently tackle the growing demands of their expanding user base, knowing the platform was built to handle whatever traffic came its way.
Layer 7 load balancing was more than just a tool; it was a strategy that allowed Sarah to understand, control, and optimize traffic in a way that best suited their application's unique needs. And with HAProxy, she had all the flexibility and power she needed to keep "FitGuru" running smoothly.
During our college days, we had a crash course on Machine Learning. Our coordinators had arranged for an ML engineer to take classes for three days. He insisted that we install the required packages so we could get hands-on experience, but unfortunately many of us were unsure how to install them. So we needed a way to install all the necessary packages on every machine.
All the machines shared one specific user account with the same password. So we figured that if we could automate the setup on one machine, it would be easy to repeat for the rest (just a for-loop iterating from x.0.0.1 to x.0.0.255). This is the birthplace of this tool.
Code:
#!/usr/bin/env python
import sys
import os.path
from multiprocessing.pool import ThreadPool
import paramiko
BASE_ADDRESS = "192.168.7."
USERNAME = "t1"
PASSWORD = "uni1"
def create_client(hostname):
"""Create a SSH connection to a given hostname."""
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(hostname=hostname, username=USERNAME, password=PASSWORD)
ssh_client.invoke_shell()
return ssh_client
def kill_computer(ssh_client):
"""Power off a computer."""
ssh_client.exec_command("poweroff")
def install_python_modules(ssh_client):
"""Install the programs specified in requirements.txt"""
ftp_client = ssh_client.open_sftp()
# Move over get-pip.py
local_getpip = os.path.expanduser("~/lab_freak/get-pip.py")
remote_getpip = "/home/%s/Documents/get-pip.py" % USERNAME
ftp_client.put(local_getpip, remote_getpip)
# Move over requirements.txt
local_requirements = os.path.expanduser("~/lab_freak/requirements.txt")
remote_requirements = "/home/%s/Documents/requirements.txt" % USERNAME
ftp_client.put(local_requirements, remote_requirements)
ftp_client.close()
# Install pip and the desired modules.
ssh_client.exec_command("python %s --user" % remote_getpip)
ssh_client.exec_command("python -m pip install --user -r %s" % remote_requirements)
def worker(action, hostname):
try:
ssh_client = create_client(hostname)
if action == "kill":
kill_computer(ssh_client)
elif action == "install":
install_python_modules(ssh_client)
else:
raise ValueError("Unknown action %r" % action)
except BaseException as e:
print("Running %r on %r failed with %r" % (action, hostname, e))
def main():
if len(sys.argv) < 2:
print("USAGE: python kill.py ACTION")
sys.exit(1)
hostnames = [str(BASE_ADDRESS) + str(i) for i in range(30, 60)]
with ThreadPool() as pool:
pool.map(lambda hostname: worker(sys.argv[1], hostname), hostnames)
if __name__ == "__main__":
main()
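As a standalone illustration of the sweep logic, this sketch builds the same hostname list as the script and fans out over it with a ThreadPool, using a stub in place of the real SSH worker:

```python
from multiprocessing.pool import ThreadPool

BASE_ADDRESS = "192.168.7."

# Same expression as in the script: hosts .30 through .59.
hostnames = [BASE_ADDRESS + str(i) for i in range(30, 60)]
print(hostnames[0], hostnames[-1], len(hostnames))  # 192.168.7.30 192.168.7.59 30

# Stub worker standing in for the SSH payload; the real one
# opens a paramiko connection and runs the chosen action.
def worker(hostname):
    return "ok:" + hostname

# ThreadPool (threads, not processes) suits this I/O-bound job,
# and unlike multiprocessing.Pool it happily accepts any callable.
with ThreadPool() as pool:
    results = pool.map(worker, hostnames)
print(results[0])  # ok:192.168.7.30
```

Threads are the right pool here because each worker spends its time waiting on the network; widening the range() is all it takes to sweep a full /24.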
1. Set up the project directory: Create a new directory for your Flask project.
mkdir flask-docker-app
cd flask-docker-app
2. Create a virtual environment (optional but recommended):
python3 -m venv venv
source venv/bin/activate
3. Install Flask
pip install Flask
4. Create a simple Flask app:
In the flask-docker-app directory, create a file named app.py with the following content:
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello, Dockerized Flask!'
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000)
5. Test the Flask app: Run the Flask application to ensure it's working.
python app.py
Visit http://127.0.0.1:5000/ in your browser. You should see "Hello, Dockerized Flask!".
Dockerize the Flask Application
1. Create a Dockerfile: In the flask-docker-app directory, create a file named Dockerfile with the following content:
# Use the official Python image from the Docker Hub
FROM python:3.9-slim
# Set the working directory in the container
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir Flask
# Make port 5000 available to the world outside this container
EXPOSE 5000
# Define environment variable
ENV FLASK_APP=app.py
# Run app.py when the container launches
CMD ["python", "app.py"]
2. Create a .dockerignore file:
In the flask-docker-app directory, create a file named .dockerignore to ignore unnecessary files during the Docker build process:
venv
__pycache__
*.pyc
*.pyo
3. Build the Docker image:
In the flask-docker-app directory, run the following command to build your Docker image:
docker build -t flask-docker-app .
4. Run the Docker container:
Run the Docker container using the image you just built:
docker run -p 5000:5000 flask-docker-app
5. Access the Flask app in Docker: Visit http://localhost:5000/ in your browser. You should see "Hello, Dockerized Flask!" served from the Docker container.
You have successfully created a simple Flask application and Dockerized it. The Dockerfile allows you to package your app with its dependencies and run it in a consistent environment.