
📢 Python Learning 2.0 in Tamil – Call for Participants! 🚀

10 February 2025 at 07:58

After an incredible year of Python learning (watch our journey here), we're back with an all-new approach for 2025!

If you haven't subscribed to our channel yet, don't miss it: Support Us by subscribing

This time, we’re shifting gears from theory to practice with mini projects that will help you build real-world solutions. Study materials will be shared beforehand, and you’ll work hands-on to solve practical problems building actual projects that showcase your skills.

🔑 What's New?

✅ Real-world mini projects
✅ Task-based shortlisting process
✅ Limited seats for focused learning
✅ Dedicated WhatsApp group for discussions & mentorship
✅ Live streaming of sessions for wider participation
✅ Study materials, quizzes, surprise gifts, and more!

📋 How to Join?

  1. Fill out the RSVP form below – open for 20 days (till March 2) only!
  2. After RSVP closes, shortlisted participants will receive tasks via email.
  3. Complete the tasks to get shortlisted.
  4. Selected students will be added to an exclusive WhatsApp group for intensive training.
  5. It's COST-FREE learning; all we ask for is your time, effort, and support.
  6. Course start date will be announced after RSVP.

📜 RSVP Form

☎ How to Contact for Queries ?

If you have any queries, feel free to message me on WhatsApp, Telegram, or Signal at 9176409201.

You can also mail me at learnwithjafer@gmail.com

Follow us for more opportunities/updates and more…

Don't miss this chance to level up your Python skills, cost-free, with hands-on projects and exciting rewards! RSVP now and be part of Python Learning 2.0! 🚀

Our Previous Monthly meets – https://www.youtube.com/watch?v=cPtyuSzeaa8&list=PLiutOxBS1MizPGGcdfXF61WP5pNUYvxUl&pp=gAQB

Our Previous Sessions,

Postgres – https://www.youtube.com/watch?v=04pE5bK2-VA&list=PLiutOxBS1Miy3PPwxuvlGRpmNo724mAlt&pp=gAQB

Python – https://www.youtube.com/watch?v=lQquVptFreE&list=PLiutOxBS1Mizte0ehfMrRKHSIQcCImwHL&pp=gAQB

Docker – https://www.youtube.com/watch?v=nXgUBanjZP8&list=PLiutOxBS1Mizi9IRQM-N3BFWXJkb-hQ4U&pp=gAQB

Note: If you wish to support me for this initiative please share this with your friends, students and those who are in need.

Python variable

By: krishna
7 February 2025 at 06:34

This blog explores Python variable usage and functionalities.

Store All Data Types

First, let’s list the data types supported in Python.

Python Supported Data Types

  • Primitive Data Types:
    • Integer
    • Float
    • String
    • Boolean
    • None
  • Non-Primitive Data Types:
    • List
    • Tuple
    • Set
    • Bytes and ByteArray
    • Complex
    • Class Object

Store and Experiment with All Data Types

phone = 1234567890
pi    = 3.14
name  = "python programming"
using_python   = True
example_none   = None
simple_list    = [1,2,3]
simple_dict    = {"name":"python"}
simple_set     = {1,1,1,1,1.0}
simple_tuple   = (1,2,3)
complex_number = 3+5j

print("number     = ",phone)
print("float      = ",pi)
print("boolean    = ",using_python)
print("None       = ",example_none)
print("list       = ",simple_list)
print("dictionary = ",simple_dict)
print("set        = ",simple_set)
print("tuple      = ",simple_tuple)
print("complex    = ",complex_number)

Output

number     =  1234567890
float      =  3.14
boolean    =  True
None       =  None
list       =  [1, 2, 3]
dictionary =  {'name': 'python'}
set        =  {1}
tuple      =  (1, 2, 3)
complex    =  (3+5j)

Type Function

The type() function is used to find the data type of a variable.

print(type(phone))
print(type(pi))
print(type(using_python))
print(type(simple_list))
print(type(simple_dict))
print(type(simple_set))
print(type(simple_tuple))
print(type(complex_number))
print(type(example_none))

Output

<class 'int'>
<class 'float'>
<class 'bool'>
<class 'list'>
<class 'dict'>
<class 'set'>
<class 'tuple'>
<class 'complex'>
<class 'NoneType'>

Bytes and ByteArray

These data types are useful when working with large chunks of binary data, such as audio or image buffers.

The memoryview() function provides zero-copy access to this data, so it can be read (and, for mutable types like bytearray, modified) without copying it.

bytes is immutable, whereas bytearray is mutable.

bytes

b1 = bytes([1,2,3,4])
bo = memoryview(b1)
print("byte example :",bo[0])

Output

byte example : 1
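
Since bytes is immutable, writing through a memoryview of a bytes object raises a TypeError. A quick check, reusing b1 and bo from above:

try:
    bo[0] = 5   # bo wraps the immutable bytes object b1
except TypeError as error:
    print("cannot modify bytes :", error)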

bytearray

b2 = bytearray(b"aaaa")
bo = memoryview(b2)
print("byte array :",b2)
bo[1] = 98
bo[2] = 99
bo[3] = 100
print("byte array :",b2)

Output

byte array : bytearray(b'aaaa')
byte array : bytearray(b'abcd')

Other Data Types

Frozenset

A frozenset is similar to a normal set, but it is immutable.

test_frozenset = frozenset([1,2,1])
for i in test_frozenset:
    print("frozen set item :",i)

Output

frozen set item : 1
frozen set item : 2

Range

The range data type specifies a number range (e.g., 1-10).

It is mostly used in loops and mathematical operations.

a = range(3)
print("a = ",a)
print("type = ",type(a))

Output

a =  range(0, 3)
type =  <class 'range'>
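
For example, iterating over a range in a for loop:

for i in range(3):
    print("loop value :", i)

Output

loop value : 0
loop value : 1
loop value : 2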

Type Casting

Type casting converts a value from one data type to another. The examples below use explicit type casting to change data types.

value = 1
numbers = [1,2,3]
print("type casting int     : ",type(int(value)))
print("type casting float   : ",type(float(value)))
print("type casting string  : ",type(str(value)))
print("type casting boolean : ",type(bool("True")))
print("type casting list    : ",type(list(numbers)))
print("type casting set     : ",type(set(numbers)))
print("type casting tuple   : ",type(tuple(numbers)))

Output

type casting int     :  <class 'int'>
type casting float   :  <class 'float'>
type casting string  :  <class 'str'>
type casting boolean :  <class 'bool'>
type casting list    :  <class 'list'>
type casting set     :  <class 'set'>
type casting tuple   :  <class 'tuple'>

Delete Variable

Delete an existing variable using Python’s del keyword.

temp = 1
print("temp variable is : ",temp)
del temp
# print(temp)  => throw NameError : temp not defined

Output

temp variable is :  1

Find Variable Memory Address

Use the id() function to find the memory address of a variable.

temp = "hi"
print("address of temp variable : ",id(temp))

Output

address of temp variable :  140710894284672

Constants

Python does not have a dedicated keyword for constants, but a namedtuple can emulate them because its fields are read-only.

from collections import namedtuple
const = namedtuple("const",["PI"])
math = const(3.14)

print("namedtuple PI = ",math.PI)
print("namedtuple type =",type(math))

Output

namedtuple PI =  3.14
namedtuple type = <class '__main__.const'>
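
Reassigning PI raises an AttributeError, which is what makes the namedtuple behave like a constant:

try:
    math.PI = 3.2   # namedtuple fields cannot be reassigned
except AttributeError as error:
    print("cannot modify constant :", error)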

Global Keyword

Before understanding the global keyword, it helps to understand function-scoped variables.

  • Variables created inside a function are stored in stack memory (local scope).
  • A function cannot modify an outside (global) variable directly, but it can access it.
  • To create a reference to a global variable inside a function, use the global keyword.
message = "Hi"
def dummy():
    global message 
    message = message+" all"

dummy()
print(message)

Output

Hi all

Explicit Type Hint

Type hints are mostly used for function parameters and return values.

They improve code readability and help developers understand variable types.

-> float : It indicates the return data type.

def area_of_circle(radius :float) -> float:
    PI :float = 3.14
    return PI * (radius * radius)

print("calculate area of circle = ",area_of_circle(2))

Output

calculate area of circle =  12.56

POTD #22 – Longest substring with distinct characters | Geeks For Geeks

11 January 2025 at 16:44

Problem Statement

Geeks For Geeks : https://www.geeksforgeeks.org/problems/longest-distinct-characters-in-string5848/1

Given a string s, find the length of the longest substring with all distinct characters.


Input: s = "geeksforgeeks"
Output: 7
Explanation: "eksforg" is the longest substring with all distinct characters.


Input: s = "abcdefabcbb"
Output: 6
Explanation: The longest substring with all distinct characters is "abcdef", which has a length of 6.

My Approach – Sliding Window


class Solution:
    def longestUniqueSubstr(self, s):
        # code here
        char_index = {}
        max_length = 0
        start = 0
        
        for i, char in enumerate(s):
            if char in char_index and char_index[char] >= start:
                start = char_index[char] + 1 #crux
            
            char_index[char] = i
            
            max_length = max(max_length, i - start + 1)
        
        return max_length
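
A quick way to sanity-check the solution against the examples above:

solution = Solution()
print(solution.longestUniqueSubstr("geeksforgeeks"))   # 7  ("eksforg")
print(solution.longestUniqueSubstr("abcdefabcbb"))     # 6  ("abcdef")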
                

Learning Notes #49 – Pitfall of Implicit Default Values in APIs

9 January 2025 at 14:00

Today, we faced a bug in our workflow due to an implicit default value in a third-party API. In this blog, I will share my experience for future reference.

Understanding the Problem

Consider an API where some fields are optional, and a default value is used when those fields are not provided by the client. This design is common and seemingly harmless. However, problems arise when,

  1. Unexpected Categorization: The default value influences logic, such as category assignment, in ways the client did not intend.
  2. Implicit Assumptions: The API assumes a default value aligns with the client’s intention, leading to misclassification or incorrect behavior.
  3. Debugging Challenges: When issues occur, clients and developers spend significant time tracing the problem because the default behavior is not transparent.

Here’s an example of how this might manifest,


POST /items
{
  "name": "Sample Item",
  "category": "premium"
}

If the category field is optional and a default value of "basic" is applied when it’s omitted, the following request,


POST /items
{
  "name": "Another Item"
}

might incorrectly classify the item as basic, even if the client intended it to be uncategorized.

Why This is a Code Smell

Implicit default handling for optional fields often signals poor design. Let’s break down why,

  1. Violation of the Principle of Least Astonishment: Clients may be unaware of default behavior, leading to unexpected outcomes.
  2. Hidden Logic: The business logic embedded in defaults is not explicit in the API’s contract, reducing transparency.
  3. Coupling Between API and Business Logic: When defaults dictate core behavior, the API becomes tightly coupled to specific business rules, making it harder to adapt or extend.
  4. Inconsistent Behavior: If the default logic changes in future versions, existing clients may experience breaking changes.

Best Practices to Avoid the Trap

  1. Make Default Behavior Explicit
    • Clearly document default values in the API specification (but we still missed it.)
    • For example, use OpenAPI/Swagger to define optional fields and their default values explicitly
  2. Avoid Implicit Defaults
    • Instead of applying defaults server-side, require the client to explicitly provide values, even if they are defaults.
    • This ensures the client is fully aware of the data being sent and its implications.
  3. Use Null or Explicit Indicators
    • Allow optional fields to be explicitly null or undefined, and handle these cases appropriately.
    • In this case, the API can handle null as β€œno category specified” rather than applying a default.
  4. Fail Fast with Validation
    • Use strict validation to reject ambiguous requests, encouraging clients to provide clear inputs.

{
  "error": "Field 'category' must be provided explicitly."
}
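
To make points 3 and 4 above concrete, here is a minimal, framework-agnostic sketch (the classify_item helper is hypothetical) that treats a missing field differently from an explicit null instead of silently applying a default:

_MISSING = object()  # sentinel to distinguish "absent" from "explicitly null"

def classify_item(payload: dict) -> str:
    category = payload.get("category", _MISSING)
    if category is _MISSING:
        # Fail fast instead of silently assuming "basic".
        raise ValueError("Field 'category' must be provided explicitly.")
    if category is None:
        return "uncategorized"
    return category

print(classify_item({"name": "Sample Item", "category": "premium"}))  # premium
print(classify_item({"name": "Null Item", "category": None}))         # uncategorized
# classify_item({"name": "Another Item"})  # raises ValueError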

5. Version Your API Thoughtfully:

  • Document changes and provide clear migration paths for clients.
  • If you must change default behaviors, ensure backward compatibility through versioning.

Implicit default values for optional fields can lead to unintended consequences, obscure logic, and hard-to-debug issues. Recognizing this pattern as a code smell is the first step to building more robust APIs. By adopting explicitness, transparency, and rigorous validation, you can create APIs that are easier to use, understand, and maintain.

Learning Notes #11 – Sidecar Pattern | Cloud Patterns

26 December 2024 at 17:40

Today, I learnt about the Sidecar Pattern. It is about offloading common functionalities (logging, networking, …) into a companion container within a pod, to be used by the main app in that pod.

It's not only about pods; it applies to other deployments as well. In this blog, I am going to curate the items I have learnt, for my future self. It's a pattern, not a strict rule.

What is a Sidecar?

Imagine you're riding a motorbike, and you attach a little sidecar to carry your friend or groceries. The sidecar isn't part of the motorbike's engine or core mechanism, but it helps you achieve your goals, whether it's carrying more stuff or having a buddy ride along.

In the software world, a sidecar is a similar concept. It’s a separate process or container that runs alongside a primary application. Like the motorbike’s sidecar, it supports the main application by offloading or enhancing certain tasks without interfering with its core functionality.

Why Use a Sidecar?

In traditional applications, all responsibilities (logging, communication, monitoring, etc.) are bundled into the main application. This approach can make the application complex and harder to manage. Sidecars address this by handling auxiliary tasks separately, so the main application can focus on its primary purpose.

Here are some key reasons to use a sidecar

  1. Modularity: Sidecars separate responsibilities, making the system easier to develop, test, and maintain.
  2. Reusability: The same sidecar can be used across multiple services, and it's language-agnostic.
  3. Scalability: You can scale the sidecar independently from the main application.
  4. Isolation: Sidecars provide a level of isolation, reducing the risk of one part affecting the other.

Real-Life Analogies

To make the concept clearer, here are some real-world analogies:

  1. Coffee Maker with a Milk Frother:
    • The coffee maker (main application) brews coffee.
    • The milk frother (sidecar) prepares frothed milk for your latte.
    • Both work independently but combine their outputs for a better experience.
  2. Movie Subtitles:
    • The movie (main application) provides the visuals and sound.
    • The subtitles (sidecar) add clarity for those who need them.
    • You can watch the movie with or without subtitles; they're optional but enhance the experience.
  3. A School with a Sports Coach:
    • The school (main application) handles education.
    • The sports coach (sidecar) focuses on physical training.
    • Both have distinct roles but contribute to the overall development of students.

Some Random Sidecar Ideas in Software

Let’s look at how sidecars are used in actual software scenarios

  1. Service Meshes (e.g., Istio, Linkerd):
    • A service mesh helps microservices communicate with each other reliably and securely.
    • The sidecar (proxy like Envoy) handles tasks like load balancing, encryption, and monitoring, so the main application doesn’t have to.
  2. Logging and Monitoring:
    • Instead of the main application generating and managing logs, a sidecar can collect, format, and send logs to a centralized system like Elasticsearch or Splunk.
  3. Authentication and Security:
    • A sidecar can act as a gatekeeper, handling user authentication and ensuring that only authorized requests reach the main application.
  4. Data Caching:
    • If an application frequently queries a database, a sidecar can serve as a local cache, reducing database load and speeding up responses.
  5. Service Discovery:
    • Sidecars can aid in service discovery by automatically registering the main application with a registry service or load balancer, ensuring seamless communication in dynamic environments.

How Sidecars Work

In modern environments like Kubernetes, sidecars are often deployed as separate containers within the same pod as the main application. They share the same network and storage, making communication between the two seamless.

Here’s a simplified workflow

  1. The main application focuses on its core tasks (e.g., serving a web page).
  2. The sidecar handles auxiliary tasks (e.g., compressing and encrypting logs).
  3. The two communicate over local connections within the pod.

Pros and Cons of Sidecars

Pros:

  • Simplifies the main application.
  • Encourages reusability and modular design.
  • Improves scalability and flexibility.
  • Enhances observability with centralized logging and metrics.
  • Facilitates experimentationβ€”you can deploy or update sidecars independently.

Cons:

  • Adds complexity to deployment and orchestration.
  • Consumes additional resources (CPU, memory).
  • Requires careful design to avoid tight coupling between the sidecar and the main application.
  • Latency (You are adding an another hop).

Do we always need to use sidecars?

No. Not at all.

a. If the added network hop introduces noticeable latency between the parent application and the sidecar, reconsider.

b. If your application is small, reconsider.

c. If you need to scale differently or independently from the parent application, reconsider.

Some other examples

1. Adding HTTPS to a Legacy Application

Consider a legacy web service that serves requests over unencrypted HTTP. We have a requirement to enhance the same legacy system to serve requests over HTTPS in the future.

The legacy app is configured to serve requests exclusively on localhost, which means that only services sharing the local network with the server are able to access the legacy application. In addition to the main container (the legacy app), we can add an Nginx sidecar container that runs in the same network namespace as the main container, so it can reach the service running on localhost and expose it over HTTPS.

2. For Logging (Image from ByteByteGo)

Sidecars are not just technical solutions; they embody the principle of collaboration and specialization. By dividing responsibilities, they empower the main application to shine while ensuring auxiliary tasks are handled efficiently. Next time you hear about sidecars, you'll know they're more than just cool attachments for motorcycles; they're an essential part of scalable, maintainable software systems.

Also, do you feel it's closely related to the Adapter and Ambassador patterns? I do.

References:

  1. Hussein Nasser – https://www.youtube.com/watch?v=zcJWvhzkPsw&pp=ygUHc2lkZWNhcg%3D%3D
  2. Sudo Code – https://www.youtube.com/watch?v=QU5WcwuFpZU&pp=ygUPc2lkZWNhciBwYXR0ZXJu
  3. Software Dude – https://www.youtube.com/watch?v=poPUzN33Oug&pp=ygUPc2lkZWNhciBwYXR0ZXJu
  4. https://medium.com/nerd-for-tech/microservice-design-pattern-sidecar-sidekick-pattern-dbcea9bed783
  5. https://dzone.com/articles/sidecar-design-pattern-in-your-microservices-ecosy-1

POTD #6 – Two Sum – Pair with Given Sum | Geeks For Geeks

26 December 2024 at 06:05

Problem Statement

Geeks For Geeks : https://www.geeksforgeeks.org/problems/key-pair5616/1

Given an array arr[] of positive integers and another integer target, determine if there exist two distinct indices such that the sum of their elements equals target.

Examples


Input: arr[] = [1, 4, 45, 6, 10, 8], target = 16
Output: true
Explanation: arr[3] + arr[4] = 6 + 10 = 16.


Input: arr[] = [1, 2, 4, 3, 6], target = 11
Output: false
Explanation: None of the pair makes a sum of 11.

My Approach

  • Iterate through the array.
  • For each element, check whether the remainder (target – element) has already been seen, using a supporting hashmap.
  • If the remainder is present, return True.
  • Otherwise, save the element in the hashmap and move to the next element.

#User function Template for python3
class Solution:
	def twoSum(self, arr, target):
		# code here
		maps = {}
		for item in arr:
		    rem = target - item
		    if maps.get(rem):
		        return True
		    maps[item] = True
		return False
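
A quick way to check the solution against the examples above:

solution = Solution()
print(solution.twoSum([1, 4, 45, 6, 10, 8], 16))  # True
print(solution.twoSum([1, 2, 4, 3, 6], 11))       # False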
		

POTD #5 – Set Matrix Zeroes | Geeks For Geeks

25 December 2024 at 06:20

Problem Statement

Geeks For Geeks : https://www.geeksforgeeks.org/problems/set-matrix-zeroes/1

You are given a 2D matrix mat[][] of size n×m. The task is to modify the matrix such that if mat[i][j] is 0, all the elements in the i-th row and j-th column are set to 0, and to do it in constant space complexity.


Input: mat[][] = [[1, -1, 1],
                [-1, 0, 1],
                [1, -1, 1]]
Output: [[1, 0, 1],
        [0, 0, 0],
        [1, 0, 1]]
Explanation: mat[1][1] = 0, so all elements in row 1 and column 1 are updated to zeroes.


Input: mat[][] = [[0, 1, 2, 0],
                [3, 4, 5, 2],
                [1, 3, 1, 5]]
Output: [[0, 0, 0, 0],
        [0, 4, 5, 0],
        [0, 3, 1, 0]]
Explanation: mat[0][0] and mat[0][3] are 0s, so all elements in row 0, column 0 and column 3 are updated to zeroes.

My Approach

  1. Iterate through the matrix and check whether mat[i][j] is zero. If it is zero, then row i and column j need to be made zeros.
  2. Collect these row and column indices in sets.
  3. Finally, iterate through the sets and update the matrix.

#User function Template for python3
class Solution:
    
    
    def setMatrixZeroes(self, mat):
        rows_to_zeros = set()
        cols_to_zeros = set()
        
        rows = len(mat)
        cols = len(mat[0])
        
        for i in range(rows):
            for j in range(cols):
                if mat[i][j] == 0:
                    rows_to_zeros.add(i)
                    cols_to_zeros.add(j)
        
        for row in rows_to_zeros:
            for itr in range(cols):
                mat[row][itr] = 0
        
        
        for col in cols_to_zeros:
            for itr in range(rows):
                mat[itr][col] = 0       
        
        return mat

POTD #4 – Search in a sorted Matrix | Geeks For Geeks

24 December 2024 at 06:29

Problem Statement

Geeks For Geeks – https://www.geeksforgeeks.org/problems/search-in-a-matrix-1587115621/1

Given a strictly sorted 2D matrix mat[][] of size n x m and a number x, find whether the number x is present in the matrix or not.


Note: In a strictly sorted matrix, each row is sorted in strictly increasing order, and the first element of the ith row (i != 0) is greater than the last element of the (i-1)th row.


Input: mat[][] = [[1, 5, 9], [14, 20, 21], [30, 34, 43]], x = 14
Output: true
Explanation: 14 is present in the matrix, so output is true.

My Approach

Today's problem is the same as yesterday's problem.


class Solution:
    
    def binary_search(self, arr, x, start, stop):
        if start > stop:
            return False
        mid = (start + stop) // 2
        if start == stop and arr[start] != x:
            return False
        if arr[mid] == x:
            return True
        elif arr[mid] > x:
            return self.binary_search(arr, x, start, mid)
        else:
            return self.binary_search(arr, x, mid+1, stop)
    
    #Function to search a given number in row-column sorted matrix.
    def searchMatrix(self, mat, x): 
        # code here
        length = len(mat[0]) - 1
        for arr in mat:
            result = self.binary_search(arr, x, 0, length)
            if result:
                return True
        return False

POTD #3 Search in a row-wise sorted matrix | Geeks For Geeks

23 December 2024 at 10:25

Problem Statement

GFG Link – https://www.geeksforgeeks.org/problems/search-in-a-row-wise-sorted-matrix/1

Given a row-wise sorted 2D matrix mat[][] of size n x m and an integer x, find whether element x is present in the matrix.
Note: In a row-wise sorted matrix, each row is sorted in itself, i.e. for any i, j within bounds, mat[i][j] <= mat[i][j+1].

Input: mat[][] = [[3, 4, 9],[2, 5, 6],[9, 25, 27]], x = 9
Output: true
Explanation: 9 is present in the matrix, so the output is true.

Input: mat[][] = [[19, 22, 27, 38, 55, 67]], x = 56
Output: false
Explanation: 56 is not present in the matrix, so the output is false.

My Approach:

Today's problem is the same as yesterday's. But I got timed out, so instead of calculating len(arr) on every call (which is always the same), I stored it in a variable once and passed it along.

class Solution:
    def binary_search(self, arr, x, start, stop):
        if start > stop:
            return False
        mid = (start + stop) // 2
        if start == stop and arr[start] != x:
            return False
        if arr[mid] == x:
            return True
        elif arr[mid] > x:
            return self.binary_search(arr, x, start, mid)
        else:
            return self.binary_search(arr, x, mid+1, stop)
    
    #Function to search a given number in row-column sorted matrix.
    def searchRowMatrix(self, mat, x): 
        # code here
        length = len(mat[0]) - 1
        for arr in mat:
            result = self.binary_search(arr, x, 0, length)
            if result:
                return True
        return False


POTD #2 Search in a Row-Column sorted matrix | Geeks For Geeks

22 December 2024 at 05:50

Second day of POTD Geeks For Geeks. https://www.geeksforgeeks.org/problems/search-in-a-matrix17201720/1.

Given a 2D integer matrix mat[][] of size n x m, where every row and column is sorted in increasing order, and a number x, the task is to find whether element x is present in the matrix.

Examples:


Input: mat[][] = [[3, 30, 38],[20, 52, 54],[35, 60, 69]], x = 62
Output: false
Explanation: 62 is not present in the matrix, so output is false.

Input: mat[][] = [[18, 21, 27],[38, 55, 67]], x = 55
Output: true
Explanation: 55 is present in the matrix.

My Approach

The question states that every row in the matrix is sorted in ascending order. So we can use binary search to find the element inside each row.

So,

  1. Iterate over each row of the matrix.
  2. Find the element in the row using binary search.

#User function Template for python3
class Solution:
    
    def binary_search(self, arr, x, start, stop):
        if start > stop:
            return False
        mid = (start + stop) // 2
        if start == stop and arr[start] != x:
            return False
        if arr[mid] == x:
            return True
        elif arr[mid] > x:
            return self.binary_search(arr, x, start, mid)
        else:
            return self.binary_search(arr, x, mid+1, stop)
        
        
    
    def matSearch(self, mat, x):
        # Complete this function
        for arr in mat:
            result = self.binary_search(arr, x, 0, len(arr)-1)
            if result:
                return True
        return False


Locust EP 2: Understanding Locust Wait Times with Complete Examples

17 November 2024 at 07:43

Locust is an excellent load testing tool, enabling developers to simulate concurrent user traffic on their applications. One of its powerful features is wait times, which simulate the realistic user think time between consecutive tasks. By customizing wait times, you can emulate user behavior more effectively, making your tests reflect actual usage patterns.

In this blog, we’ll cover,

  1. What wait times are in Locust.
  2. Built-in wait time options.
  3. Creating custom wait times.
  4. A full example with instructions to run the test.

What Are Wait Times in Locust?

In real-world scenarios, users don’t interact with applications continuously. After performing an action (e.g., submitting a form), they often pause before the next action. This pause is called a wait time in Locust, and it plays a crucial role in mimicking real-life user behavior.

Locust provides several ways to define these wait times within your test scenarios.

FastAPI App Overview

Here’s the FastAPI app that we’ll test,


from fastapi import FastAPI

# Create a FastAPI app instance
app = FastAPI()

# Define a route with a GET method
@app.get("/")
def read_root():
    return {"message": "Welcome to FastAPI!"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: str = None):
    return {"item_id": item_id, "q": q}

Locust Examples for FastAPI

1. Constant Wait Time Example

Here, we’ll simulate constant pauses between user requests


from locust import HttpUser, task, constant

class FastAPIUser(HttpUser):
    wait_time = constant(2)  # Wait for 2 seconds between requests

    @task
    def get_root(self):
        self.client.get("/")  # Simulates a GET request to the root endpoint

    @task
    def get_item(self):
        self.client.get("/items/42?q=test")  # Simulates a GET request with path and query parameters

2. Between wait time Example

Simulating random pauses between requests.


from locust import HttpUser, task, between

class FastAPIUser(HttpUser):
    wait_time = between(1, 5)  # Random wait time between 1 and 5 seconds

    @task(3)  # Weighted task: this runs 3 times more often
    def get_root(self):
        self.client.get("/")

    @task(1)
    def get_item(self):
        self.client.get("/items/10?q=locust")

3. Custom Wait Time Example

Using a custom wait time function to introduce more complex user behavior


import random
from locust import HttpUser, task

def custom_wait():
    return max(1, random.normalvariate(3, 1))  # Normal distribution (mean: 3s, stddev: 1s)

class FastAPIUser(HttpUser):
    wait_time = custom_wait

    @task
    def get_root(self):
        self.client.get("/")

    @task
    def get_item(self):
        self.client.get("/items/99?q=custom")


Full Test Example

Combining all the above elements, here’s a complete Locust test for your FastAPI app.


from locust import HttpUser, task, between
import random

# Custom wait time function
def custom_wait():
    return max(1, random.uniform(1, 3))  # Random wait time between 1 and 3 seconds

class FastAPIUser(HttpUser):
    wait_time = custom_wait  # Use the custom wait time

    @task(3)
    def browse_homepage(self):
        """Simulates browsing the root endpoint."""
        self.client.get("/")

    @task(1)
    def browse_item(self):
        """Simulates fetching an item with ID and query parameter."""
        item_id = random.randint(1, 100)
        self.client.get(f"/items/{item_id}?q=test")

Running Locust for FastAPI

  1. Run Your FastAPI App
    Save the FastAPI app code in a file (e.g., main.py) and start the server

uvicorn main:app --reload

By default, the app will run on http://127.0.0.1:8000.

2. Run Locust
Save the Locust file as locustfile.py and start Locust.


locust -f locustfile.py

3. Configure Locust
Open http://localhost:8089 in your browser and enter:

  • Host: http://127.0.0.1:8000
  • Number of users and spawn rate based on your testing requirements.

4. Run in Headless Mode (Optional)
Use the following command to run Locust in headless mode


locust -f locustfile.py --headless -u 50 -r 10 --host http://127.0.0.1:8000

-u 50: Simulate 50 users.

-r 10: Spawn 10 users per second.

JAVASCRIPT

By: Elavarasu
25 October 2024 at 09:56

JavaScript is a versatile programming language primarily used to create dynamic and interactive features on websites.
JavaScript is a scripting language that allows you to implement complex features on web pages.
Browsers have interpreters (JavaScript engines) that convert JavaScript code to machine code.
Each browser has its own engine, for example:

  • Chrome – V8-engine
  • Edge – Chakra

JavaScript Identifiers:

var message; // message is a variable (identifier)
message = 'Javascript';

function sayHello() {
  console.log('Hello');
}

// sayHello is the identifier for this function.

// Variable, object, function, array, and class names are identifiers in JS.

SCOPE :
In JavaScript, scope refers to the context in which variables and functions are accessible. It determines the visibility and lifetime of these variables and functions within your code. There are three main types of scope in JavaScript.

Global Scope:

  • Variables declared outside any function or block have global scope.
  • These variables are accessible from anywhere in the code

example :

let globalVar = "I'm global";

function test() {
  console.log(globalVar); // Accessible here
}

test();
console.log(globalVar); // Accessible here too

Function Scope

  • Variables declared within a function are local to that function.
  • They cannot be accessed from outside the function.

example :

function test() {
  let localVar = "I'm local";
  console.log(localVar); // Accessible here
}

test();
console.log(localVar); // Error: localVar is not defined

Block Scope:

  • Introduced with ES6, variables declared with let or const within a block (e.g., inside {}) are only accessible within that block

example :

{
  let blockVar = "I'm block-scoped";
  console.log(blockVar); // Accessible here
}

console.log(blockVar); // Error: blockVar is not defined

Keywords | Reserved Words

Keywords are reserved words in JavaScript that cannot be used as variable or function names.

Variables

Variables store values in memory (RAM); each variable gets its own memory location, so we need a memory address (referenced through the variable name) to access the value.

Stores Anything:
A JavaScript variable can store a value of any type (dynamic typing).

Declaring Variables:

 * var
 * let
 * const

We can declare variables using the above keywords.

Initializing Variables:

Use the assignment operator to assign a value to a variable.

var text = "hello";

HAProxy EP 9: Load Balancing with Weighted Round Robin

11 September 2024 at 14:39

Load balancing helps distribute client requests across multiple servers to ensure high availability, performance, and reliability. Weighted Round Robin Load Balancing is an extension of the round-robin algorithm, where each server is assigned a weight based on its capacity or performance capabilities. This approach ensures that more powerful servers handle more traffic, resulting in a more efficient distribution of the load.

What is Weighted Round Robin Load Balancing?

Weighted Round Robin Load Balancing assigns a weight to each server. The weight determines how many requests each server should handle relative to the others. Servers with higher weights receive more requests compared to those with lower weights. This method is useful when backend servers have different processing capabilities or resources.

Step-by-Step Implementation with Docker

Step 1: Create Dockerfiles for Each Flask Application

We’ll use the same three Flask applications (app1.py, app2.py, and app3.py) as in previous examples.

  • Flask App 1 (app1.py):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 1!"

@app.route("/data")
def data():
    return "Data from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)

  • Flask App 2 (app2.py):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 2!"

@app.route("/data")
def data():
    return "Data from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)

  • Flask App 3 (app3.py):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 3!"

@app.route("/data")
def data():
    return "Data from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

Step 2: Create Dockerfiles for Each Flask Application

Create Dockerfiles for each of the Flask applications:

  • Dockerfile for Flask App 1 (Dockerfile.app1):

# Use the official Python image from Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the application file into the container
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

  • Dockerfile for Flask App 2 (Dockerfile.app2):

FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]

  • Dockerfile for Flask App 3 (Dockerfile.app3):

FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create the HAProxy Configuration File

Create an HAProxy configuration file (haproxy.cfg) to implement Weighted Round Robin Load Balancing


global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    server server1 app1:5001 weight 2 check
    server server2 app2:5002 weight 1 check
    server server3 app3:5003 weight 3 check

Explanation:

  • The balance roundrobin directive tells HAProxy to use the Round Robin load balancing algorithm.
  • The weight option for each server specifies the weight associated with each server:
    • server1 (App 1) has a weight of 2.
    • server2 (App 2) has a weight of 1.
    • server3 (App 3) has a weight of 3.
  • Requests will be distributed based on these weights: App 3 will receive the most requests, App 2 the least, and App 1 will be in between.

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy):


# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80

Step 5: Create a docker-compose.yml File

To manage all the containers together, create a docker-compose.yml file

version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3


Explanation:

  • The docker-compose.yml file defines the services (app1, app2, app3, and haproxy) and their respective configurations.
  • HAProxy depends on the three Flask applications to be up and running before it starts.

Step 6: Build and Run the Docker Containers

Run the following command to build and start all the containers


docker-compose up --build

This command builds Docker images for all three Flask apps and HAProxy, then starts them.

Step 7: Test the Load Balancer

Open your browser or use curl to make requests to the HAProxy server


curl http://localhost/
curl http://localhost/data
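
A couple of manual curl calls won't reveal the ratio, so here is a minimal Python sketch (assuming the compose stack above is running and reachable at http://localhost/) that counts which app answers each request:

from collections import Counter
from urllib.request import urlopen

counts = Counter()
for _ in range(60):
    # Each Flask app identifies itself in the response body.
    body = urlopen("http://localhost/").read().decode()
    counts[body] += 1

for response, count in counts.items():
    print(count, response)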

Observation:

  • With Weighted Round Robin Load Balancing, you should see that requests are distributed according to the weights specified in the HAProxy configuration.
  • For example, App 3 should receive three times more requests than App 2, and App 1 should receive twice as many as App 2.

Conclusion

By implementing Weighted Round Robin Load Balancing with HAProxy, you can distribute traffic more effectively according to the capacity or performance of each backend server. This approach helps optimize resource utilization and ensures a balanced load across servers.

HAProxy Ep 6: Load Balancing With Least Connection

11 September 2024 at 13:32

Load balancing is crucial for distributing incoming network traffic across multiple servers, ensuring optimal resource utilization and improving application performance. In this blog, we'll explore how to implement Least Connection load balancing using Flask as our backend application and HAProxy as our load balancer.

What is Least Connection Load Balancing?

Least Connection Load Balancing is a dynamic algorithm that distributes requests to the server with the fewest active connections at any given time. This method ensures that servers with lighter loads receive more requests, preventing any single server from becoming a bottleneck.

Step-by-Step Implementation with Docker

Step 1: Create Dockerfiles for Each Flask Application

We’ll create three separate Dockerfiles, one for each Flask app.

Flask App 1 (app1.py) – Introduced Slowness by adding sleep

from flask import Flask
import time

app = Flask(__name__)

@app.route("/")
def hello():
    time.sleep(5)
    return "Hello from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)


Flask App 2 (app2.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)


Flask App 3 (app3.py) – Introduced Slowness by adding sleep.

from flask import Flask
import time

app = Flask(__name__)

@app.route("/")
def hello():
    time.sleep(5)
    return "Hello from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

Each Flask app listens on a different port (5001, 5002, 5003).

Step 2: Dockerfiles for each flask application

Dockerfile for Flask App 1 (Dockerfile.app1)

# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

Dockerfile for Flask App 2 (Dockerfile.app2)

FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]

Dockerfile for Flask App 3 (Dockerfile.app3)

FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create a configuration for HAProxy

global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance leastconn
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check

Explanation:

  • frontend http_front: Defines the entry point for incoming traffic. It listens on port 80.
  • backend servers: Specifies the servers HAProxy will distribute traffic across: the three Flask apps (app1, app2, app3). The balance leastconn directive selects the Least Connection algorithm for load balancing.
  • server directives: Lists the backend servers with their IP addresses and ports. The check option allows HAProxy to monitor the health of each server.

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy)

# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80

Step 5: Create a Dockercompose file

To manage all the containers together, create a docker-compose.yml file

version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3

Explanation:

  • The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
  • Each Flask app is built from its respective Dockerfile and runs on its port.
  • HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.

Step 6: Build and Run the Docker Containers

Run the following commands to build and start all the containers:

# Build and run the containers
docker-compose up --build

This command will build Docker images for all three Flask apps and HAProxy and start them up in the background.

Once everything is up, HAProxy will route each incoming request to whichever backend currently has the fewest active connections.

Step 7: Test the Load Balancer

Open your browser or use a tool like curl to make requests to the HAProxy server:

curl http://localhost

With sequential curl requests you may still see responses from all three apps, but because App 1 and App 3 sleep for 5 seconds, concurrent traffic gets steered toward the faster App 2, which tends to hold the fewest active connections. That is the Least Connection strategy at work.
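
Because sequential requests always find zero active connections, concurrency is needed to really see leastconn in action. A minimal sketch, assuming the stack is reachable at http://localhost/:

from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def fetch(_):
    # Each Flask app identifies itself in the response body.
    return urlopen("http://localhost/").read().decode()

# Fire 30 requests with 10 concurrent workers; the slow apps (5 s sleep)
# accumulate connections, so HAProxy steers most new requests to App 2.
with ThreadPoolExecutor(max_workers=10) as pool:
    counts = Counter(pool.map(fetch, range(30)))

for response, count in counts.items():
    print(count, response)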

HAProxy EP 5: Load Balancing With Round Robin

11 September 2024 at 12:56

Load balancing is crucial for distributing incoming network traffic across multiple servers, ensuring optimal resource utilization and improving application performance. One of the simplest and most popular load balancing algorithms is Round Robin. In this blog, we’ll explore how to implement Round Robin load balancing using Flask as our backend application and HAProxy as our load balancer.

What is Round Robin Load Balancing?

Round Robin load balancing works by distributing incoming requests sequentially across a group of servers.

For example, the first request goes to Server A, the second to Server B, the third to Server C, and so on. Once all servers have received a request, the cycle repeats. This algorithm is simple and works well when all servers have similar capabilities.

Step-by-Step Implementation with Docker

Step 1: Create Dockerfiles for Each Flask Application

We’ll create three separate Dockerfiles, one for each Flask app.

Flask App 1 (app1.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)


Flask App 2 (app2.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)


Flask App 3 (app3.py)


from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

Each Flask app listens on a different port (5001, 5002, 5003).

Step 2: Dockerfiles for each flask application

Dockerfile for Flask App 1 (Dockerfile.app1)


# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

Dockerfile for Flask App 2 (Dockerfile.app2)


FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]

Dockerfile for Flask App 3 (Dockerfile.app3)


FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create a configuration for HAProxy

global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check

Explanation:

  • frontend http_front: Defines the entry point for incoming traffic. It listens on port 80.
  • backend servers: Specifies the servers HAProxy will distribute traffic across: the three Flask apps (app1, app2, app3). The balance roundrobin directive sets the Round Robin algorithm for load balancing.
  • server directives: Lists the backend servers with their IP addresses and ports. The check option allows HAProxy to monitor the health of each server.

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy)


# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80

Step 5: Create a Dockercompose file

To manage all the containers together, create a docker-compose.yml file


version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3

Explanation:

  • The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
  • Each Flask app is built from its respective Dockerfile and runs on its port.
  • HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.

Step 6: Build and Run the Docker Containers

Run the following commands to build and start all the containers:


# Build and run the containers
docker-compose up --build

This command will build Docker images for all three Flask apps and HAProxy and start them up in the background.

You should see the responses alternating between β€œHello from Flask App 1!”, β€œHello from Flask App 2!”, and β€œHello from Flask App 3!” as HAProxy uses the Round Robin algorithm to distribute requests.

Step 7: Test the Load Balancer

Open your browser or use a tool like curl to make requests to the HAProxy server:


curl http://localhost

HAProxy EP 3: Sarah's Adventure with L7 Load Balancing and HAProxy

10 September 2024 at 23:26

Meet Sarah, a backend developer at "BrightApps," a fast-growing startup specializing in custom web applications. Recently, BrightApps launched a new service called "FitGuru," a health and fitness platform that quickly gained traction. However, as the platform's user base started to grow, the team noticed performance issues: page loads were slow, and users began to complain.

Sarah knew that simply scaling up their backend servers might not solve the problem. What they needed was a smarter way to handle incoming traffic and distribute it across their servers. That’s when she decided to dive into the world of Layer 7 (L7) load balancing with HAProxy.

Understanding L7 Load Balancing

Layer 7 load balancing operates at the Application Layer of the OSI model. Unlike Layer 4 (L4) load balancing, which only considers information from the Transport Layer (like IP addresses and ports), an L7 load balancer examines the actual content of the HTTP requests. This deeper inspection allows it to make more intelligent decisions on how to route traffic.

Here’s why Sarah chose L7 load balancing for β€œFitGuru”:

  1. Content-Based Routing: Sarah could route requests to different servers based on the URL path, HTTP headers, cookies, or even specific parameters in the request. For example, requests for video content could be directed to a server optimized for handling media, while API requests could go to a server focused on data processing.
  2. SSL Termination: The L7 load balancer could handle the SSL encryption and decryption, relieving the backend servers from this CPU-intensive task.
  3. Advanced Health Checks: Sarah could set up health checks that simulate real user traffic to ensure backend servers are actually serving content correctly, not just responding to pings.
  4. Enhanced Security: With L7, she could filter out malicious traffic more effectively by inspecting request contents, blocking suspicious patterns, and protecting the app from common web attacks.

Step 1: Sarah’s Plan with HAProxy as an HTTP Proxy

Sarah decided to configure HAProxy as an HTTP proxy. This way, it would operate at Layer 7 and provide advanced traffic management capabilities. She had a few objectives:

  • Route traffic based on the URL path to different servers.
  • Offload SSL termination to HAProxy.
  • Serve static files from specific backend servers and dynamic content from others.

Sarah started with a simple Flask application to test her configuration:

Flask Application Setup

Sarah created two basic Flask apps:

  1. Static Content Server (static_app.py):

from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route('/static/<path:filename>')
def serve_static(filename):
    return send_from_directory('static', filename)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)

This app served static content from a folder named static.

  1. Dynamic Content Server (dynamic_app.py):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Welcome to FitGuru!"

@app.route('/api/data')
def api_data():
    return {"data": "Some dynamic data"}

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5002)

This app handled dynamic requests like API endpoints and the home page.

Step 2: Configuring HAProxy for HTTP Proxy

Sarah then moved on to configure HAProxy. She created an HAProxy configuration file (haproxy.cfg) to route traffic based on URL paths


global
    log stdout format raw local0
    maxconn 4096

defaults
    mode http
    log global
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80

    acl is_static path_beg /static
    use_backend static_backend if is_static
    default_backend dynamic_backend

backend static_backend
    balance roundrobin
    server static1 127.0.0.1:5001 check

backend dynamic_backend
    balance roundrobin
    server dynamic1 127.0.0.1:5002 check

Explanation of the Configuration

  1. Frontend Configuration (http_front):
    • The frontend listens on port 80 (HTTP).
    • An ACL (is_static) is defined to identify requests for static content based on the URL path prefix /static.
    • Requests that match the is_static ACL are routed to the static_backend. All other requests are routed to the dynamic_backend.
  2. Backend Configuration:
    • The static_backend handles static content requests and uses a round-robin strategy to distribute traffic between the servers (in this case, just static1).
    • The dynamic_backend handles all other requests in a similar manner.

Step 3: Deploying HAProxy with Docker

Sarah decided to use Docker to deploy HAProxy quickly:

Dockerfile for HAProxy:


FROM haproxy:2.4

COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

Build and Run:


docker build -t haproxy-http .
docker run -d -p 80:80 -p 443:443 haproxy-http


This command runs HAProxy in a Docker container listening on port 80 (port 443 is published as well, but this configuration only binds port 80). Note that inside the container the 127.0.0.1 backend addresses refer to the container itself, so for HAProxy to reach Flask apps running on the host, the container would need host networking (for example, docker run --network host) or backend addresses that point at the host.

Step 4: Testing the Setup

Now, it was time to test!

  1. Static Content Test:
    • Sarah visited http://localhost/static/logo.png (HAProxy listens on port 80, not on the Flask ports directly). The L7 load balancer identified the /static path and routed the request to static_backend.
  2. Dynamic Content Test:
    • Visiting http://localhost/ or http://localhost/api/data confirmed that requests were routed to the dynamic_backend as expected (a small scripted check is sketched below).
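
The same checks can be scripted. Below is a minimal sketch using the requests library, assuming HAProxy is reachable on localhost port 80 and its backends can reach the two Flask apps:

import requests

# /static/... should be answered by static_backend (the Flask app on port 5001).
static_resp = requests.get("http://localhost/static/logo.png")
print("static  ->", static_resp.status_code)

# Everything else should be answered by dynamic_backend (the Flask app on port 5002).
dynamic_resp = requests.get("http://localhost/api/data")
print("dynamic ->", dynamic_resp.status_code, dynamic_resp.json())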

The Result: A Smoother Experience for β€œFitGuru”

With L7 load balancing in place, β€œFitGuru” was now more responsive and could efficiently handle the incoming traffic surge:

  • Optimized Performance: Static content requests were efficiently served from servers dedicated to that purpose, while dynamic content was processed by more capable machines.
  • Enhanced Security: SSL termination could be offloaded to HAProxy by binding port 443 with a certificate, freeing the backend servers from CPU-intensive encryption tasks (the minimal configuration above binds only port 80).
  • Flexible Traffic Management: Sarah could now easily add or modify rules to adapt to changing traffic patterns or requirements.

By implementing Layer 7 load balancing with HAProxy, Sarah provided β€œFitGuru” with a robust and scalable solution that ensured a seamless user experience, even during peak times. Now, she could confidently tackle the growing demands of their expanding user base, knowing the platform was built to handle whatever traffic came its way.

Layer 7 load balancing was more than just a tool; it was a strategy that allowed Sarah to understand, control, and optimize traffic in a way that best suited their application’s unique needs. And with HAProxy, she had all the flexibility and power she needed to keep β€œFitGuru” running smoothly.

Tool: Serial Activity – Remote SSH Manager

20 August 2024 at 02:16

Why this tool was created ?

During our college days, we had a crash course on Machine Learning. Our coordinators arranged for an ML engineer to take classes for three days. He insisted that we install the required packages to get hands-on experience, but unfortunately many of us were not sure how to install them. So we needed a way to install all the necessary packages on every machine.

Our scenario was that all the machines had the same user account with the same password. So if we could automate the installation on one machine, it would be easy to repeat for the rest (just a for-loop iterating from x.0.0.1 to x.0.0.255). This is the birthplace of this tool.

Code:

#!/usr/bin/env python
import sys
import os.path
from multiprocessing.pool import ThreadPool

import paramiko

BASE_ADDRESS = "192.168.7."
USERNAME = "t1"
PASSWORD = "uni1"


def create_client(hostname):
    """Create a SSH connection to a given hostname."""
    ssh_client = paramiko.SSHClient()
    ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_client.connect(hostname=hostname, username=USERNAME, password=PASSWORD)
    ssh_client.invoke_shell()
    return ssh_client


def kill_computer(ssh_client):
    """Power off a computer."""
    ssh_client.exec_command("poweroff")


def install_python_modules(ssh_client):
    """Install the programs specified in requirements.txt"""
    ftp_client = ssh_client.open_sftp()

    # Move over get-pip.py
    local_getpip = os.path.expanduser("~/lab_freak/get-pip.py")
    remote_getpip = "/home/%s/Documents/get-pip.py" % USERNAME
    ftp_client.put(local_getpip, remote_getpip)

    # Move over requirements.txt
    local_requirements = os.path.expanduser("~/lab_freak/requirements.txt")
    remote_requirements = "/home/%s/Documents/requirements.txt" % USERNAME
    ftp_client.put(local_requirements, remote_requirements)

    ftp_client.close()

    # Install pip and the desired modules.
    ssh_client.exec_command("python %s --user" % remote_getpip)
    ssh_client.exec_command("python -m pip install --user -r %s" % remote_requirements)


def worker(action, hostname):
    try:
        ssh_client = create_client(hostname)

        if action == "kill":
            kill_computer(ssh_client)
        elif action == "install":
            install_python_modules(ssh_client)
        else:
            raise ValueError("Unknown action %r" % action)
    except BaseException as e:
        print("Running %r on %r failed with %r" % (action, hostname, e))


def main():
    if len(sys.argv) < 2:
        print("USAGE: python kill.py ACTION")
        sys.exit(1)

    hostnames = [str(BASE_ADDRESS) + str(i) for i in range(30, 60)]

    with ThreadPool() as pool:
        pool.map(lambda hostname: worker(sys.argv[1], hostname), hostnames)


if __name__ == "__main__":
    main()
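
To run it, save the script as kill.py (matching the usage string in main()) and pass the desired action: python kill.py install pushes get-pip.py and requirements.txt from ~/lab_freak/ to every machine and installs the packages, while python kill.py kill powers them all off.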


Docker EP – 10: Let’s Dockerize a Flask Application

18 August 2024 at 11:49

Let’s develop a simple Flask application,

  1. Set up the project directory: Create a new directory for your Flask project.

mkdir flask-docker-app
cd flask-docker-app

2. Create a virtual environment (optional but recommended):


python3 -m venv venv
source venv/bin/activate

3. Install Flask


pip install Flask

4. Create a simple Flask app:

In the flask-docker-app directory, create a file named app.py with the following content,


from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, Dockerized Flask!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

5. Test the Flask app: Run the Flask application to ensure it’s working.

python app.py

Visit http://127.0.0.1:5000/ in your browser. You should see β€œHello, Dockerized Flask!”.

Dockerize the Flask Application

  1. Create a Dockerfile: In the flask-docker-app directory, create a file named Dockerfile with the following content:

# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install Flask (this minimal example has no requirements.txt)
RUN pip install --no-cache-dir Flask

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Define environment variable
ENV FLASK_APP=app.py

# Run app.py when the container launches
CMD ["python", "app.py"]

2. Create a .dockerignore file:

In the flask-docker-app directory, create a file named .dockerignore to ignore unnecessary files during the Docker build process:


venv
__pycache__
*.pyc
*.pyo

3. Build the Docker image:

In the flask-docker-app directory, run the following command to build your Docker image:


docker build -t flask-docker-app .

4. Run the Docker container:

Run the Docker container using the image you just built,

docker run -p 5000:5000 flask-docker-app

5. Access the Flask app in Docker: Visit http://localhost:5000/ in your browser. You should see β€œHello, Dockerized Flask!” running in a Docker container.

You have successfully created a simple Flask application and Dockerized it. The Dockerfile allows you to package your app with its dependencies and run it in a consistent environment.
