HAProxy EP 9: Load Balancing with Weighted Round Robin

11 September 2024 at 14:39

Load balancing helps distribute client requests across multiple servers to ensure high availability, performance, and reliability. Weighted Round Robin Load Balancing is an extension of the round-robin algorithm, where each server is assigned a weight based on its capacity or performance capabilities. This approach ensures that more powerful servers handle more traffic, resulting in a more efficient distribution of the load.

What is Weighted Round Robin Load Balancing?

Weighted Round Robin Load Balancing assigns a weight to each server. The weight determines how many requests each server should handle relative to the others. Servers with higher weights receive more requests compared to those with lower weights. This method is useful when backend servers have different processing capabilities or resources.
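
To get a feel for how the weights translate into request counts, here is a minimal Python sketch of the idea (HAProxy's actual scheduler interleaves servers more smoothly, but the proportions are the same):

from itertools import cycle

# Hypothetical weights, matching the HAProxy configuration used later in this post
weights = {"app1": 2, "app2": 1, "app3": 3}

# Naive weighted round robin: repeat each server name according to its weight
schedule = cycle([server for server, w in weights.items() for _ in range(w)])

# Dispatch 12 requests and count how many each server receives
counts = {server: 0 for server in weights}
for _ in range(12):
    counts[next(schedule)] += 1

print(counts)  # {'app1': 4, 'app2': 2, 'app3': 6}, i.e. proportional to 2:1:3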

Step-by-Step Implementation with Docker

Step 1: Create the Flask Applications

We’ll use the same three Flask applications (app1.py, app2.py, and app3.py) as in previous examples.

  • Flask App 1 (app1.py):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 1!"

@app.route("/data")
def data():
    return "Data from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)

  • Flask App 2 (app2.py):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 2!"

@app.route("/data")
def data():
    return "Data from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)

  • Flask App 3 (app3.py):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 3!"

@app.route("/data")
def data():
    return "Data from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

Step 2: Create Dockerfiles for Each Flask Application

Create Dockerfiles for each of the Flask applications:

  • Dockerfile for Flask App 1 (Dockerfile.app1):

# Use the official Python image from Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the application file into the container
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

  • Dockerfile for Flask App 2 (Dockerfile.app2):

FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]

  • Dockerfile for Flask App 3 (Dockerfile.app3):

FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create the HAProxy Configuration File

Create an HAProxy configuration file (haproxy.cfg) to implement Weighted Round Robin Load Balancing


global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    server server1 app1:5001 weight 2 check
    server server2 app2:5002 weight 1 check
    server server3 app3:5003 weight 3 check

Explanation:

  • The balance roundrobin directive tells HAProxy to use the Round Robin algorithm; the per-server weights below turn it into Weighted Round Robin.
  • The weight option specifies the weight associated with each server:
    • server1 (App 1) has a weight of 2.
    • server2 (App 2) has a weight of 1.
    • server3 (App 3) has a weight of 3.
  • Requests will be distributed based on these weights: App 3 will receive the most requests, App 2 the least, and App 1 will be in between.

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy):


# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80

Step 5: Create a docker-compose.yml File

To manage all the containers together, create a docker-compose.yml file

version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3


Explanation:

  • The docker-compose.yml file defines the services (app1, app2, app3, and haproxy) and their respective configurations.
  • HAProxy depends on the three Flask applications to be up and running before it starts.

Step 6: Build and Run the Docker Containers

Run the following command to build and start all the containers


docker-compose up --build

This command builds Docker images for all three Flask apps and HAProxy, then starts them.

Step 7: Test the Load Balancer

Open your browser or use curl to make requests to the HAProxy server


curl http://localhost/
curl http://localhost/data

Observation:

  • With Weighted Round Robin Load Balancing, you should see that requests are distributed according to the weights specified in the HAProxy configuration.
  • For example, App 3 should receive three times as many requests as App 2, and App 1 twice as many as App 2; a quick scripted check of this is sketched below.
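
The sketch fires a batch of requests at HAProxy and tallies which app answered. It assumes the docker-compose stack is running, HAProxy is reachable on localhost port 80, and each app identifies itself in its response body, as the Flask apps above do:

from collections import Counter
from urllib.request import urlopen

counts = Counter()
for _ in range(60):
    # Each response body names the app that served it, e.g. "Hello from Flask App 3!"
    body = urlopen("http://localhost/").read().decode().strip()
    counts[body] += 1

print(counts)  # expect roughly 20 / 10 / 30 responses for Apps 1, 2 and 3 (weights 2:1:3)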

Conclusion

By implementing Weighted Round Robin Load Balancing with HAProxy, you can distribute traffic more effectively according to the capacity or performance of each backend server. This approach helps optimize resource utilization and ensures a balanced load across servers.

HAProxy Ep 6: Load Balancing With Least Connection

11 September 2024 at 13:32

Load balancing is crucial for distributing incoming network traffic across multiple servers, ensuring optimal resource utilization and improving application performance. In this blog, we’ll explore how to implement Least Connection load balancing using Flask as our backend application and HAProxy as our load balancer.

What is Least Connection Load Balancing?

Least Connection Load Balancing is a dynamic algorithm that distributes requests to the server with the fewest active connections at any given time. This method ensures that servers with lighter loads receive more requests, preventing any single server from becoming a bottleneck.
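
Conceptually, the balancer keeps a live count of open connections per server and hands each new request to the server with the smallest count. A minimal Python sketch of that decision (not HAProxy's actual implementation) looks like this:

# Hypothetical snapshot of active connection counts per backend
active = {"app1": 4, "app2": 1, "app3": 3}

def pick_least_conn(active_connections):
    # Choose the server currently holding the fewest open connections
    return min(active_connections, key=active_connections.get)

server = pick_least_conn(active)
active[server] += 1   # the new request opens one more connection on that server
print(server)         # -> app2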

Step-by-Step Implementation with Docker

Step 1: Create the Flask Applications

We’ll create three separate Flask applications; App 1 and App 3 are made artificially slow so the Least Connection behaviour is easy to observe.

Flask App 1 (app1.py) – slowness introduced by adding a 5-second sleep

from flask import Flask
import time

app = Flask(__name__)

@app.route("/")
def hello():
    time.sleep(5)
    return "Hello from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)


Flask App 2 (app2.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)


Flask App 3 (app3.py) – slowness introduced by adding a 5-second sleep.

from flask import Flask
import time

app = Flask(__name__)

@app.route("/")
def hello():
    time.sleep(5)
    return "Hello from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

Each Flask app listens on a different port (5001, 5002, 5003).

Step 2: Create Dockerfiles for Each Flask Application

Dockerfile for Flask App 1 (Dockerfile.app1)

# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

Dockerfile for Flask App 2 (Dockerfile.app2)

FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]

Dockerfile for Flask App 3 (Dockerfile.app3)

FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create a configuration for HAProxy

global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance leastconn
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check

Explanation:

  • frontend http_front: Defines the entry point for incoming traffic. It listens on port 80.
  • backend servers: Specifies the servers HAProxy will distribute traffic across: the three Flask apps (app1, app2, app3). The balance leastconn directive selects the Least Connection algorithm for load balancing.
  • server directives: Lists the backend servers with their hostnames and ports. The check option allows HAProxy to monitor the health of each server.

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy)

# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80

Step 5: Create a docker-compose.yml File

To manage all the containers together, create a docker-compose.yml file

version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3

Explanation:

  • The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
  • Each Flask app is built from its respective Dockerfile and runs on its port.
  • HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.

Step 6: Build and Run the Docker Containers

Run the following commands to build and start all the containers:

# Build and run the containers
docker-compose up --build

This command will build Docker images for all three Flask apps and HAProxy and start them.

Step 7: Test the Load Balancer

Open your browser or use a tool like curl to make requests to the HAProxy server:

curl http://localhost

With sequential requests you will see responses from all three apps ("Hello from Flask App 1!", "Hello from Flask App 2!", and "Hello from Flask App 3!"); under concurrent load, HAProxy's Least Connection strategy sends each new request to the server with the fewest active connections, so the faster App 2 ends up serving a larger share.
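
Because App 1 and App 3 sleep for 5 seconds, the difference really shows up under concurrent load. A rough sketch to observe it, assuming the stack is running and HAProxy is reachable on localhost port 80, is to fire parallel requests and count the answers:

from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def fetch(_):
    # Each response body names the app that served the request
    return urlopen("http://localhost/").read().decode().strip()

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch, range(30)))

print(Counter(results))  # App 2 (no sleep) should serve noticeably more requests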

HAProxy EP 5: Load Balancing With Round Robin

11 September 2024 at 12:56

Load balancing is crucial for distributing incoming network traffic across multiple servers, ensuring optimal resource utilization and improving application performance. One of the simplest and most popular load balancing algorithms is Round Robin. In this blog, we’ll explore how to implement Round Robin load balancing using Flask as our backend application and HAProxy as our load balancer.

What is Round Robin Load Balancing?

Round Robin load balancing works by distributing incoming requests sequentially across a group of servers.

For example, the first request goes to Server A, the second to Server B, the third to Server C, and so on. Once all servers have received a request, the cycle repeats. This algorithm is simple and works well when all servers have similar capabilities.
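
The rotation itself is easy to picture in a few lines of Python (a conceptual sketch, not HAProxy internals):

from itertools import cycle

servers = cycle(["Server A", "Server B", "Server C"])

for request_id in range(1, 7):
    # Each request simply goes to the next server in the ring
    print(f"request {request_id} -> {next(servers)}")
# request 1 -> Server A, request 2 -> Server B, request 3 -> Server C, then the cycle repeats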

Step-by-Step Implementation with Docker

Step 1: Create the Flask Applications

We’ll create three separate Flask applications, one per backend server.

Flask App 1 (app1.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)


Flask App 2 (app2.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)


Flask App 3 (app3.py)


from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

Each Flask app listens on a different port (5001, 5002, 5003).

Step 2: Create Dockerfiles for Each Flask Application

Dockerfile for Flask App 1 (Dockerfile.app1)


# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

Dockerfile for Flask App 2 (Dockerfile.app2)


FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]

Dockerfile for Flask App 3 (Dockerfile.app3)


FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create a configuration for HAProxy

global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check

Explanation:

  • frontend http_front: Defines the entry point for incoming traffic. It listens on port 80.
  • backend servers: Specifies the servers HAProxy will distribute traffic across: the three Flask apps (app1, app2, app3). The balance roundrobin directive sets the Round Robin algorithm for load balancing.
  • server directives: Lists the backend servers with their hostnames and ports. The check option allows HAProxy to monitor the health of each server.

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy)


# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80

Step 5: Create a docker-compose.yml File

To manage all the containers together, create a docker-compose.yml file


version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3

Explanation:

  • The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
  • Each Flask app is built from its respective Dockerfile and runs on its port.
  • HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.

Step 6: Build and Run the Docker Containers

Run the following commands to build and start all the containers:


# Build and run the containers
docker-compose up --build

This command will build Docker images for all three Flask apps and HAProxy and start them.

Once you start sending requests (Step 7), you should see the responses alternating between "Hello from Flask App 1!", "Hello from Flask App 2!", and "Hello from Flask App 3!" as HAProxy uses the Round Robin algorithm to distribute requests.

Step 7: Test the Load Balancer

Open your browser or use a tool like curl to make requests to the HAProxy server:


curl http://localhost

HAProxy EP 3: Sarah’s Adventure with L7 Load Balancing and HAProxy

10 September 2024 at 23:26

Meet Sarah, a backend developer at "BrightApps," a fast-growing startup specializing in custom web applications. Recently, BrightApps launched a new service called "FitGuru," a health and fitness platform that quickly gained traction. However, as the platform’s user base started to grow, the team noticed performance issues: page loads were slow, and users began to complain.

Sarah knew that simply scaling up their backend servers might not solve the problem. What they needed was a smarter way to handle incoming traffic and distribute it across their servers. That’s when she decided to dive into the world of Layer 7 (L7) load balancing with HAProxy.

Understanding L7 Load Balancing

Layer 7 load balancing operates at the Application Layer of the OSI model. Unlike Layer 4 (L4) load balancing, which only considers information from the Transport Layer (like IP addresses and ports), an L7 load balancer examines the actual content of the HTTP requests. This deeper inspection allows it to make more intelligent decisions on how to route traffic.

Here’s why Sarah chose L7 load balancing for "FitGuru":

  1. Content-Based Routing: Sarah could route requests to different servers based on the URL path, HTTP headers, cookies, or even specific parameters in the request. For example, requests for video content could be directed to a server optimized for handling media, while API requests could go to a server focused on data processing.
  2. SSL Termination: The L7 load balancer could handle the SSL encryption and decryption, relieving the backend servers from this CPU-intensive task.
  3. Advanced Health Checks: Sarah could set up health checks that simulate real user traffic to ensure backend servers are actually serving content correctly, not just responding to pings.
  4. Enhanced Security: With L7, she could filter out malicious traffic more effectively by inspecting request contents, blocking suspicious patterns, and protecting the app from common web attacks.

Step 1: Sarah’s Plan with HAProxy as an HTTP Proxy

Sarah decided to configure HAProxy as an HTTP proxy. This way, it would operate at Layer 7 and provide advanced traffic management capabilities. She had a few objectives:

  • Route traffic based on the URL path to different servers.
  • Offload SSL termination to HAProxy.
  • Serve static files from specific backend servers and dynamic content from others.

Sarah started with a simple Flask application to test her configuration:

Flask Application Setup

Sarah created two basic Flask apps:

  1. Static Content Server (static_app.py):

from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route('/static/<path:filename>')
def serve_static(filename):
    return send_from_directory('static', filename)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)

This app served static content from a folder named static.

  2. Dynamic Content Server (dynamic_app.py):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Welcome to FitGuru!"

@app.route('/api/data')
def api_data():
    return {"data": "Some dynamic data"}

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5002)

This app handled dynamic requests like API endpoints and the home page.

Step 2: Configuring HAProxy for HTTP Proxy

Sarah then moved on to configure HAProxy. She created an HAProxy configuration file (haproxy.cfg) to route traffic based on URL paths


global
    log stdout format raw local0
    maxconn 4096

defaults
    mode http
    log global
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80

    acl is_static path_beg /static
    use_backend static_backend if is_static
    default_backend dynamic_backend

backend static_backend
    balance roundrobin
    server static1 127.0.0.1:5001 check

backend dynamic_backend
    balance roundrobin
    server dynamic1 127.0.0.1:5002 check

Explanation of the Configuration

  1. Frontend Configuration (http_front):
    • The frontend listens on port 80 (HTTP).
    • An ACL (is_static) is defined to identify requests for static content based on the URL path prefix /static.
    • Requests that match the is_static ACL are routed to the static_backend. All other requests are routed to the dynamic_backend.
  2. Backend Configuration:
    • The static_backend handles static content requests and uses a round-robin strategy to distribute traffic between the servers (in this case, just static1).
    • The dynamic_backend handles all other requests in a similar manner.

Step 3: Deploying HAProxy with Docker

Sarah decided to use Docker to deploy HAProxy quickly:

Dockerfile for HAProxy:


FROM haproxy:2.4

COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

Build and Run:


docker build -t haproxy-http .
docker run -d -p 80:80 -p 443:443 haproxy-http


This command runs HAProxy in a Docker container, listening on port 80.

Step 4: Testing the Setup

Now, it was time to test!

  1. Static Content Test:
    • Sarah visited http://localhost/static/logo.png. The L7 load balancer identified the /static path and routed the request to static_backend.
  2. Dynamic Content Test:
    • Visiting http://localhost or http://localhost/api/data confirmed that requests were routed to the dynamic_backend as expected; a small scripted version of these checks is sketched below.
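
The sketch below assumes HAProxy is listening on localhost port 80 and that the static app has a file such as static/logo.png to serve:

from urllib.request import urlopen
from urllib.error import HTTPError

# Dynamic route: should be answered by dynamic_app.py
print(urlopen("http://localhost/api/data").read().decode())  # {"data":"Some dynamic data"}

# Static route: should be answered by static_app.py
try:
    resp = urlopen("http://localhost/static/logo.png")
    print("static file served:", len(resp.read()), "bytes")
except HTTPError as err:
    # Even a 404 here proves the request reached the static backend
    print("static backend answered with status", err.code)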

The Result: A Smoother Experience for "FitGuru"

With L7 load balancing in place, "FitGuru" was now more responsive and could efficiently handle the incoming traffic surge:

  • Optimized Performance: Static content requests were efficiently served from servers dedicated to that purpose, while dynamic content was processed by more capable machines.
  • Enhanced Security: SSL termination was handled by HAProxy, and the backend servers were freed from CPU-intensive encryption tasks.
  • Flexible Traffic Management: Sarah could now easily add or modify rules to adapt to changing traffic patterns or requirements.

By implementing Layer 7 load balancing with HAProxy, Sarah provided "FitGuru" with a robust and scalable solution that ensured a seamless user experience, even during peak times. Now, she could confidently tackle the growing demands of their expanding user base, knowing the platform was built to handle whatever traffic came its way.

Layer 7 load balancing was more than just a tool; it was a strategy that allowed Sarah to understand, control, and optimize traffic in a way that best suited their application’s unique needs. And with HAProxy, she had all the flexibility and power she needed to keep "FitGuru" running smoothly.

Tool: Serial Activity – Remote SSH Manager

20 August 2024 at 02:16

Why this tool was created ?

During our college days, we had a crash course on Machine Learning. Our coordinators arranged an ML Engineer to take classes for 3 days. He insisted that we install the packages to get hands-on experience, but unfortunately many of our people were not sure how to install them. So we needed to find a solution to install all the necessary packages on all the machines.

We had a scenario where all the machines had the same user account with the same password. So we figured that if we could automate the setup on one machine, it would be easy to repeat for the rest of the machines (just a for-loop iterating from x.0.0.1 to x.0.0.255). This is the birthplace of this tool.

Code

#!/usr/bin/env python
import sys
import os.path
from multiprocessing.pool import ThreadPool

import paramiko

BASE_ADDRESS = "192.168.7."
USERNAME = "t1"
PASSWORD = "uni1"


def create_client(hostname):
    """Create a SSH connection to a given hostname."""
    ssh_client = paramiko.SSHClient()
    ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_client.connect(hostname=hostname, username=USERNAME, password=PASSWORD)
    ssh_client.invoke_shell()
    return ssh_client


def kill_computer(ssh_client):
    """Power off a computer."""
    ssh_client.exec_command("poweroff")


def install_python_modules(ssh_client):
    """Install the programs specified in requirements.txt"""
    ftp_client = ssh_client.open_sftp()

    # Move over get-pip.py
    local_getpip = os.path.expanduser("~/lab_freak/get-pip.py")
    remote_getpip = "/home/%s/Documents/get-pip.py" % USERNAME
    ftp_client.put(local_getpip, remote_getpip)

    # Move over requirements.txt
    local_requirements = os.path.expanduser("~/lab_freak/requirements.txt")
    remote_requirements = "/home/%s/Documents/requirements.txt" % USERNAME
    ftp_client.put(local_requirements, remote_requirements)

    ftp_client.close()

    # Install pip and the desired modules.
    ssh_client.exec_command("python %s --user" % remote_getpip)
    ssh_client.exec_command("python -m pip install --user -r %s" % remote_requirements)


def worker(action, hostname):
    try:
        ssh_client = create_client(hostname)

        if action == "kill":
            kill_computer(ssh_client)
        elif action == "install":
            install_python_modules(ssh_client)
        else:
            raise ValueError("Unknown action %r" % action)
    except BaseException as e:
        print("Running %r on %r failed with %r" % (action, hostname, e))


def main():
    if len(sys.argv) < 2:
        print("USAGE: python kill.py ACTION")
        sys.exit(1)

    hostnames = [str(BASE_ADDRESS) + str(i) for i in range(30, 60)]

    with ThreadPool() as pool:
        pool.map(lambda hostname: worker(sys.argv[1], hostname), hostnames)


if __name__ == "__main__":
    main()


Docker EP – 10: Let’s Dockerize a Flask Application

18 August 2024 at 11:49

Let’s develop a simple flask application,

  1. Set up the project directory: Create a new directory for your Flask project.

mkdir flask-docker-app
cd flask-docker-app

2. Create a virtual environment (optional but recommended):


python3 -m venv venv
source venv/bin/activate

3. Install Flask


pip install Flask

4. Create a simple Flask app:

In the flask-docker-app directory, create a file named app.py with the following content,


from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, Dockerized Flask!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

5. Test the Flask app: Run the Flask application to ensure it’s working.

python app.py

Visit http://127.0.0.1:5000/ in your browser. You should see "Hello, Dockerized Flask!".

Dockerize the Flask Application

  1. Create a Dockerfile: In the flask-docker-app directory, create a file named Dockerfile with the following content:

# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir Flask

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Define environment variable
ENV FLASK_APP=app.py

# Run app.py when the container launches
CMD ["python", "app.py"]

2. Create a .dockerignore file:

In the flask-docker-app directory, create a file named .dockerignore to ignore unnecessary files during the Docker build process:


venv
__pycache__
*.pyc
*.pyo

3. Build the Docker image:

In the flask-docker-app directory, run the following command to build your Docker image:


docker build -t flask-docker-app .

4. Run the Docker container:

Run the Docker container using the image you just built,

docker run -p 5000:5000 flask-docker-app

5. Access the Flask app in Docker: Visit http://localhost:5000/ in your browser. You should see "Hello, Dockerized Flask!" running in a Docker container.

You have successfully created a simple Flask application and Dockerized it. The Dockerfile allows you to package your app with its dependencies and run it in a consistent environment.

Selection Sort

18 August 2024 at 05:59

The selection sort algorithm sorts an array by iteratively finding the minimum (for ascending order) / maximum (for descending order) element from the unsorted part and putting it at the beginning of the array.

The algorithm will be considering two subarrays in a given array.

  1. The subarray which is already sorted.
  2. Remaining subarray which is unsorted.

In every iteration of selection sort, the minimum element (considering ascending order)/ the maximum element (considering descending order) from the unsorted subarray is picked and moved to the sorted subarray. The movement is the process of swapping the minimum element with the current element.

How selection sort works ?

We will see the implementation of sorting in ascending order.

Consider the array : 29,10,14,37,14

First Iteration

Initially the value of min_index is 0. The element at min_index is 29.

Then we iterate over the rest of the array to check whether any element is smaller than the element at min_index.

first pass - selection sort

By iterating, we find that 10 is the minimum element present in the array. Thus, swap 29 with 10. After the first iteration, 10, which happens to be the least value in the array, appears in the first position of the sorted list.

after first pass - selection sort

Second Iteration

Now the value of min_index is incremented by 1, so min_index = 1. Then we iterate over the rest of the array to check whether any element is smaller than the element at min_index.

second pass

By iterating, we find that 14 is the minimum element in the remaining array. Thus, swap 29 with 14.

after second pass - selection sort

Third Iteration

We can repeat the same process till the entire array is sorted.

Now the value of min_index is incremented by 1, so min_index = 2. Then we iterate over the rest of the array to check whether any element is smaller than the element at min_index.

third pass

By iterating, we find that 14 is the minimum element in the remaining array. Thus, swap 29 with 14.

after third pass

Fourth Iteration

Now the value of min_index is incremented by 1, so min_index = 3. Then we iterate over the rest of the array to check whether any element is smaller than the element at min_index.

fourth pass

By iterating, we find that 29 is the minimum element in the remaining array. Thus, swap 29 with 37.

after fourth pass

It's done.

After the 4th iteration, the entire array is sorted.

selection sort completed

So we can define the approach like,

  • Initialize the minimum index (min_index) to location 0
  • Traverse the array to find the minimum element in the unsorted part
  • While traversing, if an element smaller than the element at min_index is found, update min_index
  • After the traversal, swap the minimum element with the first element of the unsorted part, then move the boundary one position to the right
  • Repeat until the array is sorted

Code


def selection_sort(array, size):
    # One pass per position: place the smallest remaining element at index itr
    for itr in range(size):
        min_index = itr
        # Find the index of the minimum element in the unsorted part
        for ctr in range(itr + 1, size):
            if array[ctr] < array[min_index]:
                min_index = ctr
        # Swap it into its final position
        array[itr], array[min_index] = array[min_index], array[itr]

arr = [29, 10, 14, 37, 14]
size = len(arr)
selection_sort(arr, size)

print(arr)

Complexity analysis

Time Complexity

The sort complexity is used to express the number of execution times it takes to sort the list. The implementation has two loops. The outer loop which picks the values one by one from the list is executed n times where n is the total number of values in the list. The inner loop, which compares the value from the outer loop with the rest of the values, is also executed n times where n is the total number of elements in the list.

Therefore, the number of executions is (n * n), which can also be expressed as O(n²).
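
More precisely, the inner loop does one less comparison on each pass, so for n elements the total number of comparisons is

(n - 1) + (n - 2) + ... + 1 = n(n - 1) / 2

which still grows on the order of n², hence O(n²).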

  1. Worst case – this is where the list provided is in descending order. The algorithm performs the maximum number of executions, which is expressed as [Big-O] O(n²)
  2. Best case – this occurs when the provided list is already sorted. The algorithm performs the minimum number of executions, which is expressed as [Big-Omega] Ω(n²)
  3. Average case – this occurs when the list is in random order. The average complexity is expressed as [Big-theta] Θ(n²)

Time Complexity - selection sort

Space Complexity

Since selection sort is an in-place sorting algorithm, it has a space complexity of O(1), as it requires only a single temporary variable used for swapping values.

Is Selection sort algorithm stable ?

The default implementation is not stable. However, it can be made stable by using a stable variant of selection sort, which shifts elements instead of swapping them.

Is Selection Sort Algorithm in-place?

Yes, it does not require extra space.

When to use selection sort ?

  1. Selection sort can be good at checking if everything is already sorted πŸ˜‚.
  2. It is also good to use when memory space is limited. This is because unlike other sorting algorithms, selection sort doesn’t go around swapping things until the very end, resulting in less temporary storage space used.
  3. When a simple sorting implementation is desired
  4. When the array to be sorted is relatively small

When to avoid selection sort ?

  1. The array to be sorted has a large number of elements
  2. The array is nearly sorted
  3. You want a faster run time and memory is not a concern.

Summary

  1. Selection sort is an in-place comparison algorithm that is used to sort a random list into an ordered list. It has a time complexity of O(n²)
  2. The list is divided into two sections, sorted and unsorted. The minimum value is picked from the unsorted section and placed into the sorted section.
  3. This thing is repeated until all items have been sorted.
  4. The time complexity measures the number of steps required to sort the list.
  5. The worst-case time complexity happens when the list is in descending order. It has a time complexity of [Big-O] O(n²)
  6. The best-case time complexity happens when the list is already in ascending order. It has a time complexity of [Big-Omega] Ω(n²)
  7. The average-case time complexity happens when the list is in random order. It has a time complexity of [Big-theta] Θ(n²)
  8. The selection sort is best used when you have a small list of items to sort, the cost of swapping values does not matter, and checking of all the values is mandatory.
  9. The selection sort does not perform well on huge lists

Bonus Section

For the bigger array, how selection sort will work with insertion sort ?

The size of the array involved is rarely of much consequence. The real question is the speed of comparison vs. copying. The time a selection sort will win is when a comparison is a lot faster than copying. Just for example, let’s assume two fields: a single int as a key, and another megabyte of data attached to it. In such a case, comparisons involve only that single int, so it’s really fast, but copying involves the entire megabyte, so it’s almost certainly quite a bit slower.

Since the selection sort does a lot of comparisons, but relatively few copies, this sort of situation will favor it.
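
One way to see this trade-off in numbers is to count the two kinds of operations separately. The sketch below instruments the selection sort shown earlier and reports that it performs on the order of n²/2 comparisons but at most n - 1 swaps:

def selection_sort_counted(array):
    comparisons = swaps = 0
    n = len(array)
    for itr in range(n):
        min_index = itr
        for ctr in range(itr + 1, n):
            comparisons += 1
            if array[ctr] < array[min_index]:
                min_index = ctr
        if min_index != itr:
            array[itr], array[min_index] = array[min_index], array[itr]
            swaps += 1
    return comparisons, swaps

print(selection_sort_counted([29, 10, 14, 37, 14]))  # (10, 4): 10 comparisons, only 4 swaps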

Binary Insertion Sort

17 August 2024 at 17:09

Introduction

Binary insertion sort is a sorting algorithm similar to insertion sort, but instead of using linear search to find the position where the element should be inserted, we use binary search. Thus, we reduce the number of comparisons for inserting one element from O(N) (Time complexity in Insertion Sort) to O(log N).

Best of two worlds

Binary insertion sort is a combination of insertion sort and binary search.

Insertion sort is a sorting technique that works by finding the correct position of an element in the array and then inserting it into that position. Binary search is a searching technique that works by repeatedly examining the middle of the array to locate an element.

As the complexity of binary search is of logarithmic order, the searching part of the algorithm also drops to logarithmic order. The implementation below is a simple insertion sort program, but instead of the standard linear search, binary search is used.

How Binary Insertion Sort works ?

Process flow:

In binary insertion sort, we divide the array into two subarrays: sorted and unsorted. The first element of the array is in the sorted subarray, and the rest of the elements are in the unsorted one.

We then iterate from the second element to the last element. For the i-th iteration, we make the current element our "key." This key is the element that we have to add to our existing sorted subarray.

Example

Consider the array 29, 10, 14, 37, 14,

First Pass

Key = 1

Since we consider the first element is in the sorted array, we will be starting from the second element. Then we apply the binary search on the sorted array.

In this scenario, we can see that the middle element in sorted array (29) is greater than the key element 10 (key = 1).

Now we need to place the key element before the middle element, i.e., at position 0. Then we shift the remaining elements by 1 position.

Increment the value of key.

Second Pass

Key = 2

Now the key element is 14. We will apply binary search in the sorted array to find the position of the key element.

In this scenario, by applying binary search, we can see key element to be placed at index 1 (between 10 and 29). Then we can shift the remaining elements by 1 position.

Third Pass

Key = 3

Now the key element is 37. We will apply binary search in the sorted array to find the position of the key element.

In this scenario, by applying binary search, we can see key element is placed in its correct position.

Fourth Pass

Key = 4

Now the key element is 14. We will apply binary search in the sorted array to find the position of the key element.

In this scenario, by applying binary search, we can see key element to be placed at index 2 (between 14 and 29). Then we can shift the remaining elements by 1 position.

Now we can see all the elements are sorted.


def binary_search(arr, key, start, end):
    # Return the index where `key` should be inserted within arr[start..end]
    if start == end:
        if arr[start] > key:
            return start
        else:
            return start + 1

    if start > end:
        return start

    mid = (start + end) // 2
    if arr[mid] < key:
        return binary_search(arr, key, mid + 1, end)
    elif arr[mid] > key:
        return binary_search(arr, key, start, mid - 1)
    else:
        return mid

def insertion_sort(arr):
    total_num = len(arr)
    for i in range(1, total_num):
        key = arr[i]
        # Locate the insertion point within the sorted prefix arr[0..i-1]
        j = binary_search(arr, key, 0, i - 1)
        # Rebuild the list with the key inserted at position j
        arr = arr[:j] + [key] + arr[j:i] + arr[i+1:]
    return arr


sorted_array = insertion_sort([29, 10, 14, 37, 14])
print("Sorted Array : ", sorted_array)

Complexity Analysis

Worst Case

For inserting the i-th element into its correct position in the sorted subarray, finding the position (pos) will take O(log i) steps. However, to insert the element, we need to shift all the elements from pos to i-1. This will take i steps in the worst case (when we have to insert at the starting position).

We make a total of N insertions, so the worst-case time complexity of binary insertion sort is O(N^2).

This occurs when the array is initially sorted in descending order.

Best Case

The best case will be when the element is already in its sorted position. In this case, we don’t have to shift any of the elements; we can insert the element in O(1).

But we are using binary search to find the position where we need to insert. If the element is already in its sorted position, binary search will take (log i) steps. Thus, for the i-th element, we make (log i) operations, so its best-case time complexity is O(N log N).

This occurs when the array is initially sorted in ascending order.

Average Case

For average-case time complexity, we assume that the elements of the array are jumbled. Thus, on average, we will need O(i /2) steps for inserting the i-th element, so the average time complexity of binary insertion sort is O(N^2).

Space Complexity Analysis

Binary insertion sort is an in-place sorting algorithm. This means that it only requires a constant amount of additional space. We sort the given array by shifting and inserting the elements.

Therefore, the space complexity of this algorithm is O(1) if we use iterative binary search. It will be O(logN) if we use recursive binary search because of O(log N) recursive calls.

Is Binary Insertion Sort a stable algorithm?

It is a stable sorting algorithm: elements with the same value appear in the final array in the same order as they were in the initial array.

Cons and Pros

  1. Binary insertion sort works efficiently for smaller arrays.
  2. This algorithm also works well for almost-sorted arrays, where the elements are near their position in the sorted array.
  3. However, when the size of the array is large, the binary insertion sort doesn’t perform well. We can use other sorting algorithms like merge sort or quicksort in such cases.
  4. Making fewer comparisons is also one of the strengths of this sorting algorithm; therefore, it is efficient to use it when the cost of comparison is high.
  5. It's efficient when the cost of comparison between keys is sufficiently high. For example, if we want to sort an array of strings, comparing two strings will be relatively expensive.

Bonus Section

Binary Insertion Sort has a quadratic time complexity just as Insertion Sort. Still, it is usually faster than Insertion Sort in practice, which is apparent when comparison takes significantly more time than swapping two elements.

Maya : 5mb Storage on browser

17 August 2024 at 16:45

Maya, a developer working on a new website, was frustrated with slow load times caused by using cookies to store user data. One night, she discovered the Web Storage API, specifically Local Storage, which allowed her to store data directly in the browser without needing constant communication with the server.

She starts to explore,

The localStorage property of the window (browser window object) interface allows you to access a storage object for the Document's origin; the stored data is saved across browser sessions.

  1. Data is kept in local storage for a long time, with no expiration date; it could stay for a day, a week, or even a year, as per the developer's preference (data in local storage is maintained even if the browser is closed).
  2. Local storage only stores strings. So, if you intend to store objects, lists or arrays, you must convert them into a string using JSON.stringify().
  3. Local storage is available via the window.localStorage property.
  4. What's interesting is that the data survives a page refresh (for sessionStorage) and even a full browser restart (for localStorage).

Excited with characteristics, she decides to learn more on the localStorage methods,

setItem(key, value)

Functionality:

  • Add key and value to localStorage.
  • setItem returns undefined.
  • Every key, value pair stored is converted to a string. So it’s better to convert an object to string using JSON.stringify()

Examples:

  • For simple key, value strings:

// setItem with plain strings
window.localStorage.setItem("name", "maya");

  • In order to store objects, we need to convert them to JSON strings using JSON.stringify(); storing an object directly, as below, only saves its string form ("[object Object]"):

// Storing an object without JSON.stringify
const data = {
  "commodity":"apple",
  "price":43
};
window.localStorage.setItem('commodity', data);
var result = window.localStorage.getItem('commodity');
console.log("Retrieved data without JSON.stringify, " + result);

getItem(key)

Functionality: This is how you get items from localStorage. If the given key is not present then it will return null.

Examples
Get a simple key value pair

// setItem normal strings
window.localStorage.setItem("name", "goku");

// getItem 
const name = window.localStorage.getItem("name");
console.log("name from localstorage, "+name);

removeItem(key)

Functionality: Remove an item by key from localStorage. If the given key is not present then it won't do anything.

Examples:

  • After removing, the value will be null.
// remove item 
window.localStorage.removeItem("commodity");
var result = window.localStorage.getItem('commodity');
console.log("Data after removing the key "+ result);

clear()

Functionality: Clears the entire localStorage
Examples:

  • Simple clear of localStorage
// clear
window.localStorage.clear();
console.log("length of local storage - after clear " + window.localStorage.length);

length

Functionality: Unlike the methods above, length is a property; it returns the number of items stored.
Examples:

//length
console.log("length of local storage " + window.localStorage.length);

When to use Local Storage

  1. Data stored in Local Storage can be easily accessed by third party individuals.
  2. So it's important to know that any sensitive data must not be stored in Local Storage.
  3. Local Storage can help in storing temporary data before it is pushed to the server.
  4. Always clear local storage once the operation is completed.

localStorage browser support

localStorage as a type of web storage is an HTML5 specification. It is supported by major browsers including IE8. To be sure the browser supports localStorage, you can check using the following snippet:

if (typeof(Storage) !== "undefined") {
    // Code for localStorage
    } else {
    // No web storage Support.
}

Maya is also curious to know where it gets stored ?

Where the local storage is saved ?

Windows

  • Firefox: C:\Users\\AppData\Roaming\Mozilla\Firefox\Profiles\\webappsstore.sqlite, %APPDATA%\Mozilla\Firefox\Profiles\\webappsstore.sqlite
  • Chrome: %LocalAppData%\Google\Chrome\User Data\Default\Local Storage\

Linux

  • Firefox: ~/.mozilla/firefox//webappsstore.sqlite
  • Chrome: ~/.config/google-chrome/Default/Local Storage/

Mac

  • Firefox: ~/Library/Application Support/Firefox/Profiles//webappsstore.sqlite, ~/Library/Mozilla/Firefox/Profiles//webappsstore.sqlite
  • Chrome: ~/Library/Application Support/Google/Chrome//Local Storage/, ~/Library/Application Support/Google/Chrome/Default/Local Storage/

Simple Hands-On

https://syedjafer.github.io/localStrorage/index.html

Limitations:

  1. Do not store sensitive user information in localStorage
  2. It is not a substitute for a server based database as information is only stored on the browser
  3. localStorage is limited to 5MB across all major browsers
  4. localStorage is quite insecure as it has no form of data protection and can be accessed by any code on your web page
  5. localStorage is synchronous, meaning each operation called would only execute one after the other

Now, Maya is able to resolve her quest of making her website faster, with the awareness of limitations !!!

Docker Ep 9: The Building Blocks – Detailed Structure of a Dockerfile

15 August 2024 at 11:51

Alex now knows the basics, but it’s time to get their hands dirty by writing an actual Dockerfile.

The FROM Instruction: Choosing the Foundation

The first thing Alex learns is the FROM instruction, which sets the base image for their container. It’s like choosing the foundation for a house.

  • Purpose:
    • The FROM instruction initializes a new build stage and sets the Base Image for subsequent instructions.
  • Choosing a Base Image:
    • Alex decides to use a Python base image for their application. They learn that python:3.9-slim is a lightweight version, saving space and reducing the size of the final image.

FROM python:3.9-slim

Example: Think of FROM as picking the type of bread for your sandwich. Do you want white, whole wheat, or maybe something gluten-free? Your choice sets the tone for the rest of the recipe.

The LABEL Instruction: Adding Metadata (Optional)

Next, Alex discovers the LABEL instruction. While optional, it’s a good practice to include metadata about the image.

  • Purpose:
    • The LABEL instruction adds metadata like version, description, or maintainer information to the image.
  • Example:
    • Alex decides to add a maintainer label:

LABEL maintainer="alex@example.com"

Story Note: This is like writing your name on a sandwich wrapper, so everyone knows who made it and what’s inside.

The RUN Instruction: Building the Layers

The RUN instruction is where Alex can execute commands inside the image, such as installing dependencies.

  • Purpose:
    • The RUN instruction runs any commands in a new layer on top of the current image and commits the results.
  • Example:
    • To install the Flask framework, Alex writes:

RUN pip install flask

They also learn to combine commands to reduce layers:


RUN apt-get update && apt-get install -y curl

Story Note: Imagine slicing tomatoes and cheese for your sandwich and placing them carefully on top. Each ingredient (command) adds a layer of flavor.

The COPY and ADD Instructions: Bringing in Ingredients

Now, Alex needs to bring their application code into the container, which is where the COPY and ADD instructions come into play.

  • COPY:
    • The COPY instruction copies files or directories from the host filesystem into the container’s filesystem.
  • ADD:
    • The ADD instruction is similar to COPY but with additional features, like extracting compressed files.
  • Example:
    • Alex copies their application code into the container:

COPY . /app

Story Note: This is like moving ingredients from your fridge (host) to the counter (container) where you’re preparing the sandwich.

The WORKDIR Instruction: Setting the Workspace

Alex learns that setting a working directory makes it easier to manage paths within the container.

  • Purpose:
    • The WORKDIR instruction sets the working directory for subsequent instructions.
  • Example:
    • Alex sets the working directory to /app:

WORKDIR /app

Story Note: This is like setting up a designated area on your counter where you’ll assemble your sandwich, keeping everything organized.

The CMD and ENTRYPOINT Instructions: The Final Touch

Finally, Alex learns how to define the default command that will run when the container starts.

  • CMD:
    • Provides defaults for an executing container, but can be overridden.
  • ENTRYPOINT:
    • Configures a container that will run as an executable, making it difficult to override.
  • Example:
    • Alex uses CMD to specify the command to start their Flask app:

CMD ["python", "app.py"]

Story Note: Think of CMD as the final step in making your sandwich, like deciding to add a toothpick to hold everything together before serving.

Below is an example Dockerfile of a flask application,


# Use an official Python runtime as a parent image
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]

Breakdown of the Dockerfile:

  1. FROM python:3.9-slim:
    • This line specifies the base image. In this case, it uses a slim version of Python 3.9, which is lightweight and sufficient for a simple Flask application.
  2. WORKDIR /app:
    • This sets the working directory inside the container to /app. All subsequent commands will be run inside this directory.
  3. COPY . /app:
    • This copies everything from your current directory on the host machine into the /app directory inside the container.
  4. RUN pip install --no-cache-dir -r requirements.txt:
    • This installs the necessary Python packages listed in the requirements.txt file. The --no-cache-dir option reduces the image size by not caching the downloaded packages.
  5. EXPOSE 80:
    • This makes port 80 available for external access. It’s where the Flask application will listen for incoming requests.
  6. ENV NAME World:
    • This sets an environment variable NAME to "World". You can access this variable in your Python code.
  7. CMD ["python", "app.py"]:
    • This tells the container to run the app.py file using Python when it starts.

Example Flask Application (app.py):

To complete the example, here’s a simple Flask application you can use:


from flask import Flask
import os

app = Flask(__name__)

@app.route('/')
def hello():
    name = os.getenv('NAME', 'World')
    return f'Hello, {name}!'

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)

Example requirements.txt:

And here’s the requirements.txt file listing the dependencies for the Flask app:


Flask==2.0.3

Building and Running the Docker Image:

  1. Build the Docker image using the Dockerfile:
docker build -t my-flask-app .

2. Run the Docker container:


docker run -p 4000:80 my-flask-app
  • This maps port 4000 on your host machine to port 80 in the container.

Open your browser and go to http://localhost:4000, and you should see "Hello, World!" displayed on the page.

You can customize the ENV NAME in the Dockerfile or by passing it as an argument when running the container:


docker run -p 4000:80 -e NAME=Alex my-flask-app

This will display "Hello, Alex!" instead.

Docker Ep 7: Diving Deeper into Docker with Detached Mode, Naming, and Inspections

15 August 2024 at 05:04

In our last adventure with Docker, we successfully ran our first container using the BusyBox image. It was a moment of triumph, seeing that "hello world" echo back at us from the depths of our new container.

Today, we delve even deeper into the Dockerverse, learning how to run containers in detached mode, name them, and inspect their inner workings.

The Background Wizardry: Detached Mode

Imagine you’re a wizard, conjuring a spell to summon a creature (a.k.a. a Docker container). But what if you wanted to summon this creature to do your bidding in the background while you continued your magical studies in the foreground? This is where the art of detached mode comes in.

To cast this spell, we use the -d flag with our docker run command, telling Docker to send the container off to the background.


docker run -d busybox:1.36 sleep 1000

With a flick of our wands (or rather, the Enter key), the spell is cast, and Docker returns a long string of magical symbols: a container ID. This ID is proof that our BusyBox creature is now running somewhere in the background, quietly counting sheep for 1000 seconds.

But how do we know our creature is truly running, and not just wandering off in the Docker forest? Fear not, young wizard; there is a way to check.

Casting the docker ps Spell

To see the creatures currently roaming the Dockerverse, we use the docker ps spell. This spell reveals a list of all the active containers, along with some essential details like their ID, image name, and what they’re currently up to.


docker ps

Behold! There it is: our BusyBox container, happily sleeping as commanded. The spell even shows us the short version of our container’s ID, which is the first few characters of the long ID Docker gave us earlier.

But wait, what if you wanted to see all the creatures that have ever existed in your Dockerverse, even the ones that have now perished? For that, we have another spell.

Summoning the Ghosts: docker ps -a

The docker ps -a spell lets you commune with the ghosts of containers past. These are containers that once roamed your Dockerverse but have since exited, leaving behind only memories and a few logs.


docker ps -a

With this spell, you’ll see a list of all the containers that have ever run on your system, whether they’re still alive or not.

The Vanishing Trick: Automatically Removing Containers

Sometimes, we summon creatures for a quick task, like retrieving a lost scroll, and we don’t want them to linger once the task is done. For this, we have a trick to make them vanish as soon as they’re no longer needed. We simply add the --rm flag to our summoning command.


docker run --rm busybox:1.36 sleep 1

In this case, our BusyBox creature wakes up, sleeps for 1 second, and then vanishes into the ether, leaving no trace behind. If you run the docker ps -a spell afterward, you’ll see that the container is gone, completely erased from existence.

Naming Your Creatures: The --name Flag

By default, Docker gives your creatures amusing names like β€œboring_rosaline” or β€œkickass_hopper.” But as a wizard of Docker, you have the power to name your creatures whatever you like. This is done using the --name flag when summoning them.

docker run --name hello_world busybox:1.36

Now, when you run the docker ps -a spell, you’ll see your BusyBox container proudly bearing the name β€œhello_world”. Naming your containers makes it easier to keep track of them, especially when you’re managing an entire menagerie of Docker creatures.

The All-Seeing Eye: docker inspect

Finally, we come to one of the most powerful spells in your Docker arsenal: docker inspect. This spell allows you to peer into the very soul of a container, revealing all sorts of low-level details that are hidden from the casual observer.

After summoning a new creature in detached mode, like so:


docker run -d busybox:1.36 sleep 1000

You can use the docker inspect spell to see everything about it:


docker inspect <container_id>

With this spell, you can uncover the container’s IP address, MAC address, image ID, log paths, and much more. It’s like having x-ray vision, allowing you to see every detail about your container.
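
If you only need a single detail, docker inspect also accepts a Go template via --format. As a small sketch, this prints just the container’s IP address on the default bridge network (replace <container_id> with your own):


docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container_id>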

And so, our journey continues, with new spells and powers at your disposal. You’ve learned how to run containers in the background, name them, and inspect their deepest secrets. As you grow in your mastery of Docker, these tools will become invaluable in managing your containers with the precision and skill of a true Docker wizard.

Docker Ep 3 : Virtual Machines VS Containers

12 August 2024 at 08:40
  1. Definition
    • VM: A virtual machine is software that lets you install other software inside it, so you control that software virtually rather than installing it directly on the computer.
    • Container: A container is software that lets the different functionalities of an application run independently.
  2. Operating system
    • VM: Applications running on a VM (via a hypervisor) can each use a different OS.
    • Container: Applications running in containers share a single OS.
  3. What is virtualized
    • VM: A VM virtualizes the computer system, i.e., the hardware.
    • Container: A container virtualizes the operating system, i.e., the software only.
  4. Size
    • VM: VM images are very large, generally gigabytes.
    • Container: Containers are very light, generally a few hundred megabytes, though this varies with use.
  5. Startup time
    • VM: VMs take longer to start, the exact time depending on the underlying hardware.
    • Container: Containers start far more quickly.
  6. Memory usage
    • VM: VMs use a lot of system memory.
    • Container: Containers require much less memory.
  7. Security
    • VM: VMs are more secure, as the underlying hardware isn’t shared between processes.
    • Container: Containers are less secure, as the virtualization is software-based and memory is shared.
  8. Use case
    • VM: VMs are useful when we need all of the OS resources to run various applications.
    • Container: Containers are useful when we want to maximize the number of running applications on minimal servers.
  9. Examples
    • VM: Examples of Type 1 hypervisors are KVM, Xen, and VMware; VirtualBox is a Type 2 hypervisor.
    • Container: Examples of container platforms are RancherOS, PhotonOS, and Docker containers.

Build a game where the user guesses a randomly generated number.

11 August 2024 at 07:14

Building a number-guessing game in Python is a great way to practice control flow, user input, and random number generation. Below are the steps to implement the game, a minimal sketch of the code, and some input ideas to extend it.

Game Steps

  1. Welcome Message: Display a welcome message and explain the game rules to the user.
  2. Random Number Generation: Generate a random number within a specified range (e.g., 1 to 100).
  3. User Input: Prompt the user to guess the number.
  4. Feedback: Provide feedback on whether the guess is too high, too low, or correct.
  5. Repeat: Allow the user to guess again until they find the correct number.
  6. End Game: Congratulate the user and display the number of attempts taken.
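
A minimal sketch of these steps in Python (assuming a fixed range of 1 to 100 and unlimited attempts):


import random

def guessing_game():
    # Step 1: welcome message
    print("Welcome! I'm thinking of a number between 1 and 100.")
    secret = random.randint(1, 100)  # Step 2: random number generation
    attempts = 0

    while True:  # Step 5: repeat until correct
        try:
            guess = int(input("Your guess: "))  # Step 3: user input
        except ValueError:
            print("Please enter a whole number.")
            continue

        attempts += 1
        if guess < secret:  # Step 4: feedback
            print("Too low, try again.")
        elif guess > secret:
            print("Too high, try again.")
        else:  # Step 6: end game
            print(f"Correct! You got it in {attempts} attempts.")
            break

if __name__ == "__main__":
    guessing_game()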

Input Ideas

Here are some specific inputs you can use to enhance the game:

  1. Range of Numbers: Allow the user to choose the range (e.g., 1 to 50, 1 to 100, or 1 to 1000) to adjust the difficulty.
  2. Maximum Attempts: Set a limit on the number of attempts (e.g., 5 or 10 attempts) to increase the challenge.
  3. Hints: Provide hints after a certain number of wrong guesses, such as whether the number is even or odd.
  4. Difficulty Levels: Offer different difficulty levels (easy, medium, hard) that affect the range of numbers and the number of attempts allowed.
  5. Play Again Option: After the game ends, ask if the user wants to play again.
  6. Input Validation: Ensure the user inputs a valid number within the specified range.

Additional Features

  • Scoring System: Introduce a scoring system based on the number of attempts.
  • Leaderboard: Track and display the top scores or fastest guesses.
  • Graphical Interface: Use a library like Tkinter to create a GUI version of the game.

ACCESS YOUR LOCAL APPLICATION VIA THE INTERNET

5 August 2024 at 16:32

In this blog post, we’ll explore how to set up a simple web server using Caddy to serve an HTML page over HTTPS.

Additionally, we’ll configure port forwarding on your router to make the server accessible from the internet using your WAN IP address.

Caddy is an excellent choice for this task because of its ease of use, automatic HTTPS, and support for modern web technologies.

What You’ll Need

  • Your WAN IP address (You can get this from your router)
  • A computer or server running Linux, macOS, or Windows
  • Caddy or another web server (nginx, etc.) installed on your machine
  • An HTML file to serve.
  • Access to your router’s administration interface

Step 1: Install Caddy

Linux

To install Caddy on Linux, you can use the following commands:

# Download Caddy
wget https://github.com/caddyserver/caddy/releases/download/v2.6.4/caddy_2.6.4_linux_amd64.tar.gz

# Extract the downloaded file
tar -xzf caddy_2.6.4_linux_amd64.tar.gz

# Move Caddy to a directory in your PATH
sudo mv caddy /usr/local/bin/

# Give execute permissions
sudo chmod +x /usr/local/bin/caddy

macOS

On macOS, you can install Caddy using Homebrew,

brew install caddy

Windows

For Windows, download the Caddy binary from the official website and add it to your system PATH.

Step 2: Create an HTML Page

Create a simple HTML page that you want to serve with Caddy. For example, create a file named index.html with the following content:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>ParottaSalna</title>
</head>
<body>
    <h1>Hello Soldiers !</h1>
    <p>Vanakkam</p>
</body>
</html>

Step 3: Configure Caddy

Create a Caddyfile in the same directory as your index.html file. This file will tell Caddy how to serve your HTML page.

Here’s a simple Caddyfile configuration:

:8080 {
    root * .
    file_server
}

Replace . (current directory) with the path to the directory containing your index.html file.

Explanation

  • :8080: This tells Caddy to listen on port 8080 on your local machine.
  • root * /path/to/your/html: This specifies the directory containing your HTML files.
  • file_server: This directive enables the file server to serve static files.

Step 4: Run Caddy

Navigate to the directory where your Caddyfile is located and start Caddy with the following command:

caddy run

Caddy will read the Caddyfile and start serving your HTML page on the specified port (8080 in this example).

Step 5: Access Your Server Locally

Open a web browser and go to the following URL to view your HTML page (with a plain :8080 site address, Caddy serves HTTP; automatic HTTPS applies when a domain name is used):

http://localhost:8080

Step 6: Configure Port Forwarding

To make your server accessible from the internet, you’ll need to set up port forwarding on your router.

Find Your Local IP Address

First, find your local IP address. You can do this by running:

  • Linux/macOS: ifconfig or ip addr
  • Windows: ipconfig

Access Your Router’s Admin Interface

  1. Open a web browser and enter your router’s IP address (commonly 192.168.0.1 or 192.168.1.1). I am using D-Link Router (http://192.168.0.1)
  2. Log in with your router’s admin credentials.

Set Up Port Forwarding (Usual steps)

  1. Locate the port forwarding section in your router’s interface.
  2. Create a new port forwarding rule:
    • Service Name: Caddy (or any name you prefer)
    • External Port: 8080 (the port Caddy listens on in this example; use 80 and 443 if you run Caddy on the standard HTTP/HTTPS ports)
    • Internal Port: 8080
    • Internal IP Address: Your computer’s local IP address
    • Protocol: TCP
  3. Save the changes.

The above steps may vary based on the router. I am using a D-Link 615 router, and I followed https://www.cfos.de/en/cfos-personal-net/port-forwarding/d-link-dir-615.htm to enable port forwarding.

Step 7: Access Your Server

Now you can access your server from anywhere using your WAN IP address (you can find the WAN IP in your router’s status page). Enter http://<your-WAN-IP>:8080 in your browser to see your HTML page.

Step 8: No-IP Dynamic DNS

You can get a free domain (usually a subdomain or a randomly generated one) from No-IP and point it at your WAN IP.

Or, if you own a domain, you can use that instead. I created a subdomain named home.parottasalna.com

and I am able to access my page via http://home.parottasalna.com:8080.

Here I haven’t set up a reverse proxy. You can configure one as well so that you don’t need to specify the port in the URL.
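
As a rough sketch (assuming home.parottasalna.com already points to your WAN IP and ports 80 and 443 are forwarded to this machine), a Caddyfile like the following would let Caddy obtain a certificate automatically and proxy requests to the local server, so no port is needed in the URL:


home.parottasalna.com {
    # Caddy provisions HTTPS for this domain automatically
    reverse_proxy localhost:8080
}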

ntfy.sh – To save you from un-noticed events

4 August 2024 at 05:19

Alex Pandian was the system administrator for a tech company, responsible for managing servers, maintaining network stability, and ensuring that everything ran smoothly.

With many scripts running daily and long-running processes that needed monitoring, Alex was constantly flooded with notifications.

Alex Pandian: “Every day, I have to go through dozens of emails and alerts just to find the ones that matter,”

Alex muttered while sipping coffee in the server room.

Alex Pandian: β€œThere must be a better way to streamline all this information.”

Despite using several monitoring tools, the notifications from these systems were scattered and overwhelming. Alex needed a more efficient method to receive alerts only when crucial events occurred, such as script failures or the completion of resource-intensive tasks.

Determined to find a better system, Alex began searching online for a tool that could help consolidate and manage notifications.

After reading through countless forums and reviews, Alex stumbled upon a discussion about ntfy.sh, a service praised for its simplicity and flexibility.

β€œThis looks promising,” Alex thought, excited by the ability to publish and subscribe to notifications using a straightforward, topic-based system. The idea of having notifications sent directly to a phone or desktop without needing complex configurations was exactly what Alex was looking for.

Alex decided to consult with Sam, a fellow system admin known for their expertise in automation and monitoring.

Alex Pandian: β€œHey Sam, have you ever used ntfy.sh?”

Sam: β€œAbsolutely, It’s a lifesaver for managing notifications. How do you plan to use it?”

Alex Pandian: β€œI’m thinking of using it for real-time alerts on script failures and long-running commands, Can you show me how it works?”

Sam: β€œOf course,”

Sam replied with a smile, eager to guide Alex through setting up ntfy.sh to improve workflow efficiency.

Together, Sam and Alex began configuring ntfy.sh for Alex’s environment. They focused on setting up topics and integrating them with existing systems to ensure that important notifications were delivered promptly.

Step 1: Identifying Key Topics

Alex identified the main areas where notifications were needed:

  • script-failures: To receive alerts whenever a script failed.
  • command-completions: To notify when long-running commands finished.
  • server-health: For critical server health alerts.

Step 2: Subscribing to Topics

Sam showed Alex how to subscribe to these topics using ntfy.sh on a mobile device and desktop. This ensured that Alex would receive notifications wherever they were, without having to constantly check email or dashboards.


# Subscribe to topics
ntfy subscribe script-failures
ntfy subscribe command-completions
ntfy subscribe server-health

Step 3: Automating Notifications

Sam explained how to use bash scripts and curl to send notifications to ntfy.sh whenever specific events occurred.

β€œFor example, if a script fails, you can automatically send an alert to the β€˜script-failures’ topic,” Sam demonstrated.


# Notify on script failure
./backup-script.sh || curl -d "Backup script failed!" ntfy.sh/script-failures

Alex was impressed by the simplicity and efficiency of this approach. β€œI can automate all of this?” Alex asked.

β€œDefinitely,” Sam replied. β€œYou can integrate it with cron jobs, monitoring tools, and more. It’s a great way to keep track of important events without getting bogged down by noise.”

With the basics in place, Alex began applying ntfy.sh to various real-world scenarios, streamlining the notification process and improving overall efficiency.

Monitoring Script Failures

Alex set up automated alerts for critical scripts that ran daily, ensuring that any failures were immediately reported. This allowed Alex to address issues quickly, minimizing downtime and improving system reliability.


# Notify on critical script failure
./critical-task.sh || curl -d "Critical task script failed!" ntfy.sh/script-failures

Tracking Long-Running Commands

Whenever Alex initiated a long-running command, such as a server backup or data migration, notifications were sent upon completion. This enabled Alex to focus on other tasks without constantly checking on progress.


# Notify on long-running command completion
long-command && curl -d "Long command completed successfully." ntfy.sh/command-completions

Server Health Alerts

To monitor server health, Alex integrated ntfy.sh with existing monitoring tools, ensuring that any critical issues were immediately flagged.


# Send server health alert
curl -d "Server CPU usage is critically high!" ntfy.sh/server-health

As with any new tool, there were challenges to overcome. Alex encountered a few hurdles, but with Sam’s guidance, these were quickly resolved.

Challenge: Managing Multiple Notifications

Initially, Alex found it challenging to manage multiple notifications and ensure that only critical alerts were prioritized. Sam suggested using filters and priorities to focus on the most important messages.


# Subscribe with filters for high-priority alerts
ntfy subscribe script-failures --priority=high

Challenge: Scheduling Notifications

Alex wanted to schedule notifications for regular maintenance tasks and reminders. Sam introduced Alex to using cron for scheduling automated alerts.

# Cron entry (added via crontab -e): send a reminder every Saturday at 8:00 AM
0 8 * * 6 curl -d "Time for weekly server maintenance." ntfy.sh/server-health


Sam gave some more examples to alex,

Monitoring disk space

As a system administrator, you can use ntfy.sh to receive alerts when disk space usage reaches a critical level. This helps prevent issues related to insufficient disk space.


# Check disk space and notify if usage is over 80%
disk_usage=$(df / | grep / | awk '{ print $5 }' | sed 's/%//g')
if [ $disk_usage -gt 80 ]; then
  curl -d "Warning: Disk space usage is at ${disk_usage}%." ntfy.sh/disk-space
fi

Alerting on Website Downtime

You can use ntfy.sh to monitor the status of a website and receive notifications if it goes down.


# Check website status and notify if it's down
website="https://example.com"
status_code=$(curl -o /dev/null -s -w "%{http_code}\n" $website)

if [ $status_code -ne 200 ]; then
  curl -d "Alert: $website is down! Status code: $status_code." ntfy.sh/website-monitor
fi

Reminding for Daily Tasks

You can set up ntfy.sh to send you daily reminders for important tasks, ensuring that you stay on top of your schedule.


# Cron entries (added via crontab -e): daily reminders at 9:00 AM and 9:50 AM
0 9 * * * curl -d "Time to review your daily tasks!" ntfy.sh/daily-reminders
50 9 * * * curl -d "Stand-up meeting at 10:00 AM." ntfy.sh/daily-reminders

Alerting on High System Load

Monitor system load and receive notifications when it exceeds a certain threshold, allowing you to take action before it impacts performance.

# Check the 1-minute load average (read from /proc/loadavg on Linux) and notify if it's high
load=$(cut -d ' ' -f1 /proc/loadavg)
threshold=2.0

if (( $(echo "$load > $threshold" | bc -l) )); then
  curl -d "Warning: System load is high: $load" ntfy.sh/system-load
fi

Notify on Backup Completion

Receive a notification when a backup process completes, allowing you to verify its success.

# Notify on backup completion
backup_command="/path/to/backup_script.sh"
$backup_command && curl -d "Backup completed successfully." ntfy.sh/backup-status || curl -d "Backup failed!" ntfy.sh/backup-status

Notifying on Container Events with Docker

Integrate ntfy.sh with Docker to send alerts for specific container events, such as when a container stops unexpectedly.


# Notify on Docker container stop event
container_name="my_app"
container_status=$(docker inspect -f '{{.State.Status}}' $container_name)

if [ "$container_status" != "running" ]; then
  curl -d "Alert: Docker container $container_name has stopped." ntfy.sh/docker-alerts
fi
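
The check above inspects the container once; as a variation, here is a small sketch that watches the Docker event stream and sends a notification every time a container stops (same assumed topic, docker-alerts):


#!/bin/bash
# Watch for container "die" events and notify on each one
docker events --filter 'type=container' --filter 'event=die' \
  --format '{{.Actor.Attributes.name}}' |
while read -r name; do
  curl -d "Alert: Docker container $name has stopped." ntfy.sh/docker-alerts
done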

Integrating with CI/CD Pipelines

Use ntfy.sh to notify you about the status of CI/CD pipeline stages, ensuring you stay informed about build successes or failures.


# Example GitLab CI/CD YAML snippet
stages:
  - build

build_job:
  stage: build
  script:
    - make build
  after_script:
    - if [ "$CI_JOB_STATUS" == "success" ]; then
        curl -d "Build succeeded for commit $CI_COMMIT_SHORT_SHA." ntfy.sh/ci-cd-status;
      else
        curl -d "Build failed for commit $CI_COMMIT_SHORT_SHA." ntfy.sh/ci-cd-status;
      fi

Notification on ssh login to server

Lets try with docker,


FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
# Set root password for SSH access (change 'your_password' to your desired password)
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
COPY ntfy-ssh.sh /usr/bin/ntfy-ssh.sh
RUN chmod +x /usr/bin/ntfy-ssh.sh
RUN echo "session optional pam_exec.so /usr/bin/ntfy-ssh.sh" >> /etc/pam.d/sshd
RUN apt-get -y update; apt-get -y install curl
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

script to send notification,


#!/bin/bash
if [ "${PAM_TYPE}" = "open_session" ]; then
  echo "here"
  curl \
    -H prio:high \
    -H tags:warning \
    -d "SSH login: ${PAM_USER} from ${PAM_RHOST}" \
    ntfy.sh/syed-alerts
fi
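
A possible way to build and try this out (assuming the Dockerfile and ntfy-ssh.sh above are in the current directory):


# Build the image and start the SSH container
docker build -t ssh-notify .
docker run -d -p 2222:22 --name ssh-notify ssh-notify

# Log in via SSH (password is 'password', as set in the Dockerfile);
# a notification should arrive on ntfy.sh/syed-alerts
ssh root@localhost -p 2222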

With ntfy.sh as an integral part of daily operations, Alex found a renewed sense of balance and control. The once overwhelming chaos of notifications was now a manageable stream of valuable information.

As Alex reflected on the journey, it was clear that ntfy.sh had transformed not just the way notifications were managed, but also the overall approach to system administration.

In a world full of noise, ntfy.sh had provided a clear and effective way to stay informed without distractions. For Alex, it was more than just a toolβ€”it was a new way of managing systems efficiently.

TASK – The Botanical Garden and Rose Garden – Python SETS

3 August 2024 at 10:01
  1. Create a set named rose_garden containing different types of roses: "red rose", "white rose", "yellow rose". Print the same.
  2. Add "pink rose" to the rose_garden set. Print the set to confirm the addition.
  3. Remove "yellow rose" from the rose_garden set using the remove() method. Print the set to verify the removal.
  4. Create another set botanical_garden with elements "sunflower", "tulip", and "red rose". Find the union of rose_garden and botanical_garden and print the result.
  5. Find the intersection of rose_garden and botanical_garden and print the common elements.
  6. Find the difference between rose_garden and botanical_garden and print the elements that are only in rose_garden.
  7. Find the symmetric difference between rose_garden and botanical_garden and print the elements unique to each set.
  8. Create a set small_garden containing "red rose", "white rose". Check if small_garden is a subset of rose_garden and print the result.
  9. Check if rose_garden is a superset of small_garden and print the result.
  10. Use the len() function to find the number of elements in the rose_garden set. Print the result.
  11. Use the discard() method to remove "pink rose" from the rose_garden set. Try to discard a non-existent element "blue rose" and observe what happens.
  12. Use the clear() method to remove all elements from the rose_garden set. Print the set to confirm it’s empty.
  13. Make a copy of the botanical_garden set using the copy() method. Add "lily" to the copy and print both sets to see the differences.
  14. Create a frozen set immutable_garden with elements "orchid", "daisy", "red rose". Try to add or remove an element and observe what happens.
  15. Iterate over the botanical_garden set and print each element.
  16. Use set comprehension to create a set even_numbers containing even numbers from 1 to 10.
  17. Given a list of flowers ["rose", "tulip", "rose", "daisy", "tulip"], use a set to remove duplicates and print the unique flowers.
  18. Check if "sunflower" is in the botanical_garden set and print the result.
  19. Use the intersection_update() method to update the botanical_garden set with only the elements found in rose_garden. Print the updated set.
  20. Use the difference_update() method to remove all elements in small_garden from botanical_garden. Print the updated set.

Task – Annachi Kadai – Python Dictionary

3 August 2024 at 09:54
  1. Create a dictionary named student with the following keys and values. and print the same
    • "name": "Alice"
    • "age": 21
    • "major": "Computer Science"
  2. Using the student dictionary, print the values associated with the keys "name" and "major".
  3. Add a new key-value pair to the student dictionary: "gpa": 3.8. Then update the "age" to 22.
  4. Remove the key "major" from the student dictionary using the del statement. Print the dictionary to confirm the removal.
  5. Check if the key "age" exists in the student dictionary. Print True or False based on the result.
  6. Create a dictionary prices with three items, e.g., "apple": 0.5, "banana": 0.3, "orange": 0.7. Iterate over the dictionary and print each key-value pair.
  7. Use the len() function to find the number of key-value pairs in the prices dictionary. Print the result.
  8. Use the get() method to access the "gpa" in the student dictionary. Try to access a non-existing key, e.g., "graduation_year", with a default value of 2025.
  9. Create another dictionary extra_info with the following keys and values. Also merge extra_info into the student dictionary using the update() method.
    • "graduation_year": 2025
    • "hometown": "Springfield"
  10. Create a dictionary squares where the keys are numbers from 1 to 5 and the values are the squares of the keys. Use dictionary comprehension.
  11. Using the prices dictionary, print the keys and values as separate lists using the keys() and values() methods.
  12. Create a dictionary school with two nested dictionaries. Access and print the age of "student2".
    • "student1": {"name": "Alice", "age": 21}
    • "student2": {"name": "Bob", "age": 22}
  13. Use the setdefault() method to add a new key "advisor" with the value "Dr. Smith" to the student dictionary if it does not exist.
  14. Use the pop() method to remove the "hometown" key from the student dictionary and store its value in a variable. Print the variable.
  15. Use the clear() method to remove all items from the prices dictionary. Print the dictionary to confirm it’s empty.
  16. Make a copy of the student dictionary using the copy() method. Modify the copy by changing "name" to "Charlie". Print both dictionaries to see the differences.
  17. Create two lists: keys = ["name", "age", "major"] and values = ["Eve", 20, "Mathematics"]. Use the zip() function to create a dictionary from these lists.
  18. Use the items() method to iterate over the student dictionary and print each key-value pair.
  19. Given a list of fruits: ["apple", "banana", "apple", "orange", "banana", "banana"], create a dictionary fruit_count that counts the occurrences of each fruit.
  20. Use collections.defaultdict to create a dictionary word_count that counts the number of occurrences of each word in a list: ["hello", "world", "hello", "python"].

The Botanical Garden and Rose Garden: Understanding Sets

3 August 2024 at 09:36

Introduction to the Botanical Garden

We are planning to open a botanical garden with flowers that will attract people to visit.

Morning: Planting Unique Flowers

One morning, we decide to plant flowers in the garden, ensuring that each flower we plant is unique.


botanical_garden = {"Rose", "Lily", "Sunflower"}

Noon: Adding More Flowers

At noon, they find some more flowers and add them to the garden, making sure they only add flowers that aren’t already there.

Adding Elements to a Set:


# Adding more unique flowers to the botanical garden
botanical_garden.add("Jasmine")
botanical_garden.add("Hibiscus")
print(botanical_garden)
# output: {'Hibiscus', 'Rose', 'Lily', 'Sunflower', 'Jasmine'}

Afternoon: Trying to Plant Duplicate Flowers

In the afternoon, they accidentally try to plant another Rose, but the garden’s rule prevents any duplicates from being added.

Adding Duplicate Elements:


# Attempting to add a duplicate flower
botanical_garden.add("Rose")
print(botanical_garden)
# output: {'Hibiscus', 'Rose', 'Lily', 'Sunflower', 'Jasmine'}  (unchanged; duplicates are ignored)

Evening: Removing Unwanted Plants

As evening approaches, they decide to remove some flowers they no longer want in their garden.

Removing Elements from a Set:


# Removing a flower from the botanical garden
botanical_garden.remove("Lily")
print(botanical_garden)
# output: {'Hibiscus', 'Rose', 'Sunflower', 'Jasmine'}

Night: Checking Flower Types

Before going to bed, they check if certain flowers are present in their botanical garden.

Checking Membership:


# Checking if certain flowers are in the garden
is_rose_in_garden = "Rose" in botanical_garden
is_tulip_in_garden = "Tulip" in botanical_garden

print(f"Is Rose in the garden? {is_rose_in_garden}")
print(f"Is Tulip in the garden? {is_tulip_in_garden}")

# Output
# Is Rose in the garden? True
# Is Tulip in the garden? False

Midnight: Comparing with Rose Garden

Late at night, they compare their botanical garden with their rose garden to see which flowers they have in common and which are unique to each garden.

Set Operations:

Intersections:


# Neighbor's rose garden
rose_garden = {"Rose", "Lavender"}

# Flowers in both gardens (Intersection)
common_flowers = botanical_garden.intersection(rose_garden)
print(f"Common flowers: {common_flowers}")

# Output
# Common flowers: {'Rose'}

Difference:



# Flowers unique to their garden (Difference)
unique_flowers = botanical_garden.difference(rose_garden)
print(f"Unique flowers: {unique_flowers}")

# output
# Unique flowers: {'Hibiscus', 'Sunflower', 'Jasmine'}


Union:



# All unique flowers from both gardens (Union)
all_unique_flowers = botanical_garden.union(rose_garden)
print(f"All unique flowers: {all_unique_flowers}")
# Output: All unique flowers: {'Hibiscus', 'Rose', 'Sunflower', 'Jasmine', 'Lavender'}

ANNACHI KADAI – The Dictionary

3 August 2024 at 09:23

In a vibrant town in Tamil Nadu, there is a popular grocery store called Annachi Kadai. This store is always bustling with fresh deliveries of items.

The store owner, Pandian, uses a special inventory system to track the products. This system functions like a dictionary in Python, where each item is labeled with its name, and the quantity available is recorded.

Morning: Delivering Items to the Store

One bright morning, a new delivery truck arrives at the grocery store, packed with fresh items. Pandian records these new items in his inventory list.

Creating and Updating the Inventory:


# Initial delivery of items to the store
inventory = {
    "apples": 20,
    "bananas": 30,
    "carrots": 15,
    "milk": 10
}

print("Initial Inventory:", inventory)
# Output: Initial Inventory: {'apples': 20, 'bananas': 30, 'carrots': 15, 'milk': 10}

Noon: Additional Deliveries

As the day progresses, more deliveries arrive with additional items that need to be added to the inventory. Pandian updates the system with these new arrivals.

Adding New Items to the Inventory:


# Adding more items from the delivery
inventory["bread"] = 25
inventory["eggs"] = 50

print("Updated Inventory:", inventory)
# Output: Updated Inventory: {'apples': 20, 'bananas': 30, 'carrots': 15, 'milk': 10, 'bread': 25, 'eggs': 50}

Afternoon: Stocking the Shelves

In the afternoon, Pandian notices that some items are running low and restocks them by updating the quantities in the inventory system.

Updating Quantities:


# Updating item quantities after restocking shelves
inventory["apples"] += 10  # 10 more apples added
inventory["milk"] += 5     # 5 more bottles of milk added

print("Inventory after Restocking:", inventory)
# Output: Inventory after Restocking: {'apples': 30, 'bananas': 30, 'carrots': 15, 'milk': 15, 'bread': 25, 'eggs': 50}

Evening: Removing Sold-Out Items

As evening falls, some items are sold out, and Pandian needs to remove them from the inventory to reflect their unavailability.

Removing Items from the Inventory:


# Removing sold-out items
del inventory["carrots"]

print("Inventory after Removal:", inventory)
# Output: Inventory after Removal: {'apples': 30, 'bananas': 30, 'milk': 15, 'bread': 25, 'eggs': 50}

Night: Checking Inventory

Before closing the store, Pandian checks the inventory to ensure that all items are accurately recorded and none are missing.

Checking for Items:

# Checking if specific items are in the inventory
is_bananas_in_stock = "bananas" in inventory
is_oranges_in_stock = "oranges" in inventory

print(f"Are bananas in stock? {is_bananas_in_stock}")
print(f"Are oranges in stock? {is_oranges_in_stock}")
# Output: Are bananas in stock? True
# Output: Are oranges in stock? False


Midnight: Reviewing Inventory

After a busy day, Pandian reviews the entire inventory to ensure all deliveries and sales are accurately recorded.

Iterating Over the Inventory:


# Reviewing the final inventory
for item, quantity in inventory.items():
    print(f"Item: {item}, Quantity: {quantity}")

# Output:
# Item: apples, Quantity: 30
# Item: bananas, Quantity: 30
# Item: milk, Quantity: 15
# Item: bread, Quantity: 25
# Item: eggs, Quantity: 50
