
Lulu Mall: More Than Shopping 🛒 — A Business Viewpoint

Source: https://www.kochi.lulumall.in/

“Use people’s laziness and become rich.”

If someone can capitalize on people’s laziness, they can become rich. By providing comfort and convenience, businesses can charge premium prices — and people are willing to pay. Swiggy and Zomato are prime examples of this business model.

Hi everyone 👋! Last week, I visited Kerala, exploring Vagamon and Kochi. It was an experience filled with beautiful memories. During this trip, I visited the famous Lulu Mall. Ever since I was a child, I’ve been intrigued by the business strategies behind places and events, and Lulu Mall was no exception.

We all know that for middle-class people, shopping at a mall may not always fit their budget. I realized this during my visit to Chennai’s Phoenix Market City Mall, and I had the same thoughts while visiting Lulu Mall.

🔍 Interesting Business Insights I Observed at Lulu Mall

1. Premium Enjoyment for Kids
Lulu Mall’s indoor amusement area offers attractions like the Roller Coaster, Ferris Wheel, and Cine Coaster. It also features arcade games and VR experiences. I noticed many kids enjoying themselves here — it truly is an exciting place for them. However, these premium rides and games come at a high cost. While the enjoyment is top-notch, it’s clearly a premium experience designed to encourage spending.

Source: Own Phone Camera | Samsung M34

2. The Food Court Strategy
On the third floor, Lulu Mall presents a diverse range of dining options. A subtle but clever observation I made was the pricing strategy: food outlets near the escalators charge higher prices due to increased foot traffic, while those located farther away offer comparatively lower rates. This model successfully capitalizes on customer behavior and convenience.

Source: Own Phone Camera | Samsung M34

3. Strategic Store Placement
The mall’s layout is meticulously designed with anchor stores like Lulu Hypermarket and PVR Cinemas placed at strategic locations. This encourages visitors to walk through the entire mall, exposing them to other shops along the way. It’s a smart way to boost foot traffic and create opportunities for impulse purchases, benefiting both large and small retailers.

4. Seasonal Themes and Promotions
To attract repeat visitors, Lulu Mall frequently adopts seasonal promotions and festive decorations. Events such as cultural festivals, sales, and themed displays keep the mall lively and engaging. This ensures that customers always have a reason to return.

5. Focus on Luxury and Aspirational Brands
Lulu Mall caters to a broad demographic by offering both essentials for middle-class families and high-end luxury brands. This balance attracts a wide customer base, including those looking to indulge occasionally in premium experiences. It’s a strategy that drives sales while maintaining the mall’s aspirational appeal.

6. Catchy Quotes
Catchy quotes help keep a brand in customers’ minds, and there are numerous examples here: each shop runs advertisements built around a quote, and these quotes are placed strategically throughout the mall.

Source: Own Phone Camera | Samsung M34

❓ A Fun Question:

I’ve yet to try burgers or pizza — what do they taste like? Can you compare them to South Indian dishes like Sambar Rice with Potato Fry, Rasam with Potato Fry, or Biryani? I’d love to hear your thoughts!

Alright, guys, that’s it for this blog. Oh, and one more thing! Today, I’ve been on a blogging spree. If you have time, don’t forget to check out my latest post on Emacs & Org Mode in Windows.

If you find this content valuable, follow me for more upcoming blogs.

Connect with Me:

🚀 #FOSS: Mastering Superfile: The Ultimate Terminal-Based File Manager for Power Users

🔥 Introduction

Are you tired of slow, clunky GUI-based file managers? Do you want lightning-fast navigation and total control over your files—right from your terminal? Meet Superfile, the ultimate tool for power users who love efficiency and speed.

In this blog, we’ll take you on a deep dive into Superfile’s features, commands, and shortcuts, transforming you into a file management ninja! ⚡

💡 Why Choose Superfile?

Superfile isn’t just another file manager; it’s a game-changer.

Here’s why:

✅ Blazing Fast – No unnecessary UI lag, just pure efficiency.

✅ Keyboard-Driven – Forget the mouse, master navigation with powerful keybindings.

✅ Multi-Panel Support – Work with multiple directories simultaneously.

✅ Smart Search & Sorting – Instantly locate and organize files.

✅ Built-in File Preview & Metadata Display – See what you need without opening files.

✅ Highly Customizable – Tailor it to fit your workflow perfectly.

🛠 Installation

Getting started is easy! Install Superfile using one of the following commands:

# For Linux (Debian-based)
wget -qO- https://superfile.netlify.app/install.sh | bash

# For macOS (via Homebrew)
brew install superfile

# For Windows (via Scoop)
scoop install superfile

Once installed, launch it with:

spf

🚀 Boom! You’re ready to roll.

⚡ Essential Commands & Shortcuts

🏗 General Operations

  • Launch Superfile: spf
  • Exit: Press q or Esc
  • Help Menu: ?
  • Toggle Footer Panel: F

📂 File & Folder Navigation

  • New File Panel: n
  • Close File Panel: w
  • Toggle File Preview: f
  • Next Panel: Tab or Shift + l
  • Sidebar Panel: s

📝 File & Folder Management

  • Create File/Folder: Ctrl + n
  • Rename: Ctrl + r
  • Copy: Ctrl + c
  • Cut: Ctrl + x
  • Paste: Ctrl + v
  • Delete: Ctrl + d
  • Copy Path: Ctrl + p

🔎 Search & Selection

  • Search: /
  • Select Files: v
  • Select All: Shift + a

📦 Compression & Extraction

  • Extract Zip: Ctrl + e
  • Compress to Zip: Ctrl + a

🏆 Advanced Power Moves

  • Open Terminal Here: Shift + t
  • Open in Editor: e
  • Toggle Hidden Files: .

💡 Pro Tip: Use Shift + p to pin frequently accessed folders for even quicker access!

🎨 Customizing Superfile

Want to make Superfile truly yours? Customize it easily by editing the config file:

$EDITOR CONFIG_PATH

To enable the metadata plugin, add:

metadata = true

For more customizations, check out the Superfile documentation.

🎯 Final Thoughts

Superfile is the Swiss Army knife of terminal-based file managers. Whether you’re a developer, system admin, or just someone who loves a fast, efficient workflow, Superfile will revolutionize the way you manage files.

🚀 Ready to supercharge your productivity? Install Superfile today and take control like never before!

For more details, visit the Superfile website.

Can UV Transform Python Scripts into Standalone Executables?

Managing dependencies for small Python scripts has always been a bit of a hassle.

Traditionally, we either install packages globally (not recommended) or create a virtual environment, activate it, and install dependencies manually.

But what if we could run Python scripts like standalone binaries?

Introducing PEP 723 – Inline Script Metadata

PEP 723 (https://peps.python.org/pep-0723/) introduces a new way to specify dependencies directly within a script, making it easier to execute standalone scripts without dealing with external dependency files.

This is particularly useful for quick automation scripts or one-off tasks.

Consider a script that interacts with an API requiring a specific package:

# /// script
# requires-python = ">=3.11"
# dependencies = [
#   "requests",
# ]
# ///

import requests
response = requests.get("https://api.example.com/data")
print(response.json())

Here, instead of manually creating a requirements.txt or setting up a virtual environment, the dependencies are defined inline. When using uv, it automatically installs the required packages and runs the script just like a binary.

Running the Script as a Third-Party Tool

With uv, executing the script feels like running a compiled binary:

$ uv run fetch-data.py
Reading inline script metadata from: fetch-data.py
Installed dependencies in milliseconds

Behind the scenes, uv creates an isolated environment, ensuring a clean dependency setup without affecting the global Python environment. This allows Python scripts to function as independent tools without any manual dependency management.

Why This Matters

This approach makes Python an even more attractive choice for quick automation tasks, replacing the need for complex setups. It allows scripts to be shared and executed effortlessly, much like compiled executables in other programming environments.

By leveraging uv, we can streamline our workflow and use Python scripts as powerful, self-contained tools without the usual dependency headaches.

Minimal Typing Practice Application in Python

Introduction

This is a Python-based single-file application designed for typing practice. It provides a simple interface to improve typing accuracy and speed. Over time, this minimal program has gradually improved my typing skills.

What I Learned from This Project

  • 2D Array Validation
    I initially used a 1D array to store user input, but I ran into issues. After switching to a 2D array, I understood why it was more appropriate for handling user input (a rough sketch of the idea follows after this list).
  • Tkinter
    I wanted to visually see and update correct, wrong, and incomplete typing inputs, but I didn’t know how to implement that in the terminal. So, I used a simple Tkinter GUI window.
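
A rough sketch of the 2D-array idea (illustrative only, not the project’s actual code): each row tracks one line of the target text, and each cell records whether the typed character is correct, wrong, or not yet typed.

# Illustrative sketch: classify typed characters against target text
# using a 2D structure (one row per line, one cell per character).
target_lines = ["hello world", "typing practice"]
typed_lines = ["hxllo wor", "typing practise"]

status = []  # status[row][col] is "correct", "wrong", or "incomplete"
for row, target in enumerate(target_lines):
    typed = typed_lines[row] if row < len(typed_lines) else ""
    row_status = []
    for col, expected in enumerate(target):
        if col >= len(typed):
            row_status.append("incomplete")  # not typed yet
        elif typed[col] == expected:
            row_status.append("correct")
        else:
            row_status.append("wrong")
    status.append(row_status)

for row_status in status:
    print(row_status)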

Run This Program

It depends on the following applications:

  • Python 3
  • python3-tk

Installation Command on Debian-Based Systems

sudo apt install python3 python3-tk

Clone the repository and run the program:

git clone https://github.com/github-CS-krishna/TerminalTyping
cd TerminalTyping
python3 terminalType.py

Links

For more details, refer to the README documentation on GitHub.

This will help you understand how to use it.

Source code (GitHub)

Learning Notes #48 – Common Pitfalls in Event Driven Architecture

Today, I came across Raul Junco’s post on mistakes in Event-Driven Architecture – https://www.linkedin.com/posts/raul-junco_after-years-building-event-driven-systems-activity-7278770394046631936-zu3-?utm_source=share&utm_medium=member_desktop. In this blog, I am highlighting the same points for future reference.

Event-driven architectures are awesome, but they come with their own set of challenges. Missteps can lead to unreliable systems, inconsistent data, and frustrated users. Let’s explore some of the most common pitfalls and how to address them effectively.

1. Duplication

Idempotent APIs – https://parottasalna.com/2025/01/08/learning-notes-47-idempotent-post-requests/

Events often get re-delivered due to retries or system failures. Without proper handling, duplicate events can:

  • Charge a customer twice for the same transaction: Imagine a scenario where a payment service retries a payment event after a temporary network glitch, resulting in a duplicate charge.
  • Cause duplicate inventory updates: For example, an e-commerce platform might update stock levels twice for a single order, leading to overestimating available stock.
  • Create inconsistent or broken system states: Duplicates can cascade through downstream systems, introducing mismatched or erroneous data.

Solution:

  • Assign unique IDs: Ensure every event has a globally unique identifier. Consumers can use these IDs to detect and discard duplicates.
  • Design idempotent processing: Structure your operations so they produce the same outcome even when executed multiple times. For instance, an API updating inventory could always set stock levels to a specific value rather than incrementing or decrementing.
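
To make this concrete, here is a minimal Python sketch of an idempotent consumer, assuming each event carries a unique event_id (the field and function names are illustrative); in production the set of processed IDs would live in a durable store rather than in memory.

# Minimal idempotent-consumer sketch: duplicates are detected via event IDs.
processed_ids = set()  # in production: a database table or Redis set

def charge_customer(customer_id, amount):
    print(f"Charging {customer_id}: {amount}")

def handle_payment(event):
    event_id = event["event_id"]  # assumed to be globally unique
    if event_id in processed_ids:
        return                    # duplicate delivery: safely ignored
    charge_customer(event["customer_id"], event["amount"])
    processed_ids.add(event_id)

# The same event delivered twice results in only one charge.
evt = {"event_id": "evt-1", "customer_id": "c-42", "amount": 500}
handle_payment(evt)
handle_payment(evt)  # no-op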

2. Not Guaranteeing Order

Events can arrive out of order when distributed across partitions or queues. This can lead to:

  • Processing a refund before the payment: If a refund event is processed before the corresponding payment event, the system might show a negative balance or fail to reconcile properly.
  • Breaking logic that relies on correct sequence: Certain workflows, such as assembling logs or transactional data, depend on a strict event order to function correctly.

Solution

  • Use brokers with ordering guarantees: Message brokers like Apache Kafka support partition-level ordering. Design your topics and partitions to align with entities requiring ordered processing (e.g., user or account ID).
  • Add sequence numbers or timestamps: Include metadata in events to indicate their position in a sequence. Consumers can use this data to reorder events if necessary, ensuring logical consistency.
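
As a rough illustration, here is a small Python sketch that buffers out-of-order events and applies them by per-account sequence number (the event shape is assumed for the example):

# Reordering sketch: buffer events until the next expected sequence
# number for that account arrives, then apply them in order.
from collections import defaultdict

expected = defaultdict(lambda: 1)   # next expected sequence number per account
buffered = defaultdict(dict)        # out-of-order events, keyed by sequence number

def apply_event(event):
    print(f"applying {event['type']} #{event['seq']} for {event['account']}")

def on_event(event):
    acct, seq = event["account"], event["seq"]
    buffered[acct][seq] = event
    while expected[acct] in buffered[acct]:
        apply_event(buffered[acct].pop(expected[acct]))
        expected[acct] += 1

# The refund (#2) arrives first but is applied only after the payment (#1).
on_event({"account": "a1", "seq": 2, "type": "refund"})
on_event({"account": "a1", "seq": 1, "type": "payment"})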

3. The Dual Write Problem

Outbox Pattern: https://parottasalna.com/2025/01/03/learning-notes-31-outbox-pattern-cloud-pattern/

When writing to a database and publishing an event, one might succeed while the other fails. This can:

  • Lose events: If the event is not published after the database write, downstream systems might remain unaware of critical changes, such as a new order or a status update.
  • Cause mismatched states: For instance, a transaction might be logged in a database but not propagated to analytical or monitoring systems, creating inconsistencies.

Solution

  • Use the Transactional Outbox Pattern: In this pattern, events are written to an “outbox” table within the same database transaction as the main data write. A separate process then reads from the outbox and publishes events reliably.
  • Adopt Change Data Capture (CDC) tools: CDC tools like Debezium can monitor database changes and publish them as events automatically, ensuring no changes are missed.
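
A minimal sketch of the outbox idea, using sqlite3 so it runs anywhere; the table and event names are made up for the example. The business row and the outbox row are written in one transaction, and a separate relay process would later read the outbox and publish to the broker.

# Transactional outbox sketch: both writes commit (or roll back) together.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, published INTEGER DEFAULT 0)")

def place_order(order_id):
    with conn:  # one transaction covers both inserts
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, "created"))
        conn.execute(
            "INSERT INTO outbox (payload) VALUES (?)",
            (json.dumps({"type": "order_created", "order_id": order_id}),),
        )

place_order("o-1001")
print(conn.execute("SELECT payload FROM outbox WHERE published = 0").fetchall())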

4. Non-Backward-Compatible Changes

Changing event schemas without considering existing consumers can break systems. For example:

  • Removing a field: A consumer relying on this field might encounter null values or fail altogether.
  • Renaming or changing field types: This can lead to deserialization errors or misinterpretation of data.

Solution:

  • Maintain versioned schemas: Introduce new schema versions incrementally and ensure consumers can continue using older versions during the transition.
  • Use schema evolution-friendly formats: Formats like Avro or Protobuf natively support schema evolution, allowing you to add fields or make other non-breaking changes easily.
  • Add adapters for compatibility: Build adapters or translators that transform events from new schemas to older formats, ensuring backward compatibility for legacy systems.
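
As an illustration, here is a small sketch of a consumer that tolerates two versions of the same event, where version 2 renamed a field and added a new one (the field names are invented for the example):

# Version-tolerant consumer sketch: fall back to the old field name
# and default any field that older events do not carry.
def read_user_created(event):
    name = event.get("user_name") or event.get("username")  # v2 name, else v1
    if name is None:
        raise ValueError(f"unsupported event schema: {event}")
    return {
        "name": name,
        "email": event.get("email", ""),  # added in v2; default for v1 events
    }

print(read_user_created({"version": 1, "username": "krishna"}))
print(read_user_created({"version": 2, "user_name": "krishna", "email": "k@example.com"}))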

Learning Notes #41 – Shared Lock and Exclusive Locks | Postgres

Today, I learnt about various locking mechanisms to prevent double updates. In this blog, I make notes on shared locks and exclusive locks for my future self.

What Are Locks in Databases?

Locks are mechanisms used by a DBMS to control access to data. They ensure that transactions are executed in a way that maintains the ACID (Atomicity, Consistency, Isolation, Durability) properties of the database. Locks can be classified into several types, including:

  • Shared Locks (S Locks): Allow multiple transactions to read a resource simultaneously but prevent any transaction from writing to it.
  • Exclusive Locks (X Locks): Allow a single transaction to modify a resource, preventing both reading and writing by other transactions.
  • Intent Locks: Used to signal the type of lock a transaction intends to acquire at a lower level.
  • Deadlock Prevention Locks: Special locks aimed at preventing deadlock scenarios.

Shared Lock

A shared lock is used when a transaction needs to read a resource (e.g., a database row or table) without altering it. Multiple transactions can acquire a shared lock on the same resource simultaneously. However, as long as one or more shared locks exist on a resource, no transaction can acquire an exclusive lock on that resource.


-- Transaction A: Acquire a shared lock on a row
BEGIN;
SELECT * FROM employees WHERE id = 1 FOR SHARE;
-- Transaction B: Acquire a shared lock on the same row
BEGIN;
SELECT * FROM employees WHERE id = 1 FOR SHARE;
-- Both transactions can read the row concurrently
-- Transaction C: Attempt to update the same row
BEGIN;
UPDATE employees SET salary = salary + 1000 WHERE id = 1;
-- Transaction C will be blocked until Transactions A and B release their locks

Key Characteristics of Shared Locks

1. Concurrent Reads

  • Shared locks allow multiple transactions to read the same resource at the same time.
  • This is ideal for operations like SELECT queries that do not modify data.

2. Write Blocking

  • While a shared lock is active, no transaction can modify the locked resource.
  • Prevents dirty writes and ensures read consistency.

3. Compatibility

  • Shared locks are compatible with other shared locks but not with exclusive locks.

When Are Shared Locks Used?

Shared locks are typically employed in read operations under certain isolation levels. For instance,

1. Read Committed Isolation Level:

  • Shared locks are held for the duration of the read operation.
  • Prevents dirty reads by ensuring the data being read is not modified by other transactions during the read.

2. Repeatable Read Isolation Level:

  • Shared locks are held until the transaction completes.
  • Ensures that the data read during a transaction remains consistent and unmodified.

3. Snapshot Isolation:

  • Shared locks may not be explicitly used, as the DBMS creates a consistent snapshot of the data for the transaction.

    Exclusive Locks

    An exclusive lock is used when a transaction needs to modify a resource. Only one transaction can hold an exclusive lock on a resource at a time, ensuring no other transactions can read or write to the locked resource.

    
    -- Transaction X: Acquire an exclusive lock to update a row
    BEGIN;
    UPDATE employees SET salary = salary + 1000 WHERE id = 2;
    -- Transaction Y: Attempt a locking read on the same row
    BEGIN;
    SELECT * FROM employees WHERE id = 2 FOR SHARE;
    -- Transaction Y will be blocked until Transaction X completes
    -- (a plain SELECT without FOR SHARE is not blocked in PostgreSQL, thanks to MVCC)
    -- Transaction Z: Attempt to update the same row
    BEGIN;
    UPDATE employees SET salary = salary + 500 WHERE id = 2;
    -- Transaction Z will also be blocked until Transaction X completes
    

    Key Characteristics of Exclusive Locks

    1. Write Operations: Exclusive locks are essential for operations like INSERT, UPDATE, and DELETE.

    2. Blocking Reads and Writes: While an exclusive lock is active, no other transaction can read or write to the resource.

    3. Isolation: Ensures that changes made by one transaction are not visible to others until the transaction is complete.

      When Are Exclusive Locks Used?

      Exclusive locks are typically employed in write operations or any operation that modifies the database. For instance:

      1. Transactional Updates – A transaction that updates a row acquires an exclusive lock to ensure no other transaction can access or modify the row during the update.

      2. Table Modifications – When altering a table structure, the DBMS may place an exclusive lock on the entire table.

      Benefits of Shared and Exclusive Locks

      Benefits of Shared Locks

      1. Consistency in Multi-User Environments – Ensure that data being read is not altered by other transactions, preserving consistency.
      2. Concurrency Support – Allow multiple transactions to read data simultaneously, improving system performance.
      3. Data Integrity – Prevent dirty reads and writes, ensuring that operations yield reliable results.

      Benefits of Exclusive Locks

      1. Data Integrity During Modifications – Prevents other transactions from accessing data being modified, ensuring changes are applied safely.
      2. Isolation of Transactions – Ensures that modifications by one transaction are not visible to others until committed.

      Limitations and Challenges

      Shared Locks

      1. Potential for Deadlocks – Deadlocks can occur if two transactions simultaneously hold shared locks and attempt to upgrade to exclusive locks.
      2. Blocking Writes – Shared locks can delay write operations, potentially impacting performance in write-heavy systems.
      3. Lock Escalation – In systems with high concurrency, shared locks may escalate to table-level locks, reducing granularity and concurrency.

      Exclusive Locks

      1. Reduced Concurrency – Exclusive locks prevent other transactions from accessing the locked resource, which can lead to bottlenecks in highly concurrent systems.
      2. Risk of Deadlocks – Deadlocks can occur if two transactions attempt to acquire exclusive locks on resources held by each other.

      Lock Compatibility

      In short, shared locks are compatible with other shared locks, while an exclusive lock is incompatible with both shared and exclusive locks on the same resource.

      HAProxy EP 2: TCP Proxy for Flask Application

      Meet Jafer, a backend engineer tasked with ensuring the new microservice they are building can handle high traffic smoothly. The microservice is a Flask application that needs to be accessed over TCP, and Jafer decided to use HAProxy to act as a TCP proxy to manage incoming traffic.

      This guide will walk you through how Jafer sets up HAProxy to work as a TCP proxy for a sample Flask application.

      Why Use HAProxy as a TCP Proxy?

      HAProxy as a TCP proxy operates at Layer 4 (Transport Layer) of the OSI model. It forwards raw TCP connections from clients to backend servers without inspecting the contents of the packets. This is ideal for scenarios where:

      • You need to handle non-HTTP traffic, such as databases or other TCP-based applications.
      • You want to perform load balancing without application-level inspection.
      • Your services are using protocols other than HTTP/HTTPS.

      At this layer, HAProxy cannot inspect the contents of the packets, but it can still identify the client’s IP address.
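
      To see the raw TCP forwarding in action, here is a small Python sketch that talks to the proxy over a plain socket once the setup from the steps below is running on localhost:4000 (the host and port are the ones used in this guide):

      # Raw-socket check: HAProxy in TCP mode just forwards these bytes
      # to the Flask backend and relays the response unchanged.
      import socket

      with socket.create_connection(("localhost", 4000)) as sock:
          sock.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
          print(sock.recv(4096).decode())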

      Step 1: Set Up a Sample Flask Application

      First, Jafer created a simple Flask application that listens on a TCP port. Let’s create a file named app.py

      from flask import Flask, request
      
      app = Flask(__name__)
      
      @app.route('/', methods=['GET'])
      def home():
          return "Hello from Flask over TCP!"
      
      if __name__ == "__main__":
          app.run(host='0.0.0.0', port=5000)  # Run the app on port 5000
      
      
      

      Step 2: Dockerize the Flask Application

      To make the Flask app easy to deploy, Jafer decided to containerize it using Docker.

      Create a Dockerfile

      # Use an official Python runtime as a parent image
      FROM python:3.9-slim
      
      # Set the working directory
      WORKDIR /app
      
      # Copy the current directory contents into the container at /app
      COPY . /app
      
      # Install any needed packages specified in requirements.txt
      RUN pip install flask
      
      # Make port 5000 available to the world outside this container
      EXPOSE 5000
      
      # Run app.py when the container launches
      CMD ["python", "app.py"]
      
      
      

      To build and run the Docker container, use the following commands

      docker build -t flask-app .
      docker run -d -p 5000:5000 flask-app
      
      

      This will start the Flask application on port 5000.

      Step 3: Configure HAProxy as a TCP Proxy

      Now, Jafer needs to configure HAProxy to act as a TCP proxy for the Flask application.

      Create an HAProxy configuration file named haproxy.cfg

      global
          log stdout format raw local0
          maxconn 4096
      
      defaults
          mode tcp  # Operating in TCP mode
          log global
          option tcplog
          timeout connect 5000ms
          timeout client  50000ms
          timeout server  50000ms
      
      frontend tcp_front
          bind *:4000  # Bind to port 4000 for incoming TCP traffic
          default_backend flask_backend
      
      backend flask_backend
          balance roundrobin  # Use round-robin load balancing
          server flask1 127.0.0.1:5000 check  # Proxy to Flask app running on port 5000
      
      

      In this configuration:

      • Mode TCP: HAProxy is set to work in TCP mode.
      • Frontend: Listens on port 4000 and forwards incoming TCP traffic to the backend.
      • Backend: Contains a single server (flask1) where the Flask app is running.

      Step 4: Run HAProxy with the Configuration

      To start HAProxy with the above configuration, you can use Docker to run HAProxy in a container.

      Create a Dockerfile for HAProxy

      FROM haproxy:2.4
      
      # Copy the HAProxy configuration file to the container
      COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
      

      Build and run the HAProxy Docker container

      docker build -t haproxy-tcp .
      docker run -d -p 4000:4000 haproxy-tcp
      
      

      This will start HAProxy on port 4000, which is configured to proxy TCP traffic to the Flask application running on port 5000.

      Step 5: Test the TCP Proxy Setup

      To test the setup, open a web browser or use curl to send a request to the HAProxy server

      curl http://localhost:4000/
      

      You should see the response

      Hello from Flask over TCP!
      

      This confirms that HAProxy is successfully proxying TCP traffic to the Flask application.

      Step 6: Scaling Up

      If Jafer wants to scale the application to handle more traffic, he can add more backend servers to the haproxy.cfg file

      backend flask_backend
          balance roundrobin
          server flask1 127.0.0.1:5000 check
          server flask2 127.0.0.1:5001 check
      

      Jafer could run another instance of the Flask application on a different port (5001), and HAProxy would balance the TCP traffic between the two instances.

      Conclusion

      By configuring HAProxy as a TCP proxy, Jafer could efficiently manage and balance incoming traffic to their Flask application. This setup ensures scalability and reliability for any TCP-based service, not just HTTP-based ones.

      HAProxy EP 1: Traffic Police for Web

      In the world of web applications, imagine you’re running a very popular pizza place. Every evening, customers line up for a delicious slice of pizza. But if your single cashier can’t handle all the orders at once, customers might get frustrated and leave.

      What if you could have a system that ensures every customer gets served quickly and efficiently? Enter HAProxy, a tool that helps manage and balance the flow of web traffic so that no single server gets overwhelmed.

      Here’s a straightforward guide to understanding HAProxy, installing it, and setting it up to make your web application run smoothly.

      What is HAProxy?

      HAProxy stands for High Availability Proxy. It’s like a traffic director for your web traffic. It takes incoming requests (like people walking into your pizza place) and decides which server (or pizza station) should handle each request. This way, no single server gets too busy, and everything runs more efficiently.

      Why Use HAProxy?

      • Handles More Traffic: Distributes incoming traffic across multiple servers so no single one gets overloaded.
      • Increases Reliability: If one server fails, HAProxy directs traffic to the remaining servers.
      • Improves Performance: Ensures that users get faster responses because the load is spread out.

      Installing HAProxy

      Here’s how you can install HAProxy on a Linux system:

      1. Open a Terminal: You’ll need to access your command line interface to install HAProxy.
      2. Install HAProxy: Type the following command and hit enter
      
      sudo apt-get update
      sudo apt-get install haproxy
      

      3. Check Installation: Once installed, you can verify that HAProxy is running by typing

      
      sudo systemctl status haproxy
      

      This command shows you the current status of HAProxy, ensuring it’s up and running.

      Configuring HAProxy

      HAProxy’s configuration file is where you set up how it should handle incoming traffic. This file is usually located at /etc/haproxy/haproxy.cfg. Let’s break down the main parts of this configuration file,

      1. The global Section

      The global section is like setting the rules for the entire pizza place. It defines general settings for HAProxy itself, such as how it should operate, what kind of logging it should use, and what resources it needs. Here’s an example of what you might see in the global section

      
      global
          log /dev/log local0
          log /dev/log local1 notice
          chroot /var/lib/haproxy
          stats socket /run/haproxy/admin.sock mode 660
          user haproxy
          group haproxy
          daemon
      
      

      Let’s break it down line by line:

      • log /dev/log local0: This line tells HAProxy to send log messages to the system log at /dev/log and to use the local0 logging facility. Logs help you keep track of what’s happening with HAProxy.
      • log /dev/log local1 notice: Similar to the previous line, but it uses the local1 logging facility and sets the log level to notice, which is a type of log message indicating important events.
      • chroot /var/lib/haproxy: This line tells HAProxy to run in a restricted area of the file system (/var/lib/haproxy). It’s a security measure to limit access to the rest of the system.
      • stats socket /run/haproxy/admin.sock mode 660: This sets up a special socket (a kind of communication endpoint) for administrative commands. The mode 660 part defines the permissions for this socket, allowing specific users to manage HAProxy.
      • user haproxy: Specifies that HAProxy should run as the user haproxy. Running as a specific user helps with security.
      • group haproxy: Similar to the user directive, this specifies that HAProxy should run under the haproxy group.
      • daemon: This tells HAProxy to run as a background service, rather than tying up a terminal window.

      2. The defaults Section

      The defaults section sets up default settings for HAProxy’s operation and is like defining standard procedures for the pizza place. It applies default configurations to both the frontend and backend sections unless overridden. Here’s an example of a defaults section

      
      defaults
          log     global
          option  httplog
          option  dontlognull
          timeout connect 5000ms
          timeout client  50000ms
          timeout server  50000ms
      
      

      Here’s what each line means:

      • log global: Tells HAProxy to use the logging settings defined in the global section for logging.
      • option httplog: Enables HTTP-specific logging. This means HAProxy will log details about HTTP requests and responses, which helps with troubleshooting and monitoring.
      • option dontlognull: Prevents logging of connections that don’t generate any data (null connections). This keeps the logs cleaner and more relevant.
      • timeout connect 5000ms: Sets the maximum time HAProxy will wait when trying to connect to a backend server to 5000 milliseconds (5 seconds). If the connection takes longer, it will be aborted.
      • timeout client 50000ms: Defines the maximum time HAProxy will wait for data from the client to 50000 milliseconds (50 seconds). If the client doesn’t send data within this time, the connection will be closed.
      • timeout server 50000ms: Similar to timeout client, but it sets the maximum time to wait for data from the server to 50000 milliseconds (50 seconds).

      3. Frontend Section

      The frontend section defines how HAProxy listens for incoming requests. Think of it as the entrance to your pizza place.

      
      frontend http_front
          bind *:80
          default_backend http_back
      
      • frontend http_front: This is a name for your frontend configuration.
      • bind *:80: Tells HAProxy to listen for traffic on port 80 (the standard port for web traffic).
      • default_backend http_back: Specifies where the traffic should be sent (to the backend section).

      4. Backend Section

      The backend section describes where the traffic should be directed. Think of it as the different pizza stations where orders are processed.

      
      backend http_back
          balance roundrobin
          server app1 192.168.1.2:5000 check
          server app2 192.168.1.3:5000 check
          server app3 192.168.1.4:5000 check
      
      • backend http_back: This is a name for your backend configuration.
      • balance roundrobin: Distributes traffic evenly across servers.
      • server app1 192.168.1.2:5000 check: Specifies a server (app1) at IP address 192.168.1.2 on port 5000. The check option ensures HAProxy checks if the server is healthy before sending traffic to it.
      • server app2 and server app3: Additional servers to handle traffic.

      Testing Your Configuration

      After setting up your configuration, you’ll need to restart HAProxy to apply the changes:

      
      sudo systemctl restart haproxy
      

      To check if everything is working, you can use a web browser or a tool like curl to send requests to HAProxy and see if it correctly distributes them across your servers.

      The Search for the Perfect Media Server: A Journey of Discovery

      Dinesh, an avid movie collector and music lover, had a growing problem. His laptop was bursting at the seams with countless movies, albums, and family photos. Every time he wanted to watch a movie or listen to his carefully curated playlists, he had to sit at his laptop. And if he wanted to share something with his friends, it meant copying files onto USB drives or spending hours transferring them.

      One Saturday evening, after yet another struggle to connect his laptop to his smart TV via a mess of cables, Dinesh decided it was time for a change. He needed a solution that would let him access all his media from any device in his house – phone, tablet, and TV. He needed a media server.

      Dinesh fired up his browser and began his search: “How to stream media to all my devices.” He went through the results – Plex, Jellyfin, Emby… Each option seemed promising but felt too complex, requiring subscriptions or heavy installations.

      Frustrated, Dinesh thought, “There must be something simpler. I don’t need all the bells and whistles; I just want to access my files from anywhere in my house.” He refined his search: “lightweight media server for Linux.”

      There it was – MiniDLNA. Described as a simple, lightweight DLNA server that was easy to set up and perfect for home use, MiniDLNA (also known as ReadyMedia) seemed to be exactly what Dinesh needed.

      MiniDLNA (also known as ReadyMedia) is a lightweight, simple server for streaming media (like videos, music, and pictures) to devices on your network. It is compatible with various DLNA/UPnP (Digital Living Network Alliance/Universal Plug and Play) devices such as smart TVs, media players, gaming consoles, etc.

      How to Use MiniDLNA

      Here’s a step-by-step guide to setting up and using MiniDLNA on a Linux based system.

      1. Install MiniDLNA

      To get started, you need to install MiniDLNA. The installation steps can vary slightly depending on your operating system.

      For Debian/Ubuntu-based systems:

      sudo apt update
      sudo apt install minidlna
      

      For Red Hat/CentOS-based systems:

      First, enable the EPEL repository,

      sudo yum install epel-release
      

      Then, install MiniDLNA,

      sudo yum install minidlna
      

      2. Configure MiniDLNA

      Once installed, you need to configure MiniDLNA to tell it where to find your media files.

      a. Open the MiniDLNA configuration file in a text editor

      sudo nano /etc/minidlna.conf
      

      b. Configure the following parameters:

      • media_dir: Set this to the directories where your media files (music, pictures, and videos) are stored. You can specify different media types for each directory.
      media_dir=A,/path/to/music  # 'A' is for audio
      media_dir=V,/path/to/videos # 'V' is for video
      media_dir=P,/path/to/photos # 'P' is for pictures
      
      • db_dir=: The directory where the database and cache files are stored.
      db_dir=/var/cache/minidlna
      
      • log_dir=: The directory where log files are stored.
      log_dir=/var/log/minidlna
      
      • friendly_name=: The name of your media server. This will appear on your DLNA devices.
      friendly_name=Laptop SJ
      
      • notify_interval=: The interval in seconds that MiniDLNA will notify clients of its presence. The default is 900 (15 minutes).
      notify_interval=900
      

      c. Save and close the file (Ctrl + X, Y, Enter in Nano).

      3. Start the MiniDLNA Service

      After configuration, start the MiniDLNA service

      sudo systemctl start minidlna
      
      

      To enable it to start at boot,

      sudo systemctl enable minidlna
      

      4. Rescan Media Files

      To make MiniDLNA scan your media files and add them to its database, you can force a rescan with

      sudo minidlnad -R
      
      

      5. Access Your Media on DLNA/UPnP Devices

      Now, your MiniDLNA server should be up and running. You can access your media from any DLNA-compliant device on your network:

      • On your Smart TV, look for the “Media Server” or “DLNA” option in the input/source menu.
      • On a Windows PC, go to This PC or Network and find your DLNA server under “Media Devices.”
      • On Android, use a media player app like VLC or BubbleUPnP to find your server.

      6. Check Logs and Troubleshoot

      If you encounter any issues, you can check the logs for more information

      sudo tail -f /var/log/minidlna/minidlna.log
      
      

      To set up MiniDLNA for a single user:

      Disable the global daemon

      sudo service minidlna stop
      sudo update-rc.d minidlna disable
      
      

      Create the necessary local files and directories as a regular user and edit the configuration:

      mkdir -p ~/.minidlna/cache
      cd ~/.minidlna
      cp /etc/minidlna.conf .
      $EDITOR minidlna.conf
      

      Configure as you would globally above but these definitions need to be defined locally

      db_dir=/home/$USER/.minidlna/cache
      log_dir=/home/$USER/.minidlna 
      
      

      To start the daemon locally

      minidlnad -f /home/$USER/.minidlna/minidlna.conf -P /home/$USER/.minidlna/minidlna.pid
      

      To stop the local daemon

      xargs kill </home/$USER/.minidlna/minidlna.pid
      
      

      To rebuild the database,

      minidlnad -f /home/$USER/.minidlna/minidlna.conf -R
      

      For more info: https://help.ubuntu.com/community/MiniDLNA

      Additional Tips

      • Firewall Rules: Ensure that your firewall settings allow traffic on the MiniDLNA port (8200 by default) and UPnP (typically port 1900 for UDP).
      • Update Media Files: Whenever you add or remove files from your media directory, run minidlnad -R to update the database.
      • Multiple Media Directories: You can have multiple media_dir lines in your configuration if your media is spread across different folders.

      To set up MiniDLNA with VLC Media Player so you can stream content from your MiniDLNA server, follow these steps:

      Let’s see how to use this in VLC

      On a Computer

      1. Install VLC Media Player

      Make sure you have VLC Media Player installed on your device. If not, you can download it from the official VLC website.

      2. Open VLC Media Player

      Launch VLC Media Player on your computer.

      3. Open the UPnP/DLNA Network Stream

      1. Go to the “View” Menu:
        • On the VLC menu bar, click on View and then Playlist or press Ctrl + L (Windows/Linux) or Cmd + Shift + P (Mac).
      2. Locate Your DLNA Server:
        • In the left sidebar, you will see an option for Local Network.
        • Click on Universal Plug'n'Play or UPnP.
        • VLC will search for available DLNA/UPnP servers on your network.
      3. Select Your MiniDLNA Server:
        • After a few moments, your MiniDLNA server should appear under the UPnP section.
        • Click on your server name (e.g., My DLNA Server).
      4. Browse and Play Media:
        • You will see the folders you configured (e.g., Music, Videos, Pictures).
        • Navigate through the folders and double-click on a media file to start streaming.

      4. Alternative Method: Open Network Stream

      If you know the IP address of your MiniDLNA server, you can connect directly:

      1. Open Network Stream:
        • Click on Media in the menu bar and select Open Network Stream... or press Ctrl + N (Windows/Linux) or Cmd + N (Mac).
      2. Enter the URL:
        • Enter the URL of your MiniDLNA server in the format http://[Server IP]:8200.
        • Example: http://192.168.1.100:8200.
      3. Click “Play”:
        • Click on the Play button to start streaming from your MiniDLNA server.

      5. Tips for Better Streaming Experience

      • Ensure the Server is Running: Make sure the MiniDLNA server is running and the media files are correctly indexed.
      • Network Stability: A stable local network connection is necessary for smooth streaming. Use a wired connection if possible or ensure a strong Wi-Fi signal.
      • Firewall Settings: Ensure that the firewall on your server allows traffic on port 8200 (or the port specified in your MiniDLNA configuration).

      On Android

      To set up and stream content from MiniDLNA using an Android app, you will need a DLNA/UPnP client app that can discover and stream media from DLNA servers. Several apps are available for this purpose, such as VLC for Android, BubbleUPnP, Kodi, and others. Here’s how to use VLC for Android and BubbleUPnP, two popular choices

      Using VLC for Android

      1. Install VLC for Android:
      2. Open VLC for Android:
        • Launch the VLC app on your Android device.
      3. Access the Local Network:
        • Tap on the menu button (three horizontal lines) in the upper-left corner of the screen.
        • Select Local Network from the sidebar menu.
      4. Find Your MiniDLNA Server:
        • VLC will automatically search for DLNA/UPnP servers on your local network. After a few moments, your MiniDLNA server should appear in the list.
        • Tap on the name of your MiniDLNA server (e.g., My DLNA Server).
      5. Browse and Play Media:
        • You will see your media folders (e.g., Music, Videos, Pictures) as configured in your MiniDLNA setup.
        • Navigate to the desired folder and tap on any media file to start streaming.

      Additional Tips

      • Ensure MiniDLNA is Running: Make sure your MiniDLNA server is properly configured and running on your local network.
      • Check Network Connection: Ensure your Android device is connected to the same local network (Wi-Fi) as the MiniDLNA server.
      • Firewall Settings: If you are not seeing the MiniDLNA server in your app, ensure that the server’s firewall settings allow DLNA/UPnP traffic.

      Some Problems You May Face

      1. minidlna.service: Main process exited, code=exited, status=255/EXCEPTION - check the logs. Mostly it’s due to an instance already running on port 8200. Kill that process and reload the database: lsof -i :8200 will give the PID, and `kill -9 <PID>` will kill the process.
      2. If the media files are not refreshing, try minidlnad -f /home/$USER/.minidlna/minidlna.conf -R or `sudo minidlnad -R`.

      Different Database Models

      Database models define the structure, relationships, and operations that can be performed on a database. Different database models are used based on the specific needs of an application or organization. Here are the most common types of database models:

      1. Hierarchical Database Model

      • Structure: Data is organized in a tree-like structure with a single root, where each record has a single parent but can have multiple children.
      • Usage: Best for applications with a clear hierarchical relationship, like organizational structures or file systems.
      • Example: IBM’s Information Management System (IMS).
      • Advantages: Fast access to data through parent-child relationships.
      • Disadvantages: Rigid structure; difficult to reorganize or restructure.

      2. Network Database Model

      • Structure: Data is organized in a graph structure, where each record can have multiple parent and child records, forming a network of relationships.
      • Usage: Useful for complex relationships, such as in telecommunications or transportation networks.
      • Example: Integrated Data Store (IDS).
      • Advantages: Flexible representation of complex relationships.
      • Disadvantages: Complex design and navigation; can be difficult to maintain.

      3. Relational Database Model

      • Structure: Data is organized into tables (relations) where each table consists of rows (records) and columns (fields). Relationships between tables are managed through keys.
      • Usage: Widely used in various applications, including finance, retail, and enterprise software.
      • Example: MySQL, PostgreSQL, Oracle Database, Microsoft SQL Server.
      • Advantages: Simplicity, data integrity, flexibility in querying through SQL.
      • Disadvantages: Can be slower for very large datasets or highly complex queries.

      4. Object-Oriented Database Model

      • Structure: Data is stored as objects, similar to objects in object-oriented programming. Each object contains both data and methods for processing the data.
      • Usage: Suitable for applications that require the modeling of complex data and relationships, such as CAD, CAM, and multimedia databases.
      • Example: db4o, ObjectDB.
      • Advantages: Seamless integration with object-oriented programming languages, reusability of objects.
      • Disadvantages: Complexity, not as widely adopted as relational databases.

      5. Document-Oriented Database Model

      • Structure: Data is stored in document collections, with each document being a self-contained piece of data often in JSON, BSON, or XML format.
      • Usage: Ideal for content management systems, real-time analytics, and big data applications.
      • Example: MongoDB, CouchDB.
      • Advantages: Flexible schema design, scalability, ease of storing hierarchical data.
      • Disadvantages: May require denormalization, leading to potential data redundancy.
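
      A tiny sketch of the document model in code, assuming a local MongoDB instance and the pymongo driver; the database and field names are invented for the example:

      # Document-store sketch: each document is self-contained and can nest data.
      from pymongo import MongoClient

      client = MongoClient("mongodb://localhost:27017")
      posts = client["blog"]["posts"]

      posts.insert_one({"title": "Lulu Mall", "tags": ["business", "travel"],
                        "author": {"name": "Krishna"}})
      print(posts.find_one({"tags": "travel"}))  # matches documents whose tags array contains "travel"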

      6. Key-Value Database Model

      • Structure: Data is stored as key-value pairs, where each key is unique, and the value can be a string, number, or more complex data structure.
      • Usage: Best for applications requiring fast access to simple data, such as caching, session management, and real-time analytics.
      • Example: Redis, DynamoDB, Riak.
      • Advantages: High performance, simplicity, scalability.
      • Disadvantages: Limited querying capabilities, lack of complex relationships.
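
      A quick key-value sketch, assuming a local Redis server and the redis-py client; the key name and expiry are arbitrary:

      # Key-value sketch: cache a session value with an expiry, then read it back.
      import redis

      r = redis.Redis(host="localhost", port=6379, decode_responses=True)
      r.set("session:42", "user_id=7", ex=3600)  # expires after one hour
      print(r.get("session:42"))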

      7. Column-Family Database Model

      • Structure: Data is stored in columns rather than rows, with each column family containing a set of columns that are logically related.
      • Usage: Suitable for distributed databases, handling large volumes of data across multiple servers.
      • Example: Apache Cassandra, HBase.
      • Advantages: High write and read performance, efficient storage of sparse data.
      • Disadvantages: Complexity in design and maintenance, not as flexible for ad-hoc queries.

      8. Graph Database Model

      • Structure: Data is stored as nodes (entities) and edges (relationships) forming a graph. Each node represents an object, and edges represent the relationships between objects.
      • Usage: Ideal for social networks, recommendation engines, fraud detection, and any scenario where relationships between entities are crucial.
      • Example: Neo4j, Amazon Neptune.
      • Advantages: Efficient traversal and querying of complex relationships, flexible schema.
      • Disadvantages: Not as efficient for operations on large sets of unrelated data.

      9. Multimodel Database

      • Structure: Supports multiple data models (e.g., relational, document, graph) within a single database engine.
      • Usage: Useful for applications that require different types of data storage and querying mechanisms.
      • Example: ArangoDB, Microsoft Azure Cosmos DB.
      • Advantages: Flexibility, ability to handle diverse data requirements within a single system.
      • Disadvantages: Complexity in management and optimization.

      10. Time-Series Database Model

      • Structure: Specifically designed to handle time-series data, where each record is associated with a timestamp.
      • Usage: Best for applications like monitoring, logging, and real-time analytics where data changes over time.
      • Example: InfluxDB, TimescaleDB.
      • Advantages: Optimized for handling and querying large volumes of time-stamped data.
      • Disadvantages: Limited use cases outside of time-series data.

      11. NoSQL Database Model

      • Structure: An umbrella term for various non-relational database models, including key-value, document, column-family, and graph databases.
      • Usage: Ideal for handling unstructured or semi-structured data, and scenarios requiring high scalability and flexibility.
      • Example: MongoDB, Cassandra, Couchbase, Neo4j.
      • Advantages: Flexibility, scalability, high performance for specific use cases.
      • Disadvantages: Lack of standardization, potential data consistency challenges.

      Summary

      Each database model serves different purposes, and the choice of model depends on the specific requirements of the application, such as data structure, relationships, performance needs, and scalability. While relational databases are still the most widely used, NoSQL and specialized databases have become increasingly important for handling diverse data types and large-scale applications.

      Tool: Serial Activity – Remote SSH Manager

      Why was this tool created?

      During our college times, we had a crash course on Machine Learning. Our coordinators had arranged for an ML engineer to take classes for 3 days. He insisted that we install packages to get hands-on experience. Unfortunately, many of us were not sure how to install the packages, so we needed a way to install all the necessary packages on every machine.

      All the machines shared one specific user account with the same password, so if we could automate the setup on one machine, it would be easy to repeat for the rest (just a for-loop iterating over x.0.0.1 to x.0.0.255). This is the birthplace of this tool.

      Code:

      #!/usr/bin/env python
      import sys
      import os.path
      from multiprocessing.pool import ThreadPool
      
      import paramiko
      
      BASE_ADDRESS = "192.168.7."
      USERNAME = "t1"
      PASSWORD = "uni1"
      
      
      def create_client(hostname):
          """Create a SSH connection to a given hostname."""
          ssh_client = paramiko.SSHClient()
          ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
          ssh_client.connect(hostname=hostname, username=USERNAME, password=PASSWORD)
          ssh_client.invoke_shell()
          return ssh_client
      
      
      def kill_computer(ssh_client):
          """Power off a computer."""
          ssh_client.exec_command("poweroff")
      
      
      def install_python_modules(ssh_client):
          """Install the programs specified in requirements.txt"""
          ftp_client = ssh_client.open_sftp()
      
          # Move over get-pip.py
          local_getpip = os.path.expanduser("~/lab_freak/get-pip.py")
          remote_getpip = "/home/%s/Documents/get-pip.py" % USERNAME
          ftp_client.put(local_getpip, remote_getpip)
      
          # Move over requirements.txt
          local_requirements = os.path.expanduser("~/lab_freak/requirements.txt")
          remote_requirements = "/home/%s/Documents/requirements.txt" % USERNAME
          ftp_client.put(local_requirements, remote_requirements)
      
          ftp_client.close()
      
          # Install pip and the desired modules.
          ssh_client.exec_command("python %s --user" % remote_getpip)
          ssh_client.exec_command("python -m pip install --user -r %s" % remote_requirements)
      
      
      def worker(action, hostname):
          try:
              ssh_client = create_client(hostname)
      
              if action == "kill":
                  kill_computer(ssh_client)
              elif action == "install":
                  install_python_modules(ssh_client)
              else:
                  raise ValueError("Unknown action %r" % action)
          except BaseException as e:
              print("Running the payload on %r failed with %r" % (hostname, action))
      
      
      def main():
          if len(sys.argv) < 2:
              print("USAGE: python kill.py ACTION")
              sys.exit(1)
      
          hostnames = [str(BASE_ADDRESS) + str(i) for i in range(30, 60)]
      
          with ThreadPool() as pool:
              pool.map(lambda hostname: worker(sys.argv[1], hostname), hostnames)
      
      
      if __name__ == "__main__":
          main()
      
      
      

      Security Incident : Code Smells – Not Replaced Constants

      The Secure Boot Case Study

      Attackers can break through the Secure Boot process on millions of computers using Intel and ARM processors due to a leaked cryptographic key that many manufacturers used during the startup process. This key, called the Platform Key (PK), is meant to verify the authenticity of a device’s firmware and boot software.

      Unfortunately, this key was leaked back in 2018. It seems that some manufacturers used this key in their devices instead of replacing it with a secure one, as was intended. As a result, millions of devices from brands like Lenovo, HP, Asus, and SuperMicro are vulnerable to attacks.

      If an attacker has access to this leaked key, they can easily bypass Secure Boot, allowing them to install malicious software that can take control of the device. To fix this problem, manufacturers need to replace the compromised key and update the firmware on affected devices. Some have already started doing this, but it might take time for all devices to be updated, especially those in critical systems.

      The problem is serious because the leaked key is like a master key that can unlock many devices. This issue highlights poor cryptographic key management practices, which have been a problem for many years.

      What Are “Not Replaced Constants”?

      In software, constants are values that are not meant to change during the execution of a program. They are often used to define configuration settings, cryptographic keys, and other critical values.

      When these constants are hard-coded into a system and not updated or replaced when necessary, they become a code smell known as “Not Replaced Constants.”

      Why Are They a Problem?

      When constants are not replaced or updated:

      1. Security Risks: Outdated or exposed constants, such as cryptographic keys, can become security vulnerabilities. If these constants are publicly leaked or discovered by attackers, they can be exploited to gain unauthorized access or control over a system.
      2. Maintainability Issues: Hard-coded constants can make a codebase less maintainable. Changes to these values require code modifications, which can be error-prone and time-consuming.
      3. Flexibility Limitations: Systems with hard-coded constants lack flexibility, making it difficult to adapt to new requirements or configurations without altering the source code.

      The Secure Boot Case Study

      The recent Secure Boot vulnerability is a perfect example of the dangers posed by “Not Replaced Constants.” Here’s a breakdown of what happened:

      The Vulnerability

      Researchers discovered that a cryptographic key used in the Secure Boot process of millions of devices was leaked publicly. This key, known as the Platform Key (PK), serves as the root of trust during the Secure Boot process, verifying the authenticity of a device’s firmware and boot software.

      What Went Wrong

      The leaked PK was originally intended as a test key by American Megatrends International (AMI). However, it was not replaced by some manufacturers when producing devices for the market. As a result, the same compromised key was used across millions of devices, leaving them vulnerable to attacks.

      The Consequences

      Attackers with access to the leaked key can bypass Secure Boot protections, allowing them to install persistent malware and gain control over affected devices. This vulnerability highlights the critical importance of replacing test keys and securely managing cryptographic constants.

      Sample Code:

      Wrong

      def generate_pk() -> str:
          return "DO NOT TRUST"
      
      # Vendor forgets to replace PK
      def use_default_pk() -> str:
          pk = generate_pk()
          return pk  # "DO NOT TRUST" PK used in production
      
      
      

      Right

      def generate_pk() -> str:
          # The documentation tells vendors to replace this value
          return "DO NOT TRUST"
      
      def use_default_pk() -> str:
          pk = generate_pk()
      
          if pk == "DO NOT TRUST":
              raise ValueError("Error: PK must be replaced before use.")
      
          return pk  # Valid PK used in production
      
      

      Ignoring important security steps, like changing default keys, can create big security holes. This ongoing problem shows how important it is to follow security procedures carefully. Instead of just relying on written instructions, make sure to test everything thoroughly to ensure it works as expected.
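
One way to do that is an automated check rather than a written instruction. A minimal sketch of a pytest-style test, assuming the generate_pk() function from the sample code above is importable:

# Minimal sketch: fail the test suite while the placeholder test key is still in place
def test_pk_has_been_replaced():
    assert generate_pk() != "DO NOT TRUST", "Replace the test Platform Key before shipping"

If a vendor forgets to swap in a real Platform Key, a check like this keeps the build red until it is fixed.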

ntfy.sh – To save you from unnoticed events

      Alex Pandian was the system administrator for a tech company, responsible for managing servers, maintaining network stability, and ensuring that everything ran smoothly.

      With many scripts running daily and long-running processes that needed monitoring, Alex was constantly flooded with notifications.

Alex Pandian: “Every day, I have to go through dozens of emails and alerts just to find the ones that matter,”

      Alex muttered while sipping coffee in the server room.

      Alex Pandian: “There must be a better way to streamline all this information.”

      Despite using several monitoring tools, the notifications from these systems were scattered and overwhelming. Alex needed a more efficient method to receive alerts only when crucial events occurred, such as script failures or the completion of resource-intensive tasks.

      Determined to find a better system, Alex began searching online for a tool that could help consolidate and manage notifications.

      After reading through countless forums and reviews, Alex stumbled upon a discussion about ntfy.sh, a service praised for its simplicity and flexibility.

      “This looks promising,” Alex thought, excited by the ability to publish and subscribe to notifications using a straightforward, topic-based system. The idea of having notifications sent directly to a phone or desktop without needing complex configurations was exactly what Alex was looking for.

      Alex decided to consult with Sam, a fellow system admin known for their expertise in automation and monitoring.

      Alex Pandian: “Hey Sam, have you ever used ntfy.sh?”

Sam: “Absolutely. It’s a lifesaver for managing notifications. How do you plan to use it?”

Alex Pandian: “I’m thinking of using it for real-time alerts on script failures and long-running commands. Can you show me how it works?”

      Sam: “Of course,”

Sam replied with a smile, eager to guide Alex through setting up ntfy.sh to improve workflow efficiency.

      Together, Sam and Alex began configuring ntfy.sh for Alex’s environment. They focused on setting up topics and integrating them with existing systems to ensure that important notifications were delivered promptly.

      Step 1: Identifying Key Topics

      Alex identified the main areas where notifications were needed:

      • script-failures: To receive alerts whenever a script failed.
      • command-completions: To notify when long-running commands finished.
      • server-health: For critical server health alerts.

      Step 2: Subscribing to Topics

      Sam showed Alex how to subscribe to these topics using ntfy.sh on a mobile device and desktop. This ensured that Alex would receive notifications wherever they were, without having to constantly check email or dashboards.

      
      # Subscribe to topics
      ntfy subscribe script-failures
      ntfy subscribe command-completions
      ntfy subscribe server-health
      
      

      Step 3: Automating Notifications

      Sam explained how to use bash scripts and curl to send notifications to ntfy.sh whenever specific events occurred.

      “For example, if a script fails, you can automatically send an alert to the ‘script-failures’ topic,” Sam demonstrated.

      
      # Notify on script failure
      ./backup-script.sh || curl -d "Backup script failed!" ntfy.sh/script-failures
      
      

      Alex was impressed by the simplicity and efficiency of this approach. “I can automate all of this?” Alex asked.

      “Definitely,” Sam replied. “You can integrate it with cron jobs, monitoring tools, and more. It’s a great way to keep track of important events without getting bogged down by noise.”
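
Since every publish is just an HTTP POST, the same alerts can also be sent from a Python script instead of curl. A minimal sketch using the requests library and the script-failures topic from above:

import requests

# Minimal sketch: publish a high-priority alert to the script-failures topic
requests.post(
    "https://ntfy.sh/script-failures",
    data="Backup script failed!".encode("utf-8"),
    headers={"Title": "Backup alert", "Priority": "high", "Tags": "warning"},
)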

      With the basics in place, Alex began applying ntfy.sh to various real-world scenarios, streamlining the notification process and improving overall efficiency.

      Monitoring Script Failures

      Alex set up automated alerts for critical scripts that ran daily, ensuring that any failures were immediately reported. This allowed Alex to address issues quickly, minimizing downtime and improving system reliability.

      
      # Notify on critical script failure
      ./critical-task.sh || curl -d "Critical task script failed!" ntfy.sh/script-failures
      
      

      Tracking Long-Running Commands

      Whenever Alex initiated a long-running command, such as a server backup or data migration, notifications were sent upon completion. This enabled Alex to focus on other tasks without constantly checking on progress.

      
      # Notify on long-running command completion
      long-command && curl -d "Long command completed successfully." ntfy.sh/command-completions
      
      

      Server Health Alerts

      To monitor server health, Alex integrated ntfy.sh with existing monitoring tools, ensuring that any critical issues were immediately flagged.

      
      # Send server health alert
      curl -d "Server CPU usage is critically high!" ntfy.sh/server-health
      

      As with any new tool, there were challenges to overcome. Alex encountered a few hurdles, but with Sam’s guidance, these were quickly resolved.

      Challenge: Managing Multiple Notifications

      Initially, Alex found it challenging to manage multiple notifications and ensure that only critical alerts were prioritized. Sam suggested using filters and priorities to focus on the most important messages.

      
# Subscribe with a server-side filter so only high-priority alerts come through
curl -s "ntfy.sh/script-failures/json?priority=high"
      
      

      Challenge: Scheduling Notifications

Alex wanted to schedule notifications for regular maintenance tasks and reminders. Sam introduced Alex to using cron for scheduling automated alerts.

# Schedule a weekly maintenance reminder with cron (add via crontab -e):
# every Saturday at 08:00, publish a reminder to the server-health topic
0 8 * * 6 curl -d "Time for weekly server maintenance." ntfy.sh/server-health
      
      
      

Sam gave some more examples to Alex.

      Monitoring disk space

      As a system administrator, you can use ntfy.sh to receive alerts when disk space usage reaches a critical level. This helps prevent issues related to insufficient disk space.

      
      # Check disk space and notify if usage is over 80%
      disk_usage=$(df / | grep / | awk '{ print $5 }' | sed 's/%//g')
      if [ $disk_usage -gt 80 ]; then
        curl -d "Warning: Disk space usage is at ${disk_usage}%." ntfy.sh/disk-space
      fi
      
      

      Alerting on Website Downtime

      You can use ntfy.sh to monitor the status of a website and receive notifications if it goes down.

      
      # Check website status and notify if it's down
      website="https://example.com"
      status_code=$(curl -o /dev/null -s -w "%{http_code}\n" $website)
      
      if [ $status_code -ne 200 ]; then
        curl -d "Alert: $website is down! Status code: $status_code." ntfy.sh/website-monitor
      fi
      
      

      Reminding for Daily Tasks

      You can set up ntfy.sh to send you daily reminders for important tasks, ensuring that you stay on top of your schedule.

      
# Schedule daily reminders with cron (add via crontab -e)
0 9 * * * curl -d "Time to review your daily tasks!" ntfy.sh/daily-reminders
50 9 * * * curl -d "Stand-up meeting at 10:00 AM." ntfy.sh/daily-reminders
      
      

      Alerting on High System Load

      Monitor system load and receive notifications when it exceeds a certain threshold, allowing you to take action before it impacts performance.

# Check the 1-minute load average and notify if it's high
load=$(cut -d ' ' -f1 /proc/loadavg)
      threshold=2.0
      
      if (( $(echo "$load > $threshold" | bc -l) )); then
        curl -d "Warning: System load is high: $load" ntfy.sh/system-load
      fi
      
      

      Notify on Backup Completion

      Receive a notification when a backup process completes, allowing you to verify its success.

      # Notify on backup completion
      backup_command="/path/to/backup_script.sh"
      $backup_command && curl -d "Backup completed successfully." ntfy.sh/backup-status || curl -d "Backup failed!" ntfy.sh/backup-status
      
      

      Notifying on Container Events with Docker

      Integrate ntfy.sh with Docker to send alerts for specific container events, such as when a container stops unexpectedly.

      
      # Notify on Docker container stop event
      container_name="my_app"
      container_status=$(docker inspect -f '{{.State.Status}}' $container_name)
      
      if [ "$container_status" != "running" ]; then
        curl -d "Alert: Docker container $container_name has stopped." ntfy.sh/docker-alerts
      fi
      
      

      Integrating with CI/CD Pipelines

      Use ntfy.sh to notify you about the status of CI/CD pipeline stages, ensuring you stay informed about build successes or failures.

      
      # Example GitLab CI/CD YAML snippet
      stages:
        - build
      
      build_job:
        stage: build
        script:
          - make build
        after_script:
          - if [ "$CI_JOB_STATUS" == "success" ]; then
              curl -d "Build succeeded for commit $CI_COMMIT_SHORT_SHA." ntfy.sh/ci-cd-status;
            else
              curl -d "Build failed for commit $CI_COMMIT_SHORT_SHA." ntfy.sh/ci-cd-status;
            fi
      
      

Notification on SSH login to a server

Let’s try it with Docker:

      
      FROM ubuntu:16.04
      RUN apt-get update && apt-get install -y openssh-server
      RUN mkdir /var/run/sshd
# Set a root password for SSH access (replace 'password' below with your own)
      RUN echo 'root:password' | chpasswd
      RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
      RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
      COPY ntfy-ssh.sh /usr/bin/ntfy-ssh.sh
      RUN chmod +x /usr/bin/ntfy-ssh.sh
      RUN echo "session optional pam_exec.so /usr/bin/ntfy-ssh.sh" >> /etc/pam.d/sshd
      RUN apt-get -y update; apt-get -y install curl
      EXPOSE 22
      CMD ["/usr/sbin/sshd", "-D"]
      

The script to send the notification:

      
      #!/bin/bash
      if [ "${PAM_TYPE}" = "open_session" ]; then
        echo "here"
        curl \
          -H prio:high \
          -H tags:warning \
          -d "SSH login: ${PAM_USER} from ${PAM_RHOST}" \
          ntfy.sh/syed-alerts
      fi
      

      With ntfy.sh as an integral part of daily operations, Alex found a renewed sense of balance and control. The once overwhelming chaos of notifications was now a manageable stream of valuable information.

      As Alex reflected on the journey, it was clear that ntfy.sh had transformed not just the way notifications were managed, but also the overall approach to system administration.

      In a world full of noise, ntfy.sh had provided a clear and effective way to stay informed without distractions. For Alex, it was more than just a tool—it was a new way of managing systems efficiently.

      Operators, Conditionals and Inputs

      Operators

      Operators are symbols that tell the computer to perform specific mathematical or logical operations.

1. Arithmetic Operators

      These operators perform basic mathematical operations like addition, subtraction, multiplication, and division.

*Addition (+): Adds two numbers.
eg:

>>> print(1+3)
4
      

      *Subtraction (-): Subtracts one number from another.
      eg:

>>> print(1-3)
-2
      

*Multiplication (*): Multiplies two numbers.
eg:

>>> print(1*3)
3
      

      *Division (/): Divides one number by another.
      eg:

>>> print(1/3)
0.3333333333333333
      

      *Floor Division (//): Divides one number by another and rounds down to the nearest whole number.
      eg:

>>> print(1//3)
0
      

      *Modulus (%): Returns the remainder when one number is divided by another.
      eg:

>>> print(1%3)
1
      

*Exponentiation (**): Raises one number to the power of another.
eg:

>>> print(1**3)
1
      

2. Comparison Operators

      These operators compare two values and return either True or False.

      *Equal to (==): Checks if two values are equal.

>>> a = 5
>>> b = 3
>>> result = (a == b)
>>> result
False
      

      *Not equal to (!=): Checks if two values are not equal.

>>> a = 5
>>> b = 3
>>> result = (a != b)
>>> result
True
      

      *Greater than (>): Checks if one value is greater than another.

>>> a = 5
>>> b = 3
>>> result = (a > b)
>>> result
True
      

      *Less than (<): Checks if one value is less than another.

>>> a = 5
>>> b = 3
>>> result = (a < b)
>>> result
False
      

      *Greater than or equal to (>=): Checks if one value is greater than or equal to another.

>>> a = 5
>>> b = 3
>>> result = (a >= b)
>>> result
True
      

*Less than or equal to (<=): Checks if one value is less than or equal to another.

>>> a = 5
>>> b = 3
>>> result = (a <= b)
>>> result
False
      

3. Logical Operators

      These operators are used to combine conditional statements.

      *and: Returns True if both statements are true.

>>> a = 5
>>> b = 3
>>> result = (a > b and a > 0)
>>> result
True
      

      *or: Returns True if one of the statements is true.

>>> a = 5
>>> b = 3
>>> result = (a > b or a < 0)
>>> result
True
      

      *not: Reverses the result, returns False if the result is true.

>>> a = 5
>>> result = not (a > 0)
>>> result
False
      

      Conditionals

      Conditionals are like traffic signals for your code. They help your program decide which path to take based on certain conditions.

      1. The if Statement

      The if statement checks a condition and executes the code block if the condition is True.
      eg:

>>> a = 5
>>> b = 3
>>> if a > b:
...     print("a is greater than b")
...
a is greater than b
      

      2. The elif Statement

      The elif statement is short for “else if”. It checks another condition if the previous if condition was False.
      eg:

>>> a = 5
>>> b = 5
>>> if a > b:
...     print("a is greater than b")
... elif a == b:
...     print("a is equal to b")
...
a is equal to b
      

      3. The else Statement

      The else statement catches anything that isn’t caught by the preceding conditions.
      eg:

>>> a = 3
>>> b = 5
>>> if a > b:
...     print("a is greater than b")
... elif a == b:
...     print("a is equal to b")
... else:
...     print("a is less than b")
...
a is less than b
      

      Exploring TAPAS: Analyzing Clinical Trial Data with Transformers

      Introduction:

Welcome to the world of Transformers, where cutting-edge natural language processing models are changing the way we interact with data. In this series of blogs, I will embark on a journey to explore and understand the capabilities of TAPAS, a Transformer model from Google designed to answer questions over tabular data and extract valuable insights from it. To kick things off, I'll delve into the basics of TAPAS and see it in action on a real-world dataset.

      Understanding TAPAS:

TAPAS is a powerful language model developed by Google that specializes in processing tabular data. Unlike traditional models, TAPAS handles structured data natively, making it a game-changer for tasks involving tables and spreadsheets. Like BERT, on which it is built, TAPAS works with a maximum input length of 512 tokens, so larger tables have to be truncated or sampled before they are fed to the model.
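
Before diving into the full dataset, here is a minimal sketch of TAPAS answering a question over a tiny hand-made table via the Hugging Face table-question-answering pipeline. The table below is made up for illustration, and every cell value must be a string for the TAPAS tokenizer:

from transformers import pipeline
import pandas as pd

# Toy table with made-up study IDs; TAPAS expects every cell as a string
table = pd.DataFrame({
    "id": ["NCT001", "NCT002", "NCT003"],
    "label": ["Male", "Female", "All"],
})

tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-wtq")
result = tqa(table=table, query="How many 'Male' only gender studies are in total?")
print(result["answer"])  # the predicted aggregation and cells, e.g. "COUNT > Male"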

      My Dataset:

For this introductory exploration, I will work with a clinical trial dataset [ClinicalTrials.gov]. To start, I load the dataset and create a data frame containing the "id" and "label" columns. The "label" column contains information about gender distribution in clinical trials. I'll be using this data to ask questions and obtain insights.

from transformers import TapasTokenizer, TapasForQuestionAnswering
      import pandas as pd
      import datasets
      
      # Load the dataset (only once)
      dataset = datasets.load_dataset("Kira-Asimov/gender_clinical_trial")
      
      # Create the clinical_trials_data DataFrame with just the "label" column (only once)
      clinical_trials_data = pd.DataFrame({
          "id": dataset["train"]["id"],
          "label": dataset["train"]["label"],
      })
      
# Keep only the first 100 rows so the table stays small enough for TAPAS
clinical_trials_data = clinical_trials_data.head(100)
      
      
      

      Asking Questions with TAPAS:

      The magic of TAPAS begins when I start asking questions about our data. In this example, I want to know how many records are in the dataset and how many of them are gender-specific (Male and Female). I construct queries like:

      "How many records are in total?"
      "How many 'Male' only gender studies are in total?"
      "How many 'Female' only gender studies are in total?"

      Using TAPAS to Answer Questions:

      I utilize the "google/tapas-base-finetuned-wtq" model and its associated tokenizer to process our questions and tabular data. TAPAS tokenizes the data, extracts answers, and even performs aggregations when necessary.

      counts = {}
      answers = []
      
def TAPAS_model_learning(clinical_trials_data):
    # Load the model and tokenizer once, outside the query loop
    model_name = "google/tapas-base-finetuned-wtq"
    model = TapasForQuestionAnswering.from_pretrained(model_name)
    tokenizer = TapasTokenizer.from_pretrained(model_name)

    queries = [
        "How many records are in total ?",
        "How many 'Male' only gender studies are in total ?",
        "How many 'Female' only gender studies are in total ?",
    ]

    for query in queries:
        # Tokenize the query and table
        inputs = tokenizer(table=clinical_trials_data, queries=query, padding="max_length", return_tensors="pt", truncation=True)

        # Get the model's output
        outputs = model(**inputs)
        predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
            inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
        )

        # Initialize variables to store answers for the current query
        current_answers = []

        # Count the number of cells in the answer coordinates
        count = 0
        for coordinates in predicted_answer_coordinates:
            count += len(coordinates)
            # Collect the cell values for the current answer
            cell_values = []
            for coordinate in coordinates:
                cell_values.append(clinical_trials_data.iat[coordinate])

            current_answers.append(", ".join(cell_values))

        # Check if there are no matching cells for the query
        if count == 0:
            current_answers = ["No matching cells"]
        counts[query] = count
        answers.append(current_answers)

    return counts, answers
      

      Evaluating TAPAS Performance:

      Now, let's see how well TAPAS performs in answering our questions. I have expected answers for each question variation, and I calculate the error percentage to assess the model's accuracy.

      # Prepare your variations of the same question and their expected answers
      question_variations = {
          "How many records are in total ?": 100,
          "How many 'Male' only gender studies are in total ?": 3,
          "How many 'Female' only gender studies are in total ?":9,
      }
      
      
      
      # Use TAPAS to predict the answer based on your tabular data and the question
      predicted_count,predicted_answer = TAPAS_model_learning(clinical_trials_data)
      print(predicted_count)
# Compare each predicted count against the expected answer
for key, value in predicted_count.items():
    error = question_variations[key] - value

    # Calculate the error percentage
    error_percentage = (error / question_variations[key]) * 100

    # Print the results
    print(f"{key}: Model Value: {value}, Expected Value: {question_variations[key]}, Error Percentage: {error_percentage:.2f}%")
      
      

      Results and Insights:

      The output reveals how TAPAS handled our queries:

      For the question "How many records are in total?", TAPAS predicted 69 records, with an error percentage of 31.00% compared to the expected value of 100 records.

      For the question "How many 'Male' only gender studies are in total?", TAPAS correctly predicted 3 records, with a perfect match to the expected value.

      For the question "How many 'Female' only gender studies are in total?", TAPAS predicted 2 records, with a significant error percentage of 77.78% compared to the expected value of 9 records.

      Conclusion and Future Exploration:

      In this first blog of our TAPAS exploration series, I introduced you to the model's capabilities and showcased its performance on a real dataset. I observed both accurate and less accurate predictions, highlighting the importance of understanding and fine-tuning the model for specific tasks.

      In our future blogs, I will delve deeper into TAPAS, exploring its architecture, fine-tuning techniques, and strategies for improving its accuracy on tabular data. Stay tuned as I unlock the full potential of TAPAS for data analysis and insights.

      Basic Linux Commands-ls

Here are some of the basic commands we will use most often in the beginning.

      pwd

pwd stands for “print working directory”. We can also read it as “present working directory”. This command shows the current working directory of the user.

      ls

ls stands for “list”. This command displays the files and folders present in the current directory.

      ls -l

This command displays the files and folders in the current directory as a long listing that shows each entry’s permissions, owner, size, and last-modified date.

      ls -a
      ls -all

“ls -a” ==> Displays hidden files (names that start with a dot) along with the rest of the current directory.
Whereas “ls -all” (which the shell reads as -a -l -l, the same as ls -al) displays all files, including hidden ones, in the long format with permissions, owner, size, and last-modified date.

      ls -h

-h ==> stands for “human-readable”. Used together with -l (i.e. ls -lh), file sizes are shown in units that are easy to read. Let’s see an example:
28641 Jan 22 16:20 Flipkart_TestCase.pdf
This file has a size of 28641 bytes. If we use ‘-h’, the same entry is displayed as
28K Jan 22 16:20 Flipkart_TestCase.pdf
Here, 28641 is changed into 28K (28 kilobytes).

      ls -S
      ls -s

-S ==> “Sort by size”. Sorts the listing by file size, largest first.
-s ==> “size”. Shows the allocated size of each file and folder in blocks.

      ls -R
      ls -r

-R ==> “Recursive”. Lists the current directory together with every sub-folder and its files.
-r ==> “Reverse”. Displays the files and folders in the reverse of the usual sort order.

      ls -t

-t ==> “sort by time”. Sorts the files by modification time, newest first.

      ls -b

This option escapes the non-graphic characters in the names displayed, printing C-style backslash escapes instead. Let’s see an example:
'Frameworks Challenges-pdf'
Frameworks\ Challenges-pdf

Without -b, a name containing spaces is shown inside quotes; with -b, the quotes are dropped and each space is escaped with ‘ \ ’.

      man
      man ls

The ls command has many more options than the ones listed here.
We can refer to the manual that Linux provides by default. To access the manual, use these commands in the terminal:

man ==> For accessing the manual.
man ls ==> Gets the manual page for the particular command “ls”.

      NOTE:
These options can also be combined in a single command. For example:

      • ls -la
      • ls -hla
      • ls -Sh and so….
