โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Learning Notes #68 โ€“ Buildpacks and Dockerfile

2 February 2025 at 09:32

  1. What is an OCI ?
  2. Does Docker Create OCI Images?
  3. What is a Buildpack ?
  4. Overview of Buildpack Process
  5. Builder: The Image That Executes the Build
    1. Components of a Builder Image
    2. Stack: The Combination of Build and Run Images
  6. Installation and Initial Setups
  7. Basic Build of an Image (Python Project)
    1. Building an image using buildpack
    2. Building an Image using Dockerfile
  8. Unique Benefits of Buildpacks
    1. No Need for a Dockerfile (Auto-Detection)
    2. Automatic Security Updates
    3. Standardized & Reproducible Builds
    4. Extensibility: Custom Buildpacks
  9. Generating SBOM in Buildpacks
    1. a) Using pack CLI to Generate SBOM
    2. b) Generate SBOM in Docker

Over the last few days, I was exploring Buildpacks. I am impressed by how this tool reduces a developer’s pain. In this blog I jot down my experience with Buildpacks.

Before trying Buildpacks, we need to understand what an OCI image is.

What is an OCI ?

An OCI Image (Open Container Initiative Image) is a standard format for container images, defined by the Open Container Initiative (OCI) to ensure interoperability across different container runtimes (Docker, Podman, containerd, etc.).

It consists of,

  1. Manifest โ€“ Metadata describing the image (layers, config, etc.).
  2. Config JSON โ€“ Information about how the container should run (CMD, ENV, etc.).
  3. Filesystem Layers โ€“ The actual file system of the container.

OCI Image Specification ensures that container images built once can run on any OCI-compliant runtime.
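To make these three parts concrete, here is what a minimal manifest could look like. This is an illustrative sketch only; the media types follow the OCI Image Specification, but the digests and sizes are placeholder values, not a real image:

```python
import json

# Minimal, illustrative OCI image manifest; digests and sizes are placeholders.
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": "sha256:" + "0" * 64,   # placeholder, not a real digest
        "size": 1234,
    },
    "layers": [
        {
            "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "digest": "sha256:" + "1" * 64,   # placeholder
            "size": 56789,
        }
    ],
}

print(json.dumps(manifest, indent=2))
```

An OCI-compliant runtime resolves the config and each layer blob by digest, which is what makes the format portable across Docker, Podman, and containerd.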

Does Docker Create OCI Images?

Yes, Docker creates OCI-compliant images. Since Docker v1.10+, Docker has been aligned with the OCI Image Specification, and all Docker images are OCI-compliant by default.

  • When you build an image with docker build, it follows the OCI Image format.
  • When you push/pull images to registries like Docker Hub, they follow the OCI Image Specification.

However, Docker also supports its legacy Docker Image format, which existed before OCI was introduced. Most modern registries and runtimes (Kubernetes, Podman, containerd) support OCI images natively.

What is a Buildpack ?

A buildpack is a framework for transforming application source code into a runnable image by handling dependencies, compilation, and configuration. Buildpacks are widely used in cloud environments like Heroku, Cloud Foundry, and Kubernetes (via Cloud Native Buildpacks).

Overview of Buildpack Process

The buildpack process consists of two primary phases

  • Detection Phase: Determines if the buildpack should be applied based on the appโ€™s dependencies.
  • Build Phase: Executes the necessary steps to prepare the application for running in a container.

Buildpacks work with a lifecycle manager (e.g., Cloud Native Buildpacksโ€™ lifecycle) that orchestrates the execution of multiple buildpacks in an ordered sequence.
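As a rough mental model of the detection phase (the real lifecycle runs each buildpack’s own detect binary; the marker-file rules below are purely illustrative assumptions):

```python
import os
import tempfile

# Hypothetical marker-file rules; real buildpacks ship their own detect logic.
DETECT_RULES = {
    "python-buildpack": ["requirements.txt", "setup.py"],
    "node-buildpack": ["package.json"],
    "go-buildpack": ["go.mod"],
}

def detect(app_dir):
    """Return the buildpacks whose detection criteria the app source satisfies."""
    files = set(os.listdir(app_dir))
    return [bp for bp, markers in DETECT_RULES.items()
            if files.intersection(markers)]

# An app directory containing requirements.txt is detected as a Python app.
with tempfile.TemporaryDirectory() as app:
    open(os.path.join(app, "requirements.txt"), "w").close()
    print(detect(app))
```

Only the buildpacks that pass detection take part in the subsequent build phase.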

Builder: The Image That Executes the Build

A builder is an image that contains all necessary components to run a buildpack.

Components of a Builder Image

  1. Build Image โ€“ Used during the build phase (includes compilers, dependencies, etc.).
  2. Run Image โ€“ A minimal environment for running the final built application.
  3. Lifecycle โ€“ The core mechanism that executes buildpacks, orchestrates the process, and ensures reproducibility.

Stack: The Combination of Build and Run Images

  • Build Image + Run Image = Stack
  • Build Image: Base OS with tools required for building (e.g., Ubuntu, Alpine).
  • Run Image: Lightweight OS with only the runtime dependencies for execution.

Installation and Initial Setups

To follow along, install the pack CLI; the official Buildpacks documentation (https://buildpacks.io/docs/) has platform-specific instructions, and Docker must be running locally for pack to execute builds.

Basic Build of an Image (Python Project)

Project Source: https://github.com/syedjaferk/gh_action_docker_build_push_fastapi_app

Building an image using buildpack

Before running these commands, ensure you have Pack CLI (pack) installed.

a) Ask pack to suggest a builder

pack builder suggest

b) Build the image

pack build my-python-app --builder paketobuildpacks/builder:base

c) Run the image locally


docker run -p 8080:8080 my-python-app

Building an Image using Dockerfile

a) Dockerfile


FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .

RUN pip install -r requirements.txt

COPY ./random_id_generator ./random_id_generator
COPY app.py app.py

EXPOSE 8080

CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8080"]

b) Build and Run


docker build -t my-python-app .
docker run -p 8080:8080 my-python-app

Unique Benefits of Buildpacks

No Need for a Dockerfile (Auto-Detection)

Buildpacks automatically detect the language and dependencies, removing the need for a Dockerfile.


pack build my-python-app --builder paketobuildpacks/builder:base

It detects Python, installs dependencies, and builds the app into a container. 🚀 By contrast, Docker requires a Dockerfile, which developers must manually configure and maintain.

Automatic Security Updates

Buildpacks automatically patch base images for security vulnerabilities.

If thereโ€™s a CVE in the OS layer, Buildpacks update the base image without rebuilding the app.


pack rebase my-python-app

No need to rebuild! It replaces only the OS layers while keeping the app the same.

Standardized & Reproducible Builds

Ensures consistent images across environments (dev, CI/CD, production). Example: Running the same build locally and on Heroku/Cloud Run,


pack build my-app

Extensibility: Custom Buildpacks

Developers can create custom Buildpacks to add special dependencies.

Example: Adding ffmpeg to a Python buildpack,


pack buildpack package my-custom-python-buildpack --path .

Generating SBOM in Buildpacks

a) Using pack CLI to Generate SBOM

After building an image with pack, run,


pack sbom download my-python-app --output-dir ./sbom

  • This fetches the SBOM for your built image.
  • The SBOM is saved in the ./sbom/ directory.

โœ… Supported formats:

  • SPDX (sbom.spdx.json)
  • CycloneDX (sbom.cdx.json)

b) Generate SBOM in Docker


trivy image --format cyclonedx -o sbom.json my-python-app

Both approaches are helpful for creating images. It’s all about the trade-offs.

RabbitMQ โ€“ All You Need To Know To Start Building Scalable Platforms

1 February 2025 at 02:39

  1. Introduction
  2. What is a Message Queue ?
  3. So Problem Solved !!! Not Yet
  4. RabbitMQ: Installation
  5. RabbitMQ: An Introduction (Optional)
    1. What is RabbitMQ?
    2. Why Use RabbitMQ?
    3. Key Features and Use Cases
  6. Building Blocks of Message Broker
    1. Connection & Channels
    2. Queues โ€“ Message Store
    3. Exchanges โ€“ Message Distributor and Binding
  7. Producing, Consuming and Acknowledging
  8. Problem #1 โ€“ Task Queue for Background Job Processing
    1. Context
    2. Problem
    3. Proposed Solution
  9. Problem #2 โ€“ Broadcasting NEWS to all subscribers
    1. Problem
    2. Solution Overview
    3. Step 1: Producer (Publisher)
    4. Step 2: Consumers (Subscribers)
      1. Consumer 1: Mobile App Notifications
      2. Consumer 2: Email Alerts
      3. Consumer 3: Web Notifications
      4. How It Works
  10. Intermediate Resources
    1. Prefetch Count
    2. Request Reply Pattern
    3. Dead Letter Exchange
    4. Alternate Exchanges
    5. Lazy Queues
    6. Quorum Queues
    7. Change Data Capture
    8. Handling Backpressure in Distributed Systems
    9. Choreography Pattern
    10. Outbox Pattern
    11. Queue Based Loading
    12. Two Phase Commit Protocol
    13. Competing Consumer
    14. Retry Pattern
    15. Can We Use Database as a Queue
  11. Letโ€™s Connect

Introduction

Letโ€™s take the example of an online food ordering system like Swiggy or Zomato. Suppose a user places an order through the mobile app. If the application follows a synchronous approach, it would first send the order request to the restaurantโ€™s system and then wait for confirmation. If the restaurant is busy, the app will have to keep waiting until it receives a response.

If the restaurantโ€™s system crashes or temporarily goes offline, the order will fail, and the user may have to restart the process.

This approach leads to a poor user experience, increases the chances of failures, and makes the system less scalable, as multiple users waiting simultaneously can cause a bottleneck.

In a traditional synchronous communication model, one service directly interacts with another and waits for a response before proceeding. While this approach is simple and works for small-scale applications, it introduces several challenges, especially in systems that require high availability and scalability.

The main problems with synchronous communication include slow performance, system failures, and scalability issues. If the receiving service is slow or temporarily unavailable, the sender has no choice but to wait, which can degrade the overall performance of the application.

Moreover, if the receiving service crashes, the entire process fails, leading to potential data loss or incomplete transactions.

In this book, we are going to see how this can be solved with a message queue.

What is a Message Queue ?

A message queue is a system that allows different parts of an application (or different applications) to communicate with each other asynchronously by sending and receiving messages.

It acts like a buffer or an intermediary where messages are stored until the receiving service is ready to process them.

How It Works

  1. A producer (sender) creates a message and sends it to the queue.
  2. The message sits in the queue until a consumer (receiver) picks it up.
  3. The consumer processes the message and removes it from the queue.

This process ensures that the sender does not have to wait for the receiver to be available, making the system faster, more reliable, and scalable.
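The three steps above can be sketched with Python’s standard-library queue module, acting as an in-process stand-in for a real broker:

```python
import queue
import threading

q = queue.Queue()   # the buffer between sender and receiver
processed = []

def producer():
    for order in ["order-1", "order-2", "order-3"]:
        q.put(order)                 # 1. send the message; no waiting on the consumer
        print(f"sent {order}")

def consumer():
    while len(processed) < 3:
        order = q.get()              # 2. message sits in the queue until picked up
        processed.append(order)
        q.task_done()                # 3. processed and removed from the queue

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
print(processed)
```

The producer finishes immediately regardless of how slowly the consumer drains the queue, which is exactly the decoupling a message queue provides.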

Real-Life Example

Imagine a fast-food restaurant where customers place orders at the counter. Instead of waiting at the counter for their food, customers receive a token number and move aside. The kitchen prepares the order in the background, and when itโ€™s ready, the token number is called for pickup.

In this analogy,

  • The counter is the producer (sending orders).
  • The queue is the token system (storing orders).
  • The kitchen is the consumer (processing orders).
  • The customer picks up the food when ready (message is consumed).

Similarly, in applications, a message queue helps decouple systems, allowing them to work at their own pace without blocking each other. RabbitMQ, Apache Kafka, and Redis are popular message queue systems used in modern software development. ๐Ÿš€

So Problem Solved !!! Not Yet

It seems like the problem is solved, but the message’s life cycle inside the queue still needs to be handled.

  • Message Routing & Binding (Optional) – How is a message routed? If an exchange is used, the message is routed based on predefined rules.
  • Message Storage (Queue Retention) โ€“ How long a message stays in the queue. The message stays in the queue until a consumer picks it up.
  • If the consumer successfully processes the message, it sends an acknowledgment (ACK), and the message is removed. If the consumer fails, the message requeues or moves to a dead-letter queue (DLQ).
  • Messages that fail multiple times, are not acknowledged, or expire may be moved to a Dead-Letter Queue for further analysis.
  • Messages stored only in memory can be lost if RabbitMQ crashes.
  • Messages not consumed within their TTL expire.
  • If a consumer fails to acknowledge a message, it may be redelivered and processed more than once.
  • Messages failing multiple times may be moved to a DLQ.
  • Too many messages in the queue due to slow consumers can cause system slowdowns.
  • Network failures can disrupt message delivery between producers, RabbitMQ, and consumers.
  • Messages with corrupt or bad data may cause repeated consumer failures.

To handle all of the above problems, we need a stable, battle-tested, reliable tool. RabbitMQ is one such tool. In this book we will cover the basics of RabbitMQ.

RabbitMQ: Installation

For RabbitMQ installation, please refer to https://www.rabbitmq.com/docs/download. In this book we will use the RabbitMQ Docker image.

docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:4.0-management


RabbitMQ: An Introduction (Optional)

What is RabbitMQ?

Imagine youโ€™re sending messages between friends, but instead of delivering them directly, you drop them in a mailbox, and your friend picks them up when they are ready. RabbitMQ acts like this mailbox, but for computer programs. It helps applications communicate asynchronously, meaning they donโ€™t have to wait for each other to process data.

RabbitMQ is a message broker, which means it handles and routes messages between different parts of an application. It ensures that messages are delivered efficiently, even when some components are running at different speeds or go offline temporarily.

Why Use RabbitMQ?

Modern applications often consist of multiple services that need to exchange data. Sometimes, one service produces data faster than another can consume it. Instead of forcing the slower service to catch up or making the faster service wait, RabbitMQ allows the fast service to place messages in a queue. The slow service can then process them at its own pace.

Some key benefits of using RabbitMQ include,

  • Decoupling services: Components communicate via messages rather than direct calls, reducing dependencies.
  • Scalability: RabbitMQ allows multiple consumers to process messages in parallel.
  • Reliability: It supports message durability and acknowledgments, preventing message loss.
  • Flexibility: Works with many programming languages and integrates well with different systems.
  • Efficient Load Balancing: Multiple consumers can share the message load to prevent overload on a single component.

Key Features and Use Cases

RabbitMQ is widely used in different applications, including

  • Chat applications: Messages are queued and delivered asynchronously to users.
  • Payment processing: Orders are placed in a queue and processed sequentially.
  • Event-driven systems: Used for microservices communication and event notification.
  • IoT systems: Devices publish data to RabbitMQ, which is then processed by backend services.
  • Job queues: Background tasks such as sending emails or processing large files.

Building Blocks of Message Broker

Connection & Channels

In RabbitMQ, connections and channels are fundamental concepts for communication between applications and the broker,

Connections: A connection is a TCP link between a client (producer or consumer) and the RabbitMQ broker. Each connection consumes system resources and is relatively expensive to create and maintain.

Channels: A channel is a virtual communication path inside a connection. It allows multiple logical streams of data over a single TCP connection, reducing overhead. Channels are lightweight and preferred for performing operations like publishing and consuming messages.
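The relationship can be pictured with a toy model. These classes are illustrative only, not pika’s API, though pika’s connection.channel() call follows the same shape:

```python
class Channel:
    """A lightweight, multiplexed stream inside a connection."""
    def __init__(self, number):
        self.number = number

class Connection:
    """Stands in for one TCP connection to the broker (expensive to open)."""
    def __init__(self):
        self.channels = []

    def channel(self):
        # Channels are cheap: just a numbered stream over the same socket.
        ch = Channel(len(self.channels) + 1)
        self.channels.append(ch)
        return ch

conn = Connection()          # open once, reuse everywhere
publisher = conn.channel()   # one channel per logical task...
consumer = conn.channel()    # ...all sharing the single TCP connection
print(len(conn.channels))
```

This is why the usual advice is one connection per process and one channel per thread or logical task.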

Queues โ€“ Message Store

A queue is a message buffer that temporarily holds messages until a consumer retrieves and processes them.

1. Queues operate on a FIFO (First In, First Out) basis, meaning messages are processed in the order they arrive (unless priorities or other delivery strategies are set).

2. Queues persist messages if they are declared as durable and the messages are marked as persistent, ensuring reliability even if RabbitMQ restarts.

3. Multiple consumers can subscribe to a queue, and messages can be distributed among them in a round-robin manner.

Messages can be consumed by multiple consumers in this way, or broadcast to all of them.

4. If no consumers are available, messages remain in the queue until a consumer connects.

Analogy: Think of a queue as a to-do list where tasks (messages) are stored until someone (a worker/consumer) picks them up and processes them.
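The round-robin distribution from point 3 can be mimicked in a few lines. This is an in-memory sketch, not RabbitMQ itself:

```python
from itertools import cycle

# Three consumers subscribed to the same queue.
consumers = {"worker-1": [], "worker-2": [], "worker-3": []}
dispatch = cycle(consumers)          # cycles through consumer names in order

for message in ["m1", "m2", "m3", "m4", "m5", "m6"]:
    consumers[next(dispatch)].append(message)

print(consumers)  # each worker ends up with every third message
```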

Exchanges โ€“ Message Distributor and Binding

An exchange is responsible for routing messages to one or more queues based on routing rules.

When a producer sends a message, it doesn’t go directly to a queue but first reaches an exchange, which decides where to forward it. 🔥

The link between the exchange and the queue is called a Binding, guiding messages to the right place.

RabbitMQ supports different types of exchanges

Direct Exchange (direct)

  • Routes messages to queues based on an exact match between the routing key and the queueโ€™s binding key.
  • Example: Sending messages to a specific queue based on a severity level (info, error, warning).


Fanout Exchange (fanout)

  • Routes messages to all bound queues, ignoring routing keys.
  • Example: Broadcasting notifications to multiple services at once.

Topic Exchange (topic)

  • Routes messages based on pattern matching using * (matches one word) and # (matches multiple words).
  • Example: Routing logs where log.info goes to one queue, log.error goes to another, and log.* captures all.

Headers Exchange (headers)

  • Routes messages based on message headers instead of routing keys.
  • Example: Delivering messages based on metadata like device: mobile or region: US.

Analogy: An exchange is like a traffic controller that decides which road (queue) a vehicle (message) should take based on predefined rules.
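To see the routing rules side by side, here is a tiny in-memory router for three of the exchange types (headers exchanges omitted). The topic matcher below approximates RabbitMQ’s */# semantics, and the queue names are borrowed from the examples in this post:

```python
def topic_matches(pattern, key):
    """Approximate RabbitMQ topic matching: '*' = exactly one word, '#' = zero or more."""
    return _match(pattern.split("."), key.split("."))

def _match(pat, words):
    if not pat:
        return not words
    if pat[0] == "#":
        # '#' may consume zero or more words
        return any(_match(pat[1:], words[i:]) for i in range(len(words) + 1))
    if not words:
        return False
    return pat[0] in ("*", words[0]) and _match(pat[1:], words[1:])

# Example bindings: (binding key, queue name) per exchange type.
bindings = {
    "direct": [("error", "error_logs")],
    "fanout": [("", "mobile_app_queue"), ("", "email_alert_queue")],
    "topic": [("log.*", "all_logs"), ("log.error", "error_logs")],
}

def route(exchange_type, routing_key):
    """Return the queues a message with this routing key would reach."""
    if exchange_type == "fanout":
        return [q for _, q in bindings["fanout"]]       # everyone gets a copy
    if exchange_type == "direct":
        return [q for k, q in bindings["direct"] if k == routing_key]
    return [q for p, q in bindings["topic"] if topic_matches(p, routing_key)]

print(route("topic", "log.error"))  # both topic queues match log.error
```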

Binding

A binding is a link between an exchange and a queue that defines how messages should be routed.

  • When a queue is bound to an exchange with a binding key, messages with a matching routing key are delivered to that queue.
  • A queue can have multiple bindings to different exchanges, allowing it to receive messages from multiple sources.

Example:

  • A queue named error_logs can be bound to a direct exchange with a binding key error.
  • Another queue, all_logs, can be bound to the same exchange with a binding key # (wildcard in a topic exchange) to receive all logs.

Analogy: A binding is like a GPS route guiding messages (vehicles) from the exchange (traffic controller) to the right queue (destination).

Producing, Consuming and Acknowledging

RabbitMQ follows the producer-exchange-queue-consumer model,

  • Producing messages (Publishing): A producer creates a message and sends it to RabbitMQ, which routes it to the correct queue.
  • Consuming messages (Subscribing): A consumer listens for messages from the queue and processes them.
  • Acknowledgment: The consumer sends an acknowledgment (ack) after successfully processing a message.
  • Durability: Ensures messages and queues survive RabbitMQ restarts.

Why do we need an Acknowledgement ?

  1. Ensures message reliability โ€“ Prevents messages from being lost if a consumer crashes.
  2. Prevents message loss โ€“ Messages are redelivered if no ACK is received.
  3. Avoids unintentional message deletion โ€“ Messages stay in the queue until properly processed.
  4. Supports at-least-once delivery โ€“ Ensures every message is processed at least once.
  5. Enables load balancing โ€“ Distributes messages fairly among multiple consumers.
  6. Allows manual control โ€“ Consumers can acknowledge only after successful processing.
  7. Handles redelivery โ€“ Messages can be requeued and sent to another consumer if needed.
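Points 2 and 7 can be seen in a toy redelivery loop: a message that is not acknowledged goes back to the queue instead of being lost (an in-memory sketch, not pika):

```python
from collections import deque

q = deque(["msg-1"])             # the queue holds one message
attempts = 0

def process(msg):
    """Toy consumer that fails on the first attempt and succeeds on the second."""
    global attempts
    attempts += 1
    return attempts > 1          # True means processing worked, so we ACK

while q:
    msg = q.popleft()            # broker delivers the message to a consumer
    if process(msg):
        print(f"ACK {msg}")      # acknowledged: removed from the queue for good
    else:
        q.append(msg)            # no ACK: the message is requeued and redelivered

print(f"processed after {attempts} deliveries")
```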

Problem #1 โ€“ Task Queue for Background Job Processing

Context

A company runs an image processing application where users upload images that need to be resized, watermarked, and optimized before they can be served. Processing these images synchronously would slow down the user experience, so the company decides to implement an asynchronous task queue using RabbitMQ.

Problem

  • Users upload large images that require multiple processing steps.
  • Processing each image synchronously blocks the application, leading to slow response times.
  • High traffic results in queue buildup, making it challenging to scale the system efficiently.

Proposed Solution

1. Producer Service

  • Publishes image processing tasks to a RabbitMQ exchange (task_exchange).
  • Sends the image filename as the message body to the queue (image_queue).

2. Worker Consumers

  • Listen for new image processing tasks from the queue.
  • Process each image (resize, watermark, optimize, etc.).
  • Acknowledge completion to ensure no duplicate processing.

3. Scalability

  • Multiple workers can run in parallel to process images faster.

producer.py

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare exchange and queue
channel.exchange_declare(exchange='task_exchange', exchange_type='direct')
channel.queue_declare(queue='image_queue')

# Bind queue to exchange
channel.queue_bind(exchange='task_exchange', queue='image_queue', routing_key='image_task')

# List of images to process
images = ["image1.jpg", "image2.jpg", "image3.jpg"]

for image in images:
    channel.basic_publish(exchange='task_exchange', routing_key='image_task', body=image)
    print(f" [x] Sent {image}")

connection.close()

consumer.py

import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare exchange and queue
channel.exchange_declare(exchange='task_exchange', exchange_type='direct')
channel.queue_declare(queue='image_queue')

# Bind queue to exchange
channel.queue_bind(exchange='task_exchange', queue='image_queue', routing_key='image_task')

def process_image(ch, method, properties, body):
    print(f" [x] Processing {body.decode()}")
    time.sleep(2)  # Simulate processing time
    print(f" [x] Finished {body.decode()}")
    ch.basic_ack(delivery_tag=method.delivery_tag)

# Start consuming
channel.basic_consume(queue='image_queue', on_message_callback=process_image)
print(" [*] Waiting for image tasks. To exit press CTRL+C")
channel.start_consuming()

Problem #2 โ€“ Broadcasting NEWS to all subscribers

Problem

A news application wants to send breaking news alerts to all subscribers, regardless of their location or interest.

Use a fanout exchange (news_alerts_exchange) to broadcast messages to all connected queues, ensuring all users receive the alert.

๐Ÿ”น Example

  • mobile_app_queue (for users receiving push notifications)
  • email_alert_queue (for users receiving email alerts)
  • web_notification_queue (for users receiving notifications on the website)

Solution Overview

  • We create a fanout exchange called news_alerts_exchange.
  • Multiple queues (mobile_app_queue, email_alert_queue, and web_notification_queue) are bound to this exchange.
  • A producer publishes messages to the exchange.
  • Each consumer listens to its respective queue and receives the alert.

Step 1: Producer (Publisher)

This script publishes a breaking news alert to the fanout exchange.

import pika

# Establish connection
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare a fanout exchange
channel.exchange_declare(exchange="news_alerts_exchange", exchange_type="fanout")

# Publish a message
message = "Breaking News: Major event happening now!"
channel.basic_publish(exchange="news_alerts_exchange", routing_key="", body=message)

print(f" [x] Sent: {message}")

# Close connection
connection.close()

Step 2: Consumers (Subscribers)

Each consumer listens to its respective queue and processes the alert.

Consumer 1: Mobile App Notifications

import pika

# Establish connection
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare exchange
channel.exchange_declare(exchange="news_alerts_exchange", exchange_type="fanout")

# Declare a queue
queue_name = "mobile_app_queue"
channel.queue_declare(queue=queue_name)
channel.queue_bind(exchange="news_alerts_exchange", queue=queue_name)

# Callback function
def callback(ch, method, properties, body):
    print(f" [Mobile App] Received: {body.decode()}")

# Consume messages
channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
print(" [*] Waiting for news alerts...")
channel.start_consuming()

Consumer 2: Email Alerts

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="news_alerts_exchange", exchange_type="fanout")

queue_name = "email_alert_queue"
channel.queue_declare(queue=queue_name)
channel.queue_bind(exchange="news_alerts_exchange", queue=queue_name)

def callback(ch, method, properties, body):
    print(f" [Email Alert] Received: {body.decode()}")

channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
print(" [*] Waiting for news alerts...")
channel.start_consuming()

Consumer 3: Web Notifications

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="news_alerts_exchange", exchange_type="fanout")

queue_name = "web_notification_queue"
channel.queue_declare(queue=queue_name)
channel.queue_bind(exchange="news_alerts_exchange", queue=queue_name)

def callback(ch, method, properties, body):
    print(f" [Web Notification] Received: {body.decode()}")

channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
print(" [*] Waiting for news alerts...")
channel.start_consuming()

How It Works

  1. The producer sends a news alert to the fanout exchange (news_alerts_exchange).
  2. All queues (mobile_app_queue, email_alert_queue, web_notification_queue) bound to the exchange receive the message.
  3. Each consumer listens to its queue and processes the alert.

This setup ensures all users receive the alert simultaneously across different platforms. ๐Ÿš€

Intermediate Resources

Prefetch Count

Prefetch is a mechanism that defines how many messages can be delivered to a consumer at a time before the consumer sends an acknowledgment back to the broker. This ensures that the consumer does not get overwhelmed with too many unprocessed messages, which could lead to high memory usage and potential performance issues.

To Know More: https://parottasalna.com/2024/12/29/learning-notes-16-prefetch-count-rabbitmq/

Request Reply Pattern

The Request-Reply Pattern is a fundamental communication style in distributed systems, where a requester sends a message to a responder and waits for a reply. Itโ€™s widely used in systems that require synchronous communication, enabling the requester to receive a response for further processing.

To Know More: https://parottasalna.com/2024/12/28/learning-notes-15-request-reply-pattern-rabbitmq/

Dead Letter Exchange

A dead letter is a message that cannot be delivered to its intended queue or is rejected by a consumer. Common scenarios where messages are dead lettered include,

  1. Message Rejection: A consumer explicitly rejects a message without requeuing it.
  2. Message TTL (Time-To-Live) Expiry: The message remains in the queue longer than its TTL.
  3. Queue Length Limit: The queue has reached its maximum capacity, and new messages are dropped.
  4. Routing Failures: Messages that cannot be routed to any queue from an exchange.

To Know More: https://parottasalna.com/2024/12/28/learning-notes-14-dead-letter-exchange-rabbitmq/

Alternate Exchanges

An alternate exchange in RabbitMQ is a fallback exchange configured for another exchange. If a message cannot be routed to any queue bound to the primary exchange, RabbitMQ will publish the message to the alternate exchange instead. This mechanism ensures that undeliverable messages are not lost but can be processed in a different way, such as logging, alerting, or storing them for later inspection.

To Know More: https://parottasalna.com/2024/12/27/learning-notes-12-alternate-exchanges-rabbitmq/

Lazy Queues

  • Lazy Queues are designed to store messages primarily on disk rather than in memory.
  • They are optimized for use cases involving large message backlogs where minimizing memory usage is critical.

To Know More: https://parottasalna.com/2024/12/26/learning-notes-10-lazy-queues-rabbitmq/

Quorum Queues

  • Quorum Queues are distributed queues built on the Raft consensus algorithm.
  • They are designed for high availability, durability, and data safety by replicating messages across multiple nodes in a RabbitMQ cluster.
  • They are a replacement for classic mirrored queues.

To Know More: https://parottasalna.com/2024/12/25/learning-notes-9-quorum-queues-rabbitmq/

Change Data Capture

CDC stands for Change Data Capture. Itโ€™s a technique that listens to a database and captures every change that happens in it. These changes can then be sent to other systems to,

  • Keep data in sync across multiple databases.
  • Power real-time analytics dashboards.
  • Trigger notifications for certain database events.
  • Process data streams in real time.

To Know More: https://parottasalna.com/2025/01/19/learning-notes-63-change-data-capture-what-does-it-do/

Handling Backpressure in Distributed Systems

Backpressure occurs when a downstream system (consumer) cannot keep up with the rate of data being sent by an upstream system (producer). In distributed systems, this can arise in scenarios such as

  • A message queue filling up faster than it is drained.
  • A database struggling to handle the volume of write requests.
  • A streaming system overwhelmed by incoming data.

To Know More: https://parottasalna.com/2025/01/07/learning-notes-45-backpressure-handling-in-distributed-systems/

Choreography Pattern

In the Choreography Pattern, services communicate directly with each other via asynchronous events, without a central controller. Each service is responsible for a specific part of the workflow and responds to events produced by other services. This pattern allows for a more autonomous and loosely coupled system.

To Know More: https://parottasalna.com/2025/01/05/learning-notes-38-choreography-pattern-cloud-pattern/

Outbox Pattern

The Outbox Pattern is a proven architectural solution to the dual-write problem, helping developers manage data consistency, especially when dealing with events, messaging systems, or external APIs.

To Know More: https://parottasalna.com/2025/01/03/learning-notes-31-outbox-pattern-cloud-pattern/

Queue Based Loading

The Queue-Based Loading Pattern leverages message queues to decouple and coordinate tasks between producers (such as applications or services generating data) and consumers (services or workers processing that data). By using queues as intermediaries, this pattern allows systems to manage workloads efficiently, ensuring seamless and scalable operation.

To Know More: https://parottasalna.com/2025/01/03/learning-notes-30-queue-based-loading-cloud-patterns/

Two Phase Commit Protocol

The Two-Phase Commit (2PC) protocol is a distributed algorithm used to ensure atomicity in transactions spanning multiple nodes or databases. Atomicity ensures that either all parts of a transaction are committed or none are, maintaining consistency in distributed systems.

To Know More: https://parottasalna.com/2025/01/03/learning-notes-29-two-phase-commit-protocol-acid-in-distributed-systems/

Competing Consumer

The competing consumer pattern involves multiple consumers that independently compete to process messages or tasks from a shared queue. This pattern is particularly effective in scenarios where the rate of incoming tasks is variable or high, as it allows multiple consumers to process tasks concurrently.

To Know More: https://parottasalna.com/2025/01/01/learning-notes-24-competing-consumer-messaging-queue-patterns/

Retry Pattern

The Retry Pattern is a design strategy used to manage transient failures by retrying failed operations. Instead of immediately failing an operation after an error, the pattern retries it with an optional delay or backoff strategy. This is particularly useful in distributed systems where failures are often temporary.

To Know More: https://parottasalna.com/2024/12/31/learning-notes-23-retry-pattern-cloud-patterns/
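The retry-with-backoff idea fits naturally into a decorator. This is a minimal sketch; the attempt counts and delays are illustrative, and a production version would typically retry only specific exception types and add jitter.

```python
import time
import functools

def retry(max_attempts=3, base_delay=0.01):
    """Retry a function with exponential backoff on any exception."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise                      # give up after the last attempt
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

calls = {"count": 0}

@retry(max_attempts=4)
def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient failure")  # simulated transient error
    return "ok"

result = flaky()
print(result, "after", calls["count"], "attempts")  # ok after 3 attempts
```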

Can We Use Database as a Queue

Developers try to use their RDBMS as a way to do background processing or service communication. While this can often appear to โ€˜get the job doneโ€™, there are a number of limitations and concerns with this approach.

There are two divisions to any asynchronous processing: the service(s) that create processing tasks and the service(s) that consume and process these tasks accordingly.

To Know More: https://parottasalna.com/2024/06/15/can-we-use-database-as-queue-in-asynchronous-process/
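The typical database-as-queue shape looks like this sketch: producers insert rows, and consumers poll and claim them. The comment notes one of the limitations mentioned above; with multiple workers this needs row locking (e.g. `SELECT ... FOR UPDATE SKIP LOCKED` on PostgreSQL), which is exactly where the approach gets painful.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT, state TEXT DEFAULT 'pending')"
)

def enqueue(payload):
    with con:
        con.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))

def claim_next():
    # Polling consumer: claim the oldest pending row, then mark it done.
    # Safe only for a single worker; concurrent workers need row locking.
    with con:
        row = con.execute(
            "SELECT id, payload FROM jobs WHERE state = 'pending' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        con.execute("UPDATE jobs SET state = 'done' WHERE id = ?", (row[0],))
        return row[1]

enqueue("send-email")
enqueue("resize-image")
jobs_done = [claim_next(), claim_next(), claim_next()]
print(jobs_done)  # ['send-email', 'resize-image', None]
```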

Letโ€™s Connect

Telegram: https://t.me/parottasalna/1

LinkedIn: https://www.linkedin.com/in/syedjaferk/

Whatsapp Channel: https://whatsapp.com/channel/0029Vavu8mF2v1IpaPd9np0s

Youtube: https://www.youtube.com/@syedjaferk

Github: https://github.com/syedjaferk/

Understanding RAGAS: A Comprehensive Framework for RAG System Evaluation

By: angu10
1 February 2025 at 01:40

In the rapidly evolving landscape of artificial intelligence, Retrieval Augmented Generation (RAG) systems have emerged as a crucial technology for enhancing Large Language Models with external knowledge. However, ensuring the quality and reliability of these systems requires robust evaluation methods. Enter RAGAS (Retrieval Augmented Generation Assessment System), a groundbreaking framework that provides comprehensive metrics for evaluating RAG systems.

The Importance of RAG Evaluation

RAG systems combine the power of retrieval mechanisms with generative AI to produce more accurate and contextually relevant responses. However, their complexity introduces multiple potential points of failure, from retrieval accuracy to answer generation quality. This is where RAGAS steps in, offering a structured approach to assessment that helps developers and organizations maintain high standards in their RAG implementations.

Core RAGAS Metrics

Context Precision

Context precision measures how relevant the retrieved information is to the given query. This metric evaluates whether the system is pulling in the right pieces of information from its knowledge base. A high context precision score indicates that the retrieval component is effectively identifying and selecting relevant content, while a low score might suggest that the system is retrieving tangentially related or irrelevant information.

Faithfulness

Faithfulness assesses the alignment between the generated answer and the provided context. This crucial metric ensures that the system's responses are grounded in the retrieved information rather than hallucinated or drawn from the model's pre-trained knowledge. A faithful response should be directly supported by the context, without introducing external or contradictory information.

Answer Relevancy

The answer relevancy metric evaluates how well the generated response addresses the original question. This goes beyond mere factual accuracy to assess whether the answer provides the information the user was seeking. A highly relevant answer should directly address the query's intent and provide appropriate detail level.

Context Recall

Context recall compares the retrieved contexts against ground truth information, measuring how much of the necessary information was successfully retrieved. This metric helps identify cases where critical information might be missing from the system's responses, even if what was retrieved was accurate.

Practical Implementation

RAGAS's implementation is designed to be straightforward while providing deep insights. The framework accepts evaluation datasets containing:

Questions posed to the system
Retrieved contexts for each question
Generated answers
Ground truth answers for comparison

This structured approach allows for automated evaluation across multiple dimensions of RAG system performance, providing a comprehensive view of system quality.

Benefits and Applications

Quality Assurance

RAGAS enables continuous monitoring of RAG system performance, helping teams identify degradation or improvements over time. This is particularly valuable when making changes to the retrieval mechanism or underlying models.

Development Guidance

The granular metrics provided by RAGAS help developers pinpoint specific areas needing improvement. For instance, low context precision scores might indicate the need to refine the retrieval strategy, while poor faithfulness scores might suggest issues with the generation parameters.

Comparative Analysis

Organizations can use RAGAS to compare different RAG implementations or configurations, making it easier to make data-driven decisions about system architecture and deployment.

Best Practices for RAGAS Implementation

  1. Regular Evaluation Implement RAGAS as part of your regular testing pipeline to catch potential issues early and maintain consistent quality.
  2. Diverse Test Sets Create evaluation datasets that cover various query types, complexities, and subject matters to ensure robust assessment.
  3. Metric Thresholds Establish minimum acceptable scores for each metric based on your application's requirements and use these as quality gates in your deployment process.
  4. Iterative Refinement Use RAGAS metrics to guide iterative improvements to your RAG system, focusing on the areas showing the lowest performance scores.
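The metric-threshold idea in (3) can be sketched as a simple quality gate. The metric names mirror the RAGAS metrics discussed above, but the threshold values and score dict here are illustrative, not RAGAS defaults.

```python
# Illustrative thresholds -- tune these to your application's requirements.
THRESHOLDS = {
    "faithfulness": 0.85,
    "answer_relevancy": 0.80,
    "context_precision": 0.75,
}

def passes_quality_gate(scores, thresholds=THRESHOLDS):
    """Return (passed, failures) for a dict of metric scores."""
    failures = {m: s for m, s in scores.items() if m in thresholds and s < thresholds[m]}
    return len(failures) == 0, failures

# Example scores, e.g. pulled from a RAGAS evaluation result.
ok, failed = passes_quality_gate(
    {"faithfulness": 0.91, "answer_relevancy": 0.72, "context_precision": 0.88}
)
print(ok, failed)  # False {'answer_relevancy': 0.72}
```

Wired into a CI pipeline, a `False` result here would block the deployment until the failing metric is investigated.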

Practical Codeย Examples

Basic RAGAS Evaluation

Here's a simple example of how to implement RAGAS evaluation in your Python code:

from ragas import evaluate
from datasets import Dataset
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_precision
)

def evaluate_rag_system(questions, contexts, answers, references):
    """
    Simple function to evaluate a RAG system using RAGAS

    Args:
        questions (list): List of questions
        contexts (list): List of contexts for each question
        answers (list): List of generated answers
        references (list): List of reference answers (ground truth)

    Returns:
        EvaluationResult: RAGAS evaluation results
    """
    # First, let's make sure you have the required packages
    try:
        import ragas
        import datasets
    except ImportError:
        print("Please install required packages:")
        print("pip install ragas datasets")
        return None

    # Prepare evaluation dataset
    eval_data = {
        "question": questions,
        "contexts": [[ctx] for ctx in contexts],  # RAGAS expects list of lists
        "answer": answers,
        "reference": references
    }

    # Convert to Dataset format
    eval_dataset = Dataset.from_dict(eval_data)

    # Run evaluation with key metrics
    results = evaluate(
        eval_dataset,
        metrics=[
            faithfulness,      # Measures if answer is supported by context
            answer_relevancy,  # Measures if answer is relevant to question
            context_precision  # Measures if retrieved context is relevant
        ]
    )

    return results

# Example usage
if __name__ == "__main__":
    # Sample data
    questions = [
        "What are the key features of Python?",
        "How does Python handle memory management?"
    ]

    contexts = [
        "Python is a high-level programming language known for its simple syntax and readability. It supports multiple programming paradigms including object-oriented, imperative, and functional programming.",
        "Python uses automatic memory management through garbage collection. It employs reference counting as the primary mechanism and has a cycle-detecting garbage collector for handling circular references."
    ]

    answers = [
        "Python is known for its simple syntax and readability, and it supports multiple programming paradigms including OOP.",
        "Python handles memory management automatically through garbage collection, using reference counting and cycle detection."
    ]

    references = [
        "Python's key features include readable syntax and support for multiple programming paradigms like OOP, imperative, and functional programming.",
        "Python uses automatic garbage collection with reference counting and cycle detection for memory management."
    ]

    # Run evaluation
    results = evaluate_rag_system(
        questions=questions,
        contexts=contexts,
        answers=answers,
        references=references
    )

    if results:
        # Print results
        print("\nRAG System Evaluation Results:")
        print(results)  

Welcoming Winter with Python โ€“ PyKids and PyLadies

31 January 2025 at 21:30

Winter Break for kids

In Canada, we have around 15 days of winter break for all school kids, covering Christmas and New year.

These celebrations are helping much to come out of the winter worries.

Winter is a scary word, but people have to go through it, as life has to go on. As we cannot travel much and there are no outdoor events or games, we have to stay at home all the days, weeks and months. Organizing indoor events is costly.

To spend the winter actively, many celebration days occur. Halloween, Christmas, Boxing Day, New Year, Valentine's Day and more are there to make the winter active.

Keeping the kids at home for the 17-day winter break is tough. We have to engage them the whole day. In our apartment, we conduct many kids events like weekly chess hour, dance hours, board games day, movie time, sleepover nights etc.

Computer literacy is good here. Kids learn to use computers at school, from Grade 3 itself. They play many educational games at school. Homework is done with Google Slides and Google Docs from grade 5. Scratch programming is also taught here at grade 5. So, they know very well how to use a computer, read text online, search the internet and gather some info etc.

PyKids

This time, thought of having some tech events for kids. Called for a 10-day training named "PyKids", for grade 5 and above. The announcement was welcomed well by many parents. We had around 17 kids participate.

As our house is empty mostly, ( thanks to Nithya, for the minimalistic life style ), our hall helped for gathering and teaching.

By keeping the hall empty, we are using the place as Daily Zumba dance hall, mini party hall, DJ hall, kids play area and now as a learning place.

Teaching Python to kids is not easy. The kids are not ready to listen to any long talks. They cannot even listen to my regular "python introduction" slides. So, we jumped into hands-on on day one itself.

My mentor, Asokan Pichai, explained a few months ago how we have to go hands-on in any Python training. Experienced the benefits of it this time.

Even though I have been using Python for 10+ years, teaching it to kids was really tough. I had to read a few books and read up on more basics, so that I could explain the building blocks of Python with examples more relevant for kids.

The kids are good at asking questions. They share feedback with their eyes itself. It is hugely different from teaching adults. Most adults don't ask questions. They hesitate to say they don't understand something. But kids are brave enough to ask questions and express their feedback immediately.

With a training from 4-6 pm every day, for around 10 days, we could cover only so little of Python.

We practiced the code here – https://github.com/tshrinivasan/python-for-kids We used https://www.online-python.com/ as the IDE, as the kids have laptops and tablets with different OSes. Will install Python on their laptops at the next events so that they can explore more Python libraries.

On the final day, my friend Jay Varadharajan, gave a Pizza party for all kids, along with a participation certificate.

Thanks for all the questions kids. Along with you, I learnt a lot. Thanks for all the parents for the great support.

PyLadies

Nithya wanted to try out a full-day training for her friends. Getting a 9-5 slot to learn something is such a luxury for many people. Still, around 10 friends participated.

Nithya took the day with all hands-on. She covered variables, getting input, if/else, for/while loops, and string/list operations. The participants were happy to dive into programming so quickly.

"A Byte of Python" is a super easy book to learn Python. Read it here for free: https://python.swaroopch.com/

Gave this link and asked them to read/practice regularly. Hope they are following the book.

Hall for PyLadies meetup

Home as Learning Space

Thus, we are converting our home into a learning space for kids and friends. Thinking of conducting some technical meetups too. (I am missing all the Linux Users Group meetings and hackathons.) Hope we can get more tech events in the winter and make them interesting and productive.

Short forms and its meaning: IT oriented

By: vsraj80
31 January 2025 at 16:21

OOP/OOPs โ€“ Object Oriented Programming/s

DevOps – Development and Operations

HTML โ€“ Hyper-Text Markup Language

API โ€“ Application Programming Interface

IDE โ€“ Integrated Development Environment

WWW โ€“ World Wide Web

HTTP โ€“ Hyper-Text Transfer Protocol

HTTPS โ€“ Hyper-Text Transfer Protocol Secured

XML – eXtensible Markup Language

PY โ€“ Python

GUI โ€“ Graphical User Interface

APP โ€“ Application

UI/UX โ€“ User Interface / User experience

PHP – Hypertext Preprocessor (previously Personal Home Page)

TDL – Tally Definition Language

TCP – Tally Compliant Product

.NET โ€“ Network Enabled Technology

XLS โ€“ Excel Spreadsheet

XLSX โ€“ Excel Open XML Spreadsheet

CSV – Comma-Separated Values

PDF โ€“ Portable Document Format

JSON – JavaScript Object Notation

JPG/JPEG – Joint Photographic Experts Group

PNG โ€“ Portable Network Graphics

.SQL โ€“ Structured Query Language

RDBMS โ€“ Relational DataBase Management System


About SQL

By: vsraj80
31 January 2025 at 16:15

Structured Query Language

Relational Data-Base Management System

MySQL is free and open source software.

MySQL Client – front end

MySQL Server – back end

Functions of SQL Client

1. Validating the password and authenticating

2. Receiving input from the client end, converting it into tokens and sending them to the SQL server

3. Returning the results from the SQL server to the user

Functions of SQL Server

The SQL server consists of 2 major parts, receiving the request from the client and returning the response after processing:

1. Management Layer

a. Decoding the data

b. Validating and parsing (analyzing) the data

c. Sending the cached queries to the Storage Engine

2. Storage Engine

a. Managing databases, tables, indexes

b. Sending the data to other shared SQL servers

Install SQL in Ubuntu

sudo apt-get install mysql-server

To make secure configure as below

sudo mysql_secure_installation

1. It removes anonymous users

2. Allows root login only from the local host

3. Removes the test database

MySQL Configuration options

/etc/mysql is the MySQL configuration directory

To Start MySQL

sudo service mysql start

To Stop MySQL

sudo service mysql stop

To Restart MySQL

sudo service mysql restart

MySQL Clients

Normally we will use mysql in command line

But in linux we can access through following GUI

MySQL Work Bench

sudo apt-get install mysql-workbench

MySQL Navigator

sudo apt-get install mysql-navigator

EMMA

sudo apt-get install emma

PHP MYAdmin

sudo aptitude install phpmyadmin

MySQL Admin

sudo apt-get install mysql-admin

Kinds of MySQL Clients

1. GUI-based desktop applications

2. Web-based applications

3. Shell-based applications (text-only)

To connect to the server with the MySQL client

mysql -u root -p

My Project Requirement

By: vsraj80
30 January 2025 at 23:38

Service center database software โ€“ Desktop based

Draft

ABC Computer Service Center

#123, Greater Road, South Extension, Old Mahabalipuram Road, Chennai

Job Sheet No: Abc20250001 JS Date:28/01/2025

Customer Name: S.Ganesh Contact No.:9876543210

Email:sganesh123@gmail.com Job Recd by.xxxx

Address: R.Ganesh, No.10/46, Madhya Kailash, Job alloted to:Eng.0001 Rajiv gandhi road, OMR, chennai โ€“ 600 000.

Product Details :

Product: Laptop Model:Dell Inspiron G123 Color: black 1TB Hdd, 8GB Ram

Customer Remarks:

Laptop is not in working condition. It was working very slowly. Battery needs to be changed.

Remarks from Engineer:

1.Charges informed approx.rs.2800 for battery and service charges

2.System got ready and informed to customer

3.Total amount rs.3500 collected and laptop given. (job sheet needs to close)

Job Status:

1.*Pending for process 2.* Pending for customer approval 3.*Completed and informed to customer 4.*Completed and closed

Job Sheet Closed Date: 30/01/2025

Job Closed by:Eng.0001

Revenue Details

Spare actual cost Rs.2500 (to be enter at the time of job closing)

Spare Sale income Rs.2800

Service income:700

In this, only the required data should be added for the job sheet print. Consolidated engineer revenue details will be enabled only for the admin user.

SQL error

By: vsraj80
30 January 2025 at 14:55

import sqlite3

class Database:
    def __init__(self, db):
        self.con = sqlite3.connect(db)
        self.cur = self.con.cursor()
        sql = """
        CREATE TABLE IF NOT EXISTS Customer(
        id Integer Primary key,
        name text,
        mobile text,
        email text,
        address text,
        )
        """

        # self.cur.execute(sql)  (getting error while executing this line. if removed i am getting empty database output sheet)

        self.con.commit()


o = Database("Customer.db")

In this code Customer db is getting generated but there is no data
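The likely cause of the error is the trailing comma after the last column (`address text,`), which is invalid SQL. Removing it makes the `CREATE TABLE` statement execute. A working sketch of the same class (using `":memory:"` here so the example is self-contained; use `"Customer.db"` for a file-backed database):

```python
import sqlite3

class Database:
    def __init__(self, db):
        self.con = sqlite3.connect(db)
        self.cur = self.con.cursor()
        sql = """
        CREATE TABLE IF NOT EXISTS Customer(
            id INTEGER PRIMARY KEY,
            name TEXT,
            mobile TEXT,
            email TEXT,
            address TEXT
        )
        """
        self.cur.execute(sql)   # works: no trailing comma after the last column
        self.con.commit()

o = Database(":memory:")
o.cur.execute("INSERT INTO Customer(name, mobile) VALUES (?, ?)", ("Ganesh", "9876543210"))
rows = o.cur.execute("SELECT name FROM Customer").fetchall()
print(rows)  # [('Ganesh',)]
```

Note also that the table stays empty until you run `INSERT` statements; `CREATE TABLE` alone only creates the schema.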

Minimal Typing Practice Application in Python

By: krishna
30 January 2025 at 09:40

Introduction

This is a Python-based single-file application designed for typing practice. It provides a simple interface to improve typing accuracy and speed. Over time, this minimal program has gradually increased my typing skill.

What I Learned from This Project

  • 2D Array Validation
    I first simply used a 1D array to store user input, but I noticed some issues. After implementing a 2D array, I understood why the 2D array was more appropriate for handling user inputs.
  • Tkinter
I wanted to visually see and update correct, wrong, and incomplete typing inputs, but I didn’t know how to implement it in the terminal. So, I used a simple Tkinter GUI window.

Run This Program

It depends on the following applications:

  • Python 3
  • python3-tk

Installation Command on Debian-Based Systems

sudo apt install python3 python3-tk

Clone repository and run program

git clone https://github.com/github-CS-krishna/TerminalTyping
cd TerminalTyping
python3 terminalType.py

Links

For more details, refer to the README documentation on GitHub.

This will help you understand how to use it.

source code(github)

Learning Notes #67 โ€“ Build and Push to a Registry (Docker Hub) with GH-Actions

28 January 2025 at 02:30

GitHub Actions is a powerful tool for automating workflows directly in your repository. In this blog, we’ll explore how to efficiently set up GitHub Actions to handle Docker workflows with environments, secrets, and protection rules.

Why Use GitHub Actions for Docker?

My code base is in GitHub and I want to try out GitHub Actions to build and push images to Docker Hub seamlessly.

Setting Up GitHub Environments

GitHub Environments let you define settings specific to deployment stages. Hereโ€™s how to configure them:

1. Create an Environment

Go to your GitHub repository and navigate to Settings > Environments. Click New environment, name it (e.g., production), and save.

2. Add Secrets and Variables

Inside the environment settings, click Add secret to store sensitive information like DOCKER_USERNAME and DOCKER_TOKEN.

Use Variables for non-sensitive configuration, such as the Docker image name.

3. Optional: Set Protection Rules

Enforce rules like requiring manual approval before deployments. Restrict deployments to specific branches (e.g., main).

Sample Workflow for Building and Pushing Docker Images

Below is a GitHub Actions workflow for automating the build and push of a Docker image based on a minimal Flask app.

Workflow: .github/workflows/docker-build-push.yml


name: Build and Push Docker Image

on:
  push:
    branches:
      - main  # Trigger workflow on pushes to the `main` branch

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    environment: production  # Specify the environment to use

    steps:
      # Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v3

      # Log in to Docker Hub using environment secrets
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}

      # Build the Docker image using an environment variable
      - name: Build Docker image
        env:
          DOCKER_IMAGE_NAME: ${{ vars.DOCKER_IMAGE_NAME }}
        run: |
          docker build -t ${{ secrets.DOCKER_USERNAME }}/$DOCKER_IMAGE_NAME:${{ github.run_id }} .

      # Push the Docker image to Docker Hub
      - name: Push Docker image
        env:
          DOCKER_IMAGE_NAME: ${{ vars.DOCKER_IMAGE_NAME }}
        run: |
          docker push ${{ secrets.DOCKER_USERNAME }}/$DOCKER_IMAGE_NAME:${{ github.run_id }}

See the Actions runs live: https://github.com/syedjaferk/gh_action_docker_build_push_fastapi_app/actions

Build a Product Rental App in Node.js

By: krishna
29 January 2025 at 10:47

Introduction

I created a website called Vinmeen that allows users to rent products for temporary needs at a low cost. The goal was to design a simple UI for users to easily rent things they need temporarily.

Technologies Used

  • Node.js & Express
  • Node Packages
    • Express
    • EJS
    • Nodemailer
    • Bcrypt
    • Multer
    • Sync-SQL
    • MySQL
  • MySQL

What I Learned from This Project

This project helped me understand how dynamic websites work and how template rendering is done. I used EJS for rendering templates, MySQL for database handling, and Bcrypt for securely storing user passwords through hashing. I also learned how to send email notifications with OTP and rent requests, among other things.

Hosting

I hosted the site using two different services

Website Hosting โ€“ Render.com

Render provides free hosting for experimentation and student projects. This plan has minimal resources, but itโ€™s great for learning and testing.

MySQL Database โ€“ Filess.io:

Filess.io offers a free MySQL database with a 10MB size limit and a maximum of 5 concurrent connections. It’s ideal for students and self-study projects, but not recommended for startups or businesses.

Links

HTML- ChessBoard

By: vsraj80
10 January 2025 at 02:57

<!DOCTYPE html>
<html>
<head>
<style>
html {
    background-color: gray;
}
body {
    width: 400px;
    height: 400px;
    margin: 0 auto;
    margin-top: 50px;
    background-color: red;
}
.row {
    width: 400px;
    height: 50px;
}
.color1 {
    width: 50px;
    height: 50px;
    background-color: white;
    border-color: black;
    border-right: none;
    border-style: groove;
    border-width: 1px;
    float: left;
}
.color2 {
    width: 50px;
    height: 50px;
    background-color: red;
    border-color: black;
    border-right: none;
    border-style: groove;
    border-width: 1px;
    float: left;
}
</style>
</head>
<body>
<!-- 8 rows of 8 squares; odd rows start white (color1), even rows start red (color2) -->
<div class="row">
<div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div>
</div>
<div class="row">
<div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div>
</div>
<div class="row">
<div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div>
</div>
<div class="row">
<div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div>
</div>
<div class="row">
<div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div>
</div>
<div class="row">
<div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div>
</div>
<div class="row">
<div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div>
</div>
<div class="row">
<div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div><div class="color2"></div><div class="color1"></div>
</div>
</body>
</html>

SelfHost #2 | BugSink โ€“ An Error Tracking Tool

26 January 2025 at 16:41

I am a regular follower of https://selfh.st/. Last week they showcased BugSink, a tool to track errors in your applications that you can self-host. It’s easy to install and use, is compatible with the Sentry SDK, and is scalable and reliable.

When an application breaks, finding and fixing the root cause quickly is critical. Hosted error tracking tools often make you trade privacy for convenience, and they can be expensive. On the other hand, self-hosted solutions are an alternative, but they are often a pain to set up and maintain.

What Is Error Tracking?

When code is deployed in production, errors are inevitable. They can arise from a variety of reasons like bugs in the code, network failures, integration mismatches, or even unforeseen user behavior. To ensure smooth operation and user satisfaction, error tracking is essential.

Error tracking involves monitoring and recording errors in your application code, particularly in production environments. A good error tracker doesnโ€™t just log errors; it contextualizes them, offering insights that make troubleshooting straightforward.

Here are the key benefits of error tracking

  • Early Detection: Spot issues before they snowball into critical outages.
  • Context-Rich Reporting: Understand the โ€œwhat, when, and whyโ€ of an error.
  • Faster Debugging: Detailed stack traces make it easier to pinpoint root causes.

Effective error tracking tools allow developers to respond to errors proactively, minimizing user impact.

Why Bugsink?

Bugsink takes error tracking to a new level by prioritizing privacy, simplicity, and compatibility.

1. Built for Self-Hosting

Unlike many hosted error tracking tools that require sensitive data to be shared with third-party servers, Bugsink is self-hosted. This ensures you retain full control over your data, a critical aspect for privacy-conscious teams.

2. Easy to Set Up and Manage

Whether youโ€™re deploying it on your local server or in the cloud, the experience is smooth.

3. Resource Efficiency

Bugsink is designed to be lightweight and efficient. It doesnโ€™t demand hefty server resources, making it an ideal choice for startups, small teams, or resource-constrained environments.

4. Compatible with Sentry

If youโ€™ve used Sentry before, youโ€™ll feel right at home with Bugsink. It offers Sentry compatibility, allowing you to migrate effortlessly or use it alongside existing tools. This compatibility also means you can leverage existing SDKs and integrations.

5. Proactive Notifications

Bugsink ensures youโ€™re in the loop as soon as something goes wrong. Email notifications alert you the moment an error occurs, enabling swift action. This proactive approach reduces the mean time to resolution (MTTR) and keeps users happy.

Docs: https://www.bugsink.com/docs/

In this blog, i jot down my experience on using BugSink with Python.

1. Run using Docker

There are many installation options for BugSink, listed at https://www.bugsink.com/docs/installation/. In this blog, i am trying docker.


docker pull bugsink/bugsink:latest

docker run \
  -e SECRET_KEY=ab4xjs5wfnP2XrUwRJPtmk1sEnMcx9d2mta8vtbdZ4oOtvy5BJ \
  -e CREATE_SUPERUSER=admin:admin \
  -e PORT=8000 \
  -p 8000:8000 \
  bugsink/bugsink

2. Log In, Create a Team, Project

The application will run on port 8000.

Log in using admin/admin. Create a new team by clicking the top-right button.

Give a name to the team,

then create a project, under this team,

After creating a project, you will be able to see like below,

You will get an individual DSN, like http://9d0186dd7b854205bed8d60674f349ea@localhost:8000/1.
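The DSN is just a URL that packs together the public key, host, port, and project id. As a quick illustrative sketch (standard library only), it can be pulled apart with urllib.parse:

```python
from urllib.parse import urlsplit

dsn = "http://9d0186dd7b854205bed8d60674f349ea@localhost:8000/1"
parts = urlsplit(dsn)

public_key = parts.username               # 9d0186dd7b854205bed8d60674f349ea
host, port = parts.hostname, parts.port   # localhost, 8000
project_id = parts.path.lstrip("/")       # 1

print(public_key, host, port, project_id)
```

The SDK does this parsing for you; this is only to show whatโ€™s inside the string you copy from the project page.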

3. Attaching the DSN to a Python App



import sentry_sdk

sentry_sdk.init(
    # DSN copied from the BugSink project page
    "http://d76bc0ccf4da4423b71d1fa80d6004a3@localhost:8000/1",

    # include user context (headers, cookies) in events
    send_default_pii=True,
    # capture full request bodies
    max_request_body_size="always",
    # disable performance tracing; we only want errors
    traces_sample_rate=0,
)

def divide(num1, num2):
    return num1 / num2

divide(1, 0)


The above program will raise a ZeroDivisionError, which will be reflected in the BugSink application.

The best part is that you get the values of the variables at that instant. In this example, you can see the values of num1 and num2.

There are a lot more awesome features out there: https://www.bugsink.com/docs/.

Learning Notes #66 โ€“ What is SBOM ? Software Bill of Materials

26 January 2025 at 09:16

Yesterday, i came to know about SBOM from my friend Prasanth Baskar. Letโ€™s say youโ€™re building a website.

You decide to use a popular open-source tool to handle user logins. Hereโ€™s the catch,

  • That library uses another library to store data.
  • That library depends on yet another library to handle passwords.

Now, if one of those libraries has a bug or security issue, how do you even know itโ€™s there? In this blog, i will jot down my understanding on SBOM with Trivy.

What is SBOM ?

A Software Bill of Materials (SBOM) is a list of everything that makes up a piece of software.

Think of it as,

  • A shopping list for all the tools, libraries, and pieces used to build the software.
  • A recipe card showing whatโ€™s inside and how itโ€™s structured.

For software, this means,

  • Components: These are the โ€œingredients,โ€ such as open-source libraries, frameworks, and tools.
  • Versions: Just like you might want to know if the cake uses almond flour or regular flour, knowing the version of a software component matters.
  • Licenses: Did the baker follow the rules for the ingredients they used? Software components also come with licenses that dictate how they can be used.

So Why is it Important?

1. Understanding What Youโ€™re Using

When you download or use software, especially something complex, you often donโ€™t know whatโ€™s inside. An SBOM helps you understand what components are being used. Are they secure? Are they trustworthy?

2. Finding Problems Faster

If someone discovers that a specific ingredient is badโ€”like flour with bacteria in itโ€”youโ€™d want to know if thatโ€™s in your cake. Similarly, if a software library has a security issue, an SBOM helps you figure out if your software is affected and needs fixing.

For example,

When the Log4j vulnerability made headlines, companies that had SBOMs could quickly identify whether they used Log4j and take action.

3. Building Trust

Imagine buying food without a label or list of ingredients.

Youโ€™d feel doubtful, right ? Similarly, an SBOM builds trust by showing users exactly whatโ€™s in the software theyโ€™re using.

4. Avoiding Legal Trouble

Some software components come with specific rules or licenses about how they can be used. An SBOM ensures these rules are followed, avoiding potential legal headaches.

How to Create an SBOM?

For many developers, creating an SBOM manually would be impossible because modern software can have hundreds (or even thousands!) of components.

Thankfully, there are tools that automatically create SBOMs. Examples include,

  • Trivy: A lightweight tool to generate SBOMs and find vulnerabilities.
  • CycloneDX: A popular SBOM format supported by many tools https://cyclonedx.org/
  • SPDX: Another format designed to make sharing SBOMs easier https://spdx.dev/

These tools can scan your software and automatically list out every component, its version, and its dependencies.

We will see example on generating a SBOM file for nginx using trivy.

How Trivy Works ?

On running trivy scan,

1. It downloads the Trivy DB, including vulnerability information.

2. Pulls missing layers into the cache.

3. Analyzes the layers and stores the information in the cache.

4. Detects security issues and writes them to the SBOM file.

Note: a CVE refers to a Common Vulnerabilities and Exposures identifier. A CVE is a unique code used to catalog and track publicly known security vulnerabilities and exposures in software or systems.

How to Generate SBOMs with Trivy

Step 1: Install Trivy in Ubuntu

sudo apt-get install wget gnupg
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb generic main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy

More on Installation: https://github.com/aquasecurity/trivy/blob/main/docs/getting-started/installation.md

Step 2: Generate an SBOM

Trivy allows you to create SBOMs in formats like CycloneDX or SPDX.

trivy image --format cyclonedx --output sbom.json nginx:latest

This generates the SBOM file sbom.json.

Trivy can also be incorporated into GitHub CI/CD pipelines.
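Once generated, sbom.json is plain JSON; in the CycloneDX layout the packages live under a top-level components array. As an illustrative sketch (with a tiny hand-made document standing in for a real Trivy output), checking whether a given library is present, say to answer the Log4j question above, takes a few lines:

```python
import json

# A tiny hand-made CycloneDX-style document standing in for sbom.json
sbom_text = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "openssl", "version": "3.0.11"},
    {"name": "zlib", "version": "1.2.13"}
  ]
}
"""

sbom = json.loads(sbom_text)
components = {c["name"]: c["version"] for c in sbom.get("components", [])}

# Is a given library present, and at which version?
print("openssl" in components, components.get("openssl"))
```

For a real scan you would read the file Trivy wrote (json.load(open("sbom.json"))) instead of the inline string.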

Event Summary: FOSS United Chennai Meetup โ€“ 25-01-2025

26 January 2025 at 04:53

๐Ÿš€ Attended the FOSS United Chennai Meetup Yesterday! ๐Ÿš€

After attending the Grafana & Friends meetup, I went straight to the FOSS United Chennai Meetup at YuniQ in Taramani.

Had a chance to meet my friends face to face after a long time: Sakhil Ahamed E., Dhanasekar T, Dhanasekar Chellamuthu, Thanga Ayyanar, Parameshwar Arunachalam, Guru Prasath S, Krisha, Gopinathan Asokan.

Talks Summary,

1. Ansh Arora gave a tour of FOSS United: how it was formed, its motto, FOSS Hack, and FOSS Clubs.

2. Karthikeyan A K gave a talk on his open source product injee (the no-configuration instant database for frontend developers). He gave me a personal demo. Itโ€™s a great tool with a lot of potential. Would like to contribute!

3. Justin Benito shared how they celebrated New Year with https://tamilnadu.tech, a single go-to page for events in Tamil Nadu. If you are interested, go to the repo https://lnkd.in/geKFqnFz and contribute.

From Kaniyam Foundation we are maintaining a Google Calendar for a long time on Tech Events happening in Tamil Nadu https://lnkd.in/gbmGMuaa.

4. Prasanth Baskar gave a talk on Harbor, an OSS container registry with SBOM generation and more functionality. SBOM was new to me.

5. Thanga Ayyanar gave a talk on static site generation with Emacs.

At the end, we had a group photo and went for tea. Got to meet my juniors from St. Josephโ€™s Institute of Technology at this meet. Had a discussion with Parameshwar Arunachalam on his BuildToLearn experience; they started prototyping a Tinder-style app for Tamil words. After that, had a small discussion on our Feb 8th GLUG inauguration at St. Josephโ€™s Institute of Technology with Dr. KARTHI M.

Happy to see, lot of minds travelling from different districts to attend this meet.

Event Summary: Grafana & Friends Meetup Chennai โ€“ 25-01-2025

26 January 2025 at 04:47

๐Ÿš€ Attended the Grafana & Friends Meetup Yesterday! ๐Ÿš€

I usually have a question: as a developer, I have logs, isnโ€™t that enough? With a curious mind, I attended the Grafana & Friends Chennai meetup (Jan 25th 2025).

Had an awesome time meeting fellow tech enthusiasts (devops engineers) and learning about cool ways to monitor and understand data better.
Big shoutout to the Grafana Labs community and Presidio for hosting such a great event!

Sandwich and Juice was nice ๐Ÿ˜‹

Talk Summary,

1โƒฃ Making Data Collection Easier with Grafana Alloy
Dinesh J. and Krithika R shared how Grafana Alloy, combined with Open Telemetry, makes it super simple to collect and manage data for better monitoring.

2โƒฃ Running Grafana in Kubernetes
Lakshmi Narasimhan Parthasarathy (https://lnkd.in/gShxtucZ) showed how to set up Grafana in Kubernetes in 4 different ways (vanilla, helm chart, grafana operator, kube-prom-stack). He is building a SaaS product https://lnkd.in/gSS9XS5m (Heroku on your own servers).

3โƒฃ Observability for Frontend Apps with Grafana Faro
Selvaraj Kuppusamy showed how Grafana Faro can help frontend developers monitor whatโ€™s happening on websites and apps in real time, making it easier to spot and fix issues quickly. We were able to see core web vitals and traces too. I was surprised by this.

Techies i had interaction with,

Prasanth Baskar, who is an open source contributor at the Cloud Native Computing Foundation (CNCF) on the project https://lnkd.in/gmHjt9Bs. I was also happy to know that he knows **parottasalna** (thatโ€™s me) and has read some of my blogs. Happy to hear that.

Selvaraj Kuppusamy, DevOps Engineer, is also conducting the Grafana and Friends chapter meetup in Coimbatore on Feb 1. I will attend that as well.

Saranraj Chandrasekaran who is also a devops engineer, Had a chat with him on devops and related stuffs.

To all of them, i shared about KanchiLUG (https://lnkd.in/gasCnxXv) and Parottasalna (https://parottasalna.com/) and My Channel on Tech https://lnkd.in/gKcyE-b5.

Thanks Achanandhi M for organising this wonderful meetup. You did well. I came to know Achanandhi M from Medium. He regularly writes blogs on cloud-related stuff. https://lnkd.in/ghUS-GTc Check out his blog.

Also, he shared some tasks for us,

1. Create your first Grafana dashboard.
Objective: Create a basic Grafana dashboard to visualize data in various formats such as tables, charts and graphs. Also, try to connect to multiple data sources to get diverse data for your dashboard.

2. Monitor your Linux systemโ€™s health with Prometheus, Node Exporter and Grafana.
Objective: Use Prometheus, Node Exporter and Grafana to monitor your Linux machineโ€™s health by tracking key metrics like CPU, memory and disk usage.


3. Using Grafana Faro to track User Actions (Like Button Clicks) and Identify the Most Used Features.

Give a try on these.

RSVP for RabbitMQ: Build Scalable Messaging Systems in Tamil

24 January 2025 at 11:21

Hi All,

Invitation to RabbitMQ Session

๐Ÿ”น Topic: RabbitMQ: Asynchronous Communication
๐Ÿ”น Date: Feb 2 Sunday
๐Ÿ”น Time: 10:30 AM to 1 PM
๐Ÿ”น Venue: Online. Will be shared in mail after RSVP.

Join us for an in-depth session on RabbitMQ in Tamil, where weโ€™ll explore,

  • Message queuing fundamentals
  • Connections, channels, and virtual hosts
  • Exchanges, queues, and bindings
  • Publisher confirmations and consumer acknowledgments
  • Use cases and live demos

Whether youโ€™re a developer, DevOps enthusiast, or curious learner, this session will empower you with the knowledge to build scalable and efficient messaging systems.

๐Ÿ“Œ Donโ€™t miss this opportunity to level up your messaging skills!

RSVP closed !

Our Previous Monthly meets โ€“ https://www.youtube.com/watch?v=cPtyuSzeaa8&list=PLiutOxBS1MizPGGcdfXF61WP5pNUYvxUl&pp=gAQB

Our Previous Sessions,

  1. Python โ€“ https://www.youtube.com/watch?v=lQquVptFreE&list=PLiutOxBS1Mizte0ehfMrRKHSIQcCImwHL&pp=gAQB
  2. Docker โ€“ https://www.youtube.com/watch?v=nXgUBanjZP8&list=PLiutOxBS1Mizi9IRQM-N3BFWXJkb-hQ4U&pp=gAQB
  3. Postgres โ€“ https://www.youtube.com/watch?v=04pE5bK2-VA&list=PLiutOxBS1Miy3PPwxuvlGRpmNo724mAlt&pp=gAQB

Our Social Handles,

โŒ
โŒ