For the last few days, I have been exploring Buildpacks. I am amazed at this tool's features for reducing the developer's pain. In this blog, I jot down my experience with Buildpacks.
Before trying out Buildpacks, we need to understand what an OCI image is.
What is an OCI ?
An OCI Image (Open Container Initiative Image) is a standard format for container images, defined by the Open Container Initiative (OCI) to ensure interoperability across different container runtimes (Docker, Podman, containerd, etc.).
It consists of:
Manifest: Metadata describing the image (layers, config, etc.).
Config JSON: Information about how the container should run (CMD, ENV, etc.).
Filesystem Layers: The actual file system of the container.
OCI Image Specification ensures that container images built once can run on any OCI-compliant runtime.
Does Docker Create OCI Images?
Yes, Docker creates OCI-compliant images. Since Docker v1.10+, Docker has been aligned with the OCI Image Specification, and all Docker images are OCI-compliant by default.
When you build an image with docker build, it follows the OCI Image format.
When you push/pull images to registries like Docker Hub, they follow the OCI Image Specification.
However, Docker also supports its legacy Docker Image format, which existed before OCI was introduced. Most modern registries and runtimes (Kubernetes, Podman, containerd) support OCI images natively.
What is a Buildpack ?
A buildpack is a framework for transforming application source code into a runnable image by handling dependencies, compilation, and configuration. Buildpacks are widely used in cloud environments like Heroku, Cloud Foundry, and Kubernetes (via Cloud Native Buildpacks).
Overview of Buildpack Process
The buildpack process consists of two primary phases
Detection Phase: Determines if the buildpack should be applied based on the appโs dependencies.
Build Phase: Executes the necessary steps to prepare the application for running in a container.
Buildpacks work with a lifecycle manager (e.g., Cloud Native Buildpacksโ lifecycle) that orchestrates the execution of multiple buildpacks in an ordered sequence.
Builder: The Image That Executes the Build
A builder is an image that contains all necessary components to run a buildpack.
Components of a Builder Image
Build Image: Used during the build phase (includes compilers, dependencies, etc.).
Run Image: A minimal environment for running the final built application.
Lifecycle: The core mechanism that executes buildpacks, orchestrates the process, and ensures reproducibility.
Stack: The Combination of Build and Run Images
Build Image + Run Image = Stack
Build Image: Base OS with tools required for building (e.g., Ubuntu, Alpine).
Run Image: Lightweight OS with only the runtime dependencies for execution.
With Buildpacks, there is no Dockerfile to write: running a build against a Python project detects Python, installs dependencies, and builds the app into a container image. Docker, in contrast, requires a Dockerfile, which developers must manually configure and maintain.
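For example, building a Python app is a single command. A sketch (the app and builder names here are only illustrative; any CNB builder, such as a Paketo or Heroku builder, can be passed with --builder):
pack build my-python-app --builder paketobuildpacks/builder-jammy-base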
Automatic Security Updates
Buildpacks automatically patch base images for security vulnerabilities.
If there's a CVE in the OS layer, Buildpacks update the base image without rebuilding the app.
pack rebase my-python-app
No need to rebuild! It replaces only the OS layers while keeping the app the same.
Standardized & Reproducible Builds
Ensures consistent images across environments (dev, CI/CD, production). Example: Running the same build locally and on Heroku/Cloud Run,
pack build my-app
Extensibility: Custom Buildpacks
Developers can create custom Buildpacks to add special dependencies.
Let's take the example of an online food ordering system like Swiggy or Zomato. Suppose a user places an order through the mobile app. If the application follows a synchronous approach, it would first send the order request to the restaurant's system and then wait for confirmation. If the restaurant is busy, the app will have to keep waiting until it receives a response.
If the restaurant's system crashes or temporarily goes offline, the order will fail, and the user may have to restart the process.
This approach leads to a poor user experience, increases the chances of failures, and makes the system less scalable, as multiple users waiting simultaneously can cause a bottleneck.
In a traditional synchronous communication model, one service directly interacts with another and waits for a response before proceeding. While this approach is simple and works for small-scale applications, it introduces several challenges, especially in systems that require high availability and scalability.
The main problems with synchronous communication include slow performance, system failures, and scalability issues. If the receiving service is slow or temporarily unavailable, the sender has no choice but to wait, which can degrade the overall performance of the application.
Moreover, if the receiving service crashes, the entire process fails, leading to potential data loss or incomplete transactions.
In this book, we will look at how this problem can be solved with a message queue.
What is a Message Queue ?
A message queue is a system that allows different parts of an application (or different applications) to communicate with each other asynchronously by sending and receiving messages.
It acts like a buffer or an intermediary where messages are stored until the receiving service is ready to process them.
How It Works
A producer (sender) creates a message and sends it to the queue.
The message sits in the queue until a consumer (receiver) picks it up.
The consumer processes the message and removes it from the queue.
This process ensures that the sender does not have to wait for the receiver to be available, making the system faster, more reliable, and scalable.
Real-Life Example
Imagine a fast-food restaurant where customers place orders at the counter. Instead of waiting at the counter for their food, customers receive a token number and move aside. The kitchen prepares the order in the background, and when it's ready, the token number is called for pickup.
In this analogy,
The counter is the producer (sending orders).
The queue is the token system (storing orders).
The kitchen is the consumer (processing orders).
The customer picks up the food when ready (message is consumed).
Similarly, in applications, a message queue helps decouple systems, allowing them to work at their own pace without blocking each other. RabbitMQ, Apache Kafka, and Redis are popular message queue systems used in modern software development.
So Problem Solved !!! Not Yet
It seems like the problem is solved, but the message lifecycle inside the queue still needs to be handled.
Message Routing & Binding (Optional): How a message is routed. If an exchange is used, the message is routed based on predefined rules.
Message Storage (Queue Retention): How long a message stays in the queue. The message stays in the queue until a consumer picks it up.
If the consumer successfully processes the message, it sends an acknowledgment (ACK), and the message is removed. If the consumer fails, the message requeues or moves to a dead-letter queue (DLQ).
Messages that fail multiple times, are not acknowledged, or expire may be moved to a Dead-Letter Queue for further analysis.
Messages stored only in memory can be lost if RabbitMQ crashes.
Messages not consumed within their TTL expire.
If a consumer fails to acknowledge a message, it may be redelivered and processed more than once.
Messages failing multiple times may be moved to a DLQ.
Too many messages in the queue due to slow consumers can cause system slowdowns.
Network failures can disrupt message delivery between producers, RabbitMQ, and consumers.
Messages with corrupt or bad data may cause repeated consumer failures.
To handle all the above problems, we need a stable, battle-tested, reliable tool. RabbitMQ is one such tool. In this book, we will cover the basics of RabbitMQ.
Imagine you're sending messages between friends, but instead of delivering them directly, you drop them in a mailbox, and your friend picks them up when they are ready. RabbitMQ acts like this mailbox, but for computer programs. It helps applications communicate asynchronously, meaning they don't have to wait for each other to process data.
RabbitMQ is a message broker, which means it handles and routes messages between different parts of an application. It ensures that messages are delivered efficiently, even when some components are running at different speeds or go offline temporarily.
Why Use RabbitMQ?
Modern applications often consist of multiple services that need to exchange data. Sometimes, one service produces data faster than another can consume it. Instead of forcing the slower service to catch up or making the faster service wait, RabbitMQ allows the fast service to place messages in a queue. The slow service can then process them at its own pace.
Some key benefits of using RabbitMQ include,
Decoupling services: Components communicate via messages rather than direct calls, reducing dependencies.
Scalability: RabbitMQ allows multiple consumers to process messages in parallel.
Reliability: It supports message durability and acknowledgments, preventing message loss.
Flexibility: Works with many programming languages and integrates well with different systems.
Efficient Load Balancing: Multiple consumers can share the message load to prevent overload on a single component.
Key Features and Use Cases
RabbitMQ is widely used in different applications, including
Chat applications: Messages are queued and delivered asynchronously to users.
Payment processing: Orders are placed in a queue and processed sequentially.
Event-driven systems: Used for microservices communication and event notification.
IoT systems: Devices publish data to RabbitMQ, which is then processed by backend services.
Job queues: Background tasks such as sending emails or processing large files.
Building Blocks of Message Broker
Connection & Channels
In RabbitMQ, connections and channels are fundamental concepts for communication between applications and the broker,
Connections: A connection is a TCP link between a client (producer or consumer) and the RabbitMQ broker. Each connection consumes system resources and is relatively expensive to create and maintain.
Channels: A channel is a virtual communication path inside a connection. It allows multiple logical streams of data over a single TCP connection, reducing overhead. Channels are lightweight and preferred for performing operations like publishing and consuming messages.
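A minimal sketch of this with the pika client (assuming a RabbitMQ broker on localhost): one connection carries several independent channels.
import pika

# One TCP connection to the broker...
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))

# ...carrying multiple lightweight channels for independent streams of work
publish_channel = connection.channel()
consume_channel = connection.channel()

print("Opened two channels over a single connection")
connection.close()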
Queues: Message Store
A queue is a message buffer that temporarily holds messages until a consumer retrieves and processes them.
1. Queues operate on a FIFO (First In, First Out) basis, meaning messages are processed in the order they arrive (unless priorities or other delivery strategies are set).
2. Queues persist messages if they are declared as durable and the messages are marked as persistent, ensuring reliability even if RabbitMQ restarts.
3. Multiple consumers can subscribe to a queue, and messages can be distributed among them in a round-robin manner.
A queue can be consumed by multiple consumers, and messages can also be broadcast to multiple queues (via a fanout exchange, described below).
4. If no consumers are available, messages remain in the queue until a consumer connects.
Analogy: Think of a queue as a to-do list where tasks (messages) are stored until someone (a worker/consumer) picks them up and processes them.
Exchanges: Message Distributor and Binding
An exchange is responsible for routing messages to one or more queues based on routing rules.
When a producer sends a message, it doesn't go directly to a queue; it first reaches an exchange, which decides where to forward it.
The link between the exchange and the queue is called a binding, and it guides messages to the right place.
RabbitMQ supports different types of exchanges
Direct Exchange (direct)
Routes messages to queues based on an exact match between the routing key and the queueโs binding key.
Example: Sending messages to a specific queue based on a severity level (info, error, warning).
Fanout Exchange (fanout)
Routes messages to all bound queues, ignoring routing keys.
Example: Broadcasting notifications to multiple services at once.
Topic Exchange (topic)
Routes messages based on pattern matching using * (matches one word) and # (matches multiple words).
Example: Routing logs where log.info goes to one queue, log.error goes to another, and log.* captures all.
Headers Exchange (headers)
Routes messages based on message headers instead of routing keys.
Example: Delivering messages based on metadata like device: mobile or region: US.
Analogy: An exchange is like a traffic controller that decides which road (queue) a vehicle (message) should take based on predefined rules.
Binding
A binding is a link between an exchange and a queue that defines how messages should be routed.
When a queue is bound to an exchange with a binding key, messages with a matching routing key are delivered to that queue.
A queue can have multiple bindings to different exchanges, allowing it to receive messages from multiple sources.
Example:
A queue named error_logs can be bound to a direct exchange with a binding key error.
Another queue, all_logs, can be bound to the same exchange with a binding key # (wildcard in a topic exchange) to receive all logs.
Analogy: A binding is like a GPS route guiding messages (vehicles) from the exchange (traffic controller) to the right queue (destination).
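To make exchanges and bindings concrete, here is a small pika sketch (exchange and queue names are illustrative, and a local broker is assumed) that declares a direct, a topic, and a fanout exchange and binds the error_logs and all_logs queues mentioned above:
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Direct exchange: exact match between routing key and binding key
channel.exchange_declare(exchange='logs_direct', exchange_type='direct')
channel.queue_declare(queue='error_logs')
channel.queue_bind(exchange='logs_direct', queue='error_logs', routing_key='error')

# Topic exchange: '*' matches one word, '#' matches zero or more words
channel.exchange_declare(exchange='logs_topic', exchange_type='topic')
channel.queue_declare(queue='all_logs')
channel.queue_bind(exchange='logs_topic', queue='all_logs', routing_key='#')

# Fanout exchange: every bound queue receives a copy, routing key is ignored
channel.exchange_declare(exchange='broadcast', exchange_type='fanout')

# A message published with routing key 'error' lands in error_logs
channel.basic_publish(exchange='logs_direct', routing_key='error', body='disk full')
connection.close()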
Producing, Consuming and Acknowledging
RabbitMQ follows the producer-exchange-queue-consumer model,
Producing messages (Publishing): A producer creates a message and sends it to RabbitMQ, which routes it to the correct queue.
Consuming messages (Subscribing): A consumer listens for messages from the queue and processes them.
Acknowledgment: The consumer sends an acknowledgment (ack) after successfully processing a message.
Durability: Ensures messages and queues survive RabbitMQ restarts.
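Durability needs both pieces: a durable queue and persistent messages. A minimal pika sketch, assuming the channel from the earlier example and an illustrative queue name:
# Durable queue: the queue definition survives a broker restart
channel.queue_declare(queue='payments_queue', durable=True)

# Persistent message: delivery_mode=2 asks RabbitMQ to write it to disk
channel.basic_publish(
    exchange='',
    routing_key='payments_queue',
    body='invoice #123',
    properties=pika.BasicProperties(delivery_mode=2),
)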
Why do we need an Acknowledgement ?
Ensures message reliability: prevents messages from being lost if a consumer crashes.
Prevents message loss: messages are redelivered if no ACK is received.
Avoids unintentional message deletion: messages stay in the queue until properly processed.
Supports at-least-once delivery: ensures every message is processed at least once.
Enables load balancing: distributes messages fairly among multiple consumers.
Allows manual control: consumers can acknowledge only after successful processing.
Handles redelivery: messages can be requeued and sent to another consumer if needed.
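A manual-acknowledgment sketch in pika (queue name and handler are illustrative): the consumer ACKs on success and NACKs with requeue on failure, so the message is not lost.
def handle_task(ch, method, properties, body):
    try:
        print(f"Processing {body.decode()}")
        ch.basic_ack(delivery_tag=method.delivery_tag)   # success: remove from the queue
    except Exception:
        # failure: reject and requeue so it can be retried or dead-lettered
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

channel.queue_declare(queue='task_queue')
channel.basic_consume(queue='task_queue', on_message_callback=handle_task, auto_ack=False)
channel.start_consuming()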
Problem #1: Task Queue for Background Job Processing
Context
A company runs an image processing application where users upload images that need to be resized, watermarked, and optimized before they can be served. Processing these images synchronously would slow down the user experience, so the company decides to implement an asynchronous task queue using RabbitMQ.
Problem
Users upload large images that require multiple processing steps.
Processing each image synchronously blocks the application, leading to slow response times.
High traffic results in queue buildup, making it challenging to scale the system efficiently.
Proposed Solution
1. Producer Service
Publishes image processing tasks to a RabbitMQ exchange (task_exchange).
Sends the image filename as the message body to the queue (image_queue).
2. Worker Consumers
Listen for new image processing tasks from the queue.
Process each image (resize, watermark, optimize, etc.).
Acknowledge completion to ensure no duplicate processing.
3. Scalability
Multiple workers can run in parallel to process images faster.
producer.py
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
# Declare exchange and queue
channel.exchange_declare(exchange='task_exchange', exchange_type='direct')
channel.queue_declare(queue='image_queue')
# Bind queue to exchange
channel.queue_bind(exchange='task_exchange', queue='image_queue', routing_key='image_task')
# List of images to process
images = ["image1.jpg", "image2.jpg", "image3.jpg"]
for image in images:
    channel.basic_publish(exchange='task_exchange', routing_key='image_task', body=image)
    print(f" [x] Sent {image}")
connection.close()
consumer.py
import pika
import time
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
# Declare exchange and queue
channel.exchange_declare(exchange='task_exchange', exchange_type='direct')
channel.queue_declare(queue='image_queue')
# Bind queue to exchange
channel.queue_bind(exchange='task_exchange', queue='image_queue', routing_key='image_task')
def process_image(ch, method, properties, body):
    print(f" [x] Processing {body.decode()}")
    time.sleep(2)  # Simulate processing time
    print(f" [x] Finished {body.decode()}")
    ch.basic_ack(delivery_tag=method.delivery_tag)
# Start consuming
channel.basic_consume(queue='image_queue', on_message_callback=process_image)
print(" [*] Waiting for image tasks. To exit press CTRL+C")
channel.start_consuming()
Problem #2: Broadcasting News to All Subscribers
Problem
A news application wants to send breaking news alerts to all subscribers, regardless of their location or interest.
Use a fanout exchange (news_alerts_exchange) to broadcast messages to all connected queues, ensuring all users receive the alert.
The producer sends a news alert to the fanout exchange (news_alerts_exchange).
All queues (mobile_app_queue, email_alert_queue, web_notification_queue) bound to the exchange receive the message.
Each consumer listens to its queue and processes the alert.
This setup ensures all users receive the alert simultaneously across different platforms.
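A producer-side sketch of this setup with pika (broker on localhost assumed; exchange and queue names follow the description above):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Fanout exchange: every bound queue gets its own copy of the alert
channel.exchange_declare(exchange='news_alerts_exchange', exchange_type='fanout')

for queue in ['mobile_app_queue', 'email_alert_queue', 'web_notification_queue']:
    channel.queue_declare(queue=queue)
    channel.queue_bind(exchange='news_alerts_exchange', queue=queue)

# Routing key is ignored by fanout exchanges
channel.basic_publish(exchange='news_alerts_exchange', routing_key='',
                      body='Breaking news: ...')
print(" [x] Alert broadcast to all subscriber queues")
connection.close()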
Intermediate Resources
Prefetch Count
Prefetch is a mechanism that defines how many messages can be delivered to a consumer at a time before the consumer sends an acknowledgment back to the broker. This ensures that the consumer does not get overwhelmed with too many unprocessed messages, which could lead to high memory usage and potential performance issues.
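In pika, the prefetch count is set with basic_qos. A sketch building on the earlier consumer.py (the value of 1 is just illustrative):
# Deliver at most one unacknowledged message to this consumer at a time;
# the next message is dispatched only after the current one is ACKed.
channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue='image_queue', on_message_callback=process_image)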
The Request-Reply Pattern is a fundamental communication style in distributed systems, where a requester sends a message to a responder and waits for a reply. It's widely used in systems that require synchronous communication, enabling the requester to receive a response for further processing.
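A requester-side sketch of this pattern in pika (queue names are illustrative): the request carries a reply_to queue and a correlation_id so the responder knows where to send the answer and the requester can match it.
import uuid
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.queue_declare(queue='rpc_queue')                   # the responder listens here
result = channel.queue_declare(queue='', exclusive=True)   # auto-named reply queue
callback_queue = result.method.queue

channel.basic_publish(
    exchange='',
    routing_key='rpc_queue',
    properties=pika.BasicProperties(
        reply_to=callback_queue,            # where the responder should publish the reply
        correlation_id=str(uuid.uuid4()),   # lets the requester match reply to request
    ),
    body='compute this',
)
# The requester then consumes from callback_queue and waits for the matching correlation_id.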
A dead letter is a message that cannot be delivered to its intended queue or is rejected by a consumer. Common scenarios where messages are dead lettered include,
Message Rejection: A consumer explicitly rejects a message without requeuing it.
Message TTL (Time-To-Live) Expiry: The message remains in the queue longer than its TTL.
Queue Length Limit: The queue has reached its maximum capacity, and new messages are dropped.
Routing Failures: Messages that cannot be routed to any queue from an exchange.
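RabbitMQ wires dead lettering up with queue arguments. A pika sketch (names, TTL, and length limit are illustrative):
# Exchange and queue that will receive the dead letters
channel.exchange_declare(exchange='dlx_exchange', exchange_type='fanout')
channel.queue_declare(queue='dead_letters')
channel.queue_bind(exchange='dlx_exchange', queue='dead_letters')

# Work queue that dead-letters messages instead of silently dropping them
channel.queue_declare(queue='orders_queue', arguments={
    'x-dead-letter-exchange': 'dlx_exchange',  # rejected/expired messages go here
    'x-message-ttl': 60000,                    # unconsumed messages expire after 60 s
    'x-max-length': 10000,                     # length limit; overflow is dead-lettered
})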
An alternate exchange in RabbitMQ is a fallback exchange configured for another exchange. If a message cannot be routed to any queue bound to the primary exchange, RabbitMQ will publish the message to the alternate exchange instead. This mechanism ensures that undeliverable messages are not lost but can be processed in a different way, such as logging, alerting, or storing them for later inspection.
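A pika sketch of an alternate exchange (names are illustrative); the primary exchange is declared with the alternate-exchange argument:
# Fallback destination for messages the primary exchange cannot route
channel.exchange_declare(exchange='unrouted_exchange', exchange_type='fanout')
channel.queue_declare(queue='unrouted_messages')
channel.queue_bind(exchange='unrouted_exchange', queue='unrouted_messages')

# Primary exchange points at the alternate exchange
channel.exchange_declare(
    exchange='orders_exchange',
    exchange_type='direct',
    arguments={'alternate-exchange': 'unrouted_exchange'},
)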
CDC stands for Change Data Capture. It's a technique that listens to a database and captures every change that happens in it. These changes can then be sent to other systems to,
Keep data in sync across multiple databases.
Power real-time analytics dashboards.
Trigger notifications for certain database events.
Backpressure occurs when a downstream system (consumer) cannot keep up with the rate of data being sent by an upstream system (producer). In distributed systems, this can arise in scenarios such as
A message queue filling up faster than it is drained.
A database struggling to handle the volume of write requests.
In the Choreography Pattern, services communicate directly with each other via asynchronous events, without a central controller. Each service is responsible for a specific part of the workflow and responds to events produced by other services. This pattern allows for a more autonomous and loosely coupled system.
The Outbox Pattern is a proven architectural solution to the problem of keeping database updates and outgoing messages consistent, helping developers manage data consistency, especially when dealing with events, messaging systems, or external APIs.
The Queue-Based Loading Pattern leverages message queues to decouple and coordinate tasks between producers (such as applications or services generating data) and consumers (services or workers processing that data). By using queues as intermediaries, this pattern allows systems to manage workloads efficiently, ensuring seamless and scalable operation.
The Two-Phase Commit (2PC) protocol is a distributed algorithm used to ensure atomicity in transactions spanning multiple nodes or databases. Atomicity ensures that either all parts of a transaction are committed or none are, maintaining consistency in distributed systems.
The competing consumer pattern involves multiple consumers that independently compete to process messages or tasks from a shared queue. This pattern is particularly effective in scenarios where the rate of incoming tasks is variable or high, as it allows multiple consumers to process tasks concurrently.
The Retry Pattern is a design strategy used to manage transient failures by retrying failed operations. Instead of immediately failing an operation after an error, the pattern retries it with an optional delay or backoff strategy. This is particularly useful in distributed systems where failures are often temporary.
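A minimal, framework-free sketch of the idea in Python (function and parameter names are made up for illustration): retry a flaky call with exponential backoff.
import time

def with_retries(operation, max_attempts=5, base_delay=1.0):
    """Run operation(), retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as exc:
            if attempt == max_attempts:
                raise                                   # give up after the final attempt
            delay = base_delay * (2 ** (attempt - 1))   # 1 s, 2 s, 4 s, ...
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay}s")
            time.sleep(delay)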
Developers try to use their RDBMS as a way to do background processing or service communication. While this can often appear to "get the job done", there are a number of limitations and concerns with this approach.
There are two divisions to any asynchronous processing: the service(s) that create processing tasks and the service(s) that consume and process these tasks accordingly.
In the rapidly evolving landscape of artificial intelligence, Retrieval Augmented Generation (RAG) systems have emerged as a crucial technology for enhancing Large Language Models with external knowledge. However, ensuring the quality and reliability of these systems requires robust evaluation methods. Enter RAGAS (Retrieval Augmented Generation Assessment System), a groundbreaking framework that provides comprehensive metrics for evaluating RAG systems.
The Importance of RAG Evaluation
RAG systems combine the power of retrieval mechanisms with generative AI to produce more accurate and contextually relevant responses. However, their complexity introduces multiple potential points of failure, from retrieval accuracy to answer generation quality. This is where RAGAS steps in, offering a structured approach to assessment that helps developers and organizations maintain high standards in their RAG implementations.
Core RAGAS Metrics
Context Precision
Context precision measures how relevant the retrieved information is to the given query. This metric evaluates whether the system is pulling in the right pieces of information from its knowledge base. A high context precision score indicates that the retrieval component is effectively identifying and selecting relevant content, while a low score might suggest that the system is retrieving tangentially related or irrelevant information.
Faithfulness
Faithfulness assesses the alignment between the generated answer and the provided context. This crucial metric ensures that the system's responses are grounded in the retrieved information rather than hallucinated or drawn from the model's pre-trained knowledge. A faithful response should be directly supported by the context, without introducing external or contradictory information.
Answer Relevancy
The answer relevancy metric evaluates how well the generated response addresses the original question. This goes beyond mere factual accuracy to assess whether the answer provides the information the user was seeking. A highly relevant answer should directly address the query's intent and provide appropriate detail level.
Context Recall
Context recall compares the retrieved contexts against ground truth information, measuring how much of the necessary information was successfully retrieved. This metric helps identify cases where critical information might be missing from the system's responses, even if what was retrieved was accurate.
Practical Implementation
RAGAS's implementation is designed to be straightforward while providing deep insights. The framework accepts evaluation datasets containing:
Questions posed to the system
Retrieved contexts for each question
Generated answers
Ground truth answers for comparison
This structured approach allows for automated evaluation across multiple dimensions of RAG system performance, providing a comprehensive view of system quality.
Benefits and Applications
Quality Assurance
RAGAS enables continuous monitoring of RAG system performance, helping teams identify degradation or improvements over time. This is particularly valuable when making changes to the retrieval mechanism or underlying models.
Development Guidance
The granular metrics provided by RAGAS help developers pinpoint specific areas needing improvement. For instance, low context precision scores might indicate the need to refine the retrieval strategy, while poor faithfulness scores might suggest issues with the generation parameters.
Comparative Analysis
Organizations can use RAGAS to compare different RAG implementations or configurations, making it easier to make data-driven decisions about system architecture and deployment.
Best Practices for RAGAS Implementation
Regular Evaluation
Implement RAGAS as part of your regular testing pipeline to catch potential issues early and maintain consistent quality.
Diverse Test Sets
Create evaluation datasets that cover various query types, complexities, and subject matters to ensure robust assessment.
Metric Thresholds
Establish minimum acceptable scores for each metric based on your application's requirements and use these as quality gates in your deployment process.
Iterative Refinement
Use RAGAS metrics to guide iterative improvements to your RAG system, focusing on the areas showing the lowest performance scores.
Practical Code Examples
Basic RAGAS Evaluation
Here's a simple example of how to implement RAGAS evaluation in your Python code:
from ragas import evaluate
from datasets import Dataset
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_precision
)

def evaluate_rag_system(questions, contexts, answers, references):
    """
    Simple function to evaluate a RAG system using RAGAS

    Args:
        questions (list): List of questions
        contexts (list): List of contexts for each question
        answers (list): List of generated answers
        references (list): List of reference answers (ground truth)

    Returns:
        EvaluationResult: RAGAS evaluation results
    """
    # First, let's make sure you have the required packages
    try:
        import ragas
        import datasets
    except ImportError:
        print("Please install required packages:")
        print("pip install ragas datasets")
        return None

    # Prepare evaluation dataset
    eval_data = {
        "question": questions,
        "contexts": [[ctx] for ctx in contexts],  # RAGAS expects list of lists
        "answer": answers,
        "reference": references
    }

    # Convert to Dataset format
    eval_dataset = Dataset.from_dict(eval_data)

    # Run evaluation with key metrics
    results = evaluate(
        eval_dataset,
        metrics=[
            faithfulness,        # Measures if answer is supported by context
            answer_relevancy,    # Measures if answer is relevant to question
            context_precision    # Measures if retrieved context is relevant
        ]
    )

    return results


# Example usage
if __name__ == "__main__":
    # Sample data
    questions = [
        "What are the key features of Python?",
        "How does Python handle memory management?"
    ]
    contexts = [
        "Python is a high-level programming language known for its simple syntax and readability. It supports multiple programming paradigms including object-oriented, imperative, and functional programming.",
        "Python uses automatic memory management through garbage collection. It employs reference counting as the primary mechanism and has a cycle-detecting garbage collector for handling circular references."
    ]
    answers = [
        "Python is known for its simple syntax and readability, and it supports multiple programming paradigms including OOP.",
        "Python handles memory management automatically through garbage collection, using reference counting and cycle detection."
    ]
    references = [
        "Python's key features include readable syntax and support for multiple programming paradigms like OOP, imperative, and functional programming.",
        "Python uses automatic garbage collection with reference counting and cycle detection for memory management."
    ]

    # Run evaluation
    results = evaluate_rag_system(
        questions=questions,
        contexts=contexts,
        answers=answers,
        references=references
    )

    if results:
        # Print results
        print("\nRAG System Evaluation Results:")
        print(results)
In Canada, we have around 15 days of winter break for all school kids, covering Christmas and New Year.
These celebrations help a lot in coming out of the winter worries.
Winter is a scary word, but people have to go through it, as life has to go on. As we cannot travel much and there are no outdoor events or games, we have to be at home all the days, weeks, and months. Organizing indoor events is costly.
To spend the winter actively, there are many celebration days: Halloween, Christmas, Boxing Day, New Year, Valentine's Day, and more, to keep the winter lively.
Keeping the kids at home for the 17-day winter break is tough. We have to engage them the whole day. In our apartment, we conduct many kids' events like a weekly chess hour, dance hours, a board games day, movie time, sleepover nights, etc.
Computer literacy is good here. Kids learn to use a computer at school from Grade 3 itself. They play many educational games at school. Homework is done with Google Slides and Google Docs from Grade 5. Scratch programming is also taught in Grade 5. So, they know very well how to use a computer, read text online, search the internet, and gather information.
PyKids
This time, I thought of having some tech events for kids. I called for a 10-day training named "PyKids", for Grade 5 and above. The announcement was welcomed well by many parents. We had around 17 kids participate.
As our house is mostly empty (thanks to Nithya, for the minimalistic lifestyle), our hall helped for gathering and teaching.
By keeping the hall empty, we use the place as a daily Zumba hall, mini party hall, DJ hall, kids' play area, and now as a learning place.
Teaching Python to kids is not easy. The kids are not ready to listen to any long talks. They cannot even sit through my regular "Python introduction" slides. So, I jumped into hands-on on day one itself.
A few months ago, my mentor, Asokan Pichai, explained why we have to go hands-on in any Python training. I experienced the benefits of it this time.
Even though I have been using Python for 10+ years, teaching it to kids was really tough. I had to read a few books and revisit the basics, so that I could explain the building blocks of Python with examples more relevant to kids.
The kids are good at asking questions. They share feedback with their eyes. It is hugely different from teaching adults. Most adults don't ask questions. They hesitate to say they don't understand something. But kids are brave enough to ask questions and express their feedback immediately.
With training from 4 to 6 pm every day, for around 10 days, we could cover only a little of Python.
On the final day, my friend Jay Varadharajan gave a pizza party for all the kids, along with a participation certificate.
Thanks for all the questions, kids. Along with you, I learnt a lot. Thanks to all the parents for the great support.
PyLadies
Nithya wanted to try out a full-day training for her friends. Getting a 9-to-5 slot to learn something is a luxury for many people. Still, around 10 friends participated.
Nithya ran the day fully hands-on. She covered variables, getting input, if/else, for/while loops, and string/list operations. The participants were happy to dive into programming so quickly.
She shared this link and asked them to read and practice regularly. Hope they are following the book.
Home as Learning Space
Thus, we are converting our home into a learning space for kids and friends. We are thinking of conducting some technical meetups too (I miss all the Linux Users Group meetings and hackathons). Hope we can have more tech events in the winter and make it interesting and productive.
import sqlite3

class Database:
    def __init__(self, db):
        self.con = sqlite3.connect(db)
        self.cur = self.con.cursor()
        sql = """
        CREATE TABLE IF NOT EXISTS Customer(
            id INTEGER PRIMARY KEY,
            name TEXT,
            mobile TEXT,
            email TEXT,
            address TEXT
        )
        """
        # The original statement had a trailing comma after the last column,
        # which is invalid SQL and is why cur.execute(sql) raised an error.
        self.cur.execute(sql)
        self.con.commit()

O = Database("Customer.db")

In this code, Customer.db gets created along with the Customer table, but the table holds no data until rows are inserted into it.
This is a Python-based single-file application designed for typing practice. It provides a simple interface to improve typing accuracy and speed. Over time, this minimal program has gradually increased my typing skill.
What I Learned from This Project
2D Array Validation: I first used a simple 1D array to store user input, but I noticed some issues. After implementing a 2D array, I understood why the 2D array was more appropriate for handling user input.
Tkinter: I wanted to visually see and update correct, wrong, and incomplete typing inputs, but I didn't know how to implement that in the terminal. So, I used a simple Tkinter GUI window.
Run This Program
It depends on the following applications:
Python 3
python3-tk
Installation Command on Debian-Based Systems
sudo apt install python3 python3-tk
Clone repository and run program
git clone https://github.com/github-CS-krishna/TerminalTyping
cd TerminalTyping
python3 terminalType.py
GitHub Actions is a powerful tool for automating workflows directly in your repository. In this blog, we'll explore how to efficiently set up GitHub Actions to handle Docker workflows with environments, secrets, and protection rules.
Why Use GitHub Actions for Docker?
My code base is on GitHub, and I wanted to try out GitHub Actions to build and push images to Docker Hub seamlessly.
Setting Up GitHub Environments
GitHub Environments let you define settings specific to deployment stages. Here's how to configure them:
1. Create an Environment
Go to your GitHub repository and navigate to Settings > Environments. Click New environment, name it (e.g., production), and save.
2. Add Secrets and Variables
Inside the environment settings, click Add secret to store sensitive information like DOCKER_USERNAME and DOCKER_TOKEN.
Use Variables for non-sensitive configuration, such as the Docker image name.
3. Optional: Set Protection Rules
Enforce rules like requiring manual approval before deployments. Restrict deployments to specific branches (e.g., main).
Sample Workflow for Building and Pushing Docker Images
Below is a GitHub Actions workflow for automating the build and push of a Docker image based on a minimal Flask app.
Workflow: .github/workflows/docker-build-push.yml
name: Build and Push Docker Image
on:
  push:
    branches:
      - main  # Trigger workflow on pushes to the `main` branch

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    environment: production  # Specify the environment to use
    steps:
      # Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v3

      # Log in to Docker Hub using environment secrets
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}

      # Build the Docker image using an environment variable
      - name: Build Docker image
        env:
          DOCKER_IMAGE_NAME: ${{ vars.DOCKER_IMAGE_NAME }}
        run: |
          docker build -t ${{ secrets.DOCKER_USERNAME }}/$DOCKER_IMAGE_NAME:${{ github.run_id }} .

      # Push the Docker image to Docker Hub
      - name: Push Docker image
        env:
          DOCKER_IMAGE_NAME: ${{ vars.DOCKER_IMAGE_NAME }}
        run: |
          docker push ${{ secrets.DOCKER_USERNAME }}/$DOCKER_IMAGE_NAME:${{ github.run_id }}
I created a website called Vinmeen that allows users to rent products for temporary needs at a low cost. The goal was to design a simple UI for users to easily rent things they need temporarily.
Technologies Used
Node.js & Express
Node Packages
Express
EJS
Nodemailer
Bcrypt
Multer
Sync-SQL
MySQL
What I Learned from This Project
This project helped me understand how dynamic websites work and how template rendering is done. I used EJS for rendering templates, MySQL for database handling, and Bcrypt for securely storing user passwords through hashing. I also learned how to send email notifications with OTP and rent requests, among other things.
Files.io offers a free MySQL database with a 10MB size limit and a maximum of 5 concurrent connections. Itโs ideal for students and self-study projects, but not recommended for startups or businesses.
I am a regular follower of https://selfh.st/. Last week, they showcased BugSink. Bugsink is a self-hostable tool to track errors in your applications. It's easy to install and use, is compatible with the Sentry SDK, and is scalable and reliable.
When an application breaks, finding and fixing the root cause quickly is critical. Hosted error tracking tools often make you trade privacy for convenience, and they can be expensive. On the other hand, self-hosted solutions are an alternative, but they are often a pain to set up and maintain.
What Is Error Tracking?
When code is deployed in production, errors are inevitable. They can arise from a variety of reasons like bugs in the code, network failures, integration mismatches, or even unforeseen user behavior. To ensure smooth operation and user satisfaction, error tracking is essential.
Error tracking involves monitoring and recording errors in your application code, particularly in production environments. A good error tracker doesnโt just log errors; it contextualizes them, offering insights that make troubleshooting straightforward.
Here are the key benefits of error tracking
Early Detection: Spot issues before they snowball into critical outages.
Context-Rich Reporting: Understand the "what, when, and why" of an error.
Faster Debugging: Detailed stack traces make it easier to pinpoint root causes.
Effective error tracking tools allow developers to respond to errors proactively, minimizing user impact.
Why Bugsink?
Bugsink takes error tracking to a new level by prioritizing privacy, simplicity, and compatibility.
1. Built for Self-Hosting
Unlike many hosted error tracking tools that require sensitive data to be shared with third-party servers, Bugsink is self-hosted. This ensures you retain full control over your data, a critical aspect for privacy-conscious teams.
2. Easy to Set Up and Manage
Whether you're deploying it on your local server or in the cloud, the experience is smooth.
3. Resource Efficiency
Bugsink is designed to be lightweight and efficient. It doesnโt demand hefty server resources, making it an ideal choice for startups, small teams, or resource-constrained environments.
4. Compatible with Sentry
If youโve used Sentry before, youโll feel right at home with Bugsink. It offers Sentry compatibility, allowing you to migrate effortlessly or use it alongside existing tools. This compatibility also means you can leverage existing SDKs and integrations.
5. Proactive Notifications
Bugsink ensures you're in the loop as soon as something goes wrong. Email notifications alert you the moment an error occurs, enabling swift action. This proactive approach reduces the mean time to resolution (MTTR) and keeps users happy.
Yesterday, I came to know about SBOMs from my friend Prasanth Baskar. Let's say you're building a website.
You decide to use a popular open-source tool to handle user logins. Here's the catch:
That library uses another library to store data.
That tool depends on another library to handle passwords.
Now, if one of those libraries has a bug or security issue, how do you even know it's there? In this blog, I will jot down my understanding of SBOMs with Trivy.
What is SBOM ?
A Software Bill of Materials (SBOM) is a list of everything that makes up a piece of software.
Think of it as,
A shopping list for all the tools, libraries, and pieces used to build the software.
A recipe card showing what's inside and how it's structured.
For software, this means,
Components: These are the "ingredients," such as open-source libraries, frameworks, and tools.
Versions: Just like you might want to know if the cake uses almond flour or regular flour, knowing the version of a software component matters.
Licenses: Did the baker follow the rules for the ingredients they used? Software components also come with licenses that dictate how they can be used.
So Why is it Important ?
1. Understanding What Youโre Using
When you download or use software, especially something complex, you often don't know what's inside. An SBOM helps you understand what components are being used: are they secure? Are they trustworthy?
2. Finding Problems Faster
If someone discovers that a specific ingredient is bad, like flour with bacteria in it, you'd want to know if that's in your cake. Similarly, if a software library has a security issue, an SBOM helps you figure out if your software is affected and needs fixing.
For example,
When the Log4j vulnerability made headlines, companies that had SBOMs could quickly identify whether they used Log4j and take action.
3. Building Trust
Imagine buying food without a label or list of ingredients.
You'd feel doubtful, right? Similarly, an SBOM builds trust by showing users exactly what's in the software they're using.
4. Avoiding Legal Trouble
Some software components come with specific rules or licenses about how they can be used. An SBOM ensures these rules are followed, avoiding potential legal headaches.
How to Create an SBOM?
For many developers, creating an SBOM manually would be impossible because modern software can have hundreds (or even thousands!) of components.
Thankfully, there are tools that automatically create SBOMs. Examples include,
Trivy: A lightweight tool to generate SBOMs and find vulnerabilities.
SPDX: Another format designed to make sharing SBOMs easier https://spdx.dev/
These tools can scan your software and automatically list out every component, its version, and its dependencies.
We will see an example of generating an SBOM file for nginx using Trivy.
How Trivy Works ?
On running trivy scan,
1. It downloads the Trivy DB, including vulnerability information.
2. Pulls missing layers into the cache.
3. Analyzes the layers and stores the information in the cache.
4. Detects security issues and writes them to the SBOM file.
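A typical invocation looks like this (CycloneDX is one of the SBOM formats Trivy supports; the output file name is just an example):
trivy image --format cyclonedx --output nginx-sbom.json nginx:latest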
Note: a CVE refers to a Common Vulnerabilities and Exposures identifier. A CVE is a unique code used to catalog and track publicly known security vulnerabilities and exposures in software or systems.
1. Ansh Arora gave a tour of FOSS United: how it was formed, its motto, FOSS Hack, and FOSS Clubs.
2. Karthikeyan A K gave a talk on his open source product injee (the no-configuration instant database for frontend developers). It's a great tool with a lot of potential; he gave me a personal demo. I would like to contribute!
I usually have a question: as a developer, I have logs, isn't that enough? With a curious mind, I attended the Grafana & Friends Chennai meetup (Jan 25th, 2025).
Had an awesome time meeting fellow tech enthusiasts (DevOps engineers) and learning about cool ways to monitor and understand data better. Big shoutout to the Grafana Labs community and Presidio for hosting such a great event!
The sandwiches and juice were nice.
Talk Summary,
1. Making Data Collection Easier with Grafana Alloy: Dinesh J. and Krithika R shared how Grafana Alloy, combined with OpenTelemetry, makes it super simple to collect and manage data for better monitoring.
2. Running Grafana in Kubernetes: Lakshmi Narasimhan Parthasarathy (https://lnkd.in/gShxtucZ) showed how to set up Grafana in Kubernetes in 4 different ways (vanilla, Helm chart, Grafana Operator, kube-prom-stack). He is building a SaaS product https://lnkd.in/gSS9XS5m (Heroku on your own servers).
3. Observability for Frontend Apps with Grafana Faro: Selvaraj Kuppusamy showed how Grafana Faro can help frontend developers monitor what's happening on websites and apps in real time. This makes it easier to spot and fix issues quickly. We were able to see Core Web Vitals and traces too. I was surprised by this.
Thanks, Achanandhi M, for organising this wonderful meetup. You did well. I came to know Achanandhi M through Medium. He regularly writes blogs on cloud-related topics. Check out his blog: https://lnkd.in/ghUS-GTc
Also, He shared some tasks for us,
1. Create your first Grafana dashboard. Objective: create a basic Grafana dashboard to visualize data in various formats such as tables, charts, and graphs. Also, try to connect to multiple data sources to get diverse data for your dashboard.
2. Monitor your Linux system's health with Prometheus, Node Exporter, and Grafana. Objective: use Prometheus, Node Exporter, and Grafana to monitor your Linux machine's health by tracking key metrics like CPU, memory, and disk usage.
3. Using Grafana Faro to track User Actions (Like Button Clicks) and Identify the Most Used Features.
Topic: RabbitMQ: Asynchronous Communication. Date: Feb 2, Sunday. Time: 10:30 AM to 1 PM. Venue: Online (link will be shared by mail after RSVP).
Join us for an in-depth session on RabbitMQ in Tamil, where we'll explore:
Message queuing fundamentals
Connections, channels, and virtual hosts
Exchanges, queues, and bindings
Publisher confirmations and consumer acknowledgments
Use cases and live demos
Whether you're a developer, DevOps enthusiast, or curious learner, this session will empower you with the knowledge to build scalable and efficient messaging systems.
Don't miss this opportunity to level up your messaging skills!