What do Reddit, Discord, Medium, and LinkedIn have in common? They all use what's called a skeleton loading screen in their applications. A skeleton screen is essentially a wireframe of the application, shown as a placeholder until the content finally loads.
Rise of the Skeleton Loader
The term "skeleton screen" was introduced in 2013 by product designer Luke Wroblewski in a blog post about reducing perceived wait time (lukew.com/ff/entry.asp?1797). In it, he explains how gradually revealing page content turns user attention to the content being loaded and away from the loading time itself.
Skeleton Loader
Skeleton loading screens improve your application's user experience and make it feel more performant. The skeleton screen essentially mimics the original layout.
This lets the user know what's happening on the screen: the user interprets it as the application booting up and the content loading.
In the simplest terms, a skeleton loader is a static or animated placeholder for information that is still loading. It mimics the structure and look of the final view.
Why not just a loading spinner?
Instead of a loading spinner, we can show a skeleton screen so the user sees progress happening while launching and navigating the application.
Skeletons let the user know that some content is loading and, more importantly, indicate what is loading, whether it's an image, text, a card, and so on.
This gives the user the impression that the website is faster, because they already know what type of content is loading before it appears. This is referred to as perceived performance.
Skeleton screens don't really make pages load faster; they are designed to make it feel like pages are loading faster.
When to use?
Use on high-traffic pages where resources take a while to load, such as an account dashboard.
Use when a component contains a good amount of information, such as a list or card.
A skeleton can replace a spinner in almost any situation and often provides a better user experience.
Use when more than one element is loading at the same time and requires an indicator.
Use when you need to load multiple images at once; a skeleton screen makes a good placeholder. For these pages, consider implementing lazy loading first, a similar technique for decreasing perceived load time.
When not to use?
Don't use it for long-running processes, e.g., importing or manipulating data (operations in data-intensive applications).
Don't use it for fast processes that take less than half a second.
Users still associate video buffering with spinners, so avoid skeleton screens any time a video is loading on your page.
For longer processes (uploads, downloads, file manipulation), use a progress bar instead of a skeleton loader.
Don't use it as a replacement for poor performance: if you can optimize your website to actually load content more quickly, always pursue that first.
In this blog, I jot down notes on what a smoke test is, how it got its name, and how to approach one in k6.
The term smoke testing originates from hardware testing, where engineers would power on a circuit or device and check if smoke appeared.
If smoke was detected, it indicated a fundamental issue, and further testing was halted. This concept was later adapted to software engineering.
What is Smoke Testing?
Smoke testing is a subset of test cases executed to verify that the major functionalities of an application work as expected. If a smoke test fails, the build is rejected, preventing further testing of a potentially unstable application. This test helps catch major defects early, saving time and effort.
Key Characteristics
Ensures that the application is not broken in major areas.
Runs quickly and is not exhaustive.
Usually automated as part of a CI/CD pipeline.
Writing a Basic Smoke Test with K6
A basic smoke test using K6 typically checks API endpoints for HTTP 200 responses and acceptable response times.
import http from 'k6/http';
import { check } from 'k6';

export let options = {
  vus: 1,        // 1 virtual user
  iterations: 5, // Runs the test 5 times
};

export default function () {
  let res = http.get('https://example.com/api/health');
  check(res, {
    'is status 200': (r) => r.status === 200,
    'response time < 500ms': (r) => r.timings.duration < 500,
  });
}
Advanced Smoke Test Example
import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  vus: 2,         // 2 virtual users
  iterations: 10, // Runs the test 10 times
};

export default function () {
  let res = http.get('https://example.com/api/login');
  check(res, {
    'status is 200': (r) => r.status === 200,
    'response time < 400ms': (r) => r.timings.duration < 400,
  });
  sleep(1);
}
Running and Analyzing Results
Execute the test using
k6 run smoke-test.js
Sample Output
checks...
  ✓ is status 200
  ✓ response time < 500ms
If any of the checks fail, K6 will report an error, signaling an issue in the application.
Smoke testing with K6 is an effective way to ensure that key functionalities in your application work as expected. By integrating it into your CI/CD pipeline, you can catch major defects early, improve application stability, and streamline your development workflow.
When running load tests with K6, two fundamental aspects that shape test execution are the number of Virtual Users (VUs) and the test duration. These parameters help simulate realistic user behavior and measure system performance under different load conditions.
In this blog, I jot down notes on virtual users and test duration in options, and on how to use them to ramp users up and down.
K6 offers multiple ways to define VUs and test duration, primarily through options in the test script or the command line.
Basic VU and Duration Configuration
The simplest way to specify VUs and test duration is by setting them in the options object of your test script.
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 10,         // Number of virtual users
  duration: '30s', // Duration of the test
};

export default function () {
  http.get('https://test.k6.io/');
  sleep(1);
}
This script runs a load test with 10 virtual users for 30 seconds, making requests to the specified URL.
Specifying VUs and Duration from the Command Line
You can also set the VUs and duration dynamically using command-line arguments without modifying the script.
k6 run --vus 20 --duration 1m script.js
This command runs the test with 20 virtual users for 1 minute.
Ramp Up and Ramp Down with Stages
Instead of a fixed number of VUs, you can simulate user load variations over time using stages. This helps to gradually increase or decrease the load on the system.
export const options = {
  stages: [
    { duration: '30s', target: 10 }, // Ramp up to 10 VUs
    { duration: '1m', target: 50 },  // Ramp up to 50 VUs
    { duration: '30s', target: 10 }, // Ramp down to 10 VUs
    { duration: '20s', target: 0 },  // Ramp down to 0 VUs
  ],
};
This test gradually increases the load, sustains it, and then reduces it, simulating real-world traffic patterns.
Custom Execution Scenarios
For more advanced load testing strategies, K6 supports scenarios, allowing fine-grained control over execution behavior.
Syntax of Custom Execution Scenarios
A scenarios object defines different execution strategies. Each scenario consists of the following keys (a sketch follows this list):
executor: Defines how the test runs (e.g., ramping-vus, constant-arrival-rate, etc.).
vus: Number of virtual users (for certain executors).
duration: How long the scenario runs.
iterations: Total number of iterations per VU (for certain executors).
stages: Used in ramping-vus to define load variations over time.
rate: Defines the number of iterations per time unit in constant-arrival-rate.
preAllocatedVUs: Number of VUs reserved for the test.
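A sketch of a scenarios block, assuming a constant-arrival-rate executor against the k6 demo site; timeUnit (not listed above) sets the time window that rate applies to:

import http from 'k6/http';

export const options = {
  scenarios: {
    steady_requests: {
      executor: 'constant-arrival-rate',
      rate: 10,            // 10 iterations per timeUnit
      timeUnit: '1s',      // i.e., 10 requests per second
      duration: '1m',      // run for one minute
      preAllocatedVUs: 20, // VUs reserved to sustain the rate
    },
  },
};

export default function () {
  http.get('https://test.k6.io/');
}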
Different Executors in K6
K6 provides several executors that define how virtual users (VUs) generate load:
shared-iterations - Distributes a fixed number of iterations across multiple VUs.
per-vu-iterations - Each VU runs a specific number of iterations independently.
constant-vus - Maintains a fixed number of VUs for a set duration.
ramping-vus - Increases or decreases the number of VUs over time.
constant-arrival-rate - Ensures a constant number of requests per time unit, independent of VUs.
ramping-arrival-rate - Gradually increases or decreases the request rate over time.
externally-controlled - Allows dynamic control of VUs via an external API.
Go a bit slower so that everyone can understand clearly without feeling rushed.
Provide more basics and examples to make learning easier for beginners.
Spend the first week explaining programming basics so that newcomers don't feel lost.
Teach flowcharting methods to help participants understand the logic behind coding.
Try teaching Scratch as an interactive way to introduce programming concepts.
Offer weekend batches for those who prefer learning on weekends.
Encourage more conversations so that participants can actively engage in discussions.
Create sub-groups to allow participants to collaborate and support each other.
Get "cheerleaders" within the team to make the classes more fun and interactive.
Increase promotion efforts to reach a wider audience and get more participants.
Provide better examples to make concepts easier to grasp.
Conduct more Q&A sessions so participants can ask and clarify their doubts.
Ensure that each participant gets a chance to speak and express their thoughts.
Showing your face in videos can help in building a more personal connection with the learners.
Organize mini-hackathons to provide hands-on experience and encourage practical learning.
Foster more interactions and connections between participants to build a strong learning community.
Encourage participants to write blogs daily to document their learning and share insights.
Motivate participants to give talks in class and other communities to build confidence.
Other Learnings & Suggestions
Avoid creating WhatsApp groups for communication, as the 1024 member limit makes it difficult to manage multiple groups.
Telegram works fine for now, but explore using mailing lists as an alternative for structured discussions.
Mute groups when necessary to prevent unnecessary messages like "Hi, Hello, Good Morning."
Teach participants how to join mailing lists like ChennaiPy and KanchiLUG and guide them on asking questions in forums like Tamil Linux Community.
Show participants how to create a free blog on platforms like dev.to or WordPress to share their learning journey.
Avoid spending too much time explaining everything in-depth, as participants should start coding a small project by the 5th or 6th class.
Present topics as solutions to project ideas or real-world problem statements instead of just theory.
Encourage using names when addressing people, rather than calling them "Sir" or "Madam," to maintain an equal and friendly learning environment.
Zoom is costly, and since only around 50 people complete the training, consider alternatives like Jitsi or Google Meet for better cost-effectiveness.
In the evolving Python ecosystem, pyproject.toml has emerged as a pivotal configuration file, streamlining project management and enhancing interoperability across tools.
In this blog, I delve into the significance, structure, and usage of pyproject.toml.
What is pyproject.toml?
Introduced in PEP 518, pyproject.toml is a standardized file format designed to specify build system requirements and manage project configurations. Its primary goal is to provide a unified, tool-agnostic approach to project setup, reducing the clutter of multiple configuration files.
Why Use pyproject.toml?
Standardization: Offers a consistent way to define project metadata, dependencies, and build tools.
Interoperability: Supported by various tools like Poetry, Flit, Black, isort, and even pip.
Simplification: Consolidates multiple configuration files (like setup.cfg, requirements.txt) into one.
Future-Proofing: As Python evolves, pyproject.toml is becoming the de facto standard for project configurations, ensuring compatibility with future tools and practices.
Structure of pyproject.toml
The pyproject.toml file uses the TOML format, which stands for "Tom's Obvious, Minimal Language." TOML is designed to be easy to read and write while being simple enough for parsing by tools.
1. [build-system]
Defines the build system requirements. Essential for tools like pip to know how to build the project.
requires: Lists the build dependencies required to build the project. These packages are installed in an isolated environment before the build process starts.
build-backend: Specifies the backend responsible for building the project. Common backends include:
setuptools.build_meta (for traditional Python projects)
flit_core.buildapi (for projects managed with Flit)
poetry.core.masonry.api (for Poetry projects)
2. [tool]
This section is used by third-party tools to store their configuration. Each tool manages its own sub-table under [tool].
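A minimal sketch combining the two sections described above; the tool entries (Black and isort settings) are illustrative:

[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[tool.black]
line-length = 88

[tool.isort]
profile = "black"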
This time, we're shifting gears from theory to practice with mini projects that will help you build real-world solutions. Study materials will be shared beforehand, and you'll work hands-on to solve practical problems, building actual projects that showcase your skills.
What's New?
Real-world mini projects
Task-based shortlisting process
Limited seats for focused learning
Dedicated WhatsApp group for discussions & mentorship
Live streaming of sessions for wider participation
Study materials, quizzes, surprise gifts, and more!
How to Join?
Fill out the RSVP below. It is open for 20 days (till March 2) only!
After RSVP closes, shortlisted participants will receive tasks via email.
Complete the tasks to get shortlisted.
Selected students will be added to an exclusive WhatsApp group for intensive training.
It's cost-free learning; we only ask for your time, effort, and support.
Don't miss this chance to level up your Python skills, cost free, with hands-on projects and exciting rewards! RSVP now and be part of Python Learning 2.0!
In Python development, maintaining clean, readable, and efficient code is essential.
The Ruff Python package is a fast linter and code formatter designed to boost code quality and developer productivity. Written in Rust, Ruff stands out for its blazing speed and comprehensive feature set.
This blog will delve into Ruffโs features, usage, and how it compares to other popular Python linters and formatters like flake8, pylint, and black.
What is Ruff?
Ruff is an extremely fast Python linter and code formatter that provides linting, code formatting, and static code analysis in a single package. It supports a wide range of rules out of the box, covering various Python standards and style guides.
Key Features of Ruff
Lightning-fast Performance: Written in Rust, Ruff is significantly faster than traditional Python linters.
All-in-One Tool: Combines linting, formatting, and static analysis.
Extensive Rule Support: Covers rules from flake8, isort, pyflakes, pylint, and more.
Customizable: Allows configuration of rules to fit specific project needs.
Seamless Integration: Works well with CI/CD pipelines and popular code editors.
Installing Ruff
# Using pip
pip install ruff
# Using Homebrew (macOS/Linux)
brew install ruff
# Using UV
uv add ruff
Basic Usage
1. Linting a Python File
# Lint a single file
ruff check app.py
# Lint an entire directory
ruff check src/
2. Auto Fixing Issues
ruff check src/ --fix
3. Formatting Code
While Ruff primarily focuses on linting, it also handles some formatting tasks:
ruff format src/
Configuration
Ruff can be configured using a pyproject.toml file
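A minimal sketch of such a configuration; the rule selections and Python version here are illustrative, not prescriptive:

[tool.ruff]
line-length = 88
target-version = "py312"

[tool.ruff.lint]
select = ["E", "F", "I", "B"]  # pycodestyle, Pyflakes, isort, bugbear rule sets
ignore = ["E501"]              # example: skip line-length errors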
# Unused imports: Ruff flags both (F401)
import sys
import os

print("Hello World !")

# Unused variable: 'result' is assigned but never used (F841)
def add(a, b):
    result = a + b
    return a

# Missing whitespace around operators (PEP 8 violations)
x= 1
y =2
print(x+y)

# Mutable default argument: the list is shared across calls (B006)
def append_to_list(value, my_list=[]):
    my_list.append(value)
    return my_list
The snippets above illustrate the kinds of issues Ruff handles:
Identifying unused imports
Auto-fixing imports
Sorting imports
Detecting unused variables
Enforcing code style (PEP 8 violations)
Detecting mutable default arguments
Fixing line-length issues
Integrating Ruff with Pre-commit
To ensure code quality before every commit, integrate Ruff with pre-commit
Step 1: Install Pre-Commit
pip install pre-commit
Step 2: Create a .pre-commit-config.yaml file
repos:
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.1.0  # Use the latest version
    hooks:
      - id: ruff
Step 3: Install the Pre-commit Hook
pre-commit install
Step 4: Test the Hook
pre-commit run --all-files
This setup ensures that Ruff automatically checks your code for linting issues before every commit, maintaining consistent code quality.
When to Use Ruff
Large Codebases: Ideal for projects with thousands of files due to its speed.
CI/CD Pipelines: Reduces linting time, accelerating build processes.
Code Reviews: Ensures consistent coding standards across teams.
Open Source Projects: Simplifies code quality management.
Pre-commit Hooks: Ensures code quality before committing changes.
Ruff is a game-changer in the Python development ecosystem. Its unmatched speed, comprehensive rule set, and ease of use make it a powerful tool for developers aiming to maintain high code quality.
Whether youโre working on small scripts or large-scale applications, Ruff can streamline your linting and formatting processes, ensuring clean, efficient, and consistent code.
Git is a powerful version control system that every developer should master. Whether you're a beginner or an experienced developer, knowing a few handy Git command-line tricks can save you time and improve your workflow. Here are 20 essential Git tips and tricks to boost your efficiency.
1. Undo the Last Commit (Without Losing Changes)
git reset --soft HEAD~1
If you made a commit but want to undo it while keeping your changes, this command resets the last commit but retains the modified files in your staging area.
This is useful when you realize you need to make more changes before committing.
If you also want to remove the changes from the staging area but keep them in your working directory, use,
git reset HEAD~1
2. Discard Unstaged Changes
git checkout -- <file>
Use this to discard local changes in a file before staging. Be careful, as this cannot be undone! If you want to discard all unstaged changes in your working directory, use,
git reset --hard HEAD
3. Delete a Local Branch
git branch -d branch-name
Removes a local branch safely if it's already merged. If it's not merged and you still want to delete it, use -D:
git branch -D branch-name
4. Delete a Remote Branch
git push origin --delete branch-name
Deletes a branch from the remote repository, useful for cleaning up old feature branches. If you mistakenly deleted the branch and want to restore it, you can use
git checkout -b branch-name origin/branch-name
if it still exists remotely.
5. Rename a Local Branch
git branch -m old-name new-name
Useful when you want to rename a branch locally without affecting the remote repository. To update the remote reference after renaming, push the renamed branch and delete the old one:
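git push origin -u new-name
git push origin --delete old-name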
git clone --branch branch-name --single-branch repository-url
Instead of cloning the entire repository, this fetches only the specified branch, saving time and space. If you want all branches but don't want to check them out initially:
git clone --mirror repository-url
12. Change the Last Commit Message
git commit --amend -m "New message"
Use this to correct a typo in your last commit message before pushing. Be cautious: if you've already pushed, use
git push --force-with-lease
13. See the List of Tracked Files
git ls-files
Displays all files being tracked by Git, which is useful for auditing your repository. To see ignored files as well, use git status --ignored.
Over the last few days, I have been exploring Buildpacks. I am amused at how this tool reduces developers' pain. In this blog, I jot down my experience with Buildpacks.
Before trying Buildpacks, we need to understand what an OCI image is.
What is an OCI Image?
An OCI Image (Open Container Initiative Image) is a standard format for container images, defined by the Open Container Initiative (OCI) to ensure interoperability across different container runtimes (Docker, Podman, containerd, etc.).
It consists of:
Manifest - Metadata describing the image (layers, config, etc.).
Config JSON - Information about how the container should run (CMD, ENV, etc.).
Filesystem Layers - The actual file system of the container.
OCI Image Specification ensures that container images built once can run on any OCI-compliant runtime.
Does Docker Create OCI Images?
Yes, Docker creates OCI-compliant images. Since Docker v1.10+, Docker has been aligned with the OCI Image Specification, and all Docker images are OCI-compliant by default.
When you build an image with docker build, it follows the OCI Image format.
When you push/pull images to registries like Docker Hub, they follow the OCI Image Specification.
However, Docker also supports its legacy Docker Image format, which existed before OCI was introduced. Most modern registries and runtimes (Kubernetes, Podman, containerd) support OCI images natively.
What is a Buildpack?
A buildpack is a framework for transforming application source code into a runnable image by handling dependencies, compilation, and configuration. Buildpacks are widely used in cloud environments like Heroku, Cloud Foundry, and Kubernetes (via Cloud Native Buildpacks).
Overview of Buildpack Process
The buildpack process consists of two primary phases
Detection Phase: Determines if the buildpack should be applied based on the appโs dependencies.
Build Phase: Executes the necessary steps to prepare the application for running in a container.
Buildpacks work with a lifecycle manager (e.g., Cloud Native Buildpacksโ lifecycle) that orchestrates the execution of multiple buildpacks in an ordered sequence.
Builder: The Image That Executes the Build
A builder is an image that contains all necessary components to run a buildpack.
Components of a Builder Image
Build Image - Used during the build phase (includes compilers, dependencies, etc.).
Run Image - A minimal environment for running the final built application.
Lifecycle - The core mechanism that executes buildpacks, orchestrates the process, and ensures reproducibility.
Stack: The Combination of Build and Run Images
Build Image + Run Image = Stack
Build Image: Base OS with tools required for building (e.g., Ubuntu, Alpine).
Run Image: Lightweight OS with only the runtime dependencies for execution.
For example, running pack build on a Python app detects Python, installs dependencies, and builds the app into a container, with no Dockerfile needed. Docker, by contrast, requires a Dockerfile, which developers must manually configure and maintain.
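A sketch of that command, assuming the pack CLI is installed and my-python-app is your image name; the Paketo base builder shown is one common choice, not the only option:

pack build my-python-app --builder paketobuildpacks/builder-jammy-base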
Automatic Security Updates
Buildpacks automatically patch base images for security vulnerabilities.
If thereโs a CVE in the OS layer, Buildpacks update the base image without rebuilding the app.
pack rebase my-python-app
No need to rebuild! It replaces only the OS layers while keeping the app the same.
Standardized & Reproducible Builds
Ensures consistent images across environments (dev, CI/CD, production). Example: Running the same build locally and on Heroku/Cloud Run,
pack build my-app
Extensibility: Custom Buildpacks
Developers can create custom Buildpacks to add special dependencies.
Letโs take the example of an online food ordering system like Swiggy or Zomato. Suppose a user places an order through the mobile app. If the application follows a synchronous approach, it would first send the order request to the restaurantโs system and then wait for confirmation. If the restaurant is busy, the app will have to keep waiting until it receives a response.
If the restaurantโs system crashes or temporarily goes offline, the order will fail, and the user may have to restart the process.
This approach leads to a poor user experience, increases the chances of failures, and makes the system less scalable, as multiple users waiting simultaneously can cause a bottleneck.
In a traditional synchronous communication model, one service directly interacts with another and waits for a response before proceeding. While this approach is simple and works for small-scale applications, it introduces several challenges, especially in systems that require high availability and scalability.
The main problems with synchronous communication include slow performance, system failures, and scalability issues. If the receiving service is slow or temporarily unavailable, the sender has no choice but to wait, which can degrade the overall performance of the application.
Moreover, if the receiving service crashes, the entire process fails, leading to potential data loss or incomplete transactions.
In this book, we are going to see how this can be solved with a message queue.
What is a Message Queue?
A message queue is a system that allows different parts of an application (or different applications) to communicate with each other asynchronously by sending and receiving messages.
It acts like a buffer or an intermediary where messages are stored until the receiving service is ready to process them.
How It Works
A producer (sender) creates a message and sends it to the queue.
The message sits in the queue until a consumer (receiver) picks it up.
The consumer processes the message and removes it from the queue.
This process ensures that the sender does not have to wait for the receiver to be available, making the system faster, more reliable, and scalable.
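A minimal in-process sketch of this flow using Python's standard library; a real system would use a broker like RabbitMQ, but the decoupling idea is the same:

import queue
import threading
import time

q = queue.Queue()  # the buffer between producer and consumer

def producer():
    for order_id in range(3):
        q.put(f"order-{order_id}")  # send and move on; no waiting for the consumer
        print(f"placed order-{order_id}")

def consumer():
    while True:
        order = q.get()  # blocks until a message arrives
        time.sleep(1)    # simulate slow processing
        print(f"processed {order}")
        q.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer()
q.join()  # wait until every queued order has been processed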
Real-Life Example
Imagine a fast-food restaurant where customers place orders at the counter. Instead of waiting at the counter for their food, customers receive a token number and move aside. The kitchen prepares the order in the background, and when itโs ready, the token number is called for pickup.
In this analogy,
The counter is the producer (sending orders).
The queue is the token system (storing orders).
The kitchen is the consumer (processing orders).
The customer picks up the food when ready (message is consumed).
Similarly, in applications, a message queue helps decouple systems, allowing them to work at their own pace without blocking each other. RabbitMQ, Apache Kafka, and Redis are popular message queue systems used in modern software development.
So, Problem Solved? Not Yet!
It seems like the problem is solved, but the message's life cycle inside the queue still needs to be handled.
Message Routing & Binding (Optional) - How is a message routed? If an exchange is used, the message is routed based on predefined rules.
Message Storage (Queue Retention) - How long does a message stay in the queue? The message stays in the queue until a consumer picks it up.
If the consumer successfully processes the message, it sends an acknowledgment (ACK), and the message is removed. If the consumer fails, the message requeues or moves to a dead-letter queue (DLQ).
Messages that fail multiple times, are not acknowledged, or expire may be moved to a Dead-Letter Queue for further analysis.
Messages stored only in memory can be lost if RabbitMQ crashes.
Messages not consumed within their TTL expire.
If a consumer fails to acknowledge a message, it may be redelivered and processed twice.
Messages failing multiple times may be moved to a DLQ.
Too many messages in the queue due to slow consumers can cause system slowdowns.
Network failures can disrupt message delivery between producers, RabbitMQ, and consumers.
Messages with corrupt or bad data may cause repeated consumer failures.
To handle all the above problems, we need a stable, battle-tested, reliable tool. RabbitMQ is one such tool. In this book we will cover the basics of RabbitMQ.
Imagine youโre sending messages between friends, but instead of delivering them directly, you drop them in a mailbox, and your friend picks them up when they are ready. RabbitMQ acts like this mailbox, but for computer programs. It helps applications communicate asynchronously, meaning they donโt have to wait for each other to process data.
RabbitMQ is a message broker, which means it handles and routes messages between different parts of an application. It ensures that messages are delivered efficiently, even when some components are running at different speeds or go offline temporarily.
Why Use RabbitMQ?
Modern applications often consist of multiple services that need to exchange data. Sometimes, one service produces data faster than another can consume it. Instead of forcing the slower service to catch up or making the faster service wait, RabbitMQ allows the fast service to place messages in a queue. The slow service can then process them at its own pace.
Some key benefits of using RabbitMQ include,
Decoupling services: Components communicate via messages rather than direct calls, reducing dependencies.
Scalability: RabbitMQ allows multiple consumers to process messages in parallel.
Reliability: It supports message durability and acknowledgments, preventing message loss.
Flexibility: Works with many programming languages and integrates well with different systems.
Efficient Load Balancing: Multiple consumers can share the message load to prevent overload on a single component.
Key Features and Use Cases
RabbitMQ is widely used in different applications, including
Chat applications: Messages are queued and delivered asynchronously to users.
Payment processing: Orders are placed in a queue and processed sequentially.
Event-driven systems: Used for microservices communication and event notification.
IoT systems: Devices publish data to RabbitMQ, which is then processed by backend services.
Job queues: Background tasks such as sending emails or processing large files.
Building Blocks of a Message Broker
Connection & Channels
In RabbitMQ, connections and channels are fundamental concepts for communication between applications and the broker,
Connections: A connection is a TCP link between a client (producer or consumer) and the RabbitMQ broker. Each connection consumes system resources and is relatively expensive to create and maintain.
Channels: A channel is a virtual communication path inside a connection. It allows multiple logical streams of data over a single TCP connection, reducing overhead. Channels are lightweight and preferred for performing operations like publishing and consuming messages.
Queues - Message Store
A queue is a message buffer that temporarily holds messages until a consumer retrieves and processes them.
1. Queues operate on a FIFO (First In, First Out) basis, meaning messages are processed in the order they arrive (unless priorities or other delivery strategies are set).
2. Queues persist messages if they are declared as durable and the messages are marked as persistent, ensuring reliability even if RabbitMQ restarts (see the sketch after this list).
3. Multiple consumers can subscribe to a queue, and messages can be distributed among them in a round-robin manner.
Messages can be consumed by multiple consumers, and can also be broadcast to all of them.
4. If no consumers are available, messages remain in the queue until a consumer connects.
Analogy: Think of a queue as a to-do list where tasks (messages) are stored until someone (a worker/consumer) picks them up and processes them.
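A pika sketch of point 2 above: a durable queue plus a persistent message, so both survive a broker restart (the queue name and message body are illustrative):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# durable=True makes the queue definition survive a RabbitMQ restart
channel.queue_declare(queue='orders', durable=True)

# delivery_mode=2 marks the message itself as persistent
channel.basic_publish(
    exchange='',          # default exchange routes by queue name
    routing_key='orders',
    body='order-42',
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()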
Exchanges - Message Distributor and Binding
An exchange is responsible for routing messages to one or more queues based on routing rules.
When a producer sends a message, it doesn't go directly to a queue but first reaches an exchange, which decides where to forward it.
The blue line in the diagram above is called a binding. A binding is the link between the exchange and the queue, guiding messages to the right place.
RabbitMQ supports different types of exchanges
Direct Exchange (direct)
Routes messages to queues based on an exact match between the routing key and the queueโs binding key.
Example: Sending messages to a specific queue based on a severity level (info, error, warning).
Fanout Exchange (fanout)
Routes messages to all bound queues, ignoring routing keys.
Example: Broadcasting notifications to multiple services at once.
Topic Exchange (topic)
Routes messages based on pattern matching using * (matches one word) and # (matches multiple words).
Example: Routing logs where log.info goes to one queue, log.error goes to another, and log.* captures all.
Headers Exchange (headers)
Routes messages based on message headers instead of routing keys.
Example: Delivering messages based on metadata like device: mobile or region: US.
Analogy: An exchange is like a traffic controller that decides which road (queue) a vehicle (message) should take based on predefined rules.
Binding
A binding is a link between an exchange and a queue that defines how messages should be routed.
When a queue is bound to an exchange with a binding key, messages with a matching routing key are delivered to that queue.
A queue can have multiple bindings to different exchanges, allowing it to receive messages from multiple sources.
Example:
A queue named error_logs can be bound to a direct exchange with a binding key error.
Another queue, all_logs, can be bound to the same exchange with a binding key # (wildcard in a topic exchange) to receive all logs.
Analogy: A binding is like a GPS route guiding messages (vehicles) from the exchange (traffic controller) to the right queue (destination).
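A pika sketch of the error_logs example above; the exchange name is assumed, and the all_logs "#" variant would use a topic exchange instead of a direct one:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Direct exchange: messages go to queues whose binding key matches exactly
channel.exchange_declare(exchange='logs_exchange', exchange_type='direct')
channel.queue_declare(queue='error_logs')
channel.queue_bind(exchange='logs_exchange', queue='error_logs', routing_key='error')

# Only messages published with routing_key='error' land in error_logs
channel.basic_publish(exchange='logs_exchange', routing_key='error', body='disk full on node-1')
connection.close()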
Producing, Consuming and Acknowledging
RabbitMQ follows the producer-exchange-queue-consumer model,
Producing messages (Publishing): A producer creates a message and sends it to RabbitMQ, which routes it to the correct queue.
Consuming messages (Subscribing): A consumer listens for messages from the queue and processes them.
Acknowledgment: The consumer sends an acknowledgment (ack) after successfully processing a message.
Durability: Ensures messages and queues survive RabbitMQ restarts.
Why do we need an Acknowledgement?
Ensures message reliability - Prevents messages from being lost if a consumer crashes.
Prevents message loss - Messages are redelivered if no ACK is received.
Avoids unintentional message deletion - Messages stay in the queue until properly processed.
Supports at-least-once delivery - Ensures every message is processed at least once.
Enables load balancing - Distributes messages fairly among multiple consumers.
Allows manual control - Consumers can acknowledge only after successful processing.
Handles redelivery - Messages can be requeued and sent to another consumer if needed.
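A consumer-callback sketch with pika showing manual acknowledgment; basic_nack with requeue=True sends a failed message back to the queue (the queue name is assumed):

import pika

def handle_message(ch, method, properties, body):
    try:
        print(f" [x] Processing {body.decode()}")  # stand-in for real work
        ch.basic_ack(delivery_tag=method.delivery_tag)  # success: broker removes the message
    except Exception:
        # failure: requeue so this or another consumer can retry
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
channel.basic_consume(queue='task_queue', on_message_callback=handle_message)
channel.start_consuming()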
Problem #1 - Task Queue for Background Job Processing
Context
A company runs an image processing application where users upload images that need to be resized, watermarked, and optimized before they can be served. Processing these images synchronously would slow down the user experience, so the company decides to implement an asynchronous task queue using RabbitMQ.
Problem
Users upload large images that require multiple processing steps.
Processing each image synchronously blocks the application, leading to slow response times.
High traffic results in queue buildup, making it challenging to scale the system efficiently.
Proposed Solution
1. Producer Service
Publishes image processing tasks to a RabbitMQ exchange (task_exchange).
Sends the image filename as the message body to the queue (image_queue).
2. Worker Consumers
Listen for new image processing tasks from the queue.
Process each image (resize, watermark, optimize, etc.).
Acknowledge completion to ensure no duplicate processing.
3. Scalability
Multiple workers can run in parallel to process images faster.
producer.py
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare exchange and queue
channel.exchange_declare(exchange='task_exchange', exchange_type='direct')
channel.queue_declare(queue='image_queue')

# Bind queue to exchange
channel.queue_bind(exchange='task_exchange', queue='image_queue', routing_key='image_task')

# List of images to process
images = ["image1.jpg", "image2.jpg", "image3.jpg"]

for image in images:
    channel.basic_publish(exchange='task_exchange', routing_key='image_task', body=image)
    print(f" [x] Sent {image}")

connection.close()
consumer.py
import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare exchange and queue
channel.exchange_declare(exchange='task_exchange', exchange_type='direct')
channel.queue_declare(queue='image_queue')

# Bind queue to exchange
channel.queue_bind(exchange='task_exchange', queue='image_queue', routing_key='image_task')

def process_image(ch, method, properties, body):
    print(f" [x] Processing {body.decode()}")
    time.sleep(2)  # Simulate processing time
    print(f" [x] Finished {body.decode()}")
    ch.basic_ack(delivery_tag=method.delivery_tag)

# Start consuming
channel.basic_consume(queue='image_queue', on_message_callback=process_image)
print(" [*] Waiting for image tasks. To exit press CTRL+C")
channel.start_consuming()
Problem #2 - Broadcasting News to All Subscribers
Problem
A news application wants to send breaking news alerts to all subscribers, regardless of their location or interest.
Proposed Solution
Use a fanout exchange (news_alerts_exchange) to broadcast messages to all connected queues, ensuring all users receive the alert.
The producer sends a news alert to the fanout exchange (news_alerts_exchange).
All queues (mobile_app_queue, email_alert_queue, web_notification_queue) bound to the exchange receive the message.
Each consumer listens to its queue and processes the alert.
This setup ensures all users receive the alert simultaneously across different platforms.
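A producer-side sketch of this setup with pika, using the exchange and queue names above:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='news_alerts_exchange', exchange_type='fanout')

# Bind every platform queue; fanout ignores routing keys entirely
for queue in ['mobile_app_queue', 'email_alert_queue', 'web_notification_queue']:
    channel.queue_declare(queue=queue)
    channel.queue_bind(exchange='news_alerts_exchange', queue=queue)

# One publish: a copy is delivered to every bound queue
channel.basic_publish(exchange='news_alerts_exchange', routing_key='', body='Breaking news!')
print(" [x] Alert broadcast to all queues")
connection.close()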
Intermediate Resources
Prefetch Count
Prefetch is a mechanism that defines how many messages can be delivered to a consumer at a time before the consumer sends an acknowledgment back to the broker. This ensures that the consumer does not get overwhelmed with too many unprocessed messages, which could lead to high memory usage and potential performance issues.
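A pika sketch: with prefetch_count=1 the broker delivers at most one unacknowledged message to each consumer (the queue name is assumed):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue')

# Deliver at most one unacknowledged message per consumer
channel.basic_qos(prefetch_count=1)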
The Request-Reply Pattern is a fundamental communication style in distributed systems, where a requester sends a message to a responder and waits for a reply. Itโs widely used in systems that require synchronous communication, enabling the requester to receive a response for further processing.
A dead letter is a message that cannot be delivered to its intended queue or is rejected by a consumer. Common scenarios where messages are dead lettered include,
Message Rejection: A consumer explicitly rejects a message without requeuing it.
Message TTL (Time-To-Live) Expiry: The message remains in the queue longer than its TTL.
Queue Length Limit: The queue has reached its maximum capacity, and new messages are dropped.
Routing Failures: Messages that cannot be routed to any queue from an exchange.
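A sketch of wiring a dead-letter exchange through queue arguments; the names and TTL are illustrative:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Anything rejected or expired in 'work_queue' is re-published to 'dlx'
channel.exchange_declare(exchange='dlx', exchange_type='fanout')
channel.queue_declare(queue='dead_letters')
channel.queue_bind(exchange='dlx', queue='dead_letters')

channel.queue_declare(
    queue='work_queue',
    arguments={
        'x-dead-letter-exchange': 'dlx',
        'x-message-ttl': 60000,  # messages expire after 60 seconds
    },
)
connection.close()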
An alternate exchange in RabbitMQ is a fallback exchange configured for another exchange. If a message cannot be routed to any queue bound to the primary exchange, RabbitMQ will publish the message to the alternate exchange instead. This mechanism ensures that undeliverable messages are not lost but can be processed in a different way, such as logging, alerting, or storing them for later inspection.
CDC stands for Change Data Capture. Itโs a technique that listens to a database and captures every change that happens in it. These changes can then be sent to other systems to,
Keep data in sync across multiple databases.
Power real-time analytics dashboards.
Trigger notifications for certain database events.
Backpressure occurs when a downstream system (consumer) cannot keep up with the rate of data being sent by an upstream system (producer). In distributed systems, this can arise in scenarios such as
A message queue filling up faster than it is drained.
A database struggling to handle the volume of write requests.
In the Choreography Pattern, services communicate directly with each other via asynchronous events, without a central controller. Each service is responsible for a specific part of the workflow and responds to events produced by other services. This pattern allows for a more autonomous and loosely coupled system.
The Outbox Pattern is a proven architectural solution to this problem, helping developers manage data consistency, especially when dealing with events, messaging systems, or external APIs.
The Queue-Based Loading Pattern leverages message queues to decouple and coordinate tasks between producers (such as applications or services generating data) and consumers (services or workers processing that data). By using queues as intermediaries, this pattern allows systems to manage workloads efficiently, ensuring seamless and scalable operation.
The Two-Phase Commit (2PC) protocol is a distributed algorithm used to ensure atomicity in transactions spanning multiple nodes or databases. Atomicity ensures that either all parts of a transaction are committed or none are, maintaining consistency in distributed systems.
The competing consumer pattern involves multiple consumers that independently compete to process messages or tasks from a shared queue. This pattern is particularly effective in scenarios where the rate of incoming tasks is variable or high, as it allows multiple consumers to process tasks concurrently.
The Retry Pattern is a design strategy used to manage transient failures by retrying failed operations. Instead of immediately failing an operation after an error, the pattern retries it with an optional delay or backoff strategy. This is particularly useful in distributed systems where failures are often temporary.
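A minimal retry-with-exponential-backoff sketch in Python; the attempt limit and delays are illustrative:

import time

def with_retry(operation, max_attempts=3, base_delay=1.0):
    """Retry a callable on failure, doubling the delay each attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # transient-failure budget exhausted
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...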
Developers try to use their RDBMS as a way to do background processing or service communication. While this can often appear to "get the job done", there are a number of limitations and concerns with this approach.
There are two divisions to any asynchronous processing: the service(s) that create processing tasks and the service(s) that consume and process these tasks accordingly.
GitHub Actions is a powerful tool for automating workflows directly in your repository. In this blog, we'll explore how to efficiently set up GitHub Actions to handle Docker workflows with environments, secrets, and protection rules.
Why Use GitHub Actions for Docker?
My codebase is on GitHub, and I want to try out GitHub Actions to build and push images to Docker Hub seamlessly.
Setting Up GitHub Environments
GitHub Environments let you define settings specific to deployment stages. Here's how to configure them:
1. Create an Environment
Go to your GitHub repository and navigate to Settings > Environments. Click New environment, name it (e.g., production), and save.
2. Add Secrets and Variables
Inside the environment settings, click Add secret to store sensitive information like DOCKER_USERNAME and DOCKER_TOKEN.
Use Variables for non-sensitive configuration, such as the Docker image name.
3. Optional: Set Protection Rules
Enforce rules like requiring manual approval before deployments. Restrict deployments to specific branches (e.g., main).
Sample Workflow for Building and Pushing Docker Images
Below is a GitHub Actions workflow for automating the build and push of a Docker image based on a minimal Flask app.
Workflow: .github/workflows/docker-build-push.yml
name: Build and Push Docker Image

on:
  push:
    branches:
      - main  # Trigger workflow on pushes to the `main` branch

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    environment: production  # Specify the environment to use
    steps:
      # Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v3

      # Log in to Docker Hub using environment secrets
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}

      # Build the Docker image using an environment variable
      - name: Build Docker image
        env:
          DOCKER_IMAGE_NAME: ${{ vars.DOCKER_IMAGE_NAME }}
        run: |
          docker build -t ${{ secrets.DOCKER_USERNAME }}/$DOCKER_IMAGE_NAME:${{ github.run_id }} .

      # Push the Docker image to Docker Hub
      - name: Push Docker image
        env:
          DOCKER_IMAGE_NAME: ${{ vars.DOCKER_IMAGE_NAME }}
        run: |
          docker push ${{ secrets.DOCKER_USERNAME }}/$DOCKER_IMAGE_NAME:${{ github.run_id }}
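The workflow assumes a Dockerfile at the repository root. A minimal sketch for the Flask app mentioned above; the file names and port are assumptions:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]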
Yesterday, I came to know about SBOMs from my friend Prasanth Baskar. Let's say you're building a website.
You decide to use a popular open-source tool to handle user logins. Here's the catch:
That tool depends on another library to handle passwords.
That library uses yet another library to store data.
Now, if one of those libraries has a bug or security issue, how do you even know it's there? In this blog, I will jot down my understanding of SBOMs with Trivy.
What is an SBOM?
A Software Bill of Materials (SBOM) is a list of everything that makes up a piece of software.
Think of it as,
A shopping list for all the tools, libraries, and pieces used to build the software.
A recipe card showing whatโs inside and how itโs structured.
For software, this means,
Components: These are the "ingredients," such as open-source libraries, frameworks, and tools.
Versions: Just like you might want to know if the cake uses almond flour or regular flour, knowing the version of a software component matters.
Licenses: Did the baker follow the rules for the ingredients they used? Software components also come with licenses that dictate how they can be used.
So Why is it Important?
1. Understanding What Youโre Using
When you download or use software, especially something complex, you often don't know what's inside. An SBOM helps you understand what components are being used. Are they secure? Are they trustworthy?
2. Finding Problems Faster
If someone discovers that a specific ingredient is bad, like flour with bacteria in it, you'd want to know if it's in your cake. Similarly, if a software library has a security issue, an SBOM helps you figure out if your software is affected and needs fixing.
For example,
When the Log4j vulnerability made headlines, companies that had SBOMs could quickly identify whether they used Log4j and take action.
3. Building Trust
Imagine buying food without a label or list of ingredients.
You'd feel doubtful, right? Similarly, an SBOM builds trust by showing users exactly what's in the software they're using.
4. Avoiding Legal Trouble
Some software components come with specific rules or licenses about how they can be used. An SBOM ensures these rules are followed, avoiding potential legal headaches.
How to Create an SBOM?
For many developers, creating an SBOM manually would be impossible because modern software can have hundreds (or even thousands!) of components.
Thankfully, there are tools that automatically create SBOMs. Examples include,
Trivy: A lightweight tool to generate SBOMs and find vulnerabilities.
SPDX: A standard format designed to make sharing SBOMs easier (https://spdx.dev/).
These tools can scan your software and automatically list out every component, its version, and its dependencies.
We will see an example of generating an SBOM for the nginx image using Trivy.
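A sketch of the command, assuming a local Trivy installation; CycloneDX is one of the SBOM formats Trivy can emit (SPDX is another):

trivy image --format cyclonedx --output nginx.sbom.json nginx:latest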
How Trivy Works
On running a Trivy scan:
1. It downloads the Trivy DB, including vulnerability information.
2. Pulls missing layers into the cache.
3. Analyzes the layers and stores the information in the cache.
4. Detects security issues and writes them to the SBOM file.
Note: a CVE refers to a Common Vulnerabilities and Exposures identifier. A CVE is a unique code used to catalog and track publicly known security vulnerabilities and exposures in software or systems.
1. Ansh Arora gave a tour of FOSS United: how it's formed, its motto, FOSS Hack, and FOSS Clubs.
2. Karthikeyan A K gave a talk on his open-source product injee (a no-configuration instant database for frontend developers). He gave me a personal demo; it's a great tool with a lot of potential. I would like to contribute!
I usually have a question: as a developer, I have logs, so isn't that enough? With a curious mind, I attended the Grafana & Friends Chennai meetup (Jan 25th, 2025).
Had an awesome time meeting fellow tech enthusiasts (DevOps engineers) and learning about cool ways to monitor and understand data better. Big shoutout to the Grafana Labs community and Presidio for hosting such a great event!
The sandwiches and juice were nice.
Talk Summary,
1. Making Data Collection Easier with Grafana Alloy - Dinesh J. and Krithika R shared how Grafana Alloy, combined with OpenTelemetry, makes it super simple to collect and manage data for better monitoring.
2. Running Grafana in Kubernetes - Lakshmi Narasimhan Parthasarathy (https://lnkd.in/gShxtucZ) showed how to set up Grafana in Kubernetes in 4 different ways (vanilla, Helm chart, Grafana Operator, kube-prom-stack). He is building a SaaS product https://lnkd.in/gSS9XS5m (Heroku on your own servers).
3. Observability for Frontend Apps with Grafana Faro - Selvaraj Kuppusamy showed how Grafana Faro can help frontend developers monitor what's happening on websites and apps in real time, making it easier to spot and fix issues quickly. We were able to see Core Web Vitals, and traces too. I was surprised by this.
Thanks, Achanandhi M, for organising this wonderful meetup. You did well. I came to know Achanandhi M through Medium; he regularly writes blogs on cloud-related topics. Check out his blog: https://lnkd.in/ghUS-GTc
Also, he shared some tasks for us:
1. Create your first Grafana dashboard. Objective: create a basic Grafana dashboard to visualize data in various formats such as tables, charts, and graphs. Also, try to connect to multiple data sources to get diverse data for your dashboard.
2. Monitor your Linux system's health with Prometheus, Node Exporter, and Grafana. Objective: use Prometheus, Node Exporter, and Grafana to monitor your Linux machine's health by tracking key metrics like CPU, memory, and disk usage.
3. Use Grafana Faro to track user actions (like button clicks) and identify the most used features.
Topic: RabbitMQ: Asynchronous Communication. Date: Sunday, Feb 2. Time: 10:30 AM to 1 PM. Venue: Online (the link will be shared by mail after RSVP).
Join us for an in-depth session on RabbitMQ in Tamil, where we'll explore:
Message queuing fundamentals
Connections, channels, and virtual hosts
Exchanges, queues, and bindings
Publisher confirmations and consumer acknowledgments
Use cases and live demos
Whether you're a developer, DevOps enthusiast, or curious learner, this session will empower you with the knowledge to build scalable and efficient messaging systems.
Don't miss this opportunity to level up your messaging skills!
Today, we faced a bug in our workflow due to an implicit default value in a third-party API. In this blog, I will share my experience for future reference.
Understanding the Problem
Consider an API where some fields are optional, and a default value is used when those fields are not provided by the client. This design is common and seemingly harmless. However, problems arise when,
Unexpected Categorization: The default value influences logic, such as category assignment, in ways the client did not intend.
Implicit Assumptions: The API assumes a default value aligns with the clientโs intention, leading to misclassification or incorrect behavior.
Debugging Challenges: When issues occur, clients and developers spend significant time tracing the problem because the default behavior is not transparent.
Here's an example of how this might manifest:
POST /items
{
  "name": "Sample Item",
  "category": "premium"
}
If the category field is optional and a default value of "basic" is applied when it's omitted, the following request:
POST /items
{
  "name": "Another Item"
}
might incorrectly classify the item as basic, even if the client intended it to be uncategorized.
Why This is a Code Smell
Implicit default handling for optional fields often signals poor design. Let's break down why:
Violation of the Principle of Least Astonishment: Clients may be unaware of default behavior, leading to unexpected outcomes.
Hidden Logic: The business logic embedded in defaults is not explicit in the APIโs contract, reducing transparency.
Coupling Between API and Business Logic: When defaults dictate core behavior, the API becomes tightly coupled to specific business rules, making it harder to adapt or extend.
Inconsistent Behavior: If the default logic changes in future versions, existing clients may experience breaking changes.
Best Practices to Avoid the Trap
Make Default Behavior Explicit
Clearly document default values in the API specification (but we still missed it.)
For example, use OpenAPI/Swagger to define optional fields and their default values explicitly
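A hypothetical OpenAPI fragment for the category field from the earlier example, making the default part of the documented contract:

components:
  schemas:
    Item:
      type: object
      required:
        - name
      properties:
        name:
          type: string
        category:
          type: string
          default: basic    # the default is now visible in the API contract
          nullable: true    # null can mean "no category specified"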
Avoid Implicit Defaults
Instead of applying defaults server-side, require the client to explicitly provide values, even if they are defaults.
This ensures the client is fully aware of the data being sent and its implications.
Use Null or Explicit Indicators
Allow optional fields to be explicitly null or undefined, and handle these cases appropriately.
In this case, the API can handle null as "no category specified" rather than applying a default.
Fail Fast with Validation
Use strict validation to reject ambiguous requests, encouraging clients to provide clear inputs.
{
  "error": "Field 'category' must be provided explicitly."
}
Version Your API Thoughtfully
Document changes and provide clear migration paths for clients.
If you must change default behaviors, ensure backward compatibility through versioning.
Implicit default values for optional fields can lead to unintended consequences, obscure logic, and hard-to-debug issues. Recognizing this pattern as a code smell is the first step to building more robust APIs. By adopting explicitness, transparency, and rigorous validation, you can create APIs that are easier to use, understand, and maintain.
As part of cloud design patterns, today I learned about the Gateway Aggregation Pattern. It seems like a motivation for GraphQL. In this blog, I write down notes on the Gateway Aggregation Pattern for my future self.
In the world of microservices, applications are often broken down into smaller, independent services, each responsible for a specific functionality.
While this architecture promotes scalability and maintainability, it can complicate communication between services. The Gateway Aggregation Pattern emerges as a solution, enabling streamlined interactions between clients and services.
What is the Gateway Aggregation Pattern?
The Gateway Aggregation Pattern involves introducing a gateway layer to handle requests from clients. Instead of the client making multiple calls to different services, the gateway aggregates the data by making calls to the relevant services and then returning a unified response to the client.
This pattern is particularly useful for:
Reducing the number of round-trips between clients and services.
Simplifying client logic.
Improving performance by centralizing the communication and aggregation logic.
How It Works
Client Request: The client sends a single request to the gateway.
Gateway Processing: The gateway makes multiple requests to the required services, aggregates their responses, and applies any necessary transformation.
Unified Response: The gateway sends a unified response back to the client.
This approach abstracts the complexity of service interactions from the client, improving the overall user experience.
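A minimal aggregation sketch in Python with aiohttp; the service URLs and product ID are hypothetical:

import asyncio
import aiohttp

SERVICES = {  # hypothetical downstream endpoints
    "product": "http://product-service/products/42",
    "reviews": "http://review-service/products/42/reviews",
    "availability": "http://inventory-service/products/42/stock",
}

async def aggregate():
    async with aiohttp.ClientSession() as session:
        async def fetch(name, url):
            async with session.get(url) as resp:
                return name, await resp.json()
        # Fan out to all services concurrently, then merge into one response
        parts = await asyncio.gather(*(fetch(n, u) for n, u in SERVICES.items()))
        return dict(parts)

if __name__ == "__main__":
    print(asyncio.run(aggregate()))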
Example Use Case
Imagine an e-commerce application where a client needs to display a product's details, reviews, and availability. Without a gateway, the client must call three different microservices:
Product Service: Provides details like name, description, and price.
Review Service: Returns customer reviews and ratings.
Inventory Service: Reports whether the product is in stock.
Using the Gateway Aggregation Pattern, the client makes a single request to the gateway. The gateway calls the three services, aggregates their responses, and returns a combined result, such as
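A hypothetical aggregated response:

{
  "product": { "name": "Laptop", "price": 999.99 },
  "reviews": [ { "rating": 5, "comment": "Great battery life" } ],
  "availability": { "inStock": true }
}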
By consolidating service calls and centralizing the aggregation logic, this pattern enhances performance and reduces complexity. Open-source tools like Express.js, Apache APISIX, Kong Gateway, and GraphQL make it easy to implement the pattern in diverse environments.
Today I learnt about alternate exchanges, which provide a way to handle undeliverable messages. In this blog, I share notes on what alternate exchanges are, why they are useful, and how to implement them in your RabbitMQ setup.
What Are Alternate Exchanges?
In the normal flow, a producer sends a message to an exchange, and if a queue is bound correctly, the message is placed in the correct queue.
An alternate exchange in RabbitMQ is a fallback exchange configured for another exchange. If a message cannot be routed to any queue bound to the primary exchange, RabbitMQ will publish the message to the alternate exchange instead. This mechanism ensures that undeliverable messages are not lost but can be processed in a different way, such as logging, alerting, or storing them for later inspection.
When does this scenario happen?
A message goes to an alternate exchange in RabbitMQ in the following scenarios:
1. No Binding for the Routing Key
The primary exchange does not have any queue bound to it with the routing key specified in the message.
Example: A message with routing key invalid_key is sent to a direct exchange that has no queue bound to invalid_key.
2. Unbound Queues:
Even if a queue exists, it is not bound to the primary exchange or the specific routing key used in the message.
Example: A queue exists for the primary exchange but is not explicitly bound to any routing key.
3. Exchange Type Mismatch
The exchange type (e.g., direct, fanout, topic) does not match the routing pattern of the message.
Example: A message is sent with a specific routing key to a fanout exchange that delivers to all bound queues regardless of the key.
4. Misconfigured Bindings
Bindings exist but do not align with the routing requirements of the message.
Example: A topic exchange has a binding for user.* but receives a message with the routing key order.processed.
5. Queue Deletion After Binding
A queue was bound to the exchange but is deleted or unavailable at runtime.
Example: A message with a valid routing key arrives, but the corresponding queue is no longer active.
6. TTL (Time-to-Live) Expired Queues
Messages routed to a queue with a time-to-live setting expire before being consumed and are re-routed to an alternate exchange if dead-lettering is enabled.
Example: A primary exchange routes messages to a TTL-bound queue, and expired messages are forwarded to the alternate exchange.
7. Exchange Misconfiguration
The primary exchange is operational, but its configurations prevent messages from being delivered to any queue.
Example: A missing or incorrect alternate-exchange argument setup leads to misrouting.
Use Cases for Alternate Exchanges
Error Handling: Route undeliverable messages to a dedicated queue for later inspection or reprocessing.
Logging: Keep track of messages that fail routing for auditing purposes.
Dead Letter Queues: Use alternate exchanges to implement dead-letter queues to analyze why messages could not be routed.
Load Balancing: Forward undeliverable messages to another exchange for alternative processing
How to Implement Alternate Exchanges in Python
Letโs walk through the steps to configure and use alternate exchanges in RabbitMQ using Python.
Scenario 1: Handling Messages with Valid and Invalid Routing Keys
producer.py
import pika

# Connect to RabbitMQ
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare the alternate exchange
channel.exchange_declare(exchange='alternate_exchange', exchange_type='fanout')

# Declare a queue and bind it to the alternate exchange
channel.queue_declare(queue='unroutable_queue')
channel.queue_bind(exchange='alternate_exchange', queue='unroutable_queue')

# Declare the primary exchange with an alternate exchange argument
channel.exchange_declare(
    exchange='primary_exchange',
    exchange_type='direct',
    arguments={'alternate-exchange': 'alternate_exchange'}
)

# Declare and bind a queue to the primary exchange
channel.queue_declare(queue='valid_queue')
channel.queue_bind(exchange='primary_exchange', queue='valid_queue', routing_key='key1')

# Publish a message with a valid routing key
channel.basic_publish(
    exchange='primary_exchange',
    routing_key='key1',
    body='Message with a valid routing key'
)
print("Message with valid routing key sent to 'valid_queue'.")

# Publish a message with an invalid routing key
channel.basic_publish(
    exchange='primary_exchange',
    routing_key='invalid_key',
    body='Message with an invalid routing key'
)
print("Message with invalid routing key sent to 'alternate_exchange'.")

# Close the connection
connection.close()
consumer.py
import pika

# Connect to RabbitMQ
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Consume messages from the alternate queue
method_frame, header_frame, body = channel.basic_get(queue='unroutable_queue', auto_ack=True)
if method_frame:
    print(f"Received message from alternate queue: {body.decode()}")
else:
    print("No messages in the alternate queue")

# Close the connection
connection.close()
Scenario 2: Logging Unroutable Messages
producer.py
import pika

# Connect to RabbitMQ
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare the alternate exchange
channel.exchange_declare(exchange='logging_exchange', exchange_type='fanout')

# Declare a logging queue and bind it to the logging exchange
channel.queue_declare(queue='logging_queue')
channel.queue_bind(exchange='logging_exchange', queue='logging_queue')

# Declare the primary exchange with a logging alternate exchange argument
channel.exchange_declare(
    exchange='primary_logging_exchange',
    exchange_type='direct',
    arguments={'alternate-exchange': 'logging_exchange'}
)

# Publish a message with an invalid routing key
channel.basic_publish(
    exchange='primary_logging_exchange',
    routing_key='invalid_logging_key',
    body='Message for logging'
)
print("Message with invalid routing key sent to 'logging_exchange'.")

# Close the connection
connection.close()
consumer.py
import pika

# Connect to RabbitMQ
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Consume messages from the logging queue
method_frame, header_frame, body = channel.basic_get(queue='logging_queue', auto_ack=True)
if method_frame:
    print(f"Logged message: {body.decode()}")
else:
    print("No messages in the logging queue")

# Close the connection
connection.close()
Given an array arr[] of positive integers and another integer target, determine whether there exist two distinct indices such that the sum of their elements equals target.
Input: arr[] = [1, 2, 4, 3, 6], target = 11
Output: false
Explanation: No pair makes a sum of 11.
My Approach
Iterate through the array
For each element, check whether the remainder (target - element) has already been seen, using a supporting hashmap.
If the remainder is present, return True.
Else, save the element in the hashmap and move to the next element.
# User function template for Python3

class Solution:
    def twoSum(self, arr, target):
        maps = {}                # element -> seen flag
        for item in arr:
            rem = target - item  # the value needed to reach target
            if rem in maps:      # seen earlier at a distinct index
                return True
            maps[item] = True
        return False
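A quick check against the example above:

print(Solution().twoSum([1, 2, 4, 3, 6], 11))  # False - no pair sums to 11
print(Solution().twoSum([1, 2, 4, 3, 6], 10))  # True  - 4 + 6 = 10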