RabbitMQ – All You Need To Know To Start Building Scalable Platforms
- Introduction
- What is a Message Queue?
- So Problem Solved !!! Not Yet
- RabbitMQ: Installation
- RabbitMQ: An Introduction (Optional)
- Building Blocks of Message Broker
- Producing, Consuming and Acknowledging
- Problem #1 – Task Queue for Background Job Processing
- Problem #2 – Broadcasting NEWS to all subscribers
- Intermediate Resources
- Prefetch Count
- Request Reply Pattern
- Dead Letter Exchange
- Alternate Exchanges
- Lazy Queues
- Quorum Queues
- Change Data Capture
- Handling Backpressure in Distributed Systems
- Choreography Pattern
- Outbox Pattern
- Queue Based Loading
- Two Phase Commit Protocol
- Competing Consumer
- Retry Pattern
- Can We Use Database as a Queue
- Let's Connect
Introduction
Let's take the example of an online food ordering system like Swiggy or Zomato. Suppose a user places an order through the mobile app. If the application follows a synchronous approach, it would first send the order request to the restaurant's system and then wait for confirmation. If the restaurant is busy, the app will have to keep waiting until it receives a response.
If the restaurantβs system crashes or temporarily goes offline, the order will fail, and the user may have to restart the process.
This approach leads to a poor user experience, increases the chances of failures, and makes the system less scalable, as multiple users waiting simultaneously can cause a bottleneck.
In a traditional synchronous communication model, one service directly interacts with another and waits for a response before proceeding. While this approach is simple and works for small-scale applications, it introduces several challenges, especially in systems that require high availability and scalability.
The main problems with synchronous communication include slow performance, system failures, and scalability issues. If the receiving service is slow or temporarily unavailable, the sender has no choice but to wait, which can degrade the overall performance of the application.
Moreover, if the receiving service crashes, the entire process fails, leading to potential data loss or incomplete transactions.
In this book, we will look at how these problems can be solved with a message queue.
What is a Message Queue?
A message queue is a system that allows different parts of an application (or different applications) to communicate with each other asynchronously by sending and receiving messages.
It acts like a buffer or an intermediary where messages are stored until the receiving service is ready to process them.
How It Works
- A producer (sender) creates a message and sends it to the queue.
- The message sits in the queue until a consumer (receiver) picks it up.
- The consumer processes the message and removes it from the queue.
This process ensures that the sender does not have to wait for the receiver to be available, making the system faster, more reliable, and scalable.
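The three steps above map directly onto a few client-library calls. Here is a minimal sketch in the style of RabbitMQ's Python client pika (the queue name `task_queue` and the `channel` object are assumptions for illustration, not part of the original example):

```python
# A minimal sketch of the produce -> queue -> consume flow, assuming
# `channel` is an open channel from a RabbitMQ client such as pika.

QUEUE = "task_queue"  # hypothetical queue name

def produce(channel, message):
    # Publish to the default exchange; the routing key names the queue.
    channel.queue_declare(queue=QUEUE)
    channel.basic_publish(exchange="", routing_key=QUEUE, body=message)

def consume(channel, handle):
    # Register a callback; the broker pushes queued messages to it.
    def on_message(ch, method, properties, body):
        handle(body)                                    # process the message
        ch.basic_ack(delivery_tag=method.delivery_tag)  # then let the broker remove it
    channel.basic_consume(queue=QUEUE, on_message_callback=on_message)
```

Note that `produce` returns as soon as the message is handed to the broker; if no consumer is running, the message simply waits in the queue.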
Real-Life Example
Imagine a fast-food restaurant where customers place orders at the counter. Instead of waiting at the counter for their food, customers receive a token number and move aside. The kitchen prepares the order in the background, and when it's ready, the token number is called for pickup.
In this analogy,
- The counter is the producer (sending orders).
- The queue is the token system (storing orders).
- The kitchen is the consumer (processing orders).
- The customer picks up the food when ready (message is consumed).
Similarly, in applications, a message queue helps decouple systems, allowing them to work at their own pace without blocking each other. RabbitMQ, Apache Kafka, and Redis are popular message queue systems used in modern software development.
So Problem Solved !!! Not Yet
It seems like the problem is solved, but the message life cycle in the queue still needs to be handled.
- Message Routing & Binding (Optional) – How a message is routed. If an exchange is used, the message is routed based on predefined rules.
- Message Storage (Queue Retention) – How long a message stays in the queue. The message stays in the queue until a consumer picks it up.
- If the consumer successfully processes the message, it sends an acknowledgment (ACK), and the message is removed. If the consumer fails, the message is requeued or moved to a dead-letter queue (DLQ).
- Messages that fail multiple times, are not acknowledged, or expire may be moved to a Dead-Letter Queue for further analysis.
- Messages stored only in memory can be lost if RabbitMQ crashes.
- Messages not consumed within their TTL expire.
- If a consumer fails to acknowledge a message, it may be redelivered and processed more than once.
- Messages failing multiple times may be moved to a DLQ.
- Too many messages in the queue due to slow consumers can cause system slowdowns.
- Network failures can disrupt message delivery between producers, RabbitMQ, and consumers.
- Messages with corrupt or bad data may cause repeated consumer failures.
To handle all the above problems, we need a stable, battle-tested, reliable tool. RabbitMQ is one such tool. In this book we will cover the basics of RabbitMQ.
RabbitMQ: Installation
For RabbitMQ installation, please refer to https://www.rabbitmq.com/docs/download. In this book we will use the RabbitMQ Docker image.
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:4.0-management
RabbitMQ: An Introduction (Optional)
What is RabbitMQ?
Imagine you're sending messages between friends, but instead of delivering them directly, you drop them in a mailbox, and your friend picks them up when they are ready. RabbitMQ acts like this mailbox, but for computer programs. It helps applications communicate asynchronously, meaning they don't have to wait for each other to process data.
RabbitMQ is a message broker, which means it handles and routes messages between different parts of an application. It ensures that messages are delivered efficiently, even when some components are running at different speeds or go offline temporarily.
Why Use RabbitMQ?
Modern applications often consist of multiple services that need to exchange data. Sometimes, one service produces data faster than another can consume it. Instead of forcing the slower service to catch up or making the faster service wait, RabbitMQ allows the fast service to place messages in a queue. The slow service can then process them at its own pace.
Some key benefits of using RabbitMQ include,
- Decoupling services: Components communicate via messages rather than direct calls, reducing dependencies.
- Scalability: RabbitMQ allows multiple consumers to process messages in parallel.
- Reliability: It supports message durability and acknowledgments, preventing message loss.
- Flexibility: Works with many programming languages and integrates well with different systems.
- Efficient Load Balancing: Multiple consumers can share the message load to prevent overload on a single component.
Key Features and Use Cases
RabbitMQ is widely used in different applications, including
- Chat applications: Messages are queued and delivered asynchronously to users.
- Payment processing: Orders are placed in a queue and processed sequentially.
- Event-driven systems: Used for microservices communication and event notification.
- IoT systems: Devices publish data to RabbitMQ, which is then processed by backend services.
- Job queues: Background tasks such as sending emails or processing large files.
Building Blocks of Message Broker
Connection & Channels
In RabbitMQ, connections and channels are fundamental concepts for communication between applications and the broker,
Connections: A connection is a TCP link between a client (producer or consumer) and the RabbitMQ broker. Each connection consumes system resources and is relatively expensive to create and maintain.
Channels: A channel is a virtual communication path inside a connection. It allows multiple logical streams of data over a single TCP connection, reducing overhead. Channels are lightweight and preferred for performing operations like publishing and consuming messages.
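As a sketch of the relationship: with pika, `connection` would come from `pika.BlockingConnection(pika.ConnectionParameters("localhost"))`, and every call to `connection.channel()` opens another lightweight channel over the same TCP connection (the helper below is illustrative, not part of any client API):

```python
def open_channels(connection, count):
    # One (expensive) TCP connection, many (cheap) channels multiplexed
    # over it -- e.g. one channel for publishing and one per consumer thread.
    return [connection.channel() for _ in range(count)]
```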
Queues – Message Store
A queue is a message buffer that temporarily holds messages until a consumer retrieves and processes them.
1. Queues operate on a FIFO (First In, First Out) basis, meaning messages are processed in the order they arrive (unless priorities or other delivery strategies are set).
2. Queues persist messages if they are declared as durable and the messages are marked as persistent, ensuring reliability even if RabbitMQ restarts.
3. Multiple consumers can subscribe to a queue, and messages are distributed among them in a round-robin manner. (To broadcast the same message to every consumer, each consumer instead gets its own queue bound to a fanout exchange.)
4. If no consumers are available, messages remain in the queue until a consumer connects.
Analogy: Think of a queue as a to-do list where tasks (messages) are stored until someone (a worker/consumer) picks them up and processes them.
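Point 2 above can be sketched as follows (the queue name `orders` is an assumption; `channel` is a pika-style channel):

```python
def declare_durable_queue(channel):
    # durable=True makes the queue definition survive a broker restart.
    # Note: the messages themselves must ALSO be published as persistent
    # to survive, e.g. with pika:
    #   channel.basic_publish(..., properties=pika.BasicProperties(delivery_mode=2))
    channel.queue_declare(queue="orders", durable=True)
```

Both flags are needed: a durable queue with transient messages loses the messages on restart, and a persistent message in a non-durable queue loses the queue.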
Exchanges – Message Distributor and Binding
An exchange is responsible for routing messages to one or more queues based on routing rules.
When a producer sends a message, it doesn't go directly to a queue but first reaches an exchange, which decides where to forward it.
This link between an exchange and a queue is called a Binding; it guides messages to the right place.
RabbitMQ supports different types of exchanges:
Direct Exchange (`direct`)
- Routes messages to queues based on an exact match between the routing key and the queue's binding key.
- Example: Sending messages to a specific queue based on a severity level (`info`, `error`, `warning`).
Fanout Exchange (`fanout`)
- Routes messages to all bound queues, ignoring routing keys.
- Example: Broadcasting notifications to multiple services at once.
Topic Exchange (`topic`)
- Routes messages based on pattern matching using `*` (matches exactly one word) and `#` (matches zero or more words).
- Example: Routing logs where `log.info` goes to one queue, `log.error` goes to another, and `log.*` captures all.
Headers Exchange (`headers`)
- Routes messages based on message headers instead of routing keys.
- Example: Delivering messages based on metadata like `device: mobile` or `region: US`.
Analogy: An exchange is like a traffic controller that decides which road (queue) a vehicle (message) should take based on predefined rules.
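The four types can be declared like this (exchange names are illustrative; `channel` is a pika-style channel):

```python
def declare_exchange_types(channel):
    # direct: exact match between routing key and binding key.
    channel.exchange_declare(exchange="logs_direct", exchange_type="direct")
    # fanout: copy every message to all bound queues, ignore the routing key.
    channel.exchange_declare(exchange="broadcast", exchange_type="fanout")
    # topic: pattern matching, '*' = one word, '#' = zero or more words.
    channel.exchange_declare(exchange="logs_topic", exchange_type="topic")
    # headers: route on message headers instead of the routing key.
    channel.exchange_declare(exchange="metadata", exchange_type="headers")
```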
Binding
A binding is a link between an exchange and a queue that defines how messages should be routed.
- When a queue is bound to an exchange with a binding key, messages with a matching routing key are delivered to that queue.
- A queue can have multiple bindings to different exchanges, allowing it to receive messages from multiple sources.
Example:
- A queue named `error_logs` can be bound to a direct exchange with the binding key `error`.
- Another queue, `all_logs`, can be bound to a topic exchange with the binding key `#` (a wildcard that matches every routing key) to receive all logs.
Analogy: A binding is like a GPS route guiding messages (vehicles) from the exchange (traffic controller) to the right queue (destination).
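The example above can be sketched for a direct exchange. Since the `#` wildcard only works on topic exchanges, on a direct exchange `all_logs` is simply bound once per severity (names are illustrative):

```python
def bind_log_queues(channel):
    channel.exchange_declare(exchange="logs", exchange_type="direct")
    # error_logs only receives messages published with routing key "error".
    channel.queue_declare(queue="error_logs")
    channel.queue_bind(exchange="logs", queue="error_logs", routing_key="error")
    # all_logs is bound once per severity, so it receives every level.
    channel.queue_declare(queue="all_logs")
    for severity in ("info", "warning", "error"):
        channel.queue_bind(exchange="logs", queue="all_logs", routing_key=severity)
```

This also shows that one queue may hold multiple bindings: each binding is just another route into the same queue.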
Producing, Consuming and Acknowledging
RabbitMQ follows the producer-exchange-queue-consumer model,
- Producing messages (Publishing): A producer creates a message and sends it to RabbitMQ, which routes it to the correct queue.
- Consuming messages (Subscribing): A consumer listens for messages from the queue and processes them.
- Acknowledgment: The consumer sends an acknowledgment (`ack`) after successfully processing a message.
- Durability: Ensures messages and queues survive RabbitMQ restarts.
Why do we need an Acknowledgement?
- Ensures message reliability – Prevents messages from being lost if a consumer crashes.
- Prevents message loss – Messages are redelivered if no ACK is received.
- Avoids unintentional message deletion – Messages stay in the queue until properly processed.
- Supports at-least-once delivery – Ensures every message is processed at least once.
- Enables load balancing – Distributes messages fairly among multiple consumers.
- Allows manual control – Consumers can acknowledge only after successful processing.
- Handles redelivery – Messages can be requeued and sent to another consumer if needed.
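A consumer callback putting these points into practice might look like this (a sketch; `handle` stands for the application's own processing function):

```python
def make_callback(handle):
    # Wrap an application-level handler in ack/nack bookkeeping.
    def on_message(ch, method, properties, body):
        try:
            handle(body)
            # Success: positive ack, the broker may now delete the message.
            ch.basic_ack(delivery_tag=method.delivery_tag)
        except Exception:
            # Failure: negative ack. requeue=False dead-letters the message
            # (if a DLX is configured) instead of retrying it forever.
            ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
    return on_message
```

Acknowledging only after `handle` succeeds is what gives at-least-once delivery: a crash mid-processing leaves the message unacked, so the broker redelivers it.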
Problem #1 – Task Queue for Background Job Processing
Context
A company runs an image processing application where users upload images that need to be resized, watermarked, and optimized before they can be served. Processing these images synchronously would slow down the user experience, so the company decides to implement an asynchronous task queue using RabbitMQ.
Problem
- Users upload large images that require multiple processing steps.
- Processing each image synchronously blocks the application, leading to slow response times.
- High traffic results in queue buildup, making it challenging to scale the system efficiently.
Proposed Solution
1. Producer Service
- Publishes image processing tasks to a RabbitMQ exchange (`task_exchange`).
- Sends the image filename as the message body to the queue (`image_queue`).
2. Worker Consumers
- Listen for new image processing tasks from the queue.
- Process each image (resize, watermark, optimize, etc.).
- Acknowledge completion to ensure no duplicate processing.
3. Scalability
- Multiple workers can run in parallel to process images faster.
producer.py
```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare exchange and queue
channel.exchange_declare(exchange='task_exchange', exchange_type='direct')
channel.queue_declare(queue='image_queue')

# Bind queue to exchange
channel.queue_bind(exchange='task_exchange', queue='image_queue', routing_key='image_task')

# List of images to process
images = ["image1.jpg", "image2.jpg", "image3.jpg"]

for image in images:
    channel.basic_publish(exchange='task_exchange', routing_key='image_task', body=image)
    print(f" [x] Sent {image}")

connection.close()
```
consumer.py
```python
import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare exchange and queue
channel.exchange_declare(exchange='task_exchange', exchange_type='direct')
channel.queue_declare(queue='image_queue')

# Bind queue to exchange
channel.queue_bind(exchange='task_exchange', queue='image_queue', routing_key='image_task')

def process_image(ch, method, properties, body):
    print(f" [x] Processing {body.decode()}")
    time.sleep(2)  # Simulate processing time
    print(f" [x] Finished {body.decode()}")
    ch.basic_ack(delivery_tag=method.delivery_tag)

# Start consuming
channel.basic_consume(queue='image_queue', on_message_callback=process_image)
print(" [*] Waiting for image tasks. To exit press CTRL+C")
channel.start_consuming()
```
Problem #2 – Broadcasting NEWS to all subscribers
Problem
A news application wants to send breaking news alerts to all subscribers, regardless of their location or interest.
Use a fanout exchange (`news_alerts_exchange`) to broadcast messages to all connected queues, ensuring all users receive the alert.
Example
- `mobile_app_queue` (for users receiving push notifications)
- `email_alert_queue` (for users receiving email alerts)
- `web_notification_queue` (for users receiving notifications on the website)
Solution Overview
- We create a fanout exchange called `news_alerts_exchange`.
- Multiple queues (`mobile_app_queue`, `email_alert_queue`, and `web_notification_queue`) are bound to this exchange.
- A producer publishes messages to the exchange.
- Each consumer listens to its respective queue and receives the alert.
Step 1: Producer (Publisher)
This script publishes a breaking news alert to the fanout exchange.
```python
import pika

# Establish connection
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare a fanout exchange
channel.exchange_declare(exchange="news_alerts_exchange", exchange_type="fanout")

# Publish a message
message = "Breaking News: Major event happening now!"
channel.basic_publish(exchange="news_alerts_exchange", routing_key="", body=message)

print(f" [x] Sent: {message}")

# Close connection
connection.close()
```
Step 2: Consumers (Subscribers)
Each consumer listens to its respective queue and processes the alert.
Consumer 1: Mobile App Notifications
```python
import pika

# Establish connection
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare exchange
channel.exchange_declare(exchange="news_alerts_exchange", exchange_type="fanout")

# Declare a queue and bind it to the exchange
queue_name = "mobile_app_queue"
channel.queue_declare(queue=queue_name)
channel.queue_bind(exchange="news_alerts_exchange", queue=queue_name)

# Callback function
def callback(ch, method, properties, body):
    print(f" [Mobile App] Received: {body.decode()}")

# Consume messages
channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)

print(" [*] Waiting for news alerts...")
channel.start_consuming()
```
Consumer 2: Email Alerts
```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="news_alerts_exchange", exchange_type="fanout")

queue_name = "email_alert_queue"
channel.queue_declare(queue=queue_name)
channel.queue_bind(exchange="news_alerts_exchange", queue=queue_name)

def callback(ch, method, properties, body):
    print(f" [Email Alert] Received: {body.decode()}")

channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)

print(" [*] Waiting for news alerts...")
channel.start_consuming()
```
Consumer 3: Web Notifications
```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="news_alerts_exchange", exchange_type="fanout")

queue_name = "web_notification_queue"
channel.queue_declare(queue=queue_name)
channel.queue_bind(exchange="news_alerts_exchange", queue=queue_name)

def callback(ch, method, properties, body):
    print(f" [Web Notification] Received: {body.decode()}")

channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)

print(" [*] Waiting for news alerts...")
channel.start_consuming()
```
How It Works
- The producer sends a news alert to the fanout exchange (`news_alerts_exchange`).
- All queues (`mobile_app_queue`, `email_alert_queue`, `web_notification_queue`) bound to the exchange receive the message.
- Each consumer listens to its queue and processes the alert.
This setup ensures all users receive the alert simultaneously across different platforms.
Intermediate Resources
Prefetch Count
Prefetch is a mechanism that defines how many messages can be delivered to a consumer at a time before the consumer sends an acknowledgment back to the broker. This ensures that the consumer does not get overwhelmed with too many unprocessed messages, which could lead to high memory usage and potential performance issues.
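With pika this is a single `basic_qos` call made before consuming; a sketch reusing the `image_queue` name from the earlier example (`channel` is a pika-style channel):

```python
def start_worker(channel, callback):
    # At most one unacknowledged message in flight per consumer: the broker
    # holds the next message back until the current one is acked. Higher
    # values trade fairness for throughput.
    channel.basic_qos(prefetch_count=1)
    channel.basic_consume(queue="image_queue", on_message_callback=callback)
```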
To Know More: https://parottasalna.com/2024/12/29/learning-notes-16-prefetch-count-rabbitmq/
Request Reply Pattern
The Request-Reply Pattern is a fundamental communication style in distributed systems, where a requester sends a message to a responder and waits for a reply. It's widely used in systems that require synchronous communication, enabling the requester to receive a response for further processing.
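The responder half of the pattern can be sketched like this (a sketch, not the full protocol: the requester is assumed to have set the `reply_to` and `correlation_id` message properties, and echoing the incoming properties back is a shortcut for carrying the correlation id):

```python
def make_responder(handle):
    # Wrap a request handler so its result is published back to the
    # queue named in the request's reply_to property.
    def on_request(ch, method, properties, body):
        reply = handle(body)
        ch.basic_publish(
            exchange="",
            routing_key=properties.reply_to,  # requester's private reply queue
            body=reply,
            properties=properties,            # echoes correlation_id back (sketch shortcut)
        )
        ch.basic_ack(delivery_tag=method.delivery_tag)
    return on_request
```

The requester matches each reply to its request by comparing correlation ids, since replies may arrive out of order.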
To Know More: https://parottasalna.com/2024/12/28/learning-notes-15-request-reply-pattern-rabbitmq/
Dead Letter Exchange
A dead letter is a message that cannot be delivered to its intended queue or is rejected by a consumer. Common scenarios where messages are dead lettered include,
- Message Rejection: A consumer explicitly rejects a message without requeuing it.
- Message TTL (Time-To-Live) Expiry: The message remains in the queue longer than its TTL.
- Queue Length Limit: The queue has reached its maximum capacity, and new messages are dropped.
- Routing Failures: Messages that cannot be routed to any queue from an exchange.
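Dead lettering is configured per queue via the `x-dead-letter-exchange` argument; a sketch with illustrative names (`channel` is a pika-style channel):

```python
def declare_with_dead_lettering(channel):
    # The dead-letter exchange and a queue that collects failed messages.
    channel.exchange_declare(exchange="dlx", exchange_type="fanout")
    channel.queue_declare(queue="dead_letters")
    channel.queue_bind(exchange="dlx", queue="dead_letters")
    # Rejected, expired, or overflowed messages from order_queue are
    # republished to "dlx" instead of being dropped.
    channel.queue_declare(
        queue="order_queue",
        arguments={"x-dead-letter-exchange": "dlx"},
    )
```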
To Know More: https://parottasalna.com/2024/12/28/learning-notes-14-dead-letter-exchange-rabbitmq/
Alternate Exchanges
An alternate exchange in RabbitMQ is a fallback exchange configured for another exchange. If a message cannot be routed to any queue bound to the primary exchange, RabbitMQ will publish the message to the alternate exchange instead. This mechanism ensures that undeliverable messages are not lost but can be processed in a different way, such as logging, alerting, or storing them for later inspection.
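An alternate exchange is configured via the `alternate-exchange` argument when declaring the primary exchange; a sketch with illustrative names (`channel` is a pika-style channel):

```python
def declare_with_alternate(channel):
    # A fanout exchange plus a queue to catch everything unroutable.
    channel.exchange_declare(exchange="unroutable", exchange_type="fanout")
    channel.queue_declare(queue="unroutable_messages")
    channel.queue_bind(exchange="unroutable", queue="unroutable_messages")
    # Messages the primary exchange cannot route go to "unroutable"
    # instead of being silently dropped.
    channel.exchange_declare(
        exchange="orders",
        exchange_type="direct",
        arguments={"alternate-exchange": "unroutable"},
    )
```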
To Know More: https://parottasalna.com/2024/12/27/learning-notes-12-alternate-exchanges-rabbitmq/
Lazy Queues
- Lazy Queues are designed to store messages primarily on disk rather than in memory.
- They are optimized for use cases involving large message backlogs where minimizing memory usage is critical.
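Lazy behaviour is requested with the `x-queue-mode` argument (note: from RabbitMQ 3.12 onwards, classic queues behave this way by default and the flag is ignored); queue name illustrative:

```python
def declare_lazy_queue(channel):
    # Page messages to disk aggressively instead of keeping them in RAM --
    # suited to large backlogs that are consumed slowly.
    channel.queue_declare(
        queue="bulk_jobs",
        arguments={"x-queue-mode": "lazy"},
    )
```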
To Know More: https://parottasalna.com/2024/12/26/learning-notes-10-lazy-queues-rabbitmq/
Quorum Queues
- Quorum Queues are distributed queues built on the Raft consensus algorithm.
- They are designed for high availability, durability, and data safety by replicating messages across multiple nodes in a RabbitMQ cluster.
- It is a replacement for mirrored queues.
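A quorum queue is selected with the `x-queue-type` argument at declaration time (quorum queues must be durable); queue name illustrative:

```python
def declare_quorum_queue(channel):
    # Raft-replicated across cluster nodes; survives the loss of a minority
    # of nodes without losing confirmed messages.
    channel.queue_declare(
        queue="critical_orders",
        durable=True,
        arguments={"x-queue-type": "quorum"},
    )
```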
To Know More: https://parottasalna.com/2024/12/25/learning-notes-9-quorum-queues-rabbitmq/
Change Data Capture
CDC stands for Change Data Capture. It's a technique that listens to a database and captures every change that happens in it. These changes can then be sent to other systems to,
- Keep data in sync across multiple databases.
- Power real-time analytics dashboards.
- Trigger notifications for certain database events.
- Process data streams in real time.
To Know More: https://parottasalna.com/2025/01/19/learning-notes-63-change-data-capture-what-does-it-do/
Handling Backpressure in Distributed Systems
Backpressure occurs when a downstream system (consumer) cannot keep up with the rate of data being sent by an upstream system (producer). In distributed systems, this can arise in scenarios such as
- A message queue filling up faster than it is drained.
- A database struggling to handle the volume of write requests.
- A streaming system overwhelmed by incoming data.
To Know More: https://parottasalna.com/2025/01/07/learning-notes-45-backpressure-handling-in-distributed-systems/
Choreography Pattern
In the Choreography Pattern, services communicate directly with each other via asynchronous events, without a central controller. Each service is responsible for a specific part of the workflow and responds to events produced by other services. This pattern allows for a more autonomous and loosely coupled system.
To Know More: https://parottasalna.com/2025/01/05/learning-notes-38-choreography-pattern-cloud-pattern/
Outbox Pattern
The Outbox Pattern is a proven architectural solution to the dual-write problem (updating a database and publishing an event atomically), helping developers manage data consistency, especially when dealing with events, messaging systems, or external APIs.
To Know More: https://parottasalna.com/2025/01/03/learning-notes-31-outbox-pattern-cloud-pattern/
Queue Based Loading
The Queue-Based Loading Pattern leverages message queues to decouple and coordinate tasks between producers (such as applications or services generating data) and consumers (services or workers processing that data). By using queues as intermediaries, this pattern allows systems to manage workloads efficiently, ensuring seamless and scalable operation.
To Know More: https://parottasalna.com/2025/01/03/learning-notes-30-queue-based-loading-cloud-patterns/
Two Phase Commit Protocol
The Two-Phase Commit (2PC) protocol is a distributed algorithm used to ensure atomicity in transactions spanning multiple nodes or databases. Atomicity ensures that either all parts of a transaction are committed or none are, maintaining consistency in distributed systems.
To Know More: https://parottasalna.com/2025/01/03/learning-notes-29-two-phase-commit-protocol-acid-in-distributed-systems/
Competing Consumer
The competing consumer pattern involves multiple consumers that independently compete to process messages or tasks from a shared queue. This pattern is particularly effective in scenarios where the rate of incoming tasks is variable or high, as it allows multiple consumers to process tasks concurrently.
To Know More: https://parottasalna.com/2025/01/01/learning-notes-24-competing-consumer-messaging-queue-patterns/
Retry Pattern
The Retry Pattern is a design strategy used to manage transient failures by retrying failed operations. Instead of immediately failing an operation after an error, the pattern retries it with an optional delay or backoff strategy. This is particularly useful in distributed systems where failures are often temporary.
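The idea in plain Python, independent of any broker (a sketch with exponential backoff; names are illustrative):

```python
import time

def retry(operation, attempts=3, base_delay=0.1):
    """Run `operation`, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Wait base_delay, then 2x, 4x, ... before the next attempt.
            time.sleep(base_delay * (2 ** attempt))
```

In a messaging context the same effect is often achieved declaratively, e.g. by nacking a message into a dead-letter queue with a TTL so it is redelivered after a delay.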
To Know More: https://parottasalna.com/2024/12/31/learning-notes-23-retry-pattern-cloud-patterns/
Can We Use Database as a Queue
Developers try to use their RDBMS as a way to do background processing or service communication. While this can often appear to "get the job done", there are a number of limitations and concerns with this approach.
There are two divisions to any asynchronous processing: the service(s) that create processing tasks and the service(s) that consume and process these tasks accordingly.
To Know More: https://parottasalna.com/2024/06/15/can-we-use-database-as-queue-in-asynchronous-process/
Let's Connect
Telegram: https://t.me/parottasalna/1
LinkedIn: https://www.linkedin.com/in/syedjaferk/
Whatsapp Channel: https://whatsapp.com/channel/0029Vavu8mF2v1IpaPd9np0s
Youtube: https://www.youtube.com/@syedjaferk
Github: https://github.com/syedjaferk/