alias gitdir="cd ~/Git/"
(This alias switches to the ~/Git/ directory, but I wanted it to switch directly into a repository.) So, I wrote a Bash function.
Write Code to .bashrc File
The .bashrc file runs when a new terminal window is opened. So, we need to write the function inside this file.
Code
gitrepo() {
    # Exact Match
    repoList=$(ls "$HOME/Git")
    if [ -n "$(echo "$repoList" | grep -w "$1")" ]; then
        cd "$HOME/Git/$1"
    else
        # Relevant Match
        getRepoName=$(echo "$repoList" | grep -i -m 1 "$1")
        if [ -n "$getRepoName" ]; then
            cd "$HOME/Git/$getRepoName"
        else
            echo "Repository Not Found"
            cd "$HOME/Git"
        fi
    fi
}
Code Explanation
The $repoList variable stores the list of directories inside the Git folder.
Function Logic Has Two Parts:
Exact Match
Relevant Match
Exact Match
if [ -n "$(echo "$repoList" | grep -w "$1")" ]; then
    cd "$HOME/Git/$1"
The if condition pipes $repoList into grep and checks whether anything was matched.
grep -w matches only whole words.
$1 is the function's first argument in Bash.
-n checks that a string is not empty. Example syntax: [ "$a" != "" ] is equivalent to [ -n "$a" ].
Relevant Match
getRepoName=$(echo "$repoList" | grep -i -m 1 "$1")
if [ -n "$getRepoName" ]; then
    cd "$HOME/Git/$getRepoName"
Relevant search: if no Exact Match is found, this logic is executed next.
getRepoName=$(echo "$repoList" | grep -i -m 1 "$1")
-i ignores case sensitivity.
-m 1 returns only the first match.
Example of -m with grep: ls | grep i3 returns both i3WM and i3status, but adding -m 1 ensures only the first match (i3WM) is selected.
No Match
If no match is found, it simply changes the directory to the Git folder.
I am a big fan of logs. I would like to log everything: every request and response of an API. But is that correct? Though logs helped our team greatly during this new year, I want to know whether there is a better approach to logging. That search led to this blog, where I jot down my notes on logging. Let's log it.
Throughout this blog, I try to generalize things and stay unbiased towards any particular language, though here and there you can see me leaning towards Python. Also, this is my opinion, not a hard rule.
Which is the best logger?
I’m not here to argue about which logger is the best, they all have their problems. But the worst one is usually the one you build yourself. Sure, existing loggers aren’t perfect, but trying to create your own is often a much bigger mistake.
1. Why Logging Matters
Logging provides visibility into your application’s behavior, helping to,
Diagnose and troubleshoot issues (this is the most common use case)
Monitor application health and performance (Metrics)
Meet compliance and auditing requirements (Audit Logs)
Enable debugging in production environments (we all do this.)
However, poorly designed logging strategies can lead to excessive log volumes, higher costs, and difficulty in pinpointing actionable insights.
2. Logging Best Practices
a. Use Structured Logs
Long story short, instead of unstructured plain text, use JSON or other structured formats. This makes parsing and querying easier, especially in log aggregation tools.
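Here is a minimal Python sketch of structured logging with a custom JSON formatter; the field names are my own choice, not a standard.
import json
import logging
class JsonFormatter(logging.Formatter):
    # Render each record as a single JSON line; field names are illustrative
    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])
logging.getLogger(__name__).info("payment processed for user_id=12345")
Log aggregation tools can then filter on these fields directly instead of regex-parsing free text.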
b. Use Appropriate Log Levels
Define and adhere to appropriate logging levels to avoid log bloat (a quick illustration follows the list):
DEBUG: Detailed information for debugging.
INFO: General operational messages.
WARNING: Indications of potential issues.
ERROR: Application errors that require immediate attention.
CRITICAL: Severe errors leading to application failure.
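As a quick Python illustration of how levels cut the noise (the messages are made up): with the root level set to WARNING, the DEBUG and INFO lines are simply dropped.
import logging
logging.basicConfig(level=logging.WARNING)  # only WARNING and above are emitted
logging.debug("cache state dumped for debugging")       # suppressed
logging.info("payment service started")                 # suppressed
logging.warning("disk usage at 85%")                    # emitted
logging.error("payment failed for user_id=12345")       # emitted
logging.critical("database connection pool exhausted")  # emitted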
c. Avoid Sensitive Data
Sanitize your logs to exclude sensitive information like passwords, PII, or API keys. Instead, mask or hash such data. Don't add tokens, even for testing.
d. Include Contextual Information
Incorporate metadata like request IDs, user IDs, or transaction IDs to trace specific events effectively.
3. Log Ingestion at Scale
As applications scale, log ingestion can become a bottleneck. Here’s how to manage it,
a. Centralized Logging
Stream logs to centralized systems like Elasticsearch, Logstash, Kibana (ELK), or cloud-native services like AWS CloudWatch, Azure Monitor, or Google Cloud Logging.
b. Optimize Log Volume
Log only necessary information.
Use log sampling to reduce verbosity in high-throughput systems.
Rotate logs to limit disk usage.
c. Use Asynchronous Logging
Asynchronous loggers improve application performance by delegating logging tasks to separate threads or processes. (Not suitable all the time; it has its own problems.)
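As a sketch of what this looks like with Python's standard library (the file name is arbitrary): QueueHandler keeps the hot path cheap while QueueListener does the slow I/O in a background thread.
import logging
import queue
from logging.handlers import QueueHandler, QueueListener
log_queue = queue.Queue(-1)
queue_handler = QueueHandler(log_queue)        # cheap: only enqueues the record
file_handler = logging.FileHandler("app.log")  # slow disk I/O happens in the listener thread
listener = QueueListener(log_queue, file_handler)
listener.start()
logger = logging.getLogger("async_example")
logger.setLevel(logging.INFO)
logger.addHandler(queue_handler)
logger.info("request handled")  # returns immediately; the listener writes to disk
listener.stop()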
d. Method return values are usually important
If you log inside a method and don't include the method's return value, you're missing important information. Make an effort to include it, even at the expense of slightly less elegant-looking code.
e. Include filename in error messages
Mention the path/to/file:line-number to pinpoint the location of the issue.
4. Logging Don’ts
a. Don’t Log Everything at the Same Level
Logging all messages at the INFO or DEBUG level creates noise and makes it difficult to identify critical issues.
b. Don’t Hardcode Log Messages
Avoid static, vague, or generic log messages. Use dynamic and descriptive messages that include relevant context.
# Bad Example
Error occurred.
# Good Example
Error occurred while processing payment for user_id=12345, transaction_id=abc-6789.
c. Don’t Log Sensitive or Regulated Data
Exposing personally identifiable information (PII), passwords, or other sensitive data in logs can lead to compliance violations (e.g., GDPR, HIPAA).
d. Don’t Ignore Log Rotation
Failing to implement log rotation can result in disk space exhaustion, especially in high traffic systems (Log Retention).
e. Don’t Overlook Log Correlation
Logs without request IDs, session IDs, or contextual metadata make it difficult to correlate related events.
f. Don’t Forget to Monitor Log Costs
Logging everything without considering storage and processing costs can lead to financial inefficiency in large-scale systems.
g. Keep the log message short
Long and verbose messages are a cost. The cost is in reading time and ingestion time.
h. Never log inside a loop
This might seem obvious, but just to be clear: logging inside a loop, even if the log level isn't visible by default, can still hurt performance. It's best to avoid this whenever possible.
If you absolutely need to log something at a hidden level and decide to break this guideline, keep it short and straightforward.
i. Log items you already “have”
We should avoid this,
logger.info("Reached X and value of method is {}", method());
Here, just for logging purposes, we call method() again. Even if the method is cheap, you're effectively running it regardless of the logging level! Instead, compute the value once and log what you already have, as in the sketch below.
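A minimal Python sketch (logger and method() are placeholders):
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def method():
    return 42  # placeholder for an expensive call
result = method()  # compute once and keep the value you already "have"
logger.info("Reached X and value of method is %s", result)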
j. Don’t log iterables
Even if it's a small list, the concern is that it might grow and “overcrowd” the log. Writing the contents of a list to the log can balloon it and slow processing noticeably. It also wastes time during debugging.
k. Don’t Log What the Framework Logs for You
There are great things to log. E.g. the name of the current thread, the time, etc. But those are already written into the log by default almost everywhere. Don’t duplicate these efforts.
l. Don’t log method entry/exit
Log only important events in the system. Entering or exiting a method isn’t an important event. E.g. if I have a method that enables feature X the log should be “Feature X enabled” and not “enable_feature_X entered”. I have done this a lot.
m. Don’t fill the method with logs
A complex method might include multiple points of failure, so it makes sense that we’d place logs in multiple points in the method so we can detect the failure along the way. Unfortunately, this leads to duplicate logging and verbosity.
Errors will typically map to error-handling code, which should be logged generically, so all error conditions should already be covered.
This sometimes creates situations where we need to change the flow or behaviour of the code so that the logging can be more elegant.
n. Don’t use AOP logging
AOP (Aspect-Oriented Programming) logging allows you to automatically add logs at specific points in your application, such as when methods are entered or exited.
In Python, AOP-style logging can be implemented using decorators or middleware that inject logs into specific points, such as method entry and exit. While it might seem appealing for detailed tracing, the same problems apply as in other languages like Java.
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def log_method_entry_exit(func):
    def wrapper(*args, **kwargs):
        logger.info(f"Entering: {func.__name__} with args={args} kwargs={kwargs}")
        result = func(*args, **kwargs)
        logger.info(f"Exiting: {func.__name__} with result={result}")
        return result
    return wrapper
# Example usage
@log_method_entry_exit
def example_function(x, y):
    return x + y
example_function(5, 3)
Why Avoid AOP Logging in Python
Performance Impact:
Injecting logs into every method increases runtime overhead, especially if used extensively in large-scale systems.
In Python, where function calls already add some overhead, this can significantly affect performance.
Log Verbosity:
If this decorator is applied to every function or method in a system, it produces an enormous amount of log data.
Debugging becomes harder because the meaningful logs are lost in the noise of entry/exit logs.
Limited Usefulness:
During local development, tools like Python debuggers (pdb), profilers (cProfile, line_profiler), or tracing libraries like trace are far more effective for inspecting function behavior and performance.
CI Issues:
Enabling such verbose logging during CI test runs can make tracking test failures more difficult because the logs are flooded with entry/exit messages, obscuring the root cause of failures.
Use Python-specific tools like pdb, ipdb, or IDE-integrated debuggers to inspect code locally.
o. Don’t double log
It's pretty common to log an error when we're about to throw one. However, since most error-handling code is generic, it's likely there's already a log in that generic error-handling code.
5. Ensuring Scalability
To keep your logging system robust and scalable,
Monitor Log Storage: Set alerts for log storage thresholds.
Implement Compression: Compress log files to reduce storage costs.
Automate Archival and Deletion: Regularly archive old logs and purge obsolete data.
Benchmark Logging Overhead: Measure the performance impact of logging on your application.
6. Logging for Metrics
Below is a list of items I wish could be logged for metrics; a small sketch of capturing a few of them follows the list.
General API Metrics
General API Metrics on HTTP methods, status codes, latency/duration, request size.
Total requests per endpoint over time. Requests per minute/hour.
System Metrics on CPU and Memory usage during request processing (this will be auto captured).
Usage Metrics
Traffic analysis on peak usage times.
Most/Least used endpoints.
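A rough Python sketch of capturing a few of these API metrics with a plain decorator; the handler shape (method, path) and the field names are assumptions, not any framework's API.
import logging
import time
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("api.metrics")
def log_request_metrics(handler):
    # Wrap an HTTP handler and log method, path, status and latency for every call
    def wrapper(method, path, *args, **kwargs):
        start = time.perf_counter()
        status = 500
        try:
            status, body = handler(method, path, *args, **kwargs)
            return status, body
        finally:
            duration_ms = (time.perf_counter() - start) * 1000
            logger.info("method=%s path=%s status=%s duration_ms=%.1f",
                        method, path, status, duration_ms)
    return wrapper
@log_request_metrics
def get_orders(method, path):
    return 200, ["order-1", "order-2"]
get_orders("GET", "/orders")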
7. Mapped Diagnostic Context (MDC)
MDC is the feature I longed for the most. I also ran into trouble by implementing it without a middleware.
Mapped Diagnostic Context (MDC) is a feature provided by many logging frameworks, such as Logback, Log4j, and SLF4J. It allows developers to attach contextual information (key-value pairs) to the logging events, which can then be automatically included in log messages.
This context helps in differentiating and correlating log messages, especially in multi-threaded applications.
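Python's logging module has no MDC out of the box, but here is a minimal sketch of the same idea using contextvars and a logging filter; request_id_var and RequestContextFilter are names I made up for illustration.
import contextvars
import logging
request_id_var = contextvars.ContextVar("request_id", default="-")
class RequestContextFilter(logging.Filter):
    # Copies the current request ID onto every log record, MDC-style
    def filter(self, record):
        record.request_id = request_id_var.get()
        return True
logging.basicConfig(format="%(asctime)s %(levelname)s [req=%(request_id)s] %(message)s",
                    level=logging.INFO)
logger = logging.getLogger(__name__)
logger.addFilter(RequestContextFilter())
def handle_request(request_id, payload):
    request_id_var.set(request_id)  # typically done once, in a middleware
    logger.info("processing payload of size %d", len(payload))
handle_request("abc-6789", b"hello")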
Why Use MDC?
Enhanced Log Clarity: By adding contextual information like user IDs, session IDs, or transaction IDs, MDC enables logs to provide more meaningful insights.
Easier Debugging: When logs contain thread-specific context, tracing the execution path of a specific transaction or user request becomes straightforward.
Reduced Log Ambiguity: MDC ensures that logs from different threads or components do not get mixed up, avoiding confusion.
Common Use Cases
Web Applications: Logging user sessions, request IDs, or IP addresses to trace the lifecycle of a request.
Microservices: Propagating correlation IDs across services for distributed tracing.
Background Tasks: Tracking specific jobs or tasks in asynchronous operations.
Limitations (curated from other blogs; I haven't tried these yet)
Thread Boundaries: MDC is thread-local, so its context does not automatically propagate across threads (e.g., in asynchronous executions). For such scenarios, you may need to manually propagate the MDC context.
Overhead: Adding and managing MDC context introduces a small runtime overhead, especially in high-throughput systems.
Configuration Dependency: Proper MDC usage often depends on correctly configuring the logging framework.
A few days back I came across the concept of CDC, which acts like a notifier of database events. Instead of polling, it makes each change available as an event in a queue, which many consumers can read. In this blog, I try to explain the concepts and types in a theoretical manner.
You run a library. Every day, books are borrowed, returned, or new books are added. What if you wanted to keep a live record of all these activities so you always know the exact state of your library?
This is essentially what Change Data Capture (CDC) does for your databases. It’s a way to track changes (like inserts, updates, or deletions) in your database tables and send them to another system, like a live dashboard or a backup system. (Might be a bad example. Don’t lose hope. Continue …)
CDC is widely used in modern technology to power,
Real-Time Analytics: Live dashboards that show sales, user activity, or system performance.
Data Synchronization: Keeping multiple databases or microservices in sync.
Event-Driven Architectures: Triggering notifications, workflows, or downstream processes based on database changes.
Data Pipelines: Streaming changes to data lakes or warehouses for further processing.
Backup and Recovery: Incremental backups by capturing changes instead of full data dumps.
It’s a critical part of tools like Debezium, Kafka, and cloud services such as AWS Database Migration Service (DMS) and Azure Data Factory. CDC enables companies to move towards real-time data-driven decision-making.
What is CDC?
CDC stands for Change Data Capture. It’s a technique that listens to a database and captures every change that happens in it. These changes can then be sent to other systems to,
Keep data in sync across multiple databases.
Power real-time analytics dashboards.
Trigger notifications for certain database events.
Process data streams in real time.
In short, CDC ensures your data is always up-to-date wherever it’s needed.
Why is CDC Useful?
Imagine you have an online store. Whenever someone,
Places an order,
Updates their shipping address, or
Cancels an order,
you need these changes to be reflected immediately across,
The shipping system.
The inventory system.
The email notification service.
Instead of having all these systems constantly query the database (this is one of the main reasons), which is slow and inefficient, CDC automatically streams these changes to the relevant systems.
This means,
Real-Time Updates: Systems receive changes instantly.
Improved Performance: Your database isn’t overloaded with repeated queries.
Consistency: All systems stay in sync without manual intervention.
How Does CDC Work?
Note: I haven’t tried all of these yet, but I have a conceptual feel for them.
CDC relies on tracking changes in your database. There are a few ways to do this,
1. Query-Based CDC
This method repeatedly checks the database for changes. For example:
Every 5 minutes, it queries the database: “What changed since my last check?”
Any new or modified data is identified and processed.
Drawbacks: This can miss changes if the timing isn’t right, and it’s not truly real-time since it is still polling. A rough sketch of this approach follows.
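A rough Python sketch of the idea, assuming a hypothetical orders table with an updated_at column; sqlite3 stands in for any database driver.
import sqlite3
import time
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT, status TEXT, updated_at TEXT)")
conn.execute("INSERT INTO orders VALUES ('ord-1', 'NEW', '2024-01-01 10:00:00')")
def poll_changes(last_seen):
    # Query-based CDC: ask the table what changed since the last check
    return conn.execute(
        "SELECT id, status, updated_at FROM orders WHERE updated_at > ? ORDER BY updated_at",
        (last_seen,),
    ).fetchall()
last_seen = "1970-01-01 00:00:00"
for _ in range(2):  # a real poller would loop forever
    for order_id, status, updated_at in poll_changes(last_seen):
        print("change detected:", order_id, status)  # forward to a queue or consumer here
        last_seen = updated_at
    time.sleep(1)  # e.g. every 5 minutes, as in the example above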
2. Log-Based CDC
Most modern databases (like PostgreSQL or MySQL) keep logs of every operation. Log-based CDC listens to these logs and captures changes as they happen.
Advantages
It’s real-time.
It’s lightweight since it doesn’t query the database directly.
3. Trigger-Based CDC
In this method, the database uses triggers to log changes into a separate table. Whenever a change occurs, a trigger writes a record of it.
Advantages: Simple to set up.
Drawbacks: Can slow down the database if not carefully managed.
Tools That Make CDC Easy
Several tools simplify CDC implementation. Some popular ones are,
Debezium: Open-source and widely used for log-based CDC with databases like PostgreSQL, MySQL, and MongoDB.
Striim: A commercial tool for real-time data integration.
AWS Database Migration Service (DMS): A cloud-based CDC service.
StreamSets: Another tool for real-time data movement.
These tools integrate with databases, capture changes, and deliver them to systems like RabbitMQ, Kafka, or cloud storage.
To help visualize CDC, think of,
Social Media Feeds: When someone likes or comments on a post, you see the update instantly. This is CDC in action.
Bank Notifications: Whenever you make a transaction, your bank app updates instantly. Another example of CDC.
In upcoming blogs, I will include a Debezium implementation of CDC.
Event-driven architectures are awesome, but they come with their own set of challenges. Missteps can lead to unreliable systems, inconsistent data, and frustrated users. Let’s explore some of the most common pitfalls and how to address them effectively.
1. Not Handling Duplicate Events
Events often get re-delivered due to retries or system failures. Without proper handling, duplicate events can,
Charge a customer twice for the same transaction: Imagine a scenario where a payment service retries a payment event after a temporary network glitch, resulting in a duplicate charge.
Cause duplicate inventory updates: For example, an e-commerce platform might update stock levels twice for a single order, leading to overestimating available stock.
Create inconsistent or broken system states: Duplicates can cascade through downstream systems, introducing mismatched or erroneous data.
Solution:
Assign unique IDs: Ensure every event has a globally unique identifier. Consumers can use these IDs to detect and discard duplicates.
Design idempotent processing: Structure your operations so they produce the same outcome even when executed multiple times. For instance, an API updating inventory could always set stock levels to a specific value rather than incrementing or decrementing.
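A minimal Python sketch of an idempotent consumer; here the processed IDs live in an in-memory set, though in practice they would sit in a database or cache.
processed_ids = set()
def set_stock_level(sku, level):
    print(f"stock[{sku}] = {level}")
def handle_event(event):
    event_id = event["id"]  # assumes every event carries a globally unique ID
    if event_id in processed_ids:
        return  # duplicate delivery: safely ignored
    processed_ids.add(event_id)
    # Idempotent update: set the stock level instead of incrementing or decrementing it
    set_stock_level(event["sku"], event["stock_level"])
handle_event({"id": "evt-1", "sku": "A100", "stock_level": 7})
handle_event({"id": "evt-1", "sku": "A100", "stock_level": 7})  # duplicate, no effect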
2. Not Guaranteeing Order
Events can arrive out of order when distributed across partitions or queues. This can lead to
Processing a refund before the payment: If a refund event is processed before the corresponding payment event, the system might show a negative balance or fail to reconcile properly.
Breaking logic that relies on correct sequence: Certain workflows, such as assembling logs or transactional data, depend on a strict event order to function correctly.
Solution
Use brokers with ordering guarantees: Message brokers like Apache Kafka support partition-level ordering. Design your topics and partitions to align with entities requiring ordered processing (e.g., user or account ID).
Add sequence numbers or timestamps: Include metadata in events to indicate their position in a sequence. Consumers can use this data to reorder events if necessary, ensuring logical consistency.
3. Dual Writes Without Atomicity
When writing to a database and publishing an event, one might succeed while the other fails. This can,
Lose events: If the event is not published after the database write, downstream systems might remain unaware of critical changes, such as a new order or a status update.
Cause mismatched states: For instance, a transaction might be logged in a database but not propagated to analytical or monitoring systems, creating inconsistencies.
Solution
Use the Transactional Outbox Pattern: In this pattern, events are written to an “outbox” table within the same database transaction as the main data write. A separate process then reads from the outbox and publishes events reliably.
Adopt Change Data Capture (CDC) tools: CDC tools like Debezium can monitor database changes and publish them as events automatically, ensuring no changes are missed.
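A toy Python sketch of the transactional outbox pattern using sqlite3; the table names and the publish step are illustrative only.
import json
import sqlite3
import uuid
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE outbox (id TEXT PRIMARY KEY, payload TEXT, published INTEGER DEFAULT 0)")
def place_order(order_id):
    with conn:  # one transaction: either both rows are written or neither is
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, "NEW"))
        event = {"type": "OrderPlaced", "order_id": order_id}
        conn.execute("INSERT INTO outbox (id, payload) VALUES (?, ?)",
                     (str(uuid.uuid4()), json.dumps(event)))
def relay_outbox(publish):
    # A separate relay process reads unpublished rows and publishes them reliably
    rows = conn.execute("SELECT id, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, payload in rows:
        publish(payload)
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    conn.commit()
place_order("ord-42")
relay_outbox(print)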
4. Non-Backward-Compatible Changes
Changing event schemas without considering existing consumers can break systems. For example:
Removing a field: A consumer relying on this field might encounter null values or fail altogether.
Renaming or changing field types: This can lead to deserialization errors or misinterpretation of data.
Solution:
Maintain versioned schemas: Introduce new schema versions incrementally and ensure consumers can continue using older versions during the transition.
Use schema evolution-friendly formats: Formats like Avro or Protobuf natively support schema evolution, allowing you to add fields or make other non-breaking changes easily.
Add adapters for compatibility: Build adapters or translators that transform events from new schemas to older formats, ensuring backward compatibility for legacy systems.
As part of the ACID series, I am refreshing the topic of Durability. In this blog I jot down notes on durability for better understanding.
What is Durability?
Durability ensures that the effects of a committed transaction are permanently saved to the database. This property prevents data loss by ensuring that committed transactions survive unexpected interruptions such as power outages, crashes, or system reboots.
PostgreSQL achieves durability through a combination of
Write-Ahead Logging (WAL): Changes are written to a log file before they are applied to the database.
Checkpointing: Periodic snapshots of the database state.
fsync and Synchronous Commit: Ensures data is physically written to disk.
1. Write-Ahead Logging (WAL)
PostgreSQL uses WAL to ensure durability. Before modifying the actual data, it writes the changes to a WAL file. This ensures that even if the system crashes, the database can be recovered by replaying the WAL logs.
-- Enable WAL logging (default in PostgreSQL)
SHOW wal_level;
2. Checkpoints
A checkpoint is a mechanism where the database writes all changes to disk, ensuring the database’s state is up-to-date. Checkpoints reduce the time required for crash recovery by limiting the number of WAL files that need to be replayed.
-- Force a manual checkpoint
CHECKPOINT;
3. Synchronous Commit
By default, PostgreSQL ensures that changes are flushed to disk before a transaction is marked as committed. This is controlled by the synchronous_commit setting.
-- Show current synchronous commit setting
SHOW synchronous_commit;
-- Change synchronous commit setting
SET synchronous_commit = 'on';
4. Backup and Replication
To further ensure durability, PostgreSQL supports backups and replication. Logical and physical backups can be used to restore data in case of catastrophic failures.
Practical Examples of Durability
Example 1: Ensuring Transaction Durability
BEGIN;
-- Update an account balance
UPDATE accounts SET balance = balance - 500 WHERE account_id = 1;
-- Commit the transaction
COMMIT;
-- Crash the system now; the committed transaction will persist.
Even if the database crashes immediately after the COMMIT, the changes will persist, as the transaction logs have already been written to disk.
Example 2: WAL Recovery after Crash
Suppose a crash occurs immediately after a transaction is committed.
During the recovery process, PostgreSQL replays the WAL logs to restore the committed transactions.
Example 3: Configuring Synchronous Commit
Control durability settings based on performance and reliability needs.
-- Use asynchronous commit for faster performance (risking durability)
SET synchronous_commit = 'off';
-- Perform a transaction
BEGIN;
UPDATE accounts SET balance = balance + 200 WHERE account_id = 2;
COMMIT;
-- Changes might be lost if the system crashes before the WAL is flushed.
Trade-offs of Durability
While durability ensures data persistence, it can affect database performance. For example:
Enforcing synchronous commits may slow down transactions.
Checkpointing can momentarily impact query performance due to disk I/O.
For high-performance systems, durability settings can be fine-tuned based on the application’s tolerance for potential data loss.
Durability and Other ACID Properties
Durability works closely with the other ACID properties:
Atomicity: Ensures the all-or-nothing nature of transactions.
Consistency: Guarantees the database remains in a valid state after a transaction.
Isolation: Prevents concurrent transactions from interfering with each other.
Today, I learnt about the compensating transaction pattern, which led me to the two-phase commit protocol, which helps in maintaining the atomicity of distributed transactions. Distributed transactions are hard.
In this blog, I jot down notes on the Two-Phase Commit protocol for better understanding.
The Two-Phase Commit (2PC) protocol is a distributed algorithm used to ensure atomicity in transactions spanning multiple nodes or databases. Atomicity ensures that either all parts of a transaction are committed or none are, maintaining consistency in distributed systems.
Why Two-Phase Commit?
In distributed systems, a transaction might involve several independent nodes, each maintaining its own database. Without a mechanism like 2PC, failures in one node can leave the system in an inconsistent state.
For example, consider an e-commerce platform where a customer places an order.
The transaction involves updating the inventory in one database, recording the payment in another, and generating a shipment request in a third system. If the payment database successfully commits but the inventory database fails, the system becomes inconsistent, potentially causing issues like double selling or incomplete orders. 2PC mitigates this by providing a coordinated protocol to commit or abort transactions across all nodes.
The Phases of 2PC
The protocol operates in two main phases
1. Prepare Phase (Voting Phase)
The coordinator node initiates the transaction and prepares to commit it across all participating nodes.
Request to Prepare: The coordinator sends a PREPARE request to all participant nodes.
Vote: Each participant checks if it can commit the transaction (e.g., no constraints violated, resources available). It logs its decision (YES or NO) locally and sends its vote to the coordinator. If any participant votes NO, the transaction cannot be committed.
2. Commit Phase (Decision Phase)
Based on the votes received in the prepare phase, the coordinator decides the final outcome.
Commit Decision:
If all participants vote YES, the coordinator logs a COMMIT decision, sends COMMIT messages to all participants, and participants apply the changes and confirm with an acknowledgment.
Abort Decision:
If any participant votes NO, the coordinator logs an ABORT decision, sends ABORT messages to all participants, and participants roll back any changes made during the transaction.
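To make the two phases concrete, here is a toy, in-process Python sketch; the names are illustrative, and a real deployment would add logging, timeouts and recovery, or use the tools listed below.
class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit = name, can_commit
    def prepare(self):
        # Phase 1: vote YES or NO and log the decision locally
        print(f"{self.name}: vote {'YES' if self.can_commit else 'NO'}")
        return self.can_commit
    def commit(self):
        print(f"{self.name}: COMMIT")  # Phase 2: apply the changes
    def abort(self):
        print(f"{self.name}: ABORT")   # Phase 2: roll back prepared work
def coordinator(participants):
    votes = [p.prepare() for p in participants]  # prepare (voting) phase
    if all(votes):                               # commit (decision) phase
        for p in participants:
            p.commit()
    else:
        for p in participants:
            p.abort()
coordinator([Participant("inventory"), Participant("payments", can_commit=False)])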
Implementation:
For a simple implementation of 2PC, we can try out a flow using RabbitMQ as the medium for the coordinator.
Basically, we need not write this from scratch; we have tools,
1. Relational Databases
Most relational databases have built-in support for distributed transactions and 2PC.
PostgreSQL: Implements distributed transactions using foreign data wrappers (FDWs) with PREPARE TRANSACTION and COMMIT PREPARED.
MySQL: Supports XA transactions, which follow the 2PC protocol.
Oracle Database: Offers robust distributed transaction support using XA.
Microsoft SQL Server: Provides distributed transactions through MS-DTC.
2. Distributed Transaction Managers
These tools manage distributed transactions across multiple systems.
Atomikos: A popular Java-based transaction manager supporting JTA/XA for distributed systems.
Bitronix: Another lightweight transaction manager for Java applications supporting JTA/XA.
JBoss Transactions (Narayana): A robust Java transaction manager that supports 2PC, often used in conjunction with JBoss servers.
3. Message Brokers
Message brokers provide transaction capabilities with 2PC.
RabbitMQ: Supports the 2PC protocol using transactional channels.
Apache Kafka: Supports transactions, ensuring “exactly-once” semantics across producers and consumers.
ActiveMQ: Provides distributed transaction support through JTA integration
4. Workflow Engines
Workflow engines can orchestrate 2PC across distributed systems.
Apache Camel: Can coordinate 2PC transactions using its transaction policy.
Camunda: Provides BPMN-based orchestration that can include transactional boundaries.
Zeebe: Supports distributed transaction workflows in modern architectures.
Advantages of 2PC
Consistency: Guarantees system consistency across all nodes.
Durability: Uses logs to ensure decisions survive node failures.
Challenges of 2PC
Blocking Nature: If the coordinator fails during the commit phase, participants must wait indefinitely unless a timeout or external mechanism is implemented.
Performance Overhead: Multiple message exchanges and logging operations introduce latency.
Single Point of Failure: The coordinator’s failure can stall the entire transaction.
Today, I learned about the AMQP protocol and the components of RabbitMQ (connections, channels, queues, exchanges, bindings and the different types of exchanges, acknowledgements and publisher confirmations). I learned all of this from CloudAMQP. In this blog, you will find crisp details on these topics.
1. Overview of AMQP Protocol
Advanced Message Queuing Protocol (AMQP) is an open standard for messaging middleware. It enables systems to exchange messages in a reliable and flexible manner.
Key components:
Producers: Applications that send messages.
Consumers: Applications that receive messages.
Broker: Middleware (e.g., RabbitMQ) that manages message exchanges.
Message: A unit of data transferred between producer and consumer.
2. How AMQP Works in RabbitMQ
RabbitMQ implements AMQP to facilitate message exchange. It acts as the broker, managing queues, exchanges, and bindings.
AMQP Operations:
Producer sends a message to an exchange.
The exchange routes the message to one or more queues based on bindings.
Consumer retrieves the message from the queue.
3. Connections and Channels
Connections
A connection is a persistent, long-lived TCP connection between a client application and the RabbitMQ broker. Connections are relatively resource-intensive because they involve socket communication and the overhead of establishing and maintaining the connection. Each connection is uniquely identified by the broker and can be shared across multiple threads or processes.
When an application establishes a connection to RabbitMQ, it uses it as a gateway to interact with the broker. This includes creating channels, declaring queues and exchanges, publishing messages, and consuming messages. Connections should ideally be reused across the application to reduce overhead and optimize resource usage.
Channels
A channel is a lightweight, logical communication pathway established within a connection. Channels provide a way to perform multiple operations concurrently over a single connection. They are less resource-intensive than connections and are designed to handle operations such as queue declarations, message publishing, and consuming.
Using channels allows applications to:
Scale efficiently: Instead of opening multiple connections, applications can open multiple channels over a single connection.
Isolate operations: Each channel operates independently. For instance, one channel can consume messages while another publishes.
How They Work Together
When a client connects to RabbitMQ, it first establishes a connection. Within that connection, it can open multiple channels. Each channel operates as a virtual connection, allowing concurrent tasks without needing separate TCP connections.
import pika
# Establish a connection to RabbitMQ
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
# Create multiple channels on the same connection
channel1 = connection.channel()
channel2 = connection.channel()
# Declare queues on each channel
channel1.queue_declare(queue='queue1')
channel2.queue_declare(queue='queue2')
# Publish messages on different channels
channel1.basic_publish(exchange='', routing_key='queue1', body='Message for Queue 1')
channel2.basic_publish(exchange='', routing_key='queue2', body='Message for Queue 2')
print("Messages sent to both queues!")
# Close the connection
connection.close()
Best Practices (Not Tried; Got this from the video)
Reusing Connections: Establish one connection per application or service and share it across threads or processes for efficiency.
Using Channels for Parallelism: Open separate channels for different operations like publishing and consuming.
Graceful Cleanup: Always close channels and connections when done to avoid resource leaks.
4. Queues
Act as message storage.
Can be:
Durable: Survives broker restarts.
Exclusive: Used by a single connection.
Auto-delete: Deleted when the last consumer disconnects.
5. Exchanges
An exchange in RabbitMQ is a routing mechanism that determines how messages sent by producers are directed to queues. Exchanges act as intermediaries between producers and queues, enabling flexible and efficient message routing based on routing rules and patterns.
Types of Exchanges
RabbitMQ supports four types of exchanges, each with its unique routing mechanism:
1. Direct Exchange
Routes messages to queues based on an exact match of the routing key.
If the routing key in the message matches the binding key of a queue, the message is routed to that queue.
Use case: Task queues where each task has a specific destination.
Example:
Queue queue1 is bound to the exchange with the routing key info.
A message with the routing key info is routed to queue1.
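Using pika, the example above looks roughly like this; the exchange and queue names are illustrative.
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='logs_direct', exchange_type='direct')
channel.queue_declare(queue='queue1')
channel.queue_bind(exchange='logs_direct', queue='queue1', routing_key='info')
# Routed to queue1 because the routing key matches the binding key 'info'
channel.basic_publish(exchange='logs_direct', routing_key='info', body='An info message')
connection.close()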
Declaration: Exchanges must be explicitly declared before use. If an exchange is not declared and a producer tries to publish a message to it, an error will occur.
Binding: Queues are bound to exchanges with routing keys or header arguments.
Publishing: Producers publish messages to exchanges with optional routing keys.
Durable and Non-Durable Exchanges
Durable Exchange: Survives broker restarts. Useful for critical applications.
Non-Durable Exchange: Deleted when the broker restarts. Suitable for transient tasks.
# Declare a durable exchange
channel.exchange_declare(exchange='durable_exchange', exchange_type='direct', durable=True)
Default Exchange
RabbitMQ provides a built-in default exchange (unnamed exchange) that routes messages directly to a queue with a name matching the routing key.
What is Spring Framework? Before we proceed with the definition, let’s first understand what a framework is.
A framework, in the software industry, is an environment that provides a collection of reusable software components or tools used to build applications more efficiently, with minimal code and time. It makes developers’ lives easier.
For example, if we are going to travel and stay somewhere, a furnished flat is preferable to setting up a new home: everything is ready-made to pick up and use. Another example is LibreOffice Draw, where you can draw, paint or create a logo; we have a set of drawing tools, and we just pick them up and use them.
Definition: Spring is a comprehensive framework that provides a broad set of tools and solutions for almost every kind of application development, whether you are building a small standalone application or a complex enterprise system, in particular web applications.
What is Spring Boot? Spring Boot is a layer on top of Spring that simplifies application development, letting the developer focus mostly on the business logic and leaving the boilerplate code to the Spring Boot framework.
Spring vs Spring Boot: The main difference is that in Spring the developer has more responsibility (or must be an advanced developer) to handle every step, which obviously takes more time, whereas with Spring Boot we can do the same stuff easily, quickly and safely. (We can do it with Spring too, but Spring Boot takes care of many tasks and minimizes the coder’s work.)
Ex. Spring is like a birthday party arranged by the parents themselves (every activity has to be taken care of: venue, invitations, cake, decoration, food arrangements, return gifts, etc.).
Spring Boot is like hiring an event organizer who takes care of everything, so the parents just concentrate on the child and guests (the business logic); whatever they want, the event organizer (Spring Boot) assists by providing it.
What are Spring Boot’s Advantages? Spring Boot is a layer on top of Spring that simplifies application development by providing the following:
Faster Setup (Based on the dependencies and annotations).
Simplified development through auto-configuration and application.properties or application.yml.
Embedded web servers shipped with the finished product (.jar/.war), eliminating the need for an external server like Tomcat to run the application during deployment.
Production-ready features (e.g., health checks to monitor the application’s health, logging, etc.).
Simplified Deployment.
Opinionated defaults. (TBD)
Security features.
Community and Ecosystem
Spring Framework’s main advantages are Inversion of Control and Dependency Injection.
IoC (Inversion of Control): The core principle of the Spring Framework. Usually the developer controls the program flow; here, as the name suggests, control is reversed, i.e. the framework controls the flow. Ex. the event organizer already has everything the party or the parents need. It lets developers write minimal, better-organized code.
It keeps everything ready for building the application, instead of searching for or creating things whenever required. My understanding here is,
It scans the dependencies and, based on that, creates the required beans and checks that the required .jar files are available on the classpath.
Through dependency injection, it passes beans as parameters to other beans when @Autowired is detected.
Spring Boot starts and initializes the IoC container (via ApplicationContext, the container for all beans). The IoC container scans the classpath for annotated classes like @Component, @Service, @Controller and @Repository, creates beans (objects) for those classes and makes them available for dependency injection. Spring Boot also reads application.properties or application.yml and applies those configurations to the beans or the application as needed.
Dependency Injection (DI): A design pattern that reduces the connections between system components, making the code more modular, maintainable and testable. It avoids tight coupling between classes and makes them loosely coupled.
Coupling here means one class depending on another class.
For example, at the same birthday party, suppose the parents arranged the setup with one theme (Dora-Bujju) for the kid, and later the kid changed its mind and asked for another theme (Julie, Jackie Chan). Now it is a waste of time and money, and a frustration for the parents. Instead, if they tell the organizer to change the theme (as it is the organizer’s job and there are still some days left), it gets updated easily.
In Dependency Injection, when one class wants to use another class, it should not create that class’s object (bean) directly inside its body (tight coupling), because future modification becomes tougher. Instead, pass the bean (object) as a parameter, injecting it into the required class (constructor DI). If the injected bean needs to change in the future, just replace it with another bean (object) in the parameter section, as in the sketch below.
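The idea is language-agnostic, so here is a tiny constructor-injection sketch in Python (the class names are made up): the dependent class never creates its dependency, it only receives it.
class CardPayment:
    def pay(self, amount):
        print(f"Paid {amount} by card")
class WalletPayment:
    def pay(self, amount):
        print(f"Paid {amount} from wallet")
class OrderService:
    def __init__(self, payment):
        # Constructor injection: the dependency is passed in, not created inside the class
        self.payment = payment
    def checkout(self, amount):
        self.payment.pay(amount)
# Swapping the "theme" later only means passing a different object (bean):
OrderService(CardPayment()).checkout(100)
OrderService(WalletPayment()).checkout(100)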
Internet Relay Chat (IRC) is a real-time text-based communication over the internet. Despite the rise of modern messaging platforms, IRC remains a vital part of online communication.
It allows users to join channels-virtual chat rooms where they can communicate with others in real-time. Each user connects through an IRC Client to an IRC Server, which manages the communication between users.
IRC operates on Client-Server Model.
Clients: Users connect to an IRC network using client software (e.g., WeeChat, Goguma). Each client must register with a unique name and user information when connecting to an IRC server.
Servers: These act as central hubs that manage connections and relay messages between clients. They handle multiple channels and maintain user connections.
Security Considerations
While standard IRC communications are not encrypted, it is possible to use TLS (Transport Layer Security) for secure client-to-server connections. However, messages become unencrypted once relayed to other users on standard connections. For secure file transfers, Secure DCC (Direct Client-to-Client) can be utilized.
The Evolution of IRC
Developed in 1988 by Jarkko Oikarinen, IRC was initially created for users on Bulletin Board Systems (BBS) to chat among themselves. Over the years, it has adapted to support a large number of global servers and clients while maintaining its core functionality. Although its popularity has waned with newer platforms like Slack and Discord, IRC continues to thrive within specific communities that value its simplicity and extensibility.
Conclusion
IRC remains an essential part of internet history as a straightforward yet effective means of real-time text communication. Its client-server architecture allows users to connect dynamically across various networks. While newer platforms have emerged, IRC’s enduring presence reflects its significance in facilitating online discussions and collaborations among diverse groups of users. Whether you’re looking for casual conversation or technical discussions, IRC provides a unique environment that fosters community engagement.
What is plaintext, from my point of view? It is simply text without any makeup or add-ons; it is just organic content. For example,
A handwritten grocery list that our mother used to give to our father
A To-Do List
An essay/composition written in our school days
Why is plaintext important?
– Only the quality of the content scores here; there is no marketing through beautification or formatting.
– Less storage.
– Ideal for long-term data storage because of cross-platform compatibility and universal accessibility. Much software uses plain text for configuration files (.ini, .conf, .json).
– Data interchange (.csv files move data into databases or spreadsheet applications).
– Command-line environments, and even cryptography.
– Batch processing: many batch processes use plain text files to define lists of actions or tasks that need to be executed in batch mode, such as renaming files, converting data formats, or running programs.
So plain text is simple, powerful and something special; we have no doubt about it.
What is IRC? IRC (Internet Relay Chat) is a plain-text-based, real-time communication system over the internet for one-on-one chat, group chat and online communities, making it ideal for discussion.
It was a popular network for free and open-source software (FOSS) projects and developers in the old days; many large projects (like Debian, Arch Linux, GNOME, and Python) used it for discussion. Nowadays too, IRC is used by many communities.
Usage : Mainly a discussion chat forum for open-source software developers, technology, and hobbyist communities.
Why IRC? We already have so many chat platforms which are very advanced and where I could use multimedia as well; this is very basic, right? So why should I go for it?
Yes, it is very basic, but the infrastructure of IRC is not like other chat platforms. From my point of view, the important differences are privacy and no ads.
Advantages over other Chat Platforms:
No Ads or Popups: We are not distracted by ads or popups, because our information is not shared with any companies for tracking or targeted marketing.
Privacy: Many IRC networks do not require your email, mobile number or even registration. You can simply type your name or nickname, select a server and start chatting instantly. Chat logs can also be stored if required.
Open Source and Free: Server, Client – the entire networking model is free and open source. Anybody can install the IRC servers/clients and connect with the network.
Decentralized: As servers are decentralized, the network keeps working even if one server has issues and goes down. Users can connect to different servers within the same network, which improves reliability and performance.
Low Latency: It is a free real-time communication system with low latency, which is very important for technical communities and time-sensitive conversations.
Customization and Extensibility: Custom scripts can be written to enhance functionality, and IRC supports automation through bots which can record chats, send notifications, moderate channels, etc.
Channel Control: Channel Operators (Group Admins) have fine control over the users, like who can join and who can be kicked off.
Lightweight Tool: As it is lightweight, no high-end hardware is required. IRC can be accessed even from older computers or low-powered devices like a Raspberry Pi.
History and Logging: Some IRC Servers allow logging of chats through bots or in local storage.
Inventor: IRC was developed by Jarkko Oikarinen (Finland) in 1988.
Some IRC networks/servers: Libera.Chat (#ubuntu, #debian, #python, #opensource), EFNet – Eris Free Network (#linux, #python, #hackers), IRCnet (#linux, #chat, #help), Undernet (#help, #anime, #music), QuakeNet (#quake, #gamers, #techsupport), DALnet – for both casual users and larger communities (#tech, #gaming, #music)
Directly on the website – Libera WebClient – https://web.libera.chat/gamja/ – you can click Join, then type the channel name (group), e.g. #kaniyam.
How to get connected with IRC: After installing an IRC client, open it. Add a new network (e.g., “Libera.Chat”). Set the server to irc.libera.chat (or any of the alternate servers above). Optionally, you can specify a port (the default is 6667 for non-SSL, 6697 for SSL). Once you’re connected, join a channel like #ubuntu, #python, or #freenode-migrants.
Popular channels to join on libera chat: #ubuntu, #debian, #python, #opensource, #kaniyam
Local Logs: Logs are typically saved in plain text and can be stored locally, allowing you to review past conversations. To get local logs from our system (IRC libera.chat server), look in this folder – /home//.local/share/weechat/logs/. From the Web-IRCBot history: https://ircbot.comm-central.org:8080
It requires the external dependency parse for parsing the Python string format with placeholders.
import parse
from date import TA_MONTHS
from date import datetime
# POC of a Tamil date-time parser
def strptime(format='{month}, {date} {year}', date_string="நவம்பர், 16 2024"):
    parsed = parse.parse(format, date_string)
    month = TA_MONTHS.index(parsed['month']) + 1
    date = int(parsed['date'])
    year = int(parsed['year'])
    return datetime(year, month, date)
print(strptime("{date}-{month}-{year}", "16-நவம்பர்-2024"))
# dt = datetime(2024,11,16);
# print(dt.strptime_ta("நவம்பர் , 16 2024","%m %d %Y"))
Hi folks, welcome to my blog. Here we are going to see some basic and important commands of Linux.
One of the most distinctive features of Linux is its command-line interface (CLI). Knowing a few basic commands can unlock many possibilities in Linux.
Essential Commands
Here are some fundamental commands to get you started:
ls - Lists files and directories in the current directory.
ls
cd - Changes to a different directory.
cd /home/user/Documents
pwd - Prints the current working directory.
pwd
cp - Copies files or directories.
cp file1.txt /home/user/backup/
mv - Moves or renames files or directories.
mv file1.txt file2.txt
rm - Removes files or directories.
rm file1.txt
mkdir - Creates a new directory.
mkdir new_folder
touch - Creates a new empty file.
touch newfile.txt
cat - Displays the contents of a file.
cat file1.txt
nano or vim - Opens a file in the text editor.
nano file1.txt
chmod - Changes file permissions.
chmod 755 file1.txt
ps - Displays active processes.
ps
kill - Terminates a process.
kill [PID]
Each command is powerful on its own, and combining them enables you to manage your files and system effectively. We will see more basics and interesting things about Linux in upcoming blogs, which I will be posting.
Introduction:
Linux is one of the most powerful and widely-used operating systems in the world, found everywhere from mobile devices to high-powered servers. Known for its stability, security, and open-source nature, Linux is an essential skill for anyone interested in IT, programming, or system administration.
In this blog, we are going to see what Linux is and why to choose Linux.
1) What is Linux?
Linux is an open-source operating system that was first introduced by Linus Torvalds in 1991. Built on a Unix-based foundation, Linux is community-driven, meaning anyone can view, modify, and contribute to its code. This collaborative approach has led to the creation of various Linux distributions, or "distros," each tailored to different types of users and use cases. Some of the most popular Linux distributions are:
Ubuntu: Known for its user-friendly interface, great for beginners.
Fedora: A cutting-edge distro with the latest software versions, popular with developers.
CentOS: Stable and widely used in enterprise environments.
Each distribution may look and function slightly differently, but they all share the same core Linux features.
2) Why choose Linux?
Linux is favored for many reasons, including its:
Stability: Linux is well-known for running smoothly without crashing, even in demanding environments.
Security: Its open-source nature allows the community to detect and fix vulnerabilities quickly, making it highly secure.
Customizability: Users have complete control to modify and customize their system.
Performance: Linux is efficient, allowing it to run on a wide range of devices, from servers to small IoT devices.
Conclusion
Learning Linux basics is the first step to becoming proficient in an operating system that powers much of the digital world. We will see more basics and interesting things about Linux in upcoming blogs, which I will be posting.
For the past few months, I was preparing for an English exam called CELPIP. It is an exam that tests Listening, Reading, Writing and Speaking. Though we know English, preparing for an exam is exhausting. We took online training from “Galaxy Training Academy“. https://galaxytraining.in The coach, “Jay Kumar”, gave a nice intro about the exam pattern. He gave many mock tests and good feedback on how to improve after each test. Last month, Nithya and I cleared the exam. It is a good feeling to be released from exam fear. We postponed many activities because of the exam preparation; will roll them all out soon. If you are preparing for any English exam, I suggest taking training and mock tests with “Galaxy Training Academy”.
——
In Canada, daylight saving ended yesterday. This happens every year in the fall season and is referred to as “fall back”. The clocks are moved one hour back, to adjust for the dark winter season. It feels like, all of a sudden, we got one extra hour to sleep in the morning.
——
On Oct 31, we had Deepavali, Halloween and our marriage day. Deepavali day went by nicely, remembering our childhood memories. The evening was filled with fun, as we went to the neighbourhood houses, with friends and kids, to play “Trick or Treat”. We saw many weird, spooky decorated houses and people. The kids collected a bag full of chocolates. Last year, it was too cold. This year, the same day had nice weather to roam around in the evening.
——
On Nov 1, we celebrated Deepavali by firing crackers. We bought a few crackers which emit light. Here, we don’t get loud crackers like atom bombs, 1000-piece fireworks shots, etc. With the limited available crackers, the kids enjoyed firing them, with all their friends together.
——
On Nov 8, we are planning a mega Deepavali event with around 250 people here. I am contributing to the planning/photography. Nithya and the kids are practicing dance with their friends. Hope it will be a fun-filled evening.
——
On Nov 2, I gave a talk at the Tolkappiyam Canada monthly meeting, about our efforts on writing Python code for the Tamil grammar rules in the Tolkappiyam book. It was a good meeting. A few of the participants agreed to collaborate. You can read our progress here – https://github.com/KaniyamFoundation/ProjectIdeas/issues/214
The kids started going to Tamil school every Saturday morning. This week, they received books. Viyan is good at Tamil and English. Iyal has started to read Tamil and English. Paari is trying to learn writing.
——
We conduct daily meetings in a text-based chat system called IRC (Internet Relay Chat), daily from 7-8 pm IST. It is good to see many people joining and discussing many things about open source software, and mentoring others to contribute to open source software. More details here – https://goinggnu.wordpress.com/2024/10/21/open-source-projects-mentoring-via-irc/
——
I have been practicing manual mode in photography for a few weeks. It feels like learning Linux and Emacs: it gives the most flexible options, and the results are stunning. It is better to learn it in the early days, so that we can do more magic with lighting.
——
The one thing I follow in photography is – shoot a lot, share a little. I keep and share only 10%. All others are deleted. Though it is hard to select the best photos, sharing only 10% is easy for viewers and brings a Wow from them.
In the programming world, if you say ‘I prefer watching videos to reading docs’, it means either you are already a programmer or you won’t become one.
Do you feel that you are struggling to be a good programmer, even after watching 100s of hours of videos?
Let me share one secret. It is the fear of reading and writing PlainText. The further you move away from reading and writing, the further programming will move away from you.
Programming is all about dealing with code, error messages, log files and documentation, all in PlainText. We also have emails, tickets, docs and reports on the stack of IT life.
If you love terminal and PlainText tools, you are already into reading and writing. The more you read and write, the more you can get clarity in thinking, which is the essential part of programming.
To embrace the simplicity and power of PlainText, a few friends started to discuss on IRC. Yes, the same 40+ year old Internet Relay Chat, a chat system which built the internet itself via chat.
Thanks to Indian Linux Users Group, Chennai, KanchiLUG, Kaniyam Foundation friends for joining the chat.
IRC stands for Internet Relay Chat. It is simply a text messaging service (a real-time online communication service), and messages are shared with hundreds of people at once in fractions of a second.
Actually, it is a protocol for real-time text messaging. It is mainly used for group discussion between like-minded people in chat rooms called “channels” (e.g. #Kaniyam), although it also supports private messages between two users.
It is mainly used for knowledge sharing in many open source communities – for example, if you want to chat with a developer from Linux, Firefox, etc.
The Chat Process works on a client-server networking model.
Wait! Wait!! Is it a Protocol or Chat App?
It is a protocol, which is implemented in clients and servers for text chat. WeeChat is one of the terminal IRC clients, and Pidgin is one of the desktop client apps.
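To see the client-server model concretely, here is a minimal Python sketch that speaks the raw IRC protocol (in the RFC 1459 style) over a plain socket. The server, nickname and channel below are only placeholders for illustration; a real client like WeeChat handles TLS, reconnects and much more.
# A rough sketch of the raw IRC protocol; nick and channel are placeholders.
import socket

server = "irc.libera.chat"   # any IRC server that accepts plain-text connections on port 6667
nick = "demo_nick_42"        # placeholder nickname

sock = socket.create_connection((server, 6667))
sock.sendall(f"NICK {nick}\r\n".encode())
sock.sendall(f"USER {nick} 0 * :Demo User\r\n".encode())

while True:
    for line in sock.recv(4096).decode(errors="replace").splitlines():
        print(line)
        # The server checks we are alive with PING; we must answer with PONG.
        if line.startswith("PING"):
            sock.sendall(line.replace("PING", "PONG", 1).encode() + b"\r\n")
        # Numeric reply 376 marks the end of the MOTD; after that we may join and speak.
        elif " 376 " in line:
            sock.sendall(b"JOIN #demo-channel\r\n")
            sock.sendall(b"PRIVMSG #demo-channel :hello from a raw socket\r\n")
Everything on the wire is just plain text lines ending in \r\n. That is why IRC stays so light, and why clients and servers are easy to write.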
Founder & Year :
IRC was developed in 1988 by Jarkko Oikarinen in Finland.
Merits/Advantages :
Simplicity : IRC is a simple, text-based environment for communication, without the multimedia features of other platforms.
Community building : IRC is ideal for interest-based or topic-based chat rooms, and is designed to foster a sense of community
Decentralized : IRC isn’t controlled by a single company, so anyone can set up their own server and network.
Privacy : Users can control their privacy with access levels, invitation-only channels, and one-on-one messaging.
Accessibility : IRC is accessible across various platforms, including desktop, laptop, and mobile devices.
Speed : IRC is a good option for fast text-based chat with a geographically distributed user base, as it has low latency and can transmit large amounts of data quickly. (Simply put, it’s all text – so speed is high.)
Flexibility : IRC can be used for a variety of purposes, from critical operations to gaming discussions.
File sharing : IRC clients can be used to create file servers and share files with other users.
Everything is fine, but we already have so many apps for chat. Why IRC?
So many people have so many answers, but according to me:
Reading is a better habit compared to watching videos (unless a video is really necessary). It will increase our imagination capacity, and imagination is the key factor behind most of our technologies and inventions.
Environmentally friendly – We are creating so much carbon footprint every day. A carbon footprint is the total amount of greenhouse gases (including carbon dioxide and methane) generated by our actions.
We are wasting so much energy (electricity and other resources) while sharing tons of files, images and videos every day.
I am not asking you to stop watching TV or using the internet, etc.
Why don’t you contribute to our Mother Earth at least by doing this simple thing?
IRC – Internet Relay Chat – is a text-based chat program. The 2k kids may compare this to Slack, Telegram, WhatsApp or any other instant messaging.
IRC was created by Jarkko Oikarinen in August 1988. Wow. Too old, Right?
Why is it not famous nowadays? People say that there are many cons compared to modern instant messaging.
These cons are the real pros of IRC. They are not bugs; they are the intended features.
Are you hearing the word IRC for the first time? Here is a quick beginners guide
Yes. You cannot add any image or video. A few servers accept file uploads, but text is still the preferred way of communication in IRC. Why?
Plain Text is the God of content, always.
It is searchable.
Anyone can read it faster.
Watching a one-hour video will take one hour. Reading its transcription will take a quarter of that time.
On the server side, it is a great headache to keep adding storage for all the images, files and videos. Ask the admins who manage Rocket.Chat, Mattermost or Mastodon instances; they will tell you the pain of constantly increasing disk space. IRC server admins live a peaceful life and can keep the history for decades.
If you cannot explain something in text, even videos won’t help many people.
IRC does not keep the history of chats
IRC simulates a real-time chatroom, like a meeting room. If you are late to a meeting, you miss what was spoken. You can read the minutes and know what was said.
Similarly, IRC is only real-time chat. You cannot read the previous chats like you do in Telegram-like instant messengers. Even in modern messengers, we don’t read all the chat history. Imagine, in the morning, a chat room with 200 unread messages. What do you do? Just skip all the messages and mark them read. That’s life. IRC knew this 30 years ago.
What If I want history?
As all the interaction happens as plain text, anyone can share the chat history online as a blog post, pastebin or GitHub gist. Ask any fellow members to export it and read it at leisure.
That’s too much work for me to ask for a chat history
Well, there are bots and bots and bots for IRC. Check for any logging bot and add it to your channel. Host the bot yourself or use this bot: https://ircbot.comm-central.org:8080/ Add it to your channel and read all the chat logs on its website.
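(With most bots, adding one to a channel is just the standard /invite command – for example, hypothetically, /invite mylogbot #mychannel – though the exact bot nick and registration steps depend on the bot you choose.)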
Who is still using IRC?
Most Free/Open Source software projects have IRC channels to provide free support. You can interact with the original developers of the software there. Anyone can create a channel, invite friends, hang out and have fun there.
Rules, Rules, Rules
It seems there are a few rules for chatting in IRC channels. They depend on each channel. To maintain goodness, there are rules everywhere. Even on roads, to avoid accidents, we have to follow rules. There are mailing list rules. Similarly, there are common IRC rules. Read about common IRC meeting rules here: https://fedoraproject.org/wiki/How_to_use_IRC#Meeting_Protocol
There are many commands to learn
Yes. As everything is via text only, we have to give a few commands to use IRC. There are no “Join/Mute/Leave/Kick” buttons; they are just commands. Check your instant messenger GUI: you will be clicking so many buttons to interact. They are the original commands here, as shown below.
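For example, a few commands that work in almost every IRC client (the nickname and channel below are only placeholders):
/nick mynick          # choose or change your nickname
/join #kaniyam        # join a channel
/msg somenick hello   # send a private message to a user
/topic                # show the current channel topic
/part #kaniyam        # leave a channel
/quit bye             # disconnect from the server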
I have to remember people by their nickname. Why can’t we connect with their phone number?
Welcome to the world of privacy. By giving all your contacts’ phone numbers to an instant messenger like WhatsApp, you are selling everyone’s privacy. Do you get frequent spam phone calls from banks and credit cards? We did not get these in IRC times. IRC keeps all your privacy. You don’t have to reveal your name, sex or country.
What if I want to chat with someone instantly, but they are not in the IRC channel? In an instant messenger, I leave them a message, and the next day they see it and reply.
IRC respects your offline life. You don’t need to be online 24/7. IRC is like going to the office and being in a meeting room. Are you in a meeting room, a tea shop gang or a friends’ gathering 24/7? How do you connect with them when they are not around? Yes, you make a call or send a message; you choose another medium to connect with them and wait. Do the same here. If you have some query, post it in IRC. If no one replies, or the people who can reply are not in the channel, then ask the same on the respective mailing list or Stack Overflow-like forums. Check the IRC logs the next day; you might have got answers. You can quickly search over the text to find your conversations.
If someone is not online, they are probably enjoying life in real time. Let them enjoy it. Ask for their available time and interact only at that time.
Meet.jit.si, BigBlueButton, Zoom, Skype, Facebook Live and YouTube Live are useful to meet people, discuss and run trainings. Can we do the same on IRC?
Yes. We can do all these things on IRC too. There are tons and tons of trainings happening over IRC. DGPLUG has been conducting training on Free Software every summer since 2009. You can read all the logs here: https://dgplug.org/irclogs/
Anyone can read/skim these logs quickly. If I share a YouTube channel or podcasts full of hundreds of hours of content, how long will you watch? It depends on the need. The same applies to IRC also. But it is easy and quick to read text.
Can I use IRC on the go with a mobile?
Yes. There are many mobile clients. IRCCloud’s web client and mobile client are modern and neat; I use their free plan. There are tons of clients available for all operating systems. Explore and find your lovable pair yourself.
What if I want to read all the history?
You can set up IRC bouncer software like ZNC, connect via a Matrix channel, or pay for IRCCloud.com-like services. https://thelounge.chat/ seems to be a good self-hostable IRC web client; install it on a server or a Raspberry Pi. If you want to stay away from proprietary software, for which you are the product, you have to self-host or pay some service provider. Or simply enable a free logging bot and read from its website.
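If you want to try The Lounge, here is a rough sketch of the setup, assuming Node.js and npm are already installed (check the project’s docs for the exact, current steps):
npm install -g thelounge    # install the self-hostable web client
thelounge add myuser        # create a web login (“myuser” is just an example)
thelounge start             # start it, then open the web UI in a browser (port 9000 by default)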
I can talk faster than typing.
But people can read faster than they can listen to or watch a video. If you want to reach more people, type the content in IRC or a blog post.
What about Matrix/Riot?
Matrix/Riot.im seems like an upgraded IRC with all the bells and whistles of other instant messengers. It can be connected with IRC using bots, and communication both ways is seamless. But since it allows multimedia content, exporting all the content for public view is still tough, and it is a high-maintenance task for server admins.
Want to discuss with thousands of people at the same time? Use IRC.
All the modern video chat services, YouTube and Facebook Live sessions take high bandwidth. Not everyone in the world has the bandwidth to connect to a video chat. Most video calls spend their time on discussions like “Am I audible? Do you see my screen? I cannot see the screen. Stop the noise.”
We don’t have a free video call service for hundreds of participants yet. You have to pay a lot for Zoom-like services, just to hear the above voices and see the blurred faces of participants.
Instead, call for a meeting over IRC. Follow a few meeting guidelines, and the whole meeting gets done with low noise. Anyone can connect even on a 2G or lower-bandwidth network.
Ok. Ok. Ok. Stop this marketing for IRC.
Few final thoughts.
IRC servers are self-hostable. irc.libera.chat and irc.oftc.net are a few major free servers where you can create a channel.
Respect other people’s time. Video calls take all the participants’ time. Text chats are quick and respect time.
Text chat is very minimalist. Enjoy the peacefulness of an IRC meeting.
Not all trainings need video chat. If something really needs to be demonstrated as a video, record a screencast, upload it online and ask everyone to watch.
Text is a great way to learn things. Remember, we still use textbooks, tutorials, documentation and Wikipedia to learn many things. Videos can help only as supporting material.
I will be available at #ilugc and #kaniyam on irc.libera.chat on weekdays during the daytime.
I will plan a few text-based trainings on free software and announce them here soon.
Thanks to ShakthiKannan and Mohan of Indian Linux Users Group, Chennai, and Kushal of DGPLUG for inspiring me to use IRC.