React Router renders components based on the URL without reloading the browser page, so navigating from one page to another happens without a full page reload. To use the router in a React project, first install the react-router-dom package:
npm install react-router-dom
Then import the router components:
import { BrowserRouter, Route, Routes } from 'react-router-dom';
After that, use the Link component instead of <a href=''> so navigation happens without a browser reload.
So the full import becomes:
import { BrowserRouter, Route, Routes, Link } from 'react-router-dom';
Example of using Link and routes:
Using Links:
'/' is the root path; it displays the default home page.
<Link to='/'>Home</Link>
<Link to='/about'>About</Link>
<Link to='/contact'>Contact</Link>
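Putting the pieces together, a minimal app might look like the sketch below (Home, About, and Contact are assumed placeholder components you would define yourself):

```jsx
import { BrowserRouter, Routes, Route, Link } from 'react-router-dom';

// Placeholder page components for illustration.
const Home = () => <h1>Home</h1>;
const About = () => <h1>About</h1>;
const Contact = () => <h1>Contact</h1>;

function App() {
  return (
    <BrowserRouter>
      {/* Links change the URL without reloading the page */}
      <nav>
        <Link to='/'>Home</Link>
        <Link to='/about'>About</Link>
        <Link to='/contact'>Contact</Link>
      </nav>
      {/* Routes picks the component matching the current URL */}
      <Routes>
        <Route path='/' element={<Home />} />
        <Route path='/about' element={<About />} />
        <Route path='/contact' element={<Contact />} />
      </Routes>
    </BrowserRouter>
  );
}

export default App;
```

Clicking a Link updates the URL, and Routes swaps in the matching component with no browser reload.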
Load balancing is crucial for distributing incoming network traffic across multiple servers, ensuring optimal resource utilization and improving application performance. One of the simplest and most popular load balancing algorithms is Round Robin. In this blog, we'll explore how to implement Round Robin load balancing using Flask as our backend application and HAProxy as our load balancer.
What is Round Robin Load Balancing?
Round Robin load balancing works by distributing incoming requests sequentially across a group of servers.
For example, the first request goes to Server A, the second to Server B, the third to Server C, and so on. Once all servers have received a request, the cycle repeats. This algorithm is simple and works well when all servers have similar capabilities.
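As a quick sketch, the rotation itself is just cycling through a list; here is a few lines of Python (the server names are placeholders):

```python
from itertools import cycle

servers = ["Server A", "Server B", "Server C"]
rotation = cycle(servers)  # endlessly repeats the server list in order

# Six incoming requests are handed out in strict rotation:
# two full cycles through the three servers.
assignments = [next(rotation) for _ in range(6)]
print(assignments)
# ['Server A', 'Server B', 'Server C', 'Server A', 'Server B', 'Server C']
```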
Step-by-Step Implementation with Docker
Step 1: Create the Flask Applications
We'll create three simple Flask apps, then a Dockerfile for each one.
Flask App 1 (app1.py)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
Flask App 2 (app2.py)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)
Flask App 3 (app3.py)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)
Each Flask app listens on a different port (5001, 5002, 5003).
Step 2: Create a Dockerfile for Each Flask Application
Dockerfile for Flask App 1 (Dockerfile.app1)
# Use the official Python image from the Docker Hub
FROM python:3.9-slim
# Set the working directory inside the container
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY app1.py .
# Install Flask inside the container
RUN pip install Flask
# Expose the port the app runs on
EXPOSE 5001
# Run the application
CMD ["python", "app1.py"]
Dockerfile for Flask App 2 (Dockerfile.app2)
FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]
Dockerfile for Flask App 3 (Dockerfile.app3)
FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]
Step 3: Create a configuration for HAProxy
global
    log stdout format raw local0
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check
Explanation:
frontend http_front: Defines the entry point for incoming traffic. It listens on port 80.
backend servers: Specifies the pool of servers HAProxy will distribute traffic across evenly: the three Flask apps (app1, app2, app3). The balance roundrobin directive selects the Round Robin load balancing algorithm.
server directives: Lists the backend servers with their hostnames and ports. The check option tells HAProxy to monitor the health of each server.
Step 4: Create a Dockerfile for HAProxy
Create a Dockerfile for HAProxy (Dockerfile.haproxy)
# Use the official HAProxy image from Docker Hub
FROM haproxy:latest
# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
# Expose the port for HAProxy
EXPOSE 80
Step 5: Create a Docker Compose File
To manage all the containers together, create a docker-compose.yml file.
The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
Each Flask app is built from its respective Dockerfile and runs on its port.
HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.
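A minimal docker-compose.yml matching that description might look like this (a sketch; the service names follow the HAProxy configuration above, and the Dockerfile names match the earlier steps):

```yaml
version: "3.8"
services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    ports:
      - "80:80"          # expose the load balancer on the host
    depends_on:
      - app1
      - app2
      - app3
```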
Step 6: Build and Run the Docker Containers
Run the following commands to build and start all the containers:
# Build and run the containers
docker-compose up --build
This command builds the Docker images for all three Flask apps and HAProxy and starts the containers.
Step 7: Test the Load Balancer
Open your browser or use a tool like curl to make requests to the HAProxy server. You should see the responses alternating between "Hello from Flask App 1!", "Hello from Flask App 2!", and "Hello from Flask App 3!" as HAProxy uses the Round Robin algorithm to distribute requests.
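For example, a short loop of curl requests against the HAProxy front end (assuming it is published on localhost port 80 as configured above) should cycle through the three backends:

```shell
# Send six requests; Round Robin should rotate through the three apps twice.
for i in $(seq 1 6); do
  curl -s http://localhost/
  echo
done
```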
Meet Jafer, a talented developer (self boast) working at a fast-growing tech company. His team is building an innovative app that fetches data from multiple third-party APIs in real time to provide users with up-to-date information.
Everything is going smoothly until one day, a spike in traffic causes their app to face a wave of "HTTP 500" and "Timeout" errors. Requests start failing left and right, and users are left staring at the dreaded "Data Unavailable" message.
Jafer realizes that he needs a way to make their app more resilient against these unpredictable network hiccups. That's when he discovers Tenacity, a powerful Python library designed to help developers handle retries gracefully.
Join Jafer as he dives into Tenacity and learns how to turn his app from fragile to robust with just a few lines of code!
Step 0: Mock Flask API
from flask import Flask, jsonify, make_response
import random
import time

app = Flask(__name__)

# Scenario 1: Random server errors
@app.route('/random_error', methods=['GET'])
def random_error():
    if random.choice([True, False]):
        return make_response(jsonify({"error": "Server error"}), 500)  # Simulate a 500 error randomly
    return jsonify({"message": "Success"})

# Scenario 2: Timeouts
@app.route('/timeout', methods=['GET'])
def timeout():
    time.sleep(5)  # Simulate a long delay that can cause a timeout
    return jsonify({"message": "Delayed response"})

# Scenario 3: 404 Not Found error
@app.route('/not_found', methods=['GET'])
def not_found():
    return make_response(jsonify({"error": "Not found"}), 404)

# Scenario 4: Rate-limiting (simulated with a fixed chance)
@app.route('/rate_limit', methods=['GET'])
def rate_limit():
    if random.randint(1, 10) <= 3:  # 30% chance to simulate rate limiting
        return make_response(jsonify({"error": "Rate limit exceeded"}), 429)
    return jsonify({"message": "Success"})

# Scenario 5: Empty response
@app.route('/empty_response', methods=['GET'])
def empty_response():
    if random.choice([True, False]):
        return make_response("", 204)  # Simulate an empty response with 204 No Content
    return jsonify({"message": "Success"})

if __name__ == '__main__':
    app.run(host='localhost', port=5000, debug=True)
To run the Flask app, use the command,
python mock_server.py
Step 1: Introducing Tenacity
Jafer decides to start with the basics. He knows that Tenacity will allow him to retry failed requests without cluttering his codebase with complex loops and error handling. So, he installs the library,
pip install tenacity
With Tenacity ready, Jafer decides to tackle his first problem, retrying a request that fails due to server errors.
Step 2: Retrying on Exceptions
He writes a simple function that fetches data from an API and wraps it with Tenacity's @retry decorator,
import requests
import logging
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, wait_fixed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(3),
       wait=wait_fixed(2),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_random_error():
    response = requests.get('http://localhost:5000/random_error')
    response.raise_for_status()  # Raises an HTTPError for 4xx/5xx responses
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_random_error()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
This code will attempt the request up to 3 times, waiting 2 seconds between each try. Jafer feels confident that this will handle the occasional hiccup. However, he soon realizes that he needs more control over which exceptions trigger a retry.
Step 3: Handling Specific Exceptions
Jafer's app sometimes receives a "404 Not Found" error, which should not be retried because the resource doesn't exist. He modifies the retry logic to handle only certain exceptions,
import requests
import logging
from tenacity import before_log, after_log
from requests.exceptions import HTTPError, Timeout
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_fixed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(3),
       wait=wait_fixed(2),
       retry=retry_if_exception_type((HTTPError, Timeout)),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_data():
    response = requests.get('http://localhost:5000/timeout', timeout=2)  # Set a short timeout to simulate failure
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_data()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
Now, the function retries only on HTTPError or Timeout, avoiding unnecessary retries for a "404" error. Jafer's app is starting to feel more resilient!
Step 4: Implementing Exponential Backoff
A few days later, the team notices that they're still getting rate-limited by some APIs. Jafer recalls the concept of exponential backoff, a strategy where the wait time between retries increases exponentially, reducing the load on the server and preventing further rate limiting.
He decides to implement it,
import requests
import logging
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, wait_exponential

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(5),
       wait=wait_exponential(multiplier=1, min=2, max=10),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_rate_limit():
    response = requests.get('http://localhost:5000/rate_limit')
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_rate_limit()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
With this code, the wait time starts at 2 seconds and roughly doubles with each retry, up to a maximum of 10 seconds. Jafer's app is now much less likely to be rate-limited!
Step 5: Retrying Based on Return Values
Jafer encounters another issue: some APIs occasionally return an empty response (204 No Content). These cases should also trigger a retry. Tenacity makes this easy with the retry_if_result feature,
import requests
import logging
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, retry_if_result

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(retry=retry_if_result(lambda x: x is None),
       stop=stop_after_attempt(3),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_empty_response():
    response = requests.get('http://localhost:5000/empty_response')
    if response.status_code == 204:
        return None  # Simulate an empty response
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_empty_response()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
Now, the function retries when it receives an empty response, ensuring that users get the data they need.
Step 6: Combining Multiple Retry Conditions
But Jafer isn't done yet. Some situations require combining multiple conditions. He wants to retry on HTTPError, Timeout, or a None return value. With Tenacity's retry_any feature, he can do just that,
import requests
import logging
from tenacity import before_log, after_log
from requests.exceptions import HTTPError, Timeout
from tenacity import retry_any, retry, retry_if_exception_type, retry_if_result, stop_after_attempt

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(retry=retry_any(retry_if_exception_type((HTTPError, Timeout)),
                       retry_if_result(lambda x: x is None)),
       stop=stop_after_attempt(3),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_data():
    response = requests.get("http://localhost:5000/timeout")
    if response.status_code == 204:
        return None
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_data()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
This approach covers all his bases, making the app even more resilient!
Step 7: Logging and Tracking Retries
As the app scales, Jafer wants to keep an eye on how often retries happen and why. He decides to add logging,
import logging
import requests
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, wait_fixed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(2), wait=wait_fixed(2),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_data():
    response = requests.get("http://localhost:5000/timeout", timeout=2)
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_data()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
This logs messages before and after each retry attempt, giving Jafer full visibility into the retry process. Now, he can monitor the app's behavior in production and quickly spot any patterns or issues.
The Happy Ending
With Tenacity, Jafer has transformed his app into a resilient powerhouse that gracefully handles intermittent failures. Users are happy, the servers are humming along smoothly, and Jafer's team has more time to work on new features rather than firefighting network errors.
By mastering Tenacity, Jafer has learned that handling network failures gracefully can turn a fragile app into a robust and reliable one. Whether it's dealing with flaky APIs, network blips, or rate limits, Tenacity is his go-to tool for retrying operations in Python.
So, the next time your app faces unpredictable network challenges, remember Jafer's story and give Tenacity a try; you might just save the day!
Once upon a time in Ooty, there was a small business called "Amutha Hotel," run by a passionate baker named Saravanan. Saravanan's bakery was famous for its delicious sambar, and as his customer base grew, he needed to keep track of orders, customer information, and inventory.
Being a techie, he decided to store all this information in a flat file, a simple spreadsheet named "HotelData.csv."
The Early Days: Simple and Sweet
At first, everything was easy. Saravanan's flat file had only a few columns: OrderID, CustomerName, Product, Quantity, and Price. Each row represented a new order, and it was simple enough to manage. Saravanan could quickly find orders, calculate totals, and even check his inventory by filtering the file.
The Business Grows: Complexity Creeps In
As the business boomed, Saravanan started offering new products, special discounts, and loyalty programs. He added more columns to his flat file, like Discount, LoyaltyPoints, and DeliveryAddress. His once-simple file began to swell with information.
Then, Saravanan decided to start tracking customer preferences and order history. He began adding multiple rows for the same customer, each representing a different order. His flat file now had repeating groups of data for each customer, and it became harder and harder to find the information he needed.
His flat file was getting out of hand. For every new order from a returning customer, he had to re-enter all their information (CustomerName, DeliveryAddress, LoyaltyPoints) over and over again. This duplication wasn't just tedious; it started to cause mistakes. One day, he accidentally typed "John Smyth" instead of "John Smith," and suddenly, his loyal customer was split into two different entries.
On a Busy Saturday
One busy Saturday, Saravanan opened his flat file to update the day's orders, but instead of popping up instantly as it used to, it took several minutes to load. As he scrolled through the endless rows, his computer started to lag, and the spreadsheet software even crashed a few times. The file had become too large and cumbersome for him to handle efficiently.
Customers were waiting longer for their orders to be processed because Saravanan was struggling to find their previous details and apply the right discounts. The flat file that once served him so well was now slowing him down, and it was affecting his business.
The Journaling
Techie Saravanan started to note these issues in a notepad. He badly wanted a solution that would solve these problems, so he started listing out the problems with examples while looking for a solution.
His journal continues…
Before databases became common for data storage, flat files (such as CSVs or text files) were often used to store and manage data. Such a file has no special structure; it's just lines of text that mean something only to the particular application that reads them.
However, these flat files posed several challenges, particularly when dealing with repeating groups, which are essentially sets of related fields that repeat multiple times within a record. Here are some of the key problems associated with repeating groups in flat files.
1. Data Redundancy
Description: Repeating groups can lead to significant redundancy, as the same data might need to be repeated across multiple records.
Example: If an employee can have multiple skills, a flat file might need to repeat the employee's name, ID, and other details for each skill.
Problem: This not only increases the file size but also makes data entry, updates, and deletions more prone to errors.
Eg: Suppose you are maintaining a flat file to track employees and their skills. Each employee can have multiple skills, which you store as repeating groups in the file.
EmployeeID, EmployeeName, Skill1, Skill2, Skill3, Skill4
1, John Doe, Python, SQL, Java,
2, Jane Smith, Excel, PowerPoint, Python, SQL
If an employee has four skills, you need to add four columns (Skill1, Skill2, Skill3, Skill4). If an employee has more than four skills, you must either add more columns or create a new row with repeated employee details.
2. Data Inconsistency
Description: Repeating groups can lead to inconsistencies when data is updated.
Example: If an employee's name changes, and it's stored multiple times in different rows because of repeating skills, it's easy for some instances to be updated while others are not.
Problem: This can lead to situations where the same employee is listed under different names or IDs in the same file.
Eg: Suppose you are maintaining a flat file to track employees and their skills. Each employee can have multiple skills, which you store as repeating groups in the file.
EmployeeID, EmployeeName, Skill1, Skill2, Skill3, Skill4
1, John Doe, Python, SQL, Java,
2, Jane Smith, Excel, PowerPoint, Python, SQL
If John's name changes to "John A. Doe," you must manually update each occurrence of "John Doe" across all rows, which increases the chance of inconsistencies.
3. Difficulty in Querying
Description: Querying data in flat files with repeating groups can be cumbersome and inefficient.
Example: Extracting a list of unique employees with their respective skills requires complex scripting or manual processing.
Problem: Unlike relational databases, which use joins to simplify such queries, flat files require custom logic to manage and extract data, leading to slower processing and more potential for errors.
Eg: Suppose you are maintaining a flat file to track employees and their skills. Each employee can have multiple skills, which you store as repeating groups in the file.
EmployeeID, EmployeeName, Skill1, Skill2, Skill3, Skill4
1, John Doe, Python, SQL, Java,
2, Jane Smith, Excel, PowerPoint, Python, SQL
Extracting a list of all employees proficient in "Python" requires you to search across multiple skill columns (Skill1, Skill2, etc.), which is cumbersome compared to a relational database where you can use a simple JOIN on a normalized EmployeeSkills table.
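To make the pain concrete, here is a small Python sketch of the custom logic such a query forces on you (the data mirrors the example above; the column count of four skills is an assumption baked into the code, which is exactly the problem):

```python
import csv
import io

# Hypothetical flat-file contents with repeating skill columns.
FLAT_FILE = """EmployeeID,EmployeeName,Skill1,Skill2,Skill3,Skill4
1,John Doe,Python,SQL,Java,
2,Jane Smith,Excel,PowerPoint,Python,SQL
"""

def employees_with_skill(data, skill):
    """Scan every SkillN column of every row -- the hand-rolled logic
    a relational JOIN would make unnecessary."""
    reader = csv.DictReader(io.StringIO(data))
    matches = []
    for row in reader:
        # The code must know there are exactly four skill columns.
        skills = [row[f"Skill{i}"].strip() for i in range(1, 5)]
        if skill in skills:
            matches.append(row["EmployeeName"])
    return matches

print(employees_with_skill(FLAT_FILE, "Python"))
# ['John Doe', 'Jane Smith']
```

If a fifth skill column is ever added, this function silently misses it, which is the brittleness the journal entry is describing.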
4. Limited Scalability
Description: Flat files do not scale well when the number of repeating groups or the size of the data grows.
Example: A file with multiple repeating fields can become extremely large and difficult to manage as the number of records increases.
Problem: This can lead to performance issues, such as slow read/write operations and difficulty in maintaining the file over time.
Eg: You are storing customer orders in a flat file where each customer can place multiple orders.
CustomerID, CustomerName, Order1ID, Order1Date, Order2ID, Order2Date, Order3ID, Order3Date
1001, Alice Brown, 5001, 2023-08-01, 5002, 2023-08-15,
1002, Bob White, 5003, 2023-08-05,
If Alice places more than three orders, you'll need to add more columns (Order4ID, Order4Date, etc.), leading to an unwieldy file with many empty cells for customers with fewer orders.
5. Challenges in Data Integrity
Description: Ensuring data integrity in flat files with repeating groups is difficult.
Example: Enforcing rules like "an employee can only have unique skills" is nearly impossible in a flat file format.
Problem: This can result in duplicated or invalid data, which is hard to detect and correct without a database system.
Eg: You are storing customer orders in a flat file where each customer can place multiple orders.
CustomerID, CustomerName, Order1ID, Order1Date, Order2ID, Order2Date, Order3ID, Order3Date
1001, Alice Brown, 5001, 2023-08-01, 5002, 2023-08-15,
1002, Bob White, 5003, 2023-08-05,
There's no easy way to enforce that each order ID is unique and corresponds to the correct customer, which could lead to errors or duplicated orders.
6. Complex File Formats
Description: Managing and processing flat files with repeating groups often requires complex file formats.
Example: Custom delimiters or nested formats might be needed to handle repeating groups, making the file harder to understand and work with.
Problem: This increases the likelihood of errors during data entry, processing, or when the file is read by different systems.
Eg: You are storing customer orders in a flat file where each customer can place multiple orders.
CustomerID, CustomerName, Order1ID, Order1Date, Order2ID, Order2Date, Order3ID, Order3Date
1001, Alice Brown, 5001, 2023-08-01, 5002, 2023-08-15,
1002, Bob White, 5003, 2023-08-05,
As the number of orders grows, the file format becomes increasingly complex, requiring custom scripts to manage and extract order data for each customer.
7. Lack of Referential Integrity
Description: Flat files lack mechanisms to enforce referential integrity between related groups of data.
Example: Ensuring that a skill listed in one file corresponds to a valid skill ID in another file requires manual checks or complex logic.
Problem: This can lead to orphaned records or mismatches between related data sets.
Eg: A fleet management company tracks maintenance records for each vehicle in a flat file. Each vehicle can have multiple maintenance records.
There's no way to ensure that the Maintenance1Type and Maintenance2Type fields are valid maintenance types or that the dates are in correct chronological order.
8. Difficulty in Data Modification
Description: Modifying data in flat files with repeating groups can be complex and error-prone.
Example: Adding or removing an item from a repeating group might require extensive manual edits across multiple records.
Problem: This increases the risk of errors and makes data management time-consuming.
Eg: A university maintains a flat file to record student enrollments in courses. Each student can enroll in multiple courses.
If a student drops a course or switches to a different one, manually editing the file can easily lead to errors, especially as the number of students and courses increases.
After listing down all these, Saravanan started looking into solutions. His search goes on…
Binary insertion sort is a sorting algorithm similar to insertion sort, but instead of using linear search to find the position where the element should be inserted, we use binary search. Thus, we reduce the number of comparisons for inserting one element from O(N) (Time complexity in Insertion Sort) to O(log N).
Best of two worlds
Binary insertion sort is a combination of insertion sort and binary search.
Insertion sort is a sorting technique that works by finding the correct position of an element in the array and then inserting it there. Binary search is a searching technique that works by repeatedly checking the middle of a sorted range to locate an element.
As the complexity of binary search is of logarithmic order, the searching step's time complexity also decreases to logarithmic order. Binary insertion sort is therefore an ordinary insertion sort in which binary search replaces the standard linear search.
How Binary Insertion Sort Works
Process flow:
In binary insertion sort, we divide the array into two subarrays: sorted and unsorted. The first element of the array is in the sorted subarray, and the rest of the elements are in the unsorted one.
We then iterate from the second element to the last element. For the i-th iteration, we make the current element our "key." This key is the element that we have to add to our existing sorted subarray.
Example
Consider the array 29, 10, 14, 37, 14.
First Pass
i = 1 (key = 10)
Since the first element is considered already sorted, we start from the second element. We then apply binary search on the sorted subarray.
In this scenario, the middle element of the sorted subarray (29) is greater than the key element 10.
So we place the key element before the middle element, i.e., at position 0, and shift the remaining elements right by 1 position.
Then we increment i and continue.
Second Pass
i = 2 (key = 14)
Now the key element is 14. We will apply binary search in the sorted array to find the position of the key element.
In this scenario, by applying binary search, we see that the key element should be placed at index 1 (between 10 and 29). Then we shift the remaining elements by 1 position.
Third Pass
i = 3 (key = 37)
Now the key element is 37. We will apply binary search in the sorted array to find the position of the key element.
In this scenario, by applying binary search, we see that the key element is already in its correct position.
Fourth Pass
i = 4 (key = 14)
Now the key element is 14. We will apply binary search in the sorted array to find the position of the key element.
In this scenario, by applying binary search, we see that the key element should be placed at index 2 (between 14 and 29). Then we shift the remaining elements by 1 position.
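The passes above translate directly into code. Here is a short Python sketch using bisect_right from the standard library, so equal keys keep their original left-to-right order (which is what makes the sort stable):

```python
from bisect import bisect_right

def binary_insertion_sort(arr):
    """Sort arr in place: binary search finds the insert position,
    then the elements to its right are shifted one step."""
    for i in range(1, len(arr)):
        key = arr[i]
        # Binary search over the sorted prefix arr[0:i];
        # bisect_right places equal keys after existing ones (stability).
        pos = bisect_right(arr, key, 0, i)
        # Shift arr[pos:i] one step right and drop the key into place.
        arr[pos + 1:i + 1] = arr[pos:i]
        arr[pos] = key
    return arr

print(binary_insertion_sort([29, 10, 14, 37, 14]))
# [10, 14, 14, 29, 37]
```

Note that the binary search saves comparisons, but the slice assignment still performs the O(i) shifting discussed next.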
Worst Case
For inserting the i-th element into its correct position in the sorted subarray, finding the position (pos) takes O(log i) steps. However, to insert the element, we need to shift all the elements from pos to i-1, which takes up to i steps in the worst case (when we have to insert at the starting position).
We make a total of N insertions, so the worst-case time complexity of binary insertion sort is O(N^2).
This occurs when the array is initially sorted in descending order.
Best Case
The best case occurs when each element is already in its sorted position. In that case, we don't have to shift any elements, so the insertion itself takes O(1).
But we still use binary search to find the position. If the element is already in place, binary search takes O(log i) steps. Thus, for the i-th element we perform O(log i) operations, so the best-case time complexity is O(N log N).
This occurs when the array is initially sorted in ascending order.
Average Case
For average-case time complexity, we assume that the elements of the array are jumbled. Thus, on average, inserting the i-th element requires about i/2 shifts, so the average time complexity of binary insertion sort is O(N^2).
Space Complexity Analysis
Binary insertion sort is an in-place sorting algorithm. This means that it only requires a constant amount of additional space. We sort the given array by shifting and inserting the elements.
Therefore, the space complexity of this algorithm is O(1) if we use iterative binary search. It will be O(log N) if we use recursive binary search, because of the O(log N) recursive calls.
Is Binary Insertion Sort a Stable Algorithm?
Yes, it is a stable sorting algorithm: elements with equal values appear in the same order in the final array as they did in the initial array.
Pros and Cons
Binary insertion sort works efficiently for smaller arrays.
This algorithm also works well for almost-sorted arrays, where the elements are near their position in the sorted array.
However, when the size of the array is large, binary insertion sort doesn't perform well. We can use other sorting algorithms like merge sort or quicksort in such cases.
Making fewer comparisons is also one of the strengths of this sorting algorithm; it is efficient to use when the cost of comparing keys is sufficiently high. For example, when sorting an array of strings, comparing two strings is relatively expensive.
Bonus Section
Binary Insertion Sort has a quadratic time complexity just as Insertion Sort. Still, it is usually faster than Insertion Sort in practice, which is apparent when comparison takes significantly more time than swapping two elements.
Once upon a time in the Electronic City of Bangalore, there was a popular digital tea kadai. This cafe was unique because it didn't serve traditional coffee or pastries. Instead, it served data and services to its customers: developers, businesses, and tech enthusiasts who were hungry for information and resources.
The Client:
One day, a young developer named Dinesh walked into the tea kadai. He was working on a new app and needed to fetch some data from the cafe's servers. In this story, Dinesh represents the client. As a client, his role was to request specific services and data from the cafe. He approached the counter and handed over his order slip, detailing what he needed.
The Server:
Behind the counter was Syed, the tea master, representing the server. Syed's job was to take Dinesh's request, process it, and deliver the requested data back to him.
Syed had access to a vast array of resources stored in the cafe's back room, where all the data was kept. When Dinesh made his request, Syed quickly went to the back, gathered the data, and handed it back to Dinesh.
The client-server architecture at Tea Kadai worked seamlessly.
Dinesh, as the client, could make requests whenever he needed, and
Syed, as the server, would respond by providing the requested data.
This interaction was efficient, allowing many clients to be served by a single server at the cafe.
Dockerβs Client-Server Technology
As Tea Kadai grew in popularity, it decided to expand its services to deliver data more efficiently and flexibly. To do this, they adopted a new technology called Docker, which helped them manage their operations more effectively.
Docker Client:
In the world of Docker at Tea Kadai, Dinesh still played the role of the client. But now, instead of just making simple data requests, he could request entire environments where he could test and run his applications.
These environments, called containers, were like personalized booths in the cafe where Dinesh could have his own setup with everything he needed to work on his app.
Dinesh used a special tool called the Docker Client to place his order. With this tool, he could specify exactly what he wanted in his container: the operating system, libraries, and applications needed for his app. The Docker Client was his interface for communicating with the cafe's new backend system.
Docker Server (Daemon):
Behind the scenes, Tea Kadai had installed a powerful system known as the Docker Daemon, which acted as the server in this setup. The Docker Daemon was responsible for creating, running, and managing the containers requested by clients like Dinesh.
When Dinesh sent his container request using the Docker Client, the Docker Daemon received it, built the container environment, and handed it back to Dinesh for use.
Docker Images:
The Tea Kadai had a collection of premade recipes called Docker Images. These images were like blueprints for creating containers, containing all the necessary ingredients and instructions.
When Dinesh requested a new container, the Docker Daemon used these images to quickly prepare the environment.
Flexibility and Isolation:
The beauty of Docker at Tea Kadai was that it allowed multiple clients like Dinesh to have their containers running simultaneously, each isolated from the others. This isolation ensured that one client's work wouldn't interfere with another's, just like having separate booths in the cafe for each customer. Dinesh could run, test, and even destroy his environment without affecting anyone else.
In the end,
In the vibrant city of Bangalore, Tea Kadai thrived by adopting client-server architecture and Docker's client-server technology. This approach allowed them to efficiently serve data and services while providing flexible, isolated environments for their clients. Dinesh and many others continued to come to the tea kadai, knowing they could always get what they needed in a reliable and innovative way.
In this blog post, we'll explore how to set up a simple web server using Caddy to serve an HTML page over HTTPS.
Additionally, we'll configure port forwarding on your router to make the server accessible from the internet using your WAN IP address.
Caddy is an excellent choice for this task because of its ease of use, automatic HTTPS, and modern web technologies support.
What You'll Need
Your WAN IP address (You can get this from your router)
A computer or server running Linux, macOS, or Windows
Caddy or a similar web server (nginx, etc.) installed on your machine
An HTML file to serve.
Access to your router's administration interface
Step 1: Install Caddy
Linux
To install Caddy on Linux, you can use the following commands:
# Download Caddy
wget https://github.com/caddyserver/caddy/releases/download/v2.6.4/caddy_2.6.4_linux_amd64.tar.gz
# Extract the downloaded file
tar -xzf caddy_2.6.4_linux_amd64.tar.gz
# Move Caddy to a directory in your PATH
sudo mv caddy /usr/local/bin/
# Give execute permissions
sudo chmod +x /usr/local/bin/caddy
macOS
On macOS, you can install Caddy using Homebrew:
brew install caddy
Windows
For Windows, download the Caddy binary from the official website and add it to your system PATH.
Step 2: Create an HTML Page
Create a simple HTML page that you want to serve with Caddy. For example, create a file named index.html with content along these lines (any valid HTML will do):
<!DOCTYPE html>
<html>
<head><title>My Home Server</title></head>
<body><h1>Hello from Caddy!</h1></body>
</html>
Now, you can access your server from anywhere using your WAN IP address (you can find the WAN IP in your router's administration interface). Enter the address in your browser to see your HTML page served over HTTPS!
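For reference, a minimal Caddyfile that serves the page over HTTPS might look like the following sketch; the site root path is a placeholder, and the domain here is the subdomain mentioned in Step 8 (replace both with your own values):

```
# Replace with your own domain, or use your WAN IP / port
home.parottasalna.com {
    # Directory containing index.html (adjust to your setup)
    root * /var/www/mysite
    file_server
}
```

With a real domain, Caddy obtains and renews the HTTPS certificate automatically.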
Step 8: No-IP Dynamic DNS
You can get a free domain (usually a subdomain or a randomly generated one) from No-IP and point it at your ip:port.
Or, if you own a domain, you can use that instead. I created a subdomain named home.parottasalna.com
Create a dictionary named student with the following keys and values, and print it.
"name": "Alice"
"age": 21
"major": "Computer Science"
Using the student dictionary, print the values associated with the keys "name" and "major".
Add a new key-value pair to the student dictionary: "gpa": 3.8. Then update the "age" to 22.
Remove the key "major" from the student dictionary using the del statement. Print the dictionary to confirm the removal.
Check if the key "age" exists in the student dictionary. Print True or False based on the result.
Create a dictionary prices with three items, e.g., "apple": 0.5, "banana": 0.3, "orange": 0.7. Iterate over the dictionary and print each key-value pair.
Use the len() function to find the number of key-value pairs in the prices dictionary. Print the result.
Use the get() method to access the "gpa" in the student dictionary. Try to access a non-existing key, e.g., "graduation_year", with a default value of 2025.
Create another dictionary extra_info with the following keys and values. Also merge extra_info into the student dictionary using the update() method.
"graduation_year": 2025
"hometown": "Springfield"
Create a dictionary squares where the keys are numbers from 1 to 5 and the values are the squares of the keys. Use dictionary comprehension.
Using the prices dictionary, print the keys and values as separate lists using the keys() and values() methods.
Create a dictionary school with two nested dictionaries. Access and print the age of "student2".
"student1": {"name": "Alice", "age": 21}
"student2": {"name": "Bob", "age": 22}
Use the setdefault() method to add a new key "advisor" with the value "Dr. Smith" to the student dictionary if it does not exist.
Use the pop() method to remove the "hometown" key from the student dictionary and store its value in a variable. Print the variable.
Use the clear() method to remove all items from the prices dictionary. Print the dictionary to confirm it's empty.
Make a copy of the student dictionary using the copy() method. Modify the copy by changing "name" to "Charlie". Print both dictionaries to see the differences.
Create two lists: keys = ["name", "age", "major"] and values = ["Eve", 20, "Mathematics"]. Use the zip() function to create a dictionary from these lists.
Use the items() method to iterate over the student dictionary and print each key-value pair.
Given a list of fruits: ["apple", "banana", "apple", "orange", "banana", "banana"], create a dictionary fruit_count that counts the occurrences of each fruit.
Use collections.defaultdict to create a dictionary word_count that counts the number of occurrences of each word in a list: ["hello", "world", "hello", "python"].
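If you want to check your work, here are sketch solutions for a few of the exercises above (variable names follow the prompts):

```python
from collections import defaultdict

# Create and inspect the student dictionary
student = {"name": "Alice", "age": 21, "major": "Computer Science"}
print(student["name"], student["major"])

# Add gpa, update age, then remove major
student["gpa"] = 3.8
student["age"] = 22
del student["major"]
print("age" in student)  # → True

# Dictionary comprehension for the squares of 1..5
squares = {n: n ** 2 for n in range(1, 6)}
print(squares)  # → {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}

# Build a dictionary from two lists with zip()
keys = ["name", "age", "major"]
values = ["Eve", 20, "Mathematics"]
combined = dict(zip(keys, values))

# Count occurrences with a plain dict and get() ...
fruits = ["apple", "banana", "apple", "orange", "banana", "banana"]
fruit_count = {}
for fruit in fruits:
    fruit_count[fruit] = fruit_count.get(fruit, 0) + 1
print(fruit_count)  # → {'apple': 2, 'banana': 3, 'orange': 1}

# ... or with collections.defaultdict
word_count = defaultdict(int)
for word in ["hello", "world", "hello", "python"]:
    word_count[word] += 1
print(dict(word_count))  # → {'hello': 2, 'world': 1, 'python': 1}
```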
We are planning to open a botanical garden with flowers that will attract people to visit.
Morning: Planting Unique Flowers
One morning, they decide to plant flowers in the garden. They ensure that each flower they plant is unique.
Creating a Set:
# Planting the first unique flowers
botanical_garden = {"Rose", "Lily", "Sunflower"}
Noon: Adding More Flowers
At noon, they find some more flowers and add them to the garden, making sure they only add flowers that arenβt already there.
Adding Elements to a Set:
# Adding more unique flowers to the botanical garden
botanical_garden.add("Jasmine")
botanical_garden.add("Hibiscus")
print(botanical_garden)
# output: {'Hibiscus', 'Rose', 'Lily', 'Sunflower', 'Jasmine'}
Afternoon: Trying to Plant Duplicate Flowers
In the afternoon, they accidentally try to plant another Rose, but the garden's rule prevents any duplicates from being added.
Adding Duplicate Elements:
# Attempting to add a duplicate flower
botanical_garden.add("Rose")
print(botanical_garden)
# output: {'Hibiscus', 'Rose', 'Lily', 'Sunflower', 'Jasmine'} (unchanged, duplicates are ignored)
Evening: Removing Unwanted Plants
As evening approaches, they decide to remove some flowers they no longer want in their garden.
Removing Elements from a Set:
# Removing a flower from the botanical garden
botanical_garden.remove("Lily")
print(botanical_garden)
# output: {'Hibiscus', 'Rose', 'Sunflower', 'Jasmine'}
Night: Checking Flower Types
Before going to bed, they check if certain flowers are present in their botanical garden.
Checking Membership:
# Checking if certain flowers are in the garden
is_rose_in_garden = "Rose" in botanical_garden
is_tulip_in_garden = "Tulip" in botanical_garden
print(f"Is Rose in the garden? {is_rose_in_garden}")
print(f"Is Tulip in the garden? {is_tulip_in_garden}")
# Output
# Is Rose in the garden? True
# Is Tulip in the garden? False
Midnight: Comparing with Rose Garden
Late at night, they compare their botanical garden with their rose garden to see which flowers they have in common and which are unique to each garden.
Set Operations:
Intersections:
# The rose garden
rose_garden = {"Rose", "Lavender"}
# Flowers in both gardens (Intersection)
common_flowers = botanical_garden.intersection(rose_garden)
print(f"Common flowers: {common_flowers}")
# Output
# Common flowers: {'Rose'}
Difference:
# Flowers unique to their garden (Difference)
unique_flowers = botanical_garden.difference(rose_garden)
print(f"Unique flowers: {unique_flowers}")
# Output
# Unique flowers: {'Sunflower', 'Jasmine', 'Hibiscus'}
Union:
# All unique flowers from both gardens (Union)
all_unique_flowers = botanical_garden.union(rose_garden)
print(f"All unique flowers: {all_unique_flowers}")
# Output: All unique flowers: {'Sunflower', 'Jasmine', 'Hibiscus', 'Rose', 'Lavender'}
In a vibrant town in Tamil Nadu, there is a popular grocery store called Annachi Kadai. This store is always bustling with fresh deliveries of items.
The store owner, Pandian, uses a special inventory system to track the products. This system functions like a dictionary in Python, where each item is labeled with its name, and the quantity available is recorded.
Morning: Delivering Items to the Store
One bright morning, a new delivery truck arrives at the grocery store, packed with fresh items. Pandian records these new items in his inventory list.
Creating and Updating the Inventory:
# Initial delivery of items to the store
inventory = {
"apples": 20,
"bananas": 30,
"carrots": 15,
"milk": 10
}
print("Initial Inventory:", inventory)
# Output: Initial Inventory: {'apples': 20, 'bananas': 30, 'carrots': 15, 'milk': 10}
Noon: Additional Deliveries
As the day progresses, more deliveries arrive with additional items that need to be added to the inventory. Pandian updates the system with these new arrivals.
Adding New Items:
# Adding newly delivered items
inventory["bread"] = 25
inventory["eggs"] = 50
print("Inventory after Deliveries:", inventory)
# Output: Inventory after Deliveries: {'apples': 20, 'bananas': 30, 'carrots': 15, 'milk': 10, 'bread': 25, 'eggs': 50}
Afternoon: Restocking the Shelves
In the afternoon, Pandian notices that some items are running low and restocks them by updating the quantities in the inventory system.
Updating Quantities:
# Updating item quantities after restocking shelves
inventory["apples"] += 10 # 10 more apples added
inventory["milk"] += 5 # 5 more bottles of milk added
print("Inventory after Restocking:", inventory)
# Output: Inventory after Restocking: {'apples': 30, 'bananas': 30, 'carrots': 15, 'milk': 15, 'bread': 25, 'eggs': 50}
Evening: Removing Sold-Out Items
As evening falls, some items are sold out, and Pandian needs to remove them from the inventory to reflect their unavailability.
Removing Items from the Inventory:
# Removing sold-out items
del inventory["carrots"]
print("Inventory after Removal:", inventory)
# Output: Inventory after Removal: {'apples': 30, 'bananas': 30, 'milk': 15, 'bread': 25, 'eggs': 50}
Night: Checking Inventory
Before closing the store, Pandian checks the inventory to ensure that all items are accurately recorded and none are missing.
Checking for Items:
# Checking if specific items are in the inventory
is_bananas_in_stock = "bananas" in inventory
is_oranges_in_stock = "oranges" in inventory
print(f"Are bananas in stock? {is_bananas_in_stock}")
print(f"Are oranges in stock? {is_oranges_in_stock}")
# Output: Are bananas in stock? True
# Output: Are oranges in stock? False
Midnight: Reviewing Inventory
After a busy day, Pandian reviews the entire inventory to ensure all deliveries and sales are accurately recorded.
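The closing review can be sketched as a simple loop with items() (the quantities below match the inventory state after the day's removals):

```python
# The inventory as it stands at closing time
inventory = {"apples": 30, "bananas": 30, "milk": 15, "bread": 25, "eggs": 50}

# Reviewing every item and its recorded quantity
for item, quantity in inventory.items():
    print(f"{item}: {quantity}")

# Total units in stock
total_units = sum(inventory.values())
print(f"Total units in stock: {total_units}")  # → Total units in stock: 150
```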
After some time, they decide to change one of the values in the Ooty adventure:
ooty_adventure[2] = "Visited Doddabeta Peak"
# TypeError: 'tuple' object does not support item assignment
Tuples in Python are immutable, meaning that once a tuple is created, its elements cannot be changed.
However, there are some workarounds to βchangeβ values within a tuple by converting it to a mutable data type, such as a list, making the change, and then converting it back to a tuple.
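A sketch of that workaround (the contents of ooty_adventure here are illustrative, since the tuple is defined earlier in the original post):

```python
# Tuples are immutable, so we go through a temporary list
ooty_adventure = ("Reached Ooty", "Toured the Botanical Garden", "Rode the toy train")

as_list = list(ooty_adventure)          # tuple → list
as_list[2] = "Visited Doddabeta Peak"   # make the change
ooty_adventure = tuple(as_list)         # list → tuple

print(ooty_adventure)
# → ('Reached Ooty', 'Toured the Botanical Garden', 'Visited Doddabeta Peak')
```

Note that this creates a new tuple object; the original tuple itself is never modified.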
Evening: Combining Adventures
As evening approaches, the family feels nostalgic and decides to combine their adventures into one big album.
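The combined album can be sketched as a tuple of tuples (the individual adventure tuples here are illustrative stand-ins for the ones defined earlier in the post):

```python
ooty_adventure = ("Reached Ooty", "Toured the Botanical Garden", "Visited Doddabeta Peak")
munnar_adventure = ("Reached Munnar", "Walked the tea gardens", "Watched the sunrise")
kodaikanal_adventure = ("Reached Kodaikanal", "Strolled Coaker's Walk", "Boated on the lake")

# Combining the adventures into one photo album (a tuple of tuples)
photo_album = (ooty_adventure, munnar_adventure, kodaikanal_adventure)
print(photo_album)
```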
Late at night, they decide to check if their favorite adventures are still part of their magical photo album.
Checking Membership:
# Checking membership
is_ooty_in_album = ooty_adventure in photo_album
is_munnar_in_album = munnar_adventure in photo_album
print(f"Is the Ooty adventure in the album? {is_ooty_in_album}")
print(f"Is the Munnar adventure in the album? {is_munnar_in_album}")
Counting Their Adventures
They also count the number of adventures they have documented in the album so far.
# Finding length of the tuple
number_of_adventures = len(photo_album)
print(f"Number of adventures in the album: {number_of_adventures}")
# Output: Number of adventures in the album: 3
Meet Alex, a dedicated delivery man who starts his day bright and early. He works for a company that handles all kinds of packages, and his truck is his mobile office where he keeps everything organized in a series of bins.
In Python terms, these bins are like lists.
Letβs follow Alex through his day to understand how list operations work in Python.
Morning Load
Alexβs day begins by loading his truck with packages. His truck is like an empty list in Python.
Today, his truck starts with three packages: "Letter", "Box", and "Parcel". He creates his initial list like this:
packages = ["Letter", "Box", "Parcel"]
Accessing Packages
As he drives around, he gets a call from a customer who needs the "Box" delivered. Alex quickly looks at his list to find the "Box", which is in the second spot.
He accesses it like this:
item = packages[1] # "Box"
New Delivery Request
On his next stop, Alex receives a new package, a "Special Delivery". He needs to add this to the end of his truck's list. He uses the append() method to add this new package:
packages.append("Special Delivery")
Along the way, the "Box" also gets relabeled as a "Fragile Box", so Alex updates it in place:
packages[1] = "Fragile Box"
Later in the day, Alex realizes that he needs to take the last package off the truck and needs to know which one it was.
He uses pop() to remove and return the last package:
last_package = packages.pop()
The last package removed is "Special Delivery", and his list is now:
["Letter", "Fragile Box", "Parcel"]
Finding the Position of a Package
One customer calls inquiring about the location of their "Parcel". Alex finds its position in the truck using the index() method:
position = packages.index("Parcel") # Returns 2
Inspecting a Range of Packages
Alex wants to check all the packages from "Letter" to "Parcel", so he looks at a slice of his truckβs list:
subset = packages[0:3]
This slice includes:
["Letter", "Fragile Box", "Parcel"]
Organizing Packages
Before heading to the depot, Alex decides to sort his remaining packages in alphabetical order. He uses the sort() method:
packages.sort()
The list of packages is now:
["Fragile Box", "Letter", "Parcel"]
End of the Day
Finally, Alex wants to see the packages in reverse order to double-check everything. He uses the reverse() method:
packages.reverse()
His truckβs list now looks like this:
["Parcel", "Letter", "Fragile Box"]
As Alex finishes his day, his truck is empty, and heβs ready to start fresh tomorrow. His day was full of managing, accessing, and organizing packages, just like managing a list in Python!
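Alex's whole day can be replayed as one short script mirroring the steps above (the "Fragile Box" relabeling is implied by the later lists in the story):

```python
packages = ["Letter", "Box", "Parcel"]   # morning load
item = packages[1]                       # access the "Box"
packages.append("Special Delivery")      # new delivery request
packages[1] = "Fragile Box"              # relabel the box as fragile
last_package = packages.pop()            # remove and return the last package
position = packages.index("Parcel")      # find the parcel's position
subset = packages[0:3]                   # inspect a range of packages
packages.sort()                          # alphabetical order
packages.reverse()                       # reverse for the final check
print(packages)  # → ['Parcel', 'Letter', 'Fragile Box']
```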
The ENV directive in a Dockerfile can be used to set environment variables.
Environment variables are key-value pairs that provide information to applications and processes running inside the container.
They can influence the behavior of programs and scripts by making dynamic values available during runtime.
Environment variables are defined as key-value pairs as per the following format:
ENV <key> <value>
For example, we can set a path using the ENV directive as below,
ENV PATH $PATH:/usr/local/app/bin/
We can set multiple environment variables on the same line, separated by spaces. However, in this form, each key and value must be separated by an equals sign (=):
ENV <key>=<value> <key=value> ...
Below, we set two environment variables:
The PATH environment variable is configured with the value of $PATH:/usr/local/app/bin, and
the VERSION environment variable is configured with the value of 1.0.0.
ENV PATH=$PATH:/usr/local/app/bin/ VERSION=1.0.0
Once an environment variable is set with the ENV directive in the Dockerfile, this variable is available in all subsequent Docker image layers.
This variable is even available in the Docker containers launched from this Docker image.
Below are some examples of using the ENV directive.
Example 1: Setting a single environment variable
# Use an official Node.js runtime as a parent image
FROM node:14
# Set the environment variable NODE_ENV to "production"
ENV NODE_ENV=production
# Copy package.json and package-lock.json files to the working directory
COPY package*.json ./
# Install app dependencies using the NODE_ENV variable
RUN if [ "$NODE_ENV" = "production" ]; then npm install --only=production; else npm install; fi
# Copy app source code to the container
COPY . .
# Expose the port the app runs on
EXPOSE 8080
# Define the command to run the app
CMD ["node", "app.js"]
Example 2: Using Environment Variables in Application Configuration
# Use an official Python runtime as a parent image
FROM python:3.8-slim
# Set environment variables
ENV APP_HOME=/usr/src/app
ENV APP_CONFIG=config.ProductionConfig
# Create application directory and set it as the working directory
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
# Copy the current directory contents into the container at /usr/src/app
COPY . .
# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Use the environment variable in the command to run the application.
# Note: the shell form is used here so that $APP_CONFIG is expanded at runtime;
# the exec form (JSON array) would pass the literal string "$APP_CONFIG".
CMD python app.py --config $APP_CONFIG
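Inside the container, the application itself can also read these variables directly. A minimal sketch of how a hypothetical app.py might pick up APP_CONFIG (the variable name comes from the Dockerfile above; the fallback value is an assumption):

```python
import os

# Read the config name set via ENV, with a fallback for local development
config = os.environ.get("APP_CONFIG", "config.DevelopmentConfig")
print(f"Starting app with config: {config}")
```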
Example 3: Passing Environment Variables to the Application
# Use an official nginx image as a parent image
FROM nginx:alpine
# Set environment variables
ENV NGINX_HOST=localhost
ENV NGINX_PORT=8080
# Copy custom configuration file from the current directory
COPY nginx.conf /etc/nginx/nginx.conf
# Replace placeholders in the nginx.conf file with actual environment variable values
RUN sed -i "s/NGINX_HOST/$NGINX_HOST/g" /etc/nginx/nginx.conf && \
sed -i "s/NGINX_PORT/$NGINX_PORT/g" /etc/nginx/nginx.conf
# Expose ports
EXPOSE 8080
# Start nginx
CMD ["nginx", "-g", "daemon off;"]
If there are any rows/lines that do not match, they will be ignored. Here, a row about "delhi" is available in sample1.txt but not in sample2.txt. Hence, see the result below.
file command
gives file type
Here the -b argument denotes brief information. In the last command, you can see that it returns only the file type without the file name.
If you check an empty file, it returns empty.
file * gives file type of all files in the present directory
touch command
creates an empty file. It can also be used to create files in bulk: touch file{1..10}.txt creates 10 empty files.
touch -r "fromFilename" "toFileName"
This command sets the timestamp of one file to match another (reference) file. See the result below.
Cal command
gives calendar information. cal -y gives the complete year calendar
rev and tac command
rev reverses the file content character by character.
tac reverses line by line
redirection command
'>' denotes output redirection (write to a file)
'<' denotes input redirection (read from a file)
echo "welcome to linux" > file1.txt
The output redirection operator replaces the file's content.
In this example, word count is read from file1.txt and the output is written to file2.txt
wc < file1.txt > file2.txt
>> is the append operator; it adds to the content that is already present in the file
tee command
writes standard output to one or more files as well. One can give any number of files to which the output should be copied.
grep command
helps to find a string in a file. In real-time scenarios, it is mainly used to find keywords in error logs. Note: the search is case-sensitive by default.
One can also instruct it to search across multiple files.
uname # gives system information
uname -a # all the information (a: all)
uname -s # only kernel information
uname -r # kernel release type
uptime
uptime # displays time up, number of users, load
uptime -p # prettify
Directory commands
pwd # present working directory
mkdir dir1 # make a directory with the given name
mkdir testing2 testing3 # make multiple directories
mkdir -v folder1 folder3 # -v indicates verbose
mkdir {foldr1,foldr2,foldr3} # creates 3 directories; if a folder already exists, returns a message
mkdir -p -v parent/dad/mom # -p creates parent/child/child2 directories
rmdir # removes only empty directories
cd # change directory
cd .. # go to parent directory
cd # go to home directory
locate filename.txt # returns the file location if it exists
updatedb # updates the locate database
who
who # username, terminal, login time, IP address; tty7 direct, tty2 or tty3 virtual terminal
who -r # run level 0-6 (how logged in)
Word count (wc)
wc # number of lines, words, and bytes
wc -l # number of lines
wc -m # number of characters
copy command cp
cp filename1 newfilename # copies a file (creates a backup)
cp filename location # copies the file to a location
cp filename1 filename2 location # copies multiple files to a location
cp -r foldername destination # perform the copy recursively
Move command mv
mv oldname newname # rename a file
mv filename destination # cut and paste (move)
Piping
cat /filename | wc # pipes the output of the first command as input to the second command
cat /filename | grep word_to_search # grep does pattern matching
vim text editor
vim filename.txt # opens a new file in the text editor
# i: insert text into the file; Escape, :wq! to save and exit
# Escape, :q! to quit without writing
find command
find . -name secret.txt # find the file by name; '.' indicates the current location
find . -type d -name foldername # -type d restricts the search to directories
Environment (env)
env # list environment variables
export name=kaniyam # assign a value to the name variable
printenv name # prints the value of the env variable
less command
less filename.txt # reads the file one screen at a time
Sort
sort filename # sort the content in alphabetical order
sort file1.txt > file2.txt # store sorted output in a new file
sort file1.txt file2.txt # sort multiple files
sort -u file.txt # remove duplicates while sorting
uniq command
uniq file.txt # removes adjacent duplicate lines (careful about newline characters or whitespace)
uniq -c file.txt # prints lines and their counts
Cut command
cut -c1 file.txt # cut the first character from every line
cut -c1-3 file.txt # first 3 characters from the beginning of each line
Format command
fmt file.txt # collects words and fills them as a paragraph
fmt -u file.txt # removes additional whitespace
Head and Tail commands
head file.txt # first 10 lines of a file
head -n 11 state.txt # -n specifies the number of lines
tail file.txt # last 10 lines of a file
tail file.txt | sort
tail -f file.txt # real-time view of lines appended to the file (follow)
Numbering
nl sample.txt # numbers the lines and displays them
nl -s ".." file.txt # adds a string after the numbering
Split
split file.txt # split a larger file into smaller files (by default, at 1000 lines)
split -l2 file.txt split2op # splits every two lines; 'split2op' is the output file prefix
last (list of users who logged in)
last # information about who logged in to the machine
last -5 # the last five login entries
last -f filename # read login records from the given file
tac command (opposite of cat)
tac file.txt # concatenate and print lines in reverse order
tac file.txt > file2.txt # reversed order stored in a different file
Translate command (tr)
tr [a-z] [A-Z] # translates standard input and writes to standard output
tr [:lower:] [:upper:] < sample.txt > trans.txt # translates the contents of sample.txt from lower case to upper case and stores the output in trans.txt
sed command (some simple use cases; there are many others)
sed 's/unix/linux/' sample # search for 'unix' and replace it with 'linux' (only the first instance on each line)
sed 's/unix/linux/g' sample # 'g' indicates global (every instance)
sed 's/unix/linux/gi' sample # 'i' ignores case
Paste command
paste file.txt file.txt # joins the files horizontally; the default delimiter is the tab character
paste -d '|' file.txt file.txt # joins with the given delimiter