Write-Ahead Logging (WAL) is a fundamental feature of PostgreSQL, ensuring data integrity and facilitating critical functionalities like crash recovery, replication, and backup.
This series of experiments explores WAL in detail: its importance, how it works, and examples that demonstrate its usage.
What is Write-Ahead Logging (WAL)?
WAL is a logging mechanism where changes to the database are first written to a log file before being applied to the actual data files. This ensures that in case of a crash or unexpected failure, the database can recover and replay these logs to restore its state.
A natural question arises here:
Why do we need WAL when we already take periodic backups?
Write-Ahead Logging (WAL) is critical even when periodic backups are in place because it complements backups to provide data consistency, durability, and flexibility in the following scenarios.
1. Crash Recovery
Why It’s Important: Periodic backups only capture the database state at specific intervals. If a crash occurs after the latest backup, all changes made since that backup would be lost.
Role of WAL: WAL ensures that any committed transactions not yet written to data files (due to PostgreSQL’s lazy-writing behavior) are recoverable. During recovery, PostgreSQL replays the WAL logs to restore the database to its last consistent state, bridging the gap between the last checkpoint and the crash.
Example:
Backup Taken: At 12:00 PM.
Crash Occurs: At 1:30 PM.
Without WAL: All changes after 12:00 PM are lost.
With WAL: All changes up to 1:30 PM are recovered.
2. Point-in-Time Recovery (PITR)
Why It’s Important: Periodic backups restore the database to the exact time of the backup. However, this may not be sufficient if you need to recover to a specific point, such as just before a mistake (e.g., accidental data deletion).
Role of WAL: WAL records every change, enabling you to replay transactions up to a specific time. This allows fine-grained recovery beyond what periodic backups can provide.
Example:
Backup Taken: At 12:00 AM.
Mistake Made: At 9:45 AM, an important table is accidentally dropped.
Without WAL: Restore only to 12:00 AM, losing 9 hours and 45 minutes of data.
With WAL: Restore to 9:44 AM, recovering all valid changes except the accidental drop.
3. Replication and High Availability
Why It’s Important: In a high-availability setup, replicas must stay synchronized with the primary database to handle failovers. Periodic backups cannot provide real-time synchronization.
Role of WAL: WAL enables streaming replication by transmitting logs to replicas, ensuring near real-time synchronization.
Example:
A primary database sends WAL logs to replicas as changes occur. If the primary fails, a replica can quickly take over without data loss.
4. Handling Incremental Changes
Why It’s Important: Periodic backups store complete snapshots of the database, which can be time-consuming and resource-intensive. They also do not capture intermediate changes.
Role of WAL: WAL allows incremental updates by recording only the changes made since the last backup or checkpoint. This is crucial for efficient data recovery and backup optimization.
5. Ensuring Data Durability
Why It’s Important: Even during normal operations, a database crash (e.g., power failure) can occur. Without WAL, transactions committed by users but not yet flushed to disk are lost.
Role of WAL: WAL ensures durability by logging all changes before acknowledging transaction commits. This guarantees that committed transactions are recoverable even if the system crashes before flushing the changes to data files.
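As a quick illustration (a minimal sketch assuming a local PostgreSQL 10+ instance and the psycopg2 driver; the connection string and table name are placeholders), you can watch the WAL position advance as a change is committed:

import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect("dbname=postgres user=postgres host=localhost")
conn.autocommit = True
cur = conn.cursor()

cur.execute("SELECT pg_current_wal_lsn();")   # current write-ahead log position
before = cur.fetchone()[0]

cur.execute("CREATE TABLE IF NOT EXISTS wal_demo (id serial PRIMARY KEY, note text);")
cur.execute("INSERT INTO wal_demo (note) VALUES ('hello wal');")

cur.execute("SELECT pg_current_wal_lsn();")
after = cur.fetchone()[0]

print("WAL position before:", before, "after:", after)   # the LSN has moved forward
cur.close()
conn.close()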
6. Supporting Hot Backups
Why It’s Important: For large, active databases, taking a backup while the database is running can result in inconsistent snapshots.
Role of WAL: WAL ensures consistency by recording changes that occur during the backup process. When replayed, these logs synchronize the backup, ensuring it is valid and consistent.
7. Debugging and Auditing
Why It’s Important: Periodic backups are static snapshots and don’t provide a record of what happened in the database between backups.
Role of WAL: WAL contains a sequential record of all database modifications, which can help in debugging issues or auditing transactions.
| Feature | Periodic Backups | Write-Ahead Logging |
| --- | --- | --- |
| Crash Recovery | Limited to the last backup | Ensures full recovery to the crash point |
| Point-in-Time Recovery | Restores only to the backup time | Allows recovery to any specific point |
| Replication | Not supported | Enables real-time replication |
| Efficiency | Full snapshot | Incremental changes |
| Durability | Relies on backup frequency | Guarantees transaction durability |
In upcoming sessions, we will experiment with each of these failure scenarios to understand them in practice.
This proof of concept requires the external dependency parse (pip install parse) for parsing Python format strings with placeholders.
# POC of a Tamil datetime parser
import parse                      # third-party: pip install parse
from date import TA_MONTHS        # Tamil month names from the author's date module
from date import datetime

def strptime(format='{month}, {date} {year}', date_string="நவம்பர், 16 2024"):
    parsed = parse.parse(format, date_string)
    month = TA_MONTHS.index(parsed['month']) + 1   # Tamil month name -> month number
    date = int(parsed['date'])
    year = int(parsed['year'])
    return datetime(year, month, date)

print(strptime("{date}-{month}-{year}", "16-நவம்பர்-2024"))
# dt = datetime(2024, 11, 16)
# print(dt.strptime_ta("நவம்பர் , 16 2024", "%m %d %Y"))
Creating and publishing PHP packages with PHP Composer is a straightforward process. If we follow these steps, we can easily share our programs as packages with the PHP community.
Then upload your code to GitHub using Git.
Step 5
To publish the code on Composer, log in to Packagist and then click the submit button.
Once you click the submit button, the package submission page opens; enter the URL of the publicly accessible repository in your GitHub account and click the check button to validate it.
Note: In Composer terms, a publisher is referred to as a vendor. I have published two packages using the vendor name hariharan.
After the new package has been validated, it is ready to be published.
Load balancing helps distribute client requests across multiple servers to ensure high availability, performance, and reliability. Weighted Round Robin Load Balancing is an extension of the round-robin algorithm, where each server is assigned a weight based on its capacity or performance capabilities. This approach ensures that more powerful servers handle more traffic, resulting in a more efficient distribution of the load.
What is Weighted Round Robin Load Balancing?
Weighted Round Robin Load Balancing assigns a weight to each server. The weight determines how many requests each server should handle relative to the others. Servers with higher weights receive more requests compared to those with lower weights. This method is useful when backend servers have different processing capabilities or resources.
Step-by-Step Implementation with Docker
Step 1: Create the Flask Applications
We’ll use the same three Flask applications (app1.py, app2.py, and app3.py) as in previous examples.
Flask App 1 (app1.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 1!"

@app.route("/data")
def data():
    return "Data from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
Flask App 2 (app2.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 2!"

@app.route("/data")
def data():
    return "Data from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)
Flask App 3 (app3.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 3!"

@app.route("/data")
def data():
    return "Data from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)
Step 2: Create Dockerfiles for Each Flask Application
Create Dockerfiles for each of the Flask applications:
Dockerfile for Flask App 1 (Dockerfile.app1):
# Use the official Python image from Docker Hub
FROM python:3.9-slim
# Set the working directory inside the container
WORKDIR /app
# Copy the application file into the container
COPY app1.py .
# Install Flask inside the container
RUN pip install Flask
# Expose the port the app runs on
EXPOSE 5001
# Run the application
CMD ["python", "app1.py"]
Dockerfile for Flask App 2 (Dockerfile.app2):
FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]
Dockerfile for Flask App 3 (Dockerfile.app3):
FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]
Step 3: Create the HAProxy Configuration File
Create an HAProxy configuration file (haproxy.cfg) to implement Weighted Round Robin Load Balancing
global
log stdout format raw local0
daemon
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http_front
bind *:80
default_backend servers
backend servers
balance roundrobin
server server1 app1:5001 weight 2 check
server server2 app2:5002 weight 1 check
server server3 app3:5003 weight 3 check
Explanation:
The balance roundrobin directive tells HAProxy to use the Round Robin load balancing algorithm.
The weight option for each server specifies the weight associated with each server:
server1 (App 1) has a weight of 2.
server2 (App 2) has a weight of 1.
server3 (App 3) has a weight of 3.
Requests will be distributed based on these weights: App 3 will receive the most requests, App 2 the least, and App 1 will be in between.
Step 4: Create a Dockerfile for HAProxy
Create a Dockerfile for HAProxy (Dockerfile.haproxy):
# Use the official HAProxy image from Docker Hub
FROM haproxy:latest
# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
# Expose the port for HAProxy
EXPOSE 80
Step 5: Create a docker-compose.yml File
To manage all the containers together, create a docker-compose.yml file (a minimal sketch is shown below).
The docker-compose.yml file defines the services (app1, app2, app3, and haproxy) and their respective configurations.
HAProxy depends on the three Flask applications to be up and running before it starts.
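The compose file itself is not reproduced here, so the following is a hedged sketch; the service names (app1, app2, app3, haproxy) are assumed to match the hostnames used in haproxy.cfg and the Dockerfile names from the earlier steps:

version: "3.8"
services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    ports:
      - "80:80"        # expose HAProxy on the host
    depends_on:
      - app1
      - app2
      - app3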
Step 6: Build and Run the Docker Containers
Run the following command to build and start all the containers
docker-compose up --build
This command builds Docker images for all three Flask apps and HAProxy, then starts them.
Step 7: Test the Load Balancer
Open your browser or use curl to make requests to the HAProxy server
curl http://localhost/
curl http://localhost/data
Observation:
With Weighted Round Robin Load Balancing, you should see that requests are distributed according to the weights specified in the HAProxy configuration.
For example, App 3 should receive about three times as many requests as App 2, and App 1 about twice as many as App 2.
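To check the distribution in practice, a small hypothetical test script (assuming the stack is running locally on port 80 and the requests library is installed) can tally which app answers each request:

from collections import Counter
import requests

counts = Counter()
for _ in range(60):
    body = requests.get("http://localhost/").text.strip()
    counts[body] += 1

for response, count in counts.most_common():
    print(f"{count:3d}  {response}")
# With weights 2/1/3, roughly 20, 10 and 30 of the 60 requests should land on
# App 1, App 2 and App 3 respectively.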
Conclusion
By implementing Weighted Round Robin Load Balancing with HAProxy, you can distribute traffic more effectively according to the capacity or performance of each backend server. This approach helps optimize resource utilization and ensures a balanced load across servers.
Load balancing distributes client requests across multiple servers to ensure high availability and reliability. One of the simplest load balancing algorithms is Random Load Balancing, which selects a backend server randomly for each client request.
Although this approach does not consider server load or other metrics, it can be effective for less critical applications or when the goal is to achieve simplicity.
What is Random Load Balancing?
Random Load Balancing assigns incoming requests to a randomly chosen server from the available pool of servers. This method is straightforward and ensures that requests are distributed in a non-deterministic manner, which may work well for environments with equally capable servers and minimal concerns about server load or state.
Step-by-Step Implementation with Docker
Step 1: Create the Flask Applications
We’ll use the same three Flask applications (app1.py, app2.py, and app3.py) as in previous examples.
Flask App 1 (app1.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 1!"

@app.route("/data")
def data():
    return "Data from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
Flask App 2 (app2.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 2!"

@app.route("/data")
def data():
    return "Data from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)
Flask App 3 (app3.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 3!"

@app.route("/data")
def data():
    return "Data from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)
Step 2: Create Dockerfiles for Each Flask Application
Create Dockerfiles for each of the Flask applications:
Dockerfile for Flask App 1 (Dockerfile.app1):
# Use the official Python image from Docker Hub
FROM python:3.9-slim
# Set the working directory inside the container
WORKDIR /app
# Copy the application file into the container
COPY app1.py .
# Install Flask inside the container
RUN pip install Flask
# Expose the port the app runs on
EXPOSE 5001
# Run the application
CMD ["python", "app1.py"]
Dockerfile for Flask App 2 (Dockerfile.app2):
FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]
Dockerfile for Flask App 3 (Dockerfile.app3):
FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]
Step 3: Create the HAProxy Configuration File
Create an HAProxy configuration file (haproxy.cfg) to implement Random Load Balancing:
global
log stdout format raw local0
daemon
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http_front
bind *:80
default_backend servers
backend servers
balance random(2)
server server1 app1:5001 check
server server2 app2:5002 check
server server3 app3:5003 check
Explanation:
The balance random(2) directive tells HAProxy to use the Random load balancing algorithm with two draws per request.
With two draws, HAProxy picks two servers at random and forwards the request to the one with fewer active connections. This adds a bit of load awareness to the random choice.
The server directives define the backend servers and their ports.
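As a conceptual illustration only (plain Python, not HAProxy's implementation), the "two random draws, keep the less loaded server" idea looks roughly like this:

import random

active_connections = {"app1": 0, "app2": 0, "app3": 0}

def pick_server(draws: int = 2) -> str:
    # Draw two candidate servers at random, then keep the one with fewer active connections.
    candidates = random.sample(list(active_connections), draws)
    return min(candidates, key=active_connections.get)

for _ in range(10):
    chosen = pick_server()
    active_connections[chosen] += 1   # pretend the request is now in flight
    print(chosen, dict(active_connections))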
Step 4: Create a Dockerfile for HAProxy
Create a Dockerfile for HAProxy (Dockerfile.haproxy):
# Use the official HAProxy image from Docker Hub
FROM haproxy:latest
# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
# Expose the port for HAProxy
EXPOSE 80
Step 5: Create a docker-compose.yml File
To manage all the containers together, create a docker-compose.yml file:
The docker-compose.yml file defines the services (app1, app2, app3, and haproxy) and their respective configurations.
HAProxy depends on the three Flask applications to be up and running before it starts.
Step 6: Build and Run the Docker Containers
Run the following command to build and start all the containers:
docker-compose up --build
This command builds Docker images for all three Flask apps and HAProxy, then starts them.
Step 7: Test the Load Balancer
Open your browser or use curl to make requests to the HAProxy server:
curl http://localhost/
curl http://localhost/data
Observation:
With Random Load Balancing, each request should randomly hit one of the three backend servers.
Since the selection is random, you may not see a predictable pattern; however, the requests should be evenly distributed across the servers over a large number of requests.
Conclusion
By implementing Random Load Balancing with HAProxy, we’ve demonstrated a simple way to distribute traffic across multiple servers without relying on complex metrics or state information. While this approach may not be ideal for all use cases, it can be useful in scenarios where simplicity is more valuable than fine-tuned load distribution.
Load balancing helps distribute traffic across multiple servers, enhancing performance and reliability. One common strategy is Source IP Hash load balancing, which ensures that requests from the same client IP are consistently directed to the same server.
This method is particularly useful for applications requiring session persistence, such as shopping carts or user sessions. In this blog, we’ll implement Source IP Hash load balancing using Flask and HAProxy, all within Docker containers.
What is Source IP Hash Load Balancing?
Source IP Hash Load Balancing is a technique that uses a hash function on the client’s IP address to determine which server should handle the request. This guarantees that a particular client will always be directed to the same backend server, ensuring session persistence and stateful behavior.
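As a conceptual sketch (plain Python using a simple CRC32-modulo mapping, not HAProxy's actual hash or the consistent hashing configured later), the idea is that hashing the client address always yields the same backend for the same client:

import zlib

servers = ["app1:5001", "app2:5002", "app3:5003"]

def pick_server(client_ip: str) -> str:
    # Hash the client address and map it onto the server list.
    return servers[zlib.crc32(client_ip.encode()) % len(servers)]

for ip in ["203.0.113.10", "203.0.113.10", "198.51.100.7"]:
    print(ip, "->", pick_server(ip))
# The repeated IP maps to the same backend every time.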
Step 1: Create the Flask Applications
We’ll create three separate Flask applications, one for each backend server.
Flask App 1 (app1.py)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
Flask App 2 (app2.py)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)
Flask App 3 (app3.py)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)
Each Flask app listens on a different port (5001, 5002, 5003).
Step 2: Create Dockerfiles for Each Flask Application
Dockerfile for Flask App 1 (Dockerfile.app1)
# Use the official Python image from the Docker Hub
FROM python:3.9-slim
# Set the working directory inside the container
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY app1.py .
# Install Flask inside the container
RUN pip install Flask
# Expose the port the app runs on
EXPOSE 5001
# Run the application
CMD ["python", "app1.py"]
Dockerfile for Flask App 2 (Dockerfile.app2)
FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]
Dockerfile for Flask App 3 (Dockerfile.app3)
FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]
Step 3: Create a configuration for HAProxy
global
log stdout format raw local0
daemon
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http_front
bind *:80
default_backend servers
backend servers
balance source
hash-type consistent
server server1 app1:5001 check
server server2 app2:5002 check
server server3 app3:5003 check
Explanation:
The balance source directive tells HAProxy to use Source IP Hashing as the load balancing algorithm.
The hash-type consistent directive ensures consistent hashing, which is essential for minimizing disruption when backend servers are added or removed.
The server directives define the backend servers and their ports.
Step 4: Create a Dockerfile for HAProxy
Create a Dockerfile for HAProxy (Dockerfile.haproxy)
# Use the official HAProxy image from Docker Hub
FROM haproxy:latest
# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
# Expose the port for HAProxy
EXPOSE 80
Step 5: Create a docker-compose.yml File
To manage all the containers together, create a docker-compose.yml file
The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
Each Flask app is built from its respective Dockerfile and runs on its port.
HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.
Step 6: Build and Run the Docker Containers
Run the following commands to build and start all the containers:
# Build and run the containers
docker-compose up --build
This command will build Docker images for all three Flask apps and HAProxy and start them up in the background.
Step 7: Test the Load Balancer
Open your browser or use a tool like curl to make requests to the HAProxy server:
curl http://localhost
Observation:
With Source IP Hash load balancing, each unique IP address (e.g., your local IP) should always be directed to the same backend server.
If you access the HAProxy from different IPs (e.g., using different devices or by simulating different client IPs), you will see that requests are consistently sent to the same server for each IP.
For URI-based hashing, we only need to change the balance directive in the backend:
global
log stdout format raw local0
daemon
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http_front
bind *:80
default_backend servers
backend servers
balance uri
hash-type consistent
server server1 app1:5001 check
server server2 app2:5002 check
server server3 app3:5003 check
Explanation:
The balance uri directive tells HAProxy to use URI Hashing as the load balancing algorithm.
The hash-type consistent directive ensures consistent hashing to minimize disruption when backend servers are added or removed.
The server directives define the backend servers and their ports.
Load balancing is crucial for distributing incoming network traffic across multiple servers, ensuring optimal resource utilization and improving application performance. In this blog, we’ll explore how to implement Least Connection load balancing using Flask as our backend application and HAProxy as our load balancer.
What is Least Connection Load Balancing?
Least Connection Load Balancing is a dynamic algorithm that distributes requests to the server with the fewest active connections at any given time. This method ensures that servers with lighter loads receive more requests, preventing any single server from becoming a bottleneck.
Step-by-Step Implementation with Docker
Step 1: Create the Flask Applications
We’ll create three separate Flask applications, one for each backend server; two of them are deliberately slowed down with a sleep call.
Flask App 1 (app1.py) – Introduced Slowness by adding sleep
from flask import Flask
import time

app = Flask(__name__)

@app.route("/")
def hello():
    time.sleep(5)
    return "Hello from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
Flask App 2 (app2.py)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)
Flask App 3 (app3.py) – Introduced Slowness by adding sleep.
from flask import Flask
import time

app = Flask(__name__)

@app.route("/")
def hello():
    time.sleep(5)
    return "Hello from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)
Each Flask app listens on a different port (5001, 5002, 5003).
Step 2: Create Dockerfiles for Each Flask Application
Dockerfile for Flask App 1 (Dockerfile.app1)
# Use the official Python image from the Docker Hub
FROM python:3.9-slim
# Set the working directory inside the container
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY app1.py .
# Install Flask inside the container
RUN pip install Flask
# Expose the port the app runs on
EXPOSE 5001
# Run the application
CMD ["python", "app1.py"]
Dockerfile for Flask App 2 (Dockerfile.app2)
FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]
Dockerfile for Flask App 3 (Dockerfile.app3)
FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]
Step 3: Create a configuration for HAProxy
global
log stdout format raw local0
daemon
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http_front
bind *:80
default_backend servers
backend servers
balance leastconn
server server1 app1:5001 check
server server2 app2:5002 check
server server3 app3:5003 check
Explanation:
frontend http_front: Defines the entry point for incoming traffic. It listens on port 80.
backend servers: Specifies the servers HAProxy will distribute traffic across: the three Flask apps (app1, app2, app3). The balance leastconn directive selects the Least Connection load balancing algorithm.
server directives: Lists the backend servers with their IP addresses and ports. The check option allows HAProxy to monitor the health of each server.
Step 4: Create a Dockerfile for HAProxy
Create a Dockerfile for HAProxy (Dockerfile.haproxy)
# Use the official HAProxy image from Docker Hub
FROM haproxy:latest
# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
# Expose the port for HAProxy
EXPOSE 80
Step 5: Create a docker-compose.yml File
To manage all the containers together, create a docker-compose.yml file
The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
Each Flask app is built from its respective Dockerfile and runs on its port.
HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.
Step 6: Build and Run the Docker Containers
Run the following commands to build and start all the containers:
# Build and run the containers
docker-compose up --build
This command will build Docker images for all three Flask apps and HAProxy and start them up in the background.
Step 7: Test the Load Balancer
Open your browser or use a tool like curl to make requests to the HAProxy server:
curl http://localhost
Because App 1 and App 3 each sleep for five seconds before responding, the Least Connection strategy sends most concurrent traffic to the faster App 2; the slower apps pick up new requests only when their active connection counts drop.
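To make the effect visible, a hypothetical concurrency test (assuming the stack is running locally and the requests library is installed) fires requests in parallel and counts which app answered:

from collections import Counter
from concurrent.futures import ThreadPoolExecutor
import requests

def fetch(_):
    return requests.get("http://localhost/", timeout=30).text.strip()

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch, range(30)))

for response, count in Counter(results).most_common():
    print(f"{count:3d}  {response}")
# App 2 should answer most requests, since the sleeping apps hold their
# connections open and leastconn avoids them.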
Meet Jafer, a talented developer (self boast) working at a fast-growing tech company. His team is building an innovative app that fetches data from multiple third-party APIs in real time to provide users with up-to-date information.
Everything is going smoothly until one day, a spike in traffic causes their app to face a wave of “HTTP 500” and “Timeout” errors. Requests start failing left and right, and users are left staring at the dreaded “Data Unavailable” message.
Jafer realizes that he needs a way to make their app more resilient against these unpredictable network hiccups. That’s when he discovers Tenacity, a powerful Python library designed to help developers handle retries gracefully.
Join Jafer as he dives into Tenacity and learns how to turn his app from fragile to robust with just a few lines of code!
Step 0: A Mock Flask API
from flask import Flask, jsonify, make_response
import random
import time

app = Flask(__name__)

# Scenario 1: Random server errors
@app.route('/random_error', methods=['GET'])
def random_error():
    if random.choice([True, False]):
        return make_response(jsonify({"error": "Server error"}), 500)  # Simulate a 500 error randomly
    return jsonify({"message": "Success"})

# Scenario 2: Timeouts
@app.route('/timeout', methods=['GET'])
def timeout():
    time.sleep(5)  # Simulate a long delay that can cause a timeout
    return jsonify({"message": "Delayed response"})

# Scenario 3: 404 Not Found error
@app.route('/not_found', methods=['GET'])
def not_found():
    return make_response(jsonify({"error": "Not found"}), 404)

# Scenario 4: Rate-limiting (simulated with a fixed chance)
@app.route('/rate_limit', methods=['GET'])
def rate_limit():
    if random.randint(1, 10) <= 3:  # 30% chance to simulate rate limiting
        return make_response(jsonify({"error": "Rate limit exceeded"}), 429)
    return jsonify({"message": "Success"})

# Scenario 5: Empty response
@app.route('/empty_response', methods=['GET'])
def empty_response():
    if random.choice([True, False]):
        return make_response("", 204)  # Simulate an empty response with 204 No Content
    return jsonify({"message": "Success"})

if __name__ == '__main__':
    app.run(host='localhost', port=5000, debug=True)
To run the Flask app, use the command,
python mock_server.py
Step 1: Introducing Tenacity
Jafer decides to start with the basics. He knows that Tenacity will allow him to retry failed requests without cluttering his codebase with complex loops and error handling. So, he installs the library,
pip install tenacity
With Tenacity ready, Jafer decides to tackle his first problem, retrying a request that fails due to server errors.
Step 2: Retrying on Exceptions
He writes a simple function that fetches data from an API and wraps it with Tenacity’s @retry decorator
import requests
import logging
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, wait_fixed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(3),
       wait=wait_fixed(2),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_random_error():
    response = requests.get('http://localhost:5000/random_error')
    response.raise_for_status()  # Raises an HTTPError for 4xx/5xx responses
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_random_error()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
This code will attempt the request up to 3 times, waiting 2 seconds between each try. Jafer feels confident that this will handle the occasional hiccup. However, he soon realizes that he needs more control over which exceptions trigger a retry.
Step 3: Handling Specific Exceptions
Jafer’s app sometimes receives a “404 Not Found” error, which should not be retried because the resource doesn’t exist. He modifies the retry logic to handle only certain exceptions,
import requests
import logging
from tenacity import before_log, after_log
from requests.exceptions import HTTPError, Timeout
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_fixed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(3),
       wait=wait_fixed(2),
       retry=retry_if_exception_type((HTTPError, Timeout)),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_data():
    response = requests.get('http://localhost:5000/timeout', timeout=2)  # Set a short timeout to simulate failure
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_data()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
Now, the function retries only on HTTPError or Timeout, avoiding unnecessary retries for a “404” error. Jafer’s app is starting to feel more resilient!
Step 4: Implementing Exponential Backoff
A few days later, the team notices that they’re still getting rate-limited by some APIs. Jafer recalls the concept of exponential backoff, a strategy where the wait time between retries increases exponentially, reducing the load on the server and preventing further rate limiting.
He decides to implement it,
import requests
import logging
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, wait_exponential

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(5),
       wait=wait_exponential(multiplier=1, min=2, max=10),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_rate_limit():
    response = requests.get('http://localhost:5000/rate_limit')
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_rate_limit()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
With this code, the wait time starts at 2 seconds and doubles with each retry, up to a maximum of 10 seconds. Jafer’s app is now much less likely to be rate-limited!
Step 5: Retrying Based on Return Values
Jafer encounters another issue: some APIs occasionally return an empty response (204 No Content). These cases should also trigger a retry. Tenacity makes this easy with the retry_if_result feature,
import requests
import logging
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, retry_if_result

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(retry=retry_if_result(lambda x: x is None),
       stop=stop_after_attempt(3),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_empty_response():
    response = requests.get('http://localhost:5000/empty_response')
    if response.status_code == 204:
        return None  # Simulate an empty response
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_empty_response()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
Now, the function retries when it receives an empty response, ensuring that users get the data they need.
Step 6: Combining Multiple Retry Conditions
But Jafer isn’t done yet. Some situations require combining multiple conditions. He wants to retry on HTTPError, Timeout, or a None return value. With Tenacity’s retry_any feature, he can do just that,
import requests
import logging
from tenacity import before_log, after_log
from requests.exceptions import HTTPError, Timeout
from tenacity import retry_any, retry, retry_if_exception_type, retry_if_result, stop_after_attempt

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(retry=retry_any(retry_if_exception_type((HTTPError, Timeout)),
                       retry_if_result(lambda x: x is None)),
       stop=stop_after_attempt(3),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_data():
    response = requests.get("http://localhost:5000/timeout")
    if response.status_code == 204:
        return None
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_data()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
This approach covers all his bases, making the app even more resilient!
Step 7: Logging and Tracking Retries
As the app scales, Jafer wants to keep an eye on how often retries happen and why. He decides to add logging,
import logging
import requests
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, wait_fixed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(2),
       wait=wait_fixed(2),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_data():
    response = requests.get("http://localhost:5000/timeout", timeout=2)
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_data()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))
This logs messages before and after each retry attempt, giving Jafer full visibility into the retry process. Now, he can monitor the app’s behavior in production and quickly spot any patterns or issues.
The Happy Ending
With Tenacity, Jafer has transformed his app into a resilient powerhouse that gracefully handles intermittent failures. Users are happy, the servers are humming along smoothly, and Jafer’s team has more time to work on new features rather than firefighting network errors.
By mastering Tenacity, Jafer has learned that handling network failures gracefully can turn a fragile app into a robust and reliable one. Whether it’s dealing with flaky APIs, network blips, or rate limits, Tenacity is his go-to tool for retrying operations in Python.
So, the next time your app faces unpredictable network challenges, remember Jafer’s story and give Tenacity a try; you might just save the day!
Attackers can break through the Secure Boot process on millions of computers using Intel and ARM processors due to a leaked cryptographic key that many manufacturers used during the startup process. This key, called the Platform Key (PK), is meant to verify the authenticity of a device’s firmware and boot software.
Unfortunately, this key was leaked back in 2018. It seems that some manufacturers used this key in their devices instead of replacing it with a secure one, as was intended. As a result, millions of devices from brands like Lenovo, HP, Asus, and SuperMicro are vulnerable to attacks.
If an attacker has access to this leaked key, they can easily bypass Secure Boot, allowing them to install malicious software that can take control of the device. To fix this problem, manufacturers need to replace the compromised key and update the firmware on affected devices. Some have already started doing this, but it might take time for all devices to be updated, especially those in critical systems.
The problem is serious because the leaked key is like a master key that can unlock many devices. This issue highlights poor cryptographic key management practices, which have been a problem for many years.
What Are “Not Replaced Constants”?
In software, constants are values that are not meant to change during the execution of a program. They are often used to define configuration settings, cryptographic keys, and other critical values.
When these constants are hard-coded into a system and not updated or replaced when necessary, they become a code smell known as “Not Replaced Constants.”
Why Are They a Problem?
When constants are not replaced or updated:
Security Risks: Outdated or exposed constants, such as cryptographic keys, can become security vulnerabilities. If these constants are publicly leaked or discovered by attackers, they can be exploited to gain unauthorized access or control over a system.
Maintainability Issues: Hard-coded constants can make a codebase less maintainable. Changes to these values require code modifications, which can be error-prone and time-consuming.
Flexibility Limitations: Systems with hard-coded constants lack flexibility, making it difficult to adapt to new requirements or configurations without altering the source code.
The Secure Boot Case Study
The recent Secure Boot vulnerability is a perfect example of the dangers posed by “Not Replaced Constants.” Here’s a breakdown of what happened:
The Vulnerability
Researchers discovered that a cryptographic key used in the Secure Boot process of millions of devices was leaked publicly. This key, known as the Platform Key (PK), serves as the root of trust during the Secure Boot process, verifying the authenticity of a device’s firmware and boot software.
What Went Wrong
The leaked PK was originally intended as a test key by American Megatrends International (AMI). However, it was not replaced by some manufacturers when producing devices for the market. As a result, the same compromised key was used across millions of devices, leaving them vulnerable to attacks.
The Consequences
Attackers with access to the leaked key can bypass Secure Boot protections, allowing them to install persistent malware and gain control over affected devices. This vulnerability highlights the critical importance of replacing test keys and securely managing cryptographic constants.
Sample Code:
Wrong
def generate_pk() -> str:
    return "DO NOT TRUST"

# Vendor forgets to replace PK
def use_default_pk() -> str:
    pk = generate_pk()
    return pk  # "DO NOT TRUST" PK used in production
Right
def generate_pk() -> str:
    # The documentation tells vendors to replace this value
    return "DO NOT TRUST"

def use_default_pk() -> str:
    pk = generate_pk()
    if pk == "DO NOT TRUST":
        raise ValueError("Error: PK must be replaced before use.")
    return pk  # Valid PK used in production
Ignoring important security steps, like changing default keys, can create big security holes. This ongoing problem shows how important it is to follow security procedures carefully. Instead of just relying on written instructions, make sure to test everything thoroughly to ensure it works as expected.
Creating a simple alarm clock application can be a fun project to develop programming skills. Here are the steps, input ideas, and additional features you might consider when building your alarm clock
Game Steps
Define the Requirements:
Determine the basic functionality your alarm clock should have (e.g., set alarm, snooze, dismiss).
Choose a Programming Language:
Select a language you are comfortable with, such as Python, JavaScript, or Java.
Design the User Interface:
Decide if you want a graphical user interface (GUI) or a command-line interface (CLI).
Implement Core Features:
Set Alarm: Allow users to set an alarm for a specific time.
Trigger Alarm: Play a sound or display a message when the alarm time is reached.
Snooze Functionality: Enable users to snooze the alarm for a set period.
Dismiss Alarm: Allow users to turn off the alarm once it’s triggered.
Test the Alarm Clock:
Ensure that all functions work as expected and fix any bugs.
Refine and Enhance:
Improve the interface and add additional features based on user feedback.
Input Ideas
Set Alarm Time:
Input format: “HH:MM AM/PM” or 24-hour format “HH:MM” (see the sketch after this list).
Snooze Duration:
Allow users to input a snooze time in minutes.
Alarm Sound:
Let users choose from a list of available alarm sounds.
Repeat Alarm:
Options for repeating alarms (e.g., daily, weekdays, weekends).
Custom Alarm Message:
Input a custom message to display when the alarm goes off.
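Before layering on the extra features below, a minimal sketch of the core set-and-trigger flow (standard library only; the 24-hour HH:MM input format is an assumption for this example) could look like this:

import datetime
import time

def set_alarm(alarm_time: str) -> None:
    # Parse "HH:MM", then poll until the wall clock reaches that time today.
    target = datetime.datetime.strptime(alarm_time, "%H:%M").time()
    print(f"Alarm set for {target.strftime('%H:%M')}")
    while datetime.datetime.now().time() < target:
        time.sleep(10)
    print("Wake up!")   # trigger: swap in a sound or notification here

if __name__ == "__main__":
    set_alarm(input("Enter alarm time (HH:MM): "))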
Additional Features
Multiple Alarms:
Allow users to set multiple alarms for different times and days.
Customizable Alarm Sounds:
Let users upload their own alarm sounds.
Volume Control:
Add an option to control the alarm sound volume.
Alarm Labels:
Enable users to label their alarms (e.g., “Wake Up,” “Meeting Reminder”).
Weather and Time Display:
Show current weather information and time on the main screen.
Recurring Alarms:
Allow users to set recurring alarms on specific days.
Dark Mode:
Implement a dark mode for the UI.
Integration with Calendars:
Sync alarms with calendar events or reminders.
Voice Control:
Add support for voice commands to set, snooze, or dismiss alarms.
Smart Alarm:
Implement a smart alarm feature that wakes the user at an optimal time based on their sleep cycle (e.g., using a sleep tracking app).
Implementing a simple grocery list management tool can be a fun and practical project. Here’s a detailed approach including game steps, input ideas, and additional features:
Game Steps
Introduction: Provide a brief introduction to the grocery list tool, explaining its purpose and how it can help manage shopping lists.
Menu Options: Present a menu with options to add, view, update, delete items, and clear the entire list.
User Interaction: Allow the user to select an option from the menu and perform the corresponding operation.
Perform Operations: Implement functionality to add items, view the list, update quantities, delete items, or clear the list.
Display Results: Show the updated grocery list and confirmation of any operations performed.
Repeat or Exit: Allow the user to perform additional operations or exit the program.
Input Ideas
Item Name: Allow the user to enter the name of the grocery item.
Quantity: Prompt the user to specify the quantity of each item (optional).
Operation Choice: Provide options to add, view, update, delete, or clear items from the list.
Item Update: For updating, allow the user to specify the item and new quantity.
Clear List Confirmation: Ask for confirmation before clearing the entire list.
Additional Features
Persistent Storage: Save the grocery list to a file (e.g., JSON or CSV) and load it on program startup.
GUI Interface: Create a graphical user interface using Tkinter or another library for a more user-friendly experience.
Search Functionality: Implement a search feature to find items in the list quickly.
Sort and Filter: Allow sorting the list by item name or quantity, and filtering by categories or availability.
Notification System: Add notifications or reminders for items that are running low or need to be purchased.
Multi-user Support: Implement features to manage multiple lists for different users or households.
Export/Import: Allow users to export the grocery list to a file or import from a file.
Item Categories: Organize items into categories (e.g., dairy, produce) for better management.
Undo Feature: Implement an undo feature to revert the last operation.
Statistics: Provide statistics on the number of items, total quantity, or other relevant data.
Implementing a simple key-value storage system is a great way to practice data handling and basic file operations in Python. Here’s a detailed approach including game steps, input ideas, and additional features:
Game Steps
Introduction: Provide an introduction explaining what a key-value storage system is and its uses.
Menu Options: Present a menu with options to add, retrieve, update, and delete key-value pairs.
User Interaction: Allow the user to interact with the system based on their choice from the menu.
Perform Operations: Implement functionality to perform the chosen operations (add, retrieve, update, delete).
Display Results: Show the results of the operations (e.g., value retrieved or confirmation of deletion).
Repeat or Exit: Allow the user to perform additional operations or exit the program.
Input Ideas
Key Input: Allow the user to enter a key for operations. Ensure that keys are unique for storage operations.
Value Input: Prompt the user to enter a value associated with a key. Values can be strings or numbers.
Operation Choice: Present options to add, retrieve, update, or delete key-value pairs.
File Handling: Optionally, allow users to specify a file to save and load the key-value pairs.
Validation: Ensure that keys and values are entered correctly and handle any errors (e.g., missing keys).
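A minimal sketch of the core operations backed by a JSON file (the store.json file name is an assumption) could look like this:

import json
import os

STORE_FILE = "store.json"

def load_store() -> dict:
    # Load existing pairs from disk, or start with an empty store.
    if os.path.exists(STORE_FILE):
        with open(STORE_FILE) as f:
            return json.load(f)
    return {}

def save_store(store: dict) -> None:
    with open(STORE_FILE, "w") as f:
        json.dump(store, f, indent=2)

store = load_store()
store["greeting"] = "hello"        # add / update
print(store.get("greeting"))       # retrieve -> "hello"
store.pop("greeting", None)        # delete
save_store(store)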
Additional Features
Persistent Storage: Save key-value pairs to a file (e.g., JSON or CSV) and load them on program startup.
Data Validation: Implement checks to validate the format of keys and values.
GUI Interface: Create a graphical user interface using Tkinter or another library for a more user-friendly experience.
Search Functionality: Add a feature to search for keys or values based on user input.
Data Backup: Implement a backup system to periodically save the key-value pairs.
Data Encryption: Encrypt the stored data for security purposes.
Command-Line Arguments: Allow users to perform operations via command-line arguments.
Multi-key Operations: Support operations on multiple keys at once (e.g., batch updates).
Undo Feature: Implement an undo feature to revert the last operation.
User Authentication: Add user authentication to secure access to the key-value storage system.
Implementing a Pomodoro technique timer is a practical way to manage time effectively using a simple and proven productivity method. Here’s a detailed approach for creating a Pomodoro timer, including game steps, input ideas, and additional features.
Game Steps
Introduction: Provide an introduction to the Pomodoro Technique, explaining that it involves working in 25-minute intervals (Pomodoros) followed by a short break, with longer breaks after several intervals.
Start Timer: Allow the user to start the timer for a Pomodoro session.
Timer Countdown: Display a countdown for the Pomodoro session and break periods.
Notify Completion: Alert the user when the Pomodoro session or break is complete.
Record Sessions: Track the number of Pomodoros completed and breaks taken.
End Session: Allow the user to end the session or reset the timer if needed.
Play Again Option: Offer the user the option to start a new session or stop the timer.
Input Ideas
Session Duration: Allow users to set the duration for Pomodoro sessions and breaks. The default is 25 minutes for work and 5 minutes for short breaks, with a longer break (e.g., 15 minutes) after a set number of Pomodoros (e.g., 4).
Custom Durations: Enable users to customize the duration of work sessions and breaks.
Notification Preferences: Allow users to choose how they want to be notified (e.g., sound alert, visual alert, or popup message).
Number of Pomodoros: Ask how many Pomodoro cycles the user wants to complete before taking a longer break.
Reset and Stop Options: Provide options to reset the timer or stop it if needed.
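A minimal console countdown sketch for one work/break cycle (durations are parameters; notifications and tracking are left to the features below) could look like this:

import time

def countdown(minutes: int, label: str) -> None:
    # Count down second by second, updating a single console line.
    remaining = minutes * 60
    while remaining > 0:
        mins, secs = divmod(remaining, 60)
        print(f"\r{label}: {mins:02d}:{secs:02d}", end="", flush=True)
        time.sleep(1)
        remaining -= 1
    print(f"\n{label} finished!")

def pomodoro(work: int = 25, short_break: int = 5) -> None:
    countdown(work, "Work")
    countdown(short_break, "Break")

if __name__ == "__main__":
    pomodoro(work=25, short_break=5)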
Additional Features
GUI Interface: Create a graphical user interface using Tkinter or another library for a more user-friendly experience.
Notifications: Implement system notifications or sound alerts to notify the user when a Pomodoro or break is over.
Progress Tracking: Track and display the number of completed Pomodoros and breaks, providing visual feedback on progress.
Task Management: Allow users to input and track tasks they want to accomplish during each Pomodoro session.
Statistics: Provide statistics on time spent working and taking breaks, possibly with visual charts or graphs.
Customizable Alerts: Enable users to set custom alert sounds or messages for different stages (start, end of Pomodoro, end of break).
Integration with Calendars: Integrate with calendar applications to schedule Pomodoro sessions and breaks automatically.
Desktop Widgets: Create desktop widgets or applets that display the remaining time for the current session and next break.
Focus Mode: Implement a focus mode that minimizes distractions by blocking certain apps or websites during Pomodoro sessions.
Daily/Weekly Goals: Allow users to set and track daily or weekly productivity goals based on completed Pomodoros.
Introduction: Provide a brief introduction to the Caesar Cipher, explaining that it’s a substitution cipher where each letter in the plaintext is shifted a fixed number of places down or up the alphabet.
Choose Operation: Ask the user whether they want to encrypt or decrypt a message.
Input Text: Prompt the user to enter the text they want to encrypt or decrypt.
Input Shift Value: Request the shift value (key) for the cipher. Ensure the value is within a valid range (typically 1 to 25).
Perform Operation: Apply the Caesar Cipher algorithm to the input text based on the user’s choice of encryption or decryption.
Display Result: Show the resulting encrypted or decrypted text to the user.
Play Again Option: Ask the user if they want to perform another encryption or decryption with new inputs.
Input Ideas
Text Input: Allow the user to input any string of text. Handle both uppercase and lowercase letters. Decide how to treat non-alphabetic characters (e.g., spaces, punctuation).
Shift Value: Ask the user for an integer shift value. Ensure it is within a reasonable range (1 to 25). Handle cases where the shift value is negative or greater than 25 by normalizing it.
Mode Selection: Provide options to select between encryption and decryption. For encryption, the shift will be added; for decryption, the shift will be subtracted.
Case Sensitivity: Handle uppercase and lowercase letters differently or consistently based on user preference.
Special Characters: Decide whether to include special characters and spaces in the encrypted/decrypted text. Define how these characters should be treated.
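A minimal sketch of the shift operation itself (decryption simply reuses it with a negated shift; non-alphabetic characters pass through unchanged) could look like this:

def caesar(text: str, shift: int) -> str:
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)   # leave spaces and punctuation unchanged
    return "".join(result)

encrypted = caesar("Hello, World!", 3)    # "Khoor, Zruog!"
decrypted = caesar(encrypted, -3)         # "Hello, World!"
print(encrypted, decrypted)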
Additional Features
Input Validation: Implement checks to ensure the shift value is an integer and falls within the expected range. Validate that text input does not contain unsupported characters (if needed).
Help/Instructions: Provide an option for users to view help or instructions on how to use the tool, explaining the Caesar Cipher and how to enter inputs.
GUI Interface: Create a graphical user interface using Tkinter or another library to make the tool more accessible and user-friendly.
File Operations: Allow users to read from and write to text files for encryption and decryption. This is useful for larger amounts of text.
Brute Force Attack: Implement a brute force mode that tries all possible shifts for decryption and displays all possible plaintexts, useful for educational purposes or cracking simple ciphers.
Custom Alphabet: Allow users to define a custom alphabet or set of characters for the cipher, making it more flexible and adaptable.
Save and Load Settings: Implement functionality to save and load encryption/decryption settings, such as shift values or custom alphabets, for future use.
Creating a command-line to-do list application is a fantastic way to practice Python programming and work with basic data management. Here’s a structured approach to building this application, including game steps, input ideas, and additional features:
Game Steps (Workflow)
Introduction:
Start with a welcome message and brief instructions on how to use the application.
Explain the available commands and how to perform actions like adding, removing, and viewing tasks.
Main Menu:
Present a main menu with options for different actions:
Add a task
View all tasks
Mark a task as complete
Remove a task
Exit the application
Task Management:
Implement functionality to add, view, update, and remove tasks.
Store tasks with details such as title, description, and completion status.
Data Persistence:
Save tasks to a file or database so that they persist between sessions.
Load tasks from the file/database when the application starts.
User Interaction:
Use input prompts to interact with the user and execute their commands.
Provide feedback and confirmation messages for actions taken.
Exit and Save:
Save the current state of tasks when the user exits the application.
Confirm that tasks are saved and provide an exit message.
Input Ideas
Command Input:
Use text commands to navigate the menu and perform actions (e.g., add, view, complete, remove, exit).
Task Details:
For adding tasks, prompt the user for details like title and description.
Use input fields for the task details:
Title: Enter task title:
Description: Enter task description:
Task Identification:
Use a unique identifier (like a number) or task title to reference tasks for actions such as marking complete or removing.
Confirmation:
Prompt the user to confirm actions such as removing a task or marking it as complete.
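A minimal in-memory sketch of the core task operations (the field names are assumptions; persistence and the menu loop are covered by the other steps) could look like this:

tasks = []   # each task: {"title": str, "description": str, "done": bool}

def add_task(title, description=""):
    tasks.append({"title": title, "description": description, "done": False})

def complete_task(index):
    tasks[index]["done"] = True

def remove_task(index):
    tasks.pop(index)

def view_tasks():
    for i, t in enumerate(tasks):
        status = "x" if t["done"] else " "
        print(f"{i}. [{status}] {t['title']} - {t['description']}")

add_task("Write blog post", "Draft the to-do list article")
add_task("Review PR")
complete_task(0)
view_tasks()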
Additional Features
Task Prioritization:
Allow users to set priorities (e.g., low, medium, high) for tasks.
Implement sorting or filtering by priority.
Due Dates:
Add due dates to tasks and provide options to view tasks by date or sort by due date.
Search and Filter:
Implement search functionality to find tasks by title or description.
Add filters to view tasks by status (e.g., completed, pending) or priority.
Task Categories:
Allow users to categorize tasks into different groups or projects.
Export and Import:
Provide options to export tasks to a file (e.g., CSV or JSON) and import tasks from a file.
User Authentication:
Add user authentication if multiple users need to manage their own tasks.
Reminders and Notifications:
Implement reminders or notifications for tasks with upcoming due dates.
Statistics:
Show statistics such as the number of completed tasks, pending tasks, or tasks by priority.
Unicode is an international standard extensively adopted across the industry and the Internet to represent Tamil and other languages. Yet, we still face several legacy issues and ongoing challenges.
The content of government documents cannot be easily extracted. The conversion of documents from one font to another presents problems due to inconsistencies. There exist various, slightly different standards for phonetic transcription of Tamil into Latin scripts. There are varied keyboard layouts and input styles for desktop and mobile.
Researchers, developers and practitioners continue to evolve solutions to overcome these challenges. The presentations and discussions will identify needs, issues and solutions for working with Tamil content in varied computing environments.
Please fill this anonymous survey related to using Tamil in computers and smartphones.
Presentation Topics
Introduction to Unicode – Elango
Using Tamil Keyboards on Computer and Mobile Platforms – Suganthan
Android’s New Faster and More Intuitive Method to Type Tamil – Elango
Working with Tamil Content in PDFs – Shrinivasan
Tamil Font Styles – Uthayan
Challenges in Automatic Tamil Font Conversions – Parathan
Transliteration Approaches for Library Metadata Generation – Natkeeran
Date
July 27, 2024 (Saturday) – Virtual Presentations and Discussion
9:30 am – 11:30 am (Toronto time) 7 pm – 9 pm (Chennai/Jaffna time)