โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Postgres – Write-Ahead Logging (WAL) in PostgreSQL

16 November 2024 at 07:06

Write-Ahead Logging (WAL) is a fundamental feature of PostgreSQL, ensuring data integrity and facilitating critical functionalities like crash recovery, replication, and backup.

This series of experiments explores WAL in detail: why it matters, how it works, and examples that demonstrate its usage.

What is Write-Ahead Logging (WAL)?

WAL is a logging mechanism where changes to the database are first written to a log file before being applied to the actual data files. This ensures that in case of a crash or unexpected failure, the database can recover and replay these logs to restore its state.

A fair question arises here:

Why do we need WAL when we already take periodic backups?

Write-Ahead Logging (WAL) is critical even when periodic backups are in place, because it complements them by providing data consistency, durability, and flexibility in the following scenarios.

1. Crash Recovery

  • Why It's Important: Periodic backups only capture the database state at specific intervals. If a crash occurs after the latest backup, all changes made since that backup would be lost.
  • Role of WAL: WAL ensures that any committed transactions not yet written to data files (due to PostgreSQL's lazy-writing behavior) are recoverable. During recovery, PostgreSQL replays the WAL logs to restore the database to its last consistent state, bridging the gap between the last checkpoint and the crash.

Example:

  • Backup Taken: At 12:00 PM.
  • Crash Occurs: At 1:30 PM.
  • Without WAL: All changes after 12:00 PM are lost.
  • With WAL: All changes up to 1:30 PM are recovered.

2. Point-in-Time Recovery (PITR)

  • Why It's Important: Periodic backups restore the database to the exact time of the backup. However, this may not be sufficient if you need to recover to a specific point, such as just before a mistake (e.g., accidental data deletion).
  • Role of WAL: WAL records every change, enabling you to replay transactions up to a specific time. This allows fine-grained recovery beyond what periodic backups can provide.

Example:

  • Backup Taken: At 12:00 AM.
  • Mistake Made: At 9:45 AM, an important table is accidentally dropped.
  • Without WAL: Restore only to 12:00 AM, losing 9 hours and 45 minutes of data.
  • With WAL: Restore to 9:44 AM, recovering all valid changes except the accidental drop.
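
To make this example concrete, here is a minimal configuration sketch, assuming a PostgreSQL 12+ server and a local WAL archive directory (the paths and the timestamp below are illustrative placeholders, not values from this post):

# postgresql.conf on the running server: enable WAL archiving
wal_level = replica
archive_mode = on
archive_command = 'cp %p /var/lib/postgresql/wal_archive/%f'

# After restoring the 12:00 AM base backup, point the server at the archived
# WAL and tell it when to stop replaying, then create an empty recovery.signal
# file in the data directory and start PostgreSQL.
restore_command = 'cp /var/lib/postgresql/wal_archive/%f %p'
recovery_target_time = '2024-11-16 09:44:00'

PostgreSQL then replays archived WAL up to recovery_target_time, landing just before the accidental drop.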

3. Replication and High Availability

  • Why It's Important: In a high-availability setup, replicas must stay synchronized with the primary database to handle failovers. Periodic backups cannot provide real-time synchronization.
  • Role of WAL: WAL enables streaming replication by transmitting logs to replicas, ensuring near real-time synchronization.

Example:

  • A primary database sends WAL logs to replicas as changes occur. If the primary fails, a replica can quickly take over without data loss.
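
As a minimal sketch of how such a replica is seeded from the primary's WAL, assuming a replication user named replicator and the default data directory (both illustrative, not from this post):

# Run on the replica host: clone the primary and write standby settings (-R),
# streaming WAL during the copy (-X stream).
pg_basebackup -h primary-host -U replicator -D /var/lib/postgresql/data -R -X stream

# The replica then starts in standby mode and keeps applying WAL streamed
# from the primary, staying in near real-time sync.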

4. Handling Incremental Changes

  • Why It's Important: Periodic backups store complete snapshots of the database, which can be time-consuming and resource-intensive. They also do not capture intermediate changes.
  • Role of WAL: WAL allows incremental updates by recording only the changes made since the last backup or checkpoint. This is crucial for efficient data recovery and backup optimization.

5. Ensuring Data Durability

  • Why It's Important: Even during normal operations, a database crash (e.g., power failure) can occur. Without WAL, transactions committed by users but not yet flushed to disk are lost.
  • Role of WAL: WAL ensures durability by logging all changes before acknowledging transaction commits. This guarantees that committed transactions are recoverable even if the system crashes before flushing the changes to data files.

6. Supporting Hot Backups

  • Why It's Important: For large, active databases, taking a backup while the database is running can result in inconsistent snapshots.
  • Role of WAL: WAL ensures consistency by recording changes that occur during the backup process. When replayed, these logs synchronize the backup, ensuring it is valid and consistent.

7. Debugging and Auditing

  • Why It's Important: Periodic backups are static snapshots and don't provide a record of what happened in the database between backups.
  • Role of WAL: WAL contains a sequential record of all database modifications, which can help in debugging issues or auditing transactions.
Feature                 | Periodic Backups                  | Write-Ahead Logging
Crash Recovery          | Limited to the last backup        | Ensures full recovery to the crash point
Point-in-Time Recovery  | Restores only to the backup time  | Allows recovery to any specific point
Replication             | Not supported                     | Enables real-time replication
Efficiency              | Full snapshot                     | Incremental changes
Durability              | Relies on backup frequency        | Guarantees transaction durability

In upcoming sessions, we will experiment with each of these failure scenarios to understand them in practice.

POC: Tamil Date parser using parse

By: Hariharan
15 November 2024 at 18:05

Tamil Date time parser POC
https://github.com/r1chardj0n3s/parse

It requires the external dependency parse to parse Python format strings with placeholders.

import parse
from date import TA_MONTHS
from date import datetime

# POC of a Tamil date-time parser
def strptime(format='{month}, {date} {year}', date_string="เฎจเฎตเฎฎเฏเฎชเฎฐเฏ, 16 2024"):
    parsed = parse.parse(format, date_string)
    month = TA_MONTHS.index(parsed['month']) + 1
    date = int(parsed['date'])
    year = int(parsed['year'])
    return datetime(year, month, date)

print(strptime("{date}-{month}-{year}", "16-เฎจเฎตเฎฎเฏเฎชเฎฐเฏ-2024"))
# dt = datetime(2024, 11, 16)
# print(dt.strptime_ta("เฎจเฎตเฎฎเฏเฎชเฎฐเฏ , 16 2024", "%m %d %Y"))

How to Create & Publish a PHP Package with Composer? – in Tamil

By: Hariharan
8 November 2024 at 17:59

เฎ…เฎ•เฏ, 13 2024

เฎชเฎฟเฎนเฏ†เฎšเฏเฎชเฎฟ เฎชเฏŠเฎคเฎฟเฎ•เฎณเฏˆ เฎชเฎฟเฎนเฏ†เฎšเฏเฎชเฎฟ เฎ•เฎฎเฏเฎชเฏ‹เฎšเฎฐเฏ-เฎ‰เฎŸเฎฉเฏ เฎ‰เฎฐเฏเฎตเฎพเฎ•เฏเฎ• เฎฎเฎฑเฏเฎฑเฏเฎฎเฏ เฎตเฏ†เฎณเฎฟเฎฏเฎฟเฎŸเฏเฎตเฎคเฏ เฎ’เฎฐเฏ เฎจเฏ‡เฎฐเฎŸเฎฟเฎฏเฎพเฎฉ เฎตเฎดเฎฟเฎฎเฏเฎฑเฏˆ เฎ‡เฎจเฏเฎค เฎตเฎดเฎฟเฎฎเฏเฎฑเฏˆเฎฏเฏˆ เฎชเฎฟเฎฉเฏเฎชเฎฑเฏเฎฑเฎฟเฎฉเฎพเฎฒเฏ เฎจเฎพเฎฎเฏ เฎŽเฎณเฎฟเฎฎเฏˆเฎฏเฎพเฎ• เฎชเฎฟเฎนเฏ†เฎšเฏเฎชเฎฟ เฎšเฎฎเฏ‚เฎ•เฎคเฏเฎคเฏเฎŸเฎฉเฏ เฎจเฎฎเฎคเฏ เฎจเฎฟเฎฐเฎฒเฏเฎ•เฎณเฏˆ เฎชเฏŠเฎคเฎฟเฎตเฎŸเฎฟเฎตเฎคเฏเฎคเฎฟเฎฒเฏ เฎชเฎ•เฎฟเฎฐเฏเฎจเฏเฎคเฏเฎ•เฏŠเฎณเฏเฎณเฎฒเฎพเฎฎเฏ.

เฎ•เฎฎเฏเฎชเฏ‹เฎšเฎฐเฏ โ€“ (เฎชเฎฟเฎนเฏ†เฎšเฏเฎชเฎฟ เฎšเฎพเฎฐเฏเฎชเฏเฎ•เฎณเฎฟเฎฉเฏ เฎจเฎฟเฎฐเฏเฎตเฎพเฎ•เฎฟ) โ€“ PHP Dependency Manager

เฎคเฏ‡เฎตเฏˆเฎฏเฎพเฎฉเฎตเฏˆ:

เฎ‰เฎ™เฏเฎ•เฎณเฎคเฏ เฎ•เฎฃเฎฟเฎฉเฎฟเฎฏเฎฟเฎฒเฏ เฎชเฎฟเฎฉเฏเฎตเฎฐเฏเฎตเฎฑเฏเฎฑเฏˆ เฎจเฎฟเฎฑเฏเฎตเฎฟ เฎ‡เฎฐเฏเฎชเฏเฎชเฎคเฏ เฎ…เฎตเฎšเฎฟเฎฏเฎฎเฏ.

  • เฎชเฎฟเฎนเฏ†เฎšเฏเฎชเฎฟ (เฎชเฎคเฎฟเฎชเฏเฎชเฏ 7.4 or เฎ…เฎฃเฏเฎฎเฏˆ)
  • เฎ•เฎฎเฏเฎชเฏŠเฎšเฎฐเฏ (เฎ…เฎฃเฏเฎฎเฏˆ เฎชเฎคเฎฟเฎชเฏเฎชเฏ)
  • เฎ•เฎฟเฎŸเฏ (เฎ…เฎฃเฏเฎฎเฏˆ เฎชเฎคเฎฟเฎชเฏเฎชเฏ)
  • เฎ’เฎฐเฏ เฎ•เฎฟเฎŸเฏ เฎนเฎชเฏ เฎ•เฎฃเฎ•เฏเฎ•เฏ
  • เฎชเฏ‡เฎ•เฏเฎ•เฎœเฎฟเฎธเฏเฎŸเฏ เฎ•เฎฃเฎ•เฏเฎ•เฏ

เฎชเฎŸเฎฟเฎ•เฎณเฏ:

เฎชเฎŸเฎฟ 1: เฎจเฎฎเฏเฎฎเฏเฎŸเฏˆเฎฏ เฎชเฏŠเฎคเฎฟเฎ•เฏเฎ•เฎพเฎฉ เฎ’เฎฐเฏ เฎ•เฏ‹เฎชเฏเฎชเฏเฎฑเฏˆเฎฏเฏˆ เฎ‰เฎฐเฏเฎตเฎพเฎ•เฏเฎ•เฎฟ เฎ•เฏŠเฎณเฏเฎณเฎตเฏเฎฎเฏ.

mkdir open-tamil
cd open-tamil

Step 2: Initialize the Composer Package

Run the following command to initialize the Composer package on our machine.

composer init

Running the above command prompts the following questions on the command line:

Package name: your-username/my-php-package

Description: A sample PHP package

Author: Your Name <your-email@example.com>

Minimum Stability: stable (or leave blank)

Package Type: library

License: MIT

After answering these questions, it asks whether to define dependencies interactively; answer no.

Finally, when prompted to generate the composer.json file, answer yes to create it.

Step 3:

Once the composer.json file is created, it will look like this:

{
    "name": "your-username/my-php-package",
    "description": "A sample PHP package",
    "type": "library",
    "require": {
        "php": ">=7.4"
    },
    "autoload": {
        "psr-4": {
            "MyPackage\\": "src/"
        }
    },
    "authors": [
        {
            "name": "Your Name",
            "email": "your-email@example.com"
        }
    ],
    "license": "MIT"
}

Step 4

Next, commit your code with Git and push it to GitHub.

Step 5

To publish the package on Composer, log in to Packagist and click the Submit button.

After clicking Submit, the package submission page opens. Enter the URL of a public repository from your GitHub account and click the Check button to validate it.

Note: In Composer terminology, the publisher is called the vendor. I have published two packages using the vendor name hariharan.

Once the new package is validated, it is ready to be published.

เฎชเฎพเฎฐเฏเฎ•เฏเฎ• :

https://packagist.org/packages/hariharan/open-tamil

https://packagist.org/packages/hariharan/thirukural

เฎจเฎฟเฎฑเฏเฎตเฎฟ เฎชเฎพเฎฐเฏเฎ•เฏเฎ•:

composer require hariharan/thirukural

composer require hariharan/open-tamil

HAProxy EP 9: Load Balancing with Weighted Round Robin

11 September 2024 at 14:39

Load balancing helps distribute client requests across multiple servers to ensure high availability, performance, and reliability. Weighted Round Robin Load Balancing is an extension of the round-robin algorithm, where each server is assigned a weight based on its capacity or performance capabilities. This approach ensures that more powerful servers handle more traffic, resulting in a more efficient distribution of the load.

What is Weighted Round Robin Load Balancing?

Weighted Round Robin Load Balancing assigns a weight to each server. The weight determines how many requests each server should handle relative to the others. Servers with higher weights receive more requests compared to those with lower weights. This method is useful when backend servers have different processing capabilities or resources.

Step-by-Step Implementation with Docker

Step 1: Create the Flask Applications

We'll use the same three Flask applications (app1.py, app2.py, and app3.py) as in previous examples.

  • Flask App 1 (app1.py):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 1!"

@app.route("/data")
def data():
    return "Data from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)

  • Flask App 2 (app2.py):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 2!"

@app.route("/data")
def data():
    return "Data from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)

  • Flask App 3 (app3.py):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 3!"

@app.route("/data")
def data():
    return "Data from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

Step 2: Create Dockerfiles for Each Flask Application

Create Dockerfiles for each of the Flask applications:

  • Dockerfile for Flask App 1 (Dockerfile.app1):

# Use the official Python image from Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the application file into the container
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

  • Dockerfile for Flask App 2 (Dockerfile.app2):

FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]

  • Dockerfile for Flask App 3 (Dockerfile.app3):

FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create the HAProxy Configuration File

Create an HAProxy configuration file (haproxy.cfg) to implement Weighted Round Robin Load Balancing


global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    server server1 app1:5001 weight 2 check
    server server2 app2:5002 weight 1 check
    server server3 app3:5003 weight 3 check

Explanation:

  • The balance roundrobin directive tells HAProxy to use the Round Robin load balancing algorithm.
  • The weight option for each server specifies the weight associated with each server:
    • server1 (App 1) has a weight of 2.
    • server2 (App 2) has a weight of 1.
    • server3 (App 3) has a weight of 3.
  • Requests will be distributed based on these weights: App 3 will receive the most requests, App 2 the least, and App 1 will be in between.

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy):


# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80

Step 5: Create a docker-compose.yml File

To manage all the containers together, create a docker-compose.yml file

version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3


Explanation:

  • The docker-compose.yml file defines the services (app1, app2, app3, and haproxy) and their respective configurations.
  • HAProxy depends on the three Flask applications to be up and running before it starts.

Step 6: Build and Run the Docker Containers

Run the following command to build and start all the containers


docker-compose up --build

This command builds Docker images for all three Flask apps and HAProxy, then starts them.

Step 7: Test the Load Balancer

Open your browser or use curl to make requests to the HAProxy server


curl http://localhost/
curl http://localhost/data

Observation:

  • With Weighted Round Robin Load Balancing, you should see that requests are distributed according to the weights specified in the HAProxy configuration.
  • For example, App 3 should receive three times more requests than App 2, and App 1 should receive twice as many as App 2.
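
To see the distribution concretely, here is a small optional Python sketch (not part of the original setup) that sends a batch of requests through HAProxy and tallies which app answered, using the distinct response text each Flask app returns:

import requests
from collections import Counter

# Send a batch of requests through HAProxy and count which backend responded.
counts = Counter()
for _ in range(60):
    body = requests.get("http://localhost/").text  # e.g. "Hello from Flask App 3!"
    counts[body] += 1

for app, count in counts.items():
    print(f"{app}: {count}")
# With weights 2:1:3 the tallies should roughly follow a 2:1:3 ratio
# (App 3 most, App 2 least).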

Conclusion

By implementing Weighted Round Robin Load Balancing with HAProxy, you can distribute traffic more effectively according to the capacity or performance of each backend server. This approach helps optimize resource utilization and ensures a balanced load across servers.

HAProxy EP 8: Load Balancing with Random Load Balancing

11 September 2024 at 14:23

Load balancing distributes client requests across multiple servers to ensure high availability and reliability. One of the simplest load balancing algorithms is Random Load Balancing, which selects a backend server randomly for each client request.

Although this approach does not consider server load or other metrics, it can be effective for less critical applications or when the goal is to achieve simplicity.

What is Random Load Balancing?

Random Load Balancing assigns incoming requests to a randomly chosen server from the available pool of servers. This method is straightforward and ensures that requests are distributed in a non-deterministic manner, which may work well for environments with equally capable servers and minimal concerns about server load or state.

Step-by-Step Implementation with Docker

Step 1: Create the Flask Applications

We'll use the same three Flask applications (app1.py, app2.py, and app3.py) as in previous examples.

Flask App 1 (app1.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 1!"

@app.route("/data")
def data():
    return "Data from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)


Flask App 2 (app2.py)


from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 2!"

@app.route("/data")
def data():
    return "Data from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)

Flask App 3 (app3.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask App 3!"

@app.route("/data")
def data():
    return "Data from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)


Step 2: Create Dockerfiles for Each Flask Application

Create Dockerfiles for each of the Flask applications:

  • Dockerfile for Flask App 1 (Dockerfile.app1):
# Use the official Python image from Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the application file into the container
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

  • Dockerfile for Flask App 2 (Dockerfile.app2):
FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]


  • Dockerfile for Flask App 3 (Dockerfile.app3):

FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create the HAProxy Configuration File

Create an HAProxy configuration file (haproxy.cfg) to implement Random Load Balancing:


global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance random(2)
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check

Explanation:

  • The balance random(2) directive tells HAProxy to use the Random load balancing algorithm.
  • The draw count of 2 makes HAProxy pick two servers at random and send the request to the one with the fewer active connections. This adds a bit of load awareness to the random choice.
  • The server directives define the backend servers and their ports.

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy):

# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80


Step 5: Create a docker-compose.yml File

To manage all the containers together, create a docker-compose.yml file:


version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3

Explanation:

  • The docker-compose.yml file defines the services (app1, app2, app3, and haproxy) and their respective configurations.
  • HAProxy depends on the three Flask applications to be up and running before it starts.

Step 6: Build and Run the Docker Containers

Run the following command to build and start all the containers:


docker-compose up --build

This command builds Docker images for all three Flask apps and HAProxy, then starts them.

Step 7: Test the Load Balancer

Open your browser or use curl to make requests to the HAProxy server:

curl http://localhost/
curl http://localhost/data

Observation:

  • With Random Load Balancing, each request should randomly hit one of the three backend servers.
  • Since the selection is random, you may not see a predictable pattern; however, the requests should be evenly distributed across the servers over a large number of requests.

Conclusion

By implementing Random Load Balancing with HAProxy, we've demonstrated a simple way to distribute traffic across multiple servers without relying on complex metrics or state information. While this approach may not be ideal for all use cases, it can be useful in scenarios where simplicity is more valuable than fine-tuned load distribution.

HAProxy EP 7: Load Balancing with Source IP Hash, URI – Consistent Hashing

11 September 2024 at 13:55

Load balancing helps distribute traffic across multiple servers, enhancing performance and reliability. One common strategy is Source IP Hash load balancing, which ensures that requests from the same client IP are consistently directed to the same server.

This method is particularly useful for applications requiring session persistence, such as shopping carts or user sessions. In this blog, we'll implement Source IP Hash load balancing using Flask and HAProxy, all within Docker containers.

What is Source IP Hash Load Balancing?

Source IP Hash Load Balancing is a technique that uses a hash function on the client's IP address to determine which server should handle the request. This guarantees that a particular client will always be directed to the same backend server, ensuring session persistence and stateful behavior.

Consistent Hashing: https://parottasalna.com/2024/06/17/why-do-we-need-to-maintain-same-hash-in-load-balancer/

Step-by-Step Implementation with Docker

Step 1: Create the Flask Applications

We'll create three separate Flask applications, one for each backend server.

Flask App 1 (app1.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)


Flask App 2 (app2.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)


Flask App 3 (app3.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

Each Flask app listens on a different port (5001, 5002, 5003).

Step 2: Create Dockerfiles for Each Flask Application

Dockerfile for Flask App 1 (Dockerfile.app1)

# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

Dockerfile for Flask App 2 (Dockerfile.app2)

FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]

Dockerfile for Flask App 3 (Dockerfile.app3)

FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create a configuration for HAProxy

global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance source
    hash-type consistent
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check

Explanation:

  • The balance source directive tells HAProxy to use Source IP Hashing as the load balancing algorithm.
  • The hash-type consistent directive ensures consistent hashing, which is essential for minimizing disruption when backend servers are added or removed.
  • The server directives define the backend servers and their ports.

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy)

# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80

Step 5: Create a docker-compose.yml File

To manage all the containers together, create a docker-compose.yml file

version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3

Explanation:

  • The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
  • Each Flask app is built from its respective Dockerfile and runs on its port.
  • HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.

Step 6: Build and Run the Docker Containers

Run the following commands to build and start all the containers:

# Build and run the containers
docker-compose up --build

This command will build Docker images for all three Flask apps and HAProxy and start them up in the background.

Step 7: Test the Load Balancer

Open your browser or use a tool like curl to make requests to the HAProxy server:

curl http://localhost

Observation:

  • With Source IP Hash load balancing, each unique IP address (e.g., your local IP) should always be directed to the same backend server.
  • If you access the HAProxy from different IPs (e.g., using different devices or by simulating different client IPs), you will see that requests are consistently sent to the same server for each IP.

For URI-based hashing, we only need to change the balance directive in the backend:

global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance uri
    hash-type consistent
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check


Explanation:

  • The balance uri directive tells HAProxy to use URI Hashing as the load balancing algorithm.
  • The hash-type consistent directive ensures consistent hashing to minimize disruption when backend servers are added or removed.
  • The server directives define the backend servers and their ports.

HAProxy EP 6: Load Balancing with Least Connection

11 September 2024 at 13:32

Load balancing is crucial for distributing incoming network traffic across multiple servers, ensuring optimal resource utilization and improving application performance. In this blog, we'll explore how to implement Least Connection load balancing using Flask as our backend application and HAProxy as our load balancer.

What is Least Connection Load Balancing?

Least Connection Load Balancing is a dynamic algorithm that distributes requests to the server with the fewest active connections at any given time. This method ensures that servers with lighter loads receive more requests, preventing any single server from becoming a bottleneck.

Step-by-Step Implementation with Docker

Step 1: Create the Flask Applications

We'll create three separate Flask applications, one for each backend server; two of them are deliberately slowed down so the Least Connection behavior becomes visible.

Flask App 1 (app1.py) – Introduced Slowness by adding sleep

from flask import Flask
import time

app = Flask(__name__)

@app.route("/")
def hello():
    time.sleep(5)
    return "Hello from Flask App 1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)


Flask App 2 (app2.py)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask App 2!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)


Flask App 3 (app3.py) – Introduced Slowness by adding sleep.

from flask import Flask
import time

app = Flask(__name__)

@app.route("/")
def hello():
    time.sleep(5)
    return "Hello from Flask App 3!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

Each Flask app listens on a different port (5001, 5002, 5003).

Step 2: Create Dockerfiles for Each Flask Application

Dockerfile for Flask App 1 (Dockerfile.app1)

# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY app1.py .

# Install Flask inside the container
RUN pip install Flask

# Expose the port the app runs on
EXPOSE 5001

# Run the application
CMD ["python", "app1.py"]

Dockerfile for Flask App 2 (Dockerfile.app2)

FROM python:3.9-slim
WORKDIR /app
COPY app2.py .
RUN pip install Flask
EXPOSE 5002
CMD ["python", "app2.py"]

Dockerfile for Flask App 3 (Dockerfile.app3)

FROM python:3.9-slim
WORKDIR /app
COPY app3.py .
RUN pip install Flask
EXPOSE 5003
CMD ["python", "app3.py"]

Step 3: Create a configuration for HAProxy

global
    log stdout format raw local0
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend servers

backend servers
    balance leastconn
    server server1 app1:5001 check
    server server2 app2:5002 check
    server server3 app3:5003 check

Explanation:

  • frontend http_front: Defines the entry point for incoming traffic. It listens on port 80.
  • backend servers: Specifies the backend pool HAProxy distributes traffic to: the three Flask apps (app1, app2, app3). The balance leastconn directive selects the Least Connection load balancing algorithm.
  • server directives: Lists the backend servers with their IP addresses and ports. The check option allows HAProxy to monitor the health of each server.

Step 4: Create a Dockerfile for HAProxy

Create a Dockerfile for HAProxy (Dockerfile.haproxy)

# Use the official HAProxy image from Docker Hub
FROM haproxy:latest

# Copy the custom HAProxy configuration file into the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Expose the port for HAProxy
EXPOSE 80

Step 5: Create a docker-compose.yml File

To manage all the containers together, create a docker-compose.yml file

version: '3'

services:
  app1:
    build:
      context: .
      dockerfile: Dockerfile.app1
    container_name: flask_app1
    ports:
      - "5001:5001"

  app2:
    build:
      context: .
      dockerfile: Dockerfile.app2
    container_name: flask_app2
    ports:
      - "5002:5002"

  app3:
    build:
      context: .
      dockerfile: Dockerfile.app3
    container_name: flask_app3
    ports:
      - "5003:5003"

  haproxy:
    build:
      context: .
      dockerfile: Dockerfile.haproxy
    container_name: haproxy
    ports:
      - "80:80"
    depends_on:
      - app1
      - app2
      - app3

Explanation:

  • The docker-compose.yml file defines four services: app1, app2, app3, and haproxy.
  • Each Flask app is built from its respective Dockerfile and runs on its port.
  • HAProxy is configured to wait (depends_on) for all three Flask apps to be up and running.

Step 6: Build and Run the Docker Containers

Run the following commands to build and start all the containers:

# Build and run the containers
docker-compose up --build

This command will build Docker images for all three Flask apps and HAProxy and start them up in the background.

Once the containers are up, HAProxy routes each incoming request to the Flask app that currently has the fewest active connections.

Step 7: Test the Load Balancer

Open your browser or use a tool like curl to make requests to the HAProxy server:

curl http://localhost

With single sequential requests you will still see responses from all three apps. The Least Connection strategy shows its value under concurrent load, when the slow apps (App 1 and App 3) hold their connections open and most responses come back as "Hello from Flask App 2!".
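
To see this concretely, here is a small optional Python sketch (not part of the original post) that sends requests in parallel so the slow apps accumulate open connections while HAProxy keeps steering new requests to the least-loaded backend:

from collections import Counter
from concurrent.futures import ThreadPoolExecutor

import requests

def fetch(_):
    # Each call goes through HAProxy, which picks the backend
    # with the fewest active connections at that moment.
    return requests.get("http://localhost/").text

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch, range(30)))

print(Counter(results))
# The fast app (App 2) should dominate the tally, since the slow apps
# hold each connection open for about 5 seconds.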

Mastering Request Retrying in Python with Tenacity: A Developer's Journey

7 September 2024 at 01:49

Meet Jafer, a talented developer (self-boast) working at a fast-growing tech company. His team is building an innovative app that fetches data from multiple third-party APIs in real time to provide users with up-to-date information.

Everything is going smoothly until one day, a spike in traffic causes their app to face a wave of "HTTP 500" and "Timeout" errors. Requests start failing left and right, and users are left staring at the dreaded "Data Unavailable" message.

Jafer realizes that he needs a way to make their app more resilient against these unpredictable network hiccups. That's when he discovers Tenacity, a powerful Python library designed to help developers handle retries gracefully.

Join Jafer as he dives into Tenacity and learns how to turn his app from fragile to robust with just a few lines of code!

Step 0: Mock Flask API

from flask import Flask, jsonify, make_response
import random
import time

app = Flask(__name__)

# Scenario 1: Random server errors
@app.route('/random_error', methods=['GET'])
def random_error():
    if random.choice([True, False]):
        return make_response(jsonify({"error": "Server error"}), 500)  # Simulate a 500 error randomly
    return jsonify({"message": "Success"})

# Scenario 2: Timeouts
@app.route('/timeout', methods=['GET'])
def timeout():
    time.sleep(5)  # Simulate a long delay that can cause a timeout
    return jsonify({"message": "Delayed response"})

# Scenario 3: 404 Not Found error
@app.route('/not_found', methods=['GET'])
def not_found():
    return make_response(jsonify({"error": "Not found"}), 404)

# Scenario 4: Rate-limiting (simulated with a fixed chance)
@app.route('/rate_limit', methods=['GET'])
def rate_limit():
    if random.randint(1, 10) <= 3:  # 30% chance to simulate rate limiting
        return make_response(jsonify({"error": "Rate limit exceeded"}), 429)
    return jsonify({"message": "Success"})

# Scenario 5: Empty response
@app.route('/empty_response', methods=['GET'])
def empty_response():
    if random.choice([True, False]):
        return make_response("", 204)  # Simulate an empty response with 204 No Content
    return jsonify({"message": "Success"})

if __name__ == '__main__':
    app.run(host='localhost', port=5000, debug=True)

To run the Flask app, use the command,

python mock_server.py

Step 1: Introducing Tenacity

Jafer decides to start with the basics. He knows that Tenacity will allow him to retry failed requests without cluttering his codebase with complex loops and error handling. So, he installs the library,

pip install tenacity

With Tenacity ready, Jafer decides to tackle his first problem: retrying a request that fails due to server errors.

Step 2: Retrying on Exceptions

He writes a simple function that fetches data from an API and wraps it with Tenacity's @retry decorator,

import requests
import logging
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, wait_fixed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(3),
        wait=wait_fixed(2),
        before=before_log(logger, logging.INFO),
        after=after_log(logger, logging.INFO))
def fetch_random_error():
    response = requests.get('http://localhost:5000/random_error')
    response.raise_for_status()  # Raises an HTTPError for 4xx/5xx responses
    return response.json()
 
if __name__ == '__main__':
    try:
        data = fetch_random_error()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))

This code will attempt the request up to 3 times, waiting 2 seconds between each try. Jafer feels confident that this will handle the occasional hiccup. However, he soon realizes that he needs more control over which exceptions trigger a retry.

Step 3: Handling Specific Exceptions

Jafer's app sometimes receives a "404 Not Found" error, which should not be retried because the resource doesn't exist. He modifies the retry logic to handle only certain exceptions,

import requests
import logging
from tenacity import before_log, after_log
from requests.exceptions import HTTPError, Timeout
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_fixed
 

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(3),
        wait=wait_fixed(2),
        retry=retry_if_exception_type((HTTPError, Timeout)),
        before=before_log(logger, logging.INFO),
        after=after_log(logger, logging.INFO))
def fetch_data():
    response = requests.get('http://localhost:5000/timeout', timeout=2)  # Set a short timeout to simulate failure
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_data()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))

Now, the function retries only on HTTPError or Timeout, avoiding unnecessary retries for a "404" error. Jafer's app is starting to feel more resilient!

Step 4: Implementing Exponential Backoff

A few days later, the team notices that they're still getting rate-limited by some APIs. Jafer recalls the concept of exponential backoff, a strategy where the wait time between retries increases exponentially, reducing the load on the server and preventing further rate limiting.

He decides to implement it,

import requests
import logging
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, wait_exponential

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


@retry(stop=stop_after_attempt(5),
       wait=wait_exponential(multiplier=1, min=2, max=10),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_rate_limit():
    response = requests.get('http://localhost:5000/rate_limit')
    response.raise_for_status()
    return response.json()
 
if __name__ == '__main__':
    try:
        data = fetch_rate_limit()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))

With this code, the wait time starts at 2 seconds and doubles with each retry, up to a maximum of 10 seconds. Jafer's app is now much less likely to be rate-limited!

Step 5: Retrying Based on Return Values

Jafer encounters another issue: some APIs occasionally return an empty response (204 No Content). These cases should also trigger a retry. Tenacity makes this easy with the retry_if_result feature,

import requests
import logging
from tenacity import before_log, after_log

from tenacity import retry, stop_after_attempt, retry_if_result

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
  

@retry(retry=retry_if_result(lambda x: x is None), stop=stop_after_attempt(3), before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_empty_response():
    response = requests.get('http://localhost:5000/empty_response')
    if response.status_code == 204:
        return None  # Simulate an empty response
    response.raise_for_status()
    return response.json()
 
if __name__ == '__main__':
    try:
        data = fetch_empty_response()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))

Now, the function retries when it receives an empty response, ensuring that users get the data they need.

Step 6: Combining Multiple Retry Conditions

But Jafer isn't done yet. Some situations require combining multiple conditions. He wants to retry on HTTPError, Timeout, or a None return value. With Tenacity's retry_any feature, he can do just that,

import requests
import logging
from tenacity import before_log, after_log

from requests.exceptions import HTTPError, Timeout
from tenacity import retry_any, retry, retry_if_exception_type, retry_if_result, stop_after_attempt
 
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(retry=retry_any(retry_if_exception_type((HTTPError, Timeout)), retry_if_result(lambda x: x is None)), stop=stop_after_attempt(3), before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_data():
    response = requests.get("http://localhost:5000/timeout")
    if response.status_code == 204:
        return None
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_data()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))

This approach covers all his bases, making the app even more resilient!

Step 7: Logging and Tracking Retries

As the app scales, Jafer wants to keep an eye on how often retries happen and why. He decides to add logging,

import logging
import requests
from tenacity import before_log, after_log
from tenacity import retry, stop_after_attempt, wait_fixed

 
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
 
@retry(stop=stop_after_attempt(2), wait=wait_fixed(2),
       before=before_log(logger, logging.INFO),
       after=after_log(logger, logging.INFO))
def fetch_data():
    response = requests.get("http://localhost:5000/timeout", timeout=2)
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    try:
        data = fetch_data()
        print("Data fetched successfully:", data)
    except Exception as e:
        print("Failed to fetch data:", str(e))

This logs messages before and after each retry attempt, giving Jafer full visibility into the retry process. Now, he can monitor the app's behavior in production and quickly spot any patterns or issues.

The Happy Ending

With Tenacity, Jafer has transformed his app into a resilient powerhouse that gracefully handles intermittent failures. Users are happy, the servers are humming along smoothly, and Jafer's team has more time to work on new features rather than firefighting network errors.

By mastering Tenacity, Jafer has learned that handling network failures gracefully can turn a fragile app into a robust and reliable one. Whether it's dealing with flaky APIs, network blips, or rate limits, Tenacity is his go-to tool for retrying operations in Python.

So, the next time your app faces unpredictable network challenges, remember Jafer's story and give Tenacity a try; you might just save the day!

Security Incident: Code Smells – Not Replaced Constants

11 August 2024 at 12:11

The Secure Boot Case Study

Attackers can break through the Secure Boot process on millions of computers using Intel and ARM processors due to a leaked cryptographic key that many manufacturers used during the startup process. This key, called the Platform Key (PK), is meant to verify the authenticity of a device's firmware and boot software.

Unfortunately, this key was leaked back in 2018. It seems that some manufacturers used this key in their devices instead of replacing it with a secure one, as was intended. As a result, millions of devices from brands like Lenovo, HP, Asus, and SuperMicro are vulnerable to attacks.

If an attacker has access to this leaked key, they can easily bypass Secure Boot, allowing them to install malicious software that can take control of the device. To fix this problem, manufacturers need to replace the compromised key and update the firmware on affected devices. Some have already started doing this, but it might take time for all devices to be updated, especially those in critical systems.

The problem is serious because the leaked key is like a master key that can unlock many devices. This issue highlights poor cryptographic key management practices, which have been a problem for many years.

What Are "Not Replaced Constants"?

In software, constants are values that are not meant to change during the execution of a program. They are often used to define configuration settings, cryptographic keys, and other critical values.

When these constants are hard-coded into a system and not updated or replaced when necessary, they become a code smell known as "Not Replaced Constants."

Why Are They a Problem?

When constants are not replaced or updated:

  1. Security Risks: Outdated or exposed constants, such as cryptographic keys, can become security vulnerabilities. If these constants are publicly leaked or discovered by attackers, they can be exploited to gain unauthorized access or control over a system.
  2. Maintainability Issues: Hard-coded constants can make a codebase less maintainable. Changes to these values require code modifications, which can be error-prone and time-consuming.
  3. Flexibility Limitations: Systems with hard-coded constants lack flexibility, making it difficult to adapt to new requirements or configurations without altering the source code.

The Secure Boot Case Study

The recent Secure Boot vulnerability is a perfect example of the dangers posed by "Not Replaced Constants." Here's a breakdown of what happened:

The Vulnerability

Researchers discovered that a cryptographic key used in the Secure Boot process of millions of devices was leaked publicly. This key, known as the Platform Key (PK), serves as the root of trust during the Secure Boot process, verifying the authenticity of a device's firmware and boot software.

What Went Wrong

The leaked PK was originally intended as a test key by American Megatrends International (AMI). However, it was not replaced by some manufacturers when producing devices for the market. As a result, the same compromised key was used across millions of devices, leaving them vulnerable to attacks.

The Consequences

Attackers with access to the leaked key can bypass Secure Boot protections, allowing them to install persistent malware and gain control over affected devices. This vulnerability highlights the critical importance of replacing test keys and securely managing cryptographic constants.

Sample Code:

Wrong

def generate_pk() -> str:
    return "DO NOT TRUST"

# Vendor forgets to replace PK
def use_default_pk() -> str:
    pk = generate_pk()
    return pk  # "DO NOT TRUST" PK used in production


Right

def generate_pk() -> str:
    # The documentation tells vendors to replace this value
    return "DO NOT TRUST"

def use_default_pk() -> str:
    pk = generate_pk()

    if pk == "DO NOT TRUST":
        raise ValueError("Error: PK must be replaced before use.")

    return pk  # Valid PK used in production

Ignoring important security steps, like changing default keys, can create big security holes. This ongoing problem shows how important it is to follow security procedures carefully. Instead of just relying on written instructions, make sure to test everything thoroughly to ensure it works as expected.

Build A Simple Alarm Clock

11 August 2024 at 11:39

Creating a simple alarm clock application can be a fun project to develop programming skills. Here are the steps, input ideas, and additional features you might consider when building your alarm clock.

Game Steps

  1. Define the Requirements:
    • Determine the basic functionality your alarm clock should have (e.g., set alarm, snooze, dismiss).
  2. Choose a Programming Language:
    • Select a language you are comfortable with, such as Python, JavaScript, or Java.
  3. Design the User Interface:
    • Decide if you want a graphical user interface (GUI) or a command-line interface (CLI).
  4. Implement Core Features:
    • Set Alarm: Allow users to set an alarm for a specific time.
    • Trigger Alarm: Play a sound or display a message when the alarm time is reached.
    • Snooze Functionality: Enable users to snooze the alarm for a set period.
    • Dismiss Alarm: Allow users to turn off the alarm once it's triggered.
  5. Test the Alarm Clock:
    • Ensure that all functions work as expected and fix any bugs.
  6. Refine and Enhance:
    • Improve the interface and add additional features based on user feedback.

Input Ideas

  • Set Alarm Time:
    • Input format: "HH:MM AM/PM" or 24-hour format "HH:MM".
  • Snooze Duration:
    • Allow users to input a snooze time in minutes.
  • Alarm Sound:
    • Let users choose from a list of available alarm sounds.
  • Repeat Alarm:
    • Options for repeating alarms (e.g., daily, weekdays, weekends).
  • Custom Alarm Message:
    • Input a custom message to display when the alarm goes off.

Additional Features

  • Multiple Alarms:
    • Allow users to set multiple alarms for different times and days.
  • Customizable Alarm Sounds:
    • Let users upload their own alarm sounds.
  • Volume Control:
    • Add an option to control the alarm sound volume.
  • Alarm Labels:
    • Enable users to label their alarms (e.g., "Wake Up," "Meeting Reminder").
  • Weather and Time Display:
    • Show current weather information and time on the main screen.
  • Recurring Alarms:
    • Allow users to set recurring alarms on specific days.
  • Dark Mode:
    • Implement a dark mode for the UI.
  • Integration with Calendars:
    • Sync alarms with calendar events or reminders.
  • Voice Control:
    • Add support for voice commands to set, snooze, or dismiss alarms.
  • Smart Alarm:
    • Implement a smart alarm feature that wakes the user at an optimal time based on their sleep cycle (e.g., using a sleep tracking app).
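
As a starting point, here is a minimal command-line sketch in Python that covers only the core loop (read an alarm time, wait for it, ring); snooze, sounds, and the other features above are left as extensions:

import time
from datetime import datetime

def simple_alarm(alarm_time: str, message: str = "Wake up!") -> None:
    """Wait until the given HH:MM (24-hour) time, then ring."""
    while True:
        if datetime.now().strftime("%H:%M") == alarm_time:
            print(f"\nALARM! {message}")
            break
        time.sleep(10)  # poll every 10 seconds

if __name__ == "__main__":
    simple_alarm(input("Set alarm (HH:MM, 24-hour): "))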

Implement a simple grocery list

11 August 2024 at 09:13

Implementing a simple grocery list management tool can be a fun and practical project. Here's a detailed approach including game steps, input ideas, and additional features:

Game Steps

  1. Introduction: Provide a brief introduction to the grocery list tool, explaining its purpose and how it can help manage shopping lists.
  2. Menu Options: Present a menu with options to add, view, update, delete items, and clear the entire list.
  3. User Interaction: Allow the user to select an option from the menu and perform the corresponding operation.
  4. Perform Operations: Implement functionality to add items, view the list, update quantities, delete items, or clear the list.
  5. Display Results: Show the updated grocery list and confirmation of any operations performed.
  6. Repeat or Exit: Allow the user to perform additional operations or exit the program.

Input Ideas

  1. Item Name: Allow the user to enter the name of the grocery item.
  2. Quantity: Prompt the user to specify the quantity of each item (optional).
  3. Operation Choice: Provide options to add, view, update, delete, or clear items from the list.
  4. Item Update: For updating, allow the user to specify the item and new quantity.
  5. Clear List Confirmation: Ask for confirmation before clearing the entire list.

Additional Features

  1. Persistent Storage: Save the grocery list to a file (e.g., JSON or CSV) and load it on program startup.
  2. GUI Interface: Create a graphical user interface using Tkinter or another library for a more user-friendly experience.
  3. Search Functionality: Implement a search feature to find items in the list quickly.
  4. Sort and Filter: Allow sorting the list by item name or quantity, and filtering by categories or availability.
  5. Notification System: Add notifications or reminders for items that are running low or need to be purchased.
  6. Multi-user Support: Implement features to manage multiple lists for different users or households.
  7. Export/Import: Allow users to export the grocery list to a file or import from a file.
  8. Item Categories: Organize items into categories (e.g., dairy, produce) for better management.
  9. Undo Feature: Implement an undo feature to revert the last operation.
  10. Statistics: Provide statistics on the number of items, total quantity, or other relevant data.
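
A minimal in-memory sketch of the core menu loop (add, view, delete, quit); persistent storage, categories, and the other extras above can be layered on top:

def grocery_list() -> None:
    items = {}  # item name -> quantity
    while True:
        choice = input("\n[a]dd, [v]iew, [d]elete, [q]uit: ").strip().lower()
        if choice == "a":
            name = input("Item name: ").strip()
            qty = int(input("Quantity: ") or 1)
            items[name] = items.get(name, 0) + qty
        elif choice == "v":
            for name, qty in items.items():
                print(f"- {name} x{qty}")
        elif choice == "d":
            items.pop(input("Item to delete: ").strip(), None)
        elif choice == "q":
            break

if __name__ == "__main__":
    grocery_list()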

Implement a simple key-value storage system – Python Project

11 August 2024 at 09:04

Implementing a simple key-value storage system is a great way to practice data handling and basic file operations in Python. Here's a detailed approach including game steps, input ideas, and additional features:

Game Steps

  1. Introduction: Provide an introduction explaining what a key-value storage system is and its uses.
  2. Menu Options: Present a menu with options to add, retrieve, update, and delete key-value pairs.
  3. User Interaction: Allow the user to interact with the system based on their choice from the menu.
  4. Perform Operations: Implement functionality to perform the chosen operations (add, retrieve, update, delete).
  5. Display Results: Show the results of the operations (e.g., value retrieved or confirmation of deletion).
  6. Repeat or Exit: Allow the user to perform additional operations or exit the program.

Input Ideas

  1. Key Input: Allow the user to enter a key for operations. Ensure that keys are unique for storage operations.
  2. Value Input: Prompt the user to enter a value associated with a key. Values can be strings or numbers.
  3. Operation Choice: Present options to add, retrieve, update, or delete key-value pairs.
  4. File Handling: Optionally, allow users to specify a file to save and load the key-value pairs.
  5. Validation: Ensure that keys and values are entered correctly and handle any errors (e.g., missing keys).

Additional Features

  1. Persistent Storage: Save key-value pairs to a file (e.g., JSON or CSV) and load them on program startup.
  2. Data Validation: Implement checks to validate the format of keys and values.
  3. GUI Interface: Create a graphical user interface using Tkinter or another library for a more user-friendly experience.
  4. Search Functionality: Add a feature to search for keys or values based on user input.
  5. Data Backup: Implement a backup system to periodically save the key-value pairs.
  6. Data Encryption: Encrypt the stored data for security purposes.
  7. Command-Line Arguments: Allow users to perform operations via command-line arguments.
  8. Multi-key Operations: Support operations on multiple keys at once (e.g., batch updates).
  9. Undo Feature: Implement an undo feature to revert the last operation.
  10. User Authentication: Add user authentication to secure access to the key-value storage system.

Implement a Pomodoro technique timer.

11 August 2024 at 08:57

Implementing a Pomodoro technique timer is a practical way to manage time effectively using a simple and proven productivity method. Hereโ€™s a detailed approach for creating a Pomodoro timer, including game steps, input ideas, and additional features.

Game Steps

  1. Introduction: Provide an introduction to the Pomodoro Technique, explaining that it involves working in 25-minute intervals (Pomodoros) followed by a short break, with longer breaks after several intervals.
  2. Start Timer: Allow the user to start the timer for a Pomodoro session.
  3. Timer Countdown: Display a countdown for the Pomodoro session and break periods.
  4. Notify Completion: Alert the user when the Pomodoro session or break is complete.
  5. Record Sessions: Track the number of Pomodoros completed and breaks taken.
  6. End Session: Allow the user to end the session or reset the timer if needed.
  7. Play Again Option: Offer the user the option to start a new session or stop the timer.

Input Ideas

  1. Session Duration: Allow users to set the duration for Pomodoro sessions and breaks. The default is 25 minutes for work and 5 minutes for short breaks, with a longer break (e.g., 15 minutes) after a set number of Pomodoros (e.g., 4).
  2. Custom Durations: Enable users to customize the duration of work sessions and breaks.
  3. Notification Preferences: Allow users to choose how they want to be notified (e.g., sound alert, visual alert, or popup message).
  4. Number of Pomodoros: Ask how many Pomodoro cycles the user wants to complete before taking a longer break.
  5. Reset and Stop Options: Provide options to reset the timer or stop it if needed.
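
A bare-bones terminal sketch of the timer loop, assuming plain text output (no sound or desktop notifications); the durations are constants you can change at the top:

    import time

    WORK_MINUTES = 25   # length of one Pomodoro
    SHORT_BREAK = 5     # break between Pomodoros
    LONG_BREAK = 15     # break after every LONG_EVERY Pomodoros
    LONG_EVERY = 4

    def countdown(minutes, label):
        # Count down in whole seconds, redrawing a single status line.
        for remaining in range(minutes * 60, 0, -1):
            mins, secs = divmod(remaining, 60)
            print(f"\r{label}: {mins:02d}:{secs:02d} remaining", end="", flush=True)
            time.sleep(1)
        print(f"\r{label} finished!" + " " * 20)

    completed = 0
    while True:
        countdown(WORK_MINUTES, "Pomodoro")
        completed += 1
        print(f"Completed Pomodoros: {completed}")
        if completed % LONG_EVERY == 0:
            countdown(LONG_BREAK, "Long break")
        else:
            countdown(SHORT_BREAK, "Short break")
        if input("Start another Pomodoro? (y/n): ").strip().lower() != "y":
            break

The countdown simply redraws one terminal line with a carriage return; the notification and GUI ideas below would hook in where each countdown finishes.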

Additional Features

  1. GUI Interface: Create a graphical user interface using Tkinter or another library for a more user-friendly experience.
  2. Notifications: Implement system notifications or sound alerts to notify the user when a Pomodoro or break is over.
  3. Progress Tracking: Track and display the number of completed Pomodoros and breaks, providing visual feedback on progress.
  4. Task Management: Allow users to input and track tasks they want to accomplish during each Pomodoro session.
  5. Statistics: Provide statistics on time spent working and taking breaks, possibly with visual charts or graphs.
  6. Customizable Alerts: Enable users to set custom alert sounds or messages for different stages (start, end of Pomodoro, end of break).
  7. Integration with Calendars: Integrate with calendar applications to schedule Pomodoro sessions and breaks automatically.
  8. Desktop Widgets: Create desktop widgets or applets that display the remaining time for the current session and next break.
  9. Focus Mode: Implement a focus mode that minimizes distractions by blocking certain apps or websites during Pomodoro sessions.
  10. Daily/Weekly Goals: Allow users to set and track daily or weekly productivity goals based on completed Pomodoros.

Caesar Cipher: Implement a basic encryption and decryption tool.

11 August 2024 at 08:48

Caesar Cipher: https://en.wikipedia.org/wiki/Caesar_cipher

Game Steps

  1. Introduction: Provide a brief introduction to the Caesar Cipher, explaining that itโ€™s a substitution cipher where each letter in the plaintext is shifted a fixed number of places down or up the alphabet.
  2. Choose Operation: Ask the user whether they want to encrypt or decrypt a message.
  3. Input Text: Prompt the user to enter the text they want to encrypt or decrypt.
  4. Input Shift Value: Request the shift value (key) for the cipher. Ensure the value is within a valid range (typically 1 to 25).
  5. Perform Operation: Apply the Caesar Cipher algorithm to the input text based on the userโ€™s choice of encryption or decryption.
  6. Display Result: Show the resulting encrypted or decrypted text to the user.
  7. Play Again Option: Ask the user if they want to perform another encryption or decryption with new inputs.

Input Ideas

  1. Text Input: Allow the user to input any string of text. Handle both uppercase and lowercase letters. Decide how to treat non-alphabetic characters (e.g., spaces, punctuation).
  2. Shift Value: Ask the user for an integer shift value. Ensure it is within a reasonable range (1 to 25). Handle cases where the shift value is negative or greater than 25 by normalizing it.
  3. Mode Selection: Provide options to select between encryption and decryption. For encryption, the shift will be added; for decryption, the shift will be subtracted.
  4. Case Sensitivity: Handle uppercase and lowercase letters differently or consistently based on user preference.
  5. Special Characters: Decide whether to include special characters and spaces in the encrypted/decrypted text. Define how these characters should be treated.
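
A small sketch of the core shifting logic, handling upper- and lower-case ASCII letters, normalizing any integer shift with modulo 26, and passing other characters through unchanged:

    def caesar(text, shift, decrypt=False):
        # Decryption is just encryption with the opposite shift.
        if decrypt:
            shift = -shift
        shift %= 26  # normalize negative or oversized shift values
        result = []
        for ch in text:
            if ch.isascii() and ch.isalpha():
                base = ord("A") if ch.isupper() else ord("a")
                result.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                result.append(ch)  # spaces, digits and punctuation pass through unchanged
        return "".join(result)

    if __name__ == "__main__":
        mode = input("Encrypt or decrypt? (e/d): ").strip().lower()
        text = input("Text: ")
        shift = int(input("Shift (1-25): "))
        print(caesar(text, shift, decrypt=(mode == "d")))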

Additional Features

  1. Input Validation: Implement checks to ensure the shift value is an integer and falls within the expected range. Validate that text input does not contain unsupported characters (if needed).
  2. Help/Instructions: Provide an option for users to view help or instructions on how to use the tool, explaining the Caesar Cipher and how to enter inputs.
  3. GUI Interface: Create a graphical user interface using Tkinter or another library to make the tool more accessible and user-friendly.
  4. File Operations: Allow users to read from and write to text files for encryption and decryption. This is useful for larger amounts of text.
  5. Brute Force Attack: Implement a brute force mode that tries all possible shifts for decryption and displays all possible plaintexts, useful for educational purposes or cracking simple ciphers (a short sketch follows this list).
  6. Custom Alphabet: Allow users to define a custom alphabet or set of characters for the cipher, making it more flexible and adaptable.
  7. Save and Load Settings: Implement functionality to save and load encryption/decryption settings, such as shift values or custom alphabets, for future use.
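
For the brute-force idea above, a short loop over all 25 possible shifts is enough; it reuses the caesar() function from the previous sketch:

    def brute_force(ciphertext):
        # Try every possible shift and print each candidate plaintext.
        for shift in range(1, 26):
            print(f"shift {shift:2d}: {caesar(ciphertext, shift, decrypt=True)}")

    brute_force("Wklv lv d vhfuhw phvvdjh")  # prints "This is a secret message" at shift 3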

Build a simple version of Hangman.

11 August 2024 at 07:37

Creating a simple version of Hangman is a fun way to practice programming and game logic.

Hereโ€™s a structured approach to building this game, including game steps, input ideas, and additional features to enhance it.

Game Steps (Workflow)

  1. Introduction:
    • Start with a welcome message explaining the rules of Hangman.
    • Provide brief instructions on how to play (guessing letters, how many guesses are allowed, etc.).
  2. Word Selection:
    • Choose a word for the player to guess. This can be randomly selected from a predefined list or from a file.
  3. Display State:
    • Show the current state of the word with guessed letters and placeholders for remaining letters.
    • Display the number of incorrect guesses left (hangman stages).
  4. User Input:
    • Prompt the player to guess a letter.
    • Check if the letter is in the word.
  5. Update Game State:
    • Update the display with the correct guesses.
    • Keep track of incorrect guesses and update the hangman drawing if applicable.
  6. Check for Win/Loss:
    • Determine if the player has guessed the word or used all allowed guesses.
    • Display a win or loss message based on the result.
  7. Replay Option:
    • Offer the player the option to play again or exit the game.

Input Ideas

  1. Guess Input:
    • Prompt the player to enter a single letter.
    • Validate that the input is a single alphabetic character.
  2. Replay Input:
    • After a game ends, ask the player if they want to play again (e.g., y for yes, n for no).
  3. Word List:
    • Provide a list of words to choose from, which can be hardcoded or read from a file.
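
A compact sketch of the main guessing loop, assuming a small hard-coded word list and six allowed wrong guesses; the hangman drawing and replay option are left as extensions:

    import random

    WORDS = ["python", "hangman", "program", "keyboard"]  # illustrative word list
    MAX_WRONG = 6

    word = random.choice(WORDS)
    guessed = set()
    wrong = 0

    while wrong < MAX_WRONG:
        # Show the word with underscores for letters not yet guessed.
        display = " ".join(ch if ch in guessed else "_" for ch in word)
        print(f"\n{display}   (wrong guesses left: {MAX_WRONG - wrong})")
        if all(ch in guessed for ch in word):
            print("You won!")
            break
        guess = input("Guess a letter: ").strip().lower()
        if len(guess) != 1 or not guess.isalpha():
            print("Please enter a single letter.")
            continue
        if guess in guessed:
            print("You already guessed that letter.")
            continue
        guessed.add(guess)
        if guess not in word:
            wrong += 1
            print("Wrong!")
    else:
        # The else branch runs only when the loop ends without a break (out of guesses).
        print(f"Out of guesses. The word was '{word}'.")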

Additional Features

  1. Difficulty Levels:
    • Implement difficulty levels by varying word length or allowing more or fewer incorrect guesses.
  2. Hangman Drawing:
    • Add a visual representation of the hangman that updates with each incorrect guess.
  3. Hints:
    • Offer hints if the player is struggling (e.g., reveal a letter or provide a clue).
  4. Word Categories:
    • Categorize words into themes (e.g., animals, movies) and allow players to choose a category.
  5. Score Tracking:
    • Keep track of the playerโ€™s score across multiple games and display statistics.
  6. Save and Load:
    • Allow players to save their progress and load a game later.
  7. Custom Words:
    • Allow players to input their own words for the game.
  8. Leaderboard:
    • Create a leaderboard to track high scores and player achievements.

Create a command-line to-do list application.

11 August 2024 at 07:24

Creating a command-line to-do list application is a fantastic way to practice Python programming and work with basic data management. Hereโ€™s a structured approach to building this application, including game steps, input ideas, and additional features:

Game Steps (Workflow)

  1. Introduction:
    • Start with a welcome message and brief instructions on how to use the application.
    • Explain the available commands and how to perform actions like adding, removing, and viewing tasks.
  2. Main Menu:
    • Present a main menu with options for different actions:
      • Add a task
      • View all tasks
      • Mark a task as complete
      • Remove a task
      • Exit the application
  3. Task Management:
    • Implement functionality to add, view, update, and remove tasks.
    • Store tasks with details such as title, description, and completion status.
  4. Data Persistence:
    • Save tasks to a file or database so that they persist between sessions.
    • Load tasks from the file/database when the application starts.
  5. User Interaction:
    • Use input prompts to interact with the user and execute their commands.
    • Provide feedback and confirmation messages for actions taken.
  6. Exit and Save:
    • Save the current state of tasks when the user exits the application.
    • Confirm that tasks are saved and provide an exit message.

Input Ideas

  1. Command Input:
    • Use text commands to navigate the menu and perform actions (e.g., add, view, complete, remove, exit).
  2. Task Details:
    • For adding tasks, prompt the user for details like title and description.
    • Use input fields for the task details:
      • Title: Enter task title:
      • Description: Enter task description:
  3. Task Identification:
    • Use a unique identifier (like a number) or task title to reference tasks for actions such as marking complete or removing.
  4. Confirmation:
    • Prompt the user to confirm actions such as removing a task or marking it as complete.
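
A minimal sketch of the command loop with JSON persistence, assuming a tasks.json file in the working directory and numeric task identifiers; the extra features listed below are not included:

    import json
    import os

    TASKS_FILE = "tasks.json"  # illustrative file name for persistence

    def load_tasks():
        if os.path.exists(TASKS_FILE):
            with open(TASKS_FILE, "r", encoding="utf-8") as f:
                return json.load(f)
        return []

    def save_tasks(tasks):
        with open(TASKS_FILE, "w", encoding="utf-8") as f:
            json.dump(tasks, f, indent=2)

    tasks = load_tasks()
    while True:
        command = input("\nadd / view / complete / remove / exit: ").strip().lower()
        if command == "exit":
            break
        if command == "add":
            tasks.append({
                "title": input("Enter task title: "),
                "description": input("Enter task description: "),
                "done": False,
            })
        elif command == "view":
            for number, task in enumerate(tasks, start=1):
                status = "x" if task["done"] else " "
                print(f"{number}. [{status}] {task['title']} - {task['description']}")
        elif command in ("complete", "remove"):
            number_text = input("Task number: ").strip()
            if number_text.isdigit() and 1 <= int(number_text) <= len(tasks):
                index = int(number_text) - 1
                if command == "complete":
                    tasks[index]["done"] = True
                else:
                    tasks.pop(index)
            else:
                print("No such task.")
        else:
            print("Unknown command.")
            continue
        save_tasks(tasks)  # persist after every handled command

    save_tasks(tasks)
    print("Tasks saved. Goodbye!")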

Additional Features

  1. Task Prioritization:
    • Allow users to set priorities (e.g., low, medium, high) for tasks.
    • Implement sorting or filtering by priority.
  2. Due Dates:
    • Add due dates to tasks and provide options to view tasks by date or sort by due date.
  3. Search and Filter:
    • Implement search functionality to find tasks by title or description.
    • Add filters to view tasks by status (e.g., completed, pending) or priority.
  4. Task Categories:
    • Allow users to categorize tasks into different groups or projects.
  5. Export and Import:
    • Provide options to export tasks to a file (e.g., CSV or JSON) and import tasks from a file.
  6. User Authentication:
    • Add user authentication if multiple users need to manage their own tasks.
  7. Reminders and Notifications:
    • Implement reminders or notifications for tasks with upcoming due dates.
  8. Statistics:
    • Show statistics such as the number of completed tasks, pending tasks, or tasks by priority.

Working with Tamil Content in Computing Environments (27 July 2024)

27 July 2024 at 04:33

https://tamil.digital.utsc.utoronto.ca/working-with-tamil-content-in-computing-environments-27-july-2024

Unicode is an international standard extensively adopted across the industry and the Internet to represent Tamil and other languages. Yet, we still face several legacy issues and ongoing challenges.

Content from government documents cannot be easily extracted. Converting documents from one font to another presents problems due to inconsistencies. There exist various, slightly different standards for the phonetic transcription of Tamil into Latin scripts. There are varied keyboard layouts and input styles for desktop and mobile.

Researchers, developers and practitioners continue to develop solutions to overcome these challenges. The presentations and discussions will identify needs, issues and solutions for working with Tamil content in varied computing environments.

Please fill out this anonymous survey related to using Tamil on computers and smartphones.

Presentation Topics

  • Introduction to Unicode โ€“ Elango
  • Using Tamil Keyboards on Computer and Mobile Platforms โ€“ Suganthan
  • Androidโ€™s New Faster and More Intuitive Method to Type Tamil โ€“ Elango
  • Working with Tamil Content in PDFs โ€“ Shrinivasan
  • Tamil Font Styles โ€“ Uthayan
  • Challenges in Automatic Tamil Font Conversions โ€“ Parathan
  • Transliteration Approaches for Library Metadata Generation โ€“ Natkeeran

Date

July 27, 2024 (Saturday) โ€“ Virtual Presentations and Discussion

9:30 am โ€“ 11:30 am (Toronto time)
7 pm โ€“ 9 pm (Chennai/Jaffna time)

Zoom
https://utoronto.zoom.us/j/87507821579

Contributors

  • UTSC Library Digital Tamil Studies
  • Kaniyam Foundation
  • Tamil Kanimai Maiyam (เฎคเฎ•เฎฎเฏˆ)
  • South Asian Canadian Digital Archive (SACDA)
