
Introduction to AWS

By: Ragul.M
20 November 2024 at 16:13

Hi folks, welcome to my blog. Here we are going to see an introduction to AWS.

Amazon Web Services (AWS) is the world’s leading cloud computing platform, offering a wide range of services to help businesses scale and innovate. Whether you're building an application, hosting a website, or storing data, AWS provides reliable and cost-effective solutions for individuals and organizations of all sizes.

What is AWS?
AWS is a comprehensive cloud computing platform provided by Amazon. It offers on-demand resources such as compute power, storage, networking, and databases on a pay-as-you-go basis. This eliminates the need for businesses to invest in and maintain physical servers.

Core Benefits of AWS

  1. Scalability: AWS allows you to scale your resources up or down based on your needs.
  2. Cost-Effective: With its pay-as-you-go pricing, you only pay for what you use.
  3. Global Availability: AWS has data centers worldwide, ensuring low latency and high availability.
  4. Security: AWS follows a shared responsibility model, offering top-notch security features like encryption and access control.
  5. Flexibility: Supports multiple programming languages, operating systems, and architectures.

Key AWS Services
Here are some of the most widely used AWS services (a short code sketch follows the list):

  1. Compute:
    • Amazon EC2: Virtual servers to run your applications.
    • AWS Lambda: Serverless computing to run code without managing servers.
  2. Storage:
    • Amazon S3: Object storage for data backup and distribution.
    • Amazon EBS: Block storage for EC2 instances.
  3. Database:
    • Amazon RDS: Managed relational databases like MySQL, PostgreSQL, and Oracle.
    • Amazon DynamoDB: NoSQL database for high-performance applications.
  4. Networking:
    • Amazon VPC: Create isolated networks in the cloud.
    • Amazon Route 53: Domain name system (DNS) and traffic management.
  5. AI/ML:
    • Amazon SageMaker: Build, train, and deploy machine learning models.
  6. DevOps Tools:
    • AWS CodePipeline: Automates the release process.
    • Amazon EKS: Managed Kubernetes service.
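
Once the basics are clear, the AWS SDK makes these services scriptable. Below is a minimal sketch using Python and the boto3 library (it assumes boto3 is installed and AWS credentials are already configured, e.g. via aws configure); it simply lists S3 buckets and running EC2 instances.

import boto3

# List the S3 buckets in the account
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print("Bucket:", bucket["Name"])

# List running EC2 instances in the default region
ec2 = boto3.client("ec2")
response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print("Instance:", instance["InstanceId"], instance["InstanceType"])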

Conclusion
AWS has revolutionized the way businesses leverage technology by providing scalable, secure, and flexible cloud solutions. Whether you're a developer, an enterprise, or an enthusiast, understanding AWS basics is the first step toward mastering the cloud. Start your AWS journey today and unlock endless possibilities!

Follow for more and happy learning :)

SQL Loader

20 November 2024 at 06:23
  • It's nothing but a "Bulk Loader Utility".
  • With this utility we can load data into a table in bulk.
  • The key word is LOAD.
  • Then the question comes to mind: what is the difference between load and insert? Insert happens row by row; load happens in one go.
  • What data? Which table? Loading script? Execute --> these are the four things you need to keep in mind.


  • Flat files --> csv ( comma separated values ) , txt , dat , excel , etc.
  • Use a plain-text editor such as Notepad to prepare the data file.
select employee_id || ',' || first_name || ',' || salary from employees_table where rownum <= 10 ; --> this fetches only 10 rows.
  • Save this output in a folder as a .csv file.
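  • For illustration, the saved CSV would contain lines like the following (the values are made up):

100,Steven,24000
101,Neena,17000
102,Lex,17000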


select employee_id || ',' || first_name || ',' || salary from employees_table where employee_id between 150 and 170 ; --> this fetches the rows in that range.
  • Save this output as a .txt file.

Now coming to table creation

create table sample(id number , name varchar2(25) , salary number);

Now coming to creation of script

  • The loading script and the control file are the same thing.

load data infile 'path_of_the_file.csv'
infile 'path_of_the_file.txt'
insert into table sample
fields terminated by ','
(id,name,salary)

  • Create the script in Notepad, choose "All Files" as the file type, and save it with a .ctl extension.

Now coming to Execute

sqlldr hr_schema_name/password control='file_location_of_control_file_or_execution_file' direct=true

  • Why direct=true? The load is very fast because it bypasses constraints and triggers.
  • With direct=false, constraints and triggers are checked before the rows are loaded.

  • In short: prepare the data file, create the table, write the control file, then execute sqlldr.

Excluding one column

  • If some column should not be loaded, use FILLER.

load data infile 'path_of_the_file.csv'
infile 'path_of_the_file.txt'
insert into table sample
fields terminated by ','
(id,name,salary filler)

load data infile 'path_of_the_file.csv'
infile 'path_of_the_file.txt'
insert into table sample
fields terminated by ','
(id,name filler,salary)

  • In the first example the salary column stays empty; in the second, the name column stays empty. A FILLER field is read from the file but not loaded into the table.

Condition

  • WHEN --> the loaded data must satisfy the condition you give. Rows that fail the condition are stored in the DISCARD FILE.
  • If there is an Oracle error for a row, that row is captured in the BAD FILE.


  • The WHEN condition goes here (in place of the ?); a concrete example follows the script.

load data infile 'path_of_the_file.csv'
infile 'path_of_the_file.txt'
insert into table sample when ?
fields terminated by ','
(id,name filler,salary)
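
  • For example, a sketch of a concrete condition (the value '104' is only an assumption for illustration) that loads every record except the one whose id is 104; records failing the WHEN test go to the discard file:

load data infile 'path_of_the_file.csv'
infile 'path_of_the_file.txt'
insert into table sample when id != '104'
fields terminated by ','
(id,name,salary)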

How to get the process summary ?

  • The summary is stored in the log file.
  • You can set all the files in the command itself, like below.

sqlldr hr_schema_name/password control='file_location_of_control_file_or_execution_file' log=summary.log bad=sample.bad discard=sample.dsc direct=true

  • If you give a file name for any of these, the file is generated automatically during the run.
  • The important takeaway: the log, bad, and discard files capture the run summary, the rejected rows, and the discarded rows respectively.

skip

  • If you want to skip rows (for example, a header row) while loading, you can specify that in the command itself.

sqlldr hr_schema_name/password control='file_location_of_control_file_or_execution_file' skip=2 direct=true

  • The first 2 rows of the data file will be skipped.

Notes

  • The short keyword for SQL*Loader is sqlldr.
  • insert into table sample --> this works only when the table is EMPTY. If you execute it again on a non-empty table, it throws an error saying the table must be empty for the INSERT option.

So instead you can use APPEND:

load data infile 'path_of_the_file.csv'
infile 'path_of_the_file.txt'
append into table sample
fields terminated by ','
(id,name,salary)

  • You can also use TRUNCATE (it deletes the old data and then loads the new data):

load data infile 'path_of_the_file.csv'
infile 'path_of_the_file.txt'
truncate into table sample
fields terminated by ','
(id,name,salary)

Task

  1. If a particular file uses (#) as the separator instead of (,), how do you load it?
  2. How do you load an Excel file?

Reading#Eat the Frog – Ch:1

19 November 2024 at 11:51

I have always had a challenge with reading, whether it is technical documentation, general documentation or anything else. If I remember correctly, the last time I read something continuously was in my school and college days, and even then nothing extraordinary, just weekly magazines like Anandha Vikatan/Kumudham and, very rarely, newspapers. That improved a little when I started working and regularly read the news headlines from “The Hindu”. That’s all the reading I have done in my entire life. I have this habit of purchasing books and thinking… one day… that One Day will come, I will become a Pro Reader and I will read all the books. But that has not happened to date.

So I was pouring out all this frustration in the “#Kaniyam” IRC chat, along with some more concerns, like the trouble I have planning things. I used to start with one task, and if I came across something else I would leave whatever I was doing and start on the new item, and it went on and on. Then Srini from the Kaniyam IRC group suggested various ideas to try, and one such idea was reading this book called “Eat the Frog”.

I wouldn’t say the book has changed me completely overnight, but the practice of reading a few pages continuously gives a sense of satisfaction. I am not saying I read 20-30 pages continuously; instead I planned to complete a chapter whenever I started.

The book as such has got things we often hear or see elsewhere, but more importantly it is structured. When I say it is structured, it starts by explaining why the author named the book “Eat the Frog”.

Imagine that in our daily life eating a frog is one of our primary tasks. How would one plan for it? Eating a frog is not easy, and if you have more than one frog, how would you plan for that? Here the author compares the frogs to the tasks we have in a day. Not all tasks are as difficult as eating a frog. So if we have frogs of different sizes and the goal is to finish eating them all in a day, how would one approach it? Target the biggest one first, then the next, and so on. By the time you finish the biggest, you gain the confidence to go for the next smaller frog.

This analogy works the same way for our daily tasks. Rather than picking the easy ones and saving the bulk of the harder tasks for later, plan to finish the hardest or most difficult task first, which helps us move on to the next difficult task with a lot more confidence.

This is primarily what Chapter 1 discusses. After reading it I wanted to see if the approach works. I started implementing it immediately by listing the items I wanted to complete that day, and I sorted those items based on difficulty (in terms of time). I did not create an exhaustive list, just 4 tasks for that day, of which 2 were time-consuming or difficult tasks.

By the end of the day I was able to complete the top 2, leaving the remaining 2. I still felt happy because I completed the top 2, which were harder. I moved the pending 2 to the next day and kept their priority at the top.

So far it is working and I will continue to write about the other chapters as I complete reading them.

“Let us all get into the habit of reading and celebrate… happy reading.”

Locust ep 4: Why on_start and on_stop are Essential for Locust Users

19 November 2024 at 04:30

Locust provides two special methods, on_start and on_stop, to handle setup and teardown actions for individual users. These methods allow you to execute specific code when a simulated user starts or stops, making it easier to simulate real-world scenarios like login/logout or initialization tasks.

In this blog, we’ll cover,

  1. What on_start and on_stop do.
  2. Why they are important.
  3. Practical examples of using these methods.
  4. Running and testing Locust scripts.

What Are on_start and on_stop?

  • on_start: This method is executed once when a new simulated user starts. It’s commonly used for tasks like logging in or setting up the environment.
  • on_stop: This method is executed once when a simulated user stops. It’s often used for cleanup tasks like logging out.

These methods are executed only once per user during the lifecycle of a test, as opposed to tasks that are run repeatedly.

Why Use on_start and on_stop?

  1. Simulating Real User Behavior: Real users often start a session with an action (e.g., login) and end it with another (e.g., logout).
  2. Initial Setup: Some tasks require initializing data or setting up user state before performing other actions.
  3. Cleanup: Ensure that actions like logout are performed to leave the system in a clean state.

Examples

Basic Usage of on_start and on_stop

In this example, we simply print messages from on_start and on_stop for each user while a task runs.


from locust import User, task, between
from datetime import datetime


class MyUser(User):

    wait_time = between(1, 5)

    def on_start(self):
        print("on start")

    def on_stop(self):
        print("on stop")

    @task
    def print_datetime(self):
        print(datetime.now())
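
A slightly more realistic sketch uses on_start to log in and on_stop to log out with HttpUser (the /login, /logout and /profile endpoints and the credentials are assumptions, not part of the original example; supply the target host in the web UI or with --host when running it):

from locust import HttpUser, task, between


class AuthenticatedUser(HttpUser):

    wait_time = between(1, 5)

    def on_start(self):
        # Runs once per simulated user: log in and keep the session cookie
        self.client.post("/login", json={"username": "test", "password": "secret"})

    def on_stop(self):
        # Runs once when the user stops: log out to leave the system in a clean state
        self.client.post("/logout")

    @task
    def view_profile(self):
        self.client.get("/profile")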

Collections Tasks

By: Sugirtha
19 November 2024 at 01:55

TASKS:

  • // 1. Reverse an ArrayList without using inbuilt method
  • // 2. Find Duplicate Elements in a List
  • // 3. Alphabetical Order and Ascending Order (Done in ArrayList)
  • // 4. Merge Two Lists and Remove Duplicates
  • // 5. Removing Even Nos from the List
  • // 6. Array to List, List to Array
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;


public class CollectionsInJava {
	public static void main(String[] args) {
		// 1. Reverse an ArrayList without using inbuilt method
		// 2. Find Duplicate Elements in a List 
		// 3. Alphabetical Order and Ascending Order (Done in ArrayList)
		// 4. Merge Two Lists and Remove Duplicates
		// 5. Removing Even Nos from the List
		// 6. Array to List, List to Array
		ArrayList<String> names = new ArrayList<>(Arrays.asList("Abinaya", "Ramya", "Gowri", "Swetha",  "Sugi", "Anusuya", "Moogambigai","Jasima","Aysha"));
		ArrayList<Integer> al2 = new ArrayList<>(Arrays.asList(100,90,30,20,60,40));
		
		ArrayList<Integer> al =  insertValuesIntoAL();
		System.out.println("Before Reversing ArrayList="+ al);
		System.out.println("Reversed ArrayList="+ reverseArrayList(al));
		
		System.out.println("Duplicates in ArrayList="+findDuplicates(al));
		
		System.out.println("Before Order = "+names);
		Collections.sort(names);
		System.out.println("After Alphabetical Order = " + names);
		Collections.sort(al);
		System.out.println("Ascending Order = "+ al);
		
		System.out.println("List -1 = "+al);
		System.out.println("List -2 = "+al2);
		System.out.println("After Merging and Removing Duplicates="+mergeTwoLists(al,al2));
		System.out.println("After Removing Even Nos fromt the List-1 = "+removeEvenNos(al));
		
		arrayToListViceVersa(al,new int[] {11,12,13,14,15}); //Sending ArrayList and anonymous array
	}
	
	// 1. Reverse an ArrayList without using inbuilt method
	private static ArrayList<Integer> reverseArrayList(ArrayList<Integer> al) {
		int n=al.size();
		int j=n-1, mid=n/2;
		for (int i=0; i<mid; i++) {
			int temp = al.get(i);
			al.set(i, al.get(j));
			al.set(j--, temp);
		}
		return al;
	}
	
	// 2. Find Duplicate Elements in a List 
	private static ArrayList<Integer> findDuplicates(ArrayList<Integer> al) {
		HashSet<Integer> hs = new HashSet<>();
		ArrayList<Integer> arl = new ArrayList<>();
		for (int ele:al) {
			if (!hs.add(ele)) arl.add(ele);
		}
		return arl;
	}
	
	//4. Merge Two Lists into one and Remove Duplicates
	private static HashSet<Integer> mergeTwoLists(ArrayList<Integer> arl1,ArrayList<Integer> arl2) {
		HashSet<Integer> hs = new HashSet<>();
		hs.addAll(arl1);
		hs.addAll(arl2);
		return hs;
	}
	
	// 5. Removing Even Nos from the List
	private static ArrayList<Integer> removeEvenNos(ArrayList<Integer> al) {
		ArrayList<Integer> res = new ArrayList<>();
		Iterator<Integer> itr = al.iterator();
		while (itr.hasNext()) {
			int ele = itr.next();
			if (ele % 2 == 1) res.add(ele);   // keep the odd numbers, i.e. drop the even ones
		}
		return res;
	}
	
	// 6. Array to List, List to Array
	private static void arrayToListViceVersa(ArrayList<Integer> arl, int[] ar) {
		Integer arr[] = arl.toArray(new Integer[0]);
		System.out.println("Convert List to Array = " + Arrays.toString(arr));
		List<Integer> lst = new ArrayList<>();
		for (int ele : ar) lst.add(ele);   // convert the passed int[] into a List
		System.out.println("Convert Array to List = " + lst);
	}
	
	private static ArrayList<Integer> insertValuesIntoAL() {
		Integer[] ar = {30, 40, 60, 10, 94, 23, 5, 46, 40, 94};
		ArrayList<Integer> arl = new ArrayList<>();
		Collections.addAll(arl, ar);
		//Collections.reverse(al);   //IN BUILT METHOD
		return arl;
			//Arrays.sort(ar);  
		//List lst = Arrays.asList(ar);    //TBD
		//return new ArrayList<Integer>(lst);
		
	}

}


Locust EP 3: Simulating Multiple User Types in Locust

18 November 2024 at 04:30

Locust allows you to define multiple user types in your load tests, enabling you to simulate different user behaviors and traffic patterns. This is particularly useful when your application serves diverse client types, such as web and mobile users, each with unique interaction patterns.

In this blog, we will

  1. Discuss the concept of multiple user types in Locust.
  2. Explore how to implement multiple user classes with weights.
  3. Run and analyze the test results.

Why Use Multiple User Types?

In real-world applications, different user groups interact with your system differently. For example,

  • Web Users might spend more time browsing through the UI.
  • Mobile Users could make faster but more frequent requests.

By simulating distinct user types with varying behaviors, you can identify performance bottlenecks across all client groups.

Understanding User Classes and Weights

Locust provides the ability to define user classes by extending the User or HttpUser base class. Each user class can,

  • Have a unique set of tasks.
  • Define its own wait times.
  • Be assigned a weight, which determines the proportion of that user type in the simulation.

For example, if WebUser has a weight of 1 and MobileUser has a weight of 2, the simulation will spawn 1 web user for every 2 mobile users.

Example: Simulating Web and Mobile Users

Below is an example Locust test with two user types


from locust import User, task, between

# Define a user class for web users
class MyWebUser(User):
    wait_time = between(1, 3)  # Web users wait between 1 and 3 seconds between tasks
    weight = 1  # Web users are less frequent

    @task
    def login_url(self):
        print("I am logging in as a Web User")


# Define a user class for mobile users
class MyMobileUser(User):
    wait_time = between(1, 3)  # Mobile users wait between 1 and 3 seconds
    weight = 2  # Mobile users are more frequent

    @task
    def login_url(self):
        print("I am logging in as a Mobile User")

How Locust Uses Weights

With the above configuration

  • For every 3 users spawned, 1 will be a Web User, and 2 will be Mobile Users (based on their weights: 1 and 2).

Locust automatically handles spawning these users in the specified ratio.

Running the Locust Test

  1. Save the Code
    Save the above code in a file named locustfile.py.
  2. Start Locust
    Open your terminal and run `locust -f locustfile.py`
  3. Access the Web UI
    Open http://localhost:8089 in your browser.
  4. Enter Test Parameters
    • Number of users (e.g., 30).
    • Spawn rate (e.g., 5 users per second).
    • Host: If you are testing an actual API or website, specify its URL (e.g., http://localhost:8000).
  5. Analyze Results
    • Observe how Locust spawns the users according to their weights and tracks metrics like request counts and response times.

After running the test:

  • Check the distribution of requests to ensure it matches the weight ratio (e.g., for every 1 web user request, there should be ~2 mobile user requests).
  • Use the metrics (response time, failure rate) to evaluate performance for each user type.
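
If you prefer to skip the web UI, a quick headless run works as well (the user count, spawn rate, and duration below are only example values):

locust -f locustfile.py --headless -u 30 -r 5 --run-time 1m

The console summary should show tasks from MyMobileUser running roughly twice as often as tasks from MyWebUser, matching the 1:2 weights.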

Introduction to PostgreSQL database – free online course in Tamil

18 November 2024 at 02:26

Introduction to PostgreSQL database – free online course in Tamil

Monday, Wednesday, Friday, IST evenings.

First class – 18-Nov-2024 7-8 PM IST

Syllabus: https://parottasalna.com/postgres-database-syllabus/

Trainer – Syed Jafer – contact.syedjafer@gmail.com

Get the meeting link here

Telegram Group – https://t.me/parottasalna
Whatsapp channel- https://whatsapp.com/channel/0029Vavu8mF2v1IpaPd9np0s Kaniyam Tech events Calendar – https://kaniyam.com/events/

Locust EP 2: Understanding Locust Wait Times with Complete Examples

17 November 2024 at 07:43

Locust is an excellent load testing tool, enabling developers to simulate concurrent user traffic on their applications. One of its powerful features is wait times, which simulate the realistic user think time between consecutive tasks. By customizing wait times, you can emulate user behavior more effectively, making your tests reflect actual usage patterns.

In this blog, we’ll cover,

  1. What wait times are in Locust.
  2. Built-in wait time options.
  3. Creating custom wait times.
  4. A full example with instructions to run the test.

What Are Wait Times in Locust?

In real-world scenarios, users don’t interact with applications continuously. After performing an action (e.g., submitting a form), they often pause before the next action. This pause is called a wait time in Locust, and it plays a crucial role in mimicking real-life user behavior.

Locust provides several ways to define these wait times within your test scenarios.
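
As a quick preview, these are the built-in helpers used in the examples that follow, shown here as the class attribute you would set inside a User/HttpUser class (constant_pacing is an extra built-in not demonstrated further below):

from locust import constant, between, constant_pacing

wait_time = constant(2)          # always wait 2 seconds between tasks
wait_time = between(1, 5)        # uniform random wait between 1 and 5 seconds
wait_time = constant_pacing(10)  # aim to start a task every 10 seconds, regardless of task duration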

FastAPI App Overview

Here’s the FastAPI app that we’ll test,


from fastapi import FastAPI

# Create a FastAPI app instance
app = FastAPI()

# Define a route with a GET method
@app.get("/")
def read_root():
    return {"message": "Welcome to FastAPI!"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: str = None):
    return {"item_id": item_id, "q": q}

Locust Examples for FastAPI

1. Constant Wait Time Example

Here, we’ll simulate constant pauses between user requests


from locust import HttpUser, task, constant

class FastAPIUser(HttpUser):
    wait_time = constant(2)  # Wait for 2 seconds between requests

    @task
    def get_root(self):
        self.client.get("/")  # Simulates a GET request to the root endpoint

    @task
    def get_item(self):
        self.client.get("/items/42?q=test")  # Simulates a GET request with path and query parameters

2. Between wait time Example

Simulating random pauses between requests.


from locust import HttpUser, task, between

class FastAPIUser(HttpUser):
    wait_time = between(1, 5)  # Random wait time between 1 and 5 seconds

    @task(3)  # Weighted task: this runs 3 times more often
    def get_root(self):
        self.client.get("/")

    @task(1)
    def get_item(self):
        self.client.get("/items/10?q=locust")

3. Custom Wait Time Example

Using a custom wait time function to introduce more complex user behavior


import random
from locust import HttpUser, task

def custom_wait():
    return max(1, random.normalvariate(3, 1))  # Normal distribution (mean: 3s, stddev: 1s)

class FastAPIUser(HttpUser):
    wait_time = custom_wait

    @task
    def get_root(self):
        self.client.get("/")

    @task
    def get_item(self):
        self.client.get("/items/99?q=custom")


Full Test Example

Combining all the above elements, here’s a complete Locust test for your FastAPI app.


from locust import HttpUser, task, between
import random

# Custom wait time function
def custom_wait():
    return max(1, random.uniform(1, 3))  # Random wait time between 1 and 3 seconds

class FastAPIUser(HttpUser):
    wait_time = custom_wait  # Use the custom wait time

    @task(3)
    def browse_homepage(self):
        """Simulates browsing the root endpoint."""
        self.client.get("/")

    @task(1)
    def browse_item(self):
        """Simulates fetching an item with ID and query parameter."""
        item_id = random.randint(1, 100)
        self.client.get(f"/items/{item_id}?q=test")

Running Locust for FastAPI

  1. Run Your FastAPI App
    Save the FastAPI app code in a file (e.g., main.py) and start the server

uvicorn main:app --reload

By default, the app will run on http://127.0.0.1:8000.

2. Run Locust
Save the Locust file as locustfile.py and start Locust.


locust -f locustfile.py

3. Configure Locust
Open http://localhost:8089 in your browser and enter:

  • Host: http://127.0.0.1:8000
  • Number of users and spawn rate based on your testing requirements.

4. Run in Headless Mode (Optional)
Use the following command to run Locust in headless mode


locust -f locustfile.py --headless -u 50 -r 10 --host http://127.0.0.1:8000

-u 50: Simulate 50 users.

-r 10: Spawn 10 users per second.

Postgres – Write-Ahead Logging (WAL) in PostgreSQL

16 November 2024 at 07:06

Write-Ahead Logging (WAL) is a fundamental feature of PostgreSQL, ensuring data integrity and facilitating critical functionalities like crash recovery, replication, and backup.

This series of experiments explores WAL in detail: its importance, how it works, and examples that demonstrate its usage.

What is Write-Ahead Logging (WAL)?

WAL is a logging mechanism where changes to the database are first written to a log file before being applied to the actual data files. This ensures that in case of a crash or unexpected failure, the database can recover and replay these logs to restore its state.
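
To peek at WAL from a psql prompt, here is a minimal sketch (the functions below exist in PostgreSQL 10 and newer; older releases use the pg_xlog* names):

SHOW wal_level;                                 -- 'replica' by default; 'logical' enables logical decoding
SELECT pg_current_wal_lsn();                    -- current write position in the WAL
SELECT pg_walfile_name(pg_current_wal_lsn());   -- WAL segment file that holds that position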

A fair question: why do we need WAL at all when we already take periodic backups?

Write-Ahead Logging (WAL) is critical even when periodic backups are in place because it complements backups to provide data consistency, durability, and flexibility in the following scenarios.

1. Crash Recovery

  • Why It’s Important: Periodic backups only capture the database state at specific intervals. If a crash occurs after the latest backup, all changes made since that backup would be lost.
  • Role of WAL: WAL ensures that any committed transactions not yet written to data files (due to PostgreSQL’s lazy-writing behavior) are recoverable. During recovery, PostgreSQL replays the WAL logs to restore the database to its last consistent state, bridging the gap between the last checkpoint and the crash.

Example:

  • Backup Taken: At 12:00 PM.
  • Crash Occurs: At 1:30 PM.
  • Without WAL: All changes after 12:00 PM are lost.
  • With WAL: All changes up to 1:30 PM are recovered.

2. Point-in-Time Recovery (PITR)

  • Why It’s Important: Periodic backups restore the database to the exact time of the backup. However, this may not be sufficient if you need to recover to a specific point, such as just before a mistake (e.g., accidental data deletion).
  • Role of WAL: WAL records every change, enabling you to replay transactions up to a specific time. This allows fine-grained recovery beyond what periodic backups can provide.

Example:

  • Backup Taken: At 12:00 AM.
  • Mistake Made: At 9:45 AM, an important table is accidentally dropped.
  • Without WAL: Restore only to 12:00 AM, losing 9 hours and 45 minutes of data.
  • With WAL: Restore to 9:44 AM, recovering all valid changes except the accidental drop.
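
As a sketch of how this looks in practice (the archive path and timestamp are assumptions; a base backup and a WAL archive are also required), the recovery settings for PostgreSQL 12+ go in postgresql.conf, together with an empty recovery.signal file in the data directory:

# postgresql.conf (PostgreSQL 12+; older releases use recovery.conf)
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2024-11-16 09:44:00'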

3. Replication and High Availability

  • Why It’s Important: In a high-availability setup, replicas must stay synchronized with the primary database to handle failovers. Periodic backups cannot provide real-time synchronization.
  • Role of WAL: WAL enables streaming replication by transmitting logs to replicas, ensuring near real-time synchronization.

Example:

  • A primary database sends WAL logs to replicas as changes occur. If the primary fails, a replica can quickly take over without data loss.

4. Handling Incremental Changes

  • Why It’s Important: Periodic backups store complete snapshots of the database, which can be time-consuming and resource-intensive. They also do not capture intermediate changes.
  • Role of WAL: WAL allows incremental updates by recording only the changes made since the last backup or checkpoint. This is crucial for efficient data recovery and backup optimization.

5. Ensuring Data Durability

  • Why It’s Important: Even during normal operations, a database crash (e.g., power failure) can occur. Without WAL, transactions committed by users but not yet flushed to disk are lost.
  • Role of WAL: WAL ensures durability by logging all changes before acknowledging transaction commits. This guarantees that committed transactions are recoverable even if the system crashes before flushing the changes to data files.

6. Supporting Hot Backups

  • Why It’s Important: For large, active databases, taking a backup while the database is running can result in inconsistent snapshots.
  • Role of WAL: WAL ensures consistency by recording changes that occur during the backup process. When replayed, these logs synchronize the backup, ensuring it is valid and consistent.

7. Debugging and Auditing

  • Why It’s Important: Periodic backups are static snapshots and don’t provide a record of what happened in the database between backups.
  • Role of WAL: WAL contains a sequential record of all database modifications, which can help in debugging issues or auditing transactions.
Feature | Periodic Backups | Write-Ahead Logging
Crash Recovery | Limited to the last backup | Ensures full recovery to the crash point
Point-in-Time Recovery | Restores only to the backup time | Allows recovery to any specific point
Replication | Not supported | Enables real-time replication
Efficiency | Full snapshot | Incremental changes
Durability | Relies on backup frequency | Guarantees transaction durability

In upcoming sessions, we will experiment with each of these failure scenarios to build a deeper understanding.

Azure VNET

15 November 2024 at 19:43
  • Network --> communication between devices.
  • IP Address --> a unique identifier for each device (Internet Protocol address).

IPv4

  1. The 4th version of the Internet Protocol.
  2. 32 bits long.
  3. Four blocks of 8 bits each, labelled A, B, C & D.
  4. There are 2 types of IP address:
  5. Public ( mainly used for internet routing ) & Private ( e.g., office networks ).
  6. Each octet ranges from 0 to 255.
  7. 0 & 255 are reserved by the system.
  8. 127 --> loopback address; excluding 0, 255, and 127, 253 values remain usable.
  9. Then how do you find whether an address is public or private? By classes.
  10. A, B, C --> commonly used.
  11. D & E --> multicasting & research purposes.
  12. Class A --> 0 to 127 public; for private only 10.x.x.x is used, e.g., 10.0.0.1 ( about 16 million hosts can be declared ).
  13. Class B --> 128 to 191 public; for private 172.16.x.x is used, e.g., 172.16.0.1 to 172.16.255.254 ( 65,534 usable hosts ) - medium-size networks.
  14. Class C --> 192 to 223 public; private 192.168.1.x, a small network of up to 254 hosts.
  15. Class D --> 224 to 239, for multicast groups.
  16. Class E --> 240 to 255, for research purposes.

As a whole, the private ranges are:

A --> 10.0.0.0 – 10.255.255.255 (10.0.0.0/8)
B --> 172.16.0.0 – 172.31.255.255 (172.16.0.0/12)
C --> 192.168.0.0 – 192.168.255.255 (192.168.0.0/16)


Subnetting

  • Slashing (dividing) the network into smaller networks.

Virtual Network in Azure

  • A software-based network that connects virtual machines.

Subnet in Azure

  • A subdivision of a VNET.
  • It lets us organise resources within the network.

Azure Portal

All service >> Networking >> Virtual Networks >> Create


  • If you give a wrong IP range, you can see a validation prompt appear.

  • VNET & subnet creation: give the VNET an address space, then add subnets within it.

Notes

  1. DNS, DHCP & gateway, plus the .255 broadcast --> these 4 IPs are reserved in each subnet.


POC : Tamil Date parser using parse

By: Hariharan
15 November 2024 at 18:05

Tamil Date time parser POC
https://github.com/r1chardj0n3s/parse

It requires the external dependency parse (pip install parse) for parsing Python format strings with placeholders.

import parse
from date import TA_MONTHS   # Tamil month names from the author's local date module
from date import datetime    # datetime wrapper from the same local module

# POC of a Tamil date time parser
def strptime(format='{month}, {date} {year}', date_string="நவம்பர், 16 2024"):
    parsed = parse.parse(format, date_string)
    month = TA_MONTHS.index(parsed['month']) + 1
    date = int(parsed['date'])
    year = int(parsed['year'])
    return datetime(year, month, date)

print(strptime("{date}-{month}-{year}", "16-நவம்பர்-2024"))
# dt = datetime(2024, 11, 16)
# print(dt.strptime_ta("நவம்பர் , 16 2024", "%m %d %Y"))

Basic Linux Commands

15 November 2024 at 15:08
  1. pwd — When you first open the terminal, you are in the home directory of your user. To know which directory you are in, you can use the “pwd” command. It gives the absolute path, which means the path that starts from the root. The root is the base of the Linux file system and is denoted by a forward slash ( / ). The user directory is usually something like “/home/username”.

  2. ls — Use the “ls” command to know what files are in the directory you are in. You can see all the hidden files by using the command “ls -a”.

  3. cd — Use the “cd” command to go to a directory. “cd” expects a directory name or the path of the new directory as input.

  4. mkdir & rmdir — Use the mkdir command when you need to create a folder or a directory. Use rmdir to delete a directory. But rmdir can only be used to delete an empty directory. To delete a directory containing files, use rm.

  5. rm — Use the rm command to delete a file. Use “rm -r” to recursively delete all files within a specific directory.

  6. touch — The touch command is used to create an empty file. For example, “touch new.txt”.

  7. cp — Use the cp command to copy files through the command line.

  8. mv — Use the mv command to move files through the command line. We can also use the mv command to rename a file.

  9. cat — Use the cat command to display the contents of a file. It is usually used to easily view programs.

  10. vi — You can create a new file or modify a file using this editor.

Basic Linux Commands

By: Ragul.M
15 November 2024 at 14:25

Hi folks, welcome to my blog. Here we are going to see some basic and important commands of Linux.

One of the most distinctive features of Linux is its command-line interface (CLI). Knowing a few basic commands can unlock many possibilities in Linux.
Essential Commands
Here are some fundamental commands to get you started:
ls - Lists files and directories in the current directory.

ls

cd - Changes to a different directory.

cd /home/user/Documents

pwd - Prints the current working directory.

pwd

cp - Copies files or directories.

cp file1.txt /home/user/backup/

mv - Moves or renames files or directories.

mv file1.txt file2.txt

rm - Removes files or directories.

rm file1.txt

mkdir - Creates a new directory.

mkdir new_folder

touch - Creates a new empty file.

touch newfile.txt

cat - Displays the contents of a file.

cat file1.txt

nano or vim - Opens a file in the text editor.

nano file1.txt

chmod - Changes file permissions.

chmod 755 file1.txt

ps - Displays active processes.

ps

kill - Terminates a process.

kill [PID]

Each command is powerful on its own, and combining them enables you to manage your files and system effectively, as the example below shows. We can see more basics and interesting things about Linux in upcoming blogs, which I will be posting.
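
For instance, a small combination using a pipe, which feeds the output of one command into the next (the file and process names are just examples):

ls -l /home/user/Documents | grep ".txt"    # list a directory, then keep only the .txt entries
ps aux | grep bash                          # list processes, then keep only the bash ones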

Follow for more and happy learning :)

Locust EP 1 : Load Testing: Ensuring Application Reliability with Real-Time Examples and Metrics

14 November 2024 at 15:48

In today’s fast-paced digital world, delivering a reliable and scalable application is key to providing a positive user experience.

One of the most effective ways to guarantee this is through load testing. This post will walk you through the fundamentals of load testing, real-time examples of its application, and crucial metrics to watch for.

What is Load Testing?

Load testing is a type of performance testing that simulates real-world usage of an application. By applying load to a system, testers observe how it behaves under peak and normal conditions. The primary goal is to identify any performance bottlenecks, ensure the system can handle expected user traffic, and maintain optimal performance.

Load testing answers these critical questions:

  • Can the application handle the expected user load?
  • How does performance degrade as the load increases?
  • What is the system’s breaking point?

Why is Load Testing Important?

Without load testing, applications are vulnerable to crashes, slow response times, and unavailability, all of which can lead to a poor user experience, lost revenue, and brand damage. Proactive load testing allows teams to address issues before they impact end-users.

Real-Time Load Testing Examples

Let’s explore some real-world examples that demonstrate the importance of load testing.

Example 1: E-commerce Website During a Sale Event

An online retailer preparing for a Black Friday sale knows that traffic will spike. They conduct load testing to simulate thousands of users browsing, adding items to their cart, and checking out simultaneously. By analyzing the system’s response under these conditions, the retailer can identify weak points in the checkout process or database and make necessary optimizations.

Example 2: Video Streaming Platform Launch

A new streaming platform is preparing for launch, expecting millions of users. Through load testing, the team simulates high traffic, testing how well video streaming performs under maximum user load. This testing also helps check if CDN (Content Delivery Network) configurations are optimized for global access, ensuring minimal buffering and downtime during peak hours.

Example 3: Financial Services Platform During Market Hours

A trading platform experiences intense usage during market open and close hours. Load testing helps simulate these peak times, ensuring that real-time data updates, transactions, and account management work flawlessly. Testing for these scenarios helps avoid issues like slow trade executions and platform unavailability during critical trading periods.

Key Metrics to Monitor in Load Testing

Understanding key metrics is essential for interpreting load test results. Here are some critical metrics to focus on:

1. Response Time

  • Definition: The time taken by the system to respond to a request.
  • Why It Matters: Slow response times can frustrate users and indicate bottlenecks.
  • Example Thresholds: For websites, a response time below 2 seconds is considered acceptable.

2. Throughput

  • Definition: The number of requests processed per second.
  • Why It Matters: Throughput indicates how many concurrent users your application can handle.
  • Real-Time Use Case: In our e-commerce example, the retailer would track throughput to ensure the checkout process doesn’t become a bottleneck.

3. Error Rate

  • Definition: The percentage of failed requests out of total requests.
  • Why It Matters: A high error rate could indicate application instability under load.
  • Real-Time Use Case: The trading platform monitors the error rate during market close, ensuring the system doesn’t throw errors under peak trading load.

4. CPU and Memory Utilization

  • Definition: The percentage of CPU and memory resources used during the load test.
  • Why It Matters: High CPU or memory utilization can signal that the server may not handle additional load.
  • Real-Time Use Case: The video streaming platform tracks memory usage to prevent lag or interruptions in streaming as users increase.

5. Concurrent Users

  • Definition: The number of users active on the application at the same time.
  • Why It Matters: Concurrent users help you understand how much load the system can handle before performance starts degrading.
  • Real-Time Use Case: The retailer tests how many concurrent users can shop simultaneously without crashing the website.

6. Latency

  • Definition: The time it takes for a request to travel from the client to the server and back.
  • Why It Matters: High latency indicates network or processing delays that can slow down the user experience.
  • Real-Time Use Case: For a financial app, reducing latency ensures trades execute in near real-time, which is crucial for users during volatile market conditions.

7. 95th and 99th Percentile Response Times

  • Definition: The time within which 95% or 99% of requests are completed.
  • Why It Matters: These percentiles help identify outliers that may impact user experience.
  • Real-Time Use Case: The streaming service may analyze these percentiles to ensure smooth playback for most users, even under peak loads (see the sketch after this list).
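
A quick sketch of how such percentiles are computed from raw response times (the numbers are made up; Locust reports these percentiles for you in its UI and CSV output):

import statistics

response_times_ms = [120, 135, 150, 160, 180, 200, 240, 300, 450, 1200]

cuts = statistics.quantiles(response_times_ms, n=100)  # 99 cut points
p95, p99 = cuts[94], cuts[98]
print(f"p95 = {p95:.0f} ms, p99 = {p99:.0f} ms")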

Best Practices for Effective Load Testing

  1. Set Clear Objectives: Define specific goals, such as the expected number of concurrent users or acceptable response times, based on the nature of the application.
  2. Use Realistic Load Scenarios: Create scenarios that mimic actual user behavior, including peak times, user interactions, and geographical diversity.
  3. Analyze Bottlenecks and Optimize: Use test results to identify and address performance bottlenecks, whether in the application code, database queries, or server configurations.
  4. Monitor in Real-Time: Track metrics like response time, throughput, and error rates in real-time to identify issues as they arise during the test.
  5. Repeat and Compare: Conduct multiple load tests to ensure consistent performance over time, especially after any significant update or release.

Load testing is crucial for building a resilient and scalable application. By using real-world scenarios and keeping a close eye on metrics like response time, throughput, and error rates, you can ensure your system performs well under load. Proactive load testing helps to deliver a smooth, reliable experience for users, even during peak times.

Linux basics for beginners

By: Ragul.M
14 November 2024 at 16:04

Introduction:
Linux is one of the most powerful and widely-used operating systems in the world, found everywhere from mobile devices to high-powered servers. Known for its stability, security, and open-source nature, Linux is an essential skill for anyone interested in IT, programming, or system administration.
In this blog, we are going to see what Linux is and why to choose it.

1) What is linux
Linux is an open-source operating system that was first introduced by Linus Torvalds in 1991. Built on a Unix-based foundation, Linux is community-driven, meaning anyone can view, modify, and contribute to its code. This collaborative approach has led to the creation of various Linux distributions, or "distros," each tailored to different types of users and use cases. Some of the most popular Linux distributions are:

  • Ubuntu: Known for its user-friendly interface, great for beginners.
  • Fedora: A cutting-edge distro with the latest software versions, popular with developers.
  • CentOS: Stable and widely used in enterprise environments. Each distribution may look and function slightly differently, but they all share the same core Linux features.

2) Why choose linux
Linux is favored for many reasons, including its:

  1. Stability: Linux is well-known for running smoothly without crashing, even in demanding environments.
  2. Security: Its open-source nature allows the community to detect and fix vulnerabilities quickly, making it highly secure.
  3. Customizability: Users have complete control to modify and customize their system.
  4. Performance: Linux is efficient, allowing it to run on a wide range of devices, from servers to small IoT devices.

Conclusion
Learning Linux basics is the first step to becoming proficient in an operating system that powers much of the digital world. We can see more basics and interesting things about Linux in upcoming blogs, which I will be posting.

Follow for more and happy learning :)

Exception Handling

By: Sugirtha
13 November 2024 at 03:06

What is Exception?

An Exception is an unexpected or unwanted event which occurs during the execution of a program that typically disrupts the normal flow of execution.

That is the definition, OK, but here is what I understand: the program’s execution stops abnormally when it reaches a point that has some mistake/error – it may be small or big, a compilation, runtime, or logical mistake – and it does not proceed with further statements. Handling this situation is called exception handling. Cool, isn’t it?

Why Exception Handling?

If we do not handle the exception, program execution stops. To make the program run smoothly and avoid stopping due to minor issues/exceptions, we should handle it.

So, how is it represented? In Java everything is a class, right? The classes derived from java.lang.Throwable are Error and Exception. For a better understanding, let's have a look at the hierarchical structure.

Here I could see Exception and Error – when we hear these words they sound similar, right? So what could be the difference?

Errors are serious issues which are beyond our control, like system-oriented problems, e.g., StackOverflowError, OutOfMemoryError (let's discuss these later).

Exception is a situation which we can handle,

  1. through try-catch-(finally) block of code
  2. Throws keyword in method signature.

try-catch-finally:

What does that mean? As the owners of our code, we have some idea about the possible problems or exceptions, which we can solve or handle through this try-catch block of code.

For Ex. Task : Make a Tasty Dish.

What could be the exceptions?

  1. Some spice added in lower quantity.
  2. Chosen vessel may be smaller in size
  3. some additional stuff may not be available

To overcome these exception we can use try-catch block.

try {
  Cooking_Process();
}
catch(VesselException chosenLittle) {
   Replace_with_Bigger_One();
}
catch(QtyException_Spice spiceLow) {
   add_Little_More_Spice();
}
catch(AddOnsException e) {
  ignore_Additional_Flavors_If_Not_Available();
}
catch(Exception e) {
  notListedIssue();
}
finally {
  cleanUp_Kitchen();
}

Here there could be more catch blocks for one try, as one task may encounter many different issues. If it is solvable, it is called an exception and we try to catch it in the catch blocks. The JVM processes our code (Cooking_Process) and if it encounters a problem like QtyException_Spice, it throws the appropriate object. It is then caught by the corresponding catch block, which executes add_Little_More_Spice() and prevents the code from failing.

Here we see one more word, Exception, which is the parent class of all exceptions. Sometimes we may encounter an issue that is not listed (perhaps forgotten) but is solvable. In such cases, we can use the parent class object (since a parent class reference can refer to a child object) to catch any exception that is not listed.

Fine, all good. But what is the purpose of finally here? finally is the block of code that will always be executed, whether or not an exception occurs. It doesn’t matter if you made a good dish or a bad one, the kitchen must be cleaned. The same applies here: the finally block is used for releasing system resources that were used (e.g., a file). However, we can also write our own code in the finally block based on the specific requirements.
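
In real Java code the same idea looks like this small sketch (it needs java.io.File, java.io.FileNotFoundException, and java.util.Scanner imported; the file name is just an example):

Scanner scanner = null;
try {
    scanner = new Scanner(new File("data.txt"));
    System.out.println(scanner.nextLine());
} catch (FileNotFoundException e) {
    System.out.println("Could not open the file: " + e.getMessage());
} finally {
    if (scanner != null) {
        scanner.close();   // runs whether or not an exception occurred
    }
}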

Now consider a situation where you have one gas cylinder to cook with, and it runs out during cooking, so we cannot proceed. This fails our TastyDish process, and the situation cannot be handled immediately. This is called an Error. Now let's recall the definition: “Errors are serious issues that are beyond our control, like a system crash or resource limitations.” Now we understand it, right?

E.g., OutOfMemoryError – when we load too much data, the JVM runs out of memory. StackOverflowError – an infinite loop or recursion without a base condition makes the stack overflow.

Let's revisit exceptions – they can be classified into two categories:

  • Checked Exception
  • UnChecked Exception.

What are Checked Exceptions?

A checked exception is an exception that the compiler checks at compile time. The compiler will not allow you to run the code if you are not handling it through try-catch or declaring it with throws on the method.

Let's go a bit deeper for a clearer understanding: the compiler predicts/suspects the parts of our code which may throw exceptions that would stop execution. So it will not allow you to run the code; it forces you to handle the exception through one of the mechanisms above.

If that is not clear, let us take an example. In the code above we have VesselException and QtyException_Spice. You are at the initial stage of cooking under the supervision of your parent. You are instructed to keep the big vessel and the spices nearby in case you need them when a problem arises. If you do not keep them nearby, the parent does not allow you to start cooking. The parent is the compiler here.

throws:

So an exception expected by the compiler is called a checked exception, and the compiler forces us to handle it. One solution we know is try-catch-finally; what about declaring it in the method? A method in which such an exception can be expected should use the keyword “throws <ExceptionClassName>” in its signature; that is, it specifies that this method may lead to exceptions from the list of classes specified after the throws keyword. After throws there can be one class or more than one. Whoever uses a method with this declaration in its signature is aware of that and may handle it.

A good example of this is IOException (parent) – FileNotFoundException (child). If you are trying to open a file and read it, the possible exceptions are: incorrect file path, file doesn’t exist, file permissions, network issues, etc. For example:

// Requires: import java.io.File; import java.io.FileNotFoundException; import java.util.Scanner;
public static void main(String[] args) {
        try {
            // Calling the method that may throw a FileNotFoundException
            readFile("nonexistentfile.txt");
        } catch (FileNotFoundException e) {
            // Handle exception here
            System.out.println("File not found! Please check the file path.");
            e.printStackTrace();
        }
    }   

 // Method that throws FileNotFoundException
    public static void readFile(String fileName) throws FileNotFoundException {
        File file = new File(fileName);
        Scanner scanner = new Scanner(file);  // This line may throw FileNotFoundException
        while (scanner.hasNextLine()) {
            System.out.println(scanner.nextLine());
        }
        scanner.close();
    }

What is Unchecked Exception?

The compiler does not alert you about this kind of exception; instead you experience it only at runtime. It is not required to be declared or caught, but handling it is advisable. These are all subclasses of RuntimeException (Errors are also unchecked, but they extend Error rather than RuntimeException). They can be thrown for runtime issues, illegal arguments, or programming mistakes.

E.g., an invalid index in an array, reading a value from a null object, or dividing by zero.

Ex. NullPointerException

String str = null; System.out.println(str.length()); /* Throws NullPointerException */

ArrayIndexOutOfBoundsException

int[] arr = new int[3]; System.out.println(arr[5]); /* Throws ArrayIndexOutOfBoundsException */

What is throw?

Instead of waiting for the JRE to throw the error, the developer can throw an exception object (predefined or user-defined) to signal an erroneous situation and stop the execution. For example, you know an input is wrong and want to give your own error message.

import java.util.Scanner;

public class SampleOfThrow {
    public static void main(String[] args) {
        // a/b --> b should not be 0
        Scanner scn = new Scanner(System.in);
        int a = scn.nextInt();
        int b = scn.nextInt();
        if (b==0) throw new ArithmeticException("b value could not be zero");
        System.out.print(a/b);
    }
}

Hey, wait, you read the words “User Defined Exception” above, which means the developer (we) can also create our own exception and throw it. Yes, absolutely. How? In Java everything is a class, right? So through a class only, but on one condition: it should extend the parent Exception class in order to mark it as an exception.

import java.util.Scanner;

//User Defined Exception
class TooMuchSaltException extends Exception {
    public TooMuchSaltException(String message) {
        super(message); // calls the Exception class constructor with our own error message
    }
}

public class MainClass {
    public static void main(String[] args) {
        try {
            Scanner scn = new Scanner(System.in);
            boolean moreSalt = scn.nextBoolean(); 
            validateFood(moreSalt);
 // This method will throw an TooMuchSaltException
        } catch (TooMuchSaltException e) {
            System.out.println(e.getMessage());  // Catching and handling the custom exception
        }
    }

    // Method that throws TooMuchSaltException if food contains too much salt and can't eat
    public static void validateFood(boolean moreSalt) throws TooMuchSaltException {
        if (moreSalt) {
            throw new TooMuchSaltException("Food is too salty.");
        }
        System.out.println("Salt is in correct quantity");
    }
}

Now let's have a look at some important exception handling points from a Java point of view. (The following tables were generated with ChatGPT.)

Error Vs. Exception

Aspect | Error | Exception
Definition | An Error represents a serious problem that a Java application cannot reasonably recover from. These are usually related to the Java runtime environment or the system. | An Exception represents conditions that can be handled or recovered from during the application’s execution, usually due to issues in the program’s logic or input.
Superclass | java.lang.Error | java.lang.Exception
Recovery | Errors usually cannot be recovered from, and it is generally not advisable to catch them. | Exceptions can typically be caught and handled by the program to allow for recovery or graceful failure.
Common Types | OutOfMemoryError, StackOverflowError, VirtualMachineError, InternalError | IOException, SQLException, NullPointerException, IllegalArgumentException, FileNotFoundException
Occurs Due To | Typically caused by severe issues like running out of memory, system failures, or hardware errors. | Typically caused by program bugs or invalid operations, such as accessing null objects, dividing by zero, or invalid user input.
Checked or Unchecked | Always unchecked (extends Throwable but not Exception). | Checked exceptions extend Exception; unchecked exceptions extend RuntimeException.
Examples | OutOfMemoryError, StackOverflowError, VirtualMachineError | IOException, SQLException, NullPointerException, ArithmeticException
Handling | Errors are usually not handled explicitly by the program. They indicate fatal problems. | Exceptions can and should be handled, either by the program or by throwing them to the calling method.
Purpose | Errors are used to indicate severe problems that are typically out of the program’s control. | Exceptions are used to handle exceptional conditions that can be anticipated and managed in the program.
Examples of Causes | System crash; exhaustion of JVM resources (e.g., memory); hardware failure | File not found; invalid input; network issues
Throwing | You generally should not throw Error explicitly. These are thrown by the JVM when something critical happens. | You can explicitly throw exceptions using the throw keyword, especially for custom exceptions.

Checked vs. Unchecked Exception:

Aspect | Checked Exception | Unchecked Exception
Definition | Exceptions that are explicitly checked by the compiler at compile time. | Exceptions that are not checked by the compiler, and are typically runtime exceptions.
Superclass | Subclasses of Exception but not RuntimeException. | Subclasses of RuntimeException.
Handling Requirement | Must be caught or declared in the method signature using throws. | No explicit handling required; they can be left uncaught.
Examples | IOException, SQLException, ClassNotFoundException | NullPointerException, ArrayIndexOutOfBoundsException, ArithmeticException
Common Usage | Typically used for exceptional conditions that a program might want to recover from. | Used for programming errors or unforeseen runtime issues.
Checked At | Compile time. | Runtime (execution time).
Effect on Code | Forces the developer to handle the exception (either with a try-catch or throws). | No such requirement; can be ignored without compiler errors.
Examples of Causes | Missing file, network failure, database errors. | Null pointer dereference, dividing by zero, illegal array index access.
When to Use | When recovery from the exception is possible or expected. | When the error typically indicates a bug or programming mistake that cannot be recovered from.

throw vs. throws:

| Aspect | throw | throws |
|---|---|---|
| Definition | Used to explicitly throw an exception from a method or block of code. | Used in a method signature to declare that a method can throw one or more exceptions. |
| Usage | Used with an actual exception object to initiate the throwing of an exception. | Used in the method header to inform the compiler and the caller that the method might throw specific exceptions. |
| Keyword Type | Statement (flow control keyword). | Modifier (appears in the method declaration). |
| Example | `throw new IOException("File not found");` | `public void readFile() throws IOException { ... }` |
| Location | Can be used anywhere inside a method or block to throw an exception. | Appears only in the method signature, right after the parameter list. |
| Control | Immediately transfers control to the nearest catch block, or terminates the program if uncaught. | Allows a method to propagate the exception up the call stack to the caller, who must handle it. |
| Checked vs Unchecked | Can throw both checked and unchecked exceptions. | Typically used for checked exceptions (like IOException, SQLException) but can also list unchecked exceptions. |
| Example Scenario | You encounter an error condition and want to throw an exception. | You are writing a method that may encounter an error (like file I/O) and want to pass responsibility for handling the exception to the caller. |
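
Putting the two keywords together, here is a minimal Java sketch (the method readFile and its messages are only illustrative): throws in the signature declares the possibility of an IOException to the caller, while throw raises the exception at one specific point inside the method.

```java
import java.io.IOException;

public class ThrowVsThrows {
    // 'throws' in the signature declares that callers must handle or re-declare IOException.
    static void readFile(String path) throws IOException {
        if (path == null || path.isEmpty()) {
            // 'throw' creates and raises the exception at this exact point.
            throw new IOException("File not found: " + path);
        }
        System.out.println("Pretending to read " + path);
    }

    public static void main(String[] args) {
        try {
            readFile("");               // triggers the throw above
        } catch (IOException e) {
            System.out.println("Caller handled: " + e.getMessage());
        }
    }
}
```

Because readFile declares the exception with throws, the responsibility for handling it moves up to main, which is exactly the propagation described in the table above.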

References: 1. https://www.geeksforgeeks.org/exceptions-in-java/

Installing Arch Linux in UEFI systems (Windows)

12 November 2024 at 15:22

This will be a very basic overview of what needs to be done to install Arch Linux. For more information, check out the Arch Wiki installation guide.

The commands shown in this guide will be in italics.

Step 1: Downloading the required files and applications

I downloaded a few applications to help ease the installation process. You can download them using the links below.

Rufus:
This helps format the USB drive and write the disc image to it in DD image mode. I used Rufus; you can use other tools too. It only works on Windows.
rufus link

BitTorrent
The download options on the wiki page suggest using BitTorrent to download the disc image file.
BitTorrent for windows

Arch Linux torrent file
This is for downloading the Arch Linux torrent file. The download link can be found on the website below.
Arch Linux Download Page

Step 2: The bootable USB

You will need a USB drive of at least 2 GB; 4 GB or more should be very comfortable to use.

First, open the BitTorrent application (or the web-based version) and add the magnet link or the torrent file to start downloading the disc image file.

Then to prepare the USB:

1. Launch the application used to make the bootable USB, such as Rufus.
2. In the Device section, select your USB drive. Remember that all data on the drive will be lost during the process.
3. In Boot selection, choose the disc image file that was downloaded through torrent.
4. In Target system, select UEFI, as we are using a UEFI system.
5. In Partition scheme, make sure GPT is selected.
6. In File system, select FAT32 with 4096 bytes as the cluster size.
7. When you click START, it will present you with two options; select DD image mode, which is not the default option.

After the process is done, the USB will not be readable by Windows, so there is no need to panic if you cannot access it.

If you are dual booting, make sure you have at least 30 GB of unallocated space.

I would recommend turning off BitLocker, as it can cause additional challenges during the installation.

Then get into the UEFI firmware settings of your system. One easy way is to:
1. Hold the Shift key while clicking Restart.
2. Go into Troubleshoot.
3. Go into Advanced options.
4. Select UEFI Firmware Settings.
5. You will have to restart again, but you will end up in the required place.

Turn off Secure Boot. The option is usually under the security settings.

Select save changes and exit.

When you log back into your system, ensure that Secure Boot state is off by checking System Information.

Go back to UEFI Firmware settings by repeating the process.

In the boot priority settings (usually under the Boot section), give your USB device the highest priority. Then select save changes and exit.

Step 3: Preparing Arch Linux Installation

When all the above steps are done and the system restarts, you will be prompted with a few options. Select Arch Linux install medium and press 'Enter' to enter the installation environment. After this you will need to follow a series of steps.

1. Verifying you are in UEFI mode.

To do that type the command
cat /sys/firmware/efi/fw_platform_size

You should get 64 (or 32) as the result. If you get no result, you are probably not booted in UEFI mode.

2. Connecting to the internet:

If you are using an ethernet cable, you don't have to worry, as you should already be connected to the internet.
Use the command
ping -c 4 google.com
or another website of your choice, to check whether you're connected to the internet.

To connect to wi-fi, type in the command
ip link

This shows all the network interfaces you have. Your Wi-Fi interface will typically be named wlan0 or something like wlp3s0; this is your device name.

Then type the command
iwctl

This should get you into an interactive command line interface.
You can explore the options by using the command
help

My device name was wlan0, so I'm using wlan0 in the commands below; if yours is different, make the appropriate changes.

To connect to the wifi use the command
station wlan0 connect "Network Name"
where "Network Name" is the name of your network.

If you want to know the name of your network before doing this, you can use the command
station wlan0 get-networks

To get out of the environment simply use the command
exit

After you exit, you can verify your connection with
ping -c 4 google.com

If it doesn't work, try the command
ping -c 4 8.8.8.8

If the above also doesn't work, the problem may lie with your network.

However, if the second option works for you, the fix is to manually change the DNS server you're using.
To do that, run the command
nano /etc/systemd/resolved.conf

In this file, if the DNS line is commented out with a #, remove the # and set it to a DNS server of your choice, for example DNS=8.8.8.8. You may also need to restart the resolver with systemctl restart systemd-resolved for the change to take effect.

Press Ctrl+X, then confirm with Y, to save and exit.

Now try pinging a website such as google.com again to make sure you're properly connected to the internet.

3. Set the proper time

Once you are connected to the internet, the time should be set correctly. To check, you can use the command
timedatectl

4. Create the partitions for Arch Linux

To check what partitions you have, use the command
lsblk

This will list the disks and partitions you have. The disk will be named something like /dev/sda or /dev/nvme0n1. Mine was /dev/nvme0n1, so I'll be using that in the commands below.

To make the partitions, use the command
fdisk /dev/nvme0n1

This should bring you to a separate command line interface.

It will give you an introduction on what to do.

Now we will create the partitions.
To create a partition, use the command
n

It will ask for the partition number and show the default option. Press Enter to accept the default if you don't type a value. Let's say mine is 1.

It will then ask which sector you want the partition to start from, again with a default. Press Enter.

Then it will ask you where you want the sectors to end: type
+1g

+1g will allocate 1 GB to the partition you just created.

Then create another partition in the same way; let's say this one is partition number 2. This time, instead of
+1g use +4g

This will allocate 4 GB to the second partition you just created.

Create another partition, and this time leave the last sector at the default so it takes up the remaining space. Let's say this partition is number 3.

partition 1 - EFI system partition
partition 2 - Linux swap partition
partition 3 - Linux root partition

Before leaving fdisk, set the partition types with the t command (choose "EFI System" for partition 1 and "Linux swap" for partition 2; partition 3 can stay as the default "Linux filesystem"), then write the changes to disk with w.

5. Prepare the created partitions for Arch Linux installation

Here, we are going to format the partitions we created with the appropriate file systems.

For the EFI partition:
mkfs.fat -F 32 /dev/nvme0n1p1

This formats the 1 GB partition with a FAT32 file system.

For SWAP partition:
mkswap /dev/nvme0n1p2

This initializes the 4 GB partition as swap space, which the system can use as virtual memory.

For root partition:
mkfs.ext4 /dev/nvme0n1p3

This formats the root partition with the ext4 file system.

6. Mounting the partitions

This attaches the partitions we just created to mount points. Mount the root partition first so that the EFI partition can be mounted inside it.

For the root partition:
mount /dev/nvme0n1p3 /mnt

For the EFI partition:
mount --mkdir /dev/nvme0n1p1 /mnt/boot

For the swap partition:
swapon /dev/nvme0n1p2

Step 4: The Arch Linux Installation

1. Updating the mirrorlist (optional)

The mirrorlist is a list of mirror servers from which packages can be downloaded. Choosing the right mirror server could get you higher download speeds.

This step isn't required, as the mirrorlist is automatically updated when you are connected to the internet, but if you would like to edit it manually, it's in the file
/etc/pacman.d/mirrorlist

2. Installing base Linux kernel and firmware

To do this, use the command
pacstrap -K /mnt base linux linux-firmware

Step 5: Configuring the Arch Linux system

1. Generating fstab

The fstab (file system table) contains information about each partition and storage device, and how they should be mounted during boot.

To do it, use the command:
genfstab -U /mnt >> /mnt/etc/fstab

2. Chroot

Chroot is short for change root. It is used to work directly inside the newly installed Arch Linux system from the USB live environment.

To do it, use the command:
arch-chroot /mnt

3. Time

The timezone has two parts: the region and the city. I am from India, so my region is Asia and the city is Kolkata. Change yours appropriately to your needs.

The command:
ln -sf /usr/share/zoneinfo/Asia/Kolkata /etc/localtime

We can also write the current system time to the hardware clock (stored as UTC).
To do that:
hwclock --systohc

4. Installing some important tools

The system you have installed is a very basic system, so it doesn't have a lot of stuff. I'm recommending two very basic tools as they can be handy.

i) nano:
This is a text editor, which you will need to make changes to configuration files.
pacman -S nano

ii) iwd:
This is the iNet wireless daemon. I recommend it so that you can connect to Wi-Fi once you reboot into your actual Arch system. You may also want to enable it with systemctl enable iwd so that it starts automatically after the reboot.
pacman -S iwd

5. Localization

This is for setting the system language (locale). The available locales are listed in /etc/locale.gen, so open that file with
nano /etc/locale.gen

I want to use English, which is the default on most devices, so uncomment (remove the #) the line that says
en_US.UTF-8 UTF-8

As there are a lot of lines, you can search within nano using Ctrl+W.

Then Ctrl+X (confirm with Y) to save and exit.

Then use the command
locale-gen

This command generates the locale you just uncommented. Finally, set the system language by creating /etc/locale.conf (for example with nano /etc/locale.conf) containing the line
LANG=en_US.UTF-8

6. Host and password

To set the hostname, edit the /etc/hostname file. Use
nano /etc/hostname

Then type in the hostname you want.
Ctrl+X (confirm with Y) to save and exit.

To set the password of your root user, use the command
passwd

7. Getting out of chroot and rebooting the system

To get out of chroot simply use
exit

Then to reboot the system use
reboot

Remove the installation medium (USB) as the device turns off to restart.

Step 6: Enjoy Arch Linux

Arch Linux is one of the most minimal systems, so you can customize it to your liking. You can also install a desktop environment if you feel like it.

My First Public Speaking

By: Sugirtha
12 November 2024 at 14:59

Public Speaking – It’s just two words, but it makes many people feel frightened. Even I did. I felt embarrassed to stand in front of my schoolmates/colleagues.

Usually, I am present in college during working days, but if it’s seminar days, you can’t find me – I will be absent. But whatever you try to avoid in life, one day you’ll face it, right? That was what happened in my interview. Fear! Fear!!

But how could we compare an interview with public speaking? Why not? If the interview panel has multiple people, they ask you questions you may or may not know the answers to – whereas in public speaking, at least you speak about what you know.

I still have that fear. So, I decided not to run away but FACE THE ISSUE. A few good people supported me in overcoming this situation. First, my professor Muthu Sir, who advised me to join open-source communities, specifically ILUGC and KanchiILUGC. He said, “Just join, they will take care of you if you follow them.” I joined, and under Mr. Shrini’s guidance, I started doing simple projects. In between, he asked me to give a presentation at an ILUGC meet.

I said OK immediately (I already wanted to overcome my fear). I felt I had accepted in a rush, and suddenly had mixed feelings, like wanting to run away 🙂 But he was so fast – I received an email asking for my name, and the formalities proceeded. The real race started in my heart.

My inner thoughts: “What, Sugi? What are you going to do? The subject is fine, but can you speak in front of people?”

I said to myself, It's OK; whatever it takes, I have to do it. Then Muthu Sir, Ms. Jothy, friends, classmates, my family and all the others encouraged me.

I still remember what Muthu Sir said: “What’s the worst that can happen? One, you can do well. If so, you’ll feel good and confident. Two, you may not do well, but that will push you to do better next time. Both outcomes will yield good and positive results, so just go for it.”

Then I practiced alone and felt OK. I had some paper notes in my hand, but when the laptop screen turned on, my heart rate went up, and my hands started shaking. When people asked me to start, I said, “I am Sugirtha,” and then forgot everything.

Thank God I at least remembered my name! Fine, let’s see the paper – What is this? I couldn’t read it, nothing was going inside my brain. It felt like Latin, which I don’t understand. I threw the paper aside, started recollecting, and said, “HTML stands for HyperText Markup Language.” Inside, I thought, Oh my God, this is not my first line to say, I thought I would start differently. For about 5 to 10 minutes, I fumbled with the points but didn’t deliver them as I expected. But when I started working on the code, I felt OK, as I got immersed in it.

Finally, it was over. There was still some tension, and after some time, I thought, I don’t know if my presentation was good or not, but at least I finished it. Then, after a while, I thought, Oh God, you did it, Sugi! Finally, you did something.

Now I wonder: if I got another chance, could I do it again? Back then, I somehow managed, but now… the fear returns. Still, it is not the same as before – this time I feel I can overcome it more easily. To overcome it, I have to keep doing more. I don't want to prove anything to anyone; I just want to prove something to myself. For my own satisfaction, I want to do more, and I feel I will do better.

If I can, why can’t you?
