In continuation from Chapter 1, Chapter 2 starts with how to approach the priority list. For a normal person [without any priorities, like me :-)], every task will be a priority. Instead, the author suggests an alternative.
It goes like this. Take a piece of paper, or use a text editor, name it "Priorities", and start writing down all the tasks that come to your mind that you want to do. Not necessarily in a day, week or month; just at random. For example: complete reading a book, finish the assignment, save a minimum amount, practice meditation, etc. By the end of this activity, you will have an exhaustive list of tasks that you wish you could complete.
Next, take one more sheet of paper, or open one more text editor, and name it "Monthly". Here, from the list that you created in "Priorities", pick those tasks which could be, or have to be, completed in the next month. From our example, we can choose "Finish an assignment" and add it to the "Monthly" list.
Now the monthly list will be comparatively smaller than the entire priorities list, and you have a clear idea of what needs to be done in the next 30 days. From here, create one more list, "Weekly", and do the same exercise of choosing the things that have to be, or could be, completed in the next 7 days, and include them in the weekly list.
The hard part is now complete. From here, connect the ideas described in Chapter 1. Pick the biggest frog to eat and add it to your daily list.
Looking at the larger picture, the moment you knock off one task from the daily list, it creates a ripple effect: it knocks off a task from the weekly list, then the monthly list, and then from the entire priorities list. You will feel accomplished by the end of the first week. And if we do this again in week 2, we will feel even happier and more accomplished.
That is all about Chapter 2. Once again, this is my understanding, and not the author's exact narration.
See you again in Chapter-3! Thank you for reading!
What is plain text, in my point of view? It's simply text without any makeup or add-ons; it is just organic content. For example,
- A handwritten grocery list that our mother used to give to our father
- A to-do list
- An essay/composition written in our school days
Why is plain text important?
- Only the quality of the content gets scored here: there is no marketing through beautification or fancy formats.
- Less storage.
- Ideal for long-term data storage, thanks to cross-platform compatibility.
- Universal accessibility: much software uses plain text for configuration files (.ini, .conf, .json).
- Data interchange: .csv files move data between databases and spreadsheet applications.
- Command-line environments, and even cryptography.
- Batch processing: many batch processes use plain text files to define lists of actions or tasks that need to be executed in batch mode, such as renaming files, converting data formats, or running programs.
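As a tiny illustration of the data-interchange point above: a CSV file is nothing but plain text, so any tool that reads text can parse it. A minimal Python sketch (the grocery data here is made up):

```python
import csv
import io

# A grocery list as plain-text CSV (made-up data)
text = "item,qty\nrice,2\nmilk,1\n"

# Any tool that understands plain text can parse it back into structure
rows = list(csv.DictReader(io.StringIO(text)))
for row in rows:
    print(row["item"], row["qty"])
```

The same file opens equally well in a spreadsheet, a database import tool, or a five-line script; that is the portability plain text buys you.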
So plain text is simple, powerful, and something special; there is no doubt about it.
What is IRC? IRC (Internet Relay Chat) is a plain-text-based, real-time communication system over the internet for one-on-one chat, group chat, and online communities, making it ideal for discussion.
It was a popular network for free and open-source software (FOSS) projects and developers in the early days; for example, many large projects (like Debian, Arch Linux, GNOME, and Python) used it for their discussions. IRC is still used by many communities today.
Usage: mainly a discussion chat forum for open-source software developers, technology, and hobbyist communities.
Why IRC? We already have so many chat platforms which are very advanced and support multimedia too. So this is very basic, right?
Yes, it is very basic, but the infrastructure of IRC is not like other chat platforms. In my point of view, the important differences are privacy and decentralization.
Advantages over other Chat Platforms:
No ads or popups: we are not distracted by ads or popups, because our information is not passed to any company to track our needs and target marketing at us.
Privacy: many IRC networks do not need your email or mobile number, or even registration. You can simply type your name or nickname, select your server, and start chatting instantly. Chat logs get stored only if required.
Open source and free: the server, the client, the entire networking model is free and open source. Anybody can install IRC servers/clients and connect to the network.
Decentralized: as servers are decentralized, the network is able to work even when one server has issues and is down. Users can connect to different servers within the same network, which improves reliability and performance.
Low latency: it is a free real-time communication system with low latency, which is very important for technical communities and time-sensitive conversations.
Customization and extensibility: custom scripts can be written to enhance functionality, and IRC supports automation through bots, which can record chats, send notifications, moderate channels, etc.
Channel control: channel operators (group admins) have fine-grained control over the users, like who can join and who can be kicked off.
Lightweight tool: as it is lightweight, no high-end hardware is required. IRC can be accessed even from older computers or low-powered devices like a Raspberry Pi.
History and logging: some IRC servers allow logging of chats through bots or in local storage.
Inventor: IRC was developed by Jarkko Oikarinen (Finland) in 1988.
Some IRC networks/servers:
- Libera.Chat (#ubuntu, #debian, #python, #opensource)
- EFNet, the Eris Free Network (#linux, #python, #hackers)
- IRCnet (#linux, #chat, #help)
- Undernet (#help, #anime, #music)
- QuakeNet (#quake, #gamers, #techsupport)
- DALnet, for both casual users and larger communities (#tech, #gaming, #music)
Directly on the website: Libera WebClient at https://web.libera.chat/gamja/. You can click Join, then type the channel name (group), e.g. #kaniyam.
How to get connected with IRC: after installing an IRC client, open it. Add a new network (e.g., "Libera.Chat"). Set the server to irc.libera.chat (or any of the alternate servers above). Optionally, you can specify a port (the default is 6667 for non-SSL, 6697 for SSL). Once you're connected, join a channel like #ubuntu, #python, or #freenode-migrants.
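Under the hood, the client and server speak a line-based plain-text protocol: each command is one text line ending in CRLF. As a rough sketch of what a client sends right after connecting (the nickname and channel below are hypothetical examples, not part of any real session):

```python
def irc_handshake(nick: str, channel: str) -> list[str]:
    """Build the plain-text lines an IRC client sends after connecting.

    Each IRC message is a single text line terminated by CRLF.
    """
    return [
        f"NICK {nick}\r\n",              # choose a nickname
        f"USER {nick} 0 * :{nick}\r\n",  # register the user with the server
        f"JOIN {channel}\r\n",           # join a channel (group)
    ]

# Example: the lines for a hypothetical user joining #kaniyam
for line in irc_handshake("demo_user", "#kaniyam"):
    print(line, end="")
```

Because the whole protocol is just plain text like this, you can read it, log it, and script it with the simplest of tools.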
Popular channels to join on Libera.Chat: #ubuntu, #debian, #python, #opensource, #kaniyam
Local logs: logs are typically saved in plain text and can be stored locally, allowing you to review past conversations. How to get local logs from our system (IRC libera.chat server): check the folder /home//.local/share/weechat/logs/ For web IRC bot history: https://ircbot.comm-central.org:8080/
Locust provides powerful event hooks, such as test_start and test_stop, to execute custom logic before and after a load test begins or ends. These events allow you to implement setup and teardown operations at the test level, which applies to the entire test run rather than individual users.
In this blog, we will
Understand what test_start and test_stop are.
Explore their use cases.
Provide examples of implementing these events.
Discuss how to run and validate the setup.
What Are test_start and test_stop?
test_start: Triggered when the test starts. Use this event to perform actions like initializing global resources, starting external systems, or logging test start information.
test_stop: Triggered when the test ends. This event is ideal for cleanup operations, aggregating results, or stopping external systems.
These events are global and apply to the entire test environment rather than individual user instances.
Why Use test_start and test_stop?
Global Setup: Initialize shared resources, like database connections or external services.
Logging: Record timestamps or test details for audit or reporting purposes.
External System Management: Start/stop services that the test depends on, such as mock servers or third-party APIs.
Example: Basic Usage of test_start and test_stop
Here's a basic example demonstrating the usage of these events:

```python
from locust import User, task, between, events
from datetime import datetime


# Global setup: perform actions at test start
@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    print("Test started at:", datetime.now())


# Global teardown: perform actions at test stop
@events.test_stop.add_listener
def on_test_stop(environment, **kwargs):
    print("Test stopped at:", datetime.now())


# Simulated user behavior
class MyUser(User):
    wait_time = between(1, 5)

    @task
    def print_datetime(self):
        """Task that prints the current datetime."""
        print("Current datetime:", datetime.now())
```
Running the Example
Save the code as locustfile.py.
Start Locust -> `locust -f locustfile.py`
Configure the test parameters (number of users, spawn rate, etc.) in the web UI at http://localhost:8089.
Observe the console output:
A message when the test starts (on_test_start).
Messages during the test as users execute tasks.
A message when the test stops (on_test_stop).
Example: Logging Test Details
You can log detailed test information, like the number of users and host under test, using environment and kwargs:
Hi folks, welcome to my blog. Here we are going to see an "Introduction to AWS".
Amazon Web Services (AWS) is the world's leading cloud computing platform, offering a wide range of services to help businesses scale and innovate. Whether you're building an application, hosting a website, or storing data, AWS provides reliable and cost-effective solutions for individuals and organizations of all sizes.
What is AWS?
AWS is a comprehensive cloud computing platform provided by Amazon. It offers on-demand resources such as compute power, storage, networking, and databases on a pay-as-you-go basis. This eliminates the need for businesses to invest in and maintain physical servers.
Core Benefits of AWS
Scalability: AWS allows you to scale your resources up or down based on your needs.
Cost-Effective: With its pay-as-you-go pricing, you only pay for what you use.
Global Availability: AWS has data centers worldwide, ensuring low latency and high availability.
Security: AWS follows a shared responsibility model, offering top-notch security features like encryption and access control.
Flexibility: Supports multiple programming languages, operating systems, and architectures.
Key AWS Services
Here are some of the most widely used AWS services:
Compute:
Amazon EC2: Virtual servers to run your applications.
AWS Lambda: Serverless computing to run code without managing servers.
Storage:
Amazon S3: Object storage for data backup and distribution.
Amazon EBS: Block storage for EC2 instances.
Database:
Amazon RDS: Managed relational databases like MySQL, PostgreSQL, and Oracle.
Amazon DynamoDB: NoSQL database for high-performance applications.
Networking:
Amazon VPC: Create isolated networks in the cloud.
Amazon Route 53: Domain name system (DNS) and traffic management.
AI/ML:
Amazon SageMaker: Build, train, and deploy machine learning models.
DevOps Tools:
AWS CodePipeline: Automates the release process.
Amazon EKS: Managed Kubernetes service.
Conclusion
AWS has revolutionized the way businesses leverage technology by providing scalable, secure, and flexible cloud solutions. Whether you're a developer, an enterprise, or an enthusiast, understanding AWS basics is the first step toward mastering the cloud. Start your AWS journey today and unlock endless possibilities!
With this concept we can load data into a table in bulk.
The main word is LOAD.
Then the question comes to your mind: what is the difference between LOAD and INSERT? Insert happens row by row; load happens in one go.
What data? Which table? Loading script? Execute. These are the four things you need to keep in mind.
Flat files: csv (comma separated values), txt, dat, excel, etc.
Always use Notepad (plain text) to prepare the data files.

```sql
-- this will fetch only 10 rows
select employee_id || ',' || first_name || ',' || salary
from employees_table
where rownum <= 10;
```

Save this output in a folder as a .csv file.

```sql
-- this will fetch the rows between those values
select employee_id || ',' || first_name || ',' || salary
from employees_table
where employee_id between 150 and 170;
```

Save this output as a .txt file.
Now coming to table creation:

```sql
create table sample(id number, name varchar2(25), salary number);
```

Now coming to the creation of the script. The loading script and the control file are the same thing.

```
load data
infile 'path_of_the_file.csv'
infile 'path_of_the_file.txt'
insert into table sample
fields terminated by ','
(id, name, salary)
```

Create the script in Notepad, choose "All Files" when saving, and give it a .ctl extension.
Now coming to execution:

`sqlldr hr_schema_name/password control='file_location_of_control_file_or_execution_file' direct=true`

Why direct=true here? It loads very fast because it bypasses all constraints and triggers.
If direct=false, constraints and triggers are checked and then the load executes.
Excluding one column
If some column should not be loaded, then use FILLER.

```
load data
infile 'path_of_the_file.csv'
infile 'path_of_the_file.txt'
insert into table sample
fields terminated by ','
(id, name, salary filler)
```

```
load data
infile 'path_of_the_file.csv'
infile 'path_of_the_file.txt'
insert into table sample
fields terminated by ','
(id, name filler, salary)
```

In the above examples, salary and name respectively will be empty; the data for a FILLER field is not loaded.
Condition
WHEN: the loaded data should obey the condition you give.
If the condition fails, the failed rows are stored in the DISCARD FILE.
If there is an Oracle error, the rows get captured in the BAD FILE.
The WHEN condition is used here:

```
load data
infile 'path_of_the_file.csv'
infile 'path_of_the_file.txt'
insert into table sample when ?
fields terminated by ','
(id, name filler, salary)
```
How to get the process summary?
It will be stored in a log file. You can set all the files in the command itself, like below:

`sqlldr hr_schema_name/password control='file_location_of_control_file_or_execution_file' log=summary.log bad=sample.bad discard=sample.dsc direct=true`

Even if you do not give file names here, defaults will be generated automatically.
An important option to take note of here is skip.
If you want to skip rows while loading, you can specify it in the command itself:

`sqlldr hr_schema_name/password control='file_location_of_control_file_or_execution_file' skip=2 direct=true`

The first 2 rows will be skipped.
Notes
The SQL*Loader shortcut keyword is sqlldr.
insert into table sample --> this works only when the table is EMPTY. If you try to execute it again, it throws an error.
So you can use append:

```
load data
infile 'path_of_the_file.csv'
infile 'path_of_the_file.txt'
append into table sample
fields terminated by ','
(id, name, salary)
```
You can also use truncate (it will delete the old data and insert the new data again):

```
load data
infile 'path_of_the_file.csv'
infile 'path_of_the_file.txt'
truncate into table sample
fields terminated by ','
(id, name, salary)
```

Task
For a particular column, instead of comma (,) the separator used is (#). How do you load it?
I have always had a challenge with reading, whether it is technical documentation, general documentation or anything else. If I remember correctly, the last time I read something continuously was in my school and college days, and that too nothing extraordinary, just weekly magazines like Anandha Vikatan/Kumudham and, very rarely, newspapers. That improved a little when I started working and regularly read the news headlines from "The Hindu". That is all the reading I have done in my entire life. I have this habit of purchasing books and thinking.. one day.. that One Day will come and I will become a pro reader and read all the books. But that has not happened to date.
So I was pouring out all this frustration in the "#Kaniyam" IRC chat, along with some more concerns, like my trouble with planning things. I used to start with one task, and if I came across something else I would leave whatever I was doing and start the new item, and it went on and on. Then Srini from the Kaniyam IRC group suggested various ideas to try, and one such idea was reading the book "Eat the Frog".
I wouldn't say the book has changed me completely overnight, but the practice of reading a few pages continuously gives a sense of satisfaction. I am not saying I read 20-30 pages continuously; instead, I planned to complete a chapter whenever I started.
The book as such has things we often hear or see elsewhere, but more importantly, it is structured. When I say it is structured: it starts with an explanation of why the author named the book "Eat the Frog".
Imagine that, in our daily life, eating a frog is one of our primary tasks. How will one plan for it? Because eating a frog is not that easy. And if you have more than one frog, how will you plan for that? Here the author compares the frogs to the tasks we have in a day. Not all tasks are as difficult as eating a frog. So if we have frogs of different sizes and the goal is to finish eating them all in a day, how will one approach it? He will target finishing the biggest one first, then the next, and so on. By the time one completes the biggest, he will have the confidence to go for the next smaller frog.
This analogy works the same way for our daily tasks. Rather than picking the easy ones and saving the bulk of the harder tasks for later, plan to finish the hardest or most difficult task first; that will help us move on to the next difficult task with a lot more confidence.
This is primarily what Chapter 1 discusses. After reading it, I wanted to see if the approach works, so I started implementing it immediately by listing the items I wanted to complete that day. I then sorted those items based on difficulty (in terms of time). I did not create an exhaustive list, rather just 4 tasks for that day, of which 2 were time-consuming or difficult tasks.
By the end of the day I had completed the top 2, leaving the remaining 2. I still felt happy, because I had completed the 2 harder ones. I moved the pending 2 to the next day and kept their priority on top.
So far it is working and I will continue to write about the other chapters as I complete reading them.
"Let us all get into the habit of reading and celebrate.. happy reading"
Locust provides two special methods, on_start and on_stop, to handle setup and teardown actions for individual users. These methods allow you to execute specific code when a simulated user starts or stops, making it easier to simulate real-world scenarios like login/logout or initialization tasks.
In this blog, we'll cover,
What on_start and on_stop do.
Why they are important.
Practical examples of using these methods.
Running and testing Locust scripts.
What Are on_start and on_stop?
on_start: This method is executed once when a new simulated user starts. It's commonly used for tasks like logging in or setting up the environment.
on_stop: This method is executed once when a simulated user stops. It's often used for cleanup tasks like logging out.
These methods are executed only once per user during the lifecycle of a test, as opposed to tasks that are run repeatedly.
Why Use on_start and on_stop?
Simulating Real User Behavior: Real users often start a session with an action (e.g., login) and end it with another (e.g., logout).
Initial Setup: Some tasks require initializing data or setting up user state before performing other actions.
Cleanup: Ensure that actions like logout are performed to leave the system in a clean state.
Examples
Basic Usage of on_start and on_stop
In this example, we just print `on start` and `on stop` for each user while running a task.

```python
from locust import User, task, between
from datetime import datetime


class MyUser(User):
    wait_time = between(1, 5)

    def on_start(self):
        print("on start")

    def on_stop(self):
        print("on stop")

    @task
    def print_datetime(self):
        print(datetime.now())
```
```java
// 1. Reverse an ArrayList without using inbuilt method
// 2. Find Duplicate Elements in a List
// 3. Alphabetical Order and Ascending Order (Done in ArrayList)
// 4. Merge Two Lists and Remove Duplicates
// 5. Removing Even Nos from the List
// 6. Array to List, List to Array

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;

public class CollectionsInJava {
    public static void main(String[] args) {
        ArrayList<String> names = new ArrayList<>(Arrays.asList("Abinaya", "Ramya", "Gowri", "Swetha", "Sugi",
                "Anusuya", "Moogambigai", "Jasima", "Aysha"));
        ArrayList<Integer> al2 = new ArrayList<>(Arrays.asList(100, 90, 30, 20, 60, 40));
        ArrayList<Integer> al = insertValuesIntoAL();

        // 1. Reverse an ArrayList without using inbuilt method
        System.out.println("Before Reversing ArrayList=" + al);
        System.out.println("Reversed ArrayList=" + reverseArrayList(al));

        // 2. Find Duplicate Elements in a List
        System.out.println("Duplicates in ArrayList=" + findDuplicates(al));

        // 3. Alphabetical Order and Ascending Order
        System.out.println("Before Order = " + names);
        Collections.sort(names);
        System.out.println("After Alphabetical Order = " + names);
        Collections.sort(al);
        System.out.println("Ascending Order = " + al);

        // 4. Merge Two Lists and Remove Duplicates
        System.out.println("List -1 = " + al);
        System.out.println("List -2 = " + al2);
        System.out.println("After Merging and Removing Duplicates=" + mergeTwoLists(al, al2));

        // 5. Removing Even Nos from the List
        System.out.println("After Removing Even Nos from the List-1 = " + removeEvenNos(al));

        // 6. Array to List, List to Array
        arrayToListViceVersa(al, new int[] {11, 12, 13, 14, 15}); // Sending ArrayList and anonymous array
    }

    // 1. Reverse an ArrayList without using inbuilt method
    private static ArrayList<Integer> reverseArrayList(ArrayList<Integer> al) {
        int n = al.size();
        int j = n - 1, mid = n / 2;
        for (int i = 0; i < mid; i++) { // swap elements from both ends towards the middle
            int temp = al.get(i);
            al.set(i, al.get(j));
            al.set(j--, temp);
        }
        return al;
    }

    // 2. Find Duplicate Elements in a List
    private static ArrayList<Integer> findDuplicates(ArrayList<Integer> al) {
        HashSet<Integer> hs = new HashSet<>();
        ArrayList<Integer> arl = new ArrayList<>();
        for (int ele : al) {
            if (!hs.add(ele)) arl.add(ele); // add() returns false for duplicates
        }
        return arl;
    }

    // 4. Merge Two Lists into one and Remove Duplicates
    private static HashSet<Integer> mergeTwoLists(ArrayList<Integer> arl1, ArrayList<Integer> arl2) {
        HashSet<Integer> hs = new HashSet<>(); // a set keeps only unique elements
        hs.addAll(arl1);
        hs.addAll(arl2);
        return hs;
    }

    // 5. Removing Even Nos from the List
    private static ArrayList<Integer> removeEvenNos(ArrayList<Integer> al) {
        ArrayList<Integer> res = new ArrayList<>();
        Iterator<Integer> itr = al.iterator();
        while (itr.hasNext()) {
            int ele = itr.next();
            if (ele % 2 == 1) res.add(ele); // keep only the odd numbers
        }
        return res;
    }

    // 6. Array to List, List to Array (the int[] parameter is just demo input here)
    private static void arrayToListViceVersa(ArrayList<Integer> arl, int[] ar) {
        Integer[] arr = arl.toArray(new Integer[0]);
        System.out.println("Convert List to Array = " + Arrays.toString(arr));
        List<Integer> lst = Arrays.asList(arr);
        System.out.println("Convert Array to List = " + lst);
    }

    private static ArrayList<Integer> insertValuesIntoAL() {
        Integer[] ar = {30, 40, 60, 10, 94, 23, 5, 46, 40, 94};
        ArrayList<Integer> arl = new ArrayList<>();
        Collections.addAll(arl, ar);
        // Collections.reverse(arl); // IN-BUILT METHOD
        return arl;
        // Alternative: Arrays.sort(ar); List<Integer> lst = Arrays.asList(ar); return new ArrayList<>(lst);
    }
}
```
Locust allows you to define multiple user types in your load tests, enabling you to simulate different user behaviors and traffic patterns. This is particularly useful when your application serves diverse client types, such as web and mobile users, each with unique interaction patterns.
In this blog, we will
Discuss the concept of multiple user types in Locust.
Explore how to implement multiple user classes with weights.
Run and analyze the test results.
Why Use Multiple User Types?
In real-world applications, different user groups interact with your system differently. For example,
Web Users might spend more time browsing through the UI.
Mobile Users could make faster but more frequent requests.
By simulating distinct user types with varying behaviors, you can identify performance bottlenecks across all client groups.
Understanding User Classes and Weights
Locust provides the ability to define user classes by extending the User or HttpUser base class. Each user class can,
Have a unique set of tasks.
Define its own wait times.
Be assigned a weight, which determines the proportion of that user type in the simulation.
For example, if WebUser has a weight of 1 and MobileUser has a weight of 2, the simulation will spawn 1 web user for every 2 mobile users.
Example: Simulating Web and Mobile Users
Below is an example Locust test with two user types:

```python
from locust import User, task, between


# Define a user class for web users
class MyWebUser(User):
    wait_time = between(1, 3)  # Web users wait between 1 and 3 seconds between tasks
    weight = 1  # Web users are less frequent

    @task
    def login_url(self):
        print("I am logging in as a Web User")


# Define a user class for mobile users
class MyMobileUser(User):
    wait_time = between(1, 3)  # Mobile users wait between 1 and 3 seconds
    weight = 2  # Mobile users are more frequent

    @task
    def login_url(self):
        print("I am logging in as a Mobile User")
```
How Locust Uses Weights
With the above configuration:
For every 3 users spawned, 1 will be a Web User, and 2 will be Mobile Users (based on their weights: 1 and 2).
Locust automatically handles spawning these users in the specified ratio.
Running the Locust Test
Save the code: save the above code in a file named locustfile.py.
Start Locust: open your terminal and run `locust -f locustfile.py`
Host: If you are testing an actual API or website, specify its URL (e.g., http://localhost:8000).
Analyze Results
Observe how Locust spawns the users according to their weights and tracks metrics like request counts and response times.
After running the test:
Check the distribution of requests to ensure it matches the weight ratio (e.g., for every 1 web user request, there should be ~2 mobile user requests).
Use the metrics (response time, failure rate) to evaluate performance for each user type.
Locust is an excellent load testing tool, enabling developers to simulate concurrent user traffic on their applications. One of its powerful features is wait times, which simulate the realistic user think time between consecutive tasks. By customizing wait times, you can emulate user behavior more effectively, making your tests reflect actual usage patterns.
In this blog, we'll cover,
What wait times are in Locust.
Built-in wait time options.
Creating custom wait times.
A full example with instructions to run the test.
What Are Wait Times in Locust?
In real-world scenarios, users don't interact with applications continuously. After performing an action (e.g., submitting a form), they often pause before the next action. This pause is called a wait time in Locust, and it plays a crucial role in mimicking real-life user behavior.
Locust provides several ways to define these wait times within your test scenarios.
FastAPI App Overview
Here's the FastAPI app that we'll test:

```python
from fastapi import FastAPI

# Create a FastAPI app instance
app = FastAPI()


# Define a route with a GET method
@app.get("/")
def read_root():
    return {"message": "Welcome to FastAPI!"}


@app.get("/items/{item_id}")
def read_item(item_id: int, q: str = None):
    return {"item_id": item_id, "q": q}
```
Locust Examples for FastAPI
1. Constant Wait Time Example
Here, we'll simulate constant pauses between user requests:

```python
from locust import HttpUser, task, constant


class FastAPIUser(HttpUser):
    wait_time = constant(2)  # Wait for 2 seconds between requests

    @task
    def get_root(self):
        self.client.get("/")  # Simulates a GET request to the root endpoint

    @task
    def get_item(self):
        self.client.get("/items/42?q=test")  # Simulates a GET request with path and query parameters
```
2. Between Wait Time Example
Simulating random pauses between requests:

```python
from locust import HttpUser, task, between


class FastAPIUser(HttpUser):
    wait_time = between(1, 5)  # Random wait time between 1 and 5 seconds

    @task(3)  # Weighted task: this runs 3 times more often
    def get_root(self):
        self.client.get("/")

    @task(1)
    def get_item(self):
        self.client.get("/items/10?q=locust")
```
3. Custom Wait Time Example
Using a custom wait time function to introduce more complex user behavior:

```python
import random

from locust import HttpUser, task


def custom_wait():
    return max(1, random.normalvariate(3, 1))  # Normal distribution (mean: 3s, stddev: 1s)


class FastAPIUser(HttpUser):
    wait_time = custom_wait

    @task
    def get_root(self):
        self.client.get("/")

    @task
    def get_item(self):
        self.client.get("/items/99?q=custom")
```
Full Test Example
Combining all the above elements, here's a complete Locust test for your FastAPI app.

```python
import random

from locust import HttpUser, task


# Custom wait time function
def custom_wait():
    return max(1, random.uniform(1, 3))  # Random wait time between 1 and 3 seconds


class FastAPIUser(HttpUser):
    wait_time = custom_wait  # Use the custom wait time

    @task(3)
    def browse_homepage(self):
        """Simulates browsing the root endpoint."""
        self.client.get("/")

    @task(1)
    def browse_item(self):
        """Simulates fetching an item with ID and query parameter."""
        item_id = random.randint(1, 100)
        self.client.get(f"/items/{item_id}?q=test")
```
Running Locust for FastAPI
1. Run your FastAPI app: save the FastAPI app code in a file (e.g., main.py) and start the server:

`uvicorn main:app --reload`

By default, the app will run on http://127.0.0.1:8000.
2. Run Locust: save the Locust file as locustfile.py and start Locust with `locust -f locustfile.py`
Write-Ahead Logging (WAL) is a fundamental feature of PostgreSQL, ensuring data integrity and facilitating critical functionalities like crash recovery, replication, and backup.
This series of experimentation explores WAL in detail, its importance, how it works, and provides examples to demonstrate its usage.
What is Write-Ahead Logging (WAL)?
WAL is a logging mechanism where changes to the database are first written to a log file before being applied to the actual data files. This ensures that in case of a crash or unexpected failure, the database can recover and replay these logs to restore its state.
Your question is right!
Why do we need WAL when we already take periodic backups?
Write-Ahead Logging (WAL) is critical even when periodic backups are in place, because it complements backups to provide data consistency, durability, and flexibility in the following scenarios.
1. Crash Recovery
Why It's Important: Periodic backups only capture the database state at specific intervals. If a crash occurs after the latest backup, all changes made since that backup would be lost.
Role of WAL: WAL ensures that any committed transactions not yet written to data files (due to PostgreSQL's lazy-writing behavior) are recoverable. During recovery, PostgreSQL replays the WAL logs to restore the database to its last consistent state, bridging the gap between the last checkpoint and the crash.
Example:
Backup Taken: At 12:00 PM.
Crash Occurs: At 1:30 PM.
Without WAL: All changes after 12:00 PM are lost.
With WAL: All changes up to 1:30 PM are recovered.
2. Point-in-Time Recovery (PITR)
Why It's Important: Periodic backups restore the database to the exact time of the backup. However, this may not be sufficient if you need to recover to a specific point, such as just before a mistake (e.g., accidental data deletion).
Role of WAL: WAL records every change, enabling you to replay transactions up to a specific time. This allows fine-grained recovery beyond what periodic backups can provide.
Example:
Backup Taken: At 12:00 AM.
Mistake Made: At 9:45 AM, an important table is accidentally dropped.
Without WAL: Restore only to 12:00 AM, losing 9 hours and 45 minutes of data.
With WAL: Restore to 9:44 AM, recovering all valid changes except the accidental drop.
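As a rough sketch of how this recovery is driven in practice (the archive path and timestamp below are hypothetical examples; check the exact settings against the PostgreSQL documentation for your version), the restored server is given a recovery target before it replays the archived WAL:

```ini
# postgresql.conf on the restored server (PostgreSQL 12+),
# after restoring the 12:00 AM base backup.
restore_command = 'cp /archive/%f %p'         # fetch archived WAL segments
recovery_target_time = '2024-01-15 09:44:00'  # stop replay just before the accidental drop
recovery_target_action = 'pause'              # pause at the target so the state can be verified
```

Creating an empty recovery.signal file in the data directory then tells the server to start up in recovery mode and replay WAL only up to the configured target.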
3. Replication and High Availability
Why It's Important: In a high-availability setup, replicas must stay synchronized with the primary database to handle failovers. Periodic backups cannot provide real-time synchronization.
Role of WAL: WAL enables streaming replication by transmitting logs to replicas, ensuring near real-time synchronization.
Example:
A primary database sends WAL logs to replicas as changes occur. If the primary fails, a replica can quickly take over without data loss.
4. Handling Incremental Changes
Why It's Important: Periodic backups store complete snapshots of the database, which can be time-consuming and resource-intensive. They also do not capture intermediate changes.
Role of WAL: WAL allows incremental updates by recording only the changes made since the last backup or checkpoint. This is crucial for efficient data recovery and backup optimization.
5. Ensuring Data Durability
Why It's Important: Even during normal operations, a database crash (e.g., power failure) can occur. Without WAL, transactions committed by users but not yet flushed to disk are lost.
Role of WAL: WAL ensures durability by logging all changes before acknowledging transaction commits. This guarantees that committed transactions are recoverable even if the system crashes before flushing the changes to data files.
6. Supporting Hot Backups
Why It's Important: For large, active databases, taking a backup while the database is running can result in inconsistent snapshots.
Role of WAL: WAL ensures consistency by recording changes that occur during the backup process. When replayed, these logs synchronize the backup, ensuring it is valid and consistent.
7. Debugging and Auditing
Why It's Important: Periodic backups are static snapshots and don't provide a record of what happened in the database between backups.
Role of WAL: WAL contains a sequential record of all database modifications, which can help in debugging issues or auditing transactions.
Feature                | Periodic Backups                 | Write-Ahead Logging
Crash Recovery         | Limited to the last backup       | Ensures full recovery to the crash point
Point-in-Time Recovery | Restores only to the backup time | Allows recovery to any specific point
Replication            | Not supported                    | Enables real-time replication
Efficiency             | Full snapshot                    | Incremental changes
Durability             | Relies on backup frequency       | Guarantees transaction durability
In upcoming sessions, we will experiment with each of these failure scenarios to understand them in practice.
This proof of concept requires the external dependency parse (pip install parse) for parsing Python format strings with placeholders.
import parse
from date import TA_MONTHS  # project-local list of Tamil month names
from date import datetime

# POC of a Tamil date-time parser
def strptime(format='{month}, {date} {year}', date_string="நவம்பர், 16 2024"):
    parsed = parse.parse(format, date_string)
    month = TA_MONTHS.index(parsed['month']) + 1  # month name -> 1-based month number
    date = int(parsed['date'])
    year = int(parsed['year'])
    return datetime(year, month, date)

print(strptime("{date}-{month}-{year}", "16-நவம்பர்-2024"))
# dt = datetime(2024, 11, 16)
# print(dt.strptime_ta("நவம்பர் , 16 2024", "%m %d %Y"))
pwd - When you first open the terminal, you are in your user's home directory. To find out which directory you are in, use the "pwd" command. It prints the absolute path, meaning the path that starts from the root. The root is the base of the Linux file system and is denoted by a forward slash ( / ). The user directory is usually something like "/home/username".
ls - Use the "ls" command to list the files in the directory you are in. You can see hidden files as well by using "ls -a".
cd - Use the "cd" command to change to a directory. "cd" expects the name or path of the target directory as input.
mkdir & rmdir - Use the "mkdir" command when you need to create a folder or directory. Use "rmdir" to delete a directory, but note that "rmdir" can only delete an empty directory. To delete a directory containing files, use "rm -r".
rm - Use the "rm" command to delete a file. Use "rm -r" to recursively delete a directory and all files within it.
touch - The "touch" command is used to create an empty file. For example, "touch new.txt".
cp - Use the "cp" command to copy files through the command line.
mv - Use the "mv" command to move files through the command line. You can also use "mv" to rename a file.
cat - Use the "cat" command to display the contents of a file. It is an easy way to view short files and programs.
vi - You can create a new file or modify an existing file using this editor.
Hi folks, welcome to my blog. Here we are going to see some basic and important commands of Linux.
One of the most distinctive features of Linux is its command-line interface (CLI). Knowing a few basic commands can unlock many possibilities in Linux.
Essential Commands
Here are some fundamental commands to get you started:
ls - Lists files and directories in the current directory.
ls
cd - Changes to a different directory.
cd /home/user/Documents
pwd - Prints the current working directory.
pwd
cp - Copies files or directories.
cp file1.txt /home/user/backup/
mv - Moves or renames files or directories.
mv file1.txt file2.txt
rm - Removes files or directories.
rm file1.txt
mkdir - Creates a new directory.
mkdir new_folder
touch - Creates a new empty file.
touch newfile.txt
cat - Displays the contents of a file.
cat file1.txt
nano or vim - Opens a file in a text editor.
nano file1.txt
chmod - Changes file permissions.
chmod 755 file1.txt
ps - Displays active processes.
ps
kill - Terminates a process.
kill [PID]
Each command is powerful on its own, and combining them enables you to manage your files and system effectively. We will see more basics and interesting things about Linux in upcoming blog posts.
In today's fast-paced digital world, delivering a reliable and scalable application is key to providing a positive user experience.
One of the most effective ways to guarantee this is through load testing. This post will walk you through the fundamentals of load testing, real-time examples of its application, and crucial metrics to watch for.
What is Load Testing?
Load testing is a type of performance testing that simulates real-world usage of an application. By applying load to a system, testers observe how it behaves under peak and normal conditions. The primary goal is to identify any performance bottlenecks, ensure the system can handle expected user traffic, and maintain optimal performance.
Load testing answers these critical questions:
Can the application handle the expected user load?
How does performance degrade as the load increases?
What is the system's breaking point?
Why is Load Testing Important?
Without load testing, applications are vulnerable to crashes, slow response times, and unavailability, all of which can lead to a poor user experience, lost revenue, and brand damage. Proactive load testing allows teams to address issues before they impact end-users.
Real-Time Load Testing Examples
Let's explore some real-world examples that demonstrate the importance of load testing.
Example 1: E-commerce Website During a Sale Event
An online retailer preparing for a Black Friday sale knows that traffic will spike. They conduct load testing to simulate thousands of users browsing, adding items to their cart, and checking out simultaneously. By analyzing the system's response under these conditions, the retailer can identify weak points in the checkout process or database and make necessary optimizations.
Example 2: Video Streaming Platform Launch
A new streaming platform is preparing for launch, expecting millions of users. Through load testing, the team simulates high traffic, testing how well video streaming performs under maximum user load. This testing also helps check if CDN (Content Delivery Network) configurations are optimized for global access, ensuring minimal buffering and downtime during peak hours.
Example 3: Financial Services Platform During Market Hours
A trading platform experiences intense usage during market open and close hours. Load testing helps simulate these peak times, ensuring that real-time data updates, transactions, and account management work flawlessly. Testing for these scenarios helps avoid issues like slow trade executions and platform unavailability during critical trading periods.
Key Metrics to Monitor in Load Testing
Understanding key metrics is essential for interpreting load test results. Here are some critical metrics to focus on:
1. Response Time
Definition: The time taken by the system to respond to a request.
Why It Matters: Slow response times can frustrate users and indicate bottlenecks.
Example Thresholds: For websites, a response time below 2 seconds is considered acceptable.
2. Throughput
Definition: The number of requests processed per second.
Why It Matters: Throughput indicates how many concurrent users your application can handle.
Real-Time Use Case: In our e-commerce example, the retailer would track throughput to ensure the checkout process doesn't become a bottleneck.
3. Error Rate
Definition: The percentage of failed requests out of total requests.
Why It Matters: A high error rate could indicate application instability under load.
Real-Time Use Case: The trading platform monitors the error rate during market close, ensuring the system doesn't throw errors under peak trading load.
4. CPU and Memory Utilization
Definition: The percentage of CPU and memory resources used during the load test.
Why It Matters: High CPU or memory utilization can signal that the server may not handle additional load.
Real-Time Use Case: The video streaming platform tracks memory usage to prevent lag or interruptions in streaming as users increase.
5. Concurrent Users
Definition: The number of users active on the application at the same time.
Why It Matters: Concurrent users help you understand how much load the system can handle before performance starts degrading.
Real-Time Use Case: The retailer tests how many concurrent users can shop simultaneously without crashing the website.
6. Latency
Definition: The time it takes for a request to travel from the client to the server and back.
Why It Matters: High latency indicates network or processing delays that can slow down the user experience.
Real-Time Use Case: For a financial app, reducing latency ensures trades execute in near real-time, which is crucial for users during volatile market conditions.
7. 95th and 99th Percentile Response Times
Definition: The time within which 95% or 99% of requests are completed.
Why It Matters: These percentiles help identify outliers that may impact user experience.
Real-Time Use Case: The streaming service may analyze these percentiles to ensure smooth playback for most users, even under peak loads.
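Most of these metrics can be derived directly from raw request samples. The plain-Python sketch below (no load-testing tool assumed; the sample durations are made up) shows how error rate, throughput, and a nearest-rank 95th-percentile response time fall out of recorded (duration, success) pairs:

```python
# Plain-Python sketch of how the metrics above fall out of raw request
# samples; the (duration_seconds, succeeded) pairs are made-up data.
samples = [(0.12, True), (0.35, True), (0.48, True), (1.90, True),
           (0.22, False), (0.31, True), (0.27, True), (2.40, True),
           (0.19, True), (0.44, True)]

def percentile(durations, pct):
    # Nearest-rank percentile: the duration below which pct% of
    # requests completed.
    ordered = sorted(durations)
    index = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[index]

durations = [d for d, _ in samples]
total_time = sum(durations)  # as if the requests ran back to back

error_rate = 100 * sum(1 for _, ok in samples if not ok) / len(samples)
throughput = len(samples) / total_time  # requests per second
p95 = percentile(durations, 95)

print(f"error rate : {error_rate:.1f}%")  # 10.0%
print(f"throughput : {throughput:.2f} req/s")
print(f"p95 latency: {p95:.2f} s")
```

Real tools such as JMeter, k6, or Locust collect these samples for you under concurrent load, but the arithmetic behind their reports is essentially this.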
Best Practices for Effective Load Testing
Set Clear Objectives: Define specific goals, such as the expected number of concurrent users or acceptable response times, based on the nature of the application.
Use Realistic Load Scenarios: Create scenarios that mimic actual user behavior, including peak times, user interactions, and geographical diversity.
Analyze Bottlenecks and Optimize: Use test results to identify and address performance bottlenecks, whether in the application code, database queries, or server configurations.
Monitor in Real-Time: Track metrics like response time, throughput, and error rates in real-time to identify issues as they arise during the test.
Repeat and Compare: Conduct multiple load tests to ensure consistent performance over time, especially after any significant update or release.
Load testing is crucial for building a resilient and scalable application. By using real-world scenarios and keeping a close eye on metrics like response time, throughput, and error rates, you can ensure your system performs well under load. Proactive load testing helps to deliver a smooth, reliable experience for users, even during peak times.