
Prompting & my life

Source: Bing AI

In the AI era, we all use AI in our daily lives, and prompting is an efficient way of using tools like ChatGPT, Perplexity, and others. Today, I would like to share the "Power of Prompting" in this blog. But to show that power, let me first share my story of how prompting changed my life.

As a small introduction, my name is Anand. I am a Front-End Developer and a tech & finance enthusiast, and I aspire to work as a developer at a top IT company.

Where do I start?

I did my schooling at a government-aided school, one filled with discipline and a focus on education. But since the school was not strict about language, I had no confidence in speaking English. And whenever I did try to speak in English, well, we all know how friends respond, and how they tease us.

Fast forward to 2022: I completed my schooling and joined a BSc in Computer Science. At that time, to be frank, I had no knowledge of tech, coding, hardware, or any of it. But I had the spark to learn. As we all know, the internet is filled with knowledge, but language is a big barrier to gaining it.

Here enters the starting point of today's AI race: ChatGPT, launched in November 2022. Thanks to good roommates in my first year, I came to know the power of the internet, and at that time I gained knowledge using only YouTube. Within one week of its release, I started using ChatGPT.

It helped me understand tech, education, and whatever else I wanted, using simple English, even English full of grammar mistakes. After a lot of prompting, I gained more knowledge about tech. Then I moved on to courses on platforms like Coursera and Udemy, and later to blogs, a few research papers, and more.

Fast forward to today: I have good foundational knowledge across tech and finance. The barriers of English, and of not knowing how to search the internet deeply, are broken, and AI acts as a good mentor.

Nowadays I gain knowledge from many sources: YouTube, blogs, AI, research papers, books. But it all began there, because without that beginning, I would be nothing now.

I hope this blog is an interesting take on "Prompting & my life". Stay tuned for my tech & finance blogs.

Connect with Me:

🎯 PostgreSQL Zero to Hero with Parottasalna – 2 Day Bootcamp (FREE!) 🚀

2 March 2025 at 07:09

Databases power the backbone of modern applications, and PostgreSQL is one of the most powerful open-source relational databases, trusted by top companies worldwide. Whether you're a beginner or a developer looking to sharpen your database skills, this FREE bootcamp will take you from Zero to Hero in PostgreSQL!

What You'll Learn

✅ PostgreSQL fundamentals & installation
✅ Postgres architecture
✅ Writing optimized queries
✅ Indexing & performance tuning
✅ Transactions & locking mechanisms
✅ Advanced joins, CTEs & subqueries
✅ Real-world best practices & hands-on exercises

This intensive, hands-on bootcamp is designed for developers, DBAs, and tech enthusiasts who want to master PostgreSQL from scratch and apply it in real-world scenarios.

Who Should Attend?

🔹 Beginners eager to learn databases
🔹 Developers & Engineers working with PostgreSQL
🔹 Anyone looking to optimize their SQL skills

📅 Date: March 22, 23 -> (Moved to April 5, 6)
⏰ Time: Will be finalized later.
📍 Location: Online
💰 Cost: 100% FREE 🎉

🔗 RSVP Here

Prerequisite

  1. Check out this playlist of our previous Postgres sessions: https://www.youtube.com/playlist?list=PLiutOxBS1Miy3PPwxuvlGRpmNo724mAlt

🎉 This bootcamp is completely FREE – learn without any cost! 🎉

💡 Spots are limited – RSVP now to reserve your seat!

How Stress Testing Can Make Systems More Attractive?

1 March 2025 at 06:06

Introduction

Stress testing is a critical aspect of performance testing that evaluates how a system performs under extreme loads. Unlike load testing, which simulates expected user traffic, stress testing pushes a system beyond its limits to identify breaking points and measure recovery capabilities.

In this blog, we will explore stress testing using K6, an open-source load testing tool, with detailed explanations and full examples to help you implement stress testing effectively.

Why Stress Testing?

Stress testing helps you:

  • Identify the maximum capacity of your system.
  • Detect potential failures and bottlenecks.
  • Measure system stability and recovery under high loads.
  • Ensure infrastructure can handle unexpected spikes in traffic.

Setting Up K6 for Stress Testing

Installing K6

# macOS
brew install k6  

# Ubuntu/Debian
sudo apt install k6  

# Using Docker
docker pull grafana/k6  

Understanding Stress Testing Scenarios

K6 provides various executors to simulate different traffic patterns. For stress testing, we mainly use:

  1. ramping-vus – Gradually increases virtual users to a high level.
  2. constant-vus – Maintains a fixed high number of virtual users.
  3. A spike pattern – Simulates a sudden surge in traffic (built with ramping-vus and steep stages, as in Example 3 below).

Example 1: Basic Stress Test with Ramping VUs

This script gradually increases the number of virtual users, holds a peak load, and then reduces it.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '1m', target: 100 }, // Ramp up to 100 users in 1 min
    { duration: '3m', target: 100 }, // Stay at 100 users for 3 min
    { duration: '1m', target: 0 },   // Ramp down to 0 users
  ],
};

export default function () {
  let res = http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • The test starts with 0 users and ramps up to 100 users in 1 minute.
  • Holds 100 users for 3 minutes.
  • Gradually reduces load to 0 users.
  • The sleep(1) function helps simulate real user behavior between requests.

Example 2: Constant High Load Test

This test maintains a consistently high number of virtual users.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  vus: 200, // 200 virtual users
  duration: '5m', // Run the test for 5 minutes
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • 200 virtual users are constantly hitting the endpoint for 5 minutes.
  • Helps evaluate system performance under sustained high traffic.

Example 3: Spike Testing (Sudden Traffic Surge)

This test simulates a sudden spike in traffic, followed by a drop.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '10s', target: 10 },  // Start with 10 users
    { duration: '10s', target: 500 }, // Spike to 500 users
    { duration: '10s', target: 10 },  // Drop back to 10 users
  ],
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • Starts with 10 users.
  • Spikes suddenly to 500 users in 10 seconds.
  • Drops back to 10 users.
  • Helps determine how the system handles sudden surges in traffic.

Analyzing Test Results

After running the tests, K6 provides detailed statistics:

checks..................: 100.00% ✓ 5000 ✗ 0
http_req_duration......: avg=300ms min=200ms max=2000ms
http_reqs..............: 5000 requests
vus_max................: 500

Key Metrics to Analyze

  • http_req_duration → Measures response time.
  • vus_max → Maximum concurrent virtual users.
  • http_reqs → Total number of requests.
  • http_req_failed → Rate of failed requests (K6's error metric).
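
To turn these metrics into pass/fail criteria, K6 supports thresholds defined directly in the script options; the run fails with a non-zero exit code if a threshold is breached. A minimal sketch, with illustrative limits rather than recommendations:

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  vus: 50,
  duration: '2m',
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must finish in under 500ms
    http_req_failed: ['rate<0.01'],   // less than 1% of requests may fail
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

The non-zero exit code makes it straightforward to gate CI pipelines on stress test results.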

Stress testing is vital to ensure application stability and scalability. Using K6, we can simulate different stress scenarios like ramping load, constant high load, and spikes to identify system weaknesses before they affect users.

Achieving Better User Engagement via Realistic Load Testing in K6

1 March 2025 at 05:55

Introduction

Load testing is essential to evaluate how a system behaves under expected and peak loads. Traditionally, we rely on metrics like requests per second (RPS), response time, and error rates. However, an insightful approach called Average Load Testing has been discussed recently. This blog explores that concept in detail, providing practical examples to help you apply it effectively.

Understanding Average Load Testing

Average Load Testing focuses on simulating real-world load patterns rather than traditional peak load tests. Instead of sending a fixed number of requests per second, this approach:

  • Generates requests based on the average concurrency over time.
  • More accurately reflects real-world traffic patterns.
  • Helps identify performance bottlenecks in a realistic manner.

Setting Up Load Testing with K6

K6 is an excellent tool for implementing Average Load Testing. Let's go through practical examples of setting up such tests.

Install K6

brew install k6  # macOS
sudo apt install k6  # Ubuntu/Debian
docker pull grafana/k6  # Using Docker

Example 1: Basic K6 Script for Average Load Testing

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    avg_load: {
      executor: 'constant-arrival-rate',
      rate: 10, // 10 requests per second
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 20,
      maxVUs: 50,
    },
  },
};

export default function () {
  let res = http.get('https://test-api.example.com');
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The constant-arrival-rate executor ensures a steady request rate.
  • rate: 10 sends 10 requests per second.
  • duration: '2m' runs the test for 2 minutes.
  • preAllocatedVUs: 20 and maxVUs: 50 define virtual users needed to sustain the load.
  • The script logs response times to the console.

Example 2: Testing with Varying Load

To better reflect real-world scenarios, we can use the ramping-arrival-rate executor to simulate gradual increases in traffic:

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    ramping_load: {
      executor: 'ramping-arrival-rate',
      startRate: 5, // Start with 5 requests/sec
      timeUnit: '1s',
      preAllocatedVUs: 50,
      maxVUs: 100,
      stages: [
        { duration: '1m', target: 20 },
        { duration: '2m', target: 50 },
        { duration: '3m', target: 100 },
      ],
    },
  },
};

export default function () {
  let res = http.get('https://test-api.example.com');
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The ramping-arrival-rate gradually increases requests per second over time.
  • The stages array defines a progression from 5 to 100 requests/sec over 6 minutes.
  • Logs response times to help analyze system performance.

Example 3: Load Testing with Multiple Endpoints

In real applications, multiple endpoints are often tested simultaneously. Here's how to test different API routes:

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  scenarios: {
    multiple_endpoints: {
      executor: 'constant-arrival-rate',
      rate: 15, // 15 requests per second
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 30,
      maxVUs: 60,
    },
  },
};

export default function () {
  let urls = [
    'https://test-api.example.com/users',
    'https://test-api.example.com/orders',
    'https://test-api.example.com/products'
  ];
  
  let res = http.get(urls[Math.floor(Math.random() * urls.length)]);
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The script randomly selects an API endpoint to test different routes.
  • Uses check to ensure status codes are 200.
  • Logs response times for deeper insights.

Analyzing Results

To analyze test results, you can store logs or metrics in a database or monitoring tool and visualize trends over time. Some popular options include:

  • Prometheus for time-series data storage.
  • InfluxDB for handling large-scale performance metrics.
  • ELK Stack (Elasticsearch, Logstash, Kibana) for log-based analysis.
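
For example, K6 can stream metrics to InfluxDB straight from the command line via its --out flag; a minimal sketch (a local InfluxDB v1 instance with a database named k6 is assumed):

k6 run --out influxdb=http://localhost:8086/k6 script.js

A dashboard tool such as Grafana can then visualize response times and request rates as the test runs.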

Average Load Testing provides a more realistic way to measure system performance. By leveraging K6, you can create flexible, real-world simulations to optimize your applications effectively.

Learning Notes #41 – Shared Lock and Exclusive Locks | Postgres

6 January 2025 at 14:07

Today, I learnt about the various locking mechanisms used to prevent double updates. In this blog, I make notes on shared locks and exclusive locks for my future self.

What Are Locks in Databases?

Locks are mechanisms used by a DBMS to control access to data. They ensure that transactions are executed in a way that maintains the ACID (Atomicity, Consistency, Isolation, Durability) properties of the database. Locks can be classified into several types, including:

  • Shared Locks (S Locks): Allow multiple transactions to read a resource simultaneously but prevent any transaction from writing to it.
  • Exclusive Locks (X Locks): Allow a single transaction to modify a resource, preventing both reading and writing by other transactions.
  • Intent Locks: Used to signal the type of lock a transaction intends to acquire at a lower level.
  • Deadlock Prevention Locks: Special locks aimed at preventing deadlock scenarios.

Shared Lock

A shared lock is used when a transaction needs to read a resource (e.g., a database row or table) without altering it. Multiple transactions can acquire a shared lock on the same resource simultaneously. However, as long as one or more shared locks exist on a resource, no transaction can acquire an exclusive lock on that resource.


-- Transaction A: Acquire a shared lock on a row
BEGIN;
SELECT * FROM employees WHERE id = 1 FOR SHARE;
-- Transaction B: Acquire a shared lock on the same row
BEGIN;
SELECT * FROM employees WHERE id = 1 FOR SHARE;
-- Both transactions can read the row concurrently
-- Transaction C: Attempt to update the same row
BEGIN;
UPDATE employees SET salary = salary + 1000 WHERE id = 1;
-- Transaction C will be blocked until Transactions A and B release their locks

Key Characteristics of Shared Locks

1. Concurrent Reads

  • Shared locks allow multiple transactions to read the same resource at the same time.
  • This is ideal for operations like SELECT queries that do not modify data.

2. Write Blocking

  • While a shared lock is active, no transaction can modify the locked resource.
  • Prevents dirty writes and ensures read consistency.

3. Compatibility

  • Shared locks are compatible with other shared locks but not with exclusive locks.

When Are Shared Locks Used?

Shared locks are typically employed in read operations under certain isolation levels. For instance,

1. Read Committed Isolation Level:

  • Shared locks are held for the duration of the read operation.
  • Prevents dirty reads by ensuring the data being read is not modified by other transactions during the read.

2. Repeatable Read Isolation Level:

  • Shared locks are held until the transaction completes.
  • Ensures that the data read during a transaction remains consistent and unmodified.

3. Snapshot Isolation:

  • Shared locks may not be explicitly used, as the DBMS creates a consistent snapshot of the data for the transaction.
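
As a small Postgres illustration of the repeatable-read case (reusing the employees table from the examples above), a transaction can pin the isolation level and hold a shared row lock until it commits:

-- Hold a shared lock on one row for the whole transaction
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT * FROM employees WHERE id = 1 FOR SHARE;
-- ... further reads; writers to this row stay blocked until COMMIT ...
COMMIT;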

Exclusive Locks

An exclusive lock is used when a transaction needs to modify a resource. Only one transaction can hold an exclusive lock on a resource at a time, ensuring no other transactions can read or write to the locked resource.

-- Transaction X: Acquire an exclusive lock to update a row
BEGIN;
UPDATE employees SET salary = salary + 1000 WHERE id = 2;
-- Transaction Y: Attempt a locking read of the same row
BEGIN;
SELECT * FROM employees WHERE id = 2 FOR SHARE;
-- Transaction Y will be blocked until Transaction X completes
-- (a plain SELECT would still proceed in Postgres, thanks to MVCC)
-- Transaction Z: Attempt to update the same row
BEGIN;
UPDATE employees SET salary = salary + 500 WHERE id = 2;
-- Transaction Z will also be blocked until Transaction X completes

Key Characteristics of Exclusive Locks

1. Write Operations: Exclusive locks are essential for operations like INSERT, UPDATE, and DELETE.

2. Blocking Reads and Writes: While an exclusive lock is active, no other transaction can read or write to the resource.

3. Isolation: Ensures that changes made by one transaction are not visible to others until the transaction is complete.

When Are Exclusive Locks Used?

Exclusive locks are typically employed in write operations or any operation that modifies the database. For instance:

1. Transactional Updates – A transaction that updates a row acquires an exclusive lock to ensure no other transaction can access or modify the row during the update.

2. Table Modifications – When altering a table structure, the DBMS may place an exclusive lock on the entire table.

Benefits of Shared and Exclusive Locks

Benefits of Shared Locks

1. Consistency in Multi-User Environments – Ensure that data being read is not altered by other transactions, preserving consistency.
2. Concurrency Support – Allow multiple transactions to read data simultaneously, improving system performance.
3. Data Integrity – Prevent dirty reads and writes, ensuring that operations yield reliable results.

Benefits of Exclusive Locks

1. Data Integrity During Modifications – Prevents other transactions from accessing data being modified, ensuring changes are applied safely.
2. Isolation of Transactions – Ensures that modifications by one transaction are not visible to others until committed.

Limitations and Challenges

Shared Locks

1. Potential for Deadlocks – Deadlocks can occur if two transactions simultaneously hold shared locks and attempt to upgrade to exclusive locks.
2. Blocking Writes – Shared locks can delay write operations, potentially impacting performance in write-heavy systems.
3. Lock Escalation – In systems with high concurrency, shared locks may escalate to table-level locks, reducing granularity and concurrency.

Exclusive Locks

1. Reduced Concurrency – Exclusive locks prevent other transactions from accessing the locked resource, which can lead to bottlenecks in highly concurrent systems.
2. Risk of Deadlocks – Deadlocks can occur if two transactions attempt to acquire exclusive locks on resources held by each other.

Lock Compatibility

A requested lock is granted only if it is compatible with every lock already held on the resource. For shared (S) and exclusive (X) locks, the compatibility matrix is:

Held \ Requested    Shared (S)    Exclusive (X)
Shared (S)          Granted       Blocked
Exclusive (X)       Blocked       Blocked
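
To observe these lock modes in practice, Postgres exposes held and awaited locks through the pg_locks system view. A quick sketch (run it in a separate session while the transactions above are open):

-- Show relation-level locks: who holds or awaits them, and in which mode
SELECT locktype, relation::regclass AS relation, mode, granted, pid
FROM pg_locks
WHERE relation IS NOT NULL;

Rows with granted = false belong to transactions waiting on an incompatible lock.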

Locust ep 5: How to use test_start and test_stop Events in Locust

21 November 2024 at 04:30

Locust provides powerful event hooks, such as test_start and test_stop, to execute custom logic before and after a load test begins or ends. These events allow you to implement setup and teardown operations at the test level, which apply to the entire test run rather than to individual users.

In this blog, we will:

  1. Understand what test_start and test_stop are.
  2. Explore their use cases.
  3. Provide examples of implementing these events.
  4. Discuss how to run and validate the setup.

What Are test_start and test_stop?

  • test_start: Triggered when the test starts. Use this event to perform actions like initializing global resources, starting external systems, or logging test start information.
  • test_stop: Triggered when the test ends. This event is ideal for cleanup operations, aggregating results, or stopping external systems.

These events are global and apply to the entire test environment rather than individual user instances.

Why Use test_start and test_stop?

  • Global Setup: Initialize shared resources, like database connections or external services.
  • Logging: Record timestamps or test details for audit or reporting purposes.
  • External System Management: Start/stop services that the test depends on, such as mock servers or third-party APIs.

Example: Basic Usage of test_start and test_stop

Here's a basic example demonstrating the usage of these events:

      
from locust import User, task, between, events
from datetime import datetime

# Global setup: Perform actions at test start
@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    print("Test started at:", datetime.now())

# Global teardown: Perform actions at test stop
@events.test_stop.add_listener
def on_test_stop(environment, **kwargs):
    print("Test stopped at:", datetime.now())

# Simulated user behavior
class MyUser(User):
    wait_time = between(1, 5)

    @task
    def print_datetime(self):
        """Task that prints the current datetime."""
        print("Current datetime:", datetime.now())
      
      

Running the Example

  • Save the code as locustfile.py.
  • Start Locust -> `locust -f locustfile.py` (see the headless alternative below).
  • Configure the test parameters (number of users, spawn rate, etc.) in the web UI at http://localhost:8089.
  • Observe the console output:
    • A message when the test starts (on_test_start).
    • Messages during the test as users execute tasks.
    • A message when the test stops (on_test_stop).
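
If you prefer to skip the web UI, the same test can be run headless; a minimal sketch (the user count, spawn rate, and duration are illustrative):

locust -f locustfile.py --headless -u 10 -r 2 --run-time 1m

The on_test_start and on_test_stop messages appear in the terminal output just as they do for a web UI run.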

Example: Logging Test Details

You can log detailed test information, like the number of users and host under test, using environment and kwargs:

      
from locust import User, task, between, events

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    print("Test started!")
    print(f"Target host: {environment.host}")
    print(f"Total users: {environment.runner.target_user_count}")

@events.test_stop.add_listener
def on_test_stop(environment, **kwargs):
    print("Test finished!")
    print("Summary:")
    print(f"Requests completed: {environment.stats.total.num_requests}")
    print(f"Failures: {environment.stats.total.num_failures}")

class MyUser(User):
    wait_time = between(1, 5)

    @task
    def dummy_task(self):
        pass
      
      

Observing the Results

When you run the above examples:

  • At Test Start: Look for messages indicating setup actions, like initializing external systems or printing start time.
  • During the Test: Observe user tasks being executed.
  • At Test Stop: Verify that cleanup actions were executed successfully.

Locust ep 4: Why on_start and on_stop are Essential for Locust Users

19 November 2024 at 04:30

Locust provides two special methods, on_start and on_stop, to handle setup and teardown actions for individual users. These methods allow you to execute specific code when a simulated user starts or stops, making it easier to simulate real-world scenarios like login/logout or initialization tasks.

In this blog, we'll cover:

  1. What on_start and on_stop do.
  2. Why they are important.
  3. Practical examples of using these methods.
  4. Running and testing Locust scripts.

What Are on_start and on_stop?

  • on_start: This method is executed once when a new simulated user starts. It's commonly used for tasks like logging in or setting up the environment.
  • on_stop: This method is executed once when a simulated user stops. It's often used for cleanup tasks like logging out.

These methods are executed only once per user during the lifecycle of a test, as opposed to tasks that are run repeatedly.

Why Use on_start and on_stop?

  1. Simulating Real User Behavior: Real users often start a session with an action (e.g., login) and end it with another (e.g., logout).
  2. Initial Setup: Some tasks require initializing data or setting up user state before performing other actions.
  3. Cleanup: Ensure that actions like logout are performed to leave the system in a clean state.

Examples

Basic Usage of on_start and on_stop

In this example, we simply print messages in on_start and on_stop for each user while running a task. (A more realistic login/logout sketch follows the code.)

      
from locust import User, task, between
from datetime import datetime


class MyUser(User):

    wait_time = between(1, 5)

    def on_start(self):
        print("on start")

    def on_stop(self):
        print("on stop")

    @task
    def print_datetime(self):
        print(datetime.now())
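
As a slightly more realistic sketch, on_start and on_stop can frame a user session; the /login, /logout, and /profile endpoints below are hypothetical:

from locust import HttpUser, task, between

class AuthenticatedUser(HttpUser):
    wait_time = between(1, 5)

    def on_start(self):
        # Runs once when this user starts: log in (hypothetical endpoint)
        self.client.post("/login", json={"username": "test", "password": "secret"})

    def on_stop(self):
        # Runs once when this user stops: log out (hypothetical endpoint)
        self.client.post("/logout")

    @task
    def view_profile(self):
        self.client.get("/profile")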
      
      

Locust EP 3: Simulating Multiple User Types in Locust

18 November 2024 at 04:30

Locust allows you to define multiple user types in your load tests, enabling you to simulate different user behaviors and traffic patterns. This is particularly useful when your application serves diverse client types, such as web and mobile users, each with unique interaction patterns.

In this blog, we will:

  1. Discuss the concept of multiple user types in Locust.
  2. Explore how to implement multiple user classes with weights.
  3. Run and analyze the test results.

Why Use Multiple User Types?

In real-world applications, different user groups interact with your system differently. For example,

  • Web Users might spend more time browsing through the UI.
  • Mobile Users could make faster but more frequent requests.

By simulating distinct user types with varying behaviors, you can identify performance bottlenecks across all client groups.

Understanding User Classes and Weights

Locust provides the ability to define user classes by extending the User or HttpUser base class. Each user class can,

  • Have a unique set of tasks.
  • Define its own wait times.
  • Be assigned a weight, which determines the proportion of that user type in the simulation.

For example, if WebUser has a weight of 1 and MobileUser has a weight of 2, the simulation will spawn 1 web user for every 2 mobile users.

Example: Simulating Web and Mobile Users

Below is an example Locust test with two user types:

      
from locust import User, task, between

# Define a user class for web users
class MyWebUser(User):
    wait_time = between(1, 3)  # Web users wait between 1 and 3 seconds between tasks
    weight = 1  # Web users are less frequent

    @task
    def login_url(self):
        print("I am logging in as a Web User")


# Define a user class for mobile users
class MyMobileUser(User):
    wait_time = between(1, 3)  # Mobile users wait between 1 and 3 seconds
    weight = 2  # Mobile users are more frequent

    @task
    def login_url(self):
        print("I am logging in as a Mobile User")
      
      

How Locust Uses Weights

With the above configuration:

  • For every 3 users spawned, 1 will be a Web User and 2 will be Mobile Users (based on their weights: 1 and 2).

Locust automatically handles spawning these users in the specified ratio.

Running the Locust Test

  1. Save the Code
    Save the above code in a file named locustfile.py.
  2. Start Locust
    Open your terminal and run `locust -f locustfile.py`
  3. Access the Web UI
    Open http://localhost:8089 in your browser.
  4. Enter Test Parameters
    • Number of users (e.g., 30).
    • Spawn rate (e.g., 5 users per second).
    • Host: If you are testing an actual API or website, specify its URL (e.g., http://localhost:8000).
  5. Analyze Results
    • Observe how Locust spawns the users according to their weights and tracks metrics like request counts and response times.

After running the test:

  • Check the distribution of requests to ensure it matches the weight ratio (e.g., for every 1 web user request, there should be ~2 mobile user requests).
  • Use the metrics (response time, failure rate) to evaluate performance for each user type.

Locust EP 2: Understanding Locust Wait Times with Complete Examples

17 November 2024 at 07:43

Locust is an excellent load testing tool, enabling developers to simulate concurrent user traffic on their applications. One of its powerful features is wait times, which simulate the realistic user think time between consecutive tasks. By customizing wait times, you can emulate user behavior more effectively, making your tests reflect actual usage patterns.

In this blog, we'll cover:

  1. What wait times are in Locust.
  2. Built-in wait time options.
  3. Creating custom wait times.
  4. A full example with instructions to run the test.

What Are Wait Times in Locust?

In real-world scenarios, users don't interact with applications continuously. After performing an action (e.g., submitting a form), they often pause before the next action. This pause is called a wait time in Locust, and it plays a crucial role in mimicking real-life user behavior.

Locust provides several ways to define these wait times within your test scenarios.

FastAPI App Overview

Here's the FastAPI app that we'll test:

      
from fastapi import FastAPI

# Create a FastAPI app instance
app = FastAPI()

# Define a route with a GET method
@app.get("/")
def read_root():
    return {"message": "Welcome to FastAPI!"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: str = None):
    return {"item_id": item_id, "q": q}
      
      

Locust Examples for FastAPI

1. Constant Wait Time Example

Here, we'll simulate constant pauses between user requests:

      
from locust import HttpUser, task, constant

class FastAPIUser(HttpUser):
    wait_time = constant(2)  # Wait for 2 seconds between requests

    @task
    def get_root(self):
        self.client.get("/")  # Simulates a GET request to the root endpoint

    @task
    def get_item(self):
        self.client.get("/items/42?q=test")  # Simulates a GET request with path and query parameters
      
      

2. Between Wait Time Example

Simulating random pauses between requests:

      
from locust import HttpUser, task, between

class FastAPIUser(HttpUser):
    wait_time = between(1, 5)  # Random wait time between 1 and 5 seconds

    @task(3)  # Weighted task: this runs 3 times more often
    def get_root(self):
        self.client.get("/")

    @task(1)
    def get_item(self):
        self.client.get("/items/10?q=locust")
      
      

3. Custom Wait Time Example

Using a custom wait time function to introduce more complex user behavior:

      
import random
from locust import HttpUser, task

def custom_wait():
    return max(1, random.normalvariate(3, 1))  # Normal distribution (mean: 3s, stddev: 1s)

class FastAPIUser(HttpUser):
    wait_time = custom_wait

    @task
    def get_root(self):
        self.client.get("/")

    @task
    def get_item(self):
        self.client.get("/items/99?q=custom")
      
      
      

Full Test Example

Combining all the above elements, here's a complete Locust test for your FastAPI app.

      
from locust import HttpUser, task
import random

# Custom wait time function
def custom_wait():
    return max(1, random.uniform(1, 3))  # Random wait time between 1 and 3 seconds

class FastAPIUser(HttpUser):
    wait_time = custom_wait  # Use the custom wait time

    @task(3)
    def browse_homepage(self):
        """Simulates browsing the root endpoint."""
        self.client.get("/")

    @task(1)
    def browse_item(self):
        """Simulates fetching an item with ID and query parameter."""
        item_id = random.randint(1, 100)
        self.client.get(f"/items/{item_id}?q=test")
      
      

Running Locust for FastAPI

1. Run Your FastAPI App
   Save the FastAPI app code in a file (e.g., main.py) and start the server:

uvicorn main:app --reload

By default, the app will run on http://127.0.0.1:8000.

2. Run Locust
   Save the Locust file as locustfile.py and start Locust:

locust -f locustfile.py

3. Configure Locust
   Open http://localhost:8089 in your browser and enter:

  • Host: http://127.0.0.1:8000
  • Number of users and spawn rate based on your testing requirements.

4. Run in Headless Mode (Optional)
   Use the following command to run Locust in headless mode:

locust -f locustfile.py --headless -u 50 -r 10 --host http://127.0.0.1:8000

-u 50: Simulate 50 users.

-r 10: Spawn 10 users per second.
