โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Boost System Performance During Traffic Surges with Spike Testing

1 March 2025 at 06:17

Introduction

Spike testing is a type of performance testing that evaluates how a system responds to sudden, extreme increases in load. Unlike stress testing, which gradually increases the load, spike testing simulates abrupt surges in traffic to identify system vulnerabilities, such as crashes, slow response times, and resource exhaustion.

In this blog, we will explore spike testing in detail, covering its importance, methodology, and full implementation using K6.

Why Perform Spike Testing?

Spike testing helps you:

  • Determine system stability under unexpected traffic surges.
  • Identify bottlenecks that arise due to rapid load increases.
  • Assess auto-scaling capabilities of cloud-based infrastructures.
  • Measure response time degradation during high-demand spikes.
  • Ensure system recovery after the sudden load disappears.

Setting Up K6 for Spike Testing

Installing K6

# macOS
brew install k6  

# Ubuntu/Debian
sudo apt install k6  

# Using Docker
docker pull grafana/k6  

Choosing the Right Test Scenario

K6 provides different executors to simulate load patterns. For spike testing, we use:

  • ramping-arrival-rate → Gradually increases the request rate over time.
  • constant-arrival-rate → Maintains a fixed number of requests per second after the spike.

Example 1: Basic Spike Test

This test starts with low traffic, spikes suddenly, and then drops back to normal.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    spike_test: {
      executor: 'ramping-arrival-rate',
      startRate: 10, // Start with 10 requests/sec
      timeUnit: '1s',
      preAllocatedVUs: 100,
      maxVUs: 500,
      stages: [
        { duration: '30s', target: 10 },  // Low traffic
        { duration: '10s', target: 500 }, // Sudden spike
        { duration: '30s', target: 10 },  // Traffic drops
      ],
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • Starts with 10 requests per second for 30 seconds.
  • Spikes to 500 requests per second in 10 seconds.
  • Drops back to 10 requests per second.
  • Tests the system's ability to handle and recover from traffic spikes.

Example 2: Spike Test with High User Load

This test simulates a spike in virtual users rather than just requests per second.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    user_spike: {
      executor: 'ramping-vus',
      stages: [
        { duration: '30s', target: 20 },  // Normal traffic
        { duration: '10s', target: 300 }, // Sudden spike in users
        { duration: '30s', target: 20 },  // Drop back to normal
      ],
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation:

  • Simulates a sudden increase in concurrent virtual users (VUs).
  • Helps test server stability, database handling, and auto-scaling.

Example 3: Spike Test on Multiple Endpoints

In real-world applications, multiple endpoints may experience spikes simultaneously. Here's how to test different API routes:

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    multiple_endpoint_spike: {
      executor: 'ramping-arrival-rate',
      startRate: 5,
      timeUnit: '1s',
      preAllocatedVUs: 200,
      maxVUs: 500,
      stages: [
        { duration: '20s', target: 10 },  // Normal traffic
        { duration: '10s', target: 300 }, // Spike across endpoints
        { duration: '20s', target: 10 },  // Traffic drop
      ],
    },
  },
};

export default function () {
  let urls = [
    'https://test-api.example.com/users',
    'https://test-api.example.com/orders',
    'https://test-api.example.com/products'
  ];
  
  let res = http.get(urls[Math.floor(Math.random() * urls.length)]);
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • Simulates traffic spikes across multiple API endpoints.
  • Helps identify which API calls suffer under extreme load.

Analyzing Test Results

After running the tests, K6 provides key performance metrics:

http_req_duration......: avg=350ms min=150ms max=3000ms
http_reqs..............: 10,000 requests
vus_max................: 500
errors.................: 2%

Key Metrics

  • http_req_duration → Measures response time impact.
  • vus_max → Peak virtual users during the spike.
  • errors → Percentage of failed requests due to overload.
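
To turn these metrics into automatic pass/fail criteria, K6 supports thresholds in the options object. Below is a minimal sketch; the limits are illustrative assumptions, not recommendations, and the spike scenario from Example 1 is elided:

export let options = {
  thresholds: {
    http_req_duration: ['p(95)<1000'], // 95% of requests must finish within 1s
    http_req_failed: ['rate<0.05'],    // less than 5% of requests may fail
  },
  // ...spike_test scenario from Example 1...
};

If a threshold is crossed, k6 marks the run as failed, which is useful as a CI gate.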

Best Practices for Spike Testing

  • Monitor application logs and database performance during the test.
  • Use auto-scaling mechanisms for cloud-based environments.
  • Combine spike tests with stress testing for better insights.
  • Analyze error rates and recovery time to ensure system stability.

Spike testing is crucial for ensuring application stability under sudden, unpredictable traffic surges. Using K6, we can simulate spikes in both requests per second and concurrent users to identify bottlenecks before they impact real users.

How Can Stress Testing Make Systems More Attractive?

1 March 2025 at 06:06

Introduction

Stress testing is a critical aspect of performance testing that evaluates how a system performs under extreme loads. Unlike load testing, which simulates expected user traffic, stress testing pushes a system beyond its limits to identify breaking points and measure recovery capabilities.

In this blog, we will explore stress testing using K6, an open-source load testing tool, with detailed explanations and full examples to help you implement stress testing effectively.

Why Stress Testing?

Stress testing helps you:

  • Identify the maximum capacity of your system.
  • Detect potential failures and bottlenecks.
  • Measure system stability and recovery under high loads.
  • Ensure infrastructure can handle unexpected spikes in traffic.

Setting Up K6 for Stress Testing

Installing K6

# macOS
brew install k6  

# Ubuntu/Debian
sudo apt install k6  

# Using Docker
docker pull grafana/k6  

Understanding Stress Testing Scenarios

K6 provides various executors to simulate different traffic patterns. For stress testing, we mainly use:

  1. ramping-vus – Gradually increases virtual users to a high level.
  2. constant-vus – Maintains a fixed high number of virtual users.
  3. Spike patterns – Short ramping stages that simulate a sudden surge in traffic (K6 has no dedicated spike executor; see Example 3 below).

Example 1: Basic Stress Test with Ramping VUs

This script gradually increases the number of virtual users, holds a peak load, and then reduces it.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '1m', target: 100 }, // Ramp up to 100 users in 1 min
    { duration: '3m', target: 100 }, // Stay at 100 users for 3 min
    { duration: '1m', target: 0 },   // Ramp down to 0 users
  ],
};

export default function () {
  let res = http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • The test starts with 0 users and ramps up to 100 users in 1 minute.
  • Holds 100 users for 3 minutes.
  • Gradually reduces load to 0 users.
  • The sleep(1) function helps simulate real user behavior between requests.

Example 2: Constant High Load Test

This test maintains a consistently high number of virtual users.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  vus: 200, // 200 virtual users
  duration: '5m', // Run the test for 5 minutes
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • 200 virtual users are constantly hitting the endpoint for 5 minutes.
  • Helps evaluate system performance under sustained high traffic.

Example 3: Spike Testing (Sudden Traffic Surge)

This test simulates a sudden spike in traffic, followed by a drop.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '10s', target: 10 },  // Start with 10 users
    { duration: '10s', target: 500 }, // Spike to 500 users
    { duration: '10s', target: 10 },  // Drop back to 10 users
  ],
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • Starts with 10 users.
  • Spikes suddenly to 500 users in 10 seconds.
  • Drops back to 10 users.
  • Helps determine how the system handles sudden surges in traffic.

Analyzing Test Results

After running the tests, K6 provides detailed statistics:

checks..................: 100.00% ✓ 5000 ✗ 0
http_req_duration......: avg=300ms min=200ms max=2000ms
http_reqs..............: 5000 requests
vus_max................: 500

Key Metrics to Analyze

  • http_req_duration → Measures response time.
  • vus_max → Maximum concurrent virtual users.
  • http_reqs → Total number of requests.
  • errors → Number of failed requests.
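
These metrics can also be encoded as pass/fail criteria. As a sketch, k6 thresholds additionally support an abortOnFail flag that stops the test early once a limit is breached (the limit below is an illustrative assumption):

export let options = {
  thresholds: {
    http_req_duration: [
      { threshold: 'p(99)<2000', abortOnFail: true }, // abort the run if p99 latency exceeds 2s
    ],
  },
};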

Stress testing is vital to ensure application stability and scalability. Using K6, we can simulate different stress scenarios like ramping load, constant high load, and spikes to identify system weaknesses before they affect users.

Achieving Better User Engagement via Realistic Load Testing in K6

1 March 2025 at 05:55

Introduction

Load testing is essential to evaluate how a system behaves under expected and peak loads. Traditionally, we rely on metrics like requests per second (RPS), response time, and error rates. However, an insightful approach called Average Load Testing has been discussed recently. This blog explores that concept in detail, providing practical examples to help you apply it effectively.

Understanding Average Load Testing

Average Load Testing focuses on simulating real-world load patterns rather than traditional peak load tests. Instead of sending a fixed number of requests per second, this approach:

  • Generates requests based on the average concurrency over time.
  • More accurately reflects real-world traffic patterns.
  • Helps identify performance bottlenecks in a realistic manner.

Setting Up Load Testing with K6

K6 is an excellent tool for implementing Average Load Testing. Let's go through practical examples of setting up such tests.

Install K6

brew install k6  # macOS
sudo apt install k6  # Ubuntu/Debian
docker pull grafana/k6  # Using Docker

Example 1: Basic K6 Script for Average Load Testing

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    avg_load: {
      executor: 'constant-arrival-rate',
      rate: 10, // 10 requests per second
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 20,
      maxVUs: 50,
    },
  },
};

export default function () {
  let res = http.get('https://test-api.example.com');
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The constant-arrival-rate executor ensures a steady request rate.
  • rate: 10 sends 10 requests per second.
  • duration: '2m' runs the test for 2 minutes.
  • preAllocatedVUs: 20 and maxVUs: 50 define virtual users needed to sustain the load.
  • The script logs response times to the console.

Example 2: Testing with Varying Load

To better reflect real-world scenarios, we can use the ramping-arrival-rate executor to simulate gradual increases in traffic:

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    ramping_load: {
      executor: 'ramping-arrival-rate',
      startRate: 5, // Start with 5 requests/sec
      timeUnit: '1s',
      preAllocatedVUs: 50,
      maxVUs: 100,
      stages: [
        { duration: '1m', target: 20 },
        { duration: '2m', target: 50 },
        { duration: '3m', target: 100 },
      ],
    },
  },
};

export default function () {
  let res = http.get('https://test-api.example.com');
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The ramping-arrival-rate gradually increases requests per second over time.
  • The stages array defines a progression from 5 to 100 requests/sec over 6 minutes.
  • Logs response times to help analyze system performance.

Example 3: Load Testing with Multiple Endpoints

In real applications, multiple endpoints are often tested simultaneously. Here's how to test different API routes:

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  scenarios: {
    multiple_endpoints: {
      executor: 'constant-arrival-rate',
      rate: 15, // 15 requests per second
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 30,
      maxVUs: 60,
    },
  },
};

export default function () {
  let urls = [
    'https://test-api.example.com/users',
    'https://test-api.example.com/orders',
    'https://test-api.example.com/products'
  ];
  
  let res = http.get(urls[Math.floor(Math.random() * urls.length)]);
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The script randomly selects an API endpoint to test different routes.
  • Uses check to ensure status codes are 200.
  • Logs response times for deeper insights.

Analyzing Results

To analyze test results, you can store logs or metrics in a database or monitoring tool and visualize trends over time. Some popular options include:

  • Prometheus for time-series data storage.
  • InfluxDB for handling large-scale performance metrics.
  • ELK Stack (Elasticsearch, Logstash, Kibana) for log-based analysis.
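
k6 can stream results to some of these backends directly via its --out flag. Two sketches, assuming a local InfluxDB instance and a plain JSON file respectively:

k6 run --out influxdb=http://localhost:8086/k6 script.js
k6 run --out json=results.json script.js

From there, a Grafana dashboard (or Kibana for the ELK route) can visualize the trends mentioned above.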

Average Load Testing provides a more realistic way to measure system performance. By leveraging K6, you can create flexible, real-world simulations to optimize your applications effectively.

Learning Notes #77 – Smoke Testing with K6

16 February 2025 at 07:12

In this blog, I jot down notes on what smoke testing is, how it got its name, and how to approach it in K6.

The term smoke testing originates from hardware testing, where engineers would power on a circuit or device and check if smoke appeared.

If smoke was detected, it indicated a fundamental issue, and further testing was halted. This concept was later adapted to software engineering.

What is Smoke Testing?

Smoke testing is a subset of test cases executed to verify that the major functionalities of an application work as expected. If a smoke test fails, the build is rejected, preventing further testing of a potentially unstable application. This test helps catch major defects early, saving time and effort.

Key Characteristics

  • Ensures that the application is not broken in major areas.
  • Runs quickly and is not exhaustive.
  • Usually automated as part of a CI/CD pipeline.
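
As a sketch of that last point, the test can run inside a pipeline step using the official Docker image, reading the script from stdin (the file name smoke-test.js matches the one used later in this post):

docker run --rm -i grafana/k6 run - <smoke-test.js

To make failed checks actually break the build, pair this with thresholds (see the note near the end of this post).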

Writing a Basic Smoke Test with K6

A basic smoke test using K6 typically checks API endpoints for HTTP 200 responses and acceptable response times.

import http from 'k6/http';
import { check } from 'k6';

export let options = {
    vus: 1, // 1 virtual user
    iterations: 5, // Runs the test 5 times
};

export default function () {
    let res = http.get('https://example.com/api/health');
    check(res, {
        'is status 200': (r) => r.status === 200,
        'response time < 500ms': (r) => r.timings.duration < 500,
    });
}

Advanced Smoke Test Example

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
    vus: 2, // 2 virtual users
    iterations: 10, // Runs the test 10 times
};

export default function () {
    let res = http.get('https://example.com/api/login');
    check(res, {
        'status is 200': (r) => r.status === 200,
        'response time < 400ms': (r) => r.timings.duration < 400,
    });
    sleep(1);
}

Running and Analyzing Results

Execute the test using:

k6 run smoke-test.js

Sample Output

checks...
✔ is status 200
✔ response time < 500ms

If any of the checks fail, K6 will report an error, signaling an issue in the application.
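
One caveat worth noting: failed checks are reported in the summary but, on their own, do not make the k6 process exit with a non-zero code. To fail a CI pipeline on any failed check, a threshold can be attached to the built-in checks metric. A minimal sketch:

export let options = {
    vus: 1,
    iterations: 5,
    thresholds: {
        checks: ['rate==1.0'], // the run fails unless 100% of checks pass
    },
};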

Smoke testing with K6 is an effective way to ensure that key functionalities in your application work as expected. By integrating it into your CI/CD pipeline, you can catch major defects early, improve application stability, and streamline your development workflow.

Learning Notes #76 – Specifying Virtual Users (VUs) and Test duration in K6

16 February 2025 at 05:13

When running load tests with K6, two fundamental aspects that shape test execution are the number of Virtual Users (VUs) and the test duration. These parameters help simulate realistic user behavior and measure system performance under different load conditions.

In this blog, I jot down notes on configuring virtual users and test duration through the options object, which we can also use to ramp users up and down.

  1. Defining VUs and Duration in K6
  2. Basic VU and Duration Configuration
  3. Specifying VUs and Duration from the Command Line
  4. Ramp Up and Ramp Down with Stages
  5. Custom Execution Scenarios
    1. Syntax of Custom Execution Scenarios
    2. Different Executors in K6
    3. Example: Ramping VUs Scenario
    4. Example: Constant Arrival Rate Scenario
    5. Example: Per VU Iteration Scenario
  6. Choosing the Right Configuration
  7. References

Defining VUs and Duration in K6

K6 offers multiple ways to define VUs and test duration, primarily through options in the test script or the command line.

Basic VU and Duration Configuration

The simplest way to specify VUs and test duration is by setting them in the options object of your test script.

import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 10, // Number of virtual users
  duration: '30s', // Duration of the test
};

export default function () {
  http.get('https://test.k6.io/');
  sleep(1);
}

This script runs a load test with 10 virtual users for 30 seconds, making requests to the specified URL.

Specifying VUs and Duration from the Command Line

You can also set the VUs and duration dynamically using command-line arguments without modifying the script.

k6 run --vus 20 --duration 1m script.js

This command runs the test with 20 virtual users for 1 minute.

Ramp Up and Ramp Down with Stages

Instead of a fixed number of VUs, you can simulate user load variations over time using stages. This helps to gradually increase or decrease the load on the system.

export const options = {
  stages: [
    { duration: '30s', target: 10 }, // Ramp up to 10 VUs
    { duration: '1m', target: 50 },  // Ramp up to 50 VUs
    { duration: '30s', target: 10 }, // Ramp down to 10 VUs
    { duration: '20s', target: 0 },  // Ramp down to 0 VUs
  ],
};

This test gradually increases the load, sustains it, and then reduces it, simulating real-world traffic patterns.

Custom Execution Scenarios

For more advanced load testing strategies, K6 supports scenarios, allowing fine-grained control over execution behavior.

Syntax of Custom Execution Scenarios

A scenarios object defines different execution strategies. Each scenario consists of:

  • executor: Defines how the test runs (e.g., ramping-vus, constant-arrival-rate, etc.).
  • vus: Number of virtual users (for certain executors).
  • duration: How long the scenario runs.
  • iterations: Total number of iterations per VU (for certain executors).
  • stages: Used in ramping-vus to define load variations over time.
  • rate: Defines the number of iterations per time unit in constant-arrival-rate.
  • preAllocatedVUs: Number of VUs reserved for the test.

Different Executors in K6

K6 provides several executors that define how virtual users (VUs) generate load:

  1. shared-iterations – Distributes a fixed number of iterations across multiple VUs.
  2. per-vu-iterations – Each VU runs a specific number of iterations independently.
  3. constant-vus – Maintains a fixed number of VUs for a set duration.
  4. ramping-vus – Increases or decreases the number of VUs over time.
  5. constant-arrival-rate – Ensures a constant number of requests per time unit, independent of VUs.
  6. ramping-arrival-rate – Gradually increases or decreases the request rate over time.
  7. externally-controlled – Allows dynamic control of VUs via an external API.

Example: Ramping VUs Scenario

import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  scenarios: {
    ramping_users: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '30s', target: 20 },
        { duration: '1m', target: 100 },
        { duration: '30s', target: 0 },
      ],
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Example: Constant Arrival Rate Scenario

import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  scenarios: {
    constant_request_rate: {
      executor: 'constant-arrival-rate',
      rate: 50, // 50 iterations per second
      timeUnit: '1s', // Per second
      duration: '1m', // Test duration
      preAllocatedVUs: 20, // Number of VUs to allocate
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Example: Per VU Iteration Scenario

import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  scenarios: {
    per_vu_iterations: {
      executor: 'per-vu-iterations',
      vus: 10,
      iterations: 50, // Each VU executes 50 iterations
      maxDuration: '1m',
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}
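
For completeness, here is a sketch of the shared-iterations executor from the list above, which splits a fixed total amount of work across the available VUs:

import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  scenarios: {
    shared_iters: {
      executor: 'shared-iterations',
      vus: 10,           // 10 VUs cooperate on the workload
      iterations: 200,   // 200 iterations in total, shared across all VUs
      maxDuration: '1m', // hard stop if the iterations take longer than this
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}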

Choosing the Right Configuration

  • Use fixed VUs and duration for simple, constant load testing.
  • Use stages for ramping up and down load gradually.
  • Use scenarios for more complex and controlled testing setups.

References

  1. Scenarios & Executors – https://grafana.com/docs/k6/latest/using-k6/scenarios/

Learning Notes #72 – Metrics in K6 Load Testing

12 February 2025 at 17:15

In our previous blog on K6, we ran a script.js file to test an API. As output, we received some metrics in the CLI.

In this blog, we are going to delve deep into understanding metrics in K6.

1. HTTP Request Metrics

http_reqs

  • Description: Total number of HTTP requests initiated during the test.
  • Usage: Indicates the volume of traffic generated. A high number of requests can simulate real-world usage patterns.

http_req_duration

  • Description: Time taken for a request to receive a response (in milliseconds).
  • Components:
    • http_req_connecting: Time spent establishing a TCP connection.
    • http_req_tls_handshaking: Time for completing the TLS handshake.
    • http_req_waiting (TTFB): Time spent waiting for the first byte from the server.
    • http_req_sending: Time taken to send the HTTP request.
    • http_req_receiving: Time spent receiving the response data.
  • Usage: Identifies performance bottlenecks like slow server responses or network latency.

http_req_failed

  • Description: Proportion of failed HTTP requests (ratio between 0 and 1).
  • Usage: Highlights reliability issues. A high failure rate indicates problems with server stability or network errors.

2. VU (Virtual User) Metrics

vus

  • Description: Number of active Virtual Users at any given time.
  • Usage: Reflects concurrency level. Helps analyze how the system performs under varying loads.

vus_max

  • Description: Maximum number of Virtual Users during the test.
  • Usage: Defines the peak load. Useful for stress testing and capacity planning.

3. Iteration Metrics

iterations

  • Description: Total number of script iterations executed.
  • Usage: Measures the test's progress and workload. Useful in endurance (soak) testing to observe long-term stability.

iteration_duration

  • Description: Time taken to complete one iteration of the script.
  • Usage: Helps identify performance degradation over time, especially under sustained load.

4. Data Transfer Metrics

data_sent

  • Description: Total amount of data sent over the network (in bytes).
  • Usage: Monitors network usage. High data volumes might indicate inefficient request payloads.

data_received

  • Description: Total data received from the server (in bytes).
  • Usage: Detects bandwidth usage and helps identify heavy response payloads.

5. Custom Metrics (Optional)

While K6 provides default metrics, you can define custom metrics like Counters, Gauges, Rates, and Trends for specific business logic or technical KPIs.

Example

import { Counter } from 'k6/metrics';

let myCounter = new Counter('my_custom_metric');

export default function () {
  myCounter.add(1); // Increment the custom metric
}
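
Similarly, here is a sketch of two of the other custom metric types mentioned above (the metric names are illustrative):

import { Trend, Rate } from 'k6/metrics';

let orderLatency = new Trend('order_latency');    // records a distribution: min/max/avg/percentiles
let orderFailRate = new Rate('order_fail_rate');  // records the proportion of "true" values

export default function () {
  orderLatency.add(230);     // e.g., a measured duration in milliseconds
  orderFailRate.add(false);  // e.g., whether this iteration failed
}

Both appear in the end-of-test summary alongside the built-in metrics.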

Interpreting Metrics for Performance Optimization

  • Low http_req_duration + High http_reqs = Good scalability.
  • High http_req_failed = Investigate server errors or timeouts.
  • High data_sent / data_received = Optimize payloads.
  • Increasing iteration_duration over time = Possible memory leaks or resource exhaustion.

Learning Notes #69 – Getting Started with k6: Writing Your First Load Test

5 February 2025 at 15:38

Performance testing is a crucial part of ensuring the stability and scalability of web applications. k6 is a modern, open-source load testing tool that allows developers and testers to script and execute performance tests efficiently. In this blog, we'll explore the basics of k6 and write a simple test script to get started.

What is k6?

k6 is a load testing tool designed for developers. It is written in Go but uses JavaScript for scripting tests. Key features include:

  • High performance with minimal resource consumption
  • JavaScript-based scripting
  • CLI-based execution with detailed reporting
  • Integration with monitoring tools like Grafana and Prometheus

Installation

For installation instructions, see: https://grafana.com/docs/k6/latest/set-up/install-k6/

Writing a Basic k6 Test

A k6 test is written in JavaScript. Here's a simple script to test an API endpoint:


import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  vus: 10, // Number of virtual users
  duration: '10s', // Test duration
};

export default function () {
  let res = http.get('https://api.restful-api.dev/objects');
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
  sleep(1); // Simulate user wait time
}

Running the Test

Save the script as script.js and execute the test using the following command:

k6 run script.js

Understanding the Output

After running the test, k6 will provide a summary including:

  1. HTTP requests: Total number of requests made during the test.
  2. Response time metrics:
    • min: The shortest response time recorded.
    • max: The longest response time recorded.
    • avg: The average response time of all requests.
    • p(90), p(95), p(99): Percentile values indicating response time distribution.
  3. Checks: Number of checks passed or failed, such as status code validation.
  4. Virtual users (VUs):
    • vus_max: The maximum number of virtual users active at any time.
    • vus: The current number of active virtual users.
  5. Request Rate (RPS – Requests Per Second): The number of requests handled per second.
  6. Failures: Number of errors or failed requests due to timeouts or HTTP status codes other than expected.

Next Steps

Once you've successfully run your first k6 test, you can explore:

  • Load testing different APIs and endpoints
  • Running distributed tests
  • Exporting results to Grafana
  • Integrating k6 with CI/CD pipelines

k6 is a powerful tool that helps developers and QA engineers ensure their applications perform under load. Stay tuned for more in-depth tutorials on advanced k6 features!

RSVP for K6: Load Testing Made Easy in Tamil

5 February 2025 at 10:57

Ensuring your applications perform well under high traffic is crucial. Join us for an interactive K6 Bootcamp, where we'll explore performance testing, load testing strategies, and real-world use cases to help you build scalable and resilient systems.

🎯 What is K6 and Why Should You Learn It?

Modern applications must handle thousands (or millions!) of users without breaking. K6 is an open-source, developer-friendly performance testing tool that helps you:

✅ Simulate real-world traffic and identify performance bottlenecks.
✅ Write tests in JavaScript – no need for complex tools!
✅ Run efficient load tests on APIs, microservices, and web applications.
✅ Integrate with CI/CD pipelines to automate performance testing.
✅ Gain deep insights with real-time performance metrics.

By mastering K6, you'll gain the skills to predict failures before they happen, optimize performance, and build systems that scale with confidence!

📌 Bootcamp Details

📅 Date: Feb 23, 2025 – Sunday
🕒 Time: 10:30 AM
🌍 Mode: Online (link will be shared by email after RSVP)
🗣 Language: Tamil

🎓 Who Should Attend?

  • Developers – Ensure APIs and services perform well under load.
  • QA Engineers – Validate system reliability before production.
  • SREs / DevOps Engineers – Continuously test performance in CI/CD pipelines.

RSVP Now

🔥 Don't miss this opportunity to master load testing with K6 and take your performance engineering skills to the next level!

Got questions? Drop them in the comments or reach out to me. See you at the bootcamp! 🚀

Our Previous Monthly meets – https://www.youtube.com/watch?v=cPtyuSzeaa8&list=PLiutOxBS1MizPGGcdfXF61WP5pNUYvxUl&pp=gAQB

Our Previous Sessions:

  1. Python – https://www.youtube.com/watch?v=lQquVptFreE&list=PLiutOxBS1Mizte0ehfMrRKHSIQcCImwHL&pp=gAQB
  2. Docker – https://www.youtube.com/watch?v=nXgUBanjZP8&list=PLiutOxBS1Mizi9IRQM-N3BFWXJkb-hQ4U&pp=gAQB
  3. Postgres – https://www.youtube.com/watch?v=04pE5bK2-VA&list=PLiutOxBS1Miy3PPwxuvlGRpmNo724mAlt&pp=gAQB

Locust ep 5: How to use test_start and test_stop Events in Locust

21 November 2024 at 04:30

Locust provides powerful event hooks, such as test_start and test_stop, to execute custom logic before and after a load test begins or ends. These events allow you to implement setup and teardown operations at the test level, which apply to the entire test run rather than individual users.

In this blog, we will:

  1. Understand what test_start and test_stop are.
  2. Explore their use cases.
  3. Provide examples of implementing these events.
  4. Discuss how to run and validate the setup.

What Are test_start and test_stop?

  • test_start: Triggered when the test starts. Use this event to perform actions like initializing global resources, starting external systems, or logging test start information.
  • test_stop: Triggered when the test ends. This event is ideal for cleanup operations, aggregating results, or stopping external systems.

These events are global and apply to the entire test environment rather than individual user instances.

Why Use test_start and test_stop?

  • Global Setup: Initialize shared resources, like database connections or external services.
  • Logging: Record timestamps or test details for audit or reporting purposes.
  • External System Management: Start/stop services that the test depends on, such as mock servers or third-party APIs.

Example: Basic Usage of test_start and test_stop

Here's a basic example demonstrating the usage of these events:

    
    from locust import User, task, between, events
    from datetime import datetime
    
    # Global setup: Perform actions at test start
    @events.test_start.add_listener
    def on_test_start(environment, **kwargs):
        print("Test started at:", datetime.now())
    
    # Global teardown: Perform actions at test stop
    @events.test_stop.add_listener
    def on_test_stop(environment, **kwargs):
        print("Test stopped at:", datetime.now())
    
    # Simulated user behavior
    class MyUser(User):
        wait_time = between(1, 5)
    
        @task
        def print_datetime(self):
            """Task that prints the current datetime."""
            print("Current datetime:", datetime.now())
    
    

Running the Example

  • Save the code as locustfile.py.
  • Start Locust: `locust -f locustfile.py`
  • Configure the test parameters (number of users, spawn rate, etc.) in the web UI at http://localhost:8089.
  • Observe the console output:
    • A message when the test starts (on_test_start).
    • Messages during the test as users execute tasks.
    • A message when the test stops (on_test_stop).

Example: Logging Test Details

You can log detailed test information, like the number of users and host under test, using environment and kwargs:

    
    from locust import User, task, between, events
    
    @events.test_start.add_listener
    def on_test_start(environment, **kwargs):
        print("Test started!")
        print(f"Target host: {environment.host}")
        print(f"Total users: {environment.runner.target_user_count}")
    
    @events.test_stop.add_listener
    def on_test_stop(environment, **kwargs):
        print("Test finished!")
        print("Summary:")
        print(f"Requests completed: {environment.stats.total.num_requests}")
        print(f"Failures: {environment.stats.total.num_failures}")
    
    class MyUser(User):
        wait_time = between(1, 5)
    
        @task
        def dummy_task(self):
            pass
    
    

Observing the Results

When you run the above examples:

  • At Test Start: Look for messages indicating setup actions, like initializing external systems or printing start time.
  • During the Test: Observe user tasks being executed.
  • At Test Stop: Verify that cleanup actions were executed successfully.

Locust ep 4: Why on_start and on_stop are Essential for Locust Users

19 November 2024 at 04:30

Locust provides two special methods, on_start and on_stop, to handle setup and teardown actions for individual users. These methods allow you to execute specific code when a simulated user starts or stops, making it easier to simulate real-world scenarios like login/logout or initialization tasks.

In this blog, we'll cover:

  1. What on_start and on_stop do.
  2. Why they are important.
  3. Practical examples of using these methods.
  4. Running and testing Locust scripts.

What Are on_start and on_stop?

  • on_start: This method is executed once when a new simulated user starts. It's commonly used for tasks like logging in or setting up the environment.
  • on_stop: This method is executed once when a simulated user stops. It's often used for cleanup tasks like logging out.

These methods are executed only once per user during the lifecycle of a test, as opposed to tasks that are run repeatedly.

Why Use on_start and on_stop?

  1. Simulating Real User Behavior: Real users often start a session with an action (e.g., login) and end it with another (e.g., logout).
  2. Initial Setup: Some tasks require initializing data or setting up user state before performing other actions.
  3. Cleanup: Ensure that actions like logout are performed to leave the system in a clean state.

Examples

Basic Usage of on_start and on_stop

In this example, we simply print messages in on_start and on_stop for each user while running a task; a more realistic login/logout sketch follows the code below.

    
    from locust import User, task, between, constant, constant_pacing
    from datetime import datetime
    
    
    class MyUser(User):
    
        wait_time = between(1, 5)
    
        def on_start(self):
            print("on start")
    
        def on_stop(self):
            print("on stop")
    
        @task
        def print_datetime(self):
            print(datetime.now())
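
A more realistic sketch, assuming an HTTP application with hypothetical /login and /logout endpoints, uses on_start to authenticate and on_stop to end the session:

    from locust import HttpUser, task, between

    class AuthenticatedUser(HttpUser):
        wait_time = between(1, 5)

        def on_start(self):
            # Log in once when this simulated user starts (endpoint and payload are illustrative)
            self.client.post("/login", json={"username": "test", "password": "secret"})

        def on_stop(self):
            # Log out once when this simulated user stops
            self.client.post("/logout")

        @task
        def view_profile(self):
            self.client.get("/profile")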
    
    

Locust EP 3: Simulating Multiple User Types in Locust

18 November 2024 at 04:30

Locust allows you to define multiple user types in your load tests, enabling you to simulate different user behaviors and traffic patterns. This is particularly useful when your application serves diverse client types, such as web and mobile users, each with unique interaction patterns.

In this blog, we will:

  1. Discuss the concept of multiple user types in Locust.
  2. Explore how to implement multiple user classes with weights.
  3. Run and analyze the test results.

Why Use Multiple User Types?

In real-world applications, different user groups interact with your system differently. For example:

  • Web Users might spend more time browsing through the UI.
  • Mobile Users could make faster but more frequent requests.

By simulating distinct user types with varying behaviors, you can identify performance bottlenecks across all client groups.

Understanding User Classes and Weights

Locust provides the ability to define user classes by extending the User or HttpUser base class. Each user class can:

  • Have a unique set of tasks.
  • Define its own wait times.
  • Be assigned a weight, which determines the proportion of that user type in the simulation.

For example, if WebUser has a weight of 1 and MobileUser has a weight of 2, the simulation will spawn 1 web user for every 2 mobile users.

Example: Simulating Web and Mobile Users

Below is an example Locust test with two user types:

    
    from locust import User, task, between
    
    # Define a user class for web users
    class MyWebUser(User):
        wait_time = between(1, 3)  # Web users wait between 1 and 3 seconds between tasks
        weight = 1  # Web users are less frequent
    
        @task
        def login_url(self):
            print("I am logging in as a Web User")
    
    
    # Define a user class for mobile users
    class MyMobileUser(User):
        wait_time = between(1, 3)  # Mobile users wait between 1 and 3 seconds
        weight = 2  # Mobile users are more frequent
    
        @task
        def login_url(self):
            print("I am logging in as a Mobile User")
    
    

How Locust Uses Weights

With the above configuration:

  • For every 3 users spawned, 1 will be a Web User and 2 will be Mobile Users (based on their weights of 1 and 2).

Locust automatically handles spawning these users in the specified ratio.

Running the Locust Test

  1. Save the Code: Save the above code in a file named locustfile.py.
  2. Start Locust: Open your terminal and run `locust -f locustfile.py`.
  3. Access the Web UI: Open http://localhost:8089 in your browser.
  4. Enter Test Parameters:
    • Number of users (e.g., 30).
    • Spawn rate (e.g., 5 users per second).
    • Host: If you are testing an actual API or website, specify its URL (e.g., http://localhost:8000).
  5. Analyze Results:
    • Observe how Locust spawns the users according to their weights and tracks metrics like request counts and response times.

After running the test:

  • Check the distribution of requests to ensure it matches the weight ratio (e.g., for every 1 web user request, there should be ~2 mobile user requests).
  • Use the metrics (response time, failure rate) to evaluate performance for each user type.

Locust EP 2: Understanding Locust Wait Times with Complete Examples

17 November 2024 at 07:43

Locust is an excellent load testing tool, enabling developers to simulate concurrent user traffic on their applications. One of its powerful features is wait times, which simulate the realistic user think time between consecutive tasks. By customizing wait times, you can emulate user behavior more effectively, making your tests reflect actual usage patterns.

In this blog, we'll cover:

  1. What wait times are in Locust.
  2. Built-in wait time options.
  3. Creating custom wait times.
  4. A full example with instructions to run the test.

What Are Wait Times in Locust?

In real-world scenarios, users don't interact with applications continuously. After performing an action (e.g., submitting a form), they often pause before the next action. This pause is called a wait time in Locust, and it plays a crucial role in mimicking real-life user behavior.

Locust provides several ways to define these wait times within your test scenarios.
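
As a quick sketch of the built-in options (constant and between appear in the full examples below; constant_pacing aims to run a task once every N seconds, regardless of how long the task itself takes):

    from locust import User, task, constant, between, constant_pacing

    class SketchUser(User):
        # Pick exactly one of the built-in wait_time helpers:
        wait_time = constant(2)            # always pause 2 seconds between tasks
        # wait_time = between(1, 5)        # random pause between 1 and 5 seconds
        # wait_time = constant_pacing(10)  # start a task roughly every 10 seconds

        @task
        def noop(self):
            pass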

FastAPI App Overview

Here's the FastAPI app that we'll test:

    
    from fastapi import FastAPI
    
    # Create a FastAPI app instance
    app = FastAPI()
    
    # Define a route with a GET method
    @app.get("/")
    def read_root():
        return {"message": "Welcome to FastAPI!"}
    
    @app.get("/items/{item_id}")
    def read_item(item_id: int, q: str = None):
        return {"item_id": item_id, "q": q}
    
    

Locust Examples for FastAPI

1. Constant Wait Time Example

Here, we'll simulate constant pauses between user requests:

    
    from locust import HttpUser, task, constant
    
    class FastAPIUser(HttpUser):
        wait_time = constant(2)  # Wait for 2 seconds between requests
    
        @task
        def get_root(self):
            self.client.get("/")  # Simulates a GET request to the root endpoint
    
        @task
        def get_item(self):
            self.client.get("/items/42?q=test")  # Simulates a GET request with path and query parameters
    
    

2. Between Wait Time Example

Simulating random pauses between requests:

    
    from locust import HttpUser, task, between
    
    class FastAPIUser(HttpUser):
        wait_time = between(1, 5)  # Random wait time between 1 and 5 seconds
    
        @task(3)  # Weighted task: this runs 3 times more often
        def get_root(self):
            self.client.get("/")
    
        @task(1)
        def get_item(self):
            self.client.get("/items/10?q=locust")
    
    

3. Custom Wait Time Example

Using a custom wait time function to introduce more complex user behavior:

    
    import random
    from locust import HttpUser, task
    
    def custom_wait():
        return max(1, random.normalvariate(3, 1))  # Normal distribution (mean: 3s, stddev: 1s)
    
    class FastAPIUser(HttpUser):
        wait_time = custom_wait
    
        @task
        def get_root(self):
            self.client.get("/")
    
        @task
        def get_item(self):
            self.client.get("/items/99?q=custom")
    
    
    

Full Test Example

Combining all the above elements, here's a complete Locust test for your FastAPI app:

    
    from locust import HttpUser, task, between
    import random
    
    # Custom wait time function
    def custom_wait():
        return max(1, random.uniform(1, 3))  # Random wait time between 1 and 3 seconds
    
    class FastAPIUser(HttpUser):
        wait_time = custom_wait  # Use the custom wait time
    
        @task(3)
        def browse_homepage(self):
            """Simulates browsing the root endpoint."""
            self.client.get("/")
    
        @task(1)
        def browse_item(self):
            """Simulates fetching an item with ID and query parameter."""
            item_id = random.randint(1, 100)
            self.client.get(f"/items/{item_id}?q=test")
    
    

Running Locust for FastAPI

1. Run Your FastAPI App
Save the FastAPI app code in a file (e.g., main.py) and start the server:

    uvicorn main:app --reload

By default, the app will run on http://127.0.0.1:8000.

2. Run Locust
Save the Locust file as locustfile.py and start Locust:

    locust -f locustfile.py

3. Configure Locust
Open http://localhost:8089 in your browser and enter:

  • Host: http://127.0.0.1:8000
  • Number of users and spawn rate based on your testing requirements.

4. Run in Headless Mode (Optional)
Use the following command to run Locust in headless mode:

    locust -f locustfile.py --headless -u 50 -r 10 --host http://127.0.0.1:8000

-u 50: Simulate 50 users.

-r 10: Spawn 10 users per second.

Locust EP 1: Load Testing – Ensuring Application Reliability with Real-Time Examples and Metrics

14 November 2024 at 15:48

In today's fast-paced digital world, delivering a reliable and scalable application is key to providing a positive user experience.

One of the most effective ways to guarantee this is through load testing. This post will walk you through the fundamentals of load testing, real-time examples of its application, and crucial metrics to watch for.

What is Load Testing?

Load testing is a type of performance testing that simulates real-world usage of an application. By applying load to a system, testers observe how it behaves under peak and normal conditions. The primary goal is to identify any performance bottlenecks, ensure the system can handle expected user traffic, and maintain optimal performance.

Load testing answers these critical questions:

  • Can the application handle the expected user load?
  • How does performance degrade as the load increases?
  • What is the system's breaking point?

Why is Load Testing Important?

Without load testing, applications are vulnerable to crashes, slow response times, and unavailability, all of which can lead to a poor user experience, lost revenue, and brand damage. Proactive load testing allows teams to address issues before they impact end-users.

Real-Time Load Testing Examples

Let's explore some real-world examples that demonstrate the importance of load testing.

Example 1: E-commerce Website During a Sale Event

An online retailer preparing for a Black Friday sale knows that traffic will spike. They conduct load testing to simulate thousands of users browsing, adding items to their cart, and checking out simultaneously. By analyzing the system's response under these conditions, the retailer can identify weak points in the checkout process or database and make necessary optimizations.

Example 2: Video Streaming Platform Launch

A new streaming platform is preparing for launch, expecting millions of users. Through load testing, the team simulates high traffic, testing how well video streaming performs under maximum user load. This testing also helps check if CDN (Content Delivery Network) configurations are optimized for global access, ensuring minimal buffering and downtime during peak hours.

Example 3: Financial Services Platform During Market Hours

A trading platform experiences intense usage during market open and close hours. Load testing helps simulate these peak times, ensuring that real-time data updates, transactions, and account management work flawlessly. Testing for these scenarios helps avoid issues like slow trade executions and platform unavailability during critical trading periods.

Key Metrics to Monitor in Load Testing

Understanding key metrics is essential for interpreting load test results. Here are some critical metrics to focus on:

1. Response Time

  • Definition: The time taken by the system to respond to a request.
  • Why It Matters: Slow response times can frustrate users and indicate bottlenecks.
  • Example Thresholds: For websites, a response time below 2 seconds is considered acceptable.

2. Throughput

  • Definition: The number of requests processed per second.
  • Why It Matters: Throughput indicates how many concurrent users your application can handle.
  • Real-Time Use Case: In our e-commerce example, the retailer would track throughput to ensure the checkout process doesn't become a bottleneck.

3. Error Rate

  • Definition: The percentage of failed requests out of total requests.
  • Why It Matters: A high error rate could indicate application instability under load.
  • Real-Time Use Case: The trading platform monitors the error rate during market close, ensuring the system doesn't throw errors under peak trading load.

4. CPU and Memory Utilization

  • Definition: The percentage of CPU and memory resources used during the load test.
  • Why It Matters: High CPU or memory utilization can signal that the server may not handle additional load.
  • Real-Time Use Case: The video streaming platform tracks memory usage to prevent lag or interruptions in streaming as users increase.

5. Concurrent Users

  • Definition: The number of users active on the application at the same time.
  • Why It Matters: Concurrent users help you understand how much load the system can handle before performance starts degrading.
  • Real-Time Use Case: The retailer tests how many concurrent users can shop simultaneously without crashing the website.

6. Latency

  • Definition: The time it takes for a request to travel from the client to the server and back.
  • Why It Matters: High latency indicates network or processing delays that can slow down the user experience.
  • Real-Time Use Case: For a financial app, reducing latency ensures trades execute in near real-time, which is crucial for users during volatile market conditions.

7. 95th and 99th Percentile Response Times

  • Definition: The time within which 95% or 99% of requests are completed.
  • Why It Matters: These percentiles help identify outliers that may impact user experience.
  • Real-Time Use Case: The streaming service may analyze these percentiles to ensure smooth playback for most users, even under peak loads.

Best Practices for Effective Load Testing

  1. Set Clear Objectives: Define specific goals, such as the expected number of concurrent users or acceptable response times, based on the nature of the application.
  2. Use Realistic Load Scenarios: Create scenarios that mimic actual user behavior, including peak times, user interactions, and geographical diversity.
  3. Analyze Bottlenecks and Optimize: Use test results to identify and address performance bottlenecks, whether in the application code, database queries, or server configurations.
  4. Monitor in Real-Time: Track metrics like response time, throughput, and error rates in real-time to identify issues as they arise during the test.
  5. Repeat and Compare: Conduct multiple load tests to ensure consistent performance over time, especially after any significant update or release.

Load testing is crucial for building a resilient and scalable application. By using real-world scenarios and keeping a close eye on metrics like response time, throughput, and error rates, you can ensure your system performs well under load. Proactive load testing helps to deliver a smooth, reliable experience for users, even during peak times.

    โŒ
    โŒ