
Boost System Performance During Traffic Surges with Spike Testing

1 March 2025 at 06:17

Introduction

Spike testing is a type of performance testing that evaluates how a system responds to sudden, extreme increases in load. Unlike stress testing, which gradually increases the load, spike testing simulates abrupt surges in traffic to identify system vulnerabilities, such as crashes, slow response times, and resource exhaustion.

In this blog, we will explore spike testing in detail, covering its importance, methodology, and full implementation using K6.

Why Perform Spike Testing?

Spike testing helps you:

  • Determine system stability under unexpected traffic surges.
  • Identify bottlenecks that arise due to rapid load increases.
  • Assess auto-scaling capabilities of cloud-based infrastructures.
  • Measure response time degradation during high-demand spikes.
  • Ensure system recovery after the sudden load disappears.

Setting Up K6 for Spike Testing

Installing K6

# macOS
brew install k6  

# Ubuntu/Debian
sudo apt install k6  

# Using Docker
docker pull grafana/k6  

Choosing the Right Test Scenario

K6 provides different executors to simulate load patterns. For spike testing, we use:

  • ramping-arrival-rate → Gradually increases the request rate over time.
  • constant-arrival-rate → Maintains a fixed number of requests per second after the spike.

Example 1: Basic Spike Test

This test starts with low traffic, spikes suddenly, and then drops back to normal.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    spike_test: {
      executor: 'ramping-arrival-rate',
      startRate: 10, // Start with 10 requests/sec
      timeUnit: '1s',
      preAllocatedVUs: 100,
      maxVUs: 500,
      stages: [
        { duration: '30s', target: 10 },  // Low traffic
        { duration: '10s', target: 500 }, // Sudden spike
        { duration: '30s', target: 10 },  // Traffic drops
      ],
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • Starts with 10 requests per second for 30 seconds.
  • Spikes to 500 requests per second in 10 seconds.
  • Drops back to 10 requests per second.
  • Tests the system’s ability to handle and recover from traffic spikes.
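A spike test becomes more actionable when k6 can pass or fail the run automatically. Thresholds can be attached to the same options object; the limits below are illustrative, not recommendations:

```javascript
export let options = {
  scenarios: {
    // ...the spike_test scenario from above...
  },
  thresholds: {
    http_req_duration: ['p(95)<1000'], // 95% of requests must finish under 1s
    http_req_failed: ['rate<0.01'],    // less than 1% of requests may fail
  },
};
```

If a threshold is crossed, k6 exits with a non-zero status, which makes the spike test usable as a CI gate.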

Example 2: Spike Test with High User Load

This test simulates a spike in virtual users rather than just requests per second.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    user_spike: {
      executor: 'ramping-vus',
      stages: [
        { duration: '30s', target: 20 },  // Normal traffic
        { duration: '10s', target: 300 }, // Sudden spike in users
        { duration: '30s', target: 20 },  // Drop back to normal
      ],
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • Simulates a sudden increase in concurrent virtual users (VUs).
  • Helps test server stability, database handling, and auto-scaling.

Example 3: Spike Test on Multiple Endpoints

In real-world applications, multiple endpoints may experience spikes simultaneously. Here’s how to test different API routes.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    multiple_endpoint_spike: {
      executor: 'ramping-arrival-rate',
      startRate: 5,
      timeUnit: '1s',
      preAllocatedVUs: 200,
      maxVUs: 500,
      stages: [
        { duration: '20s', target: 10 },  // Normal traffic
        { duration: '10s', target: 300 }, // Spike across endpoints
        { duration: '20s', target: 10 },  // Traffic drop
      ],
    },
  },
};

export default function () {
  let urls = [
    'https://test-api.example.com/users',
    'https://test-api.example.com/orders',
    'https://test-api.example.com/products'
  ];
  
  let res = http.get(urls[Math.floor(Math.random() * urls.length)]);
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • Simulates traffic spikes across multiple API endpoints.
  • Helps identify which API calls suffer under extreme load.
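In real traffic, endpoints rarely receive equal shares of requests. A small helper can weight the random selection; this is plain JavaScript that could replace the uniform pick in the default function above (the URLs and weights are hypothetical):

```javascript
// Pick an endpoint with probability proportional to its weight.
const endpoints = [
  { url: 'https://test-api.example.com/users', weight: 6 },
  { url: 'https://test-api.example.com/orders', weight: 3 },
  { url: 'https://test-api.example.com/products', weight: 1 },
];

function pickEndpoint(list, rand = Math.random()) {
  const total = list.reduce((sum, e) => sum + e.weight, 0);
  let threshold = rand * total;
  for (const e of list) {
    threshold -= e.weight;
    if (threshold < 0) return e.url;
  }
  return list[list.length - 1].url; // rand === 1 edge case
}
```

Inside the k6 default function, `http.get(pickEndpoint(endpoints))` then sends roughly 60/30/10% of traffic to the three routes.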

Analyzing Test Results

After running the tests, K6 provides key performance metrics:

http_req_duration......: avg=350ms min=150ms max=3000ms
http_reqs..............: 10,000 requests
vus_max................: 500
errors.................: 2%

Key Metrics

  • http_req_duration → Measures response time impact.
  • vus_max → Peak virtual users during the spike.
  • errors → Percentage of failed requests due to overload.
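Beyond reading the CLI summary, k6 can hand the end-of-test data to a handleSummary function for custom reporting. A minimal sketch (shown without the export keyword so it reads as plain JavaScript; in an actual k6 script it must be exported):

```javascript
// Receives k6's end-of-test summary object and returns a map of
// { filename: content }; k6 writes each entry to disk instead of
// printing the default summary.
function handleSummary(data) {
  return {
    'summary.json': JSON.stringify(data, null, 2), // machine-readable report
  };
}
```

In a k6 script, declare it as `export function handleSummary(data)` alongside the default function.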

Best Practices for Spike Testing

  • Monitor application logs and database performance during the test.
  • Use auto-scaling mechanisms for cloud-based environments.
  • Combine spike tests with stress testing for better insights.
  • Analyze error rates and recovery time to ensure system stability.

Spike testing is crucial for ensuring application stability under sudden, unpredictable traffic surges. Using K6, we can simulate spikes in both requests per second and concurrent users to identify bottlenecks before they impact real users.

How Stress Testing Can Make Systems More Attractive?

1 March 2025 at 06:06

Introduction

Stress testing is a critical aspect of performance testing that evaluates how a system performs under extreme loads. Unlike load testing, which simulates expected user traffic, stress testing pushes a system beyond its limits to identify breaking points and measure recovery capabilities.

In this blog, we will explore stress testing using K6, an open-source load testing tool, with detailed explanations and full examples to help you implement stress testing effectively.

Why Stress Testing?

Stress testing helps you:

  • Identify the maximum capacity of your system.
  • Detect potential failures and bottlenecks.
  • Measure system stability and recovery under high loads.
  • Ensure infrastructure can handle unexpected spikes in traffic.

Setting Up K6 for Stress Testing

Installing K6

# macOS
brew install k6  

# Ubuntu/Debian
sudo apt install k6  

# Using Docker
docker pull grafana/k6  

Understanding Stress Testing Scenarios

K6 provides various executors to simulate different traffic patterns. For stress testing, we mainly use:

  1. ramping-vus – Gradually increases virtual users to a high level.
  2. constant-vus – Maintains a fixed high number of virtual users.
  3. A spike pattern (built with ramping-vus stages) – Simulates a sudden surge in traffic.

Example 1: Basic Stress Test with Ramping VUs

This script gradually increases the number of virtual users, holds a peak load, and then reduces it.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '1m', target: 100 }, // Ramp up to 100 users in 1 min
    { duration: '3m', target: 100 }, // Stay at 100 users for 3 min
    { duration: '1m', target: 0 },   // Ramp down to 0 users
  ],
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • The test starts with 0 users and ramps up to 100 users in 1 minute.
  • Holds 100 users for 3 minutes.
  • Gradually reduces load to 0 users.
  • The sleep(1) function helps simulate real user behavior between requests.
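During a stress test it is often better to stop as soon as the system is clearly failing rather than keep hammering it. k6 thresholds accept an object form with abortOnFail for exactly this; the 5% limit below is illustrative:

```javascript
export let options = {
  stages: [
    { duration: '1m', target: 100 },
    { duration: '3m', target: 100 },
    { duration: '1m', target: 0 },
  ],
  thresholds: {
    // Abort the whole run early if more than 5% of requests fail
    http_req_failed: [{ threshold: 'rate<0.05', abortOnFail: true }],
  },
};
```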

Example 2: Constant High Load Test

This test maintains a consistently high number of virtual users.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  vus: 200, // 200 virtual users
  duration: '5m', // Run the test for 5 minutes
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • 200 virtual users are constantly hitting the endpoint for 5 minutes.
  • Helps evaluate system performance under sustained high traffic.

Example 3: Spike Testing (Sudden Traffic Surge)

This test simulates a sudden spike in traffic, followed by a drop.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '10s', target: 10 },  // Start with 10 users
    { duration: '10s', target: 500 }, // Spike to 500 users
    { duration: '10s', target: 10 },  // Drop back to 10 users
  ],
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • Starts with 10 users.
  • Spikes suddenly to 500 users in 10 seconds.
  • Drops back to 10 users.
  • Helps determine how the system handles sudden surges in traffic.

Analyzing Test Results

After running the tests, K6 provides detailed statistics:

checks..................: 100.00% ✓ 5000 ✗ 0
http_req_duration......: avg=300ms min=200ms max=2000ms
http_reqs..............: 5000 requests
vus_max................: 500

Key Metrics to Analyze

  • http_req_duration → Measures response time.
  • vus_max → Maximum concurrent virtual users.
  • http_reqs → Total number of requests.
  • errors → Number of failed requests.

Stress testing is vital to ensure application stability and scalability. Using K6, we can simulate different stress scenarios like ramping load, constant high load, and spikes to identify system weaknesses before they affect users.

Achieving Better User Engagement via Realistic Load Testing in K6

1 March 2025 at 05:55

Introduction

Load testing is essential to evaluate how a system behaves under expected and peak loads. Traditionally, we rely on metrics like requests per second (RPS), response time, and error rates. However, an insightful approach called Average Load Testing has been discussed recently. This blog explores that concept in detail, providing practical examples to help you apply it effectively.

Understanding Average Load Testing

Average Load Testing focuses on simulating real-world load patterns rather than traditional peak load tests. Instead of sending a fixed number of requests per second, this approach:

  • Generates requests based on the average concurrency over time.
  • More accurately reflects real-world traffic patterns.
  • Helps identify performance bottlenecks in a realistic manner.
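The target rate for such a test is best derived from historical traffic rather than guessed. A hypothetical back-of-the-envelope calculation (the daily request count is made up):

```javascript
// Derive an average request rate from historical traffic volume.
const requestsPerDay = 864000;        // e.g. taken from access logs (assumed)
const secondsPerDay = 24 * 60 * 60;   // 86400
const avgRps = requestsPerDay / secondsPerDay;
console.log(avgRps); // → 10 requests/sec
```

That 10 requests/sec is exactly the rate used in the first example below.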

Setting Up Load Testing with K6

K6 is an excellent tool for implementing Average Load Testing. Let’s go through practical examples of setting up such tests.

Install K6

brew install k6  # macOS
sudo apt install k6  # Ubuntu/Debian
docker pull grafana/k6  # Using Docker

Example 1: Basic K6 Script for Average Load Testing

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    avg_load: {
      executor: 'constant-arrival-rate',
      rate: 10, // 10 requests per second
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 20,
      maxVUs: 50,
    },
  },
};

export default function () {
  let res = http.get('https://test-api.example.com');
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The constant-arrival-rate executor ensures a steady request rate.
  • rate: 10 sends 10 requests per second.
  • duration: '2m' runs the test for 2 minutes.
  • preAllocatedVUs: 20 and maxVUs: 50 define virtual users needed to sustain the load.
  • The script logs response times to the console.
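The preAllocatedVUs and maxVUs figures can be estimated with Little’s law: the number of VUs an arrival-rate scenario needs is roughly the request rate multiplied by the average iteration duration (response time plus sleep). With assumed numbers matching this example:

```javascript
// Estimate VUs needed for a constant-arrival-rate scenario (Little's law).
const ratePerSec = 10;        // target requests per second
const avgIterationSec = 1.5;  // ~0.5s response time + 1s sleep (assumed)
const vusNeeded = Math.ceil(ratePerSec * avgIterationSec);
console.log(vusNeeded); // → 15, so preAllocatedVUs: 20 leaves some headroom
```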

Example 2: Testing with Varying Load

To better reflect real-world scenarios, we can use the ramping-arrival-rate executor to simulate gradual increases in traffic:

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    ramping_load: {
      executor: 'ramping-arrival-rate',
      startRate: 5, // Start with 5 requests/sec
      timeUnit: '1s',
      preAllocatedVUs: 50,
      maxVUs: 100,
      stages: [
        { duration: '1m', target: 20 },
        { duration: '2m', target: 50 },
        { duration: '3m', target: 100 },
      ],
    },
  },
};

export default function () {
  let res = http.get('https://test-api.example.com');
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The ramping-arrival-rate gradually increases requests per second over time.
  • The stages array defines a progression from 5 to 100 requests/sec over 6 minutes.
  • Logs response times to help analyze system performance.

Example 3: Load Testing with Multiple Endpoints

In real applications, multiple endpoints are often tested simultaneously. Here’s how to test different API routes

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  scenarios: {
    multiple_endpoints: {
      executor: 'constant-arrival-rate',
      rate: 15, // 15 requests per second
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 30,
      maxVUs: 60,
    },
  },
};

export default function () {
  let urls = [
    'https://test-api.example.com/users',
    'https://test-api.example.com/orders',
    'https://test-api.example.com/products'
  ];
  
  let res = http.get(urls[Math.floor(Math.random() * urls.length)]);
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The script randomly selects an API endpoint to test different routes.
  • Uses check to ensure status codes are 200.
  • Logs response times for deeper insights.

Analyzing Results

To analyze test results, you can store logs or metrics in a database or monitoring tool and visualize trends over time. Some popular options include:

  • Prometheus for time-series data storage.
  • InfluxDB for handling large-scale performance metrics.
  • ELK Stack (Elasticsearch, Logstash, Kibana) for log-based analysis.
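Depending on your k6 version, results can also be streamed straight to one of these backends from the command line via the --out flag, or dumped as a JSON summary; the InfluxDB address below is a placeholder:

```shell
# Stream metrics to a local InfluxDB instance while the test runs
k6 run --out influxdb=http://localhost:8086/k6 script.js

# Or export the end-of-test summary as JSON for later analysis
k6 run --summary-export=summary.json script.js
```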

Average Load Testing provides a more realistic way to measure system performance. By leveraging K6, you can create flexible, real-world simulations to optimize your applications effectively.


Learning Notes #72 – Metrics in K6 Load Testing

12 February 2025 at 17:15

In our previous blog on K6, we ran a script.js file to test an API. As output, we received a set of metrics in the CLI.

In this blog, we are going to delve deeper into understanding metrics in K6.

1. HTTP Request Metrics

http_reqs

  • Description: Total number of HTTP requests initiated during the test.
  • Usage: Indicates the volume of traffic generated. A high number of requests can simulate real-world usage patterns.

http_req_duration

  • Description: Time taken for a request to receive a response (in milliseconds).
  • Components:
    • http_req_connecting: Time spent establishing a TCP connection.
    • http_req_tls_handshaking: Time for completing the TLS handshake.
    • http_req_waiting (TTFB): Time spent waiting for the first byte from the server.
    • http_req_sending: Time taken to send the HTTP request.
    • http_req_receiving: Time spent receiving the response data.
  • Usage: Identifies performance bottlenecks like slow server responses or network latency.

http_req_failed

  • Description: Proportion of failed HTTP requests (ratio between 0 and 1).
  • Usage: Highlights reliability issues. A high failure rate indicates problems with server stability or network errors.

2. VU (Virtual User) Metrics

vus

  • Description: Number of active Virtual Users at any given time.
  • Usage: Reflects concurrency level. Helps analyze how the system performs under varying loads.

vus_max

  • Description: Maximum number of Virtual Users during the test.
  • Usage: Defines the peak load. Useful for stress testing and capacity planning.

3. Iteration Metrics

iterations

  • Description: Total number of script iterations executed.
  • Usage: Measures the test’s progress and workload. Useful in endurance (soak) testing to observe long-term stability.

iteration_duration

  • Description: Time taken to complete one iteration of the script.
  • Usage: Helps identify performance degradation over time, especially under sustained load.

4. Data Transfer Metrics

data_sent

  • Description: Total amount of data sent over the network (in bytes).
  • Usage: Monitors network usage. High data volumes might indicate inefficient request payloads.

data_received

  • Description: Total data received from the server (in bytes).
  • Usage: Detects bandwidth usage and helps identify heavy response payloads.

5. Custom Metrics (Optional)

While K6 provides default metrics, you can define custom metrics like Counters, Gauges, Rates, and Trends for specific business logic or technical KPIs.

Example

import { Counter } from 'k6/metrics';

let myCounter = new Counter('my_custom_metric');

export default function () {
  myCounter.add(1); // Increment the custom metric
}
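Alongside a Counter, a Trend records a distribution of values and reports min/max/avg/percentiles for it. For example, you could track the server wait time (TTFB) per request; this sketch runs only under k6, and the URL is a placeholder:

```javascript
import http from 'k6/http';
import { Trend } from 'k6/metrics';

let waitingTime = new Trend('waiting_time');

export default function () {
  let res = http.get('https://test-api.example.com');
  waitingTime.add(res.timings.waiting); // record TTFB for this request
}
```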

Interpreting Metrics for Performance Optimization

  • Low http_req_duration + High http_reqs = Good scalability.
  • High http_req_failed = Investigate server errors or timeouts.
  • High data_sent / data_received = Optimize payloads.
  • Increasing iteration_duration over time = Possible memory leaks or resource exhaustion.

Learning Notes #69 – Getting Started with k6: Writing Your First Load Test

5 February 2025 at 15:38

Performance testing is a crucial part of ensuring the stability and scalability of web applications. k6 is a modern, open-source load testing tool that allows developers and testers to script and execute performance tests efficiently. In this blog, we’ll explore the basics of k6 and write a simple test script to get started.

What is k6?

k6 is a load testing tool designed for developers. It is written in Go but uses JavaScript for scripting tests. Key features include:

  • High performance with minimal resource consumption
  • JavaScript-based scripting
  • CLI-based execution with detailed reporting
  • Integration with monitoring tools like Grafana and Prometheus

Installation

For installation check : https://grafana.com/docs/k6/latest/set-up/install-k6/

Writing a Basic k6 Test

A k6 test is written in JavaScript. Here’s a simple script to test an API endpoint:


import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  vus: 10, // Number of virtual users
  duration: '10s', // Test duration
};

export default function () {
  let res = http.get('https://api.restful-api.dev/objects');
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
  sleep(1); // Simulate user wait time
}

Running the Test

Save the script as script.js and execute the test using the following command:

k6 run script.js

Understanding the Output

After running the test, k6 will provide a summary including:

1. HTTP requests: Total number of requests made during the test.

2. Response time metrics:

  • min: The shortest response time recorded.
  • max: The longest response time recorded.
  • avg: The average response time of all requests.
  • p(90), p(95), p(99): Percentile values indicating response time distribution.

3. Checks: Number of checks passed or failed, such as status code validation.

4. Virtual users (VUs):

  • vus_max: The maximum number of virtual users active at any time.
  • vus: The current number of active virtual users.

5. Request Rate (RPS – Requests Per Second): The number of requests handled per second.

6. Failures: Number of errors or failed requests due to timeouts or unexpected HTTP status codes.
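The percentile values deserve a closer look: p(95) means 95% of requests completed at or below that duration. As a rough illustration, here is one common definition (nearest-rank) in plain JavaScript; k6’s internal calculation may differ slightly:

```javascript
// Nearest-rank percentile over raw response times (illustrative data).
function percentile(durations, p) {
  const sorted = [...durations].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

const samples = [120, 150, 180, 200, 250, 300, 400, 550, 800, 2000];
console.log(percentile(samples, 50)); // → 250 (median)
console.log(percentile(samples, 95)); // → 2000 (dominated by the slowest tail)
```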

Next Steps

Once you’ve successfully run your first k6 test, you can explore:

  • Load testing different APIs and endpoints
  • Running distributed tests
  • Exporting results to Grafana
  • Integrating k6 with CI/CD pipelines

k6 is a powerful tool that helps developers and QA engineers ensure their applications perform under load. Stay tuned for more in-depth tutorials on advanced k6 features!
