
Locust EP 2: Understanding Locust Wait Times with Complete Examples

17 November 2024 at 07:43

Locust is an excellent load testing tool, enabling developers to simulate concurrent user traffic on their applications. One of its powerful features is wait times, which simulate the realistic user think time between consecutive tasks. By customizing wait times, you can emulate user behavior more effectively, making your tests reflect actual usage patterns.

In this blog, we’ll cover:

  1. What wait times are in Locust.
  2. Built-in wait time options.
  3. Creating custom wait times.
  4. A full example with instructions to run the test.

What Are Wait Times in Locust?

In real-world scenarios, users don’t interact with applications continuously. After performing an action (e.g., submitting a form), they often pause before the next action. This pause is called a wait time in Locust, and it plays a crucial role in mimicking real-life user behavior.

Locust provides several ways to define these wait times within your test scenarios.

FastAPI App Overview

Here’s the FastAPI app that we’ll test,


from fastapi import FastAPI

# Create a FastAPI app instance
app = FastAPI()

# Define a route with a GET method
@app.get("/")
def read_root():
    return {"message": "Welcome to FastAPI!"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: str = None):
    return {"item_id": item_id, "q": q}

Locust Examples for FastAPI

1. Constant Wait Time Example

Here, we’ll simulate constant pauses between user requests.


from locust import HttpUser, task, constant

class FastAPIUser(HttpUser):
    wait_time = constant(2)  # Wait for 2 seconds between requests

    @task
    def get_root(self):
        self.client.get("/")  # Simulates a GET request to the root endpoint

    @task
    def get_item(self):
        self.client.get("/items/42?q=test")  # Simulates a GET request with path and query parameters

2. Between Wait Time Example

Simulating random pauses between requests.


from locust import HttpUser, task, between

class FastAPIUser(HttpUser):
    wait_time = between(1, 5)  # Random wait time between 1 and 5 seconds

    @task(3)  # Weighted task: this runs 3 times more often
    def get_root(self):
        self.client.get("/")

    @task(1)
    def get_item(self):
        self.client.get("/items/10?q=locust")
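
Under the hood, between(1, 5) simply draws a uniform random sample before each task. A minimal pure-Python sketch of the equivalent behavior (using only the standard random module, not Locust itself — the user parameter stands in for the Locust user instance):

```python
import random

def between(min_wait, max_wait):
    # Mirrors the idea behind locust.between: return a callable that,
    # given the user instance, samples a uniform wait in
    # [min_wait, max_wait] seconds.
    return lambda user: random.uniform(min_wait, max_wait)

wait_time = between(1, 5)
sample = wait_time(None)  # the user instance is unused in this sketch
assert 1 <= sample <= 5
```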

3. Custom Wait Time Example

Using a custom wait time function to introduce more complex user behavior.


import random
from locust import HttpUser, task

def custom_wait(user):  # Locust passes the user instance when calling wait_time
    return max(1, random.normalvariate(3, 1))  # Normal distribution (mean: 3s, stddev: 1s)

class FastAPIUser(HttpUser):
    wait_time = custom_wait

    @task
    def get_root(self):
        self.client.get("/")

    @task
    def get_item(self):
        self.client.get("/items/99?q=custom")
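
You can sanity-check the clamping behavior outside Locust by sampling the function directly (a pure-Python sketch; the user parameter stands in for the Locust user instance that wait_time callables receive):

```python
import random

def custom_wait(user):
    # Clamp a normally distributed sample (mean 3 s, stddev 1 s) to at least 1 s
    return max(1, random.normalvariate(3, 1))

samples = [custom_wait(None) for _ in range(1000)]
assert all(s >= 1 for s in samples)   # the clamp guarantees a minimum pause
print(sum(samples) / len(samples))    # sample mean lands close to 3
```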


Full Test Example

Combining all the above elements, here’s a complete Locust test for your FastAPI app.


from locust import HttpUser, task, between
import random

# Custom wait time function (Locust passes the user instance when calling wait_time)
def custom_wait(user):
    return max(1, random.uniform(1, 3))  # Random wait time between 1 and 3 seconds

class FastAPIUser(HttpUser):
    wait_time = custom_wait  # Use the custom wait time

    @task(3)
    def browse_homepage(self):
        """Simulates browsing the root endpoint."""
        self.client.get("/")

    @task(1)
    def browse_item(self):
        """Simulates fetching an item with ID and query parameter."""
        item_id = random.randint(1, 100)
        self.client.get(f"/items/{item_id}?q=test")

Running Locust for FastAPI

  1. Run Your FastAPI App
    Save the FastAPI app code in a file (e.g., main.py) and start the server

uvicorn main:app --reload

By default, the app will run on http://127.0.0.1:8000.

2. Run Locust
Save the Locust file as locustfile.py and start Locust.


locust -f locustfile.py

3. Configure Locust
Open http://localhost:8089 in your browser and enter:

  • Host: http://127.0.0.1:8000
  • Number of users and spawn rate based on your testing requirements.

4. Run in Headless Mode (Optional)
Use the following command to run Locust in headless mode


locust -f locustfile.py --headless -u 50 -r 10 --host http://127.0.0.1:8000

  • -u 50: Simulate 50 users.
  • -r 10: Spawn 10 users per second.

Docker EP 4: The Digital Tea Kadai – Client-Server Architecture & Docker

12 August 2024 at 14:25

The Client-Server Architecture

Once upon a time in the electronic city of Bangalore, there was a popular digital tea kadai. This cafe was unique because it didn’t serve traditional coffee or pastries. Instead, it served data and services to its customers: developers, businesses, and tech enthusiasts who were hungry for information and resources.

The Client:

One day, a young developer named Dinesh walked into Tea Kadai. He was working on a new app and needed to fetch some data from the cafe’s servers. In this story, Dinesh represents the client. As a client, his role was to request specific services and data from the cafe. He approached the counter and handed over his order slip, detailing what he needed.

The Server:

Behind the counter was Syed, the tea master, representing the server. Syed’s job was to take Dinesh’s request, process it, and deliver the requested data back to him.

Syed had access to a vast array of resources stored in the cafe’s back room, where all the data was kept. When Dinesh made his request, Syed quickly went to the back, gathered the data, and handed it back to Dinesh.

The client-server architecture at Tea Kadai worked seamlessly.

Dinesh, as the client, could make requests whenever he needed, and

Syed, as the server, would respond by providing the requested data.

This interaction was efficient, allowing many clients to be served by a single server at the cafe.

Docker’s Client-Server Technology

As Tea Kadai grew in popularity, it decided to expand its services to deliver data more efficiently and flexibly. To do this, they adopted a new technology called Docker, which helped them manage their operations more effectively.

Docker Client:

In the world of Docker at Tea Kadai, Dinesh still played the role of the client. But now, instead of just making simple data requests, he could request entire environments where he could test and run his applications.

These environments, called containers, were like personalized booths in the cafe where Dinesh could have his own setup with everything he needed to work on his app.

Dinesh used a special tool called the Docker Client to place his order. With this tool, he could specify exactly what he wanted in his container: the operating system, libraries, and applications needed for his app. The Docker Client was his interface for communicating with the cafe’s new backend system.

Docker Server (Daemon):

Behind the scenes, Tea Kadai had installed a powerful system known as the Docker Daemon, which acted as the server in this setup. The Docker Daemon was responsible for creating, running, and managing the containers requested by clients like Dinesh.

When Dinesh sent his container request using the Docker Client, the Docker Daemon received it, built the container environment, and handed it back to Dinesh for use.

Docker Images:

The Tea Kadai had a collection of premade recipes called Docker Images. These images were like blueprints for creating containers, containing all the necessary ingredients and instructions.

When Dinesh requested a new container, the Docker Daemon used these images to quickly prepare the environment.

Flexibility and Isolation:

The beauty of Docker at Tea Kadai was that it allowed multiple clients like Dinesh to have their containers running simultaneously, each isolated from the others. This isolation ensured that one client’s work wouldn’t interfere with another’s, just like having separate booths in the cafe for each customer. Dinesh could run, test, and even destroy his environment without affecting anyone else.

At the end,

In the vibrant city of Bangalore, Tea Kadai thrived by adopting client-server architecture and Docker’s client-server technology. This approach allowed them to efficiently serve data and services while providing flexible, isolated environments for their clients. Dinesh and many others continued to come to Tea Kadai, knowing they could always get what they needed in a reliable and innovative way.

Exploring TAPAS: Analyzing Clinical Trial Data with Transformers

By: angu10
25 September 2023 at 04:31

Introduction:

Welcome to the world of Transformers, where cutting-edge natural language processing models are revolutionizing the way we interact with data. In this series of blogs, I will embark on a journey to explore and understand the capabilities of TAPAS (Table Parser), a model designed to extract valuable insights from tabular data. To kick things off, I'll delve into the basics of TAPAS and see it in action on a real-world dataset.

Understanding TAPAS:

TAPAS is a powerful language model developed by Google that specializes in processing tabular data. Unlike traditional models, TAPAS handles structured data natively, making it a game-changer for tasks involving tables and spreadsheets. Built on BERT, TAPAS accepts inputs of up to 512 tokens, so larger tables must be truncated or sampled before querying.

My Dataset:

For this introductory exploration, I will work with a clinical trial dataset [ClinicalTrials.gov]. To start, I load the dataset and create a DataFrame containing the "label" column, which holds information about gender distribution in clinical trials. I'll be using this data to ask questions and obtain insights.

from transformers import TapasTokenizer, TapasForQuestionAnswering
import pandas as pd
import datasets

# Load the dataset (only once)
dataset = datasets.load_dataset("Kira-Asimov/gender_clinical_trial")

# Create the clinical_trials_data DataFrame with the "id" and "label" columns (only once)
clinical_trials_data = pd.DataFrame({
    "id": dataset["train"]["id"],
    "label": dataset["train"]["label"],
})

# TapasTokenizer expects every table cell to be a string
clinical_trials_data = clinical_trials_data.astype(str).head(100)
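
One practical detail worth flagging (based on the transformers TapasTokenizer, which raises an error when table cells are not strings): cast the DataFrame to str before tokenizing. A minimal pandas sketch:

```python
import pandas as pd

# TapasTokenizer expects every table cell to be a string,
# so numeric columns such as ids must be cast first.
table = pd.DataFrame({"id": [1, 2], "label": ["Male", "Female"]})
table = table.astype(str)

assert table.iloc[0, 0] == "1"  # the integer id is now a string
```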


Asking Questions with TAPAS:

The magic of TAPAS begins when I start asking questions about our data. In this example, I want to know how many records are in the dataset and how many of them are gender-specific (Male and Female). I construct queries like:

"How many records are in total?"
"How many 'Male' only gender studies are in total?"
"How many 'Female' only gender studies are in total?"

Using TAPAS to Answer Questions:

I utilize the "google/tapas-base-finetuned-wtq" model and its associated tokenizer to process our questions and tabular data. TAPAS tokenizes the data, extracts answers, and even performs aggregations when necessary.

counts = {}
answers = []

def TAPAS_model_learning(clinical_trials_data):
    # Load the model and tokenizer once, outside the query loop
    model_name = "google/tapas-base-finetuned-wtq"
    model = TapasForQuestionAnswering.from_pretrained(model_name)
    tokenizer = TapasTokenizer.from_pretrained(model_name)

    queries = [
        "How many records are in total ?",
        "How many 'Male' only gender studies are in total ?",
        "How many 'Female' only gender studies are in total ?",
    ]

    for query in queries:
        # Tokenize the query and table
        inputs = tokenizer(table=clinical_trials_data, queries=query, padding="max_length", return_tensors="pt", truncation=True)

        # Get the model's output
        outputs = model(**inputs)
        predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
            inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
        )

        # Initialize variables to store answers for the current query
        current_answers = []

        # Count the number of cells in the answer coordinates
        count = 0
        for coordinates in predicted_answer_coordinates:
            count += len(coordinates)
            # Collect the cell values for the current answer
            cell_values = []
            for coordinate in coordinates:
                cell_values.append(clinical_trials_data.iat[coordinate])

            current_answers.append(", ".join(cell_values))

        # Check if there are no matching cells for the query
        if count == 0:
            current_answers = ["No matching cells"]
        counts[query] = count
        answers.append(current_answers)
    return counts, answers
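
A note on the predicted_aggregation_indices returned by convert_logits_to_predictions, which the loop above only uses implicitly: for the WTQ fine-tuned checkpoint, the indices conventionally map 0/1/2/3 to NONE/SUM/AVERAGE/COUNT (this mapping is an assumption based on the model card, not something shown in the code). For "how many" questions the model typically predicts COUNT, so the number of selected cells is the answer. A pure-Python sketch of interpreting the output:

```python
# Conventional aggregation mapping for google/tapas-base-finetuned-wtq
# (assumption based on the model card)
aggregation_labels = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"}

def interpret(cell_values, aggregation_index):
    """Combine the selected cells with the predicted aggregation operator."""
    op = aggregation_labels[aggregation_index]
    if op == "COUNT":
        return str(len(cell_values))
    if op == "NONE":
        return ", ".join(cell_values)
    return f"{op} of {', '.join(cell_values)}"

print(interpret(["Male", "Male", "Male"], 3))  # prints 3
```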

Evaluating TAPAS Performance:

Now, let's see how well TAPAS performs in answering our questions. I have expected answers for each question variation, and I calculate the error percentage to assess the model's accuracy.

# Prepare your variations of the same question and their expected answers
question_variations = {
    "How many records are in total ?": 100,
    "How many 'Male' only gender studies are in total ?": 3,
    "How many 'Female' only gender studies are in total ?": 9,
}



# Use TAPAS to predict the answer based on your tabular data and the question
predicted_count, predicted_answer = TAPAS_model_learning(clinical_trials_data)
print(predicted_count)

# Compare each predicted answer against the expected answer
for key, value in predicted_count.items():
    error = question_variations[key] - value

    # Calculate the error percentage
    error_percentage = (error / question_variations[key]) * 100

    # Print the results
    print(f"{key}: Model Value: {value}, Expected Value: {question_variations[key]}, Error Percentage: {error_percentage:.2f}%")
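
As a quick arithmetic check of the error formula, the numbers reported in the results below can be reproduced directly:

```python
def error_percentage(expected, predicted):
    # Same formula as the evaluation loop: (expected - predicted) / expected * 100
    return (expected - predicted) / expected * 100

print(f"{error_percentage(100, 69):.2f}%")  # total-records question -> 31.00%
print(f"{error_percentage(9, 2):.2f}%")     # 'Female' only question -> 77.78%
```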

Results and Insights:

The output reveals how TAPAS handled our queries:

For the question "How many records are in total?", TAPAS predicted 69 records, with an error percentage of 31.00% compared to the expected value of 100 records.

For the question "How many 'Male' only gender studies are in total?", TAPAS correctly predicted 3 records, with a perfect match to the expected value.

For the question "How many 'Female' only gender studies are in total?", TAPAS predicted 2 records, with a significant error percentage of 77.78% compared to the expected value of 9 records.

Conclusion and Future Exploration:

In this first blog of our TAPAS exploration series, I introduced you to the model's capabilities and showcased its performance on a real dataset. I observed both accurate and less accurate predictions, highlighting the importance of understanding and fine-tuning the model for specific tasks.

In our future blogs, I will delve deeper into TAPAS, exploring its architecture, fine-tuning techniques, and strategies for improving its accuracy on tabular data. Stay tuned as I unlock the full potential of TAPAS for data analysis and insights.

Learning Fundamentals of Linux from scratch day-2 : Basic shell commands

6 February 2024 at 05:15

Today, in session 2 of the Kaniyam Linux course (https://kaniyam.com/linux-course-feb-2024/), I learnt basic shell commands.

ls #lists files and folders (not hidden)
ls -a #also lists hidden files
ls -l #long listing
ls -al #long listing with hidden files
ls -h #human readable sizes (most useful combined with -l)
ls -lh #long listing + human readable sizes
ls -lS #long listing sorted by file size, largest first
ls -lt #most recently modified file at the top
ls -R #recursive listing
date --date="3 years ago" #prints the date from three years ago
cat filename.txt #view a file
cat > sample.txt #write to a file from stdin, end with Ctrl+D
cat sample1.txt sample1.txt sample2.txt #cat can concatenate and display multiple files
history | head #displays the first ten commands (only the last 1000 are stored by default)
history | tail #displays the last ten commands from history
history -d 1459 #deletes the command with that event number from history
rm file.txt #removes this file
rm -i file.txt #asks for confirmation before removing (interactive)
rm -r directory1/ #recursively deletes a directory and all its contents
rm *.txt #deletes all files with that extension (* matches any name)
man ls #manual page for a given command
man history
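
The commands above can be tried safely in a scratch directory; a short session (assuming a standard GNU coreutils environment):

```shell
# Work in a throwaway directory so rm -r is safe to practice
mkdir -p /tmp/linux-day2-demo
cd /tmp/linux-day2-demo

# Create a file with redirection, then inspect it
printf 'hello\nworld\n' > sample.txt
cat sample.txt          # view the file
ls -lh                  # long listing with human-readable sizes

# Clean up the whole directory recursively
cd /tmp
rm -r /tmp/linux-day2-demo
```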


She, the Beautiful One…

By: Gowtham G
3 February 2024 at 07:14
In the beauty of the month of October,
is she the only one? so
I wondered.
No, said my heart.
Why? I
asked..
With a smile came
the reply...

-
Gowtham.Ko

(02 – 02 – 2024) at night…

