
Fix – 1:

Searching for this error led to https://stackoverflow.com/questions/26356268/call-to-undefined-function-imagecreatefromjpeg-and-gd-enabled,

which says that the GD library for PHP has to be installed.

We had already installed it on the server with

sudo apt install php-gd

Next, we checked the PHP version.

php -v
PHP 8.1.31 (cli) (built: Nov 21 2024 13:10:15) (NTS)

and installed the matching GD package for that version:

sudo apt install php8.1-gd

Then we restarted Apache:

sudo systemctl restart apache2
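
To confirm that the extension was actually loaded after the restart, the PHP module list can be checked (a quick sanity check; output may vary by setup):

php -m | grep -i gd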

That’s all.

Then we unpublished the catalog, re-uploaded the cover images, and published it again.

Now, the thumbnails are generated.

But they are too small.

Deploying a Scalable AWS Infrastructure with VPC, ALB, and Target Groups Using Terraform

Introduction
In this blog, we will walk through the process of deploying a scalable AWS infrastructure using Terraform. The setup includes:

  • A VPC with public and private subnets
  • An Internet Gateway for public access
  • Application Load Balancers (ALBs) for distributing traffic
  • Target Groups and EC2 instances for handling incoming requests

By the end of this guide, you'll have a highly available setup with proper networking, security, and load balancing.

Step 1: Creating a VPC with Public and Private Subnets
The first step is to define our Virtual Private Cloud (VPC) with four subnets (two public, two private) spread across multiple Availability Zones.
Terraform Code: vpc.tf

resource "aws_vpc" "main_vpc" {
  cidr_block = "10.0.0.0/16"
}
# Public Subnet 1 - ap-south-1a
resource "aws_subnet" "public_subnet_1" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "ap-south-1a"
  map_public_ip_on_launch = true
}
# Public Subnet 2 - ap-south-1b
resource "aws_subnet" "public_subnet_2" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "ap-south-1b"
  map_public_ip_on_launch = true
}
# Private Subnet 1 - ap-south-1a
resource "aws_subnet" "private_subnet_1" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "ap-south-1a"
}
# Private Subnet 2 - ap-south-1b
resource "aws_subnet" "private_subnet_2" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = "10.0.4.0/24"
  availability_zone = "ap-south-1b"
}
# Internet Gateway for Public Access
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main_vpc.id
}
# Public Route Table
resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.main_vpc.id
}
resource "aws_route" "internet_access" {
  route_table_id         = aws_route_table.public_rt.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.igw.id
}
resource "aws_route_table_association" "public_assoc_1" {
  subnet_id      = aws_subnet.public_subnet_1.id
  route_table_id = aws_route_table.public_rt.id
}
resource "aws_route_table_association" "public_assoc_2" {
  subnet_id      = aws_subnet.public_subnet_2.id
  route_table_id = aws_route_table.public_rt.id
}

This configuration ensures that our public subnets can access the internet, while our private subnets remain isolated.

Step 2: Setting Up Security Groups
Next, we define security groups to control access to our ALBs and EC2 instances.
Terraform Code: security_groups.tf

resource "aws_security_group" "alb_sg" {
  vpc_id = aws_vpc.main_vpc.id
  # Allow HTTP and HTTPS traffic to ALB
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  # Allow outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

This allows public access to the ALB but restricts other traffic.
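
The post attaches this same security group to both ALBs and doesn't define one for the instances. A common companion pattern, sketched here as an assumption rather than part of the original setup, is a separate EC2 security group that only accepts traffic from the ALB's security group:

resource "aws_security_group" "ec2_sg" {
  vpc_id = aws_vpc.main_vpc.id

  # Allow HTTP only from the ALB security group
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb_sg.id]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

The instances in Step 5 would then reference it through vpc_security_group_ids.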

Step 3: Creating the Application Load Balancers (ALB)
Now, let's define two ALBs: one public and one private.
Terraform Code: alb.tf

# Public ALB
resource "aws_lb" "public_alb" {
  name               = "public-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_sg.id]
  subnets           = [aws_subnet.public_subnet_1.id, aws_subnet.public_subnet_2.id]
}
# Private ALB
resource "aws_lb" "private_alb" {
  name               = "private-alb"
  internal           = true
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_sg.id]
  subnets           = [aws_subnet.private_subnet_1.id, aws_subnet.private_subnet_2.id]
}

This ensures redundancy and distributes traffic across different subnets.

Step 4: Creating Target Groups for EC2 Instances
Each ALB needs target groups to route traffic to EC2 instances.
Terraform Code: target_groups.tf

resource "aws_lb_target_group" "public_tg" {
  name     = "public-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main_vpc.id
}
resource "aws_lb_target_group" "private_tg" {
  name     = "private-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main_vpc.id
}

These target groups allow ALBs to forward requests to backend EC2 instances.
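
A target group only receives traffic once a listener ties it to a load balancer. Listeners aren't shown in this post, so here is a minimal HTTP listener for the public ALB as an illustrative addition (the private ALB would get an equivalent one pointing at private_tg):

resource "aws_lb_listener" "public_http" {
  load_balancer_arn = aws_lb.public_alb.arn
  port              = 80
  protocol          = "HTTP"

  # Forward all requests on port 80 to the public target group
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.public_tg.arn
  }
}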

Step 5: Launching EC2 Instances
Finally, we deploy EC2 instances and register them with the target groups.
Terraform Code: ec2.tf

resource "aws_instance" "public_instance" {
  ami           = "ami-0abcdef1234567890" # Replace with a valid AMI ID
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.public_subnet_1.id
}
resource "aws_instance" "private_instance" {
  ami           = "ami-0abcdef1234567890" # Replace with a valid AMI ID
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.private_subnet_1.id
}

These instances will serve web requests.

Step 6: Registering Instances to Target Groups

resource "aws_lb_target_group_attachment" "public_attach" {
  target_group_arn = aws_lb_target_group.public_tg.arn
  target_id        = aws_instance.public_instance.id
}
resource "aws_lb_target_group_attachment" "private_attach" {
  target_group_arn = aws_lb_target_group.private_tg.arn
  target_id        = aws_instance.private_instance.id
}

This registers our EC2 instances as backend servers.

Final Step: Terraform Apply!
Run the following command to deploy everything:

terraform init
terraform apply -auto-approve

Once completed, you’ll get ALB DNS names, which you can use to access your deployed infrastructure.
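
If you want Terraform to print those DNS names at the end of the run, a small output block does it (the output names here are illustrative):

output "public_alb_dns" {
  value = aws_lb.public_alb.dns_name
}

output "private_alb_dns" {
  value = aws_lb.private_alb.dns_name
}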

Conclusion
This guide covered how to deploy a highly available AWS infrastructure using Terraform, including VPC, subnets, ALBs, security groups, target groups, and EC2 instances. This setup ensures a secure and scalable architecture.

Follow for more and happy learning :)

open-cv write image

Explore the OpenCV imwrite function used to write an image.

Source Code

import cv2

image = cv2.imread("./data/openCV_logo.jpg",cv2.IMREAD_GRAYSCALE)
image = cv2.resize(image,(600,600))
cv2.imwrite("./data/openCV_logo_grayscale.jpg",image)

Image

Function

imwrite()

Explain Code

Import the OpenCV library with import cv2.

The imread function reads an image. Since I need a grayscale image, I set the flag value as cv2.IMREAD_GRAYSCALE.

Resize the image using the resize() function.

imwrite

The imwrite function is used to save an image. It takes two arguments (see the snippet after the list):

  1. Image path – Set the image path and name.
  2. Image – The image as a NumPy array.
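
imwrite also returns a boolean indicating whether the file was written, which is worth checking, for example:

success = cv2.imwrite("./data/openCV_logo_grayscale.jpg", image)
print("Image saved:", success)  # False if the path is invalid or the write failed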

Additional Link

github code

open-cv open video

Playing a video in OpenCV is similar to opening an image, but it requires a loop to continuously read multiple frames.

Source Code

import cv2

video = cv2.VideoCapture("./data/video.mp4")

while(video.isOpened()):
    isTrue, frame = video.read()
    
    if(isTrue):
        frame = cv2.resize(frame,(800,500))
        cv2.imshow("play video",frame)
        if(cv2.waitKey(24)&0xFF == ord('q')):
            break
    else:
        break

video.release()
cv2.destroyAllWindows()

Video

Functions

Explain Program

Import OpenCV Library

import cv2

VideoCapture

This function is used to open a video by specifying a video path.

  • If you pass 0 as the argument, it opens the webcam instead (see the example below).
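
For example, opening the default webcam instead of a file only changes the argument:

video = cv2.VideoCapture(0)  # 0 selects the default webcam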

isOpened

This function returns a boolean value to check if the video or resource is opened properly.

Use while to start a loop with the condition isOpened().

read

This function reads a video frame by frame.

  • It returns two values:
    1. Boolean: True if the frame is read successfully.
    2. Frame Data: The actual video frame.

Use if(isTrue) to check if the data is properly read, then show the video.

  • Resize the video resolution using resize function.
  • Show the video using imshow.
  • Exit video on keypress if(cv2.waitKey(24)&0xFF == ord('q')).
    • Press 'q' to break the video play loop.
Why Use &0xFF ?
  • This ensures the if condition runs correctly.
  • waitKey returns a key code, and the AND operation with 0xFF (255 in decimal) masks the value down to its lowest 8 bits.
  • For ordinary key codes, which fit in 8 bits, ANDing with 0xFF returns the same number.
    Example: 113 & 0xFF = 113 (same value as the first operand).

ord

The ord function returns the ASCII value of a character.

  • Example: ord('q') returns 113.

Finally, the if condition is validated.
If true, break the video play. Otherwise, continue playing.

release

This function releases the used resources.

destroyAllWindows() closes all windows and cleans up used memory.

Additional Link

github code

open-cv open image

What is OpenCV

OpenCV stands for Open Source Computer Vision. It is a library used for computer vision and machine learning tasks. It provides many functions to process images and videos.

Computer Vision

Computer vision is the process of extracting information from images or videos. For example, it can be used for object detection, face recognition, and more.

Source Code

import cv2

image = cv2.imread("./data/openCV_logo.jpg",cv2.IMREAD_COLOR)
image = cv2.resize(image,(600,600))
cv2.imshow("window title",image)

cv2.waitKey(0)

cv2.destroyAllWindows()

Image

OpenCV Functions

imread

This function is used to read an image and returns it as a NumPy array. It requires two arguments:

  1. Image path: The location of the image file.
  2. Read flag: Specifies the mode in which the image should be read (see the snippet after this list). Common flags are:
    • Color Image
    • Grayscale Image
    • Image with Alpha Channel
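
For reference, the flag constants mentioned above map to code like this (paths reuse the example image):

color = cv2.imread("./data/openCV_logo.jpg", cv2.IMREAD_COLOR)        # 3-channel BGR image
gray = cv2.imread("./data/openCV_logo.jpg", cv2.IMREAD_GRAYSCALE)     # single-channel image
alpha = cv2.imread("./data/openCV_logo.jpg", cv2.IMREAD_UNCHANGED)    # keeps the alpha channel if present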

resize

This function resizes an image. It requires two arguments:

  1. Image array: The NumPy array of the image.
  2. Resolution: A tuple specifying the new width and height.

imshow

This function displays an image. It takes two arguments:

  1. Window name: A string representing the window title.
  2. Image array: The image to be displayed.

waitKey

This function adds a delay to the program and listens for keypress events.

  • If the value is 0, the program waits indefinitely until a key is pressed.
  • If a key is pressed, it releases the program and returns the ASCII value of the pressed key.
  • Example: Pressing q returns 113.

destroyAllWindows

This function closes all open image windows and properly cleans up used resources.

Additional Link

Github code

Suggestions – 08.03.2025

S.No. | Name | CMP (Rs.)
1 | Mangalam Global | 15.98
2 | Taparia Tools | 16.43
3 | South Ind.Bank | 25.5
4 | Mangalam Alloys | 36.1
5 | Oricon Enterpris | 40.04
6 | Pasupati Acrylon | 44.63
7 | Ajanta Soya | 45.4
8 | Manali Petrochem | 62.15
9 | NMDC | 67.13
10 | NACL Industries | 70.66
11 | Balaxi Pharma | 70.93
12 | Nath Industries | 80.7
13 | S P I C | 81.74
14 | Raj Television | 82.69
15 | R&B Denims | 83.66
16 | SBFC Finance | 86.6
17 | Grauer & Weil | 96.03
18 | Anik Industries | 99.41

Note: Taparia Tools has already increased from Rs. 2, so there is risk there.

19 | Surana Telecom And Power | 20.83
20 | Ptl Enterprises | 40.09
21 | Rdb Real Estate Constructions | 44.1
22 | Pioneer Investcorp Ltd | 72.16
23 | Swan Defence N Heavy Ind | 74.48

List of companies invested in by Vanguard as on 04.03.2025

Company | Date | Action | Quantity | Price
Marksans Pharma | 20 Sep, 2024 | BUY | 26,97,280 | 317
Sundaram Finance | 15 Mar, 2024 | BUY | 9,12,901 | 3,796
Powergrid Infra. | 15 Mar, 2024 | BUY | 72,27,413 | 94.4
Nazara Technolo. | 15 Sep, 2023 | BUY | 3,98,217 | 838
MTAR Technologie | 15 Sep, 2023 | BUY | 2,30,908 | 2,608
Data Pattern | 15 Sep, 2023 | BUY | 3,25,105 | 2,076
Himadri Special | 15 Sep, 2023 | BUY | 26,53,602 | 242
Equitas Sma. Fin | 17 Mar, 2023 | BUY | 57,19,437 | 68.0
Delhivery | 17 Mar, 2023 | BUY | 47,62,115 | 323
Reliance Infra. | 17 Mar, 2023 | BUY | 20,12,088 | 149
JP Power Ven. | 17 Mar, 2023 | BUY | 3,66,58,683 | 6.08

🎯 PostgreSQL Zero to Hero with Parottasalna – 2 Day Bootcamp (FREE!) 🚀

Databases power the backbone of modern applications, and PostgreSQL is one of the most powerful open-source relational databases trusted by top companies worldwide. Whether you’re a beginner or a developer looking to sharpen your database skills, this FREE bootcamp will take you from Zero to Hero in PostgreSQL!

What You’ll Learn?

✅ PostgreSQL fundamentals & installation

✅ Postgres Architecture
✅ Writing optimized queries
✅ Indexing & performance tuning
✅ Transactions & locking mechanisms
✅ Advanced joins, CTEs & subqueries
✅ Real-world best practices & hands-on exercises

This intensive hands-on bootcamp is designed for developers, DBAs, and tech enthusiasts who want to master PostgreSQL from scratch and apply it in real-world scenarios.

Who Should Attend?

🔹 Beginners eager to learn databases
🔹 Developers & Engineers working with PostgreSQL
🔹 Anyone looking to optimize their SQL skills

📅 Date: March 22, 23
⏰ Time: Will be finalized later.
📍 Location: Online
💰 Cost: 100% FREE 🎉

🔗 RSVP Here

Prerequisite

  1. Check out this playlist of our previous Postgres sessions: https://www.youtube.com/playlist?list=PLiutOxBS1Miy3PPwxuvlGRpmNo724mAlt

🎉 This bootcamp is completely FREE – Learn without any cost! 🎉

💡 Spots are limited – RSVP now to reserve your seat!

Boost System Performance During Traffic Surges with Spike Testing

Introduction

Spike testing is a type of performance testing that evaluates how a system responds to sudden, extreme increases in load. Unlike stress testing, which gradually increases the load, spike testing simulates abrupt surges in traffic to identify system vulnerabilities, such as crashes, slow response times, and resource exhaustion.

In this blog, we will explore spike testing in detail, covering its importance, methodology, and full implementation using K6.

Why Perform Spike Testing?

Spike testing helps you

  • Determine system stability under unexpected traffic surges.
  • Identify bottlenecks that arise due to rapid load increases.
  • Assess auto-scaling capabilities of cloud-based infrastructures.
  • Measure response time degradation during high-demand spikes.
  • Ensure system recovery after the sudden load disappears.

Setting Up K6 for Spike Testing

Installing K6

# macOS
brew install k6  

# Ubuntu/Debian
sudo apt install k6  

# Using Docker
docker pull grafana/k6  
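
Once installed, a script is run from the command line like this (the file name is just a placeholder):

k6 run spike-test.js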

Choosing the Right Test Scenario

K6 provides different executors to simulate load patterns. For spike testing, we use

  • ramping-arrival-rate → Gradually increases the request rate over time.
  • constant-arrival-rate → Maintains a fixed number of requests per second after the spike.

Example 1: Basic Spike Test

This test starts with low traffic, spikes suddenly, and then drops back to normal.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    spike_test: {
      executor: 'ramping-arrival-rate',
      startRate: 10, // Start with 10 requests/sec
      timeUnit: '1s',
      preAllocatedVUs: 100,
      maxVUs: 500,
      stages: [
        { duration: '30s', target: 10 },  // Low traffic
        { duration: '10s', target: 500 }, // Sudden spike
        { duration: '30s', target: 10 },  // Traffic drops
      ],
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • Starts with 10 requests per second for 30 seconds.
  • Spikes to 500 requests per second in 10 seconds.
  • Drops back to 10 requests per second.
  • Tests the system’s ability to handle and recover from traffic spikes.

Example 2: Spike Test with High User Load

This test simulates a spike in virtual users rather than just requests per second.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    user_spike: {
      executor: 'ramping-vus',
      stages: [
        { duration: '30s', target: 20 },  // Normal traffic
        { duration: '10s', target: 300 }, // Sudden spike in users
        { duration: '30s', target: 20 },  // Drop back to normal
      ],
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation:

  • Simulates a sudden increase in concurrent virtual users (VUs).
  • Helps test server stability, database handling, and auto-scaling.

Example 3: Spike Test on Multiple Endpoints

In real-world applications, multiple endpoints may experience spikes simultaneously. Here’s how to test different API routes.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    multiple_endpoint_spike: {
      executor: 'ramping-arrival-rate',
      startRate: 5,
      timeUnit: '1s',
      preAllocatedVUs: 200,
      maxVUs: 500,
      stages: [
        { duration: '20s', target: 10 },  // Normal traffic
        { duration: '10s', target: 300 }, // Spike across endpoints
        { duration: '20s', target: 10 },  // Traffic drop
      ],
    },
  },
};

export default function () {
  let urls = [
    'https://test-api.example.com/users',
    'https://test-api.example.com/orders',
    'https://test-api.example.com/products'
  ];
  
  let res = http.get(urls[Math.floor(Math.random() * urls.length)]);
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • Simulates traffic spikes across multiple API endpoints.
  • Helps identify which API calls suffer under extreme load.

Analyzing Test Results

After running the tests, K6 provides key performance metrics

http_req_duration......: avg=350ms min=150ms max=3000ms
http_reqs..............: 10,000 requests
vus_max................: 500
errors.................: 2%

Key Metrics

  • http_req_duration → Measures response time impact.
  • vus_max → Peak virtual users during the spike.
  • errors → Percentage of failed requests due to overload (a thresholds sketch follows this list).
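
If you want the test itself to fail when these metrics degrade, K6 thresholds can encode limits directly in the options object of any of the scripts above; the values here are illustrative, not recommendations:

export let options = {
  thresholds: {
    http_req_duration: ['p(95)<1000'], // 95% of requests must finish under 1 second
    http_req_failed: ['rate<0.05'],    // less than 5% of requests may fail
  },
};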

Best Practices for Spike Testing

  • Monitor application logs and database performance during the test.
  • Use auto-scaling mechanisms for cloud-based environments.
  • Combine spike tests with stress testing for better insights.
  • Analyze error rates and recovery time to ensure system stability.

Spike testing is crucial for ensuring application stability under sudden, unpredictable traffic surges. Using K6, we can simulate spikes in both requests per second and concurrent users to identify bottlenecks before they impact real users.

How Stress Testing Can Make Systems More Attractive?

Introduction

Stress testing is a critical aspect of performance testing that evaluates how a system performs under extreme loads. Unlike load testing, which simulates expected user traffic, stress testing pushes a system beyond its limits to identify breaking points and measure recovery capabilities.

In this blog, we will explore stress testing using K6, an open-source load testing tool, with detailed explanations and full examples to help you implement stress testing effectively.

Why Stress Testing?

Stress testing helps you

  • Identify the maximum capacity of your system.
  • Detect potential failures and bottlenecks.
  • Measure system stability and recovery under high loads.
  • Ensure infrastructure can handle unexpected spikes in traffic.

Setting Up K6 for Stress Testing

Installing K6

# macOS
brew install k6  

# Ubuntu/Debian
sudo apt install k6  

# Using Docker
docker pull grafana/k6  

Understanding Stress Testing Scenarios

K6 provides various executors to simulate different traffic patterns. For stress testing, we mainly use

  1. ramping-vus – Gradually increases virtual users to a high level.
  2. constant-vus – Maintains a fixed high number of virtual users.
  3. A spike pattern (ramping stages with a sudden jump) – Simulates a sudden surge in traffic.

Example 1: Basic Stress Test with Ramping VUs

This script gradually increases the number of virtual users, holds a peak load, and then reduces it.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '1m', target: 100 }, // Ramp up to 100 users in 1 min
    { duration: '3m', target: 100 }, // Stay at 100 users for 3 min
    { duration: '1m', target: 0 },   // Ramp down to 0 users
  ],
};

export default function () {
  let res = http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • The test starts with 0 users and ramps up to 100 users in 1 minute.
  • Holds 100 users for 3 minutes.
  • Gradually reduces load to 0 users.
  • The sleep(1) function helps simulate real user behavior between requests.

Example 2: Constant High Load Test

This test maintains a consistently high number of virtual users.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  vus: 200, // 200 virtual users
  duration: '5m', // Run the test for 5 minutes
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • 200 virtual users are constantly hitting the endpoint for 5 minutes.
  • Helps evaluate system performance under sustained high traffic.

Example 3: Spike Testing (Sudden Traffic Surge)

This test simulates a sudden spike in traffic, followed by a drop.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '10s', target: 10 },  // Start with 10 users
    { duration: '10s', target: 500 }, // Spike to 500 users
    { duration: '10s', target: 10 },  // Drop back to 10 users
  ],
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • Starts with 10 users.
  • Spikes suddenly to 500 users in 10 seconds.
  • Drops back to 10 users.
  • Helps determine how the system handles sudden surges in traffic.

Analyzing Test Results

After running the tests, K6 provides detailed statistics

checks..................: 100.00% ✓ 5000 ✗ 0
http_req_duration......: avg=300ms min=200ms max=2000ms
http_reqs..............: 5000 requests
vus_max................: 500

Key Metrics to Analyze

  • http_req_duration → Measures response time.
  • vus_max → Maximum concurrent virtual users.
  • http_reqs → Total number of requests.
  • errors → Number of failed requests (a handleSummary sketch follows this list).
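
If you prefer a machine-readable report of these numbers, K6's handleSummary hook can write the end-of-test summary to a file; a minimal sketch:

export function handleSummary(data) {
  // Write the full summary object to a JSON file in addition to the console output
  return { 'summary.json': JSON.stringify(data, null, 2) };
}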

Stress testing is vital to ensure application stability and scalability. Using K6, we can simulate different stress scenarios like ramping load, constant high load, and spikes to identify system weaknesses before they affect users.

Achieving Better User Engagement via Realistic Load Testing in K6

Introduction

Load testing is essential to evaluate how a system behaves under expected and peak loads. Traditionally, we rely on metrics like requests per second (RPS), response time, and error rates. However, an insightful approach called Average Load Testing has been discussed recently. This blog explores that concept in detail, providing practical examples to help you apply it effectively.

Understanding Average Load Testing

Average Load Testing focuses on simulating real-world load patterns rather than traditional peak load tests. Instead of sending a fixed number of requests per second, this approach

  • Generates requests based on the average concurrency over time.
  • More accurately reflects real-world traffic patterns.
  • Helps identify performance bottlenecks in a realistic manner.

Setting Up Load Testing with K6

K6 is an excellent tool for implementing Average Load Testing. Let’s go through practical examples of setting up such tests.

Install K6

brew install k6  # macOS
sudo apt install k6  # Ubuntu/Debian
docker pull grafana/k6  # Using Docker

Example 1: Basic K6 Script for Average Load Testing

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    avg_load: {
      executor: 'constant-arrival-rate',
      rate: 10, // 10 requests per second
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 20,
      maxVUs: 50,
    },
  },
};

export default function () {
  let res = http.get('https://test-api.example.com');
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The constant-arrival-rate executor ensures a steady request rate.
  • rate: 10 sends 10 requests per second.
  • duration: '2m' runs the test for 2 minutes.
  • preAllocatedVUs: 20 and maxVUs: 50 define virtual users needed to sustain the load.
  • The script logs response times to the console.

Example 2: Testing with Varying Load

To better reflect real-world scenarios, we can use ramping arrival rate to simulate gradual increases in traffic

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    ramping_load: {
      executor: 'ramping-arrival-rate',
      startRate: 5, // Start with 5 requests/sec
      timeUnit: '1s',
      preAllocatedVUs: 50,
      maxVUs: 100,
      stages: [
        { duration: '1m', target: 20 },
        { duration: '2m', target: 50 },
        { duration: '3m', target: 100 },
      ],
    },
  },
};

export default function () {
  let res = http.get('https://test-api.example.com');
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The ramping-arrival-rate gradually increases requests per second over time.
  • The stages array defines a progression from 5 to 100 requests/sec over 6 minutes.
  • Logs response times to help analyze system performance.

Example 3: Load Testing with Multiple Endpoints

In real applications, multiple endpoints are often tested simultaneously. Here’s how to test different API routes

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  scenarios: {
    multiple_endpoints: {
      executor: 'constant-arrival-rate',
      rate: 15, // 15 requests per second
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 30,
      maxVUs: 60,
    },
  },
};

export default function () {
  let urls = [
    'https://test-api.example.com/users',
    'https://test-api.example.com/orders',
    'https://test-api.example.com/products'
  ];
  
  let res = http.get(urls[Math.floor(Math.random() * urls.length)]);
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The script randomly selects an API endpoint to test different routes.
  • Uses check to ensure status codes are 200.
  • Logs response times for deeper insights.

Analyzing Results

To analyze test results, you can store logs or metrics in a database or monitoring tool and visualize trends over time. Some popular options include

  • Prometheus for time-series data storage.
  • InfluxDB for handling large-scale performance metrics.
  • ELK Stack (Elasticsearch, Logstash, Kibana) for log-based analysis.
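
For example, K6 can stream results to InfluxDB as the test runs via its output flag (this assumes a classic InfluxDB 1.x endpoint; the URL, database, and file names are placeholders):

k6 run --out influxdb=http://localhost:8086/k6 load-test.js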

Average Load Testing provides a more realistic way to measure system performance. By leveraging K6, you can create flexible, real-world simulations to optimize your applications effectively.

🚀 #FOSS: Mastering Superfile: The Ultimate Terminal-Based File Manager for Power Users

🔥 Introduction

Are you tired of slow, clunky GUI-based file managers? Do you want lightning-fast navigation and total control over your files, right from your terminal? Meet Superfile, the ultimate tool for power users who love efficiency and speed.

In this blog, we'll take you on a deep dive into Superfile's features, commands, and shortcuts, transforming you into a file management ninja! ⚡

💡 Why Choose Superfile?

Superfile isn't just another file manager; it's a game-changer.

Here’s why

✅ Blazing Fast – No unnecessary UI lag, just pure efficiency.

✅ Keyboard-Driven – Forget the mouse, master navigation with powerful keybindings.

✅ Multi-Panel Support – Work with multiple directories simultaneously.

✅ Smart Search & Sorting – Instantly locate and organize files.

✅ Built-in File Preview & Metadata Display – See what you need without opening files.

✅ Highly Customizable – Tailor it to fit your workflow perfectly.

🛠 Installation

Getting started is easy! Install Superfile using

# For Linux (Debian-based)
wget -qO- https://superfile.netlify.app/install.sh | bash

# For macOS (via Homebrew)
brew install superfile

# For Windows (via Scoop)
scoop install superfile

Once installed, launch it with

spf

🚀 Boom! You're ready to roll.

⚡ Essential Commands & Shortcuts

🏗 General Operations

  • Launch Superfile: spf
  • Exit: Press q or Esc
  • Help Menu: ?
  • Toggle Footer Panel: F

📂 File & Folder Navigation

  • New File Panel: n
  • Close File Panel: w
  • Toggle File Preview: f
  • Next Panel: Tab or Shift + l
  • Sidebar Panel: s

📁 File & Folder Management

  • Create File/Folder: Ctrl + n
  • Rename: Ctrl + r
  • Copy: Ctrl + c
  • Cut: Ctrl + x
  • Paste: Ctrl + v
  • Delete: Ctrl + d
  • Copy Path: Ctrl + p

🔎 Search & Selection

  • Search: /
  • Select Files: v
  • Select All: Shift + a

📦 Compression & Extraction

  • Extract Zip: Ctrl + e
  • Compress to Zip: Ctrl + a

🏆 Advanced Power Moves

  • Open Terminal Here: Shift + t
  • Open in Editor: e
  • Toggle Hidden Files: .

💡 Pro Tip: Use Shift + p to pin frequently accessed folders for even quicker access!

🎨 Customizing Superfile

Want to make Superfile truly yours? Customize it easily by editing the config file

$EDITOR CONFIG_PATH

To enable the metadata plugin, add

metadata = true

For more customizations, check out the Superfile documentation.

🎯 Final Thoughts

Superfile is the Swiss Army knife of terminal-based file managers. Whether you’re a developer, system admin, or just someone who loves a fast, efficient workflow, Superfile will revolutionize the way you manage files.

🚀 Ready to supercharge your productivity? Install Superfile today and take control like never before!

For more details, visit the Superfile website.

How do I use the ResourceTag condition key to create an IAM policy for tag-based restriction?

The following IAM policies use condition keys to create tag-based restrictions.

  • Before you use tags to control access to your AWS resources, you must understand how AWS grants access. AWS is composed of collections of resources. An Amazon EC2 instance is a resource. An Amazon S3 bucket is a resource. You can use the AWS API, the AWS CLI, or the AWS Management Console to perform an operation, such as creating a bucket in Amazon S3. When you do, you send a request for that operation. Your request specifies an action, a resource, a principal entity (user or role), a principal account, and any necessary request information.

  • You can then create an IAM policy that allows or denies access to a resource based on that resource's tag. In that policy, you can use tag condition keys to control access to any of the following:

  • Resource – Control access to AWS service resources based on the tags on those resources. To do this, use the aws:ResourceTag/key-name condition key to determine whether to allow access to the resource based on the tags that are attached to the resource.

ResourceTag condition key

Use the aws:ResourceTag/tag-key condition key to compare the tag key-value pair that's specified in the IAM policy with the key-value pair that's attached to the AWS resource. For more information, see Controlling access to AWS resources.

You can use this condition key with the global aws:ResourceTag version and AWS services, such as ec2:ResourceTag. For more information, see Actions, resources, and condition keys for AWS services.

  • The following IAM policy allows users to start, stop, and terminate instances that have the application tag set to test:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances"
            ],
            "Resource": "arn:aws:ec2:*:3817********:instance/*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/application": "test"
                }
            }
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeTags",
                "ec2:DescribeInstanceStatus"
            ],
            "Resource": "*"
        }
    ]
}

Create the policy and attach the policy to user or role.
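
This can be done from the console or, if you prefer the CLI, with commands along these lines (the policy name, user name, and file name are placeholders):

aws iam create-policy --policy-name ec2-tag-based-access --policy-document file://policy.json
aws iam attach-user-policy --user-name test-user --policy-arn arn:aws:iam::<account-id>:policy/ec2-tag-based-access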

  • Created two instances: one with the application tag and the other without any tag.

Image description
You can see that the tagged instance can perform the Start and Stop actions using the IAM resource tag condition.
The same actions cannot be performed on the non-tagged instance.

  • check the status of the instance

Image description

  • perform the Termination action

Image description

reference commands

aws ec2 start-instances --instance-ids "instance-id"
aws ec2 stop-instances --instance-ids "instance-id"
aws ec2 describe-instance-status  --instance-ids "instance-id"
aws ec2 terminate-instances --instance-ids "instance-id"

String condition operators

String condition operators let you construct Condition elements that restrict access based on comparing a key to a string value.

  • StringEquals - Exact matching, case sensitive

  • StringNotEquals - Negated matching

  • StringEqualsIgnoreCase - Exact matching, ignoring case

  • StringNotEqualsIgnoreCase - Negated matching, ignoring case

  • StringLike - Case-sensitive matching. The values can include multi-character match wildcards (*) and single-character match wildcards (?) anywhere in the string. You must specify wildcards to achieve partial string matches.
    Note
    If a key contains multiple values, StringLike can be qualified with set operators: ForAllValues:StringLike and ForAnyValue:StringLike.

  • StringNotLike - Negated case-sensitive matching. The values can include multi-character match wildcards (*) or single-character match wildcards (?) anywhere in the string.

Script to list the S3 Bucket storage size

If we need to fetch the storage size of each S3 bucket, we normally have to open every bucket individually and check its Metrics section to get the storage size.
To do it in one go, use the script below to get each bucket name along with its storage size.

s3list=`aws s3 ls | awk  '{print $3}'`
for s3dir in $s3list
do
    echo $s3dir
    aws s3 ls "s3://$s3dir"  --recursive --human-readable --summarize | grep "Total Size" 
done

  1. Create a .sh file.
  2. Copy the code into the file.
  3. Execute the script to get the S3 bucket details.

Linux Mint Installation Drive – Dual Boot on 10+ Machines!

Hey everyone! Today, we had an exciting Linux installation session at our college. We expected many to go for a full Linux installation, but instead we ended up setting up dual boot on 10+ machines! 💻✨

💡 Topics Covered:
🛠 Syed Jafer – FOSS, GLUGs, and open-source communities
🌍 Salman – Why FOSS matters & Linux Commands
🚀 Dhanasekar – Linux and DevOps
🔧 Guhan – GNU and free software

Challenges We Faced


🔐 BitLocker Encryption – Had to disable BitLocker on some laptops
🔧 BIOS/UEFI Problems – Secure Boot, boot order changes needed
🐧 GRUB Issues – Windows not showing up, required boot-repair

🎥 Watch the installation video and try it yourself! https://www.youtube.com/watch?v=m7sSqlam2Sk


▶ Linux Mint Installation Guide https://tkdhanasekar.wordpress.com/2025/02/15/installation-of-linux-mint-22-1-cinnamon-edition/

This is just the beginning!

The Intelligent Loop: A Guide to Modern LLM Agents

Introduction

Large Language Model (LLM) based AI agents represent a new paradigm in artificial intelligence. Unlike traditional software agents, these systems leverage the powerful capabilities of LLMs to understand, reason, and interact with their environment in more sophisticated ways. This guide will introduce you to the basics of LLM agents and their think-act-observe cycle.

What is an LLM Agent?

An LLM agent is a system that uses a large language model as its core reasoning engine to:

  1. Process natural language instructions
  2. Make decisions based on context and goals
  3. Generate human-like responses and actions
  4. Interact with external tools and APIs
  5. Learn from interactions and feedback

Think of an LLM agent as an AI assistant who can understand, respond, and take actions in the digital world, like searching the web, writing code, or analyzing data.

Image description

The Think-Act-Observe Cycle in LLM Agents

Observe (Input Processing)

LLM agents observe their environment through:

  1. Direct user instructions and queries
  2. Context from previous conversations
  3. Data from connected tools and APIs
  4. System prompts and constraints
  5. Environmental feedback

Think (LLM Processing)

The thinking phase for LLM agents involves:

  1. Parsing and understanding input context
  2. Reasoning about the task and requirements
  3. Planning necessary steps to achieve goals
  4. Selecting appropriate tools or actions
  5. Generating natural language responses

The LLM is the "brain," using its trained knowledge to process information and make decisions.

Act (Execution)

LLM agents can take various actions (a minimal loop sketch follows this list):

  1. Generate text responses
  2. Call external APIs
  3. Execute code
  4. Use specialized tools
  5. Store and retrieve information
  6. Request clarification from users
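
To make the cycle concrete, here is a minimal, illustrative observe-think-act loop in Python. Everything in it (llm_complete, the toy tool registry, the prompt format) is a hypothetical placeholder rather than a specific framework or API:

def llm_complete(prompt):
    # Hypothetical stand-in for a real LLM API call.
    # This toy version asks for one search, then answers.
    return "answer:done" if "search(" in prompt else "search:llm agents"

TOOLS = {"search": lambda query: f"top results for '{query}'"}  # toy tool registry

def run_agent(user_goal, max_steps=5):
    history = [f"Goal: {user_goal}"]                      # working memory
    for _ in range(max_steps):
        observation = "\n".join(history)                  # observe: gather context
        decision = llm_complete(                          # think: let the model decide
            observation + "\nReply as 'tool:input' or 'answer:text'.")
        kind, _, payload = decision.partition(":")
        if kind == "answer":                              # act: return the final answer
            return payload
        tool = TOOLS.get(kind)
        result = tool(payload) if tool else "unknown tool"
        history.append(f"{kind}({payload}) -> {result}")  # feed the result back in
    return "Stopped after reaching max_steps."

print(run_agent("Summarize recent work on LLM agents"))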

Key Components of LLM Agents

Core LLM

  1. Serves as the primary reasoning engine
  2. Processes natural language input
  3. Generates responses and decisions
  4. Maintains conversation context

Working Memory

  1. Stores conversation history
  2. Maintains current context
  3. Tracks task progress
  4. Manages temporary information

Tool Use

  1. API integrations
  2. Code execution capabilities
  3. Data processing tools
  4. External knowledge bases
  5. File manipulation utilities

Planning System

  1. Task decomposition
  2. Step-by-step reasoning
  3. Goal tracking
  4. Error handling and recovery

Types of LLM Agent Architectures

Simple Agents

  1. Single LLM with basic tool access
  2. Direct input-output processing
  3. Limited memory and context
  4. Example: Basic chatbots with API access

ReAct Agents

  1. Reasoning and Acting framework
  2. Step-by-step thought process
  3. Explicit action planning
  4. Self-reflection capabilities

Chain-of-Thought Agents

  1. Detailed reasoning steps
  2. Complex problem decomposition
  3. Transparent decision-making
  4. Better error handling

Multi-Agent Systems

  1. Multiple LLM agents working together
  2. Specialized roles and capabilities
  3. Inter-agent communication
  4. Collaborative problem-solving

Common Applications

LLM agents are increasingly used for:

  1. Personal assistance and task automation
  2. Code generation and debugging
  3. Data analysis and research
  4. Content creation and editing
  5. Customer service and support
  6. Process automation and workflow management

Best Practices for LLM Agent Design

Clear Instructions

  1. Provide explicit system prompts
  2. Define constraints and limitations
  3. Specify available tools and capabilities
  4. Set clear success criteria

Effective Memory Management

  1. Implement efficient context tracking
  2. Prioritize relevant information
  3. Clean up unnecessary data
  4. Maintain conversation coherence

Robust Tool Integration

  1. Define clear tool interfaces
  2. Handle API errors gracefully
  3. Validate tool outputs
  4. Monitor resource usage

Safety and Control

  1. Implement ethical guidelines
  2. Add safety checks and filters
  3. Monitor agent behavior
  4. Maintain user control

Effortless Data Storage with LocalBase and IndexedDB

IndexedDB is a powerful client-side database API for storing structured data in browsers. However, its API is complex, requiring transactions, object stores, and cursors to manage data. LocalBase simplifies IndexedDB by providing an intuitive, promise-based API.

In this blog, we’ll explore LocalBase, its features, and how to use it effectively in web applications.

What is LocalBase?

LocalBase is an easy-to-use JavaScript library that simplifies IndexedDB interactions. It provides a syntax similar to Firestore, making it ideal for developers familiar with Firebase.

✅ Key Features

  • Promise based API
  • Simple CRUD operations
  • No need for manual transaction handling
  • Works seamlessly in modern browsers

Installation

You can install LocalBase via npm or use it directly in a script tag

Using npm

npm install localbase

Using CDN

<script src="https://cdn.jsdelivr.net/npm/localbase/dist/localbase.min.js"></script>

Getting Started with LocalBase

First, initialize the database

let db = new Localbase('myDatabase')




Adding Data

You can add records to a collection,

db.collection('users').add({
  id: 1,
  name: 'John Doe',
  age: 30
})

Fetching Data

Retrieve all records from a collection

db.collection('users').get().then(users => {
  console.log(users)
})

Updating Data

To update a record

db.collection('users').doc({ id: 1 }).update({
  age: 31
})

Deleting Data

Delete a specific document

db.collection('users').doc({ id: 1 }).delete()

Or delete the entire collection

db.collection('users').delete()

Advanced LocalBase Functionalities

1. Updating Data in LocalBase

LocalBase allows updating specific fields in a document without overwriting the entire record.

Basic Update Example

db.collection('users').doc({ id: 1 }).update({
  age: 31
})

🔹 This updates only the age field while keeping other fields unchanged.

Updating Multiple Fields

db.collection('users').doc({ id: 1 }).update({
  age: 32,
  city: 'New York'
})

🔹 The city field is added, and age is updated.

Handling Non-Existing Documents

If the document doesn’t exist, LocalBase won’t create it automatically. You can handle this with .get()

db.collection('users').doc({ id: 2 }).get().then(user => {
  if (user) {
    db.collection('users').doc({ id: 2 }).update({ age: 25 })
  } else {
    db.collection('users').add({ id: 2, name: 'Alice', age: 25 })
  }
})

2. Querying and Filtering Data

You can fetch documents based on conditions.

Get All Documents in a Collection

db.collection('users').get().then(users => {
  console.log(users)
})

Get a Single Document

db.collection('users').doc({ id: 1 }).get().then(user => {
  console.log(user)
})

Filter with Conditions

db.collection('users').get().then(users => {
  let filteredUsers = users.filter(user => user.age > 25)
  console.log(filteredUsers)
})

🔹 Since LocalBase doesn't support native where queries, you need to filter manually.

3. Handling Transactions

LocalBase handles transactions internally, so you don’t need to worry about opening and closing them. However, you should use .then() to ensure operations complete before the next action.

Example: Sequential Updates

db.collection('users').doc({ id: 1 }).update({ age: 32 }).then(() => {
  return db.collection('users').doc({ id: 1 }).update({ city: 'Los Angeles' })
}).then(() => {
  console.log('Update complete')
})

🔹 This ensures that the age field is updated before adding the city field.

4. Clearing and Deleting Data

Deleting a Single Document

db.collection('users').doc({ id: 1 }).delete()

Deleting an Entire Collection

db.collection('users').delete()

Clearing All Data

db.delete()

🔹 This removes everything from the database!

5. Using LocalBase in Real-World Scenarios

Offline Caching for a To-Do List

db.collection('tasks').add({ id: 1, title: 'Buy groceries', completed: false })

Later, when the app is online, you can sync it with a remote database.
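
A sync step could be as simple as reading the collection and posting it to your backend; the endpoint below is a hypothetical placeholder, not part of LocalBase:

db.collection('tasks').get().then(tasks => {
  // Push the locally cached tasks to a (hypothetical) backend endpoint
  return fetch('https://example.com/api/tasks/sync', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(tasks)
  })
}).then(() => {
  console.log('Tasks synced')
})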

User Preferences Storage

db.collection('settings').doc({ theme: 'dark' }).update({ fontSize: '16px' })

🔹 Stores user settings locally, ensuring a smooth UX.

LocalBase makes IndexedDB developer-friendly with

✅ Easy updates without overwriting entire documents
✅ Simple filtering with JavaScript functions
✅ Automatic transaction handling
✅ Efficient storage for offline-first apps

For more details, check out the official repository:
🔗 GitHub – LocalBase

Got inspired from 450dsa.com.

The Pros and Cons of LocalStorage in Modern Web Development

Introduction

The Web Storage API is a set of mechanisms that enable browsers to store key-value pairs. Before HTML5, application data had to be stored in cookies and included in every server request. Web storage is intended to be far more user-friendly than using cookies.

Web storage is more secure, and large amounts of data can be stored locally, without affecting website performance.

There are two types of web storage:

  1. Local Storage
  2. Session Storage

We already have cookies. Why additional objects?

Unlike cookies, web storage objects are not sent to the server with each request. Because of that, we can store much more. Most modern browsers allow at least 5 megabytes of data (or more) and have settings to configure that.

Also unlike cookies, the server can't manipulate storage objects via HTTP headers; everything is done in JavaScript. The storage is bound to the origin (domain/protocol/port triplet). That is, different protocols or subdomains get different storage objects, and they can't access each other's data.

In this guide, you will learn/refresh about LocalStorage.

LocalStorage

The localStorage property of the window (browser window object) interface allows you to access a Storage object for the document's origin; the stored data is saved across browser sessions.

  1. Data is kept in local storage for a long time, with no expiration date. It could remain for one day, one week, or even one year, as per the developer's preference (data in local storage is retained even if the browser is closed).
  2. Local storage only stores strings. So, if you intend to store objects, lists, or arrays, you must convert them into a string using JSON.stringify().
  3. Local storage is available via the window.localStorage property.
  4. What's interesting is that the data survives a page refresh (for sessionStorage) and even a full browser restart (for localStorage).

Functionalities

// setItem normal strings
window.localStorage.setItem("name", "goku");

// getItem 
const name = window.localStorage.getItem("name");
console.log("name from localstorage, "+name);

// Storing an Object without JSON stringify

const data = {
  "commodity":"apple",
  "price":43
};
window.localStorage.setItem('commodity', data);
var result = window.localStorage.getItem('commodity');
console.log("Retrived data without jsonified, "+ result);

// Storing an object after converting to JSON string. 
var jsonifiedString = JSON.stringify(data);
window.localStorage.setItem('commodity', jsonifiedString);
var result = window.localStorage.getItem('commodity');
console.log("Retrived data after jsonified, "+ result);

// remove item 
window.localStorage.removeItem("commodity");
var result = window.localStorage.getItem('commodity');
console.log("Data after removing the key "+ result);

//length
console.log("length of local storage " + window.localStorage.length);

// clear
window.localStorage.clear();
console.log("length of local storage - after clear " + window.localStorage.length);
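
One detail the snippet above glosses over: a jsonified value comes back out of storage as a string, so it has to be parsed before its fields can be used. A small illustrative addition:

// Parse the stored JSON string back into an object before using it
var stored = window.localStorage.getItem('commodity');
var parsed = stored ? JSON.parse(stored) : null;
console.log("price from parsed object: " + (parsed && parsed.price));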

When to use Local Storage

  1. Data stored in Local Storage can be easily accessed by third parties.
  2. So it's important that sensitive data is never stored in Local Storage.
  3. Local Storage can help in storing temporary data before it is pushed to the server.
  4. Always clear local storage once the operation is completed.

Where is local storage saved?

Windows

  • Firefox: C:\Users\\AppData\Roaming\Mozilla\Firefox\Profiles\\webappsstore.sqlite, %APPDATA%\Mozilla\Firefox\Profiles\\webappsstore.sqlite
  • Chrome: %LocalAppData%\Google\Chrome\User Data\Default\Local Storage\

Linux

  • Firefox: ~/.mozilla/firefox//webappsstore.sqlite
  • Chrome: ~/.config/google-chrome/Default/Local Storage/

Mac

  • Firefox: ~/Library/Application Support/Firefox/Profiles//webappsstore.sqlite, ~/Library/Mozilla/Firefox/Profiles//webappsstore.sqlite
  • Chrome: ~/Library/Application Support/Google/Chrome//Local Storage/, ~/Library/Application Support/Google/Chrome/Default/Local Storage/

Downside of Localstorage

The majority of local storage's drawbacks aren't really significant. You can still use it; your app will just run a little slower and you'll experience a tiny developer inconvenience. Security, however, is different. Knowing and understanding the security model of local storage is crucial, since it will have a significant impact on your website in ways you might not have anticipated.

Local storage is also inherently insecure. Anyone who stores sensitive information in local storage, such as session data, user information, credit card information (even momentarily!), or anything else you wouldn't want shared publicly on social media, is doing it incorrectly.

Local storage in the browser was never intended for secure storage. It was meant to be a straightforward key/value store for strings only, which programmers could use to build somewhat more complicated single-page apps.

General Preventions

  1. For example, if we are using third-party JavaScript libraries and they are injected with some scripts which extract the storage objects, our storage data won't be secure anymore. Therefore it's not recommended to save sensitive data such as
    • Username/Password
    • Credit card info
    • JWT tokens
    • API keys
    • Personal info
    • Session ids
  2. Do not use the same origin for multiple web applications; otherwise they will all share the same storage. Use subdomains instead, since each subdomain gets its own unique localStorage and subdomain instances can't read each other's data.
  3. Once some data are stored in Local storage, the developers don’t have any control over it until the user clears it. If you want the data to be removed once the session ends, use SessionStorage.
  4. Validate, encode and escape data read from browser storage
  5. Encrypt data before saving

Git Stash Explained: Save Your Work Efficiently

Introduction

Git is an essential tool for version control, and one of its underrated but powerful features is git stash. It allows developers to temporarily save their uncommitted changes without committing them, enabling a smooth workflow when switching branches or handling urgent bug fixes.

In this blog, we will explore git stash, its varieties, and some clever hacks to make the most of it.

1. Understanding Git Stash

Git stash allows developers to temporarily save changes made to the working directory, enabling them to switch contexts without having to commit incomplete work. This is particularly useful when you need to switch branches quickly or when you are interrupted by an urgent task.

When you run git stash, Git takes the uncommitted changes in your working directory (both staged and unstaged) and saves them on a stack called the "stash stack". This action reverts your working directory to the last committed state while safely storing the changes for later use.

How It Works

  • Git saves the current state of the working directory and the index (staging area) as a stash.
  • The stash includes modifications to tracked files, newly created files, and changes in the index.
  • Untracked files are not stashed by default unless specified.
  • Stashes are stored in a stack, with the most recent stash on top.

Common Use Cases

  • Context Switching: When you are working on a feature and need to switch branches for an urgent bug fix.
  • Code Review Feedback: If you receive feedback and need to make changes but are in the middle of another task.
  • Cleanup Before Commit: To stash temporary debugging changes or print statements before making a clean commit.

Git stash is used to save uncommitted changes in a temporary area, allowing you to switch branches or work on something else without committing incomplete work.

Basic Usage

The basic git stash command saves all modified tracked files and staged changes. This does not include untracked files by default.

git stash

This command performs three main actions

  • Saves changes: Takes the current working directory state and index and saves it as a new stash entry.
  • Resets working directory: Reverts the working directory to match the last commit.
  • Stacks the stash: Stores the saved state on top of the stash stack.

Restoring Changes

To restore the stashed changes, you can use

git stash pop

This does two things

  • Applies the stash: Reapplies the changes to your working directory.
  • Deletes the stash: Removes the stash entry from the stash stack.

If you want to keep the stash for future use

git stash apply

This reapplies the changes without deleting the stash entry.

Viewing and Managing Stashes

To see a list of all stash entries

git stash list

This shows a list like

stash@{0}: WIP on feature-branch: 1234567 Commit message
stash@{1}: WIP on master: 89abcdef Commit message

Each stash is identified by an index (e.g., stash@{0}) which can be used for other stash commands.

git stash

This command stashes staged and unstaged changes to tracked files (add -u to include untracked files as well).

To apply the last stashed changes back

git stash pop

This applies the stash and removes it from the stash list.

To apply the stash without removing it

git stash apply

To see a list of all stashed changes

git stash list

To remove a specific stash

git stash drop stash@{index}

To clear all stashes

git stash clear

2. Varieties of Git Stash

a) Stashing Untracked Files

By default, git stash does not include untracked files. To include them

git stash -u

Or:

git stash --include-untracked

b) Stashing Ignored Files

To stash even ignored files

git stash -a

Or:

git stash --all

c) Stashing with a Message

To add a meaningful message to a stash

git stash push -m "WIP: Refactoring user authentication"

d) Stashing Specific Files

If you only want to stash specific files

git stash push -m "Partial stash" -- path/to/file

e) Stashing and Switching Branches

Instead of running git stash and git checkout separately, do it in one step

git stash push -m "WIP: Bug Fix" && git checkout other-branch

3. Advanced Stash Hacks

a) Viewing Stashed Changes

To see the contents of a stash before applying

git stash show -p stash@{0}

b) Applying a Stash to a Different Branch

You can stash on one branch and apply it to another

git checkout other-branch
git stash apply stash@{0}

c) Creating a New Branch from a Stash

If you realize your stash should have been a separate branch

git stash branch new-branch stash@{0}

This will create a new branch and apply the stashed changes.

d) Keeping Index Changes

If you want to keep staged files untouched while stashing

git stash push --keep-index

e) Recovering a Dropped Stash

If you accidentally dropped a stash, it may still be recoverable from Git's dangling objects or the stash reflog

git fsck --lost-found

Or, check stash history with:

git reflog stash
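
Once you spot the dangling stash commit in that output, it can usually be re-applied directly by its hash (the hash below is just an example):

git stash apply 1a2b3c4d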

f) Using Stash for Conflict Resolution

If you’re rebasing and hit conflicts, stash helps in saving progress

git stash
# Fix conflicts
# Continue rebase
git stash pop

4. When Not to Use Git Stash

  • If your work is significant, commit it instead of stashing.
  • Avoid excessive stashing as it can lead to forgotten changes.
  • Stashing doesn’t track renamed or deleted files effectively.

Git stash is an essential tool for developers to manage temporary changes efficiently. With the different stash varieties and hacks, you can enhance your workflow and avoid unnecessary commits. Mastering these techniques will save you time and improve your productivity in version control.

Happy coding! 🚀
