
Git Reset – A tool to correct your (committed) mistakes.

1 June 2025 at 05:43

Terminologies

  1. Work Tree: Your work tree is the set of files you are currently working on: your local folder.
  2. Git Index/Staging Area: The git “index” is where you place the files you want to commit to the git repository. The index is also known as the cache, directory cache, current directory cache, staging area, or staged files. Before you “commit” (check in) files to the git repository, you first need to place them in the git “index”.

The index is not the working directory: you can run a command such as git status, and git will tell you which files in your working directory have been added to the git index (for example, by using the git add filename command).

In simple words, git reset exists to correct your mistakes. Consider a scenario where you mistakenly wrote some wrong code and committed it too. Everything started to fall apart. Now you want to go back to the previous stable state (the previous commit). This is the scenario where git reset helps you. In this blog post you will understand the usage of git reset in 3 different ways.

Visualization of Working Folder, Staging, Local Repository

Git Reset

In general, git reset’s function is to take the current branch and reset it to point somewhere else, and possibly bring the index and work tree along. Git reset helps you in two broad categories.

  1. To remove changes from staging area.
  2. To undo commits at repository level.

Scenario #1: To remove changes from the staging area

Workground setup

  1. Create a new folder.
  2. Initialize git in the folder (git init)
  3. Create a text file (eg: sample.txt)
  4. Add some content inside it.
  5. Add the file to the staging area. git add sample.txt

Checking the staging area

Now the file has been added to the staging area (but not committed). But now you realize that there is wrong content inside it. So in order to remove it from the staging area, you just need to execute git reset <filename>.

Now you can see that the file has been moved from the staging area back to the working directory, and the contents of the file are preserved.

Some of the possible scenarios:

  1. If there are many files in the staging area, you can simply run git reset to unstage all of them.
  2. There is also another command to unstage files: git restore, as in git restore --staged . (see the quick walkthrough below).
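
A quick terminal walkthrough (a minimal sketch, assuming the workground above with sample.txt already created):

git add sample.txt      # stage the file
git status              # sample.txt appears under "Changes to be committed"
git reset sample.txt    # unstage it
git status              # sample.txt is back under "Untracked files"; its contents are untouched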

Scenario #2: To undo commits at repository level

Workground setup

  1. Create a new folder.
  2. Initialize git in the folder (git init)
  3. Create a text file (eg: file1.txt)
  4. Add some content inside it.
  5. Add and commit the file: git add file1.txt, then git commit -m "add file1"
  6. Create a text file (eg: file2.txt)
  7. Add some content inside it.
  8. Add and commit the file: git add file2.txt, then git commit -m "add file2"
  9. Create a text file (eg: file3.txt)
  10. Add some content inside it.
  11. Add and commit the file: git add file3.txt, then git commit -m "add file3"

There are 3 modes to reset a commit in the local repository.

Once a file or folder is committed, we no longer reset individual files; we reset commits only.

All 3 modes move HEAD to a specified commit, and all the more recent commits are removed. The mode decides whether those changes are also removed from the staging area and the working directory or not.
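
In short, assuming three commits A, B, C made in that order, with HEAD at C and a reset back to A:

--soft  : HEAD moves to A; the changes from B and C stay staged in the index.
--mixed : HEAD moves to A; the index is reset; the changes from B and C remain in the working directory, unstaged.
--hard  : HEAD moves to A; the index and working directory are both reset; the changes from B and C are discarded.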

Mixed Mode (Default)

Command: git reset --mixed <commit-id>

Functionality: It resets the index, but not the work tree. This means all your files are intact, but any differences between the original commit and the one you reset to will show up as local modifications (or untracked files) with git status.

Use this when you realize you made some bad commits, but you want to keep all the work you’ve done so you can fix it up and recommit. In order to commit, you’ll have to add files to the index again (git add …).

Hands On: After the setup above, our history has three commits, one per file.

We can try resetting the last commit (we specify the previous commit so HEAD points there):

git reset --mixed 002aa06

Now we can see that HEAD moved to the commit we specified, and file3.txt has been removed from the local repository and from staging.

But the contents of the file are preserved in the working directory. We can run git status and see the file under “Untracked files”.
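
A terminal sketch of the same flow (the hashes are illustrative; use the ones your git log shows):

git log --oneline
# 9f1e2d3 (HEAD -> main) add file3
# 002aa06 add file2
# f8d7c74 add file1

git reset --mixed 002aa06

git status
# file3.txt now appears under "Untracked files"; its contents are intact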

Soft Mode

Command: git reset --soft <commit-id>

Functionality: All your files are intact as with --mixed, but all the changes show up as changes to be committed with git status (i.e. checked in, in preparation for committing). Use this when you realize you’ve made some bad commits but the work’s all good; all you need to do is recommit it differently.

The index is untouched, so you can commit immediately if you want; the resulting commit will have all the same content as where you were before you reset.

Hands On: Continuing on, file1.txt and file2.txt are committed, and file3.txt sits untracked in the working directory.

We can try resetting the last commit (we specify the previous commit so HEAD points there):

git reset --soft f8d7c74

Now we can see that HEAD moved to the commit we specified, and file2.txt has been removed from the commit but remains in staging.

The contents of the file are preserved, and the change is already present in staging. We can run git status and see the file under “Changes to be committed”.

Hard Mode

Command: git reset --hard <commit-id>

Functionality: This is the easiest to understand, probably. All of your local changes get clobbered.

One primary use is blowing away your work but not switching commits: git reset --hard means git reset --hard HEAD, i.e. don’t change the branch but get rid of all local changes.

The other is simply moving a branch from one place to another, and keeping index/work tree in sync. This is the one that can really make you lose work, because it modifies your work tree. Be very very sure you want to throw away local work before you run any reset --hard.

Hands On: We continue from where the last reset left us.

We can add one more commit (so that we have some commits to test).

So now we have 2 files committed and one untracked file in the working directory (file3.txt). We can try resetting the last commit (we specify the previous commit so HEAD points there):

git reset --hard f8d7c74

Now we can see that HEAD moved to the commit we specified. Also, file2.txt has been removed from the local repository, the staging area, and the working directory.

It removed the files that were present in the discarded commits. This is the mode that can really make you lose work, because it modifies your work tree.

The Strange Notation

When you are reading about git reset, you might notice people using HEAD^ and HEAD~1. These are just shorthand for specifying commits, without having to use a hash name like f8d7c74.

It’s fully documented in the “specifying revisions” section of the man page for git-rev-parse, with lots of examples and related syntax. The caret and the tilde actually mean different things:

  • HEAD~ is short for HEAD~1 and means the commit’s first parent. HEAD~2 means the commit’s first parent’s first parent. Think of HEAD~n as “n commits before HEAD” or “the nth generation ancestor of HEAD”.

  • HEAD^ (or HEAD^1) also means the commit’s first parent. HEAD^2 means the commit’s second parent. Remember, a normal merge commit has two parents – the first parent is the merged-into commit, and the second parent is the commit that was merged. In general, merges can actually have arbitrarily many parents (octopus merges).

  • The ^ and ~ operators can be strung together, as in HEAD~3^2, the second parent of the third-generation ancestor of HEAD, HEAD^^2, the second parent of the first parent of HEAD, or even HEAD^^^, which is equivalent to HEAD~3.
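
For example, with this syntax you can reset without looking up a hash (these are standard revision expressions, so they work in any repository):

git reset --soft HEAD~1     # undo the last commit, keep its changes staged
git reset --mixed HEAD^     # same target commit, keep the changes unstaged
git show HEAD~2             # inspect the commit two generations before HEAD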

In this blog post we walked through the different ways to reset a commit. I have one question for you: when we reset to a different commit, i.e. point HEAD at a different commit, what happens to the removed commits? Is there any way to get back to them?

Let’s Build a Bank Account Simulation

26 May 2025 at 14:50

Problem Statement

🧩 Overview

Build a bank system to create and manage user accounts, simulate deposits, withdrawals, interest accrual, and overdraft penalties.

🎯 Goals

  • Support multiple account types with different rules
  • Simulate real-world banking logic like minimum balance and interest
  • Track user actions securely

🏗 Suggested Classes

  • BankAccount: account_number, owner, balance, deposit(), withdraw()
  • SavingsAccount(BankAccount): interest_rate, apply_interest()
  • CheckingAccount(BankAccount): minimum_balance, penalty
  • User: name, password, accounts[]

📦 Features

  • Create new accounts (checking/savings)
  • Deposit money to account
  • Withdraw money (with rules):
    • Checking: maintain minimum balance or pay penalty
    • Savings: limit to 3 withdrawals/month
  • Apply interest monthly for savings
  • Show account summary

🔧 OOP Concepts

  • Inheritance: SavingsAccount and CheckingAccount from BankAccount
  • Encapsulation: Balance, account actions hidden inside methods
  • Polymorphism: Overridden withdraw() method in each subclass
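
If you want a starting point, here is a minimal Python sketch of the hierarchy suggested above (the interest rate, minimum balance, and penalty values are assumed for illustration; the 3-withdrawal rule comes from the feature list):

class BankAccount:
    def __init__(self, account_number, owner, balance=0.0):
        self.account_number = account_number
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("Insufficient funds")
        self.balance -= amount


class SavingsAccount(BankAccount):
    def __init__(self, account_number, owner, balance=0.0, interest_rate=0.04):
        super().__init__(account_number, owner, balance)
        self.interest_rate = interest_rate  # assumed annual rate
        self.withdrawals_this_month = 0

    def withdraw(self, amount):
        # Polymorphism: savings rule of 3 withdrawals per month
        if self.withdrawals_this_month >= 3:
            raise ValueError("Savings accounts allow only 3 withdrawals per month")
        super().withdraw(amount)
        self.withdrawals_this_month += 1

    def apply_interest(self):
        # One month of simple interest on the current balance
        self.balance += self.balance * self.interest_rate / 12


class CheckingAccount(BankAccount):
    def __init__(self, account_number, owner, balance=0.0,
                 minimum_balance=500.0, penalty=35.0):  # assumed values
        super().__init__(account_number, owner, balance)
        self.minimum_balance = minimum_balance
        self.penalty = penalty

    def withdraw(self, amount):
        # Polymorphism: checking rule of minimum balance or penalty
        super().withdraw(amount)
        if self.balance < self.minimum_balance:
            self.balance -= self.penalty


acct = CheckingAccount("CHK-1", "Alice", balance=600.0)
acct.withdraw(200.0)    # drops below the minimum, so the penalty applies
print(acct.balance)     # 365.0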

🔌 Optional Extensions

  • Password protection (simple CLI input masking)
  • Transaction history with timestamp
  • Monthly bank statement generation

Let’s Build a Library Management System With OOPS

26 May 2025 at 14:40

Problem Statement

🧩 Overview

Design a command-line-based Library Management System that simulates the basic operations of a library for both users and administrators. It should manage books, user accounts, borrowing/returning of books, and enforce library rules like book limits per member.

🎯 Goals

  • Allow members to search, borrow, and return books.
  • Allow admins to manage the library’s inventory.
  • Track book availability.
  • Prevent double borrowing of a book.

👤 Actors

  • Admin
  • Member

🏗 Suggested Classes

  • Book: ID, title, author, genre, is_available
  • User: username, role, user_id
  • Member(User): borrowed_books (max 3 at a time)
  • Admin(User): can add/remove books
  • Library: manages collections of books and users

📦 Features

  • Admin:
    • Add a book with metadata
    • Remove a book by ID or title
    • List all books
  • Member:
    • Register or login
    • View available books
    • Borrow a book (limit 3)
    • Return a book
  • Library:
    • Handles storage, availability, and user-book mappings

🔧 OOP Concepts

  • Inheritance: Admin and Member inherit from User
  • Encapsulation: Book’s availability status and member’s borrow list
  • Polymorphism: Different view_dashboard() method for Admin vs Member
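
A minimal Python sketch of the suggested classes (the 3-book limit comes from the problem statement; the method shapes are one possible interpretation):

class Book:
    def __init__(self, book_id, title, author, genre):
        self.book_id = book_id
        self.title = title
        self.author = author
        self.genre = genre
        self.is_available = True


class User:
    def __init__(self, username, role, user_id):
        self.username = username
        self.role = role
        self.user_id = user_id

    def view_dashboard(self):
        print(f"User {self.username} ({self.role})")


class Member(User):
    MAX_BOOKS = 3

    def __init__(self, username, user_id):
        super().__init__(username, "member", user_id)
        self.borrowed_books = []

    def view_dashboard(self):
        # Polymorphism: members see their borrow count
        print(f"Member {self.username}: {len(self.borrowed_books)} book(s) borrowed")


class Admin(User):
    def __init__(self, username, user_id):
        super().__init__(username, "admin", user_id)

    def view_dashboard(self):
        # Polymorphism: admins see inventory-management options
        print(f"Admin {self.username}: add/remove/list books")


class Library:
    def __init__(self):
        self.books = {}

    def add_book(self, book):
        self.books[book.book_id] = book

    def borrow(self, member, book_id):
        book = self.books.get(book_id)
        if book is None or not book.is_available:
            raise ValueError("Book unavailable")   # prevents double borrowing
        if len(member.borrowed_books) >= Member.MAX_BOOKS:
            raise ValueError("Borrow limit (3) reached")
        book.is_available = False
        member.borrowed_books.append(book)

    def return_book(self, member, book_id):
        book = self.books[book_id]
        book.is_available = True
        member.borrowed_books.remove(book)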

🔌 Optional Extensions

  • Track borrowing history (borrow date, return date)
  • Due dates and overdue penalties
  • Persistent data storage (JSON or SQLite)

📊 Learn PostgreSQL in Tamil: From Zero to 5★ on HackerRank in Just 10 Days

25 May 2025 at 12:42

 

PostgreSQL is one of the most powerful, stable, and open-source relational database systems trusted by global giants like Apple, Instagram, and Spotify. Whether you’re building a web application, managing enterprise data, or diving into analytics, understanding PostgreSQL is a skill that sets you apart.

But what if you could master it in just 10 days, in Tamil, with hands-on learning and a 5★ rating on HackerRank as your goal?

Sounds exciting? Let’s dive in.

🎯 Why This Bootcamp?

This 10-day PostgreSQL Bootcamp in Tamil is designed to take you from absolute beginner to confident practitioner, with a curriculum built around real-world use cases, performance optimization, and daily challenge-driven learning.

Whether you’re a

  • Student trying to get into backend development
  • Developer wanting to upskill and crack interviews
  • Data analyst exploring SQL performance
  • Tech enthusiast curious about databases

…this bootcamp gives you the structured path you need.

🧠 What You’ll Learn

Over 10 days, we’ll cover

  • ✅ PostgreSQL installation & setup
  • ✅ PostgreSQL architecture and internals
  • ✅ Writing efficient SQL queries with proper formatting
  • ✅ Joins, CTEs, subqueries, and advanced querying
  • ✅ Indexing, query plans, and performance tuning
  • ✅ Transactions, isolation levels, and locking mechanisms
  • ✅ Schema design for real-world applications
  • ✅ Debugging techniques, tips, and best practices
  • ✅ Daily HackerRank challenges to track your progress
  • ✅ Solve 40+ HackerRank SQL challenges

🧪 Bootcamp Highlights

  • 🗣 Language of instruction: Tamil
  • 💻 Format: Online, live and interactive
  • 🎥 Daily live sessions with Q&A
  • 📊 Practice-oriented learning using HackerRank
  • 📚 Notes, cheat sheets, and shared resources
  • 🧑‍🤝‍🧑 Access to community support and mentorship
  • 🧠 Learn through real-world datasets and scenarios

Check our previous Postgres session

📅 Details at a Glance

  • Duration: 10 Days
  • Language: Tamil
  • Format: Online, hands-on
  • Book Your Slot: https://topmate.io/parottasalna/1558376
  • Goal: Earn 5★ in PostgreSQL on HackerRank
  • Suitable for: Students, developers, DBAs, and tech enthusiasts

🔥 Why You Shouldn’t Miss This

  • Learn one of the most in-demand database systems in your native language
  • Structured learning path with practical tasks and daily targets
  • Build confidence to work on real projects and solve SQL challenges
  • Lifetime value from one affordable investment.

We’ll meet you in the session!!!

Redis Strings – The Building Blocks of Key Value Storage

23 May 2025 at 00:12

Redis is famously known as an in-memory data structure store, often used as a database, cache, and message broker. The simplest and most fundamental data type in Redis is the string. This blog walks through everything you need to know about Redis strings with practical examples.

What Are Redis Strings?

In Redis, a string is a binary-safe sequence of bytes. That means it can contain any kind of data: text, integers, or even serialized objects.

  • Maximum size: 512 MB
  • Default behavior: key-value pair storage

Common String Commands

Let’s explore key-value operations you can perform on Redis strings using the redis-cli.

1. SET – Assign a Value to a Key

SET user:1:name "Alice"

This sets the key user:1:name to the value "Alice".

2. GET – Retrieve a Value by Key

GET user:1:name
# Output: "Alice"

3. EXISTS – Check if a Key Exists

EXISTS user:1:name
# Output: 1 (true)

4. DEL – Delete a Key

DEL user:1:name

5. SETEX – Set Value with Expiry (TTL)

SETEX session:12345 60 "token_xyz"

This sets session:12345 with value token_xyz that expires in 60 seconds.

6. INCR / DECR – Numeric Operations

SET views:homepage 0
INCR views:homepage
INCR views:homepage
DECR views:homepage
GET views:homepage
# Output: "1"

7. APPEND – Append to Existing String

SET greet "Hello"
APPEND greet ", World!"
GET greet
# Output: "Hello, World!"

8. MSET / MGET – Set or Get Multiple Keys at Once

MSET product:1 "Book" product:2 "Pen"
MGET product:1 product:2
# Output: "Book" "Pen"

Gotchas to Watch Out For

  1. String size limit: 512 MB per key.
  2. Atomic operations: INCR, DECR are atomic – ideal for counters.
  3. Expire keys: Always use TTL for session-like data to avoid memory bloat.
  4. Binary safety: Strings can hold any binary data, including serialized objects.
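
For instance, gotchas 2 and 3 combine nicely into an expiring counter (the key name here is just illustrative):

SET views:homepage:hourly 0
INCR views:homepage:hourly
EXPIRE views:homepage:hourly 3600
TTL views:homepage:hourly
# Output: seconds remaining before the key expires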

Use Redis with Python

import redis

# Connect to a local Redis server (default port 6379, database 0)
r = redis.Redis(host='localhost', port=6379, db=0)

r.set('user:1:name', 'Alice')
# redis-py returns bytes, so decode to get a str
print(r.get('user:1:name').decode())

💾 Redis Is Open Source Again – What Does That Mean?

11 May 2025 at 02:32

Imagine you’ve been using a powerful tool for years to help you build apps faster. Yeah, it’s Redis, a super fast database that helps apps remember things temporarily, like logins or shopping cart items. It was free, open, and loved by developers.

But one day, the team behind Redis changed the rules. They said

“You can still use Redis, but if you’re a big cloud company (like Amazon or Google) offering it to others as a service, you need to play by our special rules or pay us.”

This change upset many in the tech world. Why?

Because open-source means freedom: you can use it, improve it, and even share it with others. Redis’s new license in 2024 took away some of that freedom. It wasn’t completely closed, but it wasn’t truly open either. It hurt the big cloud providers like AWS and Microsoft the most.

What Happened Next?

Developers and tech companies didn’t like the new rules. So they said,

“Fine, we’ll make our own open version of Redis.”

That’s how a new project called Valkey was born, a fork (copy) of Redis that stayed truly open-source.

Fast forward to May 2025 — Redis listened. They said

“We’re bringing back the open-source spirit. Redis version 8.0 will be under a proper open-source license again: AGPLv3.”

What’s AGPLv3?

It’s a type of license that says:

  • ✅ You can use, change, and share Redis freely.
  • 🌐 If you run a modified Redis on a website or cloud service, you must also share your changes with the world. (still hurts AWS and Azure)

This keeps things fair: no more companies secretly benefiting from Redis without giving back.

What Did Redis Say?

Rowan Trollope, Redis’s CEO, explained why they had changed the license in the first place:

“Big cloud companies were making money off Redis but not helping us or the open-source community.”

But now, by switching to AGPLv3, Redis is balancing two things:

  • Protecting their work from being misused
  • And staying truly open-source

Why This Is Good News

  • Developers can continue using Redis freely.
  • The community can contribute and improve Redis.
  • Fair rules apply to everyone, even giant tech companies.

Redis has come full circle. After a detour into more restricted territory, it’s back where it belongs in the hands of everyone. This shows the power of the developer community, and why open-source isn’t just about code, it’s about collaboration, fairness, and freedom.

Checkout this blog from Redis https://redis.io/blog/agplv3/

How to Manage Multiple Cron Job Executions

16 March 2025 at 06:13

Cron jobs are a fundamental part of automating tasks in Unix-based systems. However, one common problem with cron jobs is multiple executions, where overlapping job runs can cause serious issues like data corruption, race conditions, or unexpected system load.

In this blog, we’ll explore why multiple executions happen, the potential risks, and how flock provides an elegant solution to ensure that a cron job runs only once at a time.

The Problem: Multiple Executions of Cron Jobs

Cron jobs are scheduled to run at fixed intervals, but sometimes a new job instance starts before the previous one finishes.

This can happen due to

  • Long-running jobs: If a cron job takes longer than its interval, a new instance starts while the old one is still running.
  • System slowdowns: High CPU or memory usage can delay job execution, leading to overlapping runs.
  • Simultaneous executions across servers: In a distributed system, multiple servers might execute the same cron job, causing duplication.

Example of a Problematic Cron Job

Let’s say we have the following cron job that runs every minute:

* * * * * /path/to/script.sh

If script.sh takes more than a minute to execute, a second instance will start before the first one finishes.

This can lead to:

✅ Duplicate database writes → Inconsistent data

✅ Conflicts in file processing → Corrupt files

✅ Overloaded system resources → Performance degradation

Real-World Example

Imagine a job that processes user invoices and sends emails

* * * * * /usr/bin/python3 /home/user/process_invoices.py

If the script takes longer than a minute to complete, multiple instances might start running, causing

  1. Users to receive multiple invoices.
  2. The database to get inconsistent updates.
  3. Increased server load due to excessive email sending.

The Solution: Using flock to Prevent Multiple Executions

flock is a Linux utility that manages file locks to ensure that only one instance of a process runs at a time. It works by locking a specific file, preventing other processes from acquiring the same lock.

Using flock in a Cron Job

Modify the cron job as follows

* * * * * /usr/bin/flock -n /tmp/myjob.lock /path/to/script.sh

How It Works

  • flock -n /tmp/myjob.lock → Tries to acquire a lock on /tmp/myjob.lock.
  • If the lock is available, the script runs.
  • If the lock is already held (i.e., another instance is running), flock prevents the new instance from starting.
  • -n (non-blocking) ensures that the job doesn’t wait for the lock and simply exits if it cannot acquire it.

This guarantees that only one instance of the job runs at a time.
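
If you prefer to keep the locking inside the script itself, the same tool works there too (a minimal sketch; file descriptor 9 and the lock path are arbitrary choices):

#!/bin/bash
exec 9>/tmp/myjob.lock        # open (or create) the lock file on file descriptor 9
flock -n 9 || exit 1          # exit immediately if another instance holds the lock
echo "Running job..."         # everything below runs under the lock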

Verifying the Solution

You can test the lock by manually running the script with flock

/usr/bin/flock -n /tmp/myjob.lock /bin/bash -c 'echo "Running job..."; sleep 30'

Open another terminal and try to run the same command. You’ll see that the second attempt exits immediately because the lock is already acquired.

Preventing multiple executions of cron jobs is essential for maintaining data consistency, system stability, and efficiency. By using flock, you can easily enforce single execution without complex logic.

✅ Simple & efficient solution. ✅ No external dependencies required. ✅ Works seamlessly with cron jobs.

So next time you set up a cron job, add flock and sleep peacefully knowing your tasks won’t collide. 🚀

🎯 PostgreSQL Zero to Hero with Parottasalna – 2 Day Bootcamp (FREE!) 🚀

2 March 2025 at 07:09

Databases power the backbone of modern applications, and PostgreSQL is one of the most powerful open-source relational databases trusted by top companies worldwide. Whether you’re a beginner or a developer looking to sharpen your database skills, this FREE bootcamp will take you from Zero to Hero in PostgreSQL!

What You’ll Learn?

✅ PostgreSQL fundamentals & installation

✅ Postgres Architecture
✅ Writing optimized queries
✅ Indexing & performance tuning
✅ Transactions & locking mechanisms
✅ Advanced joins, CTEs & subqueries
✅ Real-world best practices & hands-on exercises

This intensive hands on bootcamp is designed for developers, DBAs, and tech enthusiasts who want to master PostgreSQL from scratch and apply it in real-world scenarios.

Who Should Attend?

🔹 Beginners eager to learn databases
🔹 Developers & Engineers working with PostgreSQL
🔹 Anyone looking to optimize their SQL skills

📅 Date: March 22, 23 -> (Moved to April 5, 6)
⏰ Time: Will be finalized later.
📍 Location: Online
💰 Cost: 100% FREE 🎉

🔗 RSVP Here

Session is not taken !!! Will be announced later.

Prerequisite

  1. Checkout this playlist of our previous postgres session https://www.youtube.com/playlist?list=PLiutOxBS1Miy3PPwxuvlGRpmNo724mAlt

🎉 This bootcamp is completely FREE – Learn without any cost! 🎉

💡 Spots are limited – RSVP now to reserve your seat!

Boost System Performance During Traffic Surges with Spike Testing

1 March 2025 at 06:17

Introduction

Spike testing is a type of performance testing that evaluates how a system responds to sudden, extreme increases in load. Unlike stress testing, which gradually increases the load, spike testing simulates abrupt surges in traffic to identify system vulnerabilities, such as crashes, slow response times, and resource exhaustion.

In this blog, we will explore spike testing in detail, covering its importance, methodology, and full implementation using K6.

Why Perform Spike Testing?

Spike testing helps you

  • Determine system stability under unexpected traffic surges.
  • Identify bottlenecks that arise due to rapid load increases.
  • Assess auto-scaling capabilities of cloud-based infrastructures.
  • Measure response time degradation during high-demand spikes.
  • Ensure system recovery after the sudden load disappears.

Setting Up K6 for Spike Testing

Installing K6

# macOS
brew install k6  

# Ubuntu/Debian
sudo apt install k6  

# Using Docker
docker pull grafana/k6  

Choosing the Right Test Scenario

K6 provides different executors to simulate load patterns. For spike testing, we use

  • ramping-arrival-rate → Gradually increases the request rate over time.
  • constant-arrival-rate → Maintains a fixed number of requests per second after the spike.

Example 1: Basic Spike Test

This test starts with low traffic, spikes suddenly, and then drops back to normal.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    spike_test: {
      executor: 'ramping-arrival-rate',
      startRate: 10, // Start with 10 requests/sec
      timeUnit: '1s',
      preAllocatedVUs: 100,
      maxVUs: 500,
      stages: [
        { duration: '30s', target: 10 },  // Low traffic
        { duration: '10s', target: 500 }, // Sudden spike
        { duration: '30s', target: 10 },  // Traffic drops
      ],
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • Starts with 10 requests per second for 30 seconds.
  • Spikes to 500 requests per second in 10 seconds.
  • Drops back to 10 requests per second.
  • Tests the system’s ability to handle and recover from traffic spikes.

Example 2: Spike Test with High User Load

This test simulates a spike in virtual users rather than just requests per second.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    user_spike: {
      executor: 'ramping-vus',
      stages: [
        { duration: '30s', target: 20 },  // Normal traffic
        { duration: '10s', target: 300 }, // Sudden spike in users
        { duration: '30s', target: 20 },  // Drop back to normal
      ],
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation:

  • Simulates a sudden increase in concurrent virtual users (VUs).
  • Helps test server stability, database handling, and auto-scaling.

Example 3: Spike Test on Multiple Endpoints

In real-world applications, multiple endpoints may experience spikes simultaneously. Here’s how to test different API routes.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    multiple_endpoint_spike: {
      executor: 'ramping-arrival-rate',
      startRate: 5,
      timeUnit: '1s',
      preAllocatedVUs: 200,
      maxVUs: 500,
      stages: [
        { duration: '20s', target: 10 },  // Normal traffic
        { duration: '10s', target: 300 }, // Spike across endpoints
        { duration: '20s', target: 10 },  // Traffic drop
      ],
    },
  },
};

export default function () {
  let urls = [
    'https://test-api.example.com/users',
    'https://test-api.example.com/orders',
    'https://test-api.example.com/products'
  ];
  
  let res = http.get(urls[Math.floor(Math.random() * urls.length)]);
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • Simulates traffic spikes across multiple API endpoints.
  • Helps identify which API calls suffer under extreme load.

Analyzing Test Results

After running the tests, K6 provides key performance metrics

http_req_duration......: avg=350ms min=150ms max=3000ms
http_reqs..............: 10,000 requests
vus_max................: 500
errors.................: 2%

Key Metrics

  • http_req_duration → Measures response time impact.
  • vus_max → Peak virtual users during the spike.
  • errors → Percentage of failed requests due to overload.

Best Practices for Spike Testing

  • Monitor application logs and database performance during the test.
  • Use auto-scaling mechanisms for cloud-based environments.
  • Combine spike tests with stress testing for better insights.
  • Analyze error rates and recovery time to ensure system stability.

Spike testing is crucial for ensuring application stability under sudden, unpredictable traffic surges. Using K6, we can simulate spikes in both requests per second and concurrent users to identify bottlenecks before they impact real users.

How Can Stress Testing Make Systems More Attractive?

1 March 2025 at 06:06

Introduction

Stress testing is a critical aspect of performance testing that evaluates how a system performs under extreme loads. Unlike load testing, which simulates expected user traffic, stress testing pushes a system beyond its limits to identify breaking points and measure recovery capabilities.

In this blog, we will explore stress testing using K6, an open-source load testing tool, with detailed explanations and full examples to help you implement stress testing effectively.

Why Stress Testing?

Stress testing helps you

  • Identify the maximum capacity of your system.
  • Detect potential failures and bottlenecks.
  • Measure system stability and recovery under high loads.
  • Ensure infrastructure can handle unexpected spikes in traffic.

Setting Up K6 for Stress Testing

Installing K6

# macOS
brew install k6  

# Ubuntu/Debian
sudo apt install k6  

# Using Docker
docker pull grafana/k6  

Understanding Stress Testing Scenarios

K6 provides various executors to simulate different traffic patterns. For stress testing, we mainly use

  1. ramping-vus – Gradually increases virtual users to a high level.
  2. constant-vus – Maintains a fixed high number of virtual users.
  3. spike – Simulates a sudden surge in traffic.

Example 1: Basic Stress Test with Ramping VUs

This script gradually increases the number of virtual users, holds a peak load, and then reduces it.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '1m', target: 100 }, // Ramp up to 100 users in 1 min
    { duration: '3m', target: 100 }, // Stay at 100 users for 3 min
    { duration: '1m', target: 0 },   // Ramp down to 0 users
  ],
};

export default function () {
  let res = http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • The test starts with 0 users and ramps up to 100 users in 1 minute.
  • Holds 100 users for 3 minutes.
  • Gradually reduces load to 0 users.
  • The sleep(1) function helps simulate real user behavior between requests.

Example 2: Constant High Load Test

This test maintains a consistently high number of virtual users.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  vus: 200, // 200 virtual users
  duration: '5m', // Run the test for 5 minutes
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • 200 virtual users are constantly hitting the endpoint for 5 minutes.
  • Helps evaluate system performance under sustained high traffic.

Example 3: Spike Testing (Sudden Traffic Surge)

This test simulates a sudden spike in traffic, followed by a drop.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '10s', target: 10 },  // Start with 10 users
    { duration: '10s', target: 500 }, // Spike to 500 users
    { duration: '10s', target: 10 },  // Drop back to 10 users
  ],
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • Starts with 10 users.
  • Spikes suddenly to 500 users in 10 seconds.
  • Drops back to 10 users.
  • Helps determine how the system handles sudden surges in traffic.

Analyzing Test Results

After running the tests, K6 provides detailed statistics

checks..................: 100.00% ✓ 5000 ✗ 0
http_req_duration......: avg=300ms min=200ms max=2000ms
http_reqs..............: 5000 requests
vus_max................: 500

Key Metrics to Analyze

  • http_req_duration → Measures response time.
  • vus_max → Maximum concurrent virtual users.
  • http_reqs → Total number of requests.
  • errors → Number of failed requests.

Stress testing is vital to ensure application stability and scalability. Using K6, we can simulate different stress scenarios like ramping load, constant high load, and spikes to identify system weaknesses before they affect users.

Achieving Better User Engagement via Realistic Load Testing in K6

1 March 2025 at 05:55

Introduction

Load testing is essential to evaluate how a system behaves under expected and peak loads. Traditionally, we rely on metrics like requests per second (RPS), response time, and error rates. However, an insightful approach called Average Load Testing has been discussed recently. This blog explores that concept in detail, providing practical examples to help you apply it effectively.

Understanding Average Load Testing

Average Load Testing focuses on simulating real-world load patterns rather than traditional peak load tests. Instead of sending a fixed number of requests per second, this approach

  • Generates requests based on the average concurrency over time.
  • More accurately reflects real-world traffic patterns.
  • Helps identify performance bottlenecks in a realistic manner.

Setting Up Load Testing with K6

K6 is an excellent tool for implementing Average Load Testing. Let’s go through practical examples of setting up such tests.

Install K6

brew install k6  # macOS
sudo apt install k6  # Ubuntu/Debian
docker pull grafana/k6  # Using Docker

Example 1: Basic K6 Script for Average Load Testing

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    avg_load: {
      executor: 'constant-arrival-rate',
      rate: 10, // 10 requests per second
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 20,
      maxVUs: 50,
    },
  },
};

export default function () {
  let res = http.get('https://test-api.example.com');
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The constant-arrival-rate executor ensures a steady request rate.
  • rate: 10 sends 10 requests per second.
  • duration: '2m' runs the test for 2 minutes.
  • preAllocatedVUs: 20 and maxVUs: 50 define virtual users needed to sustain the load.
  • The script logs response times to the console.

Example 2: Testing with Varying Load

To better reflect real-world scenarios, we can use ramping arrival rate to simulate gradual increases in traffic

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    ramping_load: {
      executor: 'ramping-arrival-rate',
      startRate: 5, // Start with 5 requests/sec
      timeUnit: '1s',
      preAllocatedVUs: 50,
      maxVUs: 100,
      stages: [
        { duration: '1m', target: 20 },
        { duration: '2m', target: 50 },
        { duration: '3m', target: 100 },
      ],
    },
  },
};

export default function () {
  let res = http.get('https://test-api.example.com');
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The ramping-arrival-rate gradually increases requests per second over time.
  • The stages array defines a progression from 5 to 100 requests/sec over 6 minutes.
  • Logs response times to help analyze system performance.

Example 3: Load Testing with Multiple Endpoints

In real applications, multiple endpoints are often tested simultaneously. Here’s how to test different API routes

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  scenarios: {
    multiple_endpoints: {
      executor: 'constant-arrival-rate',
      rate: 15, // 15 requests per second
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 30,
      maxVUs: 60,
    },
  },
};

export default function () {
  let urls = [
    'https://test-api.example.com/users',
    'https://test-api.example.com/orders',
    'https://test-api.example.com/products'
  ];
  
  let res = http.get(urls[Math.floor(Math.random() * urls.length)]);
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • The script randomly selects an API endpoint to test different routes.
  • Uses check to ensure status codes are 200.
  • Logs response times for deeper insights.

Analyzing Results

To analyze test results, you can store logs or metrics in a database or monitoring tool and visualize trends over time. Some popular options include

  • Prometheus for time-series data storage.
  • InfluxDB for handling large-scale performance metrics.
  • ELK Stack (Elasticsearch, Logstash, Kibana) for log-based analysis.

Average Load Testing provides a more realistic way to measure system performance. By leveraging K6, you can create flexible, real-world simulations to optimize your applications effectively.

🚀 #FOSS: Mastering Superfile: The Ultimate Terminal-Based File Manager for Power Users

28 February 2025 at 17:07

🔥 Introduction

Are you tired of slow, clunky GUI-based file managers? Do you want lightning-fast navigation and total control over your files—right from your terminal? Meet Superfile, the ultimate tool for power users who love efficiency and speed.

In this blog, we’ll take you on a deep dive into Superfile’s features, commands, and shortcuts, transforming you into a file management ninja! ⚡

💡 Why Choose Superfile?

Superfile isn’t just another file manager; it’s a game-changer.

Here’s why

✅ Blazing Fast – No unnecessary UI lag, just pure efficiency.

✅ Keyboard-Driven – Forget the mouse, master navigation with powerful keybindings.

✅ Multi-Panel Support – Work with multiple directories simultaneously.

✅ Smart Search & Sorting – Instantly locate and organize files.

✅ Built-in File Preview & Metadata Display – See what you need without opening files.

✅ Highly Customizable – Tailor it to fit your workflow perfectly.

🛠 Installation

Getting started is easy! Install Superfile using

# For Linux (Debian-based)
wget -qO- https://superfile.netlify.app/install.sh | bash

# For macOS (via Homebrew)
brew install superfile

# For Windows (via Scoop)
scoop install superfile

Once installed, launch it with

spf

🚀 Boom! You’re ready to roll.

⚡ Essential Commands & Shortcuts

🏗 General Operations

  • Launch Superfile: spf
  • Exit: Press q or Esc
  • Help Menu: ?
  • Toggle Footer Panel: F

📂 File & Folder Navigation

  • New File Panel: n
  • Close File Panel: w
  • Toggle File Preview: f
  • Next Panel: Tab or Shift + l
  • Sidebar Panel: s

📝 File & Folder Management

  • Create File/Folder: Ctrl + n
  • Rename: Ctrl + r
  • Copy: Ctrl + c
  • Cut: Ctrl + x
  • Paste: Ctrl + v
  • Delete: Ctrl + d
  • Copy Path: Ctrl + p

🔎 Search & Selection

  • Search: /
  • Select Files: v
  • Select All: Shift + a

📦 Compression & Extraction

  • Extract Zip: Ctrl + e
  • Compress to Zip: Ctrl + a

🏆 Advanced Power Moves

  • Open Terminal Here: Shift + t
  • Open in Editor: e
  • Toggle Hidden Files: .

💡 Pro Tip: Use Shift + p to pin frequently accessed folders for even quicker access!

🎨 Customizing Superfile

Want to make Superfile truly yours? Customize it easily by editing the config file

$EDITOR CONFIG_PATH

To enable the metadata plugin, add

metadata = true

For more customizations, check out the Superfile documentation.

🎯 Final Thoughts

Superfile is the Swiss Army knife of terminal-based file managers. Whether you’re a developer, system admin, or just someone who loves a fast, efficient workflow, Superfile will revolutionize the way you manage files.

🚀 Ready to supercharge your productivity? Install Superfile today and take control like never before!

For more details, visit the Superfile website.

Linux Mint Installation Drive – Dual Boot on 10+ Machines!

24 February 2025 at 16:18

Hey everyone! Today, we had an exciting Linux installation session at our college. We expected many to do a full Linux installation, but instead, we set up dual boot on 10+ machines! 💻✨

💡 Topics Covered:
🛠 Syed Jafer – FOSS, GLUGs, and open-source communities
🌍 Salman – Why FOSS matters & Linux Commands
🚀 Dhanasekar – Linux and DevOps
🔧 Guhan – GNU and free software

Challenges We Faced


🔐 BitLocker Encryption – Had to disable BitLocker on some laptops
🔧 BIOS/UEFI Problems – Secure Boot, boot order changes needed
🐧 GRUB Issues – Windows not showing up, required boot-repair

🎥 Watch the installation video and try it yourself! https://www.youtube.com/watch?v=m7sSqlam2Sk


▶ Linux Mint Installation Guide https://tkdhanasekar.wordpress.com/2025/02/15/installation-of-linux-mint-22-1-cinnamon-edition/

This is just the beginning!

Effortless Data Storage with LocalBase and IndexedDB

22 February 2025 at 04:42

IndexedDB is a powerful client-side database API for storing structured data in browsers. However, its API is complex, requiring transactions, object stores, and cursors to manage data. LocalBase simplifies IndexedDB by providing an intuitive, promise-based API.

In this blog, we’ll explore LocalBase, its features, and how to use it effectively in web applications.

What is LocalBase?

LocalBase is an easy-to-use JavaScript library that simplifies IndexedDB interactions. It provides a syntax similar to Firestore, making it ideal for developers familiar with Firebase.

✅ Key Features

  • Promise based API
  • Simple CRUD operations
  • No need for manual transaction handling
  • Works seamlessly in modern browsers

Installation

You can install LocalBase via npm or use it directly in a script tag

Using npm

npm install localbase

Using CDN

<script src="https://cdn.jsdelivr.net/npm/localbase/dist/localbase.min.js"></script>

Getting Started with LocalBase

First, initialize the database

let db = new Localbase('myDatabase')

Adding Data

You can add records to a collection,

db.collection('users').add({
  id: 1,
  name: 'John Doe',
  age: 30
})

Fetching Data

Retrieve all records from a collection

db.collection('users').get().then(users => {
  console.log(users)
})

Updating Data

To update a record

db.collection('users').doc({ id: 1 }).update({
  age: 31
})

Deleting Data

Delete a specific document

db.collection('users').doc({ id: 1 }).delete()

Or delete the entire collection

db.collection('users').delete()

Advanced LocalBase Functionalities

1. Updating Data in LocalBase

LocalBase allows updating specific fields in a document without overwriting the entire record.

Basic Update Example

db.collection('users').doc({ id: 1 }).update({
  age: 31
})

🔹 This updates only the age field while keeping other fields unchanged.

Updating Multiple Fields

db.collection('users').doc({ id: 1 }).update({
  age: 32,
  city: 'New York'
})

🔹 The city field is added, and age is updated.

Handling Non-Existing Documents

If the document doesn’t exist, LocalBase won’t create it automatically. You can handle this with .get()

db.collection('users').doc({ id: 2 }).get().then(user => {
  if (user) {
    db.collection('users').doc({ id: 2 }).update({ age: 25 })
  } else {
    db.collection('users').add({ id: 2, name: 'Alice', age: 25 })
  }
})

2. Querying and Filtering Data

You can fetch documents based on conditions.

Get All Documents in a Collection

db.collection('users').get().then(users => {
  console.log(users)
})

Get a Single Document

db.collection('users').doc({ id: 1 }).get().then(user => {
  console.log(user)
})

Filter with Conditions

db.collection('users').get().then(users => {
  let filteredUsers = users.filter(user => user.age > 25)
  console.log(filteredUsers)
})

🔹 Since LocalBase doesn’t support native where queries, you need to filter manually.

3. Handling Transactions

LocalBase handles transactions internally, so you don’t need to worry about opening and closing them. However, you should use .then() to ensure operations complete before the next action.

Example: Sequential Updates

db.collection('users').doc({ id: 1 }).update({ age: 32 }).then(() => {
  return db.collection('users').doc({ id: 1 }).update({ city: 'Los Angeles' })
}).then(() => {
  console.log('Update complete')
})

🔹 This ensures that the age field is updated before adding the city field.

4. Clearing and Deleting Data

Deleting a Single Document

db.collection('users').doc({ id: 1 }).delete()

Deleting an Entire Collection

db.collection('users').delete()

Clearing All Data

db.delete()

🔹 This removes everything from the database!

5. Using LocalBase in Real-World Scenarios

Offline Caching for a To-Do List

db.collection('tasks').add({ id: 1, title: 'Buy groceries', completed: false })

Later, when the app is online, you can sync it with a remote database.

User Preferences Storage

db.collection('settings').doc({ theme: 'dark' }).update({ fontSize: '16px' })

🔹 Stores user settings locally, ensuring a smooth UX.

LocalBase makes IndexedDB developer-friendly with


✅ Easy updates without overwriting entire documents
✅ Simple filtering with JavaScript functions
✅ Automatic transaction handling
✅ Efficient storage for offline-first apps

For more details, check out the official repository:
🔗 GitHub – LocalBase

Inspired by 450dsa.com.

The Pros and Cons of LocalStorage in Modern Web Development

20 February 2025 at 16:26

Introduction

The Web Storage API is a set of mechanisms that enable browsers to store key-value pairs. Before HTML5, application data had to be stored in cookies, included in every server request. Web storage is intended to be far more user-friendly than using cookies.

Web storage is more secure, and large amounts of data can be stored locally, without affecting website performance.

There are 2 types of web storage,

  1. Local Storage
  2. Session Storage

We already have cookies. Why additional objects?

Unlike cookies, web storage objects are not sent to the server with each request. Because of that, we can store much more. Most modern browsers allow at least 5 megabytes of data (or more) and have settings to configure that.

Also unlike cookies, the server can’t manipulate storage objects via HTTP headers; everything is done in JavaScript. The storage is bound to the origin (domain/protocol/port triplet). That is, different protocols or subdomains get different storage objects, and they can’t access each other’s data.

In this guide, you will learn/refresh about LocalStorage.

LocalStorage

The localStorage property of the window (browser window object) interface allows you to access a Storage object for the Document’s origin; the stored data is saved across browser sessions.

  1. Data is kept in local storage for a long time, with no expiration date. This could be one day, one week, or even one year, depending on the developer’s preference (data in local storage is maintained even if the browser is closed).
  2. Local storage only stores strings. So, if you intend to store objects, lists or arrays, you must convert them into a string using JSON.stringify().
  3. Local storage is available via the window.localStorage property.
  4. What’s interesting is that the data survives a page refresh (for sessionStorage) and even a full browser restart (for localStorage).

Functionalities

// setItem normal strings
window.localStorage.setItem("name", "goku");

// getItem 
const name = window.localStorage.getItem("name");
console.log("name from localstorage, "+name);

// Storing an Object without JSON stringify

const data = {
  "commodity":"apple",
  "price":43
};
window.localStorage.setItem('commodity', data);
var result = window.localStorage.getItem('commodity');
console.log("Retrived data without jsonified, "+ result);

// Storing an object after converting to JSON string. 
var jsonifiedString = JSON.stringify(data);
window.localStorage.setItem('commodity', jsonifiedString);
var result = window.localStorage.getItem('commodity');
console.log("Retrived data after jsonified, "+ result);

// remove item 
window.localStorage.removeItem("commodity");
var result = window.localStorage.getItem('commodity');
console.log("Data after removing the key "+ result);

//length
console.log("length of local storage " + window.localStorage.length);

// clear
window.localStorage.clear();
console.log("length of local storage - after clear " + window.localStorage.length);

When to use Local Storage

  1. Data stored in Local Storage can easily be accessed by third parties.
  2. So it’s important to know that any sensitive data must not be stored in Local Storage.
  3. Local Storage can help in storing temporary data before it is pushed to the server.
  4. Always clear local storage once the operation is completed.

Where is the local storage saved?

Windows

  • Firefox: C:\Users\\AppData\Roaming\Mozilla\Firefox\Profiles\\webappsstore.sqlite, %APPDATA%\Mozilla\Firefox\Profiles\\webappsstore.sqlite
  • Chrome: %LocalAppData%\Google\Chrome\User Data\Default\Local Storage\

Linux

  • Firefox: ~/.mozilla/firefox//webappsstore.sqlite
  • Chrome: ~/.config/google-chrome/Default/Local Storage/

Mac

  • Firefox: ~/Library/Application Support/Firefox/Profiles//webappsstore.sqlite, ~/Library/Mozilla/Firefox/Profiles//webappsstore.sqlite
  • Chrome: ~/Library/Application Support/Google/Chrome//Local Storage/, ~/Library/Application Support/Google/Chrome/Default/Local Storage/

Downsides of LocalStorage

The majority of local storage’s drawbacks aren’t really significant. You can still use it; your app will just run a little slower and you’ll experience a minor developer inconvenience. Security, however, is different. Knowing and understanding the security model of local storage is crucial, since it will have a significant impact on your website in ways you might not have anticipated.

Local storage’s other drawback is that it is not secure in any way. Anyone who stores sensitive information in local storage, such as session data, user information, credit card information (even momentarily!), or anything else you wouldn’t want shared publicly on social media, is doing it incorrectly.

Local storage in a browser was never intended for safe storage. It was intended to be a straightforward, strings-only key/value store that programmers could use to build somewhat more complicated single page apps.

General Preventions

  1. For example, if the third-party JavaScript libraries we use are injected with scripts that extract the storage objects, our storage data won’t be secure anymore. Therefore it’s not recommended to save sensitive data such as
    • Username/Password
    • Credit card info
    • JWT tokens
    • API keys
    • Personal info
    • Session ids
  2. Do not use the same origin for multiple web applications; use subdomains instead, since otherwise the storage is shared by all of them. Each subdomain gets a unique localStorage of its own, and subdomain instances can’t read each other’s data.
  3. Once some data are stored in Local storage, the developers don’t have any control over it until the user clears it. If you want the data to be removed once the session ends, use SessionStorage.
  4. Validate, encode and escape data read from browser storage
  5. Encrypt data before saving

Git Stash Explained: Save Your Work Efficiently

19 February 2025 at 13:13

Introduction

Git is an essential tool for version control, and one of its underrated but powerful features is git stash. It lets developers temporarily shelve uncommitted changes, enabling a smooth workflow when switching branches or handling urgent bug fixes.

In this blog, we will explore git stash, its varieties, and some clever hacks to make the most of it.

1. Understanding Git Stash

Git stash allows developers to temporarily save changes made to the working directory, enabling them to switch contexts without having to commit incomplete work. This is particularly useful when you need to switch branches quickly or when you are interrupted by an urgent task.

When you run git stash, Git takes the uncommitted changes in your working directory (both staged and unstaged) and saves them on a stack called “stash stack”. This action reverts your working directory to the last committed state while safely storing the changes for later use.

How It Works

  • Git saves the current state of the working directory and the index (staging area) as a stash.
  • The stash includes modifications to tracked files, newly created files, and changes in the index.
  • Untracked files are not stashed by default unless specified.
  • Stashes are stored in a stack, with the most recent stash on top.

Common Use Cases

  • Context Switching: When you are working on a feature and need to switch branches for an urgent bug fix.
  • Code Review Feedback: If you receive feedback and need to make changes but are in the middle of another task.
  • Cleanup Before Commit: To stash temporary debugging changes or print statements before making a clean commit.

Git stash is used to save uncommitted changes in a temporary area, allowing you to switch branches or work on something else without committing incomplete work.

Basic Usage

The basic git stash command saves all modified tracked files and staged changes. This does not include untracked files by default.

git stash

This command performs three main actions

  • Saves changes: Takes the current working directory state and index and saves it as a new stash entry.
  • Resets working directory: Reverts the working directory to match the last commit.
  • Stacks the stash: Stores the saved state on top of the stash stack.

Restoring Changes

To restore the stashed changes, you can use

git stash pop

This does two things

  • Applies the stash: Reapplies the changes to your working directory.
  • Deletes the stash: Removes the stash entry from the stash stack.

If you want to keep the stash for future use

git stash apply

This reapplies the changes without deleting the stash entry.

Viewing and Managing Stashes

To see a list of all stash entries

git stash list

This shows a list like

stash@{0}: WIP on feature-branch: 1234567 Commit message
stash@{1}: WIP on master: 89abcdef Commit message

Each stash is identified by an index (e.g., stash@{0}) which can be used for other stash commands.

git stash

As noted earlier, this stashes modified tracked files (both staged and unstaged), but not untracked files.

To apply the last stashed changes back

git stash pop

This applies the stash and removes it from the stash list.

To apply the stash without removing it

git stash apply

To see a list of all stashed changes

git stash list

To remove a specific stash

git stash drop stash@{index}

To clear all stashes

git stash clear

2. Varieties of Git Stash

a) Stashing Untracked Files

By default, git stash does not include untracked files. To include them

git stash -u

Or:

git stash --include-untracked

b) Stashing Ignored Files

To stash even ignored files

git stash -a

Or:

git stash --all

c) Stashing with a Message

To add a meaningful message to a stash

git stash push -m "WIP: Refactoring user authentication"

d) Stashing Specific Files

If you only want to stash specific files

git stash push -m "Partial stash" -- path/to/file

e) Stashing and Switching Branches

Instead of running git stash and git checkout separately, do it in one step

git stash push -m "WIP: Bug Fix" && git checkout other-branch

3. Advanced Stash Hacks

a) Viewing Stashed Changes

To see the contents of a stash before applying

git stash show -p stash@{0}

b) Applying a Stash to a Different Branch

You can stash on one branch and apply it to another

git checkout other-branch
git stash apply stash@{0}

c) Creating a New Branch from a Stash

If you realize your stash should have been a separate branch

git stash branch new-branch stash@{0}

This will create a new branch and apply the stashed changes.

d) Keeping Index Changes

If you want to keep staged files untouched while stashing

git stash push --keep-index

e) Recovering a Dropped Stash

If you accidentally dropped a stash, it may still be in the reflog

git fsck --lost-found

Or, check stash history with:

git reflog stash
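
A common recovery recipe (a sketch; <sha> is a placeholder for whatever fsck reports):

git fsck --unreachable | grep commit   # list unreachable commit objects
git show <sha>                         # inspect a candidate to confirm it is your stash
git stash apply <sha>                  # reapply it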

f) Using Stash for Conflict Resolution

If you’re rebasing and hit conflicts, stash helps in saving progress

git stash
# Fix conflicts
# Continue rebase
git stash pop

4. When Not to Use Git Stash

  • If your work is significant, commit it instead of stashing.
  • Avoid excessive stashing as it can lead to forgotten changes.
  • Stashing doesn’t track renamed or deleted files effectively.

Git stash is an essential tool for developers to manage temporary changes efficiently. With the different stash varieties and hacks, you can enhance your workflow and avoid unnecessary commits. Mastering these techniques will save you time and improve your productivity in version control.

Happy coding! 🚀

Benefits of Binary Insertion Sort Explained

18 February 2025 at 14:30

Introduction

Binary insertion sort is a sorting algorithm similar to insertion sort, but instead of using linear search to find the position where the element should be inserted, we use binary search.

Thus, we reduce the number of comparisons for inserting one element from O(N) (Time complexity in Insertion Sort) to O(log N).

Best of both worlds

Binary insertion sort is a combination of insertion sort and binary search.

Insertion sort is a sorting technique that works by finding the correct position of each element in the array and inserting it there. Binary search is a searching technique that locates a position by repeatedly halving the search interval around the middle of the array.

As binary search runs in logarithmic time, the searching step of the algorithm also drops to logarithmic order. The implementation later in this post is a plain insertion sort in which the standard linear search is replaced by binary search.

How Binary Insertion Sort works ?

Process flow

In binary insertion sort, we divide the array into two subarrays — sorted and unsorted. The first element of the array is in the sorted subarray, and the rest of the elements are in the unsorted one.

We then iterate from the second element to the last element. For the i-th iteration, we make the current element our “key.” This key is the element that we have to add to our existing sorted subarray.

Example

Consider the array 29, 10, 14, 37, 14

First Pass

i = 1, key = 10

Since the first element is considered sorted, we start from the second element and apply binary search on the sorted subarray.

Here the middle element of the sorted subarray (29) is greater than the key (10), so the key belongs at index 0. We shift the remaining elements one position to the right.

Increment i and move on to the next element.

Second Pass

i = 2, key = 14

Now the key element is 14. We apply binary search on the sorted subarray to find its position.

Binary search places the key at index 1 (between 10 and 29), and we shift the remaining elements one position to the right.

Third Pass

i = 3, key = 37

Now the key element is 37. We apply binary search on the sorted subarray to find its position.

Binary search shows the key is already in its correct position, so no shifting is needed.

Fourth Pass

i = 4, key = 14

Now the key element is 14. We apply binary search on the sorted subarray to find its position.

Binary search places the key at index 2 (after the existing 14 and before 29, which keeps the sort stable), and we shift the remaining elements one position to the right.

Now we can see all the elements are sorted.

def binary_search(arr, key, start, end):
    # Return the index where `key` should be inserted in the sorted
    # subarray arr[start..end]. Equal keys are placed after existing
    # ones, which keeps the sort stable.
    if start == end:
        if arr[start] > key:
            return start
        else:
            return start + 1

    if start > end:
        return start

    mid = (start + end) // 2
    if arr[mid] <= key:
        return binary_search(arr, key, mid + 1, end)
    else:
        return binary_search(arr, key, start, mid - 1)


def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        pos = binary_search(arr, key, 0, i - 1)
        # Shift arr[pos..i-1] one step to the right, then drop the key
        # into place, so the sort works in place.
        for j in range(i, pos, -1):
            arr[j] = arr[j - 1]
        arr[pos] = key
    return arr


sorted_array = insertion_sort([29, 10, 14, 37, 14])
print("Sorted Array : ", sorted_array)

Pseudocode

Consider the array Arr,

  1. Iterate the array from the second element to the last element.
  2. Store the current element Arr[i] in a variable key.
  3. Find the position of the element just greater than Arr[i] in the subarray from Arr[0] to Arr[i-1] using binary search. Say this element is at index pos.
  4. Shift all the elements from index pos to i-1 towards the right.
  5. Arr[pos] = key.

Complexity Analysis

Worst Case

For inserting the i-th element into its correct position in the sorted subarray, finding the position (pos) takes O(log i) steps. However, to insert the element, we need to shift all the elements from pos to i-1. This takes up to i steps in the worst case (when we have to insert at the starting position).

We make a total of N insertions, so the worst-case time complexity of binary insertion sort is O(N^2).

This occurs when the array is initially sorted in descending order.

Best Case

The best case will be when the element is already in its sorted position. In this case, we don’t have to shift any of the elements; we can insert the element in O(1).

But we use binary search to find the insertion position. Even when the element is already in place, the search takes O(log i) steps, so we do O(log i) work for the i-th element. Summing log i over all N elements gives log(N!) ≈ N log N, so the best-case time complexity is O(N log N).

This occurs when the array is initially sorted in ascending order.

Average Case

For average-case time complexity, we assume that the elements of the array are jumbled. Thus, on average, we need about i/2 shifts for inserting the i-th element, so the average time complexity of binary insertion sort is O(N^2).

Space Complexity Analysis

Binary insertion sort is an in-place sorting algorithm. This means that it only requires a constant amount of additional space. We sort the given array by shifting and inserting the elements.

Therefore, the space complexity of this algorithm is O(1) if we use iterative binary search. It will be O(log N) if we use recursive binary search, because of the O(log N) recursive calls on the stack.

Is Binary Insertion Sort a stable algorithm?

Yes, it is a stable sorting algorithm: elements with equal values appear in the same order in the final array as in the initial array, provided the binary search inserts each key after any existing equal elements (as the implementation above does).

Pros and Cons

  1. Binary insertion sort works efficiently for smaller arrays.
  2. It also works well for almost-sorted arrays, where elements are near their final sorted positions.
  3. However, when the array is large, binary insertion sort doesn’t perform well; algorithms like merge sort or quicksort are better choices in such cases.
  4. Making fewer comparisons is one of its strengths, so it is efficient when the cost of a comparison is high.
  5. For example, when sorting an array of strings, comparing two strings can cost far more than moving them; see the sketch after this list.
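Since the implementation above relies only on comparisons, the same function sorts strings unchanged; this is exactly the expensive-comparison case where fewer comparisons pay off

words = ["pear", "apple", "fig", "banana"]
print(insertion_sort(words))
# ['apple', 'banana', 'fig', 'pear']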

Bonus Section

Binary Insertion Sort has a quadratic time complexity just as Insertion Sort. Still, it is usually faster than Insertion Sort in practice, which is apparent when comparison takes significantly more time than swapping two elements.

Why Skeleton Screens Improve Perceived Loading in Apps like LinkedIn

18 February 2025 at 11:55

Introduction

What do Reddit, Discord, Medium, and LinkedIn have in common? They use what’s called a skeleton loading screen for their applications. A skeleton screen is essentially a wireframe of the application. The wireframe is a placeholder until the application finally loads.

skeleton loader image

Rise of the skeleton loader

The term “skeleton screen” was introduced in 2013 by product designer Luke Wroblewski in a blog post about reducing perceived wait time. In that post (lukew.com/ff/entry.asp?1797), he explains how gradually revealing page content shifts the user’s attention to the content being loaded and away from the loading time itself.

Skeleton Loader

Skeleton loading screens will improve your application’s user experience and make it feel more performant. The skeleton loading screen essentially impersonates the original layout.

This lets the user know what’s happening on the screen. The user interprets this as the application is booting up and the content is loading.

In simplest terms, a skeleton loader is a static or animated placeholder for the information that is still loading. It mimics the structure and look of the entire view.

Why not just a loading spinner?

Instead of showing a loading spinner, we can show a skeleton screen that lets the user see progress happening while launching and navigating the application.

They let the user know that some content is loading and, more importantly, provide an indication of what is loading, whether it’s an image, text, card, and so on.

This gives the user the impression that the website is faster because they already know what type of content is loading before it appears. This is referred to as perceived performance.

Skeleton screens don’t really make pages load faster. Instead, they are designed to make it feel like pages are loading faster.

When to use ?

  1. Use on high-traffic pages where resources take a while to load, like an account dashboard.
  2. Use when a component contains a good amount of information, such as a list or card.
  3. A skeleton can replace a spinner in most situations and often provides a better user experience.
  4. Use when more than one element is loading at the same time and needs an indicator.
  5. Use when you need to load multiple images at once; a skeleton screen makes a good placeholder. For these pages, also consider implementing lazy loading first, a related technique for decreasing perceived load time.

When not to use ?

  1. Don’t use it for long-running processes, e.g. importing or manipulating data (operations in data-intensive applications).
  2. Don’t use it for fast processes that take less than half a second.
  3. Users still associate video buffering with spinners, so avoid skeleton screens any time a video is loading on your page.
  4. For longer processes (uploads, downloads, file manipulation), use a progress bar instead of a skeleton.
  5. Don’t use it as a replacement for poor performance: if you can optimize your website to actually load content faster, always pursue that first.

Let’s design a simple skeleton loader

index.html

<!DOCTYPE html>
<html>
<head>
	<meta charset="utf-8">
	<meta name="viewport" content="width=device-width, initial-scale=1">
	<title>Skeleton Loading</title>

	<style>
		.card {
			width:  250px;
			height:  150px;
			background-color: #fff;
			padding: 10px;
			border-radius: 5px;
			border:  1px solid gray;
		}

		.card-skeleton {
			background-image: linear-gradient(90deg, #ccc 0px, rgb(229 229 229 / 90%) 40px, #ccc 80px);
			background-size: 300%;
			background-position: 100% 0;
			border-radius: inherit;
			animation: shimmer 1.5s infinite;
		}

		.title {
			height: 15px;
			margin-bottom: 15px;
		}

		.description {
			height: 100px;
		}

		@keyframes shimmer{
			to {
				background-position: -100% 0;
			}
		}
	</style>

</head>
<body>
	<div class="card">
		<div class="card-skeleton title"></div>
		<div class="card-skeleton description"></div>
	</div>

</body>
</html>
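To complete the picture, here is a minimal sketch (not part of the original markup) of swapping the skeleton for real content once it arrives. In a real app this would run when your data fetch resolves; here a timer simulates it. Place it just before the closing body tag:

<script>
	// Simulate content arriving after 2 seconds, then drop the shimmer
	setTimeout(() => {
		document.querySelectorAll('.card-skeleton').forEach(el => {
			el.classList.remove('card-skeleton');
			el.textContent = 'Loaded content';
		});
	}, 2000);
</script>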


Suggestions to keep in mind

  1. The goal is to design for a perception of decreased waiting time.
  2. Content should replace the skeleton exactly in position and size as soon as it loads.
  3. Using motion that moves from left to right (wave) gives a better perception of decreased waiting time than fading in and out (pulse).
  4. Using motion that is slow and steady gives a perception of decreased waiting time.

Simplify Your Workflow with Global .gitignore: Code Happily, Ignore Globally!

17 February 2025 at 18:30

To my surprise, I recently came to know that Git has a feature called a global ignore. In this blog I will jot down notes on setting up a global .gitignore file.

Git allows you to ignore certain files and directories using a .gitignore file. However, if you frequently work on multiple projects, you might want to configure a global .gitignore file to avoid repeating the same ignore rules across repositories.

Why Use a Global .gitignore?

A global .gitignore file is beneficial when you want to exclude common files across all Git repositories on your system. Some examples include,

  • OS-specific files (e.g., macOS .DS_Store, Windows Thumbs.db)
  • Editor/IDE-specific files (e.g., .vscode/, .idea/)
  • Common compiled files (e.g., *.pyc, node_modules/, dist/)
  • Personal configurations that should not be committed (e.g., .env files)
  • Or you can define a pattern like *.ignore.*, which lets you ignore any matching file on demand.

Setting Up a Global .gitignore File

1. Create the Global .gitignore File

Run the following command to create a global .gitignore file in your home directory:

 touch ~/.gitignore_global

You can use any path, but ~/.gitignore_global is commonly used.

2. Configure Git to Use the Global .gitignore

Tell Git to use this file globally by running

 git config --global core.excludesFile ~/.gitignore_global

3. Add Rules to the Global .gitignore

Edit the ~/.gitignore_global file using a text editor

 nano ~/.gitignore_global

Example content

# Ignore system files
.DS_Store
Thumbs.db

# Ignore editor/IDE settings
.vscode/
.idea/
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

# Ignore compiled files
*.o
*.pyc
*.class
node_modules/
dist/

# Ignore environment files
.env
.env.local

Save the file and exit the editor.

4. Verify the Global Ignore Configuration

To check if Git recognizes your global .gitignore file, run

 git config --global core.excludesFile

It should return the path to your global ignore file, e.g., ~/.gitignore_global.
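You can also confirm that a specific path is being ignored, and by which rule; the -v flag prints the source file, line number, and pattern that matched (the file name here is just an example)

 git check-ignore -v .DS_Store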

Managing the Global .gitignore

  • To update the file, edit ~/.gitignore_global and add/remove rules as needed.
  • If you ever want to remove the global .gitignore setting, run: git config --global --unset core.excludesFile
  • To list all global configurations, use: git config --global --list
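One caveat worth remembering: ignore rules, global or local, only apply to untracked files. If a file is already tracked, you must untrack it first (the file name here is an example)

 git rm --cached .env
 git commit -m "Stop tracking .env"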

Setting up a global .gitignore file is a simple yet powerful way to maintain cleaner repositories and avoid committing unnecessary files. By following these steps, you can streamline your workflow across multiple Git projects efficiently.
