Databases are the backbone of modern applications, and PostgreSQL is one of the most powerful open-source relational databases, trusted by top companies worldwide. Whether you're a beginner or a developer looking to sharpen your database skills, this FREE bootcamp will take you from Zero to Hero in PostgreSQL!
This intensive, hands-on bootcamp is designed for developers, DBAs, and tech enthusiasts who want to master PostgreSQL from scratch and apply it in real-world scenarios.
Who Should Attend?
Beginners eager to learn databases
Developers & Engineers working with PostgreSQL
Anyone looking to optimize their SQL skills
Date: March 22, 23
Time: Will be finalized later.
Location: Online
Cost: 100% FREE
Spike testing is a type of performance testing that evaluates how a system responds to sudden, extreme increases in load. Unlike stress testing, which gradually increases the load, spike testing simulates abrupt surges in traffic to identify system vulnerabilities, such as crashes, slow response times, and resource exhaustion.
In this blog, we will explore spike testing in detail, covering its importance, methodology, and full implementation using K6.
Why Perform Spike Testing?
Spike testing helps you:
Determine system stability under unexpected traffic surges.
Identify bottlenecks that arise due to rapid load increases.
Assess auto-scaling capabilities of cloud-based infrastructures.
Measure response time degradation during high-demand spikes.
Ensure system recovery after the sudden load disappears.
Key metrics to watch during a spike test:
http_req_duration – Measures response time impact.
vus_max – Peak virtual users during the spike.
errors – Percentage of failed requests due to overload.
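Below is a minimal spike-test sketch that tracks these metrics; the endpoint is a placeholder, and the errors rate is a custom metric (both are assumptions for illustration).
import http from 'k6/http';
import { sleep } from 'k6';
import { Rate } from 'k6/metrics';

const errors = new Rate('errors'); // custom metric: share of failed requests

export const options = {
  stages: [
    { duration: '30s', target: 20 },  // normal baseline load
    { duration: '10s', target: 300 }, // sudden spike
    { duration: '30s', target: 20 },  // drop back and observe recovery
  ],
};

export default function () {
  const res = http.get('https://test-api.example.com');
  errors.add(res.status !== 200); // count non-200 responses as errors
  sleep(1);
}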
Best Practices for Spike Testing
Monitor application logs and database performance during the test.
Use auto-scaling mechanisms for cloud-based environments.
Combine spike tests with stress testing for better insights.
Analyze error rates and recovery time to ensure system stability.
Spike testing is crucial for ensuring application stability under sudden, unpredictable traffic surges. Using K6, we can simulate spikes in both requests per second and concurrent users to identify bottlenecks before they impact real users.
Stress testing is a critical aspect of performance testing that evaluates how a system performs under extreme loads. Unlike load testing, which simulates expected user traffic, stress testing pushes a system beyond its limits to identify breaking points and measure recovery capabilities.
In this blog, we will explore stress testing using K6, an open-source load testing tool, with detailed explanations and full examples to help you implement stress testing effectively.
Why Stress Testing?
Stress testing helps you:
Identify the maximum capacity of your system.
Detect potential failures and bottlenecks.
Measure system stability and recovery under high loads.
Ensure infrastructure can handle unexpected spikes in traffic.
K6 provides various executors to simulate different traffic patterns. For stress testing, we mainly use:
ramping-vus – Gradually increases virtual users to a high level.
constant-vus – Maintains a fixed high number of virtual users.
A spike pattern – Simulates a sudden surge in traffic using steep ramping-vus stages (K6 has no dedicated spike executor).
Example 1: Basic Stress Test with Ramping VUs
This script gradually increases the number of virtual users, holds a peak load, and then reduces it.
import http from 'k6/http';
import { sleep } from 'k6';
export let options = {
stages: [
{ duration: '1m', target: 100 }, // Ramp up to 100 users in 1 min
{ duration: '3m', target: 100 }, // Stay at 100 users for 3 min
{ duration: '1m', target: 0 }, // Ramp down to 0 users
],
};
export default function () {
let res = http.get('https://test-api.example.com');
sleep(1);
}
Explanation
The test starts with 0 users and ramps up to 100 users in 1 minute.
Holds 100 users for 3 minutes.
Gradually reduces load to 0 users.
The sleep(1) function helps simulate real user behavior between requests.
Example 2: Constant High Load Test
This test maintains a consistently high number of virtual users.
import http from 'k6/http';
import { sleep } from 'k6';
export let options = {
vus: 200, // 200 virtual users
duration: '5m', // Run the test for 5 minutes
};
export default function () {
http.get('https://test-api.example.com');
sleep(1);
}
Explanation
200 virtual users are constantly hitting the endpoint for 5 minutes.
Helps evaluate system performance under sustained high traffic.
Example 3: Spike Testing (Sudden Traffic Surge)
This test simulates a sudden spike in traffic, followed by a drop.
import http from 'k6/http';
import { sleep } from 'k6';
export let options = {
stages: [
{ duration: '10s', target: 10 }, // Start with 10 users
{ duration: '10s', target: 500 }, // Spike to 500 users
{ duration: '10s', target: 10 }, // Drop back to 10 users
],
};
export default function () {
http.get('https://test-api.example.com');
sleep(1);
}
Explanation
Starts with 10 users.
Spikes suddenly to 500 users in 10 seconds.
Drops back to 10 users.
Helps determine how the system handles sudden surges in traffic.
Analyzing Test Results
After running the tests, K6 provides detailed statistics such as http_req_duration (response times), http_req_failed (error rate), vus_max (peak concurrent users), and iterations (total executions).
Stress testing is vital to ensure application stability and scalability. Using K6, we can simulate different stress scenarios like ramping load, constant high load, and spikes to identify system weaknesses before they affect users.
Load testing is essential to evaluate how a system behaves under expected and peak loads. Traditionally, we rely on metrics like requests per second (RPS), response time, and error rates. However, an insightful approach called Average Load Testing has been discussed recently. This blog explores that concept in detail, providing practical examples to help you apply it effectively.
Understanding Average Load Testing
Average Load Testing focuses on simulating real-world load patterns rather than traditional peak load tests. Instead of sending a fixed number of requests per second, this approach:
Generates requests based on the average concurrency over time.
More accurately reflects real-world traffic patterns.
Helps identify performance bottlenecks in a realistic manner.
Setting Up Load Testing with K6
K6 is an excellent tool for implementing Average Load Testing. Let's go through practical examples of setting up such tests.
The ramping-arrival-rate executor gradually increases requests per second over time.
The stages array defines a progression from 5 to 100 requests/sec over 6 minutes.
Logs response times to help analyze system performance.
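A sketch of a scenario matching that description; the endpoint and exact stage breakdown are assumptions:
import http from 'k6/http';

export const options = {
  scenarios: {
    average_load: {
      executor: 'ramping-arrival-rate',
      startRate: 5,        // begin at 5 requests per second
      timeUnit: '1s',
      preAllocatedVUs: 50, // VUs reserved to sustain the arrival rate
      maxVUs: 200,
      stages: [
        { duration: '2m', target: 25 },  // ramp to 25 req/s
        { duration: '2m', target: 60 },  // ramp to 60 req/s
        { duration: '2m', target: 100 }, // reach 100 req/s at the 6-minute mark
      ],
    },
  },
};

export default function () {
  const res = http.get('https://test-api.example.com');
  console.log(`Response time: ${res.timings.duration}ms`); // log for later analysis
}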
Example 3: Load Testing with Multiple Endpoints
In real applications, multiple endpoints are often tested simultaneously. Here's how to test different API routes:
import http from 'k6/http';
import { check, sleep } from 'k6';
export let options = {
scenarios: {
multiple_endpoints: {
executor: 'constant-arrival-rate',
rate: 15, // 15 requests per second
timeUnit: '1s',
duration: '2m',
preAllocatedVUs: 30,
maxVUs: 60,
},
},
};
export default function () {
let urls = [
'https://test-api.example.com/users',
'https://test-api.example.com/orders',
'https://test-api.example.com/products'
];
let res = http.get(urls[Math.floor(Math.random() * urls.length)]);
check(res, {
'is status 200': (r) => r.status === 200,
});
console.log(`Response time: ${res.timings.duration}ms`);
sleep(1);
}
Explanation
The script randomly selects an API endpoint to test different routes.
Uses check to ensure status codes are 200.
Logs response times for deeper insights.
Analyzing Results
To analyze test results, you can store logs or metrics in a database or monitoring tool and visualize trends over time. Some popular options include:
Prometheus for time-series data storage.
InfluxDB for handling large-scale performance metrics.
ELK Stack (Elasticsearch, Logstash, Kibana) for log-based analysis.
Average Load Testing provides a more realistic way to measure system performance. By leveraging K6, you can create flexible, real-world simulations to optimize your applications effectively.
Are you tired of slow, clunky GUI-based file managers? Do you want lightning-fast navigation and total control over your files, right from your terminal? Meet Superfile, the ultimate tool for power users who love efficiency and speed.
In this blog, we'll take you on a deep dive into Superfile's features, commands, and shortcuts, transforming you into a file management ninja!
Why Choose Superfile?
Superfile isn't just another file manager; it's a game-changer.
Here's why:
Blazing Fast – No unnecessary UI lag, just pure efficiency.
Keyboard-Driven – Forget the mouse; master navigation with powerful keybindings.
Multi-Panel Support – Work with multiple directories simultaneously.
Smart Search & Sorting – Instantly locate and organize files.
Built-in File Preview & Metadata Display – See what you need without opening files.
Highly Customizable – Tailor it to fit your workflow perfectly.
Installation
Getting started is easy! Install Superfile using:
# For Linux (Debian-based)
wget -qO- https://superfile.netlify.app/install.sh | bash
# For macOS (via Homebrew)
brew install superfile
# For Windows (via Scoop)
scoop install superfile
Once installed, launch it with:
spf
Boom! You're ready to roll.
Essential Commands & Shortcuts
General Operations
Launch Superfile: spf
Exit: Press q or Esc
Help Menu: ?
Toggle Footer Panel: F
File & Folder Navigation
New File Panel: n
Close File Panel: w
Toggle File Preview: f
Next Panel: Tab or Shift + l
Sidebar Panel: s
File & Folder Management
Create File/Folder: Ctrl + n
Rename: Ctrl + r
Copy: Ctrl + c
Cut: Ctrl + x
Paste: Ctrl + v
Delete: Ctrl + d
Copy Path: Ctrl + p
Search & Selection
Search: /
Select Files: v
Select All: Shift + a
Compression & Extraction
Extract Zip: Ctrl + e
Compress to Zip: Ctrl + a
Advanced Power Moves
Open Terminal Here: Shift + t
Open in Editor: e
Toggle Hidden Files: .
Pro Tip: Use Shift + p to pin frequently accessed folders for even quicker access!
Customizing Superfile
Want to make Superfile truly yours? Customize it easily by editing the config file.
Superfile is the Swiss Army knife of terminal-based file managers. Whether you're a developer, system admin, or just someone who loves a fast, efficient workflow, Superfile will revolutionize the way you manage files.
Ready to supercharge your productivity? Install Superfile today and take control like never before!
Hey everyone! Today, we had an exciting Linux installation session at our college. We expected most attendees to do a full Linux installation, but instead we set up dual boot on 10+ machines!
Topics Covered:
Syed Jafer – FOSS, GLUGs, and open-source communities
Salman – Why FOSS matters & Linux commands
Dhanasekar – Linux and DevOps
Guhan – GNU and free software
Challenges We Faced
BitLocker Encryption – Had to disable BitLocker on some laptops
BIOS/UEFI Problems – Secure Boot and boot order changes needed
GRUB Issues – Windows not showing up, required boot-repair
IndexedDB is a powerful client-side database API for storing structured data in browsers. However, its API is complex, requiring transactions, object stores, and cursors to manage data. LocalBase simplifies IndexedDB by providing an intuitive, promise-based API.
In this blog, we'll explore LocalBase, its features, and how to use it effectively in web applications.
What is LocalBase?
LocalBase is an easy-to-use JavaScript library that simplifies IndexedDB interactions. It provides a syntax similar to Firestore, making it ideal for developers familiar with Firebase.
Key Features
Promise-based API
Simple CRUD operations
No need for manual transaction handling
Works seamlessly in modern browsers
Installation
You can install LocalBase via npm or use it directly via a script tag:
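For example, following LocalBase's Firestore-like API (the collection and field names here are illustrative):
npm install localbase

Then, in your application code:

import Localbase from 'localbase';

let db = new Localbase('db'); // database name

// Create: add a document to a collection
db.collection('users').add({ id: 1, name: 'Bill' });

// Read: fetch every document in the collection
db.collection('users').get().then(users => console.log(users));

// Update: select a document by its contents, then change it
db.collection('users').doc({ id: 1 }).update({ name: 'William' });

// Delete the selected document
db.collection('users').doc({ id: 1 }).delete();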
Since LocalBase doesn't support native where queries, you need to filter manually.
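For example, a manual "where" can be a plain JavaScript filter after fetching the collection (the age field is illustrative):
db.collection('users').get().then(users => {
  const adults = users.filter(user => user.age >= 18); // manual "where" clause
  console.log(adults);
});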
3. Handling Transactions
LocalBase handles transactions internally, so you don't need to worry about opening and closing them. However, you should use .then() to ensure operations complete before the next action.
The Web Storage API is a set of mechanisms that enable browsers to store key-value pairs. Before HTML5, application data had to be stored in cookies and included in every server request. Web storage is intended to be far more user-friendly than cookies.
Web storage is more secure, and large amounts of data can be stored locally, without affecting website performance.
There are two types of web storage:
Local Storage
Session Storage
We already have cookies. Why additional objects?
Unlike cookies, web storage objects are not sent to the server with each request. Because of that, we can store much more. Most modern browsers allow at least 5 megabytes of data (or more) and have settings to configure that.
Also unlike cookies, the server can't manipulate storage objects via HTTP headers; everything is done in JavaScript. The storage is bound to the origin (domain/protocol/port triplet). That is, different protocols or subdomains get different storage objects, and they can't access each other's data.
In this guide, you will learn/refresh about LocalStorage.
LocalStorage
The localStorage property of the window (browser window object) interface allows you to access a Storage object for the document's origin; the stored data is saved across browser sessions.
Data is kept in local storage for a long time, with no expiration date. It could remain for one day, one week, or even one year, as per the developer's preference; data in local storage is maintained even if the browser is closed.
Local storage only stores strings. So, if you intend to store objects, lists, or arrays, you must convert them into a string using JSON.stringify().
Local storage is available via the window.localStorage property.
What's interesting about them is that the data survives a page refresh (for sessionStorage) and even a full browser restart (for localStorage).
Functionalities
// setItem normal strings
window.localStorage.setItem("name", "goku");
// getItem
const name = window.localStorage.getItem("name");
console.log("name from localstorage, "+name);
// Storing an Object without JSON stringify
const data = {
"commodity":"apple",
"price":43
};
window.localStorage.setItem('commodity', data);
var result = window.localStorage.getItem('commodity');
console.log("Retrived data without jsonified, "+ result);
// Storing an object after converting to JSON string.
var jsonifiedString = JSON.stringify(data);
window.localStorage.setItem('commodity', jsonifiedString);
var result = window.localStorage.getItem('commodity');
console.log("Retrived data after jsonified, "+ result);
// remove item
window.localStorage.removeItem("commodity");
var result = window.localStorage.getItem('commodity');
console.log("Data after removing the key "+ result);
//length
console.log("length of local storage " + window.localStorage.length);
// clear
window.localStorage.clear();
console.log("length of local storage - after clear " + window.localStorage.length);
When to use Local Storage
Data stored in Local Storage can be easily accessed by third parties.
So it's important that sensitive data is never stored in Local Storage.
Local Storage can help in storing temporary data before it is pushed to the server.
Always clear local storage once the operation is completed.
The majority of local storage's drawbacks aren't really significant. You may still use it; your app will just run a little slower and you'll experience a tiny developer inconvenience. Security, however, is different. Knowing and understanding the security model of local storage is crucial, since it will have a significant impact on your website in ways you might not have anticipated.
Local storage is also inherently insecure. Anyone who stores sensitive information in local storage, such as session data, user information, credit card information (even momentarily!), or anything else you wouldn't want shared publicly on social media, is doing it incorrectly.
Local storage was never intended to be secure storage in a browser. It was designed as a straightforward key/value store for strings only, which programmers could use to create somewhat more complicated single-page apps.
General Preventions
For example, if we are using third-party JavaScript libraries and they are injected with scripts that extract the storage objects, our storage data won't be secure anymore. Therefore, it's not recommended to save sensitive data such as:
Username/Password
Credit card info
JWT tokens
API keys
Personal info
Session ids
Do not use the same origin for multiple web applications; use subdomains instead, since otherwise the storage will be shared by all of them. Each subdomain has its own unique localStorage, and subdomain instances can't access each other's data.
Once data is stored in Local Storage, developers don't have any control over it until the user clears it. If you want the data to be removed once the session ends, use SessionStorage.
Validate, encode, and escape data read from browser storage.
Git is an essential tool for version control, and one of its underrated but powerful features is git stash. It allows developers to temporarily save their uncommitted changes without committing them, enabling a smooth workflow when switching branches or handling urgent bug fixes.
In this blog, we will explore git stash, its varieties, and some clever hacks to make the most of it.
1. Understanding Git Stash
Git stash allows developers to temporarily save changes made to the working directory, enabling them to switch contexts without having to commit incomplete work. This is particularly useful when you need to switch branches quickly or when you are interrupted by an urgent task.
When you run git stash, Git takes the uncommitted changes in your working directory (both staged and unstaged) and saves them on a stack called the "stash stack". This action reverts your working directory to the last committed state while safely storing the changes for later use.
How It Works
Git saves the current state of the working directory and the index (staging area) as a stash.
The stash includes modifications to tracked files, newly created files, and changes in the index.
Untracked files are not stashed by default unless specified.
Stashes are stored in a stack, with the most recent stash on top.
Common Use Cases
Context Switching: When you are working on a feature and need to switch branches for an urgent bug fix.
Code Review Feedback: If you receive feedback and need to make changes but are in the middle of another task.
Cleanup Before Commit: To stash temporary debugging changes or print statements before making a clean commit.
Git stash is used to save uncommitted changes in a temporary area, allowing you to switch branches or work on something else without committing incomplete work.
Basic Usage
The basic git stash command saves all modified tracked files and staged changes. This does not include untracked files by default.
git stash
This command performs three main actions:
Saves changes: Takes the current working directory state and index and saves it as a new stash entry.
Resets working directory: Reverts the working directory to match the last commit.
Stacks the stash: Stores the saved state on top of the stash stack.
Restoring Changes
To restore the stashed changes, you can use:
git stash pop
This does two things:
Applies the stash: Reapplies the changes to your working directory.
Deletes the stash: Removes the stash entry from the stash stack.
If you want to keep the stash for future use:
git stash apply
This reapplies the changes without deleting the stash entry.
Viewing and Managing Stashes
To see a list of all stash entries:
git stash list
This shows a list like:
stash@{0}: WIP on feature-branch: 1234567 Commit message
stash@{1}: WIP on master: 89abcdef Commit message
Each stash is identified by an index (e.g., stash@{0}) which can be used for other stash commands.
git stash
This command stashes both staged and unstaged changes to tracked files (untracked files are excluded by default).
To apply the last stashed changes back:
git stash pop
This applies the stash and removes it from the stash list.
To apply the stash without removing it:
git stash apply
To see a list of all stashed changes:
git stash list
To remove a specific stash:
git stash drop stash@{index}
To clear all stashes:
git stash clear
2. Varieties of Git Stash
a) Stashing Untracked Files
By default, git stash does not include untracked files. To include them:
git stash -u
Or:
git stash --include-untracked
b) Stashing Ignored Files
To stash even ignored files:
git stash -a
Or:
git stash --all
c) Stashing with a Message
To add a meaningful message to a stash:
git stash push -m "WIP: Refactoring user authentication"
d) Stashing Specific Files
If you only want to stash specific files:
git stash push -m "Partial stash" -- path/to/file
e) Stashing and Switching Branches
If you realize your stashed changes should have been a separate branch, you can create the branch and apply the stash in one step, instead of running git stash and git checkout separately:
git stash branch new-branch stash@{0}
This will create a new branch and apply the stashed changes.
f) Keeping Index Changes
If you want to keep staged files untouched while stashing:
git stash push --keep-index
g) Recovering a Dropped Stash
If you accidentally dropped a stash, it may still be in the reflog:
git fsck --lost-found
Or, check stash history with:
git reflog stash
h) Using Stash for Conflict Resolution
If you're rebasing and hit conflicts, stash helps save your progress:
git stash
# Fix conflicts
# Continue rebase
git stash pop
3. When Not to Use Git Stash
If your work is significant, commit it instead of stashing.
Avoid excessive stashing as it can lead to forgotten changes.
Stashing doesnβt track renamed or deleted files effectively.
Git stash is an essential tool for developers to manage temporary changes efficiently. With the different stash varieties and hacks, you can enhance your workflow and avoid unnecessary commits. Mastering these techniques will save you time and improve your productivity in version control.
Binary insertion sort is a sorting algorithm similar to insertion sort, but instead of using linear search to find the position where the element should be inserted, we use binary search.
Thus, we reduce the number of comparisons for inserting one element from O(N) (Time complexity in Insertion Sort) to O(log N).
Best of two worlds
Binary insertion sort is a combination of insertion sort and binary search.
Insertion sort is a sorting technique that works by finding the correct position of an element in the array and then inserting it into that position. Binary search is a searching technique that works by repeatedly checking the middle of a sorted array to locate an element.
As the complexity of binary search is of logarithmic order, the search step's time complexity also decreases to logarithmic order. The implementation of binary insertion sort is a plain insertion sort program, except that binary search replaces the standard linear search.
How does Binary Insertion Sort work?
Process flow
In binary insertion sort, we divide the array into two subarrays β sorted and unsorted. The first element of the array is in the sorted subarray, and the rest of the elements are in the unsorted one.
We then iterate from the second element to the last element. For the i-th iteration, we make the current element our βkey.β This key is the element that we have to add to our existing sorted subarray.
Example
Consider the array 29, 10, 14, 37, 14
First Pass
i = 1 (key = 10)
Since the first element is considered to be in the sorted subarray, we start from the second element. We then apply binary search on the sorted subarray.
In this scenario, we can see that the middle element of the sorted subarray (29) is greater than the key element 10, so the position of the key element is 0. We then shift the remaining elements one position to the right.
Increment i and move on to the next pass.
Second Pass
i = 2 (key = 14)
Now the key element is 14. We will apply binary search in the sorted array to find the position of the key element.
In this scenario, by applying binary search, we can see the key element should be placed at index 1 (between 10 and 29). We then shift the remaining elements one position to the right.
Third Pass
i = 3 (key = 37)
Now the key element is 37. We will apply binary search in the sorted array to find the position of the key element.
In this scenario, by applying binary search, we can see the key element is already in its correct position.
Fourth Pass
i = 4 (key = 14)
Now the key element is 14. We will apply binary search in the sorted array to find the position of the key element.
In this scenario, by applying binary search, we can see the key element should be placed at index 2 (between 14 and 29). We then shift the remaining elements one position to the right.
Algorithm (sketched in code below)
Iterate the array from the second element to the last element.
Store the current element Arr[i] in a variable key.
Find the position of the element just greater than Arr[i] in the subarray from Arr[0] to Arr[i-1] using binary search. Say this element is at index pos.
Shift all the elements from index pos to i-1 towards the right.
Arr[pos] = key.
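A compact JavaScript sketch of these steps, for illustration:
// Binary insertion sort: insertion sort with the insert position
// located by binary search instead of a linear scan.
function binaryInsertionSort(arr) {
  for (let i = 1; i < arr.length; i++) {
    const key = arr[i];
    // Binary search for the first index in arr[0..i-1] holding a value
    // greater than key (inserting after equal values keeps the sort stable).
    let lo = 0, hi = i;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (arr[mid] <= key) lo = mid + 1;
      else hi = mid;
    }
    // Shift elements from pos (lo) to i-1 one position right, then insert the key.
    for (let j = i; j > lo; j--) arr[j] = arr[j - 1];
    arr[lo] = key;
  }
  return arr;
}

console.log(binaryInsertionSort([29, 10, 14, 37, 14])); // [10, 14, 14, 29, 37]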
Complexity Analysis
Worst Case
For inserting the i-th element at its correct position in the sorted subarray, finding the position (pos) takes O(log i) steps. However, to insert the element, we need to shift all the elements from pos to i-1. This takes i steps in the worst case (when we have to insert at the starting position).
We make a total of N insertions, so the worst-case time complexity of binary insertion sort is O(N^2).
This occurs when the array is initially sorted in descending order.
Best Case
The best case will be when the element is already in its sorted position. In this case, we donβt have to shift any of the elements; we can insert the element in O(1).
But we are using binary search to find the insertion position. Even if the element is already in its sorted position, binary search takes O(log i) steps. Thus, for the i-th element, we make O(log i) operations, so the best-case time complexity is O(N log N).
This occurs when the array is initially sorted in ascending order.
Average Case
For average-case time complexity, we assume that the elements of the array are jumbled. Thus, on average, we need O(i/2) shift steps for inserting the i-th element, so the average time complexity of binary insertion sort is O(N^2).
Space Complexity Analysis
Binary insertion sort is an in-place sorting algorithm. This means that it only requires a constant amount of additional space. We sort the given array by shifting and inserting the elements.
Therefore, the space complexity of this algorithm is O(1) if we use iterative binary search, and O(log N) if we use recursive binary search, because of the O(log N) recursive calls.
Is Binary Insertion Sort a stable algorithm?
It is a stable sorting algorithm: elements with equal values appear in the final array in the same order as in the initial array.
Pros and Cons
Binary insertion sort works efficiently for smaller arrays.
This algorithm also works well for almost-sorted arrays, where the elements are near their position in the sorted array.
However, when the size of the array is large, binary insertion sort doesn't perform well. We can use other sorting algorithms like merge sort or quicksort in such cases.
Making fewer comparisons is also one of the strengths of this sorting algorithm; therefore, it is efficient to use it when the cost of comparison is high.
It's efficient when the cost of comparison between keys is sufficiently high. For example, if we want to sort an array of strings, comparing two strings is relatively expensive.
Bonus Section
Binary Insertion Sort has a quadratic time complexity just as Insertion Sort. Still, it is usually faster than Insertion Sort in practice, which is apparent when comparison takes significantly more time than swapping two elements.
What do Reddit, Discord, Medium, and LinkedIn have in common? They use what's called a skeleton loading screen for their applications. A skeleton screen is essentially a wireframe of the application. The wireframe is a placeholder until the application finally loads.
Rise of the skeleton loader
The term "skeleton screen" was introduced in 2013 by product designer Luke Wroblewski in a blog post about reducing perceived wait time. In that post (lukew.com/ff/entry.asp?1797), he explains how gradually revealing page content turns user attention to the content being loaded, and away from the loading time itself.
Skeleton Loader
Skeleton loading screens improve your application's user experience and make it feel more performant. The skeleton loading screen essentially impersonates the original layout.
This lets the user know what's happening on the screen; the user interprets it as the application booting up and the content loading.
In simplest terms, a skeleton loader is a static or animated placeholder for the information that is still loading. It mimics the structure and look of the entire view.
Why not just a loading spinner?
Instead of showing a loading spinner, we could show a skeleton screen that makes the user see that there is progress happening when launching and navigating the application.
They let the user know that some content is loading and, more importantly, provide an indication of what is loading, whether it's an image, text, card, and so on.
This gives the user the impression that the website is faster because they already know what type of content is loading before it appears. This is referred to as perceived performance.
Skeleton screens don't really make pages load faster. Instead, they are designed to make it feel like pages are loading faster.
When to use?
Use on high-traffic pages where resources take a while to load, like an account dashboard.
When the component contains a good amount of information, such as a list or card.
Could be used in place of a spinner in almost any situation, and can provide a better user experience.
Use when there's more than one element loading at the same time that requires an indicator.
Use when you need to load multiple images at once, a skeleton screen might make a good placeholder. For these pages, consider implementing lazy loading first, which is a similar technique for decreasing perceived load time.
When not to use?
Don't use for long-running processes, e.g., importing or manipulating data (operations in data-intensive applications).
Don't use for fast processes that take less than half a second.
Users still associate video buffering with spinners. Avoid skeleton screens any time a video is loading on your page.
For longer processes (uploads, download, file manipulation ) can use progress bar instead of skeleton loading.
As a replacement for poor performance: If you can further optimize your website to actually load content more quickly, always pursue that first.
To my surprise, I recently learned that Git has a feature called a global ignore. In this blog, I jot down notes on setting up a global .gitignore file.
Git allows you to ignore certain files and directories using a .gitignore file. However, if you frequently work on multiple projects, you might want to configure a global .gitignore file to avoid repeating the same ignore rules across repositories.
Why Use a Global .gitignore?
A global .gitignore file is beneficial when you want to exclude common files across all Git repositories on your system. Some examples include:
OS-specific files (e.g., macOS .DS_Store, Windows Thumbs.db)
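The setup itself is two steps; the file path below is a common convention, not a requirement:
# Create the global ignore file and add some rules
touch ~/.gitignore_global
echo ".DS_Store" >> ~/.gitignore_global
echo "Thumbs.db" >> ~/.gitignore_global

# Tell Git to use it for every repository on this machine
git config --global core.excludesFile ~/.gitignore_global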
To check if Git recognizes your global .gitignore file, run:
git config --global core.excludesFile
It should return the path to your global ignore file, e.g., ~/.gitignore_global.
Managing the Global .gitignore
To update the file, edit ~/.gitignore_global and add/remove rules as needed.
If you ever want to remove the global .gitignore setting, run:
git config --global --unset core.excludesFile
To list all global configurations, use:
git config --global --list
Setting up a global .gitignore file is a simple yet powerful way to maintain cleaner repositories and avoid committing unnecessary files. By following these steps, you can streamline your workflow across multiple Git projects efficiently.
Managing dependencies for small Python scripts has always been a bit of a hassle.
Traditionally, we either install packages globally (not recommended) or create a virtual environment, activate it, and install dependencies manually.
But what if we could run Python scripts like standalone binaries?
Introducing PEP 723 – Inline Script Metadata
PEP 723 (https://peps.python.org/pep-0723/) introduces a new way to specify dependencies directly within a script, making it easier to execute standalone scripts without dealing with external dependency files.
This is particularly useful for quick automation scripts or one-off tasks.
Consider a script that interacts with an API requiring a specific package:
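A sketch of such a script, fetch-data.py, with PEP 723 inline metadata (the endpoint is hypothetical):
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "requests",
# ]
# ///

import requests

# Hypothetical API call for illustration
response = requests.get("https://api.example.com/data")
print(response.json())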
Here, instead of manually creating a requirements.txt or setting up a virtual environment, the dependencies are defined inline. When using uv, it automatically installs the required packages and runs the script just like a binary.
Running the Script as a Third-Party Tool
With uv, executing the script feels like running a compiled binary,
$ uv run fetch-data.py
Reading inline script metadata from: fetch-data.py
Installed dependencies in milliseconds
Behind the scenes, uv creates an isolated environment, ensuring a clean dependency setup without affecting the global Python environment. This allows Python scripts to function as independent tools without any manual dependency management.
Why This Matters
This approach makes Python an even more attractive choice for quick automation tasks, replacing the need for complex setups. It allows scripts to be shared and executed effortlessly, much like compiled executables in other programming environments.
By leveraging uv, we can streamline our workflow and use Python scripts as powerful, self-contained tools without the usual dependency headaches.
The top command in Linux is a powerful utility that provides real-time information about system performance, including CPU usage, memory usage, running processes, and more.
It is an essential tool for system administrators to monitor system health and manage resources effectively.
1. Basic Usage
Simply running top without any arguments displays an interactive screen showing system statistics and a list of running processes:
$ top
2. Understanding the top Output
The top interface is divided into multiple sections:
Header Section
This section provides an overview of the system status, including uptime, load averages, and system resource usage.
Uptime and Load Average – Displays how long the system has been running and the average system load over the last 1, 5, and 15 minutes.
Task Summary – Shows the number of processes in various states:
Running – Processes actively executing on the CPU.
Sleeping – Processes waiting for an event or resource.
Stopped – Processes that have been paused.
Zombie – Processes that have completed execution but still have an entry in the process table. These occur when the parent process has not yet read the exit status of the child process. Zombie processes do not consume system resources but can clutter the process table if not handled properly.
CPU Usage – Breaks down CPU utilization into different categories:
us (User Space) – CPU time spent on user processes.
sy (System Space) – CPU time spent on kernel operations.
id (Idle) – Time when the CPU is not being used.
wa (I/O Wait) – Time spent waiting for I/O operations to complete.
st (Steal Time) – CPU cycles stolen by a hypervisor in a virtualized environment.
Memory Usage – Shows the total, used, free, and available RAM.
Swap Usage – Displays total, used, and free swap memory, which is used when RAM is full.
Process Table
The table below the header lists active processes with details such as:
PID – Process ID, a unique identifier for each process.
USER – The owner of the process.
PR – Priority of the process, affecting its scheduling.
NI – Nice value, which determines how favorable the process scheduling is.
VIRT – The total virtual memory used by the process.
RES – The actual RAM used by the process.
SHR – The shared memory portion.
S – Process state:
R – Running
S – Sleeping
Z – Zombie
T – Stopped
%CPU – The percentage of CPU time used.
%MEM – The percentage of RAM used.
TIME+ – The total CPU time consumed by the process.
COMMAND – The command that started the process.
3. Interactive Commands
While running top, various keyboard shortcuts allow dynamic interaction:
q – Quit top.
h – Display help.
k – Kill a process by entering its PID.
r – Renice a process (change its priority).
z – Toggle color/monochrome mode.
M – Sort by memory usage.
P – Sort by CPU usage.
T – Sort by process runtime.
1 – Toggle CPU usage breakdown for multi-core systems.
u – Filter processes by a specific user.
s – Change the update interval.
4. Command-Line Options
The top command supports various options for customization:
-b (Batch mode): Used for scripting to display output in non-interactive mode.
$ top -b -n 1
-n specifies the number of iterations before exit.
-o FIELD (Sort by a specific field):
$ top -o %CPU
Sorts by CPU usage.
-d SECONDS (Refresh interval):
$ top -d 3
Updates the display every 3 seconds.
-u USERNAME (Show processes for a specific user):
$ top -u john
-p PID (Monitor a specific process):
$ top -p 1234
5. Customizing top Display
Persistent Customization
To save custom settings, press W while running top. This saves the configuration to ~/.toprc.
Changing Column Layout
Press f to toggle the fields displayed.
Press o to change sorting order.
Press X to highlight sorted columns.
6. Alternative to top: htop, btop
For a more user-friendly experience, htop is an alternative:
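A typical way to install them, assuming the packages are available in your distribution's repositories:
# Debian/Ubuntu-based systems
$ sudo apt install htop
$ sudo apt install btop

htop adds color, mouse support, and easier process management, while btop adds rich graphs for CPU, memory, disk, and network usage.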
Caching is an essential technique for improving application performance and reducing the load on databases. However, improper caching strategies can lead to serious issues.
In this blog, we will discuss four common cache problems: Thundering Herd Problem, Cache Penetration, Cache Breakdown, and Cache Crash, along with their causes, consequences, and solutions.
Thundering Herd Problem
What is it?
The Thundering Herd Problem occurs when a large number of keys in the cache expire at the same time. When this happens, all requests bypass the cache and hit the database simultaneously, overwhelming it and causing performance degradation or even a system crash.
Example Scenario
Imagine an e-commerce website where product details are cached for 10 minutes. If all the products' cache expires at the same time, thousands of users sending requests will cause an overwhelming load on the database.
Solutions
Staggered Expiration: Instead of setting a fixed expiration time for all keys, introduce a random expiry variation (see the sketch after this list).
Allow Only Core Business Queries: Limit direct database access only to core business data, while returning stale data or temporary placeholders for less critical data.
Lazy Rebuild Strategy: Instead of all requests querying the database, the first request fetches data and updates the cache while others wait.
Batch Processing: Queue multiple requests and process them in batches to reduce database load.
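A minimal sketch of staggered expiration, assuming a generic cache client with a set(key, value, ttlSeconds) method:
const BASE_TTL = 600; // base expiry: 10 minutes

function setWithJitter(cache, key, value) {
  // Up to 2 minutes of random jitter spreads expirations out,
  // so keys cached together don't all expire together.
  const jitter = Math.floor(Math.random() * 120);
  cache.set(key, value, BASE_TTL + jitter);
}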
Cache Penetration
What is it?
Cache Penetration occurs when requests are made for keys that neither exist in the cache nor in the database. Since these requests always hit the database, they put excessive pressure on the system.
Example Scenario
A malicious user could attempt to query random user IDs that do not exist, forcing the system to repeatedly query the database and skip the cache.
Solutions
Cache Null Values: If a key does not exist in the database, store a null value in the cache to prevent unnecessary database queries (a sketch follows this list).
Use a Bloom Filter: A Bloom filter helps check whether a key exists before querying the database. If the Bloom filter does not contain the key, the request is discarded immediately.
Rate Limiting: Implement request throttling to prevent excessive access to non-existent keys.
Data Prefetching: Predict and load commonly accessed data into the cache before it is needed.
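A sketch of null-value caching, assuming hypothetical cache and db helpers:
async function getUser(cache, db, id) {
  const cached = await cache.get(`user:${id}`);
  if (cached !== undefined) return cached; // cache hit, including cached nulls
  const user = await db.findUser(id);      // may be null for non-existent IDs
  // Cache nulls with a short TTL so legitimate new data can still appear later.
  await cache.set(`user:${id}`, user ?? null, user ? 600 : 60);
  return user;
}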
Cache Breakdown
What is it?
Cache Breakdown is similar to the Thundering Herd Problem, but it occurs specifically when a single hot key (a frequently accessed key) expires. This results in a surge of database queries as all users try to retrieve the same data.
Example Scenario
A social media platform caches trending hashtags. If the cache expires, millions of users will query the same hashtag at once, hitting the database hard.
Solutions
Never Expire Hot Keys: Keep hot keys permanently in the cache unless an update is required.
Preload the Cache: Refresh the cache asynchronously before expiration by setting a background task to update the cache regularly.
Mutex Locking: Ensure only one request updates the cache, while others wait for the update to complete (see the sketch after this list).
Double Buffering: Maintain a secondary cache layer to serve requests while the primary cache is being refreshed.
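A single-flight sketch of mutex locking within one process (across multiple servers a distributed lock, e.g. in Redis, would be needed; cache and loadFn are assumptions):
const inflight = new Map();

async function getHotKey(cache, key, loadFn) {
  const cached = await cache.get(key);
  if (cached !== undefined) return cached;
  if (!inflight.has(key)) {
    // The first caller rebuilds the value; everyone else awaits the same promise.
    inflight.set(key, loadFn().then(async (value) => {
      await cache.set(key, value);
      inflight.delete(key);
      return value;
    }));
  }
  return inflight.get(key);
}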
Cache Crash
What is it?
A Cache Crash occurs when the cache service itself goes down. When this happens, all requests fall back to the database, overloading it and causing severe performance issues.
Example Scenario
If a Redis instance storing session data for a web application crashes, all authentication requests will be forced to hit the database, leading to a potential outage.
Solutions
Cache Clustering: Use a cluster of cache nodes instead of a single instance to ensure high availability.
Persistent Storage for Cache: Enable persistence modes like Redis RDB or AOF to recover data quickly after a crash.
Automatic Failover: Configure automated failover with tools like Redis Sentinel to ensure availability even if a node fails.
Circuit Breaker Mechanism: Prevent the application from directly accessing the database if the cache is unavailable, reducing the impact of a crash.
Caching is a powerful mechanism to improve application performance, but improper strategies can lead to severe bottlenecks. Problems like Thundering Herd, Cache Penetration, Cache Breakdown, and Cache Crash can significantly degrade system reliability if not handled properly.
In this blog, I jot down notes on what a smoke test is, how it got its name, and how to approach it in K6.
The term smoke testing originates from hardware testing, where engineers would power on a circuit or device and check if smoke appeared.
If smoke was detected, it indicated a fundamental issue, and further testing was halted. This concept was later adapted to software engineering.
What is Smoke Testing?
Smoke testing is a subset of test cases executed to verify that the major functionalities of an application work as expected. If a smoke test fails, the build is rejected, preventing further testing of a potentially unstable application. This test helps catch major defects early, saving time and effort.
Key Characteristics
Ensures that the application is not broken in major areas.
Runs quickly and is not exhaustive.
Usually automated as part of a CI/CD pipeline.
Writing a Basic Smoke Test with K6
A basic smoke test using K6 typically checks API endpoints for HTTP 200 responses and acceptable response times.
import http from 'k6/http';
import { check } from 'k6';
export let options = {
vus: 1, // 1 virtual user
iterations: 5, // Runs the test 5 times
};
export default function () {
let res = http.get('https://example.com/api/health');
check(res, {
'is status 200': (r) => r.status === 200,
'response time < 500ms': (r) => r.timings.duration < 500,
});
}
Advanced Smoke Test Example
import http from 'k6/http';
import { check, sleep } from 'k6';
export let options = {
vus: 2, // 2 virtual users
iterations: 10, // Runs the test 10 times
};
export default function () {
let res = http.get('https://example.com/api/login');
check(res, {
'status is 200': (r) => r.status === 200,
'response time < 400ms': (r) => r.timings.duration < 400,
});
sleep(1);
}
Running and Analyzing Results
Execute the test using:
k6 run smoke-test.js
Sample Output
checks...
is status 200
response time < 500ms
If any of the checks fail, K6 will report an error, signaling an issue in the application.
Smoke testing with K6 is an effective way to ensure that key functionalities in your application work as expected. By integrating it into your CI/CD pipeline, you can catch major defects early, improve application stability, and streamline your development workflow.
When running load tests with K6, two fundamental aspects that shape test execution are the number of Virtual Users (VUs) and the test duration. These parameters help simulate realistic user behavior and measure system performance under different load conditions.
In this blog, I jot down notes on virtual users and test duration in the options object. Using these, we can ramp users up and down.
K6 offers multiple ways to define VUs and test duration, primarily through options in the test script or the command line.
Basic VU and Duration Configuration
The simplest way to specify VUs and test duration is by setting them in the options object of your test script.
import http from 'k6/http';
import { sleep } from 'k6';
export const options = {
vus: 10, // Number of virtual users
duration: '30s', // Duration of the test
};
export default function () {
http.get('https://test.k6.io/');
sleep(1);
}
This script runs a load test with 10 virtual users for 30 seconds, making requests to the specified URL.
Specifying VUs and Duration from the Command Line
You can also set the VUs and duration dynamically using command-line arguments without modifying the script.
k6 run --vus 20 --duration 1m script.js
This command runs the test with 20 virtual users for 1 minute.
Ramp Up and Ramp Down with Stages
Instead of a fixed number of VUs, you can simulate user load variations over time using stages. This helps to gradually increase or decrease the load on the system.
export const options = {
stages: [
{ duration: '30s', target: 10 }, // Ramp up to 10 VUs
{ duration: '1m', target: 50 }, // Ramp up to 50 VUs
{ duration: '30s', target: 10 }, // Ramp down to 10 VUs
{ duration: '20s', target: 0 }, // Ramp down to 0 VUs
],
};
This test gradually increases the load, sustains it, and then reduces it, simulating real-world traffic patterns.
Custom Execution Scenarios
For more advanced load testing strategies, K6 supports scenarios, allowing fine-grained control over execution behavior.
Syntax of Custom Execution Scenarios
A scenarios object defines different execution strategies. Each scenario consists of the following keys (see the example after this list):
executor: Defines how the test runs (e.g., ramping-vus, constant-arrival-rate, etc.).
vus: Number of virtual users (for certain executors).
duration: How long the scenario runs.
iterations: Total number of iterations per VU (for certain executors).
stages: Used in ramping-vus to define load variations over time.
rate: Defines the number of iterations per time unit in constant-arrival-rate.
preAllocatedVUs: Number of VUs reserved for the test.
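A sketch of a scenarios block using constant-arrival-rate (the rate and durations are illustrative):
import http from 'k6/http';

export const options = {
  scenarios: {
    steady_requests: {
      executor: 'constant-arrival-rate',
      rate: 30,            // 30 iterations per second, regardless of VU count
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 50, // VUs reserved up front
      maxVUs: 100,         // allow scaling if 50 VUs can't sustain the rate
    },
  },
};

export default function () {
  http.get('https://test.k6.io/');
}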
Different Executors in K6
K6 provides several executors that define how virtual users (VUs) generate load:
shared-iterations – Distributes a fixed number of iterations across multiple VUs.
per-vu-iterations – Each VU runs a specific number of iterations independently.
constant-vus – Maintains a fixed number of VUs for a set duration.
ramping-vus – Increases or decreases the number of VUs over time.
constant-arrival-rate – Ensures a constant number of requests per time unit, independent of VUs.
ramping-arrival-rate – Gradually increases or decreases the request rate over time.
externally-controlled – Allows dynamic control of VUs via an external API.
Go a bit slower so that everyone can understand clearly without feeling rushed.
Provide more basics and examples to make learning easier for beginners.
Spend the first week explaining programming basics so that newcomers don't feel lost.
Teach flowcharting methods to help participants understand the logic behind coding.
Try teaching Scratch as an interactive way to introduce programming concepts.
Offer weekend batches for those who prefer learning on weekends.
Encourage more conversations so that participants can actively engage in discussions.
Create sub-groups to allow participants to collaborate and support each other.
Get "cheerleaders" within the team to make the classes more fun and interactive.
Increase promotion efforts to reach a wider audience and get more participants.
Provide better examples to make concepts easier to grasp.
Conduct more Q&A sessions so participants can ask and clarify their doubts.
Ensure that each participant gets a chance to speak and express their thoughts.
Showing your face in videos can help in building a more personal connection with the learners.
Organize mini-hackathons to provide hands-on experience and encourage practical learning.
Foster more interactions and connections between participants to build a strong learning community.
Encourage participants to write blogs daily to document their learning and share insights.
Motivate participants to give talks in class and other communities to build confidence.
Other Learnings & Suggestions
Avoid creating WhatsApp groups for communication, as the 1024 member limit makes it difficult to manage multiple groups.
Telegram works fine for now, but explore using mailing lists as an alternative for structured discussions.
Mute groups when necessary to prevent unnecessary messages like "Hi, Hello, Good Morning."
Teach participants how to join mailing lists like ChennaiPy and KanchiLUG and guide them on asking questions in forums like Tamil Linux Community.
Show participants how to create a free blog on platforms like dev.to or WordPress to share their learning journey.
Avoid spending too much time explaining everything in-depth, as participants should start coding a small project by the 5th or 6th class.
Present topics as solutions to project ideas or real-world problem statements instead of just theory.
Encourage using names when addressing people, rather than calling them "Sir" or "Madam," to maintain an equal and friendly learning environment.
Zoom is costly, and since only around 50 people complete the training, consider alternatives like Jitsi or Google Meet for better cost-effectiveness.
In our previous blog on K6, we ran a script.js to test an API. As output, we received some metrics in the CLI.
In this blog, we are going to delve deep into understanding metrics in K6.
1. HTTP Request Metrics
http_reqs
Description: Total number of HTTP requests initiated during the test.
Usage: Indicates the volume of traffic generated. A high number of requests can simulate real-world usage patterns.
http_req_duration
Description: Time taken for a request to receive a response (in milliseconds).
Components:
http_req_connecting: Time spent establishing a TCP connection.
http_req_tls_handshaking: Time for completing the TLS handshake.
http_req_waiting (TTFB): Time spent waiting for the first byte from the server.
http_req_sending: Time taken to send the HTTP request.
http_req_receiving: Time spent receiving the response data.
Usage: Identifies performance bottlenecks like slow server responses or network latency.
http_req_failed
Description: Proportion of failed HTTP requests (ratio between 0 and 1).
Usage: Highlights reliability issues. A high failure rate indicates problems with server stability or network errors.
2. VU (Virtual User) Metrics
vus
Description: Number of active Virtual Users at any given time.
Usage: Reflects concurrency level. Helps analyze how the system performs under varying loads.
vus_max
Description: Maximum number of Virtual Users during the test.
Usage: Defines the peak load. Useful for stress testing and capacity planning.
3. Iteration Metrics
iterations
Description: Total number of script iterations executed.
Usage: Measures the test's progress and workload. Useful in endurance (soak) testing to observe long-term stability.
iteration_duration
Description: Time taken to complete one iteration of the script.
Usage: Helps identify performance degradation over time, especially under sustained load.
4. Data Transfer Metrics
data_sent
Description: Total amount of data sent over the network (in bytes).
Usage: Monitors network usage. High data volumes might indicate inefficient request payloads.
data_received
Description: Total data received from the server (in bytes).
Usage: Detects bandwidth usage and helps identify heavy response payloads.
5. Custom Metrics (Optional)
While K6 provides default metrics, you can define custom metrics like Counters, Gauges, Rates, and Trends for specific business logic or technical KPIs.
Example
import { Counter } from 'k6/metrics';
let myCounter = new Counter('my_custom_metric');
export default function () {
myCounter.add(1); // Increment the custom metric
}
Interpreting Metrics for Performance Optimization
Low http_req_duration + High http_reqs = Good scalability.
High http_req_failed = Investigate server errors or timeouts.
High data_sent / data_received = Optimize payloads.
Increasing iteration_duration over time = Possible memory leaks or resource exhaustion.
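These rules of thumb can also be encoded as thresholds, so K6 fails the run automatically when they are violated; a minimal sketch:
export const options = {
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must finish under 500ms
    http_req_failed: ['rate<0.01'],   // less than 1% of requests may fail
  },
};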