Introduction
In this blog, we will walk through the process of deploying a scalable AWS infrastructure using Terraform. The setup includes:
A VPC with public and private subnets
An Internet Gateway for public access
Application Load Balancers (ALBs) for distributing traffic
Target Groups and EC2 instances for handling incoming requests
By the end of this guide, you'll have a highly available setup with proper networking, security, and load balancing.
Step 1: Creating a VPC with Public and Private Subnets
The first step is to define our Virtual Private Cloud (VPC) with four subnets (two public, two private) spread across multiple Availability Zones.
Terraform Code: vpc.tf
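A minimal sketch of what vpc.tf can contain; the CIDR blocks and Availability Zones below are assumptions:

resource "aws_vpc" "main_vpc" {
  cidr_block = "10.0.0.0/16"
}

# Two public subnets in different Availability Zones
resource "aws_subnet" "public_subnet_1" {
  vpc_id                  = aws_vpc.main_vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "public_subnet_2" {
  vpc_id                  = aws_vpc.main_vpc.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = true
}

# Two private subnets for backend instances
resource "aws_subnet" "private_subnet_1" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "private_subnet_2" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = "10.0.4.0/24"
  availability_zone = "us-east-1b"
}

# Internet Gateway for public access
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main_vpc.id
}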
This ensures redundancy and distributes traffic across different subnets.
Step 4: Creating Target Groups for EC2 Instances
Each ALB needs target groups to route traffic to EC2 instances.
Terraform Code: target_groups.tf
resource "aws_lb_target_group" "public_tg" {
name = "public-tg"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main_vpc.id
}
resource "aws_lb_target_group" "private_tg" {
name = "private-tg"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main_vpc.id
}
These target groups allow ALBs to forward requests to backend EC2 instances.
Step 5: Launching EC2 Instances
Finally, we deploy EC2 instances and register them with the target groups.
Terraform Code: ec2.tf
resource "aws_instance" "public_instance" {
ami = "ami-0abcdef1234567890" # Replace with a valid AMI ID
instance_type = "t2.micro"
subnet_id = aws_subnet.public_subnet_1.id
}
resource "aws_instance" "private_instance" {
ami = "ami-0abcdef1234567890" # Replace with a valid AMI ID
instance_type = "t2.micro"
subnet_id = aws_subnet.private_subnet_1.id
}
On their own, these aws_instance resources are not yet registered with the target groups; each instance must be attached explicitly.
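A sketch of that registration, using aws_lb_target_group_attachment with the resources defined above:

resource "aws_lb_target_group_attachment" "public_attach" {
  target_group_arn = aws_lb_target_group.public_tg.arn
  target_id        = aws_instance.public_instance.id
  port             = 80
}

resource "aws_lb_target_group_attachment" "private_attach" {
  target_group_arn = aws_lb_target_group.private_tg.arn
  target_id        = aws_instance.private_instance.id
  port             = 80
}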
Final Step: Terraform Apply!
Run the following command to deploy everything:
terraform init
terraform apply -auto-approve
Once completed, you'll get ALB DNS names, which you can use to access your deployed infrastructure.
Conclusion
This guide covered how to deploy a highly available AWS infrastructure using Terraform, including VPC, subnets, ALBs, security groups, target groups, and EC2 instances. This setup ensures a secure and scalable architecture.
OpenCV stands for Open Source Computer Vision. It is a library used for computer vision and machine learning tasks. It provides many functions to process images and videos.
Computer Vision
Computer vision is the process of extracting information from images or videos. For example, it can be used for object detection, face recognition, and more.
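For instance, a few lines of Python show the flavor of the library (assuming OpenCV is installed via pip install opencv-python and a local photo.jpg exists):

import cv2

# Load an image from disk and convert it to grayscale
img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect edges, a common first step in many vision pipelines
edges = cv2.Canny(gray, 100, 200)
cv2.imwrite("edges.jpg", edges)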
Databases power the backbone of modern applications, and PostgreSQL is one of the most powerful open-source relational databases, trusted by top companies worldwide. Whether you're a beginner or a developer looking to sharpen your database skills, this FREE bootcamp will take you from Zero to Hero in PostgreSQL!
This intensive hands-on bootcamp is designed for developers, DBAs, and tech enthusiasts who want to master PostgreSQL from scratch and apply it in real-world scenarios.
Who Should Attend?
Beginners eager to learn databases
Developers & Engineers working with PostgreSQL
Anyone looking to optimize their SQL skills
Date: March 22 and 23
Time: Will be finalized later.
Location: Online
Cost: 100% FREE
Spike testing is a type of performance testing that evaluates how a system responds to sudden, extreme increases in load. Unlike stress testing, which gradually increases the load, spike testing simulates abrupt surges in traffic to identify system vulnerabilities, such as crashes, slow response times, and resource exhaustion.
In this blog, we will explore spike testing in detail, covering its importance, methodology, and full implementation using K6.
Why Perform Spike Testing?
Spike testing helps you:
Determine system stability under unexpected traffic surges.
Identify bottlenecks that arise due to rapid load increases.
Assess auto-scaling capabilities of cloud-based infrastructures.
Measure response time degradation during high-demand spikes.
Ensure system recovery after the sudden load disappears.
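A minimal K6 spike-test sketch looks like this (the endpoint and stage timings are placeholders):

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '30s', target: 20 },  // baseline load
    { duration: '10s', target: 400 }, // sudden spike
    { duration: '30s', target: 400 }, // hold the spike
    { duration: '10s', target: 20 },  // abrupt drop
    { duration: '30s', target: 0 },   // observe recovery
  ],
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Key metrics to watch during the spike: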
http_req_duration - Measures response time impact.
vus_max - Peak virtual users during the spike.
errors - Percentage of failed requests due to overload.
Best Practices for Spike Testing
Monitor application logs and database performance during the test.
Use auto-scaling mechanisms for cloud-based environments.
Combine spike tests with stress testing for better insights.
Analyze error rates and recovery time to ensure system stability.
Spike testing is crucial for ensuring application stability under sudden, unpredictable traffic surges. Using K6, we can simulate spikes in both requests per second and concurrent users to identify bottlenecks before they impact real users.
Stress testing is a critical aspect of performance testing that evaluates how a system performs under extreme loads. Unlike load testing, which simulates expected user traffic, stress testing pushes a system beyond its limits to identify breaking points and measure recovery capabilities.
In this blog, we will explore stress testing using K6, an open-source load testing tool, with detailed explanations and full examples to help you implement stress testing effectively.
Why Stress Testing?
Stress testing helps you:
Identify the maximum capacity of your system.
Detect potential failures and bottlenecks.
Measure system stability and recovery under high loads.
Ensure infrastructure can handle unexpected spikes in traffic.
K6 provides various executors to simulate different traffic patterns. For stress testing, we mainly use:
ramping-vus - Gradually increases virtual users to a high level.
constant-vus - Maintains a fixed high number of virtual users.
Spike pattern - Simulates a sudden surge in traffic, modeled with short ramping-vus stages.
Example 1: Basic Stress Test with Ramping VUs
This script gradually increases the number of virtual users, holds a peak load, and then reduces it.
import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '1m', target: 100 }, // Ramp up to 100 users in 1 min
    { duration: '3m', target: 100 }, // Stay at 100 users for 3 min
    { duration: '1m', target: 0 },   // Ramp down to 0 users
  ],
};

export default function () {
  let res = http.get('https://test-api.example.com');
  sleep(1);
}
Explanation
The test starts with 0 users and ramps up to 100 users in 1 minute.
Holds 100 users for 3 minutes.
Gradually reduces load to 0 users.
The sleep(1) function helps simulate real user behavior between requests.
Example 2: Constant High Load Test
This test maintains a consistently high number of virtual users.
import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  vus: 200,       // 200 virtual users
  duration: '5m', // Run the test for 5 minutes
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}
Explanation
200 virtual users are constantly hitting the endpoint for 5 minutes.
Helps evaluate system performance under sustained high traffic.
Example 3: Spike Testing (Sudden Traffic Surge)
This test simulates a sudden spike in traffic, followed by a drop.
import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '10s', target: 10 },  // Start with 10 users
    { duration: '10s', target: 500 }, // Spike to 500 users
    { duration: '10s', target: 10 },  // Drop back to 10 users
  ],
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}
Explanation
Starts with 10 users.
Spikes suddenly to 500 users in 10 seconds.
Drops back to 10 users.
Helps determine how the system handles sudden surges in traffic.
Analyzing Test Results
After running the tests, K6 prints detailed statistics such as http_req_duration (response times), http_reqs (throughput), and http_req_failed (error rate).
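To turn those statistics into automated pass/fail criteria, thresholds can be declared in the test options; a minimal sketch with arbitrary limits:

import http from 'k6/http';

export let options = {
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must finish under 500 ms
    http_req_failed: ['rate<0.01'],   // less than 1% of requests may fail
  },
};

export default function () {
  http.get('https://test-api.example.com');
}

If a threshold is crossed, K6 marks the run as failed, which is handy in CI pipelines.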
Stress testing is vital to ensure application stability and scalability. Using K6, we can simulate different stress scenarios like ramping load, constant high load, and spikes to identify system weaknesses before they affect users.
Load testing is essential to evaluate how a system behaves under expected and peak loads. Traditionally, we rely on metrics like requests per second (RPS), response time, and error rates. However, an insightful approach called Average Load Testing has been discussed recently. This blog explores that concept in detail, providing practical examples to help you apply it effectively.
Understanding Average Load Testing
Average Load Testing focuses on simulating real-world load patterns rather than traditional peak load tests. Instead of sending a fixed number of requests per second, this approach:
Generates requests based on the average concurrency over time.
More accurately reflects real-world traffic patterns.
Helps identify performance bottlenecks in a realistic manner.
Setting Up Load Testing with K6
K6 is an excellent tool for implementing Average Load Testing. Let's go through practical examples of setting up such tests.
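As a first example, here is a sketch of such a test using the ramping-arrival-rate executor; the endpoint and the exact stage splits are assumptions:

import http from 'k6/http';

export let options = {
  scenarios: {
    average_load: {
      executor: 'ramping-arrival-rate',
      startRate: 5,        // begin at 5 requests per second
      timeUnit: '1s',
      preAllocatedVUs: 50, // VUs reserved up front
      maxVUs: 200,         // upper bound if the target rate needs more
      stages: [
        { duration: '2m', target: 25 },
        { duration: '2m', target: 60 },
        { duration: '2m', target: 100 }, // reach 100 requests/sec at 6 minutes
      ],
    },
  },
};

export default function () {
  let res = http.get('https://test-api.example.com');
  console.log(`Response time: ${res.timings.duration}ms`);
}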
The ramping-arrival-rate executor gradually increases requests per second over time.
The stages array defines a progression from 5 to 100 requests/sec over 6 minutes.
Logs response times to help analyze system performance.
Example 3: Load Testing with Multiple Endpoints
In real applications, multiple endpoints are often tested simultaneously. Here's how to test different API routes:
import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  scenarios: {
    multiple_endpoints: {
      executor: 'constant-arrival-rate',
      rate: 15, // 15 requests per second
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 30,
      maxVUs: 60,
    },
  },
};

export default function () {
  let urls = [
    'https://test-api.example.com/users',
    'https://test-api.example.com/orders',
    'https://test-api.example.com/products',
  ];
  let res = http.get(urls[Math.floor(Math.random() * urls.length)]);
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}
Explanation
The script randomly selects an API endpoint to test different routes.
Uses check to ensure status codes are 200.
Logs response times for deeper insights.
Analyzing Results
To analyze test results, you can store logs or metrics in a database or monitoring tool and visualize trends over time. Some popular options include:
Prometheus for time-series data storage.
InfluxDB for handling large-scale performance metrics.
ELK Stack (Elasticsearch, Logstash, Kibana) for log-based analysis.
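K6 can stream metrics to such backends directly from the command line; for example (output integrations vary by K6 version):

k6 run --out influxdb=http://localhost:8086/k6 script.js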
Average Load Testing provides a more realistic way to measure system performance. By leveraging K6, you can create flexible, real-world simulations to optimize your applications effectively.
Are you tired of slow, clunky GUI-based file managers? Do you want lightning-fast navigation and total control over your files, right from your terminal? Meet Superfile, the ultimate tool for power users who love efficiency and speed.
In this blog, we'll take you on a deep dive into Superfile's features, commands, and shortcuts, transforming you into a file management ninja!
Why Choose Superfile?
Superfile isn't just another file manager; it's a game-changer.
Here's why:
Blazing Fast - No unnecessary UI lag, just pure efficiency.
Keyboard-Driven - Forget the mouse, master navigation with powerful keybindings.
Multi-Panel Support - Work with multiple directories simultaneously.
Smart Search & Sorting - Instantly locate and organize files.
Built-in File Preview & Metadata Display - See what you need without opening files.
Highly Customizable - Tailor it to fit your workflow perfectly.
Installation
Getting started is easy! Install Superfile using:
# For Linux (Debian-based)
wget -qO- https://superfile.netlify.app/install.sh | bash
# For macOS (via Homebrew)
brew install superfile
# For Windows (via Scoop)
scoop install superfile
Once installed, launch it with:
spf
Boom! You're ready to roll.
Essential Commands & Shortcuts
General Operations
Launch Superfile: spf
Exit: Press q or Esc
Help Menu: ?
Toggle Footer Panel: F
File & Folder Navigation
New File Panel: n
Close File Panel: w
Toggle File Preview: f
Next Panel: Tab or Shift + l
Sidebar Panel: s
File & Folder Management
Create File/Folder: Ctrl + n
Rename: Ctrl + r
Copy: Ctrl + c
Cut: Ctrl + x
Paste: Ctrl + v
Delete: Ctrl + d
Copy Path: Ctrl + p
Search & Selection
Search: /
Select Files: v
Select All: Shift + a
Compression & Extraction
Extract Zip: Ctrl + e
Compress to Zip: Ctrl + a
Advanced Power Moves
Open Terminal Here: Shift + t
Open in Editor: e
Toggle Hidden Files: .
Pro Tip: Use Shift + p to pin frequently accessed folders for even quicker access!
Customizing Superfile
Want to make Superfile truly yours? You can customize it easily by editing its config file.
Superfile is the Swiss Army knife of terminal-based file managers. Whether you're a developer, system admin, or just someone who loves a fast, efficient workflow, Superfile will revolutionize the way you manage files.
Ready to supercharge your productivity? Install Superfile today and take control like never before!
The following IAM policies use condition keys to create tag-based restrictions.
Before you use tags to control access to your AWS resources, you must understand how AWS grants access. AWS is composed of collections of resources. An Amazon EC2 instance is a resource. An Amazon S3 bucket is a resource. You can use the AWS API, the AWS CLI, or the AWS Management Console to perform an operation, such as creating a bucket in Amazon S3. When you do, you send a request for that operation. Your request specifies an action, a resource, a principal entity (user or role), a principal account, and any necessary request information.
You can then create an IAM policy that allows or denies access to a resource based on that resource's tag. In that policy, you can use tag condition keys to control access to any of the following:
Resource - Control access to AWS service resources based on the tags on those resources. To do this, use the aws:ResourceTag/key-name condition key to determine whether to allow access to the resource based on the tags that are attached to the resource.
ResourceTag condition key
Use the aws:ResourceTag/tag-key condition key to compare the tag key-value pair that's specified in the IAM policy with the key-value pair that's attached to the AWS resource. For more information, see Controlling access to AWS resources.
You can use this condition key with the global aws:ResourceTag version and AWS services, such as ec2:ResourceTag. For more information, see Actions, resources, and condition keys for AWS services.
The following IAM policy allows users to start, stop, and terminate instances that carry the test application tag.
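A sketch of such a policy, assuming the tag key is application and the required value is test:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/application": "test"
        }
      }
    }
  ]
}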
Create the policy and attach it to a user or role.
We created two instances: one with the application tag and one without.
You can see that the tagged instance can perform the Start and Stop actions via the IAM resource tag condition.
On the non-tagged instance, we are not able to perform the same actions.
String condition operators let you construct Condition elements that restrict access based on comparing a key to a string value.
StringEquals - Exact matching, case sensitive
StringNotEquals - Negated matching
StringEqualsIgnoreCase - Exact matching, ignoring case
StringNotEqualsIgnoreCase - Negated matching, ignoring case
StringLike - Case-sensitive matching. The values can include multi-character match wildcards (*) and single-character match wildcards (?) anywhere in the string. You must specify wildcards to achieve partial string matches.
Note
If a key contains multiple values, StringLike can be qualified with set operators: ForAllValues:StringLike and ForAnyValue:StringLike.
StringNotLike - Negated case-sensitive matching. The values can include multi-character match wildcards (*) or single-character match wildcards (?) anywhere in the string.
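For instance, a statement using StringLike to match a family of tag values with a wildcard (the project key and web-* pattern are illustrative):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:StartInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringLike": {
          "aws:ResourceTag/project": "web-*"
        }
      }
    }
  ]
}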
To fetch an S3 bucket's storage size in the console, we have to open each bucket individually and check under Metrics to see its size. To get every bucket name with its storage size in one go, use the script below:
#!/bin/bash
# Collect every bucket name (third column of `aws s3 ls`)
s3list=$(aws s3 ls | awk '{print $3}')
for s3dir in $s3list
do
  echo "$s3dir"
  # Summarize the bucket's contents and keep only the total size line
  aws s3 ls "s3://$s3dir" --recursive --human-readable --summarize | grep "Total Size"
done
Hey everyone! Today, we had an exciting Linux installation session at our college. We expected many to do a full Linux installation, but instead, we set up dual boot on 10+ machines!
Topics Covered:
Syed Jafer - FOSS, GLUGs, and open-source communities
Salman - Why FOSS matters & Linux Commands
Dhanasekar - Linux and DevOps
Guhan - GNU and free software
Challenges We Faced
BitLocker Encryption - Had to disable BitLocker on some laptops
BIOS/UEFI Problems - Secure Boot and boot-order changes needed
GRUB Issues - Windows not showing up; required boot-repair
Large Language Model (LLM) based AI agents represent a new paradigm in artificial intelligence. Unlike traditional software agents, these systems leverage the powerful capabilities of LLMs to understand, reason, and interact with their environment in more sophisticated ways. This guide will introduce you to the basics of LLM agents and their think-act-observe cycle.
What is an LLM Agent?
An LLM agent is a system that uses a large language model as its core reasoning engine to:
Process natural language instructions
Make decisions based on context and goals
Generate human-like responses and actions
Interact with external tools and APIs
Learn from interactions and feedback
Think of an LLM agent as an AI assistant who can understand, respond, and take actions in the digital world, like searching the web, writing code, or analyzing data.
The Think-Act-Observe Cycle in LLM Agents
Observe (Input Processing)
LLM agents observe their environment through:
Direct user instructions and queries
Context from previous conversations
Data from connected tools and APIs
System prompts and constraints
Environmental feedback
Think (LLM Processing)
The thinking phase for LLM agents involves:
Parsing and understanding input context
Reasoning about the task and requirements
Planning necessary steps to achieve goals
Selecting appropriate tools or actions
Generating natural language responses
The LLM is the "brain," using its trained knowledge to process information and make decisions.
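To make the cycle concrete, here is a highly simplified Python sketch; llm() and run_tool() are hypothetical stand-ins for a real model call and real tool integrations:

def llm(prompt):
    """Hypothetical call to a language model; returns its text reply."""
    raise NotImplementedError

def run_tool(name, argument):
    """Hypothetical dispatcher that runs a named tool (search, code, ...)."""
    raise NotImplementedError

def agent(goal, max_steps=5):
    observation = goal  # Observe: start from the user's instruction
    for _ in range(max_steps):
        # Think: the LLM reasons over the goal and the latest observation
        decision = llm(f"Goal: {goal}\nObservation: {observation}\n"
                       "Reply 'ACT <tool> <arg>' or 'ANSWER <text>'.")
        if decision.startswith("ANSWER"):
            return decision[len("ANSWER "):]
        _, tool, arg = decision.split(" ", 2)  # Act: invoke the chosen tool
        observation = run_tool(tool, arg)      # Observe: feed the result back
    return "Step limit reached without a final answer."

Real agent frameworks add memory, richer action formats, and error handling, but the underlying loop is this small.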
IndexedDB is a powerful client-side database API for storing structured data in browsers. However, its API is complex, requiring transactions, object stores, and cursors to manage data. LocalBase simplifies IndexedDB by providing an intuitive, promise-based API.
In this blog, we'll explore LocalBase, its features, and how to use it effectively in web applications.
What is LocalBase?
LocalBase is an easy-to-use JavaScript library that simplifies IndexedDB interactions. It provides a syntax similar to Firestore, making it ideal for developers familiar with Firebase.
Key Features
Promise-based API
Simple CRUD operations
No need for manual transaction handling
Works seamlessly in modern browsers
Installation
You can install LocalBase via npm or use it directly in a script tag:
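For example (the db and users names below are placeholders):

# Install via npm
npm install localbase --save

Then, in your application code:

import Localbase from 'localbase'

let db = new Localbase('db')

// Create: add a document to a collection
db.collection('users').add({ id: 1, name: 'Bill' })

// Read: fetch every document in the collection
db.collection('users').get().then(users => {
  console.log(users)
})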
Since LocalBase doesn't support native where queries, you need to filter manually.
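A sketch of that manual filtering, assuming each document carries a hypothetical role field:

db.collection('users').get().then(users => {
  // LocalBase has no where(), so filter the returned array in memory
  const admins = users.filter(user => user.role === 'admin')
  console.log(admins)
})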
3. Handling Transactions
LocalBase handles transactions internally, so you don't need to worry about opening and closing them. However, you should use .then() to ensure operations complete before the next action.
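For example, chaining operations so each one completes before the next begins:

db.collection('users')
  .add({ id: 2, name: 'Ada' })
  .then(() => db.collection('users').doc({ id: 2 }).update({ name: 'Ada Lovelace' }))
  .then(() => db.collection('users').get())
  .then(users => console.log(users))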
The Web Storage API is a set of mechanisms that enable browsers to store key-value pairs. Before HTML5, application data had to be stored in cookies and included in every server request. Web storage is intended to be far more user-friendly than cookies.
Web storage is more secure, and large amounts of data can be stored locally, without affecting website performance.
There are 2 types of web storage,
Local Storage
Session Storage
We already have cookies. Why additional objects?
Unlike cookies, web storage objects are not sent to the server with each request. Because of that, we can store much more. Most modern browsers allow at least 5 megabytes of data (or more) and have settings to configure that.
Also unlike cookies, the server can't manipulate storage objects via HTTP headers; everything's done in JavaScript. The storage is bound to the origin (domain/protocol/port triplet). That is, different protocols or subdomains get different storage objects, and they can't access each other's data.
In this guide, you will learn/refresh about LocalStorage.
LocalStorage
The localStorage property of the window (browser window object) interface allows you to access a Storage object for the document's origin; the stored data is saved across browser sessions.
Data is kept in local storage for a long time, with no expiration date. It could live for one day, one week, or even one year, as per the developer's preference; data in local storage is maintained even if the browser is closed.
Local storage only stores strings. So, if you intend to store objects, lists, or arrays, you must convert them into a string using JSON.stringify().
Local storage is available via the window.localStorage property.
What's interesting about them is that the data survives a page refresh (for sessionStorage) and even a full browser restart (for localStorage).
Functionalities
// setItem normal strings
window.localStorage.setItem("name", "goku");
// getItem
const name = window.localStorage.getItem("name");
console.log("name from localstorage, "+name);
// Storing an object without JSON.stringify (it gets coerced to "[object Object]")
const data = {
  "commodity": "apple",
  "price": 43
};
window.localStorage.setItem('commodity', data);
var result = window.localStorage.getItem('commodity');
console.log("Retrieved data without jsonified, " + result);
// Storing an object after converting to a JSON string.
var jsonifiedString = JSON.stringify(data);
window.localStorage.setItem('commodity', jsonifiedString);
var result = window.localStorage.getItem('commodity');
console.log("Retrieved data after jsonified, " + result);
// remove item
window.localStorage.removeItem("commodity");
var result = window.localStorage.getItem('commodity');
console.log("Data after removing the key "+ result);
//length
console.log("length of local storage " + window.localStorage.length);
// clear
window.localStorage.clear();
console.log("length of local storage - after clear " + window.localStorage.length);
When to use Local Storage
Data stored in Local Storage can be easily accessed by third parties.
So it's important that sensitive data is never stored in Local Storage.
Local Storage can help in storing temporary data before it is pushed to the server.
Always clear local storage once the operation is completed.
The majority of local storage's drawbacks aren't really significant. You may still not use it, but your app will run a little slower and you'll experience a tiny developer inconvenience. Security, however, is distinct. Knowing and understanding the security model of local storage is crucial, since it will have a significant impact on your website in ways you might not have anticipated.
Local storage is also inherently insecure. Anyone who stores sensitive information in local storage, such as session data, user information, credit card information (even momentarily!), or anything else you wouldn't want shared publicly on social media, is doing it incorrectly.
Local storage in the browser was never intended to be a secure store. It was intended to be a straightforward key/value store for strings only that programmers could use to build somewhat more complicated single-page apps.
General Preventions
For example, if we are using third-party JavaScript libraries and they are injected with scripts that extract the storage objects, our storage data won't be secure anymore. Therefore it's not recommended to save sensitive data such as:
Username/Password
Credit card info
JWT tokens
API keys
Personal info
Session ids
Do not use the same origin for multiple web applications. Instead, use subdomains, since otherwise the storage will be shared with all of them. Each subdomain gets a unique localStorage of its own, and subdomain instances can't access each other's data.
Once data is stored in Local Storage, developers have no control over it until the user clears it. If you want the data to be removed once the session ends, use sessionStorage.
Validate, encode, and escape data read from browser storage.
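A small sketch of such a defensive read, treating anything in storage as untrusted input:

function readFromStorage(key) {
  const raw = window.localStorage.getItem(key);
  if (raw === null) return null;
  try {
    return JSON.parse(raw); // parse, never eval(), storage contents
  } catch (e) {
    return null; // malformed data is treated as missing
  }
}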
Git is an essential tool for version control, and one of its underrated but powerful features is git stash. It allows developers to temporarily save their uncommitted changes without committing them, enabling a smooth workflow when switching branches or handling urgent bug fixes.
In this blog, we will explore git stash, its varieties, and some clever hacks to make the most of it.
1. Understanding Git Stash
Git stash allows developers to temporarily save changes made to the working directory, enabling them to switch contexts without having to commit incomplete work. This is particularly useful when you need to switch branches quickly or when you are interrupted by an urgent task.
When you run git stash, Git takes the uncommitted changes in your working directory (both staged and unstaged) and saves them on a stack called the "stash stack". This action reverts your working directory to the last committed state while safely storing the changes for later use.
How It Works
Git saves the current state of the working directory and the index (staging area) as a stash.
The stash includes modifications to tracked files, newly created files, and changes in the index.
Untracked files are not stashed by default unless specified.
Stashes are stored in a stack, with the most recent stash on top.
Common Use Cases
Context Switching: When you are working on a feature and need to switch branches for an urgent bug fix (a typical flow is sketched after this list).
Code Review Feedback: If you receive feedback and need to make changes but are in the middle of another task.
Cleanup Before Commit: To stash temporary debugging changes or print statements before making a clean commit.
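That context-switching flow looks like this (branch names are illustrative):

# Save work-in-progress on the feature branch
git stash push -m "WIP: half-finished feature"

# Switch away and handle the urgent fix
git checkout hotfix-branch
# ...fix, commit...

# Come back and restore the saved work
git checkout feature-branch
git stash pop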
Git stash is used to save uncommitted changes in a temporary area, allowing you to switch branches or work on something else without committing incomplete work.
Basic Usage
The basic git stash command saves all modified tracked files and staged changes. This does not include untracked files by default.
git stash
This command performs three main actions
Saves changes: Takes the current working directory state and index and saves it as a new stash entry.
Resets working directory: Reverts the working directory to match the last commit.
Stacks the stash: Stores the saved state on top of the stash stack.
Restoring Changes
To restore the stashed changes, you can use
git stash pop
This does two things
Applies the stash: Reapplies the changes to your working directory.
Deletes the stash: Removes the stash entry from the stash stack.
If you want to keep the stash for future use
git stash apply
This reapplies the changes without deleting the stash entry.
Viewing and Managing Stashes
To see a list of all stash entries
git stash list
This shows a list like
stash@{0}: WIP on feature-branch: 1234567 Commit message
stash@{1}: WIP on master: 89abcdef Commit message
Each stash is identified by an index (e.g., stash@{0}) which can be used for other stash commands.
git stash
This command stashes staged and unstaged changes to tracked files; untracked files need the -u flag, covered below.
To apply the last stashed changes back
git stash pop
This applies the stash and removes it from the stash list.
To apply the stash without removing it
git stash apply
To see a list of all stashed changes
git stash list
To remove a specific stash
git stash drop stash@{index}
To clear all stashes
git stash clear
2. Varieties of Git Stash
a) Stashing Untracked Files
By default, git stash does not include untracked files. To include them
git stash -u
Or:
git stash --include-untracked
b) Stashing Ignored Files
To stash even ignored files
git stash -a
Or:
git stash --all
c) Stashing with a Message
To add a meaningful message to a stash
git stash push -m "WIP: Refactoring user authentication"
d) Stashing Specific Files
If you only want to stash specific files
git stash push -m "Partial stash" -- path/to/file
e) Stashing and Switching Branches
Instead of running git stash and git checkout separately, you can do it in one step. If you realize your stash should have been a separate branch:
git stash branch new-branch stash@{0}
This will create a new branch and apply the stashed changes.
3. Git Stash Hacks
a) Keeping Index Changes
If you want to keep staged files untouched while stashing
git stash push --keep-index
b) Recovering a Dropped Stash
If you accidentally dropped a stash, it may still be in the reflog
git fsck --lost-found
Or, check stash history with:
git reflog stash
c) Using Stash for Conflict Resolution
If you're rebasing and hit conflicts, stash helps in saving progress
git stash
# Fix conflicts
# Continue rebase
git stash pop
4. When Not to Use Git Stash
If your work is significant, commit it instead of stashing.
Avoid excessive stashing as it can lead to forgotten changes.
Stashing doesn't track renamed or deleted files effectively.
Git stash is an essential tool for developers to manage temporary changes efficiently. With the different stash varieties and hacks, you can enhance your workflow and avoid unnecessary commits. Mastering these techniques will save you time and improve your productivity in version control.