Redis is famously known as an in-memory data structure store, often used as a database, cache, and message broker. The simplest and most fundamental data type in Redis is the string. This blog walks through everything you need to know about Redis strings with practical examples.
What Are Redis Strings?
In Redis, a string is a binary-safe sequence of bytes. That means it can contain any kind of data: text, integers, or even serialized objects.
Maximum size: 512 MB
Default behavior: key-value pair storage
Common String Commands
Let's explore key-value operations you can perform on Redis strings using the redis-cli.
1. SET: Assign a Value to a Key
SET user:1:name "Alice"
This sets the key user:1:name to the value "Alice".
2. GET: Retrieve a Value by Key
GET user:1:name
# Output: "Alice"
3. EXISTS: Check if a Key Exists
EXISTS user:1:name
# Output: 1 (true)
4. DEL: Delete a Key
DEL user:1:name
5. SETEX: Set Value with Expiry (TTL)
SETEX session:12345 60 "token_xyz"
This sets session:12345 with value token_xyz that expires in 60 seconds.
6. INCR / DECR: Numeric Operations
SET views:homepage 0
INCR views:homepage
INCR views:homepage
DECR views:homepage
GET views:homepage
# Output: "1"
7. APPEND: Append to Existing String
SET greet "Hello"
APPEND greet ", World!"
GET greet
# Output: "Hello, World!"
8. MSET / MGET: Set or Get Multiple Keys at Once
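MSET sets several keys in one atomic command, and MGET retrieves several values in a single round trip. A quick sketch (the extra key is made up for this example):
MSET user:1:name "Alice" user:1:city "Chennai"
MGET user:1:name user:1:city
# Output: 1) "Alice"  2) "Chennai"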
Imagine you've been using a powerful tool for years to help you build apps faster. Yes, it's Redis: a super-fast database that helps apps remember things temporarily, like logins or shopping cart items. It was free, open, and loved by developers.
But one day, the team behind Redis changed the rules. They said:
"You can still use Redis, but if you're a big cloud company (like Amazon or Google) offering it to others as a service, you need to play by our special rules or pay us."
This change upset many in the tech world. Why?
Because open source means freedom: you can use it, improve it, and even share it with others. Redis's new license in 2024 took away some of that freedom. It wasn't completely closed, but it wasn't truly open either, and it hit big cloud providers like AWS and Microsoft the hardest.
What Happened Next?
Developers and tech companies didn't like the new rules. So they said,
"Fine, we'll make our own open version of Redis."
That's how a new project called Valkey was born: a fork (copy) of Redis that stayed truly open source.
Fast forward to May 2025: Redis listened. They said:
"We're bringing back the open-source spirit. Redis version 8.0 will be under a proper open-source license again: AGPLv3."
What's AGPLv3?
It's a type of license that says:
You can use, change, and share Redis freely.
If you run a modified Redis on a website or cloud service, you must also share your changes with the world (this still hurts AWS and Azure).
This keeps things fair: no more companies secretly benefiting from Redis without giving back.
What Did Redis Say?
Rowan Trollope, Redis's CEO, explained why they had changed the license in the first place:
"Big cloud companies were making money off Redis but not helping us or the open-source community."
But now, by switching to AGPLv3, Redis is balancing two things:
Protecting their work from being misused
And staying truly open-source
Why This Is Good News
Developers can continue using Redis freely.
The community can contribute and improve Redis.
Fair rules apply to everyone, even giant tech companies.
Redis has come full circle. After a detour into more restricted territory, it's back where it belongs: in the hands of everyone. This shows the power of the developer community, and why open source isn't just about code; it's about collaboration, fairness, and freedom.
const observer = new MutationObserver(() => {
const heading = document.querySelector(".title-shortlink-container");
if (heading) {
// Create the button element
const button = document.createElement("button");
button.textContent = "Copy Text";
button.style.marginLeft = "10px"; // Add some spacing from the container
// Insert the button next to the .title-shortlink-container
heading.insertAdjacentElement("afterend", button);
// Add an event listener to copy the text
button.addEventListener("click", () => {
const textToCopy = heading.textContent.trim(); // Get only the container's text
navigator.clipboard.writeText(textToCopy).then(() => {
alert("Text copied: " + textToCopy);
}).catch(err => {
console.error("Failed to copy text: ", err);
});
});
observer.disconnect(); // Stop observing once the element is found
}
});
// Observe the document for dynamic changes
observer.observe(document.body, { childList: true, subtree: true });
Code Explanation:
document.querySelector(".title-shortlink-container") finds the element that holds the short URL and assigns it to the heading variable.
If heading is not null, we create the button and apply its styles.
The button is placed next to the link using heading.insertAdjacentElement("afterend", button).
A click handler is registered on the button with addEventListener.
The link text is read from heading.textContent and stored in textToCopy.
navigator.clipboard.writeText(textToCopy) copies that text to the clipboard.
An alert then confirms what was copied.
If copying fails, the error is logged in the catch block.
Problem Faced (Dynamically Loaded Content)
First I tried DOMContentLoaded, but it does not work on the wiki because the element is injected after the page loads.
Then I tried polling with setInterval for a fixed period, which worked only partially.
Finally I switched to MutationObserver, which works reliably on all pages.
A namespace is a mechanism for logically partitioning and isolating resources within a single cluster, allowing multiple teams or projects to share the same cluster without conflicts.
To create a namespace named mygroup with a manifest file, open the file in an editor:
$ vim mygroup.yml
apiVersion: v1
kind: Namespace
metadata:
  name: mygroup
Save and exit (:x), then apply the manifest:
$ kubectl apply -f mygroup.yml
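As an aside, not covered in the steps above, the same namespace can also be created directly with kubectl, without writing a manifest:
$ kubectl create namespace mygroup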
To list all namespaces:
$ kubectl get namespaces
To switch the current context to the mygroup namespace:
$ kubectl config set-context --current --namespace=mygroup
To delete the mygroup namespace:
$ kubectl delete namespace mygroup
Spike testing is a type of performance testing that evaluates how a system responds to sudden, extreme increases in load. Unlike stress testing, which gradually increases the load, spike testing simulates abrupt surges in traffic to identify system vulnerabilities, such as crashes, slow response times, and resource exhaustion.
In this blog, we will explore spike testing in detail, covering its importance, methodology, and full implementation using K6.
Why Perform Spike Testing?
Spike testing helps you
Determine system stability under unexpected traffic surges.
Identify bottlenecks that arise due to rapid load increases.
Assess auto-scaling capabilities of cloud-based infrastructures.
Measure response time degradation during high-demand spikes.
Ensure system recovery after the sudden load disappears.
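Below is a minimal spike-test sketch using K6 stages; the endpoint and the user counts are placeholders, not values from a real system. The load sits at a small baseline, jumps abruptly to a large number of virtual users, holds briefly, then drops back so that recovery can be observed.
import http from 'k6/http';
import { check, sleep } from 'k6';
export const options = {
  stages: [
    { duration: '30s', target: 10 },   // normal baseline load
    { duration: '10s', target: 500 },  // sudden spike
    { duration: '1m', target: 500 },   // hold the spike
    { duration: '10s', target: 10 },   // abrupt drop
    { duration: '30s', target: 0 },    // recovery window
  ],
};
export default function () {
  const res = http.get('https://example.com/api/health'); // placeholder endpoint
  check(res, {
    'status is 200': (r) => r.status === 200,
  });
  sleep(1);
}
During and after the spike, the metrics below are the ones to watch.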
http_req_duration: Measures response time impact.
vus_max: Peak virtual users during the spike.
errors: Percentage of failed requests due to overload.
Best Practices for Spike Testing
Monitor application logs and database performance during the test.
Use auto-scaling mechanisms for cloud-based environments.
Combine spike tests with stress testing for better insights.
Analyze error rates and recovery time to ensure system stability.
Spike testing is crucial for ensuring application stability under sudden, unpredictable traffic surges. Using K6, we can simulate spikes in both requests per second and concurrent users to identify bottlenecks before they impact real users.
Git is an essential tool for version control, and one of its underrated but powerful features is git stash. It allows developers to temporarily save their uncommitted changes without committing them, enabling a smooth workflow when switching branches or handling urgent bug fixes.
In this blog, we will explore git stash, its varieties, and some clever hacks to make the most of it.
1. Understanding Git Stash
Git stash allows developers to temporarily save changes made to the working directory, enabling them to switch contexts without having to commit incomplete work. This is particularly useful when you need to switch branches quickly or when you are interrupted by an urgent task.
When you run git stash, Git takes the uncommitted changes in your working directory (both staged and unstaged) and saves them on a stack called the "stash stack". This action reverts your working directory to the last committed state while safely storing the changes for later use.
How It Works
Git saves the current state of the working directory and the index (staging area) as a stash.
The stash includes modifications to tracked files, newly created files, and changes in the index.
Untracked files are not stashed by default unless specified.
Stashes are stored in a stack, with the most recent stash on top.
Common Use Cases
Context Switching: When you are working on a feature and need to switch branches for an urgent bug fix.
Code Review Feedback: If you receive feedback and need to make changes but are in the middle of another task.
Cleanup Before Commit: To stash temporary debugging changes or print statements before making a clean commit.
Git stash is used to save uncommitted changes in a temporary area, allowing you to switch branches or work on something else without committing incomplete work.
Basic Usage
The basic git stash command saves all modified tracked files and staged changes. This does not include untracked files by default.
git stash
This command performs three main actions
Saves changes: Takes the current working directory state and index and saves it as a new stash entry.
Resets working directory: Reverts the working directory to match the last commit.
Stacks the stash: Stores the saved state on top of the stash stack.
Restoring Changes
To restore the stashed changes, you can use
git stash pop
This does two things
Applies the stash: Reapplies the changes to your working directory.
Deletes the stash: Removes the stash entry from the stash stack.
If you want to keep the stash for future use
git stash apply
This reapplies the changes without deleting the stash entry.
Viewing and Managing Stashes
To see a list of all stash entries
git stash list
This shows a list like
stash@{0}: WIP on feature-branch: 1234567 Commit message
stash@{1}: WIP on master: 89abcdef Commit message
Each stash is identified by an index (e.g., stash@{0}) which can be used for other stash commands.
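For example, to reapply the second stash from the list above while leaving it on the stack:
git stash apply stash@{1}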
git stash
This command stashes modified tracked files (both staged and unstaged); untracked files are not included by default.
To apply the last stashed changes back
git stash pop
This applies the stash and removes it from the stash list.
To apply the stash without removing it
git stash apply
To see a list of all stashed changes
git stash list
To remove a specific stash
git stash drop stash@{index}
To clear all stashes
git stash clear
2. Varieties of Git Stash
a) Stashing Untracked Files
By default, git stash does not include untracked files. To include them
git stash -u
Or:
git stash --include-untracked
b) Stashing Ignored Files
To stash even ignored files
git stash -a
Or:
git stash --all
c) Stashing with a Message
To add a meaningful message to a stash
git stash push -m "WIP: Refactoring user authentication"
d) Stashing Specific Files
If you only want to stash specific files
git stash push -m "Partial stash" -- path/to/file
e) Stashing and Switching Branches
If you realize your stashed changes really belong on their own branch, you can create the branch, switch to it, and apply the stash in one step instead of running git stash and git checkout separately
git stash branch new-branch stash@{0}
This creates a new branch from the commit the stash was based on, checks it out, and applies the stashed changes.
3. Clever Git Stash Hacks
a) Keeping Index Changes
If you want to keep staged files untouched while stashing
git stash push --keep-index
b) Recovering a Dropped Stash
If you accidentally dropped a stash, the commit behind it may still be recoverable
git fsck --lost-found
Or, check stash history with:
git reflog stash
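If git fsck reports a dangling commit that turns out to be your lost stash, it can be reapplied directly by its hash (the hash below is only a placeholder):
git stash apply 1a2b3c4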
c) Using Stash for Conflict Resolution
If you're rebasing and hit conflicts, stash helps you save your progress
git stash
# Fix conflicts
# Continue rebase
git stash pop
4. When Not to Use Git Stash
If your work is significant, commit it instead of stashing.
Avoid excessive stashing as it can lead to forgotten changes.
Stashing doesn't track renamed or deleted files effectively.
Git stash is an essential tool for developers to manage temporary changes efficiently. With the different stash varieties and hacks, you can enhance your workflow and avoid unnecessary commits. Mastering these techniques will save you time and improve your productivity in version control.
In this blog, I jot down notes on what smoke testing is, how it got its name, and how to approach it in K6.
The term smoke testing originates from hardware testing, where engineers would power on a circuit or device and check if smoke appeared.
If smoke was detected, it indicated a fundamental issue, and further testing was halted. This concept was later adapted to software engineering.
What is Smoke Testing?
Smoke testing is a subset of test cases executed to verify that the major functionalities of an application work as expected. If a smoke test fails, the build is rejected, preventing further testing of a potentially unstable application. This test helps catch major defects early, saving time and effort.
Key Characteristics
Ensures that the application is not broken in major areas.
Runs quickly and is not exhaustive.
Usually automated as part of a CI/CD pipeline.
Writing a Basic Smoke Test with K6
A basic smoke test using K6 typically checks API endpoints for HTTP 200 responses and acceptable response times.
import http from 'k6/http';
import { check } from 'k6';
export let options = {
vus: 1, // 1 virtual user
iterations: 5, // Runs the test 5 times
};
export default function () {
let res = http.get('https://example.com/api/health');
check(res, {
'is status 200': (r) => r.status === 200,
'response time < 500ms': (r) => r.timings.duration < 500,
});
}
Advanced Smoke Test Example
import http from 'k6/http';
import { check, sleep } from 'k6';
export let options = {
vus: 2, // 2 virtual users
iterations: 10, // Runs the test 10 times
};
export default function () {
let res = http.get('https://example.com/api/login');
check(res, {
'status is 200': (r) => r.status === 200,
'response time < 400ms': (r) => r.timings.duration < 400,
});
sleep(1);
}
Running and Analyzing Results
Execute the test using
k6 run smoke-test.js
Sample Output
checks...
is status 200
response time < 500ms
If any of the checks fail, K6 will report an error, signaling an issue in the application.
Smoke testing with K6 is an effective way to ensure that key functionalities in your application work as expected. By integrating it into your CI/CD pipeline, you can catch major defects early, improve application stability, and streamline your development workflow.
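As a rough sketch of that CI/CD integration, the GitHub Actions job below is one assumed setup (the runner, image, and file names are placeholders to adapt): it checks out the repository and runs the same smoke-test.js through the grafana/k6 Docker image.
name: smoke-test
on: [push]
jobs:
  smoke:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run K6 smoke test
        run: docker run --rm -v ${{ github.workspace }}:/scripts grafana/k6 run /scripts/smoke-test.js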
When running load tests with K6, two fundamental aspects that shape test execution are the number of Virtual Users (VUs) and the test duration. These parameters help simulate realistic user behavior and measure system performance under different load conditions.
In this blog, I jot down notes on virtual users and test duration in the options object, and on how to use them to ramp users up and down.
K6 offers multiple ways to define VUs and test duration, primarily through options in the test script or the command line.
Basic VU and Duration Configuration
The simplest way to specify VUs and test duration is by setting them in the options object of your test script.
import http from 'k6/http';
import { sleep } from 'k6';
export const options = {
vus: 10, // Number of virtual users
duration: '30s', // Duration of the test
};
export default function () {
http.get('https://test.k6.io/');
sleep(1);
}
This script runs a load test with 10 virtual users for 30 seconds, making requests to the specified URL.
Specifying VUs and Duration from the Command Line
You can also set the VUs and duration dynamically using command-line arguments without modifying the script.
k6 run --vus 20 --duration 1m script.js
This command runs the test with 20 virtual users for 1 minute.
Ramp Up and Ramp Down with Stages
Instead of a fixed number of VUs, you can simulate user load variations over time using stages. This helps to gradually increase or decrease the load on the system.
export const options = {
stages: [
{ duration: '30s', target: 10 }, // Ramp up to 10 VUs
{ duration: '1m', target: 50 }, // Ramp up to 50 VUs
{ duration: '30s', target: 10 }, // Ramp down to 10 VUs
{ duration: '20s', target: 0 }, // Ramp down to 0 VUs
],
};
This test gradually increases the load, sustains it, and then reduces it, simulating real-world traffic patterns.
Custom Execution Scenarios
For more advanced load testing strategies, K6 supports scenarios, allowing fine-grained control over execution behavior.
Syntax of Custom Execution Scenarios
A scenarios object defines different execution strategies. Each scenario consists of
executor: Defines how the test runs (e.g., ramping-vus, constant-arrival-rate, etc.).
vus: Number of virtual users (for certain executors).
duration: How long the scenario runs.
iterations: Total number of iterations per VU (for certain executors).
stages: Used in ramping-vus to define load variations over time.
rate: Defines the number of iterations per time unit in constant-arrival-rate.
preAllocatedVUs: Number of VUs reserved for the test.
Different Executors in K6
K6 provides several executors that define how virtual users (VUs) generate load
shared-iterations: Distributes a fixed number of iterations across multiple VUs.
per-vu-iterations: Each VU runs a specific number of iterations independently.
constant-vus: Maintains a fixed number of VUs for a set duration.
ramping-vus: Increases or decreases the number of VUs over time.
constant-arrival-rate: Ensures a constant number of requests per time unit, independent of VUs.
ramping-arrival-rate: Gradually increases or decreases the request rate over time.
externally-controlled: Allows dynamic control of VUs via an external API.
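To make the scenarios syntax concrete, here is a small sketch using the constant-arrival-rate executor; the endpoint and the numbers are illustrative rather than recommendations.
import http from 'k6/http';
export const options = {
  scenarios: {
    steady_request_rate: {
      executor: 'constant-arrival-rate',
      rate: 50,              // start 50 iterations per timeUnit
      timeUnit: '1s',        // i.e. roughly 50 requests per second
      duration: '1m',        // run the scenario for one minute
      preAllocatedVUs: 20,   // VUs reserved before the test starts
      maxVUs: 100,           // extra VUs K6 may add to sustain the rate
    },
  },
};
export default function () {
  http.get('https://test.k6.io/'); // same demo endpoint used earlier
}
Because the arrival rate is fixed, K6 adds VUs (up to maxVUs) whenever responses slow down, which makes this executor useful for testing throughput targets rather than concurrency levels.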
Go a bit slower so that everyone can understand clearly without feeling rushed.
Provide more basics and examples to make learning easier for beginners.
Spend the first week explaining programming basics so that newcomers don't feel lost.
Teach flowcharting methods to help participants understand the logic behind coding.
Try teaching Scratch as an interactive way to introduce programming concepts.
Offer weekend batches for those who prefer learning on weekends.
Encourage more conversations so that participants can actively engage in discussions.
Create sub-groups to allow participants to collaborate and support each other.
Get "cheerleaders" within the team to make the classes more fun and interactive.
Increase promotion efforts to reach a wider audience and get more participants.
Provide better examples to make concepts easier to grasp.
Conduct more Q&A sessions so participants can ask and clarify their doubts.
Ensure that each participant gets a chance to speak and express their thoughts.
Showing your face in videos can help in building a more personal connection with the learners.
Organize mini-hackathons to provide hands-on experience and encourage practical learning.
Foster more interactions and connections between participants to build a strong learning community.
Encourage participants to write blogs daily to document their learning and share insights.
Motivate participants to give talks in class and other communities to build confidence.
Other Learnings & Suggestions
Avoid creating WhatsApp groups for communication, as the 1024 member limit makes it difficult to manage multiple groups.
Telegram works fine for now, but explore using mailing lists as an alternative for structured discussions.
Mute groups when necessary to prevent unnecessary messages like "Hi, Hello, Good Morning."
Teach participants how to join mailing lists like ChennaiPy and KanchiLUG and guide them on asking questions in forums like Tamil Linux Community.
Show participants how to create a free blog on platforms like dev.to or WordPress to share their learning journey.
Avoid spending too much time explaining everything in-depth, as participants should start coding a small project by the 5th or 6th class.
Present topics as solutions to project ideas or real-world problem statements instead of just theory.
Encourage using names when addressing people, rather than calling them "Sir" or "Madam," to maintain an equal and friendly learning environment.
Zoom is costly, and since only around 50 people complete the training, consider alternatives like Jitsi or Google Meet for better cost-effectiveness.
In our previous blog on K6, we ran a script.js to test an API and received a set of metrics as output in the CLI.
In this blog, we are going to delve deeper into understanding metrics in K6.
1. HTTP Request Metrics
http_reqs
Description: Total number of HTTP requests initiated during the test.
Usage: Indicates the volume of traffic generated. A high number of requests can simulate real-world usage patterns.
http_req_duration
Description: Time taken for a request to receive a response (in milliseconds).
Components:
http_req_connecting: Time spent establishing a TCP connection.
http_req_tls_handshaking: Time for completing the TLS handshake.
http_req_waiting (TTFB): Time spent waiting for the first byte from the server.
http_req_sending: Time taken to send the HTTP request.
http_req_receiving: Time spent receiving the response data.
Usage: Identifies performance bottlenecks like slow server responses or network latency.
http_req_failed
Description: Proportion of failed HTTP requests (ratio between 0 and 1).
Usage: Highlights reliability issues. A high failure rate indicates problems with server stability or network errors.
2. VU (Virtual User) Metrics
vus
Description: Number of active Virtual Users at any given time.
Usage: Reflects concurrency level. Helps analyze how the system performs under varying loads.
vus_max
Description: Maximum number of Virtual Users during the test.
Usage: Defines the peak load. Useful for stress testing and capacity planning.
3. Iteration Metrics
iterations
Description: Total number of script iterations executed.
Usage: Measures the test's progress and workload. Useful in endurance (soak) testing to observe long-term stability.
iteration_duration
Description: Time taken to complete one iteration of the script.
Usage: Helps identify performance degradation over time, especially under sustained load.
4. Data Transfer Metrics
data_sent
Description: Total amount of data sent over the network (in bytes).
Usage: Monitors network usage. High data volumes might indicate inefficient request payloads.
data_received
Description: Total data received from the server (in bytes).
Usage: Detects bandwidth usage and helps identify heavy response payloads.
5. Custom Metrics (Optional)
While K6 provides default metrics, you can define custom metrics like Counters, Gauges, Rates, and Trends for specific business logic or technical KPIs.
Example
import { Counter } from 'k6/metrics';
let myCounter = new Counter('my_custom_metric');
export default function () {
myCounter.add(1); // Increment the custom metric
}
Interpreting Metrics for Performance Optimization
Low http_req_duration + High http_reqs = Good scalability.
High http_req_failed = Investigate server errors or timeouts.
High data_sent / data_received = Optimize payloads.
Increasing iteration_duration over time = Possible memory leaks or resource exhaustion.
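These rules of thumb can be turned into automatic pass/fail criteria with thresholds; the limits below are illustrative and should be tuned to your own targets.
import http from 'k6/http';
export const options = {
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must finish within 500ms
    http_req_failed: ['rate<0.01'],   // fewer than 1% of requests may fail
  },
};
export default function () {
  http.get('https://test.k6.io/');
}
If a threshold is crossed, K6 marks the run as failed, which is handy when the test runs inside a CI pipeline.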
In the evolving Python ecosystem, pyproject.toml has emerged as a pivotal configuration file, streamlining project management and enhancing interoperability across tools.
In this blog, I delve deep into the significance, structure, and usage of pyproject.toml.
What is pyproject.toml?
Introduced in PEP 518, pyproject.toml is a standardized file format designed to specify build system requirements and manage project configurations. Its primary goal is to provide a unified, tool-agnostic approach to project setup, reducing the clutter of multiple configuration files.
Why Use pyproject.toml?
Standardization: Offers a consistent way to define project metadata, dependencies, and build tools.
Interoperability: Supported by various tools like Poetry, Flit, Black, isort, and even pip.
Simplification: Consolidates multiple configuration files (like setup.cfg, requirements.txt) into one.
Future-Proofing: As Python evolves, pyproject.toml is becoming the de facto standard for project configurations, ensuring compatibility with future tools and practices.
Structure of pyproject.toml
The pyproject.toml file uses the TOML format, which stands for "Tom's Obvious, Minimal Language." TOML is designed to be easy to read and write while being simple enough for parsing by tools.
1. [build-system]
Defines the build system requirements. Essential for tools like pip to know how to build the project.
requires: Lists the build dependencies required to build the project. These packages are installed in an isolated environment before the build process starts.
build-backend: Specifies the backend responsible for building the project. Common backends include:
setuptools.build_meta (for traditional Python projects)
flit_core.buildapi (for projects managed with Flit)
poetry.core.masonry.api (for Poetry projects)
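For a setuptools-based project, a minimal [build-system] table might look like this (the version pin is illustrative):
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"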
2. [tool]
This section is used by third-party tools to store their configuration. Each tool manages its own sub-table under [tool].
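For example, formatter and import-sorter settings can sit side by side under their own sub-tables (the values shown are common defaults, not requirements):
[tool.black]
line-length = 88

[tool.isort]
profile = "black"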