In Kubernetes, a namespace is a mechanism for logically partitioning and isolating resources within a single cluster, allowing multiple teams or projects to share the same cluster without conflicts.
Creating a namespace mygroup via a manifest file:

$ vim mygroup.yml

apiVersion: v1
kind: Namespace
metadata:
  name: mygroup

Save and exit (:x in vim), then apply the manifest:

$ kubectl apply -f mygroup.yml
To list all namespaces:
$ kubectl get namespaces
To switch the current context to the mygroup namespace:
$ kubectl config set-context --current --namespace=mygroup
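To verify which namespace the current context now uses (a quick sanity check):

$ kubectl config view --minify | grep namespace: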
To delete the namespace mygroup:
$ kubectl delete namespace mygroup
Spike testing is a type of performance testing that evaluates how a system responds to sudden, extreme increases in load. Unlike stress testing, which gradually increases the load, spike testing simulates abrupt surges in traffic to identify system vulnerabilities, such as crashes, slow response times, and resource exhaustion.
In this blog, we will explore spike testing in detail, covering its importance, methodology, and full implementation using K6.
Why Perform Spike Testing?
Spike testing helps you:
Determine system stability under unexpected traffic surges.
Identify bottlenecks that arise due to rapid load increases.
Assess auto-scaling capabilities of cloud-based infrastructures.
Measure response time degradation during high-demand spikes.
Ensure system recovery after the sudden load disappears.
Key metrics to monitor during a spike test:
http_req_duration: Measures response time impact.
vus_max: Peak number of virtual users during the spike.
errors: Percentage of requests that failed due to overload.
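Below is a minimal spike-test sketch in K6; the endpoint and stage targets are illustrative, not taken from a real system:

import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '10s', target: 10 },   // normal baseline load
    { duration: '10s', target: 500 },  // sudden spike
    { duration: '30s', target: 500 },  // hold the spike
    { duration: '10s', target: 0 },    // drop to zero and watch recovery
  ],
};

export default function () {
  const res = http.get('https://example.com/api/health');
  check(res, {
    'status is 200': (r) => r.status === 200,
  });
  sleep(1);
}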
Best Practices for Spike Testing
Monitor application logs and database performance during the test.
Use auto-scaling mechanisms for cloud-based environments.
Combine spike tests with stress testing for better insights.
Analyze error rates and recovery time to ensure system stability.
Spike testing is crucial for ensuring application stability under sudden, unpredictable traffic surges. Using K6, we can simulate spikes in both requests per second and concurrent users to identify bottlenecks before they impact real users.
Git is an essential tool for version control, and one of its underrated but powerful features is git stash. It allows developers to temporarily save their uncommitted changes without committing them, enabling a smooth workflow when switching branches or handling urgent bug fixes.
In this blog, we will explore git stash, its varieties, and some clever hacks to make the most of it.
1. Understanding Git Stash
Git stash allows developers to temporarily save changes made to the working directory, enabling them to switch contexts without having to commit incomplete work. This is particularly useful when you need to switch branches quickly or when you are interrupted by an urgent task.
When you run git stash, Git takes the uncommitted changes in your working directory (both staged and unstaged) and saves them on a stack called the "stash stack". This action reverts your working directory to the last committed state while safely storing the changes for later use.
How It Works
Git saves the current state of the working directory and the index (staging area) as a stash.
The stash includes modifications to tracked files, newly created files, and changes in the index.
Untracked files are not stashed by default unless specified.
Stashes are stored in a stack, with the most recent stash on top.
Common Use Cases
Context Switching: When you are working on a feature and need to switch branches for an urgent bug fix.
Code Review Feedback: If you receive feedback and need to make changes but are in the middle of another task.
Cleanup Before Commit: To stash temporary debugging changes or print statements before making a clean commit.
Git stash is used to save uncommitted changes in a temporary area, allowing you to switch branches or work on something else without committing incomplete work.
Basic Usage
The basic git stash command saves all modified tracked files and staged changes. This does not include untracked files by default.
git stash
This command performs three main actions:
Saves changes: Takes the current working directory state and index and saves it as a new stash entry.
Resets working directory: Reverts the working directory to match the last commit.
Stacks the stash: Stores the saved state on top of the stash stack.
Restoring Changes
To restore the stashed changes, you can use:
git stash pop
This does two things:
Applies the stash: Reapplies the changes to your working directory.
Deletes the stash: Removes the stash entry from the stash stack.
If you want to keep the stash for future use:
git stash apply
This reapplies the changes without deleting the stash entry.
Viewing and Managing Stashes
To see a list of all stash entries:
git stash list
This shows a list like:
stash@{0}: WIP on feature-branch: 1234567 Commit message
stash@{1}: WIP on master: 89abcdef Commit message
Each stash is identified by an index (e.g., stash@{0}), which can be used in other stash commands.
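For example, to apply a specific stash instead of the most recent one:

git stash apply stash@{1}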
git stash
This command stashes staged and unstaged changes to tracked files; untracked files are excluded unless you pass -u.
To apply the last stashed changes back:
git stash pop
This applies the stash and removes it from the stash list.
To apply the stash without removing it:
git stash apply
To see a list of all stashed changes:
git stash list
To remove a specific stash:
git stash drop stash@{index}
To clear all stashes:
git stash clear
2. Varieties of Git Stash
a) Stashing Untracked Files
By default, git stash does not include untracked files. To include them
git stash -u
Or:
git stash --include-untracked
b) Stashing Ignored Files
To stash even ignored files
git stash -a
Or:
git stash --all
c) Stashing with a Message
To add a meaningful message to a stash
git stash push -m "WIP: Refactoring user authentication"
d) Stashing Specific Files
If you only want to stash specific files
git stash push -m "Partial stash" -- path/to/file
e) Stashing and Switching Branches
If you realize your stashed work should live on its own branch, you can create the branch and apply the stash in one step:
git stash branch new-branch stash@{0}
This creates a new branch from the commit the stash was based on, applies the stashed changes, and drops the stash if it applies cleanly.
3. Git Stash Hacks
a) Keeping Index Changes
If you want to stash your changes while leaving what's already staged intact in the index:
git stash push --keep-index
b) Recovering a Dropped Stash
If you accidentally dropped a stash, it may still be recoverable:
git fsck --lost-found
Or, check stash history with:
git reflog stash
c) Using Stash for Conflict Resolution
If you're rebasing and hit conflicts, stashing helps you save progress:
git stash
# Fix conflicts
# Continue rebase
git stash pop
4. When Not to Use Git Stash
If your work is significant, commit it instead of stashing.
Avoid excessive stashing as it can lead to forgotten changes.
Stashing doesn't track renamed or deleted files effectively.
Git stash is an essential tool for developers to manage temporary changes efficiently. With the different stash varieties and hacks, you can enhance your workflow and avoid unnecessary commits. Mastering these techniques will save you time and improve your productivity in version control.
In this blog, I jot down notes on what a smoke test is, how it got its name, and how to approach it in K6.
The term smoke testing originates from hardware testing, where engineers would power on a circuit or device and check if smoke appeared.
If smoke was detected, it indicated a fundamental issue, and further testing was halted. This concept was later adapted to software engineering.
What is Smoke Testing?
Smoke testing is a subset of test cases executed to verify that the major functionalities of an application work as expected. If a smoke test fails, the build is rejected, preventing further testing of a potentially unstable application. This test helps catch major defects early, saving time and effort.
Key Characteristics
Ensures that the application is not broken in major areas.
Runs quickly and is not exhaustive.
Usually automated as part of a CI/CD pipeline.
Writing a Basic Smoke Test with K6
A basic smoke test using K6 typically checks API endpoints for HTTP 200 responses and acceptable response times.
import http from 'k6/http';
import { check } from 'k6';
export let options = {
  vus: 1,        // 1 virtual user
  iterations: 5, // Runs the test 5 times
};

export default function () {
  let res = http.get('https://example.com/api/health');
  check(res, {
    'is status 200': (r) => r.status === 200,
    'response time < 500ms': (r) => r.timings.duration < 500,
  });
}
Advanced Smoke Test Example
import http from 'k6/http';
import { check, sleep } from 'k6';
export let options = {
  vus: 2,         // 2 virtual users
  iterations: 10, // Runs the test 10 times
};

export default function () {
  let res = http.get('https://example.com/api/login');
  check(res, {
    'status is 200': (r) => r.status === 200,
    'response time < 400ms': (r) => r.timings.duration < 400,
  });
  sleep(1);
}
Running and Analyzing Results
Execute the test using
k6 run smoke-test.js
Sample Output
checks...
✓ is status 200
✓ response time < 500ms
If any of the checks fail, K6 will report an error, signaling an issue in the application.
Smoke testing with K6 is an effective way to ensure that key functionalities in your application work as expected. By integrating it into your CI/CD pipeline, you can catch major defects early, improve application stability, and streamline your development workflow.
When running load tests with K6, two fundamental aspects that shape test execution are the number of Virtual Users (VUs) and the test duration. These parameters help simulate realistic user behavior and measure system performance under different load conditions.
In this blog, I jot down notes on setting virtual users and test duration in the options object, and on using them to ramp users up and down.
K6 offers multiple ways to define VUs and test duration, primarily through options in the test script or the command line.
Basic VU and Duration Configuration
The simplest way to specify VUs and test duration is by setting them in the options object of your test script.
import http from 'k6/http';
import { sleep } from 'k6';
export const options = {
  vus: 10,         // Number of virtual users
  duration: '30s', // Duration of the test
};

export default function () {
  http.get('https://test.k6.io/');
  sleep(1);
}
This script runs a load test with 10 virtual users for 30 seconds, making requests to the specified URL.
Specifying VUs and Duration from the Command Line
You can also set the VUs and duration dynamically using command-line arguments without modifying the script.
k6 run --vus 20 --duration 1m script.js
This command runs the test with 20 virtual users for 1 minute.
Ramp Up and Ramp Down with Stages
Instead of a fixed number of VUs, you can simulate user load variations over time using stages. This helps to gradually increase or decrease the load on the system.
export const options = {
  stages: [
    { duration: '30s', target: 10 }, // Ramp up to 10 VUs
    { duration: '1m', target: 50 },  // Ramp up to 50 VUs
    { duration: '30s', target: 10 }, // Ramp down to 10 VUs
    { duration: '20s', target: 0 },  // Ramp down to 0 VUs
  ],
};
This test gradually increases the load, sustains it, and then reduces it, simulating real-world traffic patterns.
Custom Execution Scenarios
For more advanced load testing strategies, K6 supports scenarios, allowing fine-grained control over execution behavior.
Syntax of Custom Execution Scenarios
A scenarios object defines different execution strategies; a configuration sketch follows this list. Each scenario consists of:
executor: Defines how the test runs (e.g., ramping-vus, constant-arrival-rate, etc.).
vus: Number of virtual users (for certain executors).
duration: How long the scenario runs.
iterations: Total number of iterations per VU (for certain executors).
stages: Used in ramping-vus to define load variations over time.
rate: Defines the number of iterations per time unit in constant-arrival-rate.
preAllocatedVUs: Number of VUs reserved for the test.
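Here is a minimal sketch of a scenarios object using the constant-arrival-rate executor (the scenario name and values are illustrative):

export const options = {
  scenarios: {
    steady_requests: {
      executor: 'constant-arrival-rate',
      rate: 50,             // 50 iterations per timeUnit
      timeUnit: '1s',
      duration: '1m',
      preAllocatedVUs: 20,  // VUs reserved before the test starts
      maxVUs: 100,          // upper bound if more VUs are needed to sustain the rate
    },
  },
};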
Different Executors in K6
K6 provides several executors that define how virtual users (VUs) generate load:
shared-iterations: Distributes a fixed number of iterations across multiple VUs.
per-vu-iterations: Each VU runs a specific number of iterations independently.
constant-vus: Maintains a fixed number of VUs for a set duration.
ramping-vus: Increases or decreases the number of VUs over time.
constant-arrival-rate: Ensures a constant number of requests per time unit, independent of VUs.
ramping-arrival-rate: Gradually increases or decreases the request rate over time.
externally-controlled: Allows dynamic control of VUs via an external API.
Go a bit slower so that everyone can understand clearly without feeling rushed.
Provide more basics and examples to make learning easier for beginners.
Spend the first week explaining programming basics so that newcomers don't feel lost.
Teach flowcharting methods to help participants understand the logic behind coding.
Try teaching Scratch as an interactive way to introduce programming concepts.
Offer weekend batches for those who prefer learning on weekends.
Encourage more conversations so that participants can actively engage in discussions.
Create sub-groups to allow participants to collaborate and support each other.
Get "cheerleaders" within the team to make the classes more fun and interactive.
Increase promotion efforts to reach a wider audience and get more participants.
Provide better examples to make concepts easier to grasp.
Conduct more Q&A sessions so participants can ask and clarify their doubts.
Ensure that each participant gets a chance to speak and express their thoughts.
Showing your face in videos can help in building a more personal connection with the learners.
Organize mini-hackathons to provide hands-on experience and encourage practical learning.
Foster more interactions and connections between participants to build a strong learning community.
Encourage participants to write blogs daily to document their learning and share insights.
Motivate participants to give talks in class and other communities to build confidence.
Other Learnings & Suggestions
Avoid creating WhatsApp groups for communication, as the 1024 member limit makes it difficult to manage multiple groups.
Telegram works fine for now, but explore using mailing lists as an alternative for structured discussions.
Mute groups when necessary to prevent unnecessary messages like "Hi, Hello, Good Morning."
Teach participants how to join mailing lists like ChennaiPy and KanchiLUG and guide them on asking questions in forums like Tamil Linux Community.
Show participants how to create a free blog on platforms like dev.to or WordPress to share their learning journey.
Avoid spending too much time explaining everything in-depth, as participants should start coding a small project by the 5th or 6th class.
Present topics as solutions to project ideas or real-world problem statements instead of just theory.
Encourage using names when addressing people, rather than calling them "Sir" or "Madam," to maintain an equal and friendly learning environment.
Zoom is costly, and since only around 50 people complete the training, consider alternatives like Jitsi or Google Meet for better cost-effectiveness.
In our previous blog on K6, we ran script.js to test an API. As output, we received some metrics in the CLI.
In this blog, we are going to delve deeper into understanding metrics in K6.
1. HTTP Request Metrics
http_reqs
Description: Total number of HTTP requests initiated during the test.
Usage: Indicates the volume of traffic generated. A high number of requests can simulate real-world usage patterns.
http_req_duration
Description: Time taken for a request to receive a response (in milliseconds).
Components:
http_req_connecting: Time spent establishing a TCP connection.
http_req_tls_handshaking: Time for completing the TLS handshake.
http_req_waiting (TTFB): Time spent waiting for the first byte from the server.
http_req_sending: Time taken to send the HTTP request.
http_req_receiving: Time spent receiving the response data.
Usage: Identifies performance bottlenecks like slow server responses or network latency.
http_req_failed
Description: Proportion of failed HTTP requests (ratio between 0 and 1).
Usage: Highlights reliability issues. A high failure rate indicates problems with server stability or network errors.
2. VU (Virtual User) Metrics
vus
Description: Number of active Virtual Users at any given time.
Usage: Reflects concurrency level. Helps analyze how the system performs under varying loads.
vus_max
Description: Maximum number of Virtual Users during the test.
Usage: Defines the peak load. Useful for stress testing and capacity planning.
3. Iteration Metrics
iterations
Description: Total number of script iterations executed.
Usage: Measures the test's progress and workload. Useful in endurance (soak) testing to observe long-term stability.
iteration_duration
Description: Time taken to complete one iteration of the script.
Usage: Helps identify performance degradation over time, especially under sustained load.
4. Data Transfer Metrics
data_sent
Description: Total amount of data sent over the network (in bytes).
Usage: Monitors network usage. High data volumes might indicate inefficient request payloads.
data_received
Description: Total data received from the server (in bytes).
Usage: Detects bandwidth usage and helps identify heavy response payloads.
5. Custom Metrics (Optional)
While K6 provides default metrics, you can define custom metrics like Counters, Gauges, Rates, and Trends for specific business logic or technical KPIs.
Example:

import { Counter } from 'k6/metrics';

let myCounter = new Counter('my_custom_metric');

export default function () {
  myCounter.add(1); // Increment the custom metric
}
Interpreting Metrics for Performance Optimization
Low http_req_duration + High http_reqs = Good scalability.
High http_req_failed = Investigate server errors or timeouts.
High data_sent / data_received = Optimize payloads.
Increasing iteration_duration over time = Possible memory leaks or resource exhaustion.
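These rules of thumb can be turned into automated pass/fail criteria with thresholds. A minimal sketch (the limits are illustrative):

export const options = {
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must finish under 500ms
    http_req_failed: ['rate<0.01'],   // less than 1% of requests may fail
  },
};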
In the evolving Python ecosystem, pyproject.toml has emerged as a pivotal configuration file, streamlining project management and enhancing interoperability across tools.
In this blog, I delve into the significance, structure, and usage of pyproject.toml.
What is pyproject.toml?
Introduced in PEP 518, pyproject.toml is a standardized file format designed to specify build system requirements and manage project configurations. Its primary goal is to provide a unified, tool-agnostic approach to project setup, reducing the clutter of multiple configuration files.
Why Use pyproject.toml?
Standardization: Offers a consistent way to define project metadata, dependencies, and build tools.
Interoperability: Supported by various tools like Poetry, Flit, Black, isort, and even pip.
Simplification: Consolidates multiple configuration files (like setup.cfg, requirements.txt) into one.
Future-Proofing: As Python evolves, pyproject.toml is becoming the de facto standard for project configurations, ensuring compatibility with future tools and practices.
Structure of pyproject.toml
The pyproject.toml file uses the TOML format, which stands for "Tom's Obvious, Minimal Language." TOML is designed to be easy to read and write while being simple enough for parsing by tools.
1. [build-system]
Defines the build system requirements. Essential for tools like pip to know how to build the project.
requires: Lists the build dependencies required to build the project. These packages are installed in an isolated environment before the build process starts.
build-backend: Specifies the backend responsible for building the project. Common backends include:
setuptools.build_meta (for traditional Python projects)
flit_core.buildapi (for projects managed with Flit)
poetry.core.masonry.api (for Poetry projects)
2. [tool]
This section is used by third-party tools to store their configuration. Each tool manages its own sub-table under [tool].
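Putting both sections together, a minimal pyproject.toml sketch might look like this (the project name, versions, and tool settings are illustrative):

[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "my-package"
version = "0.1.0"

[tool.black]
line-length = 88

[tool.isort]
profile = "black"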
This time, we're shifting gears from theory to practice with mini projects that will help you build real-world solutions. Study materials will be shared beforehand, and you'll work hands-on to solve practical problems, building actual projects that showcase your skills.
Whatโs New?
Real-world mini projects
Task-based shortlisting process
Limited seats for focused learning
Dedicated WhatsApp group for discussions & mentorship
Live streaming of sessions for wider participation
Study materials, quizzes, surprise gifts, and more!
How to Join?
Fill out the RSVP below. It is open for 20 days (till March 2) only!
After RSVP closes, shortlisted participants will receive tasks via email.
Complete the tasks to get shortlisted.
Selected students will be added to an exclusive WhatsApp group for intensive training.
It's cost-free learning. We only require your time, effort, and support.
Don't miss this chance to level up your Python skills, cost free, with hands-on projects and exciting rewards! RSVP now and be part of Python Learning 2.0!
In the field of Python development, maintaining clean, readable, and efficient code is essential.
Ruff is a fast Python linter and code formatter designed to boost code quality and developer productivity. Written in Rust, it stands out for its blazing speed and comprehensive feature set.
This blog will delve into Ruff's features and usage, and how it compares to other popular Python linters and formatters like flake8, pylint, and black.
What is Ruff?
Ruff is an extremely fast Python linter and code formatter that provides linting, code formatting, and static code analysis in a single package. It supports a wide range of rules out of the box, covering various Python standards and style guides.
Key Features of Ruff
Lightning-fast Performance: Written in Rust, Ruff is significantly faster than traditional Python linters.
All-in-One Tool: Combines linting, formatting, and static analysis.
Extensive Rule Support: Covers rules from flake8, isort, pyflakes, pylint, and more.
Customizable: Allows configuration of rules to fit specific project needs.
Seamless Integration: Works well with CI/CD pipelines and popular code editors.
Installing Ruff
# Using pip
pip install ruff
# Using Homebrew (macOS/Linux)
brew install ruff
# Using UV
uv add ruff
Basic Usage
1. Linting a Python file
# Lint a single file
ruff check app.py
# Lint an entire directory
ruff check src/
2. Auto Fixing Issues
ruff check src/ --fix
3. Formatting Code
While Ruff primarily focuses on linting, it also handles some formatting tasks
ruff format src/
Configuration
Ruff can be configured using a pyproject.toml file
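A minimal configuration sketch (the rule codes and values are illustrative; recent Ruff versions nest these lint settings under [tool.ruff.lint]):

[tool.ruff]
line-length = 88
select = ["E", "F", "I"]  # pycodestyle, pyflakes, and isort-style import rules
ignore = ["E501"]         # example: skip line-length checks
exclude = ["build", "dist"]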
For example, consider a file with some common issues that Ruff can detect:

import sys   # unused import
import os    # unused import

print("Hello World !")

def add(a, b):
    result = a + b  # unused local variable
    return a

x= 1   # missing whitespace around operator
y =2
print(x+y)

def append_to_list(value, my_list=[]):  # mutable default argument
    my_list.append(value)
    return my_list
Identifying Unused Imports
Auto-fixing Imports
Sorting Imports
Detecting Unused Variables
Enforcing Code Style (PEP 8 Violations)
Detecting Mutable Default Arguments
Fixing Line Length Issues
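As a sketch of how these map to commands, rule selectors can target individual checks (the paths here are placeholders):

# Find and auto-fix unused imports (pyflakes rule F401)
ruff check --select F401 --fix src/

# Sort imports (isort-compatible rules under the "I" code)
ruff check --select I --fix src/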
Integrating Ruff with Pre-commit
To ensure code quality before every commit, integrate Ruff with pre-commit
Step 1: Install Pre-Commit
pip install pre-commit
Step 2: Create a .pre-commit-config.yaml file
repos:
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.1.0  # Use the latest version
    hooks:
      - id: ruff
Step 3: Install the Pre-commit Hook
pre-commit install
Step 4: Test the Hook
pre-commit run --all-files
This setup ensures that Ruff automatically checks your code for linting issues before every commit, maintaining consistent code quality.
When to Use Ruff
Large Codebases: Ideal for projects with thousands of files due to its speed.
CI/CD Pipelines: Reduces linting time, accelerating build processes.
Code Reviews: Ensures consistent coding standards across teams.
Open Source Projects: Simplifies code quality management.
Pre-commit Hooks: Ensures code quality before committing changes.
Ruff is a game-changer in the Python development ecosystem. Its unmatched speed, comprehensive rule set, and ease of use make it a powerful tool for developers aiming to maintain high code quality.
Whether you're working on small scripts or large-scale applications, Ruff can streamline your linting and formatting processes, ensuring clean, efficient, and consistent code.
Git is a powerful version control system that every developer should master. Whether you're a beginner or an experienced developer, knowing a few handy Git command-line tricks can save you time and improve your workflow. Here are 20 essential Git tips and tricks to boost your efficiency.
1. Undo the Last Commit (Without Losing Changes)
git reset --soft HEAD~1
If you made a commit but want to undo it while keeping your changes, this command resets the last commit but retains the modified files in your staging area.
This is useful when you realize you need to make more changes before committing.
If you also want to remove the changes from the staging area but keep them in your working directory, use:
git reset HEAD~1
2. Discard Unstaged Changes
git checkout -- <file>
Use this to discard local changes in a file before staging. Be careful, as this cannot be undone! If you want to discard all unstaged changes in your working directory, use:
git reset --hard HEAD
3. Delete a Local Branch
git branch -d branch-name
Removes a local branch safely if it's already merged. If it's not merged and you still want to delete it, use -D:
git branch -D branch-name
4. Delete a Remote Branch
git push origin --delete branch-name
Deletes a branch from the remote repository, useful for cleaning up old feature branches. If you mistakenly deleted the branch and want to restore it, you can use
git checkout -b branch-name origin/branch-name
if it still exists remotely.
5. Rename a Local Branch
git branch -m old-name new-name
Useful when you want to rename a branch locally without affecting the remote repository. To update the remote reference after renaming, push the renamed branch and delete the old one:
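A sketch of that push-and-delete step (branch names are placeholders):

git push origin new-name
git push origin --delete old-name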
To clone only a specific branch instead of the entire repository, saving time and space:

git clone --branch branch-name --single-branch repository-url

If you want all branches but don't want to check them out initially:

git clone --mirror repository-url
12. Change the Last Commit Message
git commit --amend -m "New message"
Use this to correct a typo in your last commit message before pushing. Be cautious: if you've already pushed, use:
git push --force-with-lease
13. See the List of Tracked Files
git ls-files
Displays all files being tracked by Git, which is useful for auditing your repository. To see ignored files:
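One common invocation for listing ignored (untracked) files is:

git ls-files --others --ignored --exclude-standard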
Performance testing is a crucial part of ensuring the stability and scalability of web applications. k6 is a modern, open-source load testing tool that allows developers and testers to script and execute performance tests efficiently. In this blog, we'll explore the basics of k6 and write a simple test script to get started.
What is k6?
k6 is a load testing tool designed for developers. It is written in Go but uses JavaScript for scripting tests. Key features include:
High performance with minimal resource consumption
JavaScript-based scripting
CLI-based execution with detailed reporting
Integration with monitoring tools like Grafana and Prometheus
A k6 test is written in JavaScript. Here's a simple script to test an API endpoint:
import http from 'k6/http';
import { check, sleep } from 'k6';
export let options = {
  vus: 10,         // Number of virtual users
  duration: '10s', // Test duration
};

export default function () {
  let res = http.get('https://api.restful-api.dev/objects');
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
  sleep(1); // Simulate user wait time
}
Running the Test
Save the script as script.js and execute the test using the following command:
k6 run script.js
Understanding the Output
After running the test, k6 will provide a summary including:
1. HTTP requests: Total number of requests made during the test.
2. Response time metrics:
min: The shortest response time recorded.
max: The longest response time recorded.
avg: The average response time of all requests.
p(90), p(95), p(99): Percentile values indicating response time distribution.
3. Checks: Number of checks passed or failed, such as status code validation.
4. Virtual users (VUs):
vus_max: The maximum number of virtual users active at any time.
vus: The current number of active virtual users.
5. Request Rate (RPS, Requests Per Second): The number of requests handled per second.
6. Failures: Number of errors or failed requests due to timeouts or HTTP status codes other than expected.
Next Steps
Once you've successfully run your first k6 test, you can explore:
Load testing different APIs and endpoints
Running distributed tests
Exporting results to Grafana
Integrating k6 with CI/CD pipelines
k6 is a powerful tool that helps developers and QA engineers ensure their applications perform under load. Stay tuned for more in-depth tutorials on advanced k6 features!
Ensuring your applications perform well under high traffic is crucial. Join us for an interactive K6 Bootcamp, where weโll explore performance testing, load testing strategies, and real-world use cases to help you build scalable and resilient systems.
What is K6 and Why Should You Learn It?
Modern applications must handle thousands (or millions!) of users without breaking. K6 is an open-source, developer-friendly performance testing tool that helps you:
Simulate real-world traffic and identify performance bottlenecks.
Write tests in JavaScript - no need for complex tools!
Run efficient load tests on APIs, microservices, and web applications.
Integrate with CI/CD pipelines to automate performance testing.
Gain deep insights with real-time performance metrics.
By mastering K6, you'll gain the skills to predict failures before they happen, optimize performance, and build systems that scale with confidence!
Bootcamp Details
Date: Feb 23 2024 (Sunday)
Time: 10:30 AM
Mode: Online (link will be shared by email after RSVP)
Language: Tamil (தமிழ்)
Who Should Attend?
Developers: Ensure APIs and services perform well under load.
QA Engineers: Validate system reliability before production.
SREs / DevOps Engineers: Continuously test performance in CI/CD pipelines.
RSVP Now
Don't miss this opportunity to master load testing with K6 and take your performance engineering skills to the next level!
Got questions? Drop them in the comments or reach out to me. See you at the bootcamp!
For the last few days, I have been exploring Buildpacks, and I am impressed by how much developer pain this tool removes. In this blog, I jot down my experience with Buildpacks.
Before trying out Buildpacks, we need to understand what an OCI image is.
What is an OCI Image?
An OCI Image (Open Container Initiative Image) is a standard format for container images, defined by the Open Container Initiative (OCI) to ensure interoperability across different container runtimes (Docker, Podman, containerd, etc.).
It consists of,
Manifest: Metadata describing the image (layers, config, etc.).
Config JSON: Information about how the container should run (CMD, ENV, etc.).
Filesystem Layers: The actual file system of the container.
OCI Image Specification ensures that container images built once can run on any OCI-compliant runtime.
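For a feel of the format, here is a heavily trimmed sketch of an OCI image manifest (the digests and sizes are placeholders):

{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:...",
    "size": 1469
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:...",
      "size": 2811234
    }
  ]
}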
Does Docker Create OCI Images?
Yes, Docker creates OCI-compliant images. Since Docker v1.10+, Docker has been aligned with the OCI Image Specification, and all Docker images are OCI-compliant by default.
When you build an image with docker build, it follows the OCI Image format.
When you push/pull images to registries like Docker Hub, they follow the OCI Image Specification.
However, Docker also supports its legacy Docker Image format, which existed before OCI was introduced. Most modern registries and runtimes (Kubernetes, Podman, containerd) support OCI images natively.
What is a Buildpack ?
A buildpack is a framework for transforming application source code into a runnable image by handling dependencies, compilation, and configuration. Buildpacks are widely used in cloud environments like Heroku, Cloud Foundry, and Kubernetes (via Cloud Native Buildpacks).
Overview of Buildpack Process
The buildpack process consists of two primary phases:
Detection Phase: Determines if the buildpack should be applied based on the appโs dependencies.
Build Phase: Executes the necessary steps to prepare the application for running in a container.
Buildpacks work with a lifecycle manager (e.g., Cloud Native Buildpacks' lifecycle) that orchestrates the execution of multiple buildpacks in an ordered sequence.
Builder: The Image That Executes the Build
A builder is an image that contains all necessary components to run a buildpack.
Components of a Builder Image
Build Image: Used during the build phase (includes compilers, dependencies, etc.).
Run Image: A minimal environment for running the final built application.
Lifecycle: The core mechanism that executes buildpacks, orchestrates the process, and ensures reproducibility.
Stack: The Combination of Build and Run Images
Build Image + Run Image = Stack
Build Image: Base OS with tools required for building (e.g., Ubuntu, Alpine).
Run Image: Lightweight OS with only the runtime dependencies for execution.
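For example, building a Python app with the pack CLI might look like this (the app name and builder are illustrative):

pack build my-python-app --builder paketobuildpacks/builder:base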
It detects Python, installs dependencies, and builds the app into a container image. Docker, by contrast, requires a Dockerfile, which developers must manually configure and maintain.
Automatic Security Updates
Buildpacks automatically patch base images for security vulnerabilities.
If there's a CVE in the OS layer, Buildpacks update the base image without rebuilding the app:
pack rebase my-python-app
No need to rebuild! It replaces only the OS layers while keeping the app the same.
Standardized & Reproducible Builds
Ensures consistent images across environments (dev, CI/CD, production). Example: running the same build locally and on Heroku/Cloud Run:
pack build my-app
Extensibility: Custom Buildpacks
Developers can create custom Buildpacks to add special dependencies.
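As a starting point, the pack CLI can scaffold a new buildpack project (the id and path here are illustrative):

pack buildpack new examples/my-buildpack --path ./my-buildpack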