โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

How to create a namespace in K8s

29 March 2025 at 02:04

A namespace is a mechanism for logically partitioning and isolating resources within a single cluster, allowing multiple teams or projects to share the same cluster without conflicts.

Creating a namespace mygroup with a manifest file
$ vim mygroup.yml
apiVersion: v1
kind: Namespace
metadata:
  name: mygroup
:x

$ kubectl apply -f mygroup.yml

To list all namespaces
$ kubectl get namespaces

To switch to mygroup namespace
$ kubectl config set-context --current --namespace=mygroup
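
Once the namespace exists, workloads can also be placed into it explicitly by setting metadata.namespace in their manifests. A minimal sketch (the pod name and image here are illustrative, not from the original post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: mygroup   # pod is created in mygroup, not in default
spec:
  containers:
    - name: demo
      image: nginx
```

Applying this with kubectl apply -f places the pod inside mygroup regardless of the current context's namespace.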

To delete namespace mygroup
$ kubectl delete namespace mygroup

Suggestions – 08.03.2025

By: vsraj80
8 March 2025 at 11:26
S.No.  Name                           CMP (Rs.)
1      Mangalam Global                15.98
2      Taparia Tools                  16.43
3      South Ind. Bank                25.5
4      Mangalam Alloys                36.1
5      Oricon Enterpris               40.04
6      Pasupati Acrylon               44.63
7      Ajanta Soya                    45.4
8      Manali Petrochem               62.15
9      NMDC                           67.13
10     NACL Industries                70.66
11     Balaxi Pharma                  70.93
12     Nath Industries                80.7
13     S P I C                        81.74
14     Raj Television                 82.69
15     R&B Denims                     83.66
16     SBFC Finance                   86.6
17     Grauer & Weil                  96.03
18     Anik Industries                99.41
19     Surana Telecom And Power       20.83
20     Ptl Enterprises                40.09
21     Rdb Real Estate Constructions  44.1
22     Pioneer Investcorp Ltd         72.16
23     Swan Defence N Heavy Ind       74.48

Note: Taparia has already risen from Rs. 2, so there is some risk there.

List of companies invested in by Vanguard as on 04.03.2025

By: vsraj80
4 March 2025 at 02:54
Company           Date          Action  Quantity     Price
Marksans Pharma   20 Sep, 2024  BUY     26,97,280    317
Sundaram Finance  15 Mar, 2024  BUY     9,12,901     3,796
Powergrid Infra.  15 Mar, 2024  BUY     72,27,413    94.4
Nazara Technolo.  15 Sep, 2023  BUY     3,98,217     838
MTAR Technologie  15 Sep, 2023  BUY     2,30,908     2,608
Data Pattern      15 Sep, 2023  BUY     3,25,105     2,076
Himadri Special   15 Sep, 2023  BUY     26,53,602    242
Equitas Sma. Fin  17 Mar, 2023  BUY     57,19,437    68.0
Delhivery         17 Mar, 2023  BUY     47,62,115    323
Reliance Infra.   17 Mar, 2023  BUY     20,12,088    149
JP Power Ven.     17 Mar, 2023  BUY     3,66,58,683  6.08

Boost System Performance During Traffic Surges with Spike Testing

1 March 2025 at 06:17

Introduction

Spike testing is a type of performance testing that evaluates how a system responds to sudden, extreme increases in load. Unlike stress testing, which gradually increases the load, spike testing simulates abrupt surges in traffic to identify system vulnerabilities, such as crashes, slow response times, and resource exhaustion.

In this blog, we will explore spike testing in detail, covering its importance, methodology, and full implementation using K6.

Why Perform Spike Testing?

Spike testing helps you

  • Determine system stability under unexpected traffic surges.
  • Identify bottlenecks that arise due to rapid load increases.
  • Assess auto-scaling capabilities of cloud-based infrastructures.
  • Measure response time degradation during high-demand spikes.
  • Ensure system recovery after the sudden load disappears.

Setting Up K6 for Spike Testing

Installing K6

# macOS
brew install k6  

# Ubuntu/Debian
sudo apt install k6  

# Using Docker
docker pull grafana/k6  

Choosing the Right Test Scenario

K6 provides different executors to simulate load patterns. For spike testing, we use

  • ramping-arrival-rate โ†’ Gradually increases the request rate over time.
  • constant-arrival-rate โ†’ Maintains a fixed number of requests per second after the spike.

Example 1: Basic Spike Test

This test starts with low traffic, spikes suddenly, and then drops back to normal.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    spike_test: {
      executor: 'ramping-arrival-rate',
      startRate: 10, // Start with 10 requests/sec
      timeUnit: '1s',
      preAllocatedVUs: 100,
      maxVUs: 500,
      stages: [
        { duration: '30s', target: 10 },  // Low traffic
        { duration: '10s', target: 500 }, // Sudden spike
        { duration: '30s', target: 10 },  // Traffic drops
      ],
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation

  • Starts with 10 requests per second for 30 seconds.
  • Spikes to 500 requests per second in 10 seconds.
  • Drops back to 10 requests per second.
  • Tests the system's ability to handle and recover from traffic spikes.

Example 2: Spike Test with High User Load

This test simulates a spike in virtual users rather than just requests per second.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    user_spike: {
      executor: 'ramping-vus',
      stages: [
        { duration: '30s', target: 20 },  // Normal traffic
        { duration: '10s', target: 300 }, // Sudden spike in users
        { duration: '30s', target: 20 },  // Drop back to normal
      ],
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Explanation:

  • Simulates a sudden increase in concurrent virtual users (VUs).
  • Helps test server stability, database handling, and auto-scaling.

Example 3: Spike Test on Multiple Endpoints

In real-world applications, multiple endpoints may experience spikes simultaneously. Here's how to test different API routes.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    multiple_endpoint_spike: {
      executor: 'ramping-arrival-rate',
      startRate: 5,
      timeUnit: '1s',
      preAllocatedVUs: 200,
      maxVUs: 500,
      stages: [
        { duration: '20s', target: 10 },  // Normal traffic
        { duration: '10s', target: 300 }, // Spike across endpoints
        { duration: '20s', target: 10 },  // Traffic drop
      ],
    },
  },
};

export default function () {
  let urls = [
    'https://test-api.example.com/users',
    'https://test-api.example.com/orders',
    'https://test-api.example.com/products'
  ];
  
  let res = http.get(urls[Math.floor(Math.random() * urls.length)]);
  console.log(`Response time: ${res.timings.duration}ms`);
  sleep(1);
}

Explanation

  • Simulates traffic spikes across multiple API endpoints.
  • Helps identify which API calls suffer under extreme load.

Analyzing Test Results

After running the tests, K6 provides key performance metrics

http_req_duration......: avg=350ms min=150ms max=3000ms
http_reqs..............: 10,000 requests
vus_max................: 500
errors.................: 2%

Key Metrics

  • http_req_duration โ†’ Measures response time impact.
  • vus_max โ†’ Peak virtual users during the spike.
  • errors โ†’ Percentage of failed requests due to overload.
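
These metrics can also gate the test automatically: k6 lets you declare pass/fail thresholds in the options object, so the run exits with a failure when a limit is breached. A minimal sketch (the limits shown are illustrative, not recommendations):

```javascript
export const options = {
  thresholds: {
    // 95% of requests must complete in under 500ms
    http_req_duration: ['p(95)<500'],
    // fewer than 1% of requests may fail
    http_req_failed: ['rate<0.01'],
  },
};
```

When a threshold fails, k6 reports the run as failed, which makes these checks straightforward to wire into a CI pipeline.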

Best Practices for Spike Testing

  • Monitor application logs and database performance during the test.
  • Use auto-scaling mechanisms for cloud-based environments.
  • Combine spike tests with stress testing for better insights.
  • Analyze error rates and recovery time to ensure system stability.

Spike testing is crucial for ensuring application stability under sudden, unpredictable traffic surges. Using K6, we can simulate spikes in both requests per second and concurrent users to identify bottlenecks before they impact real users.

Git Stash Explained: Save Your Work Efficiently

19 February 2025 at 13:13

Introduction

Git is an essential tool for version control, and one of its underrated but powerful features is git stash. It allows developers to temporarily save their uncommitted changes without committing them, enabling a smooth workflow when switching branches or handling urgent bug fixes.

In this blog, we will explore git stash, its varieties, and some clever hacks to make the most of it.

1. Understanding Git Stash

Git stash allows developers to temporarily save changes made to the working directory, enabling them to switch contexts without having to commit incomplete work. This is particularly useful when you need to switch branches quickly or when you are interrupted by an urgent task.

When you run git stash, Git takes the uncommitted changes in your working directory (both staged and unstaged) and saves them on a stack called the "stash stack". This action reverts your working directory to the last committed state while safely storing the changes for later use.

How It Works

  • Git saves the current state of the working directory and the index (staging area) as a stash.
  • The stash includes modifications to tracked files, newly created files, and changes in the index.
  • Untracked files are not stashed by default unless specified.
  • Stashes are stored in a stack, with the most recent stash on top.

Common Use Cases

  • Context Switching: When you are working on a feature and need to switch branches for an urgent bug fix.
  • Code Review Feedback: If you receive feedback and need to make changes but are in the middle of another task.
  • Cleanup Before Commit: To stash temporary debugging changes or print statements before making a clean commit.
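
The context-switching flow above can be sketched end to end. The following is a self-contained demo in a throwaway repository (file names and messages are illustrative):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo

echo "base" > file.txt
git add file.txt && git commit -qm "base commit"

echo "work in progress" >> file.txt   # uncommitted change
git stash push -m "WIP: demo"         # working tree is clean again
git stash list                        # shows the saved entry
git stash pop                         # restore the change
grep "work in progress" file.txt      # the edit is back
```

After the pop, the stash entry is gone and the uncommitted edit is back in the working tree, exactly as before the stash.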


Basic Usage

The basic git stash command saves all modified tracked files and staged changes. This does not include untracked files by default.

git stash

This command performs three main actions

  • Saves changes: Takes the current working directory state and index and saves it as a new stash entry.
  • Resets working directory: Reverts the working directory to match the last commit.
  • Stacks the stash: Stores the saved state on top of the stash stack.

Restoring Changes

To restore the stashed changes, you can use

git stash pop

This does two things

  • Applies the stash: Reapplies the changes to your working directory.
  • Deletes the stash: Removes the stash entry from the stash stack.

If you want to keep the stash for future use

git stash apply

This reapplies the changes without deleting the stash entry.

Viewing and Managing Stashes

To see a list of all stash entries

git stash list

This shows a list like

stash@{0}: WIP on feature-branch: 1234567 Commit message
stash@{1}: WIP on master: 89abcdef Commit message

Each stash is identified by an index (e.g., stash@{0}) which can be used for other stash commands.

git stash

This command stashes staged and unstaged changes to tracked files; untracked files are excluded unless you add -u.

To apply the last stashed changes back

git stash pop

This applies the stash and removes it from the stash list.

To apply the stash without removing it

git stash apply

To see a list of all stashed changes

git stash list

To remove a specific stash

git stash drop stash@{index}

To clear all stashes

git stash clear

2. Varieties of Git Stash

a) Stashing Untracked Files

By default, git stash does not include untracked files. To include them

git stash -u

Or:

git stash --include-untracked

b) Stashing Ignored Files

To stash even ignored files

git stash -a

Or:

git stash --all

c) Stashing with a Message

To add a meaningful message to a stash

git stash push -m "WIP: Refactoring user authentication"

d) Stashing Specific Files

If you only want to stash specific files

git stash push -m "Partial stash" -- path/to/file

e) Stashing and Switching Branches

Instead of running git stash and git checkout separately, chain them in one line

git stash push -m "WIP: Bug Fix" && git checkout other-branch

3. Advanced Stash Hacks

a) Viewing Stashed Changes

To see the contents of a stash before applying

git stash show -p stash@{0}

b) Applying a Stash to a Different Branch

You can stash on one branch and apply it to another

git checkout other-branch
git stash apply stash@{0}

c) Creating a New Branch from a Stash

If you realize your stash should have been a separate branch

git stash branch new-branch stash@{0}

This will create a new branch and apply the stashed changes.

d) Keeping Index Changes

If you want to keep staged files untouched while stashing

git stash push --keep-index

e) Recovering a Dropped Stash

If you accidentally dropped a stash, the underlying commit may still be recoverable

git fsck --lost-found

Or, check stash history with:

git reflog stash

f) Using Stash for Conflict Resolution

If you're rebasing and hit conflicts, stash helps in saving progress

git stash
# Fix conflicts
# Continue rebase
git stash pop

4. When Not to Use Git Stash

  • If your work is significant, commit it instead of stashing.
  • Avoid excessive stashing as it can lead to forgotten changes.
  • Stashing doesn't track renamed or deleted files effectively.

Git stash is an essential tool for developers to manage temporary changes efficiently. With the different stash varieties and hacks, you can enhance your workflow and avoid unnecessary commits. Mastering these techniques will save you time and improve your productivity in version control.

Happy coding! ๐Ÿš€

Learning Notes #77 – Smoke Testing with K6

16 February 2025 at 07:12

In this blog, I jot down notes on what a smoke test is, how it got its name, and how to approach one in k6.

The term smoke testing originates from hardware testing, where engineers would power on a circuit or device and check if smoke appeared.

If smoke was detected, it indicated a fundamental issue, and further testing was halted. This concept was later adapted to software engineering.

What is Smoke Testing?

Smoke testing is a subset of test cases executed to verify that the major functionalities of an application work as expected. If a smoke test fails, the build is rejected, preventing further testing of a potentially unstable application. This test helps catch major defects early, saving time and effort.

Key Characteristics

  • Ensures that the application is not broken in major areas.
  • Runs quickly and is not exhaustive.
  • Usually automated as part of a CI/CD pipeline.

Writing a Basic Smoke Test with K6

A basic smoke test using K6 typically checks API endpoints for HTTP 200 responses and acceptable response times.

import http from 'k6/http';
import { check } from 'k6';

export let options = {
    vus: 1, // 1 virtual user
    iterations: 5, // Runs the test 5 times
};

export default function () {
    let res = http.get('https://example.com/api/health');
    check(res, {
        'is status 200': (r) => r.status === 200,
        'response time < 500ms': (r) => r.timings.duration < 500,
    });
}

Advanced Smoke Test Example

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
    vus: 2, // 2 virtual users
    iterations: 10, // Runs the test 10 times
};

export default function () {
    let res = http.get('https://example.com/api/login');
    check(res, {
        'status is 200': (r) => r.status === 200,
        'response time < 400ms': (r) => r.timings.duration < 400,
    });
    sleep(1);
}

Running and Analyzing Results

Execute the test using

k6 run smoke-test.js

Sample Output

checks...
✔ is status 200
✔ response time < 500ms

If any of the checks fail, K6 will report an error, signaling an issue in the application.

Smoke testing with K6 is an effective way to ensure that key functionalities in your application work as expected. By integrating it into your CI/CD pipeline, you can catch major defects early, improve application stability, and streamline your development workflow.

Learning Notes #76 – Specifying Virtual Users (VUs) and Test Duration in K6

16 February 2025 at 05:13

When running load tests with K6, two fundamental aspects that shape test execution are the number of Virtual Users (VUs) and the test duration. These parameters help simulate realistic user behavior and measure system performance under different load conditions.

In this blog, I jot down notes on virtual users and test duration in the options object, and how to use them to ramp users up and down.

  1. Defining VUs and Duration in K6
  2. Basic VU and Duration Configuration
  3. Specifying VUs and Duration from the Command Line
  4. Ramp Up and Ramp Down with Stages
  5. Custom Execution Scenarios
    1. Syntax of Custom Execution Scenarios
    2. Different Executors in K6
    3. Example: Ramping VUs Scenario
    4. Example: Constant Arrival Rate Scenario
    5. Example: Per VU Iteration Scenario
  6. Choosing the Right Configuration
  7. References

Defining VUs and Duration in K6

K6 offers multiple ways to define VUs and test duration, primarily through options in the test script or the command line.

Basic VU and Duration Configuration

The simplest way to specify VUs and test duration is by setting them in the options object of your test script.

import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 10, // Number of virtual users
  duration: '30s', // Duration of the test
};

export default function () {
  http.get('https://test.k6.io/');
  sleep(1);
}

This script runs a load test with 10 virtual users for 30 seconds, making requests to the specified URL.

Specifying VUs and Duration from the Command Line

You can also set the VUs and duration dynamically using command-line arguments without modifying the script.

k6 run --vus 20 --duration 1m script.js

This command runs the test with 20 virtual users for 1 minute.

Ramp Up and Ramp Down with Stages

Instead of a fixed number of VUs, you can simulate user load variations over time using stages. This helps to gradually increase or decrease the load on the system.

export const options = {
  stages: [
    { duration: '30s', target: 10 }, // Ramp up to 10 VUs
    { duration: '1m', target: 50 },  // Ramp up to 50 VUs
    { duration: '30s', target: 10 }, // Ramp down to 10 VUs
    { duration: '20s', target: 0 },  // Ramp down to 0 VUs
  ],
};

This test gradually increases the load, sustains it, and then reduces it, simulating real-world traffic patterns.

Custom Execution Scenarios

For more advanced load testing strategies, K6 supports scenarios, allowing fine-grained control over execution behavior.

Syntax of Custom Execution Scenarios

A scenarios object defines different execution strategies. Each scenario consists of

  • executor: Defines how the test runs (e.g., ramping-vus, constant-arrival-rate, etc.).
  • vus: Number of virtual users (for certain executors).
  • duration: How long the scenario runs.
  • iterations: Total number of iterations per VU (for certain executors).
  • stages: Used in ramping-vus to define load variations over time.
  • rate: Defines the number of iterations per time unit in constant-arrival-rate.
  • preAllocatedVUs: Number of VUs reserved for the test.

Different Executors in K6

K6 provides several executors that define how virtual users (VUs) generate load

  1. shared-iterations โ€“ Distributes a fixed number of iterations across multiple VUs.
  2. per-vu-iterations โ€“ Each VU runs a specific number of iterations independently.
  3. constant-vus โ€“ Maintains a fixed number of VUs for a set duration.
  4. ramping-vus โ€“ Increases or decreases the number of VUs over time.
  5. constant-arrival-rate โ€“ Ensures a constant number of requests per time unit, independent of VUs.
  6. ramping-arrival-rate โ€“ Gradually increases or decreases the request rate over time.
  7. externally-controlled โ€“ Allows dynamic control of VUs via an external API.

Example: Ramping VUs Scenario

import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  scenarios: {
    ramping_users: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '30s', target: 20 },
        { duration: '1m', target: 100 },
        { duration: '30s', target: 0 },
      ],
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Example: Constant Arrival Rate Scenario

import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  scenarios: {
    constant_request_rate: {
      executor: 'constant-arrival-rate',
      rate: 50, // 50 iterations per second
      timeUnit: '1s', // Per second
      duration: '1m', // Test duration
      preAllocatedVUs: 20, // Number of VUs to allocate
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Example: Per VU Iteration Scenario

import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  scenarios: {
    per_vu_iterations: {
      executor: 'per-vu-iterations',
      vus: 10,
      iterations: 50, // Each VU executes 50 iterations
      maxDuration: '1m',
    },
  },
};

export default function () {
  http.get('https://test-api.example.com');
  sleep(1);
}

Choosing the Right Configuration

  • Use fixed VUs and duration for simple, constant load testing.
  • Use stages for ramping up and down load gradually.
  • Use scenarios for more complex and controlled testing setups.

References

  1. Scenarios & Executors โ€“ https://grafana.com/docs/k6/latest/using-k6/scenarios/

Python – random function

By: vsraj80
15 February 2025 at 03:19

Generating a random number, with a restricted number of guesses

import random

computer_Num = random.randint(1, 100)

limit = 5

while limit > 0:
    guess = int(input("Guess the Number: "))
    limit -= 1
    if guess == computer_Num:
        print("Guess is", guess, "Computer Number is", computer_Num, "You Won")
    if guess != computer_Num:
        print("Guess is", guess, "Computer Number is", computer_Num, "Wrong guess")
        computer_Num = random.randint(1, 100)  # pick a new number after a wrong guess
    if limit == 0:
        print("Your limit is reached")

Reg. Web Scrape – No Output

By: vsraj80
13 February 2025 at 13:35

Not getting proper output

import requests
from bs4 import BeautifulSoup

url = "https://www.moneycontrol.com/stocks/marketstats/nsehigh/index.php"

page = requests.get(url)
soup = BeautifulSoup(page.content, "html.parser")

company = soup.find_all("a", class_="ReuseTable_gld13__HzxFN undefined")
# print(company)

for cmp in company:
    print(cmp.prettify(), end="\n\n")

May I know what mistake I did here?

Golden Feedbacks for Python Sessions 1.0 from last year (2024)

13 February 2025 at 08:49

Many Thanks to Shrini for documenting it last year. This serves as a good reference to improve my skills. Hope it will help many.

What Participants wanted to improve

  • Go a bit slower so that everyone can understand clearly without feeling rushed.
  • Provide more basics and examples to make learning easier for beginners.
  • Spend the first week explaining programming basics so that newcomers don't feel lost.
  • Teach flowcharting methods to help participants understand the logic behind coding.
  • Try teaching Scratch as an interactive way to introduce programming concepts.
  • Offer weekend batches for those who prefer learning on weekends.
  • Encourage more conversations so that participants can actively engage in discussions.
  • Create sub-groups to allow participants to collaborate and support each other.
  • Get "cheerleaders" within the team to make the classes more fun and interactive.
  • Increase promotion efforts to reach a wider audience and get more participants.
  • Provide better examples to make concepts easier to grasp.
  • Conduct more Q&A sessions so participants can ask and clarify their doubts.
  • Ensure that each participant gets a chance to speak and express their thoughts.
  • Showing your face in videos can help in building a more personal connection with the learners.
  • Organize mini-hackathons to provide hands-on experience and encourage practical learning.
  • Foster more interactions and connections between participants to build a strong learning community.
  • Encourage participants to write blogs daily to document their learning and share insights.
  • Motivate participants to give talks in class and other communities to build confidence.

๐Ÿ“ Other Learnings & Suggestions

๐Ÿ“ต Avoid creating WhatsApp groups for communication, as the 1024 member limit makes it difficult to manage multiple groups.


โœ‰ Telegram works fine for now, but explore using mailing lists as an alternative for structured discussions.


๐Ÿ”• Mute groups when necessary to prevent unnecessary messages like โ€œHi, Hello, Good Morning.โ€


๐Ÿ“ข Teach participants how to join mailing lists like ChennaiPy and KanchiLUG and guide them on asking questions in forums like Tamil Linux Community.


๐Ÿ“ Show participants how to create a free blog on platforms like dev.to or WordPress to share their learning journey.


๐Ÿ›  Avoid spending too much time explaining everything in-depth, as participants should start coding a small project by the 5th or 6th class.


๐Ÿ“Œ Present topics as solutions to project ideas or real-world problem statements instead of just theory.


๐Ÿ‘ค Encourage using names when addressing people, rather than calling them โ€œSirโ€ or โ€œMadam,โ€ to maintain an equal and friendly learning environment.


๐Ÿ’ธ Zoom is costly, and since only around 50 people complete the training, consider alternatives like Jitsi or Google Meet for better cost-effectiveness.

Will try to incorporate these learnings in our upcoming sessions.

๐Ÿš€ Letโ€™s make this learning experience engaging, interactive, and impactful! ๐ŸŽฏ

Learning Notes #72 – Metrics in K6 Load Testing

12 February 2025 at 17:15

In our previous blog on K6, we ran a script.js to test an API. As output, we received some metrics in the CLI.

In this blog, we are going to delve deeper into understanding metrics in K6.

1. HTTP Request Metrics

http_reqs

  • Description: Total number of HTTP requests initiated during the test.
  • Usage: Indicates the volume of traffic generated. A high number of requests can simulate real-world usage patterns.

http_req_duration

  • Description: Time taken for a request to receive a response (in milliseconds).
  • Components:
    • http_req_connecting: Time spent establishing a TCP connection.
    • http_req_tls_handshaking: Time for completing the TLS handshake.
    • http_req_waiting (TTFB): Time spent waiting for the first byte from the server.
    • http_req_sending: Time taken to send the HTTP request.
    • http_req_receiving: Time spent receiving the response data.
  • Usage: Identifies performance bottlenecks like slow server responses or network latency.

http_req_failed

  • Description: Proportion of failed HTTP requests (ratio between 0 and 1).
  • Usage: Highlights reliability issues. A high failure rate indicates problems with server stability or network errors.

2. VU (Virtual User) Metrics

vus

  • Description: Number of active Virtual Users at any given time.
  • Usage: Reflects concurrency level. Helps analyze how the system performs under varying loads.

vus_max

  • Description: Maximum number of Virtual Users during the test.
  • Usage: Defines the peak load. Useful for stress testing and capacity planning.

3. Iteration Metrics

iterations

  • Description: Total number of script iterations executed.
  • Usage: Measures the test's progress and workload. Useful in endurance (soak) testing to observe long-term stability.

iteration_duration

  • Description: Time taken to complete one iteration of the script.
  • Usage: Helps identify performance degradation over time, especially under sustained load.

4. Data Transfer Metrics

data_sent

  • Description: Total amount of data sent over the network (in bytes).
  • Usage: Monitors network usage. High data volumes might indicate inefficient request payloads.

data_received

  • Description: Total data received from the server (in bytes).
  • Usage: Detects bandwidth usage and helps identify heavy response payloads.

5. Custom Metrics (Optional)

While K6 provides default metrics, you can define custom metrics like Counters, Gauges, Rates, and Trends for specific business logic or technical KPIs.

Example

import { Counter } from 'k6/metrics';

let myCounter = new Counter('my_custom_metric');

export default function () {
  myCounter.add(1); // Increment the custom metric
}

Interpreting Metrics for Performance Optimization

  • Low http_req_duration + High http_reqs = Good scalability.
  • High http_req_failed = Investigate server errors or timeouts.
  • High data_sent / data_received = Optimize payloads.
  • Increasing iteration_duration over time = Possible memory leaks or resource exhaustion.

Learning Notes #71 – pyproject.toml

12 February 2025 at 16:57

In the evolving Python ecosystem, pyproject.toml has emerged as a pivotal configuration file, streamlining project management and enhancing interoperability across tools.

In this blog, I delve deep into the significance, structure, and usage of pyproject.toml.

What is pyproject.toml?

Introduced in PEP 518, pyproject.toml is a standardized file format designed to specify build system requirements and manage project configurations. Its primary goal is to provide a unified, tool-agnostic approach to project setup, reducing the clutter of multiple configuration files.

Why Use pyproject.toml?

  • Standardization: Offers a consistent way to define project metadata, dependencies, and build tools.
  • Interoperability: Supported by various tools like Poetry, Flit, Black, isort, and even pip.
  • Simplification: Consolidates multiple configuration files (like setup.cfg, requirements.txt) into one.
  • Future-Proofing: As Python evolves, pyproject.toml is becoming the de facto standard for project configurations, ensuring compatibility with future tools and practices.

Structure of pyproject.toml

The pyproject.toml file uses the TOML format, which stands for "Tom's Obvious, Minimal Language." TOML is designed to be easy to read and write while being simple enough for parsing by tools.

1. [build-system]

Defines the build system requirements. Essential for tools like pip to know how to build the project.

[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"

requires: Lists the build dependencies required to build the project. These packages are installed in an isolated environment before the build process starts.

build-backend: Specifies the backend responsible for building the project. Common backends include:

  • setuptools.build_meta (for traditional Python projects)
  • flit_core.buildapi (for projects managed with Flit)
  • poetry.core.masonry.api (for Poetry projects)

2. [tool]

This section is used by third-party tools to store their configuration. Each tool manages its own sub-table under [tool].

Example with Black (Python code formatter):

[tool.black]
line-length = 88
target-version = ["py38"]
include = '\.pyi?$'
exclude = '''
/(
  \.git
  | \.mypy_cache
  | \.venv
  | build
  | dist
)/
'''

  • line-length: Sets the maximum line length for code formatting.
  • target-version: Specifies the Python versions the code should be compatible with.
  • include / exclude: Regular expressions to define which files Black should format.

Example with isort (import sorter)

[tool.isort]
profile = "black"
line_length = 88
multi_line_output = 3
include_trailing_comma = true

  • profile: Allows easy integration with formatting tools like Black.
  • multi_line_output: Controls how imports are wrapped.
  • include_trailing_comma: Ensures trailing commas in multi-line imports.

3. [project]

Introduced in PEP 621, this section standardizes project metadata, reducing reliance on setup.py.

[project]
name = "my-awesome-project"
version = "0.1.0"
description = "An awesome Python project"
readme = "README.md"
requires-python = ">=3.8"
authors = [
    { name="Syed Jafer K", email="syed@example.com" }
]
dependencies = [
    "requests>=2.25.1",
    "fastapi"
]
license = { file = "LICENSE" }
keywords = ["python", "awesome", "project"]
classifiers = [
    "Programming Language :: Python :: 3",
    "License :: OSI Approved :: MIT License",
    "Operating System :: OS Independent"
]

  • name, version, description: Basic project metadata.
  • readme: Path to the README file.
  • requires-python: Specifies compatible Python versions.
  • authors: List of project authors.
  • dependencies: Project dependencies.
  • license: Specifies the project's license.
  • keywords: Helps with project discovery in package repositories.
  • classifiers: Provides metadata for tools like PyPI to categorize the project.

4. Optional scripts and entry-points

Define CLI commands:

[project.scripts]
mycli = "my_module:main"

  • scripts: Maps command-line scripts to Python functions, allowing users to run mycli directly after installing the package.
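For completeness, here is a minimal sketch of what my_module might contain; the module and function names are simply the ones assumed by the mapping above:

```python
"""A minimal sketch of my_module.py, the hypothetical target of `mycli`."""

def main() -> int:
    """Invoked by the console-script wrapper when the user runs `mycli`."""
    print("Hello from mycli")
    return 0  # exit status handed back to the shell
```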

Tools That Support pyproject.toml

  • Build tools: Poetry, Flit, setuptools
  • Linters/Formatters: Black, isort, Ruff
  • Test frameworks: Pytest (via addopts)
  • Package managers: Pip (PEP 517/518 compliant)
  • Documentation tools: Sphinx

Migration Tips

  • Gradual Migration: Move one configuration at a time to avoid breaking changes.
  • Backwards Compatibility: Keep older config files during transition if needed.
  • Testing: Use CI pipelines to ensure the new configuration doesn’t break the build.

Troubleshooting Common Issues

  1. Build Failures with Pip: Ensure build-system.requires includes all necessary build tools.
  2. Incompatible Tools: Check for the latest versions of tools to ensure pyproject.toml support.
  3. Configuration Errors: Validate your TOML file with online validators like TOML Lint.

📢 Python Learning 2.0 in Tamil – Call for Participants! 🚀

10 February 2025 at 07:58

After an incredible year of Python learning (watch our journey here), we’re back with an all-new approach for 2025!

If you haven’t subscribed to our channel yet, don’t miss it: support us by subscribing.

This time, we’re shifting gears from theory to practice with mini projects that will help you build real-world solutions. Study materials will be shared beforehand, and you’ll work hands-on to solve practical problems, building actual projects that showcase your skills.

🔑 What’s New?

✅ Real-world mini projects
✅ Task-based shortlisting process
✅ Limited seats for focused learning
✅ Dedicated WhatsApp group for discussions & mentorship
✅ Live streaming of sessions for wider participation
✅ Study materials, quizzes, surprise gifts, and more!

📋 How to Join?

  1. Fill the RSVP below – open for 20 days (till March 2) only!
  2. After RSVP closes, shortlisted participants will receive tasks via email.
  3. Complete the tasks to get shortlisted.
  4. Selected students will be added to an exclusive WhatsApp group for intensive training.
  5. It’s COST-FREE learning. We only ask for your time, effort and support.
  6. The course start date will be announced after RSVP.

📜 RSVP Form

☎ How to Contact for Queries?

If you have any queries, feel free to message on WhatsApp, Telegram, or Signal at 9176409201.

You can also mail me at learnwithjafer@gmail.com

Follow us for more opportunities, updates and more…

Don’t miss this chance to level up your Python skills, cost free, with hands-on projects and exciting rewards! RSVP now and be part of Python Learning 2.0! 🚀

Our Previous Monthly meets – https://www.youtube.com/watch?v=cPtyuSzeaa8&list=PLiutOxBS1MizPGGcdfXF61WP5pNUYvxUl&pp=gAQB

Our Previous Sessions,

Postgres – https://www.youtube.com/watch?v=04pE5bK2-VA&list=PLiutOxBS1Miy3PPwxuvlGRpmNo724mAlt&pp=gAQB

Python – https://www.youtube.com/watch?v=lQquVptFreE&list=PLiutOxBS1Mizte0ehfMrRKHSIQcCImwHL&pp=gAQB

Docker – https://www.youtube.com/watch?v=nXgUBanjZP8&list=PLiutOxBS1Mizi9IRQM-N3BFWXJkb-hQ4U&pp=gAQB

Note: If you wish to support me for this initiative please share this with your friends, students and those who are in need.

Learning Notes #70 – RUFF: An extremely fast Python linter and code formatter, written in Rust

9 February 2025 at 11:00

In the field of Python development, maintaining clean, readable, and efficient code is essential.

Ruff is a fast linter and code formatter designed to boost code quality and developer productivity. Written in Rust, it stands out for its blazing speed and comprehensive feature set.

This blog will delve into Ruff’s features, usage, and how it compares to other popular Python linters and formatters like flake8, pylint, and black.

What is Ruff?

Ruff is an extremely fast Python linter and code formatter that provides linting, code formatting, and static code analysis in a single package. It supports a wide range of rules out of the box, covering various Python standards and style guides.

Key Features of Ruff

  1. Lightning-fast Performance: Written in Rust, Ruff is significantly faster than traditional Python linters.
  2. All-in-One Tool: Combines linting, formatting, and static analysis.
  3. Extensive Rule Support: Covers rules from flake8, isort, pyflakes, pylint, and more.
  4. Customizable: Allows configuration of rules to fit specific project needs.
  5. Seamless Integration: Works well with CI/CD pipelines and popular code editors.

Installing Ruff


# Using pip
pip install ruff

# Using Homebrew (macOS/Linux)
brew install ruff

# Using UV
uv add ruff

Basic Usage

1. Linting a Python file

# Lint a single file
ruff check app.py

# Lint an entire directory
ruff check src/

2. Auto Fixing Issues

ruff check src/ --fix

3. Formatting Code

While Ruff primarily focuses on linting, it also handles some formatting tasks.

ruff format src/

Configuration

Ruff can be configured using a pyproject.toml file.

[tool.ruff]
line-length = 88
exclude = ["migrations"]
select = ["E", "F", "W"]  # Enable specific rule categories
ignore = ["E501"]          # Ignore specific rules

Examples

Consider a file with several common issues, annotated here with the Ruff rule codes that flag them:

import sys  # F401: imported but unused
import os   # F401: imported but unused

print("Hello World !")


def add(a, b):
    result = a + b  # F841: local variable assigned but never used
    return a        # likely a bug: should return result


x= 1  # E225: missing whitespace around operator
y =2  # E225: missing whitespace around operator
print(x+y)


def append_to_list(value, my_list=[]):  # B006: mutable default argument
    my_list.append(value)
    return my_list


def append_to_list(value, my_list=[]):  # F811: redefinition of unused name
    my_list.append(value)
    return my_list

Running ruff check against this file demonstrates, among other things,

  1. Identifying Unused Imports
  2. Auto-fixing Imports
  3. Sorting Imports
  4. Detecting Unused Variables
  5. Enforcing Code Style (PEP 8 Violations)
  6. Detecting Mutable Default Arguments
  7. Fixing Line Length Issues

Integrating Ruff with Pre-commit

To ensure code quality before every commit, integrate Ruff with pre-commit

Step 1: Install Pre-Commit

pip install pre-commit

Step 2: Create a .pre-commit-config.yaml file

repos:
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.1.0  # Use the latest version
    hooks:
      - id: ruff

Step 3: Install the Pre-commit Hook

pre-commit install

Step 4: Test the Hook

pre-commit run --all-files

This setup ensures that Ruff automatically checks your code for linting issues before every commit, maintaining consistent code quality.

When to Use Ruff

  • Large Codebases: Ideal for projects with thousands of files due to its speed.
  • CI/CD Pipelines: Reduces linting time, accelerating build processes.
  • Code Reviews: Ensures consistent coding standards across teams.
  • Open Source Projects: Simplifies code quality management.
  • Pre-commit Hooks: Ensures code quality before committing changes.

Integrating Ruff with CI/CD

name: Lint Code

on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: '3.10'
    - name: Install Ruff
      run: pip install ruff
    - name: Lint Code
      run: ruff check .

Ruff is a game-changer in the Python development ecosystem. Its unmatched speed, comprehensive rule set, and ease of use make it a powerful tool for developers aiming to maintain high code quality.

Whether you’re working on small scripts or large-scale applications, Ruff can streamline your linting and formatting processes, ensuring clean, efficient, and consistent code.

20 Essential Git Command-Line Tricks Every Developer Should Know

5 February 2025 at 16:14

Git is a powerful version control system that every developer should master. Whether you’re a beginner or an experienced developer, knowing a few handy Git command-line tricks can save you time and improve your workflow. Here are 20 essential Git tips and tricks to boost your efficiency.

1. Undo the Last Commit (Without Losing Changes)

git reset --soft HEAD~1

If you made a commit but want to undo it while keeping your changes, this command resets the last commit but retains the modified files in your staging area.

This is useful when you realize you need to make more changes before committing.

If you also want to remove the changes from the staging area but keep them in your working directory, use,

git reset HEAD~1

2. Discard Unstaged Changes

git checkout -- <file>

Use this to discard local changes in a file before staging. Be careful, as this cannot be undone! If you want to discard all unstaged changes in your working directory, use,

git reset --hard HEAD

3. Delete a Local Branch

git branch -d branch-name

Removes a local branch safely if it’s already merged. If it’s not merged and you still want to delete it, use -D

git branch -D branch-name

4. Delete a Remote Branch

git push origin --delete branch-name

Deletes a branch from the remote repository, useful for cleaning up old feature branches. If you mistakenly deleted the branch and want to restore it, you can use

git checkout -b branch-name origin/branch-name

if it still exists remotely.

5. Rename a Local Branch

git branch -m old-name new-name

Useful when you want to rename a branch locally without affecting the remote repository. To update the remote reference after renaming, push the renamed branch and delete the old one,

git push origin -u new-name
git push origin --delete old-name

6. See the Commit History in a Compact Format

git log --oneline --graph --decorate --all

A clean and structured way to view Git history, showing branches and commits in a visual format. If you want to see a detailed history with diffs, use

git log -p

7. Stash Your Changes Temporarily

git stash

If you need to switch branches but don’t want to commit yet, stash your changes and retrieve them later with

git stash pop

To see all stashed changes

git stash list

8. Find the Author of a Line in a File

git blame file-name

Shows who made changes to each line in a file. Helpful for debugging or reviewing historical changes. If you want to ignore whitespace changes

git blame -w file-name

9. View a File from a Previous Commit

git show commit-hash:path/to/file

Useful for checking an older version of a file without switching branches. If you want to restore the file from an old commit

git checkout commit-hash -- path/to/file

10. Reset a File to the Last Committed Version

git checkout HEAD -- file-name

Restores the file to the last committed state, removing any local changes. If you want to reset all files

git reset --hard HEAD

11. Clone a Specific Branch

git clone -b branch-name --single-branch repository-url

Instead of cloning the entire repository, this fetches only the specified branch, saving time and space. If you want all branches but don’t want to check them out initially:

git clone --mirror repository-url

12. Change the Last Commit Message

git commit --amend -m "New message"

Use this to correct a typo in your last commit message before pushing. Be cautious: if you’ve already pushed, use

git push --force-with-lease

13. See the List of Tracked Files

git ls-files

Displays all files being tracked by Git, which is useful for auditing your repository. To see ignored files

git ls-files --others --ignored --exclude-standard

14. Check the Difference Between Two Branches

git diff branch-1..branch-2

Compares changes between two branches, helping you understand what has been modified. To see only file names that changed

git diff --name-only branch-1..branch-2

15. Add a Remote Repository

git remote add origin repository-url

Links a remote repository to your local project, enabling push and pull operations. To verify remote repositories

git remote -v

16. Remove a Remote Repository

git remote remove origin

Unlinks your repository from a remote source, useful when switching remotes.

17. View the Last Commit Details

git show HEAD

Shows detailed information about the most recent commit, including the changes made. To see only the commit message

git log -1 --pretty=%B

18. Check Whatโ€™s Staged for Commit

git diff --staged

Displays changes that are staged for commit, helping you review before finalizing a commit.

19. Fetch and Rebase from a Remote Branch

git pull --rebase origin main

Combines fetching and rebasing in one step, keeping your branch up-to-date cleanly. If conflicts arise, resolve them manually and continue with

git rebase --continue

20. View All Git Aliases

git config --global --list | grep alias

If you’ve set up aliases, this command helps you see them all. Aliases can make your Git workflow faster by shortening common commands. For example,

git config --global alias.co checkout

allows you to use git co instead of git checkout.

Try these tricks in your daily development to level up your Git skills!

Learning Notes #69 – Getting Started with k6: Writing Your First Load Test

5 February 2025 at 15:38

Performance testing is a crucial part of ensuring the stability and scalability of web applications. k6 is a modern, open-source load testing tool that allows developers and testers to script and execute performance tests efficiently. In this blog, we’ll explore the basics of k6 and write a simple test script to get started.

What is k6?

k6 is a load testing tool designed for developers. It is written in Go but uses JavaScript for scripting tests. Key features include,

  • High performance with minimal resource consumption
  • JavaScript-based scripting
  • CLI-based execution with detailed reporting
  • Integration with monitoring tools like Grafana and Prometheus

Installation

For installation check : https://grafana.com/docs/k6/latest/set-up/install-k6/

Writing a Basic k6 Test

A k6 test is written in JavaScript. Here’s a simple script to test an API endpoint,


import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  vus: 10, // Number of virtual users
  duration: '10s', // Test duration
};

export default function () {
  let res = http.get('https://api.restful-api.dev/objects');
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
  sleep(1); // Simulate user wait time
}

Running the Test

Save the script as script.js and execute the test using the following command,

k6 run script.js

Understanding the Output

After running the test, k6 will provide a summary including,

  1. HTTP requests: Total number of requests made during the test.

  2. Response time metrics:
    • min: The shortest response time recorded.
    • max: The longest response time recorded.
    • avg: The average response time of all requests.
    • p(90), p(95), p(99): Percentile values indicating response time distribution.

  3. Checks: Number of checks passed or failed, such as status code validation.

  4. Virtual users (VUs):
    • vus_max: The maximum number of virtual users active at any time.
    • vus: The current number of active virtual users.

  5. Request Rate (RPS – Requests Per Second): The number of requests handled per second.

  6. Failures: Number of errors or failed requests due to timeouts or HTTP status codes other than expected.
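To make the percentile metrics concrete, here is a small, illustrative Python sketch using nearest-rank percentiles over made-up response times (k6 computes all of this for you):

```python
# Made-up response times in milliseconds; k6 reports these metrics itself.
response_times_ms = [120, 95, 110, 300, 105, 98, 250, 102, 99, 101]

def percentile(values, pct):
    """Nearest-rank percentile, the style load-testing summaries commonly use."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

print("min:", min(response_times_ms), "ms")
print("avg:", sum(response_times_ms) / len(response_times_ms), "ms")
print("max:", max(response_times_ms), "ms")
for p in (90, 95, 99):
    print(f"p({p}):", percentile(response_times_ms, p), "ms")
```

A p(95) of 300 ms here means 95% of requests finished in at most 300 ms; high percentiles expose tail latency that the average hides.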

Next Steps

Once you’ve successfully run your first k6 test, you can explore,

  • Load testing different APIs and endpoints
  • Running distributed tests
  • Exporting results to Grafana
  • Integrating k6 with CI/CD pipelines

k6 is a powerful tool that helps developers and QA engineers ensure their applications perform under load. Stay tuned for more in-depth tutorials on advanced k6 features!

RSVP for K6 : Load Testing Made Easy in Tamil

5 February 2025 at 10:57

Ensuring your applications perform well under high traffic is crucial. Join us for an interactive K6 Bootcamp, where we’ll explore performance testing, load testing strategies, and real-world use cases to help you build scalable and resilient systems.

🎯 What is K6 and Why Should You Learn It?

Modern applications must handle thousands (or millions!) of users without breaking. K6 is an open-source, developer-friendly performance testing tool that helps you

✅ Simulate real-world traffic and identify performance bottlenecks.
✅ Write tests in JavaScript – no need for complex tools!
✅ Run efficient load tests on APIs, microservices, and web applications.
✅ Integrate with CI/CD pipelines to automate performance testing.
✅ Gain deep insights with real-time performance metrics.

By mastering K6, you’ll gain the skills to predict failures before they happen, optimize performance, and build systems that scale with confidence!

📌 Bootcamp Details

📅 Date: Feb 23 2025 – Sunday
🕒 Time: 10:30 AM
🌐 Mode: Online (Link will be shared in email after RSVP)
🗣 Language: Tamil

🎓 Who Should Attend?

  • Developers – Ensure APIs and services perform well under load.
  • QA Engineers – Validate system reliability before production.
  • SREs / DevOps Engineers – Continuously test performance in CI/CD pipelines.

RSVP Now

🔥 Don’t miss this opportunity to master load testing with K6 and take your performance engineering skills to the next level!

Got questions? Drop them in the comments or reach out to me. See you at the bootcamp! 🚀

Our Previous Monthly meets – https://www.youtube.com/watch?v=cPtyuSzeaa8&list=PLiutOxBS1MizPGGcdfXF61WP5pNUYvxUl&pp=gAQB

Our Previous Sessions,

1. Python – https://www.youtube.com/watch?v=lQquVptFreE&list=PLiutOxBS1Mizte0ehfMrRKHSIQcCImwHL&pp=gAQB
2. Docker – https://www.youtube.com/watch?v=nXgUBanjZP8&list=PLiutOxBS1Mizi9IRQM-N3BFWXJkb-hQ4U&pp=gAQB
3. Postgres – https://www.youtube.com/watch?v=04pE5bK2-VA&list=PLiutOxBS1Miy3PPwxuvlGRpmNo724mAlt&pp=gAQB

Learning Notes #68 – Buildpacks and Dockerfile

2 February 2025 at 09:32

1. What is an OCI ?
2. Does Docker Create OCI Images?
3. What is a Buildpack ?
4. Overview of Buildpack Process
5. Builder: The Image That Executes the Build
  1. Components of a Builder Image
  2. Stack: The Combination of Build and Run Images
6. Installation and Initial Setups
7. Basic Build of an Image (Python Project)
  1. Building an image using buildpack
  2. Building an Image using Dockerfile
8. Unique Benefits of Buildpacks
  1. No Need for a Dockerfile (Auto-Detection)
  2. Automatic Security Updates
  3. Standardized & Reproducible Builds
  4. Extensibility: Custom Buildpacks
9. Generating SBOM in Buildpacks
  a) Using pack CLI to Generate SBOM
  b) Generate SBOM in Docker

Over the last few days, I have been exploring Buildpacks, and I am impressed by how this tool reduces developer pain. In this blog I jot down my experience with Buildpacks.

Before trying Buildpacks, we need to understand: what is an OCI?

What is an OCI ?

An OCI Image (Open Container Initiative Image) is a standard format for container images, defined by the Open Container Initiative (OCI) to ensure interoperability across different container runtimes (Docker, Podman, containerd, etc.).

It consists of,

1. Manifest – Metadata describing the image (layers, config, etc.).
2. Config JSON – Information about how the container should run (CMD, ENV, etc.).
3. Filesystem Layers – The actual file system of the container.

The OCI Image Specification ensures that container images built once can run on any OCI-compliant runtime.
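As an illustration of those three parts, here is an abridged, hand-written manifest in the OCI shape (digest values are deliberately elided; a real manifest is produced by your build tool, not written by hand):

```python
import json

# Abridged, illustrative OCI image manifest (digests elided on purpose).
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": "sha256:...",  # hash of the config JSON
        "size": 1469,
    },
    "layers": [
        {
            "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "digest": "sha256:...",  # hash of one filesystem layer
            "size": 28851234,
        },
    ],
}

print(json.dumps(manifest, indent=2))
```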

Does Docker Create OCI Images?

Yes, Docker creates OCI-compliant images. Since Docker v1.10+, Docker has been aligned with the OCI Image Specification, and all Docker images are OCI-compliant by default.

  • When you build an image with docker build, it follows the OCI Image format.
  • When you push/pull images to registries like Docker Hub, they follow the OCI Image Specification.

However, Docker also supports its legacy Docker Image format, which existed before OCI was introduced. Most modern registries and runtimes (Kubernetes, Podman, containerd) support OCI images natively.

What is a Buildpack ?

A buildpack is a framework for transforming application source code into a runnable image by handling dependencies, compilation, and configuration. Buildpacks are widely used in cloud environments like Heroku, Cloud Foundry, and Kubernetes (via Cloud Native Buildpacks).

Overview of Buildpack Process

The buildpack process consists of two primary phases

  • Detection Phase: Determines if the buildpack should be applied based on the app’s dependencies.
  • Build Phase: Executes the necessary steps to prepare the application for running in a container.

Buildpacks work with a lifecycle manager (e.g., Cloud Native Buildpacks’ lifecycle) that orchestrates the execution of multiple buildpacks in an ordered sequence.
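The detection phase can be pictured with a toy model: a buildpack "opts in" when its marker files are present in the app source. This sketch is illustrative only, not the real buildpack API:

```python
from pathlib import Path

# Toy model of the detection phase (illustrative, not the real API):
# a buildpack applies when one of its marker files exists in the app dir.
BUILDPACK_MARKERS = {
    "python": ("requirements.txt", "pyproject.toml"),
    "node": ("package.json",),
    "go": ("go.mod",),
}

def detect(app_dir: str) -> list[str]:
    """Return the names of buildpacks whose detection criteria match."""
    root = Path(app_dir)
    return [
        name
        for name, markers in BUILDPACK_MARKERS.items()
        if any((root / marker).is_file() for marker in markers)
    ]
```

A directory containing only requirements.txt would be claimed by the "python" buildpack and no other.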

Builder: The Image That Executes the Build

A builder is an image that contains all necessary components to run a buildpack.

Components of a Builder Image

1. Build Image – Used during the build phase (includes compilers, dependencies, etc.).
2. Run Image – A minimal environment for running the final built application.
3. Lifecycle – The core mechanism that executes buildpacks, orchestrates the process, and ensures reproducibility.

Stack: The Combination of Build and Run Images

  • Build Image + Run Image = Stack
  • Build Image: Base OS with tools required for building (e.g., Ubuntu, Alpine).
  • Run Image: Lightweight OS with only the runtime dependencies for execution.

Installation and Initial Setups

Basic Build of an Image (Python Project)

Project Source: https://github.com/syedjaferk/gh_action_docker_build_push_fastapi_app

Building an image using buildpack

Before running these commands, ensure you have the Pack CLI (pack) installed.

a) Suggest a builder

pack builder suggest

b) Build the image

pack build my-python-app --builder paketobuildpacks/builder:base

c) Run the image locally

docker run -p 8080:8080 my-python-app

Building an Image using Dockerfile

a) Dockerfile

FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .

RUN pip install -r requirements.txt

COPY ./random_id_generator ./random_id_generator
COPY app.py app.py

EXPOSE 8080

CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8080"]

b) Build and Run

docker build -t my-python-app .
docker run -p 8080:8080 my-python-app

Unique Benefits of Buildpacks

No Need for a Dockerfile (Auto-Detection)

Buildpacks automatically detect the language and dependencies, removing the need for a Dockerfile.

pack build my-python-app --builder paketobuildpacks/builder:base

It detects Python, installs dependencies, and builds the app into a container. 🚀 Docker, by contrast, requires a Dockerfile, which developers must manually configure and maintain.

Automatic Security Updates

Buildpacks automatically patch base images for security vulnerabilities.

If there’s a CVE in the OS layer, Buildpacks update the base image without rebuilding the app.

pack rebase my-python-app

No need to rebuild! It replaces only the OS layers while keeping the app the same.

Standardized & Reproducible Builds

Ensures consistent images across environments (dev, CI/CD, production). Example: running the same build locally and on Heroku/Cloud Run,

pack build my-python-app

Extensibility: Custom Buildpacks

Developers can create custom Buildpacks to add special dependencies.

Example: Adding ffmpeg to a Python buildpack,

pack buildpack package my-custom-python-buildpack --path .

Generating SBOM in Buildpacks

a) Using pack CLI to Generate SBOM

After building an image with pack, run,

pack sbom download my-python-app --output-dir ./sbom

  • This fetches the SBOM for your built image.
  • The SBOM is saved in the ./sbom/ directory.

✅ Supported formats:

  • SPDX (sbom.spdx.json)
  • CycloneDX (sbom.cdx.json)
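To give a feel for what you can do with a downloaded SBOM, here is a small sketch that reads a CycloneDX-style document with only the standard library. The JSON is a hand-written fragment for illustration, not real pack output:

```python
import json

# Hand-written CycloneDX-style fragment (illustrative, not real pack output).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"name": "fastapi", "version": "0.110.0"},
    {"name": "uvicorn", "version": "0.29.0"}
  ]
}
"""

sbom = json.loads(sbom_json)
pinned = [f'{c["name"]}=={c["version"]}' for c in sbom["components"]]
print("\n".join(pinned))
```

The same loop works on a real sbom.cdx.json file, e.g. to audit which dependency versions ended up inside the image.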

b) Generate SBOM in Docker

For a Docker-built image, you can generate an SBOM with a scanner such as Trivy,

trivy image --format cyclonedx -o sbom.json my-python-app

Both approaches produce runnable images; it’s all about the trade-offs.

    โŒ
    โŒ