
Display and switch namespaces using kubens in K8s

29 March 2025 at 02:23

Install kubectx
$ sudo snap install kubectx --classic

Create a namespace named development
$ kubectl create namespace development

To list available namespaces
$ kubens

To switch to the development namespace
$ kubens development

To switch back to the default namespace from development
$ kubens default
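
To check which namespace is currently active, either of these should work:
$ kubens -c
$ kubectl config view --minify --output 'jsonpath={..namespace}'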

To delete all resources from a namespace
$ kubectl delete all --all -n {namespace}

Setting Up Kubernetes and Nginx Ingress Controller on an EC2 Instance

By: Ragul.M
19 March 2025 at 16:52

Introduction

Kubernetes (K8s) is a powerful container orchestration platform that simplifies application deployment and scaling. In this guide, we’ll set up Kubernetes on an AWS EC2 instance, install the Nginx Ingress Controller, and configure Ingress rules to expose multiple services (app1 and app2).

Step 1: Setting Up Kubernetes on an EC2 Instance
1.1 Launch an EC2 Instance
Choose an instance with enough resources (e.g., t3.medium or larger) and install Ubuntu 20.04 or Amazon Linux 2.
1.2 Update Packages

sudo apt update && sudo apt upgrade -y  # For Ubuntu 

1.3 Install Docker

sudo apt install -y docker.io  
sudo systemctl enable --now docker

1.4 Install Kubernetes (kubectl, kubeadm, kubelet)

sudo apt install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl

1.5 Initialize Kubernetes

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Follow the output instructions to set up kubectl for your user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

1.6 Install a Network Plugin (Calico)

For Calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Now, Kubernetes is ready!
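
A note for single-node setups like this one: kubeadm taints the control-plane node by default, so application pods won't be scheduled on it. If your pods stay in Pending, removing the taint should help (the taint key below is for recent Kubernetes versions; older ones use node-role.kubernetes.io/master-):

kubectl taint nodes --all node-role.kubernetes.io/control-plane-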

Step 2: Install Nginx Ingress Controller
Nginx Ingress Controller helps manage external traffic to services inside the cluster.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml

Wait until the controller is running:

kubectl get pods -n ingress-nginx

You should see ingress-nginx-controller running.
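
You can also inspect the controller's Service to see how it is exposed (on a plain EC2 cluster without a cloud load balancer integration, the external IP may stay pending; switching the Service to NodePort or using the node's public IP is a common workaround):

kubectl get svc -n ingress-nginx ingress-nginx-controller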

Step 3: Deploy Two Applications (app1 and app2)
3.1 Deploy app1
Create app1-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: nginx
        ports:
        - containerPort: 80

Create app1-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: app1-service
spec:
  selector:
    app: app1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

Apply the resources:

kubectl apply -f app1-deployment.yaml 
kubectl apply -f app1-service.yaml

3.2 Deploy app2
Create app2-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: nginx
        ports:
        - containerPort: 80

Create app2-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: app2-service
spec:
  selector:
    app: app2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

Apply the resources:

kubectl apply -f app2-deployment.yaml 
kubectl apply -f app2-service.yaml

Step 4: Configure Ingress for app1 and app2
Create nginx-ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
  - host: app2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80

Apply the Ingress rule:

kubectl apply -f nginx-ingress.yaml

Step 5: Verify Everything
5.1 Get Ingress External IP

kubectl get ingress

5.2 Update /etc/hosts (Local Testing Only)
If you're testing on a local machine, add this to /etc/hosts:

<EXTERNAL-IP> app1.example.com
<EXTERNAL-IP> app2.example.com

Replace with the actual external IP of your Ingress Controller.
5.3 Test in Browser or Curl

curl http://app1.example.com
curl http://app2.example.com

If everything is set up correctly, you should see the default Nginx welcome page for both applications.
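
If you prefer not to touch /etc/hosts, passing the Host header directly to curl against the controller's external IP should give the same result:

curl -H "Host: app1.example.com" http://<EXTERNAL-IP>/
curl -H "Host: app2.example.com" http://<EXTERNAL-IP>/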

Conclusion
In this guide, we:

  • Installed Kubernetes on an EC2 instance
  • Set up Nginx Ingress Controller
  • Deployed two services (app1 and app2)
  • Configured Ingress to expose them via domain names

Now, you can easily manage multiple applications in your cluster using a single Ingress resource.

Follow for more. Happy learning :)

How to Manage Multiple Cron Job Executions

16 March 2025 at 06:13

Cron jobs are a fundamental part of automating tasks in Unix-based systems. However, one common problem with cron jobs is multiple executions, where overlapping job runs can cause serious issues like data corruption, race conditions, or unexpected system load.

In this blog, we’ll explore why multiple executions happen, the potential risks, and how flock provides an elegant solution to ensure that a cron job runs only once at a time.

The Problem: Multiple Executions of Cron Jobs

Cron jobs are scheduled to run at fixed intervals, but sometimes a new job instance starts before the previous one finishes.

This can happen due to

  • Long-running jobs: If a cron job takes longer than its interval, a new instance starts while the old one is still running.
  • System slowdowns: High CPU or memory usage can delay job execution, leading to overlapping runs.
  • Simultaneous executions across servers: In a distributed system, multiple servers might execute the same cron job, causing duplication.

Example of a Problematic Cron Job

Let’s say we have the following cron job that runs every minute:

* * * * * /path/to/script.sh

If script.sh takes more than a minute to execute, a second instance will start before the first one finishes.

This can lead to:

✅ Duplicate database writes → Inconsistent data

✅ Conflicts in file processing → Corrupt files

✅ Overloaded system resources → Performance degradation

Real-World Example

Imagine a job that processes user invoices and sends emails

* * * * * /usr/bin/python3 /home/user/process_invoices.py

If the script takes longer than a minute to complete, multiple instances might start running, causing

  1. Users to receive multiple invoices.
  2. The database to get inconsistent updates.
  3. Increased server load due to excessive email sending.

The Solution: Using flock to Prevent Multiple Executions

flock is a Linux utility that manages file locks to ensure that only one instance of a process runs at a time. It works by locking a specific file, preventing other processes from acquiring the same lock.

Using flock in a Cron Job

Modify the cron job as follows

* * * * * /usr/bin/flock -n /tmp/myjob.lock /path/to/script.sh

How It Works

  • flock -n /tmp/myjob.lock → Tries to acquire a lock on /tmp/myjob.lock.
  • If the lock is available, the script runs.
  • If the lock is already held (i.e., another instance is running), flock prevents the new instance from starting.
  • -n (non-blocking) ensures that the job doesn’t wait for the lock and simply exits if it cannot acquire it.

This guarantees that only one instance of the job runs at a time.
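
If you prefer to keep the locking inside the script itself, so that every invocation is protected regardless of how it is started, a common shell pattern looks like this (a minimal sketch, assuming a Bash script and the same lock file):

#!/bin/bash
# Open file descriptor 200 on the lock file and try to take an exclusive, non-blocking lock.
exec 200>/tmp/myjob.lock
flock -n 200 || { echo "Another instance is already running"; exit 1; }

# ... the actual job goes here; the lock is released automatically when the script exits.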

Verifying the Solution

You can test the lock by manually running the script with flock

/usr/bin/flock -n /tmp/myjob.lock /bin/bash -c 'echo "Running job..."; sleep 30'

Open another terminal and try to run the same command. You’ll see that the second attempt exits immediately because the lock is already acquired.

Preventing multiple executions of cron jobs is essential for maintaining data consistency, system stability, and efficiency. By using flock, you can easily enforce single execution without complex logic.

✅ Simple & efficient solution. ✅ No external dependencies required. ✅ Works seamlessly with cron jobs.

So next time you set up a cron job, add flock and sleep peacefully knowing your tasks won’t collide. 🚀

Linux Mint Installation Drive – Dual Boot on 10+ Machines!

24 February 2025 at 16:18


Hey everyone! Today, we had an exciting Linux installation session at our college. We expected most attendees to do a full Linux installation, but instead we ended up setting up dual boot on 10+ machines! 💻✨

💡 Topics Covered:
🛠 Syed Jafer – FOSS, GLUGs, and open-source communities
🌍 Salman – Why FOSS matters & Linux Commands
🚀 Dhanasekar – Linux and DevOps
🔧 Guhan – GNU and free software

Challenges We Faced


🔐 BitLocker Encryption – Had to disable BitLocker on some laptops
🔧 BIOS/UEFI Problems – Secure Boot, boot order changes needed
🐧 GRUB Issues – Windows not showing up, required boot-repair

🎥 Watch the installation video and try it yourself! https://www.youtube.com/watch?v=m7sSqlam2Sk


▶ Linux Mint Installation Guide https://tkdhanasekar.wordpress.com/2025/02/15/installation-of-linux-mint-22-1-cinnamon-edition/

This is just the beginning!

Top Command in Linux: Tips for Effective Usage

17 February 2025 at 17:17

The top command in Linux is a powerful utility that provides real-time information about system performance, including CPU usage, memory usage, running processes, and more.

It is an essential tool for system administrators to monitor system health and manage resources effectively.

1. Basic Usage

Simply running top without any arguments displays an interactive screen showing system statistics and a list of running processes:

$ top

2. Understanding the top Output

The top interface is divided into multiple sections

Header Section

This section provides an overview of the system status, including uptime, load averages, and system resource usage.

  • Uptime and Load Average – Displays how long the system has been running and the average system load over the last 1, 5, and 15 minutes.
  • Task Summary – Shows the number of processes in various states:
    • Running – Processes actively executing on the CPU.
    • Sleeping – Processes waiting for an event or resource.
    • Stopped – Processes that have been paused.
    • Zombie – Processes that have completed execution but still have an entry in the process table. These occur when the parent process has not yet read the exit status of the child process. Zombie processes do not consume system resources but can clutter the process table if not handled properly.
  • CPU Usage – Breaks down CPU utilization into different categories:
    • us (User Space) – CPU time spent on user processes.
    • sy (System Space) – CPU time spent on kernel operations.
    • id (Idle) – Time when the CPU is not being used.
    • wa (I/O Wait) – Time spent waiting for I/O operations to complete.
    • st (Steal Time) – CPU cycles stolen by a hypervisor in a virtualized environment.
  • Memory Usage – Shows the total, used, free, and available RAM.
  • Swap Usage – Displays total, used, and free swap memory, which is used when RAM is full.

Process Table

The table below the header lists active processes with details such as:

  • PID – Process ID, a unique identifier for each process.
  • USER – The owner of the process.
  • PR – Priority of the process, affecting its scheduling.
  • NI – Nice value, which determines how favorable the process scheduling is.
  • VIRT – The total virtual memory used by the process.
  • RES – The actual RAM used by the process.
  • SHR – The shared memory portion.
  • S – Process state:
    • R – Running
    • S – Sleeping
    • Z – Zombie
    • T – Stopped
  • %CPU – The percentage of CPU time used.
  • %MEM – The percentage of RAM used.
  • TIME+ – The total CPU time consumed by the process.
  • COMMAND – The command that started the process.

3. Interactive Commands

While running top, various keyboard shortcuts allow dynamic interaction:

  • q – Quit top.
  • h – Display help.
  • k – Kill a process by entering its PID.
  • r – Renice a process (change priority).
  • z – Toggle color/monochrome mode.
  • M – Sort by memory usage.
  • P – Sort by CPU usage.
  • T – Sort by process runtime.
  • 1 – Toggle CPU usage breakdown for multi-core systems.
  • u – Filter processes by a specific user.
  • s – Change update interval.

4. Command-Line Options

The top command supports various options for customization:

  • -b (Batch mode): Displays output in a non-interactive mode, useful for scripting.
    $ top -b -n 1
    Here -n specifies the number of iterations before exiting.
  • -o FIELD (Sort by a specific field):
    $ top -o %CPU
    Sorts by CPU usage.
  • -d SECONDS (Refresh interval):
    $ top -d 3
    Updates the display every 3 seconds.
  • -u USERNAME (Show processes for a specific user):
    $ top -u john
  • -p PID (Monitor a specific process):
    $ top -p 1234
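
Batch mode is what makes top usable from scripts and cron jobs. A small sketch (the file path and line count are arbitrary):

$ top -b -n 1 | head -n 20 > /tmp/top-snapshot.txt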

5. Customizing top Display

Persistent Customization

To save custom settings, press W while running top. This saves the configuration to ~/.toprc.

Changing Column Layout

  • Press f to toggle the fields displayed.
  • Press o to change sorting order.
  • Press X to highlight sorted columns.

6. Alternative to top: htop, btop

For a more user-friendly experience, htop (or the newer btop) is an alternative:

$ sudo apt install htop  # Debian-based
$ sudo yum install htop  # RHEL-based
$ htop

It provides a visually rich interface with color coding and easy navigation.

20 Essential Git Command-Line Tricks Every Developer Should Know

5 February 2025 at 16:14

Git is a powerful version control system that every developer should master. Whether you’re a beginner or an experienced developer, knowing a few handy Git command-line tricks can save you time and improve your workflow. Here are 20 essential Git tips and tricks to boost your efficiency.

1. Undo the Last Commit (Without Losing Changes)

git reset --soft HEAD~1

If you made a commit but want to undo it while keeping your changes, this command resets the last commit but retains the modified files in your staging area.

This is useful when you realize you need to make more changes before committing.

If you also want to remove the changes from the staging area but keep them in your working directory, use,

git reset HEAD~1

2. Discard Unstaged Changes

git checkout -- <file>

Use this to discard local changes in a file before staging. Be careful, as this cannot be undone! If you want to discard all unstaged changes in your working directory, use,

git reset --hard HEAD

3. Delete a Local Branch

git branch -d branch-name

Removes a local branch safely if it’s already merged. If it’s not merged and you still want to delete it, use -D

git branch -D branch-name

4. Delete a Remote Branch

git push origin --delete branch-name

Deletes a branch from the remote repository, useful for cleaning up old feature branches. If you mistakenly deleted the branch and want to restore it, you can use

git checkout -b branch-name origin/branch-name

if it still exists remotely.

5. Rename a Local Branch

git branch -m old-name new-name

Useful when you want to rename a branch locally without affecting the remote repository. To update the remote reference after renaming, push the renamed branch and delete the old one,

git push origin -u new-name
git push origin --delete old-name

6. See the Commit History in a Compact Format

git log --oneline --graph --decorate --all

A clean and structured way to view Git history, showing branches and commits in a visual format. If you want to see a detailed history with diffs, use

git log -p

7. Stash Your Changes Temporarily

git stash

If you need to switch branches but don’t want to commit yet, stash your changes and retrieve them later with

git stash pop

To see all stashed changes

git stash list

8. Find the Author of a Line in a File

git blame file-name

Shows who made changes to each line in a file. Helpful for debugging or reviewing historical changes. If you want to ignore whitespace changes

git blame -w file-name

9. View a File from a Previous Commit

git show commit-hash:path/to/file

Useful for checking an older version of a file without switching branches. If you want to restore the file from an old commit

git checkout commit-hash -- path/to/file

10. Reset a File to the Last Committed Version

git checkout HEAD -- file-name

Restores the file to the last committed state, removing any local changes. If you want to reset all files

git reset --hard HEAD

11. Clone a Specific Branch

git clone -b branch-name --single-branch repository-url

Instead of cloning the entire repository, this fetches only the specified branch, saving time and space. If you want all branches but don’t want to check them out initially:

git clone --mirror repository-url

12. Change the Last Commit Message

git commit --amend -m "New message"

Use this to correct a typo in your last commit message before pushing. Be cautious—if you’ve already pushed, use

git push --force-with-lease

13. See the List of Tracked Files

git ls-files

Displays all files being tracked by Git, which is useful for auditing your repository. To see ignored files

git ls-files --others --ignored --exclude-standard

14. Check the Difference Between Two Branches

git diff branch-1..branch-2

Compares changes between two branches, helping you understand what has been modified. To see only file names that changed

git diff --name-only branch-1..branch-2

15. Add a Remote Repository

git remote add origin repository-url

Links a remote repository to your local project, enabling push and pull operations. To verify remote repositories

git remote -v

16. Remove a Remote Repository

git remote remove origin

Unlinks your repository from a remote source, useful when switching remotes.

17. View the Last Commit Details

git show HEAD

Shows detailed information about the most recent commit, including the changes made. To see only the commit message

git log -1 --pretty=%B

18. Check What’s Staged for Commit

git diff --staged

Displays changes that are staged for commit, helping you review before finalizing a commit.

19. Fetch and Rebase from a Remote Branch

git pull --rebase origin main

Combines fetching and rebasing in one step, keeping your branch up-to-date cleanly. If conflicts arise, resolve them manually and continue with

git rebase --continue

20. View All Git Aliases

git config --global --list | grep alias

If you’ve set up aliases, this command helps you see them all. Aliases can make your Git workflow faster by shortening common commands. For example

git config --global alias.co checkout

allows you to use git co instead of git checkout.

Try these tricks in your daily development to level up your Git skills!

Learning Notes #68 – Buildpacks and Dockerfile

2 February 2025 at 09:32

  1. What is an OCI ?
  2. Does Docker Create OCI Images?
  3. What is a Buildpack ?
  4. Overview of Buildpack Process
  5. Builder: The Image That Executes the Build
    1. Components of a Builder Image
    2. Stack: The Combination of Build and Run Images
  6. Installation and Initial Setups
  7. Basic Build of an Image (Python Project)
    1. Building an image using buildpack
    2. Building an Image using Dockerfile
  8. Unique Benefits of Buildpacks
    1. No Need for a Dockerfile (Auto-Detection)
    2. Automatic Security Updates
    3. Standardized & Reproducible Builds
    4. Extensibility: Custom Buildpacks
  9. Generating SBOM in Buildpacks
    1. a) Using pack CLI to Generate SBOM
    2. b) Generate SBOM in Docker

Over the last few days, I was exploring Buildpacks. I am amazed at how this tool reduces the developer's pain. In this blog, I jot down my experience with Buildpacks.

Before trying Buildpacks, we need to understand what an OCI image is.

What is an OCI ?

An OCI Image (Open Container Initiative Image) is a standard format for container images, defined by the Open Container Initiative (OCI) to ensure interoperability across different container runtimes (Docker, Podman, containerd, etc.).

It consists of,

  1. Manifest – Metadata describing the image (layers, config, etc.).
  2. Config JSON – Information about how the container should run (CMD, ENV, etc.).
  3. Filesystem Layers – The actual file system of the container.

OCI Image Specification ensures that container images built once can run on any OCI-compliant runtime.

Does Docker Create OCI Images?

Yes, Docker creates OCI-compliant images. Since Docker v1.10+, Docker has been aligned with the OCI Image Specification, and all Docker images are OCI-compliant by default.

  • When you build an image with docker build, it follows the OCI Image format.
  • When you push/pull images to registries like Docker Hub, they follow the OCI Image Specification.

However, Docker also supports its legacy Docker Image format, which existed before OCI was introduced. Most modern registries and runtimes (Kubernetes, Podman, containerd) support OCI images natively.

What is a Buildpack ?

A buildpack is a framework for transforming application source code into a runnable image by handling dependencies, compilation, and configuration. Buildpacks are widely used in cloud environments like Heroku, Cloud Foundry, and Kubernetes (via Cloud Native Buildpacks).

Overview of Buildpack Process

The buildpack process consists of two primary phases

  • Detection Phase: Determines if the buildpack should be applied based on the app’s dependencies.
  • Build Phase: Executes the necessary steps to prepare the application for running in a container.

Buildpacks work with a lifecycle manager (e.g., Cloud Native Buildpacks’ lifecycle) that orchestrates the execution of multiple buildpacks in an ordered sequence.

Builder: The Image That Executes the Build

A builder is an image that contains all necessary components to run a buildpack.

Components of a Builder Image

  1. Build Image – Used during the build phase (includes compilers, dependencies, etc.).
  2. Run Image – A minimal environment for running the final built application.
  3. Lifecycle – The core mechanism that executes buildpacks, orchestrates the process, and ensures reproducibility.

Stack: The Combination of Build and Run Images

  • Build Image + Run Image = Stack
  • Build Image: Base OS with tools required for building (e.g., Ubuntu, Alpine).
  • Run Image: Lightweight OS with only the runtime dependencies for execution.

Installation and Initial Setups
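
The main prerequisites are the pack CLI and a running Docker daemon. One way to install pack (assuming Homebrew/Linuxbrew is available; other install methods are listed in the Buildpacks documentation):

brew install buildpacks/tap/pack
pack version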

Basic Build of an Image (Python Project)

Project Source: https://github.com/syedjaferk/gh_action_docker_build_push_fastapi_app

Building an image using buildpack

Before running these commands, ensure you have Pack CLI (pack) installed.

a) Get a builder suggestion

pack builder suggest

b) Build the image

pack build my-python-app --builder paketobuildpacks/builder:base

c) Run the image locally


docker run -p 8080:8080 my-python-app

Building an Image using Dockerfile

a) Dockerfile


FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .

RUN pip install -r requirements.txt

COPY ./random_id_generator ./random_id_generator
COPY app.py app.py

EXPOSE 8080

CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8080"]

b) Build and Run


docker build -t my-python-app .
docker run -p 8080:8080 my-python-app

Unique Benefits of Buildpacks

No Need for a Dockerfile (Auto-Detection)

Buildpacks automatically detect the language and dependencies, removing the need for a Dockerfile.


pack build my-python-app --builder paketobuildpacks/builder:base

It detects Python, installs dependencies, and builds the app into a container. 🚀 Docker requires a Dockerfile, which developers must manually configure and maintain.

Automatic Security Updates

Buildpacks automatically patch base images for security vulnerabilities.

If there’s a CVE in the OS layer, Buildpacks update the base image without rebuilding the app.


pack rebase my-python-app

No need to rebuild! It replaces only the OS layers while keeping the app the same.

Standardized & Reproducible Builds

Ensures consistent images across environments (dev, CI/CD, production). Example: Running the same build locally and on Heroku/Cloud Run,


pack build my-app

Extensibility: Custom Buildpacks

Developers can create custom Buildpacks to add special dependencies.

Example: Adding ffmpeg to a Python buildpack,


pack buildpack package my-custom-python-buildpack --path .

Generating SBOM in Buildpacks

a) Using pack CLI to Generate SBOM

After building an image with pack, run,


pack sbom download my-python-app --output-dir ./sbom

  • This fetches the SBOM for your built image.
  • The SBOM is saved in the ./sbom/ directory.

✅ Supported formats:

  • SPDX (sbom.spdx.json)
  • CycloneDX (sbom.cdx.json)

b) Generate SBOM in Docker


trivy image --format cyclonedx -o sbom.json my-python-app

Both are helpful for creating images. It's all about the trade-offs.

Learning Notes #67 – Build and Push to a Registry (Docker Hub) with GH-Actions

28 January 2025 at 02:30

GitHub Actions is a powerful tool for automating workflows directly in your repository. In this blog, we’ll explore how to efficiently set up GitHub Actions to handle Docker workflows with environments, secrets, and protection rules.

Why Use GitHub Actions for Docker?

My code base is on GitHub, and I wanted to try out GitHub Actions to build and push images to Docker Hub seamlessly.

Setting Up GitHub Environments

GitHub Environments let you define settings specific to deployment stages. Here’s how to configure them:

1. Create an Environment

Go to your GitHub repository and navigate to Settings > Environments. Click New environment, name it (e.g., production), and save.

2. Add Secrets and Variables

Inside the environment settings, click Add secret to store sensitive information like DOCKER_USERNAME and DOCKER_TOKEN.

Use Variables for non-sensitive configuration, such as the Docker image name.
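
If you prefer the terminal, the same environment secrets and variables can be set with the GitHub CLI (a sketch assuming a recent gh version; the values are placeholders):

gh secret set DOCKER_USERNAME --env production --body "your-docker-username"
gh secret set DOCKER_TOKEN --env production --body "your-docker-access-token"
gh variable set DOCKER_IMAGE_NAME --env production --body "flask-app"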

3. Optional: Set Protection Rules

Enforce rules like requiring manual approval before deployments. Restrict deployments to specific branches (e.g., main).

Sample Workflow for Building and Pushing Docker Images

Below is a GitHub Actions workflow for automating the build and push of a Docker image based on a minimal Flask app.

Workflow: .github/workflows/docker-build-push.yml


name: Build and Push Docker Image

on:
  push:
    branches:
      - main  # Trigger workflow on pushes to the `main` branch

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    environment: production  # Specify the environment to use

    steps:
      # Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v3

      # Log in to Docker Hub using environment secrets
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}

      # Build the Docker image using an environment variable
      - name: Build Docker image
        env:
          DOCKER_IMAGE_NAME: ${{ vars.DOCKER_IMAGE_NAME }}
        run: |
          docker build -t ${{ secrets.DOCKER_USERNAME }}/$DOCKER_IMAGE_NAME:${{ github.run_id }} .

      # Push the Docker image to Docker Hub
      - name: Push Docker image
        env:
          DOCKER_IMAGE_NAME: ${{ vars.DOCKER_IMAGE_NAME }}
        run: |
          docker push ${{ secrets.DOCKER_USERNAME }}/$DOCKER_IMAGE_NAME:${{ github.run_id }}

To see the Actions run live: https://github.com/syedjaferk/gh_action_docker_build_push_fastapi_app/actions

Event Summary: Grafana & Friends Meetup Chennai – 25-01-2025

26 January 2025 at 04:47

🚀 Attended the Grafana & Friends Meetup Yesterday! 🚀

I usually have a question: as a developer, I have logs; isn't that enough? With a curious mind, I attended the Grafana & Friends Chennai meetup (Jan 25th, 2025).

Had an awesome time meeting fellow tech enthusiasts (devops engineers) and learning about cool ways to monitor and understand data better.
Big shoutout to the Grafana Labs community and Presidio for hosting such a great event!

Sandwich and Juice was nice 😋

Talk Summary,

1⃣ Making Data Collection Easier with Grafana Alloy
Dinesh J. and Krithika R shared how Grafana Alloy, combined with Open Telemetry, makes it super simple to collect and manage data for better monitoring.

2⃣ Running Grafana in Kubernetes
Lakshmi Narasimhan Parthasarathy (https://lnkd.in/gShxtucZ) showed how to set up Grafana in Kubernetes in 4 different ways (vanilla, helm chart, grafana operator, kube-prom-stack). He is building a SaaS product https://lnkd.in/gSS9XS5m (Heroku on your own servers).

3⃣ Observability for Frontend Apps with Grafana Faro
Selvaraj Kuppusamy showed how Grafana Faro can help frontend developers monitor what’s happening on websites and apps in real time. This makes it easier to spot and fix issues quickly. We were able to see core web vitals and traces too. I was surprised by this.

Techies i had interaction with,

Prasanth Baskar, who is an open source contributor at Cloud Native Computing Foundation (CNCF) on project https://lnkd.in/gmHjt9Bs. I was also happy to know that he knows **parottasalna** (that’s me) and read some blogs. Happy To Hear that.

Selvaraj Kuppusamy, DevOps Engineer, is also conducting the Grafana and Friends chapter in Coimbatore on Feb 1. I will attend that as well.

Saranraj Chandrasekaran, who is also a DevOps engineer. Had a chat with him on DevOps and related stuff.

To all of them, i shared about KanchiLUG (https://lnkd.in/gasCnxXv) and Parottasalna (https://parottasalna.com/) and My Channel on Tech https://lnkd.in/gKcyE-b5.

Thanks Achanandhi M for organising this wonderful meetup. You did well. I came to know Achanandhi M from Medium. He regularly writes blogs on cloud-related stuff. https://lnkd.in/ghUS-GTc Checkout his blog.

Also, He shared some tasks for us,

1. Create your First Grafana Dashboard.
Objective: Create a basic Grafana Dashboard to visualize data in various formats such as tables, charts and graphs. Also, try to connect to multiple data sources to get diverse data for your dashboard.

2. Monitor your Linux system’s health with Prometheus, Node Exporter and Grafana.
Objective: Use Prometheus, Node Exporter and Grafana to monitor your Linux machine’s health by tracking key metrics like CPU, memory and disk usage.


3. Using Grafana Faro to track User Actions (Like Button Clicks) and Identify the Most Used Features.

Give a try on these.

Learning Notes #61 – Undo a git pull

18 January 2025 at 16:04

Today, I came across a blog on undoing a git pull. In this post, I have reiterated that blog in my own words.

Mistakes happen. You run a git pull and suddenly find your repository in a mess. Maybe conflicts arose, or perhaps the changes merged from the remote branch aren’t what you expected.

Fortunately, Git’s reflog comes to the rescue, allowing you to undo a git pull and restore your repository to its previous state. Here’s how you can do it.

Understanding Reflog


Reflog is a powerful feature in Git that logs every update made to the tips of your branches and references. Even actions like resets or rebases leave traces in the reflog. This makes it an invaluable tool for troubleshooting and recovering from mistakes.

Whenever you perform a git pull, Git updates the branch pointer, and the reflog records this action. By examining the reflog, you can identify the exact state of your branch before the pull and revert to it if needed.

Step By Step Guide to UNDO a git pull

1. Check Your Current State Ensure you’re aware of the current state of your branch. If you have uncommitted changes, stash or commit them to avoid losing any work.


git stash
# or
git add . && git commit -m "Save changes before undoing pull"

2. Inspect the Reflog View the recent history of your branch using the reflog,


git reflog

This command will display a list of recent actions, showing commit hashes and descriptions. For example,


0a1b2c3 (HEAD -> main) HEAD@{0}: pull origin main: Fast-forward
4d5e6f7 HEAD@{1}: commit: Add new feature
8g9h0i1 HEAD@{2}: checkout: moving from feature-branch to main

3. Identify the Pre-Pull Commit Locate the commit hash of your branch’s state before the pull. In the above example, it’s 4d5e6f7, which corresponds to the commit made before the git pull.

4. Reset to the Previous Commit Use the git reset command to move your branch back to its earlier state,


git reset <commit-hash>

By default, it’s mixed so changes wont be removed but will be in staging.

The next time a pull operation goes awry, don’t panic—let the reflog guide you back to safety!

Learning Notes #22 – Claim Check Pattern | Cloud Pattern

31 December 2024 at 17:03

Today, I learnt about the claim check pattern, which describes how to handle a large message in a queue. Every message broker has a defined message size limit; if our message size exceeds that limit, it won’t work.

The Claim Check Pattern emerges as a pivotal architectural design to address challenges in managing large payloads in a decoupled and efficient manner. In this blog, I jot down notes on my learning for my future self.

What is the Claim Check Pattern?

The Claim Check Pattern is a messaging pattern used in distributed systems to manage large messages efficiently. Instead of transmitting bulky data directly between services, this pattern extracts and stores the payload in a dedicated storage system (e.g., object storage or a database).

A lightweight reference or “claim check” is then sent through the message queue, which the receiving service can use to retrieve the full data from the storage.

This pattern is inspired by the physical process of checking in luggage at an airport: you hand over your luggage, receive a claim check (a token), and later use it to retrieve your belongings.

How Does the Claim Check Pattern Work?

The process typically involves the following steps

  1. Data Submission The sender service splits a message into two parts:
    • Metadata: A small piece of information that provides context about the data.
    • Payload: The main body of data that is too large or sensitive to send through the message queue.
  2. Storing the Payload
    • The sender uploads the payload to a storage service (e.g., AWS S3, Azure Blob Storage, or Google Cloud Storage).
    • The storage service returns a unique identifier (e.g., a URL or object key).
  3. Sending the Claim Check
    • The sender service places the metadata and the unique identifier (claim check) onto the message queue.
  4. Receiving the Claim Check
    • The receiver service consumes the message from the queue, extracts the claim check, and retrieves the payload from the storage system.
  5. Processing
    • The receiver processes the payload alongside the metadata as required.

Use Cases

1. Media Processing Pipelines In video transcoding systems, raw video files can be uploaded to storage while metadata (e.g., video format and length) is passed through the message queue.

2. IoT Systems – IoT devices generate large datasets. Using the Claim Check Pattern ensures efficient transmission and processing of these data chunks.

3. Data Processing Workflows – In big data systems, datasets can be stored in object storage while processing metadata flows through orchestration tools like Apache Airflow.

4. Event-Driven Architectures – For systems using event-driven models, large event payloads can be offloaded to storage to avoid overloading the messaging layer.

Example with RabbitMQ

1.Sender Service


import json

import boto3
import pika

s3 = boto3.client('s3')
bucket_name = 'my-bucket'
object_key = 'data/large-file.txt'

s3.upload_file('large-file.txt', bucket_name, object_key)
claim_check = f's3://{bucket_name}/{object_key}'

# Connect to RabbitMQ
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare a queue
channel.queue_declare(queue='claim_check_queue')

# Send the claim check
message = {
    'metadata': 'Some metadata',
    'claim_check': claim_check
}
channel.basic_publish(exchange='', routing_key='claim_check_queue', body=json.dumps(message))

connection.close()

2. Consumer


import json

import boto3
import pika

s3 = boto3.client('s3')

# Connect to RabbitMQ
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare a queue
channel.queue_declare(queue='claim_check_queue')

# Callback function to process messages
def callback(ch, method, properties, body):
    message = json.loads(body)
    claim_check = message['claim_check']

    bucket_name, object_key = claim_check.replace('s3://', '').split('/', 1)
    s3.download_file(bucket_name, object_key, 'retrieved-large-file.txt')
    print("Payload retrieved and processed.")

# Consume messages
channel.basic_consume(queue='claim_check_queue', on_message_callback=callback, auto_ack=True)

print('Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

References

  1. https://learn.microsoft.com/en-us/azure/architecture/patterns/claim-check
  2. https://medium.com/@dmosyan/claim-check-design-pattern-603dc1f3796d

Learning Notes #19 – Blue Green Deployments – An near ZERO downtime deployment

30 December 2024 at 18:19

Today, I got a refresher on Blue-Green Deployment from a podcast: https://open.spotify.com/episode/03p86zgOuSEbNezK71CELH. Deployment design is an area I haven’t touched yet. In this blog, I jot down notes on blue-green deployment for my future self.

What is Blue-Green Deployment?

Blue-Green Deployment is a release management strategy that involves maintaining two identical environments, referred to as “Blue” and “Green.” At any point in time, only one environment is live (receiving traffic), while the other remains idle or in standby. Updates are deployed to the idle environment, thoroughly tested, and then switched to live with minimal downtime.

How It Works

  • This approach involves setting up two environments: the Blue environment, which serves live traffic, and the Green environment, a replica used for staging updates.
  • Updates are first deployed to the Green environment, where comprehensive testing is performed to ensure functionality, performance, and integration meet expectations.
  • Once testing is successful, the routing mechanism, such as a DNS or API Gateway or load balancer, is updated to redirect traffic from the Blue environment to the Green environment.
  • The Green environment then becomes live, while the Blue environment transitions to an idle state.
  • If issues arise, traffic can be reverted to the Blue environment for a quick recovery with minimal impact.

Benefits of Blue-Green Deployment

  • Blue-Green Deployment provides zero downtime during the deployment process, ensuring uninterrupted user experiences.
  • Rollbacks are simplified because the previous version remains intact in the Blue environment, enabling quick reversion if necessary. Consideration of forward and backward compatibility is important, e.g., for the database.
  • It also allows seamless testing in the Green environment before updates go live, reducing risks by isolating production from deployment issues.

Challenges and Considerations

  • Maintaining two identical environments can be resource intensive.
  • Ensuring synchronization between environments is critical to prevent discrepancies in configuration and data.
  • Handling live database changes during the environment switch is complex, requiring careful planning for database migrations.

Implementing Blue-Green Deployment (Not Yet Tried)

  • Several tools and platforms support Blue-Green Deployment. Kubernetes simplifies managing multiple environments through namespaces and services (a rough sketch of the traffic switch is shown after this list).
  • AWS Elastic Beanstalk offers built-in support for Blue-Green Deployment, while HashiCorp Terraform automates the setup of Blue-Green infrastructure.
  • To implement this strategy, organizations should design infrastructure capable of supporting two identical environments, automate deployments using CI/CD pipelines, monitor and test thoroughly, and define rollback procedures to revert to previous versions when necessary.
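
Purely as an illustration of the Kubernetes point above (not something covered in the podcast; the names and labels are hypothetical), the traffic switch can be as simple as repointing a Service selector from the blue Deployment to the green one:

# send traffic to the green deployment
kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}'

# roll back instantly by pointing the selector back at blue
kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"blue"}}}'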


Idea Implementation #1 – Automating Daily Quiz Delivery to Telegram with GitHub Actions

25 December 2024 at 17:53

Telegram Group: https://t.me/parottasalna

Github Repo: https://github.com/syedjaferk/daily_quiz_sender

In this blog, I’ll walk you through the journey of transitioning from a PythonAnywhere server to GitHub Actions for automating the delivery of daily quizzes to a Telegram group https://t.me/parottasalna . This implementation highlights the benefits of GitHub Actions to run a cronjob.

Problem Statement

I wanted to send a daily quiz to a Telegram group to keep members engaged and learning. Initially, I hosted the solution on a PythonAnywhere server.

Some of the limitations are,

  1. Only one cron job is allowed for a free account.
  2. Every x days, I need to visit the dashboard and reactivate my account to keep it working.

Recently, I started learning GitHub Actions, so I thought of leveraging its schedule mechanism for my use case.

Key Features of My Solution

  • Automated Scheduling: GitHub Actions triggers the script daily at a specified time.
  • Secure Secrets Management: Sensitive information like Telegram bot tokens and chat IDs are stored securely using GitHub Secrets.
  • Serverless Architecture: No server maintenance; everything runs on GitHub’s infrastructure.

Implementation Overview

I am not explaining the code here. Please checkout the repo https://github.com/syedjaferk/daily_quiz_sender

The core components of the implementation include

  1. A Python script (populate_questions.py) to create questions and store each question in the respective directory.
  2. A Python script (runner.py) which takes telegram_bot_url, chat_id and category as input to read the correct question from the category directory and send the message.
  3. A GitHub Actions workflow to automate execution.

Workflow Yaml File


name: Docker Quiz Sender

on:
  schedule:
    - cron: "30 4 * * *"

  workflow_dispatch:

jobs:
  run-python:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi

      - name: Run script with secrets
        run: python runner.py --telegram_url ${{ secrets.TELEGRAM_BOT_URL }} --chat_id ${{ secrets.CHAT_ID }} --category docker

Benefits of the New Implementation

  1. Cost-Effectiveness: No server costs thanks to GitHub’s free tier for public repositories.
  2. Reliability: GitHub Actions ensures the script runs on schedule without manual intervention.
  3. Security: GitHub Secrets securely manages sensitive credentials.
  4. Centralization: All code, configuration, and automation reside in a single repository.

Lessons Learned

  • Environment Variables: Using environment variables for secrets keeps the code clean and secure.
  • Error Handling: Adding error-handling mechanisms in the script helps debug issues effectively.
  • Time Zones: GitHub Actions uses UTC; be mindful when scheduling tasks for specific time zones.

Future Enhancements

  1. Dynamic Quizzes: Automate quiz generation from a database (sqlite3) or external API.
  2. Logging: Store logs as GitHub Actions artifacts for better debugging and tracking.
  3. Failure Notifications: Notify via Telegram or email if the workflow fails.
  4. Extend for other categories

You can check out the project here. Feel free to fork it and adapt it to your needs!

Learning Notes #2 – Automate Email Notifications in GitHub Actions

20 December 2024 at 07:07

Today, I was checking YouTube videos on GitHub Actions. I came across a video on sending a mail via a GitHub Action: https://www.youtube.com/watch?v=SkD7KQ3KzZs&t=108s. This blog is just an implementation of the video.

What am i going to do ?

I need to send a mail using my Gmail ID via GitHub Actions.

What prior information do you need?

  1. Gmail SMTP Server address – smtp.gmail.com
  2. Gmail SMTP port – 465
  3. Email username – your mail id.
  4. Email password – App Password from Google. https://support.google.com/accounts/answer/185833?hl=en.

YAML File


name: "Send Mail"

on:
  workflow_dispatch:

jobs:
  mail:
    runs-on: ubuntu-latest
    steps:
      - name: Print Name
        run: echo "Sending Mail"
    
      - name: Send Mail
        if: ${{ always() }}
        uses: dawidd6/action-send-mail@v3
        with:
          server_address: smtp.gmail.com
          server_port: 465

          username: ${{ secrets.EMAIL_USERNAME }}
          password: ${{ secrets.EMAIL_PASSWORD }}

          subject: ${{ github.job }} job of ${{ github.repository }} has  ${{ github.status }}
          body: "Test Message in Github"
          to:  syedjafer1997@gmail.com
          from: Syed Jafer K



My Github Run: https://github.com/syedjaferk/gh_actions_templates/actions/runs/12426929593

Learning Notes #01 – Github Actions

19 December 2024 at 16:31

GitHub Actions is a CI/CD tool that automates workflows directly from your GitHub repository. I am writing this notes for my future self.

Github: https://github.com/syedjaferk/gh_action_docker_build_push_fastapi_app

0. Github Actions

Github Actions is a managed CI/CD pipeline offering. It provides free runners for running the code.

1. Workflow, Jobs, Steps

A workflow is a collection of jobs defined in a .yml file inside .github/workflows. Each workflow consists of jobs, and jobs have steps.


name: My First Workflow
on: push
jobs:
  example-job:
    runs-on: ubuntu-latest
    steps:
      - name: Print a message
        run: echo "Hello, GitHub Actions!"

2. Availability and Pricing

GitHub Actions is free for public repositories and has free usage limits for private repositories, depending on the plan. Paid plans increase these limits. For detailed pricing, visit the GitHub Actions pricing page.

3. First workflow with basic echo commands

Start with a workflow triggered by any push event. Here’s a simple example,


name: Echo Workflow
on: push
jobs:
  echo-job:
    runs-on: ubuntu-latest
    steps:
      - name: Say Hello
        run: echo "Hello from my first workflow!"

4. Multiline Shell Commands

You can use the | symbol to write multiline shell commands.


name: Multiline Command Workflow
on: push
jobs:
  multiline-job:
    runs-on: ubuntu-latest
    steps:
      - name: Run multiple commands
        run: |
          echo "Setting things up..."
          echo "Running the main task..."
          echo "Cleaning up..."

5. A new workflow with push events

Workflows can be triggered by specific events like push. This example triggers only on push to the main branch.


name: Push Event Workflow
on:
  push:
    branches:
      - main
jobs:
  push-job:
    runs-on: ubuntu-latest
    steps:
      - name: On Push
        run: echo "Code pushed to main branch!"

6. Using actions in workflow (Marketplace and Open Source)

GitHub Actions Marketplace offers reusable actions. For example, using the actions/checkout action.


steps:
  - name: Checkout Code
    uses: actions/checkout@v3

7. Checkout in code

Checking out your code is essential to access the repository content. Use actions/checkout


steps:
  - name: Checkout Code
    uses: actions/checkout@v3

8. Adding workflow job steps

Each job can have multiple steps.


jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Step 1 - Checkout Code
        uses: actions/checkout@v3
      - name: Step 2 - Run a Script
        run: echo "Script running in step 2."

9. Jobs in Parallel and Sequential

To run jobs in parallel, define them without dependencies. For sequential jobs, use needs.

Parallel Jobs:

jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Job 1"

  job2:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Job 2"

Sequential Jobs:


jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Job 1"

  job2:
    runs-on: ubuntu-latest
    needs: job1
    steps:
      - run: echo "Job 2 after Job 1"

10. Using Multiple Triggers

A workflow can be triggered by multiple events.


on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

11. Expressions and Context Objects

Expressions are used to access variables and contexts like github or env.


steps:
  - name: Use an Expression
    run: echo "Triggered by ${{ github.event_name }}"

Problem Statements : Git & Github Session – St. Joseph’s GDG Meeting

17 December 2024 at 15:13

A list of problem statements, enough to get your hands dirty with git. These are the commands that you mostly use in your development.

Problem 1

  1. Initialize a Repository.
  2. Setup user details globally.
  3. Setup project specific user details.
  4. Check Configuration – List the configurations.

Problem 2

  1. Add specific files. Create two files, app.js and style.css. Use git add to stage only style.css. This allows selective addition of files to the staging area before committing.
  2. Stage all files except one.

Problem 3

  1. Commit with a message
  2. Amend a commit
  3. Commit without staging

Problem 4

  1. Create a Branch
  2. Create a new branch named feature/api to work on a feature independently without affecting the main branch.
  3. Delete a branch.
  4. Force delete a branch.
  5. Rename a branch.
  6. List all branches.

Problem 5

  1. Switch to a branch
  2. Switch to the main branch using git checkout.
  3. Create and switch to a branch
  4. Create a new branch named bugfix/001 and switch to it in a single command with git checkout -b.

Problem 6

  1. Start with a repository containing a file named project.txt
  2. Create two branches (feature-1 and feature-2) from the main branch.
  3. Make changes to project.txt in both branches.
  4. Attempt to merge feature-1 and feature-2 into the main branch.
  5. Resolve any merge conflicts and complete the merge process.

Problem 7

  1. View history in one-line format
  2. Graphical commit history
  3. Filter commits by Author
  4. Show changes in a commit

Problem 8

  1. Fetch updates from remote
  2. Fetch and Merge
  3. Fetch changes from the remote branch origin/main and merge them into your local main
  4. List remote references

Problem 9

  1. Create a stash
  2. Apply a stash
  3. Pop a stash
  4. View stash

Problem 10

  1. You need to undo the last commit but want to keep the changes staged for a new commit. What will you do ?

Problem 11

  1. You realize you staged some files for commit but want to unstage them while keeping the changes in your working directory. Which git command will allow you to unstage the files without losing any change ?

Problem 12

  1. You decide to completely discard all local changes and reset the repository to the state of the last commit. What git command should you run to discard all changes and reset your working directory ?

Automating AWS Cost Management Reports with Lambda

By: Ragul.M
11 December 2024 at 16:08

Monitoring AWS costs is essential for keeping budgets in check. In this guide, we’ll walk through creating an AWS Lambda function to retrieve cost details and send them to email (via SES) and Slack.

Prerequisites

  1. AWS account with IAM permissions for Lambda, SES, and Cost Explorer.
  2. Slack webhook URL to send messages.
  3. Configured SES email for notifications.
  4. S3 bucket for storing cost reports as CSV files.

Step 1: Enable Cost Explorer

  • Go to AWS Billing Dashboard > Cost Explorer.
  • Enable Cost Explorer to access detailed cost data.

Step 2: Create an S3 Bucket

  • Create an S3 bucket (e.g., aws-cost-reports) to store cost reports.
  • Ensure the bucket has appropriate read/write permissions for Lambda.

Step 3: Write the Lambda Code

  1. Create a Lambda function
    • Go to AWS Lambda > Create Function.
    • Select a Python runtime (e.g., Python 3.9).
  2. Add dependencies
    • Use a Lambda layer or package libraries like boto3 and slack_sdk.
  3. Write your Python code and execute it (a rough CLI equivalent of the cost query is sketched below). If you want my code, just comment "ease-py-code" on my blog and I will share it 🫶
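
As a rough illustration of the kind of query such a function makes (this is not the withheld code; the dates and grouping are placeholders), the equivalent AWS CLI call looks like this:

aws ce get-cost-and-usage \
  --time-period Start=2024-12-01,End=2024-12-11 \
  --granularity DAILY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=SERVICE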

Step 4: Add IAM Permissions
Update the Lambda execution role to allow s3:PutObject, ses:SendEmail, and ce:GetCostAndUsage.

Step 5: Test the Lambda

  1. Trigger the Lambda manually using a test event.
  2. Verify the cost report is:
    • Uploaded to the S3 bucket.
    • Emailed via SES.
    • Notified in Slack.

Conclusion
With this setup, AWS cost reports are automatically delivered to your inbox and Slack, keeping you updated on spending trends. Fine-tune this solution by customizing the report frequency or grouping costs by other dimensions.

Follow for more and happy learning :)

Exploring Kubernetes: A Step Ahead of Basics

By: Ragul.M
10 December 2024 at 05:35

Kubernetes is a powerful platform that simplifies the management of containerized applications. If you’re familiar with the fundamentals, it’s time to take a step further and explore intermediate concepts that enhance your ability to manage and optimize Kubernetes clusters.

  1. Understanding Deployments
    A Deployment ensures your application runs reliably by managing scaling, updates, and rollbacks.

  2. Using ConfigMaps and Secrets
    Kubernetes separates application configuration and sensitive data from the application code using ConfigMaps and Secrets.

    ConfigMaps

    Store non-sensitive configurations, such as environment variables or application settings.

kubectl create configmap app-config --from-literal=ENV=production 
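
Secrets are created in much the same way for sensitive values (the literal below is only a placeholder):

kubectl create secret generic app-secret --from-literal=DB_PASSWORD=changeme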

3. Liveness and Readiness Probes

Probes ensure your application is healthy and ready to handle traffic.

Liveness Probe
Checks if your application is running. If it fails, Kubernetes restarts the pod.

Readiness Probe
Checks if your application is ready to accept traffic. If it fails, Kubernetes stops routing requests to the pod.

4. Resource Requests and Limits
To ensure efficient resource utilization, define requests (minimum resources a pod needs) and limits (maximum resources a pod can use).
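
For example, requests and limits can be set on an existing Deployment straight from the command line (the deployment name and values are just an illustration):

kubectl set resources deployment my-app --requests=cpu=100m,memory=128Mi --limits=cpu=500m,memory=512Mi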

5. Horizontal Pod Autoscaling (HPA)
Scale your application dynamically based on CPU or memory usage.
Example:

kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10 

This ensures your application scales automatically when resource usage increases or decreases. (Note that HPA relies on the metrics-server add-on being installed in the cluster to provide CPU and memory metrics.)

6. Network Policies
Control how pods communicate with each other and external resources using Network Policies.

Conclusion

By mastering these slightly advanced Kubernetes concepts, you’ll improve your cluster management, application reliability, and resource utilization. With this knowledge, you’re well-prepared to dive into more advanced topics like Helm, monitoring with Prometheus, and service meshes like Istio.

Follow for more and Happy learning :)

Understanding Kubernetes Basics: A Beginner’s Guide

By: Ragul.M
29 November 2024 at 17:12

In today’s tech-driven world, Kubernetes has emerged as one of the most powerful tools for container orchestration. Whether you’re managing a few containers or thousands of them, Kubernetes simplifies the process, ensuring high availability, scalability, and efficient resource utilization. This blog will guide you through the basics of Kubernetes, helping you understand its core components and functionality.

What is Kubernetes?
Kubernetes, often abbreviated as K8s, is an open-source platform developed by Google that automates the deployment, scaling, and management of containerized applications. It was later donated to the Cloud Native Computing Foundation (CNCF).
With Kubernetes, developers can focus on building applications, while Kubernetes takes care of managing their deployment and runtime.

Key Features of Kubernetes

  1. Automated Deployment and Scaling Kubernetes automates the deployment of containers and can scale them up or down based on demand.
  2. Self-Healing If a container fails, Kubernetes replaces it automatically, ensuring minimal downtime.
  3. Load Balancing Distributes traffic evenly across containers, optimizing performance and preventing overload.
  4. Rollbacks and Updates Kubernetes manages seamless updates and rollbacks for your applications without disrupting service.
  5. Resource Management Optimizes hardware utilization by efficiently scheduling containers across the cluster.

Core Components of Kubernetes
To understand Kubernetes, let’s break it down into its core components:

  1. Cluster – A Kubernetes cluster consists of:
    • Master Node: The control plane managing the entire cluster.
    • Worker Nodes: Machines where containers run.
  2. Pods – The smallest deployable unit in Kubernetes. A pod can contain one or more containers that share resources like storage and networking.
  3. Nodes – Physical or virtual machines that run the pods. Managed by the Kubelet, a process ensuring pods are running as expected.
  4. Services – Allow communication between pods and other resources, both inside and outside the cluster. Examples include ClusterIP, NodePort, and LoadBalancer services.
  5. ConfigMaps and Secrets – ConfigMaps store configuration data for your applications; Secrets store sensitive data like passwords and tokens securely.
  6. Namespaces – Virtual clusters within a Kubernetes cluster, used for organizing and isolating resources.

Conclusion
Kubernetes has revolutionized the way we manage containerized applications. By automating tasks like deployment, scaling, and maintenance, it allows developers and organizations to focus on innovation. Whether you're a beginner or a seasoned developer, mastering Kubernetes is a skill that will enhance your ability to build and manage modern applications.

Follow for more and Happy learning :)
