
Display and switch namespaces using kubens in K8s

29 March 2025 at 02:23

Install kubectx
$ sudo snap install kubectx --classic

Create a namespace named development
$ kubectl create namespace development

To list available namespaces
$ kubens

To switch to the development namespace
$ kubens development

To switch back to the default namespace from development
$ kubens default

To delete all resources from a namespace
$ kubectl delete all --all -n {namespace}

How to create a namespace in K8s

29 March 2025 at 02:04

A namespace is a mechanism for logically partitioning and isolating resources within a single cluster, allowing multiple teams or projects to share the same cluster without conflicts.

Creating a namespace mygroup using a manifest file
$ vim mygroup.yml
apiVersion: v1
kind: Namespace
metadata:
  name: mygroup
:x

$ kubectl apply -f mygroup.yml

To list all namespaces
$ kubectl get namespaces

To switch to the mygroup namespace
$ kubectl config set-context --current --namespace=mygroup

To delete namespace mygroup
$ kubectl delete namespace mygroup

Setting Up Kubernetes and Nginx Ingress Controller on an EC2 Instance

By: Ragul.M
19 March 2025 at 16:52

Introduction

Kubernetes (K8s) is a powerful container orchestration platform that simplifies application deployment and scaling. In this guide, we’ll set up Kubernetes on an AWS EC2 instance, install the Nginx Ingress Controller, and configure Ingress rules to expose multiple services (app1 and app2).

Step 1: Setting Up Kubernetes on an EC2 Instance
1.1 Launch an EC2 Instance
Choose an instance with enough resources (e.g., t3.medium or larger) and install Ubuntu 20.04 or Amazon Linux 2.
1.2 Update Packages

sudo apt update && sudo apt upgrade -y  # For Ubuntu 

1.3 Install Docker

sudo apt install -y docker.io  
sudo systemctl enable --now docker

1.4 Install Kubernetes (kubectl, kubeadm, kubelet)

sudo apt install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl

1.5 Initialize Kubernetes

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Follow the output instructions to set up kubectl for your user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

1.6 Install a Network Plugin (Calico)

For Calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Now, Kubernetes is ready!
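
If you are running everything on this single EC2 instance (control plane and workloads together), you will likely also need to remove the control-plane taint so application pods can be scheduled on it. The taint key differs across Kubernetes versions, so one of the following should apply:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-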

Step 2: Install Nginx Ingress Controller
Nginx Ingress Controller helps manage external traffic to services inside the cluster.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml

Wait until the controller is running:

kubectl get pods -n ingress-nginx

You should see ingress-nginx-controller running.

Step 3: Deploy Two Applications (app1 and app2)
3.1 Deploy app1
Create app1-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: nginx
        ports:
        - containerPort: 80

Create app1-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: app1-service
spec:
  selector:
    app: app1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

Apply the resources:

kubectl apply -f app1-deployment.yaml 
kubectl apply -f app1-service.yaml

3.2 Deploy app2
Create app2-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: nginx
        ports:
        - containerPort: 80

Create app2-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: app2-service
spec:
  selector:
    app: app2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

Apply the resources:

kubectl apply -f app2-deployment.yaml 
kubectl apply -f app2-service.yaml

Step 4: Configure Ingress for app1 and app2
Create nginx-ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
  - host: app2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80

Apply the Ingress rule:

kubectl apply -f nginx-ingress.yaml

Step 5: Verify Everything
5.1 Get Ingress External IP

kubectl get ingress
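
On a self-managed kubeadm cluster there is usually no cloud load balancer, so the ADDRESS column may stay empty. In that case, check the ingress-nginx controller Service directly (and, if it only exposes a NodePort, use the node's public IP plus that port):

kubectl get svc -n ingress-nginx ingress-nginx-controller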

5.2 Update /etc/hosts (Local Testing Only)
If you're testing on a local machine, add this to /etc/hosts:

<EXTERNAL-IP> app1.example.com
<EXTERNAL-IP> app2.example.com

Replace with the actual external IP of your Ingress Controller.
5.3 Test in Browser or Curl

curl http://app1.example.com
curl http://app2.example.com

If everything is set up correctly, you should see the default Nginx welcome page for both applications.

Conclusion
In this guide, we:

  • Installed Kubernetes on an EC2 instance
  • Set up Nginx Ingress Controller
  • Deployed two services (app1 and app2)
  • Configured Ingress to expose them via domain names

Now, you can easily manage multiple applications in your cluster using a single Ingress resource.

Follow for more. Happy learning :)

How to Manage Multiple Cron Job Executions

16 March 2025 at 06:13

Cron jobs are a fundamental part of automating tasks in Unix-based systems. However, one common problem with cron jobs is multiple executions, where overlapping job runs can cause serious issues like data corruption, race conditions, or unexpected system load.

In this blog, we’ll explore why multiple executions happen, the potential risks, and how flock provides an elegant solution to ensure that a cron job runs only once at a time.

The Problem: Multiple Executions of Cron Jobs

Cron jobs are scheduled to run at fixed intervals, but sometimes a new job instance starts before the previous one finishes.

This can happen due to:

  • Long-running jobs: If a cron job takes longer than its interval, a new instance starts while the old one is still running.
  • System slowdowns: High CPU or memory usage can delay job execution, leading to overlapping runs.
  • Simultaneous executions across servers: In a distributed system, multiple servers might execute the same cron job, causing duplication.

Example of a Problematic Cron Job

Let’s say we have the following cron job that runs every minute:

* * * * * /path/to/script.sh

If script.sh takes more than a minute to execute, a second instance will start before the first one finishes.

This can lead to:

✅ Duplicate database writes → Inconsistent data

✅ Conflicts in file processing → Corrupt files

✅ Overloaded system resources → Performance degradation

Real-World Example

Imagine a job that processes user invoices and sends emails:

* * * * * /usr/bin/python3 /home/user/process_invoices.py

If the script takes longer than a minute to complete, multiple instances might start running, causing:

  1. Users to receive multiple invoices.
  2. The database to get inconsistent updates.
  3. Increased server load due to excessive email sending.

The Solution: Using flock to Prevent Multiple Executions

flock is a Linux utility that manages file locks to ensure that only one instance of a process runs at a time. It works by locking a specific file, preventing other processes from acquiring the same lock.

Using flock in a Cron Job

Modify the cron job as follows:

* * * * * /usr/bin/flock -n /tmp/myjob.lock /path/to/script.sh

How It Works

  • flock -n /tmp/myjob.lock → Tries to acquire a lock on /tmp/myjob.lock.
  • If the lock is available, the script runs.
  • If the lock is already held (i.e., another instance is running), flock prevents the new instance from starting.
  • -n (non-blocking) ensures that the job doesn’t wait for the lock and simply exits if it cannot acquire it.

This guarantees that only one instance of the job runs at a time.
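
An equivalent approach is to do the locking inside the script itself, using the common flock-on-a-file-descriptor idiom (a minimal sketch; the lock file path is arbitrary):

#!/bin/bash
exec 200>/tmp/myjob.lock      # open (or create) the lock file on file descriptor 200
flock -n 200 || exit 1        # exit immediately if another instance already holds the lock
# ... the actual job goes here ...

With this in place, the crontab entry stays a plain * * * * * /path/to/script.sh and the script protects itself.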

Verifying the Solution

You can test the lock by manually running the script with flock:

/usr/bin/flock -n /tmp/myjob.lock /bin/bash -c 'echo "Running job..."; sleep 30'

Open another terminal and try to run the same command. You’ll see that the second attempt exits immediately because the lock is already acquired.

Preventing multiple executions of cron jobs is essential for maintaining data consistency, system stability, and efficiency. By using flock, you can easily enforce single execution without complex logic.

✅ Simple & efficient solution. ✅ No external dependencies required. ✅ Works seamlessly with cron jobs.

So next time you set up a cron job, add flock and sleep peacefully knowing your tasks won’t collide. 🚀

Learning Notes #68 – Buildpacks and Dockerfile

2 February 2025 at 09:32

  1. What is an OCI ?
  2. Does Docker Create OCI Images?
  3. What is a Buildpack ?
  4. Overview of Buildpack Process
  5. Builder: The Image That Executes the Build
    1. Components of a Builder Image
    2. Stack: The Combination of Build and Run Images
  6. Installation and Initial Setups
  7. Basic Build of an Image (Python Project)
    1. Building an image using buildpack
    2. Building an Image using Dockerfile
  8. Unique Benefits of Buildpacks
    1. No Need for a Dockerfile (Auto-Detection)
    2. Automatic Security Updates
    3. Standardized & Reproducible Builds
    4. Extensibility: Custom Buildpacks
  9. Generating SBOM in Buildpacks
    1. a) Using pack CLI to Generate SBOM
    2. b) Generate SBOM in Docker

Over the last few days, I was exploring Buildpacks. I am amazed at this tool's features for reducing developers' pain. In this blog I jot down my experience with Buildpacks.

Before trying out Buildpacks, we need to understand what an OCI image is.

What is an OCI ?

An OCI Image (Open Container Initiative Image) is a standard format for container images, defined by the Open Container Initiative (OCI) to ensure interoperability across different container runtimes (Docker, Podman, containerd, etc.).

It consists of,

  1. Manifest – Metadata describing the image (layers, config, etc.).
  2. Config JSON – Information about how the container should run (CMD, ENV, etc.).
  3. Filesystem Layers – The actual file system of the container.

OCI Image Specification ensures that container images built once can run on any OCI-compliant runtime.
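
As a quick way to see these pieces for yourself, you can inspect an image you already have locally (illustrative commands using the nginx image; docker manifest may fetch from the registry):

docker image inspect nginx:latest --format '{{json .RootFS.Layers}}'   # filesystem layers
docker image inspect nginx:latest --format '{{json .Config.Env}}'      # config (ENV, CMD, ...)
docker manifest inspect nginx:latest                                   # the image manifest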

Does Docker Create OCI Images?

Yes, Docker creates OCI-compliant images. Since Docker v1.10+, Docker has been aligned with the OCI Image Specification, and all Docker images are OCI-compliant by default.

  • When you build an image with docker build, it follows the OCI Image format.
  • When you push/pull images to registries like Docker Hub, they follow the OCI Image Specification.

However, Docker also supports its legacy Docker Image format, which existed before OCI was introduced. Most modern registries and runtimes (Kubernetes, Podman, containerd) support OCI images natively.

What is a Buildpack ?

A buildpack is a framework for transforming application source code into a runnable image by handling dependencies, compilation, and configuration. Buildpacks are widely used in cloud environments like Heroku, Cloud Foundry, and Kubernetes (via Cloud Native Buildpacks).

Overview of Buildpack Process

The buildpack process consists of two primary phases

  • Detection Phase: Determines if the buildpack should be applied based on the app’s dependencies.
  • Build Phase: Executes the necessary steps to prepare the application for running in a container.

Buildpacks work with a lifecycle manager (e.g., Cloud Native Buildpacks’ lifecycle) that orchestrates the execution of multiple buildpacks in an ordered sequence.

Builder: The Image That Executes the Build

A builder is an image that contains all necessary components to run a buildpack.

Components of a Builder Image

  1. Build Image – Used during the build phase (includes compilers, dependencies, etc.).
  2. Run Image – A minimal environment for running the final built application.
  3. Lifecycle – The core mechanism that executes buildpacks, orchestrates the process, and ensures reproducibility.

Stack: The Combination of Build and Run Images

  • Build Image + Run Image = Stack
  • Build Image: Base OS with tools required for building (e.g., Ubuntu, Alpine).
  • Run Image: Lightweight OS with only the runtime dependencies for execution.

Installation and Initial Setups
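
The Pack CLI can be installed in a few ways; for example (as I recall from the Buildpacks docs, so double-check for your platform), and note that pack also needs a running Docker daemon:

# Ubuntu (via the Buildpacks PPA)
sudo add-apt-repository ppa:cncf-buildpacks/pack-cli
sudo apt-get update
sudo apt-get install pack-cli

# macOS (Homebrew)
brew install buildpacks/tap/pack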

Basic Build of an Image (Python Project)

Project Source: https://github.com/syedjaferk/gh_action_docker_build_push_fastapi_app

Building an image using buildpack

Before running these commands, ensure you have Pack CLI (pack) installed.

a) Suggest a builder

pack builder suggest

b) Build the image

pack build my-python-app --builder paketobuildpacks/builder:base

c) Run the image locally


docker run -p 8080:8080 my-python-app

Building an Image using Dockerfile

a) Dockerfile


FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .

RUN pip install -r requirements.txt

COPY ./random_id_generator ./random_id_generator
COPY app.py app.py

EXPOSE 8080

CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8080"]

b) Build and Run


docker build -t my-python-app .
docker run -p 8080:8080 my-python-app

Unique Benefits of Buildpacks

No Need for a Dockerfile (Auto-Detection)

Buildpacks automatically detect the language and dependencies, removing the need for a Dockerfile.


pack build my-python-app --builder paketobuildpacks/builder:base

It detects Python, installs dependencies, and builds the app into a container. 🚀 Docker requires a Dockerfile, which developers must manually configure and maintain.

Automatic Security Updates

Buildpacks automatically patch base images for security vulnerabilities.

If there’s a CVE in the OS layer, Buildpacks update the base image without rebuilding the app.


pack rebase my-python-app

No need to rebuild! It replaces only the OS layers while keeping the app the same.

Standardized & Reproducible Builds

Ensures consistent images across environments (dev, CI/CD, production). Example: Running the same build locally and on Heroku/Cloud Run,


pack build my-app

Extensibility: Custom Buildpacks

Developers can create custom Buildpacks to add special dependencies.

Example: Adding ffmpeg to a Python buildpack,


pack buildpack package my-custom-python-buildpack --path .

Generating SBOM in Buildpacks

a) Using pack CLI to Generate SBOM

After building an image with pack, run,


pack sbom download my-python-app --output-dir ./sbom

  • This fetches the SBOM for your built image.
  • The SBOM is saved in the ./sbom/ directory.

✅ Supported formats:

  • SPDX (sbom.spdx.json)
  • CycloneDX (sbom.cdx.json)

b) Generate SBOM in Docker


trivy image --format cyclonedx -o sbom.json my-python-app

Both are helpful in creating images. It's all about the tradeoffs.

Learning Notes #67 – Build and Push to a Registry (Docker Hub) with GH-Actions

28 January 2025 at 02:30

GitHub Actions is a powerful tool for automating workflows directly in your repository. In this blog, we'll explore how to efficiently set up GitHub Actions to handle Docker workflows with environments, secrets, and protection rules.

Why Use GitHub Actions for Docker?

My code base is on GitHub, and I wanted to try out GitHub Actions to build and push images to Docker Hub seamlessly.

Setting Up GitHub Environments

GitHub Environments let you define settings specific to deployment stages. Here’s how to configure them:

1. Create an Environment

Go to your GitHub repository and navigate to Settings > Environments. Click New environment, name it (e.g., production), and save.

2. Add Secrets and Variables

Inside the environment settings, click Add secret to store sensitive information like DOCKER_USERNAME and DOCKER_TOKEN.

Use Variables for non-sensitive configuration, such as the Docker image name.

3. Optional: Set Protection Rules

Enforce rules like requiring manual approval before deployments. Restrict deployments to specific branches (e.g., main).

Sample Workflow for Building and Pushing Docker Images

Below is a GitHub Actions workflow for automating the build and push of a Docker image based on a minimal FastAPI app.

Workflow: .github/workflows/docker-build-push.yml


name: Build and Push Docker Image

on:
  push:
    branches:
      - main  # Trigger workflow on pushes to the `main` branch

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    environment: production  # Specify the environment to use

    steps:
      # Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v3

      # Log in to Docker Hub using environment secrets
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}

      # Build the Docker image using an environment variable
      - name: Build Docker image
        env:
          DOCKER_IMAGE_NAME: ${{ vars.DOCKER_IMAGE_NAME }}
        run: |
          docker build -t ${{ secrets.DOCKER_USERNAME }}/$DOCKER_IMAGE_NAME:${{ github.run_id }} .

      # Push the Docker image to Docker Hub
      - name: Push Docker image
        env:
          DOCKER_IMAGE_NAME: ${{ vars.DOCKER_IMAGE_NAME }}
        run: |
          docker push ${{ secrets.DOCKER_USERNAME }}/$DOCKER_IMAGE_NAME:${{ github.run_id }}

To see the Actions runs live: https://github.com/syedjaferk/gh_action_docker_build_push_fastapi_app/actions

Event Summary: Grafana & Friends Meetup Chennai – 25-01-2025

26 January 2025 at 04:47

🚀 Attended the Grafana & Friends Meetup Yesterday! 🚀

I usually have a question: as a developer, I have logs, isn't that enough? With a curious mind, I attended the Grafana & Friends Chennai meetup (Jan 25th, 2025).

Had an awesome time meeting fellow tech enthusiasts (devops engineers) and learning about cool ways to monitor and understand data better.
Big shoutout to the Grafana Labs community and Presidio for hosting such a great event!

The sandwich and juice were nice 😋

Talk Summary,

1⃣ Making Data Collection Easier with Grafana Alloy
Dinesh J. and Krithika R shared how Grafana Alloy, combined with Open Telemetry, makes it super simple to collect and manage data for better monitoring.

2⃣ Running Grafana in Kubernetes
Lakshmi Narasimhan Parthasarathy (https://lnkd.in/gShxtucZ) showed how to set up Grafana in Kubernetes in 4 different ways (vanilla, helm chart, grafana operator, kube-prom-stack). He is building a SaaS product https://lnkd.in/gSS9XS5m (Heroku on your own servers).

3⃣ Observability for Frontend Apps with Grafana Faro
Selvaraj Kuppusamy showed how Grafana Faro can help frontend developers monitor what's happening on websites and apps in real time, which makes it easier to spot and fix issues quickly. We were able to see Core Web Vitals and traces too. I was surprised by this.

Techies i had interaction with,

Prasanth Baskar, who is an open source contributor at the Cloud Native Computing Foundation (CNCF) on project https://lnkd.in/gmHjt9Bs. I was also happy to know that he knows parottasalna (that's me) and has read some of my blogs. Happy to hear that.

Selvaraj Kuppusamy, DevOps Engineer, is also running the Grafana and Friends chapter in Coimbatore on Feb 1. I will attend that as well.

Saranraj Chandrasekaran, who is also a DevOps engineer; had a chat with him on DevOps and related stuff.

To all of them, I shared KanchiLUG (https://lnkd.in/gasCnxXv), Parottasalna (https://parottasalna.com/) and my tech channel: https://lnkd.in/gKcyE-b5.

Thanks Achanandhi M for organising this wonderful meetup. You did well. I came to know Achanandhi M through Medium. He regularly writes blogs on cloud-related topics. Check out his blog: https://lnkd.in/ghUS-GTc.

Also, He shared some tasks for us,

1. Create your First Grafana Dashboard.
Objective: Create a basic Grafana Dashboard to visualize data in various formats such as tables, charts and graphs. Also, try to connect to multiple data sources to get diverse data for your dashboard.

2. Monitor your Linux system's health with Prometheus, Node Exporter and Grafana.
Objective: Use Prometheus, Node Exporter and Grafana to monitor your Linux machine's health by tracking key metrics like CPU, memory and disk usage.


3. Using Grafana Faro to track User Actions (Like Button Clicks) and Identify the Most Used Features.

Give a try on these.

Learning Notes #19 – Blue Green Deployments – A near ZERO downtime deployment

30 December 2024 at 18:19

Today, I got a refresher on Blue-Green Deployment from a podcast: https://open.spotify.com/episode/03p86zgOuSEbNezK71CELH. Deployment design is a plate I haven't touched yet. In this blog I jot down notes on blue-green deployment for my future self.

What is Blue-Green Deployment?

Blue-Green Deployment is a release management strategy that involves maintaining two identical environments, referred to as “Blue” and “Green.” At any point in time, only one environment is live (receiving traffic), while the other remains idle or in standby. Updates are deployed to the idle environment, thoroughly tested, and then switched to live with minimal downtime.

How It Works

  • This approach involves setting up two environments: the Blue environment, which serves live traffic, and the Green environment, a replica used for staging updates.
  • Updates are first deployed to the Green environment, where comprehensive testing is performed to ensure functionality, performance, and integration meet expectations.
  • Once testing is successful, the routing mechanism, such as a DNS or API Gateway or load balancer, is updated to redirect traffic from the Blue environment to the Green environment.
  • The Green environment then becomes live, while the Blue environment transitions to an idle state.
  • If issues arise, traffic can be reverted to the Blue environment for a quick recovery with minimal impact.

Benefits of Blue-Green Deployment

  • Blue-Green Deployment provides zero downtime during the deployment process, ensuring uninterrupted user experiences.
  • Rollbacks are simplified because the previous version remains intact in the Blue environment, enabling quick reversion if necessary. Consideration of forward and backward compatibility is important, e.g., for the database schema.
  • It also allows seamless testing in the Green environment before updates go live, reducing risks by isolating production from deployment issues.

Challenges and Considerations

  • Maintaining two identical environments can be resource intensive.
  • Ensuring synchronization between environments is critical to prevent discrepancies in configuration and data.
  • Handling live database changes during the environment switch is complex, requiring careful planning for database migrations.

Implementing Blue-Green Deployment (Not Yet Tried)

  • Several tools and platforms support Blue-Green Deployment. Kubernetes simplifies managing multiple environments through namespaces and services (a minimal Service-switch sketch follows this list).
  • AWS Elastic Beanstalk offers built-in support for Blue-Green Deployment, while HashiCorp Terraform automates the setup of Blue-Green infrastructure.
  • To implement this strategy, organizations should design infrastructure capable of supporting two identical environments, automate deployments using CI/CD pipelines, monitor and test thoroughly, and define rollback procedures to revert to previous versions when necessary.
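
For instance, with two Deployments labelled version: blue and version: green behind a single Service, the cut-over is just a selector change. A minimal sketch (the names, ports and labels are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-app              # hypothetical service name
spec:
  selector:
    app: my-app
    version: blue           # change this to "green" to switch live traffic
  ports:
  - port: 80
    targetPort: 8080

Patching the selector, e.g. kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}', flips traffic to the Green environment; reverting the selector rolls back to Blue.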


Learning Notes #11 – Sidecar Pattern | Cloud Patterns

26 December 2024 at 17:40

Today, I learnt about the Sidecar Pattern. It is essentially about offloading common functionalities (logging, networking, …) into a separate container within a pod, to be used by the other apps in the pod.

It's not only about pods, but other deployments as well. In this blog, I am going to curate the items I have learnt for my future self. It's a pattern, not a strict rule.

What is a Sidecar?

Imagine you’re riding a motorbike, and you attach a little sidecar to carry your friend or groceries. The sidecar isn’t part of the motorbike’s engine or core mechanism, but it helps you achieve your goals—whether it’s carrying more stuff or having a buddy ride along.

In the software world, a sidecar is a similar concept. It’s a separate process or container that runs alongside a primary application. Like the motorbike’s sidecar, it supports the main application by offloading or enhancing certain tasks without interfering with its core functionality.

Why Use a Sidecar?

In traditional applications, all responsibilities (logging, communication, monitoring, etc.) are bundled into the main application. This approach can make the application complex and harder to manage. Sidecars address this by handling auxiliary tasks separately, so the main application can focus on its primary purpose.

Here are some key reasons to use a sidecar

  1. Modularity: Sidecars separate responsibilities, making the system easier to develop, test, and maintain.
  2. Reusability: The same sidecar can be used across multiple services. And it's language agnostic.
  3. Scalability: You can scale the sidecar independently from the main application.
  4. Isolation: Sidecars provide a level of isolation, reducing the risk of one part affecting the other.

Real-Life Analogies

To make the concept clearer, here are some real-world analogies:

  1. Coffee Maker with a Milk Frother:
    • The coffee maker (main application) brews coffee.
    • The milk frother (sidecar) prepares frothed milk for your latte.
    • Both work independently but combine their outputs for a better experience.
  2. Movie Subtitles:
    • The movie (main application) provides the visuals and sound.
    • The subtitles (sidecar) add clarity for those who need them.
    • You can watch the movie with or without subtitles—they’re optional but enhance the experience.
  3. A School with a Sports Coach:
    • The school (main application) handles education.
    • The sports coach (sidecar) focuses on physical training.
    • Both have distinct roles but contribute to the overall development of students.

Some Random Sidecar Ideas in Software

Let’s look at how sidecars are used in actual software scenarios

  1. Service Meshes (e.g., Istio, Linkerd):
    • A service mesh helps microservices communicate with each other reliably and securely.
    • The sidecar (proxy like Envoy) handles tasks like load balancing, encryption, and monitoring, so the main application doesn’t have to.
  2. Logging and Monitoring:
    • Instead of the main application generating and managing logs, a sidecar can collect, format, and send logs to a centralized system like Elasticsearch or Splunk.
  3. Authentication and Security:
    • A sidecar can act as a gatekeeper, handling user authentication and ensuring that only authorized requests reach the main application.
  4. Data Caching:
    • If an application frequently queries a database, a sidecar can serve as a local cache, reducing database load and speeding up responses.
  5. Service Discovery:
    • Sidecars can aid in service discovery by automatically registering the main application with a registry service or load balancer, ensuring seamless communication in dynamic environments.

How Sidecars Work

In modern environments like Kubernetes, sidecars are often deployed as separate containers within the same pod as the main application. They share the same network and storage, making communication between the two seamless.

Here’s a simplified workflow

  1. The main application focuses on its core tasks (e.g., serving a web page).
  2. The sidecar handles auxiliary tasks (e.g., compressing and encrypting logs).
  3. The two communicate over local connections within the pod.
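
Putting the workflow above into a concrete, purely illustrative manifest, here is a minimal pod with a main web container and a log-reading sidecar that share an emptyDir volume; the names are arbitrary, and the sidecar simply tails the access log instead of shipping it anywhere:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-sidecar        # hypothetical name
spec:
  volumes:
  - name: logs
    emptyDir: {}                    # shared scratch space for log files
  containers:
  - name: web                       # main application container
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper               # sidecar: reads the same log files
    image: busybox
    command: ["sh", "-c", "touch /var/log/nginx/access.log && tail -f /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx

In a real setup the sidecar would be something like Fluent Bit or Filebeat forwarding to Elasticsearch or Splunk, as described above.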

Pros and Cons of Sidecars

Pros:

  • Simplifies the main application.
  • Encourages reusability and modular design.
  • Improves scalability and flexibility.
  • Enhances observability with centralized logging and metrics.
  • Facilitates experimentation—you can deploy or update sidecars independently.

Cons:

  • Adds complexity to deployment and orchestration.
  • Consumes additional resources (CPU, memory).
  • Requires careful design to avoid tight coupling between the sidecar and the main application.
  • Latency (you are adding another hop).

Do we always need to use sidecars?

No. Not at all.

a. When there is latency between the parent application and the sidecar, reconsider.

b. If your application is small, reconsider.

c. When you are scaling differently or independently from the parent application, reconsider.

Some other examples

1. Adding HTTPS to a Legacy Application

Consider a legacy web service that serves requests over unencrypted HTTP. We have a requirement to enhance the same legacy system to serve requests over HTTPS in the future.

The legacy app is configured to serve requests exclusively on localhost, which means that only services sharing the local network namespace with the server are able to access the legacy application. In addition to the main container (the legacy app), we can add an Nginx sidecar container that runs in the same network namespace as the main container, so it can reach the service on localhost and terminate TLS in front of it, as sketched below.
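
A minimal, purely illustrative pod spec for this scenario (the legacy image, ConfigMap and Secret names are hypothetical; the Nginx config would hold an ssl server block with proxy_pass http://127.0.0.1:8080):

apiVersion: v1
kind: Pod
metadata:
  name: legacy-with-tls-sidecar      # hypothetical name
spec:
  containers:
  - name: legacy-app                  # serves plain HTTP on localhost:8080
    image: legacy-app:1.0             # hypothetical image
  - name: nginx-tls-proxy             # sidecar: terminates HTTPS, proxies to localhost
    image: nginx
    ports:
    - containerPort: 443
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/conf.d
    - name: tls-certs
      mountPath: /etc/nginx/tls
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx-proxy-conf          # hypothetical ConfigMap with the proxy config
  - name: tls-certs
    secret:
      secretName: legacy-tls          # hypothetical Secret with the certificate and key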

2. For Logging (Image from ByteByteGo)

Sidecars are not just technical solutions; they embody the principle of collaboration and specialization. By dividing responsibilities, they empower the main application to shine while ensuring auxiliary tasks are handled efficiently. Next time you hear about sidecars, you'll know they're more than just cool attachments for motorcycles; they're an essential part of scalable, maintainable software systems.

Also, do you feel it's closely related to the Adapter and Ambassador patterns? I do.

References:

  1. Hussein Nasser – https://www.youtube.com/watch?v=zcJWvhzkPsw&pp=ygUHc2lkZWNhcg%3D%3D
  2. Sudo Code – https://www.youtube.com/watch?v=QU5WcwuFpZU&pp=ygUPc2lkZWNhciBwYXR0ZXJu
  3. Software Dude – https://www.youtube.com/watch?v=poPUzN33Oug&pp=ygUPc2lkZWNhciBwYXR0ZXJu
  4. https://medium.com/nerd-for-tech/microservice-design-pattern-sidecar-sidekick-pattern-dbcea9bed783
  5. https://dzone.com/articles/sidecar-design-pattern-in-your-microservices-ecosy-1

Exploring Kubernetes: A Step Ahead of Basics

By: Ragul.M
10 December 2024 at 05:35

Kubernetes is a powerful platform that simplifies the management of containerized applications. If you’re familiar with the fundamentals, it’s time to take a step further and explore intermediate concepts that enhance your ability to manage and optimize Kubernetes clusters.

  1. Understanding Deployments
    A Deployment ensures your application runs reliably by managing scaling, updates, and rollbacks.

  2. Using ConfigMaps and Secrets
    Kubernetes separates application configuration and sensitive data from the application code using ConfigMaps and Secrets.

    ConfigMaps

    Store non-sensitive configurations, such as environment variables or application settings.

kubectl create configmap app-config --from-literal=ENV=production 
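
A Secret can be created in much the same way (the key and value here are illustrative):

kubectl create secret generic app-secret --from-literal=DB_PASSWORD=changeme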

3. Liveness and Readiness Probes

Probes ensure your application is healthy and ready to handle traffic.

Liveness Probe
Checks if your application is running. If it fails, Kubernetes restarts the pod.

Readiness Probe
Checks if your application is ready to accept traffic. If it fails, Kubernetes stops routing requests to the pod.
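
A minimal illustration of both probes on a plain nginx container (the pod name is arbitrary; the paths and ports should match your own app):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:              # restart the container if this check fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:             # stop routing traffic if this check fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5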

4. Resource Requests and Limits
To ensure efficient resource utilization, define requests (the minimum resources a pod needs) and limits (the maximum resources a pod can use).
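
For example, inside a container spec (the numbers are illustrative):

    resources:
      requests:
        cpu: "250m"         # minimum guaranteed CPU
        memory: "128Mi"     # minimum guaranteed memory
      limits:
        cpu: "500m"         # hard CPU cap
        memory: "256Mi"     # hard memory cap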

5. Horizontal Pod Autoscaling (HPA)
Scale your application dynamically based on CPU or memory usage.
Example:

kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10 

This ensures your application scales automatically when resource usage increases or decreases.

6. Network Policies
Control how pods communicate with each other and with external resources using Network Policies.
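
A minimal sketch (the labels and port are assumptions): a policy that only allows pods labelled app=frontend to reach pods labelled app=backend on port 8080:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend              # the policy applies to backend pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend         # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080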

Conclusion

By mastering these slightly advanced Kubernetes concepts, you’ll improve your cluster management, application reliability, and resource utilization. With this knowledge, you’re well-prepared to dive into more advanced topics like Helm, monitoring with Prometheus, and service meshes like Istio.

Follow for more and Happy learning :)

Understanding Kubernetes Basics: A Beginner’s Guide

By: Ragul.M
29 November 2024 at 17:12

In today’s tech-driven world, Kubernetes has emerged as one of the most powerful tools for container orchestration. Whether you’re managing a few containers or thousands of them, Kubernetes simplifies the process, ensuring high availability, scalability, and efficient resource utilization. This blog will guide you through the basics of Kubernetes, helping you understand its core components and functionality.

What is Kubernetes?
Kubernetes, often abbreviated as K8s, is an open-source platform developed by Google that automates the deployment, scaling, and management of containerized applications. It was later donated to the Cloud Native Computing Foundation (CNCF).
With Kubernetes, developers can focus on building applications, while Kubernetes takes care of managing their deployment and runtime.

Key Features of Kubernetes

  1. Automated Deployment and Scaling Kubernetes automates the deployment of containers and can scale them up or down based on demand.
  2. Self-Healing If a container fails, Kubernetes replaces it automatically, ensuring minimal downtime.
  3. Load Balancing Distributes traffic evenly across containers, optimizing performance and preventing overload.
  4. Rollbacks and Updates Kubernetes manages seamless updates and rollbacks for your applications without disrupting service.
  5. Resource Management Optimizes hardware utilization by efficiently scheduling containers across the cluster.

Core Components of Kubernetes
To understand Kubernetes, let’s break it down into its core components:

  1. Cluster: A Kubernetes cluster consists of:
    • Master Node: The control plane managing the entire cluster.
    • Worker Nodes: Machines where containers run.
  2. Pods: The smallest deployable unit in Kubernetes. A pod can contain one or more containers that share resources like storage and networking (a minimal Pod manifest follows this list).
  3. Nodes: Physical or virtual machines that run the pods. Managed by the Kubelet, a process ensuring pods are running as expected.
  4. Services: Allow communication between pods and other resources, both inside and outside the cluster. Examples include ClusterIP, NodePort, and LoadBalancer services.
  5. ConfigMaps and Secrets: ConfigMaps store configuration data for your applications; Secrets store sensitive data like passwords and tokens securely.
  6. Namespaces: Virtual clusters within a Kubernetes cluster, used for organizing and isolating resources.
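
As promised above, here is a minimal Pod manifest (the name and image are arbitrary examples):

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # arbitrary example name
spec:
  containers:
  - name: web
    image: nginx           # any container image works here
    ports:
    - containerPort: 80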

Conclusion
Kubernetes has revolutionized the way we manage containerized applications. By automating tasks like deployment, scaling, and maintenance, it allows developers and organizations to focus on innovation. Whether you're a beginner or a seasoned developer, mastering Kubernetes is a skill that will enhance your ability to build and manage modern applications.

Follow for more and Happy learning :)

Exploring Thanos Kube Chaos - A Kubernetes Chaos Engineering Tool

By: angu10
3 February 2024 at 00:30

Image description

Chaos engineering has become a crucial aspect of ensuring the resilience and reliability of applications and infrastructure, especially in the dynamic world of Kubernetes. In this blog post, we will dive into "Thanos Kube Chaos," an open-source tool designed for chaos engineering in Kubernetes environments. The project draws inspiration from Netflix Chaos Monkey and provides a set of features to simulate controlled failures and assess the robustness of your Kubernetes clusters.

Overview

Thanos Kube Chaos is a Python-based chaos engineering tool that leverages the Kubernetes Python client to interact with Kubernetes clusters. Its primary goal is to help users proactively identify vulnerabilities in their systems by inducing controlled failures and assessing the system's response. Let's explore some key aspects of this project.

The Importance of Project Thanos in Resilience Testing:

1. Engineering Team - Resilience Testing:

Need: Modern applications often run in complex and dynamic environments. Chaos engineering allows organizations to proactively identify weaknesses and points of failure in their systems.
Importance: Testing how systems respond to failures helps ensure that they can gracefully handle unexpected issues, improving overall system resilience.

2. Training Support/ Product Delivery Teams:

Need: Support teams need to be well-prepared to handle incidents and outages. Chaos engineering provides a controlled environment to simulate real-world failures.
Importance: Through simulated chaos experiments, support teams can become familiar with different failure scenarios, practice incident response, and develop confidence in managing unexpected events.

3. SRE Team - Identifying Vulnerabilities:

Need: Systems are susceptible to various failure modes, such as network issues, hardware failures, or service disruptions. Identifying vulnerabilities is crucial for preventing cascading failures.
Importance: Chaos experiments help uncover vulnerabilities in the system architecture, infrastructure, or application code, allowing teams to address these issues proactively.

Collaboration and Contribution

Thanos Kube Chaos is an open-source project, and collaboration is welcome! If you are passionate about chaos engineering, Kubernetes, or Python development, consider contributing to the project. You can find the project on GitHub: Thanos Kube Chaos

Features

1. List Pods
Thanos Kube Chaos allows users to retrieve the names of pods in specified namespaces. This feature is essential for understanding the current state of the cluster and identifying the target pods for chaos experiments.

2. List Running Pods
To focus on running instances, the tool provides a feature to retrieve the names of running pods in specified namespaces. This is particularly useful when targeting live instances for chaos experiments.

3. Delete Pod
Deleting a specific pod in a given namespace is a common chaos engineering scenario. Thanos Kube Chaos provides a straightforward method to induce this failure and observe the system's response.

4. Delete Random Running Pod
For more dynamic chaos, the tool allows users to delete a randomly selected running pod, optionally matching a regex pattern. This randomness adds an element of unpredictability to the chaos experiments.

5. Delete Services
Deleting all services in specified namespaces can simulate a scenario where critical services are temporarily unavailable. This helps evaluate the system's resilience to service disruptions.

6. Delete Nodes
Inducing node failures is a critical aspect of chaos engineering. Thanos Kube Chaos facilitates the deletion of specific nodes from the Kubernetes cluster to evaluate the system's ability to handle node failures.

7. Network Chaos Testing
Simulating network chaos by introducing latency to a specified network interface helps assess the impact of network issues on application performance. This feature allows users to evaluate how well their applications handle network disruptions.
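
Under the hood, this kind of latency injection is commonly done with Linux traffic control (tc with the netem qdisc). A rough illustration of the mechanism only, not the tool's actual API, run on a node or inside a privileged pod:

tc qdisc add dev eth0 root netem delay 200ms 50ms    # add ~200ms latency with 50ms jitter
tc qdisc del dev eth0 root netem                     # remove it again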

8. Resource Limit Configuration
Setting resource limits (CPU and memory) for a specific pod in a given namespace allows users to evaluate the application's behavior under resource constraints. This can be crucial for identifying resource-related vulnerabilities.

9. Node Eviction
Triggering the eviction of a specific node from the cluster is another way to assess the system's response to node failures. Thanos Kube Chaos provides a method to simulate node evictions and observe the impact.

10. Execute Command in Pod
Running a command inside a specific pod in a given namespace is a versatile feature. It enables users to perform custom chaos experiments by executing specific commands within the targeted pods.

11. Simulate Disk I/O Chaos
Simulating high disk I/O for a specific pod by creating a test file helps assess the application's behavior under disk-related stress. This can be crucial for identifying potential disk I/O bottlenecks.

12. Retrieve Pod Volumes
Retrieving the volumes attached to a specific pod in a given namespace provides insights into the storage configuration of the targeted pod. Understanding pod volumes is essential for designing chaos experiments that involve storage-related scenarios.

13. Starve Pod Resources
Starving resources (CPU and memory) for a randomly selected running pod is a valuable chaos engineering scenario. This feature helps evaluate how well applications handle resource shortages and whether they gracefully degrade under such conditions.

Example and Code Availability

Explore practical examples and access the full source code of Thanos Kube Chaos on GitHub. Head over to the Thanos Kube Chaos GitHub repository for detailed examples and documentation, and to contribute to the project.

Feel free to clone the repository and experiment with the code to enhance your chaos engineering practices in Kubernetes.

How to install AWS EKS Cluster

4 January 2024 at 01:43

 

step1: create EC2 instance

step2:
Create an IAM Role and attach it to the EC2 instance, granting access to:
 - IAM
 - EC2
 - VPC
 - CloudFormation
 - Administrative access

On the EC2 instance, install kubectl and eksctl.
step3: install kubectl
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin
$ kubectl version --client

step4: install eksctl
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
$ eksctl version

step5: install aws-iam-authenticator
$ curl -Lo aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.6.14/aws-iam-authenticator_0.6.14_linux_amd64
$ chmod +x ./aws-iam-authenticator
$ mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH
$ echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
step6:
Create the cluster and nodes
$ eksctl create cluster --name dhana-cluster --region ap-south-1 --node-type t2.small

step7:
Validate your cluster by checking the nodes and by creating a pod
$ kubectl get nodes
$ kubectl run tomcat --image=tomcat
$ kubectl run nginx --image=nginx

To delete the cluster dhana-cluster
$ eksctl delete cluster dhana-cluster --region ap-south-1
