Kubernetes (K8s) is a powerful container orchestration platform that simplifies application deployment and scaling. In this guide, we’ll set up Kubernetes on an AWS EC2 instance, install the Nginx Ingress Controller, and configure Ingress rules to expose multiple services (app1 and app2).
Step 1: Setting Up Kubernetes on an EC2 Instance
1.1 Launch an EC2 Instance
Choose an instance with enough resources (e.g., t3.medium or larger) and install Ubuntu 20.04 or Amazon Linux 2.
1.2 Update Packages
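A minimal sketch for this step, assuming the Ubuntu 20.04 option above (Amazon Linux 2 uses yum instead):

# Ubuntu
sudo apt update && sudo apt upgrade -y

# Amazon Linux 2
sudo yum update -y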
Cron jobs are a fundamental part of automating tasks in Unix-based systems. However, one common problem with cron jobs is multiple executions, where overlapping job runs can cause serious issues like data corruption, race conditions, or unexpected system load.
In this blog, we’ll explore why multiple executions happen, the potential risks, and how flock provides an elegant solution to ensure that a cron job runs only once at a time.
The Problem: Multiple Executions of Cron Jobs
Cron jobs are scheduled to run at fixed intervals, but sometimes a new job instance starts before the previous one finishes.
This can happen due to:
Long-running jobs: If a cron job takes longer than its interval, a new instance starts while the old one is still running.
System slowdowns: High CPU or memory usage can delay job execution, leading to overlapping runs.
Simultaneous executions across servers: In a distributed system, multiple servers might execute the same cron job, causing duplication.
Example of a Problematic Cron Job
Let’s say we have the following cron job that runs every minute:
* * * * * /path/to/script.sh
If script.sh takes more than a minute to execute, a second instance will start before the first one finishes.
This can lead to:
Duplicate database writes → Inconsistent data
Conflicts in file processing → Corrupt files
Overloaded system resources → Performance degradation
Real-World Example
Imagine a job that processes user invoices and sends emails.
If the script takes longer than a minute to complete, multiple instances might start running, causing:
Users to receive multiple invoices.
The database to get inconsistent updates.
Increased server load due to excessive email sending.
The Solution: Using flock to Prevent Multiple Executions
flock is a Linux utility that manages file locks to ensure that only one instance of a process runs at a time. It works by locking a specific file, preventing other processes from acquiring the same lock.
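To see it in action, try this in one terminal (a hedged sketch; the lock file path /tmp/demo.lock is a placeholder):

flock -n /tmp/demo.lock sleep 60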
Open another terminal and try to run the same command. You’ll see that the second attempt exits immediately because the lock is already acquired.
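Applied to the earlier cron job, the entry becomes (again a sketch, with a placeholder lock file):

* * * * * flock -n /tmp/script.lock /path/to/script.sh

The -n flag tells flock to exit immediately instead of waiting when the lock is already held, so overlapping runs are simply skipped rather than queued.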
Preventing multiple executions of cron jobs is essential for maintaining data consistency, system stability, and efficiency. By using flock, you can easily enforce single execution without complex logic.
Simple & efficient solution.
No external dependencies required.
Works seamlessly with cron jobs.
So next time you set up a cron job, add flock and sleep peacefully knowing your tasks won’t collide.
Hey everyone! Today, we had an exciting Linux installation session at our college. We expected many to do a full Linux installation, but instead, we set up dual boot on 10+ machines!
Topics Covered:
Syed Jafer – FOSS, GLUGs, and open-source communities
Salman – Why FOSS matters & Linux Commands
Dhanasekar – Linux and DevOps
Guhan – GNU and free software
Challenges We Faced
BitLocker Encryption – Had to disable BitLocker on some laptops
BIOS/UEFI Problems – Secure Boot, boot order changes needed
GRUB Issues – Windows not showing up, required boot-repair
The top command in Linux is a powerful utility that provides real-time information about system performance, including CPU usage, memory usage, running processes, and more.
It is an essential tool for system administrators to monitor system health and manage resources effectively.
1. Basic Usage
Simply running top without any arguments displays an interactive screen showing system statistics and a list of running processes:
$ top
2. Understanding the top Output
The top interface is divided into multiple sections:
Header Section
This section provides an overview of the system status, including uptime, load averages, and system resource usage.
Uptime and Load Average – Displays how long the system has been running and the average system load over the last 1, 5, and 15 minutes.
Task Summary – Shows the number of processes in various states:
Running – Processes actively executing on the CPU.
Sleeping – Processes waiting for an event or resource.
Stopped – Processes that have been paused.
Zombie – Processes that have completed execution but still have an entry in the process table. These occur when the parent process has not yet read the exit status of the child process. Zombie processes do not consume system resources but can clutter the process table if not handled properly.
CPU Usage – Breaks down CPU utilization into different categories:
us (User Space) – CPU time spent on user processes.
sy (System Space) – CPU time spent on kernel operations.
id (Idle) – Time when the CPU is not being used.
wa (I/O Wait) – Time spent waiting for I/O operations to complete.
st (Steal Time) – CPU cycles stolen by a hypervisor in a virtualized environment.
Memory Usage – Shows the total, used, free, and available RAM.
Swap Usage – Displays total, used, and free swap memory, which is used when RAM is full.
Process Table
The table below the header lists active processes with details such as:
PID – Process ID, a unique identifier for each process.
USER – The owner of the process.
PR – Priority of the process, affecting its scheduling.
NI – Nice value, which determines how favorable the process scheduling is.
VIRT – The total virtual memory used by the process.
RES – The actual RAM used by the process.
SHR – The shared memory portion.
S – Process state:
R – Running
S – Sleeping
Z – Zombie
T – Stopped
%CPU – The percentage of CPU time used.
%MEM – The percentage of RAM used.
TIME+ – The total CPU time consumed by the process.
COMMAND – The command that started the process.
3. Interactive Commands
While running top, various keyboard shortcuts allow dynamic interaction:
q – Quit top.
h – Display help.
k – Kill a process by entering its PID.
r – Renice a process (change priority).
z – Toggle color/monochrome mode.
M – Sort by memory usage.
P – Sort by CPU usage.
T – Sort by process runtime.
1 – Toggle CPU usage breakdown for multi-core systems.
u – Filter processes by a specific user.
s – Change update interval.
4. Command-Line Options
The top command supports various options for customization:
-b (Batch mode): Used for scripting to display output in a non-interactive mode.
$ top -b -n 1
-n specifies the number of iterations before exit.
-o FIELD (Sort by a specific field):
$ top -o %CPU
Sorts by CPU usage.
-d SECONDS (Refresh interval):
$ top -d 3
Updates the display every 3 seconds.
-u USERNAME (Show processes for a specific user):
$ top -u john
-p PID (Monitor a specific process):
$ top -p 1234
5. Customizing top Display
Persistent Customization
To save custom settings, press W while running top. This saves the configuration to ~/.toprc.
Changing Column Layout
Press f to toggle the fields displayed.
Press o to change sorting order.
Press X to highlight sorted columns.
6. Alternative to top: htop, btop
For a more user-friendly experience, htop and btop are popular alternatives.
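A minimal sketch for trying them out, assuming a Debian or Ubuntu system where both are packaged via apt (package names and availability may differ on other distributions):

sudo apt install htop
htop

sudo apt install btop
btop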
Git is a powerful version control system that every developer should master. Whether you’re a beginner or an experienced developer, knowing a few handy Git command-line tricks can save you time and improve your workflow. Here are 20 essential Git tips and tricks to boost your efficiency.
1. Undo the Last Commit (Without Losing Changes)
git reset --soft HEAD~1
If you made a commit but want to undo it while keeping your changes, this command resets the last commit but retains the modified files in your staging area.
This is useful when you realize you need to make more changes before committing.
If you also want to remove the changes from the staging area but keep them in your working directory, use,
git reset HEAD~1
2. Discard Unstaged Changes
git checkout -- <file>
Use this to discard local changes in a file before staging. Be careful, as this cannot be undone! If you want to discard all unstaged changes in your working directory, use,
git reset --hard HEAD
3. Delete a Local Branch
git branch -d branch-name
Removes a local branch safely if it’s already merged. If it’s not merged and you still want to delete it, use -D
git branch -D branch-name
4. Delete a Remote Branch
git push origin --delete branch-name
Deletes a branch from the remote repository, useful for cleaning up old feature branches. If you mistakenly deleted the branch and want to restore it, you can use
git checkout -b branch-name origin/branch-name
if it still exists remotely.
5. Rename a Local Branch
git branch -m old-name new-name
Useful when you want to rename a branch locally without affecting the remote repository. To update the remote reference after renaming, push the renamed branch and delete the old one,
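A hedged sketch of those two follow-up commands (branch names are placeholders):

git push -u origin new-name
git push origin --delete old-name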
Instead of cloning the entire repository, a single-branch clone (git clone --branch branch-name --single-branch repository-url) fetches only the specified branch, saving time and space. If you want all branches but don’t want to check them out initially:
git clone --mirror repository-url
12. Change the Last Commit Message
git commit --amend -m "New message"
Use this to correct a typo in your last commit message before pushing. Be cautious—if you’ve already pushed, use
git push --force-with-lease
13. See the List of Tracked Files
git ls-files
Displays all files being tracked by Git, which is useful for auditing your repository. To see ignored files as well, see the sketch below.
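A hedged one-liner using standard git ls-files flags (it lists untracked files that match your ignore rules):

git ls-files --others --ignored --exclude-standard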
For the last few days, I have been exploring Buildpacks. I am amazed at how this tool reduces the developer’s pain. In this blog, I jot down my experience with Buildpacks.
Before trying out Buildpacks, we need to understand: what is an OCI image?
What is an OCI Image?
An OCI Image (Open Container Initiative Image) is a standard format for container images, defined by the Open Container Initiative (OCI) to ensure interoperability across different container runtimes (Docker, Podman, containerd, etc.).
It consists of,
Manifest – Metadata describing the image (layers, config, etc.).
Config JSON – Information about how the container should run (CMD, ENV, etc.).
Filesystem Layers – The actual file system of the container.
OCI Image Specification ensures that container images built once can run on any OCI-compliant runtime.
Does Docker Create OCI Images?
Yes, Docker creates OCI-compliant images. Since Docker v1.10+, Docker has been aligned with the OCI Image Specification, and all Docker images are OCI-compliant by default.
When you build an image with docker build, it follows the OCI Image format.
When you push/pull images to registries like Docker Hub, they follow the OCI Image Specification.
However, Docker also supports its legacy Docker Image format, which existed before OCI was introduced. Most modern registries and runtimes (Kubernetes, Podman, containerd) support OCI images natively.
What is a Buildpack?
A buildpack is a framework for transforming application source code into a runnable image by handling dependencies, compilation, and configuration. Buildpacks are widely used in cloud environments like Heroku, Cloud Foundry, and Kubernetes (via Cloud Native Buildpacks).
Overview of Buildpack Process
The buildpack process consists of two primary phases:
Detection Phase: Determines if the buildpack should be applied based on the app’s dependencies.
Build Phase: Executes the necessary steps to prepare the application for running in a container.
Buildpacks work with a lifecycle manager (e.g., Cloud Native Buildpacks’ lifecycle) that orchestrates the execution of multiple buildpacks in an ordered sequence.
Builder: The Image That Executes the Build
A builder is an image that contains all necessary components to run a buildpack.
Components of a Builder Image
Build Image – Used during the build phase (includes compilers, dependencies, etc.).
Run Image – A minimal environment for running the final built application.
Lifecycle – The core mechanism that executes buildpacks, orchestrates the process, and ensures reproducibility.
Stack: The Combination of Build and Run Images
Build Image + Run Image = Stack
Build Image: Base OS with tools required for building (e.g., Ubuntu, Alpine).
Run Image: Lightweight OS with only the runtime dependencies for execution.
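For example, building a Python app with the pack CLI might look like this sketch (the app name, path, and the Paketo builder are assumptions, not from the original post):

pack build my-python-app --path . --builder paketobuildpacks/builder-jammy-base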
It detects Python, installs dependencies, and builds the app into a container. Docker requires a Dockerfile, which developers must manually configure and maintain.
Automatic Security Updates
Buildpacks automatically patch base images for security vulnerabilities.
If there’s a CVE in the OS layer, Buildpacks update the base image without rebuilding the app.
pack rebase my-python-app
No need to rebuild! It replaces only the OS layers while keeping the app the same.
Standardized & Reproducible Builds
Ensures consistent images across environments (dev, CI/CD, production). Example: Running the same build locally and on Heroku/Cloud Run,
pack build my-app
Extensibility: Custom Buildpacks
Developers can create custom Buildpacks to add special dependencies.
GitHub Actions is a powerful tool for automating workflows directly in your repository. In this blog, we’ll explore how to efficiently set up GitHub Actions to handle Docker workflows with environments, secrets, and protection rules.
Why Use GitHub Actions for Docker?
My code base is on GitHub, and I want to try out GitHub Actions to build and push images to Docker Hub seamlessly.
Setting Up GitHub Environments
GitHub Environments let you define settings specific to deployment stages. Here’s how to configure them:
1. Create an Environment
Go to your GitHub repository and navigate to Settings > Environments. Click New environment, name it (e.g., production), and save.
2. Add Secrets and Variables
Inside the environment settings, click Add secret to store sensitive information like DOCKER_USERNAME and DOCKER_TOKEN.
Use Variables for non-sensitive configuration, such as the Docker image name.
3. Optional: Set Protection Rules
Enforce rules like requiring manual approval before deployments. Restrict deployments to specific branches (e.g., main).
Sample Workflow for Building and Pushing Docker Images
Below is a GitHub Actions workflow for automating the build and push of a Docker image based on a minimal Flask app.
Workflow: .github/workflows/docker-build-push.yml
name: Build and Push Docker Image

on:
  push:
    branches:
      - main  # Trigger workflow on pushes to the `main` branch

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    environment: production  # Specify the environment to use

    steps:
      # Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v3

      # Log in to Docker Hub using environment secrets
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}

      # Build the Docker image using an environment variable
      - name: Build Docker image
        env:
          DOCKER_IMAGE_NAME: ${{ vars.DOCKER_IMAGE_NAME }}
        run: |
          docker build -t ${{ secrets.DOCKER_USERNAME }}/$DOCKER_IMAGE_NAME:${{ github.run_id }} .

      # Push the Docker image to Docker Hub
      - name: Push Docker image
        env:
          DOCKER_IMAGE_NAME: ${{ vars.DOCKER_IMAGE_NAME }}
        run: |
          docker push ${{ secrets.DOCKER_USERNAME }}/$DOCKER_IMAGE_NAME:${{ github.run_id }}
I usually have a question: as a developer, I have logs; isn’t that enough? With a curious mind, I attended the Grafana & Friends Chennai meetup (Jan 25th, 2025).
Had an awesome time meeting fellow tech enthusiasts (devops engineers) and learning about cool ways to monitor and understand data better. Big shoutout to the Grafana Labs community and Presidio for hosting such a great event!
The sandwiches and juice were nice.
Talk Summary:
1⃣ Making Data Collection Easier with Grafana Alloy Dinesh J. and Krithika R shared how Grafana Alloy, combined with Open Telemetry, makes it super simple to collect and manage data for better monitoring.
2⃣ Running Grafana in Kubernetes Lakshmi Narasimhan Parthasarathy (https://lnkd.in/gShxtucZ) showed how to set up Grafana in Kubernetes in 4 different ways (vanilla, helm chart, grafana operator, kube-prom-stack). He is building a SaaS product https://lnkd.in/gSS9XS5m (Heroku on your own servers).
3⃣ Observability for Frontend Apps with Grafana Faro Selvaraj Kuppusamy showed how Grafana Faro can help frontend developers monitor what’s happening on websites and apps in real time. This makes it easier to spot and fix issues quickly. We were able to see Core Web Vitals and traces too. I was surprised by this.
Thanks Achanandhi M for organising this wonderful meetup. You did well. I came across Achanandhi M on Medium; he regularly writes blogs on cloud-related topics. https://lnkd.in/ghUS-GTc Check out his blog.
Also, he shared some tasks for us:
1. Create your first Grafana dashboard. Objective: Create a basic Grafana dashboard to visualize data in various formats such as tables, charts and graphs. Also, try to connect to multiple data sources to get diverse data for your dashboard.
2. Monitor your Linux system’s health with Prometheus, Node Exporter and Grafana. Objective: Use Prometheus, Node Exporter and Grafana to monitor your Linux machine’s health by tracking key metrics like CPU, memory and disk usage.
3. Using Grafana Faro to track User Actions (Like Button Clicks) and Identify the Most Used Features.
Today, I came across a blog on undoing a git pull. In this post, I have restated that blog in my own words.
Mistakes happen. You run a git pull and suddenly find your repository in a mess. Maybe conflicts arose, or perhaps the changes merged from the remote branch aren’t what you expected.
Fortunately, Git’s reflog comes to the rescue, allowing you to undo a git pull and restore your repository to its previous state. Here’s how you can do it.
Understanding Reflog
Reflog is a powerful feature in Git that logs every update made to the tips of your branches and references. Even actions like resets or rebases leave traces in the reflog. This makes it an invaluable tool for troubleshooting and recovering from mistakes.
Whenever you perform a git pull, Git updates the branch pointer, and the reflog records this action. By examining the reflog, you can identify the exact state of your branch before the pull and revert to it if needed.
Step By Step Guide to UNDO a git pull
1. Check Your Current State
Ensure you’re aware of the current state of your branch. If you have uncommitted changes, stash or commit them to avoid losing any work.
git stash
# or
git add . && git commit -m "Save changes before undoing pull"
2. Inspect the Reflog
View the recent history of your branch using the reflog:
git reflog
This command will display a list of recent actions, showing commit hashes and descriptions. For example,
0a1b2c3 (HEAD -> main) HEAD@{0}: pull origin main: Fast-forward
4d5e6f7 HEAD@{1}: commit: Add new feature
8g9h0i1 HEAD@{2}: checkout: moving from feature-branch to main
3. Identify the Pre-Pull Commit
Locate the commit hash of your branch’s state before the pull. In the above example, it’s 4d5e6f7, which corresponds to the commit made before the git pull.
4. Reset to the Previous Commit
Use the git reset command to move your branch back to its earlier state:
git reset <commit-hash>
By default, git reset performs a mixed reset: the pulled commits are undone, but the changes they brought in are not deleted; they remain in your working directory as unstaged modifications. Use --hard instead if you want to discard them completely.
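For the example above, the pre-pull commit was 4d5e6f7, so a hedged one-liner to discard the pulled changes entirely would be (use with care; --hard also removes uncommitted work):

git reset --hard 4d5e6f7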
The next time a pull operation goes awry, don’t panic—let the reflog guide you back to safety!
Today, I learnt about the Claim Check Pattern, which describes how to handle large messages in a queue. Every message broker has a defined message size limit; if our message exceeds that limit, it won’t be accepted.
The Claim Check Pattern emerges as a pivotal architectural design to address the challenge of managing large payloads in a decoupled and efficient manner. In this blog, I jot down notes on my learning for my future self.
What is the Claim Check Pattern?
The Claim Check Pattern is a messaging pattern used in distributed systems to manage large messages efficiently. Instead of transmitting bulky data directly between services, this pattern extracts and stores the payload in a dedicated storage system (e.g., object storage or a database).
A lightweight reference or “claim check” is then sent through the message queue, which the receiving service can use to retrieve the full data from the storage.
This pattern is inspired by the physical process of checking in luggage at an airport: you hand over your luggage, receive a claim check (a token), and later use it to retrieve your belongings.
How Does the Claim Check Pattern Work?
The process typically involves the following steps:
Data Submission
The sender service splits a message into two parts:
Metadata: A small piece of information that provides context about the data.
Payload: The main body of data that is too large or sensitive to send through the message queue.
Storing the Payload
The sender uploads the payload to a storage service (e.g., AWS S3, Azure Blob Storage, or Google Cloud Storage).
The storage service returns a unique identifier (e.g., a URL or object key).
Sending the Claim Check
The sender service places the metadata and the unique identifier (claim check) onto the message queue.
Receiving the Claim Check
The receiver service consumes the message from the queue, extracts the claim check, and retrieves the payload from the storage system.
Processing
The receiver processes the payload alongside the metadata as required.
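A minimal shell sketch of the pattern using the AWS CLI (the bucket name, object key, and queue URL are placeholders; any object store plus queue combination works the same way):

# Sender: store the large payload, then enqueue only a small claim check
aws s3 cp large-payload.json s3://claim-check-bucket/payloads/order-123.json
aws sqs send-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/orders \
  --message-body '{"payload_key": "payloads/order-123.json", "type": "order"}'

# Receiver: read the claim check, then fetch the payload from storage
aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/orders
aws s3 cp s3://claim-check-bucket/payloads/order-123.json ./order-123.json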
Use Cases
1. Media Processing Pipelines – In video transcoding systems, raw video files can be uploaded to storage while metadata (e.g., video format and length) is passed through the message queue.
2. IoT Systems – IoT devices generate large datasets. Using the Claim Check Pattern ensures efficient transmission and processing of these data chunks.
3. Data Processing Workflows – In big data systems, datasets can be stored in object storage while processing metadata flows through orchestration tools like Apache Airflow.
4. Event-Driven Architectures – For systems using event-driven models, large event payloads can be offloaded to storage to avoid overloading the messaging layer.
Today, I got a refresher on Blue-Green Deployment from a podcast: https://open.spotify.com/episode/03p86zgOuSEbNezK71CELH. Deployment design is an area I haven’t touched yet. In this blog, I jot down notes on Blue-Green Deployment for my future self.
What is Blue-Green Deployment?
Blue-Green Deployment is a release management strategy that involves maintaining two identical environments, referred to as “Blue” and “Green.” At any point in time, only one environment is live (receiving traffic), while the other remains idle or in standby. Updates are deployed to the idle environment, thoroughly tested, and then switched to live with minimal downtime.
How It Works
This approach involves setting up two environments: the Blue environment, which serves live traffic, and the Green environment, a replica used for staging updates.
Updates are first deployed to the Green environment, where comprehensive testing is performed to ensure functionality, performance, and integration meet expectations.
Once testing is successful, the routing mechanism, such as a DNS or API Gateway or load balancer, is updated to redirect traffic from the Blue environment to the Green environment.
The Green environment then becomes live, while the Blue environment transitions to an idle state.
If issues arise, traffic can be reverted to the Blue environment for a quick recovery with minimal impact.
Benefits of Blue-Green Deployment
Blue-Green Deployment provides zero downtime during the deployment process, ensuring uninterrupted user experiences.
Rollbacks are simplified because the previous version remains intact in the Blue environment, enabling quick reversion if necessary. Forward and backward compatibility needs to be considered, e.g., for the database schema.
It also allows seamless testing in the Green environment before updates go live, reducing risks by isolating production from deployment issues.
Challenges and Considerations
Maintaining two identical environments can be resource intensive.
Ensuring synchronization between environments is critical to prevent discrepancies in configuration and data.
Handling live database changes during the environment switch is complex, requiring careful planning for database migrations.
Several tools and platforms support Blue-Green Deployment. Kubernetes simplifies managing multiple environments through namespaces and services.
AWS Elastic Beanstalk offers built-in support for Blue-Green Deployment, while HashiCorp Terraform automates the setup of Blue-Green infrastructure.
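Building on the Kubernetes option above, here is a hedged sketch of how the traffic switch can work with a Service selector (names and labels are illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue   # change this to green to cut traffic over
  ports:
    - port: 80
      targetPort: 8080

Switching the live environment is then a one-line patch:

kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}'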
To implement this strategy, organizations should design infrastructure capable of supporting two identical environments, automate deployments using CI/CD pipelines, monitor and test thoroughly, and define rollback procedures to revert to previous versions when necessary.
In this blog, I’ll walk you through the journey of transitioning from a PythonAnywhere server to GitHub Actions for automating the delivery of daily quizzes to a Telegram group (https://t.me/parottasalna). This implementation highlights the benefits of using GitHub Actions to run a cron job.
Problem Statement
I wanted to send a daily quiz to a Telegram group to keep members engaged and learning. Initially, I hosted the solution on a PythonAnywhere server.
Some of the limitations are,
Only one cron job is allowed for a free account.
Every x days, I need to visit the dashboard and reactivate my account to keep it working.
Recently, I started learning GitHub Actions, so I thought of leveraging its schedule mechanism for my use case.
Key Features of My Solution
Automated Scheduling: GitHub Actions triggers the script daily at a specified time.
Secure Secrets Management: Sensitive information like Telegram bot tokens and chat IDs are stored securely using GitHub Secrets.
Serverless Architecture: No server maintenance; everything runs on GitHub’s infrastructure.
A Python script (populate_questions.py) to create questions and store each question in the respective directory.
A Python script (runner.py) which takes telegram_bot_url, chat_id and category as input to read the correct question from the category directory and send the message.
A GitHub Actions workflow to automate execution.
Workflow Yaml File
name: Docker Quiz Sender

on:
  schedule:
    - cron: "30 4 * * *"
  workflow_dispatch:

jobs:
  run-python:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi

      - name: Run script with secrets
        run: python runner.py --telegram_url ${{ secrets.TELEGRAM_BOT_URL }} --chat_id ${{ secrets.CHAT_ID }} --category docker
Benefits of the New Implementation
Cost-Effectiveness: No server costs thanks to GitHub’s free tier for public repositories.
Reliability: GitHub Actions ensures the script runs on schedule without manual intervention.
Today, I was checking YouTube videos on GitHub Actions and came across a video on sending a mail via a GitHub Action: https://www.youtube.com/watch?v=SkD7KQ3KzZs&t=108s. This blog is just an implementation of that video.
What am I going to do?
I need to send a mail using my Gmail ID via GitHub Actions.
GitHub Actions is a managed CI/CD pipeline offering. It provides free runners for running the code.
1. Workflow, Jobs, Steps
A workflow is a collection of jobs defined in a .yml file inside .github/workflows. Each workflow consists of jobs, and jobs have steps.
name: My First Workflow
on: push
jobs:
  example-job:
    runs-on: ubuntu-latest
    steps:
      - name: Print a message
        run: echo "Hello, GitHub Actions!"
2. Availability and Pricing
GitHub Actions is free for public repositories and has free usage limits for private repositories, depending on the plan. Paid plans increase these limits. For detailed pricing, visit the GitHub Actions pricing page.
3. First workflow with basic echo commands
Start with a workflow triggered by any push event. Here’s a simple example,
name: Echo Workflow
on: push
jobs:
  echo-job:
    runs-on: ubuntu-latest
    steps:
      - name: Say Hello
        run: echo "Hello from my first workflow!"
4. Multiline Shell Commands
You can use the | symbol to write multiline shell commands.
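A minimal hedged sketch of the | syntax (workflow, job, and step names are placeholders):

name: Multiline Commands Workflow
on: push
jobs:
  multiline-job:
    runs-on: ubuntu-latest
    steps:
      - name: Run several commands
        run: |
          echo "First command"
          echo "Second command"
          ls -la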
5. A new workflow with push events
Workflows can be triggered by specific events like push. This example triggers only on push to the main branch.
name: Push Event Workflow
on:
  push:
    branches:
      - main
jobs:
  push-job:
    runs-on: ubuntu-latest
    steps:
      - name: On Push
        run: echo "Code pushed to main branch!"
6. Using actions in workflow (Marketplace and Open Source)
The GitHub Actions Marketplace offers reusable actions, for example the actions/checkout action used in the sketch below.
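A hedged sketch of using a Marketplace action (workflow and step names are placeholders):

name: Checkout Example
on: push
jobs:
  checkout-job:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: List repository files
        run: ls -la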
Here is a list of problem statements, enough to get your hands dirty with Git. These cover the commands that you will mostly use in your development work.
Problem 1
Initialize a Repository.
Setup user details globally.
Setup project specific user details.
Check Configuration – List the configurations.
Problem 2
Add specific files. Create two files, app.js and style.css. Use git add to stage only style.css. This allows selective addition of files to the staging area before committing.
Stage all files except one.
Problem 3
Commit with a message
Amend a commit
Commit without staging
Problem 4
Create a Branch
Create a new branch named feature/api to work on a feature independently without affecting the main branch.
Delete a branch.
Force delete a branch.
Rename a branch.
List all branches.
Problem 5
Switch to a branch
Switch to the main branch using git checkout.
Create and switch to a branch
Create a new branch named bugfix/001 and switch to it in a single command with git checkout -b.
Problem 6
Start with a repository containing a file named project.txt
Create two branches (feature-1 and feature-2) from the main branch.
Make changes to project.txt in both branches.
Attempt to merge feature-1 and feature-2 into the main branch.
Resolve any merge conflicts and complete the merge process.
Problem 7
View history in one-line format
Graphical commit history
Filter commits by Author
Show changes in a commit
Problem 8
Fetch updates from remote
Fetch and Merge
Fetch changes from the remote branch origin/main and merge them into your local main
List remote references
Problem 9
Create a stash
Apply a stash
Pop a stash
View stash
Problem 10
You need to undo the last commit but want to keep the changes staged for a new commit. What will you do?
Problem 11
You realize you staged some files for commit but want to unstage them while keeping the changes in your working directory. Which git command will allow you to unstage the files without losing any changes?
Problem 12
You decide to completely discard all local changes and reset the repository to the state of the last commit. What git command should you run to discard all changes and reset your working directory?
Monitoring AWS costs is essential for keeping budgets in check. In this guide, we’ll walk through creating an AWS Lambda function to retrieve cost details and send them to email (via SES) and Slack.
Prerequisites
1. AWS Account with IAM permissions for Lambda, SES, and Cost Explorer.
2. Slack Webhook URL to send messages.
3. Configured SES Email for notifications.
4. S3 Bucket for storing cost reports as CSV files.
Step 1: Enable Cost Explorer
Go to AWS Billing Dashboard > Cost Explorer.
Enable Cost Explorer to access detailed cost data.
Step 2: Create an S3 Bucket
Create an S3 bucket (e.g., aws-cost-reports) to store cost reports.
Ensure the bucket has appropriate read/write permissions for Lambda.
Step 3: Write the Lambda Code
1. Create a Lambda Function
Go to AWS Lambda > Create Function.
Select Python Runtime (e.g., Python 3.9).
2. Add Dependencies
Use a Lambda layer or package libraries like boto3 and slack_sdk.
3. Write your Python code and execute it. (If you want my code, just comment "ease-py-code" on my blog, and I will share it with you 🫶)
Step 4: Add S3 Permissions
Update the Lambda execution role to allow s3:PutObject, ses:SendEmail, and ce:GetCostAndUsage.
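A hedged sketch of the corresponding IAM policy statement (the bucket name matches the example above but is a placeholder; scope resources more tightly in a real setup):

{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::aws-cost-reports/*" },
    { "Effect": "Allow", "Action": "ses:SendEmail", "Resource": "*" },
    { "Effect": "Allow", "Action": "ce:GetCostAndUsage", "Resource": "*" }
  ]
}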
Step 5: Test the Lambda
1. Trigger Lambda manually using a test event.
2. Verify the cost report is:
Uploaded to the S3 bucket.
Emailed via SES.
Notified in Slack.
Conclusion
With this setup, AWS cost reports are automatically delivered to your inbox and Slack, keeping you updated on spending trends. Fine-tune this solution by customizing the report frequency or grouping costs by other dimensions.
Kubernetes is a powerful platform that simplifies the management of containerized applications. If you’re familiar with the fundamentals, it’s time to take a step further and explore intermediate concepts that enhance your ability to manage and optimize Kubernetes clusters.
Understanding Deployments
A Deployment ensures your application runs reliably by managing scaling, updates, and rollbacks.
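A minimal hedged sketch of a Deployment manifest (names, image, and replica count are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.25
          ports:
            - containerPort: 80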
Using ConfigMaps and Secrets
Kubernetes separates application configuration and sensitive data from the application code using ConfigMaps and Secrets.
ConfigMaps
Store non-sensitive configurations, such as environment variables or application settings.
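For instance, a hedged ConfigMap sketch (keys and values are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: "production"
  LOG_LEVEL: "info"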
3. Probes
Probes ensure your application is healthy and ready to handle traffic.
Liveness Probe
Checks if your application is running. If it fails, Kubernetes restarts the pod.
Readiness Probe
Checks if your application is ready to accept traffic. If it fails, Kubernetes stops routing requests to the pod.
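A hedged sketch of both probes on a container (paths and ports are assumptions):

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10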
4. Resource Requests and Limits
To ensure efficient resource utilization, define requests (minimum resources a pod needs) and limits (maximum resources a pod can use).
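A hedged snippet for a container spec (the values are illustrative):

resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"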
5. Horizontal Pod Autoscaling (HPA)
Scale your application dynamically based on CPU or memory usage.
Example:
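A minimal hedged sketch of an HPA manifest (names and thresholds are assumptions, targeting the Deployment sketched earlier):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70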
This ensures your application scales automatically when resource usage increases or decreases.
6. Network Policies
Control how pods communicate with each other and external resources using Network Policies.
Conclusion
By mastering these slightly advanced Kubernetes concepts, you’ll improve your cluster management, application reliability, and resource utilization. With this knowledge, you’re well-prepared to dive into more advanced topics like Helm, monitoring with Prometheus, and service meshes like Istio.
In today’s tech-driven world, Kubernetes has emerged as one of the most powerful tools for container orchestration. Whether you’re managing a few containers or thousands of them, Kubernetes simplifies the process, ensuring high availability, scalability, and efficient resource utilization. This blog will guide you through the basics of Kubernetes, helping you understand its core components and functionality.
What is Kubernetes?
Kubernetes, often abbreviated as K8s, is an open-source platform developed by Google that automates the deployment, scaling, and management of containerized applications. It was later donated to the Cloud Native Computing Foundation (CNCF).
With Kubernetes, developers can focus on building applications, while Kubernetes takes care of managing their deployment and runtime.
Key Features of Kubernetes
Automated Deployment and Scaling
Kubernetes automates the deployment of containers and can scale them up or down based on demand.
Self-Healing
If a container fails, Kubernetes replaces it automatically, ensuring minimal downtime.
Load Balancing
Distributes traffic evenly across containers, optimizing performance and preventing overload.
Rollbacks and Updates
Kubernetes manages seamless updates and rollbacks for your applications without disrupting service.
Resource Management
Optimizes hardware utilization by efficiently scheduling containers across the cluster.
Core Components of Kubernetes
To understand Kubernetes, let’s break it down into its core components:
Cluster
A Kubernetes cluster consists of:
Master Node: The control plane managing the entire cluster.
Worker Nodes: Machines where containers run.
Pods: The smallest deployable unit in Kubernetes.
A pod can contain one or more containers that share resources like storage and networking.
Nodes: Physical or virtual machines that run the pods.
Managed by the Kubelet, a process ensuring pods are running as expected.
Services: Allow communication between pods and other resources, both inside and outside the cluster.
Examples include ClusterIP, NodePort, and LoadBalancer services.
ConfigMaps and Secrets:
ConfigMaps: Store configuration data for your applications.
Secrets: Store sensitive data like passwords and tokens securely.
Namespaces
Virtual clusters within a Kubernetes cluster, used for organizing and isolating resources.
Conclusion
Kubernetes has revolutionized the way we manage containerized applications. By automating tasks like deployment, scaling, and maintenance, it allows developers and organizations to focus on innovation. Whether you're a beginner or a seasoned developer, mastering Kubernetes is a skill that will enhance your ability to build and manage modern applications.