
Beej’s GDB guide in Tamil

5 November 2024 at 00:00

GDB க்கான பீஜின் விரைவான வழிகாட்டி (Beej’s Quick Guide to GDB)

I’ve translated Beej’s Guide to GDB into Tamil. It is based on a project idea put forth by Thanga Ayyanar, aka Gold Ayan, in Kaniyam Foundation’s Project Ideas. Initially I planned to do it as part of Hacktoberfest, but it took time, and in the meanwhile I also got busy with other work. Finally, the translation is out. If you find any mistakes, you can raise an issue in the repo where it is hosted. shrini said that after some review it will be published as a series of posts on the Kaniyam website.

This is my first open source contribution, so I feel happy that I’ve started to give something back to the FOSS and Tamil communities. It was also discussed by Thanga Ayyanar in the Nov 3 Kanchi LUG Weekly News. Hoping to contribute more in the future!


VS Code vs VSCodium

30 June 2025 at 17:08

Hi everyone! Welcome to another of my blogs.
Today we are going to look at the differences between VS Code and VSCodium.
We all know about VS Code. If you’re a developer, then you’ve probably used Visual Studio Code (VS Code), a popular and powerful code editor. It is very useful for beginners because of its user interface. But have you heard of VSCodium? They look almost the same, but there are some important differences behind the scenes.

What is VS Code?

VS Code is a free code editor made by Microsoft. It’s used by millions of developers for writing code in JavaScript, Python, C++, HTML, CSS, and many other languages. There are also some things that make VS Code special:

  • A nice user interface
  • Built-in terminal and debugger
  • Extension marketplace
  • Git integration
  • And more!

What is VSCodium?

VSCodium is almost the same as VS Code — it looks the same, works the same, and is also free.
But there’s one main difference:
VSCodium is 100% open source and doesn’t send any data to Microsoft.

What does that mean?
When you use VS Code, it may collect some basic data (like what extensions you use or how the app performs) and send it to Microsoft. This is called telemetry. VSCodium removes all that.
So if you’re someone who really cares about privacy or using only open-source software, VSCodium is the better choice.
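If you want to stay on VS Code but still reduce tracking, telemetry can be turned down via the telemetry.telemetryLevel user setting (the setting name comes from VS Code’s own documentation; the file path below is an assumption for a typical Linux setup). A minimal sketch:

```shell
# VS Code user settings normally live at ~/.config/Code/User/settings.json on Linux.
# We write a stand-alone fragment here so nothing real is overwritten;
# merge the key into your actual settings.json yourself.
cat > settings-fragment.json <<'EOF'
{
  "telemetry.telemetryLevel": "off"
}
EOF

cat settings-fragment.json
```

VSCodium simply ships with the telemetry code stripped out, so no such setting is needed there.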

Summary in one line:

  • VS Code is great for beginners and everyday use.
  • VSCodium is the same thing, but without Microsoft and tracking.

Some common doubts about these platforms:

1. Can I use the same extensions in both?
Yes, most extensions work in both VS Code and VSCodium.
But in VSCodium, you may need to manually connect to the VS Code Marketplace, and some Microsoft-only extensions might not work properly.
Note: if you’re using popular extensions like Prettier, ESLint, or Live Server, they will likely work in both.

2. Do they look the same?
Yes! When you open VSCodium, it looks and feels almost exactly like VS Code.
The only small difference is the logo and name — VSCodium has a blue icon, and it doesn’t say “Microsoft”.

3. Why do people care about telemetry (data collection)?
Some developers are very serious about privacy or using only free and open-source software.
VS Code sends some data back to Microsoft to improve the product. It’s not personal data, but some people prefer zero tracking, which is why they choose VSCodium.

4. Who maintains VSCodium?
VSCodium is not made by a company. It is maintained by open-source developers who want to give people a version of VS Code that doesn’t include Microsoft’s parts.

So that's it, guys. That is all the information I wanted to share with you. I hope it is useful for everyone. See you in the next blog.

An intro to Ente Photos!

18 June 2025 at 15:56

Hi everyone!
Welcome to my next blog.
Today is the first day I heard the name "Ente Photos", so in today's blog we will explore that platform.

The meaning of the name:
"Ente" means "mine" in Malayalam (many of you already knew that). It's your own secure gallery.

Key features:

End-to-End Encryption(E2EE)

  • The main feature of this platform is security. Our photos are encrypted.
  • Even the company's servers can't see our data.
  • If a server is hacked, our photos remain unreadable.

Cross-Platform Access

  • It's available on Android, iOS, the web, Windows, Linux, and macOS.
  • We can access our photos anywhere, securely.

Family sharing

  • We can share our storage plan with up to 5 family members.
  • We can create shared albums with customized icon settings with full privacy.

On-Device AI Search

  • If we want to search for a photo in a large collection, we can search by objects, faces, and places, just like in Google Photos.
  • But all AI runs on your device (no cloud processing = better privacy).

Original Quality Uploads

  • Ente doesn't compress our images or videos to save space on its servers.

Who made this?

  • It is built by a team based in Kerala, including ex-Google engineer Vishnu Mohandas.

  • The platform has gained international recognition for its privacy and simplicity.

That's all I prepared to tell you in today's blog. You can use this platform in addition to Google Photos.
Download the app from https://ente.io

Thank you for reading my blog! I hope you liked it. See you in my next blog.

Open source in my vision! Before and after knowing these things

14 June 2025 at 04:01

Welcome to my blog! I'm really happy to see you here again.

Before

In college, my final semester included a paper called 'Open-Source Systems'. I actually got good marks in that paper as well. But my knowledge of open source was very basic. No, not even basic: I really didn't understand what open source is. The things I got wrong or misunderstood about open source were:

  • "Open source is a free resource." I took this to mean "we can use open source without paying any amount". That is also correct, but it is not what the phrase really means. I'll explain the real meaning in the 'After' part.

  • "Open source is Linux only." I really believed that open source was only about Linux. The only example I gave in my exams for open source was Linux.

  • "Open source is a big corporation; behind it there must be a big company environment." Even now it is hard to believe that this understanding was wrong: there is nothing like that.

After

Learning more about open source was mind-blowing. The clearer meanings and real truths I gained are:

  • The real meaning of "open source is a free resource" is that the source code is free: anyone can access it. We can edit, update, and share it. The reason it is free is so that we can learn from the code and grow with the open-source community. An interesting fact is that we can also contribute. By contributing to the open-source community, we can build a strong foundation for our careers.

  • Open source is not only about Linux; there are many other open-source projects, for example Android, VS Code, and Firefox. Linux is simply a popular open-source OS. I will tell you about Linux in more detail in my next blog.

  • There is no single company behind open source. There is a community, the open-source community: a group of people managing things remotely. Interestingly, there is no office for open source. Can you believe this? Actually, I couldn't. How can a community manage all these things without a physical location? That is the power of community. Its people do not expect profit or income; they are passionate about building open-source software.

So these are the things I really wanted to share with you! You may already know them, but it was new to me to see this different side of the tech world. Thank you for reading my blog. I hope you liked it!

📊 Learn PostgreSQL in Tamil: From Zero to 5★ on HackerRank in Just 10 Days

25 May 2025 at 12:42

 

PostgreSQL is one of the most powerful, stable, and open-source relational database systems trusted by global giants like Apple, Instagram, and Spotify. Whether you’re building a web application, managing enterprise data, or diving into analytics, understanding PostgreSQL is a skill that sets you apart.

But what if you could master it in just 10 days, in Tamil, with hands-on learning and a guaranteed 5★ rating on HackerRank as your goal?

Sounds exciting? Let’s dive in.

🎯 Why This Bootcamp?

This 10-day PostgreSQL Bootcamp in Tamil is designed to take you from absolute beginner to confident practitioner, with a curriculum built around real-world use cases, performance optimization, and daily challenge-driven learning.

Whether you’re a

  • Student trying to get into backend development
  • Developer wanting to upskill and crack interviews
  • Data analyst exploring SQL performance
  • Tech enthusiast curious about databases

…this bootcamp gives you the structured path you need.

🧠 What You’ll Learn

Over 10 days, we’ll cover

  • ✅ PostgreSQL installation & setup
  • ✅ PostgreSQL architecture and internals
  • ✅ Writing efficient SQL queries with proper formatting
  • ✅ Joins, CTEs, subqueries, and advanced querying
  • ✅ Indexing, query plans, and performance tuning
  • ✅ Transactions, isolation levels, and locking mechanisms
  • ✅ Schema design for real-world applications
  • ✅ Debugging techniques, tips, and best practices
  • ✅ Daily HackerRank challenges to track your progress
  • ✅ Solve 40+ HackerRank SQL challenges

🧪 Bootcamp Highlights

  • 🗣 Language of instruction: Tamil
  • 💻 Format: Online, live and interactive
  • 🎥 Daily live sessions with Q&A
  • 📊 Practice-oriented learning using HackerRank
  • 📚 Notes, cheat sheets, and shared resources
  • 🧑‍🤝‍🧑 Access to community support and mentorship
  • 🧠 Learn through real-world datasets and scenarios

Check out our previous Postgres session

📅 Details at a Glance

  • Duration: 10 Days
  • Language: Tamil
  • Format: Online, hands-on
  • Book Your Slot: https://topmate.io/parottasalna/1558376
  • Goal: Earn 5★ in PostgreSQL on HackerRank
  • Suitable for: Students, developers, DBAs, and tech enthusiasts

🔥 Why You Shouldn’t Miss This

  • Learn one of the most in-demand database systems in your native language
  • Structured learning path with practical tasks and daily targets
  • Build confidence to work on real projects and solve SQL challenges
  • Lifetime value from one affordable investment.

Will meet you in the session!

💾 Redis Is Open Source Again – What Does That Mean?

11 May 2025 at 02:32

Imagine you’ve been using a powerful tool for years to help you build apps faster. Yes, it’s Redis: a super-fast database that helps apps remember things temporarily, like logins or shopping cart items. It was free, open, and loved by developers.

But one day, the team behind Redis changed the rules. They said

“You can still use Redis, but if you’re a big cloud company (like Amazon or Google) offering it to others as a service, you need to play by our special rules or pay us.”

This change upset many in the tech world. Why?

Because open source means freedom: you can use it, improve it, and even share it with others. Redis’s new license in 2024 took away some of that freedom. It wasn’t completely closed, but it wasn’t truly open either. It hurt AWS and Microsoft the most.

What Happened Next?

Developers and tech companies didn’t like the new rules. So they said,

“Fine, we’ll make our own open version of Redis.”

That’s how a new project called Valkey was born, a fork (copy) of Redis that stayed truly open-source.

Fast forward to May 2025 — Redis listened. They said

“We’re bringing back the open-source spirit. Redis version 8.0 will be under a proper open-source license again: AGPLv3.”

What’s AGPLv3?

It’s a type of license that says:

  • ✅ You can use, change, and share Redis freely.
  • 🌐 If you run a modified Redis on a website or cloud service, you must also share your changes with the world. (still hurts AWS and Azure)

This keeps things fair: no more companies secretly benefiting from Redis without giving back.

What Did Redis Say?

Rowan Trollope, Redis’s CEO, explained why they had changed the license in the first place:

“Big cloud companies were making money off Redis but not helping us or the open-source community.”

But now, by switching to AGPLv3, Redis is balancing two things:

  • Protecting their work from being misused
  • And staying truly open-source

Why This Is Good News

  • Developers can continue using Redis freely.
  • The community can contribute and improve Redis.
  • Fair rules apply to everyone, even giant tech companies.

Redis has come full circle. After a detour into more restricted territory, it’s back where it belongs: in the hands of everyone. This shows the power of the developer community, and why open source isn’t just about code; it’s about collaboration, fairness, and freedom.

Checkout this blog from Redis https://redis.io/blog/agplv3/

Deploying a Two-Tier Web Application on AWS with MySQL and Apache

By: Ragul.M
12 March 2025 at 12:46

In this blog, I will guide you through step-by-step instructions to set up a two-tier architecture on AWS using VPC, Subnets, Internet Gateway, Route Tables, RDS, EC2, Apache, MySQL, PHP, and HTML. This project will allow you to host a registration web application where users can submit their details, which will be stored in an RDS MySQL database.

Step 1: Create a VPC
1.1 Login to AWS Management Console

  • Navigate to the VPC service
  • Click Create VPC
  • Enter the following details:
  • VPC Name: my-vpc
  • IPv4 CIDR Block: 10.0.0.0/16
  • Tenancy: Default
  • Click Create VPC


Step 2: Create Subnets
2.1 Create a Public Subnet

  • Go to VPC > Subnets
  • Click Create Subnet
  • Choose my-vpc
  • Set Subnet Name: public-subnet
  • IPv4 CIDR Block: 10.0.1.0/24
  • Click Create

2.2 Create a Private Subnet
Repeat the steps above but set:

  • Subnet Name: private-subnet
  • IPv4 CIDR Block: 10.0.2.0/24


Step 3: Create an Internet Gateway (IGW) and Attach to VPC
3.1 Create IGW

  • Go to VPC > Internet Gateways
  • Click Create Internet Gateway
  • Set Name: your-igw
  • Click Create IGW

3.2 Attach IGW to VPC

  • Select your-igw
  • Click Actions > Attach to VPC
  • Choose my-vpc and click Attach


Step 4: Configure Route Tables
4.1 Create a Public Route Table

  • Go to VPC > Route Tables
  • Click Create Route Table
  • Set Name: public-route-table
  • Choose my-vpc and click Create
  • Edit Routes → Add a new route:
  • Destination: 0.0.0.0/0
  • Target: your-igw
  • Edit Subnet Associations → Attach public-subnet


Step 5: Create an RDS Database (MySQL)

  • Go to RDS > Create Database
  • Choose Standard Create
  • Select MySQL
  • Set DB instance identifier: my-rds
  • Master Username: admin
  • Master Password: yourpassword
  • Subnet Group: Select private-subnet
  • VPC Security Group: Allow 3306 (MySQL) from my-vpc
  • Click Create Database


Step 6: Launch an EC2 Instance

  • Go to EC2 > Launch Instance
  • Choose Ubuntu 22.04
  • Set Instance Name: my-ec2
  • Select my-vpc and attach public-subnet
  • Security Group: Allow
  • SSH (22) from your IP
  • HTTP (80) from anywhere
  • MySQL (3306) from my-vpc
  • Click Launch Instance


Step 7: Install Apache, PHP, and MySQL Client
7.1 Connect to EC2

ssh -i your-key.pem ubuntu@your-ec2-public-ip

7.2 Install LAMP Stack

sudo apt update && sudo apt install -y apache2 php libapache2-mod-php php-mysql mysql-client

7.3 Start Apache

sudo systemctl start apache2
sudo systemctl enable apache2

Step 8: Configure Web Application
8.1 Create the Registration Form

cd /var/www/html
sudo nano index.html
<!DOCTYPE html>
<html>
<head>
    <title>Registration Form</title>
</head>
<body>
    <h2>User Registration</h2>
    <form action="submit.php" method="POST">
        Name: <input type="text" name="name" required><br>
        DOB: <input type="date" name="dob" required><br>
        Email: <input type="email" name="email" required><br>
        <input type="submit" value="Register">
    </form>
</body>
</html>


8.2 Create PHP Script (submit.php)

sudo nano /var/www/html/submit.php
<?php
$servername = "your-rds-endpoint";
$username = "admin";
$password = "yourpassword";
$dbname = "registration";
$conn = new mysqli($servername, $username, $password, $dbname);
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
$name = $_POST['name'];
$dob = $_POST['dob'];
$email = $_POST['email'];
$stmt = $conn->prepare("INSERT INTO users (name, dob, email) VALUES (?, ?, ?)");
$stmt->bind_param("sss", $name, $dob, $email);
if ($stmt->execute()) {
    echo "Registration successful";
} else {
    echo "Error: " . $stmt->error;
}
$stmt->close();
$conn->close();
?>
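The PHP script above inserts into a users table inside a registration database, but neither is created in the earlier steps. Below is a sketch of the missing schema; the column types are assumptions based on the form fields, so adjust them as needed. Save it locally and apply it with the mysql client:

```shell
# Write the schema to a local file (column types are assumed from the form fields)
cat > schema.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS registration;
USE registration;
CREATE TABLE IF NOT EXISTS users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    dob DATE NOT NULL,
    email VARCHAR(255) NOT NULL
);
EOF

# Apply it against the RDS instance (endpoint and password are placeholders)
# mysql -h your-rds-endpoint -u admin -p < schema.sql
cat schema.sql
```

The apply command is commented out since it needs your real endpoint; run it once from the EC2 instance, which can reach the database on port 3306.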


Step 9: Create Target Group

  1. Go to the AWS EC2 Console → Navigate to Target Groups
  2. Click Create target group
  3. Choose Target type: Instance
  4. Enter Target group name: my-target-group
  5. Select Protocol: HTTP
  6. Select Port: 80
  7. Choose the VPC you created earlier
  8. Click Next
  9. Under Register Targets, select your EC2 instances
  10. Click Include as pending below, then Create target group


Step 10: Create an Application Load Balancer (ALB)

  1. Go to AWS EC2 Console → Navigate to Load Balancers
  2. Click Create Load Balancer
  3. Choose Application Load Balancer
  4. Enter ALB Name: my-alb
  5. Scheme: Internet-facing
  6. IP address type: IPv4
  7. Select the VPC
  8. Select at least two public subnets (for high availability)
  9. Click Next


Step 11: Test the Application

  1. Restart Apache: sudo systemctl restart apache2
  2. Open your browser and visit: http://your-ec2-public-ip/
  3. Fill in the form and Submit
  4. Check MySQL Database:
mysql -u admin -p -h your-rds-endpoint
USE registration;
SELECT * FROM users;


This setup ensures a scalable, secure, and high-availability application on AWS! 🚀

Follow for more and happy learning :)

🎯 PostgreSQL Zero to Hero with Parottasalna – 2 Day Bootcamp (FREE!) 🚀

2 March 2025 at 07:09

Databases power the backbone of modern applications, and PostgreSQL is one of the most powerful open-source relational databases trusted by top companies worldwide. Whether you’re a beginner or a developer looking to sharpen your database skills, this FREE bootcamp will take you from Zero to Hero in PostgreSQL!

What You’ll Learn?

✅ PostgreSQL fundamentals & installation

✅ Postgres Architecture
✅ Writing optimized queries
✅ Indexing & performance tuning
✅ Transactions & locking mechanisms
✅ Advanced joins, CTEs & subqueries
✅ Real-world best practices & hands-on exercises

This intensive, hands-on bootcamp is designed for developers, DBAs, and tech enthusiasts who want to master PostgreSQL from scratch and apply it in real-world scenarios.

Who Should Attend?

🔹 Beginners eager to learn databases
🔹 Developers & Engineers working with PostgreSQL
🔹 Anyone looking to optimize their SQL skills

📅 Date: March 22, 23 -> (Moved to April 5, 6)
⏰ Time: Will be finalized later.
📍 Location: Online
💰 Cost: 100% FREE 🎉

🔗 RSVP Here

The session has not been conducted yet. The new date will be announced later.

Prerequisite

  1. Check out this playlist of our previous Postgres sessions: https://www.youtube.com/playlist?list=PLiutOxBS1Miy3PPwxuvlGRpmNo724mAlt

🎉 This bootcamp is completely FREE – Learn without any cost! 🎉

💡 Spots are limited – RSVP now to reserve your seat!

🚀 #FOSS: Mastering Superfile: The Ultimate Terminal-Based File Manager for Power Users

28 February 2025 at 17:07

🔥 Introduction

Are you tired of slow, clunky GUI-based file managers? Do you want lightning-fast navigation and total control over your files—right from your terminal? Meet Superfile, the ultimate tool for power users who love efficiency and speed.

In this blog, we’ll take you on a deep dive into Superfile’s features, commands, and shortcuts, transforming you into a file management ninja! ⚡

💡 Why Choose Superfile?

Superfile isn’t just another file manager; it’s a game-changer.

Here’s why

✅ Blazing Fast – No unnecessary UI lag, just pure efficiency.

✅ Keyboard-Driven – Forget the mouse, master navigation with powerful keybindings.

✅ Multi-Panel Support – Work with multiple directories simultaneously.

✅ Smart Search & Sorting – Instantly locate and organize files.

✅ Built-in File Preview & Metadata Display – See what you need without opening files.

✅ Highly Customizable – Tailor it to fit your workflow perfectly.

🛠 Installation

Getting started is easy! Install Superfile using

# For Linux (Debian-based)
wget -qO- https://superfile.netlify.app/install.sh | bash

# For macOS (via Homebrew)
brew install superfile

# For Windows (via Scoop)
scoop install superfile

Once installed, launch it with

spf

🚀 Boom! You’re ready to roll.

⚡ Essential Commands & Shortcuts

🏗 General Operations

  • Launch Superfile: spf
  • Exit: Press q or Esc
  • Help Menu: ?
  • Toggle Footer Panel: F

📂 File & Folder Navigation

  • New File Panel: n
  • Close File Panel: w
  • Toggle File Preview: f
  • Next Panel: Tab or Shift + l
  • Sidebar Panel: s

📝 File & Folder Management

  • Create File/Folder: Ctrl + n
  • Rename: Ctrl + r
  • Copy: Ctrl + c
  • Cut: Ctrl + x
  • Paste: Ctrl + v
  • Delete: Ctrl + d
  • Copy Path: Ctrl + p

🔎 Search & Selection

  • Search: /
  • Select Files: v
  • Select All: Shift + a

📦 Compression & Extraction

  • Extract Zip: Ctrl + e
  • Compress to Zip: Ctrl + a

🏆 Advanced Power Moves

  • Open Terminal Here: Shift + t
  • Open in Editor: e
  • Toggle Hidden Files: .

💡 Pro Tip: Use Shift + p to pin frequently accessed folders for even quicker access!

🎨 Customizing Superfile

Want to make Superfile truly yours? Customize it easily by editing the config file

$EDITOR CONFIG_PATH

To enable the metadata plugin, add

metadata = true

For more customizations, check out the Superfile documentation.

🎯 Final Thoughts

Superfile is the Swiss Army knife of terminal-based file managers. Whether you’re a developer, system admin, or just someone who loves a fast, efficient workflow, Superfile will revolutionize the way you manage files.

🚀 Ready to supercharge your productivity? Install Superfile today and take control like never before!

For more details, visit the Superfile website.

Linux Mint Installation Drive – Dual Boot on 10+ Machines!

24 February 2025 at 16:18

Linux Mint Installation Drive – Dual Boot on 10+ Machines!

Hey everyone! Today, we had an exciting Linux installation session at our college. We expected many to do a full Linux installation, but instead, we set up dual boot on 10+ machines! 💻✨

💡 Topics Covered:
🛠 Syed Jafer – FOSS, GLUGs, and open-source communities
🌍 Salman – Why FOSS matters & Linux Commands
🚀 Dhanasekar – Linux and DevOps
🔧 Guhan – GNU and free software

Challenges We Faced


🔐 BitLocker Encryption – Had to disable BitLocker on some laptops
🔧 BIOS/UEFI Problems – Secure Boot, boot order changes needed
🐧 GRUB Issues – Windows not showing up, required boot-repair

🎥 Watch the installation video and try it yourself! https://www.youtube.com/watch?v=m7sSqlam2Sk


▶ Linux Mint Installation Guide https://tkdhanasekar.wordpress.com/2025/02/15/installation-of-linux-mint-22-1-cinnamon-edition/

This is just the beginning!

Learning Notes #69 – Getting Started with k6: Writing Your First Load Test

5 February 2025 at 15:38

Performance testing is a crucial part of ensuring the stability and scalability of web applications. k6 is a modern, open-source load testing tool that allows developers and testers to script and execute performance tests efficiently. In this blog, we’ll explore the basics of k6 and write a simple test script to get started.

What is k6?

k6 is a load testing tool designed for developers. It is written in Go but uses JavaScript for scripting tests. Key features include,

  • High performance with minimal resource consumption
  • JavaScript-based scripting
  • CLI-based execution with detailed reporting
  • Integration with monitoring tools like Grafana and Prometheus

Installation

For installation check : https://grafana.com/docs/k6/latest/set-up/install-k6/

Writing a Basic k6 Test

A k6 test is written in JavaScript. Here’s a simple script to test an API endpoint,


import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  vus: 10, // Number of virtual users
  duration: '10s', // Test duration
};

export default function () {
  let res = http.get('https://api.restful-api.dev/objects');
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
  sleep(1); // Simulate user wait time
}

Running the Test

Save the script as script.js and execute the test using the following command,

k6 run script.js

Understanding the Output

After running the test, k6 will provide a summary including:

  1. HTTP requests: the total number of requests made during the test.
  2. Response time metrics:
     • min: the shortest response time recorded.
     • max: the longest response time recorded.
     • avg: the average response time across all requests.
     • p(90), p(95), p(99): percentile values indicating the response time distribution.
  3. Checks: the number of checks passed or failed, such as status code validation.
  4. Virtual users (VUs):
     • vus_max: the maximum number of virtual users active at any time.
     • vus: the current number of active virtual users.
  5. Request rate (RPS, requests per second): the number of requests handled per second.
  6. Failures: the number of errors or failed requests due to timeouts or unexpected HTTP status codes.
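To build intuition for the percentile metrics above, here is a tiny standalone sketch (plain shell, not k6 itself) that computes a 95th percentile from ten hypothetical response times using one simple convention: sort the samples and take the value at the floor of the 95% rank.

```shell
# Ten hypothetical response times in ms (stand-ins for what k6 records)
times="120 95 110 300 105 98 250 101 99 102"

# Sort ascending and pick the value at the 95th-percentile index
echo "$times" | tr ' ' '\n' | sort -n | awk '
  { vals[NR] = $1 }
  END {
    idx = int(NR * 0.95); if (idx < 1) idx = 1
    print "p(95) =", vals[idx], "ms"
  }'
# prints: p(95) = 250 ms
```

Here p(95) = 250 ms means that nearly all requests finished within about 250 ms, while the single 300 ms outlier sits above that mark; this is why percentiles reveal tail latency that avg hides.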

Next Steps

Once you’ve successfully run your first k6 test, you can explore:

  • Load testing different APIs and endpoints
  • Running distributed tests
  • Exporting results to Grafana
  • Integrating k6 with CI/CD pipelines

k6 is a powerful tool that helps developers and QA engineers ensure their applications perform under load. Stay tuned for more in-depth tutorials on advanced k6 features!

RSVP for K6 : Load Testing Made Easy in Tamil

5 February 2025 at 10:57

Ensuring your applications perform well under high traffic is crucial. Join us for an interactive K6 Bootcamp, where we’ll explore performance testing, load testing strategies, and real-world use cases to help you build scalable and resilient systems.

🎯 What is K6 and Why Should You Learn It?

Modern applications must handle thousands (or millions!) of users without breaking. K6 is an open-source, developer-friendly performance testing tool that helps you:

✅ Simulate real-world traffic and identify performance bottlenecks.
✅ Write tests in JavaScript – no need for complex tools!
✅ Run efficient load tests on APIs, microservices, and web applications.
✅ Integrate with CI/CD pipelines to automate performance testing.
✅ Gain deep insights with real-time performance metrics.

By mastering K6, you’ll gain the skills to predict failures before they happen, optimize performance, and build systems that scale with confidence!

📌 Bootcamp Details

📅 Date: Feb 23, 2025 – Sunday
🕒 Time: 10:30 AM
🌐 Mode: Online (link will be shared by email after RSVP)
🗣 Language: தமிழ்

🎓 Who Should Attend?

  • Developers – Ensure APIs and services perform well under load.
  • QA Engineers – Validate system reliability before production.
  • SREs / DevOps Engineers – Continuously test performance in CI/CD pipelines.

RSVP Now

🔥 Don’t miss this opportunity to master load testing with K6 and take your performance engineering skills to the next level!

Got questions? Drop them in the comments or reach out to me. See you at the bootcamp! 🚀

Our previous monthly meets – https://www.youtube.com/watch?v=cPtyuSzeaa8&list=PLiutOxBS1MizPGGcdfXF61WP5pNUYvxUl&pp=gAQB

Our previous sessions:

  1. Python – https://www.youtube.com/watch?v=lQquVptFreE&list=PLiutOxBS1Mizte0ehfMrRKHSIQcCImwHL&pp=gAQB
  2. Docker – https://www.youtube.com/watch?v=nXgUBanjZP8&list=PLiutOxBS1Mizi9IRQM-N3BFWXJkb-hQ4U&pp=gAQB
  3. Postgres – https://www.youtube.com/watch?v=04pE5bK2-VA&list=PLiutOxBS1Miy3PPwxuvlGRpmNo724mAlt&pp=gAQB

Event Summary: FOSS United Chennai Meetup – 25-01-2025

26 January 2025 at 04:53

🚀 Attended the FOSS United Chennai Meetup Yesterday! 🚀

After attending the Grafana & Friends Meetup, I went straight to the FOSS United Chennai Meetup at YuniQ in Taramani.

I had a chance to meet my friends face to face after a long time: Sakhil Ahamed E., Dhanasekar T, Dhanasekar Chellamuthu, Thanga Ayyanar, Parameshwar Arunachalam, Guru Prasath S, Krisha, Gopinathan Asokan.

Talks summary:

1. Ansh Arora gave a tour of FOSS United: how it was formed, its motto, FOSS Hack, and FOSS Clubs.

2. Karthikeyan A K gave a talk on his open-source product injee (the no-configuration instant database for frontend developers). It’s a great tool. He gave me a personal demo; it has a lot of potential. I would like to contribute!

3. Justin Benito, on how they celebrated New Year with https://tamilnadu.tech
It’s a single go-to page for events in Tamil Nadu. If you are interested, go to the repo https://lnkd.in/geKFqnFz and contribute.

From Kaniyam Foundation we have long maintained a Google Calendar of tech events happening in Tamil Nadu: https://lnkd.in/gbmGMuaa.

4. Prasanth Baskar gave a talk on Harbor, an OSS container registry with SBOM support and more functionality. SBOM was new to me.

5. Thanga Ayyanar gave a talk on static site generation with Emacs.

At the end, we had a group photo and went for tea. I got to meet my juniors from St. Joseph’s Institute of Technology at this meet, and had a discussion with Parameshwar Arunachalam about his BuildToLearn experience: they started prototyping a Tinder-style app for Tamil words. After that we had a small discussion on our Feb 8th GLUG inauguration at St. Joseph’s Institute of Technology with Dr. KARTHI M.

Happy to see so many people travelling from different districts to attend this meet.

IRC – My Understanding V2.0

By: Sugirtha
21 November 2024 at 10:47

What is plain text, in my point of view?
It’s simply text without any makeup or add-ons; it is just organic content. For example:

  • A handwritten grocery list that our mother used to give to our father
  • A to-do list
  • An essay or composition written in our school days

Why is plain text important?
– Only the quality of the content scores here: there is no marketing through beautification or formatting.
– Less storage
– Ideal for long-term data storage because of cross-platform compatibility
– Universal accessibility: much software uses plain text for configuration files (.ini, .conf, .json)
– Data interchange (.csv – moving data between databases or spreadsheet applications)
– Command-line environments, even in cryptography
– Batch processing: many batch processes use plain text files to define lists of actions or tasks to be executed in batch mode, such as renaming files, converting data formats, or running programs.
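As a tiny illustration of plain text as an interchange format, a grocery list like the one above can live in a .csv file that any standard tool can read, with no special software (the data here is hypothetical):

```shell
# Create a small plain-text CSV
cat > groceries.csv <<'EOF'
item,qty
rice,2
dal,1
milk,3
EOF

# Standard tools process it directly
cut -d, -f1 groceries.csv    # first column (item names)
wc -l < groceries.csv        # number of lines: 4
```

This is exactly why .csv works so well for moving data between databases and spreadsheets: any language or tool can parse it.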

    So plain text is simple, powerful and something special we have no doubt about it.

    What is IRC?
    IRC – Internet Relay Chat is a plain text based real time communication System over the internet for one-on-one chat, group chat, online community – making it ideal for discussion.

    It’s a popular network for free and open-source software (FOSS) projects and developers in olden days. Ex. many large projects (like Debian, Arch Linux, GNOME, and Python) discussion used. Nowadays also IRC is using by many communities.

    Usage:
    Mainly a discussion forum for open-source software developers, technology, and hobbyist communities.

    Why IRC?
    We already have so many chat platforms which are very advanced, where I could also use multimedia; this one is very basic, right? So why should I go for it?

    Yes, it is very basic, but the infrastructure of IRC is not like other chat platforms. In my point of view, the important differences are privacy and no ads.

    Advantages over other Chat Platforms:

    • No ads or popups: We are not distracted by ads or popups because my information is not shared with any companies for tracking or targeted marketing.
    • Privacy: Many IRC networks do not require your email, mobile number or even registration. You can simply type your name or nickname, select a server and start chatting instantly. Chat logs can also be stored if required.
    • Open source and free: Server, client – the entire networking model is free and open source. Anybody can install the IRC servers/clients and connect with the network.
    • Decentralized: As servers are decentralized, the network can keep working even when one server has issues and goes down. Users can connect to different servers within the same network, which improves reliability and performance.
    • Low latency: It’s a free real-time communication system with low latency, which is very important for technical communities and time-sensitive conversations.
    • Customization and extensibility: Custom scripts can be written to enhance functionality, and IRC supports automation through bots which can record chats, send notifications, moderate channels, etc.
    • Channel control: Channel operators (group admins) have fine control over the users, like who can join and who can be kicked out.
    • Lightweight tool: As it’s lightweight, no high-end hardware is required. IRC can be accessed from older computers or even low-powered devices like a Raspberry Pi.
    • History and logging: Some IRC servers allow logging of chats through bots or in local storage.

    Inventor
    IRC was developed by Jarkko Oikarinen (Finland) in 1988.

    Some IRC networks/Servers:
    Libera.Chat(#ubuntu, #debian, #python, #opensource)
    EFnet – Eris-Free Network (#linux, #python, #hackers)
    IRCnet(#linux, #chat, #help)
    Undernet(#help, #anime, #music)
    QuakeNet (#quake, #gamers, #techsupport)
    DALnet- for both casual users and larger communities (#tech, #gaming, #music)

    Some Clients-GUI
    HexChat (Linux, macOS, Windows)
    Pidgin (Linux, Windows)
    KVIrc (Linux, Windows, macOS)

    Some IRC Clients for CLI (Command Line Interface) :
    WeeChat
    Irssi

    IRC Clients for Mobile :
    Goguma
    Colloquy (iOS)
    LimeChat (iOS)
    Quassel IRC (via Quassel Core) (Android)
    AndroIRC (Android)

    Directly on the website – Libera WebClient: https://web.libera.chat/gamja/ – you can click Join, then type the channel name (group), e.g. #kaniyam.

    How to get connected with IRC:
    After installing the IRC client, open it.
    Add a new network (e.g., “Libera.Chat”).
    Set the server to irc.libera.chat (or any of the alternate servers above).
    Optionally, you can specify a port (the default is 6667 for non-SSL, 6697 for SSL).
    Join a channel like #ubuntu, #python, or #freenode-migrants once you’re connected.
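
    If you use WeeChat, the same steps can be sketched as client commands (a minimal sketch; on newer WeeChat versions the -ssl option is spelled -tls):

    ```
    /server add libera irc.libera.chat/6697 -ssl
    /connect libera
    /join #kaniyam
    ```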

    Popular channels to join on libera chat:
    #ubuntu, #debian, #python, #opensource, #kaniyam

    Local logs:
    Logs are typically saved in plain text and can be stored locally, allowing you to review past conversations.
    How to get local logs on your system (IRC libera.chat server):
    WeeChat, for example, keeps them under /home/<username>/.local/share/weechat/logs/

    From the web – IRCBot history:
    https://ircbot.comm-central.org:8080

    References:
    https://kaniyam.com/what-is-irc-an-introduction/
    https://www.youtube.com/watch?v=CGurYNb0BM8

    Our daily meetings :
    You can install an IRC client with the help of the above links and join.
    Timings : IST 8pm-9pm
    Server : libera.chat
    Channel : #kaniyam

    ALL ARE WELCOME TO JOIN, DISCUSS and GROW
    

    Installing Arch Linux on UEFI systems (from Windows)

    12 November 2024 at 15:22

    This will be a very basic overview of what is to be done for installing Arch Linux. For more information, check out the Arch wiki installation guide.

    The commands shown in this guide will be in italic font.

    Step 1: Downloading the required files and applications

    I have downloaded a few applications to help ease the process for the installation. You can download them using the links below.

    Rufus:
    This helps in formatting the USB and writing the disc image file as a dd image. I have used Rufus; you can use other tools too. This only works on Windows.
    rufus link

    BitTorrent
    The download option in the wiki page suggests we use BitTorrent for downloading the disc image file.
    BitTorrent for windows

    Arch Linux torrent file
    This is for downloading the Arch Linux Torrent File. The download link can be found in the website given below.
    Arch Linux Download Page

    Step 2: The bootable USB

    You will need a USB of size at least 2 GB; 4 GB or above should be very comfortable to use.

    First open the BitTorrent application or the web based version and upload the magnet link or the torrent file to start downloading the disc image file.

    Then to prepare the USB:

    1. Launch the application used to make the bootable USB, like Rufus.

    2. In the device section, select your USB. Remember, all the data on the drive will be lost after the process.

    3. In boot selection, choose the disc image file that was downloaded through torrent.

    4. In the target system, select UEFI, as we are using a UEFI system.

    5. In the partition scheme, make sure GPT is selected.

    6. In file system, select FAT32 and 4096 bytes as the cluster size.

    7. When you click ready, it will present you with 2 options; select dd image mode, which is not the default option.

    After the process is done the USB will not be readable by Windows, so there is no need to panic if you cannot access the USB.

    If you are using a dual boot make sure you have at least 30 GB of unallocated space.

    I would recommend turning off BitLocker, as it could give rise to other challenges during the installation.

    Then get into the UEFI Firmware settings of your system. One easy way is to:
    1. Hold the Shift key while clicking restart on the computer.
    2. Go into Troubleshoot.
    3. Go into Advanced Settings.
    4. Select UEFI Firmware Settings.
    5. You will have to restart again, but you will be in the required place.

    Turn off secure boot state. It is usually in the security settings.

    Select save changes and exit.

    When you log back into your system ensure that secure boot state is off by going into system information.

    Go back to UEFI Firmware settings by repeating the process.

    In the boot priority section, give your USB device the highest priority. This is usually in the boot section. Then select save changes and exit.

    Step 3: Preparing Arch Linux Installation

    When all the above steps are done and the system restarts, you will be prompted with a few options. Select Arch Linux install medium and press 'Enter' to enter the installation environment. After this you will need to follow a series of steps.

    1. Verifying you are in UEFI mode.

    To do that type the command
    cat /sys/firmware/efi/fw_platform_size

    You should get the result as 32 or 64. If you get no result then you may not be using UEFI mode.

    2. Connecting to the internet:

    If you are using an ethernet cable, then you don't have to worry, as you might already be connected to the internet.
    Use the command
    ping -c 4 google.com
    or another website to ping to check if you're connected to the internet.

    To connect to wi-fi, type in the command
    ip link

    This should show you all the internet devices you have. Your wi-fi should typically be wlan0 or something like wlp3s0, which is your device name.

    Then type the command
    iwctl

    This should get you into an interactive command line interface.
    You can explore the options by using the command
    help

    My device name was wlan0, so I'm using wlan0 in the commands I'm going to show; if yours is different, make the appropriate changes.

    To connect to the wifi use the command
    station wlan0 connect "Network Name"
    where "Network Name" is the name of your network.

    If you want to know the name of your network before doing this you can try the command
    station wlan0 get-networks

    To get out of the environment simply use the command
    exit

    After you exit, you can verify your connection with
    ping -c 4 google.com

    If it doesn't work, try the command
    ping -c 4 8.8.8.8

    If the above also doesn't work, the problem may lie with your network.

    However, if the second option works for you, the fix is to manually change the DNS server you're using.
    To do that, run the command
    nano /etc/systemd/resolved.conf

    In this file, if the DNS line is commented out with a #, remove the # and set it to a DNS server you desire, for example 8.8.8.8.

    ctrl + X, then Y, to save and exit
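
    For reference, the relevant part of /etc/systemd/resolved.conf would then look like this (8.8.8.8 is only an example; any DNS server works):

    ```
    [Resolve]
    DNS=8.8.8.8
    ```

    If the change doesn't seem to take effect, restarting the resolver with systemctl restart systemd-resolved should pick it up.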

    Now try pinging a website such as google.com again to make sure you're properly connected to the internet.

    3. Set the proper time

    When you connect to the internet you should have the proper time. To check you can use the command
    timedatectl

    4. Create the partitions for Arch Linux

    To check what partitions you have, use the command
    lsblk

    This will list the partitions you have. It will be in the format /dev/sda or /dev/nvme0n1 or something else. Mine was /dev/nvme0n1 so I'll be using the same in the commands below.

    To make the partitions, use the command
    fdisk /dev/nvme0n1

    This should bring you to a separate command line interface.

    It will give you an introduction on what to do.

    Now we will create the partitions.
    To create a partition, use the command
    n

    It will ask what number you want for the partition and show the default option. Press Enter, as it will automatically take the default option if you don't enter any value. Let's say mine is 1.

    It will ask what sector you want the partition to start from and show the default option. Press Enter.

    Then it will ask where you want the sectors to end: type
    +1G

    +1G will allot 1 GB to the partition you just created.

    Then create another partition in the same way. Let's say mine is partition number 2 this time; finally, instead of
    +1G use +4G

    This will allot 4 GB to the second partition you just created.

    Create another partition and this time leave the last sector to default so it can have the remaining space. Let's say this partition is number 3.

    partition 1 - EFI system partition
    partition 2 - Linux swap partition
    partition 3 - Linux root partition

    Optionally, use the t command to set the matching partition types (press L to list the codes): EFI System for partition 1 and Linux swap for partition 2. When you are done, type w to write the changes to the disk and exit fdisk.

    5. Prepare the created partitions for Arch Linux installation

    Here, we are going to format the memory in the chosen partitions and make them the appropriate file systems.

    For the EFI partition:
    mkfs.fat -F 32 /dev/nvme0n1p1

    This converts the 1 GB partition into a fat32 file system.

    For SWAP partition:
    mkswap /dev/nvme0n1p2

    This converts the 4 GB partition into something that can be used as virtual RAM.

    For root partition:
    mkfs.ext4 /dev/nvme0n1p3

    This converts the root partition into a file system that is called ext4.

    6. Mounting the partitions

    This is for setting a reference point to the partitions we just created. Mount the root partition first, so that /mnt exists before we mount the EFI partition inside it.

    For the root partition:
    mount /dev/nvme0n1p3 /mnt

    For the EFI partition:
    mount --mkdir /dev/nvme0n1p1 /mnt/boot

    For the swap partition:
    swapon /dev/nvme0n1p2

    Step 4: The Arch Linux Installation

    1. Updating the mirrorlist (optional)

    The mirrorlist is a list of mirror servers from which packages can be downloaded. Choosing the right mirror server could get you higher download speeds.

    This step isn't required, as the mirrorlist is automatically updated when connected to the internet, but if you would like to do it manually, it's in the file
    /etc/pacman.d/mirrorlist

    2. Installing base Linux kernel and firmware

    To do this, use the command
    pacstrap -K /mnt base linux linux-firmware

    Step 5: Configuring Arch Linux system

    1. Generating fstab

    The fstab is the file system table. It contains information on each of the file partitions and storage devices. It also contains information on how they should be mounted during boot.

    To do it, use the command:
    genfstab -U /mnt >> /mnt/etc/fstab

    2. Chroot

    Chroot is short for change root. It is used to directly interact with the Arch Linux partitions from the live environment in the USB.

    To do it, use the command:
    arch-chroot /mnt

    3. Time

    The timezone has two parts: the region and the city. I am from India, so my region is Asia and the city is Kolkata. Change yours appropriately to your needs.

    The command:
    ln -sf /usr/share/zoneinfo/Asia/Kolkata /etc/localtime

    We can also sync the hardware clock from the system clock (storing the time as UTC).
    To do that:
    hwclock --systohc

    4. Installing some important tools

    The system you have installed is a very basic system, so it doesn't have a lot of stuff. I'm recommending two very basic tools as they can be handy.

    i) nano:
    This is a text editor, so you can make changes to configuration files.
    pacman -S nano

    ii) iwd:
    This is called iNet wireless daemon. I recommend this so that you can connect to wi-fi once you reboot into your actual Arch system.
    pacman -S iwd

    5. Localization

    This is for setting the language and locale. First, open the file /etc/locale.gen by using
    nano /etc/locale.gen

    I want to use the English locale that is the default on most devices, so uncomment (remove the #) the line that says
    en_US.UTF-8 UTF-8

    As there are a lot of lines, you can search using ctrl+W.

    Then ctrl+X to save and exit.

    Then use the command
    locale-gen

    This command generates the locale you just uncommented. Finally, set the system language by putting the line
    LANG=en_US.UTF-8
    in the file /etc/locale.conf (you can create it with nano /etc/locale.conf).

    6. Host and password

    To create the host name, we should do it in the /etc/hostname file. Use
    nano /etc/hostname

    Then type in what your hostname would be.
    ctrl + X to save and exit.

    To set the password of your root user, use the command
    passwd
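
    One step this walkthrough skips, which the official installation guide requires: installing a boot loader while still inside the chroot, otherwise the system will not start after the reboot below. A minimal sketch using GRUB for UEFI (assuming the EFI partition is mounted at /boot, as set up earlier):

    ```shell
    # Inside the arch-chroot environment:
    pacman -S grub efibootmgr                  # the boot loader and the UEFI variable tool
    grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=GRUB
    grub-mkconfig -o /boot/grub/grub.cfg       # generate the GRUB configuration
    ```

    systemd-boot is a lighter alternative if you prefer it; see the Arch wiki for details.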

    7. Getting out of chroot and rebooting the system

    To get out of chroot simply use
    exit

    Then to reboot the system use
    reboot

    Remove the installation medium(USB) as the device turns off.

    Step 6: Enjoy Arch Linux

    Arch Linux is one of the most minimal systems. So you can customize it to your liking. You can also install other desktop environments if you feel like it.

    The Search for the Perfect Media Server: A Journey of Discovery

    2 September 2024 at 04:11

    Dinesh, an avid movie collector and music lover, had a growing problem. His laptop was bursting at the seams with countless movies, albums, and family photos. Every time he wanted to watch a movie or listen to his carefully curated playlists, he had to sit in front of his laptop. And if he wanted to share something with his friends, it meant copying files onto USB drives or spending hours transferring them.

    One Saturday evening, after yet another struggle to connect his laptop to his smart TV via a mess of cables, Dinesh decided it was time for a change. He needed a solution that would let him access all his media from any device in his house – phone, tablet, and TV. He needed a media server.

    Dinesh fired up his browser and began his search: “How to stream media to all my devices.” He went through the results – Plex, Jellyfin, Emby… Each option seemed promising but felt too complex, requiring subscriptions or heavy installations.

    Frustrated, Dinesh thought, “There must be something simpler. I don’t need all the bells and whistles; I just want to access my files from anywhere in my house.” He refined his search: “lightweight media server for Linux.”

    There it was – MiniDLNA. Described as a simple, lightweight DLNA server that was easy to set up and perfect for home use, MiniDLNA (also known as ReadyMedia) seemed to be exactly what Dinesh needed.

    MiniDLNA (also known as ReadyMedia) is a lightweight, simple server for streaming media (like videos, music, and pictures) to devices on your network. It is compatible with various DLNA/UPnP (Digital Living Network Alliance/Universal Plug and Play) devices such as smart TVs, media players, gaming consoles, etc.

    How to Use MiniDLNA

    Here’s a step-by-step guide to setting up and using MiniDLNA on a Linux based system.

    1. Install MiniDLNA

    To get started, you need to install MiniDLNA. The installation steps can vary slightly depending on your operating system.

    For Debian/Ubuntu-based systems:

    sudo apt update
    sudo apt install minidlna
    

    For Red Hat/CentOS-based systems:

    First, enable the EPEL repository,

    sudo yum install epel-release
    

    Then, install MiniDLNA,

    sudo yum install minidlna
    

    2. Configure MiniDLNA

    Once installed, you need to configure MiniDLNA to tell it where to find your media files.

    a. Open the MiniDLNA configuration file in a text editor

    sudo nano /etc/minidlna.conf
    

    b. Configure the following parameters:

    • media_dir: Set this to the directories where your media files (music, pictures, and videos) are stored. You can specify different media types for each directory.
    media_dir=A,/path/to/music  # 'A' is for audio
    media_dir=V,/path/to/videos # 'V' is for video
    media_dir=P,/path/to/photos # 'P' is for pictures
    
    • db_dir=: The directory where the database and cache files are stored.
    db_dir=/var/cache/minidlna
    
    • log_dir=: The directory where log files are stored.
    log_dir=/var/log/minidlna
    
    • friendly_name=: The name of your media server. This will appear on your DLNA devices.
    friendly_name=Laptop SJ
    
    • notify_interval=: The interval in seconds that MiniDLNA will notify clients of its presence. The default is 900 (15 minutes).
    notify_interval=900
    

    c. Save and close the file (Ctrl + X, Y, Enter in Nano).

    3. Start the MiniDLNA Service

    After configuration, start the MiniDLNA service

    sudo systemctl start minidlna
    
    

    To enable it to start at boot,

    sudo systemctl enable minidlna
    

    4. Rescan Media Files

    To make MiniDLNA scan your media files and add them to its database, you can force a rescan with

    sudo minidlnad -R
    
    

    5. Access Your Media on DLNA/UPnP Devices

    Now, your MiniDLNA server should be up and running. You can access your media from any DLNA-compliant device on your network:

    • On your Smart TV, look for the “Media Server” or “DLNA” option in the input/source menu.
    • On a Windows PC, go to This PC or Network and find your DLNA server under “Media Devices.”
    • On Android, use a media player app like VLC or BubbleUPnP to find your server.

    6. Check Logs and Troubleshoot

    If you encounter any issues, you can check the logs for more information

    sudo tail -f /var/log/minidlna/minidlna.log
    
    

    To setup for a single user

    Disable the global daemon

    sudo service minidlna stop
    sudo update-rc.d minidlna disable
    
    

    Create the necessary local files and directories as regular user and edit the configuration

    mkdir -p ~/.minidlna/cache
    cd ~/.minidlna
    cp /etc/minidlna.conf .
    $EDITOR minidlna.conf
    

    Configure as you would globally above but these definitions need to be defined locally

    db_dir=/home/$USER/.minidlna/cache
    log_dir=/home/$USER/.minidlna 
    
    

    To start the daemon locally

    minidlnad -f /home/$USER/.minidlna/minidlna.conf -P /home/$USER/.minidlna/minidlna.pid
    

    To stop the local daemon

    xargs kill </home/$USER/.minidlna/minidlna.pid
    
    

    To rebuild the database,

    minidlnad -f /home/$USER/.minidlna/minidlna.conf -R
    

    For more info: https://help.ubuntu.com/community/MiniDLNA

    Additional Tips

    • Firewall Rules: Ensure that your firewall settings allow traffic on the MiniDLNA port (8200 by default) and UPnP (typically port 1900 for UDP).
    • Update Media Files: Whenever you add or remove files from your media directory, run minidlnad -R to update the database.
    • Multiple Media Directories: You can have multiple media_dir lines in your configuration if your media is spread across different folders.

    To set up MiniDLNA with VLC Media Player so you can stream content from your MiniDLNA server, follow these steps:

    Let’s see how to use this in VLC

    On Machine

    1. Install VLC Media Player

    Make sure you have VLC Media Player installed on your device. If not, you can download it from the official VLC website.

    2. Open VLC Media Player

    Launch VLC Media Player on your computer.

    3. Open the UPnP/DLNA Network Stream

    1. Go to the “View” Menu:
      • On the VLC menu bar, click on View and then Playlist or press Ctrl + L (Windows/Linux) or Cmd + Shift + P (Mac).
    2. Locate Your DLNA Server:
      • In the left sidebar, you will see an option for Local Network.
      • Click on Universal Plug'n'Play or UPnP.
      • VLC will search for available DLNA/UPnP servers on your network.
    3. Select Your MiniDLNA Server:
      • After a few moments, your MiniDLNA server should appear under the UPnP section.
      • Click on your server name (e.g., My DLNA Server).
    4. Browse and Play Media:
      • You will see the folders you configured (e.g., Music, Videos, Pictures).
      • Navigate through the folders and double-click on a media file to start streaming.

    4. Alternative Method: Open Network Stream

    If you know the IP address of your MiniDLNA server, you can connect directly:

    1. Open Network Stream:
      • Click on Media in the menu bar and select Open Network Stream... or press Ctrl + N (Windows/Linux) or Cmd + N (Mac).
    2. Enter the URL:
      • Enter the URL of your MiniDLNA server in the format http://[Server IP]:8200.
      • Example: http://192.168.1.100:8200.
    3. Click “Play”:
      • Click on the Play button to start streaming from your MiniDLNA server.

    5. Tips for Better Streaming Experience

    • Ensure the Server is Running: Make sure the MiniDLNA server is running and the media files are correctly indexed.
    • Network Stability: A stable local network connection is necessary for smooth streaming. Use a wired connection if possible or ensure a strong Wi-Fi signal.
    • Firewall Settings: Ensure that the firewall on your server allows traffic on port 8200 (or the port specified in your MiniDLNA configuration).

    On Android

    To set up and stream content from MiniDLNA using an Android app, you will need a DLNA/UPnP client app that can discover and stream media from DLNA servers. Several apps are available for this purpose, such as VLC for Android, BubbleUPnP, Kodi, and others. Here’s how to use VLC for Android and BubbleUPnP, two popular choices

    Using VLC for Android

    1. Install VLC for Android:
    2. Open VLC for Android:
      • Launch the VLC app on your Android device.
    3. Access the Local Network:
      • Tap on the menu button (three horizontal lines) in the upper-left corner of the screen.
      • Select Local Network from the sidebar menu.
    4. Find Your MiniDLNA Server:
      • VLC will automatically search for DLNA/UPnP servers on your local network. After a few moments, your MiniDLNA server should appear in the list.
      • Tap on the name of your MiniDLNA server (e.g., My DLNA Server).
    5. Browse and Play Media:
      • You will see your media folders (e.g., Music, Videos, Pictures) as configured in your MiniDLNA setup.
      • Navigate to the desired folder and tap on any media file to start streaming.

    Additional Tips

    • Ensure MiniDLNA is Running: Make sure your MiniDLNA server is properly configured and running on your local network.
    • Check Network Connection: Ensure your Android device is connected to the same local network (Wi-Fi) as the MiniDLNA server.
    • Firewall Settings: If you are not seeing the MiniDLNA server in your app, ensure that the server’s firewall settings allow DLNA/UPnP traffic.

    Some Problems That you may face

    1. minidlna.service: Main process exited, code=exited, status=255/EXCEPTION – check the logs. Mostly it's due to an instance already running on port 8200. Kill that and reload the db: `lsof -i :8200` will give the PID, and `kill -9 <PID>` will kill the process.
    2. If the media files are not refreshing, then try `minidlnad -f /home/$USER/.minidlna/minidlna.conf -R` or `sudo minidlnad -R`.

    Demystifying IP Addresses and Netmasks: The Complete Overview

    24 August 2024 at 13:14

    In this blog, we will learn about IP addresses and netmasks.

    IP

    The Internet Protocol (IP) address is a unique identifier for your device, similar to how a mobile number uniquely identifies your phone.

    IPv4 addresses are typically written as four octets, each one byte in size; IPv6 addresses are written as eight groups, each two bytes in size.

    Examples:

    • IPv4: 192.168.43.64
    • IPv6: 2001:db8:3333:4444:5555:6666:7777:8888

    For the purposes of this discussion, we will focus on IPv4.

    Do we really require the four-octet structure with dots between them?

    The answer is NO.

    The only requirement for an IPv4 address is that it must be 4 bytes in size. However, it does not have to be written as four octets or even with dots separating them.

    Let’s test this by fetching Google’s IP address using the nslookup command.

    Then convert the dotted address into a single decimal number (you can do the base conversion with the bc calculator in a Bash shell) and ping that decimal number directly – and you can see it’s working.

    This is because the octet structure and the dots between them are only for human readability. Computers do not interpret dots; they just need an IP address that is 4 bytes in size, and that’s it.
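
    As a sketch of the arithmetic involved (using 192.168.43.64 as the example address; any IPv4 address works the same way):

    ```shell
    #!/usr/bin/env bash
    # Convert a dotted-quad IPv4 address into its single decimal form.
    ip="192.168.43.64"
    IFS=. read -r a b c d <<< "$ip"
    dec=$(( (a << 24) | (b << 16) | (c << 8) | d ))
    echo "$dec"   # prints 3232246592; 'ping 3232246592' is the same as 'ping 192.168.43.64'
    ```
    
    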

    The range for IPv4 addresses is from 0.0.0.0 to 255.255.255.255.

    Types of IP Addresses

    IP addresses are classified into two main types: public IPs and private IPs.

    Private IP addresses are used for communication between local devices without connecting to the Internet. They are free to use and comparatively secure.

    You can find your private IP address by using the ifconfig command.


    The private IP address ranges are as follows:

    10.0.0.0 to 10.255.255.255
    172.16.0.0 to 172.31.255.255
    192.168.0.0 to 192.168.255.255

    Public IP addresses are Internet-facing addresses provided by an Internet Service Provider (ISP). These addresses are used to access the internet and are not free.

    By default

    Private IP to Private IP communication is possible.
    Public IP to Public IP communication is possible.

    However:

    Public IP to Private IP communication is not possible.
    Private IP to Public IP communication is not possible.

    Nevertheless, these types of communication can occur through Network Address Translation (NAT), which is typically used by your home router. This is why you can access the Internet even with a private IP address.

    Netmasks
    Netmasks are used to define the range of IP addresses within a network.

    For example, take the netmask 255.255.255.0. In binary it is 11111111.11111111.11111111.00000000 – you can see 24 ones and 8 zeros.

    Here, 255 is converted to binary using the division method:

    255 ÷ 2 = 127 remainder 1

    127 ÷ 2 = 63 remainder 1

    63 ÷ 2 = 31 remainder 1

    31 ÷ 2 = 15 remainder 1

    15 ÷ 2 = 7 remainder 1

    7 ÷ 2 = 3 remainder 1

    3 ÷ 2 = 1 remainder 1

    1 ÷ 2 = 0 remainder 1

    So, binary value of 255 is 11111111

    By using this, we can find the number of IP addresses and their range.

    Since we have 8 zeros, the number of IPs = 2^8 = 256. So, for a network like 10.4.3.0 with this netmask, the usable IP range is 10.4.3.1 – 10.4.3.254 and the broadcast IP is 10.4.3.255 (the network address 10.4.3.0 and the broadcast address are not assignable to hosts, leaving 254 usable).

    We can also write this as 10.4.3.0/24. Here 24 is the CIDR (Classless Inter-Domain Routing) prefix – the number of ones in the netmask.
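
    The same calculations can be sketched in the shell (the /24 prefix is the example from above):

    ```shell
    #!/usr/bin/env bash
    # Binary form of 255 via the same repeated-division method as above.
    n=255; bits=""
    while (( n > 0 )); do
      bits="$(( n % 2 ))$bits"
      n=$(( n / 2 ))
    done
    echo "$bits"            # prints 11111111

    # Address count for a /24 netmask: 2^(32-24) = 256 addresses in total;
    # subtracting the network and broadcast addresses leaves 254 usable IPs.
    prefix=24
    total=$(( 1 << (32 - prefix) ))
    usable=$(( total - 2 ))
    echo "$total $usable"   # prints 256 254
    ```
    
    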

    That’s it.

    Kindly let me know in comments if you are any queries in these topics.

    Docker Ep 8: Tomcat: Exploring Docker Port Mapping and Logs

    15 August 2024 at 09:32

    Caution: we are just starting from the basics. Even if you don’t understand the concepts below, there is no problem.

    In our previous adventures, we’ve dabbled in Docker’s mysterious arts, running containers, inspecting them, and even detaching them to roam in the background.

    Today, we’re stepping up our game by diving into Docker port mapping and the powerful docker logs command. And what better companion for this journey than the Tomcat image, a trusty open-source web server that brings Java servlets to life?

    Summoning the Tomcat Image

    To begin, we need to summon our new ally: the Tomcat image. Tomcat is a legendary web server that, by default, operates on port 8080 within its container. But what if we want to make this web server accessible to the outside world through a different port? This is where Docker’s port mapping comes into play.

    First, let’s visit the Docker Hub and search for the Tomcat image. Once there, we can see that Tomcat will run on port 8080 by default. We’ll need to expose this port and map it to a port on our host machine using the -p option.

    Port Mapping: Connecting the Container to the World

    Docker port mapping allows us to make the services inside our container accessible from the outside world by forwarding a host port to a container port. The syntax for port mapping is:

    -p <host_port>:<container_port>
    

    For example, let’s say we want to map port 8080 inside the container to port 8888 on our host machine. This means that when we access port 8888 on our host, we’ll actually be talking to port 8080 inside the Tomcat container.
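
    To make the direction concrete, here is a tiny shell sketch (purely illustrative, not part of Docker) that splits a -p specification into its two halves:

    ```shell
    #!/usr/bin/env bash
    # Split a "host:container" port mapping the way Docker reads it:
    # traffic arriving on the host port is forwarded to the container port.
    spec="8888:8080"
    host_port="${spec%%:*}"        # text before the first ':'
    container_port="${spec##*:}"   # text after the last ':'
    echo "host=$host_port -> container=$container_port"
    # prints host=8888 -> container=8080
    ```
    
    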

    Let’s see this in action.

    Running Tomcat with Port Mapping

    Fire up your Docker terminal, make sure the font size is nice and large, and enter the following command:

    docker run -it -p 8888:8080 tomcat:8.0
    

    Here’s what’s happening:

    • -it runs the container interactively.
    • -p 8888:8080 maps port 8080 inside the container to port 8888 on our host.
    • tomcat:8.0 specifies the Tomcat image and its version.

    Now, sit back and relax (or maybe grab a coffee) as Docker pulls down the Tomcat image. It’s about 300 MB, so depending on your internet connection, this might take a moment.

    Once the image is downloaded, Docker will spin up the container, and our Tomcat server will be ready to roll.

    Accessing the Tomcat Server

    With the Tomcat container up and running, we can access the Tomcat web server through our web browser. But first, we need to know where to find it.

    If you’re running Docker on Linux, or using Docker for Mac or Windows, the host IP is simply localhost. If you’re using Docker Machine, you’ll need to grab the IP address of the Docker Machine.

    Now, open up your browser, type in localhost:8888 (or your Docker Machine’s IP address followed by :8888), and hit Enter.

    The Tomcat console page should greet you warmly.

    Running Tomcat in Detached Mode

    While running a container in the foreground is great for testing, in production, we typically want our containers to run in the background. For this, we use the -d flag to run the container in detached mode.

    Let’s modify our previous command to run Tomcat in the background:

    docker run -d -p 8888:8080 tomcat:8.0
    

    After hitting Enter, Docker will return a long container ID, confirming that Tomcat is now running in the background, silently serving up Java servlets.

    Checking Container Status with docker ps -a

    Curious about the status of your Tomcat container? Use the docker ps -a command to list all the containers on your system, including those that have exited. This is a handy way to check if your container is still running or if it has quietly exited.

    docker ps -a
    

    Reading the Tea Leaves: docker logs

    Sometimes, you want to peek into the inner workings of your running containers to see what they’re up to. This is where the docker logs command comes in. It lets you view the logs generated by your container, which can be incredibly useful for debugging or just keeping an eye on things.

    Let’s say you want to check the logs of your running Tomcat container. You would run:

    docker logs <container_id>
    

    There is more to docker logs, which we’ll explore later.
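Two docker logs options worth knowing right away are -f (follow output as it is produced) and --tail N (show only the last N lines). The --tail option behaves like plain tail on a file, which we can illustrate locally without a running container:

```shell
# docker logs -f <container_id>         # follow, like tail -f
# docker logs --tail 10 <container_id>  # only the last 10 lines
# The --tail behaviour, demonstrated with plain tail on a sample file:
printf 'line1\nline2\nline3\n' > /tmp/sample.log
tail -n 2 /tmp/sample.log   # prints the last two lines: line2, line3
```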

    Linux Partition: Static Partition Scaling without any data loss

    1 January 2023 at 05:18

    In this blog, we are going to see how to increase or decrease the size of a static partition in Linux without any data loss, and how to do it in online mode, without unmounting.

    I already explained the basic concepts of partition in very detail in my previous blog. You can refer to that blog by clicking here.

    In this practical, the Oracle VirtualBox is used for hosting the Redhat Enterprise Linux (RHEL8) Virtual Machine (VM).

    The first step is to attach a hard disk. So, I attached one virtual hard disk of size 40 GiB. That disk is named “/dev/sdc”. You can check the disk name, and all the other disks present in your VM, by running the following command.

    fdisk -l

    Then, we have to do partition by using “fdisk” command.

    fdisk /dev/sdc

    Then, enter “n” in order to create a new partition. Then enter the partition number and specify the size in sectors or GiB. Here, we entered 20 GiB; until we partition the disk, we can’t utilize any of its storage.
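As an aside, when fdisk asks for a size in sectors, the usual sector size is 512 bytes, so a GiB figure converts like this (the helper name here is ours for illustration, not an fdisk command):

```shell
# Convert GiB to 512-byte sectors, the unit fdisk works in by default.
gib_to_sectors() {
  echo $(( $1 * 1024 * 1024 * 1024 / 512 ))
}

gib_to_sectors 20   # 20 GiB = 41943040 sectors
```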

    We have now created one partition, named “/dev/sdc1”. The next step is to format the partition. Here, we used the ext4 filesystem, which creates an inode table.

    The next step is to create a directory using the “mkdir” command and mount the partition on that directory, since we can’t use the hardware device directly, whether it is real or virtual.

    A file should be created inside that directory so we can check for data loss after the live scaling of the static partition.
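Collected in one place, the format-and-mount steps above look like this. The device and mount-point names are the ones used in this walkthrough, so adjust them to your setup; these commands need root:

```
mkfs.ext4 /dev/sdc1                            # create the ext4 filesystem (inode table)
mkdir /partition1                              # create a directory to use as the mount point
mount /dev/sdc1 /partition1                    # mount the partition on it
echo "important data" > /partition1/file.txt   # marker file to verify after scaling
df -h /partition1                              # confirm the mount and its size
```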

    Ok, now the size of the static partition is 20 GiB, and we are going to scale it up to 30 GiB without unmounting the partition. For this, we again have to run the following command.

    fdisk /dev/sdc

    Then delete the partition. Don’t worry about the data; it won’t be lost.

    Then enter “n” to create the new partition and specify your desired size. Here, I want to scale up to 30 GiB. Make sure the new partition starts at the same first sector as the old one; otherwise the data will be lost. Then a warning will appear saying “Partition 1 contains an ext4 signature” and asking what to do with it: remove the signature or keep it.

    If you don’t want to lose the data, enter “N”. Then enter “w” to save the partition table. You can verify the partition size by running the “fdisk -l” command in the terminal. Finally, you have increased the size of the static partition.

    The first part is done. The next step would normally be to format the partition in order to create the filesystem. But this time, we will not use the “mkfs” command, since it would delete all the data. We need to extend the filesystem without compromising the data. For that, we have to run the following command.

    resize2fs  /dev/sdc1

    Finally, we have resized the filesystem without compromising the data. We can check this by going inside the mount point and checking whether the data is still there.

    Yes, the data is still here. It was not lost even though we recreated the partition and resized the filesystem.

    Reduce the size of the Static Partition

    You can also reduce the Static Partition size. For this, you have to follow the below steps.

    • Unmount
    • Filesystem check
    • Format
    • Mount
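Put together, the four steps look like this, using this walkthrough’s device and mount-point names (run as root):

```
umount /partition1           # 1. unmount the partition
e2fsck -f /dev/sdc1          # 2. force a filesystem check first
resize2fs /dev/sdc1 20G      # 3. shrink the filesystem to 20 GiB
mount /dev/sdc1 /partition1  # 4. mount it again
```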

    The first step is to unmount the mount point, since the partition is online and somebody may be using it.

    umount /partition1

    Then we have to check the filesystem for errors by running the following command; resize2fs refuses to shrink a filesystem that has not been checked first.

    e2fsck -f /dev/sdc1

    Then we resize the filesystem down to the size we want. Here we want only 20 GiB, releasing the remaining 10 GiB. This is done by running the following command.

    resize2fs /dev/sdc1 20G

    Then we have to mount the partition again.

    mount /dev/sdc1 /partition1

    Finally, we reduced the static partition size.

    The above figure shows that the data is also not lost during scaling down.


    Thank you all for your reads. Stay tuned for my next article, because it is Endless.
