
How to use existing SSH key on my newly installed Ubuntu

30 March 2025 at 13:55

copy the old SSH keys into the ~/.ssh/ directory
$ cp /path/to/id_rsa ~/.ssh/id_rsa
$ cp /path/to/id_rsa.pub ~/.ssh/id_rsa.pub

change permissions on the files (the private key must be readable only by you)
$ chmod 600 ~/.ssh/id_rsa
$ chmod 644 ~/.ssh/id_rsa.pub

start the ssh-agent in the background
$ eval $(ssh-agent -s)

add the copied key to the ssh-agent
$ ssh-add ~/.ssh/id_rsa
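
To confirm the agent picked up the key, list the loaded identities (an optional check):
$ ssh-add -l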

The Command Line

By: Prasanth
28 March 2025 at 14:21

Reference
1. Book: Linux Administration: A Beginner's Guide
Author: Wale Soyinka

Bash:-

The login shell is the first program that runs when a user logs into a system. A shell is a command language interpreter: it reads and processes the commands the user types and executes them, letting the user interact with the operating system. Bash (Bourne Again Shell) is an improved version of the Bourne shell (sh) used on older UNIX/Linux systems; Bash is the default on most modern distros. Bash is a command-line-only interface (CLI); it has no GUI, just text-based interaction. It has built-in commands like cd and echo (print text), and it can launch programs and manage running processes.
Job control:-

When you work in the Bash environment, you can start multiple programs from the same prompt. Each program is treated as a job. By default, when you run a program it takes control of the terminal, and you cannot type another command until the current job finishes. Some programs require full control of the terminal to display colored text and formatting, for example a text editor. Graphical programs, on the other hand, do not need full control of the terminal; they can simply run in the background.

example:-

$ firefox (takes over the terminal)
or
$ firefox & (runs in the background so you keep using the terminal)
or
$ firefox & gedit & vlc & (multiple programs)
$ jobs

The jobs command lists the currently running jobs.
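
A sample listing (hypothetical output; job numbers and ordering vary by session):

[1]   Running                 firefox &
[2]-  Running                 gedit &
[3]+  Running                 vlc &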

  • Ctrl + Z suspends (pauses) the currently running foreground process
  • Ctrl + C terminates the process (later we will see kill, pkill, killall)

syntax:-

fg <job number>   (use the job number of firefox shown by jobs)

example:-
$ firefox &
$ jobs
$ fg 1
fg brings the job to the foreground.

bg - resume a process in the background

example:-

$ sleep 100 (tells the system to pause for 100 seconds and do nothing)
Ctrl + Z (suspends the sleep process)
$ bg (resumes the last suspended process in the background)
(bg %<jobid> resumes a particular job if multiple jobs are in the background)
$ jobs
$ fg (brings the background job back to the foreground)


Environment Variables:-

  • Every time you open a new terminal (shell) or run a program (process) in Linux, the system loads environment variables like $HOME, $USER, and $PATH. Each shell session or running process has its own set of settings that control how it behaves. These settings are stored in environment variables, which can be modified to customize the system without changing the original program code.

  • Environment variables in Linux are stored as key-value pairs(VARIABLE=value), either globally or locally. They hold values like file paths, system configurations, and application settings (for example, configuring software behavior). Environment variables can be temporary (session-based) or permanent (saved across reboots).

Practicals:

  1. View all environment variables: $ printenv or $ env

2. Set a temporary variable (for the current session only)

bash:-
$ export MY_VAR="Hello_Linux" (sets the variable for this shell and its child processes)
$ echo $MY_VAR
(by convention, variable names are written in capitals)

3. Remove a temporary variable
$ unset MY_VAR

4. Set a permanent variable
Edit ~/.bashrc or ~/.bash_profile (for Bash users only)
$ vim ~/.bashrc
press i for insert mode
Add:-
export MY_VAR="Hello_Linux"
press Esc
then
:wq (save and quit)
then
$ source ~/.bashrc (apply the changes)

or

$ echo 'export MY_VAR="Permanent Value"' >> ~/.bashrc
$ source ~/.bashrc (apply the changes)

5.Application configuration

export DB_URL="jdbc:postgresql://localhost:5432/mydatabase"
echo $DB_URL
result:-
jdbc:postgresql://localhost:5432/mydatabase

6. Storing an API key in an environment variable
$ export API_KEY="abcde-123456-fghijk"

7.Setting Language
export LANG="en_US.UTF-8"
echo $LANG
result: en_US.UTF-8

8. Simple scripting with a temporary environment variable:-

bash:-

mkdir -p ~/scripts

Creates the scripts directory inside HOME (~); -p also creates any missing parent directories.

echo -e '#!/bin/bash\necho "welcome to MY Linux blog"' > ~/scripts/myscript.sh

echo -e prints the text with escape sequences enabled (\n starts a new line).
#!/bin/bash is the shebang line; it tells the system to run the script with Bash.
> redirects the output into the file.
~/scripts/myscript.sh is the script file location.
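
To try the script (a quick follow-up; the path assumes the file created above):

$ chmod +x ~/scripts/myscript.sh
$ ~/scripts/myscript.sh
welcome to MY Linux blog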

  1. Commonly used environment variables:-
     $PATH = directories searched for executable files
     $HOME = the user's home directory
     $SHELL = the default shell
     $USER = the currently logged-in user
     $LANG = language setting
     $PWD = current working directory
     $HOSTNAME = machine hostname

example: $echo $HOME

Installation of “CentOS Stream 9”

By: Prasanth
27 March 2025 at 07:50

Ways to install a Linux operating system:-

1. Single boot (Linux only)
2. Dual boot (Linux + Windows or macOS)
3. Another operating system + a virtual machine (multiple OSes running simultaneously)

In the early stages it is easier to start with option 2 or 3 and move to option 1 later (the recommended way). This post covers that early stage; if you need a walkthrough of option 1, let me know in the comments.

Operating system: Linux CentOS Stream 9

Prerequisites:-

* Pen drive, 32 GB
* Internet connection
* Minimum 2 GB RAM
* At least 40 GB free storage

Step 1: Create a bootable USB (pen drive)

For Windows users (using Rufus):

1. Download Rufus from https://rufus.ie/en/ and download the ISO image from https://www.centos.org/download/ (choose the ISO based on your system info, most likely x86_64).
2. Insert your USB drive into the computer or laptop.
3. Open Rufus and set the options below:

Device: select your USB drive
Boot selection: choose the CentOS Stream 9 ISO file
Partition scheme: GPT for UEFI or MBR for legacy BIOS
File system: FAT32 (File Allocation Table)
Click Start; you now have a bootable USB device.

Step 2: Configure BIOS/UEFI settings

1. Restart your machine and press F2, F12, or Esc (depending on the system model).
2. Find the boot order / boot priority setting, enable USB boot, and set the USB drive as the first boot option.
3. Select the boot mode: Legacy or UEFI.
4. Disable Secure Boot if it is enabled.
5. Save and exit (F10).

Step 3: Start the installation

  1. Insert the bootable USB and restart your PC; the machine now boots from the pen drive.

i. The boot menu is displayed; choose "Install CentOS Stream 9".

ii. Choose your language.

iii. Set the date and time, keyboard layout, and language support.

iv. Choose your location.

v. Software selection: choose "Server with GUI".

vi. Installation Destination:-

I chose Automatic partitioning here; you can also choose Custom (recommended) if you want control over the disk layout. Skip the Network & Host Name part for now; we will discuss it later.
If you choose Custom:-

  1. /boot = directory that stores the boot configuration files and Linux kernel related files

Example:-
/boot/vmlinuz-5.14.0-70.el9.x86_64
/boot/initramfs-5.14.0-70.el9.x86_64.img
/boot/grub2/grub.cfg (GRUB configuration)
/boot/efi/EFI/centos/grubx64.efi (if using UEFI)
2. Swap

When physical RAM is full, Linux moves inactive data to the swap area. This prevents the system from crashing and keeps processes running; when RAM frees up, the data moves back from swap to RAM.
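
Once the system is installed, you can check how much swap is configured and in use (a quick check, not part of the installer):

$ free -h
$ swapon --show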
3. / (Root)

The top-level (parent) of all directories in Linux. All important files, applications, and configuration are stored under this location.

4. /home
Directory containing each user's files, documents, and personal data.


vii. User creation & root password:-


viii. GNOME at Boot:-

After you reboot the system, a CLI login appears on the screen. Log in with your user name and password, then switch from the CLI to the GUI with the commands below:

$ su
# systemctl enable --now gdm
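
To make the graphical login the default on every boot, you can also set the graphical target (an optional extra step):

# systemctl set-default graphical.target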


Linux Introduction-II

By: Prasanth
26 March 2025 at 16:45

Table of Contents:

1. Linux Architecture
2. Linux vs Unix
3. Linux Distros
4. Booting Process

1. Linux Architecture


1. Hardware Layer

The hardware layer is the base of the Linux architecture. It contains all the physical parts of a computer, for example the CPU, RAM, storage, and I/O devices. The Linux kernel communicates with the hardware through device drivers to control these physical parts.

2. Kernel Layer

The Linux kernel is the core part of the operating system. It acts as a bridge between hardware and software: it directly interacts with the hardware and manages system resources like CPU, memory, and devices.
Users give commands to the shell, and the shell processes those commands by interacting with the kernel. For example, if you run ls, the kernel asks the file system for the file information stored on the disk.
Types:

  1. Monolithic kernel
  2. Microkernel
  3. Hybrid kernel

3. Shell Layer

The shell in Linux is a command-line interface that allows the user to interact with the operating system. It acts as a bridge between the user and the kernel.
Types:

1. Bash - the default shell on most Linux distributions
2. Zsh (Z shell) - an extended version of Bash with features such as auto-suggestions
3. Fish (friendly interactive shell) - provides auto-suggestions and web-based configuration
4. C shell - syntax similar to C programming, mainly used by developers

For a real-world example:

$ cat /etc/shells (shows the shells installed on your system)
$ echo $SHELL (shows your current shell)
$ chsh -s /bin/<installed-shell-name> (changes your login shell)

4. User Applications Layer

Includes all programs and utilities (small programs or tools that help the user perform a specific task, such as ls, cp, ping, top, ps).

2. Linux vs Unix

Origin:-
Linux -Created by Linus Torvalds (1991)

Unix -Developed by AT&T Bell Labs (1970s)
License:-
Linux - Open-source & free (GPL)

Unix - Mostly proprietary (licensed)& paid

Usage:-
Linux -Used in PCs, servers, mobile (Android)
Unix - Used in enterprises and mainframes (big organizations such as banks, telecoms, and government)

Shell Support:-
Linux - Bash, Zsh, Fish, etc.
Unix - Bourne Shell, C Shell, Korn Shell

File System:-
Linux - ext4, XFS, Btrfs... etc

Unix - UFS, JFS, ZFS .... etc

Hardware Support:-
Linux - Runs on a wide range of devices (PCs, ARM boards, etc.; ARM is a type of processor used in mobiles, tablets, and computers)
Unix - Runs on specific vendor hardware (IBM, HP)

Security & Updates:-
Linux - Gets regular updates that fix security issues and improve performance
Unix - Very secure, but updated less frequently

Performance:-
Linux - High performance and flexible (the code can be changed, different software installed, and it runs on many devices)
Unix - Less flexible; runs on specific hardware

Examples:-
Linux - Ubuntu, Red Hat, Fedora, Arch
Unix - AIX, HP-UX, Solaris, macOS

3. Linux Distros

  1. Ubuntu - ubuntu.com
  2. Debian - debian.org
  3. Fedora - getfedora.org
  4. Arch Linux - archlinux.org
  5. Linux Mint - linuxmint.com
  6. openSUSE - opensuse.org
  7. Manjaro - manjaro.org
  8. Kali Linux - kali.org
  9. CentOS - centos.org
  10. Rocky Linux - rockylinux.org

4. Booting Process

i. BIOS/UEFI (Basic Input/Output System):-

The BIOS is firmware stored on a non-volatile memory chip on the motherboard; it initializes and tests the hardware components when the computer is powered on, and then loads the bootloader/operating system. Firmware ("something that does not change") is traditionally written onto ROM (read-only memory) during manufacturing and cannot be altered, updated, or erased later; it is permanently fixed in the memory. Modern systems instead use EEPROM (Electrically Erasable Programmable Read-Only Memory) or flash memory, which can be updated.

Functions of BIOS:

1. POST (Power-On Self-Test) - checks whether the hardware components (CPU, RAM, storage, etc.) are working correctly. If a problem is found (for example, a disk not connected properly to the motherboard, or dusty RAM that is not detected), it is reported with beeps or an error code.

2. Bootstrapping (boot process) - finds and loads the OS from storage into RAM.

3. Hardware initialization - configures and initializes the system hardware before the OS takes control.

4. BIOS setup utility - allows users to configure system settings (boot order, Secure Boot, BIOS password).

5. Basic I/O operations - acts as an interface between the OS and the hardware.

Types of BIOS:-

* Legacy BIOS - used on older systems. It supports only MBR partitioning and a 16-bit interface, does not support GPT, has a text-based keyboard-only interface, supports disks up to 2.2 TB and 4 primary partitions, offers only basic password protection, and boots more slowly.
* UEFI (Unified Extensible Firmware Interface) - used on modern systems. It has a graphical interface with keyboard and mouse support, runs in 32/64-bit mode, supports up to 18 exabytes of storage and 128 partitions, and offers Secure Boot, other advanced security features, and faster boot times.

  • The firmware searches for the bootloader in the MBR (Legacy BIOS) or in the EFI system partition on a GPT disk (UEFI), and then passes control to the bootloader.

ii. Bootloader (GRUB, LILO, etc.)

  • The MBR is responsible for loading and executing the GRUB bootloader. Let's discuss it in depth: GRUB (GRand Unified Bootloader) is the bootloader used on Linux systems to load the operating system into primary memory (RAM) when the computer starts. It is the first software that runs after the firmware (BIOS/UEFI) completes hardware initialization.

Stage 1 (MBR/EFI execution):
The BIOS loads the first stage of GRUB from the MBR (the first 512 bytes of the boot disk); on UEFI systems it is loaded from the EFI system partition.

Stage 2 (loading the kernel & initrd):

GRUB loads the kernel (vmlinuz) into RAM.
vmlinuz = "Virtual Memory LINUx gZip-compressed"
*It is the compressed Linux kernel used for booting the system. GRUB also loads the initrd (initial RAM disk) or initramfs (initial RAM filesystem), which contains essential drivers and tools.
initrd (old) / initramfs (modern) is a temporary root filesystem loaded into RAM by the bootloader (such as GRUB) before the real root filesystem is mounted (mount = attach/connect, umount = detach/disconnect). Unlike initrd, initramfs does not need to be mounted as a separate device; it is unpacked directly in RAM.

*The main GRUB menu is displayed here.
You can select the operating system or kernel version.
If you make no selection, it boots the default OS automatically and passes control to the kernel.

Kernel Initialization

  • The kernel starts executing and mounts the root file system in read-only mode, then runs the systemd/init process, which starts the essential services. You can check that the first process (PID 1) on a Linux system is systemd using the top command.

    • systemd is the system and service manager; it manages system services, controls the run levels (targets), and so on.

*The kernel manages the system resources (CPU, memory, devices) and the hardware components.
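
You can also confirm that PID 1 is systemd with ps (a quick check; older SysVinit systems show init instead):

$ ps -p 1 -o pid,comm
    PID COMMAND
      1 systemd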

System Initialization (systemd/init)

*Systemd starts all required services and daemons (e.g., networking, logging).
*The kernel starts the init system (older Linux) or systemd (modern Linux).
*Systemd is the first process (PID 1) that manages system services.

  • It manages runlevels (SysVinit, older) or targets (systemd) to start the necessary services:
    1. Networking
    2. Filesystem mounting
    3. Daemons (background services)
    4. Login services

Runlevel/Target Execution

Old runlevels (SysVinit)

runlevel
0 shutdown
1 single-user mode (rescue mode)
2 multi-user mode (no networking)
3 multi-user mode (with networking, CLI)
4 unused/custom mode
5 multi-user mode with GUI
6 reboot

example :
bash
$ runlevel
$ init 0

New Targets (systemd)

poweroff.target => Equivalent to Runlevel 0 (Shutdown)
rescue.target => Equivalent to Runlevel 1 (Single-user mode)
multi-user.target => Equivalent to Runlevel 2 & 3 (CLI, Networking)
graphical.target => Equivalent to Runlevel 5 (GUI mode)
reboot.target => Equivalent to Runlevel 6 (Reboot)

example :
bash
$ systemctl get-default (check the current default target)
$ systemctl isolate poweroff.target
$ systemctl set-default graphical.target
(isolate - switches the target temporarily)
(set-default - sets the default target permanently)

Login Prompt (getty)

Displays a CLI login (TTY) or a graphical login (GDM/KDM) and allows the user to log in and access the system.
example:
$ who
TTY column:
tty1 - local terminal
pts/0 - remote SSH session 1
pts/1 - remote SSH session 2

Reference
https://medium.com/%40gangulysutapa96/6-stages-of-linux-boot-process-5ee84265d8a0
https://www.thegeekstuff.com/2011/02/linux-boot-process/
https://www.freecodecamp.org/news/the-linux-booting-process-6-steps-described-in-detail/
https://www.geeksforgeeks.org/linux-vs-unix/
https://www.linuxjournal.com/content/unix-vs-linux-what-is-the-difference
https://www.diffen.com/difference/Linux_vs_Unix

Introduction to Linux

By: Prasanth
25 March 2025 at 08:59

Reference Book: Linux Administration A Beginner's Guide
*Author * : Wale Soyinka

Linux: The operating system:-

  • Linux is an open-source operating system; it contains a kernel, and that kernel is the heart of Linux. The kernel operates the hardware components (RAM, CPU, storage device, and input/output devices) and manages system resources, for example, files, processors, memory, devices, and networks.
  • The kernel is a nontrivial program; it means complex, significant, and not easy to implement.
  • Linux: All distros use different customized versions of the kernel, but core functionality is the same for all distros, such as file management, processing, and networking.
  • Linux distro RHEL, Fedora, Debian, Ubuntu, CentOS, Arch...etc. Linux differentiates two categories. One is commercial (supported by vendor company distro longer release life cycle; it means support and update for a long year with a paid subscription); another one is uncommercial (free), managed by the open source community with a short year release.
  • The open source (uncommercial) and commercial Linux distros are interconnected because commercial companies support free distros as testing grounds. Most Linux software is developed by independent contributors, not the company side.

kernel differences:-

Each company sells its own developed distro and customizes the kernel, and each claims its kernel is better than the others. Why? Most vendor companies stay current with the patches posted on www.kernel.org, the "Linux Kernel Archives." However, a vendor does not simply track a single kernel version released on www.kernel.org; instead it applies its own custom patches and runs the kernel through a quality-assurance process before declaring it production-ready, which gives commercial customers confidence. If any problem occurs in the kernel, the vendor's team takes action immediately and fixes the issue, so every vendor maintains its own patch set. Every vendor also follows its own market strategy, and the distro is designed to the users' requirements, for example customizing the kernel for desktop, server, or enterprise use.

The GNU Public License:-

In the 1980s, Richard Stallman founded the Free Software Movement and initiated the GNU Project.

Richard Stallman is quoted as saying:

“Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.”

  • The main goal of giving away source code is freedom and flexibility. The user should have control over the software and should not have to depend on the developer or vendor. Open source's main goal is that anyone can modify and improve it. The advantage of open-source software is that it allows community users with the necessary skills to collaborate, add new features, and contribute improvements for the benefit of everyone.

  • The GNU Project's license is the GNU General Public License (GPL); it means that for any GPL software, anyone can view the source code, edit it, rebuild it, release the full source code, sell it, and make a profit, but with one condition: the modified version must also be released under the GPL. Other licenses exist, such as BSD and Apache, that have similar rules but differ in their terms and redistribution. Under the BSD license, anyone can change the source code and use it, and you may keep your changes private from others, but the GPL does not allow this: you must publish what you modified in the software. You can find more related info at www.opensource.org.

Upstream and Downstream:-

  • Upstream is the source code of the original project, where the code is first developed. When you take the code of an open-source project and modify it, the original is the upstream. For example: you buy a normal veg pizza in a pizza shop, gather all the information about its ingredients, and make a new cheese-pizza version that includes your own ingredients.

  • Downstream is the modified version of the code: the project that uses, modifies, or extends the upstream code. If you take an open-source project and build your own version, your project is the downstream. A key point is that changes made upstream affect downstream projects; for example, the Linux kernel is the upstream source code for many Linux distros such as Ubuntu and Fedora, and downstream changes do not automatically go back to the upstream.

Single Users vs. Multiple Users vs. Network Users

  • Windows was originally designed so that only one user could work on the machine at a time; two people could not work on it simultaneously. Linux supports a multiuser environment, meaning multiple users can work on the system at the same time.
  • For network users and the client-server model, both Linux and Windows provide services such as databases (SQL Server, MySQL, PostgreSQL) over a network. Windows was designed for client-server communication in a more restrictive way, while Linux is more flexible: almost any program can be run remotely, given admin permission.

The Monolithic Kernel and the Micro-Kernel:-

The kernel has three different design types used in operating systems. The first is a monolithic kernel, with everything inside a single program, as in Unix and Linux. The second is a microkernel, which keeps only a limited core set of services needed to implement the operating system. The third is a hybrid, which combines the first two; modern Windows uses a hybrid kernel. The Windows kernel provides a small set of services that interface with other executive services such as process and I/O management.

Domains and Active Directory:-

  • AD (Active Directory) is a centralized service; it manages authentication and authorization for the domain. The domain synchronization model means the domain controllers keep everything up to date. It follows a DNS-style root-domain and child-domain architecture; for example, abc.com is the root domain, with it.abc.com and sales.abc.com as child domains. Each domain maintains its own users and computers but is part of the root domain's network.
  • Linux does not have such a tightly coupled authentication/authorization system (components do not depend on each other directly). Linux handles authentication with PAM (Pluggable Authentication Modules), which lets multiple authentication methods be plugged in, and name-resolution libraries help Linux find and verify user and group information from different sources such as local files, LDAP, and NIS.
  • Authentication options in Linux: flat files (basic authentication using /etc/passwd and /etc/shadow), NIS (used on older networks for centralized authentication), LDAP (Lightweight Directory Access Protocol, which works like AD but is open source), Kerberos (secure authentication using tickets), and Samba/AD, which allows authentication against a Windows domain.
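
A quick way to look at the PAM configuration on a typical system (a small sketch; file names vary by distro and service):

$ ls /etc/pam.d/    (each file is the PAM stack for one service, e.g. login, sshd, sudo)
$ grep -r pam_unix.so /etc/pam.d/ | head -3    (pam_unix.so is the module that checks /etc/passwd and /etc/shadow)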

How to Manage Multiple Cron Job Executions

16 March 2025 at 06:13

Cron jobs are a fundamental part of automating tasks in Unix-based systems. However, one common problem with cron jobs is multiple executions, where overlapping job runs can cause serious issues like data corruption, race conditions, or unexpected system load.

In this blog, we’ll explore why multiple executions happen, the potential risks, and how flock provides an elegant solution to ensure that a cron job runs only once at a time.

The Problem: Multiple Executions of Cron Jobs

Cron jobs are scheduled to run at fixed intervals, but sometimes a new job instance starts before the previous one finishes.

This can happen due to

  • Long-running jobs: If a cron job takes longer than its interval, a new instance starts while the old one is still running.
  • System slowdowns: High CPU or memory usage can delay job execution, leading to overlapping runs.
  • Simultaneous executions across servers: In a distributed system, multiple servers might execute the same cron job, causing duplication.

Example of a Problematic Cron Job

Let’s say we have the following cron job that runs every minute:

* * * * * /path/to/script.sh

If script.sh takes more than a minute to execute, a second instance will start before the first one finishes.

This can lead to:

✅ Duplicate database writes → Inconsistent data

✅ Conflicts in file processing → Corrupt files

✅ Overloaded system resources → Performance degradation

Real-World Example

Imagine a job that processes user invoices and sends emails

* * * * * /usr/bin/python3 /home/user/process_invoices.py

If the script takes longer than a minute to complete, multiple instances might start running, causing

  1. Users to receive multiple invoices.
  2. The database to get inconsistent updates.
  3. Increased server load due to excessive email sending.

The Solution: Using flock to Prevent Multiple Executions

flock is a Linux utility that manages file locks to ensure that only one instance of a process runs at a time. It works by locking a specific file, preventing other processes from acquiring the same lock.

Using flock in a Cron Job

Modify the cron job as follows

* * * * * /usr/bin/flock -n /tmp/myjob.lock /path/to/script.sh

How It Works

  • flock -n /tmp/myjob.lock → Tries to acquire a lock on /tmp/myjob.lock.
  • If the lock is available, the script runs.
  • If the lock is already held (i.e., another instance is running), flock prevents the new instance from starting.
  • -n (non-blocking) ensures that the job doesn’t wait for the lock and simply exits if it cannot acquire it.

This guarantees that only one instance of the job runs at a time.

Verifying the Solution

You can test the lock by manually running the script with flock

/usr/bin/flock -n /tmp/myjob.lock /bin/bash -c 'echo "Running job..."; sleep 30'

Open another terminal and try to run the same command. You’ll see that the second attempt exits immediately because the lock is already acquired.

Preventing multiple executions of cron jobs is essential for maintaining data consistency, system stability, and efficiency. By using flock, you can easily enforce single execution without complex logic.
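
If you prefer to keep the locking inside the script itself rather than in the crontab entry, a common pattern looks like this (a sketch; the lock file path and descriptor number are arbitrary):

#!/bin/bash
exec 200>/tmp/myjob.lock      # open the lock file on file descriptor 200
flock -n 200 || exit 1        # exit immediately if another instance already holds the lock
# ... the rest of the script runs here, protected by the lock ...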

✅ Simple & efficient solution. ✅ No external dependencies required. ✅ Works seamlessly with cron jobs.

So next time you set up a cron job, add flock and sleep peacefully knowing your tasks won’t collide. 🚀

How do I use the ResourceTag condition key to create an IAM policy for tag-based restriction?

By: Kannan
28 February 2025 at 08:12

The following IAM policies use condition keys to create tag-based restriction.

  • Before you use tags to control access to your AWS resources, you must understand how AWS grants access. AWS is composed of collections of resources. An Amazon EC2 instance is a resource. An Amazon S3 bucket is a resource. You can use the AWS API, the AWS CLI, or the AWS Management Console to perform an operation, such as creating a bucket in Amazon S3. When you do, you send a request for that operation. Your request specifies an action, a resource, a principal entity (user or role), a principal account, and any necessary request information.

  • You can then create an IAM policy that allows or denies access to a resource based on that resource's tag. In that policy, you can use tag condition keys to control access to any of the following:

  • Resource – Control access to AWS service resources based on the tags on those resources. To do this, use the aws:ResourceTag/key-name condition key to determine whether to allow access to the resource based on the tags that are attached to the resource.

ResourceTag condition key

Use the aws:ResourceTag/tag-key condition key to compare the tag key-value pair that's specified in the IAM policy with the key-value pair that's attached to the AWS resource. For more information, see Controlling access to AWS resources.

You can use this condition key with the global aws:ResourceTag version and AWS services, such as ec2:ResourceTag. For more information, see Actions, resources, and condition keys for AWS services.

  • The following IAM policy allows users to start, stop, and terminate instances that carry the tag application=test
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances"
            ],
            "Resource": "arn:aws:ec2:*:3817********:instance/*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/application": "test"
                }
            }
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeTags"
                "ec2:Describedescribe-instance-status"
            ],
            "Resource": "*"
        }
    ]
}

Create the policy and attach it to a user or role.

  • Created 2 instances: one with the application tag and the other non-tagged.

You can see that the tagged instance is able to perform the Start and Stop actions using the IAM resource tag condition;
the non-tagged instance is not able to perform the same actions.

  • check the status of the instance


  • perform the Termination action


reference commands

aws ec2 start-instances --instance-ids "instance-id"
aws ec2 stop-instances --instance-ids "instance-id"
aws ec2 describe-instance-status  --instance-ids "instance-id"
aws ec2 terminate-instances --instance-ids "instance-id"

String condition operators

String condition operators let you construct Condition elements that restrict access based on comparing a key to a string value.

  • StringEquals - Exact matching, case sensitive

  • StringNotEquals - Negated matching

  • StringEqualsIgnoreCase - Exact matching, ignoring case

  • StringNotEqualsIgnoreCase - Negated matching, ignoring case

  • StringLike - Case-sensitive matching. The values can include multi-character match wildcards (*) and single-character match wildcards (?) anywhere in the string. You must specify wildcards to achieve partial string matches.
    Note
    If a key contains multiple values, StringLike can be qualified with set operators—ForAllValues:StringLike and ForAnyValue:StringLike.

  • StringNotLike - Negated case-sensitive matching. The values can include multi-character match wildcards (*) or single-character match wildcards (?) anywhere in the string.
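
For example, a Condition block like the following (a sketch, separate from the policy above) would match any instance whose application tag value starts with "test":

"Condition": {
    "StringLike": {
        "aws:ResourceTag/application": "test*"
    }
}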

Script to list the S3 Bucket storage size

By: Kannan
28 February 2025 at 05:41

To find the storage size of an S3 bucket, we normally have to open each bucket individually and look under Metrics to get the storage size.
To get every bucket name with its storage size in one go, use the script below.

s3list=`aws s3 ls | awk  '{print $3}'`
for s3dir in $s3list
do
    echo $s3dir
    aws s3 ls "s3://$s3dir"  --recursive --human-readable --summarize | grep "Total Size" 
done
  1. Create the .sh file
  2. Copy the code into the file
  3. Execute the script to get the S3 bucket details

Linux Mint Installation Drive – Dual Boot on 10+ Machines!

24 February 2025 at 16:18

Linux Mint Installation Drive – Dual Boot on 10+ Machines!

Hey everyone! Today, we had an exciting Linux installation session at our college. We expected many to do a full Linux installation, but instead, we set up dual boot on 10+ machines! 💻✨

💡 Topics Covered:
🛠 Syed Jafer – FOSS, GLUGs, and open-source communities
🌍 Salman – Why FOSS matters & Linux Commands
🚀 Dhanasekar – Linux and DevOps
🔧 Guhan – GNU and free software

Challenges We Faced


🔐 BitLocker Encryption – Had to disable BitLocker on some laptops
🔧 BIOS/UEFI Problems – Secure Boot, boot order changes needed
🐧 GRUB Issues – Windows not showing up, required boot-repair

🎥 Watch the installation video and try it yourself! https://www.youtube.com/watch?v=m7sSqlam2Sk


▶ Linux Mint Installation Guide https://tkdhanasekar.wordpress.com/2025/02/15/installation-of-linux-mint-22-1-cinnamon-edition/

This is just the beginning!

Can UV Transform Python Scripts into Standalone Executables ?

17 February 2025 at 17:48

Managing dependencies for small Python scripts has always been a bit of a hassle.

Traditionally, we either install packages globally (not recommended) or create a virtual environment, activate it, and install dependencies manually.

But what if we could run Python scripts like standalone binaries ?

Introducing PEP 723 – Inline Script Metadata

PEP 723 (https://peps.python.org/pep-0723/) introduces a new way to specify dependencies directly within a script, making it easier to execute standalone scripts without dealing with external dependency files.

This is particularly useful for quick automation scripts or one-off tasks.

Consider a script that interacts with an API requiring a specific package,

# /// script
# requires-python = ">=3.11"
# dependencies = [
#   "requests",
# ]
# ///

import requests
response = requests.get("https://api.example.com/data")
print(response.json())

Here, instead of manually creating a requirements.txt or setting up a virtual environment, the dependencies are defined inline. When using uv, it automatically installs the required packages and runs the script just like a binary.

Running the Script as a Third-Party Tool

With uv, executing the script feels like running a compiled binary,

$ uv run fetch-data.py
Reading inline script metadata from: fetch-data.py
Installed dependencies in milliseconds

Behind the scenes, uv creates an isolated environment, ensuring a clean dependency setup without affecting the global Python environment. This allows Python scripts to function as independent tools without any manual dependency management.

Why This Matters

This approach makes Python an even more attractive choice for quick automation tasks, replacing the need for complex setups. It allows scripts to be shared and executed effortlessly, much like compiled executables in other programming environments.

By leveraging uv, we can streamline our workflow and use Python scripts as powerful, self-contained tools without the usual dependency headaches.

Top Command in Linux: Tips for Effective Usage

17 February 2025 at 17:17

The top command in Linux is a powerful utility that provides realtime information about system performance, including CPU usage, memory usage, running processes, and more.

It is an essential tool for system administrators to monitor system health and manage resources effectively.

1. Basic Usage

Simply running top without any arguments displays an interactive screen showing system statistics and a list of running processes:

$ top

2. Understanding the top Output

The top interface is divided into multiple sections

Header Section

This section provides an overview of the system status, including uptime, load averages, and system resource usage.

  • Uptime and Load Average – Displays how long the system has been running and the average system load over the last 1, 5, and 15 minutes.
  • Task Summary – Shows the number of processes in various states:
    • Running – Processes actively executing on the CPU.
    • Sleeping – Processes waiting for an event or resource.
    • Stopped – Processes that have been paused.
    • Zombie – Processes that have completed execution but still have an entry in the process table. These occur when the parent process has not yet read the exit status of the child process. Zombie processes do not consume system resources but can clutter the process table if not handled properly (see the example after this list).
  • CPU Usage – Breaks down CPU utilization into different categories:
    • us (User Space) – CPU time spent on user processes.
    • sy (System Space) – CPU time spent on kernel operations.
    • id (Idle) – Time when the CPU is not being used.
    • wa (I/O Wait) – Time spent waiting for I/O operations to complete.
    • st (Steal Time) – CPU cycles stolen by a hypervisor in a virtualized environment.
  • Memory Usage – Shows the total, used, free, and available RAM.
  • Swap Usage – Displays total, used, and free swap memory, which is used when RAM is full.
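
To list zombie processes outside of top (a small aside; on a healthy system this usually prints nothing):

$ ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'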

Process Table

The table below the header lists active processes with details such as:

  • PID – Process ID, a unique identifier for each process.
  • USER – The owner of the process.
  • PR – Priority of the process, affecting its scheduling.
  • NI – Nice value, which determines how favorable the process scheduling is.
  • VIRT – The total virtual memory used by the process.
  • RES – The actual RAM used by the process.
  • SHR – The shared memory portion.
  • S – Process state:
    • R – Running
    • S – Sleeping
    • Z – Zombie
    • T – Stopped
  • %CPU – The percentage of CPU time used.
  • %MEM – The percentage of RAM used.
  • TIME+ – The total CPU time consumed by the process.
  • COMMAND – The command that started the process.

3. Interactive Commands

While running top, various keyboard shortcuts allow dynamic interaction:

  • q – Quit top.
  • h – Display help.
  • k – Kill a process by entering its PID.
  • r – Renice a process (change priority).
  • z – Toggle color/monochrome mode.
  • M – Sort by memory usage.
  • P – Sort by CPU usage.
  • T – Sort by process runtime.
  • 1 – Toggle CPU usage breakdown for multi-core systems.
  • u – Filter processes by a specific user.
  • s – Change update interval.

4. Command-Line Options

The top command supports various options for customization:

  • -b (Batch mode): used in scripts to produce non-interactive output.
    $ top -b -n 1
    (-n specifies the number of iterations before top exits)
  • -o FIELD (sort by a specific field):
    $ top -o %CPU
    (sorts by CPU usage)
  • -d SECONDS (refresh interval):
    $ top -d 3
    (updates the display every 3 seconds)
  • -u USERNAME (show processes for a specific user):
    $ top -u john
  • -p PID (monitor a specific process):
    $ top -p 1234

5. Customizing top Display

Persistent Customization

To save custom settings, press W while running top. This saves the configuration to ~/.toprc.

Changing Column Layout

  • Press f to toggle the fields displayed.
  • Press o to change sorting order.
  • Press X to highlight sorted columns.

6. Alternative to top: htop, btop

For a more user-friendly experience, htop is an alternative:

$ sudo apt install htop  # Debian-based
$ sudo yum install htop  # RHEL-based
$ htop

It provides a visually rich interface with color coding and easy navigation.

Installation of Linux Mint 22.1 Cinnamon Edition

15 February 2025 at 18:13

Linux Mint 22.1 Cinnamon Edition iso
download link
https://mirrors.cicku.me/linuxmint/iso/stable/22.1/linuxmint-22.1-cinnamon-64bit.iso
Installation of Linux Mint 22.1 Cinnamon Edition

make the ISO file into a USB installer
then in the BIOS settings make USB the first boot option
insert the pen drive; it automatically detects the Linux Mint OS
and this screen will appear

select “start linux mint” then hit enter
we got the linux mint home screen

now double click the “install linux mint” icon
in the welcome screen choose “English” and continue

select keyboard layout to English (US) and continue

Next we got the multimedia codecs wizard leave as it is and
hit continue

next we got the installation type wizard
select “something else ” and continue

in the next wizard click “New Partition Table”

we got the “create new empty partition table on this device ?” wizard
click continue

then select the free space and click “+”

we got the create partition wizard
give the size for root partition (Maximum size 85%)
use as “Ext4 journaling file system”
Mount point : /
then click “OK”

then again select the free space and click “+” sign

give the size for swap (twice the size of RAM usually)
select Use as: swap area
then click “OK”

again select the free space and click “+” sign
give the size for EFI partition 1GB
select Use as: EFI system partition
and click “OK”

again select the free space and click “+” sign
give the size 100 MB for
use as : “Reserved BIOS boot area” then click OK

then click “Install Now” to continue

then give the time zone as Kolkata

then click continue and in the next wizard
give username , computer name , password
either choose login automatically or require password

then click “continue”
the installation process takes a while
when the installation is complete
it will ask you to remove the installation medium and press ENTER

the system reboots and Linux Mint cinnamon home page will be displayed

🙂

Effortless Git Repo Switching with a Simple Bash Function!

By: krishna
12 February 2025 at 10:36

Why I Created This Function

At first, I used an alias

alias gitdir="cd ~/Git/" (This means gitdir switches to the ~/Git/ directory, but I wanted it to switch directly to a repository.)
So, I wrote a Bash function.

Write Code to .bashrc File

The .bashrc file runs when a new terminal window is opened.
So, we need to write the function inside this file.

Code

gitrepo() {
    # Exact Match
    repoList=$(ls $HOME/Git)
    if [ -n "$(echo "$repoList" | grep -w $1)" ]; then
	cd $HOME/Git/$1
    else
	# Relevant Match
	getRepoName=$(echo "$repoList" | grep -i -m 1 $1)
	
	if [ -n "$getRepoName" ]; then
	    cd "$HOME/Git/$getRepoName"
	else
	    echo "Repository Not Founded"
	    cd $HOME/Git
	fi
	
    fi   
}

Code Explanation

The $repoList variable stores the list of directories inside the Git folder.

Function Logic Has Two Parts:

  • Exact Match
  • Relevant Match

Exact Match

if [ -n "$(echo "$repoList" | grep -w $1)" ]; then
	cd $HOME/Git/$1

In the if condition, the contents of $repoList are echoed and piped into grep.

  • grep -w matches only whole words.
  • $1 is the function’s argument in bash.
  • -n checks that a string is not empty. Example syntax:
    [ "$a" != "" ] is equivalent to [ -n "$a" ]

Relevant Match

getRepoName=$(echo "$repoList" | grep -i -m 1 $1)
	if [ -n "$getRepoName" ]; then
	    cd "$HOME/Git/$getRepoName"

Relevant search: If no Exact Match is found, this logic is executed next.

getRepoName="$repoList" | grep -i -m 1 $1
  • -i ignores case sensitivity.
  • -m 1 returns only the first match.

Example of -m with grep:
ls | grep i3
It returns i3WM and i3status, but -m 1 ensures only i3WM is selected.

No Match

If no match is found, it simply changes the directory to the Git folder.

	else
	    echo "Repository Not Founded"
	    cd $HOME/Git

What I Learned

  • Basics of Bash functions
  • How to use .bashrc and reload changes.
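
A quick usage example after reloading the shell config (the repository names here are hypothetical):

source ~/.bashrc
gitrepo TerminalTyping   # exact match: jumps to ~/Git/TerminalTyping
gitrepo term             # relevant match: jumps to the first repo whose name contains "term"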

20 Essential Git Command-Line Tricks Every Developer Should Know

5 February 2025 at 16:14

Git is a powerful version control system that every developer should master. Whether you’re a beginner or an experienced developer, knowing a few handy Git command-line tricks can save you time and improve your workflow. Here are 20 essential Git tips and tricks to boost your efficiency.

1. Undo the Last Commit (Without Losing Changes)

git reset --soft HEAD~1

If you made a commit but want to undo it while keeping your changes, this command resets the last commit but retains the modified files in your staging area.

This is useful when you realize you need to make more changes before committing.

If you also want to remove the changes from the staging area but keep them in your working directory, use,

git reset HEAD~1

2. Discard Unstaged Changes

git checkout -- <file>

Use this to discard local changes in a file before staging. Be careful, as this cannot be undone! If you want to discard all unstaged changes in your working directory, use,

git reset --hard HEAD

3. Delete a Local Branch

git branch -d branch-name

Removes a local branch safely if it’s already merged. If it’s not merged and you still want to delete it, use -D

git branch -D branch-name

4. Delete a Remote Branch

git push origin --delete branch-name

Deletes a branch from the remote repository, useful for cleaning up old feature branches. If you mistakenly deleted the branch and want to restore it, you can use

git checkout -b branch-name origin/branch-name

if it still exists remotely.

5. Rename a Local Branch

git branch -m old-name new-name

Useful when you want to rename a branch locally without affecting the remote repository. To update the remote reference after renaming, push the renamed branch and delete the old one,

git push origin -u new-name
git push origin --delete old-name

6. See the Commit History in a Compact Format

git log --oneline --graph --decorate --all

A clean and structured way to view Git history, showing branches and commits in a visual format. If you want to see a detailed history with diffs, use

git log -p

7. Stash Your Changes Temporarily

git stash

If you need to switch branches but don’t want to commit yet, stash your changes and retrieve them later with

git stash pop

To see all stashed changes

git stash list

8. Find the Author of a Line in a File

git blame file-name

Shows who made changes to each line in a file. Helpful for debugging or reviewing historical changes. If you want to ignore whitespace changes

git blame -w file-name

9. View a File from a Previous Commit

git show commit-hash:path/to/file

Useful for checking an older version of a file without switching branches. If you want to restore the file from an old commit

git checkout commit-hash -- path/to/file

10. Reset a File to the Last Committed Version

git checkout HEAD -- file-name

Restores the file to the last committed state, removing any local changes. If you want to reset all files

git reset --hard HEAD

11. Clone a Specific Branch

git clone -b branch-name --single-branch repository-url

Instead of cloning the entire repository, this fetches only the specified branch, saving time and space. If you want all branches but don’t want to check them out initially:

git clone --mirror repository-url

12. Change the Last Commit Message

git commit --amend -m "New message"

Use this to correct a typo in your last commit message before pushing. Be cautious—if you’ve already pushed, use

git push --force-with-lease

13. See the List of Tracked Files

git ls-files

Displays all files being tracked by Git, which is useful for auditing your repository. To see ignored files

git ls-files --others --ignored --exclude-standard

14. Check the Difference Between Two Branches

git diff branch-1..branch-2

Compares changes between two branches, helping you understand what has been modified. To see only file names that changed

git diff --name-only branch-1..branch-2

15. Add a Remote Repository

git remote add origin repository-url

Links a remote repository to your local project, enabling push and pull operations. To verify remote repositories

git remote -v

16. Remove a Remote Repository

git remote remove origin

Unlinks your repository from a remote source, useful when switching remotes.

17. View the Last Commit Details

git show HEAD

Shows detailed information about the most recent commit, including the changes made. To see only the commit message

git log -1 --pretty=%B

18. Check What’s Staged for Commit

git diff --staged

Displays changes that are staged for commit, helping you review before finalizing a commit.

19. Fetch and Rebase from a Remote Branch

git pull --rebase origin main

Combines fetching and rebasing in one step, keeping your branch up-to-date cleanly. If conflicts arise, resolve them manually and continue with

git rebase --continue

20. View All Git Aliases

git config --global --list | grep alias

If you’ve set up aliases, this command helps you see them all. Aliases can make your Git workflow faster by shortening common commands. For example

git config --global alias.co checkout

allows you to use git co instead of git checkout.

Try these tricks in your daily development to level up your Git skills!

Minimal Typing Practice Application in Python

By: krishna
30 January 2025 at 09:40

Introduction

This is a Python-based single-file application designed for typing practice. It provides a simple interface to improve typing accuracy and speed. Over time, this minimal program has gradually increased my typing skill.

What I Learned from This Project

  • 2D Array Validation
    I first simply used a 1D array to store user input, but I noticed some issues. After implementing a 2D array, I understood why the 2D array was more appropriate for handling user inputs.
  • Tkinter
    I wanted to visually see and update correct, wrong, and incomplete typing inputs, but I didn’t know how to implement it in the terminal. So, I used a simple Tkinter gui window

Run This Program

It depends on the following applications:

  • Python 3
  • python3-tk

Installation Command on Debian-Based Systems

sudo apt install python3 python3-tk

Clone repository and run program

git clone https://github.com/github-CS-krishna/TerminalTyping
cd TerminalTyping
python3 terminalType.py

Links

For more details, refer to the README documentation on GitHub.

This will help you understand how to use it.

source code(github)

Learning Notes #61 – Undo a git pull

18 January 2025 at 16:04

Today, I came across a blog post about undoing a git pull. In this post, I have restated it in my own words.

Mistakes happen. You run a git pull and suddenly find your repository in a mess. Maybe conflicts arose, or perhaps the changes merged from the remote branch aren’t what you expected.

Fortunately, Git’s reflog comes to the rescue, allowing you to undo a git pull and restore your repository to its previous state. Here’s how you can do it.

Understanding Reflog


Reflog is a powerful feature in Git that logs every update made to the tips of your branches and references. Even actions like resets or rebases leave traces in the reflog. This makes it an invaluable tool for troubleshooting and recovering from mistakes.

Whenever you perform a git pull, Git updates the branch pointer, and the reflog records this action. By examining the reflog, you can identify the exact state of your branch before the pull and revert to it if needed.

Step By Step Guide to UNDO a git pull

1. Check Your Current State Ensure you’re aware of the current state of your branch. If you have uncommitted changes, stash or commit them to avoid losing any work.


git stash
# or
git add . && git commit -m "Save changes before undoing pull"

2. Inspect the Reflog View the recent history of your branch using the reflog,


git reflog

This command will display a list of recent actions, showing commit hashes and descriptions. For example,


0a1b2c3 (HEAD -> main) HEAD@{0}: pull origin main: Fast-forward
4d5e6f7 HEAD@{1}: commit: Add new feature
8g9h0i1 HEAD@{2}: checkout: moving from feature-branch to main

3. Identify the Pre-Pull Commit Locate the commit hash of your branch’s state before the pull. In the above example, it’s 4d5e6f7, which corresponds to the commit made before the git pull.

4. Reset to the Previous Commit Use the git reset command to move your branch back to its earlier state,


git reset <commit-hash>

By default, git reset uses --mixed, so the changes brought in by the pull are not deleted; they remain in your working directory as unstaged modifications.
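
If you want to throw away the pulled changes entirely, use a hard reset instead (be careful: this discards them for good):

git reset --hard <commit-hash>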

The next time a pull operation goes awry, don’t panic—let the reflog guide you back to safety!

Problem Statements : Git & Github Session – St. Joseph’s GDG Meeting

17 December 2024 at 15:13

List of problem statements enough to get your hands dirty on git. These are the list of commands that you mostly use in your development.

Problem 1

  1. Initialize a Repository.
  2. Setup user details globally.
  3. Setup project specific user details.
  4. Check Configuration – List the configurations.

Problem 2

  1. Add Specific files. Create two files app.js and style.css. User git add to stage only style.css . This allows selective addition of files to the staging area before committing.
  2. Stage all files except one.

Problem 3

  1. Commit with a message
  2. Amend a commit
  3. Commit without staging

Problem 4

  1. Create a Branch
  2. Create a new branch named feature/api to work on a feature independently without affecting the main branch.
  3. Delete a branch.
  4. Force delete a branch.
  5. Rename a branch.
  6. List all branches.

Problem 5

  1. Switch to a branch
  2. Switch to the main branch using git checkout.
  3. Create and switch to a branch
  4. Create a new branch named bugfix/001 and switch to it in a single command with git checkout -b.

Problem 6

  1. Start with a repository containing a file named project.txt
  2. Create two branches (feature-1 and feature-2) from the main branch.
  3. Make changes to project.txt in both branches.
  4. Attempt to merge feature-1 and feature-2 into the main branch.
  5. Resolve any merge conflicts and complete the merge process.

Problem 7

  1. View history in one-line format
  2. Graphical commit history
  3. Filter commits by Author
  4. Show changes in a commit

Problem 8

  1. Fetch updates from remote
  2. Fetch and Merge
  3. Fetch changes from the remote branch origin/main and merge them into your local main
  4. List remote references

Problem 9

  1. Create a stash
  2. Apply a stash
  3. Pop a stash
  4. View stash

Problem 10

  1. You need to undo the last commit but want to keep the changes staged for a new commit. What will you do ?

Problem 11

  1. You realize you staged some files for commit but want to unstage them while keeping the changes in your working directory. Which git command will allow you to unstage the files without losing any change ?

Problem 12

  1. You decide to completely discard all local changes and reset the repository to the state of the last commit. What git command should you run to discard all changes and reset your working directory ?

Kanchi Linux Users Group Monthly Meeting – Dec 08, 2024

8 December 2024 at 02:43

Hi everyone,
KanchiLUG’s Monthly meet is scheduled as online meeting this week on Sunday, Dec 08, 2024 17:00 – 18:00 IST

Meeting link : https://meet.jit.si/KanchiLugMonthlyMeet

Can join with any browser or JitSi android app.
All the Discussions are in Tamil.

Talk Details

Talk 0:
Topic : my Elisp ‘load random theme’ function
Description : I wanted to randomly load a theme in Emacs during startup. After searching online, I achieved this
functionality using Emacs Lisp. This is my talk. Duration : 10 minutes
Name : Krishna Subramaniyan
About :GNU/Linux and Emacs user 😉

Talk 1:
Topic : PDF generation using python
Description : To demo a python program which will generate a PDF output. Duration : 20 minutes
Name : Sethu
About : Member of KanchiLUG & Kaniyam IRC Channel

Talk 2:
Topic : distrobox – a wrapper on podman/docker
Description : Intro about the tool, why I had to use that and a demo Duration : 15 minutes
Name : Annamalai N
About : a GNU/Linux user

Talk 3:
Topic : Real Time Update Mechanisms (Polling, Long Polling, Server Sent Events)
Description : To demo Real Time Update Mechanisms with JS and Python Duration : 30 minutes
Name :Syed Jafer (parottasalna)
About : Developer. Currently teaching postgres at
https://t.me/parottasalna

After Talks : Q&A, General discussion

About KanchiLUG : Kanchi Linux Users Group [ KanchiLUG ] has been spreading awareness on Free/Open Source Software (F/OSS) in
Kanchipuram since November 2006.

Anyone can join! (Entry is free)
Everyone is welcome
Feel free to share this to your friends

Mailing list: kanchilug@freelists.org
Repository : https://gitlab.com/kanchilug
Twitter handle: @kanchilug
Kanchilug Blog : http://kanchilug.wordpress.com

To subscribe/unsubscribe kanchilug mailing list :
http://kanchilug.wordpress.com/join-mailing-list/

