
Installing Arch Linux on UEFI systems (from Windows)

This will be a very basic overview of what needs to be done to install Arch Linux. For more information, check out the Arch Wiki installation guide.

The commands shown in this guide will be in an italic font.

Step 1: Downloading the required files and applications

I have downloaded a few applications to help ease the process for the installation. You can download them using the links below.

Rufus:
This helps format the USB drive and write the disc image to it, including in dd image mode. I have used Rufus; you can use other tools too. It only works on Windows.
rufus link

BitTorrent
The download page suggests using BitTorrent to download the disc image file.
BitTorrent for Windows

Arch Linux torrent file
This is the torrent file for downloading the Arch Linux disc image. The download link can be found on the website given below.
Arch Linux Download Page

Step 2: The bootable USB

You will need a USB drive of at least 2 GB; 4 GB or more should be very comfortable to use.

First, open the BitTorrent application (or the web-based version) and add the magnet link or the torrent file to start downloading the disc image file.

Then to prepare the USB:

  1. Launch the application used to make the bootable USB, such as Rufus.

  2. In the device section, select your USB drive. Remember that all data on the drive will be lost during the process.

  3. In boot selection, choose the disc image file that was downloaded through the torrent.

  4. In target system, select UEFI, as we are using a UEFI system.

  5. In partition scheme, make sure GPT is selected.

  6. In file system, select FAT32, with 4096 bytes as the cluster size.

  7. When you start the write process, it will present you with two options; select dd image mode, which is not the default option.

After the process is done, the USB drive will not be readable by Windows, so there is no need to panic if you cannot access it.

If you are dual booting, make sure you have at least 30 GB of unallocated space.

I would recommend turning off BitLocker, as it could give rise to other challenges during the installation.

Then get into the UEFI firmware settings of your system. One easy way is to:
1. Hold the Shift key while clicking Restart.
2. Go into Troubleshoot.
3. Go into Advanced options.
4. Select UEFI Firmware Settings.
5. You will have to restart again, but you will end up in the required place.

Turn off Secure Boot. It is usually under the security settings.

Select save changes and exit.

When you log back into Windows, ensure that Secure Boot State shows Off in System Information.

Go back to UEFI Firmware settings by repeating the process.

In the boot priority section, give your USB device the highest priority. This is usually in the boot section. Then select save changes and exit.

Step 3: Preparing Arch Linux Installation

When all the above steps are done and the system restarts, you will be prompted with a few options. Select Arch Linux install medium and press 'Enter' to enter the installation environment. After this you will need to follow a series of steps.

1. Verifying you are in UEFI mode.

To do that type the command
cat /sys/firmware/efi/fw_platform_size

You should get 64 (or 32) as the result. If you get no result, you are probably not booted in UEFI mode.

2. Connecting to the internet:

If you are using an ethernet cable, you don't have to worry, as you should already be connected to the internet.
Use the command
ping -c 4 google.com
(or another website) to check whether you're connected to the internet.

To connect to wi-fi, type in the command
ip link

This should show you all the internet devices you have. Your wi-fi should typically be wlan0 or something like wlp3s0, which is your device name.

Then type the command
iwctl

This should get you into an interactive command line interface.
You can explore the options by using the command
help

My device name was wlan0, so I'm using wlan0 in the commands below. If yours is different, make the appropriate changes.

To connect to the wifi use the command
station wlan0 connect "Network Name"
where "Network Name" is the name of your network.

If you want to know the name of your network before doing this you can try the command
station wlan0 get-networks

To get out of the environment simply use the command
exit

After you exit, you can verify your connection with
ping -c 4 google.com

If it doesn't work, try the command
ping -c 4 8.8.8.8

If the above also doesn't work, the problem may lie with your network.

However, if the second command works for you, the fix is to manually change the DNS server you're using.
To do that, run the command
nano /etc/systemd/resolved.conf

In this file, if the DNS line is commented out with a #, remove the # and set it to a DNS server of your choice, e.g. DNS=8.8.8.8.

Press Ctrl+X, then Y, then Enter to save and exit.
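
For reference, the relevant part of /etc/systemd/resolved.conf would end up looking something like this (8.8.8.8 is just an example server), and restarting systemd-resolved applies the change immediately:

[Resolve]
DNS=8.8.8.8

systemctl restart systemd-resolved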

Now try pinging a website such as google.com again to make sure you're properly connected to the internet.

3. Set the proper time

When you connect to the internet you should have the proper time. To check you can use the command
timedatectl

4. Create the partitions for Arch Linux

To check what partitions you have, use the command
lsblk

This will list the partitions you have. It will be in the format /dev/sda or /dev/nvme0n1 or something else. Mine was /dev/nvme0n1 so I'll be using the same in the commands below.

To make the partitions, use the command
fdisk /dev/nvme0n1

This should bring you to a separate command line interface.

It will give you an introduction on what to do.

Now we will create the partitions.
To create a partition, use the command
n

It will ask for the partition number and show the default option. Press Enter to accept the default if you don't want to type a value. Let's say mine is 1.

It will then ask which sector you want the partition to start from and show the default option. Press Enter.

Then it will ask where you want the partition to end; type
+1G

This allots 1 GiB to the partition you just created.

Then create another partition in the same way; let's say mine is partition number 2 this time. For the last sector, use +4G instead of +1G.

This allots 4 GiB to the second partition you just created.

Create another partition, and this time leave the last sector at the default so it takes up the remaining space. Let's say this partition is number 3.

partition 1 - EFI system partition
partition 2 - Linux swap partition
partition 3 - Linux root partition

Use fdisk's t command to set the partition types accordingly (EFI System for partition 1, Linux swap for partition 2; the default Linux filesystem type is fine for partition 3), and then write the changes to disk with w.
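
A condensed sketch of that fdisk session, assuming the disk is /dev/nvme0n1 as above (your sizes and partition numbers may differ):

fdisk /dev/nvme0n1
  n     # new partition 1, accept the defaults, last sector +1G   (EFI)
  n     # new partition 2, accept the defaults, last sector +4G   (swap)
  n     # new partition 3, accept the defaults everywhere         (root)
  t     # change the type of partition 1 to "EFI System"
  t     # change the type of partition 2 to "Linux swap"
  w     # write the partition table and exit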

5. Prepare the created partitions for Arch Linux installation

Here, we are going to format the memory in the chosen partitions and make them the appropriate file systems.

For the EFI partition:
mkfs.fat -F 32 /dev/nvme0n1p1

This converts the 1 GB partition into a fat32 file system.

For SWAP partition:
mkswap /dev/nvme0n1p2

This converts the 4 GB partition into something that can be used as virtual RAM.

For root partition:
mkfs.ext4 /dev/nvme0n1p3

This converts the root partition into a file system that is called ext4.

6. Mounting the partitions

This gives the partitions we just created a place in the directory tree so they can be used for the installation.

For the root partition (mount this one first, since the others are mounted under it):
mount /dev/nvme0n1p3 /mnt

For the EFI partition:
mount --mkdir /dev/nvme0n1p1 /mnt/boot

For the swap partition:
swapon /dev/nvme0n1p2

Step 4: The Arch Linux Installation

1. Updating the mirrorlist (optional)

The mirrorlist is a list of mirror servers from which packages can be downloaded. Choosing the right mirror server could get you higher download speeds.

This step isn't strictly required, as the mirrorlist is automatically updated when you're connected to the internet, but if you would like to edit it manually, it's in the file
/etc/pacman.d/mirrorlist

2. Installing the base system, Linux kernel, and firmware

To do this, use the command
pacstrap -K /mnt base linux linux-firmware

Step 5: Configuring the Arch Linux system

1. Generating fstab

The fstab is the file system table. It contains information on each of the file partitions and storage devices. It also contains information on how they should be mounted during boot.

To do it, use the command:
genfstab -U /mnt >> /mnt/etc/fstab

2. Chroot

Chroot is short for change root. It is used to work directly inside the newly installed Arch Linux system from the live USB environment.

To do it, use the command:
arch-chroot /mnt

3. Time

The timezone has two parts: the region and the city. I am from India, so my region is Asia and the city is Kolkata. Change yours appropriately to your needs.

The command:
ln -sf /usr/share/zoneinfo/Asia/Kolkata /etc/localtime

We can also sync the hardware clock from the system clock (this sets it and generates /etc/adjtime).
To do that:
hwclock --systohc

4. Installing some important tools

The system you have installed is a very basic system, so it doesn't have a lot of stuff. I'm recommending two very basic tools as they can be handy.

i) nano:
This is a text editor, so you can use it to make changes to configuration files.
pacman -S nano

ii) iwd:
This is the iNet wireless daemon. I recommend it so that you can connect to wi-fi once you reboot into your actual Arch system.
pacman -S iwd

5. Localization

This is for setting the system language and locale. The locales to be generated are listed in /etc/locale.gen, so open that file with
nano /etc/locale.gen

I want to use US English, which is the default on most devices, so uncomment (remove the # from) the line that says
en_US.UTF-8 UTF-8

As there are a lot of lines, you can search in nano using Ctrl+W.

Then Ctrl+X to save and exit.

Then use the command
locale-gen

This command generates the locale you just uncommented. Finally, set the system language by putting the line
LANG=en_US.UTF-8
into the file /etc/locale.conf (for example with nano /etc/locale.conf).

6. Host and password

To set the hostname, edit the /etc/hostname file. Use
nano /etc/hostname

Then type in what your hostname would be.
ctrl + X to save and exit.

To set the password of your root user, use the command
passwd
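
Note: you will also need a boot loader, or the machine will not boot on its own once the USB is removed. A minimal sketch using GRUB on a UEFI system, assuming the EFI partition is mounted at /boot as above (other boot loaders such as systemd-boot also work):

pacman -S grub efibootmgr
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=GRUB
grub-mkconfig -o /boot/grub/grub.cfg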

7. Getting out of chroot and rebooting the system

To get out of chroot simply use
exit

Then to reboot the system use
reboot

Remove the installation medium (USB) as the device restarts.

Step 6: Enjoy Arch Linux

Arch Linux is one of the most minimal systems. So you can customize it to your liking. You can also install other desktop environments if you feel like it.

The Search for the Perfect Media Server: A Journey of Discovery

Dinesh, an avid movie collector and music lover, had a growing problem. His laptop was bursting at the seams with countless movies, albums, and family photos. Every time he wanted to watch a movie or listen to his carefully curated playlists, he had to sit at his laptop. And if he wanted to share something with his friends, it meant copying files onto USB drives or spending hours transferring them.

One Saturday evening, after yet another struggle to connect his laptop to his smart TV via a mess of cables, Dinesh decided it was time for a change. He needed a solution that would let him access all his media from any device in his house – phone, tablet, and TV. He needed a media server.

Dinesh fired up his browser and began his search: "How to stream media to all my devices." He went through the results – Plex, Jellyfin, Emby… Each option seemed promising but felt too complex, requiring subscriptions or heavy installations.

Frustrated, Dinesh thought, "There must be something simpler. I don't need all the bells and whistles; I just want to access my files from anywhere in my house." He refined his search: "lightweight media server for Linux."

There it was – MiniDLNA. Described as a simple, lightweight DLNA server that was easy to set up and perfect for home use, MiniDLNA (also known as ReadyMedia) seemed to be exactly what Dinesh needed.

MiniDLNA (also known as ReadyMedia) is a lightweight, simple server for streaming media (like videos, music, and pictures) to devices on your network. It is compatible with various DLNA/UPnP (Digital Living Network Alliance/Universal Plug and Play) devices such as smart TVs, media players, gaming consoles, etc.

How to Use MiniDLNA

Here's a step-by-step guide to setting up and using MiniDLNA on a Linux-based system.

1. Install MiniDLNA

To get started, you need to install MiniDLNA. The installation steps can vary slightly depending on your operating system.

For Debian/Ubuntu-based systems:

sudo apt update
sudo apt install minidlna

For Red Hat/CentOS-based systems:

First, enable the EPEL repository,

sudo yum install epel-release

Then, install MiniDLNA,

sudo yum install minidlna

2. Configure MiniDLNA

Once installed, you need to configure MiniDLNA to tell it where to find your media files.

a. Open the MiniDLNA configuration file in a text editor

sudo nano /etc/minidlna.conf

b. Configure the following parameters:

  • media_dir: Set this to the directories where your media files (music, pictures, and videos) are stored. You can specify different media types for each directory.
media_dir=A,/path/to/music  # 'A' is for audio
media_dir=V,/path/to/videos # 'V' is for video
media_dir=P,/path/to/photos # 'P' is for pictures
  • db_dir=: The directory where the database and cache files are stored.
db_dir=/var/cache/minidlna
  • log_dir=: The directory where log files are stored.
log_dir=/var/log/minidlna
  • friendly_name=: The name of your media server. This will appear on your DLNA devices.
friendly_name=Laptop SJ
  • notify_interval=: The interval in seconds that MiniDLNA will notify clients of its presence. The default is 900 (15 minutes).
notify_interval=900
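
Putting those settings together, a minimal /etc/minidlna.conf might look something like this (the media paths are placeholders for your own directories):

media_dir=A,/home/user/Music
media_dir=V,/home/user/Videos
media_dir=P,/home/user/Pictures
db_dir=/var/cache/minidlna
log_dir=/var/log/minidlna
friendly_name=Laptop SJ
notify_interval=900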

c. Save and close the file (Ctrl + X, Y, Enter in Nano).

3. Start the MiniDLNA Service

After configuration, start the MiniDLNA service

sudo systemctl start minidlna

To enable it to start at boot,

sudo systemctl enable minidlna

4. Rescan Media Files

To make MiniDLNA scan your media files and add them to its database, you can force a rescan with

sudo minidlnad -R

5. Access Your Media on DLNA/UPnP Devices

Now, your MiniDLNA server should be up and running. You can access your media from any DLNA-compliant device on your network:

  • On your Smart TV, look for the "Media Server" or "DLNA" option in the input/source menu.
  • On a Windows PC, go to This PC or Network and find your DLNA server under "Media Devices."
  • On Android, use a media player app like VLC or BubbleUPnP to find your server.

6. Check Logs and Troubleshoot

If you encounter any issues, you can check the logs for more information

sudo tail -f /var/log/minidlna/minidlna.log

To set up MiniDLNA for a single user

Disable the global daemon

sudo service minidlna stop
sudo update-rc.d minidlna disable

Create the necessary local files and directories as a regular user and edit the configuration

mkdir -p ~/.minidlna/cache
cd ~/.minidlna
cp /etc/minidlna.conf .
$EDITOR minidlna.conf

Configure it as you would globally (above), but these settings need to point to local paths

db_dir=/home/$USER/.minidlna/cache
log_dir=/home/$USER/.minidlna 

To start the daemon locally

minidlnad -f /home/$USER/.minidlna/minidlna.conf -P /home/$USER/.minidlna/minidlna.pid

To stop the local daemon

xargs kill </home/$USER/.minidlna/minidlna.pid

To rebuild the database,

minidlnad -f /home/$USER/.minidlna/minidlna.conf -R

For more info: https://help.ubuntu.com/community/MiniDLNA

Additional Tips

  • Firewall Rules: Ensure that your firewall settings allow traffic on the MiniDLNA port (8200 by default) and UPnP (typically port 1900 for UDP).
  • Update Media Files: Whenever you add or remove files from your media directory, run minidlnad -R to update the database.
  • Multiple Media Directories: You can have multiple media_dir lines in your configuration if your media is spread across different folders.
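
For the firewall tip above, on a system that uses ufw the rules might look something like this (adjust the port if you changed it in minidlna.conf):

sudo ufw allow 8200/tcp    # MiniDLNA HTTP/streaming port
sudo ufw allow 1900/udp    # SSDP discovery (UPnP)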

To stream content from your MiniDLNA server with VLC Media Player, follow these steps.

On a Computer

1. Install VLC Media Player

Make sure you have VLC Media Player installed on your device. If not, you can download it from the official VLC website.

2. Open VLC Media Player

Launch VLC Media Player on your computer.

3. Open the UPnP/DLNA Network Stream

  1. Go to the "View" menu:
    • On the VLC menu bar, click on View and then Playlist or press Ctrl + L (Windows/Linux) or Cmd + Shift + P (Mac).
  2. Locate Your DLNA Server:
    • In the left sidebar, you will see an option for Local Network.
    • Click on Universal Plug'n'Play or UPnP.
    • VLC will search for available DLNA/UPnP servers on your network.
  3. Select Your MiniDLNA Server:
    • After a few moments, your MiniDLNA server should appear under the UPnP section.
    • Click on your server name (e.g., My DLNA Server).
  4. Browse and Play Media:
    • You will see the folders you configured (e.g., Music, Videos, Pictures).
    • Navigate through the folders and double-click on a media file to start streaming.

4. Alternative Method: Open Network Stream

If you know the IP address of your MiniDLNA server, you can connect directly:

  1. Open Network Stream:
    • Click on Media in the menu bar and select Open Network Stream... or press Ctrl + N (Windows/Linux) or Cmd + N (Mac).
  2. Enter the URL:
    • Enter the URL of your MiniDLNA server in the format http://[Server IP]:8200.
    • Example: http://192.168.1.100:8200.
  3. Click β€œPlay”:
    • Click on the Play button to start streaming from your MiniDLNA server.

5. Tips for Better Streaming Experience

  • Ensure the Server is Running: Make sure the MiniDLNA server is running and the media files are correctly indexed.
  • Network Stability: A stable local network connection is necessary for smooth streaming. Use a wired connection if possible or ensure a strong Wi-Fi signal.
  • Firewall Settings: Ensure that the firewall on your server allows traffic on port 8200 (or the port specified in your MiniDLNA configuration).

On Android

To set up and stream content from MiniDLNA using an Android app, you will need a DLNA/UPnP client app that can discover and stream media from DLNA servers. Several apps are available for this purpose, such as VLC for Android, BubbleUPnP, Kodi, and others. Here's how to use VLC for Android, one popular choice.

Using VLC for Android

  1. Install VLC for Android:
    • Install the VLC app from the Google Play Store.
  2. Open VLC for Android:
    • Launch the VLC app on your Android device.
  3. Access the Local Network:
    • Tap on the menu button (three horizontal lines) in the upper-left corner of the screen.
    • Select Local Network from the sidebar menu.
  4. Find Your MiniDLNA Server:
    • VLC will automatically search for DLNA/UPnP servers on your local network. After a few moments, your MiniDLNA server should appear in the list.
    • Tap on the name of your MiniDLNA server (e.g., My DLNA Server).
  5. Browse and Play Media:
    • You will see your media folders (e.g., Music, Videos, Pictures) as configured in your MiniDLNA setup.
    • Navigate to the desired folder and tap on any media file to start streaming.

Additional Tips

  • Ensure MiniDLNA is Running: Make sure your MiniDLNA server is properly configured and running on your local network.
  • Check Network Connection: Ensure your Android device is connected to the same local network (Wi-Fi) as the MiniDLNA server.
  • Firewall Settings: If you are not seeing the MiniDLNA server in your app, ensure that the server's firewall settings allow DLNA/UPnP traffic.

Some problems that you may face

  1. minidlna.service: Main process exited, code=exited, status=255/EXCEPTION - check the logs. Mostly it's due to an instance already running on port 8200; kill that and reload the DB. `lsof -i :8200` will give the PID, and `kill -9 <PID>` will kill the process.
  2. If the media files are not refreshing, try `minidlnad -f /home/$USER/.minidlna/minidlna.conf -R` or `sudo minidlnad -R`.

Demystifying IP Addresses and Netmasks: The Complete Overview

In this blog, we will learn about IP addresses and netmasks.

IP

An Internet Protocol (IP) address is a unique identifier for your device, similar to how a mobile number uniquely identifies your phone.

IPv4 addresses are typically written as four octets, each one byte in size, while IPv6 addresses are written as eight groups, each two bytes in size.

Examples:

  • IPv4:Β 192.168.43.64
  • IPv6:Β 2001:db8:3333:4444:5555:6666:7777:8888

For the purposes of this discussion, we will focus on IPv4.

Do we really need the four-octet structure with dots between them?

The answer is NO

The only requirement for an IPv4 address is that it must be 4 bytes in size. However, it does not have to be written as four octets or even with dots separating them.

Let's test this by fetching Google's IP address with the nslookup command, converting the dotted address into a single 32-bit number using the bc calculator in a Bash shell, and then pinging that number directly.

And you can see it's working.
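
A sketch of that experiment (142.250.77.46 is just an example address; the one nslookup returns for you will differ):

nslookup google.com                                  # say it returns 142.250.77.46
echo '142*256^3 + 250*256^2 + 77*256 + 46' | bc      # prints 2398768430
ping -c 2 2398768430                                 # ping accepts the raw 32-bit number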

This is because the octet structure and the dots between them are only for human readability. Computers do not interpret dots; they just need an IP address that is 4 bytes in size, and that’s it.

The range for IPv4 addresses is from 0.0.0.0 to 255.255.255.255.

Types of IP Addresses

IP addresses are classified into two main types: Public IPs and Private IPs.

Private IP addresses are used for communication between devices on a local network without going over the Internet. They are free to use and are not directly reachable from the public Internet.

You can find your private IP address by using the ifconfig command
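
For example, either of the following commands will show it (ip addr is the modern replacement for ifconfig):

ifconfig
ip addr show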


The private IP address ranges are as follows:

10.0.0.0 to 10.255.255.255
172.16.0.0 to 172.31.255.255
192.168.0.0 to 192.168.255.255

Public IP addresses are Internet-facing addresses provided by an Internet Service Provider (ISP). These addresses are used to access the internet and are not free.

By default

Private IP to Private IP communication is possible.
Public IP to Public IP communication is possible.

However:

Public IP to Private IP communication is not possible.
Private IP to Public IP communication is not possible.

Nevertheless, these types of communication can occur through Network Address Translation (NAT), which is typically used by your home router. This is why you can access the Internet even with a private IP address.

Netmasks
Netmasks are used to define the range of IP addresses within a network.

For example, take the network 10.4.3.0 with the netmask 255.255.255.0. Written in binary, the netmask is
11111111.11111111.11111111.00000000

You can see 24 ones and 8 zeros.

Here, we have converted 255 to binary using the division method:

255 Γ· 2 = 127 remainder 1

127 Γ· 2 = 63 remainder 1

63 Γ· 2 = 31 remainder 1

31 Γ· 2 = 15 remainder 1

15 Γ· 2 = 7 remainder 1

7 Γ· 2 = 3 remainder 1

3 Γ· 2 = 1 remainder 1

1 Γ· 2 = 0 remainder 1

So, the binary value of 255 is 11111111.

Using this, we can find the number of IP addresses and their range.

Since the host part has 8 zeros,

Number of IPs = 2^8 = 256. The network address is 10.4.3.0, the usable IP range is 10.4.3.1 – 10.4.3.254, and the broadcast IP is 10.4.3.255.

We can also write this network as 10.4.3.0/24, where /24 is the CIDR (Classless Inter-Domain Routing) prefix length, i.e. the number of ones in the netmask.
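
You can also let bc do the binary conversion instead of dividing by hand:

echo 'obase=2; 255' | bc     # prints 11111111
echo 'obase=2; 0' | bc       # prints 0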

That's it.

Kindly let me know in the comments if you have any queries on these topics.

Live Linux Static Partition scaling without any data loss

In this blog, we are going to see how to increase or decrease the size of a static partition in Linux without any data loss, and how to do it online without unmounting.

I already explained the basic concepts of partitioning in detail in my previous blog. You can refer to that blog by clicking here.

In this practical, Oracle VirtualBox is used to host a Red Hat Enterprise Linux 8 (RHEL 8) Virtual Machine (VM).

The first step is to attach a hard disk. I attached one virtual hard disk of 40 GiB; that disk is named "/dev/sdc". You can check the disk name and all the other disks present in your VM by running the following command.

fdisk -l

Then, we have to partition it using the "fdisk" command.

fdisk /dev/sdc

Then, enter "n" to create a new partition, enter the partition number, and specify the size in sectors or GiB. Here, we entered 20 GiB; until we create a partition, we can't utilize any of the storage.

We have now created one partition named "/dev/sdc1". The next step is to format the partition. Here, we used the ext4 filesystem (format), which creates an inode table.

The next step is to create a directory using the "mkdir" command and mount the partition on that directory, since we can't use the hardware device directly, whether it is real or virtual.

One file should be created inside that directory so we can check for data loss after the live scaling of the static partition. A sketch of these preparation commands is shown below.
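
A minimal sketch of that preparation, using the /dev/sdc1 partition and the /partition1 mount point from this post (the test file is just an example):

mkfs.ext4 /dev/sdc1                                   # format the new partition with ext4
mkdir /partition1                                     # create a mount point
mount /dev/sdc1 /partition1                           # mount the partition
echo "important data" > /partition1/testfile.txt      # a file to verify later that nothing is lost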

OK, now the size of the static partition is 20 GiB, and we are going to scale it up to 30 GiB without unmounting it. For this, we again have to run the following command.

fdisk /dev/sdc

Then delete the partition. Don't worry about the data; it won't be lost.

Then enter "n" to create the new partition and specify the desired size; here, I want to scale up to 30 GiB. Make sure the new partition starts at the same first sector as the old one (the default). A warning will appear saying "Partition 1 contains an ext4 signature" and asking whether to remove the signature or keep it.

If you don't want to lose the data, enter "N" to keep the signature. Then enter "w" to save the partition table. You can verify the partition size by running the "fdisk -l" command in the terminal. With that, you have increased the size of the static partition.

The first part is done. The next step is to grow the file system to fill the new partition. This time we will not use the "mkfs" command, since it would wipe all the data, and we don't want that. We have to resize the file system without compromising the data. For that, we have to run the following command.

resize2fs  /dev/sdc1

Finally, we resized the file system without compromising the data. We can check this by going into the mount point and verifying whether the data is still there.

Yes, the data is there. It is not lost, even though we recreated the partition and resized the file system.

Reduce the size of the Static Partition

You can also reduce the Static Partition size. For this, you have to follow the below steps.

  • Unmount
  • Cleaning bad sectors
  • Format
  • Mount

The first step is to unmount the mount point, since it is online and someone may be using it; an ext4 file system cannot be shrunk while mounted.

umount /partition1

Then we have to check the file system for errors (this is required before shrinking) by running the following command

e2fsck -f /dev/sdc1

Then we shrink the file system to the size we want. Here we want only 20 GiB, giving back the remaining 10 GiB of space. This is done by running the following command.

resize2fs /dev/sdc1 20G

Then we have to mount the partition again (mount /dev/sdc1 /partition1). If you also want the partition itself to shrink, recreate it at 20 GiB with fdisk, keeping the same start sector, before remounting.

Finally, we reduced the static partition size.

Above figure shows that Data is also not lost during scaling down.


Thank you all for your reads. Stay tuned for my next article, because it is Endless.

Setting up your own High Availability managed WordPress hosting using Amazon RDS

Hosting your own WordPress website is interesting, right? OK, come on, let's do it!

We are going to do this practical from Scratch. From the Creation of our Own VPC, Subnets, Internet Gateway, Route tables to Deployment of WordPress.

Here, we are going to use Amazon Web Services' RDS service for hosting our own WordPress site. Before that, let's take a look at a basic introduction to the RDS service.

Amazon Relational Database Service is a distributed relational database service by Amazon Web Services (AWS). It is a web service running in the cloud designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases and enabling point-in-time recovery are managed automatically.

Features of AWS RDS

  • Lower administrative burden. Easy to use
  • Performance. General Purpose (SSD) Storage
  • Scalability. Push-button compute scaling
  • Availability and durability. Automated backups
  • Security. Encryption at rest and in transit
  • Manageability. Monitoring and metrics
  • Cost-effectiveness. Pay only for what you use

OK, let's jump into the practical part!

We will do this practical from scratch. Since it will be big, we have divided it into 5 small parts, namely

  • Creating a MySQL database with RDS
  • Creating an EC2 instance
  • Configuring your RDS database
  • Configuring WordPress on EC2
  • Deployment of WordPress website

Creating a MySQL database with RDS

Before that, we have to do some preparatory work, namely the creation of our Virtual Private Cloud (VPC), subnets, and a security group. These are important because, in order to have a reliable connection between WordPress and the MySQL database, both should be located in the same VPC and should share the same security group.

Instances are launched in subnets, and RDS launches your MySQL database on an EC2 instance too, one that we cannot see, since it is fully managed by AWS.

VPC Dashboard

We are going to create our own VPC. For that, we have to specify an IP range and CIDR. We specified 192.168.0.0/16.

What is CIDR? I explained this in detail in my previous blog. You can refer to it here.

Let's come to the point. After specifying the IP range and CIDR, enter your VPC name.

Now, VPC is successfully created with our specified details.

Next step is to launch the subnet in the above VPC.

Subnet Dashboard

For creating subnets, you have to specify in which VPC the subnet should be launched. We already have our own VPC, named "myvpc123".

Then we have to specify the subnet's IP range and CIDR. Please note that the subnet range should fall within the VPC range; it should not exceed it.

To achieve high availability, we have to launch a minimum of two subnets (in different availability zones), so that Amazon RDS can place its database across both; if one subnet goes down, it won't cause any trouble.

Now, two subnets with their specified IP ranges and CIDRs have been created successfully inside our own VPC and are available.

The next step is to create a security group to protect the WordPress server and the MySQL database. Note that both should use the same security group, or else they won't connect.

For creating a security group, we have to specify in which VPC it should be launched, and adding a description is mandatory.

Then we have to specify the inbound rules; to keep this practical simple, we are allowing all traffic to reach our instance.

Now, the Security Group is successfully created with our specified details.
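
The steps above use the console; if you prefer the AWS CLI, the equivalent calls look roughly like this (the IDs, names, and availability zones are placeholders):

aws ec2 create-vpc --cidr-block 192.168.0.0/16
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 192.168.1.0/24 --availability-zone ap-south-1a
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 192.168.2.0/24 --availability-zone ap-south-1b
aws ec2 create-security-group --group-name wordpress-sg --description "WordPress and RDS" --vpc-id vpc-xxxxxxxx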

Now let's jump into part 1, which is about creating a MySQL database with RDS.

RDS dashboard

Select Create database, then select Standard create and specify the database type.

Then you have to specify the version. The version plays a major role when integrating MySQL with WordPress, so select a compatible version or it will cause serious trouble at the end. Then select the template; here we are using the Free tier, since it won't be chargeable.

Then you have to specify the credentials, such as the database instance name, master username, and master password.

The most important part is the selection of the VPC: you should select the same VPC in which you will launch the EC2 instance for WordPress, and the VPC can't be modified once the database is created. Then set Public access to No to provide more security for the database; now, people outside your VPC can't connect to it.

Then you have to specify the security group for your database. Note that the security group for your database and for WordPress should be the same, or else it will cause serious trouble.

Note that security groups are created per VPC. After selecting the security group, click Create to create the RDS database.

Creating an EC2 instance

Before creating an instance, there are two things you should configure, namely an Internet Gateway and route tables. They are used to provide outside internet connectivity to instances launched in the subnet.

Internet Gateway Dashboard

Internet Gateway is created per VPC. First, we have to create one new Internet Gateway with the specified details.

Then you have to attach Internet Gateway to the VPC

Next step is to create Routing tables. Note that Route table is created per Subnet.

We have to specify the VPC in which your subnet resides in order to attach the route table to it, specify a name, and click Create to create the route table.

Then click Edit routes to edit the route details, namely the destination and target. Enter 0.0.0.0/0 as the destination, to allow access to any IP anywhere on the Internet, and set the target to your Internet Gateway.

After entering the details, click Save routes.

We have created a route table; now we have to associate it with your subnet. For that, click Edit route table association and select the subnet to which you want to attach the route table.
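
Again, roughly the same thing can be done with the AWS CLI (all IDs are placeholders):

aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx
aws ec2 create-route-table --vpc-id vpc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx
aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-xxxxxxxx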

Now, let's jump into the task of creating an EC2 instance.

First, you have to choose the AMI image to use for creating the EC2 instance; here I selected the Amazon Linux 2 AMI.

Then you have to select the instance type; here I selected t2.micro, since it comes under the free tier.

Then you have to specify the VPC and subnet for your instance, and enable Auto-assign Public IP so that your instance gets a public IP.

Then you can add storage for your instance. This is optional.

Then you can specify tags, which are useful, especially for automation.

Then you have to select the security group for your instance. It should be the same one your database has.

Then click Review and Launch. You have to select a key pair to launch your EC2 instance; if you don't have one, you can create it at that point.

Configuring your RDS database

At this point, you have created an RDS database and an EC2 instance. Now, we will configure the RDS database to allow access to specific entities.

You have to run the below command in your EC2 instance in order to establish the connection with your database.

export MYSQL_HOST=<your-endpoint>

You can find your endpoint by clicking on the database in the RDS dashboard. Then you have to run the following command (install a MySQL client on the instance first if one is not already present).

mysql --user=<user> --password=<password> dbname

This output shows the database is successfully connected to an EC2 instance.

In the MySQL terminal, run the following commands to create a user and grant it all privileges on the database.

CREATE USER 'vishnu' IDENTIFIED BY 'vishnupassword';
GRANT ALL PRIVILEGES ON dbname.* TO vishnu;
FLUSH PRIVILEGES;
Exit

Configuring WordPress on EC2

For configuring WordPress on the EC2 instance, the first step is to set up the web server; here I am using the Apache web server. For that, run the following commands.

sudo yum install -y httpd
sudo service httpd start

The next step is to download the WordPress application from the internet using the wget command and extract it. Run the following commands.

wget https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz

Then we have to do some configuration, for this follow the below steps.

cd wordpress
cp wp-config-sample.php wp-config.php
nano wp-config.php   # or use vi

Inside wp-config.php, enter your database credentials: DB_NAME, DB_USER, DB_PASSWORD (your password), and set DB_HOST to your RDS endpoint.

Then, go to this link, copy everything, and paste it in to replace the existing lines of code.

The next step is to deploy the WordPress application. For that, run the following commands to install the PHP dependencies and deploy WordPress to the web server.

sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
sudo cp -r wordpress/* /var/www/html/
sudo service httpd restart

That's it. You now have a live, publicly accessible WordPress installation using a fully managed MySQL database on Amazon RDS.

Then, if you enter your WordPress instance's IP in your browser, you will land on the WordPress setup page.

After you fill in your credentials, you will get your own homepage.

That's it. You have launched your own application on your own instance, and your database is managed by the AWS RDS service.


Thank you all for your reads. Stay tuned for my next article.

July 8, 2024 Python Meet & Greet - Part 2

This is part of a series of Python learning blogs from the Parotta Salna community by the Kaniyam Foundation.

This blog is based on this video "Python Meet & Greet - Part 2 | Parotta Salna" - Day 1 of Python learning Introduction class

To watch the full series, check this YouTube playlist.

Objectives

  1. Introduction to Python Program
  2. About Linux user groups and their activities in Tamil Nadu
  3. Contact numbers and links
  4. How to ask questions in forums
  5. Today's tasks
  • Linux groups and meetups in Tamil Nadu

  • To get updates on the Linux meetups, you can check the calendar on the kaniyam.com website

  • To know about meetups, check meetup.com

  • These groups are volunteer groups for knowledge sharing

  • GNU/Linux

  • Windows and Mac are proprietary software

  • Linux is made by the people, for the people

  • GIMP (the GNU Image Manipulation Program) is an alternative to Photoshop

  • The only condition is that after modifying the software, it should be released under the original license

  • Muthuramalingam Tamil Python - YouTube channel

  • To eliminate thayakkam (hesitation)

    • "This is what I know, and I am sharing it with you"
  • By asking questions in forums, you are creating a knowledge database

  • Ask questions on the mailing list at chennaipy.org

  • By using Tanglish, you end up strong in neither English nor Tamil.

  • Things to remember when asking a question

    • Don't ask your questions in one line
    • If possible, add the error message or code
    • If you have a question, always ask in public
  • Programming is like cooking

  • UI/UX - JavaScript, HTML, or CSS

  • Srinivasan uses the Ubuntu KDE distribution of Linux

  • Srinivasan's phone number: 9841795468

  • Dhanasekaran and Srinivasan are organising the class

  • ReactJS is taught every Saturday

  • Today's exercise

    • Install Python
    • Log in to Colab
  • Freetamilebook.com

  • Python books

  • Telegram link: https://t.me/+2Q_uTW7j9xtkMmVl

July 8, 2024 Python - Meet & Greet - Part 1

This is part of a series of Python learning blogs from the Parotta Salna community by the Kaniyam Foundation.

This blog is based on this video "Python - Meet & Greet - Part 1 | ParottaSalna" - Day 1 of Python learning Introduction class

To watch the full series, check this YouTube playlist.

Objectives:

  1. Why Python ?
  2. Course Syllabus
  3. Python Installation - Windows, Linux
  4. Colab Notebook
  5. Where to see updates & recordings.
  6. Where to ask questions ?
  7. Our Expectations
  8. Basic Print Command
  9. About FOSS, FOSS Communities.
  • Applications of Python language

  • Syllabus

  • Resources provided by the team

    1. Blog
    2. Infographic PDF with important concepts
    3. Quiz
    4. Task like exercises
  • Learn -> Write blog -> Teach someone

  • Python is open source

  • Google Colab can be used for temporary practice

    • Each block of code is called a cell
  • To ask questions

    • Ask in the Tamil Linux forum
    • Ask directly on WhatsApp up to 2 hours after class
    • Or ask a few minutes before the class
  • To check updates for class

    • Check in Kaniyam website
    • Check in python syllabus session
    • Youtube channel live notification
    • Direct message to mentors
  • To find an answer to your question

    1. First, check with a Google search
    2. Then ask the question
  • Expectations

    • To write blog notes after each class
  • Programming

    • Print command
      • print("") is the format of the print command
      • To run the code: python print.py (where print.py is the file containing your code)
  • Python comes preinstalled on most Linux systems; if not, install it (the session mentions the brew command)

  • The mentor will use VS Code and Colab for teaching

  • Telegram link: https://t.me/+2Q_uTW7j9xtkMmVl

ntfy.sh – To save you from un-noticed events

Alex Pandian was the system administrator for a tech company, responsible for managing servers, maintaining network stability, and ensuring that everything ran smoothly.

With many scripts running daily and long-running processes that needed monitoring, Alex was constantly flooded with notifications.

Alex Pandian: "Every day, I have to go through dozens of emails and alerts just to find the ones that matter,"

Alex muttered while sipping coffee in the server room.

Alex Pandian: "There must be a better way to streamline all this information."

Despite using several monitoring tools, the notifications from these systems were scattered and overwhelming. Alex needed a more efficient method to receive alerts only when crucial events occurred, such as script failures or the completion of resource-intensive tasks.

Determined to find a better system, Alex began searching online for a tool that could help consolidate and manage notifications.

After reading through countless forums and reviews, Alex stumbled upon a discussion about ntfy.sh, a service praised for its simplicity and flexibility.

"This looks promising," Alex thought, excited by the ability to publish and subscribe to notifications using a straightforward, topic-based system. The idea of having notifications sent directly to a phone or desktop without needing complex configurations was exactly what Alex was looking for.

Alex decided to consult with Sam, a fellow system admin known for their expertise in automation and monitoring.

Alex Pandian: "Hey Sam, have you ever used ntfy.sh?"

Sam: "Absolutely. It's a lifesaver for managing notifications. How do you plan to use it?"

Alex Pandian: "I'm thinking of using it for real-time alerts on script failures and long-running commands. Can you show me how it works?"

Sam: "Of course!"

Sam said this with a smile, eager to guide Alex through setting up ntfy.sh to improve workflow efficiency.

Together, Sam and Alex began configuring ntfy.sh for Alex’s environment. They focused on setting up topics and integrating them with existing systems to ensure that important notifications were delivered promptly.

Step 1: Identifying Key Topics

Alex identified the main areas where notifications were needed:

  • script-failures: To receive alerts whenever a script failed.
  • command-completions: To notify when long-running commands finished.
  • server-health: For critical server health alerts.

Step 2: Subscribing to Topics

Sam showed Alex how to subscribe to these topics using ntfy.sh on a mobile device and desktop. This ensured that Alex would receive notifications wherever they were, without having to constantly check email or dashboards.


# Subscribe to topics
ntfy subscribe script-failures
ntfy subscribe command-completions
ntfy subscribe server-health
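
To check that a subscription is working, you can publish a quick test message to one of these topics from any shell:

curl -d "Test notification from the server" ntfy.sh/script-failures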

Step 3: Automating Notifications

Sam explained how to use bash scripts and curl to send notifications to ntfy.sh whenever specific events occurred.

"For example, if a script fails, you can automatically send an alert to the 'script-failures' topic," Sam demonstrated.


# Notify on script failure
./backup-script.sh || curl -d "Backup script failed!" ntfy.sh/script-failures

Alex was impressed by the simplicity and efficiency of this approach. "I can automate all of this?" Alex asked.

"Definitely," Sam replied. "You can integrate it with cron jobs, monitoring tools, and more. It's a great way to keep track of important events without getting bogged down by noise."

With the basics in place, Alex began applying ntfy.sh to various real-world scenarios, streamlining the notification process and improving overall efficiency.

Monitoring Script Failures

Alex set up automated alerts for critical scripts that ran daily, ensuring that any failures were immediately reported. This allowed Alex to address issues quickly, minimizing downtime and improving system reliability.


# Notify on critical script failure
./critical-task.sh || curl -d "Critical task script failed!" ntfy.sh/script-failures

Tracking Long-Running Commands

Whenever Alex initiated a long-running command, such as a server backup or data migration, notifications were sent upon completion. This enabled Alex to focus on other tasks without constantly checking on progress.


# Notify on long-running command completion
long-command && curl -d "Long command completed successfully." ntfy.sh/command-completions

Server Health Alerts

To monitor server health, Alex integrated ntfy.sh with existing monitoring tools, ensuring that any critical issues were immediately flagged.


# Send server health alert
curl -d "Server CPU usage is critically high!" ntfy.sh/server-health

As with any new tool, there were challenges to overcome. Alex encountered a few hurdles, but with Sam’s guidance, these were quickly resolved.

Challenge: Managing Multiple Notifications

Initially, Alex found it challenging to manage multiple notifications and ensure that only critical alerts were prioritized. Sam suggested using filters and priorities to focus on the most important messages.


# Subscribe with filters for high-priority alerts
ntfy subscribe script-failures --priority=high

Challenge: Scheduling Notifications

Alex wanted to schedule notifications for regular maintenance tasks and reminders. Sam introduced Alex to scheduling one-off alerts with the at command (cron is the usual choice for recurring jobs).

# Schedule a notification for regular maintenance
echo 'curl -d "Time for weekly server maintenance." ntfy.sh/server-health' | at 8:00 AM next Saturday


Sam gave some more examples to Alex.

Monitoring disk space

As a system administrator, you can use ntfy.sh to receive alerts when disk space usage reaches a critical level. This helps prevent issues related to insufficient disk space.


# Check disk space and notify if usage is over 80%
disk_usage=$(df / | grep / | awk '{ print $5 }' | sed 's/%//g')
if [ $disk_usage -gt 80 ]; then
  curl -d "Warning: Disk space usage is at ${disk_usage}%." ntfy.sh/disk-space
fi

Alerting on Website Downtime

You can use ntfy.sh to monitor the status of a website and receive notifications if it goes down.


# Check website status and notify if it's down
website="https://example.com"
status_code=$(curl -o /dev/null -s -w "%{http_code}\n" $website)

if [ $status_code -ne 200 ]; then
  curl -d "Alert: $website is down! Status code: $status_code." ntfy.sh/website-monitor
fi

Reminding for Daily Tasks

You can set up ntfy.sh to send you daily reminders for important tasks, ensuring that you stay on top of your schedule.


# Schedule daily reminders
echo "Time to review your daily tasks!" | at 9:00 AM ntfy.sh/daily-reminders
echo "Stand-up meeting at 10:00 AM." | at 9:50 AM ntfy.sh/daily-reminders

Alerting on High System Load

Monitor system load and receive notifications when it exceeds a certain threshold, allowing you to take action before it impacts performance.

# Check system load and notify if it's high
load=$(uptime | awk -F'load average: ' '{ print $2 }' | cut -d, -f1)   # 1-minute load average
threshold=2.0

if (( $(echo "$load > $threshold" | bc -l) )); then
  curl -d "Warning: System load is high: $load" ntfy.sh/system-load
fi

Notify on Backup Completion

Receive a notification when a backup process completes, allowing you to verify its success.

# Notify on backup completion
backup_command="/path/to/backup_script.sh"
$backup_command && curl -d "Backup completed successfully." ntfy.sh/backup-status || curl -d "Backup failed!" ntfy.sh/backup-status

Notifying on Container Events with Docker

Integrate ntfy.sh with Docker to send alerts for specific container events, such as when a container stops unexpectedly.


# Notify on Docker container stop event
container_name="my_app"
container_status=$(docker inspect -f '{{.State.Status}}' $container_name)

if [ "$container_status" != "running" ]; then
  curl -d "Alert: Docker container $container_name has stopped." ntfy.sh/docker-alerts
fi

Integrating with CI/CD Pipelines

Use ntfy.sh to notify you about the status of CI/CD pipeline stages, ensuring you stay informed about build successes or failures.


# Example GitLab CI/CD YAML snippet
stages:
  - build

build_job:
  stage: build
  script:
    - make build
  after_script:
    - if [ "$CI_JOB_STATUS" == "success" ]; then
        curl -d "Build succeeded for commit $CI_COMMIT_SHORT_SHA." ntfy.sh/ci-cd-status;
      else
        curl -d "Build failed for commit $CI_COMMIT_SHORT_SHA." ntfy.sh/ci-cd-status;
      fi

Notification on ssh login to server

Let's try it with Docker.


FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
# Set root password for SSH access (change 'your_password' to your desired password)
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
COPY ntfy-ssh.sh /usr/bin/ntfy-ssh.sh
RUN chmod +x /usr/bin/ntfy-ssh.sh
RUN echo "session optional pam_exec.so /usr/bin/ntfy-ssh.sh" >> /etc/pam.d/sshd
RUN apt-get -y update; apt-get -y install curl
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

script to send notification,


#!/bin/bash
if [ "${PAM_TYPE}" = "open_session" ]; then
  echo "here"
  curl \
    -H prio:high \
    -H tags:warning \
    -d "SSH login: ${PAM_USER} from ${PAM_RHOST}" \
    ntfy.sh/syed-alerts
fi
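
To try it out, you might build and run the image roughly like this (the image name, host port, and the root password baked into the Dockerfile are all just examples):

docker build -t ssh-notify .
docker run -d -p 2222:22 --name ssh-notify ssh-notify
ssh -p 2222 root@localhost      # logging in should trigger the ntfy.sh notification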

With ntfy.sh as an integral part of daily operations, Alex found a renewed sense of balance and control. The once overwhelming chaos of notifications was now a manageable stream of valuable information.

As Alex reflected on the journey, it was clear that ntfy.sh had transformed not just the way notifications were managed, but also the overall approach to system administration.

In a world full of noise, ntfy.sh had provided a clear and effective way to stay informed without distractions. For Alex, it was more than just a tool; it was a new way of managing systems efficiently.

Understanding Proprietary and Open Source Software

Introduction:

Software plays a crucial role in our daily lives, but not all software is the same. There are two main types: proprietary software and open source software. Each has its own characteristics, benefits, and limitations. This blog will explain what these types of software are, their differences, and why they matter.

Proprietary Software:
Proprietary software is software that is only available to selected users. Users get this software in a ready-to-use format called binary form. They do not have access to the source code, which is the code written by programmers that makes the software work. Only the owner, or proprietor, of the software can fix issues, provide updates, and offer support.

There are many limitations for users of proprietary software:

Users cannot share the software with others.
Users cannot modify or change the software to suit their needs.
Examples of proprietary software include Microsoft Windows and Adobe Photoshop. These programs often come with good support and regular updates, but users have less freedom to customize or share them.

Open Source Software
Open source software is different because it is available for free and users have access to the source code. This means anyone can see how the software works, make changes to it, and share it with others. Open source software is not just for use; it encourages users to study, modify, and improve it.

Benefits of open source software include:

  • Fewer errors, because many people can find and fix bugs.
  • Diverse ideas and contributions from a wide community.
  • Faster development and more stable products.
  • Openness to new changes and improvements from the community.

The General Public License (GPL) provides four key freedoms to users:

  • Use the software for any purpose.
  • Study and modify the software.
  • Distribute the original software.
  • Distribute modified versions of the software.

Open source software emphasizes software freedom. It believes that software, which is a mix of knowledge and science, should be open and accessible to all humans, not owned by one person or organization.

Opportunities for Monetization
While open source software is free, there are still ways to make money from it:

  • Providing services for the software, such as installation or customization.
  • Offering online or offline support and charging for it.
  • Charging for customizations made to the open source software.

History of GNU/Linux

The history of open source software is rich and interesting. Richard M. Stallman (RMS) started the movement for software freedom in the 1980s while working at the MIT AI Lab. He created the GNU Project, which stands for "GNU's Not Unix," to ensure the four software freedoms.
The GNU Project began with tools like compilers, editors, languages, network tools, servers, databases, and more. Andrew S. Tanenbaum wrote a book on operating systems design that inspired Linus Torvalds to create the Linux kernel, combining it with GNU tools to form what we now call GNU/Linux.

Conclusion
Understanding the differences between proprietary and open source software helps us appreciate the choices we have. Proprietary software offers controlled, supported environments, while open source software offers freedom, collaboration, and continuous improvement. Both have their place in the world of technology, and knowing their strengths and weaknesses allows us to make better decisions for our software needs.

Call to Action
Feel free to share your thoughts on proprietary vs. open source software in the comments below. If you found this blog helpful, share it with others to spread the knowledge!

Author Bio
Jayaram is a materials researcher with a passion for exploring different types of software. Jayaram is especially interested in how open source software can be used for materials discovery and materials-based data science, pushing the boundaries of innovation and collaboration in these fields.


Open Source Software - an Introduction

Proprietary Software - Def

Proprietary software is software that is available only to selected users, and only in binary form.

  • no access to the source code; any support is given only by the proprietor
    • there are many limitations for end users
    • you cannot share the software
    • you cannot change the software

Open Source Software - Def

Open source software is software that is available free of charge, but it is not limited to just using it.

  • you have access to the source code
  • you can change and customize the source code according to your needs
  • you can distribute the binaries and the source code

Benefits of Open Source Software

  • fewer errors
  • able to get different thought processes and implement them
  • faster development
  • stable products
  • open to new changes from the community

Open Source Software gives rights to the consumers, aka End Users
(the GPL license gives 4 rights)

  1. Use it wherever and for whatever purpose you want
  2. Change whatever you want
  3. Sell it or give it away
  4. Share the source code

Open Source Software - Emphasizes Software Freedom

Software is a mixture of knowledge and science; it should be open and owned by all human beings, with ownership not limited to a single person or organization.

Open Source Software - Opportunities for Monetization

  1. you can provide services for the software
  2. you can earn money from installation services
  3. you can provide online/offline support and charge for it
  4. you can charge for the customization you provide on top of open source software

History of GNU/Linux

Unix Family Tree

Rise of GNU

Richard M Stallman (RMS) MIT AI Lab (1980s)

  • Started from the printer problem: he could not modify the driver to manage the print queue
  • RMS started to fight for software freedom

GNU = GNU's Not Unix
Ensures 4 freedoms

  • **Use** for any purpose
  • **Study** and adapt (modify)
  • **Distribute** copies freely
  • Distribute **the modified source**

GNU Project Starts With

  • Compilers
  • Editors
  • Languages
  • Network Tools
  • Servers
  • Databases
  • Device Drivers
  • Desktop Utilities
  • Multimedia Apps
  • Games
  • Office Applications and more

Andrew S. Tanenbaum wrote the MINIX book, Operating Systems: Design and Implementation, which became the inspiration for Linus Torvalds to create the kernel named Linux.


Starting of a new Chapter

Just started my new session on the Linux operating system. Had my first session today with one of my colleagues about how Linux came into existence. How a problem with buying the driver for a printer led to creating a whole new world of open source. How people were restricted to using proprietary software that they were not able to change. Different groups of people came together from different parts of the world; some built the tools, some built the kernel, and together it became GNU/Linux. Well!! That was all for the basic session. See you fellas tomorrow with something new.

Scanner | Java

import java.util.Scanner;

Scanner sc = new Scanner(System.in);

A simple text scanner that can parse primitive types and strings using regular expressions.
A Scanner breaks its input into tokens using a delimiter pattern, which by default matches whitespace. The resulting tokens may then be converted into values of different types using the various next methods.

Scanner sc = new Scanner(System.in);
System.out.println("Enter the value:");
int value = sc.nextInt();
System.out.println(value);
  • int –> nextInt()
  • long –> nextLong()
  • short –> nextShort()
  • float –> nextFloat()
  • double –> nextDouble()
  • boolean –> nextBoolean()
  • String –> nextLine() & next()

Reference: https://docs.oracle.com/javase/8/docs/api/java/util/Scanner.html

Id | Sudo | Su

To know details about a user, use the id command

id username

sudo

permits an authorized user to execute a command as the superuser (at the admin level)


su

su --> switch user
su - username
id
exit
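
A small usage sketch putting the three together (the user name "sample" is just an example and must already exist):

id sample          # show the UID, GID and groups of the user "sample"
sudo apt update    # run a single command with superuser privileges
su - sample        # switch to the user "sample" with a login shell
id                 # confirm which user you are now
exit               # return to the previous user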

Zip

zip filename file
zip filename *.txt
zip -sf filename.zip
to view the files stored inside the zip archive.

unzip

unzip filename 

use the -d option to unzip into another directory, or rename the existing files when prompted.
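
A short sketch of a typical zip/unzip round trip (the archive name notes.zip and the directory name extracted are only examples):

zip notes.zip a.txt b.txt      # create notes.zip containing two files
zip notes.zip *.txt            # add every .txt file in the current directory
zip -sf notes.zip              # show the files stored inside the archive
unzip notes.zip -d extracted   # extract into a separate directory "extracted"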


ulimit

to manage the resource limits of a particular user.

ulimit -a

the soft limit cannot be more than the hard limit

cat  /etc/security/limits.conf

<type> can have two values:

- "soft" for enforcing the soft limits

- "hard" for enforcing the hard limits
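
A minimal sketch of inspecting and lowering a soft limit in the current shell (the open-files value 1024 is only an example):

ulimit -a        # show all current limits
ulimit -Hn       # hard limit on open files
ulimit -Sn       # soft limit on open files
ulimit -Sn 1024  # lower the soft limit on open files for this shell only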


type

to know how the shell interprets a command (alias, builtin or file) and where it is found

type command

which 

to locate the executable file of a command

which mv


whatis

to know the description of the command


whereis

locate the binary, source, and manual page files for a command


apropos

search the manual page names and descriptions
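
A quick sketch of all these lookup commands applied to one utility (mv is just an example):

type mv        # how the shell resolves "mv" (alias, builtin or file)
which mv       # path of the mv executable
whatis mv      # one-line description from the manual pages
whereis mv     # binary, source and man page locations
apropos move   # search man page names and descriptions for "move"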


change attribute

the immutable (+i) attribute makes the file unchangeable in its current location; you can, however, copy the file to another location and edit the copy.

sudo chattr +i filename
lsattr
sudo chattr -i file

chattr -i converts the unchangeable (immutable) file back into a changeable one

sudo chattr -R +i dir1
sudo chattr -R -i dir1

to make a directory immutable recursively (-R), and to revert it
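
A small sketch of the immutable flag in action (notes.txt is only an example file):

sudo chattr +i notes.txt   # mark the file immutable
lsattr notes.txt           # the "i" flag now appears in the attribute listing
rm notes.txt               # should fail with "Operation not permitted" while the flag is set
sudo chattr -i notes.txt   # remove the flag so the file can be changed again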


pid | pwdx

pidof ---> find the process ID of a running program
vi test --> start a new process
ps -ef | grep vi
pidof vi
pwdx 8279
to know the present working directory of the vi process using its PID (8279 is just the example PID here).
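
The two commands can also be combined in one line (assuming at least one vi process is running):

pwdx $(pidof vi)   # feed the PID(s) reported by pidof straight into pwdx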

nice value

NI –> (-20 to 19) highest priority to lowest priority range.

1 to 19 –> non-root users can also set these nice values

-20 to -1 –> only the root user can set these values

nice is used to start a program with a modified priority.
ps -l --> process list (shows the NI column)
to start a new program with a given nice value:
nice -n 11 ps -l

renice

changes the priority of an already running program.
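
A minimal sketch, reusing the example PID 8279 from above (the nice value 15 is arbitrary):

renice -n 15 -p 8279   # give the running process with PID 8279 a lower priority
ps -l                  # the NI column should now show 15 for that process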


user-management

useradd

create a new user or update default new user information.

tail /etc/passwd –> list of users

adduser –> adduser, addgroup – add a user or group to the system

sudo passwd sample
to set a password for the sample user.
sudo passwd -Sa
to check whether all users have a password or not.

after setting the password for the sample user:

sudo passwd -d sample
-d --> delete the password
NP --> no password (as shown in the passwd -S output)

user-delete

sudo userdel sample
sudo userdel -r sample --> to delete the user together with the home directory
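
A short end-to-end sketch for a throwaway user (the name "sample" is only an example):

sudo useradd -m sample   # create the user with a home directory
sudo passwd sample       # set its password interactively
sudo passwd -S sample    # check the password status of this one user
sudo userdel -r sample   # delete the user together with the home directory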

Group

a group can be created together with a user, or on its own.

to create a group:

addgroup: sudo addgroup groupname

to delete a group:

sudo groupdel groupname
sudo delgroup groupname

sudo usermod -aG sudo username
adds the user to the sudo group, i.e. gives sudo or root privileges to the new user so they are permitted to run sudo commands.
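
A short sketch tying groups and users together (the names "devs" and "sample" are only examples):

sudo addgroup devs             # create a new group called devs
sudo usermod -aG devs sample   # add the existing user sample to devs
id sample                      # devs should now appear in the group list
sudo delgroup devs             # remove the group again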

MariaDb

sudo apt install mariadb-server mariadb-client

sudo systemctl status mariadb.service

to check the MariaDB service status.

sudo systemctl enable mariadb.service
sudo systemctl start mariadb.service
sudo systemctl stop mariadb.service
sudo mysql_secure_installation

log in to the database shell using

mysql -u root -p

to view the default databases present in MariaDB, use

show databases;
to create a new database, use
create database name;
to confirm that the created database appears:
show databases;
to use an existing database:
use databasename;
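
A minimal sketch of the same steps run non-interactively from the shell (the database name "testdb" is only an example; "sudo mysql" works when root authenticates via the unix socket, otherwise add -p to be prompted for the password):

sudo mysql -u root -e "SHOW DATABASES;"          # list the existing databases
sudo mysql -u root -e "CREATE DATABASE testdb;"  # create a new database
sudo mysql -u root -e "SHOW DATABASES;"          # confirm that testdb now appears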
