
AWS-Elastic Container Service

By: Kannan
30 December 2023 at 02:28

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.

  • The difference from EC2 is that EC2 deploys isolated VM instances with auto-scaling support, while ECS deploys scalable clusters of managed Docker containers.

  • Amazon Elastic Container Service (ECS), Elastic Kubernetes Service (EKS), and AWS Fargate help deploy and manage containers.

  • AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.

Step-1. Create a cluster and task definition using AWS Fargate.

Image description

Image description

Image description

Image description

Step-2. Create the service in the cluster.

  • Create the service using the task definition family which we created for Nginx.

Image description

Image description

Image description

Image description

  • Once the service is created, we can access the public IP details from the Tasks tab.

Image description

  • Now you are able to access Nginx in the browser with the public IP.
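
The same flow can be driven from the AWS CLI. Below is a minimal sketch, assuming placeholder subnet/security-group IDs and the nginx task definition family created above; adjust the values to your own environment.

# Create the Fargate cluster
aws ecs create-cluster --cluster-name demo-cluster

# Create the service from the nginx task definition (network IDs are placeholders)
aws ecs create-service \
  --cluster demo-cluster \
  --service-name nginx-service \
  --task-definition nginx \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"

# List the running tasks to locate the task (and from it, the public IP)
aws ecs list-tasks --cluster demo-cluster --service-name nginx-service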

AWS-Lambda (Start/Stop EC2 Instance)

By: Kannan
27 December 2023 at 16:30

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.

  • Lambda functions are a good fit whenever you want a function that only needs to run small, simple pieces of code.

  • Each Lambda function runs in its own container. When a function is created, Lambda packages it into a new container and then executes that container on a multi-tenant cluster of machines managed by AWS.

Step-1. Create the EC2 instance.

Image description

Step-2. Create IAM Roles and policies.

  • Create a policy > select the EC2 service > Access level: Write (StopInstances).

Image description

  • Add the specific ARN (details of the EC2 instance which we need to start/stop).

Image description

Image description

  • We have created separate policies for starting and stopping the EC2 instance.

Image description

  • Create a role > select the trusted entity (AWS service) > select the use case as "Lambda".

Image description

Image description

Image description

  • We have created separate roles for starting and stopping the EC2 instance.

Image description

Step-3. Create Lambda function.

Image description

Image description

Image description

  • We can add the trigger rule using "EventBridge".

Image description

  • Similarly, we create another Lambda function to start the EC2 instance and schedule the cron job by adding an "EventBridge" trigger.

  • The Lambda functions will now start/stop the EC2 instance on schedule.
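
For reference, here is a rough sketch of what such a function and its CLI setup can look like; the console flow above achieves the same thing, and the region, instance ID, and role ARN below are placeholders.

# lambda_function.py — stops a specific EC2 instance (swap in start_instances for the "start" function)
cat > lambda_function.py <<'EOF'
import boto3

REGION = 'ap-south-1'                  # placeholder region
INSTANCES = ['i-0123456789abcdef0']    # placeholder instance ID

def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name=REGION)
    ec2.stop_instances(InstanceIds=INSTANCES)
    return {'status': 'stop requested'}
EOF

# Package and create the function with the stop role created earlier (role ARN is a placeholder)
zip function.zip lambda_function.py
aws lambda create-function \
  --function-name stop-ec2-instance \
  --runtime python3.12 \
  --handler lambda_function.lambda_handler \
  --role arn:aws:iam::123456789012:role/ec2-stop-role \
  --zip-file fileb://function.zip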

AWS-VPC (Peering Connections)

By: Kannan
26 December 2023 at 17:46

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network.

  • There are limits on the number of active and pending peering connections you can have per VPC.

  • VPC peering is a technique for securely connecting two or more virtual private clouds (VPCs).

Image description

Step-1. As per the above VPC peering connection architecture, create a VPC, a subnet, and a route table.

Image description

Image description

Image description

  • Associate the subnet with the route table.

Image description

Step-2. Create the Internet Gateway and attach it to the VPC.

Image description

  • Edit and add the Internet Gateway in the route table.

Image description

Step-3. Create the EC2 instance with VPC-A network settings and public IP enabled on the subnet and instance.

Image description

Step-4. Following the same steps, we create another VPC, subnet, and route table.

Image description

Image description

Image description

  • Associate the subnet with the route table and create the EC2 instance.

Image description

Image description

Step-5. We need to copy the .pem key from the local machine to the primary VPC-A instance to get SSH access to the VPC-B instance.

  • At this point, SSH to the secondary VPC's EC2 instance does not connect, since there is no route between the two VPCs yet.

Step-6. Create a peering connection.

Image description

  • Accept the Peer Request.

Image description

Step-7. On the primary route table, add the secondary VPC's IPv4 CIDR range, select the peering connection, and save.

Image description

Step-8. On the secondary route table, add the primary VPC's IPv4 CIDR range, select the peering connection, and save.

Image description

Step-9. Now we are able to access the secondary VPC's EC2 instance from the primary VPC's EC2 instance via the peering connection.
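
The peering steps can also be expressed with the CLI; a sketch, where the VPC/route-table IDs and CIDR ranges are placeholders:

# Request peering from VPC-A to VPC-B, then accept it (IDs are placeholders)
aws ec2 create-vpc-peering-connection --vpc-id vpc-aaaa1111 --peer-vpc-id vpc-bbbb2222
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0123456789abcdef0

# Route each VPC's traffic for the other VPC's CIDR through the peering connection
aws ec2 create-route --route-table-id rtb-primary0001 --destination-cidr-block 192.168.2.0/24 --vpc-peering-connection-id pcx-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-secondary01 --destination-cidr-block 192.168.1.0/24 --vpc-peering-connection-id pcx-0123456789abcdef0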

AWS-Virtual Private Cloud VPC (Subnet, Route table, Internet Gateway, NAT Gateway)

By: Kannan
26 December 2023 at 15:22

A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can specify an IP address range for the VPC, add subnets, add gateways, and associate security groups.

  • Users can avoid underutilizing their resources during periods of low demand or overloading their infrastructure during peak periods. Overall, the advantages of using a VPC for your infrastructure include improved security, greater flexibility, and scalability.

  • We are going to create a VPC in a particular availability zone and separate it into public and private subnets/route tables, as shown in the architecture diagram.

Image description

Step-1. Create a VPC with the tag VPC-A.

  • Set the IPv4 CIDR range and select "No IPv6 CIDR block".

Image description

Step-2. Create a Subnet with the VPC ID we created.

  • Verify the availability zone and the VPC's IPv4 CIDR, and provide the subnet range in the IPv4 subnet CIDR field to create the subnet.

Image description

Step-3. Create a route table.

  • select the VPC and create the route table
    Image description

  • Once the route table is created, associate the subnet with it and enable "Auto-assign public IP" on the subnet.

Image description

Image description

Step-4. Create an Internet gateway and attach it to the VPC which we created.

Image description

Image description

  • Add the Internet gateway on the route table.

Image description

Image description

Step-5. Create an EC2 instance

  • Under Network settings, select the VPC and subnet, and enable public IP.

Image description

  • We are able to access the EC2 instance using the public IP via SSH.

Step-6. Now we need to create the private subnet and route table, and associate the private subnet with the route table.

Image description

Image description

Image description

Step-7. Create an EC2 instance

  • Under Network settings, select the VPC and the private subnet.

Image description

  • Log in to the public VPC instance and copy the .pem key from the local machine to get SSH access to the private instance.

  • We are able to log in to the public instance and connect to the private instance via the local route.

  • If we need internet access on the private instance (for example, to install any application), we need to create a NAT gateway.

Step-8. Create a NAT gateway.

  • Select the public subnet and allocate an "Elastic IP".

Image description

Step-9. Add the NAT gateway to the private route table to get internet access on the private instance.

Image description

Image description
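
Steps 8-9 can also be done from the CLI; a sketch with placeholder subnet, allocation, and route-table IDs:

# Allocate an Elastic IP and create the NAT gateway in the public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-public0001 --allocation-id eipalloc-0123456789abcdef0

# Route the private subnet's internet-bound traffic through the NAT gateway
aws ec2 create-route --route-table-id rtb-private0001 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0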

  • We can successfully log in to the public instance via SSH, and from the public EC2 we can log in to the private instance and access the internet.
kannan@kannan-PC:~$ ssh -i apache.pem ubuntu@13.201.97.155
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 6.2.0-1017-aws x86_64)

ubuntu@ip-192-168-1-99:~$ ssh -i apache.pem ubuntu@192.168.2.221
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 6.2.0-1017-aws x86_64)


ubuntu@ip-192-168-2-221:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=50 time=1.90 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=50 time=1.55 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=50 time=1.56 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.546/1.671/1.904/0.164 ms
ubuntu@ip-192-168-2-221:~$ ping www.google.com
PING www.google.com (142.251.42.4) 56(84) bytes of data.
64 bytes from bom12s19-in-f4.1e100.net (142.251.42.4): icmp_seq=1 ttl=109 time=1.79 ms
64 bytes from bom12s19-in-f4.1e100.net (142.251.42.4): icmp_seq=2 ttl=109 time=1.58 ms

AWS-Key Management Service(KMS)

By: Kannan
14 December 2023 at 18:53

AWS Key Management Service (AWS KMS) lets you create, manage, and control cryptographic keys across your applications and AWS services.

  • The service is integrated with other AWS services making it easier to encrypt data you store in these services and control access to the keys that decrypt it.

  • Create a KMS key: KMS > Customer managed keys > Create key.

Image description

Image description

Image description

Image description

  • Install the aws-encryption-cli to encrypt and decrypt the file via CLI.
sudo apt install python3-pip
sudo pip install aws-encryption-sdk-cli
aws-encryption-cli --version
  • AWS CLI commands to encrypt the file

Image description

kannan@kannan-PC:~$ aws kms encrypt \
    --key-id alias/kannan1 \
    --plaintext fileb://kms.txt \
    --output text \
    --query CiphertextBlob | base64 \
    --decode > kms_encrypt.txt
kannan@kannan-PC:~$ cat kms_encrypt.txt 
x����X�[���4u|��e�J�Q0X��U�
0f0d0_  `�He.0             p����gWI�u0s *�H��
              YU"�    I����2$y��|e!��l�\nų���5�%�����k�~d��~e�g=�+jI�N@g6ETkannan@kannan-PC:~$ 


  • AWS CLI commands to decrypt the file

Image description

kannan@kannan-PC:~$ aws kms decrypt \
    --ciphertext-blob fileb://kms_encrypt.txt \
    --key-id alias/kannan1  \
    --output text \                            
    --query Plaintext | base64 \
    --decode > kms_decrypt.txt
kannan@kannan-PC:~$ cat kms_decrypt.txt 
Test line for kms key 

  • create directories to store the encrypted and decrypted files
mkdir encrypt
mkdir decrypt

  • create a variable to store the ARN value which is generated for the KMS key
kannankey=arn:aws:kms:ap-south-1:155364343822:key/ef88420b-bbc5-4807-b1f3-c82eb5191c7f
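
With the variable set, encrypting and decrypting via aws-encryption-cli looks roughly like the following sketch (flag names assume aws-encryption-cli v2+):

# Encrypt kms.txt into ~/encrypt using the KMS key ARN stored in $kannankey
aws-encryption-cli --encrypt --input kms.txt \
  --wrapping-keys key=$kannankey \
  --metadata-output ~/metadata.log \
  --output ~/encrypt/

# Decrypt the result into ~/decrypt
aws-encryption-cli --decrypt --input ~/encrypt/kms.txt.encrypted \
  --wrapping-keys key=$kannankey \
  --metadata-output ~/metadata.log \
  --output ~/decrypt/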

kannan@kannan-PC:~$ cd encrypt/
kannan@kannan-PC:~/encrypt$ ls
example.txt.encrypted  kms.txt.encrypted
kannan@kannan-PC:~/encrypt$ cat kms.txt.encrypted 
xiCeJC�T��mb���w�����/'a8��_aws-crypto-public-keyDA9IoQRQ6f8U3WV8eoVxkQyhEZ1O/QXOXdr9L/Zx6bHP53ZEIfhYq26YJIshCIf8f8Q==aws-kmsLarn:aws:kms:ap-south-1:1550o0m0h��`�He.0���zp~0|-b*�H��807-b1f3-c82eb5191c7f�x4�u���l�\��?����<�Dya
              .�K�B�w
3����>����ǔXnL��U��cj9�1���g�%uray��߳�ɗ���x��0KYf�aE����6�j�@�Ϯ6�_k�!�Q�7x<�ǯ4u��V�6��G�������Vn�v<�%j��龎�����J��vz�u%aÌ�sg0e0b(��)!��
d9�G�Ɩ�.0$����%��
                 V�Ϗc;_���]��fl1�{
                                  o�檈R&\��\&��m6)L\,锌z!��S�<Ɪ,��kannan@kannan-PC:~/encrypt$ 
kannan@kannan-PC:~/encrypt$ cd ..
kannan@kannan-PC:~$ cd decrypt/
kannan@kannan-PC:~/decrypt$ ls
example.txt.encrypted.decrypted  kms.txt.encrypted.decrypted
kannan@kannan-PC:~/decrypt$ cat kms.txt.encrypted.decrypted 
Test line for kms key 

We can also use the KMS key to encrypt EBS volumes and the S3 bucket.

  • EC2 > EBS > Volumes > Create volume > enable "Encrypt this volume".

Image description

Image description

  • create an S3 bucket using CLI
kannan@kannan-PC:~$ aws s3 mb s3://kannandemo-bucket
make_bucket: kannandemo-bucket

  • select the bucket > properties > edit default encryption

  • select "Server-side encryption with AWS Key Management Service keys (SSE-KMS)"

  • choose "Choose from your AWS KMS keys"

Image description

Image description

  • It will automatically encrypt and decrypt the objects inside the S3 bucket.

To delete a KMS key, we need to schedule the key deletion; the waiting period is a minimum of 7 days.

Image description

Image description

AWS-Relational Database Service (RDS)

By: Kannan
14 December 2023 at 17:34
  • Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. Choose from eight popular engines: Amazon Aurora PostgreSQL-Compatible Edition, Amazon Aurora MySQL-Compatible Edition, RDS for PostgreSQL, RDS for MySQL, RDS for MariaDB, RDS for SQL Server, RDS for Oracle, and RDS for Db2. Deploy on premises with Amazon RDS on AWS Outposts or with elevated access to the underlying operating system and database environment using Amazon RDS Custom.

  • Now we are going to create a MySQL DB using RDS. We need to confirm that the required ports are allowed in the security groups of your AWS account.

Image description

Image description

Image description

Image description
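
The console flow shown above can also be approximated from the CLI; a sketch with placeholder identifier and password:

# Create a small MySQL instance (values are placeholders; choose your own password)
aws rds create-db-instance \
  --db-instance-identifier demomysqldb \
  --db-instance-class db.t3.micro \
  --engine mysql \
  --master-username admin \
  --master-user-password 'YourStrongPassword1' \
  --allocated-storage 20 \
  --publicly-accessible

# Wait until the instance is available, then fetch its endpoint
aws rds wait db-instance-available --db-instance-identifier demomysqldb
aws rds describe-db-instances --db-instance-identifier demomysqldb \
  --query 'DBInstances[0].Endpoint.Address' --output text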

  • install mysql-client on the local machine
sudo apt install mysql-client -y

Once the DB is in the Available state, select Modify > Connectivity > Public accessibility.

Image description

  • we can access the DB via the terminal with the endpoint, port, username, and password
mysql -h demomysqldb.cg35jaodi4xh.ap-south-1.rds.amazonaws.com -P 3306 -u admin -p
kannan@kannan-PC:~$ mysql -h demomysqldb.cg35jaodi4xh.ap-south-1.rds.amazonaws.com -P 3306 -u admin -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 27
Server version: 8.0.33 Source distribution

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.02 sec)


  • we can create a PostgreSQL DB with the "Easy create" method
  • Let's see another method to create a DB with "Standard create"

Image description

Image description

Image description

Image description

Image description

Image description

Image description

Image description

  • install postgresql-client on the local machine
sudo apt install postgresql-client -y
  • we can access the DB via the terminal with the endpoint, port, username, and password
psql --host=database-1.cg35jaodi4xh.ap-south-1.rds.amazonaws.com --port=5432 --username=postgres --dbname=demodb --password

Image description

AWS S3-Replication

By: Kannan
3 December 2023 at 14:49
  • Amazon Simple Storage Service (S3) Replication is an elastic, fully managed, low cost feature that replicates objects between buckets.

  • To automatically replicate new objects as they are written to the bucket, use live replication, such as Cross-Region Replication (CRR). To replicate existing objects to a different bucket on demand, use S3 Batch Replication.

Step1. Now we create the source S3 bucket; make sure to enable versioning and upload one file.

Image description

Image description

Step2. Create another destination S3 bucket with versioning enabled.

Image description

Step3. Now, on the source bucket, go to Management > Replication rules. We need to create the replication rule on the source bucket.

Image description

  • Choose the destination bucket in the replication rule. Alternatively, you can choose a different AWS account/S3 bucket to replicate to.

Image description

  • On the "IAM role" create new role for these job.

Image description

  • Once the replication rule has been created, confirm whether or not you need to replicate existing objects.

Image description

Step4. At "Create Batch operation job" we are disabling the "Generate completion report" and create new IAM role.

Image description

Image description

  • The file which we uploaded on the source bucket is replicated on the Destination bucket using "Replication Rules".

Image description

Step5. If we upload a new file/object to the source bucket, it will automatically replicate to the destination bucket.

Image description

Image description

Image description

  • Similarly, we can use Cross-Region Replication (CRR).
  • To enable CRR, you add a replication configuration to your source bucket. The minimum configuration must provide the following:

The destination bucket or buckets where you want Amazon S3 to replicate objects

An AWS Identity and Access Management (IAM) role that Amazon S3 can assume to replicate objects on your behalf
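
For reference, the same rule can be applied from the CLI; a sketch assuming placeholder bucket names and replication role ARN:

# replication.json — a minimal replication rule (names/ARNs are placeholders)
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": { "Bucket": "arn:aws:s3:::destination-bucket" }
    }
  ]
}
EOF

aws s3api put-bucket-replication --bucket source-bucket --replication-configuration file://replication.json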

AWS-S3 Bucket creation, access, and lifecycle policy

By: Kannan
30 November 2023 at 17:58

Amazon S3 stores and retrieves any amount of data at any time, from anywhere. An object consists of a file and, optionally, any metadata that describes that file.

  • Amazon S3 is object storage built to store and retrieve any amount of data from anywhere. S3 is a simple storage service that offers industry leading durability, availability, performance, security, and virtually unlimited scalability at very low costs.

  • You can create up to 100 buckets in each of your AWS cloud accounts. If needed, you can request up to 1,000 more buckets by submitting a service limit increase.

  • Let's see how to create an S3 bucket and upload a file.

Image description

Image description

Image description

Image description

Image description

Image description

Image description

Image description

Image description
On the above, we have seen how to view the uploaded files via a presigned URL.
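
A presigned URL can also be generated from the CLI; a one-line sketch with placeholder bucket and key names:

# URL valid for one hour (3600 seconds)
aws s3 presign s3://my-bucket/file1.txt --expires-in 3600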

  • Another way to view the files:
    select the file > Permissions > uncheck "Block all public access".

  • Bucket policy > edit bucket policy >Policy generator.

Image description

  • Open the Policy generator in a new tab; select the S3 bucket policy type, choose the "GetObject" action, and copy the Amazon Resource Name (ARN) of the uploaded file.

Image description

Image description

  • Once the policy is generated, copy it and paste it under Bucket policy > Policy > Save.

Image description

  • Via file "object URL" You can access the file on public.

Image description

  • AWS S3 object storage class defaults to "Standard"; you can modify it via Actions > Edit storage class.

Image description

  • If you modify the local file and upload it to S3 and want to keep it on a version basis, you need to enable "Bucket versioning": Properties > Bucket versioning > Edit.

Image description

  • Once you have enabled versioning, you can only suspend it (not disable it).
    Image description

  • The file1.txt was edited locally and uploaded to the S3 bucket to see the versioning.

Image description

Lifecycle rule

  • A lifecycle policy consists of one or more rules that determine which objects in a bucket should be transitioned or expired.

  • Lifecycle policies help manage and automate the life of your objects within S3, preventing you from leaving data unnecessarily available. They make it possible to select cheaper storage options if your data needs to be retained, while at the same time, adopting additional security control from Glacier.

  1. Click on your bucket.
  2. Click on Management tab.
  3. Click on Create lifecycle rule.
  4. First, give a name for the rule.
  5. Then choose a rule scope. ...
  6. If you choose to apply the rule to all objects then choose “Apply to all objects in the bucket”. ...
  7. You can specify the transition for each option (see the CLI sketch below).
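
The same rule can also be applied from the CLI as a lifecycle configuration; a sketch with placeholder bucket name and transition days:

# lifecycle.json — transition objects to cheaper storage, then expire them (days are placeholders)
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "demo-rule",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json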

Image description

Image description

Image description

Image description

Image description

AWS Auto Scaling with Load Balancer

By: Kannan
30 November 2023 at 15:38
  • Create instances with Ubuntu OS and instance type t2.nano; select the existing key pair and security group, set the number of instances to 2, and launch.

  • Run apt update, install the Apache web server, and modify index.html:
    sudo apt update
    sudo apt install apache2 -y
    sudo rm /var/www/html/index.html
    sudo vim /var/www/html/index.html
    <h1>Auto Scaling Group with Load Balancer</h1>

  • Go to Action >Image and Template > Create Image

Image description

  • Create the image with the "No reboot" option enabled.

Image description

Now create the target group and load balancer.

  • The target group was created as "demo-tg".

Image description

  • The load balancer was also created as "demo-lb".

Image description

  • The launch template was created with the AMI image.

Image description

  • Create the Auto Scaling group > select the template > select all network availability zones > select load balancing.

  • On the load balancing setup, select "Attach to an existing load balancer".

  • Group size > desired capacity 2 > target tracking scaling policy; set the policy to execute when CPU utilisation exceeds 50%.

  • Select the instance maintenance policy "Launch before terminating".

  • Add notification > create a topic, provide the topic name and the email ID to send notifications to > unselect "Fail to launch" & "Terminate" (see the CLI sketch below).
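
A CLI sketch of the same group, assuming placeholder subnets and the demo-tg target group ARN from above:

# Create the ASG from the launch template, attached to the target group (IDs/ARNs are placeholders)
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name demo-asg \
  --launch-template LaunchTemplateName=demo-launch-template \
  --min-size 2 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
  --target-group-arns arn:aws:elasticloadbalancing:ap-south-1:123456789012:targetgroup/demo-tg/0123456789abcdef

# Target tracking: keep average CPU around 50%
cat > cpu50.json <<'EOF'
{
  "TargetValue": 50.0,
  "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" }
}
EOF
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name demo-asg \
  --policy-name cpu-50-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration file://cpu50.json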

Image description

Image description

Image description

  • The Auto Scaling group has been created and launches the instances from the AMI launch template. You can monitor the process on the Activity tab.

Image description

  • Verify the instance status.

Image description

Image description

  • On CloudWatch > Alarms, verify the scaling target alarm in detail.

Image description

  • Log in to the 2 instances created by the Auto Scaling group; run top to watch the CPU utilisation, and run yes > /dev/null & to push the CPU utilisation up.
ubuntu@ip-172-31-1-56:~$ yes > /dev/null &
[1] 918
ubuntu@ip-172-31-1-56:~$ top

top - 15:16:41 up 18 min,  1 user,  load average: 0.08, 0.02, 0.01
Tasks: 101 total,   2 running,  99 sleeping,   0 stopped,   0 zombie
%Cpu(s): 75.0 us, 25.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :    446.5 total,     24.5 free,    148.0 used,    274.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.    269.6 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                            
    918 ubuntu    20   0    5764   1920   1920 R  93.8   0.4   0:05.42 yes                           
  • This pushes the CPU utilisation past the threshold, triggering the ASG with the load balancer to create another instance.

Image description

  • CloudWatch alarm notification.

Image description

  • To kill the command we ran to push up the CPU utilisation, run kill with its PID.
ubuntu@ip-172-31-1-56:~$ kill 918
ubuntu@ip-172-31-1-56:~$ top

top - 15:33:28 up 35 min,  2 users,  load average: 0.85, 0.94, 0.68
Tasks:  98 total,   1 running,  97 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :    446.5 total,     17.4 free,    152.4 used,    276.6 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.    265.1 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                            
      1 root      20   0  166200  11352   8280 S   0.0   2.5   0:04.58 systemd         

AWS Auto scaling without Load Balancer

By: Kannan
27 November 2023 at 18:15
  • Create instances with Ubuntu OS and instance type t2.nano; select the existing key pair and security group, set the number of instances to 2, and launch.

  • Run apt update, install the Apache web server, and modify index.html:
    sudo apt update
    sudo apt install apache2 -y
    sudo rm /var/www/html/index.html
    sudo vim /var/www/html/index.html
    <h1>Testing for Auto Scaling Group</h1>

  • Go to Action >Image and Template > Create Image

Image description

  • Set the "No reboot" option and create the image.

Image description

Image description

  • Once the image is created, stop and terminate the instance which we created.

Image description

  • Instances > Launch templates > create the launch template > select the AMI image.

Image description

Image description

Image description

  • Instance type t2.nano > select key pair > existing security group > advanced network configuration > enable "Auto-assign public IP".

Image description

Image description

Image description

  • Create the Auto Scaling group > select the template > select all network availability zones > select no load balancing.

Image description

Image description

Image description

  • Group size > desired capacity 1 > no scaling policies.

Image description

  • Select the instance maintenance policy "Launch before terminating".

Image description

  • Add notification > create a topic, provide the topic name and the email ID to send notifications to > unselect "Fail to launch" & "Terminate".

Image description

  • Add Tag

Image description

  • Once done, review all the settings and create the Auto Scaling group.

Image description

Image description

  • Verify the email and confirm the notification subscription

Image description

Image description

  • The Auto Scaling group has been created and launches the instance from the AMI launch template. You can monitor the process on the Activity tab.

Image description

  • Verify the instance status.

Image description

Image description

  • If you stop the instance ASG-1, the Auto Scaling group will create a new instance; monitor the process on the ASG Activity tab.

Image description

Image description

Image description

  • This is how an Auto Scaling group works without a load balancer.

AWS Elastic Block Store (EBS)

By: Kannan
27 November 2023 at 15:48

Here we are going to see how to add volumes to an instance using Elastic Block Store (EBS).

  • Create an EC2 instance with Ubuntu OS and instance type t2.nano; select the existing key pair and security group and launch the instance.

Image description

  • Go to Elastic Block Store and select Volumes.

Image description

  • Create a volume and set 10 GiB as additional storage.

Image description

  • Once the volume is created, attach it to the instance which we created.

Image description

Image description

  • Log in to the instance via SSH, view the attached volume using lsblk, and format it with the ext4 file system:
kannan@kannan-PC:~$ ssh -i apache.pem ubuntu@3.110.83.63

ubuntu@ip-172-31-47-222:~$ df -Th
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/root      ext4   7.6G  1.6G  6.0G  21% /
tmpfs          tmpfs  224M     0  224M   0% /dev/shm
tmpfs          tmpfs   90M  832K   89M   1% /run
tmpfs          tmpfs  5.0M     0  5.0M   0% /run/lock
/dev/xvda15    vfat   105M  6.1M   99M   6% /boot/efi
tmpfs          tmpfs   45M  4.0K   45M   1% /run/user/1000
ubuntu@ip-172-31-47-222:~$ lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0      7:0    0  24.6M  1 loop /snap/amazon-ssm-agent/7528
loop1      7:1    0  55.7M  1 loop /snap/core18/2790
loop2      7:2    0  63.5M  1 loop /snap/core20/2015
loop3      7:3    0 111.9M  1 loop /snap/lxd/24322
loop4      7:4    0  40.8M  1 loop /snap/snapd/20092
xvda     202:0    0     8G  0 disk 
├─xvda1  202:1    0   7.9G  0 part /
├─xvda14 202:14   0     4M  0 part 
└─xvda15 202:15   0   106M  0 part /boot/efi
xvdf     202:80   0    10G  0 disk 

ubuntu@ip-172-31-47-222:~$ sudo file -s /dev/xvdf
/dev/xvdf: data
ubuntu@ip-172-31-47-222:~$ sudo mkfs -t ext4 /dev/xvdf
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 2621440 4k blocks and 655360 inodes
Filesystem UUID: adc5036f-9501-4b72-951e-1cf30aa4bf72
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 

  • To mount the formatted volume, we need to create a directory; we can mount the volume by device name or UUID:
ubuntu@ip-172-31-47-222:~$ sudo mkdir /data
ubuntu@ip-172-31-47-222:~$ sudo file -s /dev/xvdf
/dev/xvdf: Linux rev 1.0 ext4 filesystem data, UUID=adc5036f-9501-4b72-951e-1cf30aa4bf72 (extents) (64bit) (large files) (huge files)

ubuntu@ip-172-31-47-222:~$ sudo mount UUID=adc5036f-9501-4b72-951e-1cf30aa4bf72 /data
ubuntu@ip-172-31-47-222:~$ df -Th
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/root      ext4   7.6G  1.6G  6.0G  21% /
tmpfs          tmpfs  224M     0  224M   0% /dev/shm
tmpfs          tmpfs   90M  844K   89M   1% /run
tmpfs          tmpfs  5.0M     0  5.0M   0% /run/lock
/dev/xvda15    vfat   105M  6.1M   99M   6% /boot/efi
tmpfs          tmpfs   45M  4.0K   45M   1% /run/user/1000
/dev/xvdf      ext4   9.8G   24K  9.3G   1% /data

  • If we reboot the instance, the attached volume will not be mounted automatically. To keep it mounted after a reboot or restart, we add an entry to /etc/fstab (taking a backup of the file first):
ubuntu@ip-172-31-47-222:~$ sudo cp /etc/fstab /etc/fstab.backup
ubuntu@ip-172-31-47-222:~$ sudo vi /etc/fstab

  • Make an entry for the volume in /etc/fstab ("sudo vim /etc/fstab").

Image description
In the entry above, we can use either the device path or the UUID of the volume.
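
For example, using the UUID from the mkfs output above, the entry can look like this (the nofail option is an assumption; it keeps the instance bootable even if the volume is missing):

UUID=adc5036f-9501-4b72-951e-1cf30aa4bf72  /data  ext4  defaults,nofail  0  2
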
Mount the volume:

ubuntu@ip-172-31-47-222:~$ sudo mount -a
ubuntu@ip-172-31-47-222:~$ df -Th
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/root      ext4   7.6G  1.6G  6.0G  21% /
tmpfs          tmpfs  224M     0  224M   0% /dev/shm
tmpfs          tmpfs   90M  844K   89M   1% /run
tmpfs          tmpfs  5.0M     0  5.0M   0% /run/lock
/dev/xvda15    vfat   105M  6.1M   99M   6% /boot/efi
tmpfs          tmpfs   45M  4.0K   45M   1% /run/user/1000
/dev/xvdf      ext4   9.8G   24K  9.3G   1% /data

Now the volume is mounted permanently.

AWS Elastic Load Balancer

By: Kannan
27 November 2023 at 14:39

We are going to see how to create an Elastic Load Balancer (ELB).

Step 1. We need to create and launch instances with Ubuntu OS and instance type t2.nano; select the existing key pair and security group, set the number of instances to 2, and launch.

Image description

Step 2. Edit the names to app_server-1 and app_server-2 for identification.
Log in via SSH using the public IP of each instance, run "sudo apt update" and "sudo apt install apache2 -y", remove the index.html file, and create and edit a new index.html file.
Enable & verify the status of the service using "sudo systemctl enable apache2" and "sudo systemctl status apache2".
Verify the Apache application by running "curl @(public IP of EC2 instance)".

kannan@kannan-PC:~$ ssh -i apache.pem ubuntu@13.234.122.186

ubuntu@ip-172-31-35-171:~$ sudo apt update

ubuntu@ip-172-31-35-171:~$ sudo apt install apache2 -y

ubuntu@ip-172-31-35-171:~$ sudo rm /var/www/html/index.html 
ubuntu@ip-172-31-35-171:~$ sudo vi /var/www/html/index.html 

ubuntu@ip-172-31-35-171:~$ sudo systemctl enable apache2
Synchronizing state of apache2.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable apache2
ubuntu@ip-172-31-35-171:~$ sudo systemctl status apache2
● apache2.service - The Apache HTTP Server
     Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-11-27 13:37:55 UTC; 1min 37s ago
       Docs: https://httpd.apache.org/docs/2.4/
   Main PID: 2249 (apache2)
      Tasks: 55 (limit: 517)
     Memory: 6.4M
        CPU: 39ms
     CGroup: /system.slice/apache2.service
             ├─2249 /usr/sbin/apache2 -k start
             ├─2251 /usr/sbin/apache2 -k start
             └─2252 /usr/sbin/apache2 -k start

Nov 27 13:37:55 ip-172-31-35-171 systemd[1]: Starting The Apache HTTP Server...
Nov 27 13:37:55 ip-172-31-35-171 systemd[1]: Started The Apache HTTP Server.

ubuntu@ip-172-31-35-171:~$ curl @13.234.122.186
<h1>Application running on server-1</h2>


Step 3. Follow the same procedure for the second instance and verify.

ubuntu@ip-172-31-34-163:~$ sudo rm /var/www/html/index.html 
ubuntu@ip-172-31-34-163:~$ sudo vi /var/www/html/index.html 
ubuntu@ip-172-31-34-163:~$ curl @13.109.139.153
<h1>Application running on server-2</h1>

Now the Apache application is running on both instances.

Step 4. Create a Target group

Image description

Step 5. Create a Load Balancer

Image description

Step 6. Select and create the Application Load Balancer type.

Image description

Step 7. Set the basic configuration.

Image description

Step 8. Set Network mapping

Image description

Step 9. Set security group and select the target groups which we have created.

Image description

Step 10. On the summary tab, we need to verify the settings and create the load balancer. Provisioning takes approximately 5 minutes.

Image description

Image description

Step 11. Once the ELB is active, open and copy the DNS name and verify it in the browser; on every refresh it will switch instances via the ELB.

Image description

Image description

Image description
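
You can also verify the alternation from the terminal; a sketch assuming a placeholder ELB DNS name:

# Each request should alternate between server-1 and server-2
for i in 1 2 3 4; do curl http://demo-lb-0123456789.ap-south-1.elb.amazonaws.com; done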

AWS - EC2 instance creation and install apache

By: Kannan
20 November 2023 at 15:23

EC2 - Elastic Compute Cloud
An EC2 instance is a virtual server in Amazon Web Services terminology.
We have already created an AWS account, and now we are going to see how to create an EC2 instance under the free tier.

  • Search "EC2 instance" on search icon and open the EC2 Dashboard

Image description

  • Launch instance and select the Ubuntu OS

Image description

  • Select the Architecture and Instance type as t2.micro

Image description

  • Create a key pair .pem file and store it in the home folder to access the instance.

Image description

  • Under Network settings, keep "Auto-assign public IP" enabled and allow SSH to connect to the instance; you can restrict the source to a static IP so the instance is accessible only from that IP.

Image description

  • On Configure storage, set 8 GiB and gp2 (free tier). If needed, add volumes on a paid basis.

Image description

  • All set; now we are going to launch the instance. Once launched, it takes about 2 minutes to reach the Running state.

Image description

  • Check the instance connect methods by clicking "Connect".

Image description

  • Now we are going to connect to the instance via the SSH method
    using the key pair .pem file which we have downloaded.

  • If you try to connect to the instance via SSH without changing the permissions of the .pem file, it shows a warning:

kannan@kannan-PC:~$ ssh -i apache.pem ubuntu@15.206.187.19
The authenticity of host '15.206.187.19 (15.206.187.19)' can't be established.
ED25519 key fingerprint is SHA256:94WhMQXEW7ygmAS+dTcSfHGM/6UUoRYLCTGxhJJoIVc.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '15.206.187.19' (ED25519) to the list of known hosts.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0664 for 'apache.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "apache.pem": bad permissions
ubuntu@15.206.187.19: Permission denied (publickey).


  • Change the file permissions and then connect to the instance via its public IP:
kannan@kannan-PC:~$ chmod 600 apache.pem 
kannan@kannan-PC:~$ ssh -i apache.pem ubuntu@15.206.187.19
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 6.2.0-1012-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Mon Nov 20 14:24:38 UTC 2023

  System load:  0.0               Processes:             97
  Usage of /:   20.5% of 7.57GB   Users logged in:       0
  Memory usage: 21%               IPv4 address for eth0: 172.31.3.250
  Swap usage:   0%

Expanded Security Maintenance for Applications is not enabled.

0 updates can be applied immediately.

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status


The list of available updates is more than a week old.
To check for new updates run: sudo apt update


The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

  • Set the root password, run apt update, and install Apache using the below commands:
sudo passwd root
sudo apt update
sudo apt install apache2 -y

Image description

  • Note
    After every test/study session, terminate the instance to avoid billing.

  • Enable the alert preference and invoice delivery preference under Billing preferences to get notification alerts.

Deploy codes to remote server over SSH method

By: Kannan
13 November 2023 at 13:46
  • Create a Git repository named "deploycode"; add a README.md file with the description deploycodetoremoteserver.

  • Clone the Git repo to local machine.

  • Create Git branch as "dev" & "test"

  • Create an index.html file on both branches:

kannan@kannan-PC:~/deploycode$ ls
index.html  README.md
kannan@kannan-PC:~/deploycode$ cat index.html 
<h1>Testcode</h1>
<h1>testcode version-1</h1>
<h1>testcode version-2</h1>

kannan@kannan-PC:~/deploycode$ git branch 
  dev
  main
* test
kannan@kannan-PC:~/deploycode$ git checkout dev
Switched to branch 'dev'
kannan@kannan-PC:~/deploycode$ git branch 
* dev
  main
  test
kannan@kannan-PC:~/deploycode$ ls
index.html  README.md
kannan@kannan-PC:~/deploycode$ cat index.html 
<h1>Devcode </h1>
<h1>dev code version-1</h1>
<h1>dev code version-2</h1>

  • Run git add ., commit, and push to the Git repo.

Remote server setup

  • Create a remote server

  • Do the "apt update" and and "install apache2" on the both server

  • Do the the "systemctl start, enable, status of the apache service"

Jenkins setup

  • Go to the Jenkins dashboard > Manage Jenkins > Plugins > install "Publish over SSH".

  • Manage Jenkins > System > SSH Servers (Add the SSH server details for dev and test)

Image description

Image description

  • Click on Advanced, select "Use password authentication", and enter the password for the remote server.

Image description

  • Test the configuration to check the remote server connectivity.

Jenkins Project for devserver.

  • Go to Jenkins dashboard > Add items > Freestyle project> ok

  • Select the Github project and paste the Git repo "URL".
    Image description

  • Copy the Git repo HTTP URL and set the branch to "dev".

Image description

  • Select the "Poll SCM" and set the schedule period.
    Image description

  • Select send files or execute commands over SSH.
    Image description

  • Select Editable Email Notification and add the email to the "Project recipient list".

Image description

  • Once done, it will automatically build and provide the output in the Console Output.

Image description

Image description

Jenkins Project for testserver.

  • Follow the same configuration procedure as above for "General, Source Code Management, Build Triggers, Post-build Actions".
  • Only the build steps need to be modified to run against the "testserver".

Image description

  • Once done, it will automatically build and provide the output in the Console Output.

Image description

Image description

Build and Push Docker images to Docker Hub using Jenkins Pipeline

By: Kannan
13 November 2023 at 11:42
  • Create a Git repository named "dockerhub_jenkins" with a README.md.

  • Clone the repo to the local machine and add the below required files.

  • Create the Dockerfile for the image to build:

FROM ubuntu:22.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update
RUN apt install apache2 -y
RUN apt install apache2-utils -y
RUN apt clean
COPY index.html /var/www/html
EXPOSE 80
CMD ["apache2ctl","-D","FOREGROUND"]
  • Create the Jenkinsfile:
kannan@kannan-PC:~/dockerhub_jenkins$ cat Jenkinsfile 
pipeline {
  agent any
  options {
    buildDiscarder(logRotator(numToKeepStr: '5'))
  }
  environment {
    DOCKERHUB_CREDENTIALS = credentials('kannanb95-dockerhub')
  }
  stages {
    stage('Build') {
      steps {
        sh 'docker build -t kannanb95/kaniyam:latest .'
      }
    }
    stage('Login') {
      steps {
        sh 'echo $DOCKERHUB_CREDENTIALS_PSW | docker login -u $DOCKERHUB_CREDENTIALS_USR --password-stdin'
      }
    }
    stage('Push') {
      steps {
        sh 'docker push kannanb95/kaniyam:latest'
      }
    }
  }
  post {
    always {
      sh 'docker logout'
    }
  }
}

  • Run git commit and push to the repo.

  • Log in to Docker Hub > Account settings > Security > Access Token > Create New Access Token.

Image description

  • Go to the Jenkins dashboard > Manage Jenkins > Credentials > System > add the Docker Hub credentials; in the password field, paste the access token created on Docker Hub.

Image description

  • At the Jenkins dashboard > Add items > name it "Dockerhub_jenkins" > Pipeline.

  • Pipeline > Definition > Pipeline script from SCM > set SCM to "Git" > copy the repo URL > set the branch to "main" > Save and Apply > Build Now.

Image description

Image description

Image description

Image description

Image description

  • Verify the image on Docker Hub.

Image description

Jenkins Pipeline Concept

By: Kannan
13 November 2023 at 10:41

Using Pipeline script concept

  • Jenkins dashboard > New item > select "Pipeline" named as pipeline script.

  • Pipeline > Definition > select "Pipeline script" > edit the script and enable "Use Groovy sandbox".

Image description

Image description

Image description

Using pipeline SCM concept

  • Jenkins dashboard > New item > select "Pipeline" > name it "pipelinescm concept".
  • Pipeline > Definition > select "Pipeline script from SCM" > select "Git" as the SCM > paste the repo URL > set the branch to "main" > Save and Apply > Build Now.

Image description

Image description

  • Console Output

Image description

Configure and setup Email extended notification on Jenkins and create a project to verify the notification

By: Kannan
13 November 2023 at 08:30

On the respective email configuration

  • Set up 2-factor authentication, and enable & add the app password under the Security tab. Create the app password for Jenkins; we need to add this password in Jenkins.

On the Jenkins dashboard

  • We have completed with the installation and basic setup of Jenkins app.

  • Jenkins dashboard > Manage Jenkins > Plugins > Email Extension Template Plugin (add this plugin for email notification).

  • Jenkins dashboard > Manage Jenkins > System > E-mail Notification. (In the password field, enter the app password which we created on Gmail, without spaces.)

Image description

  • enable the "Test configuration by sending test e-mail"
    enter the email for validation purpose and verify the email inbox for Notification.

  • On Extended Email Notification, add the credentials and enable "Use SSL".

Image description

  • On Default trigger tab select "Always", "Success", "Failure if any".

Image description

  • On the Jenkins dashboard > Add New Item > select "Freestyle project" > provide the project name in CamelCase > OK.

Image description

  • On "Build Steps" select "Execute shell" provide the commands need to run.

Image description

  • On "Post Build Action" select "Editable E mail Notification".

  • Provide the "email ID" to whom you want to send the project Build Log Notification Apply and save.

  • Once saved Click on "Build Now" to run the project and view the console output and verify the email notification.

Image description

  • If the notification email comes from "address not yet configured", you can modify the system admin email address.

Image description

  • All the Jenkins project logs are stored under the below path:
/var/lib/jenkins/jobs/

Prometheus-Grafana Using BlackBox exporter

By: Kannan
13 November 2023 at 07:12

Here we are going to use the Blackbox exporter with a Grafana dashboard.

  • Download the blackbox exporter
wget https://github.com/prometheus/blackbox_exporter/releases/download/v0.24.0/blackbox_exporter-0.24.0.linux-amd64.tar.gz
  • extract the package
tar -xvf blackbox_exporter-0.24.0.linux-amd64.tar.gz
  • enter into blackbox directory
cd blackbox_exporter-0.24.0.linux-amd64
  • create the monitor_website.yml file
vim monitor_website.yml
modules:
  http_2xx_example:
    prober: http
    timeout: 5s
    http:
      valid_http_versions: ["HTTP/1.1", "HTTP/2.0"]
      valid_status_codes: [200]  # Defaults to 2xx
      method: GET
  • Create the service file for blackbox
vim /etc/systemd/system/blackbox.service
[Unit]
Description=Blackbox Exporter
Documentation=https://prometheus.io/docs/introduction/overview/
After=network-online.target

[Service]
User=root
Restart=on-failure

ExecStart=/root/blackbox_exporter-0.24.0.linux-amd64/blackbox_exporter --config.file=/root/blackbox_exporter-0.24.0.linux-amd64/monitor_website.yml

[Install]
WantedBy=multi-user.target
  • Reload the daemon; start, enable, and check the status of the blackbox service
systemctl daemon-reload
systemctl start blackbox
systemctl enable blackbox
systemctl status blackbox
  • On the Prometheus server, add the scrape config
vim /etc/prometheus/prometheus.yml
  - job_name: 'blackbox'
    metrics_path: /probe
    params:
      module: [http_2xx_example]  # Look for a HTTP 200 response.
    static_configs:
      - targets:
        - http://prometheus.io    # Target to probe with http.
        - https://prometheus.io   # Target to probe with https.
        - http://kaniyam.com
        - http://freetamilebooks.com
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 192.168.122.138:9115
  • Restart the services
systemctl restart prometheus
systemctl restart grafana-server        
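
You can verify a probe directly against the exporter before checking Grafana; the address matches the replacement target in the relabel config above:

# Ask the Blackbox exporter to probe kaniyam.com with the http_2xx_example module
curl "http://192.168.122.138:9115/probe?target=http://kaniyam.com&module=http_2xx_example"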

Image description

  • In the Grafana dashboard, create a query and add the metrics
probe_http_status_code
probe_http_ssl

Image description

  • Add > Visualization > select Table > set value mappings and colours > rename the panel.

  • Below the metric "option" set the type as "Table", "instant"
    Select "Transform data" and modify the name as your choice.

Image description

Monitor MySQL DB using Prometheus-Grafana/mysqld exporter

By: Kannan
4 November 2023 at 15:57
  • Create a target machine to install MySQL server.
    Here I have created a MySQL target machine using a VM (Ubuntu 22.04).

  • Let's install MySQL server
    apt update
    apt install mysql-server
    systemctl start mysql
    systemctl enable mysql
    systemctl status mysql

  • Add a prometheus system group and user

groupadd --system prometheus
useradd --no-create-home -s /sbin/nologin --system -g prometheus prometheus

  • Download the latest mysqld_exporter
curl -s https://api.github.com/repos/prometheus/mysqld_exporter/releases/latest | grep browser_download_url   | grep linux-amd64 | cut -d '"' -f 4   | wget -qi -
  • Extract the downloaded file
tar xvf mysqld_exporter*.tar.gz
root@mysql-2:~# tar xvf mysqld_exporter*.tar.gz
mysqld_exporter-0.15.0.linux-amd64/
mysqld_exporter-0.15.0.linux-amd64/mysqld_exporter
mysqld_exporter-0.15.0.linux-amd64/NOTICE
mysqld_exporter-0.15.0.linux-amd64/LICENSE

Move the mysqld-exporter to /usr/local/bin

mv  mysqld_exporter-*.linux-amd64/mysqld_exporter /usr/local/bin/
  • Make mysqld_exporter executable
chmod +x /usr/local/bin/mysqld_exporter
  • verify the mysqld-exporter version

mysqld_exporter --version

root@mysql-2:~# mysqld_exporter  --version
mysqld_exporter, version 0.15.0 (branch: HEAD, revision: 6ca2a42f97f3403c7788ff4f374430aa267a6b6b)
  build user:       root@c4fca471a5b1
  build date:       20230624-04:09:04
  go version:       go1.20.5
  platform:         linux/amd64
  tags:             netgo

  • Create a MySQL user for mysqld_exporter and grant the required privileges
mysql -u root -p
CREATE USER 'mysqld_exporter'@'localhost' IDENTIFIED BY 'StrongPassword';
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'mysqld_exporter'@'localhost';
FLUSH PRIVILEGES;
EXIT
root@mysql-2:~# mysql -u root -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'mysqld_exporter'@'localhost' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.02 sec)

mysql> GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'mysqld_exporter'@'localhost';
Query OK, 0 rows affected (0.01 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)

mysql> EXIT
Bye

  • Configure MySQL DB credentials

vim /etc/.mysqld_exporter.cnf

root@mysql-2:~# cat /etc/.mysqld_exporter.cnf
[client]
user=mysqld_exporter
password=******

  • Set the ownership

chown root:prometheus /etc/.mysqld_exporter.cnf

  • Create the systemd unit file

vim /etc/systemd/system/mysql_exporter.service

root@mysql-2:~# cat /etc/systemd/system/mysql_exporter.service
[Unit]
Description=Prometheus MySQL Exporter
After=network.target

[Service]
Type=simple
User=prometheus
Group=prometheus
Restart=always
ExecStart=/usr/local/bin/mysqld_exporter \
--config.my-cnf /etc/.mysqld_exporter.cnf \
--collect.global_status \
--collect.info_schema.innodb_metrics \
--collect.auto_increment.columns \
--collect.info_schema.processlist \
--collect.binlog_size \
--collect.info_schema.tablestats \
--collect.global_variables \
--collect.info_schema.query_response_time \
--collect.info_schema.userstats \
--collect.info_schema.tables \
--collect.perf_schema.tablelocks \
--collect.perf_schema.file_events \
--collect.perf_schema.eventswaits \
--collect.perf_schema.indexiowaits \
--collect.perf_schema.tableiowaits \
--collect.slave_status \
--web.listen-address=0.0.0.0:9104

[Install]
WantedBy=multi-user.target

  • Reload the daemon; then start, enable, and check the status of the service

systemctl daemon-reload
systemctl enable mysql_exporter
systemctl start mysql_exporter
systemctl status mysql_exporter
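
Before wiring up Prometheus, you can confirm the exporter is serving metrics; the IP and port match the scrape config below:

# mysql_up should report 1 when the exporter can reach MySQL
curl -s http://192.168.122.137:9104/metrics | grep mysql_up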

  • We have already created a Prometheus server machine and completed the installation of prometheus, grafana, alertmanager, and node-exporter.

  • Add a scrape config to communicate with the DB

vim /etc/prometheus/prometheus.yml

  - job_name: 'server1_db'
    scrape_interval: 5s
    static_configs:
      - targets: ['server_ip:9104']
root@prometheus-2:~# cat /etc/prometheus/prometheus.yml

global:
  scrape_interval: 10s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'prometheus_server'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.122.138:9100']

  - job_name: 'server1_db'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.122.137:9104']

  • Add alert rules for mysqld_exporter

vim /etc/prometheus/rules/alert-rules.yml

groups:
- name: mysql-alerts
  rules:
  - alert: MysqlDown
    expr: mysql_up == 0
    for: 2m
    labels:
      severity: critical
    annotations:
      summary: MySQL down (instance {{ $labels.instance }})
      description: "MySQL instance is down on {{ $labels.instance }}\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
  • Restart and verify the status of all services (prometheus, grafana, node_exporter, alertmanager)

systemctl restart prometheus
systemctl status prometheus
systemctl restart grafana-server
systemctl status grafana-server
systemctl restart node_exporter
systemctl status node_exporter
systemctl restart alertmanager
systemctl status alertmanager

  • We need to import the JSON file into the Grafana dashboard.

  • Find the below link to get the JSON file:

https://github.com/prometheus/mysqld_exporter/blob/main/mysqld-mixin/dashboards/mysql-overview.json#L3

  • Copy the mysql-overview.json file from the above link and paste it under "Import via dashboard JSON model".

Image description

  • Name the dashboard, set the refresh interval to "every 5 minutes", and save the dashboard.

Image description

Now we are able to monitor the MySQL DB using Prometheus-Grafana/mysqld-exporter.
