Introduction to AWS

Hi folks, welcome to my blog. Here we are going to look at an "Introduction to AWS".

Amazon Web Services (AWS) is the world’s leading cloud computing platform, offering a wide range of services to help businesses scale and innovate. Whether you're building an application, hosting a website, or storing data, AWS provides reliable and cost-effective solutions for individuals and organizations of all sizes.

What is AWS?
AWS is a comprehensive cloud computing platform provided by Amazon. It offers on-demand resources such as compute power, storage, networking, and databases on a pay-as-you-go basis. This eliminates the need for businesses to invest in and maintain physical servers.

Core Benefits of AWS

  1. Scalability: AWS allows you to scale your resources up or down based on your needs.
  2. Cost-Effective: With its pay-as-you-go pricing, you only pay for what you use.
  3. Global Availability: AWS has data centers worldwide, ensuring low latency and high availability.
  4. Security: AWS follows a shared responsibility model, offering top-notch security features like encryption and access control.
  5. Flexibility: Supports multiple programming languages, operating systems, and architectures.

Key AWS Services
Here are some of the most widely used AWS services:

  1. Compute:
    • Amazon EC2: Virtual servers to run your applications.
    • AWS Lambda: Serverless computing to run code without managing servers.
  2. Storage:
    • Amazon S3: Object storage for data backup and distribution.
    • Amazon EBS: Block storage for EC2 instances.
  3. Database:
    • Amazon RDS: Managed relational databases like MySQL, PostgreSQL, and Oracle.
    • Amazon DynamoDB: NoSQL database for high-performance applications.
  4. Networking:
    • Amazon VPC: Create isolated networks in the cloud.
    • Amazon Route 53: Domain name system (DNS) and traffic management.
  5. AI/ML:
    • Amazon SageMaker: Build, train, and deploy machine learning models.
  6. DevOps Tools:
    • AWS CodePipeline: Automates the release process.
    • Amazon EKS: Managed Kubernetes service.

Conclusion
AWS has revolutionized the way businesses leverage technology by providing scalable, secure, and flexible cloud solutions. Whether you're a developer, an enterprise, or an enthusiast, understanding AWS basics is the first step toward mastering the cloud. Start your AWS journey today and unlock endless possibilities!

Follow for more and happy learning :)

Ingesting large files into Postgres through S3

One of the tasks I recently came across at my job was to ingest large files, but with the following requirements:

  1. Do some processing (like generating a hash for each row)
  2. Upload the file to S3 for audit purposes
  3. Insert the data into Postgres

Note:

Keep in mind that your Postgres database needs to support this, and an S3 bucket policy needs to exist to allow the data to be copied over.

The setup I am using is an RDS database with S3 in the same region, with the proper policies and IAM roles already created.

Read more on that here – AWS documentation

For the purpose of this post I will be using dummy data from eforexcel (1 million records).

The most straightforward way to do this would be to just do a df.to_sql call, like this:

import pandas as pd

df = pd.read_csv("records.csv")
df.to_sql(
    name="test_table",
    con=connection_detail,  # an SQLAlchemy engine or connection
    schema="schema",
    if_exists="replace",
)

Something like this would take more than an hour! Let's do it in less than 5 minutes.

Now of course there are several ways to make this faster – using copy_expert, the psycopg2 driver, etc. (maybe a separate blog post on these) – but that's not the use case I have been tasked with. Since we need to upload the file to S3 at the end for audit purposes anyway, I will ingest the data from S3 to the DB.

Generate table metadata

Before we can use the S3 import to ingest the data, we need to create the table into which this data will be inserted. There are two ways that I can think of:

  1. Create each column in the DB with a generous fixed size, like varchar(2000)
  2. Create each column with its length set to the maximum data length found across the rows

I will be going with option 2 here.

This entire process took around 210 seconds instead of more than an hour like the last run.

Let’s go over the code one by one

Read the CSV

  1. We can pass the data directly to pandas, or stream it into buffered memory, something like this:
import csv
import gzip
import io

import boto3

s3 = boto3.client("s3")
mem_file = io.BytesIO()  # in-memory buffer for the gzipped CSV

with open("records.csv") as f:
    csv_rdr = csv.reader(f, delimiter=",")
    header = next(csv_rdr)
    with gzip.GzipFile(fileobj=mem_file, mode="wb", compresslevel=6) as gz:
        buff = io.StringIO()
        writer = csv.writer(buff)
        writer.writerow(header)
        for row in csv_rdr:
            writer.writerow(row)
        gz.write(buff.getvalue().encode("utf-8", "replace"))
    mem_file.seek(0)
    s3.put_object(Bucket="mybucket", Key="folder/file.gz", Body=mem_file)

2. Since the file is less than 50 MB, I'll go ahead and load it directly.

Create the table

Get the max length of each column and use that to generate the table. We use the pandas to_sql() function for this and pass in the dtypes, as sketched below.
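A minimal sketch of that step, assuming an SQLAlchemy engine (the connection string is a placeholder):

import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.types import VARCHAR

df = pd.read_csv("records.csv")

# The longest value in each column decides that column's VARCHAR length.
dtypes = {
    col: VARCHAR(int(df[col].astype(str).str.len().max()))
    for col in df.columns
}

engine = create_engine("postgresql+psycopg2://user:pass@host:5432/db")  # placeholder DSN

# head(0) writes only the empty table, i.e. just the DDL with our dtypes.
df.head(0).to_sql("test_table", engine, schema="schema", if_exists="replace", dtype=dtypes)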

Copy data from the gzipped S3 file to Postgres

Finally, we use aws_s3.table_import_from_s3 to copy the file over to the Postgres table.
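A rough sketch of invoking it from Python with psycopg2; the connection details, bucket, and region are placeholders:

import psycopg2

conn = psycopg2.connect("dbname=db user=user password=pass host=host")  # placeholder DSN
with conn, conn.cursor() as cur:
    # aws_s3.table_import_from_s3 is provided by the aws_s3 extension on RDS.
    cur.execute("""
        SELECT aws_s3.table_import_from_s3(
            'schema.test_table',                -- target table
            '',                                 -- column list ('' means all columns)
            '(FORMAT csv, HEADER true)',        -- COPY options
            aws_commons.create_s3_uri('mybucket', 'folder/file.gz', 'us-east-1')
        );
    """)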

Generating Signature Version 4 URLs using boto3

If your application allows your users to download files directly from S3, you are bound to hit this error sooner or later when you scale to other regions – The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

The issue has been raised on various forums and on GitHub, e.g. https://github.com/rstudio/pins/issues/233 and https://stackoverflow.com/questions/57591989/amazon-web-services-s3-the-authorization-mechanism-you-have-provided-is-not-s. None of these solutions worked for me.

Here is what did work:
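The original snippet isn't preserved in this copy, so below is a minimal reconstruction assuming boto3/botocore: pin the client to the bucket's region and force Signature Version 4 via botocore's Config. The region, bucket, and key are placeholders.

import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    region_name="eu-central-1",  # replace with your bucket's region
    config=Config(signature_version="s3v4"),  # force SigV4
)

url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-airflow-bucket", "Key": "path/to/file"},  # placeholders
    ExpiresIn=3600,
)
print(url)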

Replace the Region and Airflow bucket and you’re good to go.

Setting up your own High Availability managed WordPress hosting using Amazon RDS

Hosting your own WordPress website is interesting, right? OK, come on, let's do it!

We are going to do this practical from scratch – from the creation of our own VPC, subnets, Internet Gateway, and route tables to the deployment of WordPress.

Here, we are going to use Amazon Web Services' RDS service for hosting our own WordPress site. Before that, let's take a look at a basic introduction to the RDS service.

Amazon Relational Database Service is a distributed relational database service by Amazon Web Services (AWS). It is a web service running in the cloud designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases and enabling point-in-time recovery are managed automatically.

Features of AWS RDS

  • Lower administrative burden. Easy to use
  • Performance. General Purpose (SSD) Storage
  • Scalability. Push-button compute scaling
  • Availability and durability. Automated backups
  • Security. Encryption at rest and in transit
  • Manageability. Monitoring and metrics
  • Cost-effectiveness. Pay only for what you use

Ok, let’s jump onto the practical part!!

We will do this practical from scratch. Since it is big, we have divided it into 5 small parts, namely:

  • Creating a MySQL database with RDS
  • Creating an EC2 instance
  • Configuring your RDS database
  • Configuring WordPress on EC2
  • Deployment of WordPress website

Creating a MySQL database with RDS

Before that, we have to do some preliminary work, namely the creation of a Virtual Private Cloud (VPC), subnets, and security groups. These are important because, in order to have a reliable connection between WordPress and the MySQL database, both should be located in the same VPC and share the same security group.

Instances are launched into subnets, and RDS itself runs your MySQL database on EC2 instances behind the scenes – ones we cannot see, because they are fully managed by AWS.

VPC Dashboard

We are going to create our own VPC. For that, we have to specify an IP range in CIDR notation; we specified 192.168.0.0/16.

What is CIDR? I explained this in detail in my previous blog; you can refer to it here.

Let's come to the point. After specifying the IP range and CIDR, enter your VPC name.

Now, VPC is successfully created with our specified details.

Next step is to launch the subnet in the above VPC.

Subnet Dashboard

For creating subnets, you have to specify the VPC in which the subnet should be launched. We already have our own VPC, named “myvpc123”.

Then we have to specify the subnet's IP range in CIDR notation. Note that the subnet range must fall within the VPC range; it cannot exceed it.

To achieve high availability, we have to launch a minimum of two subnets, so that Amazon RDS can place its database across both; if one subnet fails, the database stays reachable.

Now, two subnets with their specified CIDR ranges are successfully created inside our own VPC and are available.

The next step is to create a security group in order to secure the WordPress and MySQL databases. Note that both should use the same security group, or else they won't connect.

For creating a security group, we have to specify the VPC in which it should be created, and adding a description is mandatory.

Then we have to specify inbound rules; to keep this practical simple, we are allowing all traffic to reach our instance.

Now, the Security Group is successfully created with our specified details.

Now let’s jump into part 1 which is about Creating a MySQL database with RDS.

RDS dashboard

Select Create database, then select Standard create and specify the database type.

Then you have to specify the version. The version plays a major role in MySQL when integrating with WordPress, so select a compatible version or it will cause serious trouble at the end. Then select the template; here we are using the Free tier since it won't be charged.

Then you have to specify the credentials such as Database Instance name, Master username and Master password.

The most important part is the selection of the VPC: you should select the same VPC where you will launch the EC2 instance for WordPress, and the VPC cannot be modified once the database is created. Then set Public access to No to provide more security to our database; now people outside your VPC can't connect to your database.

Then you have to specify the security group for your database. Note that the security group for your database and WordPress should be the same, or else it will cause serious trouble.

Note that security groups are created per VPC. After selecting the security group, click OK to create the RDS database.

Creating an EC2 instance

Before creating an instance, there are two things you should configure, namely an Internet Gateway and route tables. They provide outside internet connectivity to an instance launched in the subnet.

Internet Gateway Dashboard

An Internet Gateway is created per VPC. First, we have to create a new Internet Gateway with the specified details.

Then you have to attach the Internet Gateway to the VPC.

The next step is to create route tables. Note that a route table is associated with a subnet.

We have to specify the VPC in which your subnet lives to attach the route table to it, then specify a name and click Create to create the route table.

Then click Edit route to edit the route details, namely the destination and target. Enter 0.0.0.0/0 as the destination so any IP anywhere on the internet can be reached, and your Internet Gateway as the target.

After entering the details, click Save routes.

We created a route table; now we have to associate it with your subnet. For that, click Edit route table association and select the subnet you want to attach the route table to.

Now, let's jump into the task of creating an EC2 instance.

First, you have to choose the AMI with which to create the EC2 instance; here I selected the Amazon Linux 2 AMI.

Then you have to select the instance type; here I selected t2.micro since it comes under the free tier.

Then you have to specify the VPC and subnet for your instance, and enable Auto-assign Public IP so your instance gets a public IP.

Then you can add storage for your instance; this is optional.

Then you have to specify tags, which are useful especially for automation.

Then you have to select the security group for your instance. It should be the same one your database has.

And click Review and Launch. Then you have to add a key pair to launch your EC2 instance. If you don't have a key pair, you can create one at this point.

Configuring your RDS database

At this point, you have created an RDS database and an EC2 instance. Now, we will configure the RDS database to allow access to specific entities.

You have to run the below command in your EC2 instance in order to establish the connection with your database.

export MYSQL_HOST=<your-endpoint>

You can find your endpoint by clicking your database in the RDS dashboard. Then you have to run the following command.

mysql --user=<user> --password=<password> dbname

This output shows that the database is successfully connected from the EC2 instance.

In the MySQL command terminal, you have to run the following commands in order to create a user and grant it all privileges.

CREATE USER 'vishnu' IDENTIFIED BY 'vishnupassword';
GRANT ALL PRIVILEGES ON dbname.* TO vishnu;
FLUSH PRIVILEGES;
EXIT;

Configuring WordPress on EC2

For configuring WordPress on the EC2 instance, the first step is to configure the web server; here I am using the Apache web server. For that, you have to run the following commands.

sudo yum install -y httpd
sudo service httpd start

The next step is to download WordPress from the internet using the wget command, then extract it. Run the following to do both.

wget https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz

Then we have to do some configuration; for this, follow the steps below.

cd wordpress
cp wp-config-sample.php wp-config.php
vi wp-config.php

Open the wp-config.php file and enter your database credentials (including your password).

Then, go to this link, copy everything, and paste it in to replace the existing lines of code.

The next step is to deploy the WordPress application. For that, run the following commands to install the dependencies and deploy WordPress on the web server.

sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
sudo cp -r wordpress/* /var/www/html/
sudo service httpd restart

That’s it. You have a live, publicly-accessible WordPress installation using a fully-managed MySQL database on Amazon RDS.

Then, if you enter your WordPress instance's IP in your browser, you will land on your WordPress home page.

After you fill in your credentials, you will get your own homepage.

That’s it. You launched your own application in your own instance and your database is managed by AWS RDS service.


Thank you all for your reads. Stay tuned for my next article.

Terraform code for AWS Postgresql RDS

create directory postgres and navigate
$ mkdir postgres && cd postgres
create main.tf file
$ vim main.tf

provider "aws" {
}
resource "aws_security_group" "rds_sg" {
name = "rds_sg"
ingress {
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_db_instance" "myinstance" {
engine = "postgres"
identifier = "myrdsinstance"
allocated_storage = 20
engine_version = "14"
instance_class = "db.t3.micro"
username = "myrdsuser"
password = "myrdspassword"
parameter_group_name = "default.postgres14"
vpc_security_group_ids = ["${aws_security_group.rds_sg.id}"]
skip_final_snapshot = true
publicly_accessible = true
}

output "rds_endpoint" {
value = "${aws_db_instance.myinstance.endpoint}"
}

save and exit
$ terraform init
$ terraform plan
$ terraform apply -auto-approve
Install the Postgres client on the local machine
$ sudo apt install -y postgresql-client
To access AWS postgresql RDS instance
$ psql -h <end_point_URL> -p 5432 --username=myrdsuser --password --dbname=mydb
To destroy postgresql RDS instance
$ terraform destroy -auto-approve

Terraform code for AWS MySQL RDS

create directory mysql and navigate
$ mkdir mysql && cd mysql
create main.tf
$ vim main.tf

provider "aws" {
}
resource "aws_security_group" "rds_sg" {
name = "rds_sg"
ingress {
from_port = 3306
to_port = 3306
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_db_instance" "myinstance" {
engine = "mysql"
identifier = "myrdsinstance"
allocated_storage = 20
engine_version = "5.7"
instance_class = "db.t2.micro"
username = "myrdsuser"
password = "myrdspassword"
parameter_group_name = "default.mysql5.7"
vpc_security_group_ids = ["${aws_security_group.rds_sg.id}"]
skip_final_snapshot = true
publicly_accessible = true
}

output "rds_endpoint" {
value = "${aws_db_instance.myinstance.endpoint}"
}

save and exit
$ terraform init
$ terraform plan
$ terraform apply -auto-approve
Install the MySQL client on the local machine
$ sudo apt install mysql-client
To access MySQL
$ mysql -h <end_point_URL> -P 3306 -u <username> -p
To destroy the mysql RDS instance
$ terraform destroy -auto-approve

S3 bucket creation and public access

provider "aws" {
region = "ap-south-1"
}

resource "aws_s3_bucket" "example" {
bucket = "example-my"
}

resource "aws_s3_bucket_ownership_controls" "ownership" {
bucket = aws_s3_bucket.example.id
rule {
object_ownership = "BucketOwnerPreferred"
}
}

resource "aws_s3_bucket_public_access_block" "pb" {
bucket = aws_s3_bucket.example.id

block_public_acls = false
block_public_policy = false
ignore_public_acls = false
restrict_public_buckets = false
}

resource "aws_s3_bucket_acl" "acl" {
depends_on = [aws_s3_bucket_ownership_controls.ownership]
bucket = aws_s3_bucket.example.id
acl = "private"
}

S3 bucket creation and object storage

create directory s3-demo and navigate
$ mkdir s3-demo && cd s3-demo
create a demo file sample.txt with some content
$ echo "this is sample object to store in demo-bucket" > sample.txt
create main.tf file
$ vim main.tf

provider "aws" {
region = "ap-south-1"
}

resource "aws_s3_bucket" "example" {
bucket = "mydemo-bucket1"
}

resource "aws_s3_object" "object" {
bucket = aws_s3_bucket.example.bucket
key = "sample.txt"
source = "./sample.txt"
}

save and exit
$ terraform init
$ terraform plan
$ terraform apply -auto-approve

S3 bucket creation using Terraform

create directory s3 and navigate to the directory
$ mkdir s3 && cd s3
create main.tf file
$ vim main.tf

provider "aws" {
region = "ap-south-1"
}

resource "aws_s3_bucket" "my_bucket" {
bucket = "mydemo-bucket"
}

save and exit
$ terraform init
$ terraform plan
$ terraform apply -auto-approve
To destroy the bucket
$ terraform destroy -auto-approve

How to install and configure the AWS CLI on Ubuntu

Install AWS CLI
$ sudo apt install awscli -y
To check for the version
$ aws --version
To configure AWS account credentials
Copy the access key and secret key from your AWS account's security credentials page
$ aws configure
AWS Access Key ID [None]: ***************** 
AWS Secret Access Key [None]: ******************
Default region name [None]: ap-south-1
Default output format [None]: json or table or text
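Once configured, the same credentials are picked up by the SDKs as well. As a quick sanity check, a short Python sketch (assuming boto3 is installed):

import boto3

# boto3 reads the same ~/.aws/credentials written by `aws configure`.
identity = boto3.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])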

How to install AWS EKS Cluster

 

step1: create EC2 instance

step2: Create an IAM Role with the following permissions and attach it to the EC2 instance:
  • IAM
  • EC2
  • VPC
  • CloudFormation
  • Administrative access

In the EC2 instance, install kubectl, eksctl, and aws-iam-authenticator.
step3: install kubectl
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin
$ kubectl version --client

step4: install eksctl
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
$ eksctl version

step5: install aws-iam-authenticator
$ curl -Lo aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.6.14/aws-iam-authenticator_0.6.14_linux_amd64
$ chmod +x ./aws-iam-authenticator
$ mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH
$ echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
step6: create the cluster and nodes
$ eksctl create cluster --name dhana-cluster --region ap-south-1 --node-type t2.small

step7: validate your cluster by checking the nodes and creating a pod
$ kubectl get nodes
$ kubectl run tomcat --image=tomcat
$ kubectl run nginx --image=nginx

To delete the cluster dhana-cluster
$ eksctl delete cluster --name dhana-cluster --region ap-south-1

AWS-Elastic Container Service

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.

  • The difference from EC2 is that EC2 deploys isolated VM instances with auto-scaling support, while ECS deploys scalable clusters of managed Docker containers.

  • Amazon Elastic Container Service (ECS), Elastic Kubernetes Service (EKS), and AWS Fargate help deploy and manage containers.

  • AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.

Step-1. Create Cluster and task definition using AWS Fargate

Step-2. Create the services in the cluster.

  • Create the service with the task definition family which we created for nginx.

  • Once the service is created, we can access the public IP details from the Tasks tab.

  • Now you are able to access Nginx in the browser using the public IP.
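The post does all of this through the console; for reference, here is a rough boto3 equivalent. The cluster, service, and task definition names, the subnet, and the security group are placeholders, and it assumes the "nginx" task definition family was registered as in Step-1.

import boto3

ecs = boto3.client("ecs", region_name="ap-south-1")  # assumed region

ecs.create_cluster(clusterName="demo-cluster")  # placeholder cluster name

# Run the nginx task definition as a Fargate service with a public IP.
ecs.create_service(
    cluster="demo-cluster",
    serviceName="nginx-service",
    taskDefinition="nginx-taskdef",  # the task definition family from Step-1
    desiredCount=1,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],  # placeholder
            "assignPublicIp": "ENABLED",
        }
    },
)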

AWS-Lambda (Start/Stop EC2 Instance)

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.

  • Lambda functions are efficient whenever you want to create a function that will only contain simple expressions

  • Each Lambda function runs in its own container. When a function is created, Lambda packages it into a new container and then executes that container on a multi-tenant cluster of machines managed by AWS.

Step-1. Create the EC2 instance.

Step-2. Create IAM Roles and policies.

  • Create a policy > select the EC2 service > Access level: Write (StopInstances).

  • Add the specific ARN (details of the EC2 instance which we need to start/stop).

  • We have created separate policies for starting and stopping the EC2 instance.

  • Create a role > select the trusted entity (AWS service) > select the use case as "Lambda".

  • We have created separate roles for starting and stopping the EC2 instance.

Step-3. Create Lambda function.

  • We can add a trigger rule using "EventBridge".

  • Similarly, we create another Lambda function to start the EC2 instance, and schedule a cron job using the "EventBridge" trigger.

  • These will start/stop the EC2 instance using the Lambda functions, as sketched below.
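A minimal sketch of the stop function's handler; the region and instance ID are placeholders, and the "start" function is identical with start_instances() swapped in:

import boto3

REGION = "ap-south-1"                    # assumed region
INSTANCE_IDS = ["i-0123456789abcdef0"]   # placeholder instance ID

ec2 = boto3.client("ec2", region_name=REGION)

def lambda_handler(event, context):
    # Stop the instance(s); the "start" Lambda uses start_instances() instead.
    ec2.stop_instances(InstanceIds=INSTANCE_IDS)
    return {"status": "stopping", "instances": INSTANCE_IDS}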

AWS-VPC (Peering Connections)

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network.

  • There is a limit on the number of active and pending VPC peering connections that you can have per VPC.

  • VPC peering is a technique for securely connecting two or more virtual private clouds, or VPCs.

Step-1. As per the above VPC peering connection architecture, create a VPC, a subnet, and a route table.

  • Associate the subnet with the route table.

Step-2. Create the Internet Gateway and attach the VPC.

  • Edit and add the Internet Gateway in the route table.

Step-3. Create the EC2 instance with VPC-A network settings and a public IP enabled on the subnet and instance.

Step-4. Following the same steps, we have created another VPC, subnet, and route table.

  • Associate the Subnet on the route table and create EC2 Instance.

Step-5. We need to copy the .pem key from the local machine into the primary VPC-A instance to get SSH access to the VPC-B instance.

  • At this point, we cannot connect to the secondary VPC's EC2 instance via SSH.

Step-6. Create a peering connection.

  • Accept the Peer Request.

Step-7. Add the secondary IPv4 CIDR range, select the peering connection, and save it on the primary route table.

Step-8. Add the primary IPv4 CIDR range, select the peering connection, and save it on the secondary route table.

Step-9. Now we are able to access the secondary VPC's EC2 instance through the primary VPC's EC2 instance via the peering connection, as the sketch below shows.
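A hedged boto3 sketch of the same peering flow; the VPC IDs, route table IDs, and CIDR ranges are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # assumed region

# Request a peering connection between the two VPCs (placeholder IDs).
pcx = ec2.create_vpc_peering_connection(VpcId="vpc-aaaa", PeerVpcId="vpc-bbbb")
pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept the request (possible here because both VPCs share an account and region).
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each route table points at the other VPC's CIDR through the peering connection.
ec2.create_route(RouteTableId="rtb-aaaa", DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-bbbb", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)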

AWS-Virtual Private Cloud VPC (Subnet, Route table, Internet Gateway, NAT gateway)

A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can specify an IP address range for the VPC, add subnets, add gateways, and associate security groups.

  • Users can avoid underutilizing their resources during periods of low demand or overloading their infrastructure during peak periods. Overall, the advantages of using a VPC include improved security, greater flexibility, and scalability.

  • We are going to create a VPC in a particular availability zone and separate it into public and private subnets/route tables, as shown in the architecture diagram.

Step-1. Create a VPC with tag of VPC-A

  • Select and set the IPv4 CIDR range, and select "No IPv6 CIDR block".

Step-2. Create a Subnet with the VPC ID we created.

  • Verify the availability zone and the VPC's IPv4 CIDR, then provide the subnet range as the IPv4 subnet CIDR to create the subnet.

Step-3. Create a route table

  • Select the VPC and create the route table.

  • Once the route table is created, associate the subnet with it and enable "Auto-assign public IP" on the subnet.

Step-4. Create an Internet gateway and attach it with the VPC which we created.

  • Add the Internet gateway on the route table.

Step-5. Create an EC2 instance

  • In the network settings, select the VPC and subnet, and enable the public IP.

  • We are able to access the EC2 instance via SSH using its public IP.

Step-6. Now we need to create the private subnet and route table, associate the private subnet on the route table.

Step-7. Create an EC2 instance

  • In the network settings, select the VPC and the private subnet.

  • Log in to the public VPC instance and copy the .pem key from the local machine to get SSH access to the private instance.

  • We are able to log in to the public instance and connect to the private instance over the VPC's local route.

  • If we need internet access on the private instance to install any application, we need to create a NAT gateway.

Step-8. Create a NAT gateway

  • Select the public instance's subnet and allocate an Elastic IP.

Step-9. Add the NAT gateway to the private route table to get internet access on the private instance.

  • We successfully log in to the public instance via SSH, and from the public EC2 we are able to log in to the private instance and access the internet:
kannan@kannan-PC:~$ ssh -i apache.pem ubuntu@13.201.97.155
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 6.2.0-1017-aws x86_64)

ubuntu@ip-192-168-1-99:~$ ssh -i apache.pem ubuntu@192.168.2.221
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 6.2.0-1017-aws x86_64)


ubuntu@ip-192-168-2-221:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=50 time=1.90 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=50 time=1.55 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=50 time=1.56 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.546/1.671/1.904/0.164 ms
ubuntu@ip-192-168-2-221:~$ ping www.google.com
PING www.google.com (142.251.42.4) 56(84) bytes of data.
64 bytes from bom12s19-in-f4.1e100.net (142.251.42.4): icmp_seq=1 ttl=109 time=1.79 ms
64 bytes from bom12s19-in-f4.1e100.net (142.251.42.4): icmp_seq=2 ttl=109 time=1.58 ms
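For completeness, the NAT gateway step (Step-8/9) can also be scripted; a sketch with placeholder subnet and route-table IDs:

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # assumed region

# Allocate an Elastic IP and create the NAT gateway in the *public* subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(SubnetId="subnet-public", AllocationId=eip["AllocationId"])
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# The default route in the *private* route table goes out through the NAT gateway.
ec2.create_route(RouteTableId="rtb-private", DestinationCidrBlock="0.0.0.0/0",
                 NatGatewayId=nat_id)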

AWS-Key Management Service(KMS)

AWS Key Management Service (AWS KMS) lets you create, manage, and control cryptographic keys across your applications and AWS services.

  • The service is integrated with other AWS services making it easier to encrypt data you store in these services and control access to the keys that decrypt it.

  • Creating a KMS key: KMS > Customer managed keys > Create key.

  • Install the aws-encryption-cli to encrypt and decrypt files via the CLI.
sudo apt install python3-pip
sudo pip install aws-encryption-sdk-cli
aws-encryption-cli --version
  • AWS CLI commands to encrypt the file

kannan@kannan-PC:~$ aws kms encrypt \
    --key-id alias/kannan1 \
    --plaintext fileb://kms.txt \
    --output text \
    --query CiphertextBlob | base64 \
    --decode > kms_encrypt.txt
kannan@kannan-PC:~$ cat kms_encrypt.txt 
x����X�[���4u|��e�J�Q0X��U�
0f0d0_  `�He.0             p����gWI�u0s *�H��
              YU"�    I����2$y��|e!��l�\nų���5�%�����k�~d��~e�g=�+jI�N@g6ETkannan@kannan-PC:~$ 


  • AWS CLI commands to decrypt the file

kannan@kannan-PC:~$ aws kms decrypt \
    --ciphertext-blob fileb://kms_encrypt.txt \
    --key-id alias/kannan1 \
    --output text \
    --query Plaintext | base64 \
    --decode > kms_decrypt.txt
kannan@kannan-PC:~$ cat kms_decrypt.txt 
Test line for kms key 

  • Create directories to store the encrypted and decrypted files
mkdir encrypt
mkdir decrypt

  • Create a variable to store the ARN value that was generated for the KMS key
kannankey=arn:aws:kms:ap-south-1:155364343822:key/ef88420b-bbc5-4807-b1f3-c82eb5191c7f

kannan@kannan-PC:~$ cd encrypt/
kannan@kannan-PC:~/encrypt$ ls
example.txt.encrypted  kms.txt.encrypted
kannan@kannan-PC:~/encrypt$ cat kms.txt.encrypted 
xiCeJC�T��mb���w�����/'a8��_aws-crypto-public-keyDA9IoQRQ6f8U3WV8eoVxkQyhEZ1O/QXOXdr9L/Zx6bHP53ZEIfhYq26YJIshCIf8f8Q==aws-kmsLarn:aws:kms:ap-south-1:1550o0m0h��`�He.0���zp~0|-b*�H��807-b1f3-c82eb5191c7f�x4�u���l�\��?����<�Dya
              .�K�B�w
3����>����ǔXnL��U��cj9�1���g�%uray��߳�ɗ���x��0KYf�aE����6�j�@�Ϯ6�_k�!�Q�7x<�ǯ4u��V�6��G�������Vn�v<�%j��龎�����J��vz�u%aÌ�sg0e0b(��)!��
d9�G�Ɩ�.0$����%��
                 V�Ϗc;_���]��fl1�{
                                  o�檈R&\��\&��m6)L\,锌z!��S�<Ɪ,��kannan@kannan-PC:~/encrypt$ 
kannan@kannan-PC:~/encrypt$ cd ..
kannan@kannan-PC:~$ cd decrypt/
kannan@kannan-PC:~/decrypt$ ls
example.txt.encrypted.decrypted  kms.txt.encrypted.decrypted
kannan@kannan-PC:~/decrypt$ cat kms.txt.encrypted.decrypted 
Test line for kms key 
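The same round trip can be done from Python with boto3's KMS client; a sketch using the key alias created above (the region is assumed):

import boto3

kms = boto3.client("kms", region_name="ap-south-1")  # assumed region

# Encrypt a small payload with the customer managed key.
blob = kms.encrypt(KeyId="alias/kannan1", Plaintext=b"Test line for kms key")["CiphertextBlob"]

# Decrypt; KMS finds the key from metadata embedded in the ciphertext.
print(kms.decrypt(CiphertextBlob=blob)["Plaintext"].decode())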

We can also encrypt and decrypt S3 bucket objects and EBS volumes using the KMS key.

  • EC2 > EBS > Volumes > Create volume > enable "Encrypt this volume".

  • create an S3 bucket using CLI
kannan@kannan-PC:~$ aws s3 mb s3://kannandemo-bucket
make_bucket: kannandemo-bucket

  • select the bucket > properties > edit default encryption

  • select "Server-side encryption with AWS Key Management Service keys (SSE-KMS)"

  • choose "Choose from your AWS KMS keys"

  • S3 will then automatically encrypt and decrypt the objects inside the bucket, for example as below.
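A sketch of an upload with SSE-KMS set explicitly per object, reusing the demo bucket and key alias from above (the region is assumed):

import boto3

s3 = boto3.client("s3", region_name="ap-south-1")  # assumed region

# The object is encrypted with the chosen KMS key; GetObject decrypts
# transparently for callers that are allowed to use the key.
s3.put_object(
    Bucket="kannandemo-bucket",
    Key="sample.txt",
    Body=b"hello from kms demo",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/kannan1",
)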

To delete the KMS key, we need to schedule the key deletion; the waiting period is a minimum of 7 days.

AWS-Relational Database Service (RDS)

  • Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. Choose from eight popular engines: Amazon Aurora PostgreSQL-Compatible Edition, Amazon Aurora MySQL-Compatible Edition, RDS for PostgreSQL, RDS for MySQL, RDS for MariaDB, RDS for SQL Server, RDS for Oracle, and RDS for Db2. Deploy on premises with Amazon RDS on AWS Outposts or with elevated access to the underlying operating system and database environment using Amazon RDS Custom.

  • Now we are going to create a MySQL DB using RDS. We need to confirm that the required port is allowed in the security groups of your AWS account.

  • install mysql-client on the local machine
sudo apt install mysql-client -y

Once the DB is in the available state, select Modify > Connectivity > Public accessibility.

  • We can access the DB from the terminal with the endpoint, port, username, and password:
mysql -h demomysqldb.cg35jaodi4xh.ap-south-1.rds.amazonaws.com -P 3306 -u admin -p
kannan@kannan-PC:~$ mysql -h demomysqldb.cg35jaodi4xh.ap-south-1.rds.amazonaws.com -P 3306 -u admin -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 27
Server version: 8.0.33 Source distribution

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.02 sec)


  • We can create a PostgreSQL DB with the "Easy create" method.
sudo apt install postgresql-client
  • Let's look at the other method: creating a DB with "Standard create".

  • install postgresql-client on the local machine
sudo apt install postgresql-client -y
  • We can access the DB from the terminal with the endpoint, port, username, and password:
psql --host=database-1.cg35jaodi4xh.ap-south-1.rds.amazonaws.com --port=5432 --username=postgres --dbname=demodb --password

AWS CLI for AWS RDS PgSQL

To create a Postgres DB
$ aws rds create-db-instance --db-instance-identifier demo-postgresql --db-instance-class db.t3.micro --engine postgres --master-username postgres --master-user-password passcode123 --allocated-storage 20

To describe and get the endpoint url
$ aws rds describe-db-instances --db-instance-identifier demo-postgresql | grep Address

To access the remote postgresql
$ psql --host=<endpoint_url> --port=5432 --username=postgres --dbname=postgres --password

To delete the db instance without final snapshot and automated backups
$ aws rds delete-db-instance --db-instance-identifier demo-postgresql --skip-final-snapshot --delete-automated-backups
