
Introduction to AWS

By: Ragul.M
20 November 2024 at 16:13

Hi folks, welcome to my blog. Here we are going to look at an introduction to AWS.

Amazon Web Services (AWS) is the world's leading cloud computing platform, offering a wide range of services to help businesses scale and innovate. Whether you're building an application, hosting a website, or storing data, AWS provides reliable and cost-effective solutions for individuals and organizations of all sizes.

What is AWS?
AWS is a comprehensive cloud computing platform provided by Amazon. It offers on-demand resources such as compute power, storage, networking, and databases on a pay-as-you-go basis. This eliminates the need for businesses to invest in and maintain physical servers.

Core Benefits of AWS

  1. Scalability: AWS allows you to scale your resources up or down based on your needs.
  2. Cost-Effective: With its pay-as-you-go pricing, you only pay for what you use.
  3. Global Availability: AWS has data centers worldwide, ensuring low latency and high availability.
  4. Security: AWS follows a shared responsibility model, offering top-notch security features like encryption and access control.
  5. Flexibility: Supports multiple programming languages, operating systems, and architectures.

Key AWS Services
Here are some of the most widely used AWS services:

  1. Compute:
    • Amazon EC2: Virtual servers to run your applications.
    • AWS Lambda: Serverless computing to run code without managing servers.
  2. Storage:
    • Amazon S3: Object storage for data backup and distribution.
    • Amazon EBS: Block storage for EC2 instances.
  3. Database:
    • Amazon RDS: Managed relational databases like MySQL, PostgreSQL, and Oracle.
    • Amazon DynamoDB: NoSQL database for high-performance applications.
  4. Networking:
    • Amazon VPC: Create isolated networks in the cloud.
    • Amazon Route 53: Domain Name System (DNS) and traffic management.
  5. AI/ML:
    • Amazon SageMaker: Build, train, and deploy machine learning models.
  6. DevOps Tools:
    • AWS CodePipeline: Automates the release process.
    • Amazon EKS: Managed Kubernetes service.
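To get a feel for how these services are driven in practice, here is a short, illustrative AWS CLI session (the bucket name and file are placeholders, and the commands assume the AWS CLI is installed and credentials are configured):

# Configure credentials and a default region once
aws configure

# S3: create a bucket and upload a file (bucket names must be globally unique)
aws s3 mb s3://my-example-bucket-12345
aws s3 cp backup.txt s3://my-example-bucket-12345/

# EC2: list instances in the configured region
aws ec2 describe-instances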

Conclusion
AWS has revolutionized the way businesses leverage technology by providing scalable, secure, and flexible cloud solutions. Whether you're a developer, an enterprise, or an enthusiast, understanding AWS basics is the first step toward mastering the cloud. Start your AWS journey today and unlock endless possibilities!

Follow for more and happy learning :)


Windows | Command Lines

1 September 2024 at 16:37
Go forward: cd path\to\folder
Go backward: cd ..\..
Show list of files and directories: dir
Show list of files and directories, including hidden files: dir /a
Clear the screen: cls
Show specific types of files: dir *.png | dir *.jpg
Help for a specific command: ipconfig /? | cls /?
Create a new directory: mkdir myDir | mkdir path\to
Remove or delete directories: if your directory is empty, rmdir myDir; otherwise, rmdir /S myDir
Change drives: C: | D:
Show path variables: path
Show available drive names: wmic logicaldisk get name
Change color: color 0B | color 90; to get back to the default, just use color
Create a file: echo somecontent > file.txt
Delete a file: del filename.ext
Read the contents of a file: type file.ext
Overwrite a file: echo newcontent > samefile.ext
Append to a file: echo appendingcontent >> samefile.ext
Copy files: copy test.txt mydir
Introduction to Command Prompt:

The command line in Windows is also known as Command Prompt or CMD.

On Mac and Linux systems, it's called Terminal.


To open Command Prompt, follow these steps: Open Start, search for 'Command Prompt', and select it.

Alternatively, you can press the keyboard shortcut (Windows + R), type 'cmd', and press Enter.


By default, the first line indicates the Windows version, and the prompt shows the current directory.


Right-click on the top title bar and select Properties. From there, you can adjust options such as the font, layout, and colors.


To move from one directory to another, use the cd command followed by the folder name.

For example, to move to the 'Desktop' directory from C:\user\ranjith, type cd desktop.

C:\user\ranjith> cd desktop

To go to the 'python' folder inside 'Desktop', type cd desktop\python.

C:\user\ranjith> cd desktop\python

To return to the parent directory, use cd .. For example, if you are in C:\user\ranjith\desktop\python and want to go back two levels to C:\user\ranjith, type cd ..\..

To navigate directly to a specific directory in one line, provide the full path. For example, to go directly to C:\user\ranjith\desktop\python, type cd C:\user\ranjith\desktop\python.

To list files and directories, use the dir command.

For example, C:\user\ranjith> dir will show the files, folders, free space, and storage details in the current directory.

To view the contents of a specific folder, use dir followed by the folder path.

For example, C:\user\ranjith> dir Desktop\Python will display all files and folders in the Python folder on the Desktop.

To view hidden or system files, use the dir /a command.

For example, C:\user\ranjith> dir /a will display all files, including hidden and system files.

To clear the command prompt screen, use the cls command.

For example, C:\user\ranjith> cls will clear the screen and remove previous command outputs.

Opening Files and Viewing History:

To list files of a specific type in a directory, use the dir command with a filter.

For example, C:\user\ranjith> dir *.png will list all PNG image files in the current directory.

To open a specific file, enter its filename.

For instance, C:\user\ranjith\python> binary_search.png would open the binary_search.png file.

To navigate through your command history, use the Up and Down arrow keys. Pressing the Up arrow key will cycle through previous commands, while the Down arrow key will move forward through the commands.


To get help on a command, use the /? option.

For example, C:\user\ranjith> ipconfig /? will show help for the ipconfig command.

Creating and Removing Folders:

To create a new folder, use the mkdir command followed by the folder name.

For example, C:\user\ranjith\python> mkdir temp will create a folder named temp.
To remove a folder, use the rmdir command followed by the folder name.

For example, C:\user\ranjith\python> rmdir temp will delete the temp folder.
Note that the rm command is used for files in some systems, but in Command Prompt, you use del for files and rmdir for directories.

Creating and removing directories:

To create a directory, use the command mkdir txt. 
To remove a directory and its contents, use rmdir /s txt.

This will delete all files and subfolders in the directory as well as the directory itself.


Use Home to move the cursor to the beginning of the line and End to move it to the end; Ctrl + Left Arrow and Ctrl + Right Arrow move the cursor one word at a time.


To check the version, use ver.


To start additional command windows, use start.

To exit the current window, use exit.



Drives and Color Commands:

To list all drives, use: wmic logicaldisk get name. This will show all available drives.

C:\user\ranjith> wmic logicaldisk get name


To switch to a different drive, type the drive letter followed by a colon (e.g., E:).

C:\user\ranjith> E:

To list files in the current drive, use: dir.

E:\> dir

To view hidden files, use: dir /a.

E:\> dir /a


To see a directory tree, use: tree.

E:\> tree

Changing Text and Background Colors:

E:\> color /?

To change the color of text and background, use: color /? to see help options.

For example, color a changes the text color to green.


E:\> color
E:\> color a

color fc sets a bright white background (if 'f' is not given, it defaults to black) and changes text color to bright red.


E:\> color fc


These commands help you manage files and customize the appearance of your Command Prompt.

File Attributes:

To view file attributes and get help, use: attrib /?.


C:\user\ranjith\Youtube> attrib /?

To see the attributes of a file, use: attrib sample.txt.

C:\user\ranjith\Desktop\youtube> attrib sample.txt

Replace sample.txt with your file name.
To add the "hidden" attribute to a file, use: attrib +h sample.txt.

C:\user\ranjith\Desktop\youtube> attrib +h sample.txt

To remove the "hidden" attribute, use: attrib -h sample.txt.

C:\user\ranjith\Desktop\youtube> attrib -h sample.txt

Attributes can also be combined; for example, attrib +r -h sample.txt adds the read-only attribute while removing the hidden one.

Deleting and Creating Files:

To delete a file, use: del sample.txt.

C:\user\ranjith\Desktop\youtube> del sample.txt

del <filename> - deletes the file.

To create a new file, use: echo. > sample.txt. This creates an empty file.

C:\Users\mrkis\Desktop\Youtube> echo. > sample.txt

To write text to a file, use: echo Kishore > sample.txt. This writes "Kishore" to the file.

C:\Users\mrkis\Desktop\Youtube> echo Kishore > sample.txt

C:\Users\mrkis\Desktop\Youtube> type sample.txt

To view the contents of the file, use: type sample.txt.
Appending Text to Files:

C:\Users\mrkis\Desktop\Youtube> echo hello > sample.txt

To add text to the end of a file without overwriting existing content, use: echo world >> sample.txt.

C:\Users\mrkis\Desktop\Youtube> type sample.txt

This will add "world" to the end of sample.txt.


To see the updated content, use: type sample.txt.
Copying Files:

To copy a file to another location or with a new name, use: copy sample.txt test2.txt. This copies sample.txt to a new file named test2.txt in the same directory. If you want to specify a different directory, provide the path instead of just the filename.

C:\Users\mrkis\Desktop\Youtube> copy sample.txt test2.txt

This guide helps with managing file attributes, performing file operations, and copying files between locations.

Copying Files Between Disks:

To copy a file from one disk to another, use: copy sample.txt E:. This copies sample.txt from the current location to the E: drive.

C:\Users\mrkis\Desktop\Youtube>
copy sample.txt E:

Using XCOPY for Copying Directories:

To copy files and directories, including subdirectories, use: xcopy test1 test2 /s. This copies test1 (which can be a file or directory) to test2, including all subfolders and files. Here, sample.txt or test1 is the source to copy, and /s includes subdirectories and their files.

C:\Users\mrkis\Desktop\Youtube>
xcopy test1 test2 /s

Moving Files:

To move files from one location to another, use: move test1 test2. This command moves test1 to test2. If test2 is a folder, test1 will be moved into it. If test2 is a file name, test1 will be renamed to test2.

In summary:

copy source destination copies files.
xcopy source destination /s copies files and directories, including subdirectories.
move source destination moves files or renames them.


Demystifying IP Addresses and Netmasks: The Complete Overview

24 August 2024 at 13:14

In this blog, we will learn about IP addresses and netmasks.

IP

The Internet Protocol (IP) is a unique identifier for your device, similar to how a mobile number uniquely identifies your phone.

IP addresses are typically written as four octets for IPv4, with each octet one byte in size, and as eight groups for IPv6, with each group two bytes in size.

Examples:

  • IPv4: 192.168.43.64
  • IPv6: 2001:db8:3333:4444:5555:6666:7777:8888

For the purposes of this discussion, we will focus on IPv4.

Do we really need the four-octet structure with dots between them?

The answer is NO.

The only requirement for an IPv4 address is that it must be 4 bytes in size. However, it does not have to be written as four octets or even with dots separating them.

Let's test this by fetching Google's IP address using the nslookup command.

Then convert the dotted address to a single decimal number using the bc calculator in a Bash shell, and use that number directly.

And you can see it's working.

This is because the octet structure and the dots between them are only for human readability. Computers do not interpret dots; they just need an IP address that is 4 bytes in size, and that’s it.
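Since the original screenshots are not reproduced here, a rough reconstruction of that experiment in a Bash shell might look like this (the resolved IP is only an example and will vary):

$ nslookup google.com          # suppose it returns 142.250.183.78
$ echo "142*256^3 + 250*256^2 + 183*256 + 78" | bc
2398795598
$ ping 2398795598              # reaches 142.250.183.78 - the dots are only for humans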

The range for IPv4 addresses is from 0.0.0.0 to 255.255.255.255.

Types of IP Addresses

IP addresses are classified into two main types: Public IPs and Private IPs.

Private IP addresses are used for communication between local devices without connecting to the Internet. They are free to use and relatively secure, since they are not directly reachable from the Internet.

You can find your private IP address by using the ifconfig command.


The private IP address ranges are as follows:

10.0.0.0 to 10.255.255.255
172.16.0.0 to 172.31.255.255
192.168.0.0 to 192.168.255.255

Public IP addresses are Internet-facing addresses provided by an Internet Service Provider (ISP). These addresses are used to access the internet and are not free.

By default:

Private IP to Private IP communication is possible.
Public IP to Public IP communication is possible.

However:

Public IP to Private IP communication is not possible.
Private IP to Public IP communication is not possible.

Nevertheless, these types of communication can occur through Network Address Translation (NAT), which is typically used by your home router. This is why you can access the Internet even with a private IP address.

Netmasks
Netmasks are used to define the range of IP addresses within a network.

Take the netmask 255.255.255.0. In binary it is 11111111.11111111.11111111.00000000, so you can see 24 ones and 8 zeros.

Here is how 255 converts to binary using the repeated-division method.

255 ÷ 2 = 127 remainder 1

127 ÷ 2 = 63 remainder 1

63 ÷ 2 = 31 remainder 1

31 ÷ 2 = 15 remainder 1

15 ÷ 2 = 7 remainder 1

7 ÷ 2 = 3 remainder 1

3 ÷ 2 = 1 remainder 1

1 ÷ 2 = 0 remainder 1

So, the binary value of 255 is 11111111.

Using this, we can find the number of IP addresses and their range.

Since we have 8 zeros, the number of IPs = 2^8 = 256. So, for the network 10.4.3.0/24, the usable IP range is 10.4.3.1 to 10.4.3.254, the network address is 10.4.3.0, and the broadcast IP is 10.4.3.255.

We can also write this in CIDR (Classless Inter-Domain Routing) notation as 10.4.3.0/24, where 24 is the number of ones in the netmask.
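If the ipcalc utility happens to be installed, the same numbers can be cross-checked from a shell:

$ ipcalc 10.4.3.0/24   # prints the network, usable host range, and broadcast address for the block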

That's it.

Kindly let me know in the comments if you have any queries on these topics.

Tasks – Docker

19 August 2024 at 10:09
  1. Install Docker on your local machine. Verify the installation by running the hello-world container.
  2. Pull the nginx image from Docker Hub and run it as a container. Map port 80 of the container to port 8080 of your host.
  3. Create a Dockerfile for a simple Node.js application that serves "Hello World" on port 3000. Build the Docker image with tag my-node-app and run a container. Below is the sample index.js file.

const express = require('express');
const app = express();

app.get('/', (req, res) => {
    res.send('Hello World');
});

app.listen(3000, () => {
    console.log('Server is running on port 3000');
});
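A minimal Dockerfile sketch for this task (assuming the code above is saved as index.js and express is declared in package.json) could look like this:

# Use an official Node.js base image
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY package*.json ./
RUN npm install

# Copy the application code
COPY . .

EXPOSE 3000
CMD ["node", "index.js"]

Build and run it with docker build -t my-node-app . followed by docker run -p 3000:3000 my-node-app.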

4. Tag the Docker image my-node-app from Task 3 with a version tag v1.0.0.

5. Push the tagged image from Task 4 to your Docker Hub repository.

6. Run a container from the ubuntu image and start an interactive shell session inside it. You can run commands like ls, pwd, etc.

7. Create a Dockerfile for a Go application that uses multi-stage builds to reduce the final image size. The application should print "Hello Docker". Sample Go code:


package main

import "fmt"

func main() {
    fmt.Println("Hello Docker")
}
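One possible multi-stage Dockerfile for this task (base image tags are illustrative):

# Build stage: compile a static binary with the full Go toolchain
FROM golang:1.21 AS builder
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /hello main.go

# Final stage: ship only the binary, so the image stays tiny
FROM scratch
COPY --from=builder /hello /hello
CMD ["/hello"]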

8. Create a Docker volume and use it to persist data for a MySQL container. Verify that the data persists even after the container is removed. Try creating a test db.
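A sketch of how this task might be carried out and verified (container name, volume name, and password are illustrative):

docker volume create mysql-data
docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=secret -v mysql-data:/var/lib/mysql mysql:8
# give the server a few seconds to initialize, then create a test db
docker exec mydb mysql -uroot -psecret -e "CREATE DATABASE testdb;"

# remove the container, start a fresh one on the same volume, and confirm testdb survived
docker rm -f mydb
docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=secret -v mysql-data:/var/lib/mysql mysql:8
docker exec mydb mysql -uroot -psecret -e "SHOW DATABASES;"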

9. Create a custom Docker network and run two containers (e.g., nginx and mysql) on that network. Verify that they can communicate with each other.

10. Create a docker-compose.yml file to define and run a multi-container Docker application with nginx as a web server and mysql as a database.
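A minimal docker-compose.yml along these lines (service names and password are illustrative) might be:

version: '3'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data:

For task 11, docker-compose up -d --scale web=3 would then start three nginx instances (the fixed host port mapping would need to be dropped or turned into a port range to avoid conflicts).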

11. Scale the nginx service in the previous Docker Compose setup to run 3 instances.

12. Create a bind mount to share data between your host system and a Docker container running nginx. Modify a file on your host and see the changes reflected in the container.

13. Add a health check to a Docker container running a simple Node.js application. The health check should verify that the application is running and accessible.

Sample health-check API in Node.js:


const express = require('express');
const app = express();

// A simple route to return the status of the application
app.get('/health', (req, res) => {
    res.status(200).send('OK');
});

// Example main route
app.get('/', (req, res) => {
    res.send('Hello, Docker!');
});

// Start the server on port 3000
const port = 3000;
app.listen(port, () => {
    console.log(`App is running on http://localhost:${port}`);
});
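The health check itself goes in the Dockerfile. A sketch, assuming the code above is saved as app.js (alpine images ship a BusyBox wget, so no extra packages are needed):

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
# Mark the container unhealthy if /health stops answering
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "app.js"]

docker ps will then report the container's status as (healthy) or (unhealthy).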

14. Modify a Dockerfile to take advantage of Docker’s build cache, ensuring that layers that don’t change are reused.

15. Run a PostgreSQL database in a Docker container and connect to it using a database client from your host.

16. Create a custom Docker network and run a Node.js application and a MongoDB container on the same network. The Node.js application should connect to MongoDB using the container name.


const mongoose = require('mongoose');

mongoose.connect('mongodb://mongodb:27017/mydatabase', {
  useNewUrlParser: true,
  useUnifiedTopology: true,
}).then(() => {
  console.log('Connected to MongoDB');
}).catch(err => {
  console.error('Connection error', err);
});

17. Create a docker-compose.yml file to set up a MEAN (MongoDB, Express.js, Angular, Node.js) stack with services for each component.

18. Use the docker stats command to monitor resource usage (CPU, memory, etc.) of running Docker containers.

docker run -d --name busybox1 busybox sleep 1000
docker run -d --name busybox2 busybox sleep 1000

19. Create a Dockerfile for a simple Python Flask application that serves "Hello World".

20. Configure Nginx as a reverse proxy to forward requests to a Flask application running in a Docker container.

21. Use docker exec to run a command inside a running container.


docker run -d --name ubuntu-container ubuntu sleep infinity

22. Modify a Dockerfile to create and use a non-root user inside the container.

23. Use docker logs to monitor the output of a running container.

24. Use docker system prune to remove unused Docker data (e.g., stopped containers, unused networks).

25. Run a Docker container in detached mode and verify that it’s running in the background.

26. Configure a Docker container to use a different logging driver (e.g., json-file or syslog).

27. Use build arguments in a Dockerfile to customize the build process.


from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, Docker Build Arguments!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
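A sketch of a Dockerfile using a build argument with this app (the APP_PORT argument is made up for illustration; the app above would also need to read it from the environment for the value to take full effect):

FROM python:3.9-slim
# Build-time argument with a default; override with --build-arg APP_PORT=8000
ARG APP_PORT=5000
WORKDIR /app
COPY . /app
RUN pip install flask
# Persist the build argument as a runtime environment variable
ENV APP_PORT=${APP_PORT}
EXPOSE ${APP_PORT}
CMD ["python", "app.py"]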

28. Set CPU and memory limits for a Docker container (for busybox)
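For example (the limits here are chosen arbitrarily):

# Cap the container at half a CPU core and 256 MB of RAM
docker run -d --name limited-busybox --cpus="0.5" --memory="256m" busybox sleep 1000

# Confirm the limits and current usage
docker stats limited-busybox --no-stream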

Docker Ep 12 – Cheatsheet

19 August 2024 at 01:27

Here's a Docker cheat sheet that covers the most commonly used Docker commands, organized by category.

Docker Basics

  • docker --version: Show the Docker version installed on your system.
  • docker info: Display system-wide information, including Docker version, number of containers, and images.
  • docker help: Get help on Docker commands.

Docker Images

  • docker images: List all Docker images on your system.
  • docker pull <image>: Download an image from a Docker registry (e.g., Docker Hub).
  • docker build -t <image_name> .: Build an image from a Dockerfile in the current directory and tag it with a name.
  • docker tag <image_id> <new_image_name>: Tag an image with a new name.
  • docker rmi <image>: Remove one or more images.
  • docker history <image>: Show the history of an image (layers).

Docker Containers

  • docker ps: List all running containers.
  • docker ps -a: List all containers (running and stopped).
  • docker run <image>: Run a container from an image.
  • docker run -d <image>: Run a container in detached mode (in the background).
  • docker run -it <image>: Run a container in interactive mode with a terminal.
  • docker run -p <host_port>:<container_port> <image>: Map a port from the host to the container.
  • docker stop <container>: Stop a running container.
  • docker start <container>: Start a stopped container.
  • docker restart <container>: Restart a running container.
  • docker rm <container>: Remove a stopped container.
  • docker logs <container>: View the logs of a container.
  • docker exec -it <container> <command>: Execute a command inside a running container (e.g., bash to open a shell).

Docker Networks

  • docker network ls: List all Docker networks.
  • docker network create <network_name>: Create a new Docker network.
  • docker network inspect <network_name>: View detailed information about a network.
  • docker network connect <network_name> <container>: Connect a container to a network.
  • docker network disconnect <network_name> <container>: Disconnect a container from a network.
  • docker network rm <network_name>: Remove a Docker network.

Docker Volumes

  • docker volume ls: List all Docker volumes.
  • docker volume create <volume_name>: Create a new Docker volume.
  • docker volume inspect <volume_name>: View detailed information about a volume.
  • docker volume rm <volume_name>: Remove a Docker volume.
  • docker run -v <volume_name>:<container_path> <image>: Mount a volume inside a container.

Docker Compose

  • docker-compose up: Start the services defined in a docker-compose.yml file.
  • docker-compose down: Stop and remove containers, networks, volumes, and images created by docker-compose up.
  • docker-compose build: Build or rebuild services defined in a docker-compose.yml file.
  • docker-compose ps: List containers managed by Docker Compose.
  • docker-compose logs: View logs for services managed by Docker Compose.
  • docker-compose exec <service> <command>: Execute a command in a running service.

Dockerfile Directives

  • FROM: Specifies the base image.
  • WORKDIR: Sets the working directory inside the container.
  • COPY: Copies files from the host to the container.
  • RUN: Executes a command in the container.
  • CMD: Specifies the command to run when the container starts.
  • EXPOSE: Specifies the port on which the container will listen.
  • ENV: Sets environment variables.
  • ENTRYPOINT: Configures the container to run as an executable.

Docker Cleanup Commands

  • docker system prune: Remove unused data (stopped containers, unused networks, dangling images, etc.).
  • docker container prune: Remove all stopped containers.
  • docker image prune: Remove unused images.
  • docker volume prune: Remove all unused volumes.
  • docker network prune: Remove all unused networks.

Miscellaneous

  • docker inspect <container_or_image>: Return low-level information on Docker objects (containers, images, volumes, etc.).
  • docker stats: Display a live stream of container(s) resource usage statistics.
  • docker top <container>: Display the running processes of a container.
  • docker cp <container>:<path> <local_path>: Copy files from a container to the host or vice versa.

Docker EP 11 – Docker Networking & Docker Volumes

19 August 2024 at 00:56

Alex is tasked with creating a new microservices-based web application for a growing e-commerce platform. The application needs to handle everything from user authentication to inventory management, and Alex decides to use Docker to containerize the different services.

Here are the services, each with its code and Dockerfile.

Auth Service (auth-service)

This service handles user authentication.


# auth_service.py
from flask import Flask, request, jsonify

app = Flask(__name__)

# Dummy user database
users = {
    "user1": "password1",
    "user2": "password2"
}

@app.route('/login', methods=['POST'])
def login():
    data = request.json
    username = data.get('username')
    password = data.get('password')
    
    if username in users and users[username] == password:
        return jsonify({"message": "Login successful"}), 200
    else:
        return jsonify({"message": "Invalid credentials"}), 401

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Dockerfile:

# Use the official Python image.
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install Flask
RUN pip install flask

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Define environment variable
ENV NAME auth-service

# Run app.py when the container launches
CMD ["python", "auth_service.py"]


Inventory Service (inventory-service)

This service manages inventory data (inventory_service.py).

from flask import Flask, request, jsonify

app = Flask(__name__)

# Dummy inventory database
inventory = {
    "item1": {"name": "Item 1", "quantity": 10},
    "item2": {"name": "Item 2", "quantity": 5}
}

@app.route('/inventory', methods=['GET'])
def get_inventory():
    return jsonify(inventory), 200

@app.route('/inventory/<item_id>', methods=['POST'])
def update_inventory(item_id):
    data = request.json
    if item_id in inventory:
        inventory[item_id]["quantity"] = data.get("quantity")
        return jsonify({"message": "Inventory updated"}), 200
    else:
        return jsonify({"message": "Item not found"}), 404

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)


Dockerfile

# Use the official Python image.
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install Flask
RUN pip install flask

# Make port 5001 available to the world outside this container
EXPOSE 5001

# Define environment variable
ENV NAME inventory-service

# Run inventory_service.py when the container launches
CMD ["python", "inventory_service.py"]

Dev Service (dev-service)

This service could be a simple service used during development for testing or managing files (dev_service.py).


from flask import Flask, request, jsonify
import os

app = Flask(__name__)

@app.route('/files', methods=['GET'])
def list_files():
    files = os.listdir('/app/data')
    return jsonify(files), 200

@app.route('/files/<filename>', methods=['GET'])
def read_file(filename):
    try:
        with open(f'/app/data/{filename}', 'r') as file:
            content = file.read()
        return jsonify({"filename": filename, "content": content}), 200
    except FileNotFoundError:
        return jsonify({"message": "File not found"}), 404

@app.route('/files/<filename>', methods=['POST'])
def write_file(filename):
    data = request.json.get("content", "")
    with open(f'/app/data/{filename}', 'w') as file:
        file.write(data)
    return jsonify({"message": f"File {filename} written successfully"}), 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5002)

Dockerfile


# Use the official Python image.
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install Flask
RUN pip install flask

# Make port 5002 available to the world outside this container
EXPOSE 5002

# Define environment variable
ENV NAME dev-service

# Run dev_service.py when the container launches
CMD ["python", "dev_service.py"]

Auth Service: http://localhost:5000/login (POST request with JSON {"username": "user1", "password": "password1"})

Inventory Service: http://localhost:5001/inventory (GET or POST request)

Dev Service:

  • List files: http://localhost:5002/files (GET request)
  • Read file: http://localhost:5002/files/<filename> (GET request)
  • Write file: http://localhost:5002/files/<filename> (POST request with JSON {"content": "Your content here"})

The Lonely Container

You start by creating a simple Flask application for user authentication. After writing the code, you decide to containerize it using Docker.

docker build -t auth-service .
docker run -d -p 5000:5000 --name auth-service auth-service

The service is up and running, and you can access it at http://localhost:5000. But there's one problem: it's lonely. Your auth-service is a lone container in the vast sea of Docker networking. If you want to add more services, they need a way to communicate with each other.

  1. docker build -t auth-service .
  • This command builds a Docker image from the Dockerfile in the current directory (.) and tags it as auth-service.

2. docker run -d -p 5000:5000 --name auth-service auth-service

  • -d: Runs the container in detached mode (in the background).
  • -p 5000:5000: Maps port 5000 on the host to port 5000 in the container, making the Flask app accessible at http://localhost:5000.
  • --name auth-service: Names the container auth-service.
  • auth-service: The name of the image to run.

The Bridge of Communication

You decide to expand the application by adding a new inventory service. But how will these two services talk to each other? Enter the bridge network: a magical construct that allows containers to communicate within their own private world.

You create a user-defined bridge network to allow your containers to talk to each other by name rather than by IP address.

docker network create ecommerce-network
docker run -d --name auth-service --network ecommerce-network auth-service
docker run -d --name inventory-service --network ecommerce-network inventory-service

Now, your services are not lonely anymore. The auth-service can talk to the inventory-service simply by using its name, like calling a friend across the room. In your code, you can reference inventory-service by name to establish a connection.

docker network create ecommerce-network

  • Creates a user-defined bridge network called ecommerce-network. This network allows containers to communicate with each other using their container names as hostnames.

docker run -d --name auth-service --network ecommerce-network auth-service

  • Runs the auth-service container on the ecommerce-network. The container can now communicate with other containers on the same network using their names.

docker run -d --name inventory-service --network ecommerce-network inventory-service

  • Runs the inventory-service container on the ecommerce-network. The auth-service container can now communicate with the inventory-service using the name inventory-service.

The City of Services

As your project grows, you realize that your application will eventually need to scale. Some services will run on different servers, possibly in different data centers. How will they communicate? It's time to build a city: a network that spans multiple hosts.

You decide to use Docker Swarm, a tool that lets you manage a cluster of Docker hosts. You create an overlay network, a mystical web that allows containers across different servers to communicate as if they were right next to each other.

docker network create -d overlay ecommerce-overlay
docker service create --name auth-service --network ecommerce-overlay auth-service
docker service create --name inventory-service --network ecommerce-overlay inventory-service

Now, no matter where your containers are running, they can still talk to each other. It's like giving each container a magic phone that works anywhere in the world.

docker network create -d overlay ecommerce-overlay

  • Creates an overlay network named ecommerce-overlay. Overlay networks are used for multi-host communication, typically in a Docker Swarm or Kubernetes environment.

docker service create --name auth-service --network ecommerce-overlay auth-service

  • Deploys the auth-service as a service on the ecommerce-overlay network. Services are used in Docker Swarm to manage containers across multiple hosts.

docker service create --name inventory-service --network ecommerce-overlay inventory-service

  • Deploys the inventory-service as a service on the ecommerce-overlay network, allowing it to communicate with the auth-service even if they are running on different physical or virtual machines.

The Treasure Chest of Data

Your services are talking, but they need to remember things, like user data and inventory levels. Enter Docker volumes: the treasure chests where your containers can store their precious data.

For your inventory-service, you create a volume to store all the inventory information:

docker volume create inventory-data
docker run -d --name inventory-service --network ecommerce-network -v inventory-data:/app/data inventory-service

Now, even if your inventory-service container is destroyed and replaced, the data remains safe in the inventory-data volume. It's like having a secure vault where you keep all your valuables.

docker volume create inventory-data

  • Creates a named Docker volume called inventory-data. Named volumes persist data even if the container is removed, and they can be shared between containers.

docker run -d --name inventory-service --network ecommerce-network -v inventory-data:/app/data inventory-service

  • -v inventory-data:/app/data: Mounts the inventory-data volume to the /app/data directory inside the container. Any data written to /app/data inside the container is stored in the inventory-data volume.

The Hidden Pathway

Sometimes, you need to work directly with files on your host machine, like when debugging or developing. You create a bind mount, a secret pathway between your host and the container.

docker run -d --name dev-service --network ecommerce-network -v $(pwd)/data:/app/data dev-service

Now, as you make changes to files in your host's data directory, those changes are instantly reflected in your container. It's like having a secret door in your house that opens directly into your office at work.

-v $(pwd)/data:/app/data:

  • This creates a bind mount, where the data directory in the current working directory on the host ($(pwd)/data) is mounted to /app/data inside the container. Changes made to files in the data directory on the host are reflected inside the container and vice versa. This is particularly useful for development, as it allows you to edit files on your host and see the changes immediately inside the running container.

The Seamless City

As your application grows, Docker Compose comes into play. It's like the city planner, helping you manage all the roads (networks) and buildings (containers) in your bustling metropolis. With a simple docker-compose.yml file, you define your entire application stack:

version: '3'
services:
  auth-service:
    image: auth-service
    networks:
      - ecommerce-network
  inventory-service:
    image: inventory-service
    networks:
      - ecommerce-network
    volumes:
      - inventory-data:/app/data

networks:
  ecommerce-network:

volumes:
  inventory-data:

  1. version: '3': Specifies the version of the Docker Compose file format.
  2. services:: Defines the services (containers) that make up your application.
  • auth-service:: Defines the auth-service container.
    • image: auth-service: Specifies the Docker image to use for this service.
    • networks:: Specifies the networks this service is connected to.
  • inventory-service:: Defines the inventory-service container.
    • volumes:: Specifies the volumes to mount. Here, the inventory-data volume is mounted to /app/data inside the container.

3. networks:: Defines the networks used by the services. ecommerce-network is the custom bridge network created for communication between the services.

4. volumes:: Defines the volumes used by the services. inventory-data is a named volume used by the inventory-service.

Now, you can start your entire city with a single command,

docker-compose up

Everything springs to life: services find each other, data is stored securely, and your city of containers runs like a well-oiled machine.

Docker EP – 10: Let's Dockerize a Flask Application

18 August 2024 at 11:49

Let's develop a simple Flask application.

  1. Set up the project directory: Create a new directory for your Flask project.

mkdir flask-docker-app
cd flask-docker-app

2. Create a virtual environment (optional but recommended):


python3 -m venv venv
source venv/bin/activate

3. Install Flask


pip install Flask

4. Create a simple Flask app:

In the flask-docker-app directory, create a file named app.py with the following content,


from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, Dockerized Flask!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

5. Test the Flask app: Run the Flask application to ensure it's working.

python app.py

Visit http://127.0.0.1:5000/ in your browser. You should see "Hello, Dockerized Flask!".

Dockerize the Flask Application

  1. Create a Dockerfile: In the flask-docker-app directory, create a file named Dockerfile with the following content:

# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install Flask
RUN pip install --no-cache-dir Flask

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Define environment variable
ENV FLASK_APP=app.py

# Run app.py when the container launches
CMD ["python", "app.py"]

2. Create a .dockerignore file:

In the flask-docker-app directory, create a file named .dockerignore to ignore unnecessary files during the Docker build process:


venv
__pycache__
*.pyc
*.pyo

3. Build the Docker image:

In the flask-docker-app directory, run the following command to build your Docker image:


docker build -t flask-docker-app .

4. Run the Docker container:

Run the Docker container using the image you just built,

docker run -p 5000:5000 flask-docker-app

5. Access the Flask app in Docker: Visit http://localhost:5000/ in your browser. You should see "Hello, Dockerized Flask!" running in a Docker container.

You have successfully created a simple Flask application and Dockerized it. The Dockerfile allows you to package your app with its dependencies and run it in a consistent environment.

Docker Ep 6: Running 'Hello, World!' with BusyBox: A Docker Adventure

15 August 2024 at 04:22

Once upon a time in the city of Athipati, there was a young coder named Arivanandham. Arivanandham had recently heard whispers of a magical tool called Docker, which promised to simplify the process of running applications. Eager to learn more, Arivanandham decided to embark on a quest: a quest to run the famous "Hello, World!" using the mysterious BusyBox Docker image.

Today, we're going to take our first step by creating and running a container. And what better way to start than with a tiny yet powerful image called BusyBox? Let's dive in and explore how to run our first container using BusyBox.

Step 1: Discovering BusyBox on Docker Hub

Our journey begins at the Docker Hub, the vast library of images ready to be transformed into containers. Let's search for "BusyBox" on Docker Hub.

Upon searching, you'll find the official BusyBox repository at the top. BusyBox is renowned for its compact size, about 1 megabyte, which makes it an excellent choice for quick downloads and speedy container spins.

Step 2: Exploring BusyBox Tags

Before we proceed, let's check out the Tags tab on the BusyBox page. Tags represent different versions of the image. We're going to pick tag 1.36 for our container. This specific tag will be our guide in this Docker adventure.

Step 3: Setting Up Your Docker Environment

To start, we need to open a terminal. If you're on Docker for Mac, Docker for Windows, or Linux, you can simply open your default terminal. If you're using Docker Toolbox, open the Docker Quickstart Terminal.

Step 4: Checking Local Images

When you instruct Docker to create a container, it first checks your local system to see if the image is already available. Let's verify what images we currently have:

docker images

If this is your first time, you'll see that there are no images available yet. But don't worry; we're about to change that.

Step 5: Running Your First Container

Now, let's run our first container! We'll use the docker run command, specifying the BusyBox image and tag 1.36. We'll also tell Docker to execute a simple command: echo "Hello, World!".

docker run busybox:1.36 echo "Hello, World!"

Here's what happens next:

  • Docker checks for the BusyBox 1.36 image locally.
  • If it's not found, Docker will download the image from the remote repository.
  • Once the image is downloaded, Docker creates and runs the container.

And just like that, you should see the terminal output:


Hello, World!

Congratulations! You've just run your first Docker container.

Step 6: Verifying the Image Download

Let's run the docker images command again:

docker images

You'll now see the BusyBox 1.36 image listed. The image has a unique ID, confirming that it's stored locally on your system.

Step 7: Running the Container Again

Now that we have the BusyBox image locally, let's run the same container again:

docker run busybox:1.36 echo "Hello, World!"

This time, notice how quickly the command executes. Docker uses the local image, skipping the download step, and instantly spins up the container.

Step 8: Exploring the Container’s File System

Let's try something new. We'll list all the contents in the root directory of the BusyBox container:

docker run busybox:1.36 ls /

You'll see Docker output the list of all directories and files at the root level of the container.

Step 9: Running a Container in Interactive Mode

To dive deeper, we can run the container in an interactive mode, which allows us to interact with the container as if it were a tiny, isolated Linux system. We'll use the -i (interactive) and -t (pseudo-TTY) flags:


docker run -it busybox:1.36

Now you're inside the container! Try running commands like ls to see the contents. You can even create a new file:


touch a.txt
ls

You'll see a.txt listed in the output. To exit the container, simply type exit.

Step 10: Understanding Container Lifecycle

It's important to note that once you exit a container, it shuts down. If you run the container again using the same command, Docker spins up a brand-new instance. The file you created earlier (a.txt) won't be there, because each container instance is ephemeral, meaning it doesn't retain data from previous runs unless you explicitly save it.

And there you have it! You've successfully created, explored, and understood your first Docker container using the BusyBox image. This is just the beginning of what you can achieve with Docker. As you continue your journey, you'll discover how containers can simplify development, testing, and deployment, all while keeping your environment clean and isolated.

Types of Version Control System

18 July 2024 at 11:05
A version control system tracks changes to a file or set of files over time. There are three types of version control systems: Local Version Control Systems, Centralized Version Control Systems, and Distributed Version Control Systems. Popular version control systems and tools: Here's a brief overview of some commonly used version control tools and their pros and […]

GitHub, Git & Jenkins

13 February 2024 at 03:32

Github:

It allows collaboration with developers all over the world. Open-source solutions enable potential developers to contribute and share knowledge to benefit the global community. At a high level, GitHub is a website and cloud-based service that helps developers store and manage their code, as well as track and control changes to their code. To understand exactly what GitHub is, you need to know two connected principles:

  • Version control
  • Git

What Is Version Control?

Version control helps developers track and manage changes to a software project's code. As a software project grows, version control becomes essential. Take WordPress as an example.

At this point, WordPress is a pretty big project. If a core developer wanted to work on one specific part of the WordPress codebase, it wouldn't be safe or efficient to have them directly edit the "official" source code.

Instead, version control lets developers safely work through branching and merging.

With branching, a developer duplicates part of the source code (called the repository). The developer can then safely make changes to that part of the code without affecting the rest of the project.

Then, once the developer gets his or her part of the code working properly, he or she can merge that code back into the main source code to make it official.
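In Git terms, that branch-and-merge workflow looks roughly like this (the branch name, file, and commit message are illustrative):

git checkout -b fix-widget        # duplicate the current state into a new branch
# ...edit files, then record the change...
git add widget.php
git commit -m "Fix widget rendering"
git checkout main                 # switch back to the official line of development
git merge fix-widget              # fold the finished work back into the main source code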

What Is Git?

Git is a specific open-source version control system created by Linus Torvalds in 2005. It is used for coordinating work among several people on a project and for tracking progress over time. It is used for source code management in software development.

GitHub is a Git repository hosting service which provides a web-based graphical interface that helps every team member to work together on the project from anywhere and makes it easy to collaborate.

Jenkins

Jenkins is a Java-based open-source automation platform with plugins designed for continuous integration.

It helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration, and continuous delivery, making it easier for developers and DevOps engineers to integrate changes to the project and for consumers to get a new build. It is a server-based system that runs in servlet containers such as Apache Tomcat.

Git vs Jenkins: What are the differences?

Git and Jenkins are both popular tools. Git is a distributed version control system, and Jenkins is a continuous integration and automation tool. Let's explore the key differences between the two:

  1. Code Management vs. Automated Builds: Git is primarily used for code management and version control. It allows developers to track changes, collaborate on code, and handle code branching and merging efficiently. On the other hand, Jenkins focuses on automated builds, testing, and deployment. It helps in integrating code changes from multiple team members and automates the build process, including compiling, testing, and packaging the software.
  2. Local vs. Remote: Git operates locally on the developer's machine, allowing them to work offline and commit changes to their local repository. Developers can then push their changes to a remote repository, facilitating collaboration with other team members. In contrast, Jenkins is a remote tool that runs on a dedicated server or cloud platform. It continuously monitors the code repository and triggers automated builds or tests based on predefined conditions or schedules.
  3. Version Control vs. Continuous Integration: Git's primary focus is on version control, tracking changes to files and directories over time. It provides features like branching, merging, and resolving conflicts to manage code versions effectively. Jenkins, on the other hand, emphasizes continuous integration (CI), which involves frequently integrating code changes from multiple developers into a shared repository. Jenkins automatically builds and tests the integrated code, highlighting any conflicts or issues that arise during the process.
  4. User Interface: Git primarily relies on a command-line interface (CLI) for executing various operations. However, there are also graphical user interface (GUI) clients available for more user-friendly interactions. Jenkins, on the other hand, provides a web-based graphical user interface that allows users to configure and manage Jenkins jobs, view build reports, and monitor the status of automated builds and tests.
  5. Plugin Ecosystem: Git has an extensive ecosystem of third-party plugins that extend its functionality and integrations with other development tools. These plugins cover various areas, including code review, issue tracking, and build automation. Jenkins, being an automation tool, has a rich plugin ecosystem as well. These plugins enable users to integrate Jenkins with different build tools, testing frameworks, version control systems, and deployment platforms, enhancing its capabilities and flexibility.
  6. Ease of Use: Git can have a steep learning curve for beginners, particularly when it comes to understanding concepts like branching, merging, and resolving conflicts. However, once users become familiar with its core functionality, it provides a powerful and flexible version control system. Jenkins, on the other hand, aims to simplify the CI process and provide an intuitive user interface for managing builds and automation. While some initial setup and configuration may be required, Jenkins offers ease of use in terms of managing continuous integration workflows.

In summary, Git focuses on code management and version control, while Jenkins specializes in continuous integration and automation. Git operates locally, while Jenkins runs remotely on dedicated servers. Git’s primary interface is command-line-based, with additional GUI clients available, whereas Jenkins offers a web-based graphical user interface. Both Git and Jenkins have plugin ecosystems that extend their functionality, but Jenkins prioritizes automation-related integrations. Finally, while Git has a steeper learning curve, Jenkins aims to provide ease of use in managing continuous integration workflows.

Source: https://stackshare.io/stackups/git-vs-jenkins#:~:text=Git%20operates%20locally%2C%20while%20Jenkins,web%2Dbased%20graphical%20user%20interface.

Day – 1 in Commands Practice

6 February 2024 at 15:58

Today I learned the commands below.

List Commands

ls -> lists all the files
ls -l -> long listing format
ls -lh -> long listing with human-readable file sizes
ls -lS -> sorts files by size
ls -lSh -> sorts files by size, with human-readable sizes
ls -R -> lists all files recursively, including subdirectories

whoami -> shows the current user
hostname -> shows the hostname
hostname -I -> shows the IP address of the machine

Date Commands

date -> displays the current date
date --date="tomorrow" -> prints tomorrow's date
date --date="3 years ago" -> prints the date 3 years ago

CAT Commands (concatenate)

cat > test1.text -> creates a new file
cat test1.text | less -> shows the file one page at a time
q -> to quit

ECHO Commands

echo "Hello world" -> prints the given text.

History Commands

history -> displays the last 1000 commands executed on the machine (the limit can be increased)
history 10 -> shows the last 10 commands
history | head -> shows the first 10 commands
history | tail -> shows the last 10 commands
!1000 -> re-executes the command with event number 1000 in the history

Remove command

rm -i test1.text -> asks for confirmation before deleting the file
rm test1.text -> removes the file
rm *.text -> removes all files with the .text extension

Manual command

man ls -> shows all the information about the ls command
man date -> displays all the info about the date command
z -> scroll down a page (inside the man pager)
w -> scroll up a page


Terraform code for AWS PostgreSQL RDS

7 January 2024 at 17:19

create directory postgres and navigate
$ mkdir postgres && cd postgres
create main.tf file
$ vim main.tf

provider "aws" {
}
resource "aws_security_group" "rds_sg" {
name = "rds_sg"
ingress {
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_db_instance" "myinstance" {
engine = "postgres"
identifier = "myrdsinstance"
allocated_storage = 20
engine_version = "14"
instance_class = "db.t3.micro"
username = "myrdsuser"
password = "myrdspassword"
parameter_group_name = "default.postgres14"
vpc_security_group_ids = ["${aws_security_group.rds_sg.id}"]
skip_final_snapshot = true
publicly_accessible = true
}

output "rds_endpoint" {
value = "${aws_db_instance.myinstance.endpoint}"
}

save and exit
$ terraform init
$ terraform plan
$ terraform apply -auto-approve
Install postgres client in local machine
$ sudo apt install -y postgresql-client
To access AWS postgresql RDS instance
$ psql -h <end_point_URL> --port=5432 --username=myrdsuser --password --dbname=mydb
To destroy postgresql RDS instance
$ terraform destroy -auto-approve

Terraform code for AWS MySQL RDS

7 January 2024 at 17:12

create directory mysql and navigate
$ mkdir mysql && cd mysql
create main.tf
$ vim main.tf

provider "aws" {
}
resource "aws_security_group" "rds_sg" {
name = "rds_sg"
ingress {
from_port = 3306
to_port = 3306
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_db_instance" "myinstance" {
engine = "mysql"
identifier = "myrdsinstance"
allocated_storage = 20
engine_version = "5.7"
instance_class = "db.t2.micro"
username = "myrdsuser"
password = "myrdspassword"
parameter_group_name = "default.mysql5.7"
vpc_security_group_ids = ["${aws_security_group.rds_sg.id}"]
skip_final_snapshot = true
publicly_accessible = true
}

output "rds_endpoint" {
value = "${aws_db_instance.myinstance.endpoint}"
}

save and exit
$ terraform init
$ terraform plan
$ terraform apply -auto-approve
install the mysql client on the local machine
$ sudo apt install mysql-client
To access the MySQL instance
$ mysql -h <end_point_URL> -P 3306 -u <username> -p
To destroy the mysql RDS instance
$ terraform destroy -auto-approve

code for s3 bucket creation and public access

7 January 2024 at 12:55
provider "aws" {
region = "ap-south-1"
}

resource "aws_s3_bucket" "example" {
bucket = "example-my"
}

resource "aws_s3_bucket_ownership_controls" "ownership" {
bucket = aws_s3_bucket.example.id
rule {
object_ownership = "BucketOwnerPreferred"
}
}

resource "aws_s3_bucket_public_access_block" "pb" {
bucket = aws_s3_bucket.example.id

block_public_acls = false
block_public_policy = false
ignore_public_acls = false
restrict_public_buckets = false
}

resource "aws_s3_bucket_acl" "acl" {
depends_on = [aws_s3_bucket_ownership_controls.ownership]
bucket = aws_s3_bucket.example.id
acl = "private"
}

S3 bucket creation and object storage

7 January 2024 at 12:51

create directory s3-demo and navigate
$ mkdir s3-demo && cd s3-demo
create a demo file sample.txt and contents
$ echo "this is sample object to store in demo-bucket" > sample.txt
create main.tf file
$ vim main.tf

provider "aws" {
region = "ap-south-1"
}

resource "aws_s3_bucket" "example" {
bucket = "mydemo-bucket1"
}

resource "aws_s3_object" "object" {
bucket = aws_s3_bucket.example.bucket
key = "sample.txt"
source = "./sample.txt"
}

save and exit
$ terraform init
$ terraform plan
$ terraform apply -auto-approve

create S3 bucket using terraform

7 January 2024 at 12:45

create directory s3 and navigate to the directory
$ mkdir s3 && cd s3
create main.tf file
$ vim main.tf

provider "aws" {
region = "ap-south-1"
}

resource "aws_s3_bucket" "my_bucket" {
bucket = "mydemo-bucket"
}

save and exit
$ terraform init
$ terraform plan
$ terraform apply -auto-approve
To destroy the bucket
$ terraform destroy -auto-approve

How to create AWS EC2 instance using Terraform

7 January 2024 at 06:36

create directory ec2 and navigate to the directory
$ mkdir ec2 && cd ec2
create main.tf file
$ vim main.tf

provider "aws" {
region = "ap-south-1"
}

resource "aws_instance" "app_server" {
ami = "ami-03f4878755434977f"
instance_type = "t2.nano"
subnet_id = "subnet-0ccba3b8cfd0645e2"
key_name = "awskey"
associate_public_ip_address = "true"
tags = {
Name = "demo-server"
}
}

output "public_ip" {
description = "public ip of the instance"
value = aws_instance.app_server.public_ip
}

save and exit
initialize the terraform
$ terraform init
$ terraform plan
$ terraform apply -auto-approve
This will create the AWS EC2 instance and output the public IP of the instance.

To destroy the instance
$ terraform destroy -auto-approve
