HAProxy EP 1: Traffic Police for Web

In the world of web applications, imagine you’re running a very popular pizza place. Every evening, customers line up for a delicious slice of pizza. But if your single cashier can’t handle all the orders at once, customers might get frustrated and leave.

What if you could have a system that ensures every customer gets served quickly and efficiently? Enter HAProxy, a tool that helps manage and balance the flow of web traffic so that no single server gets overwhelmed.

Here’s a straightforward guide to understanding HAProxy, installing it, and setting it up to make your web application run smoothly.

What is HAProxy?

HAProxy stands for High Availability Proxy. It’s like a traffic director for your web traffic. It takes incoming requests (like people walking into your pizza place) and decides which server (or pizza station) should handle each request. This way, no single server gets too busy, and everything runs more efficiently.

Why Use HAProxy?

  • Handles More Traffic: Distributes incoming traffic across multiple servers so no single one gets overloaded.
  • Increases Reliability: If one server fails, HAProxy directs traffic to the remaining servers.
  • Improves Performance: Ensures that users get faster responses because the load is spread out.

Installing HAProxy

Here’s how you can install HAProxy on a Linux system:

  1. Open a Terminal: You’ll need to access your command line interface to install HAProxy.
  2. Install HAProxy: Type the following commands and press Enter:

sudo apt-get update
sudo apt-get install haproxy

  3. Check Installation: Once installed, you can verify that HAProxy is running by typing:


sudo systemctl status haproxy

This command shows you the current status of HAProxy, ensuring it’s up and running.

Configuring HAProxy

HAProxy’s configuration file is where you set up how it should handle incoming traffic. This file is usually located at /etc/haproxy/haproxy.cfg. Let’s break down the main parts of this configuration file.
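
Before editing, it's a good idea to back up the stock file and then open it in an editor (the path is the standard Debian/Ubuntu location mentioned above; nano is just one editor option):

# Keep a backup of the shipped configuration, then edit the live copy
sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
sudo nano /etc/haproxy/haproxy.cfg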

1. The global Section

The global section is like setting the rules for the entire pizza place. It defines general settings for HAProxy itself, such as how it should operate, what kind of logging it should use, and what resources it needs. Here’s an example of what you might see in the global section:


global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660
    user haproxy
    group haproxy
    daemon

Let’s break it down line by line:

  • log /dev/log local0: This line tells HAProxy to send log messages to the system log at /dev/log and to use the local0 logging facility. Logs help you keep track of what’s happening with HAProxy.
  • log /dev/log local1 notice: Similar to the previous line, but it uses the local1 logging facility and sets the log level to notice, which is a type of log message indicating important events.
  • chroot /var/lib/haproxy: This line tells HAProxy to run in a restricted area of the file system (/var/lib/haproxy). It’s a security measure to limit access to the rest of the system.
  • stats socket /run/haproxy/admin.sock mode 660: This sets up a special socket (a kind of communication endpoint) for administrative commands. The mode 660 part defines the permissions for this socket, allowing specific users to manage HAProxy.
  • user haproxy: Specifies that HAProxy should run as the user haproxy. Running as a specific user helps with security.
  • group haproxy: Similar to the user directive, this specifies that HAProxy should run under the haproxy group.
  • daemon: This tells HAProxy to run as a background service, rather than tying up a terminal window.

2. The defaults Section

The defaults section sets up default settings for HAProxy’s operation and is like defining standard procedures for the pizza place. It applies default configurations to both the frontend and backend sections unless overridden. Here’s an example of a defaults section:


defaults
    log     global
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

Here’s what each line means:

  • log global: Tells HAProxy to use the logging settings defined in the global section for logging.
  • option httplog: Enables HTTP-specific logging. This means HAProxy will log details about HTTP requests and responses, which helps with troubleshooting and monitoring.
  • option dontlognull: Prevents logging of connections that don’t generate any data (null connections). This keeps the logs cleaner and more relevant.
  • timeout connect 5000ms: Sets the maximum time HAProxy will wait when trying to connect to a backend server to 5000 milliseconds (5 seconds). If the connection takes longer, it will be aborted.
  • timeout client 50000ms: Defines the maximum time HAProxy will wait for data from the client to 50000 milliseconds (50 seconds). If the client doesn’t send data within this time, the connection will be closed.
  • timeout server 50000ms: Similar to timeout client, but it sets the maximum time to wait for data from the server to 50000 milliseconds (50 seconds).

3. Frontend Section

The frontend section defines how HAProxy listens for incoming requests. Think of it as the entrance to your pizza place.


frontend http_front
    bind *:80
    default_backend http_back

  • frontend http_front: This is a name for your frontend configuration.
  • bind *:80: Tells HAProxy to listen for traffic on port 80 (the standard port for web traffic).
  • default_backend http_back: Specifies where the traffic should be sent (to the backend section).

4. Backend Section

The backend section describes where the traffic should be directed. Think of it as the different pizza stations where orders are processed.


backend http_back
    balance roundrobin
    server app1 192.168.1.2:5000 check
    server app2 192.168.1.3:5000 check
    server app3 192.168.1.4:5000 check

  • backend http_back: This is a name for your backend configuration.
  • balance roundrobin: Distributes traffic evenly across servers.
  • server app1 192.168.1.2:5000 check: Specifies a server (app1) at IP address 192.168.1.2 on port 5000. The check option ensures HAProxy checks if the server is healthy before sending traffic to it.
  • server app2 and server app3: Additional servers to handle traffic.
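
Putting the four sections together, the complete /etc/haproxy/haproxy.cfg built from the snippets above looks like this. The backend IP addresses and port 5000 are the example values used earlier, and one extra line (mode http) is added so the HTTP-specific options actually apply; adjust the servers to match your own setup.

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend http_back

backend http_back
    balance roundrobin
    server app1 192.168.1.2:5000 check
    server app2 192.168.1.3:5000 check
    server app3 192.168.1.4:5000 check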

Testing Your Configuration

After setting up your configuration, you’ll need to restart HAProxy to apply the changes:


sudo systemctl restart haproxy

To check if everything is working, you can use a web browser or a tool like curl to send requests to HAProxy and see if it correctly distributes them across your servers.
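
For example, a minimal check might look like this (assuming HAProxy is running on the local machine and listening on port 80 as configured above):

# Validate the configuration file syntax
sudo haproxy -c -f /etc/haproxy/haproxy.cfg

# Send a few requests through HAProxy; with roundrobin balancing,
# consecutive requests should be served by different backend servers
curl -i http://localhost/
curl -i http://localhost/
curl -i http://localhost/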

ntfy.sh – To save you from unnoticed events

Alex Pandian was the system administrator for a tech company, responsible for managing servers, maintaining network stability, and ensuring that everything ran smoothly.

With many scripts running daily and long-running processes that needed monitoring, Alex was constantly flooded with notifications.

Alex Pandian: "Every day, I have to go through dozens of emails and alerts just to find the ones that matter," Alex muttered while sipping coffee in the server room.

Alex Pandian: "There must be a better way to streamline all this information."

Despite using several monitoring tools, the notifications from these systems were scattered and overwhelming. Alex needed a more efficient method to receive alerts only when crucial events occurred, such as script failures or the completion of resource-intensive tasks.

Determined to find a better system, Alex began searching online for a tool that could help consolidate and manage notifications.

After reading through countless forums and reviews, Alex stumbled upon a discussion about ntfy.sh, a service praised for its simplicity and flexibility.

"This looks promising," Alex thought, excited by the ability to publish and subscribe to notifications using a straightforward, topic-based system. The idea of having notifications sent directly to a phone or desktop without needing complex configurations was exactly what Alex was looking for.

Alex decided to consult with Sam, a fellow system admin known for their expertise in automation and monitoring.

Alex Pandian: "Hey Sam, have you ever used ntfy.sh?"

Sam: "Absolutely. It's a lifesaver for managing notifications. How do you plan to use it?"

Alex Pandian: "I'm thinking of using it for real-time alerts on script failures and long-running commands. Can you show me how it works?"

Sam: "Of course."

Sam said this with a smile, eager to guide Alex through setting up ntfy.sh to improve workflow efficiency.

Together, Sam and Alex began configuring ntfy.sh for Alex’s environment. They focused on setting up topics and integrating them with existing systems to ensure that important notifications were delivered promptly.

Step 1: Identifying Key Topics

Alex identified the main areas where notifications were needed:

  • script-failures: To receive alerts whenever a script failed.
  • command-completions: To notify when long-running commands finished.
  • server-health: For critical server health alerts.
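
Before wiring these topics into any scripts, a quick way to sanity-check one of them is to publish a test message by hand (the topic name comes from the list above; on the public ntfy.sh server anyone who knows a topic name can see its messages, so real topic names should be hard to guess):

# Publish a one-off test message to the script-failures topic
curl -d "Test: this is what a script-failure alert will look like." ntfy.sh/script-failures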

Step 2: Subscribing to Topics

Sam showed Alex how to subscribe to these topics using ntfy.sh on a mobile device and desktop. This ensured that Alex would receive notifications wherever they were, without having to constantly check email or dashboards.


# Subscribe to topics
ntfy subscribe script-failures
ntfy subscribe command-completions
ntfy subscribe server-health

Step 3: Automating Notifications

Sam explained how to use bash scripts and curl to send notifications to ntfy.sh whenever specific events occurred.

"For example, if a script fails, you can automatically send an alert to the 'script-failures' topic," Sam demonstrated.


# Notify on script failure
./backup-script.sh || curl -d "Backup script failed!" ntfy.sh/script-failures

Alex was impressed by the simplicity and efficiency of this approach. "I can automate all of this?" Alex asked.

"Definitely," Sam replied. "You can integrate it with cron jobs, monitoring tools, and more. It's a great way to keep track of important events without getting bogged down by noise."
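
For instance, here is a rough sketch of what that cron integration could look like: a crontab entry (edited with crontab -e) that runs a nightly backup and posts to the script-failures topic if it fails. The script path is a placeholder, and the topic name follows the earlier examples.

# Run the backup at 2:00 AM every day and alert the script-failures topic on failure
0 2 * * * /path/to/backup-script.sh || curl -d "Nightly backup failed!" ntfy.sh/script-failures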

With the basics in place, Alex began applying ntfy.sh to various real-world scenarios, streamlining the notification process and improving overall efficiency.

Monitoring Script Failures

Alex set up automated alerts for critical scripts that ran daily, ensuring that any failures were immediately reported. This allowed Alex to address issues quickly, minimizing downtime and improving system reliability.


# Notify on critical script failure
./critical-task.sh || curl -d "Critical task script failed!" ntfy.sh/script-failures

Tracking Long-Running Commands

Whenever Alex initiated a long-running command, such as a server backup or data migration, notifications were sent upon completion. This enabled Alex to focus on other tasks without constantly checking on progress.


# Notify on long-running command completion
long-command && curl -d "Long command completed successfully." ntfy.sh/command-completions

Server Health Alerts

To monitor server health, Alex integrated ntfy.sh with existing monitoring tools, ensuring that any critical issues were immediately flagged.


# Send server health alert
curl -d "Server CPU usage is critically high!" ntfy.sh/server-health

As with any new tool, there were challenges to overcome. Alex encountered a few hurdles, but with Sam’s guidance, these were quickly resolved.

Challenge: Managing Multiple Notifications

Initially, Alex found it challenging to manage multiple notifications and ensure that only critical alerts were prioritized. Sam suggested using filters and priorities to focus on the most important messages.


# Subscribe via the JSON stream with a filter so only high-priority alerts come through
curl -s "ntfy.sh/script-failures/json?priority=high"

Challenge: Scheduling Notifications

Alex wanted to schedule notifications for regular maintenance tasks and reminders. Sam introduced Alex to the at command (with cron as the option for recurring jobs) for scheduling automated alerts.

# Schedule a one-time notification for the next maintenance window via at
echo 'curl -d "Time for weekly server maintenance." ntfy.sh/server-health' | at 8:00 AM Saturday


Sam gave Alex some more examples.

Monitoring Disk Space

As a system administrator, you can use ntfy.sh to receive alerts when disk space usage reaches a critical level. This helps prevent issues related to insufficient disk space.


# Check disk space and notify if usage is over 80%
disk_usage=$(df / | grep / | awk '{ print $5 }' | sed 's/%//g')
if [ "$disk_usage" -gt 80 ]; then
  curl -d "Warning: Disk space usage is at ${disk_usage}%." ntfy.sh/disk-space
fi

Alerting on Website Downtime

You can use ntfy.sh to monitor the status of a website and receive notifications if it goes down.


# Check website status and notify if it's down
website="https://example.com"
status_code=$(curl -o /dev/null -s -w "%{http_code}" "$website")

if [ "$status_code" -ne 200 ]; then
  curl -d "Alert: $website is down! Status code: $status_code." ntfy.sh/website-monitor
fi

Reminding for Daily Tasks

You can set up ntfy.sh to send you daily reminders for important tasks, ensuring that you stay on top of your schedule.


# Schedule one-time reminders via at (use cron for reminders that repeat every day)
echo 'curl -d "Time to review your daily tasks!" ntfy.sh/daily-reminders' | at 9:00 AM
echo 'curl -d "Stand-up meeting at 10:00 AM." ntfy.sh/daily-reminders' | at 9:50 AM

Alerting on High System Load

Monitor system load and receive notifications when it exceeds a certain threshold, allowing you to take action before it impacts performance.

# Check system load and notify if it's high
load=$(awk '{ print $1 }' /proc/loadavg)  # 1-minute load average
threshold=2.0

if (( $(echo "$load > $threshold" | bc -l) )); then
  curl -d "Warning: System load is high: $load" ntfy.sh/system-load
fi

Notify on Backup Completion

Receive a notification when a backup process completes, allowing you to verify its success.

# Notify on backup completion
backup_command="/path/to/backup_script.sh"
$backup_command && curl -d "Backup completed successfully." ntfy.sh/backup-status || curl -d "Backup failed!" ntfy.sh/backup-status

Notifying on Container Events with Docker

Integrate ntfy.sh with Docker to send alerts for specific container events, such as when a container stops unexpectedly.


# Notify on Docker container stop event
container_name="my_app"
container_status=$(docker inspect -f '{{.State.Status}}' $container_name)

if [ "$container_status" != "running" ]; then
  curl -d "Alert: Docker container $container_name has stopped." ntfy.sh/docker-alerts
fi

Integrating with CI/CD Pipelines

Use ntfy.sh to notify you about the status of CI/CD pipeline stages, ensuring you stay informed about build successes or failures.


# Example GitLab CI/CD YAML snippet
stages:
  - build

build_job:
  stage: build
  script:
    - make build
  after_script:
    - if [ "$CI_JOB_STATUS" == "success" ]; then
        curl -d "Build succeeded for commit $CI_COMMIT_SHORT_SHA." ntfy.sh/ci-cd-status;
      else
        curl -d "Build failed for commit $CI_COMMIT_SHORT_SHA." ntfy.sh/ci-cd-status;
      fi

Notifying on SSH Login to a Server

Let's try it with Docker. The Dockerfile below sets up an SSH server and hooks a notification script into the PAM session setup:


FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
# Set root password for SSH access (change 'your_password' to your desired password)
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
COPY ntfy-ssh.sh /usr/bin/ntfy-ssh.sh
RUN chmod +x /usr/bin/ntfy-ssh.sh
RUN echo "session optional pam_exec.so /usr/bin/ntfy-ssh.sh" >> /etc/pam.d/sshd
RUN apt-get -y update; apt-get -y install curl
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

And here is the script that sends the notification (saved as ntfy-ssh.sh, as referenced in the Dockerfile):


#!/bin/bash
# Only notify when a new session is opened (not when it closes)
if [ "${PAM_TYPE}" = "open_session" ]; then
  curl \
    -H prio:high \
    -H tags:warning \
    -d "SSH login: ${PAM_USER} from ${PAM_RHOST}" \
    ntfy.sh/syed-alerts
fi
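
To try this end to end, a rough sketch of building and running the image (the image name ntfy-ssh, container name ssh-test, and host port 2222 are arbitrary choices; the Dockerfile and ntfy-ssh.sh above are assumed to sit in the current directory):

# Build the image and start a container with SSH exposed on host port 2222
docker build -t ntfy-ssh .
docker run -d --name ssh-test -p 2222:22 ntfy-ssh

# Logging in triggers the PAM hook, which sends the notification
ssh root@localhost -p 2222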

With ntfy.sh as an integral part of daily operations, Alex found a renewed sense of balance and control. The once overwhelming chaos of notifications was now a manageable stream of valuable information.

As Alex reflected on the journey, it was clear that ntfy.sh had transformed not just the way notifications were managed, but also the overall approach to system administration.

In a world full of noise, ntfy.sh had provided a clear and effective way to stay informed without distractions. For Alex, it was more than just a tool; it was a new way of managing systems efficiently.

Committing HTML Code to a GitHub Repository Using Visual Studio Code (VSCode)

To commit HTML code to a GitHub repository using Visual Studio Code (VSCode), follow these steps:

  1. Install Git: Make sure Git is installed on your system. You can download it from Git's official website and follow the installation instructions.

  2. Set up Git in VSCode:

    • Open VSCode and open your project folder.
    • Press Ctrl + Shift + P (or Cmd + Shift + P on macOS) to open the Command Palette.
    • Type "Git: Initialize Repository" and select your project folder to initialize Git.
  3. Create a new GitHub repository:

    • Go to GitHub and log in to your account.
    • Click on the "+" icon in the top-right corner and select "New repository."
    • Follow the instructions to create a new repository on GitHub.
  4. Link the local repository to GitHub:

    • In VSCode, open the terminal by pressing Ctrl + ` (backtick) or navigating to View > Terminal in the top menu.
    • Use the following commands to link your local repository to the GitHub repository you created:
   git remote add origin <repository_url>

Replace <repository_url> with the URL of your GitHub repository.

  5. Add and commit HTML files:

    • In VSCode, navigate to the Source Control view by clicking on the source control icon in the left sidebar (or press Ctrl + Shift + G).
    • You'll see a list of changed files. Stage the HTML files you want to commit by clicking the + button next to the file names.
    • Enter a commit message in the text box at the top of the Source Control view to describe the changes you made.
    • Click the checkmark icon to commit the changes.
  6. Push changes to GitHub:

    • After committing the changes, click on the ellipsis (...) icon in the Source Control view and select "Push."
    • This action will push your committed changes to the GitHub repository.

Remember, these instructions assume you've set up a GitHub repository and have the necessary permissions to push changes to it. If you encounter any authentication issues during the push, make sure your GitHub credentials are correctly configured in Git or that you've set up SSH keys for GitHub authentication.
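
If Git has never been configured on the machine, here is a quick hedged example of the one-time identity setup and a check that the remote is wired up correctly (replace the placeholder name and email with your own):

# One-time Git identity setup, then confirm the GitHub remote
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
git remote -v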

Step 1: Set up Git and GitHub

  1. Install Git: Download and install Git from Git's official website.

  2. Create a GitHub repository: Go to GitHub and log in to your account. Create a new repository by clicking the "+" icon in the top-right corner and selecting "New repository."

Step 2: Create an index.html file

  1. Open Visual Studio Code.
  2. Click on File in the top-left corner, select Open Folder..., and choose or create a new folder for your project.

Step 3: Create and code the index.html file

  1. In VSCode, right-click on the folder you created and select New File. Name the file index.html.
  2. Add HTML code to the index.html file. For example:
<!DOCTYPE html>
<html>
<head>
  <title>My First Web Page</title>
</head>
<body>
  <h1>Hello, World!</h1>
  <p>This is a simple HTML page.</p>
</body>
</html>

Step 4: Initialize Git repository and commit changes

  1. Open the integrated terminal in VSCode by going to Terminal > New Terminal.

  2. Initialize Git in your project folder by typing the following command in the terminal:

git init

  3. Add the index.html file to the staging area by typing:

git add index.html

  4. Commit the changes with a descriptive message:

git commit -m "Add index.html file with basic HTML structure"

Step 5: Link local repository to GitHub

  1. Go to your GitHub repository and copy the repository URL (ending in .git).

  2. In the terminal, link your local repository to the GitHub repository by running the following command:

git remote add origin <repository_url>

Replace <repository_url> with the URL you copied from GitHub.

Step 6: Push changes to GitHub

  1. Finally, push your committed changes to GitHub:
git push -u origin main

This command pushes your changes from the local main branch to the main branch on GitHub. (If your repository's default branch is named master instead, use that name in the command.)

After completing these steps, your index.html file will be committed and pushed to your GitHub repository. You can verify the changes by visiting your GitHub repository in a web browser.
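
You can also confirm from the terminal that the commit went through and the working tree is clean (plain Git commands, no VSCode needed):

# Show the most recent commit and check for uncommitted changes
git log --oneline -1
git status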

Author:
Muthukumar.K
