Meet Jafer, a backend engineer tasked with ensuring the new microservice they are building can handle high traffic smoothly. The microservice is a Flask application that needs to be accessed over TCP, and Jafer decided to use HAProxy to act as a TCP proxy to manage incoming traffic.
This guide will walk you through how Jafer sets up HAProxy to work as a TCP proxy for a sample Flask application.
Why Use HAProxy as a TCP Proxy?
HAProxy as a TCP proxy operates at Layer 4 (Transport Layer) of the OSI model. It forwards raw TCP connections from clients to backend servers without inspecting the contents of the packets. This is ideal for scenarios where:
You need to handle non-HTTP traffic, such as databases or other TCP-based applications.
You want to perform load balancing without application-level inspection.
Your services are using protocols other than HTTP/HTTPS.
Operating at this layer, HAProxy cannot read the contents of the packets, but it can still identify the client's IP address and port.
Step 1: Set Up a Sample Flask Application
First, Jafer created a simple Flask application that listens on a TCP port. Create a file named app.py:
from flask import Flask

app = Flask(__name__)

@app.route('/', methods=['GET'])
def home():
    return "Hello from Flask over TCP!"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)  # Run the app on port 5000
Step 2: Dockerize the Flask Application
To make the Flask app easy to deploy, Jafer decided to containerize it using Docker.
Create a Dockerfile
# Use an official Python runtime as a parent image
FROM python:3.9-slim
# Set the working directory
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install Flask
RUN pip install flask
# Make port 5000 available to the world outside this container
EXPOSE 5000
# Run app.py when the container launches
CMD ["python", "app.py"]
To build and run the Docker container, use the following commands:
docker build -t flask-app .
docker run -d -p 5000:5000 flask-app
This will start the Flask application on port 5000.
Step 3: Configure HAProxy as a TCP Proxy
Now, Jafer needs to configure HAProxy to act as a TCP proxy for the Flask application.
Create an HAProxy configuration file named haproxy.cfg
global
    log stdout format raw local0
    maxconn 4096

defaults
    mode tcp                # Operating in TCP mode
    log global
    option tcplog
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend tcp_front
    bind *:4000             # Bind to port 4000 for incoming TCP traffic
    default_backend flask_backend

backend flask_backend
    balance roundrobin      # Use round-robin load balancing
    server flask1 127.0.0.1:5000 check  # Proxy to Flask app running on port 5000
In this configuration:
Mode TCP: HAProxy is set to work in TCP mode.
Frontend: Listens on port 4000 and forwards incoming TCP traffic to the backend.
Backend: Contains a single server (flask1) where the Flask app is running.
Step 4: Run HAProxy with the Configuration
To start HAProxy with the above configuration, you can use Docker to run HAProxy in a container.
Create a Dockerfile for HAProxy
FROM haproxy:2.4
# Copy the HAProxy configuration file to the container
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Build and run the HAProxy container:
docker build -t haproxy-tcp .
docker run -d --network host haproxy-tcp
(Host networking is used here so that HAProxy inside the container can reach the Flask app on 127.0.0.1:5000.) This will start HAProxy on port 4000, which is configured to proxy TCP traffic to the Flask application running on port 5000.
Step 5: Test the TCP Proxy Setup
To test the setup, open a web browser or use curl to send a request to the HAProxy server
curl http://localhost:4000/
You should see the response
Hello from Flask over TCP!
This confirms that HAProxy is successfully proxying TCP traffic to the Flask application.
Step 6: Scaling Up
If Jafer wants to scale the application to handle more traffic, he can add more backend servers to the haproxy.cfg file
backend flask_backend
    balance roundrobin
    server flask1 127.0.0.1:5000 check
    server flask2 127.0.0.1:5001 check
Jafer could run another instance of the Flask application on a different port (5001), and HAProxy would balance the TCP traffic between the two instances.
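Conceptually, round-robin balancing just cycles through the backend list in order. The rotation can be sketched in a few lines of shell (the server list here is hypothetical, mirroring flask1 and flask2 above):

```shell
#!/usr/bin/env bash
# Hypothetical backend list, mirroring the haproxy.cfg above
servers=("127.0.0.1:5000" "127.0.0.1:5001")

# Assign six incoming connections round-robin style
for i in 0 1 2 3 4 5; do
  target=${servers[$((i % ${#servers[@]}))]}
  echo "connection $i -> $target"
done
```

Each new connection lands on the next server in the list, so with two healthy backends the traffic splits roughly 50/50.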
Conclusion
By configuring HAProxy as a TCP proxy, Jafer could efficiently manage and balance incoming traffic to their Flask application. This setup ensures scalability and reliability for any TCP-based service, not just HTTP-based ones.
In the world of web applications, imagine you’re running a very popular pizza place. Every evening, customers line up for a delicious slice of pizza. But if your single cashier can’t handle all the orders at once, customers might get frustrated and leave.
What if you could have a system that ensures every customer gets served quickly and efficiently? Enter HAProxy, a tool that helps manage and balance the flow of web traffic so that no single server gets overwhelmed.
Here’s a straightforward guide to understanding HAProxy, installing it, and setting it up to make your web application run smoothly.
What is HAProxy?
HAProxy stands for High Availability Proxy. It’s like a traffic director for your web traffic. It takes incoming requests (like people walking into your pizza place) and decides which server (or pizza station) should handle each request. This way, no single server gets too busy, and everything runs more efficiently.
Why Use HAProxy?
Handles More Traffic: Distributes incoming traffic across multiple servers so no single one gets overloaded.
Increases Reliability: If one server fails, HAProxy directs traffic to the remaining servers.
Improves Performance: Ensures that users get faster responses because the load is spread out.
Installing HAProxy
Here’s how you can install HAProxy on a Linux system:
1. Open a Terminal: You'll need to access your command line interface to install HAProxy.
2. Install HAProxy: Type the following commands and hit enter
sudo apt-get update
sudo apt-get install haproxy
3. Check Installation: Once installed, you can verify that HAProxy is running by typing
sudo systemctl status haproxy
This command shows you the current status of HAProxy, ensuring it’s up and running.
Configuring HAProxy
HAProxy’s configuration file is where you set up how it should handle incoming traffic. This file is usually located at /etc/haproxy/haproxy.cfg. Let’s break down the main parts of this configuration file.
1. The global Section
The global section is like setting the rules for the entire pizza place. It defines general settings for HAProxy itself, such as how it should operate, what kind of logging it should use, and what resources it needs. Here’s an example of what you might see in the global section
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660
    user haproxy
    group haproxy
    daemon
Let’s break it down line by line:
log /dev/log local0: This line tells HAProxy to send log messages to the system log at /dev/log and to use the local0 logging facility. Logs help you keep track of what’s happening with HAProxy.
log /dev/log local1 notice: Similar to the previous line, but it uses the local1 logging facility and sets the log level to notice, which is a type of log message indicating important events.
chroot /var/lib/haproxy: This line tells HAProxy to run in a restricted area of the file system (/var/lib/haproxy). It’s a security measure to limit access to the rest of the system.
stats socket /run/haproxy/admin.sock mode 660: This sets up a special socket (a kind of communication endpoint) for administrative commands. The mode 660 part defines the permissions for this socket, allowing specific users to manage HAProxy.
user haproxy: Specifies that HAProxy should run as the user haproxy. Running as a specific user helps with security.
group haproxy: Similar to the user directive, this specifies that HAProxy should run under the haproxy group.
daemon: This tells HAProxy to run as a background service, rather than tying up a terminal window.
2. The defaults Section
The defaults section sets up default settings for HAProxy’s operation and is like defining standard procedures for the pizza place. It applies default configurations to both the frontend and backend sections unless overridden. Here’s an example of a defaults section
defaults
    log global
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
Here’s what each line means:
log global: Tells HAProxy to use the logging settings defined in the global section for logging.
option httplog: Enables HTTP-specific logging. This means HAProxy will log details about HTTP requests and responses, which helps with troubleshooting and monitoring.
option dontlognull: Prevents logging of connections that don’t generate any data (null connections). This keeps the logs cleaner and more relevant.
timeout connect 5000ms: Sets the maximum time HAProxy will wait when trying to connect to a backend server to 5000 milliseconds (5 seconds). If the connection takes longer, it will be aborted.
timeout client 50000ms: Defines the maximum time HAProxy will wait for data from the client to 50000 milliseconds (50 seconds). If the client doesn’t send data within this time, the connection will be closed.
timeout server 50000ms: Similar to timeout client, but it sets the maximum time to wait for data from the server to 50000 milliseconds (50 seconds).
3. Frontend Section
The frontend section defines how HAProxy listens for incoming requests. Think of it as the entrance to your pizza place.
frontend http_front
    bind *:80
    default_backend http_back
frontend http_front: This is a name for your frontend configuration.
bind *:80: Tells HAProxy to listen for traffic on port 80 (the standard port for web traffic).
default_backend http_back: Specifies where the traffic should be sent (to the backend section).
4. Backend Section
The backend section describes where the traffic should be directed. Think of it as the different pizza stations where orders are processed.
backend http_back
    balance roundrobin
    server app1 192.168.1.2:5000 check
    server app2 192.168.1.3:5000 check
    server app3 192.168.1.4:5000 check
backend http_back: This is a name for your backend configuration.
balance roundrobin: Distributes traffic evenly across servers.
server app1 192.168.1.2:5000 check: Specifies a server (app1) at IP address 192.168.1.2 on port 5000. The check option ensures HAProxy checks if the server is healthy before sending traffic to it.
server app2 and server app3: Additional servers to handle traffic.
Testing Your Configuration
After setting up your configuration, you’ll need to restart HAProxy to apply the changes:
sudo systemctl restart haproxy
To check if everything is working, you can use a web browser or a tool like curl to send requests to HAProxy and see if it correctly distributes them across your servers.
Dinesh, an avid movie collector and music lover, had a growing problem. His laptop was bursting at the seams with countless movies, albums, and family photos. Every time he wanted to watch a movie or listen to his carefully curated playlists, he had to sit at his laptop. And if he wanted to share something with his friends, it meant copying files onto USB drives or spending hours transferring them.
One Saturday evening, after yet another struggle to connect his laptop to his smart TV via a mess of cables, Dinesh decided it was time for a change. He needed a solution that would let him access all his media from any device in his house – phone, tablet, and TV. He needed a media server.
Dinesh fired up his browser and began his search: “How to stream media to all my devices.” He went through the results – Plex, Jellyfin, Emby… Each option seemed promising but felt too complex, requiring subscriptions or heavy installations.
Frustrated, Dinesh thought, “There must be something simpler. I don’t need all the bells and whistles; I just want to access my files from anywhere in my house.” He refined his search: “lightweight media server for Linux.”
There it was – MiniDLNA. Described as a simple, lightweight DLNA server that was easy to set up and perfect for home use, MiniDLNA (also known as ReadyMedia) seemed to be exactly what Dinesh needed.
MiniDLNA is a lightweight, simple server for streaming media (videos, music, and pictures) to devices on your network. It is compatible with a wide range of DLNA/UPnP (Digital Living Network Alliance/Universal Plug and Play) devices such as smart TVs, media players, and gaming consoles.
How to Use MiniDLNA
Here’s a step-by-step guide to setting up and using MiniDLNA on a Linux-based system.
1. Install MiniDLNA
To get started, you need to install MiniDLNA. The installation steps can vary slightly depending on your operating system.
For Debian/Ubuntu-based systems:
sudo apt update
sudo apt install minidlna
For Red Hat/CentOS-based systems:
First, enable the EPEL repository:
sudo yum install epel-release
Then, install MiniDLNA:
sudo yum install minidlna
2. Configure MiniDLNA
Once installed, you need to configure MiniDLNA to tell it where to find your media files.
a. Open the MiniDLNA configuration file in a text editor
sudo nano /etc/minidlna.conf
b. Configure the following parameters:
media_dir: Set this to the directories where your media files (music, pictures, and videos) are stored. You can specify different media types for each directory.
media_dir=A,/path/to/music # 'A' is for audio
media_dir=V,/path/to/videos # 'V' is for video
media_dir=P,/path/to/photos # 'P' is for pictures
db_dir: The directory where the database and cache files are stored.
db_dir=/var/cache/minidlna
log_dir: The directory where log files are stored.
log_dir=/var/log/minidlna
friendly_name: The name of your media server. This will appear on your DLNA devices.
friendly_name=Laptop SJ
notify_interval: The interval in seconds at which MiniDLNA will notify clients of its presence. The default is 900 (15 minutes).
notify_interval=900
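Putting those parameters together, a minimal working /etc/minidlna.conf might look like this (the media directories are placeholders; substitute your own paths):

```
# Minimal minidlna.conf sketch; media paths are placeholders
media_dir=A,/home/dinesh/Music
media_dir=V,/home/dinesh/Videos
media_dir=P,/home/dinesh/Pictures
db_dir=/var/cache/minidlna
log_dir=/var/log/minidlna
friendly_name=Laptop SJ
notify_interval=900
# 8200 is MiniDLNA's default listening port
port=8200
```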
c. Save and close the file (Ctrl + X, Y, Enter in Nano).
3. Start the MiniDLNA Service
After configuration, start the MiniDLNA service
sudo systemctl start minidlna
To enable it to start at boot:
sudo systemctl enable minidlna
4. Rescan Media Files
To make MiniDLNA scan your media files and add them to its database, you can force a rescan with
sudo minidlnad -R
5. Access Your Media on DLNA/UPnP Devices
Now, your MiniDLNA server should be up and running. You can access your media from any DLNA-compliant device on your network:
On your Smart TV, look for the “Media Server” or “DLNA” option in the input/source menu.
On a Windows PC, go to This PC or Network and find your DLNA server under “Media Devices.”
On Android, use a media player app like VLC or BubbleUPnP to find your server.
6. Check Logs and Troubleshoot
If you encounter any issues, you can check the logs for more information
sudo tail -f /var/log/minidlna/minidlna.log
To set up MiniDLNA for a single user:
Disable the global daemon:
sudo service minidlna stop
sudo update-rc.d minidlna disable
Create the necessary local files and directories as a regular user, and edit the configuration:
mkdir -p ~/.minidlna/cache
cd ~/.minidlna
cp /etc/minidlna.conf .
$EDITOR minidlna.conf
Configure it as you would globally above, but note that these settings now need to be defined in the local file, and MiniDLNA must be pointed at it when it starts (minidlnad -f ~/.minidlna/minidlna.conf).
Firewall Rules: Ensure that your firewall settings allow traffic on the MiniDLNA port (8200 by default) and UPnP (typically port 1900 for UDP).
Update Media Files: Whenever you add or remove files from your media directory, run minidlnad -R to update the database.
Multiple Media Directories: You can have multiple media_dir lines in your configuration if your media is spread across different folders.
To stream content from your MiniDLNA server with VLC Media Player, follow these steps.
On a Computer
1. Install VLC Media Player
Make sure you have VLC Media Player installed on your device. If not, you can download it from the official VLC website.
2. Open VLC Media Player
Launch VLC Media Player on your computer.
3. Open the UPnP/DLNA Network Stream
Go to the “View” Menu:
On the VLC menu bar, click on View and then Playlist or press Ctrl + L (Windows/Linux) or Cmd + Shift + P (Mac).
Locate Your DLNA Server:
In the left sidebar, you will see an option for Local Network.
Click on Universal Plug'n'Play or UPnP.
VLC will search for available DLNA/UPnP servers on your network.
Select Your MiniDLNA Server:
After a few moments, your MiniDLNA server should appear under the UPnP section.
Click on your server name (e.g., My DLNA Server).
Browse and Play Media:
You will see the folders you configured (e.g., Music, Videos, Pictures).
Navigate through the folders and double-click on a media file to start streaming.
4. Alternative Method: Open Network Stream
If you know the IP address of your MiniDLNA server, you can connect directly:
Open Network Stream:
Click on Media in the menu bar and select Open Network Stream... or press Ctrl + N (Windows/Linux) or Cmd + N (Mac).
Enter the URL:
Enter the URL of your MiniDLNA server in the format http://[Server IP]:8200.
Example: http://192.168.1.100:8200.
Click “Play”:
Click on the Play button to start streaming from your MiniDLNA server.
5. Tips for Better Streaming Experience
Ensure the Server is Running: Make sure the MiniDLNA server is running and the media files are correctly indexed.
Network Stability: A stable local network connection is necessary for smooth streaming. Use a wired connection if possible or ensure a strong Wi-Fi signal.
Firewall Settings: Ensure that the firewall on your server allows traffic on port 8200 (or the port specified in your MiniDLNA configuration).
On Android
To set up and stream content from MiniDLNA using an Android app, you will need a DLNA/UPnP client app that can discover and stream media from DLNA servers. Several apps are available for this purpose, such as VLC for Android, BubbleUPnP, Kodi, and others. Here’s how to do it with VLC for Android:
Open VLC for Android and tap the menu button (three horizontal lines) in the upper-left corner of the screen.
Select Local Network from the sidebar menu.
Find Your MiniDLNA Server:
VLC will automatically search for DLNA/UPnP servers on your local network. After a few moments, your MiniDLNA server should appear in the list.
Tap on the name of your MiniDLNA server (e.g., My DLNA Server).
Browse and Play Media:
You will see your media folders (e.g., Music, Videos, Pictures) as configured in your MiniDLNA setup.
Navigate to the desired folder and tap on any media file to start streaming.
Additional Tips
Ensure MiniDLNA is Running: Make sure your MiniDLNA server is properly configured and running on your local network.
Check Network Connection: Ensure your Android device is connected to the same local network (Wi-Fi) as the MiniDLNA server.
Firewall Settings: If you are not seeing the MiniDLNA server in your app, ensure that the server’s firewall settings allow DLNA/UPnP traffic.
Some problems that you may face
minidlna.service: Main process exited, code=exited, status=255/EXCEPTION: check the logs. Usually this means an instance is already running on port 8200; kill it and reload the database. `lsof -i :8200` will give the PID, and `kill -9 <PID>` will kill the process.
If the media files are not refreshing, try `minidlnad -f /home/$USER/.minidlna/minidlna.conf -R` or `sudo minidlnad -R`.
During our college days, we had a crash course on Machine Learning. Our coordinators arranged an ML engineer to take classes for three days. He insisted that we install the packages ourselves to get hands-on experience. Unfortunately, many of us were not sure how to install them, so we needed a way to install all the necessary packages on every machine.
The scenario was this: all the machines had the same user account with the same password. So we figured that if we could automate the setup on one machine, it would be easy for the rest (just a for-loop iterating from x.0.0.1 to x.0.0.255). This was the birthplace of this tool.
Code:
#!/usr/bin/env python
import sys
import os.path
from multiprocessing.pool import ThreadPool

import paramiko

BASE_ADDRESS = "192.168.7."
USERNAME = "t1"
PASSWORD = "uni1"


def create_client(hostname):
    """Create an SSH connection to a given hostname."""
    ssh_client = paramiko.SSHClient()
    ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_client.connect(hostname=hostname, username=USERNAME, password=PASSWORD)
    ssh_client.invoke_shell()
    return ssh_client


def kill_computer(ssh_client):
    """Power off a computer."""
    ssh_client.exec_command("poweroff")


def install_python_modules(ssh_client):
    """Install the programs specified in requirements.txt."""
    ftp_client = ssh_client.open_sftp()

    # Copy over get-pip.py
    local_getpip = os.path.expanduser("~/lab_freak/get-pip.py")
    remote_getpip = "/home/%s/Documents/get-pip.py" % USERNAME
    ftp_client.put(local_getpip, remote_getpip)

    # Copy over requirements.txt
    local_requirements = os.path.expanduser("~/lab_freak/requirements.txt")
    remote_requirements = "/home/%s/Documents/requirements.txt" % USERNAME
    ftp_client.put(local_requirements, remote_requirements)
    ftp_client.close()

    # Install pip and the desired modules.
    ssh_client.exec_command("python %s --user" % remote_getpip)
    ssh_client.exec_command("python -m pip install --user -r %s" % remote_requirements)


def worker(action, hostname):
    try:
        ssh_client = create_client(hostname)
        if action == "kill":
            kill_computer(ssh_client)
        elif action == "install":
            install_python_modules(ssh_client)
        else:
            raise ValueError("Unknown action %r" % action)
    except BaseException as e:
        print("Running %r on %r failed with %r" % (action, hostname, e))


def main():
    if len(sys.argv) < 2:
        print("USAGE: python kill.py ACTION")
        sys.exit(1)

    hostnames = [BASE_ADDRESS + str(i) for i in range(30, 60)]
    with ThreadPool() as pool:
        pool.map(lambda hostname: worker(sys.argv[1], hostname), hostnames)


if __name__ == "__main__":
    main()
Alex Pandian was the system administrator for a tech company, responsible for managing servers, maintaining network stability, and ensuring that everything ran smoothly.
With many scripts running daily and long-running processes that needed monitoring, Alex was constantly flooded with notifications.
Alex Pandian: “Every day, I have to go through dozens of emails and alerts just to find the ones that matter,”
Alex muttered while sipping coffee in the server room.
Alex Pandian: “There must be a better way to streamline all this information.”
Despite using several monitoring tools, the notifications from these systems were scattered and overwhelming. Alex needed a more efficient method to receive alerts only when crucial events occurred, such as script failures or the completion of resource-intensive tasks.
Determined to find a better system, Alex began searching online for a tool that could help consolidate and manage notifications.
After reading through countless forums and reviews, Alex stumbled upon a discussion about ntfy.sh, a service praised for its simplicity and flexibility.
“This looks promising,” Alex thought, excited by the ability to publish and subscribe to notifications using a straightforward, topic-based system. The idea of having notifications sent directly to a phone or desktop without needing complex configurations was exactly what Alex was looking for.
Alex decided to consult with Sam, a fellow system admin known for their expertise in automation and monitoring.
Alex Pandian: “Hey Sam, have you ever used ntfy.sh?”
Sam: “Absolutely! It’s a lifesaver for managing notifications. How do you plan to use it?”
Alex Pandian: “I’m thinking of using it for real-time alerts on script failures and long-running commands. Can you show me how it works?”
Sam: “Of course,” Sam said with a smile, eager to guide Alex through setting up ntfy.sh to improve workflow efficiency.
Together, Sam and Alex began configuring ntfy.sh for Alex’s environment. They focused on setting up topics and integrating them with existing systems to ensure that important notifications were delivered promptly.
Step 1: Identifying Key Topics
Alex identified the main areas where notifications were needed:
script-failures: To receive alerts whenever a script failed.
command-completions: To notify when long-running commands finished.
server-health: For critical server health alerts.
Step 2: Subscribing to Topics
Sam showed Alex how to subscribe to these topics using ntfy.sh on a mobile device and desktop. This ensured that Alex would receive notifications wherever they were, without having to constantly check email or dashboards.
Alex was impressed by the simplicity and efficiency of this approach. “I can automate all of this?” Alex asked.
“Definitely,” Sam replied. “You can integrate it with cron jobs, monitoring tools, and more. It’s a great way to keep track of important events without getting bogged down by noise.”
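The cron integration Sam mentions can be as small as one line; the script path and topic below are illustrative:

```
# Crontab entry: run the nightly backup at 2:00 AM and publish a failure alert
0 2 * * * /usr/local/bin/backup.sh || curl -d "backup.sh failed on $(hostname)" ntfy.sh/script-failures
```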
With the basics in place, Alex began applying ntfy.sh to various real-world scenarios, streamlining the notification process and improving overall efficiency.
Monitoring Script Failures
Alex set up automated alerts for critical scripts that ran daily, ensuring that any failures were immediately reported. This allowed Alex to address issues quickly, minimizing downtime and improving system reliability.
Notifying on Long-Running Commands
Whenever Alex initiated a long-running command, such as a server backup or data migration, notifications were sent upon completion. This enabled Alex to focus on other tasks without constantly checking on progress.
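The failure and completion patterns above can be combined in a small wrapper. This is only a sketch: the topic names follow the ones Alex defined, and the wrapped command is whatever job you actually run:

```shell
#!/usr/bin/env bash
# Run any command and publish its outcome to ntfy.sh.
# Topics follow the ones defined above; adjust to taste.
notify_done() {
  if "$@"; then
    curl -s -d "$* completed successfully" ntfy.sh/command-completions
  else
    curl -s -d "$* failed" ntfy.sh/script-failures
  fi
}

# Usage (the script path is a placeholder for a real long-running job):
#   notify_done /usr/local/bin/nightly_backup.sh
```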
Monitoring Server Health
To monitor server health, Alex integrated ntfy.sh with existing monitoring tools, ensuring that any critical issues were immediately flagged.
# Send server health alert
curl -d "Server CPU usage is critically high!" ntfy.sh/server-health
As with any new tool, there were challenges to overcome. Alex encountered a few hurdles, but with Sam’s guidance, these were quickly resolved.
Challenge: Managing Multiple Notifications
Initially, Alex found it challenging to manage multiple notifications and ensure that only critical alerts were prioritized. Sam suggested using filters and priorities to focus on the most important messages.
# Subscribe with filters for high-priority alerts
ntfy subscribe script-failures --priority=high
Challenge: Scheduling Notifications
Alex wanted to schedule notifications for regular maintenance tasks and reminders. Sam introduced Alex to using cron for scheduling automated alerts.
# Schedule a weekly maintenance notification (crontab entry, Saturdays at 8:00 AM)
0 8 * * 6 curl -d "Time for weekly server maintenance." ntfy.sh/server-health
Sam gave Alex some more examples:
Monitoring disk space
As a system administrator, you can use ntfy.sh to receive alerts when disk space usage reaches a critical level. This helps prevent issues related to insufficient disk space.
# Check disk space and notify if usage is over 80%
disk_usage=$(df / | grep / | awk '{ print $5 }' | sed 's/%//g')
if [ $disk_usage -gt 80 ]; then
curl -d "Warning: Disk space usage is at ${disk_usage}%." ntfy.sh/disk-space
fi
Alerting on Website Downtime
You can use ntfy.sh to monitor the status of a website and receive notifications if it goes down.
# Check website status and notify if it's down
website="https://example.com"
status_code=$(curl -o /dev/null -s -w "%{http_code}\n" $website)
if [ $status_code -ne 200 ]; then
curl -d "Alert: $website is down! Status code: $status_code." ntfy.sh/website-monitor
fi
Reminding for Daily Tasks
You can set up ntfy.sh to send you daily reminders for important tasks, ensuring that you stay on top of your schedule.
# Schedule daily reminders (crontab entries)
0 9 * * * curl -d "Time to review your daily tasks!" ntfy.sh/daily-reminders
50 9 * * * curl -d "Stand-up meeting at 10:00 AM." ntfy.sh/daily-reminders
Alerting on High System Load
Monitor system load and receive notifications when it exceeds a certain threshold, allowing you to take action before it impacts performance.
# Check system load and notify if it's high
load=$(awk '{ print $1 }' /proc/loadavg)   # 1-minute load average
threshold=2.0
if (( $(echo "$load > $threshold" | bc -l) )); then
curl -d "Warning: System load is high: $load" ntfy.sh/system-load
fi
Notify on Backup Completion
Receive a notification when a backup process completes, allowing you to verify its success.
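A sketch of this pattern follows. The paths and the ntfy topic backup-status are illustrative, and a temporary directory with one file stands in for real data:

```shell
#!/usr/bin/env bash
# Back up a directory and publish the result to ntfy.sh/backup-status.
# A temporary directory with one file stands in for real data here.
src=$(mktemp -d)
echo "demo" > "$src/file.txt"
dest="/tmp/backup.tar.gz"

if tar -czf "$dest" -C "$src" .; then
  curl -s -d "Backup of $src completed." ntfy.sh/backup-status || true
else
  curl -s -d "Backup of $src FAILED." ntfy.sh/backup-status || true
fi
```

The `|| true` keeps this sketch from aborting on a machine with no network access; drop it in a real setup where you want the notification failure surfaced.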
Monitoring Docker Containers
Integrate ntfy.sh with Docker to send alerts for specific container events, such as when a container stops unexpectedly.
# Notify on Docker container stop event
container_name="my_app"
container_status=$(docker inspect -f '{{.State.Status}}' $container_name)
if [ "$container_status" != "running" ]; then
curl -d "Alert: Docker container $container_name has stopped." ntfy.sh/docker-alerts
fi
Integrating with CI/CD Pipelines
Use ntfy.sh to notify you about the status of CI/CD pipeline stages, ensuring you stay informed about build successes or failures.
# Example GitLab CI/CD YAML snippet
stages:
  - build

build_job:
  stage: build
  script:
    - make build
  after_script:
    - |
      if [ "$CI_JOB_STATUS" == "success" ]; then
        curl -d "Build succeeded for commit $CI_COMMIT_SHORT_SHA." ntfy.sh/ci-cd-status
      else
        curl -d "Build failed for commit $CI_COMMIT_SHORT_SHA." ntfy.sh/ci-cd-status
      fi
Notification on SSH login to a server
Let’s try it with Docker:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
# Set the root password for SSH access (change 'password' to your desired password)
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
COPY ntfy-ssh.sh /usr/bin/ntfy-ssh.sh
RUN chmod +x /usr/bin/ntfy-ssh.sh
RUN echo "session optional pam_exec.so /usr/bin/ntfy-ssh.sh" >> /etc/pam.d/sshd
RUN apt-get -y update; apt-get -y install curl
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
The script to send the notification:
#!/bin/bash
if [ "${PAM_TYPE}" = "open_session" ]; then
    curl \
        -H prio:high \
        -H tags:warning \
        -d "SSH login: ${PAM_USER} from ${PAM_RHOST}" \
        ntfy.sh/syed-alerts
fi
With ntfy.sh as an integral part of daily operations, Alex found a renewed sense of balance and control. The once overwhelming chaos of notifications was now a manageable stream of valuable information.
As Alex reflected on the journey, it was clear that ntfy.sh had transformed not just the way notifications were managed, but also the overall approach to system administration.
In a world full of noise, ntfy.sh had provided a clear and effective way to stay informed without distractions. For Alex, it was more than just a tool—it was a new way of managing systems efficiently.