Nginx Dynamic Modules: Automating Recompilation with APT Hooks

If you’ve ever dealt with Nginx and its dynamic modules, you know the drill. An Nginx package update hits, and suddenly your custom modules – like ModSecurity or GeoIP2 – are no longer compatible. The whole process is a headache: you have to stop Nginx, recompile your modules against the new version, copy the files, and restart the service.

I was looking for a way to automate this. The goal was simple: ensure that dynamic modules are always compatible with a new Nginx version. And if the recompilation fails for any reason, the entire Nginx update must be aborted.


Docker Update Automation: Advanced Bash Pipelining for Paperless-NGX

This article documents a reliable update script for the Paperless-NGX stack, which minimizes the risk of container failures during automated maintenance. The focus here is not just on simple automation, but on ensuring the integrity of the process—especially handling logs and exit codes within complex Bash pipelines.

Part I: Defining the Problem (The Log and Exit Code Dilemma)

The initial simple script worked, but it suffered from two critical flaws that make it unsuitable for production cron jobs:

  1. Inaccurate Timestamp: The logged start and end times were identical, because the $DATE variable was defined only once, at script start.
  2. Broken Exit Codes (The Fatal Flaw): Every command in a pipe (|) runs in its own subshell, and the pipe’s overall exit code ($?) reflects only the final command (e.g., while read), hiding an earlier failure. The script might therefore proceed with docker compose pull even though docker compose down failed.
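
Both effects are easy to reproduce in a plain Bash shell; this minimal sketch shows the exit code of a failing first command being masked, and a counter set inside the while read being lost to the subshell:

```shell
# The pipeline's exit status is that of the LAST command, so the
# failure of 'false' is invisible:
false | cat
echo $?        # prints 0

# Each pipe segment runs in a subshell, so this counter never makes
# it back to the parent shell:
n=0
printf 'a\nb\n' | while read -r l; do n=$((n+1)); done
echo "$n"      # prints 0, not 2
```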

Part II: Solution – Hardening the Bash Pipeline

To create a production-ready script, I apply a few advanced Bash features that guarantee reliable command execution and accurate logging.

1. The wlog Function (Adding Timestamps and Centralizing Output)

The wlog function is introduced to wrap commands, timestamp the output of every line, and consolidate stdout and stderr (2>&1), enabling central logging.

wlog () {
  # Run the command, merge stderr into stdout, and prefix every
  # output line with a timestamp
  "$@" 2>&1 | while read -r l; do echo "$(date): $l"; done
}
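
A quick invocation illustrates the resulting log format (the timestamp naturally depends on your locale and clock):

```shell
wlog echo "hello"
# Prints something like:
# Wed Feb 21 21:29:45 CET 2024: hello
```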

2. Resolving Exit Codes and Pipeline Integrity

The failure of the initial script to correctly capture the exit code is solved by enabling two shell options (lastpipe requires Bash 4.2 or newer; pipefail has been available much longer):

# Required for reliable pipelines (lastpipe needs Bash 4.2+)
shopt -s lastpipe
shopt -so pipefail   # equivalent to: set -o pipefail
  • shopt -s lastpipe: Runs the last segment of the pipe (while read) in the current shell instead of a subshell, so $? and variables set in the loop can be checked reliably. Note that lastpipe only takes effect when job control is off, which is the default for non-interactive scripts.
  • shopt -so pipefail: Makes the pipeline's exit code that of the last command that exits with a non-zero status (zero only if every command succeeds), which is critical for safe automation.
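
With both options enabled, a failing first command is no longer masked and variables set inside the while read survive the pipeline; a minimal demonstration:

```shell
#!/bin/bash
shopt -s lastpipe
shopt -so pipefail    # equivalent to: set -o pipefail

# pipefail: the failing 'false' now determines the pipeline's status
false | cat
echo $?        # prints 1

# lastpipe: the while-read runs in the current shell, so the counter survives
n=0
printf 'a\nb\n' | while read -r l; do n=$((n+1)); done
echo "$n"      # prints 2
```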

Part III: The Final Automation Script

The final script applies these techniques, ensuring that docker compose pull only executes if docker compose down was successful (&& operator).

#!/bin/bash

set -e
shopt -s lastpipe
shopt -so pipefail

PDIR=/opt/paperless/paperless-ngx
LOG=/opt/paperless/docker-compose-cron.log

wlog () {
  "$@" 2>&1 | while read -r l; do echo "$(date): $l"; done
}

wlog echo "Starting Docker Compose Update" >> "$LOG"
cd "$PDIR"

# 1. Stop, then pull only if stopping succeeded
wlog /usr/bin/docker compose down >> "$LOG" && wlog /usr/bin/docker compose pull >> "$LOG"

# 2. Start all containers
wlog /usr/bin/docker compose up --wait -d >> "$LOG"

wlog echo "Finished Docker Compose Update" >> "$LOG"
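
For unattended operation, the script can be triggered from cron; the file path, user, and schedule below are placeholders, adjust them to your environment:

```shell
# /etc/cron.d/paperless-update (hypothetical path and schedule)
# Run the update script every Monday at 03:30
30 3 * * 1  root  /opt/paperless/update.sh
```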

Verification (Log Output)

The log output now provides precise timestamps for every step of the Docker Compose operation, giving each unattended run the observability it needs.

Wed Feb 21 21:29:45 CET 2024: Container paperless-webserver-1  Stopping
Wed Feb 21 21:29:53 CET 2024: Container paperless-webserver-1  Stopped
...
Wed Feb 21 21:30:03 CET 2024: Network paperless_default  Removed
...

Part IV: Conclusion and Alternatives

This solution provides reliable automation using pure Bash. However, be aware that tools like Watchtower may offer a simpler, container-native approach if this level of exit-code control is not required.

Sources / See Also

  1. GNU Bash Reference Manual. Shell Options for Pipeline Management (shopt -s lastpipe, pipefail). https://www.gnu.org/software/bash/manual/bash.html
  2. Docker Documentation. Docker Compose Upgrade and Maintenance. https://docs.docker.com/compose/compose-file/08-upgrade/
  3. Docker Documentation. Reference for Docker Compose CLI commands (down, pull, up). https://docs.docker.com/compose/reference/overview/
  4. Linux Manpage: date. Usage of the date command for precise timestamping.
  5. Linux Manpage: cron. Syntax and execution environment for automated job scheduling.

Automating IPS: Real-Time Suricata Rule Generation via Fail2ban Hook

In my last posts, I established a central syslog hub feeding Fail2ban and demonstrated Suricata as an intrusion prevention system (IPS). This final piece connects the two: feeding Suricata with the ban results from Fail2ban by creating a dynamic, external rule file.

This process is highly automated, but requires robust Bash scripting and careful handling of security context.

1. Fail2ban Action and Scripting Logic

The core idea is to replace Fail2ban’s default firewall action with a custom script that modifies a public rule file.

Custom Action Definition

The actionban and actionunban directives in /etc/fail2ban/action.d/firewall.conf point to simple Bash wrappers.

[Definition]

actionstart =
actionstop =
actioncheck =

actionban = /var/www/f2b/bin/ban.sh <ip>
actionunban = /var/www/f2b/bin/unban.sh <ip>

Security-Hardened ban.sh Script

The ban script must: (1) validate the input, (2) generate a unique signature ID (SID), (3) append the rule, and (4) update the ruleset’s MD5 checksum so that Suricata-Update detects the change.

#!/bin/bash
#
# ban.sh: Adds a banned IP to the Fail2ban Suricata ruleset.

IP="$1"
RULESFILE="/var/www/f2b/htdocs/fail2ban.rules"
MSG="BRUTE-FORCE detected by fail2ban"

# INPUT VALIDATION: accept a dotted-quad IPv4 address or a fully
# expanded IPv6 address (compressed "::" notation is not matched here).
if ! [[ $IP =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ || $IP =~ ^([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}$ ]]; then
    echo "ERROR: Invalid IP address received: $IP" >&2
    exit 1
fi

# 1. Generate a unique SID (timestamp + line counter)
TSTAMP=$(date +%s)
CNT=$(wc -l < "$RULESFILE")
# Keep SIDs in a high range to avoid conflicts with distributed rulesets (90000000+)
SID=$((CNT + TSTAMP + 90000000))

# -F treats the IP as a fixed string rather than a regular expression
if ! grep -qF "$IP" "$RULESFILE"; then
  # Rule: drop all traffic from the banned IP towards $HOME_NET.
  # 'drop ip' covers every protocol; narrow to specific ports if required.
  RULE="drop ip $IP any -> \$HOME_NET any (msg:\"$MSG - $IP\"; sid:$SID; rev:1;)"

  echo "$RULE" >> "$RULESFILE"

  # Set correct permissions (critical for delivery via the web server)
  chown www-data:www-data "$RULESFILE"

  # 2. Update the MD5 checksum file so Suricata-Update sees the change
  SUM=$(md5sum "$RULESFILE" | cut -d' ' -f1)
  echo "$SUM" > "$RULESFILE.md5"
fi

The Unban Script (unban.sh)

The unban script removes the line and performs the critical MD5 update.

#!/bin/bash
#
# unban.sh: Removes a banned IP from the Suricata ruleset.

IP="$1"
RULESFILE="/var/www/f2b/htdocs/fail2ban.rules"

if grep -qF "$IP" "$RULESFILE"; then
  # Escape the dots so sed treats the IP literally, then delete its line
  sed -i "/${IP//./\\.}/d" "$RULESFILE"

  # Update the MD5 checksum so Suricata-Update sees the change
  SUM=$(md5sum "$RULESFILE" | cut -d' ' -f1)
  echo "$SUM" > "$RULESFILE.md5"
fi

2. Integration and Verification

The final step is to make the ruleset publicly available (via HTTPS/SSL) and configure Suricata to fetch it.

Suricata-Update Configuration

The rule file (fail2ban.rules) must be made available via a web server (e.g., NGINX) with a specific URL (e.g., https://f2b.example.com/fail2ban.rules). I add this URL as a new source to Suricata-Update.

root@fw2:~# suricata-update add-source
URL: https://f2b.example.com/fail2ban.rules

# Running the update process
19/11/2023 -- 20:17:38 - <Info> -- Fetching https://f2b.example.com/fail2ban.rules.
 100% - 18344/18344                   
19/11/2023 -- 20:17:38 - <Info> -- Done.
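
Suricata only sees new bans when suricata-update actually runs, so the fetch needs to happen regularly. One option is a cron entry that runs the update and then asks the engine to reload its ruleset via suricatasc; the schedule below is a placeholder, adjust it to how quickly bans should propagate:

```shell
# Hypothetical cron entry: refresh the ruleset every 15 minutes and
# hot-reload Suricata's rules without restarting the engine
*/15 * * * *  root  suricata-update && suricatasc -c reload-rules
```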

Verification and Observability

Verification confirms that the new rules are loaded and actively dropping traffic. The log analysis command must be adapted to track these specific fail2ban drops.

# Awk command to filter and count dropped packets (Excerpt showing drop sources)
# awk '/Drop/{...}' fast.log | sort | uniq -c | sort -hr

   6505 IP dropped due to fail2ban detection
    638 ET DROP Dshield Block Listed Source 
    ...

This completes a comprehensive, self-healing incident response chain.

Sources / See Also

  1. Suricata Documentation. High-performance AF_PACKET IPS mode configuration and usage. https://docs.suricata.io/en/latest/install/af-packet.html
  2. Suricata Documentation. Working with Suricata-Update (Ruleset Management). https://suricata-update.readthedocs.io/en/latest/update.html
  3. Suricata Documentation. EVE JSON Output for Structured Logging. https://docs.suricata.io/en/latest/output/eve/eve-json-format.html
  4. Google. gperftools (TCMalloc) Documentation. https://github.com/google/gperftools
  5. Emerging Threats (Proofpoint). Information on the Emerging Threats Open Ruleset. https://www.proofpoint.com/us/security-awareness/blog/emerging-threats
  6. Elastic Stack (ELK) Documentation for Log Analysis. https://www.elastic.co/what-is/elk-stack
  7. Linux Manpage: ethtool (Network Offload Configuration). https://man7.org/linux/man-pages/man8/ethtool.8.html