Restored Article: SPF: The Foundation of Email Sender Authentication

The Sender Policy Framework (SPF) is a foundational email authentication technology. It enables a domain owner to specify, via a special DNS record, which hosts are authorized to send mail on behalf of their domain.


Suricata IPS: Fixing Legitimate Traffic Drops by Disabling drop-invalid

I encountered a peculiar issue where my WordPress instance was unable to reach wordpress.org, and DokuWiki could not access its plugin repository. All standard network checks (wget, curl, DNS) worked fine, and no drops were registered by the standard firewall rules.

However, logging revealed a problem deep within the Intrusion Prevention System (IPS) layer.

The Diagnostic: Stream Errors

I noticed an unusually high number of dropped packets related to stream errors in the stats.log:

ips.drop_reason.flow_drop | Total | 837
ips.drop_reason.rules | Total | 3398
ips.drop_reason.stream_error | Total | 19347

This confirmed that Suricata’s TCP Stream Engine was classifying legitimate traffic as invalid, causing the connection to stall before the application layer could proceed. The volume of stream_error drops was alarmingly high.

Further investigation into Suricata’s internal statistics revealed details about the nature of the errors:

stream.fin_but_no_session                     | Total | 12508
stream.rst_but_no_session                     | Total | 2577
stream.pkt_spurious_retransmission            | Total | 14735

These specific counters (FINs/RSTs without an active session, spurious retransmissions) point to common issues in asymmetric routing or session tracking in complex bridged/virtualized environments.

The Workaround: Disabling Strict Stream Enforcement

Based on community discussions regarding unexpected drops in IPS mode, I tested a key stream-configuration variable.

The default setting drop-invalid: yes instructs Suricata to immediately drop packets it deems invalid according to its internal state machine (often due to out-of-sync sequence numbers or timing issues).

The Fix: I set this directive to no.

stream:
  memcap: 64mb
  memcap-policy: ignore  
  drop-invalid: no # Set to 'no' to fix legitimate traffic drops
  checksum-validation: yes
  midstream-policy: ignore
  inline: auto
  reassembly:
    # ... (reassembly settings unchanged)

As soon as I applied this change, the traffic to wordpress.org and the DokuWiki repository resumed functioning normally.
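Before relying on the change, it is worth validating the modified configuration and then watching the counters; a minimal sketch, assuming a systemd-managed installation and the default paths:

suricata -T -c /etc/suricata/suricata.yaml   # test mode: parse config and rules, then exit
systemctl restart suricata
grep stream_error /var/log/suricata/stats.log | tail -n 5   # counters should stop climbing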

Conclusion: The Security Trade-off

While this workaround immediately solved the connectivity problem, I am consciously accepting a security trade-off. Disabling drop-invalid instructs the IPS to allow potentially ambiguous or invalid packets to pass.

  • Risk: An attacker could craft malformed or ambiguous packets to slip past the stream state-tracking (a classic IDS-evasion technique).
  • Benefit: It ensures Service Availability for crucial application updates and connections that the IPS was incorrectly flagging due to virtualization or network environment subtleties.

My next step will be to investigate the root cause of the high stream_error count to see if the error is caused by a kernel-level configuration or a misaligned network path.
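Pending that root-cause analysis, NIC offloading (TSO/GSO/LRO) is a frequent culprit for exactly these stream errors (see source 5). A quick check is possible with ethtool; a sketch, assuming the capture interface is named eth0:

# Show the current offload settings
ethtool -k eth0

# Temporarily disable the offloads that commonly confuse stream tracking
ethtool -K eth0 gro off lro off tso off gso off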

Sources / See Also

  1. Suricata Documentation. Stream Configuration and Settings (Specifically drop-invalid). https://docs.suricata.io/en/latest/configuration/stream.html
  2. Suricata Documentation. Understanding and Analyzing the Stats Log. https://docs.suricata.io/en/latest/output/stats/stats-log.html
  3. Suricata Documentation. IPS Mode and Traffic Drop Reasons. https://docs.suricata.io/en/latest/performance/ips-mode.html
  4. OISF Community Forum. Discussion on high stream errors/spurious retransmissions and network offloading. (This kind of discussion is the primary place to find such workarounds.)
  5. Linux Manpage: ethtool. Documentation on Network Offloading (TSO, GSO, LRO) which often causes Suricata Stream issues.

OpenSSH Hardening Strategy: Auditing Policies and Mitigating Low-Strength Ciphers

OpenSSH ships with a default configuration that prioritizes high compatibility. However, this compatibility comes at a price: some of the included ciphers and algorithms may be outdated or contain known vulnerabilities. To strengthen the encryption and gain a transparent overview of known weaknesses, ssh-audit is the essential auditing tool.

My hardening strategy uses the Mozilla Security Guidelines on OpenSSH as a base, which I then refined using the specific findings from ssh-audit.

Part I: Initial Server Hardening and Auditing

Before tuning algorithms, I enforce core security policy: limiting access to specific users, disabling root login, and preventing password authentication.

AuthenticationMethods publickey
PermitRootLogin no
# AllowUsers user1 user2  (list the permitted user names, separated by spaces)

1. Installation and Usage of ssh-audit

I use pip3 to install and maintain ssh-audit to ensure I have the most current version, which is necessary for accurate vulnerability assessment.
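A minimal sketch of that install-and-upgrade cycle, assuming pip3 is available on the host:

pip3 install --user ssh-audit             # initial installation
pip3 install --user --upgrade ssh-audit   # keep the checks current before each audit
ssh-audit localhost                       # audit the local server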

Server Audit (Debian 12 Default) Output: Running ssh-audit localhost on a default installation reveals critical weaknesses. The output serves as our baseline:

# general
(gen) banner: SSH-2.0-OpenSSH_9.2p1 Debian-2+deb12u2
(gen) software: OpenSSH 9.2p1
(gen) compatibility: OpenSSH 8.5+, Dropbear SSH 2018.76+
(gen) compression: enabled (zlib@openssh.com)

# key exchange algorithms
(kex) sntrup761x25519-sha512@openssh.com    -- [info] available since OpenSSH 8.5
(kex) curve25519-sha256                     -- [info] available since OpenSSH 7.4, Dropbear SSH 2018.76
                                            `- [info] default key exchange since OpenSSH 6.4
(kex) curve25519-sha256@libssh.org          -- [info] available since OpenSSH 6.4, Dropbear SSH 2013.62
                                            `- [info] default key exchange since OpenSSH 6.4
(kex) ecdh-sha2-nistp256                    -- [fail] using elliptic curves that are suspected as being backdoored by the U.S. National Security Agency
                                            `- [info] available since OpenSSH 5.7, Dropbear SSH 2013.62
(kex) ecdh-sha2-nistp384                    -- [fail] using elliptic curves that are suspected as being backdoored by the U.S. National Security Agency
                                            `- [info] available since OpenSSH 5.7, Dropbear SSH 2013.62
(kex) ecdh-sha2-nistp521                    -- [fail] using elliptic curves that are suspected as being backdoored by the U.S. National Security Agency
                                            `- [info] available since OpenSSH 5.7, Dropbear SSH 2013.62
(kex) diffie-hellman-group-exchange-sha256 (3072-bit) -- [info] available since OpenSSH 4.4
                                                      `- [info] OpenSSH's GEX fallback mechanism was triggered during testing. Very old SSH clients will still be able to create connections using a 2048-bit modulus, though modern clients will use 3072. This can only be disabled by recompiling the code (see https://github.com/openssh/openssh-portable/blob/V_9_4/dh.c#L477).
(kex) diffie-hellman-group16-sha512         -- [info] available since OpenSSH 7.3, Dropbear SSH 2016.73
(kex) diffie-hellman-group18-sha512         -- [info] available since OpenSSH 7.3
(kex) diffie-hellman-group14-sha256         -- [warn] 2048-bit modulus only provides 112-bits of symmetric strength
                                            `- [info] available since OpenSSH 7.3, Dropbear SSH 2016.73
(kex) kex-strict-s-v00@openssh.com          -- [info] pseudo-algorithm that denotes the peer supports a stricter key exchange method as a counter-measure to the Terrapin attack (CVE-2023-48795)

# host-key algorithms
(key) rsa-sha2-512 (2048-bit)               -- [warn] 2048-bit modulus only provides 112-bits of symmetric strength
                                            `- [info] available since OpenSSH 7.2
(key) rsa-sha2-256 (2048-bit)               -- [warn] 2048-bit modulus only provides 112-bits of symmetric strength
                                            `- [info] available since OpenSSH 7.2
(key) ecdsa-sha2-nistp256                   -- [fail] using elliptic curves that are suspected as being backdoored by the U.S. National Security Agency
                                            `- [warn] using weak random number generator could reveal the key
                                            `- [info] available since OpenSSH 5.7, Dropbear SSH 2013.62
(key) ssh-ed25519                           -- [info] available since OpenSSH 6.5

# encryption algorithms (ciphers)
(enc) chacha20-poly1305@openssh.com         -- [info] available since OpenSSH 6.5
                                            `- [info] default cipher since OpenSSH 6.9
(enc) aes128-ctr                            -- [info] available since OpenSSH 3.7, Dropbear SSH 0.52
(enc) aes192-ctr                            -- [info] available since OpenSSH 3.7
(enc) aes256-ctr                            -- [info] available since OpenSSH 3.7, Dropbear SSH 0.52
(enc) aes128-gcm@openssh.com                -- [info] available since OpenSSH 6.2
(enc) aes256-gcm@openssh.com                -- [info] available since OpenSSH 6.2

# message authentication code algorithms
(mac) umac-64-etm@openssh.com               -- [warn] using small 64-bit tag size
                                            `- [info] available since OpenSSH 6.2
(mac) umac-128-etm@openssh.com              -- [info] available since OpenSSH 6.2
(mac) hmac-sha2-256-etm@openssh.com         -- [info] available since OpenSSH 6.2
(mac) hmac-sha2-512-etm@openssh.com         -- [info] available since OpenSSH 6.2
(mac) hmac-sha1-etm@openssh.com             -- [fail] using broken SHA-1 hash algorithm
                                            `- [info] available since OpenSSH 6.2
(mac) umac-64@openssh.com                   -- [warn] using encrypt-and-MAC mode
                                            `- [warn] using small 64-bit tag size
                                            `- [info] available since OpenSSH 4.7
(mac) umac-128@openssh.com                  -- [warn] using encrypt-and-MAC mode
                                            `- [info] available since OpenSSH 6.2
(mac) hmac-sha2-256                         -- [warn] using encrypt-and-MAC mode
                                            `- [info] available since OpenSSH 5.9, Dropbear SSH 2013.56
(mac) hmac-sha2-512                         -- [warn] using encrypt-and-MAC mode
                                            `- [info] available since OpenSSH 5.9, Dropbear SSH 2013.56
(mac) hmac-sha1                             -- [fail] using broken SHA-1 hash algorithm
                                            `- [warn] using encrypt-and-MAC mode
                                            `- [info] available since OpenSSH 2.1.0, Dropbear SSH 0.28

# fingerprints
(fin) ssh-ed25519: SHA256:---
(fin) ssh-rsa: SHA256:---

# algorithm recommendations (for OpenSSH 9.2)
(rec) -ecdh-sha2-nistp256                   -- kex algorithm to remove 
(rec) -ecdh-sha2-nistp384                   -- kex algorithm to remove 
(rec) -ecdh-sha2-nistp521                   -- kex algorithm to remove 
(rec) -ecdsa-sha2-nistp256                  -- key algorithm to remove 
(rec) -hmac-sha1                            -- mac algorithm to remove 
(rec) -hmac-sha1-etm@openssh.com            -- mac algorithm to remove 
(rec) !rsa-sha2-256                         -- key algorithm to change (increase modulus size to 3072 bits or larger) 
(rec) !rsa-sha2-512                         -- key algorithm to change (increase modulus size to 3072 bits or larger) 
(rec) -diffie-hellman-group14-sha256        -- kex algorithm to remove 
(rec) -hmac-sha2-256                        -- mac algorithm to remove 
(rec) -hmac-sha2-512                        -- mac algorithm to remove 
(rec) -umac-128@openssh.com                 -- mac algorithm to remove 
(rec) -umac-64-etm@openssh.com              -- mac algorithm to remove 
(rec) -umac-64@openssh.com                  -- mac algorithm to remove 

# additional info
(nfo) For hardening guides on common OSes, please see: <https://www.ssh-audit.com/hardening_guides.html>
(nfo) Be aware that, while this target properly supports the strict key exchange method (via the kex-strict-?-v00@openssh.com marker) needed to protect against the Terrapin vulnerability (CVE-2023-48795), all peers must also support this feature as well, otherwise the vulnerability will still be present.  The following algorithms would allow an unpatched peer to create vulnerable SSH channels with this target: chacha20-poly1305@openssh.com.  If any CBC ciphers are in this list, you may remove them while leaving the *-etm@openssh.com MACs in place; these MACs are fine while paired with non-CBC cipher types.

Part II: Mitigating Weak Cryptography

1. Hardening Diffie-Hellman (GEX) Moduli

I use awk to filter the /etc/ssh/moduli file so that only moduli of 3072 bits or more remain, eliminating low-strength moduli. The fifth column of the moduli file stores the modulus size minus one, which is why the filter reads >= 3071.

awk '$5 >= 3071' /etc/ssh/moduli > /etc/ssh/moduli.tmp && mv /etc/ssh/moduli.tmp /etc/ssh/moduli
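A quick verification that no weak moduli remain:

awk '$5 < 3071' /etc/ssh/moduli | wc -l   # should print 0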

2. Replacing Host Keys

I replace the default host keys with stronger ones, which is essential for modern security. The Ed25519 key is the current gold standard for host-key performance and security.

rm /etc/ssh/ssh_host_*
ssh-keygen -t rsa -b 4096 -f /etc/ssh/ssh_host_rsa_key -N ""
ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ""
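Note that all clients will warn about changed host keys after this step. The new fingerprints can be printed for out-of-band verification:

ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub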

3. Finalizing the SSHD Configuration

I explicitly define the allowed algorithms in a dedicated drop-in file (/etc/ssh/sshd_config.d/ciphers.conf) to ensure that only the secure algorithms identified by ssh-audit remain.

# /etc/ssh/sshd_config.d/ciphers.conf
# Ciphers

HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256

KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,sntrup761x25519-sha512@openssh.com,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512

Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr

MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com
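Before closing the session, validate and apply the configuration, then re-run the audit; a sketch assuming Debian's systemd service name:

sshd -t                  # syntax check of the complete configuration
systemctl restart ssh    # apply (keep the current session open as a fallback)
ssh-audit localhost      # the [fail]/[warn] algorithms should be gone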

Part III: Auditing Clients and Avoiding Pitfalls

1. Client-Side Hardening

I recommend applying a strong default set of algorithms to the client’s ~/.ssh/config file to reduce its attack surface, overriding weak defaults when connecting to external hosts.

# ~/.ssh/config

Host *
    HashKnownHosts yes
    HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-ed25519
    KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,diffie-hellman-group16-sha512,...
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,...
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com

2. Defense Tool Context (RKhunter and Lynis)

When using system auditing tools, I recognize that their checks can be incomplete or based on outdated assumptions.

  • RKhunter Context: RKhunter did not correctly read the active drop-in configuration (sshd_config.d), which confirms the necessity of manual verification.
  • Lynis Context: Changing port 22 or disabling modern compression offers minimal additional protection on systems already hardened with public-key authentication. Do not apply settings blindly.

3. Reducing Service Footprint (Defense-in-Depth)

Finally, I trim the information exposed in the SSH banner for minor defense-in-depth: DebianBanner no removes the Debian patch-level suffix from the version string, and Banner none disables the pre-authentication text banner.


# /etc/ssh/sshd_config.d/other.conf
Banner none
DebianBanner no

This reduces the public information available to automated scanners.

Sources / See Also

  1. Mozilla Security Guidelines. OpenSSH Recommended Configuration. https://wiki.mozilla.org/Security/Guidelines/OpenSSH
  2. SSH-Audit. SSH Hardening Guides for Common OSes. https://www.ssh-audit.com/hardening_guides.html
  3. OpenSSH. Release Notes for OpenSSH 7.4 (Removal of pre-auth compression). https://www.openssh.com/txt/release-7.4
  4. OpenSSH. Release Notes for OpenSSH 4.2 (Delayed compression). https://www.openssh.com/txt/release-4.2
  5. GitHub (OpenSSH Portable). Source Code Reference for GEX fallback mechanism. https://github.com/openssh/openssh-portable/blob/V_9_4/dh.c#L477
  6. CISOfy Lynis. Control Reference for SSH Hardening (SSH-7408). https://cisofy.com/controls/SSH-7408/

Paperless-NGX Setup: Installation, Security, and NGINX Integration

When I read about paperless-ngx, I was immediately drawn to the idea of having all my documents indexed (via OCR) and centrally stored. With a proper tagging system, exporting my documents for my annual tax declaration should only take seconds.

The installation procedure is straightforward but contains several critical security pitfalls that must be addressed, especially when integrating a reverse proxy. Here are my notes on setting up Paperless-NGX in Debian 12 Bookworm.

Part I: Installation and Secure User Setup

1. Install Docker Engine

Please consult the official Docker documentation for the installation of the Docker Engine.

2. Add a Dedicated, Unprivileged User

The safest approach is to use a dedicated system user. This ensures the application does not run with root privileges, even if the installation script or containers were ever compromised.

# 1. Create dedicated system user 'paperless'
adduser paperless --system --home /opt/paperless --group

# 2. Grant the user permissions to use Docker
usermod -aG docker paperless

3. Run the Install Script Securely

Execute the official install script using the newly created, unprivileged paperless user by leveraging sudo -Hu paperless.

sudo -Hu paperless bash -c "$(curl --location --silent --show-error https://raw.githubusercontent.com/paperless-ngx/paperless-ngx/main/install-paperless-ngx.sh)"

My Configuration Settings during the script:

Setting             | Recommended Value             | Rationale
URL                 | https://documents.example.com | Necessary for reverse-proxy and SSL configuration.
Database backend    | postgres                      | Recommended for production; better performance than SQLite.
Enable Apache Tika? | yes                           | Required for indexing complex document types (Word, Excel, PowerPoint).
OCR language        | deu+eng+fra+ara               | Caution: each language increases resource usage. Choose only what is necessary.

Part II: Configuration and Container Management (Beginner Guide)

1. Modifying Configuration (docker-compose.env)

The environment variables are managed via the docker-compose.env file located in the installation directory (/opt/paperless/paperless-ngx/).

I recommend immediately setting the following variables, which are essential for security and functionality:

PAPERLESS_URL=https://documents.example.com
PAPERLESS_SECRET_KEY=------------USE-A-LONG-CRYPTIC-RANDOM-KEY----------------
PAPERLESS_OCR_LANGUAGE=ara+deu+eng+fra
# Note the space-separated syntax here vs. the plus signs above
PAPERLESS_OCR_LANGUAGES=ara deu eng fra
PAPERLESS_CONSUMER_RECURSIVE=true
PAPERLESS_PORT=8000
  • OCR Note: Be sure to set both variables (_LANGUAGE and _LANGUAGES) as the syntax requirements for the Tesseract engine and the Docker Compose files differ.
  • CONSUMER_RECURSIVE: Set to true to allow dropping folders into the consume directory.
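The secret key should be long and random. One way to generate a suitable value (an assumption; any cryptographically strong random source works):

# Generates a 64-character random string for PAPERLESS_SECRET_KEY
openssl rand -base64 48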

2. Container Management: Start, Stop, and Update

For users new to Docker, knowing the exact commands for managing the environment after configuration changes is essential.

First, navigate to the directory containing the configuration files:

# cd /opt/paperless/paperless-ngx/

Stop and Restart (After configuration change):

root@paperless:/opt/paperless/paperless-ngx# sudo -Hu paperless docker compose down
[+] Running 6/6
 ✔ Container paperless-webserver-1  Removed                                                   6.9s 
...
root@paperless:/opt/paperless/paperless-ngx# sudo -Hu paperless docker compose up -d
[+] Running 6/6
 ✔ Network paperless_default        Created                                                   0.1s 
...
 ✔ Container paperless-webserver-1  Started                                                   0.0s

Update (Pulling new container images):

root@paperless:/opt/paperless/paperless-ngx# sudo -Hu paperless docker compose down
root@paperless:/opt/paperless/paperless-ngx# sudo -Hu paperless docker compose pull
[+] Pulling 35/22
...
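After the pull completes, start the stack again; the freshly pulled images are used automatically:

root@paperless:/opt/paperless/paperless-ngx# sudo -Hu paperless docker compose up -d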

Part III: Critical Security Fix and NGINX Integration

1. CRITICAL SECURITY FLAW: Port Exposure Fix

The default installation (as of writing this article: December 17, 2023) does not bind the Paperless-NGX webserver (port 8000) to localhost (127.0.0.1). This means that without a strict host firewall, the Paperless login page is reachable from the internet on port 8000.

Proof of Exposure: A netstat check shows global listening:

tcp        0      0 0.0.0.0:8000            0.0.0.0:* LISTEN

The Fix: You must edit the ports directive in the docker-compose.yml to explicitly set the binding to 127.0.0.1.

# /opt/paperless/paperless-ngx/docker-compose.yml (webserver section)
    ports:
      # CRITICAL: Only the localhost can reach Port 8000 on the host.
      - "127.0.0.1:8000:8000" 

2. NGINX SSL/TLS Basic Hardening

Since Paperless-NGX handles sensitive personal documents, a strong TLS configuration is mandatory. I suggest using the Mozilla SSL Configuration Generator as a reference for modern best practices.

Recommendations:

  • ECDSA Certificates: Use ECDSA certificates (e.g., secp384r1) over legacy RSA keys for better performance and security.
  • HSTS: Implement Strict-Transport-Security (HSTS) to force browsers to always use HTTPS.
  • TLS Protocol: Use ssl_protocols TLSv1.3; to ensure only the most current and secure protocol is allowed.

3. Header Management and Inheritance Logic

A common pitfall with NGINX is the add_header directive. If you use even one add_header directive within a location {} block, it overrides/disables all header inheritance from the parent server {} block.

This means if you add the Referrer-Policy header in your location / {} block, you must re-declare all other global headers (like HSTS and other security headers) there as well.
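A minimal sketch of the pitfall (hypothetical excerpt, not the final site configuration):

server {
  # Inherited by every location that declares no add_header of its own:
  add_header Strict-Transport-Security "max-age=63072000" always;

  location / {
    # This single directive disables ALL inherited headers for this location ...
    add_header Referrer-Policy "strict-origin-when-cross-origin";
    # ... so HSTS (and every other global header) must be re-declared here:
    add_header Strict-Transport-Security "max-age=63072000" always;
  }
}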

4. Essential Security Headers

To ensure defense against common web attacks, I use a separate headers.conf file:


# headers.conf in /etc/nginx/conf.d/
add_header X-Frame-Options "SAMEORIGIN" always;         # Clickjacking Defense
add_header X-Content-Type-Options "nosniff" always;    # MIME-Sniffing Defense
add_header X-XSS-Protection "0" always;                # Disables obsolete browser protection
add_header Permissions-Policy "camera=(), microphone=()" always; # Prevents browser access to peripherals

5. Content Security Policy (CSP)

CSP is the most crucial defense against Cross-Site Scripting (XSS). Paperless-NGX’s UI uses inline scripts and styles, which complicate the policy.

The following CSP is a working compromise, allowing essential inline elements while blocking common injection points. I strongly suggest using the developer console to check for any blocked resources after implementation.


# Functional CSP for paperless-ngx
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; img-src data: 'self'; upgrade-insecure-requests" always;

Note: Using 'unsafe-inline' is often necessary for applications that have not fully adopted modern CSP practices.

6. Blocking Search Engine Indexers (robots.txt)

Since this is a system for private documents, we must prevent all search engines and indexing services from crawling or indexing the instance, regardless of the login protection.

This is easily achieved in NGINX without creating a file on the disk:


location = /robots.txt {
  add_header Content-Type text/plain;
  return 200 "User-agent: AdsBot-Google\nUser-agent: *\nDisallow: /\n";
}
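A quick check after reloading NGINX confirms the virtual file is served:

curl -s https://documents.example.com/robots.txt
# User-agent: AdsBot-Google
# User-agent: *
# Disallow: /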

Part IV: Final Site Configuration

The final NGINX site configuration combines all security requirements (HSTS, Headers, robots.txt) and correctly proxies to the secure loopback address.


server {
  server_name documents.example.com;

  add_header Strict-Transport-Security "max-age=63072000" always;
  add_header Referrer-Policy "strict-origin-when-cross-origin";
  include conf.d/headers.conf; # Includes basic security headers

  location = /robots.txt {
    add_header Content-Type text/plain;
    return 200 "User-agent: AdsBot-Google\nUser-agent: *\nDisallow: /\n";
  }

  location / {
    proxy_pass http://localhost:8000/;

    # Required headers for secure proxying and WebSockets
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # ... other proxy settings ...
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_redirect off;
  }
  
  # TLS Configuration
  listen 443 ssl; 
  ssl_certificate /etc/letsencrypt/live/.../fullchain.pem; 
  ssl_certificate_key /etc/letsencrypt/live/.../privkey.pem; 
  ssl_trusted_certificate /etc/letsencrypt/live/.../chain.pem;
}

# HTTP Redirect
server {
  listen 80;
  server_name documents.example.com;
  return 301 https://$host$request_uri;
}

Part V: Further Hardening Suggestions

To move beyond the basic secure setup, I suggest investigating these advanced hardening techniques:

  • Authentication: Implement an external authentication proxy layer such as Authelia or Keycloak to enforce Multi-Factor Authentication (MFA) before the Paperless-NGX login page. Goal: Zero Trust; stop brute-force attacks before they reach the application.
  • Rate Limiting: Configure Fail2ban to monitor the NGINX access logs for login failures and automatically block the source IP. Goal: brute-force defense at the network/IP layer.
  • Protocol Security: If all client devices are modern, disable TLSv1.2 completely to enforce TLSv1.3 only. Goal: eliminate older, potentially vulnerable crypto protocols.
  • Security Headers: Implement strict CORS (Cross-Origin Resource Sharing) headers to prevent the Paperless instance from serving resources to unauthorized external domains. Goal: defense against cross-origin attacks.

Sources / See Also

  1. Paperless-NGX Documentation. Installation Guide. https://docs.paperless-ngx.com/setup/
  2. Paperless-NGX Documentation. Advanced Tasks: Fail2ban. https://docs.paperless-ngx.com/advanced_tasks/#fail2ban
  3. Docker Documentation. Install Docker Engine. https://docs.docker.com/engine/install/debian/
  4. Mozilla SSL Configuration Generator. A reference tool for modern TLS configurations. https://ssl-config.mozilla.org/
  5. Scott Helme. Hardening Your HTTP Response Headers (X-Frame-Options, X-Content-Type-Options). https://scotthelme.co.uk/hardening-your-http-response-headers/
  6. Scott Helme. Content Security Policy – An Introduction. https://scotthelme.co.uk/content-security-policy-an-introduction/
  7. NGINX Documentation. Understanding the NGINX add_header Directive. http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header

Bogon Defense: Integrating Dynamic IP Blacklists into Suricata’s Reputation System

Bogon networks are IP address ranges that should never appear on the public internet, as they are either reserved or unassigned. Blocking these ranges is a fundamental and highly effective security measure. While this can be done with simple firewall rules, integrating the blocklist directly into the Suricata IP Reputation system is far more performant.

I rely on the lists provided by Team Cymru for both IPv4 and IPv6 bogons.

1. IPv6 Whitelisting: The Link-Local Caveat

When running an IPS/Firewall, one must be careful not to block essential local network traffic. The global IPv6 bogon list often includes the Link-Local and Multicast ranges (fe80::/10 and ff02::/16) because they fall under the wider 8000::/1 block.

Since blocking these addresses is incorrect and breaks internal IPv6 communication, a specific pass rule for ICMPv6 is required.

Suricata Pass Rule and RFC Reference

The rule uses ip_proto:58 (ICMPv6) and is carefully scoped. I use the Suricata reference system to document the source of the decision (RFC 4890).

Reference Configuration (/etc/suricata/reference.config):

config reference: rfc       https://datatracker.ietf.org/doc/html/

The Final ICMPv6 Whitelist Rule:

pass ip [fe80::/10,ff02::/16] any -> any any (msg:"Pass essential ICMPv6 Link-Local traffic"; ip_proto:58; reference:rfc,rfc4890; sid:10; rev:1;)

2. Implementing the IP Reputation System

Suricata’s IP Reputation system is a performant alternative to sequential firewall checks. It loads external IP lists into an internal hashmap, allowing for a single, fast lookup per packet.

Configuration Setup

  1. Enable IP Reputation: Uncomment the relevant sections in suricata.yaml and define the list files:
# IP Reputation
reputation-categories-file: /etc/suricata/iprep/categories.txt
default-reputation-path: /etc/suricata/iprep
reputation-files:
 - bogons-v4.list
 - bogons-v6.list
  2. Define the Category: Define a specific category for bogons in the categories.txt file. The number 1 is the category ID used in the final rule.
# /etc/suricata/iprep/categories.txt
1,Bogons,fullbogons list

The Bash Automation Script (IPv4 Example)

A robust Bash script is needed to fetch the lists and format the output into the Suricata-specific IP Reputation format (IP,categoryID,score).

#!/bin/bash
# ... (Source URL and File paths defined) ...

# 1. Fetch the list and check for changes
wget -q -O "$TMPIPREPFILE" "$SRCURL"
# ... (Diff check to prevent unnecessary updates) ...

# 2. Format and load the list
if [ -s "$TMPIPREPFILE" ]; then
  # Remove current list for atomic update
  if [ -f "$IPREPFILE" ]; then
    rm "$IPREPFILE"
  fi

  # Add each CIDR block with the category ID (1) and a score (10)
  while read -r NETWORK; do
    # Note: Score > 1 is needed to trigger the alert/drop rule
    echo "$NETWORK,1,10" >> "$IPREPFILE"
  done < <(grep -v "^#" "$TMPIPREPFILE")
fi

Note: Each network is loaded with a score of 10; the rules below fire when a source's reputation score in the Bogons category exceeds 1, so every listed network matches.
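To keep the lists current, the script needs to run periodically and Suricata must re-read the reputation files. A sketch of a cron entry (the script path is an assumption; depending on the Suricata version, a rule reload via suricatasc may suffice, otherwise a service restart is required):

# /etc/cron.d/suricata-bogons (sketch; adjust paths to your setup)
30 3 * * * root /usr/local/sbin/update-bogons.sh && suricatasc -c reload-rules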

3. The Final Detection and Prevention Rules

The final rule leverages the iprep keyword to check the source IP against the newly loaded Bogon list.

Detection Rule (Testing Phase)

The detection rule is used first to verify the configuration and observe traffic without blocking. The rule is triggered if the source IP’s reputation is in the Bogons category (category ID 1) and the score is greater than 1.

# Use this first to see what it would drop.
alert ip $EXTERNAL_NET any -> $HOME_NET any (msg:"DROP FullBogons listed."; iprep:src,Bogons,>,1; sid:11; rev:1;)

Prevention Rule (Active Defense)

Once testing is complete, the rule is switched to drop for active prevention.

# Use this for active IPS defense.
drop ip $EXTERNAL_NET any -> $HOME_NET any (msg:"DROP FullBogons listed."; iprep:src,Bogons,>,1; sid:11; rev:1;)

4. Verification

Verification confirms the lists are loaded and the counts are correct. The vast number of IPv6 bogons (142054) highlights the importance of this protection layer.

root@fw2:/etc/suricata/iprep# wc -l *
674 bogons-v4.list
142054 bogons-v6.list
1 categories.txt
142729 total

# Suricata log confirming load:
[Info] - reputation: Loading reputation file: /etc/suricata/iprep/bogons-v4.list
[Info] - reputation: Loading reputation file: /etc/suricata/iprep/bogons-v6.list

Sources / See Also

  1. Team Cymru. Bogon Networks Reference. https://www.team-cymru.com/bogon-networks
  2. Team Cymru. List of Unallocated IPv4 Address Space. http://www.team-cymru.org/Services/Bogons/fullbogons-ipv4.txt
  3. Team Cymru. List of Unallocated IPv6 Address Space. http://www.team-cymru.org/Services/Bogons/fullbogons-ipv6.txt
  4. RFC 4890. Recommendations for ICMPv6 Traffic. https://datatracker.ietf.org/doc/html/rfc4890
  5. Suricata Documentation. IP Reputation Configuration. https://docs.suricata.io/en/latest/configuration/ip-reputation.html
  6. Suricata Documentation. Working with Suricata-Update (Rule Management). https://suricata-update.readthedocs.io/en/latest/update.html

Suricata AF-Packet: Resolving VirtIO Non-Functionality via Checksum Offload Disablement

This article documents a two-part process: successfully upgrading Suricata to version 7 on Debian Bookworm and solving a critical stability issue required to run the AF-Packet IPS mode with high-performance VirtIO NICs in a virtual machine. Without this specific configuration, the IPS failed to function.

Part I: Suricata 7 Upgrade and Policy Changes

A much newer Suricata version can be installed by utilizing Debian’s bookworm-backports repository, which is essential for access to the latest security features and performance enhancements.

The Backports Installation

  1. Ensure the backports repository is configured in your /etc/apt/sources.list:

    deb https://ftp.debian.org/debian/ bookworm-backports contrib main non-free non-free-firmware
  2. Install Suricata using the specific target:

    apt-get install -t bookworm-backports suricata

Post-Upgrade Security Alert (Critical)

After upgrading to Suricata 7, you may experience immediate traffic blocking. This is not a bug, but a deliberate change in the application’s default security posture.

  • Reason: Suricata 7 introduced new policy rules that are often set to drop by default.
  • Action: You must review your new suricata.yaml configuration. The recommended approach is to install the new configuration files, compare them with your old setup, and set unwanted policies to ignore.

Reference: This new behavior is explicitly documented in the official Suricata 7 Changelog. Consult the Suricata FAQ for troubleshooting details on blocking issues.
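A hedged sketch of the relevant suricata.yaml knobs (key names as documented for Suricata 7; verify them against your newly installed configuration files):

# suricata.yaml (sketch): relax the new default policies only after reviewing them
exception-policy: ignore      # master switch for the new exception policies
stream:
  memcap-policy: ignore
  midstream-policy: ignore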

Part II: The VirtIO and AF-Packet Critical Failure Fix

When using Suricata in IPS mode with the high-performance AF-Packet acquisition method, using VirtIO NICs is preferred. However, without a specific Libvirt configuration, the IPS fails entirely to process bridged traffic.

The Problematic Default VirtIO Config

If the VirtIO NIC is defined simply with <model type='virtio'/> in the Libvirt XML, AF-Packet fails to initialize or correctly process traffic.

The Solution: Disabling Guest Checksum Offload

The fix requires overriding the default driver settings by introducing the <driver> block and explicitly setting checksum (csum) offloading to off for the guest system.

This solution was found while troubleshooting similar packet loss issues in a thread related to XDP drivers in RHEL environments, suggesting a common kernel/driver interaction problem with aggressive offloading features.

The minimal required working Libvirt XML configuration looks like this:

    <interface type='bridge'>
      <mac address='..:..:..:..:..:..'/>
      <source bridge='ovs-guests'/>
      <virtualport type='openvswitch'>
      </virtualport>
      <model type='virtio'/>      
      <driver name='vhost'>
        <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
        <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>

Crucial Insight: The key fix is the parameter csum='off' within the <guest/> tag. If checksum offloading is left enabled (csum='on'), the system fails to bridge traffic completely.
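The effect can be verified from inside the guest; with the XML above, the offload features should report as off (a sketch, assuming the guest interface is named enp1s0):

ethtool -k enp1s0 | grep -E 'checksumming|segmentation'
# The checksumming and segmentation-offload features should report as 'off'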

Part III: The Deep Dive: Why Checksum Offload Causes Complete Failure

Here is the rationale for why Checksum Offload (CSUM) leads to complete non-functionality:

1. The CSUM Optimization Paradigm (CSUM=’on’)

When you set csum='on', you are performing a performance optimization aimed at saving CPU cycles:

  • The Host/Hypervisor receives packets and passes them to the VirtIO Driver (Vhost).
  • The Vhost Driver passes the packets into the VirtIO Ring in the Guest System, but marks them with a special flag (e.g., in the skb—Socket Buffer—metadata) signaling to the Guest Kernel: “Attention, the L3/L4 checksum is invalid/missing and must be corrected or calculated before further processing up the stack.”
  • This is a performance trick: the CPU-intensive checksum calculation is delegated to the Guest Kernel, but only when it is truly necessary.

2. The Collision Point: AF-Packet Bypass

Suricata using AF-Packet now bypasses precisely this process:

  • AF-Packet is a very low-level packet capture method. It operates directly above the driver (or in the kernel) and fetches the raw L2 frames directly from the VirtIO Ring.
  • Suricata receives the packet at a point before the standard kernel stack has performed the checksum finalization.
  • Suricata’s Deep Packet Inspection (DPI) engine relies on the integrity of the Layer 3/Layer 4 headers (e.g., to check the TCP segment length, track the TCP state machine, or evaluate the validity of IP headers).
  • The Non-Functionality: Since Suricata receives a packet with the “Checksum missing/invalid” flag, it interprets this not as an optimization instruction, but as a critical error in the packet itself (Corrupted Packet).

3. The Resolution (CSUM=’off’)

By explicitly setting <guest csum='off'>, we force the Host/Vhost Driver to deliver the packets to the Guest as if they were ‘normal’ Ethernet frames that already contain all checksums. Suricata therefore only sees complete, consistent packets and can apply the DPI logic without error.


Sources / See Also

  1. Suricata Documentation. Suricata 7 Changelog (Note new policy behavior). https://suricata.io/changelog/
  2. Suricata Documentation. FAQ: Traffic gets blocked after upgrading to Suricata 7. https://suricata-update.readthedocs.io/en/latest/faq.html#my-traffic-gets-blocked-after-upgrading-to-suricata-7
  3. Suricata Documentation. Working with AF-Packet. https://docs.suricata.io/en/latest/install/af-packet.html
  4. Libvirt Documentation. VirtIO Device Configuration (Driver Offload Parameters). https://libvirt.org/formatdomain.html#elementsNICS
  5. Debian Wiki. Instructions for using Debian Backports. https://wiki.debian.org/Backports
  6. Suricata Community Forums. Troubleshooting references for XDP/Packet Loss (Context for driver tuning). https://forum.suricata.io/
  7. Linux Networking. Understanding the Checksum Offload Mechanism. https://www.kernel.org/doc/Documentation/networking/checksum-offloads.txt

Automating IPS: Real-Time Suricata Rule Generation via Fail2ban Hook

In my last posts, I established a central syslog hub feeding Fail2ban and demonstrated Suricata as an intrusion prevention system (IPS). This final piece connects the two: feeding Suricata with the ban results from Fail2ban by creating a dynamic, external rule file.

This process is highly automated, but requires robust Bash scripting and careful handling of security context.

1. Fail2ban Action and Scripting Logic

The core idea is to replace Fail2ban’s default firewall action with a custom script that modifies a public rule file.

Custom Action Definition

The actionban and actionunban directives in /etc/fail2ban/action.d/firewall.conf point to simple Bash wrappers.

[Definition]

actionstart = 
actionstop =

actioncheck =

actionban = /var/www/f2b/bin/ban.sh <ip>
actionunban = /var/www/f2b/bin/unban.sh <ip>
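For completeness, the jails have to reference this action. A hypothetical jail.local excerpt, assuming the action file is saved as /etc/fail2ban/action.d/firewall.conf:

# /etc/fail2ban/jail.local (sketch)
[sshd]
enabled = true
action  = firewall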

Security-Hardened ban.sh Script

The ban script must: (1) validate the input, (2) generate a unique Signature ID (SID), (3) append the rule, and (4) atomically update the ruleset’s MD5 checksum for Suricata-Update to fetch the change.

#!/bin/bash
#
# ban.sh: Adds a banned IP to the Fail2ban Suricata ruleset.

IP="$1"
RULESFILE="/var/www/f2b/htdocs/fail2ban.rules"
MSG="BRUTE-FORCE detected by fail2ban"

# INPUT VALIDATION: Ensure the input is a valid IPv4/IPv6 address.
# Note: this IPv6 pattern only matches fully expanded addresses;
# extend it if your jails report compressed (::) notation.
if ! [[ $IP =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ || $IP =~ ^([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}$ ]]; then
    echo "ERROR: Invalid IP address received: $IP" >&2
    exit 1
fi

# 1. Generate Unique SID (Timestamp + Counter)
TSTAMP=$(date +%s)
CNT=$(wc -l $RULESFILE | cut -d' ' -f1);
# Generate a SID in a high range to avoid conflicts with commercial rules (e.g., 90000000+)
SID=$(($CNT + $TSTAMP + 90000000))

if ! grep -qF -- "$IP" "$RULESFILE"; then   # -F: match the IP literally, not as a regex
  # Rule: Drop all traffic from the banned IP to any network home port.
  # Using 'drop ip' is robust; adjust ports if required (e.g., $SSH_PORTS).
  RULE="drop ip $IP any -> \$HOME_NET any (msg:\"$MSG - $IP\"; sid:$SID; rev:1;)"

  echo "$RULE" >> "$RULESFILE"

  # Set correct permissions (Critical step for web delivery)
  chown www-data:www-data "$RULESFILE"

  # 2. Atomically update the MD5 checksum file
  SUM=$(md5sum "$RULESFILE" | cut -d' ' -f1);
  echo "$SUM" > "$RULESFILE.md5"
fi

The Unban Script (unban.sh)

The unban script removes the line and performs the critical MD5 update.

#!/bin/bash
#
# unban.sh: Removes a banned IP from the Suricata ruleset.

IP="$1"
RULESFILE="/var/www/f2b/htdocs/fail2ban.rules"

if grep -q "$IP" $RULESFILE; then
  # Use sed -i to remove the line containing the IP address
  sed -i '/'$IP'/d' $RULESFILE
  
  # Atomically update the MD5 checksum
  SUM=$(md5sum $RULESFILE | cut -d' ' -f1);
  echo $SUM > $RULESFILE.md5
fi

2. Integration and Verification

The final step is to make the ruleset publicly available (via HTTPS/SSL) and configure Suricata to fetch it.

Suricata-Update Configuration

The rule file (fail2ban.rules) must be made available via a web server (e.g., NGINX) with a specific URL (e.g., https://f2b.example.com/fail2ban.rules). I add this URL as a new source to Suricata-Update.

root@fw2:~# suricata-update add-source
URL: https://f2b.example.com/fail2ban.rules

# Running the update process
19/11/2023 -- 20:17:38 - <Info> -- Fetching https://f2b.example.com/fail2ban.rules.
 100% - 18344/18344                   
19/11/2023 -- 20:17:38 - <Info> -- Done.

Verification and Observability

Verification confirms that the new rules are loaded and actively dropping traffic. The log analysis command must be adapted to track these specific fail2ban drops.

# Awk command to filter and count dropped packets (Excerpt showing drop sources)
# awk '/Drop/{...}' fast.log | sort | uniq -c | sort -hr

   6505 IP dropped due to fail2ban detection
    638 ET DROP Dshield Block Listed Source 
    ...

This ensures a comprehensive, self-healing Incident Response Chain.

Sources / See Also

  1. Suricata Documentation. High-performance AF_PACKET IPS mode configuration and usage. https://docs.suricata.io/en/latest/install/af-packet.html
  2. Suricata Documentation. Working with Suricata-Update (Ruleset Management). https://suricata-update.readthedocs.io/en/latest/update.html
  3. Suricata Documentation. EVE JSON Output for Structured Logging. https://docs.suricata.io/en/latest/output/eve/eve-json-format.html
  4. Google Development. Google Perftools (TCMalloc) Documentation. https://github.com/google/gperftools
  5. Emerging Threats (Proofpoint). Information on the Emerging Threats Open Ruleset. https://www.proofpoint.com/us/security-awareness/blog/emerging-threats
  6. Elastic Stack (ELK) Documentation for Log Analysis. https://www.elastic.co/what-is/elk-stack
  7. Linux Manpage: ethtool (Network Offload Configuration). https://man7.org/linux/man-pages/man8/ethtool.8.html

Suricata Alert Analysis: Tuning Rules and Promoting Detection to Prevention

This is a follow-up to my last post in which I set up Suricata as an IPS. This article demonstrates how to effectively work with the Suricata engine—specifically, how I analyze its log output, silence unnecessary alerts, and promote specific detection rules to prevention rules.

1. Performance and Rule Management Setup

LibTCMalloc Integration

To enhance Suricata’s performance and stability, I preload Google’s TCMalloc library, which improves memory-allocation behavior.

  1. Install the library: apt-get install libtcmalloc-minimal4
  2. Edit the Systemd service (systemctl edit suricata) to preload the library:
# /etc/systemd/system/suricata.service.d/override.conf
[Service]
Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4"

Rule Update Path Management

I correct the Debian setup where the default rule path conflicts with the update path. I align the configuration to use the dedicated data directory (/var/lib/suricata/rules) for updates, simplifying maintenance.

  1. Edit /etc/suricata/suricata.yaml and set the default rule path: default-rule-path: /var/lib/suricata/rules
  2. I ensure that update.yaml is configured correctly, and remove all initial rules from /etc/suricata/rules to avoid duplicate-rules warnings.
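With the paths aligned, fetching and activating the rules becomes a routine cycle:

suricata-update                               # writes the merged ruleset to /var/lib/suricata/rules
suricata -T -c /etc/suricata/suricata.yaml    # test the configuration before applying
systemctl restart suricata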

2. Alert Analysis and Rule Tuning (Observability in Practice)

By default, Suricata operates as an IDS (Intrusion Detection System). The critical first step is analyzing the generated alerts (fast.log) to separate actual threats from alert noise.

Initial Alert Frequency Analysis

The following command provides a crucial initial overview by counting unique alert messages and sorting them by frequency. This step is essential to understand the top sources of load and noise.

# awk '{$1=""; $2=""; $3=""}1' fast.log | sed 's_\[\*\*\].*__g' | sed 's_ group [0-9]*__g' | sort | uniq -c | sort -h

# Log Analysis (Excerpt showing frequency)
[..]
    100    GPL RPC portmap listing UDP 111 
    103    SURICATA STREAM 3way handshake excessive different SYN/ACKs 
    176    ET SCAN Suspicious inbound to PostgreSQL port 5432 
    216    ET SCAN Suspicious inbound to mySQL port 3306 
    223    SURICATA UDPv4 invalid checksum 
    236    SURICATA STREAM Last ACK with wrong seq 
    241    GPL ICMP_INFO PING speedera 
    325    ET SCAN Suspicious inbound to MSSQL port 1433 
    ...
  12872    GPL ICMP_INFO PING *NIX 

The Decision to Silence Noise

Alerts like the simple GPL ICMP_INFO PING *NIX often provide no actionable security value and must be disabled to prevent log flooding. I disable logging of ping probes by identifying the specific Signature IDs (SIDs) and adding them to a custom disable.conf file.


# /etc/suricata/disable.conf (Excerpt for ICMP PINGs)
# Disabled ping logging
2100366
...
2100480 

3. Promotion to IPS: Hardening the Drop Policy

For the system to transition from passive detection to active prevention (IPS), specific detection rules must be promoted to drop rules.

I promote the ET DROP Dshield Block Listed Source rule, as it targets known hostile IPs, by adding its SID to drop.conf.

# /etc/suricata/drop.conf
# Rules matching SIDs in this file will be converted to drop rules.
2402000 # SID for 'ET DROP Dshield Block Listed Source'

After running suricata-update, the engine confirms the change: -- Dropped 1 rules.

Verifying the Drop (Active Defense Check)

I verify the success of the active drop policy by specifically filtering for dropped packets in the logs.

# Command to output only dropped packets, showing the specific rule that triggered the block:
# awk '/Drop/{...}' fast.log | sort | uniq -c | sort -hr
# Example Output:
   6505 IP dropped due to fail2ban detection
    638 ET DROP Dshield Block Listed Source 

4. Advanced Rule Tuning: Leveraging Variables and Custom Logic

My advice is to use the variables whenever possible. By ensuring that network variables ($HOME_NET, $SMTP_SERVERS, etc.) correctly reflect your environment, you maximize the accuracy of existing rules. This prevents false positives and improves performance.

Enhancing Accuracy with Custom Rules

It’s crucial not just to disable bad rules, but to write custom rules that leverage these network variables for precise defense.

Example: Traffic Segregation Rule

To save resources, I would write a custom rule that only inspects for a vulnerability (e.g., a specific HTTP exploit) when the traffic comes from the external network and is destined for the correct server type.

# Example: Only check for sensitive SQL traffic if it comes from the EXTERNAL net.
# This prevents wasting resources checking internal-to-internal traffic.
# alert tcp $EXTERNAL_NET any -> $SQL_SERVERS 3306 (msg:"ET Custom: External Access to SQL Port"; ...)

This ensures that network resources are conserved by avoiding redundant checks on internal traffic.

5. Modern Analysis: Migrating from Bash to Structured Data

While the Bash pipeline is functional, high-traffic environments quickly overwhelm it. For modern Observability and SecOps analysis, the logs must be processed as structured data.

Migrating to EVE JSON

Suricata can output events in the EVE JSON format, which is ideal for ingestion into systems like Elasticsearch (ELK) or Splunk. This eliminates the slow and unreliable Bash parsing of fast.log.

Configuration Change (in suricata.yaml):

To migrate from the legacy fast.log format, you simply need to enable the EVE logger in your configuration.

# Output module setup in suricata.yaml
outputs:
  - eve-log:
      enabled: yes
      filename: eve.json
      # Other settings (e.g., adding flow/metadata fields)

Python for High-Performance Analysis

Instead of relying on slow awk and sed pipelines, I recommend using Python for high-performance log analysis. Python’s built-in json module reads and aggregates large eve.json files far more efficiently and reliably, as shown in the sketch below. This elevates the analysis layer of the architecture to a production standard.
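A minimal sketch of such an analysis, replicating the earlier frequency count on eve.json (the log path is the Debian default and an assumption):

#!/usr/bin/env python3
# Count alert signatures in eve.json, most frequent last (like `sort | uniq -c | sort -h`).
import json
from collections import Counter

counts = Counter()
with open("/var/log/suricata/eve.json") as fh:
    for line in fh:                    # EVE writes one JSON object per line
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("event_type") == "alert":
            counts[event["alert"]["signature"]] += 1

for signature, count in reversed(counts.most_common()):
    print(f"{count:8d}  {signature}")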

Sources / See Also

  1. Suricata Documentation. High-performance AF_PACKET IPS mode configuration and usage. https://docs.suricata.io/en/latest/install/af-packet.html
  2. Suricata Documentation. Working with Suricata-Update (Ruleset Management). https://suricata-update.readthedocs.io/en/latest/update.html
  3. Suricata Documentation. EVE JSON Output for Structured Logging. https://docs.suricata.io/en/latest/output/eve/eve-json-format.html
  4. Google Development. Google Perftools (TCMalloc) Documentation. https://github.com/google/gperftools
  5. Emerging Threats (Proofpoint). Information on the Emerging Threats Open Ruleset. https://www.proofpoint.com/us/security-awareness/blog/emerging-threats
  6. Elastic Stack (ELK) Documentation for Log Analysis. https://www.elastic.co/what-is/elk-stack
  7. Linux Manpage: ethtool (Network Offload Configuration). https://man7.org/linux/man-pages/man8/ethtool.8.html

Suricata IPS: Building a Transparent Network Defense Layer with AF-Packet Bridging

Suricata functions as a powerful engine for Network Intrusion Detection and Prevention (IDS/IPS). This guide demonstrates how to set up Suricata as a transparent Intrusion Prevention System (IPS) within a KVM environment by replacing the kernel bridge with the high-performance AF-Packet mechanism.


Automated Defense: Building a Central Log Hub for Fail2ban and External Firewall Integration

A very light-weight and efficient approach for consolidating logs centrally is by using rsyslog. My virtual machines all use rsyslog to forward their logs to a dedicated internal virtual machine, which acts as the central log hub. A fail2ban instance on this hub checks all incoming logs and sends a block command to an external firewall—a process helpful for automated security.
