Suricata is a Network Intrusion Detection and Prevention System as well as a Network Security Monitoring engine. I am currently using Suricata as an IPS, and here I’ll show you how to set it up.
Bridging without Suricata
Imagine you have all your virtual machines on the host in one bridge and your host-device (which is connected to the switch in the datacenter / your uplink) in another bridge.
    Bridge ovs-guests
        Port veth104
            Interface veth104
        Port ovs-guests
            Interface ovs-guests
                type: internal
    Bridge ovs-host
        Port eno1
            Interface eno1
        Port ovs-host
            Interface ovs-host
                type: internal
    ovs_version: "3.1.0"
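If you have not created these bridges yet, a minimal Open vSwitch sketch could look like this (the bridge names are taken from the output above; adjust eno1 to your own uplink device):

```shell
# Create the two bridges (assumes openvswitch-switch is installed)
ovs-vsctl add-br ovs-host
ovs-vsctl add-br ovs-guests

# Attach the physical uplink to the host bridge
ovs-vsctl add-port ovs-host eno1

# Verify the result
ovs-vsctl show
```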
Here you can see that my virtual machine with ID 104 has its veth connected to the Open vSwitch bridge ovs-guests, while the uplink / host device eno1 is connected to the bridge ovs-host. You can do the same without Open vSwitch by creating a bridge called vmbr0 and a bridge called vmbr1 using bridge-utils, adding eno1 (your host device) to vmbr0, and configuring your virtual machines to attach themselves to vmbr1. In the following I will always refer to ovs-host and ovs-guests.
At this stage your virtual machines don’t have any internet connectivity: you either need to add the host device to the guest bridge (ovs-guests), or you need some other way to connect them.
You can start by creating a virtual machine which you give the very creative name firewall. This firewall should have two network interface cards – one connected to ovs-host and one connected to ovs-guests:
    Bridge ovs-guests
        Port veth104
            Interface veth104
        Port ovs-guests
            Interface ovs-guests
                type: internal
        Port veth109p2
            Interface veth109p2
    Bridge ovs-host
        Port eno1
            Interface eno1
        Port ovs-host
            Interface ovs-host
                type: internal
        Port veth109p1
            Interface veth109p1
    ovs_version: "3.1.0"
The firewall VM (here ID 109) has its first card (veth109p1) in ovs-host, which also contains the host device eno1. The same virtual machine has its second card (veth109p2) in ovs-guests, to which all virtual machines are connected.
Let’s assume the devices in your firewall virtual machine are called ens3 and ens5, where ens3 is the one connected to the ovs-host bridge and ens5 the one connected to ovs-guests. You can simply bridge those two devices using bridge-utils within the virtual machine:
iface ens3 inet manual
iface ens5 inet manual

auto br0
iface br0 inet static
    address 0.0.0.0
    bridge-vlan-aware yes
    bridge_ports ens3 ens5
    bridge_stp off
    bridge_fd 0
    bridge_waitport 0
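Once the interfaces file is in place, you can bring the bridge up and check that both ports joined it. A quick sanity check, assuming bridge-utils and iproute2 are installed in the VM:

```shell
# Bring up the bridge defined in /etc/network/interfaces
ifup br0

# bridge-utils view: br0 should list ens3 and ens5 as ports
brctl show br0

# iproute2 equivalent
bridge link show
```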
Your virtual machines should have internet connectivity now, provided you configured an IP address inside them. Remember, this is not a NAT setup or the like, so you have to configure your VMs with public IPs.
This allows you to use ebtables / nftables to filter traffic to and from your virtual machines in a transparent way, because all traffic passes through this bridge. For example, to block an IP using nftables in this setup, you would issue something like this:
nft add table bridge myt
nft add chain bridge myt myc '{ type filter hook forward priority 0; policy accept; }'
nft add rule bridge myt myc iif ens3 ether type ip ip saddr $IP counter drop
Just substitute an IP address for $IP and you’re done. Of course this also allows other kinds of filtering, and if you only want detection you could try adding Suricata or other tools to br0 in this virtual machine.
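To inspect what you have added, and to remove the rule again later, nft assigns each rule a handle:

```shell
# Show the bridge table including rule handles
nft -a list table bridge myt

# Delete a rule by its handle (replace 4 with the handle printed above)
nft delete rule bridge myt myc handle 4
```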
Bridging using Suricata
I assume you have already installed Suricata using apt-get install suricata. The next step is to edit your network configuration: remove the bridge br0 which I’ve shown above. I’m using:
auto ens3
iface ens3 inet manual
    pre-up ip link set ens3 up
    pre-up /root/disable-offload.sh ens3

auto ens5
iface ens5 inet manual
    pre-up ip link set ens5 up
    pre-up /root/disable-offload.sh ens5
Attention: If you are using VirtIO for your network interfaces, Suricata will likely not work. I had trouble getting Suricata to work with VirtIO enabled for the network interface cards. Switch to e1000 for ens3 and ens5. Your management NIC can still be VirtIO, but the two cards which you bridge using AF_PACKET may require e1000 instead. Update: I got VirtIO working with AF_PACKET.
I wrote a tiny script which disables the various offloading features, because those seem to be problematic with Suricata.
disable-offload.sh:
#!/bin/bash
# Disable NIC offloading features that interfere with Suricata's AF_PACKET capture.
DEV="$1"
for i in rx tx tso gso gro lro sg txvlan rxvlan; do
    /sbin/ethtool -K "$DEV" "$i" off &>/dev/null
done
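After making the script executable, you can run it by hand and confirm that the features are off with ethtool’s lowercase -k flag:

```shell
chmod +x /root/disable-offload.sh
/root/disable-offload.sh ens3

# Show the resulting offload state; the listed features should read "off"
ethtool -k ens3 | grep -E 'segmentation-offload|generic-receive-offload'
```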
Now you just need to edit /etc/suricata/suricata.yaml and modify the af-packet section like this:
af-packet:
  - interface: ens3
    threads: 1
    cluster-id: 98
    cluster-type: cluster_flow
    copy-mode: ips
    copy-iface: ens5
    defrag: no
    tpacket-v3: no
    ring-size: 2048
    use-mmap: yes
  - interface: ens5
    threads: 1
    cluster-id: 97
    cluster-type: cluster_flow
    copy-mode: ips
    copy-iface: ens3
    defrag: no
    tpacket-v3: no
    ring-size: 2048
    use-mmap: yes
I would also suggest searching for HOME_NET in this configuration file and adding your public IP network(s) to it. Then fetch some rules for Suricata:
suricata-update -o /etc/suricata/rules
This requires that your firewall has internet connectivity. You may add a third network device to this virtual machine, placed in the host bridge, which you can use as a management device / connection. Finally, start Suricata:
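Before starting the service, it is worth validating the configuration and the downloaded rules; Suricata ships a config-test mode for exactly this:

```shell
# -T runs Suricata in config-test mode and exits; -v adds verbose output
suricata -T -c /etc/suricata/suricata.yaml -v
```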
systemctl start suricata
In /var/log/suricata/suricata.log you should see something like this:
<Info> - AF_PACKET IPS mode activated ens3->ens5
<Info> - Going to use 1 thread(s)
<Info> - AF_PACKET IPS mode activated ens5->ens3
<Info> - Going to use 1 thread(s)
<Info> - Found an MTU of 1500 for 'ens5'
<Info> - Found an MTU of 1500 for 'ens3'
<Info> - Using unix socket file '/var/run/suricata-command.socket'
<Notice> - all 2 packet processing threads, 4 management threads initialized, engine started.
<Info> - All AFP capture threads are running.
Congratulations, you’ve set up Suricata. As you may have noticed, the first interface ens3 copies (copy-iface) to ens5, while the second interface ens5 copies (copy-iface) to ens3. This is exactly what makes this setup work like a bridge.
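To verify that detection actually works end to end, you can trigger a harmless test alert from one of the guest VMs behind the bridge. The Emerging Threats ruleset contains a signature matching the response of testmyids.com, assuming that rule source is included in the rules you downloaded:

```shell
# On a guest VM behind the firewall:
curl http://testmyids.com/

# On the firewall VM, an alert should then appear:
tail /var/log/suricata/fast.log
```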