While I'm a huge fan of shenanigans and silly hijinks, there are some things in production that should be just plain boring. Solid functionality that just works.

Yeah? Well pfSense 2.4.5p1 does that. The issue is the drama that recently occurred. The tl;dr is that Netgate dun goofed by paying someone to shoddily implement WireGuard in the FreeBSD kernel. Arguably they can't be blamed for the work of that dev, but the WireGuard developers had reached out numerous times to assist and were ignored. No idea why Netgate would do that, but it seems like a big failing on their part. Eventually, the WireGuard implementation was (rightly) yanked. BUT WAIT, there's more. pfSense seems to be moving to a closed source option, which, combined with the complaints that pfSense 2.4.* can't be built from source, is kinda iffy. While Netgate reps in the pfSense subreddit do say that the community edition will receive support and updates going forward, I'm not sure I can believe them. This is the same organisation that pulled some super childish shenanigans with OPNsense.

These antics are far from boring. I need my firewall to be boring. This leaves one end goal. We dash pfSense in the bin.

However, as my previous posts have shown, I utilise the HAProxy package quite heavily in my network. As I'm a heretic who likes using the GUI, all my configs were done through the pfSense HAProxy web UI, and the differences between the pfSense and OPNsense HAProxy UIs just confused me. The solution? Yank the HAProxy config files from pfSense, create a dedicated VM for HAProxy, dump in the exported config files (with a few minor changes) and boom. All works. Awesome. That was much easier than anticipated. I ended up creating an Ansible role to automate this for me too. An additional bonus is that I now understand how to configure HAProxy from its config file. We doing all the learnings today.

While I was at it, I figured I may as well set up some redundancy, so I added KeepAliveD to the mix. This allows a floating IP (or several) to be assigned to whichever HAProxy instance is available.
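
If you're attempting the same, the part of an exported haproxy.cfg most likely to need attention is the global section: the frontends and backends are plain HAProxy syntax and carry over, but the global block pfSense generates is written for FreeBSD, so the chroot, stats socket and user/group generally want swapping for their Debian equivalents. For reference, the stock Debian global section looks roughly like this:

global
    log /dev/log local0
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    user haproxy
    group haproxy
    daemon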

I suppose I can share the playbook.

---
- name: Add the Debian backports line to /etc/apt/sources.list
  lineinfile:
    path: /etc/apt/sources.list
    line: deb http://deb.debian.org/debian buster-backports main

- name: Install HAProxy from backports and KeepAliveD
  apt:
    name:
      - haproxy     # buster-backports carries the 2.2.x series
      - keepalived
      - psmisc      # for the killall command
    default_release: buster-backports
    update_cache: yes

- name: Copy the haproxy config files over
  copy:
    src: ./files/{{ item }}
    dest: /etc/haproxy/
  with_items:
    - errorfile_HTTPS_443_503_ExampleErrorfile
    - haproxy.cfg
    - HTTP-80.crt_list
    - HTTPS_443
    - HTTPS_443.crt_list
    - HTTPS_443.pem
    - LDAPS-Frontend.crt_list

- name: Copy keepalived MASTER config file
  template:
    src: ./files/keepalived_master.conf
    dest: /etc/keepalived/keepalived.conf
  when: '"01" in ansible_hostname'

- name: Copy keepalived SLAVE config file
  template:
    src: ./files/keepalived_slave.conf
    dest: /etc/keepalived/keepalived.conf
  when: '"02" in ansible_hostname'

- name: set ENABLED to 1 in /etc/default/haproxy
  lineinfile:
    path: /etc/default/haproxy
    line: ENABLED=1

# HAProxy needs to be able to bind to the shared IP even when this node doesn't currently hold it
- name: Allow binding to non-local IP addresses
  sysctl:
    name: net.ipv4.ip_nonlocal_bind
    value: '1'
    sysctl_file: /etc/sysctl.conf
    reload: yes

# Open required ports
- name: Open required UFW TCP ports for haproxy
  ufw:
    rule: allow
    proto: tcp
    port: "{{ item }}"
  with_items:
    - 22
    - 80
    - 443
    - 444
    - 636

- name: Open required UFW UDP ports for haproxy
  ufw:
    rule: allow
    proto: udp
    port:  "{{ item }}"
  with_items:
    - 636

- name: Make sure keepalived and haproxy are started and enabled
  sysvinit:
    name: "{{ item }}"
    state: started
    enabled: yes
  with_items:
    - keepalived
    - haproxy

# Perform a reboot
- name: Perform a reboot
  reboot:
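
For completeness, the role gets applied from a tiny wrapper playbook along these lines (the group and role names below are placeholders for whatever yours are called):

---
- hosts: haproxy
  become: yes
  roles:
    - haproxy_keepalived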

Obviously, this assumes you have a haproxy config that works. To get KeepAliveD working, the following config worked for me.

global_defs {
    lvs_id haproxy01
    enable_script_security
    script_user deploy
}
vrrp_sync_group SyncGroup01 {
    group {
        VI_1
    }
}
vrrp_script chkhaproxy {
    script "/usr/bin/killall -0 haproxy"   # a vrrp_script takes a single script; this one just checks the process is alive
    interval 9
    timeout 3
    weight 20
    rise 2
    fall 4
}
vrrp_instance VI_1 {
    interface {{ ansible_default_ipv4.interface }}                # interface to monitor
    state MASTER
    virtual_router_id 51          # Assign one ID for this route
    priority 101                  # 101 on MASTER, 100 on BACKUP
    advert_int 5
    authentication {
        auth_type PASS
        auth_pass 6c6942fdece8f61aevc60293f9adfdz8
    }

    unicast_src_ip 10.10.10.63   # IP address of local interface
    unicast_peer {            # IP address of peer interface
        10.10.10.64
    }

    virtual_ipaddress {
        10.10.10.99         # the virtual IP
        10.10.10.19
    }
    track_script {
        chkhaproxy
    }
}

The slave configuration is almost identical. The only things that need changing are the unicast source and peer addresses and the priority. Easy stuff.
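
Concretely, the handful of lines that differ on the second node end up looking something like this (IPs taken from the example above; you can flip state to BACKUP too if you like, but keepalived will sort out mastership from the priorities regardless):

    priority 100                  # lower than the master's 101

    unicast_src_ip 10.10.10.64    # this node
    unicast_peer {
        10.10.10.63               # the master
    }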

The plus side of using ansible here is that I can update the HAProxy config across however many instances I run with a simple playbook command. We fancy now.
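
And "a simple playbook command" really is just the usual invocation of the wrapper playbook above (file names being whatever you've called yours):

ansible-playbook -i inventory.yml haproxy.yml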

The only other casualty (aside from my time) is pfBlockerNG. Turns out this was very easy to replace with the Unbound DNSBL functionality available in OPNsense. I just used the PiHole blocklist that can be found here, and added a cron job in OPNsense to update the list periodically.
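
Conceptually there isn't much to it either: a DNS blocklist just feeds Unbound a pile of local-zone overrides, one per blocked domain, along these lines (domains made up):

local-zone: "ads.example.com." always_nxdomain
local-zone: "tracker.example.net." always_nxdomain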

Everything else (Telegraf stats, OpenVPN, Active Directory login, etc.) worked out of the box.

No complaints so far.