What we doing fam?

So, you have a nice Active Directory set up, and a bunch of your internal services bind to it. While setting up all those binds, you'll have seen options asking which port to use and whether or not you want security. Well, if you're anything like me, you've probably been using port 389 with no security, as that's the default option AD offers. Which is fine, I guess. For a homelab. Not exactly the smartest thing to do in production, but as a cool quote I've seen puts it:

Everybody has a testing environment. Some people are lucky enough to have a totally separate environment to run production in.

I'm going to be that guy, so you don't have to. Best case scenario, everything works out. Worst case scenario, I need to rebuild my domain controllers........

Anyway, let's try to secure things in my setup. I've got a few guides handy to follow, but that still leaves a couple of questions that need answering.

  1. Will I need to repeat the steps to enable LDAPS on both domain controllers, or will some magic replication occur? If not, can I do something with GPOs to enable this automagically going forward?
  2. Can I enable some sort of load balancing for the domain controllers, as some services only allow a single hostname to be entered? My concern here is that my primary load balancer has a wildcard cert installed and is mainly used for SSL offloading. I think I may need to set up a separate dedicated load balancer just for LDAPS that doesn't do any SSL offloading and cause any weird cert conflicts...

Ayt, here goes nothing.

The first step seems to be generating a certificate signing request (CSR). That's as simple as creating a request.inf file and populating it with the following info.

 [Version]
 Signature="$Windows NT$"

 [NewRequest]
 Subject = "cn=dc1.fqdn.com"
 KeySpec = 1
 KeyLength = 2048
 Exportable = TRUE
 MachineKeySet = TRUE
 SMIME = FALSE
 PrivateKeyArchive = FALSE
 UserProtected = FALSE
 UseExistingKeySet = FALSE
 ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
 ProviderType = 12
 RequestType = PKCS10
 KeyUsage = 0xa0

 [EnhancedKeyUsageExtension]
 OID = 1.3.6.1.5.5.7.3.1 ; Server Authentication
 
 [Extensions]
 2.5.29.17  = "{text}"
 _continue_ = "dns=dc1&"
 _continue_ = "dns=dc1.fqdn.com&"
 _continue_ = "dns=ldaps&"
 _continue_ = "dns=ldaps.fqdn.com&"

Replace dc1.fqdn.com with the FQDN of the domain controller........I guess this answers my first question somewhat: I'll need to generate a separate cert for the second DC as well. I think. With this request file on the domain controller, I need to run the following in an elevated shell.

 C:\> certreq -new request.inf client.csr
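Since each DC needs its own request, you can template the .inf files rather than hand-editing one per host. A sketch (the hostnames and domain are from my lab; adjust to taste):

```shell
# Generate one request.inf per DC from a single template.
# certreq has no templating of its own, so do it shell-side
# and copy the resulting files to each DC.
set -e
mkdir -p /tmp/ldaps-inf
for dc in dc1 dc2; do
  cat > "/tmp/ldaps-inf/request-$dc.inf" <<EOF
[Version]
Signature="\$Windows NT\$"

[NewRequest]
Subject = "cn=$dc.fqdn.com"
KeySpec = 1
KeyLength = 2048
Exportable = TRUE
MachineKeySet = TRUE
SMIME = FALSE
PrivateKeyArchive = FALSE
UserProtected = FALSE
UseExistingKeySet = FALSE
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
RequestType = PKCS10
KeyUsage = 0xa0

[EnhancedKeyUsageExtension]
OID = 1.3.6.1.5.5.7.3.1 ; Server Authentication

[Extensions]
2.5.29.17  = "{text}"
_continue_ = "dns=$dc&"
_continue_ = "dns=$dc.fqdn.com&"
_continue_ = "dns=ldaps&"
_continue_ = "dns=ldaps.fqdn.com&"
EOF
done
ls /tmp/ldaps-inf/
```

Each file only differs in the Subject and the per-host SAN entries; the ldaps entries stay the same everywhere, which matters for the load balancing idea later.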

Now, apparently, I need to create a v3ext.txt file with the following contents.

keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth
subjectKeyIdentifier=hash
subjectAltName = @alt_names
[alt_names]
DNS.1 = dc1
DNS.2 = dc1.fqdn.com
DNS.3 = ldaps
DNS.4 = ldaps.fqdn.com

DNS.3 and DNS.4 are there so I can put both DCs behind a load balancer. More on that later.

The issue for me, though, is that my certificate management is all done through my pfSense instance, and I see no way to provide these options when signing the request I previously generated.

Hum.

Not sure I want to skip this step, as the guide I'm following states these extensions MUST be present, and looking at the certs in pfSense, they don't seem to be.

Soooo, small detour. I'll create a new VM, install OpenSSL, import my intermediate certificate authority and key, then use that to sign the request so that it includes the v3 extensions......sounds simple enough.......

With the crt and the key copied to my new signing server, I run the following command:

openssl x509 -req -days 3650 -in dc1.csr -CA Intermediate.crt -CAkey Intermediate.key -extfile v3ext.txt -set_serial 23 -out dc1.crt

And then the same for dc2, incrementing the set_serial integer.
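Before copying anything to the DCs, it's worth checking the v3 extensions actually landed in the signed cert. Here's the whole sign-and-verify flow with throwaway keys so it runs anywhere; the demo intermediate and the /tmp paths are mine, swap in your real intermediate and the CSR from certreq:

```shell
set -e
mkdir -p /tmp/ldaps-demo && cd /tmp/ldaps-demo

# Throwaway CA standing in for my pfSense intermediate
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Demo Intermediate" \
  -keyout Intermediate.key -out Intermediate.crt 2>/dev/null

# Throwaway CSR standing in for the one certreq produced on the DC
openssl req -newkey rsa:2048 -nodes -subj "/CN=dc1.fqdn.com" \
  -keyout dc1.key -out dc1.csr 2>/dev/null

# Same v3ext.txt as in the post
cat > v3ext.txt <<'EOF'
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth
subjectKeyIdentifier=hash
subjectAltName = @alt_names
[alt_names]
DNS.1 = dc1
DNS.2 = dc1.fqdn.com
DNS.3 = ldaps
DNS.4 = ldaps.fqdn.com
EOF

# Sign exactly as above
openssl x509 -req -days 3650 -in dc1.csr -CA Intermediate.crt \
  -CAkey Intermediate.key -extfile v3ext.txt -set_serial 23 \
  -out dc1.crt 2>/dev/null

# The SANs should show up here; if they don't, LDAPS on 636 will sulk
openssl x509 -in dc1.crt -noout -text | grep -A1 "Subject Alternative Name"
```

If the grep comes back empty, the extfile didn't get applied and the DC will present a cert with no SANs, which modern clients reject.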

Copy these over to my domain controllers and run the following to import the cert, and then reboot.

 C:\> certreq -accept dc1.crt

Assuming the cert is in the root directory.....

Sweet. To test this, I jump over to my other domain controller and open up the ldap utility by running the following in prompt window.

 C:\> ldp.exe

Click Connection -> Connect, enter the FQDN and set the port to 636. Click OK.
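ldp.exe is GUI-only, so there's no scripting that. From any Linux box, the same handshake check against the real DC is just `openssl s_client -connect dc1.fqdn.com:636 </dev/null`. To keep this snippet runnable without a DC handy, it stands up a throwaway local TLS listener and handshakes against that instead (the port and paths are mine):

```shell
set -e
mkdir -p /tmp/ldaps-check

# Throwaway cert for the stand-in listener
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=dc1.fqdn.com" \
  -keyout /tmp/ldaps-check/dc1.key -out /tmp/ldaps-check/dc1.crt 2>/dev/null

# Stand-in for the DC's 636 listener (636 itself needs root, so 6636)
openssl s_server -accept 6636 -cert /tmp/ldaps-check/dc1.crt \
  -key /tmp/ldaps-check/dc1.key -quiet &
srv=$!
sleep 1

# A working TLS listener hands back its cert; pull out the subject to prove it.
# A plain-LDAP-only port would error out here instead.
echo | openssl s_client -connect 127.0.0.1:6636 2>/dev/null \
  | openssl x509 -noout -subject | tee /tmp/ldaps-check/subject.txt

kill $srv
```

Handy for testing from the services' side of the fence later, since they won't be using ldp.exe either.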

Neat. It works. Let me spin up something that can use this connection without modifying any of my existing services. A new Nextcloud Docker instance will work as a test, considering I know the values required....

Where the scope changes

Right, OK. I can see that LDAPS is working, but Nextcloud don't want none unless it got the root cert, hun. That is to say, I can only connect via the LDAPS port if I instruct Nextcloud not to verify the LDAPS cert. I could do that, but I don't like the idea of kludging it; may as well just not use LDAPS in that case. The question has now changed to "how do I get my trusted certs into the containers?". A quick and easy idea is to just bind mount the cert directory from the Docker host to the cert directory in the container......

Ok, so mounting the cert directory as a volume does work. The addition of

- /usr/local/share/ca-certificates/:/usr/local/share/ca-certificates/:ro

to the volumes section works, but I need to run

update-ca-certificates

within the container before they're recognised. While this is a relatively trivial thing to do, I don't want to do this for each and every container every time I recreate one.....
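The update-ca-certificates step is the annoying part and I haven't solved it yet, but the mount itself at least doesn't need repeating per service. A docker-compose sketch using a YAML anchor (service names here are just examples from my setup):

```yaml
# Define the bind mount once, reuse it in every service that needs
# the host's CA store. Running update-ca-certificates inside each
# container is still a separate, manual step.
x-ca-volume: &ca-volume "/usr/local/share/ca-certificates/:/usr/local/share/ca-certificates/:ro"

services:
  nextcloud:
    volumes:
      - *ca-volume
  some-other-app:
    volumes:
      - *ca-volume
```

Top-level keys starting with `x-` are ignored by compose, so the anchor lives there without complaint.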

Where I decide I'm getting sidetracked and will come back to that problem later.

Seeing as my LDAPS shenanigans with a single domain controller work fine, I'll go ahead and import the cert for DC2 and run the same test...... and yup. All good. So now both my domain controllers are running LDAPS. Neato.

Now, the final question is, how can I put a load balancer in front of these 2.....

TLDR; I did it. Pretty simple, but there's room for improvement.

Here's what I did.

pfSense -> Firewall -> Virtual IPs -> New IP Alias for 10.10.10.19/32.

Pointed ldaps.fqdn.com to this ip.

pfSense -> HAProxy -> Created a new backend. Added the two domain controllers by FQDN with port 636. Load balancing set to "Least Connections", as that seemed most appropriate going by the docs. Health check method set to none because I couldn't get it to work; hence the improvement I mentioned.

pfSense -> HAProxy -> Created a new frontend listening on the virtual IP from a moment ago (port 636), type TCP, with the default backend set to the ldap-servers backend from the previous step.
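For anyone who'd rather see config than clicks, those two steps boil down to roughly this haproxy.cfg (a sketch of what pfSense renders; the names are mine):

```haproxy
# Plain TCP passthrough: no SSL offloading on the balancer,
# so the wildcard cert never enters the picture.
frontend ldaps-in
    bind 10.10.10.19:636
    mode tcp
    default_backend ldap-servers

backend ldap-servers
    mode tcp
    balance leastconn
    # Appending "check" to each server line is the health-check
    # improvement mentioned above; without it, failover relies on
    # connection retries, which is why mine was slow-ish.
    server dc1 dc1.fqdn.com:636
    server dc2 dc2.fqdn.com:636
```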

I tested the good old-fashioned way: turning off the domain controllers in turn, leaving the other running, and testing the connection. Failed over nicely-ish. I'd like it to be a little faster, but that's a problem for another time. Guess I can start migrating my services over.

That's it for now though. Apex time.


Little addendum cus relevant.

There are a couple of places the root certificate authority is checked. Usually, this is the default OS certificate store. On Debian 10, my golden image has these dumped into /usr/local/share/ca-certificates. Most of the time, having them there is sufficient. However, if your server can connect via port 389 but has issues with port 636 (LDAPS), and your certificate is in the aforementioned place? Well, in some scenarios, the configuration file for the software you're trying to get LDAPS working with may have an option to provide a path to the certificate or a directory containing your certs. If that's not the case, there's one more place you can look (assuming Debian 10 again). That's /etc/ldap/ldap.conf

You'll want to add something like the following to get things trusted.

TLS_CACERT      /usr/local/share/ca-certificates/YOUR-ROOT-OR-INTERMEDIATE.CRT
TLS_CACERTDIR   /usr/local/share/ca-certificates/
TLS_REQCERT hard

There's probably a better way than just rebooting, but eh, just reboot and we good.

Happy securings.