Since Google is getting rid of my unlimited Gdrive and my home internet options are all capped at 20 megabits up, I have resorted to colocating my 125 terabyte Plex server, which is currently sitting in my basement. Right now it is in a Fractal Define 7 XL, but I have ordered a Supermicro 826 2U chassis to swap everything over to.
This being my first time colocating I’m not quite sure what to expect. I don’t believe I will have direct access since it is a shared cabinet. Currently it is running Unraid, but I’m considering switching to Proxmox and virtualizing TrueNAS. Their remote hands service is quite expensive, so I’d like to have my server as ready to go as possible. I’m not even sure how my IP will be assigned: is DHCP common in data centers or will I need to define my IP addresses prior to dropping it off?
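To make the question concrete: if it turns out I need to define addresses up front, and I go the Proxmox route, I assume I'd be pre-seeding something like this before drop-off (all addresses and interface names below are placeholders, not anything a colo has actually assigned me):

```
# /etc/network/interfaces on a Proxmox node -- hypothetical static assignment
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

# Bridge so VMs can share the uplink; addresses are placeholders
auto vmbr0
iface vmbr0 inet static
    address 203.0.113.10/29
    gateway 203.0.113.9
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

Obviously I'd confirm the actual subnet, gateway, and NIC name with the facility first, since getting this wrong is exactly the kind of thing that triggers a remote-hands bill.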
If anyone has any lessons learned or best practices from colocating I would be really interested in hearing them.
Never colocated, but I did rent bare metal from OVH back when they didn’t have any KVM and all you could do was wipe/reinstall, reboot, or boot into a Debian recovery image that was 2-3 releases old.
Definitely seconding the KVM remote access part: you really, really want that, or at least some way to hard reset your server if it crashes. I can’t stress this enough. Even if you think you’ll never need it, you never know when you’ll have a kernel panic or need to do some boot troubleshooting, even just to run fsck. It’s absolutely nerve-wracking to reboot a server you have no way to access other than SSH, staring at that ping window for 2-5 minutes while the thing boots back up, wondering if it will come back online or not.
If you don’t have IPMI and can’t have some sort of KVM for your server, I highly recommend having at least a PiKVM or something in there to be able to do remote troubleshooting. Failing IPMI, I’d also recommend setting up some sort of preboot environment you know will reliably boot (maybe something living entirely in the initramfs) that brings up the network and listens for SSH for a couple of minutes before chainloading into the main OS, so that you can at least turn off the firewall or reset the network to a known-good state. Anything that gives you remote access independently of your main OS.
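On Debian/Ubuntu, the usual way to get that initramfs SSH listener is the dropbear-initramfs package. A rough setup sketch (addresses are placeholders, and the authorized_keys path has moved between Debian releases, so check your version):

```
# Install the SSH server that runs inside the initramfs
apt install dropbear-initramfs

# Authorize your key for the pre-boot environment
# (older releases use /etc/dropbear-initramfs/authorized_keys instead)
echo 'ssh-ed25519 AAAA... you@laptop' >> /etc/dropbear/initramfs/authorized_keys

# Tell the initramfs how to bring up the network -- static is safer than
# DHCP in a colo. Add to /etc/initramfs-tools/initramfs.conf:
#   IP=203.0.113.10::203.0.113.9:255.255.255.248:myhost:eno1:off

# Rebuild the initramfs so the changes take effect
update-initramfs -u
```

This is mostly used for remotely unlocking encrypted root disks, but it doubles as exactly the "SSH before the main OS" escape hatch described above.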
At least I had access to the recovery environment from OVH, but even then, that thing took a full boot cycle to come up, plus some more time for them to deliver the credentials by email (which had better not be hosted on that box itself), so you could change a config file and reboot again. Legit 10-15 minutes between each attempt, with little to no way of knowing what happened until you booted the recovery again. It was horrifying, can’t recommend.
IPMI saved my ass a few times and I’m never getting another box without it.
Tbh I worked on a campus where we had totally free access to our bays in the local DC (about 5 minutes away by car); even in the dead of night we just had to make a call so we wouldn’t get stopped at the door. Even then, IPMI is still just so much more convenient than sitting on the floor with your laptop, a VGA screen and a PS/2 keyboard among your tools, in a loud DC with mandatory earplugs and an eye on the nitrogen fire suppression that really has no reason to trigger, but it could, and that is terrifying.
Or you could have IPMI and be sat at your desk with coffee and listening to music. Your choice really, I wonder why iLO licenses are so expensive :P
I have a spare Pi4 sitting around the house that I could pretty cheaply turn into a PiKVM. Looks like there are some slick HATs that install into a PCI-E slot so I don’t have a Pi and a bunch of wires hanging out of the chassis. Looks like I’ll be going that route. I just need to figure out how to power it (they all seem to require external 5V or PoE).
Consumer motherboards have some USB ports that provide standby power at up to 2A. Or you can tap the power supply’s 5VSB rail directly; that’s where that standby power comes from in the first place.
Does the IPMI or KVM go on a private network of some sort? Surely you wouldn’t want to expose that to the internet.
Usually you define a VLAN dedicated to your IPMI devices, only reachable through an access-controlled path (usually a VPN served by the firewall, though don’t do that if you’re virtualizing the firewall, for obvious reasons). The DC might offer a VPN of their own specifically for this purpose, or you can pay them for more space to install a physical firewall, but that’s a more significant investment.
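As a concrete sketch of the VPN-into-the-management-VLAN pattern: a minimal WireGuard config on a small physical firewall box that routes only into the IPMI VLAN might look like this (all addresses, keys, and subnets here are made up for illustration):

```
# /etc/wireguard/wg0.conf on the firewall box (hypothetical values)
[Interface]
Address = 10.99.0.1/24
ListenPort = 51820
PrivateKey = <firewall-private-key>

[Peer]
# Your laptop; from its side, AllowedIPs would include the
# IPMI VLAN subnet (e.g. 10.0.10.0/24) so traffic routes here.
PublicKey = <laptop-public-key>
AllowedIPs = 10.99.0.2/32
```

The firewall then needs forwarding enabled and a rule permitting wg0 traffic into the IPMI VLAN interface and nothing else, so the BMCs stay invisible to the public side entirely.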
Ultimately, best practice says not to expose the IPMI to the internet, but if you really have no choice and your firmware is up to date, then you mostly need to fear 0-days and brute-force attacks; the login pages are usually treated as security-critical, since access is equivalent to physical access. You will attract a lot of parasitic traffic probing for flaws, though.
Usually yes. That’s something to discuss with the datacenter, to see what they have to offer; some will give you a VPN so you can reach it. I don’t have direct experience with that, though: my current servers came with IPMI, and I can download a Java client from OVH to connect to it.