I’ve been migrating a lot of things back to Kubernetes, and I’m debating whether I should have separate private and public clusters.

Some stuff I’ll keep out of kubernetes and leave in separate vms, like nextcloud/immich/etc. Basically anything I think would be more likely to have sensitive data in it.

I also have a few public-facing things like public websites, a matrix server, etc.

Right now I’m solving this by having two separate ingress controllers in one cluster - one for private stuff only available over a vpn, and one only available over public ips.
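
Roughly, that looks like two ingress-nginx Helm releases that register distinct IngressClasses (the class name, controller value, and address below are placeholders, not my actual config):

```yaml
# values-internal.yaml -- Helm values for the VPN-only ingress-nginx release.
# The "public" release is the same shape, bound to a public address instead.
controller:
  ingressClassResource:
    name: internal
    controllerValue: "k8s.io/internal-ingress-nginx"
  service:
    # Address only reachable over the VPN (placeholder)
    loadBalancerIP: 10.8.0.10
```

Private workloads then set `ingressClassName: internal` and never show up on the public controller.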

The main concern I’d have is reducing the blast radius if something gets compromised. But I also don’t know if I really want to maintain multiple personal clusters. I’m using Omni+Talos for Kubernetes, so it’s not too difficult to maintain two clusters, but it would be less efficient resource-wise since some of the nodes are bare-metal servers and others are only VMs. I wouldn’t be able to share a large bare-metal server anymore, unless I split it into VMs.

What are y’all’s opinions on whether to keep everything in one cluster or not?

    • alienscience@programming.dev · ↑4 · 2 months ago

      Just to add to this point. I have been running a separate namespace for CI and it is possible to limit total CPU and memory use for each namespace. This saved me from having to run a VM. Everything (even junk) goes onto k8s isolated by separate namespaces.

      If limits and namespaces like this are interesting to you, the k8s resources to read up on are ResourceQuota and LimitRange.
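
      A minimal sketch of what that looks like, assuming a namespace named `ci` (the numbers are just examples):

```yaml
# Cap the total CPU/memory the "ci" namespace can request or use.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ci-quota
  namespace: ci
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
---
# Default per-container requests/limits for pods that don't set their own,
# so nothing in the namespace runs unbounded.
apiVersion: v1
kind: LimitRange
metadata:
  name: ci-defaults
  namespace: ci
spec:
  limits:
    - type: Container
      default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
```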

  • FrederikNJS@lemm.ee · ↑6 · 2 months ago

    I really don’t see much benefit to running two clusters.

    I’m also running single clusters with multiple ingress controllers both at home and at work.

    If you are concerned with blast radius, you should probably first look into setting up Network Policies to ensure that pods can’t talk to things they shouldn’t.
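
    For example, a default-deny policy plus an explicit allowance for the ingress controller might look like this (the namespace names are placeholders):

```yaml
# Deny all inbound pod traffic in the "public" namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: public
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Then allow traffic only from the ingress controller's namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-nginx
  namespace: public
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
  policyTypes:
    - Ingress
```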

    There is of course still the risk of something escaping the container, but the risk is rather low in comparison. There are options out there for hardening the container runtime further.

    You might also look into adding things that can monitor the cluster for intrusions or prevent them. Stuff like running CrowdSec on your ingresses, and using Falco to watch for various malicious behaviour.
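
    As a taste of Falco, a custom rule flagging a shell spawned inside a container might look roughly like this (the rule name and wording are made up for illustration):

```yaml
- rule: Shell spawned in container
  desc: Detect an interactive shell starting inside a container
  condition: >
    spawned_process and container and proc.name in (bash, sh, zsh)
  output: "Shell in container (user=%user.name container=%container.name cmd=%proc.cmdline)"
  priority: WARNING
```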

    • johntash@eviltoast.org (OP) · ↑1 · 2 months ago

      Network Policies are a good idea, thanks.

      I was more worried about escaping the container, but maybe I shouldn’t be. I’m using Talos as the OS now, and there isn’t much on the OS as it is. I can probably also enforce that all of my public services run as non-root users and disallow privileged containers/etc.
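
      Something along these lines, as a sketch (pod and image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: public-site
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: web
      image: example.org/site:latest   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        privileged: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```

      Enforcing this namespace-wide should also be possible with the Pod Security Admission `restricted` profile instead of per-pod settings.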

      Thanks for recommending crowdsec/falco too. I’ll look into those

  • theroff@aussie.zone · ↑4 · 2 months ago

    At work we use separate clusters for various things. We built an Ansible collection to manage the lot so it’s not too much overhead.

    For home use I skipped K8s and went to rootless Quadlet manifests. Each quadlet is in a separate non-root user with lingering enabled to reduce exposure from a container breakout.
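
    For anyone curious, a minimal quadlet looks something like this (unit name and image are placeholders); it goes in `~/.config/containers/systemd/` for the service user, with lingering enabled via `loginctl enable-linger <user>`:

```ini
# ~/.config/containers/systemd/web.container
[Unit]
Description=Rootless web container

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

    Then `systemctl --user daemon-reload && systemctl --user start web.service` as that user.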

    • anyhow2503@lemmy.world · ↑1 · 2 months ago

      If I may ask: how practical is monitoring / administering rootless quadlets? I’m running rootless podman containers via systemd for home use, but splitting the single rootless user into multiple has proven to be quite the pain.

      • theroff@aussie.zone · ↑2 · 2 months ago

        Yeah it is a bit of a pain. I currently only have a few users. Tooling-wise there are ways to tail the journals (if you’re using journalctl) and collate them but I haven’t gotten around to doing this myself yet.
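
        The journal side can be read per-uid, something like this (uids are placeholders, and these obviously depend on the host):

```shell
journalctl --user -u web.service -f        # as the service's own user
sudo journalctl _UID=1001 -f               # one user's journal, as root
sudo journalctl _UID=1001 _UID=1002 -f     # repeated matches on one field are ORed
```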

  • wirehead@lemmy.world · ↑3 · 2 months ago

    Well, one option, which may be pushing the boundaries of selfhosted for some, would be to use a hosted k8s service for your public-facing stuff and a real k8s cluster at home for the rest of it.

    • johntash@eviltoast.org (OP) · ↑1 · 2 months ago

      This is an option, my main reason for not wanting to use a hosted k8s service is cost. I already have the hardware, so I’d rather use it first if possible.

      Though I have been thinking of converting some sites to be statically-generated and hosted externally.

  • farcaller@fstab.sh · ↑2 · 2 months ago

    I’ve dealt with exactly the same dilemma in my homelab. I used to have 3 clusters, because you’d always want to have an “infra” cluster which others can talk to (for monitoring, logs, docker registry, etc. workloads). In the end, I decided it’s not worth it.

    I separated on the public/private boundary and moved everything publicly facing to a separate cluster. It can only talk to my primary cluster via specific endpoints (via tailscale ingress), and I no longer do a multi-cluster mesh (I used to have istio for that, then cilium). This way, the public cluster doesn’t have to be too large capacity-wise, e.g. all the S3 api needs are served by garage from the private cluster, but the public cluster will reverse-proxy into it for specific needs.
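
    The tailscale-ingress piece looks roughly like this (it assumes the Tailscale Kubernetes operator is installed; treat the names as placeholders, though 3900 is garage’s default S3 port):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: garage-s3
spec:
  ingressClassName: tailscale
  defaultBackend:
    service:
      name: garage
      port:
        number: 3900
  tls:
    - hosts:
        - garage   # exposed as garage.<tailnet>.ts.net
```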

    • johntash@eviltoast.org (OP) · ↑1 · 2 months ago

      I did actually consider a 3rd cluster for infra stuff like dns/monitoring/etc, but at the moment I have those things in separate vms so that they don’t depend on me not breaking kubernetes.

      Do you have your actual public services running in the public cluster, or only the load balancer/ingress for those public resources?

      Also how are you liking garage so far? I was looking at it (instead of minio) to set up backups for a few things.

      • farcaller@fstab.sh · ↑2 · 2 months ago

        Actual public services run there, yeah. If any of them is compromised, it can only access limited internal resources, and an attacker would have to fully compromise the cluster to get the secrets to access those in the first place.

        I really like garage. I remember when minio was straightforward and easy to work with; garage is that thing now. I use it because it’s just so much easier to handle file serving when you have S3-compatible uploads, even when you don’t do any real clustering.

        • johntash@eviltoast.org (OP) · ↑2 · 2 months ago

          Do you use garage for backups by any chance? I was wanting to deploy it in kubernetes, but one of my uses would be to back up volumes, and… that doesn’t really help me if the kubernetes cluster itself is broken somehow and I have to rebuild it.

          I kind of want to avoid a separate cluster for storage, or even separate VMs. I’m still thinking of deploying garage in k8s and then just using rclone or something to copy the contents from garage’s S3 to my NAS.
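
          The rclone leg of that might look like this (remote, bucket, and paths are placeholders):

```shell
# Assumed rclone remote config (~/.config/rclone/rclone.conf):
#   [garage]
#   type = s3
#   provider = Other
#   endpoint = http://garage.internal.example:3900
#
# One-way copy out of the cluster to the NAS:
rclone sync garage:backups /mnt/nas/garage-backups --checksum
```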

          • farcaller@fstab.sh · ↑1 · 2 months ago

            No. It’s my in-cluster storage that I only use for things that are easier to work with via the S3 API, and I do backups outside of the k8s scope (a bunch of various solutions that boil down to offsite ZFS replication, basically). I’d suggest taking a look at garage’s replication features if you want it to be durable.

  • Tiuku@sopuli.xyz · ↑1 · 2 months ago

    > Right now I’m solving this by having two separate ingress controllers in one cluster - one for private stuff only available over a vpn, and one only available over public ips.

    How’s this working out? What kinda alternatives are there with a single cluster?

    • johntash@eviltoast.org (OP) · ↑2 · 2 months ago

      It’s mostly working fine for me.

      An alternative I tried before was just whitelisting which IPs are allowed to access specific ingresses, but having the ingress listen on both public/private networks. I like having a separate ingress controller better because I know the ingress isn’t accessible at all from a public ip. It keeps the logs separated as well.
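
      For anyone else reading: with ingress-nginx, the whitelist approach is a single annotation (the CIDR and names here are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: private-app
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.8.0.0/24"
spec:
  ingressClassName: nginx
  rules:
    - host: app.internal.example
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: private-app
                port:
                  number: 80
```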

      Another alternative would be an external load balancer or reverse proxy that can access your cluster. It’d act as the “public” ingress, but would need to be configured to allow specific hostnames/services through.
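
      Sketched with nginx as the external proxy (hostnames, certs, and addresses are placeholders), only explicitly listed hosts get forwarded into the cluster:

```nginx
server {
    listen 443 ssl;
    # Only hostnames named in a server block are proxied through
    server_name matrix.example.org;

    ssl_certificate     /etc/ssl/matrix.example.org/fullchain.pem;
    ssl_certificate_key /etc/ssl/matrix.example.org/privkey.pem;

    location / {
        # NodePort (or VPN address) of the cluster's ingress controller
        proxy_pass http://10.8.0.10:30080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```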