• 0 Posts
  • 30 Comments
Joined 1 year ago
Cake day: August 7th, 2023






  • I thought about setting one up for my main server because every time the power went out I’d have to reconfigure the BIOS for boot order, virtualization, and a few other settings.

    I’ve since added a UPS to the mix, but ultimately the fix was replacing the CMOS battery lol. Had I put one of these together it would be entirely unused these days.

    It’s a neat concept and if you need remote bios access it’s great, but people usually overestimate how useful that really is.





  • I can’t speak for everyone else, but I run about 6 different VMs solely to run different docker containers. They’re split out by use case, so super critical stuff on one VM, *arr stuff on another, etc. I did this so my tinkering didn’t take down Jellyfin and other services for my wife and kids.

    Beyond that I also have two VMs running virtualized Pi-hole on different hosts, kept in sync with Gravity Sync, and another I intend to use for virtualized OPNsense.

    Everything is managed via Ansible, with each Docker project in its own Forgejo repo.
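
    The Ansible side isn’t doing anything clever; per project it boils down to roughly the following (repo URL, paths, and project name here are placeholders, not my real layout):

    ```bash
    #!/usr/bin/env bash
    # Stripped-down version of what each per-project deploy amounts to:
    # pull the project's repo, then bring the compose stack up.
    # Repo URL, destination path, and project name are placeholders.
    set -euo pipefail

    project="jellyfin"
    repo="https://forgejo.example.lan/me/${project}.git"
    dest="/opt/stacks/${project}"

    if [ -d "${dest}/.git" ]; then
      git -C "${dest}" pull --ff-only
    else
      git clone "${repo}" "${dest}"
    fi

    docker compose --project-directory "${dest}" up -d
    ```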




  • I’m assuming you installed it directly in the container vs. running Docker inside it?

    I have been debating making the jump from Docker in a VM to a container, but I’ve been maintaining Nextcloud in Docker the entire time I’ve been using it and haven’t had any issues. The interface can be a little slow at times, but I’m usually not in there for long. I’m not sure it’s worth essentially rearchitecting my setup for that.

    All that aside, I also map an NFS share from my NAS into my Docker container, which is where all my files are stored. This could be what causes the interface slowness I sometimes see, but last time I looked into it there wasn’t a non-hacky way to mount a share into an LXC container. Has that changed?
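
    For context, the least-bad approach I’d found was mounting the export on the Proxmox host and then bind-mounting it into the container, roughly like this (container ID, server IP, and paths are made up):

    ```bash
    # On the Proxmox host: mount the NAS export, then bind-mount it into the
    # LXC container. Container ID, server IP, and paths are for illustration.
    mount -t nfs 192.168.1.50:/export/files /mnt/nas-files
    pct set 101 -mp0 /mnt/nas-files,mp=/mnt/nas-files
    ```

    That works, but on an unprivileged container you still have to sort out the uid/gid mapping yourself, which is roughly what I meant by hacky.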


  • Yikes! I pay a couple bucks more for uncapped gigabit. I’m fortunate in that there’s two competing providers in my area that aren’t in cahoots (that I can tell.) I much prefer the more expensive one and was able to get them to match the other’s price.

    My wife has been dropping hints she wants to move to another state though and I’m low key dreading dealing with a new ISP/losing my current plan.


  • The growth is happening mostly in the pictrs and db containers. I know pictrs is optional if you’re not uploading pics yourself, but I didn’t want to limit myself on that. I haven’t dug into where the db growth is happening yet either (rough query sketch at the end of this comment). Right now my hurdle is that there doesn’t seem to be any baked-in maintenance tooling, so it’s all going to be me editing the database directly. I’m okay with doing it, but I need to figure out how to not purge content I have saved via Lemmy.

    As far as NSFW stuff goes, there’s a checkbox in the instance settings for enabling NSFW content instance-wide. I have it unchecked and haven’t seen a single NSFW post browsing through my instance. It does rely on things being marked as such, though. I’ll probably go the extra step and defederate the porn instances just to add another layer.

    Please let me know if you find anything useful for maintaining the instance.
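
    For anyone else poking at this, my first step is just going to be asking Postgres which tables are actually growing, something along these lines (container, user, and db names are examples; use whatever your compose file defines):

    ```bash
    # Which tables are eating the space? Container name, user, and db name
    # are examples -- substitute whatever your compose file defines.
    docker exec lemmy-postgres psql -U lemmy -d lemmy -c "
      SELECT relname,
             pg_size_pretty(pg_total_relation_size(relid)) AS total_size
      FROM pg_catalog.pg_statio_user_tables
      ORDER BY pg_total_relation_size(relid) DESC
      LIMIT 15;"

    # pictrs is simpler -- just watch the volume on disk (path is a placeholder):
    du -sh /path/to/volumes/pictrs
    ```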


  • I have this one on a Hetzner server that runs me like $6/mo. I’m not comfortable with the federated nature of things potentially putting CSAM or other illegal content on disk in my home.

    I use Tailscale so I can still hit my internal (at home) git repos and all that. The rest of my stuff is all hosted on an old gaming PC I turned into a Proxmox host that sits in my spare bedroom. Of those services, I only expose like 3 things to the outside world, Nextcloud being the main one. I don’t route it through my VPS, just proxy it through Cloudflare.


  • Yeah, I haven’t found anything for cleanup maintenance. Right now, with just me on it, disk usage is increasing by ~300 MB per day. I’m debating purging stuff older than 30 days or something (rough sketch below). The only stuff where my server is the source of truth is my profile and the communities on my instance.

    We’ll see though, this is just a fun little side thing I’m not taking too seriously.
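
    If I do go that route, it’ll probably be a query along these lines. The table and column names are my guess at the Lemmy schema (post, post_saved, person), so check them against the actual db and test on a copy first:

    ```bash
    # Very rough purge sketch: drop posts older than 30 days that I haven't
    # saved and that weren't written by a local account. Table/column names
    # are guesses at the Lemmy schema -- verify before running, and test on
    # a backup.
    docker exec lemmy-postgres psql -U lemmy -d lemmy -c "
      DELETE FROM post
      WHERE published < now() - interval '30 days'
        AND id NOT IN (SELECT post_id FROM post_saved)
        AND creator_id NOT IN (SELECT id FROM person WHERE local = true);"
    ```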


  • 2FA was in at the time. IIRC the JWT was granted after 2FA, so it didn’t matter.

    You’ve got a point though: small instances aren’t gonna be nearly as useful to threat actors as a giant one. Unless you give them a reason to go after you specifically, there’s not much incentive to target such a tiny server.

    Still though, I don’t need that shiny A next to my name so I’m good with how I have it set up.


  • I do a separate container for each service that requires a db. It’s pretty baked into my backup strategy at this point: the script I wrote pulls the dump credentials from environment variables, so I don’t have to update it for every new service I deploy.

    If the container name ends in -dbm it’s MySQL, -dbp is Postgres, and -dbs would be SQLite if it ever needed its own container. The suffix triggers the appropriate backup command, which pulls the user, password, and db name from environment variables on the container (rough sketch below).

    I’m not too concerned about system overhead. I’ve debated consolidating to a single container per db type just to do it, but I also like not having a single point of failure for all my services (I even run separate VMs to keep stable services from being impacted by me testing random stuff out).
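
    The dump logic itself is nothing fancy; stripped down it looks something like this (backup path and env var names are examples, the real ones depend on the image in use):

    ```bash
    #!/usr/bin/env bash
    # Stripped-down version of the suffix-based dump logic. Backup path and
    # env var names are examples; the real values depend on the image in use.
    set -euo pipefail

    backup_dir="/srv/backups/$(date +%F)"
    mkdir -p "${backup_dir}"

    for name in $(docker ps --format '{{.Names}}'); do
      case "${name}" in
        *-dbm)  # MySQL / MariaDB
          docker exec "${name}" sh -c 'exec mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE"' \
            > "${backup_dir}/${name}.sql"
          ;;
        *-dbp)  # Postgres
          docker exec "${name}" sh -c 'exec pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB"' \
            > "${backup_dir}/${name}.sql"
          ;;
        *-dbs)  # SQLite, if it ever gets its own container; assumes sqlite3 is
                # in the image and SQLITE_DB_PATH is an env var I set myself
          docker exec "${name}" sh -c 'exec sqlite3 "$SQLITE_DB_PATH" .dump' \
            > "${backup_dir}/${name}.sql"
          ;;
      esac
    done
    ```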


  • Exactly. I went one step further and decided not to use my admin account as my main. I don’t run around as root on servers, so I try not to do that with apps either. It’s easier with Lemmy because once it’s set up, all the admin tasks hit my email.

    I also wanted to avoid the vulnerability that hit Lemmy World a few weeks ago, which was only possible because the server admin got their JWT stolen; it wouldn’t have been nearly as impactful if they hadn’t been on the admin account.