They also created Ghidra! Probably the second best.
Ollama is a big thing. Do you want it to be fast? Then you will need a GPU. How large is the model you will be running? A 7/8B model on CPU is not as fast but no problem; 13B is slow on CPU but possible.
I would take that any day!
Sir, you just made my day, thank you!
That is true. I mostly only use my homelab, apart from some game servers I am running. And you are totally right: the only reason I want to run Proxmox, or why I have a homelab in general, is to learn more about servers and self-hosting. I am currently in the first year of my apprenticeship and I have learned so much since I got my server up and running 😄 and I think I can learn a lot more when I am using Proxmox.
Please keep me up to date on what you try and how the migration goes! :D And obviously, good luck!
I am in the same boat currently, thinking about how I can migrate my stuff over without a one-month downtime. EDIT: After reading all the comments, I'm still not sure if I should do it or, like I said, even how. I love my Unraid, it fits me well, but I think I have also fallen in love with Proxmox.
“Eigentlich fertig” (German for “basically done”) referred to an IP subnet calculator that I programmed with a fellow student.
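For anyone curious what such a tool does, a minimal subnet calculator can be sketched in Python with the stdlib `ipaddress` module. This is just my own illustration, not the project mentioned above:

```python
import ipaddress

def subnet_info(cidr: str) -> dict:
    """Return basic facts about an IPv4 subnet given in CIDR notation."""
    # strict=False lets us pass a host address like 192.168.1.37/24
    net = ipaddress.ip_network(cidr, strict=False)
    return {
        "network": str(net.network_address),
        "broadcast": str(net.broadcast_address),
        "netmask": str(net.netmask),
        # subtract network and broadcast addresses for usable host count
        "usable_hosts": max(net.num_addresses - 2, 0),
    }

info = subnet_info("192.168.1.37/24")
print(info["network"])       # 192.168.1.0
print(info["broadcast"])     # 192.168.1.255
print(info["usable_hosts"])  # 254
```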
Probably because of accessibility, I would say. Not good design, but it is what it is.
Meh, that sucks. I even have a perfectly working DDNS setup. I know I don't get something like a PTR record, but I wish mail hosters would allow for more self-hosting options.
Oh yeah, I heard about this and saw that Mutahar (Some Ordinary Gamers) was doing it once on Windows with a 4090. I would love to do that with my GPU and then split it between my host and my VM.
Wonderful thank you so much!
I need that wallpaper! Is there a way you could provide it to me?
Probably, or the AI. If I had to guess, the backend is using something like LocalAI, koboldcpp, or llama.cpp.
I used llama.cpp with OpenCL, but a couple of months ago they added ROCm support, which is even faster.
Just want to piggyback on this. You will probably need more than 6 GB of VRAM to run good enough models with acceptable speed and coherent output, but the more the better.
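To make the 6 GB claim concrete, here is a rough back-of-envelope estimate of model memory. The 20% overhead factor for KV cache and activations is a ballpark assumption of mine, not an exact figure:

```python
def vram_estimate_gb(params_billion: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight storage plus ~20% for KV cache/activations.

    This is a ballpark heuristic, not an exact measurement.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return round(weight_bytes * overhead / 1e9, 1)

# A 7B model quantized to 4 bits per weight:
print(vram_estimate_gb(7, 4))  # 4.2 -> fits in 6 GB of VRAM
# The same model at 8 bits per weight:
print(vram_estimate_gb(7, 8))  # 8.4 -> already needs more than 6 GB
```

This is why quantized 7/8B models are about the upper limit for a 6 GB card, and why larger models or higher precision quickly push you past it.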
Pi-hole works network-wide ;D
I think qcow2 images are thin-provisioned by default, so they only grow as data is written and the size you set is just a maximum (but I could be wrong on that). I also saw some threads explaining how you can relatively easily grow a qcow2 image with `qemu-img resize` :)
I am using KDE, not sure about GNOME sadly.
This is true. I wanted to play Magicka 1 with a friend, but he couldn't launch it on his Windows PC, while I could launch it without any problems on Linux.