Literally what I was wondering, lol. My first thought was “how well does it run Debian?”
OTOH, I really don’t want to contribute to a sale that may make MS or the hardware manufacturers think people want this AI crap. I just want a beefy ARM laptop that runs Linux lol.
They’re apparently working on it. Tuxedo already has a prototype, and Qualcomm has apparently been contributing code to the mainline Linux kernel to ensure support.
Good to know! Will keep an eye out for sure.
Well, actually, what if I want AI “crap” capability with my Linux ARM laptop?
The TOPS on those systems are no joke. Consider that it’s roughly half the performance of an RTX 2060 in a slim laptop form factor.
Edit: the comparison still holds. The 2060 can do almost 13 TFLOPS fp16, or about 102 TOPS measured (this figure shows up on other sites too; it’s what I can find atm). The Snapdragon X Elite can do 45 TOPS. Not bad, considering existing x86_64 CPUs with an NPU do 10-16 TOPS.
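For anyone who wants to sanity-check the "roughly half" claim, here's a quick back-of-envelope in Python. The TOPS figures are the ones quoted in this thread, not official datasheet numbers:

```python
# Throughput figures quoted in this thread (approximate).
rtx_2060_tops = 102        # measured figure cited above
x_elite_tops = 45          # Snapdragon X Elite NPU
x86_npu_tops = (10, 16)    # typical current x86_64 NPUs

# X Elite vs RTX 2060: ~44%, i.e. roughly half
ratio = x_elite_tops / rtx_2060_tops
print(f"X Elite vs RTX 2060: {ratio:.0%}")

# X Elite vs existing x86 NPUs: about 2.8x to 4.5x more
advantage = [x_elite_tops / t for t in x86_npu_tops]
print(f"X Elite vs x86 NPUs: {advantage[1]:.1f}x to {advantage[0]:.1f}x")
```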
Don’t confuse TFLOPs and TOPs, especially when the latter counts 4-bit integer operations.
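To illustrate the point: many accelerators quote peak throughput that roughly doubles each time the operand width halves (an idealized assumption, not true of every chip), so an int4 TOPS number can look ~4x bigger than the same silicon’s fp16 figure:

```python
def scaled_tops(fp16_tflops: float, bits: int) -> float:
    """Rough peak ops/s at a given operand width, assuming
    throughput doubles per halving of precision from fp16.
    An idealized model, not a datasheet value."""
    doublings = {16: 0, 8: 1, 4: 2}[bits]
    return fp16_tflops * (2 ** doublings)

fp16 = 13.0  # RTX 2060 fp16 TFLOPS quoted above
print(scaled_tops(fp16, 8))   # 26.0 "TOPS" at int8
print(scaled_tops(fp16, 4))   # 52.0 "TOPS" at int4
# So comparing an int4 TOPS figure against fp16 TFLOPS
# without normalizing exaggerates the gap by ~4x.
```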
So the way MS is using it is incredibly dumb, but hardware-wise, it’s just a NN-optimized tile on the CPU. That is going to be a great thing for democratizing access to serious machine learning hardware. In that respect, it’s actually pretty awesome, even if it’s annoying that the initiative is tied so closely to MS.