Are you talking specifically about LLMs or neural-network-style AI in general? Supercomputers have been doing this sort of stuff for decades without much problem, and tbh the main issue is training; for LLMs, inference is pretty computationally cheap.
> Supercomputers have been doing this sort of stuff for decades without much problem
Idk if I’d point at a supercomputer system and suggest it was constructed “without much problem”. Cray has significantly lagged the computer market as a whole.
> the main issue is training; for LLMs, inference is pretty computationally cheap
Again, I would not consider anything in the LLM marketplace particularly cheap. Seems like they’re losing money rapidly.
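For a rough sense of the "training is the main cost, inference is cheap per token" claim, here's a minimal back-of-envelope sketch using the common ~6ND training and ~2N-per-token inference FLOP approximations. The parameter count, training-token count, and served-token count below are illustrative assumptions, not figures anyone in this thread cited, and this says nothing about the economics side being disputed above.

```python
# Back-of-envelope FLOP comparison for transformer training vs. inference,
# using the common approximations: training ~ 6*N*D FLOPs total,
# inference ~ 2*N FLOPs per generated token.
# N, D, and tokens_served are assumptions for illustration only.

N = 70e9              # assumed parameter count (a 70B-class model)
D = 2e12              # assumed training tokens (2T)
tokens_served = 1e9   # assumed tokens generated at inference time

training_flops = 6 * N * D                # one full training run
inference_flops = 2 * N * tokens_served   # serving a billion tokens

print(f"training:  {training_flops:.2e} FLOPs")
print(f"inference: {inference_flops:.2e} FLOPs")
print(f"ratio:     {training_flops / inference_flops:.0f}x")
```

Under these assumptions a single training run costs on the order of thousands of times more compute than serving a billion tokens, which is the sense in which per-token inference is "cheap"; aggregate serving cost at scale, margins, and hardware utilisation are a separate question.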
I disagree, there are loads of white papers detailing applications of AI in various industries. Here's an example; cba googling more links for you.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7577280/
And loads more detailing its ineffectual nature and wastefulness.