Summary: Meta, led by CEO Mark Zuckerberg, is investing billions in Nvidia’s H100 GPUs to build a massive compute infrastructure for AI research and projects. By the end of 2024, Meta aims to have 350,000 of these GPUs, with total expenditures potentially reaching $9 billion. This move is part of Meta’s push toward artificial general intelligence (AGI), competing with firms like OpenAI and Google’s DeepMind. AI and compute are a key part of the company’s 2024 budget, making AI its largest investment area.
I’m sure that everybody has some, but to spend billions seems a little premature.
Six months from now: “damn, we’re way behind Meta on AI. We should have spent billions six months ago, it’s going to cost way more to catch up.”
Chips evolve. By the time a billion-dollar contract is fulfilled, the hardware is two iterations behind.
Pretty sure they’ll be given insight into the roadmap for that price, and be able to place speculative orders on upcoming generations.
I used to present those roadmaps. They change too.
Of course they do, but my point was that I doubt Meta is locked into this generation.
The article says “by the end of the year” they will spend billions
“spend billions” does not equal “hand over cash and take home GPUs”. It’ll mean a contract worth that amount with delivery terms defined over time. Even over the course of a year there’s likely to be newer product than Hopper.
When you receive product, you pay for it; that’s what spending means. You may have a contract for future product, but you don’t pay for it in advance, since SOX rules kick in. A chip development cycle commonly runs at least 10 months.