Mark Zuckerberg indicates Meta is spending billions of dollars on Nvidia AI chips (www.cnbc.com)
from 31337@sh.itjust.works to technology@lemmy.world on 19 Jan 2024 22:33
https://sh.itjust.works/post/13105235

Summary: Meta, led by CEO Mark Zuckerberg, is investing billions in Nvidia’s H100 graphics cards to build a massive compute infrastructure for AI research and projects. By the end of 2024, Meta aims to have 350,000 of these GPUs, with total expenditures potentially reaching $9 billion. This move is part of Meta’s focus on developing artificial general intelligence (AGI), competing with firms like OpenAI and Google’s DeepMind. The company’s AI and computing investments are a key part of its 2024 budget, emphasizing AI as its largest investment area.

#technology

threaded - newest

tonytins@pawb.social on 19 Jan 2024 22:34 next collapse

The real winners are the chipmakers.

whodatdair@lemm.ee on 19 Jan 2024 23:08 next collapse

Gold rush you say?

Shovels for sale!

Get your shovels here! Can’t strike it rich without a shovel!

FaceDeer@kbin.social on 19 Jan 2024 23:37 next collapse

I feel like a pretty big winner too. Meta has been quite generous with releasing AI-related code and models under open licenses, I wouldn't be running LLMs locally on my computer without the stuff they've been putting out. And I didn't have to pay a penny to them for it.

Unforeseen@sh.itjust.works on 20 Jan 2024 13:51 collapse

Subsidized by boomers everywhere looking at ads on Facebook lol. Same with the Quest gear and VR development.

fluxion@lemmy.world on 20 Jan 2024 21:16 collapse

Was wondering why my stock was up. AI already improving my quality of life.

simple@lemm.ee on 19 Jan 2024 22:41 next collapse

Who isn’t at this point? Feels like every player in AI is buying thousands of Nvidia enterprise cards.

31337@sh.itjust.works on 19 Jan 2024 23:02 collapse

The equivalent of 600k H100s seems pretty extreme though. IDK how many OpenAI has access to, but it’s estimated they “only” used 25k to train GPT-4. OpenAI has, in the past, claimed that the diminishing returns on just scaling their model past GPT-4’s size probably aren’t worth it. So, maybe Meta is planning on experimenting with new ANN architectures, or planning on mass deployment of models?

TropicalDingdong@lemmy.world on 19 Jan 2024 23:11 next collapse

Might be a bit of a tell that they think they have something.

ExLisper@linux.community on 19 Jan 2024 23:58 next collapse

Or they just have too much money.

FaceDeer@kbin.social on 20 Jan 2024 01:02 collapse

Which will be solved by them spending it.

NotMyOldRedditName@lemmy.world on 20 Jan 2024 01:32 next collapse

Would that be diminishing returns on quality, or training speed?

If I could tweak a model and test it in an hour vs 4 hours, that could really speed up development time?

31337@sh.itjust.works on 20 Jan 2024 03:02 collapse

Quality. Yeah, using the extra compute to increase speed of development iterations would be a benefit. They could train a bunch of models in parallel and either pick the best model to use or use them all as an ensemble or something.

My guess is that the main reason for all the GPUs is they’re going to offer hosting and training infrastructure for everyone. That would align with the strategy of releasing models as “open” then trying to entice people into their cloud ecosystem. Or, maybe they really are trying to achieve AGI as they state in the article. I don’t really know of any ML architectures that would allow for AGI though (besides the theoretical, incomputable AIXI).
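
A minimal sketch of the “use them all as an ensemble” idea mentioned above, with hypothetical stand-in models (simple soft voting by averaging class probabilities; the real models and combination scheme would obviously differ):

```python
import numpy as np

# Hypothetical stand-ins for models trained in parallel with different seeds;
# each returns class probabilities for an input.
def model_a(x): return np.array([[0.7, 0.3]])
def model_b(x): return np.array([[0.6, 0.4]])
def model_c(x): return np.array([[0.8, 0.2]])

def ensemble_predict(models, x):
    """Soft-voting ensemble: average the probability outputs of all models."""
    return np.mean([m(x) for m in models], axis=0)

print(ensemble_predict([model_a, model_b, model_c], x=None))  # [[0.7 0.3]]
```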

qupada@kbin.social on 20 Jan 2024 01:34 collapse

The estimated training time for GPT-4 is 90 days though.

Assuming you could scale that linearly with the amount of hardware, you'd get it down to about 3.5 days. From four times a year to twice a week.

If you're scrambling to get ahead of the competition, being able to iterate that quickly could very much be worth the money.
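
A quick back-of-envelope version of that claim, using only the rough figures quoted in this thread (25k GPUs, 90 days, ~600k H100-equivalents); real training jobs never scale perfectly linearly, so treat it as an upper bound on the speedup:

```python
# All numbers are the rough estimates quoted in this thread, not official figures.
gpt4_gpus = 25_000      # estimated GPUs used to train GPT-4
gpt4_days = 90          # estimated GPT-4 training time
meta_gpus = 600_000     # "equivalent of 600k H100s"

speedup = meta_gpus / gpt4_gpus              # 24x more hardware
print(f"{gpt4_days / speedup:.2f} days")     # ~3.75 days under perfect linear scaling
```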

Rapidcreek@lemmy.world on 19 Jan 2024 22:58 next collapse

I’m sure that everybody has some, but to spend billions seems a little premature.

FaceDeer@kbin.social on 20 Jan 2024 01:01 collapse

Six months from now: "damn, we're way behind Meta on AI. We should have spent billions six months ago, it's going to cost way more to catch up."

Rapidcreek@lemmy.world on 20 Jan 2024 01:57 collapse

Chips evolve. By the time a billion dollar contract is fulfilled, they are two iterations behind.

wewbull@feddit.uk on 20 Jan 2024 12:37 collapse

Pretty sure they’ll be given insight into the roadmap for that price, and be able to place speculative orders on upcoming generations.

Rapidcreek@lemmy.world on 20 Jan 2024 14:06 collapse

I used to present those roadmaps. They change too.

wewbull@feddit.uk on 21 Jan 2024 11:44 collapse

Of course they do, but my point was that I doubt Meta is locked into this generation.

Rapidcreek@lemmy.world on 21 Jan 2024 11:56 collapse

The article says “by the end of the year” they will spend billions.

wewbull@feddit.uk on 21 Jan 2024 14:26 collapse

“spend billions” does not equal “hand over cash and take home GPUs”. It’ll mean a contract worth that amount with delivery terms defined over time. Even over the course of a year there’s likely to be newer product than Lovelace.

Rapidcreek@lemmy.world on 21 Jan 2024 15:16 collapse

When you get product you pay for it. Spending means paying for it. You may have a contract for future product, but you don’t pay for the future product in advance as SOX rules kick in. Commonly, a chip development cycle can be at least 10 months.

Boozilla@sh.itjust.works on 19 Jan 2024 23:02 next collapse

Just like the Metaverse…this won’t have legs.

the_q@lemmy.world on 19 Jan 2024 23:03 next collapse

Jensen’s gonna buy so many new leather jackets.

qupada@kbin.social on 19 Jan 2024 23:08 collapse

And spatulas. Don't forget the spatulas.

massive_bereavement@kbin.social on 20 Jan 2024 00:10 collapse

Could just buy Spatula City.

elgordio@kbin.social on 19 Jan 2024 23:25 next collapse

total expenditures potentially reaching $9 billion

I imagine they negotiated quite the discount on that.

DdCno1@kbin.social on 19 Jan 2024 23:56 next collapse

Agreed. There's volume discount, and then there is "Facebook data center with an energy consumption of a small country volume discount".

Ghostalmedia@lemmy.world on 20 Jan 2024 02:02 collapse

They signed up for spam email so they could get a coupon code.

Deceptichum@kbin.social on 19 Jan 2024 23:38 next collapse

I really hope they fail hard and end up putting these devices on the consumer second-hand market, because the V100s, while now affordable and flooding the market, are too out of date.

FaceDeer@kbin.social on 20 Jan 2024 00:59 collapse

Meta is the source of most of the open source LLM AI scene. They're contributing tons to the field and I wish them well at it.

TropicalDingdong@lemmy.world on 20 Jan 2024 01:18 collapse

Only other game in town really.

papertowels@lemmy.one on 20 Jan 2024 17:55 collapse

I’ve heard Mistral released some good models.

lurch@sh.itjust.works on 20 Jan 2024 00:13 next collapse

well Zuck has a lot of users he has to create bullshit for to keep them emotionally engaged and distracted

chemicalwonka@discuss.tchncs.de on 20 Jan 2024 02:23 next collapse

After all he needs a good AI bot to teach him to be “more human” because humans are starting to suspect

werefreeatlast@lemmy.world on 20 Jan 2024 03:56 next collapse

This is great! I thought there would be a chip-led recession. Sorry homeless people, but you’re gonna have to wait another generation to try and get online to maybe buy a house someday far far away… and also some day far far away if you get my drift.

FlyingSquid@lemmy.world on 21 Jan 2024 11:55 collapse

It does not give them personal access as privately as they may want (although privacy is generally respected), but at least there are public libraries for the poor and homeless to use computers and connect to the internet. One of the many, many ways libraries are essential to a community, especially to the poor.

werefreeatlast@lemmy.world on 21 Jan 2024 15:44 collapse

No, what I meant is that everyone is currently hellbent on having a recession so they can magically afford to buy a house. The recession was coming since China got cock blocked from purchasing EUV systems by the US government. This in turn means that the company making these machines and the companies hoping to use them…as well as their investments were going to bite the dust. However now Mr SuckmyVerga is investing in these new devices using the new machines from vendors not affected by the embargo. Which means that there won’t be a recession in chips. Probably. Maybe. I don’t know what you were talking about. But I was referring to us homeless who cannot afford to buy a home…which does include library homeless and currently here in Seattle popsicle homeless. Well I guess in most of the US actual homeless people are in libraries or popsicles. Those people suffer tremendously, so don’t let my sarcastic cynicism fool you; my parents had food stamps and I had soggy cereal for breakfast plenty of times. I can’t believe anyone could survive being outside in the past couple of weeks without heating.

Wanderer@lemm.ee on 21 Jan 2024 01:46 next collapse

Anyone got a graph of ai spending over time globally?

I’m starting to feel more confident about AGI coming soon (relatively soon).

Knowing absolutely nothing about it though, it seems like it needs to be more efficient? What’s the likelihood that, rather than increasing the bulk power of these systems, there is a breakthrough that allows more from less?

31337@sh.itjust.works on 21 Jan 2024 03:36 next collapse

Spending definitely looks exponential at the moment: https://sh.itjust.works/pictrs/image/75b9c941-b926-45a7-8593-3a953bbc92f5.jpeg

Most breakthroughs have historically been made by university researchers, then put into use by corporations. Arguably, that includes most of the latest developments. But university researchers were never going to get access to the $100 million in compute time to train something like GPT-4, lol.

The human brain has 100 trillion connections. GPT-4 has 1.76 trillion parameters (which are analogous to connections). It took 25k GPUs to train, so in theory, I guess it could be possible to train a human-like intelligence using 1.4 million GPUs. Transformers (the T in GPT) are not like human brains though. They “learn” once, then do not learn or add “memories” while they’re being used. They can’t really do things like planning either. There are algorithms for “lifelong learning” and planning, but I don’t think they scale to such large models, datasets, or real-world environments. I think there need to be a lot of theoretical breakthroughs to make AGI possible, and I’m not sure if more money will help that much. I suppose AGI could be achieved by trial and error (i.e. trying ideas and testing if they work without mathematically proving if or how well they’d work) instead of rigorous theoretical work.
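
For what it’s worth, here’s the arithmetic behind that 1.4 million figure; every number is a rough estimate from this thread, and parameters are only loosely analogous to biological connections:

```python
brain_connections = 100e12    # ~100 trillion synaptic connections (rough estimate)
gpt4_parameters   = 1.76e12   # rumored GPT-4 parameter count
gpt4_train_gpus   = 25_000    # estimated GPUs used to train GPT-4

scale = brain_connections / gpt4_parameters       # ~57x
print(f"{scale * gpt4_train_gpus:,.0f} GPUs")     # ~1.4 million, assuming linear scaling
```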

Wanderer@lemm.ee on 21 Jan 2024 06:35 collapse

Interesting. Thanks for posting.

So you’re saying we might see something 1/10 of a human brain (obviously I understand that’s a super rough estimate) next year.

This is the first I heard about GPT not learning. So if I interact with ChatGPT, it’s effectively a finished product and it will stay like that forever, even if it is wrong and I correct it multiple times?

This is where I’m really confused with the analogy. If GPT is not really close to a human brain, how is it able to interact with so many people instantly? I couldn’t hold 3 conversations, never mind a million. Yet my brain power is much, much higher than GPT’s. Couldn’t it just talk to 1 person and be smarter, as it can use all the computing power for that 1 conversation?

31337@sh.itjust.works on 21 Jan 2024 07:11 next collapse

Correct, when you talk to GPT, it doesn’t learn anything. If you’re having a conversation with it, every time you press “send,” it sends the entire conversation back to GPT, so within a conversation it can be corrected, but remembers nothing from the previous conversation. If a conversation becomes too long, it will also start forgetting stuff (GPT has a limited input length, called the context length). OpenAI does periodically update GPT, but yeah, each update is a finished product. They are very much not “open,” but they probably don’t do a full training between each update. They probably carefully do some sort of “fine-tuning” along with reinforcement-learning-with-human-feedback, and probably some more tricks to massage the model a bit while preventing catastrophic forgetting.
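
A minimal sketch of what that looks like from the client side, assuming a stateless model behind a hypothetical call_model() function (not a real API) and a crude word count standing in for a tokenizer:

```python
CONTEXT_LIMIT = 8_192  # example context length in "tokens"

def call_model(messages):
    # Hypothetical stand-in for the actual inference API.
    return {"role": "assistant", "content": "..."}

def count_tokens(messages):
    # Crude stand-in for a real tokenizer.
    return sum(len(m["content"].split()) for m in messages)

history = []

def send(user_text):
    history.append({"role": "user", "content": user_text})
    # Drop the oldest turns once the context limit is exceeded -- this is why
    # long conversations start "forgetting" their beginning.
    while count_tokens(history) > CONTEXT_LIMIT and len(history) > 1:
        history.pop(0)
    reply = call_model(history)  # the FULL remaining history is sent every turn
    history.append(reply)
    return reply["content"]
```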

Oh yeah, the latency of signals in the human brain is much, much higher than the latency of semiconductors. Forgot about that. That further muddies the very rough estimates. Also, there are multiple instances of GPTs running, not sure how many. It’s estimated that each instance “only” requires 128 GPUs during inference (responding to chat messages), as opposed to 25k GPUs for training. During training, the model needs to process multiple training examples at the same time for various reasons, including to speed up training, so more GPUs are needed. You could also think of it as training multiple instances at the same time, but combining what’s “learned” into a single model/neural network.

Wanderer@lemm.ee on 21 Jan 2024 07:21 collapse

This is really cool. Thanks for taking the time. Confusing but the good kind.

I’m just using this info to then try and extrapolate.

I understand the growth of Moore’s law and such. But the efficiency I was talking about seems almost like 1 exponential jump on an exponential curve.

Let’s just say for argument’s sake that Meta makes AGI next year with 350,000 GPUs; it would only need 2,000 GPUs to make use of what it’s built. That’s pretty mind-boggling. That really is singularity sort of talk.

So in your mind AGI when? And ASI when? You working in this field?

31337@sh.itjust.works on 21 Jan 2024 08:27 collapse

Yeah, those GPU estimates are probably correct.

I specialized in ML during grad school, but only recently got back into it and keeping up with the latest developments. Started working at a startup last year that uses some AI components (classification models, generative image models, nothing nearly as large as GPT though).

Pessimistic about the AGI timeline :) Though I will admit GPT caught me off guard. Never thought a model simply trained to predict the next word in a sequence of text would be capable of what GPT is (that’s all GPT does BTW: takes a sequence of text and predicts what the next token should be, repeatedly). I’m pessimistic because, AFAIK, there isn’t really a ML/AI architecture or even a good theoretical foundation that could achieve AGI. Perhaps actual brain simulation could, but I’m guessing that is very inefficient. My wild-ass-guess is AGI in 20 years if interest and money stay consistent. Then ASI like a year after, because you could use the AGI to build ASI (the singularity concept). Then the ASI will turn us into blobs that cannot scream, because we won’t have mouths :)
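
A toy illustration of that “predict the next token, repeatedly” loop; predict_next_token() here is just a dummy rule standing in for the neural network:

```python
def predict_next_token(tokens):
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    return vocab[len(tokens) % len(vocab)]  # dummy rule instead of a real model

tokens = ["the", "cat"]
for _ in range(4):
    tokens.append(predict_next_token(tokens))  # feed the whole sequence back in each step
print(" ".join(tokens))  # "the cat sat on mat ."
```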

Wanderer@lemm.ee on 21 Jan 2024 09:22 collapse

Yea I had a feeling it was still a long way away. At least the media will get bored of it in a year and only the big breakthroughs will make it.

But I think there will still be a lot of “stupid” yet impressive developments like GPT. It appears smart but isn’t that smart. Sure there will be other things.

It’s the same as the manufacturing developments. Only now are we beginning to build things similar to the complexity of a human in limited functions. But that doesn’t mean the machines we have built haven’t put millions of people out of work; we just changed manufacturing to better utilise the stupid things they can do much faster and more accurately than we can, and made a better product because of it. I found out about a year ago we couldn’t make a Saturn V rocket now even if we had all the money in the world. The ability of man has been lost. The way they did the machining of the rockets and the welding and things like that, no one alive has that ability anymore. Robots can’t do it either. But the rockets we make now are more accurate than the ones made in the ’60s. It’s just done differently.

Miaou@jlai.lu on 21 Jan 2024 14:18 collapse

You’re confused by the analogy because it’s a shitty one. If we wanted to reproduce the behaviour of a human, we would invest in medicine, not computer science.

EnderMB@lemmy.world on 21 Jan 2024 10:08 collapse

While I do work in the space, I’m more pessimistic. I think LLMs will allow the tech companies to breach plateaus that they’ve found with compositional models, but what we will see is other companies catching up to GPT-4, perhaps surpassing it a little.

I won’t pretend to be an expert on AI, but my view is that we’re simply heading toward a future where multiple companies own LLMs. We also won’t see many improvements over what we have now, and (the pessimist in me again) I think many of the benefits we saw from GPT-4 likely came from the fact that its training data contained an unbelievable amount of PII and stolen data. Without that data, we’ve seen ChatGPT get worse, and that’s one way researchers and other tech firms have tried to explain the performance gap.

t_berium@lemmy.world on 21 Jan 2024 10:26 collapse

Consumer GPU shortage from hell incoming. Why would Nvidia waste their production capacity on low-end GPUs if they can sell AI GPUs for, what, 70K USD apiece? This might become worse than the shortages caused by mining.

superbirra@lemmy.world on 21 Jan 2024 13:10 collapse

no more than ~26k apiece, it seems

Case@lemmynsfw.com on 21 Jan 2024 22:00 collapse

Oh, well that’s totally affordable and won’t affect the consumer market at all.

superbirra@lemmy.world on 21 Jan 2024 23:36 collapse

useless reverse strawman, as I didn’t intend any of the shit you pretend I meant, kid ;)