Mac Studio With M3 Ultra Runs Massive DeepSeek R1 AI Model Locally (www.macrumors.com)
from cantankerous_cashew@lemmy.world to apple_enthusiast@lemmy.world on 17 Mar 2025 22:56
https://lemmy.world/post/26987021

#apple_enthusiast

threaded - newest

jqubed@lemmy.world on 18 Mar 2025 00:23 collapse

Perhaps most impressively, the Mac Studio accomplishes this while consuming under 200 watts of power. Comparable performance on traditional PC hardware would require multiple GPUs drawing approximately ten times more electricity.

[…]

However, this performance doesn’t come cheap – a Mac Studio configured with M3 Ultra and 512GB of RAM starts at around $10,000. Fully maxed out, an M3 Ultra Mac Studio with 16TB of SSD storage and an Apple M3 Ultra chip with 32-core CPU, 80-core GPU, and 32-core Neural Engine costs a cool $14,099. Of course, for organizations requiring local AI processing of sensitive data, the Mac Studio offers a relatively power-efficient solution compared to alternative hardware configurations.

I wonder what a multi-GPU x86-64 system with adequate RAM and everything would cost? If it’s less, how many kWh of electricity would it take for the Mac to save money?
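The break-even question above can be sketched as a quick back-of-envelope calculation. All figures besides the Mac's $10,000 price, its ~200 W draw, and the article's "ten times more electricity" claim are assumptions for illustration (the PC build price and the electricity rate in particular are hypothetical):

```python
# Back-of-envelope break-even: how many kWh (and hours of load) before the
# Mac Studio's lower power draw offsets a hypothetical price difference.
mac_price = 10_000   # USD, base 512GB M3 Ultra config (from the article)
pc_price = 7_000     # USD, hypothetical multi-GPU x86-64 build (assumption)
mac_watts = 200      # article's figure for the Mac under load
pc_watts = 2_000     # "approximately ten times more electricity"
kwh_price = 0.15     # USD per kWh, assumed electricity rate

price_gap = mac_price - pc_price                              # extra paid for the Mac
savings_per_hour = (pc_watts - mac_watts) / 1000 * kwh_price  # USD saved per hour of load
breakeven_hours = price_gap / savings_per_hour
breakeven_kwh = breakeven_hours * (pc_watts - mac_watts) / 1000

print(f"break-even after {breakeven_hours:,.0f} h of load "
      f"({breakeven_kwh:,.0f} kWh saved)")
```

At these assumed numbers the Mac pays for itself after roughly 11,000 hours of continuous load (about 1.3 years running 24/7), so the answer depends heavily on utilization and the local electricity rate.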

timewarp@lemmy.world on 18 Mar 2025 01:52 collapse

It would cost less if NVIDIA & AMD weren't in an antitrust duopoly. You can get two XTX cards for less than $2000, with 48GB of VRAM total.

vanderbilt@lemmy.world on 18 Mar 2025 22:59 collapse

Unfortunately getting an AI workload to run on those XTXs, and run correctly, is another story entirely.

timewarp@lemmy.world on 18 Mar 2025 23:18 collapse

ROCm has made a lot of improvements. $2000 for 48GB of VRAM makes up for any minor performance decrease, compared to spending $2200 or more for 24GB of VRAM with NVIDIA.

vanderbilt@lemmy.world on 19 Mar 2025 03:45 collapse

ROCm certainly has gotten better, but the weird edge cases remain, and merely getting certain models to run at all is still problematic. I am hoping that RDNA4 is paired with some tooling improvements: no more massive custom container builds, no more versioning nightmares. At my last startup we tried very hard to get AMD GPUs to work, but there were too many issues.