this post was submitted on 01 Nov 2023
113 points (97.5% liked)
Apple
This was a real bummer for anyone interested in running local LLMs. Memory bandwidth is the limiting factor for inference performance, and the Mac unified memory architecture is one of the relatively cheap ways to get a lot of high-bandwidth memory, compared to buying a specialist AI GPU for $5-10k. I was planning to upgrade the memory further than usual on my next MBP in order to experiment with AI, but now I'm questioning whether the Pro chip will be fast enough to be useful.
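To see why bandwidth dominates: generating each token requires streaming roughly all of the model's weights through memory once, so the token rate is approximately bandwidth divided by model size. Here's a rough back-of-the-envelope sketch; the bandwidth figures and quantization level are illustrative assumptions, not benchmarks:

```python
# Rough upper bound on LLM token generation speed for a dense model,
# assuming inference is memory-bandwidth-bound (each token streams
# all weights through memory once):
#   tokens/sec ~= memory bandwidth / model size in bytes

def tokens_per_sec(bandwidth_gb_s: float, params_billions: float,
                   bytes_per_param: float) -> float:
    """Bandwidth-bound estimate of tokens generated per second."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# Example: a 70B-parameter model quantized to 4 bits (~0.5 bytes/param),
# on two assumed bandwidth tiers (illustrative numbers):
for name, bw in [("~150 GB/s chip", 150.0), ("~400 GB/s chip", 400.0)]:
    print(f"{name}: ~{tokens_per_sec(bw, 70.0, 0.5):.1f} tokens/sec")
```

Under those assumptions, the lower-bandwidth chip lands around 4 tokens/sec versus roughly 11 on the higher one, which is why a bandwidth cut matters so much more for this workload than raw compute.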