this post was submitted on 12 Sep 2024
53 points (87.3% liked)

PC Gaming

top 22 comments
[–] [email protected] 16 points 1 month ago (2 children)

A phone CPU challenging a top of the line desktop GPU is crazy.

[–] [email protected] 24 points 1 month ago* (last edited 1 month ago) (2 children)

Desktop CPU, with a 170W TDP.

Granted, the comparison is an extremely specific synthetic benchmark, but still, I agree: utterly wild.

[–] [email protected] 11 points 1 month ago

It doesn't really challenge the desktop CPU in multithreaded tests, where the 170 W TDP actually matters.

The test also includes AI tasks; the Apple chip seems to spend around 20% of its die area on dedicated AI hardware, while the desktop CPU has none.

[–] [email protected] 4 points 1 month ago

That's actually nuts. I have an iPhone X; I remember when it came out and everyone was surprised that it was as fast as an i5-7200U. Sure, that's a dual-core laptop chip, but it's still very impressive.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago) (1 children)

It's been like this with Apple's A-series chips for years.

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago) (2 children)

I have to demonstrate to my friends every time how my MBP M2 blows my Ryzen 5950x desktop out of the water for my professional line of work.

I can't quite catch the drift of what x86/x64 chips are good for anymore, other than gaming, nostalgia and spec boasting.

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (1 children)

I have a 5950X computer and a Mac mini with some form of M2.

I render video on the M2 computer because I have that sweet indefinite Final Cut Pro license, but then I copy it to the 5950X computer and use ffmpeg to recompress it, which is like an order of magnitude faster than using the M2 computer to do the video compression.

I have some other tasks I’ve given both computers and when the 5950X actually gets to use all its cores, it blows the M2 out of the water.

[–] [email protected] 2 points 1 month ago (1 children)

Is it possible you’re using your desktop’s GPU for ffmpeg encoding, and not the CPU, by chance?

[–] [email protected] 3 points 1 month ago

No, you need to manually specify that, and the options are more limited, so I usually do CPU encoding unless I’m prioritizing encoding speed over quality for some reason. (And yes, I have verified it’s using the CPU by looking at the CPU usage while it’s encoding).
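For reference, a sketch of what that manual selection looks like on the ffmpeg command line. `libx264` is the software (CPU) encoder that most builds default to; hardware encoders such as `h264_nvenc` (NVIDIA) have to be requested explicitly with `-c:v`. The filenames here are placeholders, and the exact encoders available depend on how your ffmpeg was built:

```shell
# CPU (software) encode: the default on most builds; slower, but with
# finer quality control via -crf and -preset
ffmpeg -i input.mov -c:v libx264 -crf 18 -preset slow output_cpu.mp4

# GPU encode on an NVIDIA card: must be requested explicitly
ffmpeg -i input.mov -c:v h264_nvenc -preset p5 output_gpu.mp4

# List the H.264 encoders your particular build actually supports
ffmpeg -encoders | grep 264
```

Watching CPU usage during the first command (as the commenter describes) is a quick way to confirm the encode really is running on the CPU.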

[–] [email protected] 2 points 1 month ago

I can't quite catch the drift of what x86/x64 chips are good for anymore, other than gaming, nostalgia and spec boasting.

Probably two things:

  • Cost- and power-no-object performance, which isn't necessarily a positive as it encourages bad behaviour.
  • The platform is much more open, courtesy of some quirks of how IBM spec'ed BIOS back before the dawn of time. Yes, you can get ARM and RISC-V licenses (openPOWER is kind of a non-entity these days) and design your own SBC, but every single ARM and RISC-V machine boots differently, while x86 and amd64 have a standard boot process.

All those fancy "CoPilot ready" Qualcomm machines? They're following the same path as ARM-based smartphones have, where every single machine is bespoke and you're looking for specific boot images on whatever the equivalent of xda-developers is, or (and this is more likely) just scrapping them when they're used up, which will probably happen a lot faster, given Qualcomm's history with support.

I'd love to see a replacement for x86/amd64 that isn't a power suck, but has an open interface to BIOS.

[–] [email protected] 12 points 1 month ago* (last edited 1 month ago) (1 children)

It's Geekbench, though... unless things have changed, that's a garbage benchmark. Glad the A18 is doing well, but it doesn't compete in the laptop/desktop space. If it did, they wouldn't bother making the M series.

[–] [email protected] 10 points 1 month ago* (last edited 1 month ago) (1 children)

Yeah, Geekbench has long been known to favor iPhones. Even compared to other mobile SoCs like Qualcomm or MediaTek, it's... uh... "optimized" for what Apple chips are designed around.

Using it to benchmark a desktop is even more useless.

On top of that:

  1. The single-core number obviously only uses one big core, of which Apple's chip has only two.

  2. The score only reflects maximum burst speed (it's not expected to sustain that kind of performance for more than about 10 seconds). Even using both big cores simultaneously would cut that score short due to overheating.

  3. The desktop CPU has 16 identical cores and is expected to sustain those workloads indefinitely. Servers and supercomputers run these things 24/7.

  4. The desktop CPU is also on an older node, which makes it inherently less power efficient.

It's a bit like saying, "for 10 seconds, I can run as fast as the world's fastest marathon ~~runner~~ TEAM!" A neat factoid, but the two still aren't comparable.

[–] [email protected] 3 points 1 month ago

That's a great breakdown of it. Thanks for taking the time to lay it all out.

[–] [email protected] 3 points 1 month ago (2 children)

What is Intel doing? A 12 W CPU should not be faster than their 125 W flagship. Single-core performance is what matters most in a lot of usage scenarios.

Intel is still on 10 nm lithography. Did they stop investing in R&D and machinery a while back? AMD is on 7 nm, Apple on 3 nm.

[–] [email protected] 8 points 1 month ago

Intel literally spent a decade making their CPUs just a little bit faster, but not too much faster, every year. They succeeded, and this is the result.

The worst part is they aren't even running at 125 W or whatever they claim; they often push into the 180–200 W range to reach their own marketing benchmarks.

[–] [email protected] 1 points 1 month ago (1 children)
[–] [email protected] 2 points 1 month ago (2 children)

You all know that nm figures are just marketing at this point, and there isn't anything on these chips that literally measures 3 nm.

[–] [email protected] 4 points 1 month ago (1 children)
[–] [email protected] 1 points 1 month ago
[–] [email protected] 2 points 1 month ago (1 children)

Not sure why you're getting downvoted; it's true.

[–] [email protected] 1 points 1 month ago

¯\_(ツ)_/¯

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

Challenges the Ryzen in the sense that 8 GB of RAM on Apple silicon is "like 16 GB" on everything else? At 5x the price, no less, and $1k for the optional wheels?