/r/hardware: a technology subreddit for computer hardware news, reviews and discussion.


/r/hardware is a place for quality computer hardware news, reviews, and intelligent discussion.

1
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/hardware by /u/TwelveSilverSwords on 2024-08-23 05:18:02+00:00.

2

The original was posted on /r/hardware by /u/imaginary_num6er on 2024-08-22 18:05:24+00:00.

3

The original was posted on /r/hardware by /u/RenatsMC on 2024-08-22 13:25:17+00:00.

4

The original was posted on /r/hardware by /u/Dakhil on 2024-08-22 11:46:52+00:00.

5

The original was posted on /r/hardware by /u/RenatsMC on 2024-08-22 11:26:18+00:00.

6

The original was posted on /r/hardware by /u/imaginary_num6er on 2024-08-22 01:56:55+00:00.

7

The original was posted on /r/hardware by /u/justme2024 on 2024-08-22 00:39:41+00:00.

8

The original was posted on /r/hardware by /u/reps_up on 2024-08-22 00:25:32+00:00.

9

The original was posted on /r/hardware by /u/BreakAtmo on 2024-08-21 14:10:53+00:00.


I know that CPUs prefer low-latency RAM and don't require the high bandwidth that GPUs do, while GPUs prefer high-bandwidth VRAM and can tolerate higher latency. But I was wondering:

  • Are there hard numbers on the performance increases seen in a CPU when it goes from using high-latency VRAM (like in a console that uses unified GDDR6) to low-latency RAM?
  • Do GPUs benefit at all from lower latency RAM if they're forced to use it?
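On the latency question, a quick way to feel the difference on any machine is to compare a dependent pointer-chase (each load must wait for the previous one, so latency dominates) against a sequential pass over the same data (predictable accesses let the prefetcher hide latency, so bandwidth dominates). A minimal sketch in Python — the absolute times are meaningless, only the contrast between the two access patterns is illustrative:

```python
import random
import time

N = 1_000_000

# Build a random cyclic permutation: chain[i] tells us where to jump next.
# Chasing it defeats the hardware prefetcher, so every step pays access latency.
idx = list(range(N))
random.shuffle(idx)
chain = [0] * N
for a, b in zip(idx, idx[1:] + idx[:1]):
    chain[a] = b

def pointer_chase(chain, steps):
    """Latency-bound: every load depends on the result of the previous one."""
    i = 0
    for _ in range(steps):
        i = chain[i]
    return i

def sequential_sum(data):
    """Bandwidth-bound: independent, predictable accesses."""
    return sum(data)

t0 = time.perf_counter(); pointer_chase(chain, N); t1 = time.perf_counter()
t2 = time.perf_counter(); sequential_sum(chain); t3 = time.perf_counter()
print(f"chase: {t1 - t0:.3f}s  sequential: {t3 - t2:.3f}s")
```

In a compiled language hitting raw DRAM the gap is far larger (roughly the ratio of full memory latency to cache-line streaming cost), which is why GDDR6's looser latency hurts CPU-style dependent-load workloads much more than GPU-style streaming ones.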
10

The original was posted on /r/hardware by /u/imaginary_num6er on 2024-08-21 23:35:37+00:00.

11

The original was posted on /r/hardware by /u/Pristine-Woodpecker on 2024-08-21 21:52:54+00:00.


This is hidden somewhere deep within the comments on the Chips and Cheese Zen 5 review. The Phoronix PostgreSQL 16 parallel read/write benchmarks indeed show catastrophic scaling on Zen 5, with the 9950X performing slightly worse than the 5950X. This is exactly the kind of parallel workload that needs continuous cross-CCX synchronization.

However, the same benchmark shows the 9900X performing as expected! Most of the other reviews that looked at the bad inter-CCX latency only tested the 9950X. So is this a benchmark oddity, or does it really only affect the 16-core variant?

The read-only tests, which would have less inter-CCX synchronization traffic, scale as expected on both chips.
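The mechanism being blamed here can be sketched in miniature: a write-heavy parallel workload keeps bouncing shared, mutated state between cores (cross-CCX on a dual-CCD part), while a read-only workload lets each core keep its own clean copy of the data. A toy Python illustration — this is not the Phoronix benchmark, and the thread/iteration counts are arbitrary:

```python
import threading

THREADS, ITERS = 8, 50_000

def shared_updates():
    """Write-heavy: all threads contend on one lock-protected counter,
    forcing the cache line to migrate between cores on every update."""
    total = 0
    lock = threading.Lock()
    def work():
        nonlocal total
        for _ in range(ITERS):
            with lock:
                total += 1
    ts = [threading.Thread(target=work) for _ in range(THREADS)]
    for t in ts: t.start()
    for t in ts: t.join()
    return total

def independent_reads(data):
    """Read-only: each thread scans shared data without writing, so
    cached copies stay valid and no inter-core transfers are needed."""
    results = []
    def work():
        results.append(sum(data))  # list.append is thread-safe in CPython
    ts = [threading.Thread(target=work) for _ in range(THREADS)]
    for t in ts: t.start()
    for t in ts: t.join()
    return results
```

The read/write case maps onto PostgreSQL's lock and buffer traffic; the read-only case maps onto the select-only tests that scale as expected on both chips.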

12

The original was posted on /r/hardware by /u/Chipdoc on 2024-08-21 16:21:25+00:00.

13

The original was posted on /r/hardware by /u/SixDegreee612 on 2024-08-21 10:50:50+00:00.


I remember reading some announcements, but so far there is nothing on the shelves. Are we ever going to see any, or are 48 GiB sticks the best they can do, at least with DDR5?

I know they have hit limits with capacitor cell scaling, but there was news that they have solutions for at least one or two more generations.

Or maybe they gave up on that and are going to stack the RAM, somewhat similar to what they are doing with NAND flash? 🙄
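For context on where the 48 GiB figure comes from: stick capacity is just die density times chip count. A back-of-the-envelope in Python, assuming a common dual-rank module with 8 x8 chips per rank (the exact organization varies by module, so treat the chip count as an assumption):

```python
def stick_capacity_gib(die_gbit, chips):
    """DRAM stick capacity: each chip holds die_gbit gigabits = die_gbit/8 GiB."""
    return die_gbit / 8 * chips

CHIPS = 16  # assumed: 2 ranks x 8 chips, x8 organization

print(stick_capacity_gib(16, CHIPS))  # 16 Gbit dies -> 32 GiB sticks
print(stick_capacity_gib(24, CHIPS))  # 24 Gbit dies -> today's 48 GiB sticks
print(stick_capacity_gib(32, CHIPS))  # 32 Gbit dies -> 64 GiB sticks
```

So non-stacked 64 GiB consumer sticks need 32 Gbit dies in volume; failing that, the levers are more ranks/chips per module or die stacking, as with NAND.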

14

The original was posted on /r/hardware by /u/Flying-T on 2024-08-21 06:27:39+00:00.

15

The original was posted on /r/hardware by /u/RenatsMC on 2024-08-21 14:51:16+00:00.

16

The original was posted on /r/hardware by /u/giuliomagnifico on 2024-08-21 11:55:02+00:00.

17

The original was posted on /r/hardware by /u/Famous_Wolverine3203 on 2024-08-21 12:06:49+00:00.


“Versus its true predecessor, the Intel Core i9-14900K, the CPU scores an 11.7% lead in single-core and a 10.2% lead in multi-core tests. “

“The CPU ends up 8% faster than the Core i9-14900KS & 4% faster than the Ryzen 9 9950X in single-core tests. In Multi-core, the CPU scores a 5.1% lead over the Core i9-14900KS and a 14% lead over the Ryzen 9 9950X”

Bear in mind that the Object Detection and Background Blur subtests in Geekbench 6 use AVX-512 on AMD's Zen 5, so AMD benefits in those tests by up to 21%. Excluding those two tests would increase Intel's lead further.
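To see why excluding two subtests can move the headline score, recall that Geekbench aggregates subtests with a geometric mean. A hedged sketch with made-up per-subtest ratios — only the 21% figure comes from the comment above, and the subtest count and "dead even elsewhere" assumption are illustrative:

```python
import math

def geomean(xs):
    """Geometric mean, computed in log space for numerical stability."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical AMD/Intel score ratios per subtest: two AVX-512-heavy
# subtests give AMD a 21% edge; assume the other eight are even.
ratios = [1.21, 1.21] + [1.0] * 8

print(geomean(ratios))      # overall: AMD ahead by ~3.9%
print(geomean(ratios[2:]))  # with the two excluded: exactly even
```

Under these toy numbers, two outlier subtests shift the overall geomean by about 4 points, which is the same order as the single-core gaps quoted above.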

18

The original was posted on /r/hardware by /u/M337ING on 2024-08-21 11:40:57+00:00.

19

The original was posted on /r/hardware by /u/capybooya on 2024-08-20 21:53:37+00:00.

20

The original was posted on /r/hardware by /u/constantlymat on 2024-08-20 14:42:43+00:00.

21

The original was posted on /r/hardware by /u/Dakhil on 2024-08-20 14:45:17+00:00.

22

The original was posted on /r/hardware by /u/TwelveSilverSwords on 2024-08-20 20:21:32+00:00.


| SKU | Name | CPU clock | GPU clock |
|---|---|---|---|
| 8650AB | Snapdragon 8 Gen 3 | 3.3 GHz | 903 MHz |
| 8650AC | Snapdragon 8 Gen 3 For Galaxy | 3.4 GHz | 1000 MHz |
| 8750AB | Snapdragon 8 Gen 4 | 4.37 GHz | 1150 MHz |
| 8750AC | Snapdragon 8 Gen 4 For Galaxy (?) | 4.47 GHz | 1250 MHz |

Qualcomm Snapdragon 8 Gen 4:

  • SM8750 (Standard)
  • SM8750P (Performance/For Galaxy)
  • Adreno 830 GPU
  • New Spectra ISP
  • eNPU
  • 30% CPU gains
  • 50%+ GPU gains
  • On-device AI performance boost
  • Thermals & power efficiency in check
  • Launch in October, first phone on sale by November

A810 is 40% faster than A710, both are 1 CU (and probably around the same frequency too). I guess the 8G4 GPU is at least 60% faster than 8G3.
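The claimed gains can be sanity-checked against the leaked clocks alone — this obviously ignores IPC and per-CU changes, which is exactly where the rest of the claimed uplift would have to come from:

```python
clocks = {
    # SKU: (CPU GHz, GPU MHz), taken from the leaked table above
    "8650AB": (3.30, 903),   # Snapdragon 8 Gen 3
    "8750AB": (4.37, 1150),  # Snapdragon 8 Gen 4
}

def uplift_pct(new, old):
    """Percentage uplift of new over old."""
    return (new / old - 1) * 100

cpu_up = uplift_pct(clocks["8750AB"][0], clocks["8650AB"][0])
gpu_up = uplift_pct(clocks["8750AB"][1], clocks["8650AB"][1])
print(f"CPU clock uplift: {cpu_up:.1f}%")  # ~32%, consistent with the 30% CPU claim
print(f"GPU clock uplift: {gpu_up:.1f}%")  # ~27%, so a 50%+ GPU gain implies large per-clock gains
```

On these numbers, clocks alone cover the CPU claim, while the GPU claim needs substantial architectural improvement on top — consistent with the A810-vs-A710 per-CU comparison above.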

23

The original was posted on /r/hardware by /u/reps_up on 2024-08-20 17:24:42+00:00.

24

The original was posted on /r/hardware by /u/picastchio on 2024-08-20 16:58:54+00:00.

25

The original was posted on /r/hardware by /u/bizude on 2024-08-20 16:07:47+00:00.
