GeekBench probably made the right choice to optimize for more realistic real-world workloads than for the more specific workloads that benefit from really high core counts. GeekBench is supposed to be a proxy for common use case performance.
High core count CPUs are only useful for specific workloads and should not be purchased as general-purpose fast CPUs. Unless you're doing specific tasks that scale with core count, a CPU with fewer cores and higher single-threaded throughput will be faster for normal use cases.
The callout against the poor journalism at Tom's Hardware isn't anything new. They have a couple of staff members posting clickbait all the time. Sometimes the links don't even work, or the claims are completely wrong. This is par for the course for that site now.
To be fair, the Tom's Hardware article did acknowledge these points and limitations, so this Slashdot critique is basically restating the Tom's Hardware article's content, just more critically: https://www.tomshardware.com/pc-components/cpus/apples-18-co...
As an owner of a 96-core 9995WX: nobody is buying one for desktop PC software, much less laptop-level software.
To justify the investment you need to have tasks that scale out, or loads of heterogeneous tasks to support concurrently.
What tasks are you running on your 96-core 9995WX?
LLVM developer compiling the full LLVM stack every 10 minutes.
Right, this is a car-priced CPU and the only rational reason to have one is that you can exploit it for profit. One pretty great reason would be giving it to your expensive software developers so they don't sit there waiting on compilers.
I think this actually concedes the main criticism.
If Geekbench 6 multicore is primarily a proxy for “common use case performance” rather than for workloads that actually use lots of cores, then it shouldn’t be treated as a general multicore CPU benchmark, and it definitely shouldn’t be the basis for sweeping 18-core vs 96-core conclusions.
That may be a perfectly valid design choice. But then the honest takeaway is: GB6 multicore measures a particular class of lightly/moderately threaded shared-task workloads, not broad multicore capability.
The criticism isn’t “every workload should scale linearly to 96 cores.” It’s that a benchmark labeled “multicore” is being used as if it were a general multicore proxy when some of its workloads stop scaling very early, including ones that sound naturally parallelizable.
Geekbench 6 isn't really marketed as a one-size-fits-all benchmark. It's specifically aimed at consumer hardware. The first paragraph on geekbench.com reads:
> Geekbench 6 is a cross-platform benchmark that measures your system's performance with the press of a button. How will your mobile device or desktop computer perform when push comes to crunch? How will it compare to the newest devices on the market? Find out today with Geekbench 6.
And further down,
> Includes updated CPU workloads and new Compute workloads that model real-world tasks and applications. Geekbench is a benchmark that reflects what actual users face on their mobile devices and personal computers.
The problem is, in practice, despite nonspecific marketing language, people do use the multicore benchmark to measure multicore performance. Including for things like Threadripper, which is not exactly an exotic science project CPU or non-personal or non-desktop.
> Including for things like Threadripper, which is not exactly an exotic science project CPU or non-personal or non-desktop.
We're talking about a CPU with a list price over $10000.
Geekbench 6 is a bad test to use to assess the suitability of a 96-core Threadripper for the kinds of use cases where buying a 96-core Threadripper might make sense. But Geekbench 6 does a very good job of illustrating the point that buying a 96-core Threadripper would be a stupid waste of money for a personal desktop and the typical use cases of a personal desktop.
Holy hell. Lol. I did not realize how generous $PREVIOUS_EMPLOYER was.
> then it shouldn’t be treated as a general multicore CPU benchmark,
It is a general multi-core benchmark for its target audience.
96-core CPUs are not its target audience.
You're seriously posting to HN a link to your Slashdot post linking to your year-old blog post complaining about Geekbench 6's multi-threaded test without ever mentioning Amdahl's Law?
Pretending that everything a CPU does is an embarrassingly parallel problem is heinous benchmarking malpractice. Yes, Geekbench 6 has its flaws and limitations. All benchmarks do. Geekbench 6 has valid uses, and its limitations are defensible in the context of using it to measure what it is intended to measure. The scalability limitations it illustrates are real problems that affect real workloads and use cases. Calling it "broken" because it doesn't produce the kind of scores a marketing department would want to see from a 96-core CPU reflects more poorly on you than it does on Geekbench 6.
Amdahl’s Law is descriptive, not exculpatory.
It explains why a workload with a large serial/contended fraction won’t scale.
It does not prove that the workload’s serial fraction is representative of the category it claims to stand in for.
So when a benchmark’s “text processing” test over ~190 files barely gets past ~1.3x on 8 cores, that’s not some profound demonstration that CPUs can’t parallelize text work. It’s mostly a demonstration that this benchmark’s implementation has a very large serial bottleneck.
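As a rough sanity check, that ~1.3x-on-8-cores figure can be inverted through Amdahl's Law to estimate the serial fraction it implies (a sketch assuming the simple Amdahl model applies; the 1.3x and 8-core numbers come from the article, not from any published GB6 internals):

```python
# Amdahl's Law: S(n) = 1 / ((1 - p) + p / n), where p is the parallelizable
# fraction of the work. Given an observed speedup S on n cores, solve for p
# and for the asymptotic speedup ceiling 1 / (1 - p).
def implied_parallel_fraction(speedup, cores):
    # From 1/S = (1 - p) + p/n  =>  p = (1 - 1/S) / (1 - 1/n)
    return (1 - 1 / speedup) / (1 - 1 / cores)

p = implied_parallel_fraction(1.3, 8)
ceiling = 1 / (1 - p)
print(f"parallel ~{p:.0%}, serial ~{1 - p:.0%}, ceiling ~{ceiling:.2f}x")
# → parallel ~26%, serial ~74%, ceiling ~1.36x
```

If the model holds, roughly three quarters of the subtest's work is effectively serial, and no core count will ever push it much past ~1.36x. That is an implementation property, not a property of text processing.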
That would be fine if people treated GB6 multicore as a narrow benchmark of specific shared-task client workloads. The problem is that it is labelled as a general multicore CPU metric, and is used as such, including for 18-core vs 96-core comparisons. That’s the misuse being criticized.
TL;DR: Amdahl’s Law explains the ceiling; it does not justify treating an avoidably low ceiling as a general measure of multicore CPU capability.
EDIT: Also, submitter, I'm not sure why the parent is upset that you submitted a Slashdot post that has 3 links, 1 of which is to your blog. Thanks for sharing. I've been wondering for years why Geekbench was so obviously broken on multicore. (It comes up a lot in Apple discussions, as you know.)
The real meat from the article: https://dev.to/dkechag/how-geekbench-6-multicore-is-broken-b...
First plot really says it all.
Compare to https://upload.wikimedia.org/wikipedia/commons/e/ea/AmdahlsL...
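Those curves are easy to reproduce numerically; a minimal sketch (the 50/75/90/95% parallel fractions match the usual version of that figure):

```python
# Amdahl's Law: speedup S(n) = 1 / ((1 - p) + p / n) for parallel fraction p.
# Each curve flattens out at its ceiling of 1 / (1 - p), no matter how many
# cores are added.
def amdahl_speedup(p, n):
    return 1 / ((1 - p) + p / n)

for p in (0.50, 0.75, 0.90, 0.95):  # fractions from the classic figure
    s8, s96 = amdahl_speedup(p, 8), amdahl_speedup(p, 96)
    print(f"p={p:.2f}: 8 cores -> {s8:.2f}x, 96 cores -> {s96:.2f}x "
          f"(ceiling {1 / (1 - p):.0f}x)")
```

Even a 95%-parallel workload tops out at 20x, which is the shape the blog post's first plot is being compared against.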
The article is probably right about text processing though. It sounds like they took an inherently parallel task with no communication and (accidentally?) crippled it.
I'm not sure what's going on with that subtest, and the lack of scaling is certainly egregious. But we've all encountered tasks that in theory could scale much better but in practice have been implemented in a more or less serial fashion. That kind of thing probably isn't a good choice for a multi-core test suite, but on the other hand: given that Geekbench has both multi-core and single-core scores for the same subtests (though with different problem sizes), it would be unrealistic if all the subtests were highly scalable. Encountering bad scalability is a frequent, everyday part of using computers.
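For what it's worth, batch-processing independent files is about as embarrassingly parallel as workloads get. A minimal sketch of the pattern, with a hypothetical word-count stand-in for the per-file work (GB6's actual implementation isn't public; the ~190-file figure comes from the article):

```python
# Each file is an independent task with no shared state: the classic
# embarrassingly parallel shape. A thread pool is used here for brevity;
# CPU-bound work in CPython would use a process pool or native threads
# instead, but the task structure (an independent map over files) is the same.
from concurrent.futures import ThreadPoolExecutor

def count_words(text):
    # Stand-in for real per-file work (parsing, regex, markup rendering, ...)
    return len(text.split())

def process_all(texts, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(count_words, texts))

docs = [f"word {i} " * 1000 for i in range(190)]  # ~190 inputs, as in GB6
print(sum(process_all(docs)))  # → 380000
```

With independent inputs like this, near-linear scaling comes almost for free until I/O or memory bandwidth bites; whatever serial bottleneck GB6's version has, it isn't inherent to the task.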
"When you measure, include the measurer" - MC Hammer
When you are MC Hammer, everything is an MC nail.
What in the name of goodness are you doing to your poor computers?
Have you not heard of C-x M-c M-butterfly?
Anyone who treats Geekbench as a meaningful benchmark (i.e. without a huge disclaimer, or without other, more meaningful data points) is not to be trusted. You can only really trust it for inter-generational comparisons within a single architecture.
Since Geekbench 5, the single threaded benchmark scores have aligned pretty well with those from the industry standard SPEC benchmark.
What in the name of goodness are you doing to your poor computers?
TIL: Slashdot still exists. And it looks exactly as horrible as 20 years ago.
I mean so does HN
HN is minimalist, not horrible. And the content is good!
The strategy is to make outlandish claims and then have people "engaging" to "disprove" all of the claims. This strategy works as long as people are too apathetic and/or stupid to hold liars accountable. It works currently because journalism has significantly less value than tabloid drama to many people, some of whom are just narrative-shopping for a fun curated list of ideas (not facts) that fit their personalized echo chamber.
People already "destroy" the many-core Threadrippers with gaming-oriented Ryzens on appropriately suited workloads; this is clickbait.
Wow, there's a site with a worse benchmark methodology than Phoronix? Noted.
Phoronix is terrible in terms of clickbait and deliberate ragebait, and its comment section is a toxic cesspool, but its benchmarks generally seem sound. What issues have you observed with their benchmark suite?
Given the username of the account you're replying to, and the implausibility of a Phoronix reader being unaware of Tom's Hardware, I think you may have been baited by a troll.
What clickbait and ragebait? Example?
I'm really confused by the self-aggrandizing here, muddying up the discussion
How good is the M5 Max compared to a 96-core Threadripper? What's the tl;dr? Where's the broader assortment of benchmarks?
I just want to see some bar graphs that say "lower is better" or "higher is better".