In that, you are right. This poll picked TFs as the metric.
Why would there be any other metric? Why would we be discussing anything but compute throughput, especially between GPUs with similar architectures?
Whoever created this thread and its poll seemed to share this sentiment, as the options only list TFLOP counts.
Do we consider Hawaii to be a better choice than Polaris 10 because it has many more execution units?
TFLOP count is obviously not the only metric relevant to a console's final performance, but between systems with similar architectures it's arguably the best one we have.
However, that's somewhat skewed by dynamic clocks.
Sony says the clock deltas will be marginal and that most of the time both the CPU and GPU will run at the advertised max clocks. If we choose to reject their claims, why would we accept that Microsoft isn't lying about its clock speeds either?
I won't pick sides and say one is lying and the other isn't. Both companies stated their specs. I won't engage in baseless assumptions over officially stated specs. I also think it's a rather poor practice to do so, but to each their own.
The main takeaway was a leaked spec for an SoC with 36 CUs, a bunch of would-be insiders claiming something notably faster, and the question of whether people believed the leak was final hardware (regardless of what it could be clocked at) or whether the insiders were actual insiders providing real info. The insiders were, AFAICS, by and large a bunch of fibbers. So the lesson here is: don't trust GAF/Era insiders.
This is the one sole time I'll comment on this. It's a worthless conversation to have.
All I did these past couple of months was put up several different sources on the baseless thread with different specs so we could discuss them. Which was supposedly the purpose of that thread until a mod decided it should instead be a thread for dogpiling on people outside this forum who couldn't defend themselves.
A practice that same mod brought to this thread as soon as the baseless one was closed.
So much talk about what is beneath B3D during these last couple of months, yet it seems constant mockery of people (some of them publicly identified developers) outside the forum is somehow fair game.
My problem has always been with the github inquisitors who constantly jumped to mockery, bullying and trolling towards any source claiming the github data wasn't indicative of the final product (which, as it turns out, it wasn't), as well as towards any user who dared to post those baseless rumors in the baseless rumor thread.
I don't remember swearing fealty to any one leaker; on the contrary, my purpose has always been to entertain different hypotheses, in the thread that was supposed to exist for entertaining different hypotheses.
The only lesson I learned here is that B3D condones the dogpiling of users and external people who don't conform to a certain clique.
That was a lesson well learned.
Although if one of the would-be insiders like Osiris did state 10.x TFs and other details that make it look legitimate, that could be recorded for posterity come PS6 prediction season.
All the certified leakers said the github gospel had outdated data. Which it does.
The PS5 isn't bringing a 9.2 TFLOPs GPU (36 CUs at 2GHz) which absolutebeginner repeated ad nauseam.
It's also not bringing a 8 TFLOPs GPU (36 CUs at 1.75GHz) which psman1700 repeated around the forum ad nauseam, in what's probably over a hundred posts if we bother to count.
The GPU will be running at over 10% higher clocks than what's in the github leak (2.23GHz vs 2.0GHz), so the gap to the SeriesX shrinks accordingly: from the SeriesX holding a ~33% compute advantage over 9.2 TFLOPs, to the PS5 trailing the SeriesX's 12.15 TFLOPs by ~15%. 15% is close, 33% is not.
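For reference, all of these figures fall out of the same simple arithmetic: TFLOPs = CUs × 64 shaders per CU × 2 FLOPs per clock (FMA) × clock in GHz. A minimal sketch, using the officially published SeriesX configuration (52 CUs at 1.825GHz):

```python
def tflops(cus: int, ghz: float) -> float:
    """GCN/RDNA-style peak FP32 throughput: 64 shaders/CU, 2 FLOPs/clock (FMA)."""
    return cus * 64 * 2 * ghz / 1000

github_leak = tflops(36, 2.00)   # ~9.2 TFLOPs, the "github gospel"
psman_claim = tflops(36, 1.75)   # ~8.06 TFLOPs
ps5_final   = tflops(36, 2.23)   # ~10.28 TFLOPs at the official max clock
seriesx     = tflops(52, 1.825)  # ~12.15 TFLOPs, official specs

print(f"SeriesX over github leak: +{seriesx / github_leak - 1:.0%}")  # ~+32%
print(f"PS5 deficit vs SeriesX:   -{1 - ps5_final / seriesx:.0%}")    # ~-15%
print(f"Clock uplift over github: +{2.23 / 2.00 - 1:.1%}")            # +11.5%
```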
Personally, my take has always been that Sony would be seeking close-to-parity performance because they know how badly it went for the XB1 for not achieving it. I thought this would be done through a wider chip, and that Oberon was actually a 3-shader-engine part. Turns out Sony managed to achieve close-to-parity through humongously high clocks instead.
One user I can remember who claimed ~10.5 TFLOPs from start to finish was HeisenbergFX or something.
o'dium might have been misled by devkit specs. If the devkit has all CUs enabled, as we saw with the XBoneX's devkit, then at the same 2.23GHz it would be pushing 11.4 TFLOPs, which is the number he'd been given for a while. I really don't think o'dium is an attention-seeking troll: he's a publicly identified developer with over a decade of experience working on AAA projects. His GAF avatar is a picture of him next to his daughter; that's not the profile of a troll.
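The devkit number checks out under the same math, assuming the full Oberon die is 40 CUs with 36 enabled on retail units (a commonly reported figure, not something confirmed in this thread):

```python
def tflops(cus: int, ghz: float) -> float:
    """Peak FP32 throughput: 64 shaders/CU, 2 FLOPs/clock (FMA)."""
    return cus * 64 * 2 * ghz / 1000

# Hypothetical devkit with all 40 CUs enabled at the retail max clock:
print(round(tflops(40, 2.23), 1))  # 11.4
```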
OsirisBlack, VFXVeteran and Tommy Fisher were fake insiders in the end, yes. Tommy Fisher was a very lucky one BTW, since he guessed the SeriesX's GPU clockspeeds right down to the tens of megahertz.
There was also a French journalist who made a video over a week ago whose statements turned out to be spot on, and whose content was translated into a post at GAF.
I would have shared that here, had there been a thread actually dedicated to baseless rumors.