Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

We investigate all sorts of claims; that's what we do here: we investigate. Unless the person is misterxmedia, okay, fine, but we've investigated many claims that were far more absurd.
We evaluate the arguments and data points at hand, not the anonymous person's credibility. Once you go the credibility route, suddenly we're not B3D anymore; it just becomes fighting over credibility, where you only let through the claims that fit your agenda and deny credibility to those that don't.

The ones who denied it were Matt from ResetEra and Matt Hargrett.
And since they contradicted themselves, someone's credibility is at stake... That's a given!

But I ask... should we really investigate whether a 10.28 TFLOPs console can run a game that was planned for release on Xbox One and PS4 at more than 1080p 60 fps?
I saw Gran Turismo at 4K 60 fps with RT... Miles Morales and Ratchet and Clank at 4K 30 fps with RT. I saw the Unreal Engine 5 demo, and Horizon.
So why am I believing this person? Should I also believe (or investigate) the Orphan the Machine creator who was claiming that the PS5 was not capable of running its Ecco the Dolphin clone at 4K (and then deleted the tweet)?
 
Less correct than claims that the GPU was around 10.5 TFLOPs (with no reference to CU count), which the github inquisitors promptly ridiculed at the time.
Github - https://www.eurogamer.net/articles/...laystation-5-xbox-series-x-spec-leak-analysed
I'm not sure I understand the term raw performance numbers.
36 CUs and clock speed get you your raw performance numbers.
You quoted a guy who gave a description of 1.8 to 2.2GHz. By all means that sits as low as 9 to 10.2, whatever it is.
These leaks were legitimate for what was tested at the time.
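For reference, a minimal sketch of that raw-number math, assuming the standard RDNA figures of 64 FP32 ALUs per CU and 2 FLOPs per ALU per clock (FMA):

```python
def tflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 throughput: CUs x 64 ALUs x 2 FLOPs/clock (FMA) x clock."""
    return cus * 64 * 2 * clock_ghz / 1000.0  # GFLOPS -> TFLOPS

print(tflops(36, 1.8))   # ~8.29 TF, low end of the quoted clock range
print(tflops(36, 2.2))   # ~10.14 TF, high end
print(tflops(36, 2.23))  # ~10.28 TF, the announced PS5 figure
```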

This means that if a source had appeared in July 2012 claiming the Xbox One was going to get a CPU and GPU clock boost over what had already been announced, we couldn't have discussed it. It would have been completely against known data at the time.
There are rumors going around that Sony is looking at 16Gbps or 18Gbps GDDR6 to put in the PS5, as well as enabling the 4 redundancy CUs for PS5 mode (making them, erm, not redundant anymore). According to your terms, that discussion isn't allowed because it goes against known current data.
Fair enough. But you run into contradictory data points relatively quickly. Both of those moves would increase prices dramatically for the PS5. And then you need to contend with a last-minute memory swap, and with the idea of shipping 10M units while tossing any non-perfect chips. Not impossible, but certainly sitting on the heavily improbable side of things.

You're right that it shouldn't be outright dismissed; "democratized" may be the right word here.

I'm not sure this is a great idea, considering we don't know what the IPC gains are for the RDNA1->RDNA2 transition, not to mention the substantial difference in core clocks.
The core clocks aren't that far apart; the 5700XT Anniversary gets to 2.1 GHz,
and has 4 more CUs, for a TF rating of 10.2.
As for IPC gains, I strongly doubt they can't be accounted for.
Yes, an RDNA 2 version of the 5700XT would be ideal. But the 5700XT is the best proxy we have right now.
 
The ones who denied it were Matt from ResetEra and Matt Hargrett.
And since they contradicted themselves, someone's credibility is at stake... That's a given!

But I ask... should we really investigate whether a 10.28 TFLOPs console can run a game that was planned for release on Xbox One and PS4 at more than 1080p 60 fps?
I saw Gran Turismo at 4K 60 fps with RT... Miles Morales and Ratchet and Clank at 4K 30 fps with RT. I saw the Unreal Engine 5 demo, and Horizon.
So why am I believing this person? Should I also believe (or investigate) the Orphan the Machine creator who was claiming that the PS5 was not capable of running its Ecco the Dolphin clone at 4K (and then deleted the tweet)?
Yea that's reasonable.

Within reason, given what was shown:
GT 4K60 only had RT in replay cams, not during gameplay.
MM is the same: RT in cutscenes, but we never really saw gameplay.
R&C had massive compromises to its RT, many times reflecting the wrong things or not reflecting at all.
 
Github - https://www.eurogamer.net/articles/...laystation-5-xbox-series-x-spec-leak-analysed
I'm not sure I understand the term raw performance numbers.
36 CUs and clock speed get you your raw performance numbers.
Yes and that article points to the PS5 being a 9.2 TFLOPs console.
There were sources claiming around 10.5 TFLOPs, which again became a lot closer to what the raw numbers ended up being, but were quickly shot down at the time because "9.2 TFLOPs or more likely 8 TFLOPs" was the maximum permitted by the github inquisition.

Both of those moves would increase prices dramatically for PS5.
Enabling the full iGPU and adopting 16Gbps GDDR6? How would you know? Do you know what the yields for a PS5 SoC are with 36 and 40 CUs enabled?
Adopting 16Gbps memory has to do with power consumption and mass availability. I doubt it has anything to do with cost.

And then you need to contend with the memory swap and you need to contend with the idea of shipping 10M.
Memory swap? If they're swapping 14Gbps for 16Gbps GDDR6, I doubt they'd do it after having actual 14Gbps chips soldered on PS5 PCBs.
 
Yes and that article points to the PS5 being a 9.2 TFLOPs console.
There were sources claiming around 10.5 TFLOPs, which again became a lot closer to what the raw numbers ended up being, but were quickly shot down at the time because "9.2 TFLOPs or more likely 8 TFLOPs" was the maximum permitted by the github inquisition.
You're harping on a 10% miss on clockspeed here. It nailed everything else. The final number is unimportant if you were looking at this as paper grading.
CUs
Memory
Memory Bandwidth
Clockspeed missed by 10%

Given the timestamp of when this leaked, it is still certainly a snapshot in time of a work in progress.
Other rumours prior to github all revolved around:

a) 13TF
b) 20% more powerful than XSX
c) More powerful than Stadia
and all manner of claims that put PS5 above XSX.

Enabling the full iGPU and adopting 16Gbps GDDR6? How would you know? Do you know what the yields for a PS5 SoC are with 36 and 40 CUs enabled?
Adopting 16Gbps memory has to do with power consumption and mass availability. I doubt it has anything to do with cost.
It certainly won't be cheaper. CUs marked for redundancy are there to improve yield, and memory prices increase with clockspeed.

Memory swap? If they're swapping 14Gbps for 16Gbps GDDR6, I doubt they'd do it after having actual 14Gbps chips soldered on PS5 PCBs.
Then they're going to announce a sudden 576GB/s with under 3 months to go? Is there a reason to wait this long to do it?
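For context, that 576GB/s figure falls out of the per-pin speed times the bus width; a quick sketch, assuming the PS5's widely reported 256-bit GDDR6 bus:

```python
def bandwidth_gbs(pin_gbps: float, bus_bits: int = 256) -> float:
    """GDDR6 bandwidth: per-pin speed x bus width, in GB/s."""
    return pin_gbps * bus_bits / 8

print(bandwidth_gbs(14))  # 448.0 GB/s, the announced PS5 spec
print(bandwidth_gbs(16))  # 512.0 GB/s
print(bandwidth_gbs(18))  # 576.0 GB/s, the number in question
```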
 
It ended up being correct except for the clockspeed, which no one believed to begin with. 2.0GHz was already _too high_, so people were dismissing the Github leak based on that. It ended up being 2.23GHz. Certainly much more correct than other claims. With respect to that thread, his bounds were 1.8 to 2.2GHz. He lost by 30MHz. Seriously? He's 1.3% wrong on the bounds.

CUs were correct
Memory bandwidth
Memory capacity

I think it should be a policy; someone will need to decide what is out of bounds. If a rumour goes completely against known data, then it doesn't make a lot of sense to pursue, and if we should democratize that call, perhaps that makes sense.
Outlier commentary is harder for us to follow and discuss, because if we cannot get to a specific conclusion without having to accept too many unknown data points, it no longer makes sense.

Like you posted earlier, on top of the other things MS has access to that could make XSX cheaper, it leaves open the possibility that both consoles could be the same price. It's not definitive that the PS5 has to be cheaper. And we can use a 5700XT as a proxy for PS5 GPU performance, and even with some tolerances understand that it would probably struggle with some titles at 4K60, given how close the two are in spec. These are reasonable avenues to provide some validity to the claims. Does it matter that it can't do 4K? Unlikely; I doubt people will care to a certain degree. Get it close enough in range and people won't mind.

But if we can't do that, then I think, yeah, it's probably best we don't pursue it.
Parts were correct, but it's like saying that if I have dough, tomatoes and cheese, I have a pizza: it could end up as many things, and ultimately the story was ...

Yes and that article points to the PS5 being a 9.2 TFLOPs console.
There were sources claiming around 10.5 TFLOPs, which again became a lot closer to what the raw numbers ended up being, but were quickly shot down at the time because "9.2 TFLOPs or more likely 8 TFLOPs" was the maximum permitted by the github inquisition.


Enabling the full iGPU and adopting 16Gbps GDDR6? How would you know? Do you know what the yields for a PS5 SoC are with 36 and 40 CUs enabled?
Adopting 16Gbps memory has to do with power consumption and mass availability. I doubt it has anything to do with cost.


Memory swap? If they're swapping 14Gbps for 16Gbps GDDR6, I doubt they'd do it after having actual 14Gbps chips soldered on PS5 PCBs.
This.

You're harping on a 10% miss on clockspeed here. It nailed everything else. The final number is unimportant if you were looking at this as paper grading.
CUs
Memory
Memory Bandwidth
Clockspeed missed by 10%

Given the timestamp of when this leaked, it is still certainly a snapshot in time of a work in progress.
Other rumours prior to github all revolved around:

a) 13TF
b) 20% more powerful than XSX
c) More powerful than Stadia
and all manner of claims that put PS5 above XSX.


It certainly won't be cheaper. CUs marked for redundancy are there to improve yield, and memory prices increase with clockspeed.


Then they're going to announce a sudden 576GB/s with under 3 months to go? Is there a reason to wait this long to do it?
I think you’re totally missing the point.

At the time we had ‘insiders’ saying the consoles were ‘close’ and people were using GitHub data to ‘prove’ that they can't be “12v9 but more likely 12v8 due to insane speeds”. What we see today (from grub etc) is a hangover from that narrative, with people still FUDing.
 
I think you’re totally missing the point.

At the time we had ‘insiders’ saying the consoles were ‘close’ and people were using GitHub data to ‘prove’ that they can't be “12v9 but more likely 12v8 due to insane speeds”. What we see today (from grub etc) is a hangover from that narrative, with people still FUDing.
We totally had a majority of claims putting the PS5 around 11-13 TF.
When Github was released it was dialed back, and new numbers circulated post-Github around 10.7-11.7.

People made calls on what was reasonable at the time. If you look at the post-github PS5 claims:

11.7 TF would have required a 2530MHz clock rate to achieve;
10.7 would require 2320MHz.

Both of these were way out there, especially for _fixed_ clockspeeds; a quick sanity check of both follows below. No one anticipated a variable clock rate, and not a single leaker ever once brought variable frequencies into the equation.
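A sanity check of those two required clocks, using the same 36 CU raw-number math as earlier (the figures above are just rounded):

```python
def required_mhz(target_tf: float, cus: int = 36) -> float:
    """Fixed clock (MHz) needed to hit an FP32 TFLOPs target."""
    flops_per_cycle = cus * 64 * 2  # ALUs x 2 FLOPs per clock (FMA)
    return target_tf * 1e12 / flops_per_cycle / 1e6

print(round(required_mhz(11.7)))  # ~2539 MHz
print(round(required_mhz(10.7)))  # ~2322 MHz
```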
But since we're on that topic, everyone is very quickly moving to use the upper end of a variable clock rate to represent the actual performance of this chip.
As a forum we worked with the data points we understood from the past, and leveraged them to debate the future. If a leaker had indicated that the PS5 had variable or boost clock speeds up to some enormous number, then sure, I suppose people would just have debated whether that could be done on a console. People would not debate whether boost can achieve that; boost achieves incredibly high numbers.

The entire debate about github being wrong hinges on the fact that the PS5 moved to variable frequencies. Even during the Road to PS5 presentation, Cerny brought up an arbitrary 2.0GHz number that could not be achieved using fixed frequencies. His words, not mine. As a forum we didn't do anything wrong. What we did as a forum is let 10.2TF pass as its actual performance when really we're looking at the upper end of its peak performance. The actual game clock should sit below that, but we couldn't get that debate out either without getting shot down as concern trolling.
 
You're harping on a 10% miss on clockspeed here. It nailed everything else. The final number is unimportant if you were looking at this as paper grading.
The final number is unimportant?! It's only unimportant if you're handpicking data to make the final spec as close to the github gospel as possible.

As written above, the message being incessantly spread by the inquisitors was "this is a regression test and 2GHz is too high, so it's most probably staying at 8TF". This was repeated ad nauseam.

The people claiming 10.5TF simply said "github is true but it's outdated data". Which makes sense since it was probably using a discrete Navi 10 back in 2018. These same people were ridiculed in this forum, and we're talking about actual developers on neogaf.
 
The people claiming 10.5TF simply said "github is true but it's outdated data". Which makes sense since it was probably using a discrete Navi 10 back in 2018. These same people were ridiculed in this forum, and we're talking about actual developers on neogaf.
Osiris Black led everyone to believe it was going to be high 10s to mid 11s.
Did any leaker, _any_ leaker, indicate variable clock speeds? Because the whole discussion around this was based on fixed clocks. And 2230 is by all means already extremely high overclocking territory; it didn't suit the profile of a console.

How is it reasonable under any circumstance to use the highest number of the boost clock and indicate that is the performance it will operate at, at all times?
 
Osiris Black led everyone to believe it was going to be high 10s to mid 11s.
Did any leaker, _any_ leaker, indicate variable clock speeds? Because the whole discussion around this was based on fixed clocks. And 2230 is by all means already extremely high overclocking territory; it didn't suit the profile of a console.

How is it reasonable under any circumstance to use the highest number of the boost clock and indicate that is the performance it will operate at, at all times?
By the same token, sure, github was wrong, because there was no evidence of checking variable speeds or speeds up to 2.23 (or whatever the top speed is).

You’re also driving at the point that, with the top speed, ‘we don’t even know how often it’ll be there’; didn’t Cerny say most of the time?

Either way, you’re clearly missing what angst we had against github: it was being incorrectly used (and understood) as a tool to make the PS5 look further behind the XSX than it is, and it’s still being used today, with people insinuating ‘9.2 realistically’... maybe you blocked those posters, but they were there.
 
Either way, you’re clearly missing what angst we had against github: it was being incorrectly used (and understood) as a tool to make the PS5 look further behind the XSX than it is, and it’s still being used today, with people insinuating ‘9.2 realistically’... maybe you blocked those posters, but they were there.
Oh I get it now.

And I understand how this could aggravate people. This is also why, in general, I stopped posting about it, but since we got started, it's hard to stop.
I have, to the best of my ability, tried to use data points that we know of, and have many times been shot down because of it.
Here:
https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-the-mark-cerny-tech-deep-dive
There's likely more to discover about how boost will influence game design. Several developers speaking to Digital Foundry have stated that their current PS5 work sees them throttling back the CPU in order to ensure a sustained 2.23GHz clock on the graphics core
DF had to break that news, no leaker did.

People didn't want to believe it.

Then I started a huge discussion about the dynamic power equation, in which I was told many times over that it's wrong; that it most certainly wouldn't require that much more power or heat above the 2000 mark.
But everything in the DF article and what Cerny says supports that it's correct. I said that for a given load we can show that power is roughly cubic in frequency, and I got all sorts of people telling me I'm wrong, or whatever.
but here:
Dropping frequency by 10 per cent reduces power consumption by around 27 per cent. "In general, a 10 per cent power reduction is just a few per cent reduction in frequency," Cerny emphasizes.
If I do the math for you it's pretty straightforward:
2230 - 10% = 2007
2007 / 2230 = 0.9
0.9^3 = 0.729
1.0 - 0.729 = 0.271 ≈ 27%

And so with that proven, a hypothetical PS5 running at 1825MHz would be drawing roughly 55% of the power required at 2230MHz, per the same cubic relation. I hope that provides some indication of how much cooler the XSX chip could be running.
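A minimal sketch of that cubic model; the cubic exponent is the assumption here (from P ∝ C·V²·f with voltage scaling roughly linearly with frequency), but Cerny's quoted 10%/27% numbers are consistent with it:

```python
def relative_power(clock_mhz: float, ref_mhz: float = 2230.0) -> float:
    """Power draw relative to the reference clock, assuming P ~ f^3."""
    return (clock_mhz / ref_mhz) ** 3

print(relative_power(2007))  # ~0.729 -> a 10% clock drop saves ~27% power
print(relative_power(1825))  # ~0.548 -> an XSX-like clock draws ~55% of the power

# Raw throughput at the dropped clock: 36 CUs x 64 ALUs x 2 FLOPs x 2007 MHz
print(36 * 64 * 2 * 2007e6 / 1e12)  # ~9.25 TF, the "back to 9.2TF" figure
```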

Nothing I wrote was out of bounds, we worked as a forum with the knowledge that was available to validate claims that were out there.
That 10% frequency drop puts it back at 9.2TF, and that means 27% more power back for other tasks. That's all he said. And somewhere in there, eventually, you're going to need more CPU, either through taxing AI or through high frame rates. You're gonna need that CPU. We can have that discussion when we get there.

If I had to make a claim: because of the way the PS5 is designed, wait for DF to showcase results. We won't know the effect of this until we see release code.
 
didn’t Cerny say most of the time?

In what kind of situations? In cases where things get hammered, not most of the time?

It could mean a thousand things; with only the PR to go on, people will speculate.

The PS5 GPU is basically a 5700 XT: RDNA1 with some RDNA2 features.
The XSX got a custom GPU built on the more advanced AMD RDNA2 architecture, with more advanced ray-tracing tech. ;)

Yes, most likely because Sony started development earlier, around 2015.


People didn't want to believe it

Some still don’t. Btw, what does it matter, 9.x TF or 10TF? It’s still below Stadia (and Xbox, of course). That was the comparison people were most after.
That 10% frequency drop puts it back at 9.2TF, and that means 27% more power back for other tasks. That's all he said. And somewhere in there, eventually, you're going to need more CPU, either through taxing AI or through high frame rates. You're gonna need that CPU. We can have that discussion when we get there.

Yes, that’s how it is. But we don’t know how often it will be at 9.2TF. At least we know for sure it can drop below that 10TF mark, just not exactly how often, yet.
 
We totally had a majority of claims putting the PS5 around 11-13 TF.

Actually, I am pretty sure the 13TF numbers were only used after the XBSX was revealed, around February or March, just before the Road to PS5 came out. Before that, all the numbers were pointing to 8-10TF, as I remember it.
 
Actually, I am pretty sure the 13TF numbers were only used after the XBSX was revealed, around February or March, just before the Road to PS5 came out. Before that, all the numbers were pointing to 8-10TF.
Prior to that, everyone had followed Jason S saying it would be higher than Stadia and that we should trust him. They immediately put the target over 10.7.

People used insider claims that the PS5 was performing 20% better than the XSX to arrive at 13.

That was largely the debate for a while. It’s not important; I just feel that if people are going to claim the narrative around the PS5 was meant to undermine it, then we should not ignore that we did have discussions on everything. For the majority of the discussion prior to the XSX reveal, the PS5 was always claimed by insiders to be stronger.
 
Prior to that, everyone had followed Jason S saying it would be higher than Stadia and that we should trust him. They immediately put the target over 10.7.

People used insider claims that the PS5 was performing 20% better than the XSX to arrive at 13.

Who the fuck is Jason S? I was following the news (not on here, of course) and I remember the Github leak; as ToTTenTranz said, the leaks were all around 8-10TF. It was 2GHz at 36 CUs for a long time, and they had painted the XBSX at 1.85 or something similar with 52 CUs. The initial leaks as I remember them were from RoGamer and someone else on twitter, and those were all 2.0GHz. Actually, I see the rumour below from TweakTown says a dev kit was rumoured to be 13TF. Maybe that's where that rumour came from. I'm sure you can find people saying 12-13TF around that time, but I don't remember anyone taking it seriously. It was only just before the Road to PS5 in March that the 13TF rumour really got any steam for some reason.

Dec2019 - 9.2TF (2GHz/36CUs) with Gen2/1/0 modes
https://wccftech.com/ps5-gpu-6-teraflops-xbox-series-x-12-teraflops-leak/
Jan2020 - 9.2TF (2GHz/36CUs) with Gen2/1/0 modes
https://bgr.com/2020/01/02/ps5-spec...features-you-want-most-from-next-gen-console/

Mar2020 - 13TF rumour
https://www.neogaf.com/threads/plac...ps5-have-9-teraflops-or-13-teraflops.1531704/
May2020 - Dev Kit at 13TF
https://www.tweaktown.com/news/65770/ps5-dev-kit-rumor-ultra-fast-ram-navi-gpu-13-tflops/index.html
 