Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

AMD apparently mentioned "multi-GHz" GPU clocks at their Financial Analyst Day.

Basically, a 2.0 GHz PS5 is a go.
People are actually thinking this invalidates the Gospel. Oh, you poor souls: the Oberon native test is a regression test against the Ariel iGPU spec (GFX10, i.e. a Navi 10 derivative); it was never supposed to contain RT/VRS.

The 2.0 GHz clock was the only thing that didn't make complete sense, yet it does now...

Also, not only will we see 2.0+ GHz chips, IPC has been improved by 15% as well. I think we are probably looking at 9.2 TF of RDNA2 beating a 2080.
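As a rough sketch of where those figures come from (the 36 CU / 2.0 GHz pairing is the GitHub figure discussed in this thread, the ×64 ALUs ×2 ops per clock is the standard peak-FP32 formula, and the 15% IPC uplift is the claimed number rather than a measurement):

```python
# Peak FP32 throughput: CUs * 64 ALUs per CU * 2 ops per clock (FMA) * clock in GHz -> TFLOPS
def peak_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

github_oberon = peak_tflops(36, 2.0)   # ~9.2 TF for 36 CUs at 2.0 GHz
ipc_adjusted = github_oberon * 1.15    # ~10.6 TF of "RDNA1-equivalent" throughput if the +15% IPC claim holds
print(f"{github_oberon:.2f} TF raw, ~{ipc_adjusted:.1f} TF RDNA1-equivalent")
```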
 
To be fair, if you go to Era or GAF, this AMD FA Day makes them believe even harder that the PS5 will be 12.6/13.x/14-something TF. I guess this AMD conference will create even more chaos until Sony announces the PS5's specs :LOL:.
 
So we went from the PS5 being bespoke in nearly everything to the PS5 being RDNA2 with maybe some customization. So we know the GPUs are likely to be directly comparable based on clock speed and compute units.
 
The high clocks for Sony make the proposed clocks for Microsoft seem very slow. Have they gone for a safe bet here, eating the die area cost to ensure the target can be met rather than pushing the envelope for performance?

They went big on memory capacity but very safe on the implementation; have they repeated this on GPU performance?
 
Might be interesting to see how 2 SEs @ 2.0 GHz+ vs 3 SEs @ 1675 MHz performs -- faster front ends vs more front ends.
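A purely illustrative sketch of that trade-off, assuming one primitive per shader engine per clock (an assumption for the sake of the comparison, not a confirmed spec):

```python
# Rough geometry front-end throughput: shader engines * primitives per SE per clock * clock in GHz
def prim_rate_gprims(shader_engines, clock_ghz, prims_per_se_per_clock=1):
    return shader_engines * prims_per_se_per_clock * clock_ghz

print(prim_rate_gprims(2, 2.0))    # 2 SEs at 2.0 GHz   -> 4.0 Gprims/s
print(prim_rate_gprims(3, 1.675))  # 3 SEs at 1.675 GHz -> ~5.0 Gprims/s
```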
 
I think MS went with 56 CUs and a 320-bit bus to make sure they have the headroom to win any power battle that might ensue. If they are sure Sony is not getting to that number, they might as well clock it low enough for better cooling and yields.

Sony probably aimed to make the most of the die space, hence the 256-bit bus with a smaller but higher-clocked GPU.

As BRiT said, even if 9.2 TF vs 12 TF is true, there will be advantages to the higher-clocked chip, plus it will have to be considerably cheaper to produce. Again, it all lines up not only with GitHub (where people are actively ignoring what Oberon was being tested against), but also with the AquariusZi info and the Flute benchmark.

In this case, provided all else stays in the same ballpark (RAM, BW, VRS), I expect to see minimal difference between the two.
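For the bandwidth side of that bus trade-off, a quick sketch assuming plain 14 Gbps GDDR6 on both (the module speed is an assumption; neither console's memory clock was public at this point):

```python
# GDDR6 peak bandwidth: bus width in bits / 8 * per-pin data rate in Gbps -> GB/s
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits / 8 * gbps_per_pin

print(bandwidth_gbs(320, 14.0))  # 560 GB/s on a 320-bit bus
print(bandwidth_gbs(256, 14.0))  # 448 GB/s on a 256-bit bus
```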

The GitHub SoC of the PS5 is based on RDNA1 (Navi 10). It's unreasonable to assume the PS5 is RDNA1
Except it's not. There is no single PS5 SoC on GitHub; there are two: Ariel and Oberon.

For Ariel, we know it is a GFX10 chip, meaning it's a Navi 10 derivative.

For Oberon, we don't. The Oberon native regression test is run against the Ariel iGPU spec sheet. A regression test is not a bandwidth test; it's a test to check the functionality of a newer version of HW/SW against an older one.
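In other words, a regression run just replays an existing test list against the newer part and flags mismatches, so it can only exercise what the old list already covers. A minimal, purely hypothetical sketch (the test names and results are stand-ins, not the actual harness):

```python
# Hypothetical illustration: replay an older part's test list on newer silicon and report mismatches.
# A list built for a Navi 10-class iGPU contains no RT/VRS cases, so none can show up in the results.
def run_regression(test_list, run_on_new_hw, expected):
    return [(t, expected[t], run_on_new_hw(t)) for t in test_list if run_on_new_hw(t) != expected[t]]

ariel_spec = {"rasterize_triangle": "pass", "texture_filtering": "pass"}  # stand-in Ariel iGPU test list
oberon = lambda test: "pass"                                              # stand-in for running on the new chip
print(run_regression(ariel_spec, oberon, ariel_spec))                     # [] -> no regressions found
```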
 
AMD says RDNA2 has 1.5x the performance per watt of RDNA1.

Assuming the PS5 is also RDNA2, that 11.6 TF number seems reasonable (44 CUs @ 2.06 GHz). An 11.6 TF RDNA2 GPU may consume power close to a 5700 XT, so Sony would need the "unusually expensive" cooling solution mentioned by Bloomberg.


The GitHub SoC of the PS5 is based on RDNA1 (Navi 10). It's unreasonable to assume the PS5 is RDNA1 while the XSX gets RDNA2 at 1.5x the performance per watt.
 
?
what?
I don't think GitHub ever mentioned any sort of architecture. If there was a mention of it, people would have been all over it months ago.

The 2.0 GHz clock in the GitHub leak actually makes even more sense with RDNA2, given its performance-per-watt improvement. The only reason people dismissed the 2.0 GHz leak was because it's impossible to do with RDNA1 in a console form factor.
 
Then it comes down to a 9.2 TF vs 12 TF difference... about what we have now with the Pro vs the One X. Exclusives could show a difference perhaps, but third-party games just a resolution/fps difference.
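For reference, the ratios (4.2 TF for the Pro and 6.0 TF for the One X are the official figures; 9.2 vs 12 is the rumored pairing):

```python
# Relative compute advantage: current-gen mid-gen refreshes vs the rumored next-gen pairing
one_x_vs_pro = 6.0 / 4.2       # ~1.43x
rumored_next_gen = 12.0 / 9.2  # ~1.30x, slightly narrower than One X vs Pro
print(f"One X vs Pro: {one_x_vs_pro:.2f}x, rumored next gen: {rumored_next_gen:.2f}x")
```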
 
?
what?
I don't think GitHub ever mentioned any sort of architecture. If there was a mention of it, people would have been all over it months ago.

The 2.0 GHz clock in the GitHub leak actually makes even more sense with RDNA2, given its performance-per-watt improvement. The only reason people dismissed the 2.0 GHz leak was because it's impossible to do with RDNA1 in a console form factor.
Exactly. But now, one last stand... RT/VRS. Yet there is a perfectly good explanation for why it is not there: the Oberon native regression test ran the Ariel iGPU test list. And since Ariel is Navi 10 Lite (GFX10), the test should never have included RDNA2 features; Navi 10 simply doesn't have RT/VRS.

Here: [attached screenshot: the Oberon regression test referencing the Ariel iGPU test list]

In the test, they took the Ariel iGPU test list and ran the Oberon regression test against it. Perfectly fine, but don't expect RT/VRS in the results.
 
I don't think the difference will be "minimal" if it's 9.2 vs 12; in the end it's a 2.8 TF advantage. But without seeing third-party game comparisons, it's all just talk, so let's just wait... yeah, a long wait I guess :LOL:.
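If that gap were spent purely on resolution, a simplistic scaling estimate (assuming pixel cost scales linearly with FLOPS and ignoring bandwidth, CPU and everything else) looks like this:

```python
# Pixel count scales ~linearly with compute under this simplification, so each axis scales with the square root.
compute_ratio = 12.0 / 9.2         # ~1.30x
axis_scale = compute_ratio ** 0.5  # ~1.14x per axis
print(round(2160 / axis_scale))    # ~1890 vertical lines if the 12 TF machine targets native 2160p
```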
 
Haha, some of you are so hell-bent on convincing yourselves that the PS5 is 9.2 TF it's almost scary. Readying for a celebration, perhaps? I'll reserve judgement till Sony officially announces the specs.
Here's also a possibility: if the PS5 is RDNA2 all along and has slightly more flops, according to Tom Warren (or was it Jason Moriarity?) and of course a few others, then hell, it's well over 12 TF by that logic.
 
Or, you know, the GitHub leak is still 100% on point (for MI100, Renoir, Arden and, after this, for the 2.0 GHz crew), and the Oberon dev kits sent to developers last summer with 9.2 TF RDNA2-based chips outperformed anything else available? A 15% IPC improvement over RDNA1 is no small thing, btw; that thing outperforms the Radeon VII and probably matches a 2080.
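The arithmetic behind that last claim, with the caveat that paper FLOPS don't translate directly across architectures (the Radeon VII and RTX 2080 figures are their standard peak FP32 numbers; the 15% IPC uplift is the rumored figure taken at face value):

```python
# "RDNA1-equivalent" throughput of a 9.2 TF RDNA2 part if the +15% IPC claim holds.
# For context, the ~9.75 TF RX 5700 XT (RDNA1) already trades blows with the 13.4 TF GCN-based
# Radeon VII in games, which is why raw TF comparisons across architectures are so misleading.
rumored_ps5_tf = 9.2
rdna1_equivalent = rumored_ps5_tf * 1.15  # ~10.6 TF of RDNA1-class throughput
radeon_vii_tf = 13.4                      # Vega 20 peak FP32
rtx_2080_tf = 10.1                        # Turing peak FP32 at boost clock
print(f"~{rdna1_equivalent:.1f} TF RDNA1-equivalent vs {radeon_vii_tf} TF GCN / {rtx_2080_tf} TF Turing")
```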
 