General Next Generation Rumors and Discussions [Post GDC 2020]


8.5 seconds loading, not switching.
Ok, well, presumably Spider-Man was reworked to take advantage of that speed, since that was the purpose of the demo... it was literally a tech demo. Gears 5 hasn't been optimized for showing off the loading... yet... it was ported in 2 weeks, and they basically just maxed it out with PC ultra settings and a couple of settings past that.
 
The SSD will make an actual, obvious difference apart from obviously faster loading.

It will make a difference on both consoles. SSDs, SATA perhaps, should have been in the 2013 boxes already :p

Btw, where do diminishing returns occur when clocking high? I have noticed that, with my i7 920 systems, there is practically zero improvement in games beyond 3.6GHz.

About the loading, maybe they should compare RDR2 or something, instead of two different games.
 
Ok, well, presumably Spider-Man was reworked to take advantage of that speed, since that was the purpose of the demo... it was literally a tech demo. Gears 5 hasn't been optimized for showing off the loading... yet... it was ported in 2 weeks, and they basically just maxed it out with PC ultra settings and a couple of settings past that.

I don't think MSFT did anything special in the SSD controller to remove the bottlenecks.
It also seems to use an off-the-shelf controller with custom firmware.
 
I wonder if we will start to see the use of "The world's fastest console" for the PS5 because of its GPU clocks and SSD speeds. I am a bit surprised to see Santa Monica Studios use it in a job listing (it is from 4 weeks back). Food for warriors :rolleyes:?

https://www.linkedin.com/jobs/view/1784840264/

"You will also be one of the leaders of an elite team that is super excited to launch the upcoming world’s fastest console(PS5) in 2020"
 
Well, can't deny that. Whenever I think of the PS5, somehow the SR-71 comes to mind. It's higher clocked than any GPU out there, and that won't change even with RDNA2 and Ampere, most likely.
 
I wonder if we will start to see the use of "The world's fastest console" for the PS5 because of its GPU clocks and SSD speeds. I am a bit surprised to see Santa Monica Studios use it in a job listing (it is from 4 weeks back). Food for warriors :rolleyes:?

https://www.linkedin.com/jobs/view/1784840264/

"You will also be one of the leaders of an elite team that is super excited to launch the upcoming world’s fastest console(PS5) in 2020"

It's just typical company posturing to try and get people excited about the job; everyone does it.
 
Yeah, so games that are hitting the CPU hard to reach 60fps on Anaconda will likely suffer more on the GPU side if devs choose to keep 3.5GHz to keep up with Anaconda's 3.6GHz minimum. Perhaps a tad bigger delta on the GPU then, if devs are choosing to use 3.8GHz for the higher single-threaded performance (~10% difference), i.e. <=9.2TF more often vs 12TF.

Then again, it could mean that the CPU clock on Lockhart may be permitted to be lower to match PS5 - so that ends up becoming the target clock by default with Anaconda serving as a slight boost.

GPU clock of Lockhart will have to be whatever gives them 4TF at 20 CUs (for example, 3 WGPs per shader array × 4 arrays = 24 CUs total, with 20 active) - 1.5625GHz? It's too bad, but they should really just match the clock speed to Anaconda to ensure the front ends are similar in spec. Not sure what bandwidths we have at play, but I think the speculation was a 192-bit bus, which would be 288-336GB/s depending on 12-14Gbps. That said, they'll want the lowest BOM possible on there to hit a lower price point, as options are thin.
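For reference, a quick sketch of the usual FP32 throughput math (CUs × 64 lanes × 2 FLOPs per clock) and the GDDR6 bandwidth math those figures fall out of; the 20-active-CU layout, the 192-bit bus and the 12-14Gbps speeds are just the speculation above, not confirmed specs:

```python
# Rough napkin math for the speculated Lockhart configuration above.
# The CU count, bus width, and memory speeds are the thread's speculation, not confirmed specs.

def tflops(cus: int, clock_ghz: float) -> float:
    """FP32 TFLOPs: CUs * 64 shader lanes * 2 FLOPs (FMA) per clock."""
    return cus * 64 * 2 * clock_ghz / 1000.0

def clock_for_tflops(cus: int, target_tf: float) -> float:
    """Clock (GHz) needed to hit a given FP32 TFLOP target."""
    return target_tf * 1000.0 / (cus * 64 * 2)

def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """GDDR6 bandwidth: bus width (bits) * per-pin rate (Gbps) / 8 bits per byte."""
    return bus_bits * gbps_per_pin / 8.0

print(clock_for_tflops(20, 4.0))   # 1.5625 GHz for 4 TF at 20 active CUs
print(tflops(20, 1.825))           # ~4.67 TF if the clock matched Anaconda's 1.825 GHz
print(bandwidth_gbs(192, 12.0))    # 288.0 GB/s at 12 Gbps
print(bandwidth_gbs(192, 14.0))    # 336.0 GB/s at 14 Gbps
```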
 
It sounds like a problem. So the Series X has constant high performance for the GPU, while the PS5 will sometimes go even below 10TF, and who knows where the floor is. The Series X also has a constantly higher-clocked CPU. Again, the PS5 will have to compromise. So basically, a game that uses both the CPU and GPU in "equal" measure on a complex AAA title will most likely perform substantially below the competition? We need to know what the minimum clocks on the CPU and GPU of the PS5 are.
 
We need to know what the minimum clocks on the CPU and GPU of the PS5 are.

It's probably not something they want to share right now. When publicising a product, you advertise the highest peak performance possible. If they could sustain 10TF, they probably would.
 
It sounds like a problem. So the Series X has constant high performance for the GPU, while the PS5 will sometimes go even below 10TF, and who knows where the floor is. The Series X also has a constantly higher-clocked CPU. Again, the PS5 will have to compromise. So basically, a game that uses both the CPU and GPU in "equal" measure on a complex AAA title will most likely perform substantially below the competition? We need to know what the minimum clocks on the CPU and GPU of the PS5 are.
Even dropping to 2100MHz (which is a crazy clock, btw) will still result in a ~9.6TF chip. It's clear the PS5 SOC is pushed to the brink, and 3Dilettante had talked about the SOC having issues with boosting clocks for both the CPU and GPU even before yesterday's conference.

I assume most devs will try to run the GPU at the highest possible clocks, but then again, if XSX runs its CPU at 3.66GHz all the time, they may want to be at 3.5GHz for the CPU and thus drop the GPU to somewhat lower clocks, as it's easier to scale graphics than CPU ops.

Nothing you will be able to feel, IMO, but the entire thing was probably done for PR reasons.
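To put rough numbers on that (36 active CUs is from the Road to PS5 talk; the lower clocks are hypothetical operating points, not anything Sony has disclosed):

```python
# PS5 GPU: 36 active CUs per the Road to PS5 talk; the lower clocks are hypothetical.

def tflops(cus: int, clock_ghz: float) -> float:
    """FP32 TFLOPs: CUs * 64 shader lanes * 2 FLOPs (FMA) per clock."""
    return cus * 64 * 2 * clock_ghz / 1000.0

PS5_CUS = 36

for clock in (2.23, 2.10, 2.00):
    print(f"{clock:.2f} GHz -> {tflops(PS5_CUS, clock):.2f} TF")
# 2.23 GHz -> 10.28 TF (the advertised peak)
# 2.10 GHz ->  9.68 TF (the ~9.6TF ballpark above)
# 2.00 GHz ->  9.22 TF (the testlab-leak clock mentioned further down the thread)

# Clock at which the chip dips under the 10 TF headline figure:
print(10.0 * 1000 / (PS5_CUS * 64 * 2))   # ~2.17 GHz, i.e. only ~2.7% below peak
```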
 
How do power consumption tests look for Zen 2 at various clocks?

----

MS is going to try and push 60-120fps titles as much as possible, so it's possible we may never get to maximum GPU clock on PS5 unless the same title were pushing for 120fps on Anaconda.
 
How do power consumption tests look for Zen 2 at various clocks?
[Image: Zen 2 power consumption at various clock speeds]


This is obviously the full 32MB L3 Zen 2 desktop part.
 
Somehow I doubt a ~10W saving on the CPU for a 10-20% clock change is going to amount to much GPU power headroom... *shrug*

I'll maybe suggest that 3.5GHz and 2GHz are the "normal" clocks, but maybe that's baseless. :p
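As a rough sanity check on that hunch: dynamic power scales roughly with frequency times voltage squared, and over a small CPU downclock the voltage can only come down a little, so the recoverable watts stay modest. A minimal sketch, with the baseline wattage and voltage ratios being illustrative guesses rather than measured figures:

```python
# Very rough dynamic-power scaling sketch: P_dyn ≈ C * V^2 * f.
# The baseline wattage and voltage ratios are illustrative guesses, not measurements.

def scaled_power(p_base_w: float, f_ratio: float, v_ratio: float) -> float:
    """Scale a baseline dynamic power figure by frequency and voltage ratios."""
    return p_base_w * f_ratio * v_ratio ** 2

CPU_BASE_W = 40.0  # assumed dynamic power of the 8-core CPU cluster at 3.5 GHz (a guess)

# 10% and 20% downclocks, assuming voltage can only come down a few percent with each:
print(CPU_BASE_W - scaled_power(CPU_BASE_W, 0.90, 0.97))  # ~6 W recovered at -10% clock
print(CPU_BASE_W - scaled_power(CPU_BASE_W, 0.80, 0.94))  # ~12 W recovered at -20% clock
```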
 
Even dropping to 2100MHz (which is a crazy clock, btw) will still result in a ~9.6TF chip. It's clear the PS5 SOC is pushed to the brink, and 3Dilettante had talked about the SOC having issues with boosting clocks for both the CPU and GPU even before yesterday's conference.

I assume most devs will try to run the GPU at the highest possible clocks, but then again, if XSX runs its CPU at 3.66GHz all the time, they may want to be at 3.5GHz for the CPU and thus drop the GPU to somewhat lower clocks, as it's easier to scale graphics than CPU ops.

Nothing you will be able to feel, IMO, but the entire thing was probably done for PR reasons.

That 2000MHz clock was probably in the comfort zone (as in, max utilisation of the whole system), as per the AMD testlab leaks. The upclocks did occur sometime after the summer, I think. Could the boost clock be only for BC modes, though? Say, HZD at 60fps is going to be CPU heavy.
Reading Alex's post, it seems that 60fps games could pose problems; 1st party will most likely maintain a solid 30.
 
The CPU is going to get the short end of the stick every time, but I don't have a problem with that; even at 3GHz it's still a major improvement over current gen.
 
That 2000MHz clock was probably in the comfort zone (as in, max utilisation of the whole system), as per the AMD testlab leaks.

That would mean Cerny was lying, or at least being disingenuous, when he said it maintains those clocks most of the time, and I don't think he's the type of person to do that.
 
The CPU is going to get the short end of the stick every time, but I don't have a problem with that; even at 3GHz it's still a major improvement over current gen.

Very true; even at 2GHz it would have been a major improvement. The 2013 consoles had Q6600-level CPUs.
 
The CPU is going to get the short end of the stick every time, but I don't have a problem with that; even at 3GHz it's still a major improvement over current gen.
Actually, it probably won't. We don't have the power curve for RDNA2, but I suspect (and even Cerny alluded to it) that 2.23GHz is above the sweet spot, and a small reduction in frequency will bring disproportionately large TDP savings.

On the other hand, dropping the CPU from 3.5 to 3.2GHz will save you a few watts, but nowhere near enough to get any palpable headroom on the GPU side, yet it will mean you get ~10% less performance vs the XSX CPU for very little energy gain (rough sketch at the end of this post).
That would mean Cerny was lying, or at least being disingenuous, when he said it maintains those clocks most of the time, and I don't think he's the type of person to do that.
I also don't think he was lying, but I would be wary of "close or at", because if devs decide on 3.5GHz for the CPU (as XSX is at 3.66GHz all the time), then I am sure the GPU will sit slightly under its max clock speed. Even a 2-5% reduction in frequency, which won't be noticeable to players or devs, will bring that console below the PR headline of 10TF.

This way it's a win-win for them.
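To illustrate the asymmetry being argued here, a minimal sketch using the same P ≈ C·V²·f approximation: past the sweet spot the GPU can shed disproportionately more voltage (and therefore watts) per percent of clock than the CPU can, while giving up only a sliver of the 10.28TF peak. Every wattage and voltage figure below is an assumption for illustration, not a measured or disclosed value:

```python
# Illustrative look at where the watts come from under a shared power budget.
# P_dyn ≈ C * V^2 * f; every absolute number below is an assumption, not a spec.

def scaled_power(p_base_w: float, f_ratio: float, v_ratio: float) -> float:
    """Scale a baseline dynamic power figure by frequency and voltage ratios."""
    return p_base_w * f_ratio * v_ratio ** 2

GPU_BASE_W = 160.0  # assumed GPU dynamic power at 2.23 GHz (a guess)
CPU_BASE_W = 40.0   # assumed CPU dynamic power at 3.5 GHz (a guess)

# GPU: past the sweet spot, a ~3% clock cut might allow ~6% less voltage (assumed):
gpu_saved = GPU_BASE_W - scaled_power(GPU_BASE_W, 0.97, 0.94)
# CPU: a 3.5 -> 3.2 GHz cut (~9%) with only a small voltage reduction (assumed):
cpu_saved = CPU_BASE_W - scaled_power(CPU_BASE_W, 3.2 / 3.5, 0.97)

print(f"GPU -3% clock:   ~{gpu_saved:.0f} W freed, peak TF 10.28 -> {10.28 * 0.97:.2f}")
print(f"CPU 3.5->3.2GHz: ~{cpu_saved:.0f} W freed, for ~9% lower CPU clock")
```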
 