Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Ah, that makes a lot more sense! I did wonder, but the way it was phrased (two pools with separate BW, like PS3's split RAM) caught me off guard. Should pay more attention.
I wonder if one scenario for memory access for a given clock cycle could be something like:

4x2x16-bit + 6x1x16-bit for the GPU or the CPU, respectively.
6x1x16-bit for the CPU or the GPU, respectively (mapped to the higher addresses, 1-2GB, of the 2GB chips).

i.e. a 224-bit access + a 96-bit access in a given cycle, if that makes sense, for simultaneous CPU/GPU traffic with no contention? Remember that a GDDR6 DRAM presents its 32-bit interface as twin independent 16-bit channels, so I'm saying that on the 6x2GB DRAMs, half of the channels serve either the GPU or the CPU, with the 1GB DRAMs being given over to either the CPU or the GPU.

224/320 = 14/20 of 560GB/s = 392GB/s
96/320 = 6/20 of 560GB/s = 168GB/s

That's assuming the CPU hits all 6 of those DRAMs simultaneously, although I guess it depends on whether the OS is spread across 3 DRAMs (1GB + 1GB + 0.5GB) or across all 6 DRAMs to maximize CPU/OS bandwidth for some ducky reason.

I don't know if any of that really matters.

Code:
  [1GB]       [1GB]       [1GB]       [1GB]
    ||          ||          ||          ||
[x16][x16]  [x16][x16]  [x16][x16]  [x16][x16]

[1+1 GB]    [1+1 GB]    [1+1 GB]    [1+1 GB]    [1+1 GB]    [1+1 GB]
    ||          ||          ||          ||          ||          ||
[x16][x16]  [x16][x16]  [x16][x16]  [x16][x16]  [x16][x16]  [x16][x16]
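
Sanity-checking the arithmetic (a rough Python sketch, assuming 14Gbps GDDR6 across the full 320-bit bus, i.e. 560GB/s aggregate):

Code:
# Rough check of the split above (assumes 14 Gbps GDDR6 on a 320-bit bus).
GBPS_PER_PIN = 14                         # Gb/s per data pin
BUS_WIDTH = 320                           # total bus width in bits

total_bw = BUS_WIDTH * GBPS_PER_PIN / 8   # 560.0 GB/s aggregate

gpu_bits = 4 * 2 * 16 + 6 * 1 * 16        # 224-bit slice (all 1GB chips + half the 2GB channels)
cpu_bits = 6 * 1 * 16                     # 96-bit slice (other half of the 2GB channels)

print(total_bw * gpu_bits / BUS_WIDTH)    # 392.0 GB/s
print(total_bw * cpu_bits / BUS_WIDTH)    # 168.0 GB/s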
 
When all is said and done, are we probably looking at a relative performance difference like the 2070 Super (9 TF) vs the 2080 Super (11.1 TF)?
https://www.rockpapershotgun.com/2019/07/23/nvidia-rtx-2070-super-vs-2080-super/
[Image: RTX 2070 Super vs 2080 Super benchmarks, 4K Ultra]

[Image: RTX 2070 Super vs 2080 Super benchmarks, 4K High]

It's about a 5-7 fps difference on average. Notice the 2080 Super has 45MHz faster base and boost clocks and a 48GB/s bandwidth advantage over the 2070 Super. Now compared to the XSX, the PS5 has a 405MHz GPU clock advantage and roughly the same bandwidth disadvantage if you average it out, also assuming it performs at its peak most of the time like Cerny said. I'd say this is roughly the difference we're getting, give or take an fps or two?
The PS5's SSD is a dark horse: it might allow more high-res assets to be streamed, or simply more of them per level, than on the XSX. That depends on developer optimization I guess; we'll have to wait and see.
And at 4K the slightly faster CPU on the XSX might be a moot point, since games are more GPU-dependent at that resolution I reckon. Am I getting somewhere?
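
For what it's worth, a quick ratio check (rough Python, using the commonly quoted spec-sheet numbers; the XSX bandwidth figure is for its fast 10GB pool):

Code:
# Compute and bandwidth ratios for the two comparisons (spec-sheet numbers).
tf_2080s, tf_2070s = 11.1, 9.0
bw_2080s, bw_2070s = 496, 448              # GB/s: 15.5 vs 14 Gbps on 256-bit

tf_xsx, tf_ps5 = 12.155, 10.28
bw_xsx, bw_ps5 = 560, 448                  # GB/s: XSX fast pool vs PS5

print(tf_2080s / tf_2070s)                 # ~1.23 -> ~23% compute gap, ~5-7 fps at 4K
print(tf_xsx / tf_ps5)                     # ~1.18 -> a slightly smaller compute gap
print(bw_2080s / bw_2070s)                 # ~1.11 bandwidth gap
print(bw_xsx / bw_ps5)                     # ~1.25 bandwidth gap (fast pool only)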
 
When all is said and done, are we probably looking at a relative performance difference like the 2070 Super (9 TF) vs the 2080 Super (11.1 TF)?
https://www.rockpapershotgun.com/2019/07/23/nvidia-rtx-2070-super-vs-2080-super/
[Image: RTX 2070 Super vs 2080 Super benchmarks, 4K Ultra]

[Image: RTX 2070 Super vs 2080 Super benchmarks, 4K High]

It's about a 5-7 fps difference on average. Notice the 2080 Super has 45MHz faster base and boost clocks and a 48GB/s bandwidth advantage over the 2070 Super. Now compared to the XSX, the PS5 has a 405MHz GPU clock advantage and roughly the same bandwidth disadvantage if you average it out, also assuming it performs at its peak most of the time like Cerny said. I'd say this is roughly the difference we're getting, give or take an fps or two?
The PS5's SSD is a dark horse: it might allow more high-res assets to be streamed, or simply more of them per level, than on the XSX. That depends on developer optimization I guess; we'll have to wait and see.
And at 4K the slightly faster CPU on the XSX might be a moot point, since games are more GPU-dependent at that resolution I reckon. Am I getting somewhere?

That’s how I see it and how many of us have been trying to put things in perspective.

Also, we all religiously listen to Digital Foundry, and they repeatedly said even in the last video that TFLOPs numbers are not - repeat NOT - all that we need to worry about, if at all.

It’s what we’ve been saying!!!!
 
That’s how I see it and how many of us have been trying to put things in perspective.

Also, we all religiously listen to Digital Foundry, and they repeatedly said even in the last video that TFLOPs numbers are not - repeat NOT - all that we need to worry about, if at all.

It’s what we’ve been saying!!!!
Good to hear, I simply don’t have the time to go through the last 10 pages and read through all the new threads. Now we just need to see the games and more impressions from other devs.
 
I wonder if one scenario for memory access for a given clock cycle could be something like:

4x2x16-bit + 6x1x16-bit for the GPU or the CPU, respectively.
6x1x16-bit for the CPU or the GPU, respectively (mapped to the higher addresses, 1-2GB, of the 2GB chips).

i.e. a 224-bit access + a 96-bit access in a given cycle, if that makes sense, for simultaneous CPU/GPU traffic with no contention? Remember that a GDDR6 DRAM presents its 32-bit interface as twin independent 16-bit channels, so I'm saying that on the 6x2GB DRAMs, half of the channels serve either the GPU or the CPU, with the 1GB DRAMs being given over to either the CPU or the GPU.

224/320 = 14/20 of 560GB/s = 392GB/s
96/320 = 6/20 of 560GB/s = 168GB/s

That's assuming the CPU hits all 6 of those DRAMs simultaneously, although I guess it depends on whether the OS is spread across 3 DRAMs (1GB + 1GB + 0.5GB) or across all 6 DRAMs to maximize CPU/OS bandwidth for some ducky reason.

I don't know if any of that really matters.

Code:
  [1GB]       [1GB]       [1GB]       [1GB]
    ||          ||          ||          ||
[x16][x16]  [x16][x16]  [x16][x16]  [x16][x16]

[1+1 GB]    [1+1 GB]    [1+1 GB]    [1+1 GB]    [1+1 GB]    [1+1 GB]
    ||          ||          ||          ||          ||          ||
[x16][x16]  [x16][x16]  [x16][x16]  [x16][x16]  [x16][x16]  [x16][x16]

I'd suggest that spreading the OS reservation across all six chips along with other "slow" data should work better. Not to maximize bandwidth but to minimize contention.

To flesh this out a bit:

What if half the 16-bit channels on the 2GB modules, plus all of the 16-bit channels on the 1GB chips, were primarily allocated to the GPU, with the other half of the channels on the 2GB chips available to the CPU and I/O as needed? Those allocations could shift in either direction according to need, with the CPU + I/O maxing out at 192 bits at any one time (per the DF article).
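
Under that split, the CPU + I/O ceiling lines up with the slow pool's quoted bandwidth (same 560GB/s aggregate assumption as the sketch above):

Code:
# CPU + I/O ceiling: one 16-bit channel per 2GB chip, growable to both
# channels on all six 2GB chips (6 x 2 x 16 = 192 bits, per the DF article).
total_bw = 560                             # GB/s, 320-bit bus at 14 Gbps
cpu_io_max_bits = 6 * 2 * 16               # 192
print(total_bw * cpu_io_max_bits / 320)    # 336.0 GB/s - the quoted "slow" 6GB figure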
 
When all is said and done, are we probably looking at a relative performance difference like the 2070 Super (9 TF) vs the 2080 Super (11.1 TF)?
https://www.rockpapershotgun.com/2019/07/23/nvidia-rtx-2070-super-vs-2080-super/
[Image: RTX 2070 Super vs 2080 Super benchmarks, 4K Ultra]

[Image: RTX 2070 Super vs 2080 Super benchmarks, 4K High]

It's about a 5-7 fps difference on average. Notice the 2080 Super has 45MHz faster base and boost clocks and a 48GB/s bandwidth advantage over the 2070 Super. Now compared to the XSX, the PS5 has a 405MHz GPU clock advantage and roughly the same bandwidth disadvantage if you average it out, also assuming it performs at its peak most of the time like Cerny said. I'd say this is roughly the difference we're getting, give or take an fps or two?
The PS5's SSD is a dark horse: it might allow more high-res assets to be streamed, or simply more of them per level, than on the XSX. That depends on developer optimization I guess; we'll have to wait and see.
And at 4K the slightly faster CPU on the XSX might be a moot point, since games are more GPU-dependent at that resolution I reckon. Am I getting somewhere?

The PS5's SSD speed advantage will mostly be beneficial to exclusives.

The XSX's CPU advantage will mostly be beneficial to exclusives.

Pertaining to multiplatform games, each of the above will grant minor benefits that we'll need Digital Foundry to point out to us.

Still pertaining to multiplatform games, the XSX's CU and bandwidth advantage will result in 2160p vs the PS5's 1800p. And we'll still need Digital Foundry to point it out to us.

Both consoles are fine, very well engineered, and have struck decent balances between cost and performance. Except for the PS5's 14gbps GDDR6, which is a load of bollocks. Or, to be more charitable, a load of barely adequate bollocks.
 
Hmm, this might be my fault, because I run all my games off a Western Digital Black as opposed to an SSD; my SSD is so tiny I keep it for the OS only.
You monster! Hand in your PCMR license on the way out. Small SSD for Windows. Larger, faster SSD for the games you're playing. WD Black for everything you may want to play.

Unless you're super rich, in which case RAID-SSD. Or buy a nextgen console. ;)
 
You monster! Hand in your PCMR license on the way out. Small SSD for Windows. Larger, faster SSD for the games you're playing. WD Black for everything you may want to play.

Unless you're super rich, in which case RAID-SSD. Or buy a nextgen console. ;)
noooooooooooooooooooooooooooooooooooooooooooooooooooooo.
I will re-apply for my PCMR license next week with corrections in place
but the damn computer shops are closed!
 
The PS5's SSD speed advantage will mostly be beneficial to exclusives.

The XSX's CPU advantage will mostly be beneficial to exclusives.

Pertaining to multiplatform games, each of the above will grant minor benefits that we'll need Digital Foundry to point out to us.

Still pertaining to multiplatform games, the XSX's CU and bandwidth advantage will result in 2160p vs the PS5's 1800p. And we'll still need Digital Foundry to point it out to us.

Both consoles are fine, very well engineered, and have struck decent balances between cost and performance. Except for the PS5's 14gbps GDDR6, which is a load of bollocks. Or, to be more charitable, a load of barely adequate bollocks.

And I think XBSX will have more headroom for RT.
 
CPU will likely be an advantage for all titles, giving a few more FPS.

Wouldn't the increase in CPU clockspeed generally be eaten up by the increase in resolution?

That may be a silly question, I'm just going by the fact that the PS4Pro's and X1X's increased CPU clockspeeds were eaten up by the increased resolution.
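
One way to frame it (a toy model with made-up frame times, not profiling data): the frame takes the longer of the CPU and GPU stages, so a small CPU clock advantage only shows up when a game is actually CPU-bound.

Code:
# Toy frame-time model (illustrative numbers only).
def frame_ms(cpu_ms_base, cpu_clock_scale, gpu_ms_base, pixel_scale):
    cpu_ms = cpu_ms_base / cpu_clock_scale   # CPU work shrinks with clock
    gpu_ms = gpu_ms_base * pixel_scale       # GPU work grows with resolution
    return max(cpu_ms, gpu_ms)               # frame waits on the slower stage

# GPU-bound at 4K: a ~3% CPU clock bump changes nothing.
print(frame_ms(10.0, 1.00, 14.0, 1.0))       # 14.0 ms
print(frame_ms(10.0, 1.03, 14.0, 1.0))       # 14.0 ms
# CPU-bound (heavy simulation): the same bump shows up directly.
print(frame_ms(16.0, 1.00, 14.0, 1.0))       # 16.0 ms
print(frame_ms(16.0, 1.03, 14.0, 1.0))       # ~15.5 ms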
 
When all is said and done, are we probably looking at a relative performance difference like the 2070 Super (9 TF) vs the 2080 Super (11.1 TF)?

We don't know yet; it's hard to compare. You're taking two NV GPUs and expecting the same difference, and the effect of the variable/dynamic clocks is still unknown too.

Both consoles are fine, very well engineered, and have struck decent balances between cost and performance. Except for the PS5's 14gbps GDDR6, which is a load of bollocks. Or, to be more charitable, a load of barely adequate bollocks.

CPU, GPU, and RAM/BW advantages go to the XSX. The CPU might help with more consistent FPS, or, in the 3.8GHz mode, perhaps other things. The GPU differences we might not notice unless DF points them out; they might show more in exclusives, and RT performance might be better. BW helps with resolution and means fewer bottlenecks.
 
And I think XBSX will have more headroom for RT.

True, but as Shifty has pointed out, that may be ameliorated by rendering at 1800p. As another user has pointed out, though (sorry, can't remember who), the bandwidth will take a hit due to CPU contention.

Not that it would be nearly as much of an issue if they'd used 16gbps GDDR6. I know I'm flogging a dead horse at this point, but it's the one area that I still think Sony ballsed up.

Hell, even if the 16gbps chips are relatively highly priced right now, charge the same as the XSX and offset it by including a year of PS+ and PSNow - which will also ameliorate a good chunk of BC issues whilst they're being worked on.

As time goes on, yields will improve, especially when transitioning to a new node. At a new node, they may also be able to use a cheaper cooling solution. They'll then have a smaller SoC than the XSX, with comparable yields, making the SoC itself cheaper. Even if GDDR6, as it drops in price, continues to offset the savings on the SoC they'll, at worst, have an identical BoM to the XSX, but will be able to produce visuals that are identical, bar a slight drop in resolution.

Going cheap on memory and low on bandwidth will impact visuals more and more as time goes on. And, IMO, hamper sales to more casual folks who don't care about pixel counting, but can see stuttering and missing reflections.
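
For reference, the delta being argued about is straightforward arithmetic on the PS5's 256-bit bus:

Code:
# What 16 Gbps chips would have bought on a 256-bit bus.
bus_bits = 256
print(bus_bits * 14 / 8)    # 448.0 GB/s - as shipped
print(bus_bits * 16 / 8)    # 512.0 GB/s - ~14% more, before any CPU contention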
 
The variable frequency of the PS5 was probably part of the design from the beginning.
I guess the GitHub leak's 9.2TF was also a peak value; now the peak has been raised to 10.28TF.


Typically Zen 2 consumes ~40W, and that GPU may be about 160W. If the power budget of the SoC is 200W,
then when Zen 2 is under a heavy workload it may only need 15~20W more. That can be achieved by taking
15~20W from the GPU, which may only cost about 2% of GPU frequency due to the non-linear power curve at 2.23GHz.

I think Sony's cooling solution should be capable of sustaining 2.23GHz on the GPU under heavy workloads if the power
density of the GPU is high. The main variable is CPU power dissipation, since the APU consumes constant power,
so they may have to reduce GPU frequency by 2~3% when the CPU needs more power.
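
Rough numbers, fitting a power curve to Cerny's own rule of thumb from the talk (about 10% power back for a couple of percent of frequency); the exponent here is my fit, not an official figure:

Code:
import math

# Fit an exponent n to "2% less frequency -> 10% less power": P ~ f^n.
n = math.log(0.90) / math.log(0.98)        # ~5.2, much steeper than the classic f^3

# Taking 15~20W back from a ~160W GPU budget:
for watts_back in (15, 20):
    f_scale = (1 - watts_back / 160) ** (1 / n)
    print(watts_back, round((1 - f_scale) * 100, 1))   # ~1.9% and ~2.5% frequency drop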
 
Still pertaining to multiplatform games, the XSX's CU and bandwidth advantage will result in 2160p vs the PS5's 1800p.
2160p vs 1800p is a 44% difference in pixel count (assuming the same settings).

How do you reach that conclusion?


In fact, Mark Cerny highly praises the narrow-and-fast approach. How do you compare PS5 and XSX game performance
if the PS5 GPU is at 2.23GHz with a faster front-end?
 
2160p vs 1800p is a 44% difference in pixel count (assuming the same settings).

How do you reach that conclusion?


In fact, Mark Cerny highly praises the narrow-and-fast approach. How do you compare PS5 and XSX game performance
if the PS5 GPU is at 2.23GHz with a faster front-end?

Yes, a ~15% difference in pixel count is more like 2160p against roughly 2015p.
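
The pixel-count arithmetic, for reference (per-axis resolution scales with the square root of the pixel ratio, assuming 16:9 throughout):

Code:
import math

# Pixel counts scale with the square of the vertical resolution at a fixed aspect ratio.
print((2160 / 1800) ** 2)          # 1.44  -> 2160p has 44% more pixels than 1800p
print(2160 / math.sqrt(1.15))      # ~2014 -> a 15% pixel gap is 2160p vs ~2015p
print((2160 / 2050) ** 2)          # ~1.11 -> 2050p would only be an ~11% gap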
 
CPU will likely be an advantage for all titles, giving a few more FPS.
Well, I don't know about that. I'd say the CPUs are virtually the same on both. 3.5 vs 3.66? That's under a 5% difference. In the Pro vs XBX case, even with a 9% faster CPU on the XBX, many games run better on the Pro, even in cases where the resolution is the same or very similar.

I think the CPU (among the GPU and RAM specs) is actually the only area where the specs are great and as expected (we all expected a 3.2GHz CPU). But Sony dropped the ball on RAM bandwidth again after the Pro (we know AMD's 10TF GPUs are heavily bottlenecked with only 448GB/s, and that's without CPU contention), and on the GPU.

This is Cerny's crazy-Ken moment. Engineering-hubris stuff.
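
For reference, the clock deltas from the announced figures (XSX: 3.8GHz with SMT off, 3.66GHz with SMT on; PS5: up to 3.5GHz):

Code:
# CPU clock deltas from the announced figures.
ps5 = 3.5                          # GHz, SMT on, variable ("up to")
xsx_smt_on, xsx_smt_off = 3.66, 3.8

print(xsx_smt_on / ps5 - 1)        # ~4.6% with SMT enabled on both
print(xsx_smt_off / ps5 - 1)       # ~8.6% if a game opts out of SMT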
 