Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

I doubt Sony has any "secret sauce". Again, the anomaly is the XSX. The XSX is performing worse than the 5700 XT, for goodness' sake. That can't be right or healthy. Even without the bells and whistles of RDNA 2.0, it should at least be on par with the 5700 XT, but it isn't. I am sure MS has been aware of this for a while now. We just need to wait and see if MS will catch up to the 5700 XT, let alone the PS5, and if so, whether it will also surpass it and when. Personally, if the XSX is not performing measurably better than it is now by late 2021, I would not expect it to perform substantially better in 2023 and so on either. I doubt very much this is a PS3 situation either... that was very different IMO.
How do we know the Xbox performs worse than the 5700 XT? In TechPowerUp's Valhalla benchmark the 5700 XT averages 53 fps at 1440p, so it's worse than the XSX, but we don't know how the graphics settings on PC compare to the consoles. @Dictator, do you know how Valhalla on the next-gen consoles compares to the 5700 XT?
 
I think MS chose 12/14 because it makes sense for them, as they are going to use those in servers for compute-oriented tasks (ML and such). But they will certainly lose efficiency in gaming tasks.
Or maybe they just wanted to brag that they could get 12TF. Remember when the Xbox One X had 6TF and the bragging that was done, yet no game other than Gears 5 and RDR2 took advantage of it?
 
I agree with Globby: these systems will be close. There is no magic sauce to be had, just better optimization and toolsets over time. Anyone thinking the XBSX and PS5 are going to be miles apart in terms of performance is looking at this all wrong.
 
Or maybe they just wanted to brag that they could get 12TF. Remember when the Xbox One X had 6TF and the bragging that was done, yet no game other than Gears 5 and RDR2 took advantage of it?
Probably not; it doesn't work like that. They couldn't go for an additional SE because it would be costly in terms of die space, but they did add an additional 1MB of L2 over the PS5, so we have yet to see what the reason for this performance is. We know the tools are coming in way too hot and devs are not super happy about the change, and we know the situation with Sony is the complete opposite.

So, until we know what the bottleneck is, it's best not to assume the engineers did anything for PR bragging purposes.

Ampere underperforms RDNA2 by some margin in some games, yet outperforms it in others.

Take the 3080 vs the 6800 XT, for example: 96 vs 128 ROPs, with the 6800 XT having a considerable advantage in clocks and thus a much higher gigapixel rate (almost a 100% advantage), yet results are all over the place.
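
To put rough numbers on that, a back-of-envelope sketch in Python (the boost clocks are my assumptions, and peak fillrate is a paper figure; with these clocks the gap comes out nearer +75% than +100%, but it's large either way):

```python
# Back-of-envelope peak pixel fillrate: ROPs x clock, assuming one pixel
# per ROP per clock. Clocks are approximate boost clocks (assumed).
def peak_fillrate_gpix(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz

rtx_3080 = peak_fillrate_gpix(96, 1.71)    # ~164 Gpix/s
rx_6800xt = peak_fillrate_gpix(128, 2.25)  # ~288 Gpix/s
print(f"3080:    {rtx_3080:.0f} Gpix/s")
print(f"6800 XT: {rx_6800xt:.0f} Gpix/s ({rx_6800xt / rtx_3080 - 1:+.0%})")
```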
 
How do we know the Xbox performs worse than the 5700 XT? In TechPowerUp's Valhalla benchmark the 5700 XT averages 53 fps at 1440p, so it's worse than the XSX, but we don't know how the graphics settings on PC compare to the consoles. @Dictator, do you know how Valhalla on the next-gen consoles compares to the 5700 XT?
I am sorry, maybe I am jumping to conclusions too fast here... well, I am, I guess. But it does seem that the XSX in some cases does not perform as well? Again, sorry if I am spreading false information.
 
You are making the same mistake people made for months: you are setting yourselves (and the people who read you) up for another disappointment. But the answer is probably not a mystery, and it was given by Cerny in his presentation:

- Narrow & fast design, and making sure all CUs are busy. There are actually two key principles here, not one.

There is a reason the high-performance RDNA2 and RDNA1 GPUs have 8/10 CUs per shader array: it's the right sweet spot between compute and efficiency for gaming purposes. It seems RDNA 1 & 2 shader arrays are most efficient working with 8/10 CUs, not 12/14 (rough numbers in the sketch below).

I think MS chose 12/14 because it makes sense for them, as they are going to use those in servers for compute-oriented tasks (ML and such). But they will certainly lose efficiency in gaming tasks.
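
A quick sketch of those per-array numbers, using public CU counts and the commonly reported four-shader-array layouts (the XSX arrangement comes from die-shot analysis, not an official figure, so treat it as an assumption):

```python
# Active CUs per shader array, assuming 4 shader arrays on each part.
gpus = {
    "5700 XT": (40, 4),  # -> 10 per array
    "PS5":     (36, 4),  # -> 9 per array, inside the 8/10 sweet spot
    "XSX":     (52, 4),  # -> 13 per array, above it
}

for name, (cus, arrays) in gpus.items():
    print(f"{name}: {cus / arrays:.0f} active CUs per shader array")
```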

Why am I setting myself up for disappointment? I will have a PS5. If the console had Infinity Cache, the narrow-and-fast approach would probably work, and better than on PC, with devs able to optimize their code around Infinity Cache usage. My problem with the PS5 is not the frequency, the continuous boost, the number of CUs, or the Tflops. It is the memory bandwidth of only 448 GB/s. The PS4 Pro's biggest weakness was its memory bandwidth, and the PS5 shares the same bandwidth as a 2080 or a 5700/5700 XT between the CPU, the GPU, the Tempest engine, and the SSD.

Cache is good, but you still need to go to memory to fetch the data, and while the XSX cache is slower, the quantity is the same. When you do need main memory, it will become a bottleneck much sooner on the PS5. If the interleaved memory is used correctly, the XSX probably has around 20% more memory bandwidth than the PS5. Maybe part of the problem is the API being inefficient with the XSX memory architecture.
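
Where that "around 20%" could come from, as a rough sketch (assuming no CPU/SSD contention, which in practice eats into both consoles' budgets):

```python
# PS5: uniform 448 GB/s across 16 GB.
# XSX: 560 GB/s for the 10 GB GPU-optimal pool, 336 GB/s for the other 6 GB.
ps5_bw = 448
xsx_fast, xsx_slow = 560, 336

# Naive capacity-weighted average, if accesses spread evenly by capacity:
xsx_blended = (10 * xsx_fast + 6 * xsx_slow) / 16

print(f"GPU traffic kept in the fast 10 GB: {xsx_fast / ps5_bw - 1:+.0%}")
print(f"Capacity-weighted blend:            {xsx_blended / ps5_bw - 1:+.0%}")
# +25% best case, +6% naive blend; ~20% implies keeping most
# bandwidth-heavy data in the fast pool.
```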
 
We were sitting on this for some time. We didn't know what to make of it. But it's out there now.

[Attached image: upload_2020-11-18_20-16-36.png]


We think this might be causing some issues on the front end. It's worth discussing, but it's not going to explain all of their issues.

You may want to reach out to Al, or he can post here if he wants. But you're looking at (if we understand it correctly) 32 ROPs for the XSX, but double pumped.
Thread here: https://forum.beyond3d.com/posts/2176264/

edit: help? The picture is failing to show for people who aren't logged in. Seems to be a permission issue.

Ah, so 32 double-pumped ROPs would have the raw throughput of 64, but you would lose efficiency on small tris?

So Sony, who may not have VRS, could be using an RDNA1-like arrangement of 64 'single-pumped' ROPs. And they would probably need that for 4Pro compatibility. This might lend itself better to a higher effective fillrate, particularly with small tris.

Do you think the double-pumped RBs can still output up to 8 pixels per clock when not using a VRS pattern, or does it likely drop to only 4 (4 × 8 for 32 pixels/cycle)?
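
For what it's worth, the raw numbers under each scenario would look something like this (all of it speculative: the PS5 ROP arrangement is unconfirmed, the XSX clock is the fixed 1.825 GHz, and the PS5 figure is max boost):

```python
# Peak fillrate under the scenarios discussed above.
xsx_clk, ps5_clk = 1.825, 2.23  # GHz

print(f"XSX, 8 RBs double pumped (64 px/clk): {64 * xsx_clk:.0f} Gpix/s")
print(f"XSX, fallback to 4 px/clk per RB:     {32 * xsx_clk:.0f} Gpix/s")
print(f"PS5, 64 single-pumped ROPs:           {64 * ps5_clk:.0f} Gpix/s")
```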
 
Why am I setting myself up for disappointment? I will have a PS5. If the console had Infinity Cache, the narrow-and-fast approach would probably work, and better than on PC, with devs able to optimize their code around Infinity Cache usage. My problem with the PS5 is not the frequency, the continuous boost, the number of CUs, or the Tflops. It is the memory bandwidth of only 448 GB/s. The PS4 Pro's biggest weakness was its memory bandwidth, and the PS5 shares the same bandwidth as a 2080 or a 5700/5700 XT between the CPU, the GPU, the Tempest engine, and the SSD.

Cache is good, but you still need to go to memory to fetch the data, and while the XSX cache is slower, the quantity is the same. When you do need main memory, it will become a bottleneck much sooner on the PS5. If the interleaved memory is used correctly, the XSX probably has around 20% more memory bandwidth than the PS5. Maybe part of the problem is the API being inefficient with the XSX memory architecture.
There are some hints from developers like Matt Hargett that the cache system is very good on the PS5 and that bandwidth is not a problem for this console.
 
Why am I setting myself up for disappointment? I will have a PS5. If the console had Infinity Cache, the narrow-and-fast approach would probably work, and better than on PC, with devs able to optimize their code around Infinity Cache usage. My problem with the PS5 is not the frequency, the continuous boost, the number of CUs, or the Tflops. It is the memory bandwidth of only 448 GB/s. The PS4 Pro's biggest weakness was its memory bandwidth, and the PS5 shares the same bandwidth as a 2080 or a 5700/5700 XT between the CPU, the GPU, the Tempest engine, and the SSD.

Cache is good, but you still need to go to memory to fetch the data, and while the XSX cache is slower, the quantity is the same. When you do need main memory, it will become a bottleneck much sooner on the PS5. If the interleaved memory is used correctly, the XSX probably has around 20% more memory bandwidth than the PS5. Maybe part of the problem is the API being inefficient with the XSX memory architecture.
With the cache scrubbers, the fast SSD, and other bits we might not know about, I would think Cerny knows what he is doing and that 448 GB/s is enough if he felt it was enough.
 
With the cache scrubbers, the fast SSD, and other bits we might not know about, I would think Cerny knows what he is doing and that 448 GB/s is enough if he felt it was enough.
It's all about compromises, though. 16Gbps GDDR6 chips are expensive, more power hungry, and harder to procure.

The 360 had 10MB of eDRAM, just a few MB short of "free" 2xMSAA without tiling.
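
The arithmetic behind "a few MB short", assuming a 720p target with 32-bit colour plus 32-bit depth/stencil:

```python
# 720p framebuffer vs the 360's 10 MB of eDRAM.
pixels = 1280 * 720
bytes_per_pixel = 4 + 4                     # colour + depth/stencil
no_msaa = pixels * bytes_per_pixel / 2**20  # ~7.0 MB, fits
msaa_2x = no_msaa * 2                       # ~14.1 MB, needs tiling
print(f"no MSAA: {no_msaa:.1f} MB, 2xMSAA: {msaa_2x:.1f} MB vs 10 MB eDRAM")
```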

The PS4 Pro had only 218 GB/s of bandwidth, and the XSX is not the full 560 GB/s for the whole system, due to the cost of RAM chips.

They will get around that easily anyway, given that people simply cannot see the difference between 1600p and 1800p, or a couple of frames.
 
So, until we know what the bottleneck is, it's best not to assume the engineers did anything for PR bragging purposes.
But it's not like they didn't brag about the fastest, most powerful console, that being the 6TF Xbox One X, when they had the chance.

And I mean they have been touting the XBSX as the fastest, most powerful console ever, with 12TF.
 
But it's not like they didn't brag about the fastest, most powerful console, that being the 6TF Xbox One X, when they had the chance.

And I mean they have been touting the XBSX as the fastest, most powerful console ever, with 12TF.
They would have bragged about an SSD faster than anything out there if they had it. It's a bullet point you use when you have an advantage somewhere, but it's not the reason the console is designed the way it is. Thinking about it, it's perfectly well designed for a 200W system; don't forget the PS5 uses the same amount of watts, so it's not like it's getting outperformed by a system that uses half the electricity.

If they gain the upper hand in the future, at the same price and perhaps with some additional features, they cannot be faulted much.

But how much more expensive is it than the 10GB of GDDR6 at 560 GB/s that the XBSX has?
I am saying that the 320-bit bus gave them a path to 20GB of RAM at the full 560 GB/s, but due to memory prices they had to work around it, and thus compromised on 16GB, where 10GB gets the full 560 GB/s and the other 6GB gets 336 GB/s.
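
The arithmetic behind those figures, assuming ten 14 Gbps chips (six 2 GB plus four 1 GB, which is the commonly reported configuration):

```python
# GDDR6 bandwidth = bus width (bits) / 8 * data rate (Gbps per pin).
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gbs(320, 14))  # 560.0 -> 10 GB striped across all ten chips
print(bandwidth_gbs(192, 14))  # 336.0 -> extra 6 GB lives on six chips only
# Ten 2 GB chips would have given 20 GB, all of it at the full 560 GB/s.
```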
 
They would have bragged about an SSD faster than anything out there. It's a bullet point you use when you have an advantage somewhere, but it's not the reason the console is designed the way it is. Thinking about it, it's perfectly well designed for a 200W system; don't forget the PS5 uses the same amount of watts, so it's not like it's getting outperformed by a system that uses half the electricity.

If they gain the upper hand in the future, at the same price and perhaps with some additional features, they cannot be faulted much.
Sure, but that's an "if" that might or might not occur. Currently the story goes that the GDK is responsible for the slow XBSX, but what if it's not, and the XBSX does have an inherent problem?

It's all speculation at this point in time, but currently the "slower" PS5 seems to be doing a better job than the "fastest, most powerful" console.

It's a bit of egg on the face of MS, imo. Of course, that might change in a year's time... or whenever MS decides to release their 1P games.
 
The proof is that it performs close to, or in some cases better than, the XSX.

I heard the Xbox API is late and less efficient than the PS5 API for the moment. This is probably a better explanation, and it means there is more room for improvement in the Xbox API than in the PS5 API. It means one day the XSX will have the upper hand.

If in 4/6 years the situation is the same, then OK, we can call it proof. Before then, we cannot.
 