Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Ha! Real question coming... I know, this is unusual.

Is there something like a 'low limit' where lowering the resolution to, say, 720p wouldn't actually allow us to squeeze any more eye-candy out of the hardware? Surely at some point you don't get much out of it, regardless of how low the res is?
 
I was actually wondering whether 1080p is already too low, all things considered, but looking at AMD chips with similar bandwidth, probably not. For Nvidia and their Ampere GPUs with 900 GB/s, 1080p is a no-go simply due to the low performance return you trade for IQ.
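
To put that 'low limit' intuition into rough numbers, here's a toy model (the cost figures below are made up purely for illustration): every frame has some resolution-independent work (geometry, shadow maps, CPU-driven submission) plus a per-pixel cost, and once the fixed part dominates, dropping resolution further frees up very little budget for extra eye-candy.

Code:
# Toy model with made-up numbers, just to show the diminishing returns.
FIXED_MS = 6.0          # hypothetical resolution-independent cost per frame
PER_MEGAPIXEL_MS = 3.0  # hypothetical cost per million shaded pixels

def frame_ms(width, height):
    return FIXED_MS + PER_MEGAPIXEL_MS * (width * height / 1e6)

for w, h in [(3840, 2160), (2560, 1440), (1920, 1080), (1280, 720)]:
    print(f"{w}x{h}: {frame_ms(w, h):.1f} ms")
# 4K ~30.9 ms, 1440p ~17.1 ms, 1080p ~12.2 ms, 720p ~8.8 ms: each step down
# saves less absolute time, so there is less and less headroom to reinvest.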
 
This is not the final result; it was probably pre-release. But it performs very well on PS5 and XSX. I found the final DMC 5 test.

Better than a 2080 at ultra but not as good as a 2080 Ti, probably around a 2080 Super, which is not bad at all. The next-generation consoles are probably on ultra too, I suppose.

EDIT: A surprising result for PS5; on paper the XSX seems to sit between a 2080 Super and a 2080 Ti.

This is not a safe assumption to make at all. The visual vs. performance trade-off of Ultra settings on PC is usually very poor. See the DF Watch Dogs video on the huge difference that settings can make to performance.

Without a DF-like analysis or explicit confirmation from the developers that the settings are equal, there's not much of a conclusion that can be taken from these results.

And that's before you even get into talking about whether comparable sections of the game were used.
 
From what they said in the Digital Foundry video, the section used is a heavy one. The XSX having around 2080 Super performance is logical on paper; the PS5 not at all, but seeing real-world 5700 XT performance, an Xbox Series X should be a bit above a 2080 Super but behind a 2080 Ti in rasterization.

We will have to wait, and it will vary from game to game.
 
...
So what could be the difference between 14 and 14C? Lower CAS latency?
Maybe lower voltage / power consumption?

Based on the other 'speed grade mark' labels, 14C should offer better performance than plain 14. It could be a variant with lower (CAS) latency.
 
Ha! Real question coming... I know, this is unusual.

Is there something like a 'low limit' where lowering the resolution to, say, 720p wouldn't actually allow us to squeeze any more eye-candy out of the hardware? Surely at some point you don't get much out of it, regardless of how low the res is?

There aren't many data points on this, but I think @Dictator mentioned that Devil May Cry in the Ray Tracing Performance Mode (1080p + RT) shows some artifacts in the RT reflections that are due to the low rendering resolution and somewhat mitigate RT's image quality advantage. I guess the further you go below 1080p, the worse this problem will be. There's a game (can't remember which) that has no RT on the Series S despite having it on the other consoles, and it could be because RT on the Series S would require a lower-than-1080p render resolution, and the added IQ from RT could be minimal at that resolution.
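
A rough illustration of why reflections suffer first, assuming the common approach of tracing them at a fraction of the output resolution (the fractions below are assumptions, not DMC 5's confirmed settings):

Code:
# Assumed quarter-area reflection buffer; not DMC 5's confirmed setting.
def reflection_samples(width, height, area_fraction=0.25):
    return width * height * area_fraction

for label, (w, h) in [("4K", (3840, 2160)), ("1080p", (1920, 1080)), ("900p", (1600, 900))]:
    print(f"{label}: ~{reflection_samples(w, h) / 1e6:.2f} M reflection samples")
# ~2.07 M at 4K vs ~0.52 M at 1080p vs ~0.36 M at 900p: the lower the base
# resolution, the fewer samples there are to reconstruct reflections from,
# so artifacts grow and RT's visible payoff shrinks.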
 
Based on the other 'speed grade mark' labels, 14C should offer better performance than plain 14. It could be a variant with lower (CAS) latency.

But what could it be other than lower latency? By labeling it 14, as in "14 Gbps", they're pretty much already saying what speed the memory is running at.
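
For reference, the "14" alone already fixes the bandwidth; a different bin such as "14C" wouldn't change this math, only secondary characteristics like latency, voltage, or yield:

Code:
# GDDR6 bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8
def bandwidth_gbs(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

print(bandwidth_gbs(14, 256))  # 448.0 GB/s -> PS5 (8 x 32-bit chips)
print(bandwidth_gbs(14, 320))  # 560.0 GB/s -> XSX's faster 10 GB pool (10 x 32-bit chips)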
 
What is it that we haven't publicly heard about? Both have 64 ROPs, but the PS5's fillrate is ~25% higher.

Aha I see edit, yes.

Interestingly, fillrate should be an advantage WRT titles developed for the previous generation (PS4/XBO). So, it could be that on cross-generation titles the fillrate advantage of the PS5 is balancing out, and in some cases outweighing, the compute advantage of the XBS-X.
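
As a quick sanity check on that fillrate gap, using the public ROP counts and clocks (and keeping in mind the PS5 figure is an "up to" variable clock):

Code:
# Peak pixel fillrate = ROPs x clock; both consoles have 64 ROPs.
ps5_gpix = 64 * 2.23    # ~142.7 Gpixels/s at the PS5's maximum clock
xsx_gpix = 64 * 1.825   # ~116.8 Gpixels/s at the XSX's fixed clock
print(f"PS5 advantage: {(ps5_gpix / xsx_gpix - 1) * 100:.0f}%")  # ~22%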

This could explain why the PS5 has smoke effects that the XBS-X doesn't have, and why it doesn't drop during smoke-heavy scenes in COD like the XBS-X does, as Arwin pointed out.

We don't yet know what will end up being best or most common rendering practices for the current generation as most titles will have been developed as incremental updates over the previous generation.

There's speculation by some in the NAVI 2 thread for instance that going forwards fillrate will mean less while compute will mean more. Of course, this could just be the typical banter between NV fans and AMD fans as NV now has a distinct compute advantage while it appears that AMD has the fillrate advantage.

Regards,
SB
 
There aren't many data points on this, but I think @Dictator mentioned that Devil May Cry in the Ray Tracing Performance Mode (1080p + RT) shows some artifacts in the RT reflections that are due to the low rendering resolution and somewhat mitigate RT's image quality advantage. I guess the further you go below 1080p, the worse this problem will be. There's a game (can't remember which) that has no RT on the Series S despite having it on the other consoles, and it could be because RT on the Series S would require a lower-than-1080p render resolution, and the added IQ from RT could be minimal at that resolution.

It was DMC you were thinking of. The S doesn't have an RT mode.*

*It's doooooomed! ;)
 
Mods - can't all this be spun off into its own thread?
I’m inclined to jettison it into the sun. I don’t want gloating over who reads tea leaves better in B3D, period. Things were said in the heat of a new console launch (also under the heel of a pandemic). Let’s move on with more constructive avenues of discussion.
 
Ha! Real question coming... I know, this is unusual.

Is there something like a 'low limit' where lowering the resolution to, say, 720p wouldn't actually allow us to squeeze any more eye-candy out of the hardware? Surely at some point you don't get much out of it, regardless of how low the res is?


Is that final output or rendering?


720p is a lot of data; DLSS can do pretty good things with it, and that will only improve.
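
For a sense of scale, these are Nvidia's published per-axis DLSS 2.x render scales applied backwards to a 720p internal image (Balanced is approximately 0.58):

Code:
# Per-axis render scales for DLSS 2.x modes (public Nvidia figures).
modes = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5, "Ultra Performance": 1 / 3}
for name, scale in modes.items():
    print(f"{name}: 1280x720 -> ~{round(1280 / scale)}x{round(720 / scale)} output")
# Quality turns 720p into a 1080p image, Performance into 1440p,
# Ultra Performance stretches it all the way to 4K.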
 
From what they said in the Digital Foundry video, the section used is a heavy one. The XSX having around 2080 Super performance is logical on paper; the PS5 not at all, but seeing real-world 5700 XT performance, an Xbox Series X should be a bit above a 2080 Super but behind a 2080 Ti in rasterization.

We will have to wait, and it will vary from game to game.

In rasterization they are somewhere around 2080 level; in RT, around a 2060 Super. But I think it's better to compare to what AMD has instead of NV. A wild guess is the 6700 XT.
 
I compare with what is available, and it will vary from title to title. In titles where AMD excels, they will do better.

In rasterization, it will differ a lot title by title.
 
From what they said in the Digital Foundry video, the section used is a heavy one. The XSX having around 2080 Super performance is logical on paper; the PS5 not at all, but seeing real-world 5700 XT performance, an Xbox Series X should be a bit above a 2080 Super but behind a 2080 Ti in rasterization.

We will have to wait, and it will vary from game to game.

On paper the PS5 is a near-perfect match for the 2080 at its advertised boost clock. The 2080 has more fill rate and the PS5 can process more primitives per clock. They have the same memory bandwidth, but the 2080 doesn't have to share its bandwidth with a CPU.

Obviously that's comparing across architectures, so it can't be used to base any conclusions on, but if we're looking at pure specs alone, they're a reasonable match.
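
The paper math behind that, using the 2080's reference boost clock (real cards typically boost higher) and the PS5's variable maximum; both public spec sheets list 448 GB/s of GDDR6:

Code:
# TFLOPs = shaders x 2 FLOPs per clock (FMA) x clock (GHz) / 1000
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000

rtx2080 = tflops(2944, 1.71)  # ~10.1 TFLOPs, 448 GB/s all to itself
ps5     = tflops(2304, 2.23)  # ~10.3 TFLOPs, 448 GB/s shared with the CPU
print(f"{rtx2080:.1f} vs {ps5:.1f}")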
 
This is why I'm talking about the XSX, not the PS5. The PS5 must be around 2080 level.
 
On paper the PS5 is a near-perfect match for the 2080 at its advertised boost clock. The 2080 has more fill rate and the PS5 can process more primitives per clock. They have the same memory bandwidth, but the 2080 doesn't have to share its bandwidth with a CPU.

Obviously that's comparing across architectures, so it can't be used to base any conclusions on, but if we're looking at pure specs alone, they're a reasonable match.

And the PS5 has many bandwidth-saving features that the 2080 mostly doesn't have access to, so I think that should probably offset having to share with the CPU?
 
AMD filed a patent for reducing bandwidth contention on APUs in 2016 or 2017, published in 2019. The memory controller is supposed to give priority to CPU memory requests because the CPU is more latency sensitive than the GPU. It's somewhere on Era but too difficult to find.
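
As a toy illustration of why contention matters more than the raw gigabytes the CPU moves (all numbers below are assumptions, not measured figures): prioritising latency-sensitive CPU requests forces extra page switches and bus turnarounds, so each GB/s the CPU consumes can cost the GPU more than 1 GB/s of its peak.

Code:
# Toy contention model; the CPU traffic and penalty factor are assumptions.
TOTAL_BW = 448.0          # GB/s in the shared pool
cpu_traffic = 25.0        # GB/s of assumed CPU demand
contention_penalty = 1.5  # assumed: each CPU GB/s effectively costs 1.5 GB/s of peak

gpu_effective = TOTAL_BW - cpu_traffic * contention_penalty
print(gpu_effective)      # 410.5 GB/s left for the GPU under these assumptions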
 