Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

New consoles may render 4K for cross-gen games, but they will struggle to hit 4K in next-gen games. If the PS5 can only render 1440p for the UE5 demo, the XSX may only render 1550p~1600p images. This suggests we may not see native resolution for next-gen games on 4K TVs.

In previous generations, developers often used the most powerful console to render at the "native" resolution of mainstream TVs of that time:


2013 PS4: 1080p
2005 XB360: 720p


This generation, TVs have improved so fast (4x the resolution) that neither new console can render at native resolution with a true next-gen engine. In that UE5 demo the new consoles may only render around 50% of the 4K pixel count.


In previous generations the more powerful console often benefited from the TV's native resolution while the other console rendered a blurrier image. "Native resolution" plays a very important role for modern TVs, so this generation we will see far fewer comparisons between the consoles (especially on resolution).

It simply is not as meaningful as it was for previous consoles.

4K TV: we see 45~55% of the native pixel count, maybe 65% with temporal reconstruction. A more powerful console just means a slightly less blurry image, with no benefit from a native-resolution output.


1080p TV: both consoles will render above 1080p, giving some supersampling. A more powerful console means slightly better supersampling, but that is very hard to spot unless one console can render 1.5~2x more pixels than the other, which is not going to happen between the PS5 and XSX.
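As a rough check on those percentages, here is the pixel-count arithmetic (a quick sketch; the 1550p/1600p figures are only the speculative numbers from this post, not anything announced):

```python
# Pixel-count fraction relative to native 4K (3840x2160 ~= 8.3 Mpixels).
def fraction_of_4k(width, height):
    return (width * height) / (3840 * 2160)

print(f"{fraction_of_4k(2560, 1440):.0%}")  # 1440p -> ~44% of 4K
print(f"{fraction_of_4k(2756, 1550):.0%}")  # 1550p -> ~52% of 4K
print(f"{fraction_of_4k(2844, 1600):.0%}")  # 1600p -> ~55% of 4K
```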

You pessimist. The UE5 demo on PS5 was 1440p@30Hz, and Epic are apparently aiming for 1440p@60Hz. That works out to roughly the same pixel throughput as 4K@30Hz.

Let's not forget that 4k is actually the biggest resolution jump we've had for a console generation. I'm pretty sure that we're seeing one of the biggest jumps in compute from base consoles to next gen too.
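For what it's worth, the "1440p@60 works out to roughly 4K@30" claim is just pixels-per-second arithmetic, sketched here with the framerates quoted above:

```python
# Pixels per second at each mode.
px_per_sec_1440p60 = 2560 * 1440 * 60   # ~221 million pixels/s
px_per_sec_4k30    = 3840 * 2160 * 30   # ~249 million pixels/s

print(px_per_sec_1440p60 / px_per_sec_4k30)  # ~0.89, i.e. roughly comparable throughput
```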
 
So you could have an "accelerator cartridge" where you combine the onboard SSD and cartridge throughput for select games. But, as if 5~9 GB/s wasn't already enough! (famous last words?)
I think the PS5 is very unlikely to have RAID0 between its internal SSD and the M.2 expansion, but with the SeriesX it's just impossible IMO. From the pictures, part of the custom expansion cartridge stays outside the console, probably on purpose to make it hot-swappable. The last thing we'd want would be for the console's games to all be rendered useless the moment someone pulled out the cartridge.

Both could use RAID1, though, and that could be good for significantly higher read speeds, but the extra capacity would be lost.
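A minimal sketch of that trade-off, assuming the 5.5 GB/s internal drive and a hypothetical 5 GB/s expansion drive (the expansion figures are made up purely for illustration):

```python
# RAID0 stripes data across both drives: throughput and capacity add, but pulling
# either drive loses everything. RAID1 mirrors: capacity drops to the smaller drive,
# though reads can in principle be served from both copies.
internal  = {"capacity_tb": 0.825, "read_gbps": 5.5}  # PS5 internal SSD (raw figures)
expansion = {"capacity_tb": 1.0,   "read_gbps": 5.0}  # hypothetical M.2 expansion

raid0 = {
    "read_gbps":   internal["read_gbps"] + expansion["read_gbps"],          # ~10.5 GB/s
    "capacity_tb": internal["capacity_tb"] + expansion["capacity_tb"],      # ~1.8 TB, no redundancy
}
raid1 = {
    "read_gbps":   internal["read_gbps"] + expansion["read_gbps"],          # best case, reads split across mirrors
    "capacity_tb": min(internal["capacity_tb"], expansion["capacity_tb"]),  # mirrored: ~0.8 TB usable
}
print(raid0, raid1)
```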
 
Games in the first few years will most likely hit native or near-native 4K, especially first-party titles. It is only around mid-gen that we start to see a substantial drop in resolution among various titles, but that's when the mid-gen upgrades arrive and start to do their thing. A PS5 Pro first-party game would be insanely good looking at native 4K.
 
Games in the first few years will most likely hit native or near-native 4K, especially first-party titles. It is only around mid-gen that we start to see a substantial drop in resolution among various titles, but that's when the mid-gen upgrades arrive and start to do their thing. A PS5 Pro first-party game would be insanely good looking at native 4K.

The quest for native res seems a weird one, given the better-than-native results that ML techniques like DLSS 2.0 seem to provide. When we're 4 years down the line on PS5/XSX, I'm not sure that apparent resolution will be where those consoles are straining (assuming that mid-range TVs don't end up bigger than 65 inches).
 
The quest for native res seems a weird one, given the better-than-native results that ML techniques like DLSS 2.0 seem to provide. When we're 4 years down the line on PS5/XSX, I'm not sure that apparent resolution will be where those consoles are straining (assuming that mid-range TVs don't end up bigger than 65 inches).
You quoted but seemed to miss an important part of what he said:
in the first few years
Unsure if the consoles will be able to do ML upscaling fast enough to be useful, but non-native resolution is already a staple and we won't be going back. So even without ML they will progress with temporal injection/reconstruction, checkerboarding, etc.
But for cross-gen games, I could see some studios just rendering native, because compared to mid-gen they will have the power to, and the cost of doing much more might not be worth it.

It'll be about what resolution the non-native rendering is done at, if people can even tell as we move on.
 
If the PS5 can only render 1440p for the UE5 demo, the XSX may only render 1550p~1600p images.

I think the difference will be wider: 2-3 TFlops alone means 20-30% more compute, and therefore 20-30% more per-pixel processing. We also have 52 CUs vs 36, and some functions such as ray tracing depend more on the number of units than on frequency; its GPU has more RAM bandwidth, and the CPU is faster too.
If the PS5 can render 1440p, I believe the XSX will stay at 1872p or more.
 
I think the difference will be wider: 2-3 TFlops alone means 20-30% more compute, and therefore 20-30% more per-pixel processing. We also have 52 CUs vs 36, and some functions such as ray tracing depend more on the number of units than on frequency; its GPU has more RAM bandwidth, and the CPU is faster too.
If the PS5 can render 1440p, I believe the XSX will stay at 1872p or more.

And XSX won't have to render as much geometry due to slower streaming speeds. ;-)
 
I think the difference will be wider: 2-3 TFlops alone means 20-30% more compute, and therefore 20-30% more per-pixel processing. We also have 52 CUs vs 36, and some functions such as ray tracing depend more on the number of units than on frequency; its GPU has more RAM bandwidth, and the CPU is faster too.
If the PS5 can render 1440p, I believe the XSX will stay at 1872p or more.

https://www.silisoftware.com/tools/screen_aspect_ratio_calculator

The CPU has nothing to do with resolution. Using this calculator and assuming 20% more pixels than PS5, the XSX would render at 1577p (2805x1577, probably not a noticeable difference). The IQ is so good in the demo that I am not sure native 4K would even be noticeable. DF couldn't count pixels and would have called it native 4K without Epic Games' honesty...
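That 1577p figure is just the 1440p pixel count scaled up by 1.2 while keeping a 16:9 frame; a sketch of the same calculation the linked calculator performs:

```python
import math

# Find the 16:9 resolution with 20% more pixels than 2560x1440.
base_w, base_h = 2560, 1440
scale = math.sqrt(1.2)          # pixel count scales with the square of the linear factor
new_h = round(base_h * scale)   # 1577
new_w = round(base_w * scale)   # 2804 (the calculator rounds to 2805)
print(new_w, new_h)
```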

Maybe they use software Analytical AA like in Reyes at least for the micropolygon part.

 
If the resolution difference is not noticeable for the majority of players, I bet MS will ask devs to use the extra power elsewhere instead of wasting it on a slight resolution increase.
 
Using this calculator and assuming 20% more pixels than PS5, the XSX would render at 1577p (2805x1577, probably not a noticeable difference).
Yep, and although upping the resolution is the simplest thing to do, I would prefer them not to in this case.
A better use would probably be to render at the same resolution and spend the difference on effects etc. At that (relatively) low resolution, the TF-to-pixel ratio is a lot better than at a higher res.
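A rough way to put numbers on the "TF to pixel" point, using the headline peak TF figure and assuming a 30 fps target (theoretical peaks only, not real-world performance):

```python
# Peak FLOPs available per pixel per frame.
def flops_per_pixel(tflops, width, height, fps=30):
    return tflops * 1e12 / (width * height * fps)

print(flops_per_pixel(10.28, 2560, 1440))  # ~93,000 FLOPs/pixel at 1440p
print(flops_per_pixel(10.28, 3840, 2160))  # ~41,000 FLOPs/pixel at native 4K
```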
 
Someone ought to graph that resolution trend. @Dictator : DF article there? Graph the change in game rendering modes (res and framerate) across the lifespan of various consoles to see how resolution and framerate are sacrificed for complexity.
I wonder if that would inadvertently bias towards the dominant game genre of the day. 3D platformers were dominant at some point, then FPS, then linear photoreal games; now open worlds are all the rage, both as twitchy FPS and more slow-paced TPS... and each genre has a very different bias towards resolution or frame rate, and between high poly counts vs high texture detail. Interiors vs exteriors.
 
https://www.silisoftware.com/tools/screen_aspect_ratio_calculator

The CPU has nothing to do with resolution. Using this calculator and assuming 20% more pixels than PS5, the XSX would render at 1577p (2805x1577, probably not a noticeable difference).


The CPU has a lot to do with better framerates and less stuttering, of course, but it can also be used for per-pixel post-processing; in any case it was a major bottleneck last gen. A faster CPU is an important advantage either way: more software processing available to the developers.

20% more? Only if the 10 TFlops is fixed and not dynamic, if the ray tracing is based on the same number of CUs, and if the GPU bandwidth is the same. But last time I checked, reality was different and those assumptions are all false. So I believe it's safe to say it's more than 30%, if Cerny and Microsoft gave the right numbers, of course.
 
The CPU has a lot to do with better framerates and less stuttering, of course, but it can also be used for per-pixel post-processing; in any case it was a major bottleneck last gen. A faster CPU is an important advantage either way: more software processing available to the developers.

20% more? Only if the 10 TFlops is fixed and not dynamic, if the ray tracing is based on the same number of CUs, and if the GPU bandwidth is the same. But last time I checked, reality was different. So I believe it's safe to say it's more than 30%, if Cerny and Microsoft gave the right numbers.

The x86 CPU is not a Cell processor. I have never found a GDC presentation since 2013 that talks about using the CPU for pixel processing. The RT/TMU units are tied to the GPU clock like the CUs, which means the XSX has 15% more CU power and RT/TMU capacity. Do you have a devkit? If not, I prefer to stick to the official numbers, at least until the first batch of Digital Foundry face-offs. Before that it is just faith; after all, some people believe this too.
 
So I believe it's safe to say it's more than 30%, if Cerny and Microsoft gave the right numbers, of course.
If you believe what Cerny said, it's performing above what the TF number indicates because of the modifications they made to RDNA2, which are proprietary to Sony and not part of the AMD roadmap. Either you believe him or you don't. He said it's wrong to use TF as a performance metric: it's just the sum of ALUs multiplied by the clock, and there are a lot of other things going on that can affect cache hit ratio, occupancy, etc...
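The "sum of ALUs multiplied by the clock" remark is the standard peak-FLOPs formula; plugging in the public specs (peak clocks, saying nothing about sustained performance):

```python
# Peak FP32 TFLOPS = CUs x 64 shader ALUs per CU x 2 ops per clock (FMA) x clock (GHz) / 1000.
def peak_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

print(peak_tflops(36, 2.23))    # ~10.28 TF (PS5 at its maximum clock)
print(peak_tflops(52, 1.825))   # ~12.15 TF (XSX)
```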
 
If you believe what Cerny said, it's performing above what the TF number indicates because of the modifications they made to RDNA2, which are proprietary to Sony and not part of the AMD roadmap. Either you believe him or you don't. He said it's wrong to use TF as a performance metric: it's just the sum of ALUs multiplied by the clock, and there are a lot of other things going on that can affect cache hit ratio, occupancy, etc...
I have little reason not to believe him, but did he quantify how much difference it makes? 0.1%, or better thermals and power draw?
I have little reason to believe it won't perform as expected based on its TF.
Expecting more or less GPU power/efficiency from either console than the TF value suggests, until we know a lot more, is just guesswork as far as I can tell.
 
I have little reason not to believe him, but did he quantify how much difference it makes? 0.1%, or better thermals and power draw?
I have little reason to believe it won't perform as expected based on its TF.
Expecting more or less GPU power/efficiency from either console than the TF value suggests, until we know a lot more, is just guesswork as far as I can tell.
No, he didn't give numbers; it's not really possible at this stage. Honestly we have no clue what the real-world impact of their alterations is going to be. He was singling out the cache scrubbers as a Sony-exclusive feature that AMD wouldn't use on PC ("developed just for us"). And he said that during heavy streaming it prevents continuously flushing the entire GPU cache just because the storage engine DMAed something. I don't know how GPU caches work, so I have no idea if it's a big deal or not. Until now I had no idea they needed to be flushed because another module decides to write something in RAM. I'm assuming flushing the GPU caches is a big inefficiency hit.

He also pointed out the importance of keeping data closer to where it needs to be used as an important improvement in efficiency, but that comment applied to RDNA2 in general, not necessarily the PS5.

Another point I find interesting is the reiteration that their collaboration with AMD is about developing features together that help both worlds. He said if we see RDNA2 cards launching with features similar to the PS5's, it means their collaboration succeeded. I'm curious whether that's poking through the confidentiality between custom-silicon clients, and whether what Sony and AMD developed together is restricted for competing clients but allowed in AMD's own products.
 
15%?
Care to show how your math works?
I guess he did 10.3 / 12.2 = 0.844, so it's 84% of the XSX, a 16% deficit, rounded to a nicer-sounding 15%.
Alternatively, 12.2 / 10.3 = 1.18, so the XSX has 18% more TFs.

Are you aware that there are things that are not clock-dependent, such as ray tracing, which is influenced more by the number of CUs...
How so? You have RT units capable of performing however many intersect tests per clock. 1 unit at 10 GHz should be able to process the same number as 10 units at 1 GHz, I'd have thought. How does parallelism help with AMD's implementation of RT in RDNA2?
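Assuming one ray accelerator per CU with a fixed number of box tests per clock (AMD quotes 4 for RDNA2), the peak intersection rate is just units x clock, so the gap tracks the TF gap rather than the raw CU count:

```python
# Peak box-intersection rate = ray accelerators x tests per clock x clock (GHz),
# giving billions of tests per second.
def peak_box_tests(ray_accels, clock_ghz, tests_per_clock=4):
    return ray_accels * tests_per_clock * clock_ghz

ps5 = peak_box_tests(36, 2.23)    # ~321 Gtests/s
xsx = peak_box_tests(52, 1.825)   # ~380 Gtests/s
print(xsx / ps5)                  # ~1.18x, the same ratio as the TF numbers
```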
 