Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

I think people are trying to get an idea of what happens (within the same architecture) when you trade more cores against clockspeed.
Considering that the 2080 Ti has roughly 40% more cores than the 2080 Super but runs at a lower clock, it's a somewhat valid comparison.

Though the clockspeeds aren't lining up well as we can see.

Not sure how this will translate to RDNA 2, however. We could see a larger or smaller difference than we do with Turing.
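
For reference, a quick way to put the cores-vs-clock trade-off in numbers is FP32 TFLOPS = 2 x cores x clock. A small sketch with the reference boost clocks (real cards usually boost higher on both sides, so this is only ballpark):

Code:
# Rough FP32 throughput: wider-but-slower vs narrower-but-faster Turing parts.
# Reference boost clocks; actual GPU Boost clocks typically run higher on both cards.

def fp32_tflops(cores: int, clock_ghz: float) -> float:
    return 2 * cores * clock_ghz / 1000.0  # 2 FLOPs per core per clock (FMA), GFLOPS -> TFLOPS

ti = fp32_tflops(4352, 1.545)       # RTX 2080 Ti reference: 4352 cores @ 1545 MHz boost
super_ = fp32_tflops(3072, 1.815)   # RTX 2080 Super: 3072 cores @ 1815 MHz boost

print(f"2080 Ti: {ti:.2f} TF")      # ~13.4 TF
print(f"2080 S : {super_:.2f} TF")  # ~11.2 TF
print(f"cores: {4352 / 3072:.2f}x, clock: {1.545 / 1.815:.2f}x")  # ~1.42x cores at ~0.85x clock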

Just look at the 3DMark Time Spy GPU scores and see how the 5700 XT scales with frequency.
 
So if we're comparing bandwidth per TF, then:

PS5 = 43.6 GB/s per TF @ 10.28 TF, up to 48.7 GB/s per TF @ 9.2 TF (448 GB/s total)
XSX = 46.1 GB/s per TF @ 12.15 TF (560 GB/s pool)

Or is that bad math?

If correct it hardly seems a big difference (~6% in the very worst case)
That would apply to a scenario where the GPU has the bandwidth to itself, so it might not be entirely accurate once the CPU's share of the bus is counted (although there's probably not much meaning in it either way).
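
For anyone checking the arithmetic, it's just total bandwidth divided by peak compute, using the published 448 GB/s and 560 GB/s figures (and ignoring the CPU's share of the bus plus the XSX's slower 336 GB/s pool):

Code:
# Bandwidth per teraflop: total GB/s divided by peak FP32 TF.
# Ignores CPU contention and the XSX's split 560/336 GB/s memory pools.

configs = {
    "PS5 @ 10.28 TF": (448, 10.28),
    "PS5 @ 9.2 TF (old leak)": (448, 9.2),
    "XSX @ 12.15 TF": (560, 12.15),  # fast 10 GB pool only
}

for name, (gbps, tf) in configs.items():
    print(f"{name}: {gbps / tf:.1f} GB/s per TF")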

You seem a tad bit emotional here over something that may not be worth discussing. You're comparing 2 different products here. Scorpio was meant to make XBO games go to 4K. They had to take an existing architecture and make it work at 4K resolution. There were going to be changes to things to make that happen.
If the cache levels are sufficient for 4K given the number of CUs, why change things? The silicon could be useful elsewhere. GCN 2 was never designed around 4K. But RDNA 2 probably is.

mmm... put another way, the best-case scenario for GCN at 4K was to go wider, because on PC it meant that the pixel:triangle ratio went up (for console-driven assets & LODs). The scenario would become less dependent on the front-end, and CU utilization would go up with more pixel quads & a higher pixel:texel ratio (for console-driven texture assets & shader quality).

edit: although I might have that backwards in a way. A better design for increasing utilization at lower ratios (via better culling, for instance) would still be good at higher ratios anyway.
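
To put a toy number on that: with a fixed, console-tuned visible-triangle budget (the 1.5M figure below is invented purely for illustration), the pixel:triangle ratio climbs quickly with output resolution:

Code:
# Toy pixel:triangle ratio. The visible-triangle count is made up; the point is
# only that fixed (console-driven) geometry at higher resolutions means more
# pixels per triangle, i.e. fuller pixel quads and better CU utilization.

visible_triangles = 1_500_000  # hypothetical, fixed by console-targeted assets/LODs

resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / visible_triangles:.1f} pixels per triangle")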

If they didn't care to beef up the cache, maybe there were bigger items to resolve. I don't think it's bullshit; maybe it's just not needed this time around.
Maybe... although you'd think it would be more useful now that the ROPs go through L2 while we're sampling higher-res textures. That said, doubling the L2 per slice might be too expensive given the cache design itself.
 
In general, CPUs go fast and GPUs go wide. There's a reason for this and it's based on what model is the most efficient (cost/power/thermals) at accomplishing the majority of the workloads they are tasked with.

And higher occupancy also equals higher heat (hence AVX downclocks on CPUs) generated over a smaller area. There are trade-offs to this approach.

In the theoretical world, having faster processors is (almost?) always better than having more processors when targeting the same performance. In the "we actually have to make these things out of real materials, with real limits on transistor performance, power delivery and cooling capability" world, this is no longer true.
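
A crude way to see why "wide" usually wins on power: dynamic power is roughly C*V^2*f, and since voltage has to rise with frequency, clocking harder costs something closer to f^3 while adding units costs roughly linearly. A toy model only, ignoring leakage and the fact that real V/f curves flatten out:

Code:
# Toy comparison of two ways to double throughput, assuming dynamic power
# ~ C * V^2 * f with V scaling roughly linearly with f. Real silicon is
# messier (leakage, non-linear V/f curves), so treat this as a sketch.

def power_to_double_via_clock() -> float:
    return 2.0 ** 3   # 2x frequency -> ~8x power under the cube rule

def power_to_double_via_width() -> float:
    return 2.0        # 2x units at the same V/f point -> ~2x power

print("2x throughput via clock:", power_to_double_via_clock(), "x power")
print("2x throughput via width:", power_to_double_via_width(), "x power")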
 
2070S with official specs at 9.1 TFLOPS

The 2070S on average performs closer to 10 than 9 TF, whilst the PS5's 10 TF is at max clocks. A 2070S doesn't seem far off in comparison to the PS5 GPU, so far.

For XSX it's between a 2080S and a 2080Ti, perhaps closer to the latter. You can't just stare at TF; bandwidth is important too.
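
The "closer to 10 than 9 TF" bit is just GPU Boost running above the spec-sheet clock. A quick sketch; the 1905 MHz sustained clock is an assumed, typical-looking number rather than a measurement:

Code:
# RTX 2070 Super FP32 throughput at the official boost clock vs an assumed
# typical in-game GPU Boost clock (1905 MHz is illustrative, not a spec).

CORES = 2560  # RTX 2070 Super CUDA cores

def tflops(clock_mhz: float) -> float:
    return 2 * CORES * clock_mhz / 1e6

print(f"official boost (1770 MHz): {tflops(1770):.2f} TF")     # ~9.1 TF
print(f"assumed sustained (1905 MHz): {tflops(1905):.2f} TF")  # ~9.75 TF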
 
A 2080Ti at those resolutions will get choked by the CPU and sometimes even by the game engine. You *really* want to avoid these "combined score" ratios for anything meaningful.

Or look at 4K reviews for specific games where the GPU is the limiting factor and hope the rest of the system (CPU/RAM) is kept standard.

https://www.eurogamer.net/articles/digitalfoundry-2019-09-27-geforce-rtx-2080-super-benchmarks-7001

Easy example of what I mean.
AFAIK almost all games (at least the ones still being benchmarked) become GPU-limited at 4K, otherwise you'd observe the same framerates at 1080p and 4K, which isn't happening.

What is in question is whether they become bandwidth- and/or compute- and/or fillrate-limited on each graphics card. But in this case, I think techreport's comparisons actually become fair if we want to analyse the big picture. Different games will face different bottlenecks, and although we should indeed expect an increase in compute requirements overall for next-gen, I don't think compute will suddenly become the bottleneck for all games. Same for bandwidth, of course.
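
The CPU-vs-GPU-limited point boils down to a toy model: frame time is roughly the slower of the CPU's and the GPU's per-frame work, and only the GPU side grows with resolution. The millisecond figures below are invented just to show the shape of the argument:

Code:
# Toy bottleneck model: frame time ~ max(CPU time, GPU time), with GPU time
# scaled by pixel count (a simplification) and CPU time held fixed.
# The ms figures are hypothetical, chosen only to illustrate the crossover.

CPU_MS = 7.0            # hypothetical per-frame CPU cost
GPU_MS_AT_1080P = 5.0   # hypothetical per-frame GPU cost at 1080p

for name, pixels in {"1080p": 1920*1080, "1440p": 2560*1440, "4K": 3840*2160}.items():
    gpu_ms = GPU_MS_AT_1080P * pixels / (1920 * 1080)
    frame_ms = max(CPU_MS, gpu_ms)
    limiter = "GPU" if gpu_ms > CPU_MS else "CPU"
    print(f"{name}: {1000 / frame_ms:.0f} fps ({limiter}-limited)")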


Sure, let's pin the PS5 at a 2070S with official specs at 9.1 TFLOPS (oh, let's snip off about 1 TFLOP because of reasons) and pin the Xbox SX at 13.45 TFLOPS (and add 1 TFLOP or so because of reasons) and call that a fair comparison.
Give it a few more weeks and he'll be casually claiming the PS5 vs. SeriesX comparison is close to a mobile RTX 2060 Max-Q vs. an overclocked Titan RTX.
I.e. don't bother. This is probably being done on purpose.
 
We're way into the weeds now if people are comparing Nvidia GPU products to these consoles.

I understand the idea behind it, but, IMO, there are just too many variables introduced to be confident this directly maps to the new consoles. Different architecture, different platform, different programming model (designed to run on general hardware instead of specific hardware). That's a lot.
 
@mrcorbo Yes. I also have a bit of an issue of looking at benchmarks from current-gen games and extrapolating to the new consoles. Current gen games may have very different bottlenecks than future games. We'll probably have new sets of algorithms developed that are important and have very different characteristics than whatever aggregate benchmark tech report is using.
 
Those really fast SSDs with HW compression and efficient streaming might be a bigger difference between PC and consoles than TFLOPS. If/when some first-party titles go all-in on streaming, they can create some insane assets that are just not feasible in mainstream PC gaming, i.e. anything close by can have insane textures/models, as there is no problem getting those assets into RAM only when needed.

Imagine what Gran Turismo can do when you are right behind the car in front of you, what Horizon Zero Dawn can do when a robot is in the player's face, or what MLB can do when zooming into a player's face... Of course PC will catch up, but there could be a period of time when streaming is the big differentiator and not the flops.
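
For a sense of scale, here's the per-frame streaming budget at 60fps implied by the publicly quoted raw/compressed throughput figures, with a typical SATA SSD thrown in for contrast (the SATA and "typical compressed" numbers are ballparks, not specs):

Code:
# Per-frame streaming budget at 60 fps from the publicly quoted I/O figures.
# PS5: 5.5 GB/s raw, ~8-9 GB/s typical with Kraken; XSX: 2.4 GB/s raw,
# ~4.8 GB/s with BCPack. The SATA SSD figure is a generic ballpark.

FPS = 60

drives = {
    "PS5 raw (5.5 GB/s)": 5.5,
    "PS5 compressed (~8.5 GB/s)": 8.5,
    "XSX raw (2.4 GB/s)": 2.4,
    "XSX compressed (~4.8 GB/s)": 4.8,
    "SATA SSD (~0.55 GB/s)": 0.55,
}

for name, gb_per_s in drives.items():
    print(f"{name}: ~{gb_per_s / FPS * 1000:.0f} MB per frame")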
 
I understand the idea behind it, but, IMO, there are just too many variables introduced to be confident this directly maps to the new consoles. Different architecture, different platform, different programming model (designed to run on general hardware instead of specific hardware). That's a lot.
You're both right. When we remove the marketing talk and the potential that features have, and just look at raw power, then I find the Gears 5 benchmark to be useful. But it's not indicative of potential performance in the future.

I think the easiest way (only because we have one XSX benchmark here) is to just compare a 5700XT running Gears 5 at 4K @ Ultra settings. DF reports the XSX is a flawless 60fps and it's running higher than ultra settings. Though that might be console advantage there, or the testing wasn't thorough enough, I'm not sure. But the 5700XT here is woefully behind. The 5700XT 50th Anniversary edition is a fairly close match for the PS5, running very similar memory and compute figures. However, in benchmarks the 50th Anniversary edition doesn't really pull all that far away from the 5700XT.
[Image: Gears 5 benchmarked at 1080p/1440p/4K; RTX 2080 Ti pushes 60fps]
 
Those really fast SSDs with HW compression and efficient streaming might be a bigger difference between PC and consoles than TFLOPS. If/when some first-party titles go all-in on streaming, they can create some insane assets that are just not feasible in mainstream PC gaming, i.e. anything close by can have insane textures/models, as there is no problem getting those assets into RAM only when needed.

Imagine what Gran Turismo can do when you are right behind the car in front of you, what Horizon Zero Dawn can do when a robot is in the player's face, or what MLB can do when zooming into a player's face... Of course PC will catch up, but there could be a period of time when streaming is the big differentiator and not the flops.

I'm actually very curious how cross-platform will work on PC, since many PCs still have SATA SSDs or even HDDs. Maybe huge RAM recommendations for people with slow drives, and just pre-load a lot of data. Also a bit of a side track.
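
A rough sketch of what "just pre-load it" looks like on slower drives; both the 8 GB working-set size and the drive speeds are illustrative ballparks, not figures from any real game:

Code:
# Time to pre-load a streaming working set into RAM on slower drives.
# The 8 GB working set and the drive speeds are illustrative only.

PRELOAD_GB = 8.0  # hypothetical resident streaming working set

drive_speeds_gb_per_s = {
    "HDD (~0.15 GB/s)": 0.15,
    "SATA SSD (~0.55 GB/s)": 0.55,
    "NVMe SSD (~3.5 GB/s)": 3.5,
}

for name, speed in drive_speeds_gb_per_s.items():
    print(f"{name}: ~{PRELOAD_GB / speed:.0f} s to pre-load {PRELOAD_GB:.0f} GB")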
 
You're both right. When we remove the marketing talk and the potential that features have, and just look at raw power, then I find the Gears 5 benchmark to be useful. But it's not indicative of potential performance in the future.

I think the easiest way (only because we have one XSX benchmark here) is to just compare a 5700XT running Gears 5 at 4K @ Ultra settings. DF reports the XSX is a flawless 60fps and it's running higher than ultra settings. Though that might be console advantage there, not sure. But the 5700XT here is woefully behind. The 5700XT 50th Anniversary edition is a fairly close match for the PS5, running very similar memory and compute figures. However, in benchmarks the 50th Anniversary edition doesn't really pull all that far away from the 5700XT.
Gears 5 PC is running higher settings than console though. :p
 
Gears 5 PC is running higher settings than console though. :p
Not the one they tested for XSX.
It was running above Ultra settings according to Richard.

The Coalition's Mike Rayner and Colin Penty showed us a Series X conversion of Gears 5, produced in just two weeks. The developers worked with Epic Games in getting UE4 operating on Series X, then simply upped all of the internal quality presets to the equivalent of PC's ultra, adding improved contact shadows and UE4's brand-new (software-based) ray traced screen-space global illumination. On top of that, Gears 5's cutscenes - running at 30fps on Xbox One X - were upped to a flawless 60fps. We'll be covering more on this soon, but there was one startling takeaway - we were shown benchmark results that, on this two-week-old, unoptimised port, already deliver very, very similar performance to an RTX 2080.
 
@iroboto But PS5 is not a 5700XT. It's a new architecture, so even clock for clock it probably has some real gains.
I have some doubts that RDNA2 will pull away from RDNA1 per clock cycle. That would require additional architectural changes. I think we'll see higher clocks with RDNA 2, though.

RDNA 2 (also RDNA2) is the successor to the RDNA 1 microarchitecture and is planned to be released in 2020. According to statements from AMD, RDNA 2 will be a "refresh" of the RDNA 1 architecture. More information about RDNA 2 was made public on AMD's Financial Analyst Day on March 5th, 2020.

I think RDNA 2 contains the needed features that RDNA 1 doesn't have, but I largely believe it to be the same per-clock performance, with the gains coming from better clocking.
They have stated otherwise, aiming for 50% more performance per watt. But I'm not sure to be honest.
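
For what it's worth, a back-of-envelope of what +50% perf/W would mean, using the 5700 XT's total board power as a crude stand-in for GPU power (which it isn't, so treat this as illustrative only):

Code:
# Back-of-envelope for AMD's claimed +50% performance per watt, RDNA1 -> RDNA2.
# Uses the 5700 XT's 225 W total board power as a stand-in for GPU power,
# which is not quite the right denominator; illustrative only.

rdna1_tf = 9.75       # 5700 XT peak FP32
rdna1_watts = 225.0   # 5700 XT total board power

rdna1_tf_per_w = rdna1_tf / rdna1_watts
rdna2_tf_per_w = rdna1_tf_per_w * 1.5   # claimed +50% perf/W

print(f"RDNA1: {rdna1_tf_per_w:.3f} TF/W")
print(f"RDNA2 (claimed): {rdna2_tf_per_w:.3f} TF/W")
print(f"-> 12.15 TF (XSX-class) at ~{12.15 / rdna2_tf_per_w:.0f} W under this model")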

Even then, as much as I hate to admit it, I may have been wrong to mock RDNA 1.9.
If Sony does not announce a standard AMD VRS solution, I'm fairly confident both are running RDNA 1.9 and not RDNA 2.0.

Both of them running custom VRS solutions, or one having a custom VRS solution and the other not, would be entirely suspect when AMD provides its own VRS solution with RDNA 2.0.
 
I have some doubts that RDNA2 will pull away from RDNA1 per clock cycle. That would require additional architectural changes. I think we'll see higher clocks with RDNA 2, though.



I think RDNA 2 contains the needed features that RDNA 1 doesn't have, but I largely believe it to be the same per-clock performance, with the gains coming from better clocking.

If you assume clock-for-clock RDNA2 is not much better than RDNA1, then most of the gains for Series X would be coming from console advantages, so the PS5 theoretically would have the same benefit and probably come in way ahead of the 5700XT as well. If the 2080Ti is a proxy for Series X, based on Digital Foundry's first look, then the PS5 would come in around a 2080 just based on ALU.
 
If you assume clock-for-clock RDNA2 is not much better than RDNA1, then most of the gains for Series X would be coming from console advantages, so the PS5 theoretically would have the same benefit and probably come in way ahead of the 5700XT as well. If the 2080Ti is a proxy for Series X, based on Digital Foundry's first look, then the PS5 would come in around a 2080 just based on ALU.
That would be a valid way to look at things. We don't actually have an RDNA card with 52 CUs though, so it's hard to say how much the additional CUs are adding to performance here vs console advantages. But indeed, those are the flaws in my comparison.
I'm trying to look for ballpark performance here; the last thing I want to do is look back at the earlier rumours and work forwards. I feel like that is a waste of time.

On a side note, the Radeon VII is woefully terrible for 13.8 TF and 1 TB/s of bandwidth.
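
Spelling that proxy logic out as pure ALU scaling (with all the caveats above; performance doesn't really scale linearly with TF, so this is a ballpark, not a prediction):

Code:
# The "2080 Ti as a proxy for Series X" argument as pure ALU scaling.
# Assumes performance scales linearly with peak FP32, which it doesn't quite.

xsx_tf = 12.15
ps5_tf = 10.28

ratio = ps5_tf / xsx_tf
print(f"PS5 has ~{ratio:.0%} of XSX's peak FP32")
print(f"If XSX ~ 2080 Ti in Gears 5, pure ALU scaling puts PS5 at ~{ratio:.0%} of that,")
print("which lands in roughly RTX 2080 / 2080 Super territory at 4K.")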
 