Digital Foundry Article Technical Discussion [2021]

Is it clearly more potent on paper though?

The only area where it's clearly more potent is memory bandwidth, but that's only for 10 GB of the total 16 GB, and if a game needs more than 10 GB of VRAM it may run into issues with the slower pool.
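To put rough numbers on that split (using the publicly quoted figures, so treat them as approximate), here's a quick back-of-envelope:

```python
# Publicly quoted figures (approximate): XSX splits its 16 GB into a fast
# "GPU optimal" pool and a slower standard pool; PS5 is one uniform pool.
XSX_FAST_GB, XSX_FAST_BW = 10, 560   # GB, GB/s
XSX_SLOW_GB, XSX_SLOW_BW = 6, 336    # GB, GB/s (shared with OS/CPU workloads)
PS5_GB, PS5_BW = 16, 448             # GB, GB/s

print(f"XSX fast pool: {XSX_FAST_BW} GB/s across {XSX_FAST_GB} GB")
print(f"XSX slow pool: {XSX_SLOW_BW} GB/s across {XSX_SLOW_GB} GB")
print(f"PS5 uniform  : {PS5_BW} GB/s across {PS5_GB} GB")

# If a frame's working set spills past the 10 GB fast pool, those extra
# accesses run at 336 GB/s -- below PS5's 448 GB/s -- which is the concern.
```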

The CPU clock advantage is nothing, and the GPU isn't faster than PS5's across the board, as there are areas where PS5's GPU is faster due to its clock speed advantage.

I just think it's a case of:

1. Current game engines not unanimously preferring a wider GPU
2. RDNA2 CU scaling not being as efficient as MS thought it would be (makes sense looking at historic AMD PC GPUs)
3. Sony's gamble of 'A rising tide lifts all boats' seems to be paying off

It'll be interesting to see where the performance gap is in two years' time.

Most likely dev focus.

There are a lot more PS4s out there than Xbox Ones. With that in mind I'm sure a lot of developers bet on Sony out of the gate and made the PS5 their lead console for next-gen stuff; couple that with Sony apparently having working hardware and dev kits earlier than MS. Also there's the Series S to throw a wrench in things, although it should be a net gain for the Series X since any optimizations done for the S should boost the X.

Now, is the Xbox Series X really that much more powerful? Who knows how it will shake out. I think we won't really know until either refresh systems come out around 2024 or we transition to the gen after this.

I have said before that I think this will be a faster generation than the last. Ray tracing is just really pitiful on both these consoles, and a jump to RDNA 3 (which should be ready in 2022) or RDNA 4 in 2023/24 may be much smarter than just clocking these systems faster and adding more CUs. You also get the benefit of a much newer Ryzen processor and PCIe 5 to draw from. But that is a whole other conversation.
 
Surely if a game is tailored to PS5's strengths it wouldn't still perform better on XSX?

If a game dev decides to go all out and optimize the heck out of the PS5, it probably would outperform the XSX by a decent margin, playing to the higher clocks, cache arch, maybe even the SSD speeds.
And to answer your question more precisely, the PS5 is most likely the easier target to extract performance out of for now. The narrow and fast GPU would allow that, as well as not having to think about the 10 GB VRAM limit, which some games, especially sloppy ones, could go over (like on PC).
That 10 TF is easier to sustain than the XSX's 12 TF, I can imagine.

It may just be that, in typical AMD fashion, CU scaling is poor.

Historically, ATI/AMD GPUs have always scaled poorly with CU count (HD 5850 vs 5870, HD 6950 vs 6970, etc.).

If you clock Vega 56 and Vega 64 at the same clocks they're within 1% of each other.

Yeah, maybe. RDNA2 seems to love high clocks, and the XSX is a bit of an outlier in the clock department for a 10+ TF performance-class GPU. All RDNA2 dGPUs clock at least as high as the PS5, mostly even higher. And seeing the PS5 is close to the 6600 XT (10.2 TF) in gaming, it seems the PS5 is doing what it's supposed to be doing. It's what we suspected before: no one is punching above their weight, it's the XSX that should be performing better.
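For reference, the paper TF figures thrown around here all come from the same simple formula; a quick sketch with the commonly cited CU counts and clocks (the 6600 XT clock is just the value implied by the 10.2 TF figure above):

```python
# Paper FP32 throughput: CUs * 64 ALUs per CU * 2 ops per FMA * clock (GHz)
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"PS5     (36 CU @ 2.23 GHz) : {tflops(36, 2.23):.2f} TF")   # ~10.3
print(f"XSX     (52 CU @ 1.825 GHz): {tflops(52, 1.825):.2f} TF")  # ~12.1
print(f"6600 XT (32 CU @ ~2.49 GHz): {tflops(32, 2.49):.2f} TF")   # ~10.2
```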

That is a very big possibility as it's clearly an architecture that's built for speed.

It's what I keep finding. As a PC gamer I monitor and follow benchmarks, and whenever RDNA2 GPUs are tested, they're always high clocked. I tested a 6600 XT myself and it's quite close to the PS5, which tells me the PS5 is extracting its performance as it should.
Beforehand, like iroboto wrote, I thought the XSX would pull ahead in line with its specs, but right now it doesn't. It might yet.

One could underclock an RDNA2 dGPU, say an RX 6800, and see what happens to performance. That's still 8 CUs more, and it has its own bandwidth and 16 GB of GDDR6, so it's quite hard to draw fair conclusions even then, but it would give an idea of how RDNA2 responds to clocks.

Now, is the Xbox Series X really that much more powerful? Who knows how it will shake out. I think we won't really know until either refresh systems come out around 2024 or we transition to the gen after this.

It technically is. Something's bottlenecking it, be that GPU saturation vs clock speeds, memory contention (not likely) or dev focus. It could be a combination of different factors as well. Sony has historically had the better / closer-to-the-metal tools since the PS4 era, right?
Maybe the XSX GPU has its advantages in RT (CU saturation). The XSX Minecraft RT demo was quite impressive performance-wise. The problem is that the demo never made it out the door, so there's no benchmark to go on aside from MS's own.

I have said before that I think this will be a faster generation than the last. Ray tracing is just really pitiful on both these consoles, and a jump to RDNA 3 (which should be ready in 2022) or RDNA 4 in 2023/24 may be much smarter than just clocking these systems faster and adding more CUs. You also get the benefit of a much newer Ryzen processor and PCIe 5 to draw from. But that is a whole other conversation.

It's maybe 'pitiful' but not useless either. When optimized, what you're getting is quite nice. Look at Rift Apart: the RT reflections are quite convincing, especially considering the rest of the game is quite 'next gen' in fidelity and performance. But I'd agree there's plenty of room to improve with ray tracing, while native resolution increases have taken a back seat (and I'm happy about that).
4K as a standard will be around for a long, long time to come.

Edit: and to say it again, the PS5 does come out as the 'winner' so far; it has lower specs but competes very well with the higher-specced box. However it's managing that, it's a feat and something I and others didn't expect back in the speculation threads here.
As for a true winner, there is none. Both compete, and that's maybe the best for all of us, right? :p
 
If a game dev decides to go all out and optimize the heck out of the PS5, it probably would outperform the XSX by a decent margin, playing to the higher clocks, cache arch, maybe even the SSD speeds.
And vice versa, making that kind of a moot point. ;) There are enough differences that it isn't a 1:1 comparison between 12 TF and 10 TF giving XBSX a 20% advantage. A game made to leverage XBSX's CU compute advantage, perhaps leveraging more software rendering techniques, and without using PS5 optimised alternatives, should gain a 20% improvement in res or framerate or detail. And the same on PS5, a game tailored to PS5's strengths will hit bottlenecks on XBSX.

I'm more interested in the cross-plat games, the ones running on UE, Frostbite, et al., that have the PC as their architectural baseline. Are these engines suitably optimised to leverage the strengths of the subtly different consoles, so the games end up balanced reasonably well between them instead of leaning too heavily towards one platform?
 
But neither console is pure desktop RDNA2.
I'm fairly certain it's the same here, or possibly better on console.

It's important to note that the claim that AMD devices are not good at scaling CUs is a misconception.
They're not good at scaling CUs using the 3D pipeline, meaning the command processor in particular was not able to schedule work and take full advantage of the hardware on a per-cycle basis. This was a large pain point before RDNA, which allows instructions to be issued every cycle as opposed to every four cycles.

With compute shaders and the compute pipeline, the AMD cards are very efficient at dispatching enough work to keep all CUs busy. Which is why Dreams is so full of awesome.
 
I'm more interested in the cross-plat games, the ones running on UE, Frostbite, et al., that have the PC as their architectural baseline. Are these engines suitably optimised to leverage the strengths of the subtly different consoles, so the games end up balanced reasonably well between them instead of leaning too heavily towards one platform?
The current generation of PC GPUs is notably wider than the consoles, and over time consoles have been widening. It's only reasonable to assume the next generation of consoles will continue to grow wider as well, so engines should follow suit.

They should be taking advantage of additional compute units and moving off the 3D pipeline and the need for ROPs etc. Fixed-function hardware is awesome, but some engines are pushing against its hard limits around geometry etc. Even mesh shaders, which sit in the 3D pipeline, leverage CUs as opposed to the regular FF units.
I'd prefer to see more compute being used; I think compute is the future, and it's been trending that way.
 
The current generation of PC GPUs is notably wider than the consoles

They're also (quite) highly clocked, from the low-ish end to the high end (6600/XT to 6900 XT). AMD seems to have optimized RDNA2 for high clock speeds, as opposed to NV. I have no idea if that has something to do with the XSX's performance, just guessing.
 
They're also (quite) highly clocked, from the low-ish end to the high end (6600/XT to 6900 XT). AMD seems to have optimized RDNA2 for high clock speeds, as opposed to NV. I have no idea if that has something to do with the XSX's performance, just guessing.
It's hard to take advantage of clock speed if you can't feed it. The Infinity Cache is what allows the 6600+ series to take advantage of the higher clock speed; otherwise those cycles are just wasted. To do work, it needs feeding, and that means huge bandwidth requirements.

You either feed a little all the time, or you feed a lot in huge doses. NV took the latter and AMD the former.

tl;dr: a very fast clock speed with no work to do is just burning idle cycles.
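As a rough illustration of the 'feeding' point (public figures, approximate), compare DRAM bandwidth per unit of compute with and without a big cache in front of it:

```python
# Bytes of DRAM bandwidth available per FLOP of peak compute (approximate
# public figures): 6600 XT ~10.6 TF peak with 256 GB/s GDDR6 behind a 32 MB
# Infinity Cache; XSX ~12.15 TF at 560 GB/s (fast pool); PS5 ~10.28 TF at 448 GB/s.
configs = {
    "6600 XT (DRAM only)": (10.6, 256),
    "XSX (fast pool)":     (12.15, 560),
    "PS5":                 (10.28, 448),
}
for name, (tf, bw_gbs) in configs.items():
    print(f"{name:20s}: {bw_gbs / (tf * 1000):.3f} bytes/FLOP")
# The 6600 XT's ratio is by far the thinnest, so without the on-die cache its
# high-clocked CUs would spend a lot of those cycles waiting on memory.
```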
 
Did we ever get any detail on exactly what the cache scrubbers in PS5 help with and aim to achieve?

Could they be playing a part in PS5 keeping pace with XSX?
 
Did we ever get any detail on exactly what the cache scrubbers in PS5 help with and aim to achieve?

Could they be playing a part in PS5 keeping pace with XSX?
We're OT, but try the search function around here. IIRC we had a pretty thorough discussion on that topic.
You're likely not going to see that type of benefit from it; IMO it's probably more beneficial for streaming data into memory than for raw performance.
 
Still waiting for just one developer to confirm that they've chosen the SMT-disabled route on a cross-gen multiplatform game for Series X. I'd like to see the results of a 9% CPU clock advantage in action, if any.
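For what that advantage is worth on paper (using the commonly cited clocks, so approximate):

```python
# Commonly cited CPU clocks (approximate).
PS5_CPU     = 3.5   # GHz, up to
XSX_SMT_ON  = 3.6   # GHz, SMT enabled
XSX_SMT_OFF = 3.8   # GHz, SMT disabled -- the mode in question

print(f"SMT off vs PS5: {(XSX_SMT_OFF / PS5_CPU - 1) * 100:.1f}%")  # ~8.6%
print(f"SMT on  vs PS5: {(XSX_SMT_ON  / PS5_CPU - 1) * 100:.1f}%")  # ~2.9%
# The ~9% figure only materialises if a title actually ships with SMT disabled.
```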
 
Still waiting for just one developer to confirm that they've chosen the SMT-disabled route on a cross-gen multiplatform game for Series X. I'd like to see the results of a 9% CPU clock advantage in action, if any.

Maybe it's just not such a notable advantage.
 
Precisely why I would like to see it. More understanding of the performance characteristics of these machines and the games that run on them is the whole purpose of this forum, isn't it?

To be honest, I'm not really sure what the purpose of MS putting a non-SMT option into the mix was. SMT is just an inevitable thing if you're working as a multiplatform dev going into this gen.
 
It's important to note that the claim that AMD devices are not good at scaling CUs is a misconception.

When it comes to game performance and frame rate they historically never scaled very well with CU count.

5850 vs 5870
6950 vs 6970
7950 vs 7970
290 vs 290x

And others... when run at the same clock they're within 0-2% of each other in terms of frame rate, which is well below the difference in CU count.

With RDNA2, moving from 40 CUs to 60 CUs (50% more) offers 28-40% additional performance, which is an improvement over previous generations but still low enough to tell us something about XSX and PS5.
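Turning those figures into a scaling-efficiency number (pure arithmetic on the numbers above):

```python
# Scaling efficiency = observed speedup / theoretical speedup from extra CUs.
theoretical = 60 / 40 - 1            # +50% CUs
for observed in (0.28, 0.40):        # +28% to +40% measured performance
    print(f"+{observed:.0%} observed -> {observed / theoretical:.0%} CU scaling efficiency")
# Roughly 56-80% of the extra CUs show up as frame rate, before clock speed
# differences are even factored in -- which narrows the XSX/PS5 paper gap.
```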
 
To be honest, I'm not really sure what the purpose of MS putting a non-SMT option into the mix was. SMT is just an inevitable thing if you're working as a multiplatform dev going into this gen.

They detailed the option during several presentations last year. It benefits older games developed with limited threading support.
 
I guess that's for, like, OG Xbox games or 360 games, right? I was only thinking in terms of last gen and next gen, but it does make sense. It probably makes that level of emulation much easier to achieve.

I still hear that Sony chose their CU counts and preconfigured clocks to make PS4 and Pro backwards compatibility work on PS5, so I guess it shouldn't be surprising.
 
5850 vs 5870
6950 vs 6970
7950 vs 7970
290 vs 290x
All these GPUs suffered from the 4-cycle issue of the GCN architecture.
The reason clock speed helped GCN is that it could grind through those idle cycles faster and therefore issue instructions sooner.

https://gpuopen.com/wp-content/uploads/2019/08/RDNA_Architecture_public.pdf

See slides 6 and 7.

RDNA resolves some of the largest issues with scaling to more CUs.
You will get more performance out of increasing clock speed, up to the limit at which data can still be delivered fast enough for the work to be performed, but you will hit the physical limits of power draw. Going wider is the natural route to increasing performance per watt.
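A toy model of the issue-cadence point from those slides (deliberately simplified, just to show the mechanism):

```python
# GCN: each SIMD16 executes a wave64 over 4 cycles, so it can only accept a
# new instruction every 4th cycle. RDNA: a SIMD32 issues a wave32 every cycle.
def instructions_issued(cycles: int, issue_interval: int) -> int:
    return cycles // issue_interval

CYCLES = 1000
print("GCN  SIMD:", instructions_issued(CYCLES, 4), "instructions")   # 250
print("RDNA SIMD:", instructions_issued(CYCLES, 1), "instructions")   # 1000
# Higher clocks help GCN mainly by grinding through those dead cycles faster;
# RDNA removes them, which is part of why wider RDNA parts scale better.
```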
 