Digital Foundry Article Technical Discussion [2020]

PS5's average performance leads the 5700 by 25-30% in the video, and the 5700 XT generally leads the 5700 by around 15%. Therefore the PS5 should lead the 5700 XT by roughly 10% or more.
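A quick back-of-envelope check of that chain of ratios, purely as a sketch; the percentages below are the rough figures quoted above, not measurements:

# Chaining the relative-performance figures quoted above (rough forum numbers,
# not measurements): PS5 vs RX 5700, and RX 5700 XT vs RX 5700.
ps5_over_5700 = (1.25, 1.30)   # PS5 leads the RX 5700 by 25-30% in the video
xt_over_5700 = 1.15            # 5700 XT typically leads the 5700 by ~15%

low, high = (lead / xt_over_5700 - 1 for lead in ps5_over_5700)
print(f"implied PS5 lead over the 5700 XT: {low:.0%} to {high:.0%}")
# -> roughly 9% to 13%, i.e. "10% or more" give or take rounding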

By the way, do you have any evidence that the PS5 performs below the 5700 XT?

You're right, I only looked at the lowest point of PS5 performance, but that appears to be a slight outlier in Alex's video.

So yes, I would conclude from Alex's video that the PS5 is a little faster than the 5700XT in this game (as one would expect it to be).
 
Written DF Article @ https://www.eurogamer.net/articles/digitalfoundry-2020-assassins-creed-valhalla-ps5-vs-pc

Assassin's Creed Valhalla - what PC hardware is needed to match PS5 visuals?
And which settings get the best from your rig?

With the arrival of the next console generation, it's inevitable that the hardware requirements for PC software will rise as graphical quality and complexity increase. The baseline is reset with the arrival of Xbox Series X and PlayStation 5, and we wanted to get an outline of what kind of PC graphics kit is required to match or even exceed console hardware. To do this, we broke down the visual make-up of Assassin's Creed Valhalla, matching PS5 and PC in terms of quality settings - getting a good grip on optimised settings in the process, where we measure the bang for the buck of every preset and suggest the optimal settings for PC users.

First of all, it's worth pointing out that we may well see very different results for very different games. In assessing Watch Dogs Legion, I came to the conclusion that Xbox Series X could be matched by a PC running an Nvidia RTX 2060 Super - mostly owing to the onerous demands of ray tracing, an area where GeForce hardware has a clear advantage. With Assassin's Creed Valhalla, we see something very different. First of all, the game doesn't seem to run that well on Nvidia kit, and there's no RT in use, nullifying a key GeForce advantage. Meanwhile, AMD seems to fare significantly better. By our reckoning a Radeon RX 5700XT should get very close to the PS5 experience.

It's worth pointing out that some of this comparison work is theoretical as there are no like-for-like settings between consoles and PC. For example, the dynamic resolution scaling system is very different. PS5 spends most of its time between 1440p and 1728p in our pixel count measurements, with many areas and cutscenes locked to 1440p. PC is different - bizarrely perhaps, the anti-aliasing system is also the DRS system, with the adaptive setting giving between 85 and 100 per cent of resolution on each axis, according to load. Put simply, PC has a narrower DRS window. So to get an idea of relative performance between PC and consoles, I used an area of the game that drops beneath 60fps on PlayStation 5, and does so while rendering at 1440p resolution.
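To put rough numbers on those two windows, here is a small sketch (not from the article; the 4K output resolution on the PC side is just an assumed example, since the PC window scales with whatever output the user picks):

# Rough pixel counts for the two DRS windows described above. The PC output
# resolution is an assumed example (4K); its 85-100% window scales per axis.
def pixels(width, height, scale=1.0):
    return int(width * scale) * int(height * scale)

# PS5 window as pixel-counted by DF: 1440p up to 1728p (16:9)
ps5_low, ps5_high = pixels(2560, 1440), pixels(3072, 1728)

# PC "adaptive" AA/DRS window: 85% to 100% of the chosen output res, per axis
pc_low, pc_high = pixels(3840, 2160, 0.85), pixels(3840, 2160)

print(f"PS5:     {ps5_low / 1e6:.1f}M to {ps5_high / 1e6:.1f}M pixels")
print(f"PC @ 4K: {pc_low / 1e6:.1f}M to {pc_high / 1e6:.1f}M pixels")
# PS5 spans ~3.7M-5.3M pixels, while a 4K PC only drops to ~6.0M of 8.3M,
# which is why the comparison pins PS5 at a known 1440p moment instead.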

...
 
The patch lowered performance, right?
Still a very good showing:
a single 5700 XT costs more than the entire PS5 (that is, with a next-generation wireless controller, PSU, an extremely fast 825GB SSD, a near-silent case and some cables)

btw LOL at the YouTube conspiracy comments:

@Diego I'm the Messenger Of Facts Correcting bias missinformation from Alex Battaglia who is sponsored by nvidia to damage control consoles and make nvidia look good than he blames ubisoft when AMD hardware that's in the PS5 performs better than an RTX 2080 TI in almost every game PS5 outperforms RTX 2080 TI so in AC Valhalla the old Anvil engine of AC Valhalla is optimized for GCN architecture the PS5 is working as a 10.3 TFLOPS GCN architecture in Valhalla Ubisoft are not using the rdna 2 architecture in the consoles with the old Anvil Engine in Valhalla it's the same reason in Watch dogs legion the reason 2060 super has an edge in ray tracing over Xbox Series X and PS5 in the cherry picked scene selected by the sponsored nvidia shill Alex Battaglia is because Watch Dogs Legion makes the consoles perfom like a GCN architecture and it doesn't use their RDNA 2 architecture at all and when we add more facts that Series X and PS5 are out from 3 weeks developers are using 5% of their hardware power the same goes for ray tracing which the watch dogs legion video was a ray tracing comparison facts are RDNA 2 is behind ampere in ray tracing worst case scenario RX 6800XT is control is 64% weaker than RTX 3080 while ray tracing is turned on and 16.6% weaker than RTX 3070 in control Xbox series X and PS5 ray tracing will improve and it works hardware accelarated like Big Navi 21 6800XT 6900XT but it also works differently than their GPU counterparts RX 6800XT RX 6900XT because everything in the consoles is highly customized including ray tracing performance and rdna 2 featuress what Alex is doing is being an nvidia shill and being bias striking at the right moment when the new consoles are the weakest where developers haven't optimized any game for their architecture and developers haven't used more than 5% of their power so this what alex is paid by nvidia to do is to damage control consoles and make nvidia look good Alex with the 2060 super in watch dogs legion is using DLSS which doubles performance so basically alex is cheating and trying his best to make Nvidia look good that's what Nvidia paid him to do to damage control consoles and make them look bad and make Nvidia look good and and he is kind of doing it in this video with AC valhalla as well doubting PS5 settings second guessing changing his mind blaming Ubisoft nonstop complaining that PC is not optimized like PS5 when PC has more than 1000 PC specs configurations and PC in gaming will never be optimized as good as Consoles because PC is a brute force platform which doesn't need developers to optimize the games for it it just brute forces everything while consoles are at the mercy of developers they are hardware in a closed environment and once developers learn their hardware specifications they can use 100% of their power and optimize the games fully to utilize 100% the hardware PS5 consistantly outperforms RTX 2080 TI in most games so here is the reality check and the facts people PS5 performs like an RTX 3070 in most games when ray tracing is off while developers are using 5% of it's hardware power and i know you gonna start arguing but it's pointless to argue because we did benchmarks verus RTX 3000 series in moore's law is dead channel and RedGamingTech channel NX gamer and i Ragnar was there in the 2 hours shows we compared in many games framerate graphic settings resolution of Xbox series X PS5 to PC Xbox Series X is an RTX 2080 TI in most games and PS5 in most games consistantly performs like an RTX 3070 and the PS5 will get 
more powerful than the 3070 in the future PS5 graphic settings in performance mode 60fps 1440p lowest resolution 1726p 1800p highest native resolution in AC Valhalla Ultra High Shadows Very High Environment Detail Ultra High Clutter Ultra High Clouds Very High Water High Depth Of Field Very High Environment Textures Very High Character Textures PS5 Quality Mode Native 4k 30fps Graphic Settings every graphic setting in on the PS5 native 4k 30fps quality mode is ultra maxed out Everything Ultra High Maxed out in quality mode same performance resolution and graphics as RTX 3070 in AC Valhalla and almost every game we did the benchmarks in moore's law is dead channel and redgaming tech's channel with nx gamer and myself the proof is in the pudding and the PS5 is a monster that's just waking up because developers haven't tapped into 95% of it's power that's left on the table and developers haven't used any of the rdna 2 architecture in the consoles that's ready to be used when developers spend a few years to learn the hardware
 
One thing to note here: the infamous 1.0.4 patch led to a 6% hit on PS5 in the scene used to benchmark performance between PS5 and PC.

It's more than an outlier; it's unprecedented if this is just optimization.
It may explain why XSX is doing so poorly here, even compared to the 5700 XT, when it should be performing closer to that expected 2070S level (but I'm not holding my breath on this one).


If we look at the same scene in ver 1.00, Series X performs better than the RTX 2070S. Both new consoles perform significantly better than the 5700, by at least 25% (the PS5 leads the RX 5700 by more). In other words, PS5 and XSX can perform better than the 5700 XT, which only leads the 5700 by around 15%.

It seems the problem is not the peak performance of the XSX; it's that the PS5 has fewer bottlenecks for rasterization and 120fps. With RT enabled, the XSX seems to have the same or a few percent better performance.
 

If we look at the same scene in ver 1.00, Series X performs better than the RTX 2070S. Both new consoles perform significantly better than the 5700, by at least 25% (the PS5 leads the RX 5700 by more). In other words, PS5 and XSX can perform better than the 5700 XT, which only leads the 5700 by around 15%.

It seems the problem is not the peak performance of the XSX; it's that the PS5 has fewer bottlenecks for rasterization and 120fps. With RT enabled, the XSX seems to have the same or a few percent better performance.
If you remove the consoles from the equation, the 5000 series cards are performing out of bracket. AMD designs and prices its cards for particular price points, and in this example they're performing well above their price bracket.

As for Watch Dogs: these aren't the same engines. WD uses Dunia, IIRC, while AC is on the latest Anvil 2.0 or whatever it's called now. Each Ubisoft game will perform differently, and I would not expect one Ubisoft title to perform like the others. The Division, for instance, is Snowdrop.

It won't be an apples-to-apples comparison because each engine may have put its focus on different areas.
 
One thing to note here: the infamous 1.0.4 patch led to a 6% hit on PS5 in the scene used to benchmark performance between PS5 and PC.
It led to a drop in performance measured in that cutscene on PS5, but only because it was measurable there. It was not measurable on XSX because XSX was already at the lower cap of its dynamic resolution window. That cutscene could just as easily have become more expensive on every system but only been measured on two, with one system hiding the new cost of the cutscene under dynamic resolution and a 60fps cap. I think this cutscene is perfectly representative and not unfair to PS5.
 

If we look at the same scene in ver 1.00, Series X performs better than the RTX 2070S. Both new consoles perform significantly better than the 5700, by at least 25% (the PS5 leads the RX 5700 by more). In other words, PS5 and XSX can perform better than the 5700 XT, which only leads the 5700 by around 15%.

It seems the problem is not the peak performance of the XSX; it's that the PS5 has fewer bottlenecks for rasterization and 120fps. With RT enabled, the XSX seems to have the same or a few percent better performance.
The PS5 is not overperforming; it is performing in line with its given specs. The PS5 is more powerful than the 5700 XT on paper. So I think the problem is the utilization of the XSX.
 
You're right, I only looked at the lowest point of PS5 performance, but that appears to be a slight outlier in Alex's video.

So yes, I would conclude from Alex's video that the PS5 is a little faster than the 5700XT in this game (as one would expect it to be).
Yep, the PS5 is performing exactly as it should, without the PS5-specific optimization that has not arrived yet. So the only variable in the equation is the XSX, which is not behaving according to theory.
 
Great review as always. Hopefully you will perform a similar performance analysis once Cyberpunk becomes available. Reviews of this type are most meaningful when the settings in the options menu are similar and functional across the cross-platform products being reviewed. TBH, Valhalla would not be on my PC game list if the equivalent "settings" identified in your review were the best-case scenario for PC performance and graphical quality.
 
Great review as always. Hopefully you will perform a similar performance analysis once Cyberpunk becomes available. Reviews of this type are most meaningful when the settings in the options menu are similar and functional across the cross-platform products being reviewed. TBH, Valhalla would not be on my PC game list if the equivalent "settings" identified in your review were the best-case scenario for PC performance and graphical quality.
The game will use the Pro and Xbox One X versions as its basis on next gen, so how is that meaningful?
 
Turing does FP and INT ops at the same clock, though. I don't think it's correct to assume the RTX 2060S is only 8.1 TFLOPS as shown in the DF comparison. Depending on the game, the INT rate hovers between 20-35%. I don't have ACV on my PC to check a frame in Nsight, but assuming the INT rate is at least 20%, the RTX 2060S' combined rate rises to 8.1 TFLOPS + 1.6 TOPS, which is about 5% shy of the PS5's theoretical numbers.

[Attached image: turingdjjkc.png]
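As a rough check of that arithmetic (a sketch only; the 8.1 TFLOPS and 20% INT figures are the post's, and 10.28 TFLOPS is the PS5's widely quoted peak FP32 number):

# Sanity check of the post's numbers: Turing's separate INT32 pipes add
# concurrent integer throughput on top of the quoted FP32 rate.
ps5_tflops = 10.28            # PS5 peak FP32 (36 CUs x 128 FLOPs x 2.23 GHz)
rtx2060s_fp32 = 8.1           # FP32 figure used in the post / DF comparison
int_rate = 0.20               # assumed share of INT work issued alongside FP

effective = rtx2060s_fp32 * (1 + int_rate)    # FP32 + concurrent INT32 ops
shortfall = 1 - effective / ps5_tflops

print(f"effective ~{effective:.2f} T(FL)OPS, about {shortfall:.0%} shy of PS5")
# -> effective ~9.72 T(FL)OPS, about 5% shy of PS5's theoretical peak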
 
We shall see.

Why wouldn't it? Turing clearly offers better RT performance than RDNA2 per SM/CU (or per RT Core/Ray Accelerator). In that specific workload, Turing will age better than RDNA2 I would think, and clearly better than RDNA1 cards that aren't capable of DXR at all.
 
Thanks :)
It took a hilarious amount of time to get the performance data here... watching the game's opening soooo many times. Also, this is a pretty not-too-great PC version. So barebones!

Phenomenal work, man. I really appreciated the way you demonstrated differences in shadow settings and world detail at distance when setting a baseline for PC settings, i.e. look at this fence, look at this tree. :yes: Along with flagging the differences between PS5 settings and PC settings.

It definitely feels like Ubisoft's teams are prioritising different platforms in terms of effort and tuning of settings: PC for Watch Dogs, console for Valhalla. But if I'm tracking the different Ubisoft teams, this is Ubisoft's 'A Team' for Assassin's Creed.
 
Why wouldn't it? Turing clearly offers better RT performance than RDNA2 per SM/CU (or per RT Core/Ray Accelerator). In that specific workload, Turing will age better than RDNA2 I would think, and clearly better than RDNA1 cards that aren't capable of DXR at all.

Because the software created by developers may not necessarily expose it. Nvidia's higher geometry throughput has never materialized into anything outside of their over-tessellated GameWorks effects. It's not certain to me that their RT advantage will either.
 
In other words, PS5 and XSX can perform better than the 5700 XT, which only leads the 5700 by around 15%.

It seems the problem is not the peak performance of the XSX; it's that the PS5 has fewer bottlenecks for rasterization and 120fps. With RT enabled, the XSX seems to have the same or a few percent better performance.

Why would being faster than the 5700 XT mean the XSX is reaching its peak performance? Based on its specs it should be easily exceeding the 5700 XT's performance. The PS5 should also be faster, so I don't think this really tells us anything. To me the PS5 looks to be performing exactly as it should vs the 5700 XT, which means the XSX is underperforming if it's slower than the PS5.

Expected results given the PC benchmarks. Turing will not age well.

Turing seems to have the feature set to age reasonably. Unlike Kepler, for example, which deliberately scaled back GPGPU capabilities right before a console generation that featured very strong GPGPU capabilities for the time. Naturally it will get slower compared with the consoles over time due to increasingly less driver and developer optimisation, and increasingly more console optimisation. But that will stand for any architecture. On a feature-set level it seems very well prepared though, especially given its stronger RT capabilities which should go a long way to mitigating the usual optimisation issues.

The game will use the Pro and Xbox One X versions as its basis on next gen, so how is that meaningful?

It should be self-evident that using the same settings across platforms as a basis for comparison is the most meaningful way to do things. If the suggestion is that the comparison doesn't make sense because the next-gen consoles aren't using their full potential on account of using similar settings to the previous generation, then I don't think that makes sense. It's the very use of those previous-gen-like settings (they aren't identical) that allows the new consoles to hit 60fps at relatively high resolutions. And at the same time, the PC GPUs they are being compared to are also not able to stretch their legs using 'last-gen settings'. With the exception of the 5700, the PC GPUs stand to gain as much or even more (if RT is used) from true next-gen settings that leverage the full capabilities of DX12U. In fact the argument can be made that leaving RT unused unfairly penalises Turing in such a comparison. The reality is that, as a design, Turing's peak performance can only be reached when both RT and DLSS are fully utilised - because the architecture dedicates die space to these features that goes idle if they're not used. Much like not taking advantage of CBR on the PS4 Pro.

I am guessing the $399 PS5 is outperforming a $1200+ PC here?
Would it be relevant for historical purposes to know the price of the PC that was used?

I'm not sure that it would be relevant. Said PC was available two years before these consoles launched, so what value would you place on having next-gen console-level performance two years before those consoles arrived? Using more modern components, it should be possible to get a console-equivalent experience for around $1000 at the moment.

To be honest though I do find these "but that PC cost 3x as much as the console" arguments quite tiresome because they ignore so many other factors, not the least of which is that no-one buying a whole new PC at the start of a new console generation is doing so because they offer a good value proposition. It's a bit like telling the guy who just bought a BMW M8 that he made the wrong choice because it doesn't go 5x faster than a Ford Focus.

you still cannot get the PS5's impressive storage I/O on the PC no matter how much you spend.

True, but I think you can buy the hardware that will allow it. You'll just be constrained by software until DirectStorage comes along.

Because the software created by developers may not necessarily expose it. Nvidia's higher geometry throughput has never materialized into anything outside of their over-tessellated GameWorks effects. It's not certain to me that their RT advantage will either.

I thought Nvidia's strong front end performance was considered to be a driving factor behind their performance lead over the past few generations? At least until things started to become more compute focused during the previous console generation. I'm not going to start digging through old reviews but I'm sure I've read this on numerous occasions over the years. Ultimately it will come down to how much the consoles push RT usage. If it's used very sparingly then Nvidia's advantage may well be nullified outside of Nvidia sponsored titles. But if the consoles push their own RT capabilities to the limits then this should allow Turing and Ampere to stretch their legs in relation to RDNA2.
 
Turing seems to have the feature set to age reasonably. Unlike Kepler, for example, which deliberately scaled back GPGPU capabilities right before a console generation that featured very strong GPGPU capabilities for the time. Naturally it will get slower compared with the consoles over time due to increasingly less driver and developer optimisation, and increasingly more console optimisation. But that will stand for any architecture. On a feature-set level it seems very well prepared though, especially given its stronger RT capabilities which should go a long way to mitigating the usual optimisation issues.

I thought Nvidia's strong front end performance was considered to be a driving factor behind their performance lead over the past few generations? At least until things started to become more compute focused during the previous console generation. I'm not going to start digging through old reviews but I'm sure I've read this on numerous occasions over the years. Ultimately it will come down to how much the consoles push RT usage. If it's used very sparingly then Nvidia's advantage may well be nullified outside of Nvidia sponsored titles. But if the consoles push their own RT capabilities to the limits then this should allow Turing and Ampere to stretch their legs in relation to RDNA2.

They rarely had the performance lead outside of the top-end GPUs, where AMD didn't always release a competitor. AMD led performance the majority of the time at any price tier where they released a product. I was under the impression that Nvidia's performance lead in those early-gen titles and in common-denominator, do-it-all engines like UE4 and Unity was because they rely mostly on pixel shaders. Earlier GCN cards had utilization issues with pixel shaders due to frequent stalls requiring pipeline flushes. There was also a driver released for Tahiti cards a year or so after their release with huge performance gains, which AMD attributed to fixing a bug affecting memory bandwidth. Geometry load has only increased as the generation has gone by, and Nvidia performance has dropped rather noticeably in comparison to its AMD competitors at nearly every price point.

WRT RT, I agree it comes down to devs. Given that the consoles aren't too performant in that metric, I don't think it's a certainty that RT becomes pervasive. It's entirely possible usage will be sparse in most titles, leaving heavier usage to the occasional Nvidia-sponsored title.
 