Digital Foundry Article Technical Discussion [2021]

I knew it two months after that thread opened on Era, and I said it in another Era thread (as I mentioned here before Road to PS5): the uncompressed data rate would be above 7 GB/s; I had heard 8 GB/s. Since last summer I have also been saying that Sony has a new AA/reconstruction method using deep learning inside the SDK, which comes from a friend who is a dev at a third-party studio.

I said on Era multiple times that the controller would be custom-optimised for reading data, back when people were speculating about a standard SSD controller, because I have known my friend for a very long time and I have full confidence in his information. He doesn't want to give too many details; for example, I know the acronym of the AA/reconstruction method and nothing more. And knowing the acronym, I doubt it is anything other than deep learning; if it isn't deep learning, I have no idea what the acronym means.

The only thing not covered in the patent is the coherency engine.

I knew the GPU specs/clocks beforehand (coming from a first-party member). I don't get all the hype anyway; it's interesting and fun before and somewhat after the machine has launched, but that's it. The hype is mostly over and the dust has settled. It doesn't matter whether there are a few TFs more or less, or whatever minimal spec differences. As a PC gamer, it's just fun to watch people brag about their consoles' specs to each other; it's no different from the sixth generation (although that was much more fun).

Just go with the plastic box you like the most... it doesn't really matter even in multiplatform games; DF has to point out the differences, even for Hitman 3.

Even on paper the consoles are not so far from each other, but for the moment I am surprised by the multiplatform game benchmarks.

Surprised... I'm not, really. They are too close in paper specs for a meaningful difference in practice. The XSX is going to pull ahead, but nothing like Xbox One S vs PS4 (and that gap was already small enough).

Edit: There is no reconstruction hardware (à la tensor cores) in the PS5's GPU to aid performance there, as far as I know. It might be handled in software or server-side, though; no idea.
 
DF Article @ https://www.eurogamer.net/articles/digitalfoundry-2021-god-of-war-ps5-60fps-patch-analysis

God of War's 60fps upgrade for PS5: the final flourish for an incredible game
An excellent reason to revisit a last-gen classic.

God of War - the 2018 sequel for PlayStation 4 - has finally received a patch for PlayStation 5, and in common with similar updates for Days Gone and Ghost of Tsushima, it opens the door to a classic game running flat out at 60 frames per second - and in common with those other Sony first party juggernauts, the impact is indeed transformative. It's almost like the final piece of the puzzle: the original release was hugely impressive with its 4K graphics, extreme detail, phenomenal lighting and excellent performances. A nigh-on flawless 60 frames per second is the final flourish for a game that pushed PS4 and PS4 Pro to its limits.

In fact, before we talk about the raw performance numbers, we should probably address what you might call the quality of life improvement. In extracting so much from the last-gen silicon, Santa Monica Studio inadvertently ran head-first into another issue - the cooling design of PS4 and PS4 Pro. God of War actually became our title of choice for testing power draw, acoustics and thermal performance of PlayStation hardware. On the noise front especially, this game caused the fans to spin up to an obtrusive degree, depending on which iteration of the hardware you have. Beyond what's happening with the software, God of War on PS5 is a much more pleasant experience simply because a nuanced story of profound loss and parenthood plays out without high-pitched fans running at max speed in the background.

And in returning to God of War, what struck me was just how risky this title would have been for SIE and Santa Monica Studio. A number of gambles here pay off spectacularly. A series that began life as a technologically state of the art arcade brawler with set-piece bosses has slowed down, there's a genuine story here and fully fleshed out characters. By comparison, the older God of War titles almost feel like exaggerated word-of-mouth legends. Regardless, Santa Monica Studio has moved on, the story has moved on and maybe the audience has moved on too.

...
 
Twice I tried to buy a disc version of this game from eBay, and twice something went wrong. Then the gods of God of War showed me mercy.

And lo, it was good!

Not eBay, however, which can go f itself in the a with an s.
 
they've just removed 30fps cap ;d

I'm talking about the overall consistency of their work. The more consistent 60fps just highlights those details even further. Heck, there are certain third-party titles claiming to be next-gen upgrades of BC titles, yet they still can't match most of Sony's prior PS4/Pro work that has had no true next-gen update (only uncapped framerates).
 
Sony Santa Monica did an awesome job on this BC title. I haven't completed GoW yet, might as well on the PS5.

Aye, it'll be worth the wait. I started God of War when I had a PS4 Pro hooked up to a 1080p TV. Naturally, I played it in the 45-60fps mode.

I got a 4K TV part way through and had to make the choice: smooth or sharp? I picked sharp, but I would often sob at the loss of smoothness. For my second playthrough, I won't need to blow snot bubbles off of my lips. And I shall finally show that Valkyrie Queen tart who's boss.
 
Getting back to the power differential thing on the last page mentioned by @chris1515 - I think it just makes sense that the XSX will show its advantage more over time, because compute has become the driving factor behind engine design for the most part. If you take a RenderDoc capture of a lot of modern engines, the compute shader stage is rather huge! Heck, the most computationally interesting parts of something like UE5, with Lumen and Nanite, are compute.

The PlayStation 4 was a design built around compute before compute was ubiquitous; the XSX is out at a time when compute is ubiquitous and is how a lot of graphics work is done now.
 
Getting back to the power differential thing on the last page mentioned by @chris1515 - I think it just makes sense that the XSX will show its advantage more over time, because compute has become the driving factor behind engine design for the most part. If you take a RenderDoc capture of a lot of modern engines, the compute shader stage is rather huge! Heck, the most computationally interesting parts of something like UE5, with Lumen and Nanite, are compute.

The PlayStation 4 was a design built around compute before compute was ubiquitous; the XSX is out at a time when compute is ubiquitous and is how a lot of graphics work is done now.
It's already showing its ~20% advantage in compute-limited scenes (when the CPU is out of the picture). But the PS4 had 40% more compute and more than 100% more main-RAM bandwidth, while the XB1 had roughly no advantages of its own overall, so the situation with the PS4 was very different from the one now.
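For what it's worth, here's a quick back-of-the-envelope check of those percentages using the publicly quoted specs (official CU counts, clocks and bandwidths; note the 560 GB/s figure only covers the XSX's 10 GB fast pool, and the XB1's ESRAM is ignored):

```python
# Rough sanity check of the paper-spec gaps, using the publicly stated figures.
# FP32 TFLOPS for GCN/RDNA-style GPUs = CUs * 64 lanes * 2 ops (FMA) * clock in GHz / 1000.

def tflops(cus, ghz):
    return cus * 64 * 2 * ghz / 1000

xsx, ps5 = tflops(52, 1.825), tflops(36, 2.23)   # ~12.15 vs ~10.28 TF
ps4, xb1 = tflops(18, 0.800), tflops(12, 0.853)  # ~1.84 vs ~1.31 TF

print(f"XSX vs PS5 compute:                +{(xsx / ps5 - 1) * 100:.0f}%")  # ~+18%
print(f"PS4 vs XB1 compute:                +{(ps4 / xb1 - 1) * 100:.0f}%")  # ~+41%
print(f"XSX vs PS5 bandwidth (fast pool):  +{(560 / 448 - 1) * 100:.0f}%")  # +25%
print(f"PS4 vs XB1 main-RAM bandwidth:     +{(176 / 68 - 1) * 100:.0f}%")   # ~+159%
```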

I think this gen (as predicted by Cerny) games will mostly be limited by I/O, not compute. Here the PS5 has the measured advantage (whether that comes down to the CPU or the custom I/O hardware), as seen in Cyberpunk or Control. And Cyberpunk could already be seen as a next-gen game if we look at how it runs on PS4 and XB1.
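And the quoted storage figures under the same back-of-the-envelope treatment (raw and vendor-quoted compressed throughput; real-world numbers obviously vary with content and compression ratio):

```python
# Officially quoted SSD throughput: PS5 5.5 GB/s raw, "typically 8-9 GB/s" with
# Kraken; XSX 2.4 GB/s raw, ~4.8 GB/s with BCPack. Real games will vary.
ps5_raw, ps5_typical = 5.5, (8 + 9) / 2
xsx_raw, xsx_comp = 2.4, 4.8

print(f"raw throughput ratio:        ~{ps5_raw / xsx_raw:.1f}x")       # ~2.3x
print(f"compressed throughput ratio: ~{ps5_typical / xsx_comp:.1f}x")  # ~1.8x
```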

I mean, sure, in GPU-limited scenes the XSX should always have the edge, but over time games should become more and more CPU-limited.
 
It's already showing its ~20% advantage in compute-limited scenes (when the CPU is out of the picture). But the PS4 had 40% more compute and more than 100% more main-RAM bandwidth, while the XB1 had roughly no advantages of its own overall, so the situation with the PS4 was very different from the one now.

I think this gen (as predicted by Cerny) games will mostly be limited by I/O, not compute. Here the PS5 has the measured advantage (whether that comes down to the CPU or the custom I/O hardware), as seen in Cyberpunk or Control. And Cyberpunk could already be seen as a next-gen game if we look at how it runs on PS4 and XB1.

I mean, sure, in GPU-limited scenes the XSX should always have the edge, but over time games should become more and more CPU-limited.
I think the issue is that GPUs are making large strides and CPUs are not; as such, I believe devs will have to manage expectations on the CPU side but will be able to push the GPUs.

So the issue, as I understand it, is that the XSX GPU is not being utilised to full effect (and likely won't be for a year or so), as games have been designed around lower CU counts. As target CU counts grow, devs will expand their use, and then the PS5 will 'suffer' as it doesn't have the same number of CUs.
 
they've just removed 30fps cap ;d
and I'll gladly take it ;)
I'm almost done with The Medium, and this will be my next console title after that.

Following up on what Alex said about The Medium: it's a great title with some performance issues, but there are a couple of moments in that game that I think people can appreciate. It is supernatural horror, but the split world view / quick world change serves as a satisfying narrative device for a couple of set pieces instead of just being a novelty. There is a true sense of urgency and fear; even if you know the outcome of failure is just to do it again, it's still freaky.
 
I mean, sure, in GPU limited scenes the XSX should always has the edge but unfortunately over time the game should be more and more CPU limited.
I'm curious as to why you think games are headed towards being more CPU-limited. We are moving towards GPU-driven pipelines: GPUs can dispatch their own work, so culling, animation and other GPGPU work can all be handled and dispatched by the GPU going forward. If the CPU played no role in rendering, which is the direction we are headed, I'm curious what you think they'll fill up all those cycles with.
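To make the "GPUs can dispatch their own work" point concrete, here's a minimal sketch of the pattern in plain Python (function and buffer names are invented; in a real engine the cull pass would be a compute shader and the draw would go through something like ExecuteIndirect on DX12 or vkCmdDrawIndirectCount on Vulkan):

```python
# Toy model of a GPU-driven pipeline. The CPU-side work is constant per frame;
# a GPU culling pass (modelled here as a plain function) writes the compacted
# visibility list and the indirect-draw arguments. Names are illustrative only.

def cull_pass(instances, frustum_planes):
    """Stands in for a compute shader: one 'thread' per instance, appending
    survivors of a bounding-sphere frustum test to a visibility buffer."""
    visible = [idx for idx, (centre, radius) in enumerate(instances)
               if all(plane(centre) > -radius for plane in frustum_planes)]
    indirect_args = {"instance_count": len(visible)}  # what the indirect draw consumes
    return visible, indirect_args

def record_frame(instances, frustum_planes):
    # The CPU never touches per-instance data: it records a fixed
    # dispatch + indirect draw, and the GPU fills in the counts.
    visible, args = cull_pass(instances, frustum_planes)              # Dispatch(...)
    print(f"indirect draw of {args['instance_count']} instances")     # DrawIndirect(...)

# Tiny usage example: two bounding spheres, one culled by a single clip plane.
planes = [lambda centre: centre[2]]                       # "visible" if z > -radius
record_frame([((0, 0, 5), 1.0), ((0, 0, -9), 1.0)], planes)   # -> 1 instance drawn
```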

It just seems to me that we're a long way from game design concepts that could get there with strictly game logic/AI code; that, or the audience just isn't ready for that type of game yet.
 
Getting back to the Power Differential thing on the last Page mentioned by @chris1515 - I think it just makes sense that the XSX will show it's advantage over time more because Compute is becoming and has become the driving factor behind engine Design for the most Part. If you do a render doc of a lot of modern engine's, the CS stage is rather huge! Heck something like UE5, Lumen and nanite's most interesting Part computationally are compute.

Playstation 4 was a Design that was about Compute before it was ubiquitous, XSX is out a time when Compute is ubiquitus and the way Lots of gfx are done now.

Makes sense. Also, the larger bandwidth could aid in situations that are more compute-heavy.

No idea what drove Sony to narrow/fast; beneficial early in the generation, perhaps, but most likely the BC modes played a role as well.
 
I have a feeling I'm back in 2013, reading that future games will be more compute-heavy ;d My prediction: nothing will change much; current games are already compute-heavy. The only difference I can see is whether UE5 really can utilise the PS5's I/O architecture, or whether that was marketing BS and an ordinary SSD would be enough.
 
I'm curious as to why you think games are headed towards being more CPU-limited. We are moving towards GPU-driven pipelines: GPUs can dispatch their own work, so culling, animation and other GPGPU work can all be handled and dispatched by the GPU going forward. If the CPU played no role in rendering, which is the direction we are headed, I'm curious what you think they'll fill up all those cycles with.

It just seems to me that we're a long way from game design concepts that could get there with strictly game logic/AI code; that, or the audience just isn't ready for that type of game yet.
Because those first games are barely using the Zen 2 CPUs, as they were designed to run on Jaguar CPUs. If anything, GPU compute is already pretty well used (hence the dynamic resolutions and effects), because compute was already very much available on the old consoles (particularly the mid-gen refreshes), and as the APIs are about the same, it was the easiest thing to exploit.

Take Hitman 3, for instance. It's a game that already ran at a solid 60fps on a puny Jaguar, so it was obvious from the start that it was never going to be CPU-limited on the next-gen consoles, as we know the new CPUs are about three to four times faster.
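As a rough sanity check on that 3-4x figure (the per-clock uplift is my own loose estimate, not an official number):

```python
# Jaguar vs Zen 2, back of the envelope. Clocks are the published figures;
# the IPC uplift range is an assumption, not something either vendor quotes.
jaguar_ghz = 1.6            # PS4 (the XB1 ran its Jaguar at 1.75 GHz)
zen2_ghz   = 3.5            # PS5 cap; XSX runs 3.6-3.8 GHz depending on SMT
ipc_uplift = (1.5, 2.0)     # assumed Zen 2 per-clock advantage over Jaguar

low, high = (zen2_ghz / jaguar_ghz * u for u in ipc_uplift)
print(f"per-core speedup: ~{low:.1f}x to ~{high:.1f}x")   # ~3.3x to ~4.4x
```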

Historically, console devs have always found creative new ways to use the available CPU cycles (more objects, more physics, more interactivity, more AI, etc.). Don't worry about that. ;-)
 
I have a feeling I'm back in 2013, reading that future games will be more compute-heavy ;d My prediction: nothing will change much; current games are already compute-heavy. The only difference I can see is whether UE5 really can utilise the PS5's I/O architecture, or whether that was marketing BS and an ordinary SSD would be enough.

You are still in 2013, in the sense that game engines started ramping up more and more compute back then, and the momentum hasn't stopped.

Look at any rendering GDC or SIGGRAPH talk on any cutting-edge engine. Look at current research subjects, look at what AMD/Nvidia are putting in their cards, look at API features (we know from the PS4 that Sony's tools favor memory management and compute scheduling... DX12 is public), look at which graphical features are popular and where the bottlenecks are. Why on earth would compute usage stay where it is? The fixed pipeline is increasingly a major barrier to game performance; even the renderers that use it heavily will do a great deal of async compute for things like lighting, and use compute shaders to cull and partition the frustum to save performance.

(And both the PS5 and XSX are prepared to run these future engines well -- it's not like this is really an Xbox vs PlayStation thing. The Xbox just has the more powerful GPU.)
 
You are still in 2013, in the sense that game engines started ramping up more and more compute back then, and the momentum hasn't stopped.

Look at any rendering GDC or SIGGRAPH talk on any cutting-edge engine. Look at current research subjects, look at what AMD/Nvidia are putting in their cards, look at API features (we know from the PS4 that Sony's tools favor memory management and compute scheduling... DX12 is public), look at which graphical features are popular and where the bottlenecks are. Why on earth would compute usage stay where it is? The fixed pipeline is increasingly a major barrier to game performance; even the renderers that use it heavily will do a great deal of async compute for things like lighting, and use compute shaders to cull and partition the frustum to save performance.

(And both the PS5 and XSX are prepared to run these future engines well -- it's not like this is really an Xbox vs PlayStation thing. The Xbox just has the more powerful GPU.)
Cerny said something along the lines of fully filling and exploiting the CUs rather than spreading work across too many and having them underutilized.
It seems like the clock boost is supposed to help in that regard. Maybe he thinks the CU count combined with that clock boost will push performance better, and hence the gap will not be as big as we expect?
Also, I wonder how performance scales with CU count, considering how many GPUs are out there with different CU counts. Games won't be fully optimized for those with the highest count but for the average out there, unless engines are intelligent enough to scale efficiently and properly.
 
You are still in 2013, in the sense that game engines started ramping up more and more compute back then, and the momentum hasn't stopped.

Look at any rendering GDC or SIGGRAPH talk on any cutting-edge engine. Look at current research subjects, look at what AMD/Nvidia are putting in their cards, look at API features (we know from the PS4 that Sony's tools favor memory management and compute scheduling... DX12 is public), look at which graphical features are popular and where the bottlenecks are. Why on earth would compute usage stay where it is? The fixed pipeline is increasingly a major barrier to game performance; even the renderers that use it heavily will do a great deal of async compute for things like lighting, and use compute shaders to cull and partition the frustum to save performance.

(And both the PS5 and XSX are prepared to run these future engines well -- it's not like this is really an Xbox vs PlayStation thing. The Xbox just has the more powerful GPU.)
I don't know what people expect. For example, the RTX 2080 has a ~20% advantage over the RTX 2070 in current games; do you predict it will be 40% in future games?
 
Cerny said something along the lines of fully filling and exploiting the CUs rather than spreading work across too many and having them underutilized.
It seems like the clock boost is supposed to help in that regard. Maybe he thinks the CU count combined with that clock boost will push performance better, and hence the gap will not be as big as we expect?
Also, I wonder how performance scales with CU count, considering how many GPUs are out there with different CU counts. Games won't be fully optimized for those with the highest count but for the average out there, unless engines are intelligent enough to scale efficiently and properly.

Disclaimer: not a professional graphics programmer.

Generally speaking, I think wider GPUs are a little harder to feed, yeah, because you need more work that can be done at once. GPUs are obviously massively parallel devices, but a lot of the processing they do has to wait on some previous processing being completed. This is what makes them challenging to utilize in general -- if you only have ~50% of your compute's worth of work to do right now, and all subsequent work needs to wait on that being done, half of your GPU sits empty until what you have in flight is finished and the memory is freed up. My understanding is that, all else being equal, a higher clock speed means less time spent waiting on each job to complete, so a lack of work to overlap hurts a little less.

The upside for Microsoft (and every other wide GPU) is that, realistically, there's a lot of independent stuff to render each frame, and there are ways to chop it up even more to make it more independent. Naturally, most of that work (big parallel computation that needs to be done super fast) is done in compute shaders these days. So ideally your GPU is doing stuff like... shading some models that are ready, rasterizing/preparing others, doing big slow asynchronous calculations like GI, and maybe handling things like particle motion, VFX, etc... all at once, and getting somewhere close to maximum utilization. To ensure that, the actual CPU-side code that sets up the pipeline and keeps stuff running can get pretty complex, with engines like Frostbite maintaining a giant graph of dependencies and needs at all times.
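To make that last bit concrete, here's a minimal, made-up frame-graph sketch: passes declare their dependencies, the scheduler topologically sorts them, and everything that becomes ready together is a candidate to overlap (e.g. on an async compute queue). Pass names and edges are invented; real engines also track resource lifetimes, barriers and queue assignment.

```python
# Toy frame graph: each pass lists the passes it depends on. Anything that
# becomes ready in the same "wave" has no mutual dependency and could overlap.
from graphlib import TopologicalSorter   # Python 3.9+

frame = {
    "depth_prepass": [],
    "gpu_culling":   ["depth_prepass"],
    "gbuffer":       ["gpu_culling"],
    "async_gi":      ["depth_prepass"],   # independent of the g-buffer -> async candidate
    "particle_sim":  [],                  # pure compute, no rendering dependency
    "lighting":      ["gbuffer", "async_gi"],
    "post":          ["lighting", "particle_sim"],
}

scheduler = TopologicalSorter(frame)
scheduler.prepare()
while scheduler.is_active():
    wave = list(scheduler.get_ready())
    print("can overlap:", wave)
    scheduler.done(*wave)
```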

So the summary is... there is definitely an advantage to being fast and narrow, but in a perfect world you'd rather be fast and also wide. The PS5 will always enjoy the benefits of its faster clock speed, especially while games that aren't good at filling the wide CUs are coming out, but as long as there's enough work to fill the XSX GPU, that clock speed benefit isn't going to magically erase a considerable TFLOP/bandwidth advantage. Just shrink it.
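To put some toy numbers on that: assume a frame only exposes enough independent work to keep a given number of CUs busy (a completely made-up knob, not a measurement of any real game). The effective gap then swings between the clock-speed ratio and the paper TFLOP ratio:

```python
# Narrow/fast vs wide/slow as a toy saturation model. `parallel_cus` is how
# many CUs' worth of independent work the frame exposes at once (invented knob).
def effective_rate(cus, ghz, parallel_cus):
    return min(cus, parallel_cus) * ghz

for parallel_cus in (24, 36, 44, 52):
    ps5 = effective_rate(36, 2.23, parallel_cus)
    xsx = effective_rate(52, 1.825, parallel_cus)
    print(f"work for {parallel_cus:2d} CUs -> XSX/PS5 = {xsx / ps5:.2f}")

# work for 24 CUs -> XSX/PS5 = 0.82   (both under-filled: the higher clock wins)
# work for 36 CUs -> XSX/PS5 = 0.82
# work for 44 CUs -> XSX/PS5 = 1.00   (roughly the break-even point)
# work for 52 CUs -> XSX/PS5 = 1.18   (fully fed: the paper-spec gap)
```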

I don't know what people expect. For example, the RTX 2080 has a ~20% advantage over the RTX 2070 in current games; do you predict it will be 40% in future games?

No -- I predict it will converge on the TFLOP/bandwidth advantage over time, which is like ~25%, right? The thesis is: if a game takes advantage of both cards' architectures equally, the difference in performance will align closely with the cards' specs. This is a pretty reasonable claim!
 