A Generational Leap in Graphics [2020] *Spawn*

That aligns with the TPU database, which rates the 5700 XT as 4.4x faster than the R7 265, which was roughly equivalent to the PS4 GPU. The PS5 GPU is obviously a bit faster than the 5700 XT, so roughly 5x more performance seems reasonable when not using RDNA2-specific features.

https://www.techpowerup.com/gpu-specs/radeon-r7-265.c2558
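
As a back-of-the-envelope check on that estimate (paper TFLOPs from public spec sheets; the 4.4x figure is TPU's relative-performance rating, and RDNA2-specific features are ignored, so treat this as a sketch rather than a measurement):

```python
# Back-of-the-envelope version of the estimate above. Paper TFLOPs only,
# taken from public spec sheets; RDNA2-specific features are ignored.
ps4_gpu_tf  = 1.84
rx5700xt_tf = 9.75
ps5_gpu_tf  = 10.28

tpu_ratio = 4.4                               # 5700 XT vs R7 265 (TPU relative performance)
print(tpu_ratio * ps5_gpu_tf / rx5700xt_tf)   # ~4.6x, i.e. roughly 5x over the PS4 GPU
print(ps5_gpu_tf / ps4_gpu_tf)                # ~5.6x raw TFLOPs ratio (close to the 5.5x quoted below)
```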
5.5 is 25% better than 4.4. We have seen other games with up to 9x improvement (from PS4 to PS5) when GPU limited.
- The Last Guardian: PS4 1080p ~20fps to PS5 1890p ~60fps; that's up to 9x (and using BC).
- Spider-Man Remastered has about 7-8x more perf on PS5 in the 60fps mode (so using RDNA2 features).
Another indie game whose name I don't remember has up to 8x the perf using BC on PS5.
 
All this cross-gen/back-compat performance and RDNA2 vs GCN, console vs PC talk suggests people are struggling to define what they expect from a generational leap in graphics other than bigger numbers. ;-)
 
https://blog.quanticdream.com/detroit-a-vulkan-in-the-engine/

Some explanation here of the console APIs versus the DX12/Vulkan APIs.

Really interesting article, thanks for the link. I found this part particularly interesting:

"The CPU of the PlayStation® 4 is an AMD Jaguar with 8 cores. It is obviously slower than some recently-released PC hardware; but the PlayStation® 4 has some major advantages, such as very fast access to the hardware. We find the PlayStation® 4 graphics API to be much more efficient than all PC APIs. It is very direct and has very low overhead. This means we can push a lot of draw calls per frame. We knew that the high number of draw calls could be an issue with low-end PCs.

One other big advantage is that all the shaders can be compiled off-line on PlayStation® 4, meaning the loading of shaders is nearly instantaneous. On PC, the driver needs to compile shaders at load time: this cannot be an off-line process because of the wide configurations of GPUs and drivers that need to be supported.

During the development of Detroit: Become Human on PlayStation® 4, artists could design unique shader trees for all materials. This resulted in an insane number of vertex and pixel shaders, so we knew from the beginning of the port that this will be a huge problem."


I'm surprised draw calls are still so relatively expensive on the PC. I thought one of the major reasons for DX12 and Vulkan was to vastly decrease the expense of draw calls. Or is that just DX12? Or is it the case that they are vastly decreased vs DX11/OGL but still very expensive vs PS4? Which raises the question: just how bad was DX11?!

In regards to the shader compilation, I would guess this is the reason for HZD's compilation at start-up. For games that are designed from day one for the PC, developers can optimise to avoid the pitfalls of long shader compilation steps, making it possible to complete them in the background during gameplay or at reasonable-length loading screens. But a game developed exclusively for consoles with no accommodation for the PC would have no need to implement such optimisations, and so when it comes to porting it to the PC, there are some fundamental aspects of the engine which make shader compilation something that has to be done up front.

Personally, I don't mind it, but I can understand their reluctance to take this approach given that the step could take up to 20 minutes on a low-end PC.
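
To make the "compile in the background" idea concrete, here is a minimal conceptual sketch (not engine code; `compile_shader` is a hypothetical stand-in for the real driver compile) of queueing permutations onto a worker pool at load time so the game only stalls on a shader that is requested before its compile has finished:

```python
# Conceptual sketch only: hide shader compilation behind loading/gameplay by
# queueing every permutation to a worker pool up front and blocking only when
# a shader is requested before its compile has finished.
import time
from concurrent.futures import ThreadPoolExecutor


def compile_shader(source: str) -> bytes:
    """Hypothetical stand-in for the real (slow) driver compile call."""
    time.sleep(0.05)            # pretend compilation takes a while
    return source.encode()      # placeholder for the compiled blob


class ShaderLibrary:
    def __init__(self, sources: dict[str, str], workers: int = 4):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        # Kick off all permutations in the background at load time.
        self._pending = {name: self._pool.submit(compile_shader, src)
                         for name, src in sources.items()}

    def get(self, name: str) -> bytes:
        # Stalls only if this particular shader hasn't finished compiling yet.
        return self._pending[name].result()


if __name__ == "__main__":
    lib = ShaderLibrary({"opaque_lit": "...", "skinned_shadow": "..."})
    blob = lib.get("opaque_lit")    # usually ready by the time it's needed
```

A console-only engine never needs any of this, because every permutation ships precompiled, which is exactly the porting problem the article describes.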
 
In fact I started a thread on exactly my fear of this becoming a bigger issue going forward:

https://forum.beyond3d.com/threads/...pc-about-to-become-a-bigger-bottleneck.61929/
 
In regards to the shader compilation, I would guess this is the reason for HZD's compilation at start-up.

Is it possible Arkham Knight PC was doing it in realtime or something?
 
5.5 is 25% better than 4.4. We have seen other games with up to 9x improvement (from PS4 to PS5) when GPU limited.
- The Last Guardian: PS4 1080p ~20fps to PS5 1890p ~60fps; that's up to 9x (and using BC).
- Spider-Man Remastered has about 7-8x more perf on PS5 in the 60fps mode (so using RDNA2 features).
Another indie game whose name I don't remember has up to 8x the perf using BC on PS5.

4x resolution does not equal 4x the performance. Resolution doesn't scale that way with performance. Check basically any game in this review: for the most part the 3080 is about the same performance at 4K as the 2070S is at 1440p. 4K is 2.25x the resolution of 1440p, but look at the GPUs' relative performance at the same resolution... the 3080 isn't even close to 2.25x the 2070S's performance.
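
To put numbers on both halves of that argument, a small sketch, assuming the quoted multipliers are simply pixel count times framerate, and using an assumed ~1.5x 3080-vs-2070S gap at equal resolution purely for illustration:

```python
# Naive "pixel throughput" multiplier: ratio of pixels rendered per second.
def pixels_per_second(width, height, fps):
    return width * height * fps

# The Last Guardian style comparison: 1080p ~20fps -> 1890p ~60fps.
tlg = pixels_per_second(3360, 1890, 60) / pixels_per_second(1920, 1080, 20)
print(f"{tlg:.1f}x pixel throughput")    # ~9.2x

# Why that overstates GPU power: 4K is 2.25x the pixels of 1440p...
print((3840 * 2160) / (2560 * 1440))     # 2.25
# ...yet a card that matches another card's 1440p results at 4K is typically
# only ~1.5x faster at equal resolution (assumed, illustrative figure).
```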
 
If that's in 'prioritize resolution' mode (which I believe locks at 4K), that's extremely impressive. I've seen videos of the Series X in this mode with vsync unlocked where it gets over 100fps. Prioritize performance is perhaps another matter, where the res is variable, but even there it doesn't look like it drops under 4K that often.

This is basically 3080-level performance, but of course it depends on the settings the PS5/XSX versions are using; the 3080-at-4K numbers are with ultra settings in the benchmarks I've seen. Have there been any comparisons done yet between the PC and next-gen console versions in terms of graphics settings? Either way, ridiculously good performance.
 
Is it possible Arkham Knight PC was doing it in realtime or something?
Perhaps, but you can see Arkham Knight doing its shader compiling when you start it up after a new driver: look at CPU usage through RivaTuner and it will be pegged at 100% for a minute or so (shader compiling is usually a task that scales well across threads). Arkham Knight's stuttering was due more to how it handled streaming in textures. If it were purely a CPU bottleneck from shader compiling, then much faster CPUs would be able to eliminate it, but that was never really the case.

Digital Foundry seemed to be able to get a locked 60 even on a Core i5-8400 in one of the videos they did on it well after launch, but neither I nor others I've seen have been able to replicate that.
 
This was running in BC mode at PS4 Pro settings. I think the next-gen update that dropped on December 4th caps the framerate at 60fps or 120fps depending on the mode, and they cranked up the details and lighting to be more in line with the PC version, but to what extent I don't know.
 
5.5 is 25% better than 4.4. We have seen other games with up to 9x improvement (from PS4 to PS5) when GPU limited.
- The Last Guardian: PS4 1080p ~20fps to PS5 1890p ~60fps; that's up to 9x (and using BC).
- Spider-Man Remastered has about 7-8x more perf on PS5 in the 60fps mode (so using RDNA2 features).
How are you calculating Spider-Man Remastered having 7-8x more perf using the 60fps mode, exactly?
 
This was running in BC mode at PS4 Pro settings. I think the next-gen update that dropped on December 4th caps the framerate at 60fps or 120fps depending on the mode, and they cranked up the details and lighting to be more in line with the PC version, but to what extent I don't know.

Ah ok, that's an entirely different matter then.
 
I'm surprised draw calls are still so relatively expensive on the PC. I thought one of the major reasons for DX12 and Vulkan was to vastly decrease the expense of draw calls. Or is that just DX12? Or is it the case that they are vastly decreased vs DX11/OGL but still very expensive vs PS4? Which raises the question: just how bad was DX11?!

In regards to the shader compilation, I would guess this is the reason for HZD's compilation at start-up.

ExecuteIndirect is (was?) supposed to assist with draw calls, and results in some early benchmarks years ago showed it having a big benefit. However, up until now only Xbox consoles and Nvidia GPUs have had hardware support for it, and even now it might come down to other factors I wouldn't personally be knowledgeable about, so it's an open question whether ExecuteIndirect is even being fully leveraged by most developers on PC.

But I'm just speaking of all this in specific MS/PC terms; there are other ways to handle draw calls, as you've illustrated on console with games like Horizon on PS4, and Sony's API and other APIs like Vulkan have their own ways of doing draw-call management. I don't know if any one of them can be considered superior to the others, though there are ways of implementing it that can be more or less beneficial to a design (Xbox One, technically speaking, had some really good ideas like ESRAM, something Infinity Cache is spiritually in line with and a great improvement upon, and Kinect, but poor implementations of those ideas).
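
To put the draw-call overhead discussion in rough numbers, a toy budget calculation; all per-call CPU costs below are assumed, order-of-magnitude figures for illustration, not measurements of any particular API:

```python
# Toy CPU-side draw-call budget at 60 fps. Per-call costs are assumed,
# order-of-magnitude figures, not measurements.
FRAME_BUDGET_US = 1_000_000 / 60      # ~16,667 microseconds per frame

assumed_cost_per_draw_us = {
    "high-overhead API, single-threaded submission": 25.0,
    "low-overhead API (DX12/Vulkan style)":           5.0,
    "thin console-style API":                         1.0,
}

for api, cost in assumed_cost_per_draw_us.items():
    print(f"{api}: ~{FRAME_BUDGET_US / cost:,.0f} draw calls per frame")
```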
 
I think people that are saying the PS5 and XSX are under-powered are going to be eating their words later. There are going to be some amazing looking (and playing) games coming to these systems.

High-end PCs are always going to be more powerful. You may as well be saying the Sun rises in the East. It's obvious.

Consoles punch above their specs usually though and I don't think this generation is going to be any different.
 
In fact I started a thread on exactly my fear of this becoming a bigger issue going forward:

https://forum.beyond3d.com/threads/...pc-about-to-become-a-bigger-bottleneck.61929/

Yeah it's a good thread, I've followed it from the start. I'm still of the opinion that I don't really care though. If devs can't find a way to hide the compilation during gameplay or load screens (and from the above link it certainly seems like they have options if they plan for it from the start) then I'm more than content to have a 5-10 min compilation on first run. Yes you have to do it again with every driver update or patch, but in reality the number of patches or driver updates most people experience during the playtime of a typical game is going to be largely irrelevant.
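
On the "do it again with every driver update or patch" point, here is a minimal sketch of why those caches invalidate: the compiled blob is only valid for the exact driver and game build that produced it, so both usually go into the cache key (the layout and names below are hypothetical):

```python
# Minimal sketch of an on-disk shader/PSO cache. The cached binary is specific
# to the driver and game build that produced it, so both go into the key;
# change either one and every entry misses, which is why the compile step reruns.
import hashlib
import os

CACHE_DIR = "shader_cache"            # hypothetical location


def cache_key(shader_source: str, driver_version: str, game_build: str) -> str:
    h = hashlib.sha256()
    for part in (shader_source, driver_version, game_build):
        h.update(part.encode())
    return h.hexdigest()


def get_or_compile(shader_source, driver_version, game_build, compile_fn):
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, cache_key(shader_source, driver_version, game_build))
    if os.path.exists(path):
        with open(path, "rb") as f:   # cache hit: near-instant load
            return f.read()
    blob = compile_fn(shader_source)  # cache miss: pay the compile cost once
    with open(path, "wb") as f:
        f.write(blob)
    return blob
```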
 
Consoles punch above their specs usually though and I don't think this generation is going to be any different.

They do, but not until later in the generation, when developers have had a chance to get to grips with them, as far as I can see. For the first year or two it seems to be pretty much a 1:1 comparison. However, the PC's increased performance through new hardware arrives much faster than the consoles' increased performance through optimisation. So while optimisation eats into the gains that each successive GPU generation brings, the gap continues to grow ever wider until a new generation of consoles launches and everything resets.
 
It seems you don't understand what streaming is. Latency doesn't need to be as good as RAM; the streaming system is not used for assets needed in the next frame. Even in R&C Rift Apart, the portal mechanic is about filling RAM with assets used in the next 1 to 2 seconds. Here it will not be a bottleneck, because you can load the best-quality assets into RAM without problems. The RAM or VRAM inside any console or PC contains far more assets than the GPU will render in the next few frames. Depending on the framerate, RAM holds at minimum 60 to 120 frames' worth, even in the unrealistic scenario where a team has unique assets for every next two seconds, and in practice much more, because of game size limits and the cost of asset creation.

Streaming. I'm talking about SSDs replacing fast GDDR5/6 RAM. I'm not saying the SSD cannot mitigate a lack of VRAM, I'm just saying it can't account for all of it. If that theory were true, they could just as well have stuck with 8GB of RAM 'because the SSD'. Don't forget either that the PS4 has a rather low bandwidth to play with as well, where the SSD has to help out?

Destiny 2, with its dynamic resolution, shows the same and more than GoT; the framerate is locked at 60fps on PS5. That does not mean GoT fully pushes what the PS5 is capable of in this backward-compatibility mode. GoT is capped at 1800p because of the PS4 Pro version.

Again, there's a reason some compare only cross-gen titles? I think native games designed around the systems give a better picture of what the systems can do, as you always like to point out. That would be Shadow Fall and 1886 for the PS4, and Demon's Souls/Rift Apart for the PS5.
Also, you're forgetting that the PS3 and PS4 were on more different architectures than the PS4 and PS5 are. Besides that, cross-gen porting wasn't as good then as it is nowadays, with the focus now more heavily on it.

EDIT: the 7870 is 38% more powerful in theory than the PS4 GPU

Yes, exactly. It's also almost two years older and basically a 2012 mid-ranger at that.

All this cross-gen/back-compat performance and RDNA2 vs GCN, console vs PC talk suggests people are struggling to define what they expect from a generational leap in graphics other than bigger numbers.

I too have no idea why cross-gen games are so hot in this discussion right now. Obviously, cross-gen wasn't such a thing back then, and people are totally forgetting that the PS3 and PS4 were on totally different architectures. If game A was designed with, say, the PS3 in mind and then ported to PS4, that could explain why it was harder to optimise for.

But a game developed exclusively for consoles with no accommodation for the PC would have no need to implement such optimisations, and so when it comes to porting it to the PC, there are some fundamental aspects of the engine which make shader compilation something that has to be done up front.

CP2077 really has some impressive NVMe streaming tech going for it on PC, so yes, if games are designed around the PC, it has no problem doing these things.

4x resolution does not equal 4x the performance. Resolution doesn't scale that way with performance. Check basically any game in this review: for the most part the 3080 is about the same performance at 4K as the 2070S is at 1440p. 4K is 2.25x the resolution of 1440p, but look at the GPUs' relative performance at the same resolution... the 3080 isn't even close to 2.25x the 2070S's performance.

This, I was thinking the same thing. As a PC gamer who upgrades a lot (or did, at least), this came to mind, but I didn't or couldn't explain it as well.
Anyway, when comparing graphical leaps, it's generally best to compare games developed and designed around the system, not old games/cross-gen titles.
The avoidance of this tells me enough.

If that's in 'prioritize resolution' mode (which I believe locks at 4K), that's extremely impressive.

It's Rainbow Six Siege; I don't think it's impressive, but that's me. It's going into diminishing-returns territory there. For example, both my 1070 and 2080 Ti can do 500fps in Counter-Strike: Source. Try to do the same with another game, take Watch Dogs or something, and see where performance goes.

I think people that are saying the PS5 and XSX are under-powered are going to be eating their words later. There are going to be some amazing looking (and playing) games coming to these systems.

Hm, not under-powered, consoles never are. They get mid-range hardware and devs optimise for it. What I am saying is that the leap isn't as great as we are used to. That's a different thing from under-powered. A 2070 system is not under-powered either.

High-end PCs are always going to be more powerful. You may as well be saying the Sun rises in the East. It's obvious.

A 3060 Ti is more powerful, but that's considered high-end, I guess. No idea where people place hardware anymore.

Yeah it's a good thread, I've followed it from the start. I'm still of the opinion that I don't really care though. If devs can't find a way to hide the compilation during gameplay or load screens (and from the above link it certainly seems like they have options if they plan for it from the start) then I'm more than content to have a 5-10 min compilation on first run. Yes you have to do it again with every driver update or patch, but in reality the number of patches or driver updates most people experience during the playtime of a typical game is going to be largely irrelevant.

It's a topic for concerned people. I'm not concerned; with DirectStorage, NV and probably AMD, together with MS, will find solutions that fix those issues. NV promised direct SSD-to-GPU access at speeds higher than any console. I have no doubt that's possible if they want it.

Consoles punch above their specs usually though and I don't think this generation is going to be any different.

True, they have always done so. But the PC has gotten much better in that regard, especially since consoles have become more and more like PCs. Optimisation used to be really bad, like 15 to 20 years ago. Looking at the last generation, it's quite amazing that, say, a 7870 can still run modern AAA titles that were ported badly.
Remember the days when MGS2 got ported to PC.
 
I expect future games to look better than CP77 regardless of whether they are PC- or console-based.

IQ improvements aren't strictly hardware-performance-based. A big reason for that is consoles, where devs find more elegant solutions to pull more performance out of the fixed-hardware platforms over time.

Given all the bugs and glitches and the way even new Nvidia cards struggle, CP77 comes off as a straight brute force endeavor.
 
False again; you use TFLOPs as a measure like a robot reading a spec sheet, without knowing what real-world performance means. The RDNA 2 architecture is more performant than the GCN 1.1 architecture of 2013, and that is partially visible in backward compatibility. Ghost of Tsushima runs at 1080p 30fps on PS4 and 4K 60fps on PS5; that is 8 times the pixels per second. Without using the more advanced features, the PS5 GPU is 7 to 8 times more powerful than the PS4 GPU. On the GPU side this is probably about the same improvement as between the previous two generations, which is not bad given the slowdown of Moore's Law, although power consumption is higher on PS5 and XSX compared to PS4 and Xbox One.

From the PS3 CPU to the PS4 CPU there was no big improvement at all; some of the Cell power used for graphics went back to the GPU, and Jaguar was weak. The PS5 and Xbox Series CPUs are 5 to 6 times more powerful in normal workloads and probably 10 times more powerful in SIMD workloads. If the PS5 reserves the same amount of RAM for the OS as the Xbox Series X, the usable RAM is 2.7 times larger; more importantly, memory bandwidth is only 2.5 times higher, but GPU memory compression is better in RDNA 2 (the same reason Nvidia GPUs need less bandwidth than GCN GPUs), so here again the improvement is higher than on paper, and among other things AMD has a patent to mitigate memory contention in an APU. But memory bandwidth is probably the least improved aspect, given the console GPUs don't have Infinity Cache.

RAM and streaming storage are linked, and the SSD speed jump is massive, around 100x; it means RAM size is not a problem, and this is why MS said having a fast SSD effectively multiplies RAM by 2.5, like having 33.75 GB of RAM dedicated to the game. Besides the CPU, this was the other weakness of the PS4/XB1; devs were fighting the streaming limitations from day one. You don't talk about all the elements of the console, but that's nothing surprising from someone who appears to have only a very superficial notion of how the CPU, GPU, APU, other coprocessors, RAM and storage work, beyond being able to read a spec sheet and maybe a benchmark, and I'm not even sure of that after reading your commentary about the 7870.

This would be good for the PS4 comparison if the 7870's 2.5 TFLOPs were the whole story, but it is not, because the 7870's limited VRAM size hurts that PC GPU a lot. You repeat lie after lie, and you were proven wrong multiple times on this one. We all know you are a troll, but I have my doubts; maybe you are a bot, because human beings learn when they make an error or are wrong.

And as for the list, it is a matter of taste; it is a graphics/technology list, and TLOU2 does some things much better than DS. I think the character models look better, and the animation is far ahead too. IMO I would not have given it first place, but it is one of the best-looking 2020 titles and one of the best from the old generation.
Great post.

I think the "My GPU is bigger" talk is essentially a case of "I put my PC together so I'm an Engineer".
Linus Tech Tips syndrome.
 