5.5 is 25% better than 4.4. We have seen other games with up to 9x improvement (from PS4 to PS5) when GPU limited.

That aligns with the TPU database, which rates the 5700 XT as 4.4x faster than the R7 265, which was roughly equivalent to the PS4 GPU. The PS5 GPU is a bit faster than the 5700 XT, so roughly 5x more performance seems reasonable when not using RDNA2-specific features (quick math below).
https://www.techpowerup.com/gpu-specs/radeon-r7-265.c2558
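A quick sanity check of that estimate, purely back-of-the-envelope; the TFLOPS figures below are the commonly quoted spec numbers and are my assumption, not something stated in the thread:

```python
# Back-of-the-envelope only. 4.4x is TPU's relative-performance figure for
# R7 265 -> RX 5700 XT; the TFLOPS numbers are the commonly quoted specs.
tpu_r7_265_to_5700xt = 4.4
rx5700xt_tflops = 9.75      # 40 CUs @ ~1.9 GHz boost
ps5_tflops = 10.28          # 36 CUs @ up to 2.23 GHz

print(f"5.5x vs 4.4x: {5.5 / 4.4:.2f}x ({(5.5 / 4.4 - 1):.0%} better)")
print(f"PS4 -> PS5 estimate: {tpu_r7_265_to_5700xt * ps5_tflops / rx5700xt_tflops:.1f}x")
# -> 1.25x (25% better), and roughly 4.6x, i.e. "roughly 5x" before RDNA2 features
```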
https://blog.quanticdream.com/detroit-a-vulkan-in-the-engine/
Some explanation here of the console APIs versus the DX12/Vulkan APIs.
Really interesting article, thanks for the link. I found this part particularly interesting:
"The CPU of the PlayStation® 4 is an AMD Jaguar with 8 cores. It is obviously slower than some recently-released PC hardware; but the PlayStation® 4 has some major advantages, such as very fast access to the hardware. We find the PlayStation® 4 graphics API to be much more efficient than all PC APIs. It is very direct and has very low overhead. This means we can push a lot of draw calls per frame. We knew that the high number of draw calls could be an issue with low-end PCs.
One other big advantage is that all the shaders can be compiled off-line on PlayStation® 4, meaning the loading of shaders is nearly instantaneous. On PC, the driver needs to compile shaders at load time: this cannot be an off-line process because of the wide configurations of GPUs and drivers that need to be supported.
During the development of Detroit: Become Human on PlayStation® 4, artists could design unique shader trees for all materials. This resulted in an insane number of vertex and pixel shaders, so we knew from the beginning of the port that this will be a huge problem."
I'm surprised draw calls are still so relatively expensive on the PC. I thought one of the major reasons for DX12 and Vulkan was to vastly decrease the cost of draw calls. Or is that just DX12? Or is it the case that they are vastly cheaper than under DX11/OpenGL but still very expensive compared to the PS4? Which begs the question: just how bad was DX11!
In regards to the shader compilation, I would guess this is the reason for HZD's compilation at start-up. For games that are designed from day 1 for the PC, developers can optimise to avoid the pitfalls of long shader compilation steps, making it possible to complete in the background during gameplay or at reasonable-length loading screens (a rough sketch of that idea is below). But a game developed exclusively for consoles, with no accommodation for the PC, would have no need to implement such optimisations, so when it comes to porting it to the PC there are some fundamental aspects of the engine which make shader compilation something that has to be done up front.
Personally, I don't mind it, but I can understand their reluctance to take this approach given that the step could take up to 20 minutes on a low-end PC.

In fact I started a thread on exactly my fear of this becoming a bigger issue going forward:
https://forum.beyond3d.com/threads/...pc-about-to-become-a-bigger-bottleneck.61929/
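To make the "do it in the background" option concrete, here is a minimal conceptual sketch (not how HZD or any specific engine actually does it; `compile_pipeline` is just a stand-in for the slow driver-side compile): fire the compilation jobs off on a thread pool while the loading screen runs, and only block on a variant at the moment it's actually needed.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def compile_pipeline(material: str) -> str:
    """Stand-in for the real, slow driver-side shader/pipeline compile."""
    time.sleep(0.1)                       # pretend each variant takes a while
    return f"compiled:{material}"

materials = [f"material_{i:03d}" for i in range(200)]

# Shader compilation usually scales well across cores, so throw it at a pool
# while the loading screen (or early gameplay) is running.
pool = ThreadPoolExecutor()
pending = {m: pool.submit(compile_pipeline, m) for m in materials}

def get_pipeline(material: str) -> str:
    """Return a compiled pipeline, blocking only if that variant isn't ready yet."""
    return pending[material].result()

# e.g. the renderer asking for the first few variants it actually needs right now
print(get_pipeline("material_000"))
print(get_pipeline("material_005"))
```

A real engine would presumably also persist the results to disk keyed on driver and game version, so the work only gets redone after an update, which is more or less what an up-front step plus a cache buys you.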
5.5 is 25% better than 4.4. We have seen other games with up to 9x improvement (from PS4 to PS5) when GPU limited; rough math after the list:
- The Last Guardian: PS4 1080p ~20fps to PS5 1890p ~60fps, that's up to 9x (and running in BC)
- Spider-Man Remastered has about 7-8x more perf on PS5 using the 60fps mode (so using RDNA2 features).
Another indie game whose name I don't remember has up to an 8x perf improvement using BC on PS5.
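The rough math for those figures, treating "perf" as pixels drawn per second (resolution ratio times framerate ratio), which is a ceiling rather than a measured GPU gain:

```python
def pixel_rate_gain(res_from, fps_from, res_to, fps_to):
    """Ratio of pixels per second: (pixels_to * fps_to) / (pixels_from * fps_from)."""
    (w0, h0), (w1, h1) = res_from, res_to
    return (w1 * h1 * fps_to) / (w0 * h0 * fps_from)

# The Last Guardian: 1080p ~20fps (PS4) -> 1890p ~60fps (PS5, BC)
print(round(pixel_rate_gain((1920, 1080), 20, (3360, 1890), 60), 1))   # ~9.2
# Ghost of Tsushima / Spider-Man style jump: 1080p 30fps -> 2160p 60fps
print(round(pixel_rate_gain((1920, 1080), 30, (3840, 2160), 60), 1))   # 8.0
```

As pointed out further down the thread, pixels don't scale linearly with GPU cost, so treat these as upper bounds rather than a direct measurement of how much faster the GPU is.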
If that's in 'prioritize resolution' mode (which I believe locks to 4K), that's extremely impressive. I've seen videos of the Series X in this mode with vsync unlocked where it gets over 100fps. Prioritize performance is perhaps another matter, where the res is variable, but even there it doesn't look like it drops under 4K that often.
Is it possible Arkham Knight PC was doing it in realtime or something?

Perhaps, but you can see Arkham Knight doing its shader compiling when you start it up after a new driver: look at CPU usage % through RivaTuner and it will be pegged at 100% for a minute or so (shader compiling is usually a task that scales well across threads). Arkham Knight's stuttering was due more to how it handled streaming in textures. If it were purely a CPU bottleneck with shader compiling then much faster CPUs would be able to eliminate it, but that was never really the case.
This was running in BC mode at PS4 Pro settings. I think the next-gen update that dropped on December 4th caps the framerate at 60fps or 120fps depending on the mode, and they cranked up the details and lighting to be more in line with the PC version, but to what extent I don't know.
It seems you don't understand what streaming is. The latency doesn't need to be as good as RAM: the streaming system is not used for assets needed in the next frame. Even the portals in R&C Rift Apart work by filling RAM with assets that will be used 1 to 2 seconds later. Here it will not be a bottleneck, because you can load the best-quality assets into RAM without a problem. The RAM or VRAM of any console or PC holds far more assets than the GPU will render in the next few frames. Depending on the framerate, RAM holds at minimum 60 to 120 frames' worth, even in an unrealistic scenario where a team ships unique assets for every two-second window, and in practice much more because of game-size limits and the cost of asset creation.
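Some rough numbers on that streaming window, a sketch only: the 5.5 GB/s raw figure is Sony's quoted SSD throughput, and the 1-2 second lookahead is just the window mentioned above, both used here as illustrative assumptions.

```python
# Rough streaming-budget arithmetic; numbers are illustrative assumptions.
ssd_raw_gb_per_s = 5.5          # PS5 SSD raw throughput as quoted by Sony
total_ram_gb = 16.0

for lookahead_s in (1.0, 2.0):  # "assets used in 1 to 2 seconds"
    budget_gb = ssd_raw_gb_per_s * lookahead_s
    print(f"{lookahead_s:.0f}s lookahead -> ~{budget_gb:.1f} GB of fresh assets "
          f"({budget_gb / total_ram_gb:.0%} of total RAM)")
```

In other words, even at raw (uncompressed) rates the drive can replace a large chunk of RAM inside the window the streaming system plans for, which is why the latency only has to be "seconds ahead" rather than RAM-like.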
Destiny 2, with its dynamic resolution, shows the same thing as GOT or more: the framerate is locked at 60fps on PS5. It doesn't mean GOT fully pushes what the PS5 is capable of in this backward compatibility mode. GOT is capped at 1800p because of the PS4 Pro version.
EDIT: the 7870 is 38% more powerful in theory than the PS4 GPU
All this cross-gen/back-compat performance, RDNA2 vs GCN, console vs PC talk suggests people are struggling to define what they expect from a generational leap in graphics other than bigger numbers.
4x the resolution does not equal 4x the performance. Resolution doesn't scale that way with performance. Check basically any game in this review: for the most part the 3080 is about the same performance at 4K as the 2070S is at 1440p. 4K is 2.25x the resolution of 1440p, but look at the GPUs' relative performance at the same resolution... the 3080 isn't even close to 2.25x the 2070S performance.
I think people that are saying the PS5 and XSX are under-powered are going to be eating their words later. There are going to be some amazing looking (and playing) games coming to these systems.
High-end PCs are always going to be more powerful. You may as well be saying the Sun rises in the East. It's obvious.
Yeah it's a good thread, I've followed it from the start. I'm still of the opinion that I don't really care though. If devs can't find a way to hide the compilation during gameplay or load screens (and from the above link it certainly seems like they have options if they plan for it from the start) then I'm more than content to have a 5-10 min compilation on first run. Yes you have to do it again with every driver update or patch, but in reality the number of patches or driver updates most people experience during the playtime of a typical game is going to be largely irrelevant.
Consoles punch above their specs usually though and I don't think this generation is going to be any different.
How are you calculating Spiderman:RM having 7-8x more perf using the 60fps mode exactly?

From 1080p 30fps to native 4K (mostly native or near native) at 60fps on PS5. The first gameplay they showed was a whole mission at (locked) native 4K 60fps.
Great post.

False again: you use TFLOPS as a measure like a robot reading a spec sheet, without knowing what real-world performance means. The RDNA 2 architecture is more performant than the GCN 1.1 architecture of 2013, and that is partially visible in backward compatibility. Ghost of Tsushima runs at 1080p 30fps on PS4 and 4K 60fps on PS5; that is 8 times more pixels per second. Without using the more advanced features, the PS5 GPU is 7 to 8 times more powerful than the PS4 GPU. On the GPU side that is probably the same amount of improvement as between the previous two generations, which is not bad given the slowdown of Moore's Law, though power consumption is higher on PS5 and XSX than on PS4 and Xbox One.
From the PS3 CPU to the PS4 CPU there was no big improvement at all: some of the Cell power that had been used for graphics went back to the GPU, and Jaguar was weak. The PS5 and Xbox Series CPUs are 5 to 6 times more powerful in normal workloads and probably 10 times more powerful in SIMD workloads. If the PS5 reserves the same amount of RAM for the OS as the Xbox Series X, usable RAM is 2.7 times higher; more importantly, memory bandwidth is only 2.5 times higher, but GPU memory compression is better in RDNA 2 (the same reason Nvidia GPUs need less bandwidth than GCN GPUs), so here again the improvement is higher than on paper, and AMD also has a patent for mitigating memory contention in an APU. But memory bandwidth is probably the least improved aspect, given the console GPUs don't have Infinity Cache.
RAM and streaming storage are linked, and the SSD speed jump is massive (around 100x). It means RAM size is not a problem, and this is why MS said that having a fast SSD is like having 2.5 times more RAM, i.e. effectively 33.75 GB of RAM dedicated to the game (rough numbers below). Apart from the CPU, this was the other weakness of the PS4/XB1: devs were fighting the streaming limitations from day one. You don't talk about all the elements of the console, but that's not surprising for someone who seems to have a very superficial notion of how the CPU, GPU, APU, other coprocessors, RAM and storage work beyond being able to read a spec sheet and maybe a benchmark, and I'm not even sure about that after reading your commentary about the 7870.
That comparison would hold for the PS4 if the 2.5 TFLOPS 7870 were a close match, but it isn't, because the limited VRAM of the 7870 hurts that PC GPU a lot. You repeat lie after lie, and you have been proven wrong multiple times on this one. We all know you are a troll, but I have my doubts; maybe you are a bot, because human beings learn when they make an error or are wrong.
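For what it's worth, the ratios quoted above roughly check out against the published spec numbers; the usable-RAM figures below (around 5 GB for games on PS4, 13.5 GB on Series X) are the commonly cited allocations and are my assumption:

```python
# Sanity check of the ratios quoted above (usable-RAM figures are assumptions).
ps4_game_ram_gb, xsx_game_ram_gb = 5.0, 13.5
ps4_bw_gb_s, ps5_bw_gb_s = 176, 448

print(f"usable RAM:       {xsx_game_ram_gb / ps4_game_ram_gb:.1f}x")        # ~2.7x
print(f"memory bandwidth: {ps5_bw_gb_s / ps4_bw_gb_s:.1f}x")                # ~2.5x
print(f"2.5x 'effective' RAM on Series X: {xsx_game_ram_gb * 2.5:.2f} GB")  # 33.75 GB
```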
And as for the list, it is a matter of taste. It is a graphics/technology list, and TLOU2 does some things much better than DS: I think the character models look better, and the animation is far ahead too. IMO I would not have given it first place, but it is one of the best-looking 2020 titles and one of the best from the old generation.