Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

Per Irobotos' comments, I remain very confident the data presents an unclear picture: clear evidence exists that the Xbox's performance is closer to what you'd expect from the TF number in certain game engines we know are forward-looking (Forward+, compute-heavy, etc.).

On the other hand, this isn't a clear advantage in every game you'd expect it to be, and there are obvious performance deficits and hitches that are hard to justify. The evidence is way too muddy for the confidence some posters are displaying to either side -- we have to wait and see.

One data point I'm really interested in seeing over the next ~4-5 years is future multiplatform games made by Activision. One hand-wavy, hard-to-pin-down claim you could make is that Xbox titles get less dev time from multiplatform studios due to market share. A major multiplat studio being acquired by Microsoft, but continuing to support some games on PS5, is a rare experiment to see whether that's a real thing.
 
Yet PC GPUs are both wide and fast.

In relation to consoles, yes.

In relation to RDNA 2 versus PC GPUs, no.

A 6900 has 5120 shader units, while a number of GCN-based GPUs (including Fury and Nano at 28nm) sport 4096. They dropped down two major nodes (28nm vs 7nm) and employ three times as many transistors (8.9 vs 26 billion) to increase the max shader count by only 25%. Most of the performance increase in terms of TFLOPs comes from clock increases, not more execution units.

It would be easy to categorize RDNA2 as a narrow design, especially since Nvidia's 3000 series sports over 10,000 FP32 units while also supporting tensor and ray tracing cores.
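To put rough numbers on that claim, here's a back-of-the-envelope sketch. The clock speeds are my assumed approximate boost values (~1.05 GHz for Fury, ~2.25 GHz for the 6900 XT), not figures from this thread:

```python
# FP32 throughput estimate: 2 FLOPs per shader per clock (FMA).
# Clocks below are assumed approximate boost values, not official specs.
def tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000.0

fury = tflops(4096, 1.05)    # GCN Fiji, 28nm
rx6900 = tflops(5120, 2.25)  # RDNA 2, 7nm

print(f"Fury   : {fury:.1f} TFLOPs")
print(f"6900 XT: {rx6900:.1f} TFLOPs")
print(f"total gain  : {rx6900 / fury:.2f}x")
print(f"from shaders: {5120 / 4096:.2f}x")
print(f"from clocks : {2.25 / 1.05:.2f}x")
```

Under those assumptions the wider shader array contributes only 1.25x; the remaining ~2.1x of the TFLOP gain comes from clocks, which is the point above.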
 
In relation to consoles, yes.

In relation to RDNA 2 versus PC gpus, no.

A 6900 has 5120 shader units, while a number of GCN-based GPUs (including Fury and Nano at 28nm) sport 4096. They dropped down two major nodes (28nm vs 7nm) and employ three times as many transistors (8.9 vs 26 billion) to increase the max shader count by only 25%. Most of the performance increase in terms of TFLOPs comes from clock increases, not more execution units.

Very true. How does that compare to NV? I also wonder where the ideal ratio between clocks and CUs/shaders/execution units would be; going wider is obviously essential going forward, but one wonders how important clocks are.
It's very interesting that Sony went the complete opposite way versus what MS did. It's basically the only interesting difference between the two consoles :p
 
The interesting one is the Xbox Series S. Same architecture, but massively chopped-down GPU performance.

Some devs did the obvious: lower resolution paired with 60 fps.

But some other devs are still unable to deliver 60 fps at all, even with severe visual cuts (e.g. Guardians of the Galaxy).
 
PS5 in BC mode is faster than an RTX 2070 at the same settings (though we can have DLSS on the RTX); quite the norm for Sony ports.
What is missing in his PS5 vs PC comparisons is the cost of CBR on the Pro, which can be quite high. Native 1512p is not equal to CBR 4K (native ~1530p + the cost of CBR). Far from it. Native 1800p against CBR 4K would be fairer, IMO.
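As a rough pixel-count sketch of that point (assuming checkerboard rendering shades half the target pixels each frame, with the reconstruction pass as extra cost on top):

```python
# Checkerboard 4K shades roughly half of the native 4K pixels each
# frame, then reconstructs the rest; the reconstruction pass itself
# is additional cost not counted here.
def pixels(w, h):
    return w * h

native_4k = pixels(3840, 2160)   # 8,294,400
cbr_shaded = native_4k // 2      # shaded pixels per checkerboard frame
# 16:9 height whose native pixel count equals the CBR shading load
eq_height = (cbr_shaded * 9 / 16) ** 0.5

print(f"CBR 4K shaded pixels: {cbr_shaded:,} (~{eq_height:.0f}p native)")
print(f"native 1512p: {pixels(2688, 1512):,}")
print(f"native 1800p: {pixels(3200, 1800):,}")
```

So CBR 4K's shading load alone already exceeds native 1512p, before counting the reconstruction cost, which is why the ~1530p figure above plus CBR overhead is the fairer baseline.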
 
PS5 in BC mode is faster than an RTX 2070 at the same settings (though we can have DLSS on the RTX); quite the norm for Sony ports.

I don't think that's correct. He talks around 31:40 about the PC taking a heavy performance hit from vsync, and he proceeds to show the exact same scene with vsync off running at much higher than 60fps on the 2070 that he later uses to compare to the PS5 - this time with vsync on, showing the 2070 dropping below 60fps. To make the problem worse, because vsync is on, any frame that doesn't hit 16.67ms gets recorded as a 33.33ms frame, which drags the average way down - so even if just 5% of frames render in, say, 16.8ms, the hit to the average is disproportionately large. He then bizarrely goes on to describe this as 30-40% more performance for the PS5 in parts, despite never showing a delta larger than 25% in one extreme instance in his video - specifically the one below, which he earlier shows running at well over 60fps on the 2070 with vsync off lol.
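To illustrate that quantisation effect, here's a small sketch assuming simple double-buffered vsync at 60 Hz (the frame times are illustrative, not measurements from the video):

```python
import math

REFRESH_MS = 1000.0 / 60.0  # 16.67 ms per refresh at 60 Hz

def vsynced(render_ms, refresh=REFRESH_MS):
    # Double-buffered vsync: a frame that misses a refresh deadline
    # is held until the next one, so frame time rounds up to a
    # multiple of the refresh interval.
    return math.ceil(render_ms / refresh) * refresh

# 95% of frames comfortably under budget, 5% just over it.
frames = [15.0] * 95 + [16.8] * 5
raw_avg = sum(frames) / len(frames)
sync_avg = sum(vsynced(f) for f in frames) / len(frames)

print(f"raw    : {raw_avg:.2f} ms ({1000 / raw_avg:.1f} fps)")
print(f"vsynced: {sync_avg:.2f} ms ({1000 / sync_avg:.1f} fps)")
```

Just 5% of frames missing the deadline drags the displayed average from ~66 fps down to ~57 fps, which is the kind of disproportionate hit being described.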

He could easily have mitigated the issue by simply running with adaptive vsync, so that vsync turns off when the framerate drops below 60fps (as many console games do), or by using an external frame rate limiter, which he mentioned earlier in the video doesn't take as high a toll on the frame rate as the game's limiter or vsync.



Some other fun points I picked up:
  • He's referring to his 2070 OC as a 2070 Super again
  • Apparently the PC version isn't really an upgrade over the PS5, despite the better graphics, better image quality, higher framerates and faster loading, because AMD cards have a performance bug (which the developers are aware of and are working to fix) that lowers performance more than it should in rare instances
  • There's strong emphasis on this being 'just a PS4 Pro backwards compatibility' game that doesn't use the PS5's RDNA2(?) features, with the implication being that its performance advantage would be even greater if it were PS5 native. While this is true, it completely ignores the fact that the same would also be true had the game been developed on the PC side to take native advantage of RDNA2 or modern Nvidia architectures.
My final thought on this is that while it's not strictly a performance comparison, it might be nice to compare the quality of experience available on the 2070 with that on the PS5, which is probably a fairer comparison of the relative capabilities of these two systems. Ignoring DLSS, while useful for the raw performance comparison, does unfairly penalise the Nvidia GPU, because it spends silicon and die space on those tensor cores which could otherwise have been spent on more CUs, for example. So putting all of that silicon to work - as supported by the game - by turning on DLSS seems like a fairer way to compare the two. DLSS Balanced looked to have similar or slightly better image quality than the PS5 solution in those comparisons, so how would the experience compare using that, for example? Could the 2070 hold 60fps at higher than PS5 settings, perhaps?
 
  • There's strong emphasis on this being 'just a PS4 Pro backwards compatibility' game that doesn't use the PS5's RDNA2(?) features, with the implication being that its performance advantage would be even greater if it were PS5 native. While this is true, it completely ignores the fact that the same would also be true had the game been developed on the PC side to take native advantage of RDNA2 or modern Nvidia architectures.
You're drawing a false equivalence here; there is a big difference between running a game in BC mode on PS5 and simply not using all of the architecture's features. About vsync, maybe you're right, but it's not his fault that it's not well implemented in the PC version.
 
PS5 in BC mode is faster than an RTX 2070 at the same settings (though we can have DLSS on the RTX); quite the norm for Sony ports.
And it is still DX11, so even the CPU might become the bottleneck here. I really don't understand why developers still use DX11 when they convert a PS4 game. DX12 or Vulkan should be much better alternatives for that.
 
You're drawing a false equivalence here; there is a big difference between running a game in BC mode on PS5 and simply not using all of the architecture's features. About vsync, maybe you're right, but it's not his fault that it's not well implemented in the PC version.
Yes, Death Stranding on PS5 showed us this. Big performance improvement from Pro to PS5: about 4x more pixels pushed (thanks to the GPU) on PS5 compared to Pro, instead of roughly 2.5x when using GCN BC.
 
Yes, Death Stranding on PS5 showed us this. Big performance improvement from Pro to PS5: about 4x more pixels pushed (thanks to the GPU) on PS5 compared to Pro, instead of roughly 2.5x when using GCN BC.

There is Death Stranding and The Touryst. In both cases it is much better than the PS4 Pro version. I don't like the fact that he compared PC and a BC title. This is not very interesting.
 
As someone already pointed out, that goes both ways: if something's optimized for PS5, the same applies to the PC with RDNA2/modern GPU architectures.
 
I don't think that's correct. He talks around 31:40 about the PC taking a heavy performance hit from vsync. And he proceeds to show the exact same scene with vsync off running at much higher than 60fps on the 2070 that he later uses to compare to the PS5 - this time with vsync on showing the 2070 dropping below 60fps.
I honestly have no clue what is happening with Vsync in this testing.
Since when does vsync do this in any game?
 
And it is still dx11 so even the cpu might get the bottleneck here. I really don't understand why developers still use dx11 when they convert a PS4 game. Dx12 or Vulkan should be much better alternatives for that.


Because DX12 can be a pain in the ***, same for Vulkan. And DX11 too, but it seems easier for a lot of studios. I prefer a well-designed DX11 renderer over some of the stuttering DX12 atrocities we see more and more...
 
You're drawing a false equivalence here; there is a big difference between running a game in BC mode on PS5 and simply not using all of the architecture's features. About vsync, maybe you're right, but it's not his fault that it's not well implemented in the PC version.

I may be oversimplifying it, so I'm happy for you to explain why this is wrong, but as I see it, a game running on PS5 in BC mode is not taking advantage of either the specific architectural strengths and weaknesses of the RDNA architecture or the GPU's more modern feature set. However, it is still able to apply its full compute and rasterization resources to running the game - just less efficiently than a game built for that specific GPU.

And that's exactly how a modern PC GPU would run a port of a PS4 game. NXG even went into some depth in his video about that and the reasons for using DX11 and having to work around the UMA etc... Had the game been built for modern DX12U-capable PCs with discrete memory pools, to leverage all their modern features and capabilities, and tuned for the RDNA2 and Turing/Ampere architectures from the ground up, then it would obviously perform better there too.
 
Yes, Death Stranding on PS5 showed us this. Big performance improvement from Pro to PS5: about 4x more pixels pushed (thanks to the GPU) on PS5 compared to Pro, instead of roughly 2.5x when using GCN BC.
Is that due to being a 'native' PS5 title, or engine improvements? We don't know the extent of either. Guess we'll see when it hits PC.
 