Current Generation Games Analysis Technical Discussion [2020-2021] [XBSX|S, PS5, PC]

They can't be lowered, especially with deferred shading, where geometry is completely decoupled from pixel shading.
Reducing geometry processing time can only make the savings on pixel processing larger, since pixel processing will then take up a larger fraction of the frame time.

With the simplest implementation of forward shading, there can be some correlation between geometry culling speed and VRS gains, but nobody does bare-bones forward rendering these days.
People usually do at least a depth prepass with forward shading to save on redundant fragment shading and to fill in normals and depth for screen-space effects. Hence early culling in primitive shaders or mesh shaders will again do nothing to reduce pixel shading time.
Only the simplest forward rendering without a depth prepass can benefit from early culling, so that back-facing geometry is not shaded, but there are other tools to kill those fragments.
Also, early culling won't fix the main issue with forward renderers: it won't help with small triangles. We are approaching one-pixel triangles in games, and with forward shading the quad shading granularity will eventually lead to 4x supersampling for one-pixel triangles, so shading efficiency will be 1/4.
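To put a rough number on that quad-granularity point, here's a minimal back-of-the-envelope sketch (plain C++; the quad-count estimate and the best-case alignment for tiny triangles are my own simplifying assumptions, not any GPU's exact rasterization rule):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Crude model of 2x2-quad shading granularity: the GPU launches fragment
// work in full quads so that derivatives exist, so a triangle covering
// 'coveredPixels' pixels pays for 4 * quadsTouched shader invocations.
// The quad-count estimate (interior quads plus edge partials, best-case
// alignment for tiny triangles) is an illustrative assumption.
double shadingEfficiency(double coveredPixels) {
    double edge = std::sqrt(coveredPixels);  // ~side length in pixels
    double quadsTouched = std::max(1.0, coveredPixels / 4.0 + edge - 1.0);
    return coveredPixels / (4.0 * quadsTouched);  // useful lanes / launched lanes
}

int main() {
    for (double px : {10000.0, 100.0, 16.0, 4.0, 1.0})
        std::printf("%7.0f-px triangle -> shading efficiency ~%.2f\n",
                    px, shadingEfficiency(px));
    // A 1-pixel triangle still occupies a whole quad: 1 useful lane out of 4,
    // i.e. the effective 4x supersampling / 1/4 efficiency described above.
}
```

Large triangles stay near 1.0 efficiency; as triangles shrink toward one pixel, the model falls toward the 1/4 figure above.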
I thought the UE5 demo showed that the best way to render very small triangles is with pure compute, with the rest of the polygons ideally rendered by primitive shaders (at least on PS5)?
 
I thought the UE5 demo showed that the best way to render very small triangles is with pure compute, with the rest of the polygons ideally rendered by primitive shaders (at least on PS5)?
When it comes down to actual rendering, isn't that demo not rendering small triangles? I thought that was the point of the virtualized pipeline: to avoid rendering those pixel/subpixel-sized polygons without losing much of the detail they would have provided.
 
I thought the UE5 demo showed that the best way to render very small triangles is with pure compute?
These are orthogonal problems.
UE5 still very likely does deferred shading, where all shading happens on a full-screen quad rather than on small triangle fragments as in forward rendering.
Epic also has to calculate derivatives manually, since there is no built-in function for them in compute shaders; so even if they had to process fragments in a forward fashion, they still might have had some opportunities to merge quads in the CS before shading them.
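For anyone unfamiliar with why derivatives come up here: ddx/ddy-style intrinsics exist only in pixel shaders, precisely because the hardware shades 2x2 quads; in a compute shader you have to reconstruct them yourself from neighbouring lanes. A minimal CPU-side sketch of the idea (C++; the attribute function is a made-up example, and this is not claimed to be Epic's actual implementation):

```cpp
#include <cmath>
#include <cstdio>

// Stand-in for whatever per-pixel attribute a shader needs derivatives of
// (e.g. UVs for mip selection). Purely hypothetical example function.
static float uvAt(int x, int y) { return std::sin(0.1f * x) * 0.5f + 0.02f * y; }

int main() {
    // Walk the screen in 2x2 quads the way a compute shader emulating
    // fragment-style derivatives would: evaluate the attribute on the
    // quad's lanes, then take horizontal/vertical differences.
    const int W = 4, H = 4;
    for (int y = 0; y < H; y += 2) {
        for (int x = 0; x < W; x += 2) {
            float v00 = uvAt(x, y), v10 = uvAt(x + 1, y), v01 = uvAt(x, y + 1);
            float ddx = v10 - v00;  // coarse horizontal derivative for the quad
            float ddy = v01 - v00;  // coarse vertical derivative
            std::printf("quad(%d,%d): ddx=%+.3f ddy=%+.3f\n", x, y, ddx, ddy);
        }
    }
    // On the GPU the same differences come from wave/subgroup lane swaps or
    // groupshared memory instead of re-evaluating the function per lane.
}
```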
 
More work/time dedicated to choosing console-specific settings/features is of course the answer with the best results. But when a 3P needs to develop a cross-platform game for no fewer than 4 "old" consoles plus 3 "new" ones, that might become an impossible choice.
There are cases where the Series X was seemingly used as the baseline, but the end result was a Series S being left out of new-gen features like ray tracing or 60 FPS modes, resulting in a poorer experience than the one provided by the older One X.

Regardless, we should hope for this to be solved when 3Ps stop making games for the 8th-gen consoles, but the temptation to keep developing games for a userbase of 160 million will be too great for a long time.

It definitely will, but I think the transition time should be par for the course, ignoring pandemic-related lockdowns (which seem to be easing up/going away). We had an even larger cumulative install base going from 7th gen to 8th gen, and the industry went more or less fully 8th-gen by 2015.

That said, I know there are market differences today vs. back then, which could make it harder to fully transition. But that just puts more pressure on Sony and Microsoft in particular to make fuller use of their own hardware with 1P content, because 3P uptake could be slower than before.
 
Now that we're talking UE5: I just found out that UE5 will/can use doubles for world/scene coordinates.
When it comes down to actual rendering, isn't that demo not rendering small triangles. I thought that was the point of the virtualized pipeline, to avoid rendering those pixel/subpixel sized polygons without losing much of the detail they would have provided.
As far as we know, it's a compute software rasterizer with dynamic LoD.
Meaning it reduces/selects a polygon count appropriate for the screen area to get near pixel-sized polygons, and rasterizes them using a compute/software path.

The result will most likely end up in the G-buffer and then be shaded just like every other opaque object.
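As a toy illustration of the "select a polygon count appropriate for the area" step (a sketch under assumed numbers, not UE5's actual cluster-selection heuristic):

```cpp
#include <cstdio>

// LoD 0 = full-detail mesh; assume each coarser level roughly doubles the
// projected triangle edge length. Coarsen while triangles would still stay
// at or under ~1 pixel. Illustrative heuristic only, not UE5's actual
// cluster selection.
int selectLod(double edgePxAtLod0, int maxLod) {
    int lod = 0;
    double edge = edgePxAtLod0;
    while (edge * 2.0 <= 1.0 && lod < maxLod) { edge *= 2.0; ++lod; }
    return lod;
}

int main() {
    // A source mesh so dense its triangles project to 1/100 of a pixel can
    // be coarsened ~6 levels and still keep roughly pixel-sized triangles.
    for (double e : {0.01, 0.1, 0.9, 5.0})
        std::printf("LoD-0 edge %5.2f px -> use LoD %d\n", e, selectLod(e, 12));
}
```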
 
In 120 FPS mode the PS5 is 1.2x faster than a PC with an overclocked RTX 2070, and loading on PS5 is 2x faster than on a PC with an NVMe SSD.
Well, this is not a like-for-like comparison. Resolution is another issue, but the main point should be that DX11 is really a bit limiting here. It is also funny that games still don't use all the GPU memory available: just ~4 GB is used on the RTX 2070.
Loading times are not really surprising. They were already fast from the HDD of the PS4 Pro, so the NVMe won't change much.

By the way, what CPU is he using for the PC? A Zen+ (2700), or what kind of CPU? This can also make a huge difference, because Zen 2 is much, much faster in games.
But the main point would still be DX11.
 
Well, this is not a like-for-like comparison. Resolution is another issue, but the main point should be that DX11 is really a bit limiting here. It is also funny that games still don't use all the GPU memory available: just ~4 GB is used on the RTX 2070.
Loading times are not really surprising. They were already fast from the HDD of the PS4 Pro, so the NVMe won't change much.

By the way, what CPU is he using for the PC? A Zen+ (2700), or what kind of CPU? This can also make a huge difference, because Zen 2 is much, much faster in games.
But the main point would still be DX11.

Are you contending that the difference in load times is attributable to D3D11 shader compilation? To truly get an idea whether that was the case, we'd have to know his testing conditions, such as whether he was testing the game's very first startup or subsequent startups.

Some games on startup, or during multiple loading sections, will issue dummy draw calls to trigger shader compilation. Others, particularly games running on UE4, will trigger shader compilation during runtime when a draw call is issued, which can cause frame-time spikes in-game. On subsequent runs of the game none of this matters, since most shaders have already been compiled by the driver and stored in a cache for future reuse, so the effect of shader compilation on loading times becomes nonexistent.
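The first-run/subsequent-run asymmetry can be pictured with a minimal sketch (C++; the file naming, hashing, and dummy "compiler" are hypothetical stand-ins, not any driver's real scheme):

```cpp
#include <fstream>
#include <functional>
#include <iterator>
#include <string>
#include <vector>

std::vector<char> compileSlow(const std::string& src);

// Hypothetical stand-in for a driver-level shader cache: key on a hash of
// the shader source, persist the compiled blob to disk, reuse it on later
// runs. Real drivers key on far more (compiler version, GPU, flags, ...).
std::vector<char> getShaderBlob(const std::string& src) {
    const std::string path =
        "shadercache_" + std::to_string(std::hash<std::string>{}(src)) + ".bin";
    if (std::ifstream in{path, std::ios::binary}) {  // cache hit: cheap load
        return {std::istreambuf_iterator<char>(in), {}};
    }
    std::vector<char> blob = compileSlow(src);       // first run: full cost
    std::ofstream{path, std::ios::binary}
        .write(blob.data(), static_cast<std::streamsize>(blob.size()));
    return blob;
}

// Dummy "compiler" so the sketch is self-contained; imagine seconds of work.
std::vector<char> compileSlow(const std::string& src) {
    return {src.begin(), src.end()};
}

int main() {
    auto blob = getShaderBlob("float4 main() : SV_Target { return 1; }");
    (void)blob;  // a second run of the process would take the cache-hit path
}
```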
 
In 120 FPS mode the PS5 is 1.2x faster than a PC with an overclocked RTX 2070, and loading on PS5 is 2x faster than on a PC with an NVMe SSD.
Great, we finally have that game benchmarked against PC GPUs. Good results there.

Well, this is not a like-for-like comparison. Resolution is another issue, but the main point should be that DX11 is really a bit limiting here. It is also funny that games still don't use all the GPU memory available: just ~4 GB is used on the RTX 2070.
Loading times are not really surprising. They were already fast from the HDD of the PS4 Pro, so the NVMe won't change much.

By the way, what CPU is he using for the PC? A Zen+ (2700), or what kind of CPU? This can also make a huge difference, because Zen 2 is much, much faster in games.
But the main point would still be DX11.
The PS5 CPU has reduced clocks (and supposedly dynamic ones) and 8 MB of L3 cache instead of 32 MB, which is significant. We know the PS5 CPU is about on par with a 1700X (mostly because of the reduced L3 cache). At 120 Hz the main bottleneck should be the CPU. Why should he use a high-end 5 GHz CPU to test such a mode? How would that be fair?

DirectX 11? You mean the tools are not mature on PC?
 
The PS5 GPU performs where it belongs: around, or a tad above, 5700 XT performance, which even Nioh 2 showcases. In other titles it's below that, or a bit over, etc. Around 2070 performance for non-RT rendering, with DLSS discounted altogether for normal rendering, of course.

No idea why some are still hung up on comparing a late-2020 machine to a 2018 mid-range RTX GPU.
 
The PS5 GPU performs where it belongs: around, or a tad above, 5700 XT performance, which even Nioh 2 showcases. In other titles it's below that, or a bit over, etc. Around 2070 performance for non-RT rendering, with DLSS discounted altogether for normal rendering, of course.

No idea why some are still hung up on comparing a late-2020 machine to a 2018 mid-range RTX GPU.
Probably because it's a very popular card and NXGamer has it in his PC :D He could use a 2021 RTX 3060 and the results would be very similar.
 
Probably because it's a very popular card and NXGamer has it in his PC :D He could use a 2021 RTX 3060 and the results would be very similar.
Now that would be a good comparison, as it's the same generation of GPU, so we could be a bit more conclusive about the results.

I suppose this just really underlines what great bang for the buck the consoles offer: you can buy a PS5 for the cost of a 3060!
 
How can there ever be a like-for-like comparison between a game on consoles and PC? :???:
First, you should always try to balance the settings so that it at least looks close to the console version. Then you should be open about what options you set. And at the very least, you should use the same resolution.

The PS5 CPU has reduced clocks (and supposedly dynamic ones) and 8 MB of L3 cache instead of 32 MB, which is significant. We know the PS5 CPU is about on par with a 1700X (mostly because of the reduced L3 cache). At 120 Hz the main bottleneck should be the CPU. Why should he use a high-end 5 GHz CPU to test such a mode? How would that be fair?

DirectX 11? You mean the tools are not mature on PC?
The CPU with the reduced cache is also used in the Ryzen 4xxx mobile processors. The cache reduction didn't really make the CPU much worse: it still easily outperforms a Zen+ processor because of the much better IPC, especially in games.

And the thing about DX11 is that it still has, for example, a draw-call problem. So you need a much higher-clocked CPU to compensate for that, because most of the time a single thread limits the whole performance.
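A crude bit of arithmetic shows why that one submission thread becomes the frame-rate ceiling (all numbers below are illustrative assumptions, not measurements):

```cpp
#include <cstdio>

int main() {
    // Illustrative model: under D3D11 nearly all draw-call overhead lands on
    // one submission thread. The per-draw cost is an assumed number; real
    // costs vary wildly by driver, state changes, and CPU.
    const double usPerDraw = 25.0;  // microseconds of driver/runtime work per draw
    for (int draws : {1000, 2000, 5000, 10000}) {
        double cpuMs = draws * usPerDraw / 1000.0;
        std::printf("%5d draws -> %6.1f ms on the submit thread -> max ~%.0f fps\n",
                    draws, cpuMs, 1000.0 / cpuMs);
    }
    // At 1000 draws/frame this thread alone caps the game at ~40 fps no
    // matter the GPU; a higher-clocked CPU shrinks the per-draw cost, which
    // is exactly the compensation described above.
}
```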
 
I suppose this just really underlines what great bang for the buck the consoles offer: you can buy a PS5 for the cost of a 3060!
Consoles at release are subsidized and have a greater economy of scale than most discrete components, so they're always a spectacular bang-for-buck in processing power compared to anything else on the market.

Unless they're consoles from Nintendo.
 
File sizes for Crash 4, both before and after the next-gen updates. Looks like a 10% difference (2 GB) between PS5 and Series X|S. Large differences between last-gen and current-gen though, around 50%.

Details from https://www.futuregamereleases.com/...-size-cut-to-half-on-ps5-and-xbox-series-x-s/

  • 30.01 GB – Xbox One (without day one patch)
  • 45.38 GB – PS4
  • 22 GB – Xbox Series
  • 20 GB – PS5
  • 9.4 GB – Nintendo Switch

The game has a file size of 45 GB on PS4, while the Xbox One version takes up 30 GB without the day-one patch, which increases its size to 40 GB.
 
Consoles at release are subsidized and have a greater economy of scale than most discrete components, so they're always a spectacular bang-for-buck in processing power compared to anything else on the market.

Unless they're consoles from Nintendo.

Is the PS5 more spectacular, considering how the PS4 fared against its PC contemporaries in price-to-performance? Not sure about the other components, but the SSD at least seems very good compared to what the PS4 had versus the most common storage on PC back then.
 
The PS5 GPU performs where it belongs: around, or a tad above, 5700 XT performance, which even Nioh 2 showcases. In other titles it's below that, or a bit over, etc. Around 2070 performance for non-RT rendering, with DLSS discounted altogether for normal rendering, of course.

No idea why some are still hung up on comparing a late-2020 machine to a 2018 mid-range RTX GPU.

Either to exaggerate the PS5's value, or to point at the currently terrible PC market of jacked-up GPU prices, even for some of the older cards. Maybe both.
 
Is the PS5 more spectacular, considering how the PS4 fared against its PC contemporaries in price-to-performance? Not sure about the other components, but the SSD at least seems very good compared to what the PS4 had versus the most common storage on PC back then.

Not considering price, but technically (we're at B3D, after all), it's about the same thing really. The PS5 can be compared to about the lowest end NVIDIA and AMD have to offer now, the RX 6700 and the RTX 3060, or to mid-range 2018-19 stuff. CPU-wise they're a generation behind: basically a 3700X heavily cut down in clocks and cache. RAM-wise, the consoles' total is about what a higher-end GPU packs on its own, and at lower speeds.
The PS5's ray tracing performance is basically lower-tier Turing (2060) territory from 2018, and advanced DLSS-style tech is currently absent as well.

Yeah, you can get a PS5 for a 3060's money, but then you're also stuck with 3060 performance in non-ray-traced games, and RT is the next-gen feature now and going forward. A 3060 doing RT and DLSS will outperform the PS5 by quite a margin.
 