Current Generation Games Analysis Technical Discussion [2024] [XBSX|S, PS5, PC]

I mean, the water in FS2024 is nice for such a huge game, but objectively far from the best water released in a video game.

If he had said FS2024 is the best-looking flight sim ever released, though, I would agree.
 
I believe it's a full game - https://store.steampowered.com/app/491540/The_Bus/

Just not AAA, but there haven't been that many AAA UE5 games in general yet. Depending on how you classify games like Wukong, Stalker 2 and Hellblade 2, the only AAA UE5 game at all might be Fortnite. Now that I think about it, even upcoming UE5 AAA games are sparse in the immediate future. 2025 might only have that Marvel game that you would fully classify as AAA. The other AAA projects known (or highly speculated) to use UE5 are, I believe, much further down the pipeline currently.

The Matrix demo and those user-created demos are, I believe, the only UE5 "content" currently set in a somewhat contemporary open-city environment with faster-than-foot traversal akin to Cyberpunk.
 
RTX 'rays per second' counts are really off. RTX was introduced as 10 gigarays per second. Real-world actual rays per second is a tiny, tiny fraction of this.

10 gigarays of actual performance should provide really high-quality results. I feel the promise of realtime raytracing was overstated, and though valuable, it's a long way from early dreams.

What's the minimum amount of rays-per-second, actually achieved, needed for 1080p60 full pathtracing, and how far are we from that? A recursion depth of 7 provides good results in classic Monte Carlo renderers, so 2 million pixels x 7 x 60 fps = 840 million rays per second.

What are the bottlenecks preventing this? I'm guessing we're not anywhere near attaining that level of raw performance and we're going to be hacking graphics with RT components for years to come.
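
As a quick sanity check, here is that budget written out (a sketch in Python; the resolution, bounce depth and frame rate are the figures above, with 1080p rounded down to 2 million pixels in the original arithmetic):

```python
# Back-of-the-envelope ray budget for 1080p60 path tracing at one path per pixel.
# Figures follow the post above; the post rounds 1080p to 2 million pixels.

width, height = 1920, 1080
bounces = 7            # recursion depth that gives good results in classic Monte Carlo renderers
fps = 60

pixels = width * height                    # 2,073,600
rays_per_frame = pixels * bounces          # one path per pixel, up to 7 segments each
rays_per_second = rays_per_frame * fps

print(f"{rays_per_second / 1e6:.0f} million rays per second")  # ~871 million (840M with 2M pixels)
```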
 
RTX 'rays per second' counts are really off. RTX was introduced as 10 gigarays per second. Real-world actual rays per second is a tiny, tiny fraction of this.

10 gigarays actual performance should provide really high quality results. I feel the promise of realtime raytracing was overstated, and though valuable, it's a long way from early dreams.

What's the minimum ideal amount of rays-per-second actually-achieved for 1080p60 full pathtracing, and how far are we from that? Recursion depth of 7 provides good results in classic Monte Carlo renderers, so 2 million pixels x 7 x 60fps = 840 million rays per second.

What are the bottlenecks preventing this? I'm guessing we're not anywhere near attaining that level of raw performance and we're going to be hacking graphics with RT components for years to come.

Depends how they’re counting gigarays. Is each BVH node and triangle intersection counted as a separate ray? I assume so as it’s impossible to define a metric that encapsulates the entire traversal of a single ray as that’s entirely content specific.

If that’s how they’re counting, the numbers make a bit more sense. A single ray traversal likely requires dozens (hundreds?) of box and triangle intersection tests. Either way we need dramatically higher performance than what’s there today.
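
To put rough numbers on that distinction, here is a sketch assuming the marketed figure counts individual intersection tests; the tests-per-ray value is purely an assumption for illustration, not a measurement:

```python
# Hypothetical illustration of 'gigarays' counted as intersection tests vs. full traversals.
# The tests-per-ray figure is an assumption for illustration, not a measured value.

quoted_rays_per_second = 10e9   # the marketed 10 gigarays/sec
tests_per_ray = 50              # assumed box + triangle tests per full BVH traversal

effective_traversals_per_second = quoted_rays_per_second / tests_per_ray
print(f"{effective_traversals_per_second / 1e6:.0f} million full ray traversals per second")  # 200 million
```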
 
What he calls one of the path tracing problems in Cyberpunk 2077 has more to do with post processing like automatic exposure than path tracing.

In Cyberpunk 2077 you can also increase the number of rays via a mod, and past a relatively low count the accuracy hardly improves. With ReSTIR, which Cyberpunk 2077 got last year, you get infinite bounces.
 
Depends how they’re counting gigarays. Is each BVH node and triangle intersection counted as a separate ray? I assume so as it’s impossible to define a metric that encapsulates the entire traversal of a single ray as that’s entirely content specific.
Could just be triangle intersect tests. The reference was to the 'RT Cores'. These accelerate BVH traversal and triangle intersects. From the source Turing whitepaper:

Turing can deliver far more Giga Rays/Sec than Pascal on different workloads, as shown in Figure 21. Pascal is spending approximately 1.1 Giga Rays/Sec, or 10 TFLOPS / Giga Ray to do ray tracing in software, whereas Turing can do 10+ Giga Rays/Sec using RT Cores, and run ray tracing 10 times faster.

Using their TF conversion, that's 10 TF per gigaray on Pascal. Turing's 10(+!) GR/s on RT cores, for launch RTX, would be equivalent to 100 TF of Pascal compute. We need, what, 100x more than whatever that's accomplishing to hit the universal, no-developer-effort lighting model?
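
Spelling that conversion out (the 100x factor at the end is the open question above, not a measured requirement):

```python
# The whitepaper's TFLOPS-per-gigaray conversion as applied in the post above.

pascal_tf_per_gigaray = 10   # Pascal: ~10 TFLOPS of software compute per gigaray/sec
turing_gigarays = 10         # launch RTX (Turing) claim: 10+ gigarays/sec on RT cores

pascal_equivalent_tf = turing_gigarays * pascal_tf_per_gigaray
print(f"Turing RT cores ~ {pascal_equivalent_tf} TF of Pascal compute spent on ray tracing")

# If we really need ~100x more than that, the target would be on the order of
# 10,000 TF-equivalent of Pascal-style software ray tracing.
print(f"100x target ~ {100 * pascal_equivalent_tf} TF-equivalent")
```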

I'm not sure how this relates to launch RT performance versus current best-available RT. I don't know that nV have ever followed up with a more detailed look at RT acceleration; they just report 'faster and better'. Benchmarking does give us an idea of how it's scaled from launch, though, which provides something of a trajectory.

Edit: I'll just refine my definition of the end goal here. We're presently running at well below one sample per pixel. For ideal RT pathtracing, we need 1 sample per pixel for reflections and refractions, with multiple rays, and several samples per pixel for lighting. I guess we should stop looking at pure Monte Carlo and look at the optimised offline renderers like Cycles. The main issue is generating enough information that we eliminate noise, up to whatever standard our theoretical ideal denoising solution can work with.
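
Rough numbers for that target, as a sketch; the lighting sample count and average path length are assumptions for illustration, not figures from any shipping renderer:

```python
# Extending the earlier 1080p60 budget to samples per pixel (spp).
# The lighting spp and average path length are assumptions for illustration.

width, height, fps = 1920, 1080, 60
pixels = width * height

spp_reflection_refraction = 1   # 1 spp for reflections/refractions, as above
spp_lighting = 4                # "several" samples per pixel for lighting (assumed 4)
bounces = 3                     # assumed average path length per sample

rays_per_second = pixels * (spp_reflection_refraction + spp_lighting) * bounces * fps
print(f"~{rays_per_second / 1e9:.1f} billion rays per second")  # ~1.9 billion
```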
 
What he calls one of the path tracing problems in Cyberpunk 2077 has more to do with post processing like automatic exposure than path tracing.

In Cyberpunk 2077 you can also increase the number of rays via a mod, and past a relatively low count the accuracy hardly improves. With ReSTIR, which Cyberpunk 2077 got last year, you get infinite bounces.
Yeah.
But if it doesn't do it when RT/PT are not enabled, doesn't that sort of fit the point of the video - that enabling RT also results in a noisier image?
 
Still, you should say what the main reason is. It is not path tracing.

Cyberpunk 2077 screenshots: without auto exposure mod vs. with auto exposure mod.
The fact that it only happens in his example with path tracing is more because the image with path tracing enabled is much brighter in places.
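
For context, a minimal sketch of a generic log-average luminance auto-exposure pass (a common textbook scheme, not Cyberpunk 2077's actual implementation, with synthetic frame data): a frame that is brighter in places pulls the exposure scale down, so the two images get tonemapped differently before any noise differences even come into play.

```python
import numpy as np

# Minimal sketch of a generic log-average (geometric mean) luminance auto-exposure pass.
# This is a common textbook scheme, not Cyberpunk 2077's actual implementation; the
# "frames" below are synthetic luminance buffers for illustration only.

def exposure_scale(hdr_luminance, key=0.18, eps=1e-4):
    """Scale factor that maps the frame's log-average luminance to a mid-grey key."""
    log_avg = np.exp(np.mean(np.log(hdr_luminance + eps)))
    return key / log_avg

rng = np.random.default_rng(0)
raster_frame = rng.uniform(0.05, 0.5, size=(1080, 1920))                  # dimmer frame
pt_frame = raster_frame * rng.uniform(1.0, 4.0, size=raster_frame.shape)  # brighter in places

for name, frame in (("raster", raster_frame), ("path traced", pt_frame)):
    print(f"{name}: exposure scale {exposure_scale(frame):.2f}")
# The brighter path-traced frame gets a lower exposure scale, which changes how the
# final image looks regardless of how the lighting was computed.
```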
 
I'll just refine my definition of the end goal here. We're presently running at well below one sample per pixel. For ideal RT pathtracing, we need 1 sample per pixel for reflections and refractions, with multiple rays, and several samples per pixel for lighting. I guess we should stop looking at pure Monte Carlo and look at the optimised offline renderers like Cycles. The main issue is generating enough information that we eliminate noise, up to whatever standard our theoretical ideal denoising solution can work with.

Yep, that’s the goal, but it’s a constantly moving target. One sample per pixel can have very different costs depending on the content you’re tracing. I expect new hardware to crush Portal RTX relatively soon but still struggle with modern geometrically rich games.
 
I noticed the lag effect with the very first RT game I played - Metro. We just aren't there yet. The cards are so underpowered, they had to come up with DLSS in the first place. Even with several ray bounces for film quality, we were using Octane to get a denoising effect - especially on the interiors where the main light source was the sun coming into a single window. I think it's something we will have to deal with for a long time.
 
Can you give a ballpark time-to-render in an offline renderer for gaming-type complexity? I guess that comparison is fraught with incomparable parameters, such as memory speed of render farms versus consumer-level gear. But let's say a movie frame takes 180 seconds to render; we can do a bare-minimum performance-delta comparison between the horsepower that takes 180 seconds and what it'll take to get that down to realtime. If instead it's only 3 seconds on a workstation, we're not as far off.
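
Working out the speedup those hypothetical frame times imply (the 180-second and 3-second figures are just the illustrative numbers above, ignoring the hardware differences mentioned):

```python
# Speedup needed to bring the hypothetical offline frame times down to 60 fps.
# The 180 s and 3 s figures are illustrative numbers from the post, not measurements.

realtime_budget = 1 / 60   # ~16.7 ms per frame

for offline_seconds in (180, 3):
    speedup = offline_seconds / realtime_budget
    print(f"{offline_seconds:>3} s per frame -> {speedup:,.0f}x faster for 60 fps")
# 180 s per frame -> 10,800x; 3 s per frame -> 180x
```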
 
Can you give a ballpark time-to-render in an offline renderer for gaming-type complexity? I guess that comparison is fraught with incomparable parameters, such as memory speed of render farms versus consumer-level gear. But let's say a movie frame takes 180 seconds to render; we can do a bare-minimum performance-delta comparison between the horsepower that takes 180 seconds and what it'll take to get that down to realtime. If instead it's only 3 seconds on a workstation, we're not as far off.
We are very far, from what I've experienced. It would totally depend on the content (as someone else mentioned). I just made a thread about the requirements for games to be considered fully path-traced, and it's a bear. In my experience, smoke, fog, clouds and fur/hair were the things that brought CPUs to their knees. Most of the participating media took hours due to the ray-marching. The fur was a big problem in interior scenes that had one light source coming into the room. Getting rid of the noise took several hours of render time from all the samples we had to cast. It's probably much, much faster now with denoising being incorporated into the pipelines. I haven't worked in film since 2016. I'm working with Unreal 5.0 nowadays.
 
We've had famous timespans for things like Pixar movies taking umpteen hours per frame. Are there no numbers out there? And yeah, I guess it is a moving target as once you get to a decent time per frame, you ramp up the quality, and some features like volumetrics will just tank path-tracing. Are massive render-farms still a thing? You'd have thought with the exponential increase in power plus GPGPU things would have gotten nippier!
 