GART: Games and Applications using RayTracing

With respect to Minecraft, people can see what they are getting, and it's entirely up to them to judge if they want to be blown away by it.
I do not intend to accuse anyone of false marketing or anything like that, totally not.
I have had this in mind for many years, because the GI method I'm working on also has this 'problem': a game could run at equal settings and FPS on both a strong and a weak GPU. So how do we measure the performance difference then, if all we may see is that the latter has more lag?
That's a problem, and it will only grow in the future. There is no simple way to measure lag or sample counts externally. We need to pay more attention to spot those things, but we lack any absolute measure that tech journalism or review sites require.
We have this problem with RT in general, texture space shading, and world space GI techniques. The only solution would be games providing detailed data from built-in benchmarks. A single 'score' is not enough, and can't be related to other games.
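To make that concrete, here is a minimal sketch of the kind of decoupled data a built-in benchmark could report instead of a single score (all names and numbers are hypothetical, just an illustration of the idea):

```cpp
// Hypothetical per-run benchmark report: one aggregate score tells us little,
// but decoupled metrics expose where a GPU actually spends its budget.
#include <cstdint>
#include <cstdio>

struct BenchmarkReport {
    float    avgFrameTimeMs;       // classic metric, still needed
    float    avgInputLatencyMs;    // end-to-end input-to-photon latency
    uint64_t raysTracedPerSec;     // RT throughput
    uint64_t texelsShadedPerSec;   // texture space shading throughput
    float    avgLightingLagFrames; // how far lighting trails geometry on average
};

void printReport(const BenchmarkReport& r) {
    std::printf("frame: %.2f ms, latency: %.2f ms\n", r.avgFrameTimeMs, r.avgInputLatencyMs);
    std::printf("rays/s: %llu, texels/s: %llu, lighting lag: %.1f frames\n",
                (unsigned long long)r.raysTracedPerSec,
                (unsigned long long)r.texelsShadedPerSec,
                r.avgLightingLagFrames);
}

int main() {
    BenchmarkReport r{16.6f, 45.0f, 900000000ull, 250000000ull, 3.2f};
    printReport(r);
}
```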
 
So how do we measure the performance difference then, if all we may see is that the latter has more lag?
Like framerate, there is a lower bound, or in this case an upper bound, past which it becomes unplayable. The latency is consistent, however; it's not like at some moments it's going to suddenly be more responsive and at others not, since any change in latency is tied to frame rate. Which is why we haven't seen games so laggy that it's unplayable, though we certainly complain about how some games are less responsive than others.
 
It's an interesting topic anyways.
It is, but I don't think it applies to RT games alone. The most advanced rasterized game in this generation is Red Dead 2, specifically the PC version, and it suffers from input lag as well, even when running at an unlocked 60fps or more.

It seems the more advanced rendering you perform in your pipeline, the less responsive your game will be.
 
Visual lag in lighting does not affect framerate or input latency, so it's not about the game feeling responsive or not. If we ever get to the point where lighting is completely decoupled from framerate, FPS (and maybe also resolution) is no longer a useful metric for performance (and neither is input lag).
We will move in this direction slowly, but we are already on track.
It's not the end of the world, but it could happen we end up depending only on marketing claims from IHVs or game devs, making the industry less transparent and trustworthy even on the tech side of things. So it is a (minor) problem we should address.
 
We will move in this direction slowly, but we are already on track.
Not sure if we are on track necessarily. There will always be a distribution of games where the responsiveness of the update will need to be faster or could be slower. I'm willing to bet the distribution of this is still largely the same, only it's not as narrow as it was before. But I don't believe there is data to suggest that the mean is shifting towards more latent.
 
It seems the more advanced rendering you perform in your pipeline, the less responsive your game will be.
I don't think so. I've heard RDR2 feels laggy to play, but I think it's because of advanced animation. Having natural locomotion for your game avatar contradicts the idea of responsive player movement, because a biped character has to obey the laws of physics (at least visually, to look natural) and so cannot change direction instantly. It's probably a real and serious problem for future games, but not related to laggy lighting, which is independent of input or gameplay. (I think the solution to the problem is to have responsive control only for the camera, but the avatar is allowed to lag behind a bit.)

There will always be a distribution of games where the responsiveness of the update will need to be faster or could be slower.
I realize I'm being misunderstood by both of you.

So no, the lag I meant is only visual. E.g. in Minecraft RTX I saw that the shadow of a box the player holds in their hands is very laggy, and you can see how temporal accumulation and denoising are doing their thing.
But the game still feels responsive, runs at a good framerate, and there is no additional input lag because of RT.
Well, actually the framerate is lower than with RT off, so of course input lag may be affected and some may notice that, but let's assume we compare 60fps with RTX on vs. 60fps with RTX off. The only lag we would notice is visual (and only in the lighting, not the geometry!) - that's what I meant.

Now consider the example of a future game with full texture space shading. Tiles of texture are updated stochastically until we come close to 16ms. Then we rasterize the scene using the current state of the texture, and continue stochastic updates for the next frame.
This game would run on any GPU at 60fps. Even a slower rasterizer would only result in fewer texture updates, but constant 60fps is guaranteed.
At this point, FPS would tell us nothing about the performance of the GPU. We would want to know how many texels per second the GPU can update instead.
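A minimal CPU-side sketch of such a loop, just to make the idea concrete (all names are hypothetical; a real implementation would shade on the GPU and bias tile selection by importance):

```cpp
// Shade texture tiles stochastically until the 16 ms budget is nearly spent,
// then rasterize with whatever texture state we have. A slower GPU simply
// shades fewer tiles per frame; the framerate itself stays fixed.
#include <chrono>
#include <random>
#include <vector>

struct Tile { int id; /* texels, material, last update time, ... */ };

void shadeTile(Tile&) { /* path trace / shade the texels of this tile */ }
void rasterizeScene() { /* draw the scene sampling the cached texture */ }

void frame(std::vector<Tile>& tiles, std::mt19937& rng) {
    using clock = std::chrono::steady_clock;
    const auto budget = std::chrono::milliseconds(14); // leave ~2 ms for raster
    const auto start  = clock::now();

    std::uniform_int_distribution<size_t> pick(0, tiles.size() - 1);
    while (clock::now() - start < budget) {
        shadeTile(tiles[pick(rng)]);   // stochastic selection of the next tile
    }
    rasterizeScene();                  // always hits the frame deadline
}

int main() {
    std::vector<Tile> tiles(4096);
    std::mt19937 rng{42};
    for (int i = 0; i < 3; ++i) frame(tiles, rng);
}
```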
 
This game would run on any GPU at 60fps. Even a slower rasterizer would only result in fewer texture updates, but constant 60fps is guaranteed.
I see what you are getting at. You can only get behind by so much, and there will be limits before everyone starts noticing it's too much. If you are 2-3 frames behind at 120fps it's not going to be as bad. Unless you fix the time to complete lighting and shadows at 33ms or maybe even 66ms, and allow the rest of the game to update in 2ms, for instance.
 
You can only get behind by so much and there will be limits before everyone starts noticing it's too much.
Yes, but it becomes much more forgiving if we have texture space shading to cache results:
For the most part, lighting is static. A low update probability in such areas of the scene is fine.
If a lot of action is going on, eyes are forgiving of lighting errors and focus on geometry instead.
If no action happens but lighting changes a lot (turning off the single light in the current room), you can temporarily reduce lighting resolution to update more surface area (this feature requires decoupling texture and lighting resolution, of course). A rough sketch of such per-tile scheduling is below.
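Purely illustrative, assuming per-tile estimates of "lighting change" and "on-screen motion" are available (all names hypothetical):

```cpp
// Per-tile scheduling heuristic for a texture space shading cache,
// covering the three cases listed above.
#include <algorithm>
#include <cstdio>

struct TileState {
    float lightingChange; // 0..1: how much incoming light changed since last update
    float screenMotion;   // 0..1: how much action/geometry motion covers this tile on screen
};

// Pick an update probability and a lighting resolution scale for one tile.
void scheduleTile(const TileState& t, float& updateProb, float& resolutionScale) {
    // Mostly static lighting: a low update probability is fine.
    updateProb = std::clamp(0.05f + t.lightingChange, 0.05f, 1.0f);
    // Heavy action: eyes focus on geometry, so cap what we spend on lighting.
    if (t.screenMotion > 0.8f) updateProb = std::min(updateProb, 0.25f);
    // Big lighting change but a calm scene: trade lighting resolution for coverage.
    resolutionScale = (t.lightingChange > 0.5f && t.screenMotion < 0.2f) ? 0.5f : 1.0f;
}

int main() {
    float p, s;
    scheduleTile({0.9f, 0.1f}, p, s); // e.g. the single light in a room was switched off
    std::printf("update probability %.2f, resolution scale %.2f\n", p, s);
}
```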

But I'm still as doubtful about TS as I was a year ago. The preprocessing times for global parametrization turned out to be a real problem I had not thought about before.
I need to make it work at least for the low resolution I need for GI, but I'm not sure if I can extend this to full resolution texture level.
So it's hard even before you can touch runtime complexity. :/
 
The reason for the lag in visual information in Minecraft RTX is that the game is unfinished - asking the devs, they want to eliminate as much of it as possible for the full release. The denoiser and the irradiance cache are still unfinished.
 
Great Minecraft RTX vs SEUS PTGI comparison
from comments section:
Thanks guys for such a great comparison! It's interesting for a few reasons. First, you are comparing a HW vs SW solution. Second, it's low level DX12 + DXR 1.0 vs OGL + GL's compute shaders. Third, it's a more or less universal, fully path-traced renderer with clever tricks vs a hybrid renderer highly tailored to Minecraft, with both rasterization and SW path traced GI. Here is my score for both.

Universality: SEUS is great, but doesn't support moving entities and other dynamic geometry. SEUS's RT acceleration structure is highly customized for cubic blocks and chunks, which are static. Minecraft RTX, on the other hand, uses a much more general BVH with dynamic object support, hence all dynamic objects are supported (and differently shaped, non-cubic objects must also be supported, though likely without irradiance cache support, which is likely baked into the cubes' vertices). Hence, the point goes to Minecraft RTX for its versatility. Minecraft RTX - 1, SEUS - 0

Physical correctness and plausibility: SEUS uses rasterization for shadows, caustics and, it seems, a number of other effects; it's a hybrid renderer after all. Due to rasterization, it doesn't support realistic umbra, penumbra and other soft shadows from area lights, and we all know the sun in Minecraft is huge (area wise), hence almost all shadows must be soft and diffuse like in path traced Minecraft RTX. With rasterization, you can only do so much, as in SEUS. Even with variable penumbra shadow map sampling and filtering, it will still be highly limited (though SEUS doesn't use Percentage-Closer Soft Shadows). Even though SEUS still supports soft shadows from indirect lighting via path tracing, so does Minecraft RTX. So, the point goes to Minecraft RTX for realistic primary path traced shadows from area lights and for path traced caustics. Minecraft RTX - 2, SEUS - 0

There are many other aspects in which Minecraft RTX is much better: less noise (more rays) and irradiance cache visibility in low light conditions - 1:05, more bounces and less energy loss due to spatio-temporal RT filtering (the environment better matches the magma's color and the whole scene is brighter) - 1:10, ray marched volumetric sun shafts with color transport in Minecraft RTX, support for dynamic reflections (moving objects) on the full range of PBR materials (from 0 to 1 roughness), and blocks with transparent textures (such as leaf blocks on trees) are rendered correctly - you can notice how the GL renderer with SEUS misses back faces on such blocks - 2:28 (look at the tree). As you can see, the devil is in the details. Minecraft RTX is obviously much more advanced, though you'd expect this from a fully path traced title with HW ray tracing acceleration.
 
AstroNaughty Games shows how to implement ray tracing with space game Grimmstar
April 22, 2020
"Space is big. Really big. When we set out to create Grimmstar, we had to ensure that the scale of space was well represented. This brought a problem to the forefront of our development: lighting. Thanks to NVIDIA and Unreal Engine 4’s usage of real-time ray tracing, our problems have been solved, and the result is even better than we anticipated.

Grimmstar features entire solar systems through which the player can freely traverse. We have to incorporate rather large levels that have enormous planets, moons, stars, and other celestial objects in realistic proportion to the size of the player’s ship. This presented a number of issues.

First, building lighting information (even with Lightmass Importance Volumes) for each level was far too strenuous. A single sub-level gave us 7GB of lightmap information. That just won’t cut it, so we had no other option but to check the box to force no pre-computed lighting. Second, the vast majority of objects are in space, so they are dynamically moving. Space station rings, asteroids, ships, planets, and moons all move because there is no resistance. Lastly, the very large distances between objects and the enormous variation in the size of assets inside levels led to some less-than-desired outputs when using Cascaded Shadow Maps. They aren’t bad, but they’re not quite the quality we set out to achieve."
https://www.unrealengine.com/en-US/...plement-ray-tracing-with-space-game-grimmstar
 
So they're designing a game that only a small percentage of PC gamers can play?
 
I don't think so. I've heard RDR2 feels laggy to play, but I think it's because of advanced animation. Having natural locomotion for your game avatar contradicts the idea of responsive player movement, because a biped character has to obey the laws of physics (at least visually, to look natural) and so cannot change direction instantly. It's probably a real and serious problem for future games...
Not only future games. It's a problem identified in Uncharted back in 2007. ND wanted realistic animation, but had to compromise to maintain responsiveness. There isn't any solution - as you say, responsive gaming animation defies the laws of physics. Either gamers learn to adapt to realistic games, which is actually a tall ask because there isn't the force feedback of natural forces informing momentum and the like, or realistic animation never happens because it can't coincide with arcade controls, and we have sliding feet and turning on the spot forever more.

As for the matter of lagginess in the renderers, I wonder if heavy temporal artefacts add to a sense of latency? Does perception consider the movement to happen at the start of the change, or when the last frame of the previous state fades out of view? Do 5 frames of motion smearing give a sense of 5 frames of lag?
 
I wonder if heavy temporal artefacts add to a sense of latency?
Who cares, if realistic character movement will make games slower anyways? :D

But seriously, your worries are not (yet) justified. We have seen some artifacts like ghosting from TAA that look like physical lag, but that's mostly resolved and no longer an issue.
Current RT will not add to this, because it only affects the lighting not the geometry. I think even if the smearing can look like motion trails in rare cases, it is not worse than what we have seen with TAA.

But it may become an interesting question when we go full RT. If this ever happens, the smearing would affect geometry as well in a trivial implementation.
But we'll come up with similar solutions as with TAA to filter out too heavy smearing and trade it for inaccurate lighting temporally as usual.

I wonder if the transition to full RT will happen early. Currently it seems inefficient to use slow RT for primary visibility when raster can do this much faster.
But with RT we can update just some changing regions of the screen and reproject the rest. In contrast, raster loses its efficiency with partial updates.
That's quite interesting. Video compression codecs often work by transforming quads of the previous frame into the next, which proves reprojection would work for us the same way.
This could solve the ancient problem of video games calculating the same pixel again and again over hundreds of frames, which is just a waste.
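
A toy sketch of that reprojection idea (hypothetical names, CPU-side only for clarity): follow each pixel's motion vector back into the previous frame; if it lands on screen and its region is not flagged as changed, reuse the old color, otherwise trace a fresh primary ray. Only changed or disoccluded regions pay the full RT cost.

```cpp
#include <vector>

struct Color  { float r, g, b; };
struct Motion { float dx, dy; };

// Stub: a real renderer would path trace primary visibility here.
Color tracePrimaryRay(int /*x*/, int /*y*/) { return {0.f, 0.f, 0.f}; }

void reprojectFrame(int w, int h,
                    const std::vector<Color>&  prev,    // previous frame's colors
                    const std::vector<Motion>& motion,  // per-pixel motion vectors
                    const std::vector<bool>&   changed, // per-pixel "needs re-trace" flags
                    std::vector<Color>&        out)
{
    for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x) {
        const int i  = y * w + x;
        const int px = x - (int)motion[i].dx;   // where this pixel was last frame
        const int py = y - (int)motion[i].dy;
        const bool onScreen = px >= 0 && px < w && py >= 0 && py < h;
        out[i] = (onScreen && !changed[i]) ? prev[py * w + px]      // cheap reuse
                                           : tracePrimaryRay(x, y); // full-cost update
    }
}

int main() {
    const int w = 320, h = 180;
    std::vector<Color>  prev(w * h), out(w * h);
    std::vector<Motion> motion(w * h, Motion{0.f, 0.f});
    std::vector<bool>   changed(w * h, false);
    reprojectFrame(w, h, prev, motion, changed, out);
}
```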

On the other hand this defies the idea of Monte Carlo integration over multiple frames, so there is no free lunch.
But because RT is so flexible with mixing any kind of stochastic approaches, it's probably easy to find good compromises and we might get some unexpected efficiency wins.
 
But seriously, your worries are not (yet) justified. We have seen some artifacts like ghosting from TAA that look like physical lag, but that's mostly resolved and no longer an issue.
Current RT will not add to this, because it only affects the lighting not the geometry. I think even if the smearing can look like motion trails in rare cases, it is not worse than what we have seen with TAA.
The worst case is the path traced Minecraft in the very low light situations where the multiple bounces accumulating make it special-effect level smeary. I doubt many mainstream games will suffer from that though as devs will avoid it by going with other solutions, but the worst case is definitely a problem.
 
The worst case is the path traced Minecraft in the very low light situations where the multiple bounces accumulating make it special-effect level smeary. I doubt many mainstream games will suffer from that though as devs will avoid it by going with other solutions, but the worst case is definitely a problem.

Apparently minecraft doesn’t produce motion vectors so they’re doing something very hacky to feed dlss the motion data it needs. Sounds like that temporal lag can be improved.
 