GART: Games and Applications using RayTracing


This outdoor shot embodies what we have been saying all along: scenes in games are full of incorrect lighting, abnormal sheens and rims of light around all objects/characters. But when you see the real deal (accurate lighting), you realize what a huge difference it is and how it grounds people and objects in the scene. You get the feeling of "next gen" that we all clamor for, and you really can't go back after that.
 
Heh, so all current hybrid RT implementations are not "physically correct"/"representing natural lighting" now?

It's been repeated multiple times in all RT-related threads by some individuals, incl. you.
 
Someone extracted every comparison from the CD Projekt behind-the-scenes video:
Indoor shots looking so good. Unbelievable how quickly we have gone from one simple effect like reflections, GI or shadows to something like this with path-tracing quality. Only 4 1/2 years.

Yeah it’s great stuff. The Cyberpunk implementation is definitely a “preview” as it’s not going mainstream anytime soon but it’s amazing that we got to this point so fast.
 
Nvidia is a manufacturer of PC graphics cards, not consoles. If your basic point here is that the PC ecosystem should be forever subservient to the lowest common denominator of console hardware and not aim higher, I don't think you really believe that.
I believe that developers should do what's sustainable for themselves and if that means ending exclusive feature development then so be it ...

Nvidia are the ones who sold the entire concept that PC hardware is too good for parity with consoles, so any failings on that front should be directed at them and owned up to on their part. It's not going to be easy keeping up anti-developer practices like anti-parity, so Nvidia ought to be the ones to keep bearing that burden rather than developers, since developers don't have ownership over DLSS nor the means to share it ...

If you abhor the idea of PCs being second-class citizens compared to consoles, then developers moving to UE5 should be seen as an epiphany to you, since Unreal Engine over the years has become a death valley for Nvidia's libraries (GameWorks, PhysX, a no-support/unmaintainable RTX branch; DLSS will follow next). You've had historically PC-centric developers like CD Projekt (who were strong proponents of Nvidia devtech integration with their own REDEngine) now moving to UE5 for better console support compared to their own subpar in-house engine ...
Developers don't like upscaling tech that gives them back significant frame budget to work with? Why in the world aren't all console games running at native 4K then?
Developers don't enjoy getting no contributions for their unified development pipeline ...
The goal is to sell discrete GPUs... which they do a better job of than the competition. DLSS, more than anything, is free marketing for both Nvidia and publishers.

More than anything... I'm worried about developers utilizing DLSS/FSR as a crutch for shitty optimization. That should concern you far more than anything else.
Free marketing doesn't mean free integration, and I'm not worried at all about developers utilizing a proprietary temporal reconstruction technique as a cover-up if they have no intention of keeping it around indefinitely by paying the cost of integration on their own ... (all of Nvidia's libraries eventually meet the same end, which is obsolescence) ...
 
I believe that developers should do what's sustainable for themselves and if that means ending exclusive feature development then so be it ...

I got that. Even though PC hardware refreshes every 2 years and consoles refresh every 7 years, you think it's in developers' best interest for PCs to stagnate for the duration of the console generation. All I can say is that I hope you are very wrong and enterprising developers continue to aim higher and push technology further.
 
I got that. Even though PC hardware refreshes every 2 years and consoles refresh every 7 years, you think it's in developers' best interest for PCs to stagnate for the duration of the console generation. All I can say is that I hope you are very wrong and enterprising developers continue to aim higher and push technology further.
The only reasons those practices were tolerated in the past were the vast technical differences between hardware designs across platforms, shorter development times, smaller budgets, non-standard software engineering methods, and the many different engines ...

But now that platforms have similar architectures, longer game development timeframes, bigger budgets, standard coding practices, and share the engines/code as well, nearly all of the rationale for having game SKUs diverge from each other has truly disappeared ...

Do you think the industry is somehow headed in the wrong direction and should undo all that "progress"?
 
But now that platforms have similar architectures, longer game development timeframes, bigger budgets, standard coding practices, and share the engines/code as well, nearly all of the rationale for having game SKUs diverge from each other has truly disappeared ...

What’s your take on PS5 haptics and how is it different to DLSS from a developer’s perspective? They are both vendor specific technologies not available on competing platforms.

Do you think the industry is somehow headed in the wrong direction and should undo all that "progress"?

Consolidation lowers total cost but it also stifles innovation. Is it a good thing that the PS5 and Xbox are basically the same hardware in a different shaped box? For devs sure, for gamers not so much. Same goes for game engines. Should everyone just use UE5 and call it a day? Of course not because it means every game would inherit the same fundamental strengths and weaknesses and be limited by Epic’s resources, priorities and imagination.

Cross platform rules the day and there are clear benefits to a common baseline feature set across platforms. Allowing for innovation beyond that lowest common denominator will not undo any of that. It is purely additive for those developers with the desire and resources to do more on platforms that offer more.
 
What’s your take on PS5 haptics and how is it different to DLSS from a developer’s perspective? They are both vendor specific technologies not available on competing platforms.
The PS5 haptics feature isn't conceptually much different from other controllers' rumble functionality out there. Whether developers decide to make great use of it or not in comparison to other control peripherals is entirely up to them. Haptic feedback also isn't very invasive to a game's codebase since it only affects the control interface logic, which is a small surface to begin with.

DLSS is invasive to a game's codebase/content and developers don't understand exactly what the neural network is going to do so the propensity for uncontrollable abnormalities in the output is there. It's a major reason why channels are often established between developers and Nvidia devtech so that the former gets the necessary resources for app integration and that the latter gets feedback/data to retrain/modify to improve their models/library. Changing DLSS libraries from the one shipped with the application is not a good idea since that can introduce regressions more easily without developer coordination ...
Consolidation lowers total cost but it also stifles innovation. Is it a good thing that the PS5 and Xbox are basically the same hardware in a different shaped box? For devs sure, for gamers not so much. Same goes for game engines. Should everyone just use UE5 and call it a day? Of course not because it means every game would inherit the same fundamental strengths and weaknesses and be limited by Epic’s resources, priorities and imagination.

Cross platform rules the day and there are clear benefits to a common baseline feature set across platforms. Allowing for innovation beyond that lowest common denominator will not undo any of that. It is purely additive for those developers with the desire and resources to do more on platforms that offer more.
Parity is just a symptom of industry trends, and if you use a commercial engine like Unreal where there's no proper integration for a feature that you want, Epic Games will just refuse to support your branch of their engine (unless you're a big publisher/customer with a better license), hence the "unsupported" branch and tough luck for the developers too. There's a reason why DLSS remains a "plugin" for an engine like Unreal rather than an "upstreamed" feature where you can count on Epic Games to make an intentional effort not to break functionality and to fix it if it does. Even in plugin form, DLSS has to be updated very frequently against a highly active codebase like Unreal Engine, or it'll fall into disrepair. A lot of people here seem to underestimate how much work goes into integrating a feature like DLSS; with major commercial engines it's far from the one-and-done event everyone is thinking of, when the truth is that the plugin gets constant updates ... (it still isn't compatible with Unreal's depth of field effect implementation to this day)

If there is a disposition to do more for a specific platform/hardware on the developers' part, then they'll usually seek out "blessings" from their vendors (whether it be timed exclusive benefits or exclusive features) for assistance ...
 
Many small-time developers managed to integrate DLSS from scratch in a few days, even one-man teams, so this doesn't match reality.


Heh, so all current hybrid RT implementations are not "physically correct"/"representing natural lighting" now?
If they don't support RTGI then yeah, lighting is inaccurate even if the game supports RT shadows/reflections; RTAO helps, but RTGI is needed. Cyberpunk supports RTGI from the sky only at the "Psycho" setting, and only for the first hit; bounce lighting is not ray traced and relies on regular raster probe methods. So in the example I mentioned, without path tracing everything is lit by a generic cubemap of the sky, giving all objects a bluish sheen regardless of their location with respect to light.
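
To illustrate why location stops mattering with that fallback, here's a toy sketch (made-up names, not the game's actual code): the cubemap ambient term depends only on the surface normal, while even a crude ray-traced sky term asks whether the sky is actually reachable from the point being shaded.

```python
import numpy as np

def sky_cubemap(direction):
    """Toy sky: bluish above the horizon, dark below."""
    return np.array([0.2, 0.3, 0.8]) if direction[2] > 0.0 else np.array([0.05, 0.05, 0.05])

def ambient_from_cubemap(normal):
    # Position never enters the lookup, so an object under a bridge, indoors or
    # out in the open all pick up the same bluish sheen.
    return sky_cubemap(normal)

def ambient_ray_traced(position, normal, sky_visible, n_samples=64, rng=None):
    """Crude ray-traced sky term: only count directions that actually reach the sky.
    `sky_visible(position, direction)` is a hypothetical visibility query a ray
    tracer would answer; it is what makes the result depend on location."""
    rng = rng or np.random.default_rng(0)
    total = np.zeros(3)
    for _ in range(n_samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, normal) < 0.0:
            d = -d  # flip into the hemisphere above the surface
        if sky_visible(position, d):
            total += sky_cubemap(d) * np.dot(d, normal)
    # Uniform hemisphere sampling: pdf = 1 / (2*pi), hence the 2*pi factor.
    return total / n_samples * 2.0 * np.pi
```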
 
DLSS is invasive to a game's codebase/content and developers don't understand exactly what the neural network is going to do so the propensity for uncontrollable abnormalities in the output is there. It's a major reason why channels are often established between developers and Nvidia devtech so that the former gets the necessary resources for app integration and that the latter gets feedback/data to retrain/modify to improve their models/library. Changing DLSS libraries from the one shipped with the application is not a good idea since that can introduce regressions more easily without developer coordination ...

This doesn’t really gel. DLSS is as invasive as TAA and FSR as it uses the same inputs and integrates at the same stage in the rendering pipeline. Last I saw the UE DLSS plug-in basically implements a generic UE interface and is a drop in replacement for the built-in TAAU. The “uncontrollable abnormalities” clearly are not DLSS specific. I suppose TAA and FSR abnormalities are controllable / desirable?
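
For what it's worth, the shared integration surface being described here looks roughly like this (a sketch only; the field names are illustrative and not taken from UE, FSR or DLSS headers):

```python
from dataclasses import dataclass
from typing import Callable, Tuple
import numpy as np

@dataclass
class UpscalerInputs:
    """Per-frame data a TAAU-style temporal upscaler consumes.
    Field names are illustrative, not any engine's or vendor's actual API."""
    color: np.ndarray            # low-resolution HDR color
    depth: np.ndarray            # per-pixel depth
    motion_vectors: np.ndarray   # per-pixel screen-space motion
    jitter: Tuple[float, float]  # sub-pixel camera jitter applied this frame
    exposure: float              # exposure value, for pre-exposed color handling
    output_size: Tuple[int, int] # target resolution

def upscale(inputs: UpscalerInputs,
            backend: Callable[[UpscalerInputs], np.ndarray]) -> np.ndarray:
    """The call site stays the same whichever backend is plugged in; built-in TAAU,
    FSR 2 or DLSS would each be wrapped behind `backend`. What differs is whether
    the accumulation/rejection logic inside is readable source or a black box."""
    return backend(inputs)
```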

If there is a disposition to do more for a specific platform/hardware on the developers part then they'll usually seek out "blessings" from their vendors (whether it'd be timed exclusive benefits or exclusive features) for assistance ...

Well sure, asking for help makes perfect sense. The important thing is where the motivation is coming from. I believe there are developers out there who are truly passionate about adopting cutting edge graphics tech. It's hard to accept that every ISV out there is happy with the baseline console feature set unless some vendor cajoles them into doing more. That would be a sad state of affairs. 4A claimed that a unified lighting engine was a primary goal for their rendering tech. They didn't say somebody paid them to think that way. I'd like to think it's the same for CDPR and their latest forays into RT.
 
Heh, so all current hybrid RT implementations are not "physically correct"/"representing natural lighting" now?
Adding to the above, there is a common misconception regarding RT, often repeated now by the media, something like 'rays simulate how light travels in the real world.' I hate that. It's bullshit.
Light does not spread out along a single ray like a laser pointer (almost) does. It spreads in every direction, so a single ray cannot describe this.
Equally, the incoming light affecting the shading of a surface (except a perfect mirror) also comes from all directions, so again rays don't describe this function over a sphere either.

What rays do (for gfx) is answer a visibility test. They do not transport light. Instead, we calculate the light transport using surface/light properties, normals, possibly distance or solid angle, etc.
The reason rays now give us better lighting than former realtime approaches is that we can integrate this light transport now over the whole sphere by taking samples. Basically, every ray (plus possibly a shadow ray from the hitpoint to a light) tells us what one point on that sphere looks like, and by integrating many such point samples we get an approximation of what the whole sphere probably looks like. (If a surface is opaque, a half sphere is actually enough, because light from the backside has no effect.)
But unfortunately one such ray is not enough to tell the whole story about what the hit point looks like. Because light reflects from surfaces and then illuminates other surfaces, we need to chain multiple rays into a path to capture those bounces. That's where the path tracing name comes from.
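
As a minimal sketch of that "integrate by sampling" idea (illustrative only; `incoming_radiance` is a hypothetical callback standing in for tracing the ray, plus a shadow ray, and evaluating the transport at whatever it hits):

```python
import numpy as np

def sample_hemisphere(normal, rng):
    """Cosine-weighted random direction on the hemisphere around `normal`."""
    u1, u2 = rng.random(2)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    # Build an orthonormal basis around the normal, then rotate into world space.
    axis = np.array([0.0, 1.0, 0.0]) if abs(normal[1]) < 0.9 else np.array([1.0, 0.0, 0.0])
    t = np.cross(normal, axis)
    t /= np.linalg.norm(t)
    b = np.cross(normal, t)
    return local[0] * t + local[1] * b + local[2] * normal

def outgoing_light(point, normal, albedo, incoming_radiance, rng, n_samples=64):
    """Monte Carlo estimate of the diffuse light leaving `point`.

    Each sample direction is one ray: it only answers what lies that way
    (via the `incoming_radiance` callback); the transport itself is the
    weighting applied here. For a diffuse BRDF with cosine-weighted sampling,
    the cosine term and the sampling pdf cancel, leaving albedo times the
    mean of the sampled radiance."""
    total = np.zeros(3)
    for _ in range(n_samples):
        total += incoming_radiance(point, sample_hemisphere(normal, rng))
    return np.asarray(albedo) * total / n_samples
```

Chaining into a path happens when `incoming_radiance` itself calls `outgoing_light` at whatever point the ray hits, which is exactly the bounce-by-bounce structure described above.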

To capture all bounces from the real world, we would need a path of infinite length, besides the fact that we would also need infinitely many point samples on the (half)sphere to get all the incoming light.
So to make a claim of 'physically correct', we first need to agree on some error which is acceptable, as there is no way to calculate this precisely.
If we say error below perception, or something like the noise of an image sensor is fine, path tracing is accurate, and can also deal with difficult phenomena like refractions, so it's flexible.
But it's not efficient. Any efficient realtime approach would attempt to cache incoming light at the surface, so we no longer need paths because we can get that information from the cache. A cache with support for angular lookup also removes the need for many samples to approximate the sphere, so both expensive infinities can be removed if we accept some spatial and angular quantization.
Such caching really is difficult to do. Texels of a lightmap were a first attempt, but this became more difficult as games grew larger. It's thus easy to sell path tracing for its simplicity and generality, and the higher the specs and costs need to be, the better for some suspect parties. ;D
But ideally, we'll see a combination of both those approaches in the future. Any research and progress on PT as shown will additionally help with that in the long run.
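
To sketch what such a cache could look like in the simplest possible form (purely illustrative, names made up, not how Lumen, DDGI or any shipping cache is implemented): quantize space into cells and quantize the angular part into a handful of spherical-harmonics coefficients, so shading becomes a lookup instead of a path.

```python
import numpy as np

def sh_basis(d):
    """First-order (L0+L1) real spherical harmonics: 4 values per direction."""
    x, y, z = d
    return np.array([0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x])

class ToyRadianceCache:
    """Spatial quantization: a coarse voxel grid. Angular quantization: 4 SH
    coefficients per color channel. Assumes the accumulated sample directions
    are roughly uniform over the sphere."""
    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.cells = {}  # (ix, iy, iz) -> (coeffs 4x3, sample count)

    def _key(self, position):
        return tuple(np.floor(np.asarray(position) / self.cell_size).astype(int))

    def add_sample(self, position, direction, radiance):
        """Splat one sampled ray (unit direction, RGB radiance) into its cell."""
        coeffs, n = self.cells.get(self._key(position), (np.zeros((4, 3)), 0))
        self.cells[self._key(position)] = (coeffs + np.outer(sh_basis(direction), radiance), n + 1)

    def lookup(self, position, normal):
        """Approximate diffuse irradiance over the hemisphere around `normal`.
        A real system would fall back to tracing on a cache miss."""
        entry = self.cells.get(self._key(position))
        if entry is None:
            return np.zeros(3)
        coeffs, n = entry
        # Cosine-lobe convolution weights for the L0/L1 bands.
        weights = np.array([np.pi, 2.0 * np.pi / 3.0, 2.0 * np.pi / 3.0, 2.0 * np.pi / 3.0])
        # 4*pi/n is the Monte Carlo factor for uniformly distributed directions.
        return np.maximum((sh_basis(normal) * weights) @ coeffs * (4.0 * np.pi / max(n, 1)), 0.0)
```

A classic lightmap texel is essentially the same idea with surface-aligned cells and no angular detail at all.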

So much for lighting, its accuracy and its costs. But 'Hybrid RT' does not affect lighting and has nothing to do with it. So I agree it's ok to call Quake II RTX, Portal RTX or now CP 'path traced'.
Hybrid only means the primary visibility is rasterized, not traced.
Limitations that ray-traced primary visibility would address are: better transparency, DOF, AA and motion blur.
Limitations of those current path traced games in comparison to offline rendering come mostly from aggressive reduction of sample counts, e.g. accepting samples of nearby pixels or previous frames, which is where all current progress comes from.
 
The reason rays now give us better lighting than former realtime approaches is that we can integrate this light transport now over the whole sphere by taking samples.

Former real-time approaches also don’t scale to many shadow casting lights and struggle to emulate even one secondary bounce.

So to make a claim of 'physically correct', we first need to agree on some error which is acceptable, as there is no way to calculate this precisely. If we say error below perception, or something like the noise of an image sensor is fine, path tracing is accurate, and can also deal with difficult phenomena like refractions, so it's flexible.

I would hope this is obvious :) No one thinks “physically correct” means that we’ve built a universe simulator.

Any efficient realtime approach would attempt to cache incoming light at the surface, so we no longer need paths because we can get that information from the cache. A cache with support for angular lookup also removes the need for many samples to approximate the sphere, so both expensive infinities can be removed if we accept some spatial and angular quantization. Such caching really is difficult to do.

Lumen and ReSTIR GI say hello.
 
This doesn’t really gel. DLSS is as invasive as TAA and FSR as it uses the same inputs and integrates at the same stage in the rendering pipeline. Last I saw the UE DLSS plug-in basically implements a generic UE interface and is a drop in replacement for the built-in TAAU. The “uncontrollable abnormalities” clearly are not DLSS specific. I suppose TAA and FSR abnormalities are controllable / desirable?
It's somewhat more invasive than either TAA or FSR. While the setup may be similar, virtually all of the reprojection/accumulation/sample reuse & rejection logic, luminance computation, and history evaluation gets tucked away into a black box that's inaccessible to the developer on a deeper level. With other temporal reconstruction methods, source code access allows developers to change any of those things. Also, UE doesn't really have a "stable" interface, so you can't just leave any plug-ins unmaintained ...

You're basically accepting undefined behaviour with DLSS. If developers do test the feature and find undesirable output, they are effectively gated from making modifications unless they do painful/unproductive content design workarounds as the alternative. AMD mockingly lists "no machine learning" with a pictogram of a student & graduation cap as a feature since they provide source code access with the technology for educational content purposes/value too. For a graphics programmer, there's nothing else to learn after integrating DLSS so they pray that the tool works as intended or cooperate with Nvidia if it doesn't ...
Well sure, asking for help makes perfect sense. The important thing is where the motivation is coming from. I believe there are developers out there who are truly passionate about adopting cutting edge graphics tech. It's hard to accept that every ISV out there is happy with the baseline console feature set unless some vendor cajoles them into doing more. That would be a sad state of affairs. 4A claimed that a unified lighting engine was a primary goal for their rendering tech. They didn't say somebody paid them to think that way. I'd like to think it's the same for CDPR and their latest forays into RT.
I'm sure there are developers out there who are more interested in higher-end graphics technology, but whether that inclination is strong enough to overcome the requirement of a unified codebase and development pipeline is another subject in itself. With developers opting into UE5 more than ever, they simply don't have a say anymore in that area, since Epic Games likes having control over their own graphics code. Gone are the simple days where Nvidia could just take up a graphics devtech integration gig to ship their technology, so they may have to fill in the holes in game development left by the likes of Epic Games in the near future, which might be more responsibility than they want to take up. Developers must either fall in line with Epic Games to receive support or face worse development conditions ...
 
Lumen and ReSTIR GI say hello.
Can you explain their methods of caching?
Afaict, Lumen has probes in screenspace. Which means a wall behind your back isn't cached. (Heard they use some form of volume probes similar to DDGI as well, but various talks and presentations did not cover the whole pipeline well.)
ReSTIR GI also has reservoirs only in screenspace. And they do not cache incoming light, but only which points on the sphere are good directions to look for it? Not sure about that - only skimmed the paper. But if so, I guess it's also limited to the first bounce that surfaces visible on the screen can see.

To give an example of what I mean, take Metro Exodus. They use DDGI probes in worldspace. They claimed to use path tracing methods, but actually their 'paths' terminate at the first hitpoint. There they look up the DDGI probe, so forming an actual path is simply not needed.
The probe lookup is even better because it has information about the lighting integrated over the whole sphere at the hitpoint, which also includes infinite bounces, as we get them for free with caching.
In contrast, a path (usually) samples only one light at the hitpoint, and then repeats this process for a finite number of bounces. So the information at the hitpoint is incomplete, increasing the number of samples you need to make Monte Carlo accurate enough. ReSTIR alone cannot fix this; it helps sample counts only by picking lights or directions with a better probability of high contribution. (Curious what 'neural radiance caches' mean in this regard.)
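
Roughly, the structural difference looks like this (a toy sketch with made-up callback names, not 4A's actual code; `probe_irradiance`, `trace` and `sample_light` stand in for the renderer's DDGI lookup, ray cast and light sampling, and `sample_hemisphere` is the cosine-weighted sampler sketched earlier in the thread):

```python
import numpy as np

def shade_terminate_at_probe(hit, probe_irradiance):
    """Exodus-style: the 'path' ends at the first hitpoint; indirect light comes
    from a world-space probe lookup that already contains all bounces."""
    point, normal, albedo = hit
    return np.asarray(albedo) / np.pi * probe_irradiance(point, normal)

def shade_continue_path(hit, trace, sample_light, rng, bounces_left=2):
    """Path-style: sample one light at the hit, then extend the path to pick up
    indirect light, repeated for a finite number of bounces. `sample_light` is
    assumed to return one light's incident radiance, weighted by the cosine and
    divided by the selection pdf."""
    point, normal, albedo = hit
    albedo = np.asarray(albedo)
    radiance = albedo / np.pi * sample_light(point, normal)
    if bounces_left > 0:
        d = sample_hemisphere(normal, rng)   # cosine-weighted, as sketched earlier
        next_hit = trace(point, d)           # hypothetical ray cast: (point, normal, albedo) or None
        if next_hit is not None:
            # Cosine term and pdf cancel for cosine-weighted sampling of a diffuse BRDF.
            radiance += albedo * shade_continue_path(next_hit, trace, sample_light, rng, bounces_left - 1)
    return radiance
```

The probe version trades the two infinities for the spatial and angular resolution of the probe grid, which is exactly the accuracy limitation mentioned below.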

So Exodus has much better efficiency than the 'fully path traced' RTX games. The reason its accuracy is way worse is not the method used, but the low spatial resolution of the DDGI volume probes.
Once we replace this very coarse volume approximation with denser probes just on the surface, the error from the still-limited resolution isn't worse than the error from accepting spatio-temporal samples. And then I'm happy, at least regarding the topics discussed.
 
Can you explain their methods of caching?
Afaict, Lumen has probes in screenspace. Which means a wall behind your back isn't cached.

ReSTIR uses a screen-space cache, but each sample traces a multi-bounce path. Lumen's surface cache is in texture space and does support caching surfaces that are facing away from the camera. I think the surfaces still need to be in the viewport though.
 
It's somewhat more invasive than either TAA or FSR. While the setup may be similar, virtually all of the reprojection/accumulation/sample reuse & rejection logic, luminance computation, and history evaluation gets tucked away into a black box that's inaccessible to the developer on a deeper level. With other temporal reconstruction methods, source code access allows developers to change any of those things. Also, UE doesn't really have a "stable" interface, so you can't just leave any plug-ins unmaintained ...

You're basically accepting undefined behaviour with DLSS. If developers do test the feature and find undesirable output, they are effectively gated from making modifications unless they do painful/unproductive content design workarounds as the alternative. AMD mockingly lists "no machine learning" with a pictogram of a student & graduation cap as a feature since they provide source code access with the technology for educational content purposes/value too. For a graphics programmer, there's nothing else to learn after integrating DLSS so they pray that the tool works as intended or cooperate with Nvidia if it doesn't ...

I'm not sure AMD is in a position to be mocking anything. It seems they haven't learned their lesson with that. Yes, DLSS is a black box, but maybe that's a good thing given the alternatives aren't any better. This notion that giving developers free rein is always the right option isn't really supported by facts. It's results that matter, not developer egos. 20 years ago everyone in the software industry thought they could build it better themselves; now everyone is rushing headfirst toward SaaS vendors and delegating the hell out of everything. Sometimes that's the right call.

I'm sure there are developers out there who are more interested in higher-end graphics technology, but whether that inclination is strong enough to overcome the requirement of a unified codebase and development pipeline is another subject in itself. With developers opting into UE5 more than ever, they simply don't have a say anymore in that area, since Epic Games likes having control over their own graphics code. Gone are the simple days where Nvidia could just take up a graphics devtech integration gig to ship their technology, so they may have to fill in the holes in game development left by the likes of Epic Games in the near future, which might be more responsibility than they want to take up. Developers must either fall in line with Epic Games to receive support or face worse development conditions ...

Yep, I suppose time will tell how much UE will dictate things in this generation and whether other engines will offer something compelling.
 
I think there could be a problem with devs being handed black-box features by Nvidia, but DLSS 2/3 aren't really those cases because they're just upscaling/frame generation. For some of the ray tracing solutions that include ML training, I can get behind the argument more.
 

Patch 1.62 for Cyberpunk 2077 is being rolled out on PC. This update brings the technology preview of Ray Tracing: Overdrive Mode for high-end PCs.

Together with NVIDIA, we’re bringing a completely new, fully ray-traced, aka path-traced, rendering mode to the game with this patch – Ray Tracing: Overdrive Mode. We’re proud of it. It pushes the boundaries of what's possible in technology. However, because it is so new and fundamentally different from what we've been using so far, we know it's not going to be perfect from the start and players might experience some issues – that’s why we’ve decided to call it a “Technology Preview”. This is a vision of the future that we want to share, and we're committed to continue working on and improving this feature.

The technology preview of Ray Tracing: Overdrive Mode is currently supported on NVIDIA GeForce RTX 40 Series (4070 Ti and up) graphics cards. Also, on NVIDIA GeForce RTX 3090 (1080p, 30 fps). As this is a cutting-edge feature, it requires the highest-performing hardware available to run it properly. Ray Tracing: Overdrive is very GPU intensive, therefore it’s set to "off" by default.

For other ray-tracing-capable PC graphics cards with at least 8GB VRAM, we included an option to render path-traced screenshots in Photo Mode. This is possible because it means rendering just one frame, as opposed to rendering several frames every second (i.e. FPS), which would happen when playing the game.

You'll find the list of changes for patch 1.62 below:

  • Path Tracing: Technology Preview

    Added a Ray Tracing: Overdrive preset which includes the Path Tracing technology. You can enable the Ray Tracing: Overdrive preset in Settings > Graphics > Quick Preset, or just Path Tracing separately in Settings > Graphics in the Ray Tracing section.

    Additionally, we included an option to render path-traced screenshots in Photo Mode for other Ray-Tracing-capable graphics cards with at least 8GB VRAM. If your graphics card has more than 8GB VRAM and this option is still greyed out, it means you need to lower your in-game resolution. Note that the higher the resolution and the less powerful the GPU is, the longer it will take to take a screenshot (between a few seconds to several minutes). You can enable Path Tracing for Photo Mode in Settings > Graphics in the Ray Tracing section.

  • DLAA

    Added NVIDIA DLAA, an AI-based anti-aliasing mode designed to improve image quality. DLAA requires a NVIDIA RTX graphics card. It can be enabled in Settings > Graphics in the NVIDIA DLSS section.

  • Intel XeSS

    Added support for Intel Xe Super Sampling 1.1, an upscaling technology using machine learning to provide improved performance with high image quality. It can be enabled in Settings > Graphics in the Resolution Scaling section.

  • Benchmark improvements

    Improved the Benchmark to display more information in the results screen, including PC specs, GPU driver version and selected settings.
 