Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Fractured Lands Dev: NVIDIA DLSS Improves Visual Quality and Performance at Once; We’ll Look into Adding RTX

We got in touch with Torin Kampa, Chief Technical Officer at the studio behind the vehicle-focused Battle Royale game (currently in Steam Early Access), to discuss their implementation of DLSS as both an image quality and performance enhancement, as well as the likelihood of implementing raytracing in Fractured Lands further down the line.

Why did you decide to add support for the new NVIDIA DLSS technology to Fractured Lands and what do you think of it?
Many developers on our team have a passion for rendering, so we jumped at the opportunity to get involved early with this new technology. Gaming at 4K resolution looks beautiful and offers the competitive advantage of spotting enemies at greater range in greater detail – though often with the drawback of subpar framerate. This new generation of cards pushes us closer to buttery smooth 4K (especially with DLSS enabled). Across all resolutions, the reduction of TAA artifacts can mean the difference between wondering if that shimmer was an enemy versus knowing it was a fully kitted wastelander who should be approached with caution.

Do you have an ETA on when NVIDIA DLSS could be available in the live game?
There is no hard date yet on when DLSS will be available in Fractured Lands, but we expect to coordinate a hotfix with NVIDIA in the near future.

Will you look into the possibility of adding raytracing (NVIDIA RTX) to the game eventually? As developers, what’s your opinion on NVIDIA’s bid for raytracing with the new GeForce RTX cards?
We will definitely look into adding RTX raytracing to Fractured Lands. As developers, we've been teased with the idea of realtime raytracing in complex environments for years, and it's only recently become available. Over time, we imagine realtime raytracing will be a paradigm shift similar to the transition from forward to deferred rendering.
https://wccftech.com/fractured-lands-nvidia-dlss-improves/
 
Video Series: Real-Time Ray Tracing for Interactive Global Illumination Workflows in Frostbite
October 2, 2018
Real-time ray tracing is upon us. Electronic Arts leads the charge in integrating the technology on an engine level.

This video series features Sebastien Hillaire, Senior Rendering Engineer at EA/Frostbite. Sebastien discusses real-time raytracing and global illumination (GI) workflows in Frostbite. He explains the context and the current workflows Frostbite uses, then describes the GI live-preview tech in Frostbite, built using the DX12 and DXR APIs on top of NVIDIA RTX technology.

We know it’s hard to set aside an hour to watch a recorded talk, so we’ve broken Sebastien’s talk into four segments that can each be watched in seven minutes or less.

  1. Current Global Illumination in EA Games (5:55 min)
  2. Live Authoring of a Scene (4:15 min)
  3. Tracing Lightmap Texel (4:43 min)
  4. Battling Noise, Irradiance Volumes, and Future Thoughts (6:20 min)
https://devblogs.nvidia.com/video-real-time-ray-tracing-workflows-frostbite/
 
Introduction to Real-Time Ray Tracing with Vulkan
October 10, 2018
NVIDIA’s new Turing GPU unleashed real-time ray-tracing in a consumer GPU for the first time. Since then, much virtual ink has been spilled discussing ray tracing in DirectX 12. However, many developers want to embrace a more open approach using Vulkan, the low-level API supported by the Khronos Group. Vulkan enables developers to target many different platforms, including Windows and Linux, allowing for broader distribution of 3D-accelerated applications. NVIDIA’s 411.63 driver release now enables an experimental Vulkan extension that exposes NVIDIA’s RTX technology for real-time ray tracing through the Vulkan API.

This extension, called VK_NVX_raytracing, is a developer preview of our upcoming vendor extension for ray tracing on Vulkan. The extension targets developers who want to familiarize themselves with API concepts and start testing the functionality. As denoted by the ‘NVX’ prefix, this API is not yet final and could undergo some minor changes before the final release.
https://devblogs.nvidia.com/vulkan-raytracing/
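Not from the article, but for anyone who wants to poke at the preview: a minimal sketch of how a vendor extension like this is typically detected and enabled through the standard Vulkan device-creation calls. The extension string is the one named above; the helper name and surrounding setup are just illustrative.

```cpp
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Check whether the physical device advertises the experimental ray tracing
// extension mentioned in the post, and request it at logical-device creation.
// (Illustrative helper, assuming queueInfo was filled out by the usual setup.)
bool enableNvxRaytracing(VkPhysicalDevice physicalDevice,
                         const VkDeviceQueueCreateInfo* queueInfo,
                         VkDevice* outDevice) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> props(count);
    vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &count, props.data());

    bool found = false;
    for (const auto& p : props)
        if (std::strcmp(p.extensionName, "VK_NVX_raytracing") == 0) { found = true; break; }
    if (!found) return false;  // driver or GPU does not expose the preview extension

    const char* extensions[] = { "VK_NVX_raytracing" };
    VkDeviceCreateInfo info   = {};
    info.sType                   = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    info.queueCreateInfoCount    = 1;
    info.pQueueCreateInfos       = queueInfo;
    info.enabledExtensionCount   = 1;
    info.ppEnabledExtensionNames = extensions;
    return vkCreateDevice(physicalDevice, &info, nullptr, outDevice) == VK_SUCCESS;
}
```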
 
Some new performance numbers on the "Speed of Light" demo from talks at this year's GTC:

It reached 24 fps ("We define real-time as 24 fps"), at FHD with 20% supersampling, running on not one but two RTX 6000 cards in SFR.

Effectively, they got around 15 (hitting or missing) rays per pixel per frame. Not the fancy "10 gigarays per second" claimed by Jensen, but a far more modest 300 megarays per second per card against a non-empty BVH, and that despite 95% of the GPU load being attributed to just the "hidden" parts of the ray tracing, not actual shading. That performance number was relatively stable across a broad range of model complexity.

15 rays per pixel is a joke for anything but the most trivial, purely specular materials or pure shadow calculation. For a couple of effects, like scattering in transparent materials, they still had to fall back to screen-space post-processing, partly because it's unsupported by the card, partly because it requires significantly more rays. There was also no geometry in the scene other than the car and the box lights, in order to waste as few rays as possible outside the relevant area.

Transfer these numbers to a modern 4K @ 60 fps requirement and you are left with less than 1 ray per pixel per frame, even after already using a hybrid tracer. Not even enough to cast shadows from a single point light.
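Back-of-the-envelope, taking the ~300 Mrays/s per-card figure above at face value:

```cpp
#include <cstdio>

int main() {
    // Measured throughput quoted above: ~300 Mrays/s per card against a real BVH.
    const double raysPerSecond = 300e6;

    // Target: native 4K at 60 fps.
    const double pixels          = 3840.0 * 2160.0;  // ~8.3 Mpixels
    const double pixelsPerSecond = pixels * 60.0;    // ~498 Mpixels/s

    // Rays available per pixel per frame on one such card: ~0.60, i.e. under 1.
    std::printf("rays per pixel per frame: %.2f\n", raysPerSecond / pixelsPerSecond);
}
```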


The "dancing robot" demo from GTC? Ran on 4 GPUs, "live". Probably with the same "cinematic restrictions".
Yes, it's still impressive. But it's about factor 16-32x away from the efficiency and performance required to make this in any form relevant for actual real-time computer graphics.

And keep in mind that even Jensen acknowledged this time that Moore's Law is dead, even for Nvidia. They can still afford to add more special-function units (not running in parallel) with smaller nodes, but they are hitting a wall with power density as well, and the only way around that wall would be to switch from a high-performance to a low-power node, gaining another factor of 2. In other terms: the required performance increase is simply not possible.
 
Some new performance numbers on the "Speed of Light" demo from talks at this year's GTC: [...]

Do you have a link to this performance?
 
Some new performance numbers on the "Speed of Light" demo from talks at this year's GTC: [...]

Every single intersection query against the scene counts as a ray. So something like transparency costs N rays, depending on how deep you want to go. The 10 gigarays per second are measured under ideal conditions; in practice you get clearly less. Latencies from texture accesses, complex shaders with more registers, etc. quickly push it down (which is normal). In practice, raytracing performance depends on how many near misses you have, how divergent the rays are, and what additional latencies you incur. I wouldn't put too much weight on that figure; look instead at the papers and applications that show it in the context of real content. The value is only needed to compare SKUs, and that's what Nvidia's raytracing experts say themselves.

"Ray-tracing operations per second", for example, is marketing, and we'll see which metric becomes established once other manufacturers have something comparable. Then, as usual, it will come down to certain benchmark scores.
 
Some new performance numbers on the "Speed of Light" demo from talks at this year's GTC: [...]
The solution is simple: run ray tracing at less than 1 spp and use denoising and upscaling to compensate. Even under those constraints it's far better than rasterization.

Also, just render at 1080p and upscale to 4K. Only the guys at Digital Foundry would notice.
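Schematically, and purely as an illustration of the ordering (toy buffers and filters, not any real engine code or the DLSS library): trace at a low internal resolution with ~1 spp, denoise, then upscale to the output resolution.

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

using Image = std::vector<float>;  // single-channel, row-major

// Stand-in for a 1 spp ray traced image: some signal plus per-pixel noise.
Image traceOneSpp(int w, int h) {
    Image img(w * h);
    for (int i = 0; i < w * h; ++i)
        img[i] = 0.5f + 0.5f * (std::rand() / (float)RAND_MAX - 0.5f);
    return img;
}

// Very crude spatial denoiser: 3x3 box filter (real pipelines use temporally
// accumulated, edge-aware filters such as SVGF instead).
Image denoise(const Image& in, int w, int h) {
    Image out(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f; int n = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int xx = x + dx, yy = y + dy;
                    if (xx >= 0 && xx < w && yy >= 0 && yy < h) { sum += in[yy * w + xx]; ++n; }
                }
            out[y * w + x] = sum / n;
        }
    return out;
}

// Nearest-neighbour upscale from the internal to the output resolution
// (a DLSS-style upscaler would of course do something far smarter).
Image upscale(const Image& in, int w, int h, int W, int H) {
    Image out(W * H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            out[y * W + x] = in[(y * h / H) * w + (x * w / W)];
    return out;
}

int main() {
    const int w = 1920, h = 1080;   // internal (traced) resolution
    const int W = 3840, H = 2160;   // output resolution
    Image noisy    = traceOneSpp(w, h);
    Image filtered = denoise(noisy, w, h);
    Image output   = upscale(filtered, w, h, W, H);
    std::printf("traced %dx%d at ~1 spp, presented at %dx%d\n", w, h, W, H);
}
```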

@sebbi agrees:

 
NVIDIA RTX Effects Cost 9.2 ms in Remedy’s Northlight Engine, Running at 1080p on an RTX 2080 Ti
As reported by Golem.de, the raytraced scene delivered clearly higher quality graphics, but the expense was rather significant. Between contact shadows (2.3 ms), reflections (4.4 ms) and denoising (2.5 ms), all of the NVIDIA RTX effects cost 9.2 ms in render time.

This is important, as I’m sure many of you already know, because the overall render time cannot be higher than 16 ms in order to target 60 frames per second, or 33 ms in order to target 30 frames per second. That means the remaining budget to fit everything else and achieve 60FPS would be a mere 6.8 ms.

To make matters worse, the demo was running at 1080p resolution on the brand new, top-of-the-line RTX 2080 Ti GPU. Lower-specced graphics cards such as the 2080 or 2070 would inevitably fare worse. On the other hand, Remedy will surely optimize NVIDIA RTX performance ahead of Control's release (currently planned for next year), and it's also possible that the final game will allow users to customize the options, for instance deactivating the costly reflections while keeping raytraced contact shadows and diffuse GI.
https://wccftech.com/nvidia-rtx-9-2-ms-remedy-northlight/
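For reference, the arithmetic behind those numbers, using the article's rounded 16 ms / 33 ms frame budgets (1000/60 is really about 16.7 ms):

```cpp
#include <cstdio>

int main() {
    // Per-effect costs reported for the Northlight demo (milliseconds).
    const double contactShadows = 2.3, reflections = 4.4, denoising = 2.5;
    const double rtxTotal = contactShadows + reflections + denoising;  // 9.2 ms

    // Frame budgets as rounded in the article.
    const double budget60 = 16.0, budget30 = 33.0;

    std::printf("RTX effects total: %.1f ms\n", rtxTotal);
    std::printf("left for everything else at 60 fps: %.1f ms\n", budget60 - rtxTotal);  // 6.8
    std::printf("left for everything else at 30 fps: %.1f ms\n", budget30 - rtxTotal);  // 23.8
}
```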
 
Also, these are pretty meaningless numbers without the time required by the default implementation without raytracing. If those effects or their equivalents cost, say, 3 ms, the judgement of RT performance would be quite different than if the raster-based effects needed 12 ms. Of course, those are made-up numbers, just to illustrate the point.
 
FFXV Dev: NVIDIA DLSS Provides Substantial FPS Boost; We’re Looking Into Adding NVIDIA RTX Reflections, Too

We were able to have a quick chat with Takeshi Aramaki, Technical Director and Lead Programmer on FINAL FANTASY XV at Square Enix, who discussed NVIDIA DLSS at length and revealed that NVIDIA RTX-based reflections may also make it into the game in the future. He also pointed out that the technologies available in RTX GPUs could potentially change the way developers make games.

Q: Can you explain your implementation of NVIDIA DLSS in Final Fantasy XV? Was it easy to get it working?

A: The implementation of NVIDIA DLSS was pretty simple. The DLSS library is well polished, so with DLSS we were able to reach a functional state within a week or so, whereas it could have taken months if we had implemented TAA on our own. The velocity map and how it's generated differ depending on the game engine; in order to support that aspect and to keep pixel jitter under control, we needed to modify parameters.

Q: Does it improve image quality compared to TAA (Temporal Anti Aliasing)? What kind of performance improvements have you seen after enabling it?

A: The resolution of the texture reduces partially at times, but polygon edges are much cleaner and we've been able to realize blur reductions. Performance significantly increases at 4K resolution, and the framerate also shows a substantial improvement.

Q: Have you looked into adding actual real-time ray tracing to Final Fantasy XV or is it too late for that? What do you think of the NVIDIA RTX and the GeForce RTX graphics cards, anyway?

A: Ray tracing is a technology that we're tackling with much interest, and we're now looking into the possibility of using raytracing to depict reflections in FINAL FANTASY XV WINDOWS EDITION. I believe it's possible to integrate raytracing into the game engine later and use it to express certain in-game elements.
I think the GeForce RTX graphics cards will play an important role in pushing for a next-generation expression of game graphics. I also think that these graphics cards can potentially change the way we create games in a significant way.


https://wccftech.com/ffxv-nvidia-dlss-substantial-fps-boost/
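Regarding the pixel jitter Aramaki mentions: temporal upsamplers generally want the projection matrix nudged by a different sub-pixel offset each frame, often taken from a Halton sequence, with the same offsets handed to the upscaler. A rough illustration of just that part (not Square Enix's code and not the DLSS library API itself):

```cpp
#include <cstdio>

// Radical inverse / Halton sequence component in the given base.
float halton(int index, int base) {
    float f = 1.0f, r = 0.0f;
    while (index > 0) {
        f /= base;
        r += f * (index % base);
        index /= base;
    }
    return r;
}

int main() {
    const int width = 3840, height = 2160;   // example output resolution
    for (int frame = 1; frame <= 4; ++frame) {
        // Sub-pixel offset in [-0.5, 0.5) pixels, different every frame.
        float jx = halton(frame, 2) - 0.5f;
        float jy = halton(frame, 3) - 0.5f;
        // Equivalent clip-space offset, to be added to the projection matrix's
        // x/y translation terms before rendering; the same (jx, jy) pair is
        // what a temporal upscaler needs to know about for reconstruction.
        float cx = 2.0f * jx / width;
        float cy = 2.0f * jy / height;
        std::printf("frame %d: jitter (%.3f, %.3f) px, clip offset (%.6f, %.6f)\n",
                    frame, jx, jy, cx, cy);
    }
}
```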
 