Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

IMO, ray tracing looks nothing special.
Whether you like it or not, RT is special. Quite special, actually. It will allow developers to create realistic-looking games and things that weren't technically possible until now (in real time and in an accurate way, so to speak).

If I want to see real life, I just go out. If games look the same, it gets boring, because while it looks real, the player usually can't destroy the world.

So being in a real-looking game is sad, as a static world feels like your hands are tied and you can't do anything you couldn't do in the real world.

Like drive through a wall or something.
By that reasoning, let's all go back to 16-bit graphics, right?

It's not that you have to choose between real life and a boring game. Do you avoid racing games because you can already see cars in real life? Do you not want realistic reflections in an in-game puddle just because you can see them on the real-life pavement outside your house?

BTW, no matter what tech you have, developers can create whatever they want. You may have the means to create a very realistic-looking game, but ultimately the devs will choose the aesthetic they want for their game.
 
I disagree with the sentiment, but it does get me wondering: what utility does it have for stylised aesthetics?

Reflections and shadows are obvious, useful applications no matter the aesthetic, but how else might they be applied in, for example, a cel-shaded game?
 
I disagree with the sentiment, but it does get me wondering: what utility does it have for stylised aesthetics?

Reflections and shadows are obvious, useful applications no matter the aesthetic, but how else might they be applied in, for example, a cel-shaded game?

Shadows, ambient occlusion, edge detection, reflections, global illumination. Ray tracing is really just visibility checks, and all games have visibility.
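
To make the "visibility checks" point concrete, here's a minimal sketch of a shadow test phrased as a ray visibility query. This is my own toy example (the scene, `occluded()` and `inShadow()` are all made up for illustration, not any particular engine's code); AO, reflections and GI boil down to the same query fired with different origins and directions.

```cpp
#include <cmath>
#include <vector>

// Hypothetical toy scene: just spheres. In a real tracer a BVH or other
// acceleration structure would sit behind the same kind of query.
struct Vec3 { float x, y, z; };
struct Sphere { Vec3 center; float radius; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// The core visibility query: does a ray from 'origin' along unit-length 'dir'
// hit anything closer than 'maxDist'?
bool occluded(const std::vector<Sphere>& scene, Vec3 origin, Vec3 dir, float maxDist) {
    for (const Sphere& s : scene) {
        Vec3  oc   = sub(origin, s.center);
        float b    = dot(oc, dir);                     // half of the quadratic's linear term
        float c    = dot(oc, oc) - s.radius * s.radius;
        float disc = b * b - c;
        if (disc < 0.0f) continue;                     // the ray misses this sphere entirely
        float t = -b - std::sqrt(disc);                // nearest hit distance along the ray
        if (t < 1e-4f) t = -b + std::sqrt(disc);       // origin may be inside the sphere
        if (t > 1e-4f && t < maxDist) return true;     // a blocker sits between origin and target
    }
    return false;
}

// Shadow test for a point light: the surface point is lit only if nothing
// occludes the segment between it and the light.
bool inShadow(const std::vector<Sphere>& scene, Vec3 point, Vec3 lightPos) {
    Vec3  toLight = sub(lightPos, point);
    float dist    = std::sqrt(dot(toLight, toLight));
    Vec3  dir     = {toLight.x / dist, toLight.y / dist, toLight.z / dist};
    return occluded(scene, point, dir, dist);
}
```

Hardware RT essentially accelerates the `occluded()` part, the traversal and intersection; what a game does with the answer is still entirely up to its art direction, stylised or not.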
 
IMO, ray tracing looks nothing special.

If I want to see real life, I just go out. If games look the same, it gets boring, because while it looks real, the player usually can't destroy the world.

So being in a real-looking game is sad, as a static world feels like your hands are tied and you can't do anything you couldn't do in the real world.

Like drive through a wall or something.

Just because it's ray-traced doesn't mean it looks like real life. Do Pixar movies look like real life?
 
Just because it's ray-traced doesn't mean it looks like real life. Do Pixar movies look like real life?
Well, small OT nitpick, but Pixar was actually the last major studio to adopt ray tracing. Every one of their movies and shorts prior to Monsters University in 2013 wasn't ray traced, and they still looked damned good...
 
Just because it's ray-traced doesn't mean it looks like real life. Do Pixar movies look like real life?
Blue Sky Studios would be a better example, as they used ray tracing from the early days of Ice Age.
Pixar did finally move to an RT pipeline a while back; pretty sure Monsters University was still partly rasterized, and after that they moved fully to path tracing.
 
Just because it's ray-traced doesn't mean it looks like real life. Do Pixar movies look like real life?

Well, small OT nitpick, but Pixar was actually the last major studio to adopt ray tracing. Every one of their movies and shorts prior to Monsters University in 2013 wasn't ray traced, and they still looked damned good...

Blue Sky Studios would be a better example, as they used ray tracing from the early days of Ice Age.
Pixar did finally move to an RT pipeline a while back; pretty sure Monsters University was still partly rasterized, and after that they moved fully to path tracing.

Maybe the info wasn't accurate enough in the first post, but the intended point is valid: even if you have and use the tech, you can create whatever you like, not just photorealistic stuff that tries to look like real-life footage.
 
Well, small OT nitpick, but Pixar was actually the last major studio to adopt ray tracing. Every one of their movies and shorts prior to Monsters University in 2013 wasn't ray traced, and they still looked damned good...

And it's only in the latest versions of RenderMan that they keep only the path tracing.

Blue Sky Studios would be a better example, as they used ray tracing from the early days of Ice Age.
Pixar did finally move to an RT pipeline a while back; pretty sure Monsters University was still partly rasterized, and after that they moved fully to path tracing.

The latest RenderMan version is now fully path traced; they ditched the last rasterized parts last year, if I remember correctly.

https://graphics.pixar.com/library/RendermanTog2018/paper.pdf

Very good document on the history of RenderMan.
 
Just thought about this: Quantum Break did something similar to what NVIDIA is trying to pull off. While the game didn't use true ray tracing, they used lighting techniques that are still considerably more advanced than the rest of the industry's, and it's still very taxing on the hardware. I can barely run the game at 40 fps in many areas on my 2080 at native 1440p Ultra.

They had to develop a special upscaler to run the game at a lower resolution and then upscale it to achieve smooth performance, especially on consoles. At the intro level, I disabled the upscaler at 1440p and my fps got slashed from 90 to 54! That's how much performance the upscaler provided.

NVIDIA is repeating this very same formula with RTX; they are doing DLSS for essentially the same reason. A similar formula might be employed to get ray tracing working on consoles.
 
Consoles have been using advanced upscaling techniques for years now, starting with Killzone Shadow Fall on PS4, moving through checkerboarding in Rainbow Six, and on to temporal reconstruction, of which different developers have their own take; the best is probably Insomniac's. As it's incredibly effective in compute on <2 TF machines, the response to DLSS has been very muted from some of us, and in cases like mine it looks more like trying to find something for the AI units to do, assuming they weren't added to Turing for graphics reasons. DLSS's main benefit might well be simply adoption in the PC space, where temporal reconstruction has remained niche, as reconstruction offers loads more quality per pixel for a minimal reduction in clarity.
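
For anyone who hasn't looked at how these techniques work, here's a deliberately over-simplified sketch of the checkerboard idea. It's my own illustration, not any shipping implementation: real ones (Insomniac's included) add motion-vector reprojection, history clamping and sharpening on top of this. The principle is just: shade half the pixels per frame and fill the gaps from the previous frame's result.

```cpp
#include <vector>

// Toy checkerboard reconstruction on a single-channel image.
// 'fresh' is a full-resolution buffer in which only the pixels belonging to
// this frame's half of the checkerboard were actually shaded; the other
// entries are ignored. 'history' is last frame's reconstructed output.
void reconstructCheckerboard(const std::vector<float>& fresh,
                             const std::vector<float>& history,
                             std::vector<float>& output,
                             int width, int height,
                             int frameParity /* 0 or 1, flipped every frame */) {
    output.resize(static_cast<size_t>(width) * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int  i = y * width + x;
            bool shadedThisFrame = ((x + y) & 1) == frameParity;  // which half of the board was rendered
            // Freshly shaded pixels pass through; the rest reuse last frame's value.
            output[i] = shadedThisFrame ? fresh[i] : history[i];
        }
    }
}
```

Half the shading cost per frame for a full-resolution output is why this is so attractive on small console GPUs; the quality battle is fought in how the history is reprojected and rejected, not in the fill itself.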

If you want some strong pro-upscaling opinions, search Sebbbi's post history for reconstruction.
 
IMO, ray tracing looks nothing special.

If I want to see real life, I just go out. If games look the same, it gets boring, because while it looks real, the player usually can't destroy the world.

So being in a real-looking game is sad, as a static world feels like your hands are tied and you can't do anything you couldn't do in the real world.

Like drive through a wall or something.

If I want to see realistic space battles, can I "just go out"?

C'mon you can do better :)
 
As it's incredibly effective in compute on <2 TF machines, the response to DLSS has been very muted from some of us, and in cases like mine it looks more like trying to find something for the AI units to do, assuming they weren't added to Turing for graphics reasons.

So why don't Nvidia or AMD do it like it's being done on consoles, via compute?
 
So why don't Nvidia or AMD do it like it's being done on consoles, via compute?
Backwards compatibility with multiple generations of GPUs that aren't as strong on compute? Too slow at switching between graphics and compute on Nvidia GPUs pre-Volta/Turing?
 
So why don't Nvidia or AMD do it like it's being done on consoles, via compute?
Up to now it's been something the engine developers have had to implement themselves, and there's not been a one-size-fits-all solution, with different engines having different data available for reconstruction. DLSS is a drop-in solution that only works on colour data, as I understand it. The upside is that it can be applied to any game. The downside is that it needs Tensor silicon to implement and a supercomputer to train the neural nets.

In terms of the silicon used to implement it, DLSS seems inefficient to me.
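
To illustrate what "different data available for reconstruction" means in practice, here's a hedged sketch (my own toy code, not how DLSS or any shipping engine does it) of a temporal pass consuming per-pixel motion vectors from the engine, which is exactly the kind of input a colour-only, drop-in approach doesn't get.

```cpp
#include <algorithm>
#include <vector>

struct MotionVec { float dx, dy; };   // screen-space motion in pixels, written by the engine

// Toy temporal accumulation on a single-channel image: fetch last frame's
// value at the reprojected position (nearest-neighbour for brevity) and blend
// it with this frame's value. Real passes add bilinear history sampling,
// clamping/rejection for disocclusions, and so on.
void temporalBlend(const std::vector<float>& current,
                   const std::vector<float>& history,
                   const std::vector<MotionVec>& motion,
                   std::vector<float>& output,
                   int width, int height, float historyWeight = 0.9f) {
    output.resize(static_cast<size_t>(width) * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int i = y * width + x;
            // Where was this pixel last frame, according to the engine's motion vectors?
            int px = std::clamp(static_cast<int>(x - motion[i].dx), 0, width - 1);
            int py = std::clamp(static_cast<int>(y - motion[i].dy), 0, height - 1);
            float prev = history[py * width + px];
            output[i] = historyWeight * prev + (1.0f - historyWeight) * current[i];
        }
    }
}
```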
 
Tensor cores are nearly free. They are just processing units; nVidia puts them into the SM and uses the upgraded scheduler hardware to feed them. You can compare GP100 with GV100 to see the size penalty.
 
Backwards compatibility with multiple generations of GPUs that aren't as strong on compute? Too slow at switching between graphics and compute on Nvidia GPUs pre-Volta/Turing?

An HD 7870 or 7970 or higher should be able to? Yes, people with pre-2012 GPUs wouldn't have had the advantage then.
 
In terms of the silicon used to implement it, DLSS seems inefficient to me.
Whoa, hold up on judgement for a second there.
a) DLSS upscales all the way to 4x the resolution here, and it can scale even higher if desired. It's not just anti-aliasing, it's super-scaling, and it has better accuracy in the process.
b) DLSS can be done on regular compute as well; it doesn't need to run on tensor cores, but its performance there is easily going to be 20x better, as with any other function specific to DNN models.
c) If you consider how much tensor cores can do, then it's easy to see them doing RT denoising (which hasn't been enabled yet) alongside DLSS, plus any other AI models, such as animation, physics, or even on-the-fly content creation.

To put things into perspective: Tesla's self-driving uses the PX system, which is 2x Tegra X2 and I think possibly 2 Pascal GPUs (not sure exactly what), but the X2s are 1.5 TF each, so we're looking at a total of <10 TF for that whole system. The combined FLOPS of that system is nowhere near the tensor-core throughput of a 2070, rated at 60 tensor TFLOPS. NN accelerators can be built so simply that Tesla is now designing its own ASIC DNN accelerators, because it's more power for less silicon.

You'd be lucky to get very effective computer vision at such low computing power, let alone write compute algorithms to solve it. NNs are super efficient not because the algorithm is some magical thing; it just turns out that tons of data scales performance very well.

TL;DR: you're not going to get 60 TF of NN power out of pure compute, so this is actually very efficient.
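
Rough back-of-envelope on where that ~60 figure comes from. These are my assumed numbers, not a spec sheet: roughly 288 tensor cores on a 2070, 64 FP16 FMAs per core per clock, ~1.6 GHz boost.

```cpp
#include <cstdio>

int main() {
    // Assumed, ballpark figures for an RTX 2070; adjust if the real counts differ.
    const double tensorCores  = 288.0;    // 36 SMs x 8 tensor cores per SM
    const double fmasPerClock = 64.0;     // one 4x4x4 FP16 matrix multiply-accumulate per core per clock
    const double flopsPerFma  = 2.0;      // an FMA counts as a multiply plus an add
    const double boostClockHz = 1.62e9;

    const double tensorTflops = tensorCores * fmasPerClock * flopsPerFma * boostClockHz / 1e12;
    std::printf("~%.0f FP16 tensor TFLOPS\n", tensorTflops);   // lands in the ~60 TFLOPS ballpark
    return 0;
}
```

Against that, the general shader array on the same chip is only in the single-digit FP32 TFLOPS range, which is the gap the post above is pointing at.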

The downside is that it needs Tensor silicon to implement and a supercomputer to train the neural nets.
This is false, or at least it should be false. If it's like this, it's because nvidia mandates it for $$$. When DirectML is released, it will work on any hardware setup. Training can be done on any setup; the faster the setup, the faster the training.

If you have time, I do recommend watching:
http://on-demand.gputechconf.com/si...-gpu-inferencing-directml-and-directx-12.html

The hardware requirement to run DirectML is any GPU that supports DX12. The hardware in the demo is very low end; I think it's a laptop, looks like a Lenovo. Nvidia only provided the model to be run against that hardware.

AI Hair

AI Denoising for Shadows, AO, and Reflections
 
Whoa, hold up on judgement for a second there.
a) DLSS upscales all the way to 4x the resolution here, and it can scale even higher if desired. It's not just anti-aliasing, it's super-scaling, and it has better accuracy in the process.
b) DLSS can be done on regular compute as well; it doesn't need to run on tensor cores, but its performance there is easily going to be 20x better, as with any other function specific to DNN models.

Thoughts? http://www.capcom-unity.com/gregaman/blog/2012/11/05/okami-hd-powered-by-technical-innovation-love

 