Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

I think some form of RT and hybrid rendering will get there one day, but I don't think it will be in next-generation consoles; it's too early.
Now that Sony has given up the floor to MS at E3, the potential is higher for MS to push the DXR narrative as the direction of graphics for the industry.

All you need is developer support, really.
MS could provide the hardware in 2020.
I don't think making DXR happen is incredibly challenging:
take your Scorpio chip, add RT acceleration and possibly AI accelerators, shrink it to 7nm, clock it higher, and provide the memory to support it. Swap the CPU. You'd probably end up close to 7-8 TF. Given the years of research behind these designs, that seems reasonable.

More reasonable than trying to fit 14TF into 360mm^2 for $399
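As a rough sanity check on that 7-8 TF figure (my own numbers, assuming a GCN-style part where FP32 TFLOPS = CUs x 64 lanes x 2 ops x clock): Scorpio's 40 CUs at 1172 MHz work out to ~6.0 TF, so a modest CU bump plus a 7nm clock increase lands right in that range. The 44 CU / 1.4 GHz combo below is purely hypothetical.
Code:
#include <cstdio>

// GCN-style FP32 throughput: CUs * 64 lanes * 2 ops (FMA) * clock in GHz.
constexpr double tflops(int cus, double ghz) { return cus * 64 * 2 * ghz / 1000.0; }

int main() {
    std::printf("Scorpio (40 CU @ 1.172 GHz): %.1f TF\n", tflops(40, 1.172)); // ~6.0 TF
    std::printf("Hypothetical (44 CU @ 1.4 GHz): %.1f TF\n", tflops(44, 1.4)); // ~7.9 TF
}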
 
There are countless criticisms of the results of rasterising; for years and years, people have pointed out what's wrong with them and their ugly hacks. However, it's the only way to get 3D graphics out of a computer at a decent framerate, and it will remain the only way, via hybrid renderers, until we have notably faster tech that allows full ray tracing, if that ever happens. Excepting special cases like traced SDFs, which again are only now becoming possible; it would have been really stupid to demand the ditching of rasterising in favour of traced SDFs over the past few decades, because traced SDFs weren't possible.
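For anyone wondering what tracing an SDF actually looks like, here's a minimal sphere-tracing sketch (a toy of my own, not from any particular engine): you march along the ray by whatever distance the field reports, which can never overshoot the nearest surface.
Code:
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Signed distance to a unit sphere at the origin: the simplest possible scene.
static float sceneSDF(Vec3 p) { return length(p) - 1.0f; }

// Sphere tracing: step by the SDF value until we're close enough to call it a hit.
// Returns the hit distance, or -1 on a miss.
static float sphereTrace(Vec3 origin, Vec3 dir) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        float d = sceneSDF(add(origin, mul(dir, t)));
        if (d < 1e-3f) return t;   // surface reached
        t += d;                    // safe step: d is the distance to the nearest surface
        if (t > 100.0f) break;     // escaped the scene
    }
    return -1.0f;
}

int main() {
    float t = sphereTrace({0, 0, -3}, {0, 0, 1});
    std::printf("hit at t = %.3f\n", t); // ~2.0 for a unit sphere 3 units away
}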

"We don't want rasterised graphics because they suck! Give us raytraced games at one frame every four hours!!"
See? Specialized hardware is required for speed, and maximizing flexibility at its expense is too costly to be worth it. RTX already gives us HRT at 1080p/60fps as an add-on. Just imagine the performance with a renderer actually designed for HRT.

Not at all. One is reality. The other is emotional paranoia. One has a place here. The other does not.
Emotional, you say. :p

I think some form of RT and hybrid rendering will get there one day, but I don't think it will be in next-generation consoles; it's too early.
Two years from now seems very feasible to me.
 
Doing something makes a great deal of sense for professional imaging. If RTX were a pro-only card and not released for gaming, its whole perception would be different. A cynical view sees RTX as a pro card released to gaming before the tech is ready for gaming. That's part of the discussion.

I don't really understand the cynical view. They'll never know that the "tech is ready for gaming" unless they put the tech in the hands of developers, which means releasing products and iterating improvements. Nvidia's current RTX implementation may not be, and most likely isn't, the best solution. Now that developers have it, and can hammer it with ideas, they'll have a better idea of where to go next based on feedback. There's no version of this where Nvidia can get it right in isolation. It hasn't happened with anything else related to graphics.

That's why it'd be best for consoles to have some form of RT functionality. Console devs tend to be ruthless optimizers because they can't afford brute-force approaches. If Nvidia, AMD and Intel want to see the way forward with RT graphics, console devs will play a big part in shaping what that looks like.

This isn't to say that every game will have to embrace RT. The devs will figure out all the ways it'll work, and when it won't, depending on whether their title targets 30Hz or 60Hz+. Again, RT requires a lot of general ALU power, so any RT-enabled GPU will be a powerful rasterizer as well.
 
Currently, BFV with RT enabled causes about half (or more) of the traditional GPU resources that would normally be used to go to waste. That is a monumental waste of silicon under normal circumstances. It's the kind of thing you can only do on a freaking massive chip, when you have no competition at the high end and can also sell your GPU to professionals at a huge markup.

HRT may well be - and hopefully is - the future of affordable consumer rendering. But so far RTX doesn't prove this. Those figures for the 2070 are so bad that you simply wouldn't use it: at only 1440p on medium, a 570 4GB user would have a massive fps and competitive advantage. Optimisations need to come very thick and very fast. Hopefully they will.

The performance hit is currently so bad that I'd think any dedicated RT hardware should be added to a console via an eDRAM-heavy chiplet. Very low latency access, very high internal bandwidth, and hopefully you could run it in parallel with one of the stages of the traditional pipeline to add little to no additional latency. Very high performance, and you don't need to add it to the base SKU. Because the kind of console gamers who jumped onboard this gen at $300, but still expect "normal" versions of FIFA, Forza, GT, CoD, Battlefield and Halo (i.e. 60 fps), aren't going to want games that lack the core, basic gameplay experience.
 
I've been trying to follow along and learning some things. It could be a while before RT hardware is built from the ground up, and that may be necessary for solid performance on reflections.
https://twitter.com/sebaaltonen/status/1063317705170829312?s=21
https://twitter.com/bartwronsk/status/1063318330566569984?s=21
https://twitter.com/sebaaltonen/status/1063319001252671488?s=21
https://twitter.com/sebaaltonen/status/1063319881590996993?s=21
https://twitter.com/sebaaltonen/status/1063320442805649408?s=21
https://twitter.com/bartwronsk/status/1063320913263833088?s=21
https://twitter.com/bartwronsk/status/1063321333872775169?s=21
https://twitter.com/sebaaltonen/status/1063344314363887616?s=21
https://twitter.com/bartwronsk/status/1063321486113439745?s=21

The thread is long... I stopped pasting the tweets in here LOL.
 
I'm sure that if Sony can, they will have some form of RT in their next games console. The problem might be that the design/development of the consoles probably began a while ago, and it might be a bit too late to add such a huge feature.
 
Some extra use cases for the BVH hardware in RTX: Data lookup for complex lighting, and screen space physics (that are also world aware). And we are only scratching the surface here.

So much for it being a black box implementation limiting the creativity of developers.

https://blog.demofox.org/2018/11/16/how-to-data-lookups-via-raytracing/
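The demofox post boils down to: store your data as geometry, fire a ray at it, and read the answer out of the hit. A toy CPU version of the "which light volume is this shading point in?" case might look like the sketch below. Everything here is my own naming, and a brute-force scan stands in for what the RT cores would do via hardware BVH traversal.
Code:
#include <cstdio>
#include <vector>

// An axis-aligned box tagged with a payload (here, a light index).
struct Volume { float min[3], max[3]; int lightIndex; };

static bool contains(const Volume& b, const float p[3]) {
    for (int i = 0; i < 3; ++i)
        if (p[i] < b.min[i] || p[i] > b.max[i]) return false;
    return true;
}

// "Data lookup via ray tracing": which light volume contains this shading point?
// Linear scan here; on RTX the volumes would sit in a BVH and a short ray would
// return the hit's payload through fixed-function traversal.
static int lookupLight(const std::vector<Volume>& volumes, const float p[3]) {
    for (const Volume& v : volumes)
        if (contains(v, p)) return v.lightIndex;
    return -1; // no volume hit: fall back to ambient, say
}

int main() {
    std::vector<Volume> volumes = {
        {{0, 0, 0}, {5, 5, 5}, 0},   // light 0 covers one room
        {{5, 0, 0}, {10, 5, 5}, 1},  // light 1 covers the next
    };
    float shadingPoint[3] = {7.5f, 1.0f, 2.0f};
    std::printf("light index = %d\n", lookupLight(volumes, shadingPoint)); // 1
}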
That's GPGPU. They're talking about representing the data in graphics terms and using the graphics hardware, just interpreting the results differently. If the box were any colour other than black, perhaps a lot more could be done with it? The existence of some novel uses doesn't prove that the implementation isn't restricting other novel uses (including performance-enhancing ones), or that options aren't more limited than they would be if the hardware weren't a 'black box'.

This is exactly the same as GPGPU versus compute. GPUs could be made to do non-graphics work by structuring the workload as graphics tasks to fit the hardware. The hardware was then upgraded to better support general purpose processing.
 
More games in the works with RT; interesting to say the least.
RT is impressive, but DLSS is something too, and having both in a title could be a thing.
 
Could you elaborate on the "more games" part?
Also, I don't understand the fuss around DLSS unless your only point is trying to get some AA at 4K, but even then I would still probably pick 4K without AA rather than 1440p-or-something upscaled and AA'd.
 
That's GPGPU. They're talking about representing the data in graphics terms and using the graphics hardware, just interpreting the results differently. If the box were any colour other than black, perhaps a lot more could be done with it? The existence of some novel uses doesn't prove that the implementation isn't restricting other novel uses (including performance-enhancing ones), or that options aren't more limited than they would be if the hardware weren't a 'black box'.

This is exactly the same as GPGPU versus compute. GPUs could be made to do non-graphics work by structuring the workload as graphics tasks to fit the hardware. The hardware was then upgraded to better support general purpose processing.
Is the rasterization pipeline fully programmable? No. Why? Because it would be too slow.

You keep calling RTX a black box, but really, exactly what things does it prevent you from doing? Do you have any specific examples of the things we would miss out on if it wasn't for its adoption or is it just FUD?

If we didn't adopt it at least in the short term I can tell you what we would miss out on: fast triangle-intersecting ray tracing.
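To put a concrete shape on "fast triangle-intersecting ray tracing": the per-triangle test itself is tiny (below is a bog-standard Moller-Trumbore version, not Nvidia's actual implementation), but a real frame runs it, plus BVH traversal, millions of times, and that's exactly the work the RT cores take off the shader ALUs.
Code:
#include <cstdio>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b)  { return {a.y * b.z - a.z * b.y,
                                       a.z * b.x - a.x * b.z,
                                       a.x * b.y - a.y * b.x}; }
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore ray/triangle test: returns true and writes the hit distance t.
static bool rayTriangle(V3 orig, V3 dir, V3 v0, V3 v1, V3 v2, float& t) {
    const float eps = 1e-7f;
    V3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    V3 h = cross(dir, e2);
    float a = dot(e1, h);
    if (a > -eps && a < eps) return false;      // ray parallel to triangle plane
    float f = 1.0f / a;
    V3 s = sub(orig, v0);
    float u = f * dot(s, h);
    if (u < 0.0f || u > 1.0f) return false;     // outside the triangle
    V3 q = cross(s, e1);
    float v = f * dot(dir, q);
    if (v < 0.0f || u + v > 1.0f) return false; // outside the triangle
    t = f * dot(e2, q);
    return t > eps;                             // hit must be in front of the origin
}

int main() {
    float t;
    bool hit = rayTriangle({0.25f, 0.25f, 1.0f}, {0.0f, 0.0f, -1.0f},
                           {0, 0, 0}, {1, 0, 0}, {0, 1, 0}, t);
    std::printf("hit=%d t=%.2f\n", hit, t); // hit=1 t=1.00
}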
 
A bit like all the people that said what the PS2 could or couldn't do.
As Sebbbi says, "Where's the mesh shader hype?"

As this goes on, and honestly, I'm seeing a lot of activity and discussion around DXR, more than I've seen with other new features that were released with Maxwell and Pascal. And so I'm led to believe that they have enough information to go on to determine whether it's worthwhile to explore DXR.

Hoping to see mesh and primitive shaders make it for next gen as well. But DX12/Vulkan need to adopt them into their main branch, as opposed to an extension, for them to become a standard for all.

I don't know if this is the correct thread for discussing it, but it is part of Turing ;)
 
Could you elaborate on the "more games" part?
Also, I don't understand the fuss around DLSS unless your only point is trying to get some AA at 4K, but even then I would still probably pick 4K without AA rather than 1440p-or-something upscaled and AA'd.
DLSS should be able to up-res from 1440p to 4K and look nearly native; its branch of research would cover topics like AA, etc.
It's important because the argument against DXR is that the games can only run at 1080p. But with AI up-res we're looking at the ability to push that significantly higher.
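The back-of-the-envelope numbers for why that matters (plain pixel counts, nothing DLSS-specific): 1440p is less than half the pixels of 4K, so rendering at 1440p and reconstructing up leaves a lot of frame time free for rays.
Code:
#include <cstdio>

int main() {
    long p1440 = 2560L * 1440; // 3,686,400 pixels
    long p2160 = 3840L * 2160; // 8,294,400 pixels
    std::printf("1440p is %.0f%% of the pixels of 4K\n",
                100.0 * p1440 / p2160); // ~44%
}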
 
You keep calling RTX a black box, but really, exactly what things does it prevent you from doing?
I don't know. I wasn't discussing whether the BVH was a black box or not. I was just pointing out that the existence of a GPGPU use of the hardware does not prove it to be transparent and flexible. Neither does that prove it's a 'black box'.

Do you have any specific examples of the things we would miss out on if it wasn't for its adoption or is it just FUD?
I'm not arguing about the features because I don't know the specifics of the implementation. Others, such as milk I think, have spoken at length about what they'd like to see in the hardware as regards accessible memory access.

If we didn't adopt it at least in the short term I can tell you what we would miss out on: fast triangle-intersecting ray tracing.
You're repeating the same argument instead of moving the discussion forwards. The discussion has proceeded thus far...

1) RTX's BVH is a black box that limits what devs can do.
2) Here's a use of BVH that isn't for raytracing, thereby proving the hardware isn't limiting.
3) Use of the hardware in ways it wasn't designed for does not prove it's not limiting.

What we need now is either someone showing that this GPGPU implementation is actually enabled by a transparent implementation, disproving point 3; or a link to hardware docs etc. showing how the hardware works and is accessible, disproving point 1; or an acknowledgement that yes, the hardware is restrictive, but at least it's a first step.
 
As Sebbbi says, "Where's the mesh shader hype?"

As this goes on, and honestly, I'm seeing a lot of activity and discussion around DXR, more than I've seen with other new features that were released with Maxwell and Pascal. And so I'm led to believe that they have enough information to go on to determine whether it's worthwhile to explore DXR.

Hoping to see mesh and primitive shaders make it for next gen as well. But DX12/Vulkan need to adopt them into their main branch, as opposed to an extension, for them to become a standard for all.

I don't know if this is the correct thread for discussing it, but it is part of Turing ;)

The mesh shader hype is high on the developer side.

http://reedbeta.com/blog/mesh-shader-possibilities/

And they hope primitive shaders are the same thing and will be functioning on next-generation consoles.
 
DLSS should be able to up-res from 1440p to 4K and look nearly native; its branch of research would cover topics like AA, etc.
It's important because the argument against DXR is that the games can only run at 1080p. But with AI up-res we're looking at the ability to push that significantly higher.
I do understand that point, but the fact is that it loses detail in the process, and the only independent solution available so far (even if NVIDIA was heavily involved) breaks at least DoF, if not more, in the process.
 
There are other methods as well; I don't think DLSS is the only one (well, unfortunately Nvidia coined the term), so we might be using the term liberally when we shouldn't.

Microsoft has one that runs purely on compute and not on tensor cores. I posted it somewhere (I'll try to come back and link it), but it was part of their DirectML presentation. Not sure if that's the same as Nvidia's implementation of DLSS. I don't think it is.

We're still pretty early in using NNs to enhance images. A lot of R&D needs to be done here, I think, at least in the gaming industry.
 