Next gen lighting technologies - voxelised, traced, and everything else *spawn*

IMO, greatly expanded physics processing capability [mass-scale soft/rigid object collisions] would be more welcome than the inclusion of raytracing in gen9 consoles. Fistfights in the water, fully realistic clothes, lifelike explosions of body tissues in horror games, interactive particles of all types, crazy VFX particle effects [Housemarque on steroids], etc. Physics processing can directly impact gameplay.

Like raytracing, all of this was possible before, but it became slow to render when larger amounts of particles were introduced.

DXR accelerates BVH creation/updating and ray tracing against it, the very things rigid body collision detection relies on.
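To make that overlap concrete, here's a rough sketch (made-up layout and names, not DXR's actual internals) showing how the same BVH answers both a ray query and a broad-phase collision query; only the per-node test changes:

```cpp
#include <algorithm>
#include <vector>

struct AABB { float min[3], max[3]; };

struct BVHNode {
    AABB bounds;
    int  left = -1, right = -1;  // -1 => leaf
    int  primitive = -1;         // triangle / rigid body index stored at a leaf
};

// Slab test: does the ray (origin, 1/direction) cross this box?
bool RayHitsAABB(const float org[3], const float invDir[3], const AABB& b) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int i = 0; i < 3; ++i) {
        float t0 = (b.min[i] - org[i]) * invDir[i];
        float t1 = (b.max[i] - org[i]) * invDir[i];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Per-axis interval test: do two boxes overlap?
bool Overlaps(const AABB& a, const AABB& b) {
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i]) return false;
    return true;
}

// Ray query: what "ray tracing against the BVH" boils down to.
void TraceRay(const std::vector<BVHNode>& bvh, int node, const float org[3],
              const float invDir[3], std::vector<int>& hits) {
    const BVHNode& n = bvh[node];
    if (!RayHitsAABB(org, invDir, n.bounds)) return;
    if (n.left < 0) { hits.push_back(n.primitive); return; }
    TraceRay(bvh, n.left, org, invDir, hits);
    TraceRay(bvh, n.right, org, invDir, hits);
}

// Broad-phase collision query: identical traversal, different node test.
void FindOverlaps(const std::vector<BVHNode>& bvh, int node, const AABB& query,
                  std::vector<int>& candidates) {
    const BVHNode& n = bvh[node];
    if (!Overlaps(query, n.bounds)) return;
    if (n.left < 0) { candidates.push_back(n.primitive); return; }
    FindOverlaps(bvh, n.left, query, candidates);
    FindOverlaps(bvh, n.right, query, candidates);
}
```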

Yes, yes, YESSS!! It will be either one or the other. Maybe in the gen after next we will be able to enjoy both RT and higher-fidelity physics simulations, but IMO physics simulation needs to take a huge step forward NOW.

Yesterday I saw some "leaked" DOA 6 screenshots, and people were pleasantly surprised, commenting on how next-gen it looked. While they said those things, I just saw THIS:
This needs to stop. And I'm even trying not to look at some clear polygon edges or the lack of proper hair, but come on! Cloth simulation dates back to the first PlayStation console, and we're still like this? Just apply some darn physics to the cloth in a "simple" two-characters-on-screen game, at least, so that the sleeves don't end up looking and feeling like rigid tubes! I'd rather have realistic clothing than a perfectly accurate reflection.
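For what it's worth, the kind of "simple" cloth being asked for here is decades-old tech: a mass-spring grid integrated with Verlet and a few constraint passes. A hedged sketch (names and constants are illustrative, not from any particular engine):

```cpp
#include <cmath>
#include <vector>

struct Particle { float x[3], prev[3]; bool pinned; };
struct Spring   { int a, b; float rest; };   // rest = rest length between particles

void StepCloth(std::vector<Particle>& p, const std::vector<Spring>& springs,
               float dt, int iterations = 4)
{
    const float gravity = -9.81f;
    // Verlet integration: next = 2*pos - prev + accel*dt^2
    for (auto& q : p) {
        if (q.pinned) continue;
        for (int i = 0; i < 3; ++i) {
            float cur   = q.x[i];
            float accel = (i == 1) ? gravity : 0.0f;   // gravity on the Y axis
            q.x[i]    = 2.0f * cur - q.prev[i] + accel * dt * dt;
            q.prev[i] = cur;
        }
    }
    // Satisfy distance constraints so sleeves bend instead of acting like rigid tubes.
    for (int it = 0; it < iterations; ++it) {
        for (const auto& s : springs) {
            Particle& a = p[s.a];
            Particle& b = p[s.b];
            float d[3], len = 0.0f;
            for (int i = 0; i < 3; ++i) { d[i] = b.x[i] - a.x[i]; len += d[i] * d[i]; }
            len = std::sqrt(len);
            if (len < 1e-6f) continue;
            float corr = (len - s.rest) / len * 0.5f;  // split correction between both ends
            for (int i = 0; i < 3; ++i) {
                if (!a.pinned) a.x[i] += d[i] * corr;
                if (!b.pinned) b.x[i] -= d[i] * corr;
            }
        }
    }
}
```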

Ew at the horrid dubstep music in the last video. :D
That barely looks better than last gen fighting games.

Even the PS3 FFVII tech demo has better cloth physics.

 
I'm going to repeat myself again, but it must be made clear that supporting DXR doesn't require any special hardware (other than a DX12-class GPU and, obviously, drivers enabling it). Anybody can go and download & compile Microsoft's DXR samples and run them on Pascal/Maxwell GPUs and it just works (not so much on Radeon, because of the drivers I guess). A GTX 1080 can reach 1 Grays/s in one of the samples, which is 10x slower than the RT Cores in the 2080 Ti for example, but "only" 6x slower than a 2070. So I wouldn't worry too much about future console support for RT. Now, will they have dedicated HW for it? That depends on AMD, but they can still claim support for it anyway (even the Xbox/PS4 could..).
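For reference, here's roughly how an app can ask the driver whether native DXR is exposed at all; a minimal sketch assuming an already-created ID3D12Device and a recent Windows 10 SDK, with error handling omitted. If the tier comes back as not supported, you'd be on the compute/fallback-layer path those samples use on older hardware:

```cpp
#include <d3d12.h>

// Returns true if the driver exposes DXR natively for this device.
bool SupportsNativeDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;
    // NOT_SUPPORTED means no driver/hardware DXR path; an app could then
    // fall back to its own compute-shader ray tracing (or the fallback layer).
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```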
Supporting DXR without the hardware would be largely unfeasible and a pointless exercise. 6x a 1080 Ti is equivalent to over 60 TF of compute power.

So we need support with the hardware to make any sort of graphical difference here.
 
I could see Nvidia wanting to partner with MS on the next console to specifically drive adoption of ray-tracing. They would not only have the PC market but a giant presence in the console industry. 2 out of the 3 platforms would run on their hardware. In addition, partnering with MS on their streaming console could also drive sales of the data center GPUs used for the server-side processing.
DXR is platform agnostic, so it's still likely sitting with AMD in this scenario. I can't see Nvidia changing their business model; they sell GPUs at high premiums, and consoles just stand to take that away. Supporting DXR should already drive adoption of their hardware.

Intel and AMD should have equivalents arriving.
MS should have had enough foresight to plan for this with their next Xbox. The timing is right: release DXR now and have it mature in time for console releases in 2020/2021.
 
There is, IMO, practically zero chance of this happening, given that:
- Work on Scarlet is already well under way (same with PS5, which is all but confirmed to be using a variant of Navi)
- Nvidia can't provide an APU/SoC with an x86 CPU (and Ryzen is too good to pass on)
- MS wants full and perfectly working BC support for all of its console games.

This. Unless it's Microsoft's plan to sell a $700-900 console next to Sony's presumed $400ish console, Microsoft are still constrained by the technical options that Nvidia can offer. Nvidia can offer GPU cores. Where's the rest of the console coming from, how much is that costing, and how much more complex does your console need to be because you're not using an APU?

DXR is platform agnostic, so it's still likely sitting with AMD in this scenario. I can't see Nvidia changing their business model; they sell GPUs at high premiums, and consoles just stand to take that away. Supporting DXR should already drive adoption of their hardware.

CPU agnostic perhaps, and that's a big IF; plenty of Microsoft's Windows 10 frameworks do not run on non-80x86 CPUs because they don't need to, so Microsoft are not expending engineering effort to ensure they do. But this API is not platform agnostic unless Microsoft are looking to make the DirectX Raytracing API an open standard. If not, it's platform-locked to Windows 10.
 
Or Intel. Intel's new GPU, available in 2020, will contain RT functionality far better than the defunct Larrabee technology. And Intel has been hinting at a major partner "partnering" with them on using their GPUs/CPUs within the console space around 2020.

Intel has been telling the world "no seriously you guys, the next GPU we make will be totes cool" for years. Hell, if you've ever had the misfortune to listen to them talk about their current GPU tech, you'd know there's no need to evolve because the current solution is just peachy. Honestly, if they'd just concentrate on decent driver support, that would be a step in the right direction.

Since the original Xbox there's been little room in Intel's plans for high-volume, low-margin products like console SoCs, but with the death of broad x86 consumer computing perhaps they might decide that a wee console chip is an idea worth doing. They have hired some important talent recently, but it seems too soon for this cycle.
 
CPU agnostic perhaps, and that's a big IF; plenty of Microsoft's Windows 10 frameworks do not run on non-80x86 CPUs because they don't need to, so Microsoft are not expending engineering effort to ensure they do. But this API is not platform agnostic unless Microsoft are looking to make the DirectX Raytracing API an open standard. If not, it's platform-locked to Windows 10.
Sorry, I meant GPU vendor agnostic; i.e. the DirectX calls work on hardware from any GPU vendor that supports DXR.
 
I could see Nvidia wanting to partner with MS on the next console to specifically drive adoption of ray-tracing. They would not only have the PC market but a giant presence in the console industry. 2 out of the 3 platforms would run on their hardware. In addition, partnering with MS on their streaming console could also drive sales of the data center GPUs used for the server-side processing.
I wonder how that would affect backwards compatibility.
 
As I said, Sony/MS/AMD/Intel or whoever can claim support for real-time RT if they wish..simply because there are tons of ways to do real-time ray tracing on today's DX12-class GPUs without the need for RT Cores:
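To give one concrete example of such a technique, here's a hedged sketch of sphere tracing a signed distance field, the sort of approach Claybook-style renderers use, written as plain C++ for brevity. On a DX12-class GPU this loop would live in a compute shader; the scene and constants here are purely illustrative:

```cpp
#include <cmath>

// Illustrative scene SDF: a single sphere of radius 1 at the origin.
float SceneSDF(float x, float y, float z)
{
    return std::sqrt(x * x + y * y + z * z) - 1.0f;
}

// March along the ray, stepping by the distance to the nearest surface.
// Returns true (and the hit distance t) if a surface is reached.
bool SphereTrace(const float org[3], const float dir[3], float& t)
{
    t = 0.0f;
    for (int i = 0; i < 128; ++i) {          // iteration cap
        float px = org[0] + dir[0] * t;
        float py = org[1] + dir[1] * t;
        float pz = org[2] + dir[2] * t;
        float d = SceneSDF(px, py, pz);
        if (d < 1e-3f) return true;          // close enough: treat as a hit
        t += d;                              // safe step: nothing is closer than d
        if (t > 100.0f) break;               // ray left the scene
    }
    return false;
}
```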


Sure. One can also tear down a brick wall by slamming their car into it (brute forcing). But that doesn't make it an ideal choice when a backhoe is available. Simply put, having the proper hardware in place, more specifically bespoke RT cores, would relieve these types of headaches.
 
Sure. One can also tear down a brick wall by slamming their car into it (brute forcing). But that doesn't make it an ideal choice when a backhoe is available. Simply put, having the proper hardware in place, more specifically bespoke RT cores, would relieve these types of headaches.

It's still kind of an odd design choice. For some years GPU hardware has been progressing towards powerful, flexible cores that can be used for anything, and now we have this: bespoke processors. If these are not being used, are those resources just sitting there idle?
 
Sure. One can also tear down a brick wall by slamming their car into it (brute forcing). But that doesn't make it an ideal choice when a backhoe is available. Simply put, having the proper hardware in place, more specifically bespoke RT cores, would relieve these types of headaches.
Sure. But the whole point of this tweet is simply to show that the XX Grays/s claims for the RTX cards are all but meaningless when you don't know how they were measured (scene complexity, view/ray directions, bounces, static or non-static meshes, etc.). We will have to wait a few weeks until somebody spills the beans on how those RT cores actually work. As I said in another thread: Turing is an awesome GPU for Quadro cards; the speed-up in light baking for things like normal, height, lighting and AO maps during content creation is going to be nuts..but not so much on the consumer side, especially at that price and if those RT cores take a shit-ton of die space. Unlike Jensen's claim, it's not as easy as "It just works!"

Case in point, lots of shortcuts are still going to be taken (floating car?):
[attached image: rnh85ml.jpg]
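To put those Grays/s headlines in perspective, a made-up back-of-envelope calculation using the 10 Grays/s figure quoted earlier in the thread; the per-pixel ray counts are invented, and it ignores shading cost, ray divergence and scene complexity, which is exactly the point being made:

```cpp
#include <cstdio>

int main()
{
    const double giga_rays_per_sec = 10e9;        // marketing headline figure
    const int    width = 1920, height = 1080;

    // 1 primary ray per pixel, no bounces:
    double rays_per_frame = double(width) * height * 1;
    std::printf("primary only:      %.0f fps worth of rays\n",
                giga_rays_per_sec / rays_per_frame);

    // 2 rays per pixel (e.g. reflection + shadow), each with 2 bounces:
    rays_per_frame = double(width) * height * 2 * (1 + 2);
    std::printf("2 rpp, 2 bounces:  %.0f fps worth of rays\n",
                giga_rays_per_sec / rays_per_frame);
    return 0;
}
```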
 
That's true. It's like triangles per second. I assumed it was how many rays the custom hardware could process with regard to its specific workload, akin to the peak triangles per second of a GPU's T&L unit, but we don't know for sure. Although that'd give the biggest possible figure, so it's probably the one they'll pick for marketing reasons! ;)

Is there any description anywhere of what exactly nVidia's RT hardware actually does (or PVR's RT)? What is it accelerating and how? The big bottleneck with RT is data access rather than raw processing.
 
It's still kind of an odd design choice. For some years GPU hardware has been progressing towards powerful, flexible cores that can be used for anything, and now we have this: bespoke processors. If these are not being used, are those resources just sitting there idle?

I'm not denying the benefits of having more flexible cores - being able to do more than one rendering task. But here we are today, at the crossroads. The current class of available GPUs isn't well suited to RT, performance-wise (unless brute-forcing and multi-SLI/NVL is your thing). If Nvidia's (and possibly AMD's) current formula is to have separate custom RT logic/cores within the overall GPU design, then why not? Until they (Nvidia/AMD/Intel) figure out a better way of repurposing the current rasterization way of doing things, I'm in support of custom, task-specific cores.
 
Sure. But the whole point of this tweet is simply to show that the XX Grays/s claims for the RTX cards are all but meaningless when you don't know how they were measured (scene complexity, view/ray directions, bounces, static or non-static meshes, etc.). We will have to wait a few weeks until somebody spills the beans on how those RT cores actually work. As I said in another thread: Turing is an awesome GPU for Quadro cards; the speed-up in light baking for things like normal, height, lighting and AO maps during content creation is going to be nuts..but not so much on the consumer side, especially at that price and if those RT cores take a shit-ton of die space.
Weird. I interpreted that tweet differently, as in overwhelmingly positive for games going forward.

I get the need to actually break down the gigaray question, but we run into similar issues on the rasterization side. We quote TF, but that doesn't translate directly into performance. Thus we get people quoting AMD flops vs Nvidia flops.

That being said, Sebbbi, being the beast of a programmer that he is, may have found some sick optimizations to make things fly for his game. One would need to question how applicable his implementation is to other titles, and also how far he could take things with Claybook on a 20XX RTX card.
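For reference, the TF figures everybody quotes are just peak-rate arithmetic: shader count x 2 FLOPs per FMA x clock. A quick sketch (specs quoted from memory, so treat them as approximate); it says nothing about how well a real workload actually feeds those ALUs, which is the point above:

```cpp
#include <cstdio>

int main()
{
    struct Gpu { const char* name; int shaders; double boost_ghz; };
    const Gpu gpus[] = {
        {"GTX 1080",    2560, 1.733},   // approximate reference boost clock
        {"RTX 2080 Ti", 4352, 1.545},
    };
    for (const Gpu& g : gpus)
        std::printf("%-12s ~%.1f TFLOPS (FP32 peak)\n",
                    g.name, g.shaders * 2 * g.boost_ghz / 1000.0);
    return 0;
}
```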
 
That's true. It's like triangles per second. I assumed it was how many rays the custom hardware could process with regard to its specific workload, akin to the peak triangles per second of a GPU's T&L unit, but we don't know for sure. Although that'd give the biggest possible figure, so it's probably the one they'll pick for marketing reasons! ;)

Is there any description anywhere of what exactly nVidia's RT hardware actually does (or PVR's RT)? What is it accelerating and how? The big bottleneck with RT is data access rather than raw processing.
I believe the BVH is a big part of the acceleration. It holds ray/geometry information of some sort, and I assume it needs to be constantly updated every single frame, so perhaps they have a method to update the data structure rapidly.
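For illustration, a "refit" pass is one plausible way to update the structure rapidly without rebuilding it: keep the tree topology and just recompute bounds bottom-up after the geometry animates, which is roughly what DXR's acceleration-structure update flag is for. A hedged sketch with a made-up layout, not necessarily what the RT cores or drivers do internally:

```cpp
#include <algorithm>
#include <vector>

struct AABB {
    float min[3], max[3];
    void Grow(const AABB& o) {
        for (int i = 0; i < 3; ++i) {
            min[i] = std::min(min[i], o.min[i]);
            max[i] = std::max(max[i], o.max[i]);
        }
    }
};

struct BVHNode {
    AABB bounds;
    int left = -1, right = -1;  // -1 => leaf
    int primitive = -1;         // index into the per-frame primitive bounds at a leaf
};

void Refit(std::vector<BVHNode>& nodes, int node, const std::vector<AABB>& primBounds)
{
    BVHNode& n = nodes[node];
    if (n.left < 0) {                       // leaf: take the animated primitive's new bounds
        n.bounds = primBounds[n.primitive];
        return;
    }
    Refit(nodes, n.left, primBounds);       // children first...
    Refit(nodes, n.right, primBounds);
    n.bounds = nodes[n.left].bounds;        // ...then merge their boxes upward
    n.bounds.Grow(nodes[n.right].bounds);
}
```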
 
I'm not denying the benefits of having more flexible cores - being able to do more than one rendering task. But here we are. The current class of available GPUs isn't well suited to RT, performance-wise (unless brute-forcing and multi-SLI/NVL is your thing). If Nvidia's (and possibly AMD's) current formula is to have separate custom RT logic/cores within the overall GPU design, then why not?

The "why not" is that you're paying for hardware that, in all likelihood, is going to see very little use in mainstream games for a few generations. Remember PhysX cards? This looks like this decade's PhysX. :yep2:

It looks a bit.. well.. desperate. Nvidia wanted to ship something genuinely new that they didn't think AMD could counter with an alternative, and just went with this because what else do they have?
 
The "why not" is that you're paying for hardware that, in all likelihood, is going to see very little use in mainstream games for a few generations. Remember PhysX cards? This looks like this decade's PhysX. :yep2:

You have to start somewhere. Better RT hardware methods aren't going to design themselves. If Nvidia feels like pioneering the way or leading the charge to better RT methods/performance, I'm all for it. You can't fault an innovative company like Nvidia (or AMD) for venturing out into new products or concepts.
 
The "why not" is that you're paying for hardware that, in all likelihood, is going to see very little use in mainstream games for a few generations. Remember PhysX cards? This looks like this decade's PhysX. :yep2:

It looks a bit.. well.. desperate. Nvidia wanted to ship something genuinely new that they didn't think AMD could counter with an alternative, and just went with this because what else do they have?
Well, one can say that they at least have something, unlike AMD... Turing is a great GPU for pros, and even though Pascal is plenty for 99% of gaming needs on PC (the lack of serious competition from AMD helps), there would have been a massive shit-storm if they didn't release consumer/GeForce versions of this GPU two years after Pascal. Especially after not having a non-pro version of Volta... They had nothing to lose.
 