Nvidia Turing Speculation thread [2018]

The first question would be, what does GV100 have wrt RT acceleration? ;) I don't believe that has ever been answered. My expectation is it actually has very little (at least nowhere near Wizard levels).
 
PowerVR was claiming 100 MRay/s in 2W for GR6500, which was on 28nm. IIRC they had HW acceleration for ray-triangle intersection testing, BV traversal, and some sort of "coherency engine", which I guess means ray sorting. That was a product that also offered a more traditional rendering path. It feels like it should be possible to do something pretty impressive with 10x the power budget on 12nm.
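For the curious, the ray-triangle test PowerVR baked into silicon is the same math a software tracer runs per ray per triangle. A minimal C++ sketch of the standard Möller–Trumbore algorithm (my own illustration, not PowerVR's actual hardware logic):

```cpp
#include <array>
#include <cmath>
#include <optional>

using Vec3 = std::array<float, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
}
static float dot(const Vec3& a, const Vec3& b) { return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]; }

// Möller–Trumbore: returns the distance t along the ray if it hits
// triangle (v0, v1, v2), or nothing on a miss.
std::optional<float> intersect(const Vec3& orig, const Vec3& dir,
                               const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    const Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    const Vec3 p  = cross(dir, e2);
    const float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return std::nullopt;   // ray parallel to triangle
    const float invDet = 1.0f / det;
    const Vec3 tv = sub(orig, v0);
    const float u = dot(tv, p) * invDet;               // barycentric u
    if (u < 0.0f || u > 1.0f) return std::nullopt;
    const Vec3 q = cross(tv, e1);
    const float v = dot(dir, q) * invDet;              // barycentric v
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    const float t = dot(e2, q) * invDet;               // hit distance
    return (t > 0.0f) ? std::optional<float>(t) : std::nullopt;
}
```

A fixed-function unit doing this per clock is the cheap part; as discussed further down, the expensive part is feeding it memory-coherent rays.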
 
I doubt it's anything more than GV100 already has

Maybe, maybe not. It may not have more hardware units, but maybe these GPUs will have the next iteration of the hardware. After all, V100, which is used in the current Quadros and Titan V, is over a year old now. I'm hoping the GeForce RTX is further optimized for game and graphics performance and not just reused Volta.
 
Wasn't it confirmed that they use tensor cores to accelerate denoising, which is AFAIK the only RT-acceleration GV100 has?

I thought the same, but none of the GameWorks RTX ray tracing features, apart from one denoising feature, uses Tensor Cores. There was a presentation at GDC and I think all of them were just using compute. But at the same time RTX is only really supported on Volta. That's all a bit strange.
 
I thought the same, but none of the GameWorks RTX ray tracing features, apart from one denoising feature, uses Tensor Cores. There was a presentation at GDC and I think all of them were just using compute. But at the same time RTX is only really supported on Volta. That's all a bit strange.

Found the presentation:
https://www.gdcvault.com/play/1024813/

Only path tracing is using Tensor Cores. None of the real-time ray tracing is using them, as I understood it. At least they only mention TCs for path tracing denoising.
 
PowerVR was claiming 100 MRay/s in 2W for GR6500, which was on 28nm. IIRC they had HW acceleration for ray-triangle intersection testing, BV traversal, and some sort of "coherency engine", which I guess means ray sorting. That was a product that also offered a more traditional rendering path. It feels like it should be possible to do something pretty impressive with 10x the power budget on 12nm.


That's the history of PowerVR... "Imagine what this tech can do on a big GPU". And it never happens. They may have the best RT tech in town, but since they're not in the gaming and workstation market, nobody cares :/
 
That's the history of PowerVR... "Imagine what this tech can do on a big GPU". And it never happens. They may have the best RT tech in town, but since they're not in the gaming and workstation market, nobody cares :/
If you make tech for a market that doesn’t care about it, you’re probably doing it wrong... ;-)
 
Wonder if nVidia will offer up a second GPU as an RTX accelerator. The raytracing pass is essentially independent of the classic graphics pipeline. The second GPU would just have to ship the shadow and reflection buffers over to the primary after rendering.
 
Wonder if nVidia will offer up a second GPU as an RTX accelerator. The raytracing pass is essentially independent of the classic graphics pipeline. The second GPU would just have to ship the shadow and reflection buffers over to the primary after rendering.
Not sure a second GPU would be needed. They seem to have an algorithm for adaptive ray tracing with TAA anti-aliasing (ATAA), which was demonstrated with Unreal Engine 4.

We introduce a pragmatic algorithm for real-time adaptive supersampling in games. It extends temporal antialiasing of rasterized images with adaptive ray tracing, and conforms to the constraints of a commercial game engine and today’s GPU ray tracing APIs.
https://forum.beyond3d.com/posts/2038965/
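The gist of the paper is sketchable: reuse TAA's own history-rejection signal to find the few pixels where temporal antialiasing fails, and spend rays only on those. A hypothetical CPU-side sketch of the per-pixel decision (names and thresholds are mine; the real thing lives in UE4's shaders):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical per-pixel classification: TAA's history is unreliable where
// motion caused disocclusion or where the clamped history diverges too far
// from the new raster sample. Those pixels get rays; everyone else keeps
// the cheap temporally accumulated result.
enum class Strategy : std::uint8_t { UseTAA, RayTrace };

Strategy classify(float historyError, bool disoccluded) {
    const float kErrorThreshold = 0.1f;  // made-up threshold, for illustration
    return (disoccluded || historyError > kErrorThreshold) ? Strategy::RayTrace
                                                           : Strategy::UseTAA;
}

// Count how sparse the expensive set is for one frame's worth of pixels.
std::size_t countRayTraced(const std::vector<float>& historyError,
                           const std::vector<bool>& disoccluded) {
    std::size_t n = 0;
    for (std::size_t i = 0; i < historyError.size(); ++i)
        if (classify(historyError[i], disoccluded[i]) == Strategy::RayTrace)
            ++n;  // in the real pass these pixels get supersampled with rays
    return n;
}
```

The point is the ray budget scales with the flagged set, not the frame, which is why a single GPU might cope.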
 
RTX is nothing more than a PR name for driver support of DXR... or propaganda to make it sound like Nvidia GPUs have "special" RT HW.. you choose what suits you.

Want to learn more about DXR and RTX? Let's hit Nvidia's dev site:

https://news.developer.nvidia.com/dx12-raytracing-tutorials/

NVIDIA RTX technology combined with Microsoft’s DXR ray tracing extension for DirectX 12 will enable greater realism while simplifying the development pipeline.

Oh nice, let's see what RTX does in conjunction with DXR... open both tutorials linked there... not a single mention of RTX! CTRL+F: RTX = nada! So nothing that would require Volta.. but NVIDIA has only released DXR-enabled drivers for Volta GPUs... so a Volta GPU is required for real-time RT, right? Nice trick..

NVIDIA RTX is the product of 10 years of work in computer graphics algorithms and GPU architectures. It consists of a highly scalable ray tracing technology running on NVIDIA Volta architecture GPUs. Developers can access NVIDIA RTX technology through the NVIDIA OptiX application programming interface, through Microsoft’s new DirectX Raytracing API and, soon, Vulkan, the new generation, cross-platform graphics standard.

That's a load of nothingness... given that the only "RTX" feature which uses Volta's Tensor Cores is OptiX denoising (which, BTW, also works on non-Volta GPUs... in OptiX).

Anyway, there's no magic sauce... as stated by MS, every DX12 GPU should be able to use DXR with the right drivers. Volta is being pimped just because its Tensor Cores are used for faster OptiX denoising (which wasn't used in the Remedy demo, for example). Radeon GPUs are just as capable of doing all of this, btw. Radeon ProRender (which is open source, uses OpenCL, and works perfectly on Nvidia GPUs) has been greatly improved in the last year: it now has a GPU-accelerated denoiser just like OptiX, was integrated into Unity's Progressive Lightmapper, has a real-time Vulkan version which has already been demoed, etc. But hey, Nvidia gotta be Nvidia (admittedly they do have tons of resources and skilled engineers currently working on ray tracing R&D, but their PR/marketing has always been shit-tier... Jen Seb just can't help it I guess).

The faster your GPU, the faster RT will be done.. as of right now, no magic HW.. but magic marketing..
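For what it's worth, an app never asks for "RTX" at all; DXR support is queried through a plain D3D12 feature check. A minimal sketch, assuming a Windows SDK recent enough to define the DXR additions to d3d12.h:

```cpp
#include <cstdio>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Ask the D3D12 runtime whether this device/driver combo exposes DXR.
bool SupportsDXR(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;  // older runtime: no DXR at all
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr,  // default adapter
                                 D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;
    std::printf("DXR supported: %s\n",
                SupportsDXR(device.Get()) ? "yes" : "no");
}
```

Note the check only reports what the driver claims; it says nothing about whether the rays are handled by dedicated hardware or plain compute, which is exactly the ambiguity being complained about above.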
 
@Ike Turner your text is colored rather than default. It's unreadable on dark theme. Could you please clear the formatting on the text?
 
RTX is nothing more than a PR name for driver support of DXR... or propaganda to make it sound like Nvidia GPUs have "special" RT HW.. you choose what suits you.

Want to learn more about DXR and RTX? Let's hit Nvidia's dev site:
https://news.developer.nvidia.com/dx12-raytracing-tutorials/

Oh nice, let's see what RTX does in conjunction with DXR... open both tutorials linked there... not a single mention of RTX! CTRL+F: RTX = nada! So nothing that would require Volta.. but NVIDIA has only released DXR-enabled drivers for Volta GPUs... so a Volta GPU is required for real-time RT, right? Nice trick..

That's a load of nothingness... given that the only "RTX" feature which uses Volta's Tensor Cores is OptiX denoising (which, BTW, also works on non-Volta GPUs... in OptiX).

Anyway, there's no magic sauce... as stated by MS, every DX12 GPU should be able to use DXR with the right drivers. Volta is being pimped just because its Tensor Cores are used for faster OptiX denoising (which wasn't used in the Remedy demo, for example). Radeon GPUs are just as capable of doing all of this, btw. Radeon ProRender (which is open source, uses OpenCL, and works perfectly on Nvidia GPUs) has been greatly improved in the last year: it now has a GPU-accelerated denoiser just like OptiX, was integrated into Unity's Progressive Lightmapper, has a real-time Vulkan version which has already been demoed, etc. But hey, Nvidia gotta be Nvidia (admittedly they do have tons of resources and skilled engineers currently working on ray tracing R&D, but their PR/marketing has always been shit-tier... Jen Seb just can't help it I guess).
The faster your GPU, the faster RT will be done.. as of right now, no magic HW.. but magic marketing..
Fixed for easier reading for us dark theme users, thanks
 
? I'm not seeing this on my end in Edge/Chrome or Firefox... must be an issue with Windows 10's shitty insider build I'm running..
If you used copy/paste, that's most likely the culprit here; for whatever reason it applies (apparently randomly, too) formatting on copy/paste into XenForo. It's been happening to me for quite some time and it's always the same: dark text which is unreadable on the dark theme.
I'm not even running Insider builds (though my Windows has been broken since 1803: autocorrect can't be turned off no matter what, MS's support threw the issue around 3 levels of techs without success; going for a clean install once the fall update comes)
 
Wasn't it confirmed that they use tensor cores to accelerate denoising, which is AFAIK the only RT-acceleration GV100 has?

denoising != ray tracing. Denoising is a trick that may or may not let you shoot fewer rays but still achieve roughly the same image quality. You could apply the same "trick" with other rendering techniques.
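To make that concrete: the tracer writes a noisy few-rays-per-pixel estimate into a buffer, and the denoiser is just a reconstruction filter over it. A toy C++ sketch, with a plain box filter standing in for the bilateral/AI filters real denoisers use:

```cpp
#include <algorithm>
#include <vector>

// Toy denoiser: average each pixel with its (2r+1)^2 neighbourhood.
// Real denoisers (OptiX's AI denoiser, SVGF, etc.) are far smarter, but
// the principle is the same: trade rays per pixel for a cheap
// reconstruction pass over the noisy image.
std::vector<float> boxDenoise(const std::vector<float>& img,
                              int w, int h, int r) {
    std::vector<float> out(img.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int n = 0;
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx) {
                    const int xx = std::clamp(x + dx, 0, w - 1);  // clamp at edges
                    const int yy = std::clamp(y + dy, 0, h - 1);
                    sum += img[yy * w + xx];
                    ++n;
                }
            out[y * w + x] = sum / n;
        }
    }
    return out;
}
```

Nothing in that pass cares whether the input came from ray tracing, which is the point: the filter is orthogonal to the rendering technique.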

PowerVR was claiming 100 MRay/s in 2W for GR6500, which was on 28nm. IIRC they had HW acceleration for ray-triangle intersection testing, BV traversal, and some sort of "coherency engine", which I guess means ray sorting. That was a product that also offered a more traditional rendering path. It feels like it should be possible to do something pretty impressive with 10x the power budget on 12nm.

They did have some dedicated hardware for calculating ray-triangle and ray-box intersections, but that wasn't where the "big win" was. What you call their "BV (did you mean BVH here?) traversal" and "coherency engine" was the big win. Wizard had the ability to "store" the scene in a BVH/oct-tree (can't recall which it was... the trend today though is BVH since it's often faster to "update", so that would be my guess). When a ray was submitted to "fire", Wizard would not fire that ray right away. Instead it would wait until it had collected enough rays being fired into the same "general location" of the tree, and then fire them all. Remember, ray tracing is not ALU bound, but memory bound. Rays VERY quickly bounce in many different directions, so a common optimization today is to group together rays that will (hopefully) access the same memory locations. This optimization, I believe, is employed by most "software" ray tracers today. How to update the tree (when anything in the scene changes) is one of the more interesting research problems left (current trend: store two levels in a BVH tree)!
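A minimal illustration of that "collect rays heading the same way before firing them" idea, using simple direction-based binning (Wizard's actual grouping heuristic isn't public, so this is just the concept):

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Ray { std::array<float, 3> origin, dir; };

// Bin rays by the dominant axis and sign of their direction: 6 buckets.
// Tracing a bucket together makes neighbouring rays walk similar parts of
// the BVH, keeping its nodes hot in cache. It's the memory-bound part of
// traversal this targets, not the ALU work.
int bucketOf(const Ray& r) {
    const auto& d = r.dir;
    int axis = 0;
    if (std::fabs(d[1]) > std::fabs(d[axis])) axis = 1;
    if (std::fabs(d[2]) > std::fabs(d[axis])) axis = 2;
    return axis * 2 + (d[axis] < 0.0f ? 1 : 0);
}

std::array<std::vector<Ray>, 6> binRays(const std::vector<Ray>& rays) {
    std::array<std::vector<Ray>, 6> buckets;
    for (const Ray& r : rays) buckets[bucketOf(r)].push_back(r);
    return buckets;  // trace one bucket at a time for coherent BVH access
}
```

Software tracers typically do something like this (or sort by Morton code of origin/direction); the interesting thing about Wizard was doing the deferral and regrouping invisibly in hardware.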

I very much doubt Volta has any of this. Part of the reason Wizard failed was that the amount of area that had to be dedicated solely to ray tracing was... a lot. No SoC (or console...) vendor wanted to pay that much area for an unproven market. So what does Volta have? I'm guessing some compute/cache enhancements that happened to also benefit ray tracing, but nothing revolutionary. Oh, and I suppose that 900 GB/s of memory bandwidth helps a lot too. :D
 
denoising != ray tracing. Denoising is a trick that may or may not let you shoot fewer rays but still achieve roughly the same image quality. You could apply the same "trick" with other rendering techniques.
I should have been more precise; what I meant was "denoising using Tensor Cores is, AFAIK, the only 'special acceleration' for RT that Volta has over other GeForces".
 