GART: Games and Applications using RayTracing

Discussion in 'Rendering Technology and APIs' started by BRiT, Jan 1, 2019.

  1. chris1515

    chris1515 Legend

    You don't have the console implementation on PC because PC uses DXR. And I am not sure devs will use the flexibility they have on consoles if it means building two RT systems, one for consoles and one for PC. Consoles have a standard BVH solution too, and I would not be surprised if all devs use that for now, and maybe forever, because they need to release their games on PC as well.

    I am not sure Sony studios will use this flexibility either if they need to release titles on PC one day. Every console title will release on PC eventually.
     
  2. JoeJ

    JoeJ Veteran

    IDK, but probably they use the same BVH data structure and build/refit compute shaders.
    I do know that GTX DXR is slower than other compute raytracers, so we can assume NV did not optimize the hell out of it. Still good enough. I wish AMD would do this too... :/
     
    CarstenS likes this.
  3. DegustatoR

    DegustatoR Veteran

  4. CarstenS

    CarstenS Legend Subscriber

    I believe it is the correct metric, speaking from a consumer point of view and also from it being a cut-down of the 2nd-largest die in its family.

    But be that as it may. The topic was RDNA2 vs. Turing, not Ampere.
    Then look at 6800 vs. 2080 Ti if you will, for which I provided an example as well. Or do you insist that I use the 6900 XT here, since it's also a 1000-EUR-class product, as the 2080 Ti was?
     
  5. trinibwoy

    trinibwoy Meh Legend

    Neither PhysX nor DXR was a primary driver of GPU prices.

    How can you say it was introduced too early when there are shipping games making practical use of RT today? You’re comparing today’s reality with some alternate version that either has no DXR or more flexible DXR. I don’t know why we would be better off with no RT in games today and it’s easy to say that something should be better but how do you know that the flexibility you want is even achievable today? Where are the alternatives to DXR that prove your point?

    I agree with you on large-scale fluid simulations. Hardware is nowhere near powerful enough yet. At the same time, though, we still don’t have good cloth simulation in games even though it’s definitely within reach of today’s GPUs. So yeah, it’s great that proprietary PhysX lost, but the end result is that the bar remains low for everyone because nobody else picked up the torch. The long-predicted open solution never arrived. Not really something worth celebrating.

    Doing nothing carries even worse consequences. Would you be more satisfied if this console generation had no RT at all? How is that better for us long term?
     
    DavidGraham and PSman1700 like this.
  6. DegustatoR

    DegustatoR Veteran

    Yeah, I wanted to point that out as well. PhysX died, and instead of all the free and better alternatives we're basically back to square one, with ragdoll physics running on the CPU and nobody in any hurry to create anything like what PhysX was.
     
  7. JoeJ

    JoeJ Veteran

    My impression is selected devs were contacted last minute just to make some quick demos for the DXR showcase. In public, there was nothing.
    Likely Epic knew about it some time before too. Maybe they complained, maybe they didn't. But it does not matter - LOD was the elephant in the room for decades, and no matter which solutions people come up with, access to the BVH is necessary for any method I'm aware of. Next gen was coming close, so the assumption there might be some progress on LOD was obvious, together with the assumption of increasing dynamic geometry altogether.

    A nice hypothetical question about UE5 would be: 'If Epic had known about DXR and its limits long enough, would they have cancelled the work on Nanite, because its geometry is not compatible although it's still traditional triangles?'
    Personally I am in the same situation, thus my rant. I'm happy DXR was announced right before I would have started work on compute raytracing, so I dropped those plans in time. But actually I keep coming back to thinking it might be the best compromise, although this really makes no sense.
     
  8. JoeJ

    JoeJ Veteran

    Because I'm afraid API limitations never get fixed properly. So we have nice RT games now, but worse games than what would be possible tomorrow.
    Notice this is no problem from the HW vendors' perspective. A reason to upgrade is a good thing for them, but a bad thing for developers and end users.
    It's not about alternative fantasy realities, but about conflicting interests which need to be balanced. Increasing production costs require some rethinking on all ends.
     
    CarstenS likes this.
  9. DegustatoR

    DegustatoR Veteran

    It's the same old story again - better to have something which is usable now than to wait some more years for a more flexible solution that might appear (or might not at all; see the PhysX example).
    I also don't understand how having DXR1 precludes us from "fixing API limitations" in the future. Isn't that a bit like saying that DX5 precluded us from getting DX7-12?
     
    pharma, HLJ, DavidGraham and 3 others like this.
  10. trinibwoy

    trinibwoy Meh Legend

    DXR doesn’t appear to be fundamentally broken. In this first iteration BVHs are black-boxed for a very good reason: BVHs need to be hyper-optimized for the traversal and intersection hardware they’re running on. Asking every developer to solve this problem on day one is a very bad idea. Imagine every game developer having to optimize their BVH implementation separately for AMD, Intel and Nvidia - e.g. if Nvidia hardware prefers BVH8 but AMD likes BVH4.

    It may be nice academically but a complete disaster for actually shipping games out the door. The IHVs would have no opportunity to optimize for their hardware and performance would be all over the place depending on the developer’s preferred platform and/or their ability to even code a proper BVH pipeline.

    A hypothetical future DXR version with programmable BVH will run just fine on today’s hardware. No need to upgrade.
     
    DavidGraham and PSman1700 like this.
  11. JoeJ

    JoeJ Veteran

    That's not my request. Accessing / building / maintaining the BVH should be possible, but not necessary for those who won't get any benefit from it. So yes, DXR is not 'broken', it just misses that essential option.
    If we get it, yes. But if we don't, using more powerful HW is the usual practice to achieve further progress, and I think we can't count on that so much anymore. So we need every option to optimize, and my request isn't even low level.
     
  12. JoeJ

    JoeJ Veteran

    IDK either, but there was originally an OpenCL implementation (so no access to intersection HW), and a Vulkan one (which in theory has access now).
    My guess is there is still no HW acceleration, and Turing was also faster than RDNA with Radeon Rays. Maybe the benchmark lists whether the CL or VK version was used, to be sure.
     
  13. trinibwoy

    trinibwoy Meh Legend

    In order to give developers access to the BVH, the API needs to expose a well-defined BVH data structure instead of the current opaque TLAS/BLAS hierarchy. This in turn would impose constraints on the hardware traversal and intersection implementation, as it would need to adhere to this strict API definition and the compression/caching considerations that go along with it.

    This seems quite a bit less flexible overall than the current approach where IHVs have room to innovate.
     
    pharma, HLJ, DavidGraham and 2 others like this.
  14. chris1515

    chris1515 Legend

    A counter-example is consoles: they have the same standard DXR in the API, multiple BVH solutions provided by the platform holder, and the possibility for everyone to customize their own BVH.

    It doesn't mean AMD and Nvidia don't have their own solutions; it means more flexibility. Keep the standard solution for first-gen titles and give flexibility to the devs who need it.
     
  15. trinibwoy

    trinibwoy Meh Legend

    Is it true that developers can customize the BVH on consoles? Do you have a source for that?
     
    PSman1700 likes this.
  16. JoeJ

    JoeJ Veteran

    I guess interest is minor, so not worth the 10 minutes of work to expose that single intersection instruction >:/
    Personally i'd use it, even if NV then still needs another solution...

    Not my request either. I don't see a good solution for a vendor-independent BVH API, and I don't need one. Vendor extensions would be the way to go, but that's not popular within Microsoft's vision of 'standardize everything'.
    So my biggest hope is Vulkan extensions. NV usually is creative here, e.g. the device generated commands extension. But they already have the performance lead and likely see no need to make things complicated. AMD does nothing so far, maybe also because RDNA3 might use another data structure by then.

    So it's difficult and just temporary. But the current solution has zero flexibility, and nothing is less flexible than that.
    To me, adding patches for each future generation would be much less work and headaches than trying to achieve LOD with DXR, which basically is impossible. So i would welcome temporary extensions with open arms, until vendors start to agree on BVH format (if they disagree at all).
    Take AMD's format, for example, which we know from the intersection instruction interface: 4 child pointers and bounding boxes. Super simple. Not as complicated as you and many here might think.
     
  17. trinibwoy

    trinibwoy Meh Legend

  18. chris1515

    chris1515 Legend

    There is no hardware traversal solution on consoles, no RT core. I don't see any limitation on devs doing what they want. Microsoft and Sony each have their own BVH solution that devs can use. They can do their own or, more probably, customize the existing MS or Sony BVH and tailor it to their needs. Microsoft said, for example, that on Xbox Series X you can precompute the BVH offline for static geometry, stream it from the SSD, and do other optimisations on it. And all of this is out of scope on PC.

    https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs

     
    Last edited: Jul 3, 2021
  19. DegustatoR

    DegustatoR Veteran

    There is, they are called ray accelerators.
     
    PSman1700 likes this.
  20. chris1515

    chris1515 Legend

    Ray/box and ray/triangle intersection is hardware-accelerated on AMD GPUs, but BVH traversal is not. NVIDIA's HW acceleration uses the RT core for both BVH traversal and ray intersection. Denoising and DLSS are accelerated by Tensor cores.
     