NVIDIA's Morgan McGuire predicts that the first AAA game to REQUIRE a ray tracing GPU ships in 2023

Discussion in 'Graphics and Semiconductor Industry' started by SlmDnk, Jul 29, 2019.

  1. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    10,446
    Likes Received:
    10,114
    Location:
    The North
Good examples here. It was hard to notice the difference; usually I could only spot it in foliage or something like that, depth of field being of lower quality, etc.

I guess the debate for me is whether low settings with RT enabled would be noticeably better than ultra settings. Most of the time we benchmark ultra vs ultra + RT enabled (low/med/high), but have we tried low settings + RT? @Dictator ?
     
    DavidGraham likes this.
  2. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,194
    Likes Received:
    1,182
    It matters because if the technology cannot be deployed across different tiers of the market, it will remain a niche technology. Nothing inherently wrong with that - but it is of limited interest.
In your discussion about RTRT, the underlying theme is that technologies will move from the very expensive to become ubiquitous. My counterargument is that the technological foundation for assuming such a migration going forward is not solid, and that the motives for doing so at all are lacking from a consumer point of view. There haven't been consumers on the barricades demanding "arguably better shadowing at high expense". What we have seen is quite a bit of disgruntlement that price/performance development has slowed a lot. People want better performance cheaper.

Again you're arguing that "since other technologies in the past made it to mainstream, RTRT will too". I contend that this kind of reasoning applies across neither technologies nor time. Unifying vertex and pixel shading, for instance, brought flexibility and ultimately efficiency, and the cost in complexity was bought with advancements in lithography. But neither of those may apply equally to RTRT. Sure, everyone is entitled to their own opinions, but I'm not looking for other people's opinions. I'm looking for their arguments and analysis as they apply to RTRT specifically.

    Also, an option when it comes to transistor budget is simply not to spend it, and save on die area, power draw, cooling, and total cost. It may make a better product for the end consumer.

    Of course. But if it doesn't migrate throughout the market it is basically e-peen for those who like to spend their disposable income on tech toys just for the sake of it. And there it is again, the question whether this will actually become the default way to handle some of the lighting in games. I can't say with any certainty if or when that might happen, my crystal ball is lousy when it comes to such things particularly at the very long time scales we are likely talking about here. But I can easily make a case for why it shouldn't as long as the efficiency isn't there.

    Trying to refine this into a clear question I'd make this attempt: Does it make sense for consumers to trade transistors away from the mainstream path of today to dedicated RTRT hardware?
    And my answer to that question, at this point in time, is no.
    If the efficiency picture changes drastically, so would my answer.
     
  3. PSman1700

    Veteran Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    2,508
    Likes Received:
    776
    Sony and MS must be really dumb investing in RT then.
     
  4. milk

    milk Like Verified
    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,412
    Likes Received:
    3,270
They probably didn't. AMD did.
     
  5. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    480
    Likes Received:
    200
The answer of course is, it doesn't really. Nvidia made some great advances with its latest architecture, but those advances have to do with rasterization. Moving to task/mesh shaders is great: it unifies and simplifies pipelines for developers, allowing ease of development for certain kinds of programming and faster operation on less silicon. Both are a clear hardware win, and I greatly hope AMD implements something with relative parity on the PS5/XBS.

But Nvidia's dedicated BVH/triangle hardware that takes up extra silicon, AND Microsoft's DXR, are both steps backward. They were designed by hardware enthusiasts wanting to overcome a very specific, narrow problem without regard for programmers or consumers. Programmers want flexibility; they want to be able to program things. Microsoft's ridiculous "do it our way by fiat!" has ruined and delayed far too much graphics hardware. Fixed tessellation pipelines have resulted in a barely used feature for a decade now, geometry shaders were an MS-dictated feature that has proven entirely useless, and now they want to ruin raytracing through their tactic of being "too big to ignore" as well.

Raytracing was already coming. UE4 has had signed distance field tracing for years now; it's shipped in Gears 5 for distant shadows. Hunt: Showdown and Kingdom Come both use CryEngine's voxel cone tracing. Control's signed distance field cone tracing, whatever that is specifically, runs on modern consoles, looks great, and is blazing fast.
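As a concrete illustration of the signed distance field tracing mentioned above, here is a minimal sphere-tracing loop in Python. This is a sketch of the general technique only, not UE4's or any shipping engine's actual implementation; the function and parameter names are made up for the example.

```python
import math

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, max_dist=100.0):
    # March along the ray; the SDF value at each point is a safe step
    # size, since no surface can be closer than the signed distance.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = sdf(p)
        if dist < eps:
            return t        # close enough to the surface: a hit
        t += dist
        if t > max_dist:
            break
    return None             # ray escaped: a miss
```

For example, tracing from the origin along +z toward a unit sphere centered at (0, 0, 5), with `sdf(p) = math.dist(p, (0, 0, 5)) - 1`, returns a hit distance of about 4; the appeal is that each step is just one SDF evaluation, with no acceleration structure required.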

But now because the great dictator Microsoft has decreed it, many engines have abandoned fast, programmable tracing to concentrate solely on BVH triangle tracing using specialized hardware that costs consumers extra money, gives programmers a narrow corridor to achieve anything, and is frankly quite slow for what it actually achieves. If this continues then McGuire is right, it will take years before specialized RT hardware is required, because it's slow and painful and costly. The only other option is that AMD and Microsoft's actual game developers, not some isolated hardware lab dudes, convince MS that programmable, unified RT hardware is the way to go for XBS; then RTX itself will be seen as a curious, low-compatibility experiment, but consumers will ultimately win.
     
    #85 Frenetic Pony, Sep 20, 2019
    Last edited: Sep 20, 2019
    milk likes this.
  6. PSman1700

    Veteran Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    2,508
    Likes Received:
    776
    Well i'm glad they did.
     
    milk likes this.
  7. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    10,446
    Likes Received:
    10,114
    Location:
    The North
Nvidia chose to go with BVH structure acceleration. It doesn't have to be that way; handling ray intersection with that method is entirely Nvidia's choice. DXR is a generic API, an extension of DX12, for the purpose of casting rays, _any_ rays, calling on drivers to determine intersection and allowing a shader to be run against the intersected triangles. DXR is nothing more than a unified API for ray casting and intersection; it's purposefully designed to be flexible, which is why it's not locked to only shooting light rays. The type of ray cast is entirely up to the developer. It can be used for AI vision, audio, etc.
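To make that API/backend separation concrete, here is a loose Python sketch of the idea being described: the API layer only defines "cast a ray, then run the matching shader", while the intersection machinery is a pluggable backend. This is a hypothetical illustration, not the actual DXR API; all names (`trace_ray`, `Hit`, `brute_force_spheres`) are invented for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class Hit:
    t: float          # distance along the ray to the hit point
    primitive: int    # index of the intersected primitive

def trace_ray(origin, direction, intersect, closest_hit, miss):
    # The "API" layer: find the nearest hit, then dispatch the matching
    # shader. How `intersect` finds that hit (BVH, voxels, SDFs, brute
    # force...) is left entirely to the backend, i.e. the vendor.
    hit = intersect(origin, direction)
    return closest_hit(hit) if hit is not None else miss()

def brute_force_spheres(spheres):
    # One possible backend: brute-force analytic sphere intersection.
    # `spheres` is a list of (center, radius) pairs.
    def intersect(o, d):
        best = None
        for i, (c, r) in enumerate(spheres):
            oc = tuple(oi - ci for oi, ci in zip(o, c))
            b = sum(di * x for di, x in zip(d, oc))   # assumes |d| == 1
            disc = b * b - (sum(x * x for x in oc) - r * r)
            if disc < 0:
                continue                               # ray misses sphere
            t = -b - math.sqrt(disc)
            if t > 1e-6 and (best is None or t < best.t):
                best = Hit(t, i)
        return best
    return intersect
```

The point of the sketch: nothing in `trace_ray` mentions triangles or a BVH; swapping `brute_force_spheres` for any other intersection routine leaves the calling code unchanged.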

These are made in tandem with the vendors. It's funny you mention geometry shaders and the need to be flexible. GS are entirely flexible and programmable, and they were too slow to work, which is why they were ignored. The trade-off for flexibility is and will always be speed. We started at fixed function, and over time as silicon got more powerful we moved to flexible compute. I think romancing the idea that we could have started GPU processing where we are today is laughable. That's like saying we should never have bothered with coal, oil, nuclear, and renewables and should have figured out how to build fusion from the get-go.

    With large caveats and limitations. It does not function as robustly as what can be done with standard ray traversal. The fact that engines are moving towards ray tracing without ray tracing hardware wasn't a sign that they didn't need ray tracing acceleration. It was the signal that they should start making it. You're looking at it backwards. Developers see ray tracing as being the tool they need to continue to push the envelope further on the graphics side - the graphics vendors, MS, and the industry worked collaboratively to bring RT forward.

Once again, DXR is _only_ a ray casting and intersection API. It does not require the hardware to use a BVH or anything of that sort. How intersection is handled is entirely up to the vendor, and vendors are free to compete on their R&D into improving the speed at which intersections are found, whether the rays are coherent or not.
     
  8. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    480
    Likes Received:
    200
Certainly DXR is flexible from a "trace rays" point of view. But no concept of tracing other shapes is included, and it takes a somewhat fixed view of how the acceleration structure can be built and used to begin with. And what I'm worried about is narrow, ray-only hardware showing up in Xbox Scarlett, discounting that well-designed hardware could potentially accelerate any number of other traversal mechanics at the tradeoff of some speed. Recursively splitting voxel cone tracing, as just one example, could become fast enough to use if it had good hardware support. But if hardware is designed to only accelerate DXR-style tracing of rays into an acceleration tree structure, where the only data available is at the "bottom" of the tree, then that's not going to happen.
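A toy illustration of the structure being criticized: in a BVH-style hierarchy, interior nodes only route the ray, and geometry data is visible solely at the leaves, loosely mirroring the DXR top-level/bottom-level split. This is a minimal hypothetical sketch, not DXR's or any vendor's actual data structure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    lo: tuple                       # AABB min corner
    hi: tuple                       # AABB max corner
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    leaf_data: object = None        # only leaves carry geometry

def ray_hits_aabb(o, d, lo, hi):
    # Standard slab test: does the ray (origin o, direction d) enter the box?
    tmin, tmax = 0.0, float("inf")
    for oi, di, l, h in zip(o, d, lo, hi):
        if abs(di) < 1e-12:
            if oi < l or oi > h:    # ray parallel to slab and outside it
                return False
        else:
            t0, t1 = (l - oi) / di, (h - oi) / di
            if t0 > t1:
                t0, t1 = t1, t0
            tmin, tmax = max(tmin, t0), min(tmax, t1)
            if tmin > tmax:
                return False
    return True

def traverse(node, o, d, out):
    # Interior nodes only prune and route the ray; geometry shows up
    # exclusively at the leaves ("bottom" of the tree).
    if node is None or not ray_hits_aabb(o, d, node.lo, node.hi):
        return
    if node.leaf_data is not None:
        out.append(node.leaf_data)
    traverse(node.left, o, d, out)
    traverse(node.right, o, d, out)
```

The objection in the post, restated against this sketch: if the hardware bakes in `traverse` as-is, a technique that needs data at interior nodes (say, prefiltered voxel mips for cone tracing) has nowhere to hook in.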

Geometry shaders were an example of Microsoft adding a new pipeline area that nobody used, which was the point: going off a spec can be bad. And as one example, the PS2 vector unit pipeline was already better, an argument against the concept from before geometry shaders even existed. Hardware vendors have actually merged a lot of these Microsoft-defined stages themselves into the same hardware pipeline, after realizing that building something that mirrored MS's spec was too slow. I'm not saying they weren't there with them going down the wrong road, developing the bad spec in tandem. I'm saying there's a wrong road to go down, and Microsoft is already a little off track.

What caveat? What limitation? You can do anything you want in compute shaders, trace anything you want. I'm arguing for that same flexibility to be transferred as much as possible to hardware. This demo was done in 4k on a Vega 56, with a voxel structure that's not really DXR-like. DXR specifies the "bottom level" as where the data other than the acceleration structure lives, and assumes that data consists of certain geometry primitives.

I was too harsh on Microsoft explicitly. But, as just one example of "build to the spec": an already proposed update to DXR is to allow for beam tracing, an odd custom shape tracing that traces a pyramid-like shape and just adds all hit polys to a render queue, something which DXR simply has no concept of at the moment. If the spec as-is were followed too closely, fast beam tracing wouldn't be possible. The point wasn't really to be harsh on MS over DXR, but to provide examples of how mirroring the current spec can limit speed, function, and cost. I'd love to see hardware that can accelerate screenspace tracing so ray independence doesn't mean sample counts get expensive for SSAO/SSR, or hardware that can accelerate cubemap tracing so isotropic cubemap reflections can be done, and so on. If DXR is followed as a guide to how to accelerate raytracing, the hardware wouldn't make any of these faster than they are today.
     
    #88 Frenetic Pony, Sep 21, 2019
    Last edited: Sep 21, 2019
    milk likes this.
  9. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,662
    Likes Received:
    3,555
    Location:
    Barcelona Spain
Geometry shaders were slow because they were serial, except for Intel's implementation. The tessellator was not flexible enough and not very useful... The DirectX geometry pipeline before mesh shaders was not very good, at least from what devs say... There were some errors, but the most important thing is to get rid of them, and now the DX geometry pipeline is back on track...
     
  10. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,278
    Likes Received:
    3,523
Nope, 1080p, 30fps and medium graphics settings.
There are no magic solutions; you can do that, but at a great speed cost.
     
    PSman1700 likes this.
  11. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,582
    Likes Received:
    624
    Location:
    New York
    I disagree. The foundation is quite solid since that's exactly what has happened for the last 30 years. People want better performance cheaper and that's exactly what we've gotten. What was considered cutting edge performance 10 years ago is now available on an IGP. I'm not sure why you think RT will be any different to all the other technologies that started high-end and trickled down over time. That's literally how the entire technology industry works whether it's computers, cars, phones or TVs.

    I suspect your answer will change soon as RT efficiency can only increase from here. We've only seen the first iteration after all.

    In terms of the best place to spend transistor budget I think RT makes as much sense as anything else. We throw away a lot of performance on things that have less impact and value than RT. E.g. imperceptible ultra settings, 4k resolution. So I say hell yeah let's try something that actually makes a difference.
     
  12. PSman1700

    Veteran Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    2,508
    Likes Received:
    776
Better than what? The OG Xbox showed what was better though; the PS2 didn't even compare in 99% of things.
     
  13. techuse

    Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    244
    Likes Received:
    143
For the last 30 years we have had lithography advancements to trickle that tech down. We don't have much of that left.
     
  14. seahorsesaw

    Newcomer

    Joined:
    Oct 21, 2017
    Messages:
    60
    Likes Received:
    32
    The only people who believe in constant lithography improvements in a finite world are madmen and techvangelists.
     
    PSman1700 likes this.