Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

Discussion in 'Console Technology' started by vipa899, Aug 18, 2018.

Thread Status:
Not open for further replies.
  1. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,059
    Likes Received:
    1,021
    If you look at your own phrasing, it's easy to put a finger on the weakness: "true, realtime global illumination." Raytracing doesn't offer true global illumination; it is another approach to it, the quality of which depends on implementation and computational effort. "True" vs. "hacks" is a meaningless distinction - it's all just pixels on a screen, and the only valid measure of success is the extent to which we accept the result. "Kludges that should be replaced" - why should they be replaced? Intellectual purity? That's the domain of philosophers, not game developers. And the other half contains the caveat: "...when possible". Well, is it? The answer, at this point in time, is simple - no, it isn't.
    Will it be possible in the future? Well, the jury is out on that one.
    And my contribution to this discussion is that what will ultimately decide that is efficiency.
    Lithographic advances won't solve it, partly because they apply to all approaches, partly because that well is running dry. If it can't be done efficiently today, don't hold your breath for hardware to solve it for you. It may never, in gaming, amount to more than something cool that helps tech nerds such as me justify their expensive PC gaming rigs. It will help nVidia in other markets, however.
     
    temesgen, Xbat, BRiT and 2 others like this.
  2. Shortbread

    Shortbread Island Hopper
    Veteran

    Joined:
    Jul 1, 2013
    Messages:
    3,795
    Likes Received:
    1,903
    If AMD follows Nvidia's path of separate RT logic/cores outside the main CU (rasterization) array, then I agree. But if the Navi architecture is a real departure from the current GCN SIMD design, one would think (hope) some form of RT logic makes it into the new compute units. And honestly, I think the magical $399 price point is dead ($499 is more like it). Streaming boxes/services ($199-249) will be the new entry point for next-generation gaming.
     
    Alucardx23 and Heinrich4 like this.
  3. Lalaland

    Regular

    Joined:
    Feb 24, 2013
    Messages:
    596
    Likes Received:
    265
    My skepticism about a $499 price point comes down to whether $100 more in the BOM would make a substantial enough difference to the end product to offset it being $100 more expensive. I think if streaming were a realistic prospect any time soon, the rumoured phone-contract-style Xbox Live + console product wouldn't be on the cards at all.
     
  4. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,885
    Likes Received:
    6,159
    It's becoming increasingly clear that the computational budget required for the hacks is hitting the equilibrium point of dedicated RT hardware. As the hacks continue in this direction, diminishing returns should make it such that RT clearly surpasses our current methods. That is the only logical thought process here: we must be close to that turning point now, such that MS, AMD, Nvidia and others are all moving in this direction. This can't be some weird conspiracy; ray tracing is clearly offering a substantial visual difference when it's pushed.

    One of the key points from the DXR guidelines is that the hardware should be flexible and part of the compute queue in DirectX. That makes the RT additive, as opposed to having a completely separate pipeline. So I don't necessarily think we're seeing an insane amount of hardware dedicated to these resources here.

    Also, it's too early to judge the performance of these titles. These are first-wave titles on crappy drivers, with immature code meant to demo what they can do. First wave might be too early to get in - I agree on all points on this front - but two years from now is a different story. There's been a flurry of activity on this thread since the announcement 1.5 weeks ago. We have over 100 weeks left before new hardware arrives.

    Maturity moves quicker than you think, and there is still quite a lot of runway before next-gen consoles arrive. We have not yet seen Intel's and AMD's implementations of RT, which I am certain we will by 2020.
     
    vipa899, DavidGraham and eloyc like this.
  5. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,686
    Likes Received:
    11,134
    Location:
    Under my bridge
    For the same reason they've been replaced in offline rendering - the effort required to produce the same or superior results is far less when the graphics rendering does all the work instead of engineers and artists constantly hacking away. "True" means a unified lighting algorithm - a solution to the one problem of illuminating and visualising the scene. "Hacks" means a fragmented solution to the problem, addressing lots of individual problems and tying them together. A "true" solution will be perfectly scalable and produce correct results without needing constant reinvention. It also has intrinsic advantages, like rendering a circular warped camera perspective rather than a linear perspective warping (wide-angle is rounded).

    Specifically regarding my point though, you ask what raytracing can bring now. The current Turing RT solutions in a hybrid rasteriser should solve the lighting problem in an optimal engine (rather than current shoe-horned games). Games like Uncharted and Quantum Break would hit a new visual high without the lighting ever breaking, or scenes being pretty but stuck together by superglue, and I don't think that'll be achieved with layers and layers of rasterising methods. Volumetric lighting solutions are getting the best results in current engines, and even these should benefit from RT acceleration.
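    On the camera-model point above: in a ray tracer the projection is nothing more than the ray-generation step, so a circular (fisheye) camera costs the same as a pinhole one. A minimal self-contained C++ sketch of the two generators (toy vector type, my own function names):

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Pinhole camera: rays through a flat image plane (linear perspective).
Vec3 pinholeRay(float u, float v, float tanHalfFov) {
    // u, v in [-1, 1]; image plane at z = -1.
    Vec3 d{u * tanHalfFov, v * tanHalfFov, -1.0f};
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return {d.x / len, d.y / len, d.z / len};
}

// Equidistant fisheye: pixel radius maps linearly to ray angle, so a
// 180-degree (or wider) field of view has no planar stretching.
Vec3 fisheyeRay(float u, float v, float maxAngle) {
    float r = std::sqrt(u * u + v * v);  // 0 at centre, 1 at edge
    float theta = r * maxAngle;          // angle away from the view axis
    float phi = std::atan2(v, u);        // direction around the axis
    return {std::sin(theta) * std::cos(phi),
            std::sin(theta) * std::sin(phi),
            -std::cos(theta)};
}

int main() {
    // Same screen position, two camera models: only ray generation differs.
    Vec3 p = pinholeRay(0.5f, 0.0f, std::tan(0.5f * 1.0472f)); // 60 deg fov
    Vec3 f = fisheyeRay(0.5f, 0.0f, 1.5708f);                  // 180 deg fov
    std::printf("pinhole (%.3f %.3f %.3f)  fisheye (%.3f %.3f %.3f)\n",
                p.x, p.y, p.z, f.x, f.y, f.z);
}
```

    With rasterisation you'd have to approximate the fisheye with multiple planar renders or a post-warp; here it falls out of ray generation for free.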
     
    Alucardx23, pharma, vipa899 and 2 others like this.
  6. Shortbread

    Shortbread Island Hopper
    Veteran

    Joined:
    Jul 1, 2013
    Messages:
    3,795
    Likes Received:
    1,903
    Materials, resources and labor are becoming more expensive. I just don't see Sony/MS taking a $100 (or more) hit on hardware. Console gamers at some point will have to accept that the current launch pricing model of $399 will not last. I think (believe) that streaming boxes and services will become more and more important as time goes on - not replacing standalone systems, but offering an option and a cheaper route into the next generation of gaming.
     
  7. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,956
    Likes Received:
    4,553
    I don't think the RTX cards are going to impact anything at this point. The SoCs' designs are most probably finalized by now.
    And if it takes nvidia a whopping 750mm^2 at 16FF density to make a GPU capable of running the first RT implementations at 1080p60, the console makers definitely aren't interested in it right now.

    It might appear in mid-cycle refreshes, though unless AMD can find a way to extract a lot more Gigarays-per-second per-transistor, I doubt it will see the light of day in the next gen, even when they get access to sub-5nm.


    So Turing cards will have a different featureset along the lineup?
    Well considering how even TU102's RT usability is a bit questionable (games with partial RT not achieving solid 60FPS at 1080p), I guess that makes sense.
     
    Lalaland likes this.
  8. turkey

    Regular Newcomer

    Joined:
    Oct 21, 2014
    Messages:
    739
    Likes Received:
    429
    Is there a thread on how this all works?

    I wonder what temporal data can be kept, and how they can keep the number of new rays down to lower the cost of these techniques.
    It still seems quite brute force in nature, given that the data coming back is ground truth of a sort. Shadows and lighting in general do not change based on camera position, even though different areas of the game world come into view.

    They are de-noising, which seems a good reduction in load, but can they store intermediate and derived data temporarily, rather than just the final frame buffer? Can they take the outline and shadow opacity and reproject them, as they know how the model they were cast from has changed? (Thinking of the final image being composed of many layers, rather than steps altering layers.)

    It seems like a whole new area of creative software should surface if there are GPUs good enough to get over the line; from there onwards it's about improving this and expanding what can achieve an acceptable result.
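    For what it's worth, the reprojection idea can be sketched in a few lines: shadow visibility lives in world space, so if the surface point and its occluders haven't changed, last frame's answer is still valid and no new ray is needed. A toy C++ sketch under those assumptions, with the actual shadow trace stubbed out (all names hypothetical):

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Cached result of a secondary (shadow) ray, stored in world space.
struct ShadowSample {
    Vec3  worldPos;   // where the primary ray hit last frame
    float visibility; // 0 = fully shadowed, 1 = fully lit
    bool  valid;      // cleared when the casting geometry moves
};

// Stand-in for a real shadow ray: here just "lit above the ground plane".
float traceShadowRay(Vec3 p) { return p.y > 0.0f ? 1.0f : 0.0f; }

float dist2(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Shadows don't depend on the camera, so if the surface point and its
// occluders are unchanged, last frame's answer is still ground truth.
float shadeWithCache(ShadowSample& cache, Vec3 hit, bool occludersMoved) {
    const float eps2 = 1e-4f;
    if (cache.valid && !occludersMoved && dist2(cache.worldPos, hit) < eps2)
        return cache.visibility;              // reuse: zero new rays
    cache = {hit, traceShadowRay(hit), true}; // refresh: one new ray
    return cache.visibility;
}

int main() {
    ShadowSample cache{{0, 0, 0}, 0.0f, false};
    Vec3 p{1.0f, 2.0f, 3.0f};
    std::printf("frame 0: %.1f\n", shadeWithCache(cache, p, false)); // traces
    std::printf("frame 1: %.1f\n", shadeWithCache(cache, p, false)); // reuses
    std::printf("frame 2: %.1f\n", shadeWithCache(cache, p, true));  // re-traces
}
```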
     
  9. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    13,271
    Likes Received:
    3,721
    @turkey It's an interesting thought. If everything is calculated in world space, and you know where the camera has moved each frame, why not re-use as much data as possible? The problem with rasterization is that it's pretty much locked into fixed-function work relative to a single view from the camera. Could ray tracing actually de-couple a lot of the secondary-ray (non-camera) calculations and then re-use them until they're updated?

    Edit: Thinking about it, I have no idea how this would work. You'd have to re-test visibility every frame for dynamic objects, so I don't know how much of a win that is. Also you'd have to create something like a dynamic lightmap?
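    One shape the "dynamic lightmap" idea could take is a world-space hash grid, where each cell stores a lighting value plus the frame it was last traced: cells near dynamic objects get re-traced immediately, while static ones refresh slowly. Purely a sketch under those assumptions; nothing here is from a real implementation:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <unordered_map>

struct Vec3 { float x, y, z; };

// A world-space cache cell: like a lightmap texel, allocated on demand.
struct Cell {
    float    irradiance = 0.0f;
    uint32_t lastUpdate = 0;   // frame number of the last trace
};

// Quantise a world position into a coarse grid key.
uint64_t cellKey(Vec3 p, float cellSize) {
    int32_t x = (int32_t)std::floor(p.x / cellSize);
    int32_t y = (int32_t)std::floor(p.y / cellSize);
    int32_t z = (int32_t)std::floor(p.z / cellSize);
    // Pack three 21-bit coordinates into one 64-bit key.
    return ((uint64_t)(x & 0x1FFFFF) << 42) |
           ((uint64_t)(y & 0x1FFFFF) << 21) |
            (uint64_t)(z & 0x1FFFFF);
}

// Stand-in for an expensive bounce-light trace at this point.
float traceIrradiance(Vec3 p) { return 0.5f + 0.5f * std::sin(p.x); }

struct WorldCache {
    std::unordered_map<uint64_t, Cell> cells;
    float    cellSize = 0.5f;
    uint32_t maxAge   = 8;   // static cells refresh every 8 frames

    float lookup(Vec3 p, uint32_t frame, bool dynamicNearby) {
        Cell& c = cells[cellKey(p, cellSize)];
        // Dynamic objects force a re-trace; static cells refresh slowly.
        bool stale = dynamicNearby || c.lastUpdate == 0 ||
                     frame - c.lastUpdate > maxAge;
        if (stale) { c.irradiance = traceIrradiance(p); c.lastUpdate = frame; }
        return c.irradiance;
    }
};

int main() {
    WorldCache cache;
    Vec3 p{1.0f, 0.0f, 2.0f};
    for (uint32_t frame = 1; frame <= 3; ++frame)
        std::printf("frame %u: %.3f\n", frame, cache.lookup(p, frame, false));
}
```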
     
    #269 Scott_Arm, Aug 28, 2018
    Last edited: Aug 28, 2018
    turkey likes this.
  10. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,983
    Likes Received:
    2,556
    The new DirectX RT is an API that lets you shoot rays from wherever you want, and it returns the polygon each one hit and where. How you use that data, and whether you cache it in intermediate buffers, is entirely up to the dev.
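    The core of that "which polygon and where" answer is a ray-triangle intersection query. This isn't the DXR API itself, just a self-contained C++ sketch of the classic Moller-Trumbore test, returning the hit distance plus barycentric coordinates:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// What a closest-hit query hands back: which triangle, how far along
// the ray, and where on the triangle (barycentrics u, v).
struct Hit { int prim; float t, u, v; };

// Moller-Trumbore ray/triangle test - the kernel RT hardware accelerates.
bool intersect(Vec3 o, Vec3 d, Vec3 v0, Vec3 v1, Vec3 v2, Hit& h) {
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(d, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;   // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(o, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = dot(e2, q) * inv;
    if (t <= 0.0f) return false;
    h = {0, t, u, v};
    return true;
}

int main() {
    Vec3 v0{-1, -1, -3}, v1{1, -1, -3}, v2{0, 1, -3};
    Hit h;
    if (intersect({0, 0, 0}, {0, 0, -1}, v0, v1, v2, h))
        std::printf("hit prim %d at t=%.2f, bary=(%.2f, %.2f)\n",
                    h.prim, h.t, h.u, h.v);
}
```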
     
    #270 milk, Aug 28, 2018
    Last edited: Aug 28, 2018
    turkey likes this.
  11. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,686
    Likes Received:
    11,134
    Location:
    Under my bridge
    Absolutely. Once devs have the option, efficiency improvements will develop. And yeah, baking data as you go is an option. Offline renderers cast a few rays for a rough scene as you edit, and accumulate more and more rays to improve the quality. You could use rasterisation of raw polygons with an ID buffer to identify areas of significant object delta needing more rays, and accumulate volumetric data for rough lighting coupled with additional top-up rays. I remember a paper years ago about adaptive sampling based on localised geometry detail - large flat walls don't need as many rays per area as fiddly objects.

    All sorts of options will present themselves. The more machines that have RT hardware, or decent GPU acceleration, the faster these techniques will develop. However, even if it's left to the professional space, they'll still happen in the raytracing packages for professional use, so it's not like the tech will die out if the gaming space doesn't take up RT hardware out of the gate. Game devs would push it faster, though!

    Edit: Posted on the RT tech thread


    We already have temporally accumulated rays using lower res geometry.
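    To make the accumulate-and-reset idea concrete, here's a toy C++ sketch of per-pixel progressive accumulation, using an ID buffer to throw away the history when a different object moves into the pixel (illustrative only, my own names):

```cpp
#include <cstdint>
#include <cstdio>

// Per-pixel accumulation state for progressive refinement.
struct Pixel {
    float    mean = 0.0f;   // running average of ray results
    uint32_t n = 0;         // samples accumulated so far
    uint32_t objectId = 0;  // what was visible here last frame
};

// Fold one new ray result into the running average; if the ID buffer
// says a different object moved in, the history is wrong, so restart.
void accumulate(Pixel& px, float sample, uint32_t idThisFrame) {
    if (idThisFrame != px.objectId) {  // significant object delta
        px = {0.0f, 0, idThisFrame};   // throw away stale history
    }
    px.n += 1;
    px.mean += (sample - px.mean) / (float)px.n;  // incremental mean
}

int main() {
    Pixel px;
    // Static pixel: the estimate converges as rays accumulate over frames.
    float noisy[4] = {0.9f, 0.1f, 0.6f, 0.4f};
    for (float s : noisy) accumulate(px, s, 7);
    std::printf("converged: %.3f after %u samples\n", px.mean, px.n);
    // An object edge moves across the pixel: history is reset.
    accumulate(px, 1.0f, 8);
    std::printf("after reset: %.3f from %u sample(s)\n", px.mean, px.n);
}
```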
     
    #271 Shifty Geezer, Aug 28, 2018
    Last edited: Aug 28, 2018
    turkey likes this.
  12. xz321zx

    Newcomer

    Joined:
    Apr 20, 2016
    Messages:
    117
    Likes Received:
    32
    In one of the presentations the expression used is "shadow map space".
     
    Scott_Arm likes this.
  13. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,983
    Likes Received:
    2,556
    I was curious how Nvidia's RT hardware acceleration may work in cases where you are tracing rays against triangles AND other primitives (SDFs, voxels, spheres/cylinders/cubes, depth maps, etc).
    From the little they've openly said about what their RT hardware actually does, it seems to focus solely on the actual ray-triangle intersection math. Supposedly it does nothing for other forms of scene representation or volumetrics.
    So if I want to cast a ray and have it tested against both types of primitives simultaneously, will it have to be cast in duplicate on the RT cores and the CUDA cores? How does the driver handle synchronisation?

    It's no surprise they didn't show many demos of volumetrics... Maybe they aren't so efficient. In the best case they're no better than non-RTX hardware; at worst they're slower, if not also buggy...
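    The duplicate-cast question is at least easy to picture in code: trace the same logical ray through both scene representations, then merge the nearer hit. A standalone C++ toy, with the hardware triangle path stubbed as a ground plane and the SDF path done by sphere tracing on the shader-core side (all assumptions mine):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Stand-in for the hardware-accelerated triangle path: here, a ground
// plane at y = -1 so the example stays self-contained.
bool traceTriangles(Vec3 o, Vec3 d, float& t) {
    if (std::fabs(d.y) < 1e-8f) return false;
    t = (-1.0f - o.y) / d.y;
    return t > 0.0f;
}

// Signed distance to a unit sphere at the origin.
float sdfScene(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Sphere tracing: march the ray by the SDF distance until we touch it.
// This is the kind of work that would stay on the shader cores.
bool traceSdf(Vec3 o, Vec3 d, float& t) {
    t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        float dist = sdfScene(add(o, mul(d, t)));
        if (dist < 1e-3f) return true;
        t += dist;
        if (t > 100.0f) break;
    }
    return false;
}

int main() {
    // One logical ray, two scene representations: trace both, keep the
    // nearer hit. How the two paths get scheduled and synchronised on
    // real hardware is exactly the open question.
    Vec3 o{0, 0, 5}, d{0, 0, -1};
    float tTri, tSdf;
    bool hitTri = traceTriangles(o, d, tTri);
    bool hitSdf = traceSdf(o, d, tSdf);
    if (hitTri && hitSdf) std::printf("nearest t = %.3f\n", std::min(tTri, tSdf));
    else if (hitTri)      std::printf("triangle t = %.3f\n", tTri);
    else if (hitSdf)      std::printf("sdf t = %.3f\n", tSdf);
}
```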
     
  14. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    432
    Location:
    New York
    I wish that was true but I don’t think we’re anywhere close to 90% of real-time graphics nirvana. Just compare any offline render to the prettiest games. We still have a very long way to go.

    I agree though that other important parts of the experience have been neglected for too long.
     
    Lalaland and Scott_Arm like this.
  15. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    13,271
    Likes Received:
    3,721
    Watching the Nvidia OptiX talk, it sounds like BVH acceleration works with any primitive type you define.

    Edit: Sorry, you define intersection and bounding-box algorithms per geometry primitive type, so you have multiple BVH structures, one per type of primitive you define. I believe that's where the hardware acceleration comes into play. It probably does the tracking of rays and intersections, as well as some of the work of traversing the BVH. All of the intersection algorithms, closest-hit and any-hit algorithms are just cuda programs. That's the way I'd interpret it.
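    That split - shared traversal on one side, per-type user programs on the other - is easy to sketch. This is not the OptiX API, just a toy C++ rendition of the "intersection + bounding-box program per primitive type" idea, with a linear walk standing in for real BVH traversal:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 lo, hi; };

// Per-type "programs": the traversal only ever sees a bounding box plus
// two callbacks supplied by the user, as in the OptiX description.
struct PrimitiveType {
    Aabb (*bounds)(const float* data);
    bool (*intersect)(const float* data, Vec3 o, Vec3 d, float& t);
};

// A sphere packed as {cx, cy, cz, radius}.
Aabb sphereBounds(const float* s) {
    return {{s[0] - s[3], s[1] - s[3], s[2] - s[3]},
            {s[0] + s[3], s[1] + s[3], s[2] + s[3]}};
}
bool sphereIntersect(const float* s, Vec3 o, Vec3 d, float& t) {
    Vec3 oc{o.x - s[0], o.y - s[1], o.z - s[2]};
    float b = oc.x * d.x + oc.y * d.y + oc.z * d.z;
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - s[3] * s[3];
    float disc = b * b - c;                 // assumes d is unit length
    if (disc < 0.0f) return false;
    t = -b - std::sqrt(disc);
    return t > 0.0f;
}

struct Primitive { int type; const float* data; };

// Stand-in for BVH traversal: a linear walk that dispatches through the
// per-type table - roughly the boundary where fixed-function hardware
// and user programs would meet.
bool traceClosest(const std::vector<PrimitiveType>& types,
                  const std::vector<Primitive>& prims,
                  Vec3 o, Vec3 d, float& best) {
    best = 1e30f;
    bool hit = false;
    for (const Primitive& p : prims) {
        float t;
        if (types[p.type].intersect(p.data, o, d, t) && t < best) {
            best = t; hit = true;
        }
    }
    return hit;
}

int main() {
    std::vector<PrimitiveType> types = {{sphereBounds, sphereIntersect}};
    float s0[4] = {0, 0, -5, 1}, s1[4] = {0, 0, -9, 1};
    std::vector<Primitive> prims = {{0, s0}, {0, s1}};
    float t;
    if (traceClosest(types, prims, {0, 0, 0}, {0, 0, -1}, t))
        std::printf("closest hit at t = %.2f\n", t);  // expect 4.00
}
```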
     
  16. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,983
    Likes Received:
    2,556
    My line of thought was based on the assumption that their approach was not focused on BVH traversal, but just on ray-triangle intersection acceleration in hardware. At least that is what I saw someone quoting here. I know it goes against the typical RT hardware acceleration solutions.
     
  17. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    13,271
    Likes Received:
    3,721
    Nvidia isn't particularly forthcoming. Going by the way this presentation describes the OptiX API, you're still left to guess at how the hardware helps. It does mention a few pieces that are cuda programs.
     
    BRiT and milk like this.
  18. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    13,271
    Likes Received:
    3,721
  19. Shortbread

    Shortbread Island Hopper
    Veteran

    Joined:
    Jul 1, 2013
    Messages:
    3,795
    Likes Received:
    1,903
  20. ultragpu

    Legend Veteran

    Joined:
    Apr 21, 2004
    Messages:
    5,436
    Likes Received:
    1,627
    Location:
    Australia
    Wonder how many tensor cores or teraflops are needed to raytrace everything, including GI, shadows, AO, reflection, refraction, SSS, water caustics and hair shadows? I'm guessing 5-6 more gens at a decent resolution?
     