Nvidia Turing Architecture [2018]

Discussion in 'Architecture and Products' started by pharma, Sep 13, 2018.

  1. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,833
    Likes Received:
    2,663
    The 3DMark VRS feature test is online now, and 3DMark is claiming up to a 50% uplift from VRS!


    Intel is also claiming close to a 40% uplift on its Gen11 iGPUs.

     
    Lightman likes this.
  2. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    558
    Likes Received:
    644
    Imagine every PC monitor had built-in eye tracking. It could save multiple orders of magnitude of GPU power, and maybe also be useful as an input device replacing / enhancing the mouse?
    But at this point rasterization becomes questionable. Instead of rendering multiple rectangles of varying resolution, it would be more efficient to render a single low-res image with a non-planar projection that has continuous magnification towards the focus point, and upscale / blur / unproject from there. There is research on doing this using tessellation (curved triangle edges are necessary), but RT or splatting would beat it. (Current VRS would become obsolete as well, so I don't see it as a VR feature but as one more thing to keep up with crazy 4K / 8K demands.)
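    A minimal sketch of the kind of radially magnifying warp meant here (the gaze point, falloff exponent and coordinate convention are all made up for illustration):

    ```cpp
    #include <cmath>

    struct Float2 { float x, y; };

    // Map a sample of the small warped image (coords in [-1,1], gaze at the origin)
    // back to screen space. Raising the radius to a power > 1 stretches the
    // periphery, so sample density per unit screen area is highest at the gaze
    // point and falls off continuously towards the borders.
    Float2 WarpedToScreen(Float2 warped, Float2 gaze, float falloff = 2.5f)
    {
        float r = std::sqrt(warped.x * warped.x + warped.y * warped.y);
        if (r < 1e-6f)
            return gaze;
        float scale = std::pow(r, falloff) / r;   // magnification grows with eccentricity
        return { gaze.x + warped.x * scale, gaze.y + warped.y * scale };
    }
    ```

    The low-res image would be rendered directly in the warped space; the final pass unprojects with this mapping and blurs the stretched periphery.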
     
    pharma likes this.
  3. techuse

    Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    92
    Likes Received:
    29
    Out of everything new in Turing, VRS is the feature I'd like to see get the most use.
     
    milk likes this.
  4. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    3,635
    Likes Received:
    2,263
    Location:
    Barcelona Spain



    Funny

    EDIT:
     
    #284 chris1515, Aug 27, 2019
    Last edited: Aug 27, 2019
    Lightman and eloyc like this.
  5. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,436
    Likes Received:
    442
    Location:
    New York
    Seems tessellation performance is still setup/culling bound. I would think at some point the rasterizers would be the bottleneck but that doesn’t seem to be the case. TU102 is 50% faster than GP102 with the same number of rasterizers.
     
  6. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,046
    Likes Received:
    2,633
    Does rasterization need to be done with a regular matrix of samples, though? Can't it be reworked to support irregular and/or non-linear sampling across the screen? Things like MSAA, programmable sampling locations, VRS and some other features already require some flexibility from GPUs on that front. Can't that be expanded further?
    In fact, I've always wondered how all those features that alter sampling positions, fragment quantities and so on mesh with the rest of the GPU's rendering pipeline...
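    For reference, some of that flexibility is already exposed today. A sketch of D3D12's programmable sample positions (feature-tier checks, device setup and error handling omitted; the pattern itself is just an example):

    ```cpp
    #include <d3d12.h>

    // Override the MSAA sample positions for subsequent rendering.
    // D3D12_SAMPLE_POSITION coordinates are in 1/16-pixel units, range [-8, 7].
    void SetRotatedGridPattern(ID3D12GraphicsCommandList1* cmdList)
    {
        D3D12_SAMPLE_POSITION positions[4] = {
            { -6, -2 }, { 2, -6 }, { 6, 2 }, { -2, 6 }   // rotated-grid 4x pattern
        };
        cmdList->SetSamplePositions(/*NumSamplesPerPixel*/ 4, /*NumPixels*/ 1, positions);
    }
    ```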
     
  7. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    558
    Likes Received:
    644
    This would be an option, but it would still feel wrong. If you render a frame with a FOV close to 180°, the content in the center has the worst resolution and the border the most. The opposite of what we want.
    Inverting this with controlled sampling density would work, but I assume it would add too much complexity to rasterization HW, which already seems bloated.
    Sadly I can not find the paper another dev showed me. I can't remember exactly, but something like 500 x 500 pixels would be enough to reach 4K quality. I did not believe this at first, but how many words on screen can you read without moving your eyes? 3 or 4 in a row?
    Is it still worth having dedicated hardware, limited to just primary visibility, to render a 500 px frame? Likely not.
    (I failed to convince you guys RT cores would not be necessary, so now I'm targeting ROPs, hahaha :) )
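    A back-of-envelope check of that number, using rough textbook values rather than the paper mentioned above (both constants are approximations, not measurements):

    ```cpp
    #include <cstdio>

    int main()
    {
        const float acuityPxPerDeg   = 60.0f; // ~1 arcminute resolution at 20/20 vision
        const float highAcuityFovDeg = 5.0f;  // fovea plus a little margin
        const float pxAcrossFovea    = acuityPxPerDeg * highAcuityFovDeg; // ~300 px

        std::printf("full-acuity region: roughly %.0f px across\n", pxAcrossFovea);
        // Acuity drops steeply outside that region, so a warped ~500 x 500 image with
        // the density profile discussed earlier could plausibly match a 4K display.
        return 0;
    }
    ```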
     
    milk likes this.
  8. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,046
    Likes Received:
    2,633
    How much more complexity is what I wonder. As I said, some of the groundwork is already there thanks to MSAA, VRS and such. How cool would it be if, before we ditched rasterization altogether, we had a few more GPU gens with hardware-accelerated but still highly programmable rasterization, with the amount of samples and their positions controllable at high granularity from compute. I'm talking rasterization shaders here. Do to rasterization what Mesh Shaders/NGG is doing to the geometry pipeline. Kill specific pre-programmed MSAA and VRS modes and give devs the tools to come up with their own. Open up Early-Z and Z-buffer compression for devs too while at it. And can I get the same thing done with texturing too? Can I dream?

    Ok, that changes everything. For 500px, we don't even need RT acceleration. Just general compute would do a better job at it. Maybe even a CPU might get similar performance to the damn GPU. That would throw everything we know about rendering out the window.
    But then you remember you wanna be able to play games on your TV with your pals watching and your eye tracking goes to shit.
    Keep those ROPs in there JoeJ...
     
    JoeJ likes this.
  9. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    558
    Likes Received:
    644
    Totally old school! The wife can even serve the beer without clothes on, because all the pals are wearing hip VR goggles all the time. :)
    But you're likely right - we'll get texel shaders, rasterization shaders, subsample position shaders, subsample pixel micro shaders, hierarchical subsample pixel micro shaders and all this long before I get what I want :D
     
    milk likes this.
  10. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,046
    Likes Received:
    2,633
    I don't know if it was the naked wife serving beers or this second part that did it, but I got aroused. Just kidding, it was obviously the second part.
     
    Lightman and JoeJ like this.
  11. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    558
    Likes Received:
    644
    As you wish. In that case I'll give the only AR device I have to another pal :p
     
  12. dobwal

    Legend Veteran

    Joined:
    Oct 26, 2005
    Messages:
    5,035
    Likes Received:
    1,038
    Never mind that these games would kill YouTube streaming and VG review sites that heavily produce video content.
     
    #292 dobwal, Aug 28, 2019
    Last edited: Aug 28, 2019
    milk likes this.
  13. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    558
    Likes Received:
    644
    What a loss! :) Streamers could still render the games at full res on powerful HW, while the few who actually still prefer to play a game instead of watching it can play it on cheap low-power HW.

    But seriously, I guess having robust gaze tracking from a distance is hard. It might work for computer monitors eventually, but not so well for distant TV sets, I guess. Surely more a VR/AR option. (...it was just an idea, because I've seen some Intel laptop with eye tracking.)
    Also I think the savings on rendering would not be that dramatic:
    I assume AA has to be very good, also (or mainly) at the borders of the screen, to avoid flicker that we would perceive as moving objects. So, many subsamples.
    For RT lighting you still need a high sample count as well, which is largely independent of resolution.
    World-space lighting methods that time-slice GI would not benefit much either.
     
  14. Dictator

    Newcomer

    Joined:
    Feb 11, 2011
    Messages:
    138
    Likes Received:
    347
    Couldn't it be done like BFV's variable-rate ray tracing, except based on the areas of the screen you are looking at? Or... in some other way... making the denoiser work extra hard and be more expensive in the focal area, and less expensive / less accurate in areas outside the viewer's focus?
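    A hypothetical sketch of that kind of gaze-driven budget (all thresholds and counts are invented for illustration, not taken from BFV or any shipping denoiser):

    ```cpp
    #include <algorithm>
    #include <cmath>

    // Choose a per-pixel ray budget from angular distance to the tracked gaze point.
    // The same curve could instead drive denoiser filter width or iteration count.
    int RaysPerPixel(float eccentricityDeg)
    {
        const int maxRays = 8;   // full effort inside ~5 degrees of the gaze point
        const int minRays = 1;   // floor for the far periphery
        const float t = std::clamp((eccentricityDeg - 5.0f) / 25.0f, 0.0f, 1.0f);
        return static_cast<int>(std::round(maxRays + t * (minRays - maxRays)));
    }
    ```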
     
  15. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    558
    Likes Received:
    644
    Sure there are options, but you will not see a 10x speedup just from going down to 500 x 500 px.
    I'm no expert here, but I see some interesting things that are not obvious at first thought:
    When changing focus quickly to another section of the screen, it takes about 1/4 second until I see it sharply. This is great, because the previous low-res results from that area should still be good enough to get going from there. Also, human perception of motion in the focused area is 'laggy' in comparison to the peripheral border area (coming from the primal need to detect dangerous animals quickly, they say).
    So we need high spatial quality in the center but temporally stable results at the borders, I guess. I assume laggy lighting is still acceptable everywhere, because it's likely not so important for detecting motion.

    But the problem is we have bad neighborhood information at the borders, because pixels cover large solid angles. And this will break both denoising and TAA, which kinda defeats the whole foveated rendering idea.
    So this will not make high-quality path tracing cheap - you'd just need more samples per pixel than before.
    A solution would be something like prefiltered voxels, for example. Here you could pick the voxel mip from the pixel solid angle, and there would be no aliasing or flickering (see Cyril Crassin's work before VCT - I don't say this is practical, but there are not many options to get prefiltered graphics).
    For the lighting, some world-space methods have similar properties allowing such good filtering, and I assume this works well here. Though this would not benefit from the lower resolution.
    Still, the win could be: expensive RT to get high-frequency details like sharp reflections and hard shadows would only be necessary in the focused area at all, allowing for much higher quality in return. The requirement is that both lighting techniques are accurate enough to match and be blended - VCT would fail here.
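    A rough sketch of the mip-from-solid-angle selection mentioned above (standard cone-tracing-style footprint matching; the parameter names are mine):

    ```cpp
    #include <algorithm>
    #include <cmath>

    // Pick a voxel mip so one voxel roughly matches the world-space footprint a
    // pixel covers at the hit distance. Peripheral pixels cover larger solid
    // angles, so they automatically read coarser, prefiltered mips.
    float VoxelMipForPixel(float pixelAngularSizeRad, float hitDistance, float voxelSizeLod0)
    {
        const float footprint = pixelAngularSizeRad * hitDistance; // approx. pixel width at the hit
        return std::max(0.0f, std::log2(footprint / voxelSizeLod0));
    }
    ```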

    ... far-fetched random thoughts, of course :) But we see similar dilemmas already now with DLSS: even if we could do RT at quarter resolution and upscale just that while the rasterization happens at full resolution, we would lose samples for denoising. So the current standard of upscaling the whole frame instead seems a compromise between a lot of things.
     
  16. PSman1700

    Regular Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    607
    Likes Received:
    154
    Funny in the way that the PS2 wasn't really the way forward, nor developer friendly. Hardware vertex shaders were, though. The Voodoo 6000, which never really made it to market, was trying the same approach.
     
  17. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,254
    Likes Received:
    1,940
    Location:
    Finland
    Huh? The Voodoo 5 6000 had the exact same features as the 4500/5500, which made it to market. If you're referring to Rampage instead, that was supposed to have PS and VS 1.0.
     
    Lightman likes this.
  18. PSman1700

    Regular Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    607
    Likes Received:
    154
    Now I'm unsure... wasn't it one of the unreleased Voodoo products that did PS2-like rendering in some way?
     
  19. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,833
    Likes Received:
    2,663
  20. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,357
    Likes Received:
    461
    Location:
    Finland
    The test is Tier 1 VRS, so Intel should see a similar speedup as well.
    It is also somewhat visible, so it would preferably be mixed with TAA or something to mask it.

    In the VRS talks there were nice ideas about using it for areas with strong DoF, which at least in older versions of UE4 was very expensive in itself.
    Certainly a nice tool to have.
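    For concreteness, a sketch of how the two D3D12 VRS tiers mentioned in the thread are driven (feature checks, rate-image creation and state restoration omitted; the DoF use is the idea from the talks, not a worked example):

    ```cpp
    #include <d3d12.h>

    // Tier 1: one coarse shading rate for whole draws, e.g. distant or blurred geometry.
    void DrawAtHalfRate(ID3D12GraphicsCommandList5* cmdList)
    {
        cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, /*combiners*/ nullptr);
        // ... issue draws that tolerate 2x2 coarse shading ...
        cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
    }

    // Tier 2: a screen-space rate image, which is where per-region ideas like
    // coarsening strong-DoF areas would go.
    void BindRateImage(ID3D12GraphicsCommandList5* cmdList, ID3D12Resource* rateImage)
    {
        cmdList->RSSetShadingRateImage(rateImage);
    }
    ```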
     