Nvidia Ampere Discussion [2020-05-14]

Discussion in 'Architecture and Products' started by Man from Atlantis, May 14, 2020.

  1. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    Oh boy.
    Gotta see how the next Exynos part with RDNA performs tbh.
     
  2. troyan

    Regular

    Joined:
    Sep 1, 2015
    Messages:
    603
    Likes Received:
    1,122
    Yes, they have to go backwards with Gaming-Ampere. A100 would be twice as efficient as Gaming-Ampere.
     
  3. Nebuchadnezzar

    Legend

    Joined:
    Feb 10, 2002
    Messages:
    1,060
    Likes Received:
    328
    Location:
    Luxembourg
Well, how that ends up wouldn't matter much, as we don't expect 5LPE GPUs. The only good sign about 5LPE is that Qualcomm was initially onboard for the S875, disregarding the recent yield-issue rumours and them going back to TSMC for a late-year refresh.

Samsung's 10LPP was better than TSMC's 10FF, and 8LPP is also good; not as great as TSMC N7, but not too far off either, except on density, where it's far behind.

7LPP supposedly has double the static leakage and 5% lower dynamic power versus N7, and keep in mind that N7P and N7+ improve on N7 as well.
     
    Kej, sonen, Lightman and 1 other person like this.
  4. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    Eh, still gonna be helluva interesting round of benchmarking for you to do.
    The whole 875 or only the modem?
    Initial thingies said X60 only.
    what the hell went wrong there
     
  5. Nebuchadnezzar

    Legend

    Joined:
    Feb 10, 2002
    Messages:
    1,060
    Likes Received:
    328
    Location:
    Luxembourg
I'm not convinced we'll even see RDNA next gen, so there's that. And people give SLSI too much credit when they've fucked up for five years in a row.
Qualcomm hinted at some issues regarding N5 - reading between the lines, I think they couldn't get any volume allocated.
TSMC just has more resources and better R&D. It's hard to keep up when you have no customers left - having tons of customers that can give you feedback is incredibly useful.
     
    Kej and BRiT like this.
  6. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    P sure it's coming.
    The Mali GPU portion was never totally truly awful sans 990.
    I have some slight hopes.
    N5 for '21 is nothing besides Apple so I doubt they had no slots at all. Weird.
    As does Intel, but Intel set themselves on fire.
    Also 14 and 10 worked pretty well for all I care.
     
  7. troyan

    Regular

    Joined:
    Sep 1, 2015
    Messages:
    603
    Likes Received:
    1,122
I looked up GP107. A GTX 1050 Ti delivered 44 GFLOPS/W within 60W at a core clock around 1700MHz, and the Max-Q variant 54 GFLOPS/W. A GTX 1650 sits at 46 GFLOPS/W at 1824MHz, and a 2080 Ti FE at 1750MHz is around 54 GFLOPS/W (same as a GTX 1080).

Basically, Gaming-Ampere delivers no improvement over Pascal on 16nm or Pascal on 14nm.
     
    Tarkin1977 likes this.
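The efficiency figures above can be sanity-checked with the usual FP32 throughput formula (2 FLOPs per CUDA core per clock, counting an FMA as two operations). A minimal sketch, using the GTX 1050 Ti's 768 CUDA cores together with the ~1700MHz and 60W figures quoted in the post:

```python
def gflops_per_watt(cores: int, clock_hz: float, watts: float) -> float:
    """FP32 efficiency: 2 FLOPs (one FMA) per core per clock, divided by board power."""
    gflops = 2 * cores * clock_hz / 1e9
    return gflops / watts

# GTX 1050 Ti: 768 CUDA cores; clock and power as quoted in the post above.
print(round(gflops_per_watt(768, 1.7e9, 60)))  # -> 44, matching the ~44 GFLOPS/W figure
```

The same helper reproduces the other cards' numbers once their core counts and sustained clocks are plugged in; the spread in the quoted figures mostly reflects differences in sustained boost clock versus rated TDP.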
  8. Nebuchadnezzar

    Legend

    Joined:
    Feb 10, 2002
    Messages:
    1,060
    Likes Received:
    328
    Location:
    Luxembourg
Last I heard there will be a G78, but I'm not sure if we'll see more than one design.
TSMC saw the EUV conundrum coming and focused on DUV first and foremost, while Samsung gambled on EUV and lost that bet. TSMC meanwhile either developed their own pellicle (not confirmed) or some super-secret anti-contamination system, while Samsung struggles to get wafers out due to EUV yield issues. I hope 5LPE is competitive, because if not, 3GAA will be their last chance as a leading-edge foundry, beyond which they'll lose viability at the leading edge forever.
     
  9. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    Google's semi-custom part is rumored to be G78 sure, but E1000 (or w/ever they call it) actual is ??????.
    Sounds like voodoo but EUV is voodoo so anything is possible.
    God I hope, QC needs to get their balls squashed for laziness.
     
    Lightman likes this.
  10. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
  11. Ike Turner

    Veteran

    Joined:
    Jul 30, 2005
    Messages:
    2,110
    Likes Received:
    2,304
Nobody (none of the big animation & VFX studios) uses GPU farms for final-frame rendering. GPU rendering is mainly used by artists during production for look-dev, lighting setup, etc.

The one case where some form of GPU ray tracing was used was on Avatar and Tintin, using PantaRay (Weta's ray tracer, developed by Nvidia's Jacopo Pantaleoni) to bake directional ambient occlusion for the spherical-harmonics pipeline. Final rendering was then done in RenderMan. Weta Digital's current renderer, Manuka, was initially developed as a hybrid CPU/GPU path tracer, but the GPU path was later dropped.

When it comes to Nvidia's RTX there's also the small "issue" that most renderers use double-precision (64-bit) floating point at several stages, while RTX relies on single-precision (32-bit) floating point, which can result in inaccurate shading and limit accuracy in large scenes.
     
    #591 Ike Turner, Aug 22, 2020
    Last edited: Aug 23, 2020
    jlippo, Lightman and TheAlSpark like this.
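The precision point above is easy to demonstrate: the absolute resolution of a float shrinks as the magnitude grows, so at film-scale scene coordinates single precision can no longer represent small offsets. A minimal NumPy sketch (the 1,000,000-unit coordinate is an illustrative value, not from any particular production):

```python
import numpy as np

# Spacing (ULP) between adjacent representable values near a coordinate
# of 1,000,000 scene units: float32 can only resolve steps of 1/16 unit,
# while float64 still resolves ~1e-10.
print(np.spacing(np.float32(1e6)))  # 0.0625
print(np.spacing(np.float64(1e6)))  # ~1.16e-10

# Consequence: a sub-centimetre offset simply vanishes in float32.
x32 = np.float32(1e6) + np.float32(0.01)
x64 = np.float64(1e6) + np.float64(0.01)
print(x32 == np.float32(1e6))  # True  -- the offset was rounded away
print(x64 == np.float64(1e6))  # False -- double precision keeps it
```

This is why renderers that keep double precision in their transform and intersection stages lose accuracy when forced through an FP32-only hardware path on large scenes.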
  12. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,393
    So somewhere between things which are pretty common on all modern cards including those based on RDNA1?
     
  13. SimBy

    Regular

    Joined:
    Jun 21, 2008
    Messages:
    700
    Likes Received:
    391
I don't think there was ever a single-GPU reference air cooler that came even close to being this beefy. It may very well be that this is some special Nvidia-only edition (no AIB partner versions) monster with huge headroom for OC. Or, what's more likely, Nvidia pushed the 3090, or whatever it ends up being called, to the brink of what's possible with air cooling, and it has very little or no headroom left. This thing screams 400W.
     
    Cuthalu and hurleybird like this.
  14. hurleybird

    Newcomer

    Joined:
    Feb 22, 2012
    Messages:
    37
    Likes Received:
    7
    Forget reference, this might be the beefiest GPU air cooler period.
     
  15. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,742
    Likes Received:
    152
    Interesting. Surely, they got something out of the deal though. It would be rather odd to just give Nvidia money for no reason.
     
  16. Ike Turner

    Veteran

    Joined:
    Jul 30, 2005
    Messages:
    2,110
    Likes Received:
    2,304
    It's literally in the second paragraph:

Pixar licensed Mental Ray's QMC sampling, which Nvidia acquired in 2007 and expanded upon.
     
    Lightman likes this.
  17. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,742
    Likes Received:
    152
    Crazy how reading links works ;).

    Do you reckon they are using CPU rendering in Presto / USD?
     
    #597 ninelven, Aug 23, 2020
    Last edited: Aug 23, 2020
  18. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
The upcoming RenderMan was announced as fully vendor-agnostic, so I'm not sure if they're using any sort of DXR standard. That said, The Mandalorian technically does use GPUs for its realtime backprojection stuff. Since whatever part of Disney is responsible for it said they're opening up the tech and studios to other productions, I wouldn't be surprised if more shows end up using it as well. We'll probably see Quadro RTX 8000s and ray tracing for the background stuff in Season 2, but I wonder how they'll upgrade after that. Big Nvidia chip vs. big AMD chip from this year, fight!

    Regardless, the render times are so slow on CPUs that I wouldn't be surprised to see big production houses start to roll out GPU render farms over time. More and more production renderers are getting the capabilities, and overall it's probably a big time saver.
     
  19. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
Not for reference cards, though. A couple of years ago, when small gaming boxes were all the rage, Nvidia was pretty adamant that their reference designs had to be 2-slot blowers or they wouldn't fit into those particular small cases anymore.

    But then, maybe that exotic thing is necessary when you put your 400 Watt SXM4 into an adapter for PCIe. ;)

Not really sure, but for really large scenes there may be a limit to the 1st-gen RT cores. Their hardware BVH traversal could fall off a cliff somewhere when the internal cache is oversubscribed.
     
    Lightman likes this.
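A back-of-envelope calculation shows why cache oversubscription is plausible at film scale. Assuming (hypothetically, for illustration) a binary BVH with roughly 2N-1 nodes over N triangles and ~32 bytes per node (an AABB as six floats plus child/leaf indices):

```python
def bvh_size_mib(num_triangles: int, bytes_per_node: int = 32) -> float:
    """Approximate memory footprint of a binary BVH: ~2N-1 nodes over N leaves."""
    num_nodes = 2 * num_triangles - 1
    return num_nodes * bytes_per_node / (1024 ** 2)

# Even a mid-sized scene's BVH dwarfs on-chip caches (a few MiB at most).
for tris in (100_000, 1_000_000, 10_000_000):
    print(f"{tris:>10,} triangles -> ~{bvh_size_mib(tris):7.1f} MiB of BVH nodes")
```

Under these assumptions one million triangles already need ~61 MiB of nodes, so incoherent rays touching scattered subtrees would thrash any on-chip cache and push traversal bandwidth out to DRAM.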
  20. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    not
    reference
     