Nvidia Ampere Discussion [2020-05-14]

Discussion in 'Architecture and Products' started by Man from Atlantis, May 14, 2020.

  1. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,210
    NVIDIA clearly stated TSMC will produce the majority of 7nm orders, Samsung will get minor orders. The silliness is thinking NVIDIA will bifurcate their design across two foundries with two different fabrication processes.
     
    PSman1700, pharma and BRiT like this.
  2. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,210
    No, it is. A100 shares the same architecture as GeForce Ampere: same CUDA cores, caches, Tensor cores and everything. NVIDIA will add new-generation RT cores, reduce the number of Tensor cores and call it a day. You don't use two different fabrication processes for that.
     
  3. Nebuchadnezzar

    Legend

    Joined:
    Feb 10, 2002
    Messages:
    1,060
    Likes Received:
    328
    Location:
    Luxembourg
    And if this is 350W on TSMC 7nm, then Ampere is garbage. I'd rather believe the power is that high because it's on Samsung.
     
  4. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,210
    That's funny, did you by any chance get to know what kind of performance you are getting?
     
  5. Pinstripe

    Newcomer

    Joined:
    Feb 24, 2013
    Messages:
    153
    Likes Received:
    133
    My guess is that GA102 & GA104 will be fabbed @TSMC 7nm, and the smaller GA106 & GA107 @Samsung 8nm or 7nm.
     
  6. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,400
    Likes Received:
    1,845
    Location:
    France
    The thinking was that "gaming" Ampere would not be on 7nm but on Samsung's 8nm, so the quote would still be right. And everybody was fighting over TSMC's 7nm capacity... A cheaper but good 8nm process could be a logical solution.

    If it's silly to you, OK, I guess...
     
    w0lfram likes this.
  7. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,210
    Again, why? If these were two different architectures, that would be fine, I guess, but they are one architecture with different configurations. You don't bifurcate your design like that unless the fabrication processes are identical, or at least similar enough.
     
    #747 DavidGraham, Aug 28, 2020
    Last edited: Aug 28, 2020
    A1xLLcqAgt0qc2RyMz0y likes this.
  8. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    It seems TSMC has strict NDA and non-compete (NDA'd team cannot communicate with another foundry for multiple years) clauses in their contracts for use of their bleeding edge nodes. So, either NVidia has signed on the dotted line and there's a "single design" (as @DavidGraham asserts) or NVidia is multi-fab and is using multiple teams, hence multiple designs.

    If NVidia has done any work at Samsung, then all that work is toast if NVidia is now 100% TSMC. Or the TSMC element of that work was on non-bleeding-edge TSMC.

    3 stark choices. Take your pick.
     
    w0lfram, Kej, hurleybird and 5 others like this.
  9. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,462
    Location:
    Finland
    Wanna bet on the "same CUDA cores" part? :-D
    The GA variants won't have FP64 CUDA cores, which are separate from the FP32 CUDA cores but sit in the same SM. Then there are of course rumors of a different FP32/INT32 split, but those could be just rumors and nothing more. And there's more missing from A100 than just the RT cores that will be in the GA series (not inside the SM units, though; ROPs sit outside, for example).
     
  10. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Nvidia confirmed that GA100 is a complete graphics chip. Not sure about the RT cores, though, but ROPs, raster and display engines are all there.

    edit: re-read it; RT cores are not part of GA100.
    I wrote about it here, but you'd have to take my word for it anyway that I did not make that up out of thin air. :)


    edit2: And let me state the perfectly obvious in saying that this would mean no GA100 Titan cards this round.
     
    #750 CarstenS, Aug 28, 2020
    Last edited: Aug 29, 2020
  11. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,393
    NV has been using both TSMC and Samsung for quite a while now, guys.
     
  12. Man from Atlantis

    Regular

    Joined:
    Jul 31, 2010
    Messages:
    960
    Likes Received:
    853
  13. It seems to me that the power requirements for the high-end Ampere cards, to the point of making up a new 12-pin power connector, are an indicator that the rumors of Nvidia going with Samsung 8nm (equivalent to TSMC 10nm in performance, IIRC) are true.

    We know from Microsoft's latest presentation that large 7nm SoCs are very expensive even if yields are good, and we also know Nvidia got comfortable making very large chips for different performance brackets with Turing.

    This doesn't mean Nvidia won't win in absolute performance with their $1500 halo products (or they wouldn't have bothered with GDDR6X). But it could leave some room for 7nm AMD GPUs to finally gain some notebook dGPU market share, as suggested before.
     
    w0lfram, Lightman and BRiT like this.
  14. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,678
    320W is disappointing for the 3080 if it has the same number of SMs as a 2080 Ti with a less-than-200 MHz clock increase. That's a 70W increase in power (~28%) over the 2080 Ti with only an 11% clock boost, on a node shrink. The only way that makes sense is if the SMs are wider, or they dumped most of that power into RT and tensor cores. AMD looks positioned to smoke them on raw raster performance, but I guess we'll see.
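    A rough sanity check of the percentages above, assuming the commonly cited figures (2080 Ti at 250 W / ~1545 MHz boost, 3080 at 320 W / ~1710 MHz boost; the clock numbers are assumptions, not from this thread):

```python
# Assumed reference figures: 2080 Ti = 250 W / 1545 MHz, 3080 = 320 W / 1710 MHz.
tdp_2080ti, tdp_3080 = 250, 320
clk_2080ti, clk_3080 = 1545, 1710

# Relative increases: power is up far more than clock, hence the poster's point.
power_increase = (tdp_3080 - tdp_2080ti) / tdp_2080ti   # 70 W more, ~28%
clock_increase = (clk_3080 - clk_2080ti) / clk_2080ti   # 165 MHz more, ~11%

print(f"power +{power_increase:.0%}, clock +{clock_increase:.0%}")
```

    With these inputs the power increase comes out to ~28%, not 30%, but the mismatch against an ~11% clock bump holds either way.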
     
  15. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,678
    The 3070 looks very similar to the 2080 Super, but at 30W less. Having a 100W gap between the 3070 and 3080 doesn't make a lot of sense to me. I wonder if the 3080 will be the dud in this lineup.
     
  16. yuri

    Regular

    Joined:
    Jun 2, 2010
    Messages:
    283
    Likes Received:
    296
    The 3090 really has to be very fast, since it already has 135% of the 2080 Ti FE's TDP. Given the node jump it *should* be pretty fast, assuming NV wants to at least maintain the efficiency ratio.
     
  17. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,742
    Likes Received:
    152
    Bet.
     
  18. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,678
    I’m really curious now whether the 3070 is on a better node (TSMC) or has a lower ratio of RT and tensor cores. Otherwise I don’t understand the 100W difference.
     
  19. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,109
    Location:
    New York
    I think it’s fair to assume GDDR6X is going to account for a significant chunk of the power budget. It would explain the relatively svelte 220W TGP for the 3070.

    I wouldn’t bet on AMD smoking the 3080 in pure raster. Best case, Big Navi comes with a 384-bit bus at 16 Gbps, which puts it at essentially the same bandwidth as the 3080. It should make for a very fun matchup though, especially if RDNA2 clocks high.
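    The bandwidth comparison can be checked with the usual formula (bus width in bits times per-pin data rate, divided by 8 for bytes). The 3080's 320-bit / 19 Gbps GDDR6X config is taken from the announced specs; the Big Navi config is the rumor from the post:

```python
def bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin rate (Gbps) / 8."""
    return bus_bits * data_rate_gbps / 8

big_navi = bandwidth_gb_s(384, 16)   # rumored 384-bit GDDR6 @ 16 Gbps -> 768 GB/s
rtx_3080 = bandwidth_gb_s(320, 19)   # 320-bit GDDR6X @ 19 Gbps       -> 760 GB/s

print(big_navi, rtx_3080)
```

    768 GB/s versus 760 GB/s: near-identical, which is the post's point, though not literally equal.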
     
  20. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,393
    3070 is GA104, 3080 is GA102 this time around.
     