NVidia Ada Speculation, Rumours and Discussion

Discussion in 'Architecture and Products' started by Jawed, Jul 10, 2021.

  1. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,235
    Likes Received:
    4,259
    Location:
    Guess...
    Man Lovelace is going to be such an awesome upgrade from my current 1070. Can't wait!
     
    PSman1700 likes this.
  2. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,088
    Even for me coming from a 2080Ti it will be a generation leap.
     
    pjbliverpool likes this.
  3. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
    Some speculation from a mid-March "Moore's Law is Dead" rumor aligns with more recent leaks.

     
    PSman1700 likes this.
  4. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,088
    A 4060 Ti faster than a 3090 in rasterization sounds very nice. Scary to think what the 4090 in turn would do.
     
    #464 PSman1700, Apr 9, 2022
    Last edited: Apr 9, 2022
    pharma likes this.
  5. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,235
    Likes Received:
    4,259
    Location:
    Guess...
    That all looks great, but I have precisely zero faith in Moore's Law is Dead after he got just about everything wrong in the run-up to Ampere and RDNA2.
     
    BitByte, McHuj, CeeGee and 11 others like this.
  6. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,424
    Likes Received:
    908
    I still don’t believe 2x or more performance than a 3090 outside of specific scenarios.
     
    Picao84 likes this.
  7. arandomguy

    Regular Newcomer

    Joined:
    Jul 27, 2020
    Messages:
    251
    Likes Received:
    355
    I don't see 2x performance as unreasonable or even unexpected given the circumstances. We've been dealing with product generations that are staggered (mid die first, then big die), two gens per node spreading out the improvements, and/or "half node" gens (e.g. Ampere). But if we isolate what would be theoretically possible with like-for-like releases (big die vs big die, no drop-off) on full node improvements, 2x gains would be in line with expectations if anything.

    If we look at GP102 vs GM200 there was essentially an increase in the 2x range (https://www.techpowerup.com/review/msi-gtx-1080-ti-gaming-x/30.html), and that was with GP102 being relatively conservative: a rather large drop in die size (601mm² to 471mm²) without a TDP increase either (at least for the 980 Ti vs 1080 Ti). Now imagine if the market told Nvidia there were people willing to buy a 350W+ "GP101" config at launch for a massive premium (without waiting for the staggered release), at $2000 if not higher, with a die closer to around 600mm².
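    To put rough numbers on that comparison (die sizes and the ~2x figure are from the post above, so treat the result as a back-of-envelope estimate, not a measurement):

```python
# GM200 (980 Ti) -> GP102 (1080 Ti), figures as quoted above.
gm200_mm2 = 601       # GM200 die area
gp102_mm2 = 471       # GP102 die area
perf_gain = 2.0       # ~2x performance at roughly the same TDP

die_shrink = gp102_mm2 / gm200_mm2          # ~0.78x the die area
perf_per_mm2_gain = perf_gain / die_shrink  # ~2.55x perf per mm^2

print(f"die area ratio: {die_shrink:.2f}x")
print(f"perf per mm^2 gain: {perf_per_mm2_gain:.2f}x")
```

    In other words, the node jump delivered well over 2x per unit area; a hypothetical same-size big die at launch would have captured more of that.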

    With another die, AD103, now likely being part of the product stack from inception, it's likely that AD102 will essentially be pushed up in terms of where it sits, as there isn't much room to push everything from AD104 downwards. This has been somewhat of a point of contention over the last few years, but the market is showing there is demand for a wider stack (and really, if we look at how multi-GPU was handled in years past, this has always been the case), and I've said as well that a potential future of MCM GPUs is going to push this issue further. There's a substantial number of buyers willing to blow way past the mid-hundreds price point (e.g. $500) and the 250W power budget, and there always has been. The market is just moving to serve those buyers.
     
    T2098 likes this.
  8. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,400
    Likes Received:
    1,845
    Location:
    France
    What about bandwidth? I'm sure they can improve things with "just" smarter/bigger caches and such, but at one point you will hit the external bandwidth wall... I don't care that much about a raw 2x increase; I would prefer a 2x increase in the minimum fps of the current gen. Going from 180fps to 360 is meh, but going from 27 to 54 would be nice, if it's a GPU limitation...
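    The frame-time math shows why the low end matters more: the same 2x fps gain saves very different amounts of time per frame.

```python
# Frame time in milliseconds for a given fps.
def frame_time_ms(fps):
    return 1000.0 / fps

# Same 2x fps doubling, very different per-frame savings.
for low, high in [(180, 360), (27, 54)]:
    saved = frame_time_ms(low) - frame_time_ms(high)
    print(f"{low} -> {high} fps: saves {saved:.1f} ms per frame")
```

    Doubling 180fps saves under 3ms a frame; doubling 27fps saves over 18ms, which is where the stutter actually lives.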
     
    egoless, BRiT and nnunn like this.
  9. troyan

    Regular

    Joined:
    Sep 1, 2015
    Messages:
    603
    Likes Received:
    1,122
    Performance in rasterized games isn't limited by geometry or compute throughput, so I don't know how you can improve performance in these games by 2x when even today Ampere isn't fully utilized. I think Nvidia should ignore it and go all-in on spending transistors for raytracing, compute and DL.
     
    pharma, McHuj, xpea and 3 others like this.
  10. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,678
    I'm reviewing some of the leaks and I can't say they make a lot of sense to me unless the SMs are drastically different. This GPU should be on a 5 or 7nm process, but I'm seeing leaks saying AD104 will go up to 400W with 60 SMs. The RTX 3080 has 68 SMs on 12nm and is capped at 320W. Unless each SM gets a huge increase in CUDA cores, I don't really see how the new GPUs could need that much power. Would it make any sense to buy an RTX 4060 and have it end up about as powerful as an RTX 3080, but drawing 80W more? I haven't really kept up with the rumours, but some of this just seems odd to me.
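    Rough per-SM power from those figures (the AD104 numbers are leaks, not confirmed specs, so this is only a sanity check):

```python
# Rumoured AD104 figures vs the known RTX 3080 (GA102) spec.
ad104_watts, ad104_sms = 400, 60            # leaked, unconfirmed
ga102_3080_watts, ga102_3080_sms = 320, 68  # official RTX 3080 spec

print(f"AD104 (rumour): {ad104_watts / ad104_sms:.1f} W/SM")
print(f"RTX 3080:       {ga102_3080_watts / ga102_3080_sms:.1f} W/SM")
```

    That's roughly 6.7 W per SM against 4.7, so the leaks imply each SM doing (or drawing) considerably more, unless the wattages are simply wrong.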
     
    CeeGee, Picao84, pharma and 2 others like this.
  11. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    Averages shouldn't appear in benchmarks. Maxima used to be shown at one time, and now they're generally not found.

    If reviewers insist on showing two bars for a card, 0.1% and 1% lows are fine.
     
  12. Putas

    Regular

    Joined:
    Nov 7, 2004
    Messages:
    737
    Likes Received:
    354
    Ampere is on 8nm; Turing was on 12.
     
  13. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,393
    It still doesn't look right for AD104 to be on par with a 3090 while somehow consuming 400W. The wattages are likely completely out of place. I'd bet that people are looking at the PCIe 5 power limit as if it must be maxed out from day one just because it's there.



     
    #473 DegustatoR, Apr 11, 2022
    Last edited: Apr 11, 2022
    pharma, PSman1700 and xpea like this.
  14. arandomguy

    Regular Newcomer

    Joined:
    Jul 27, 2020
    Messages:
    251
    Likes Received:
    355
    I don't see how they could take that approach from a planning perspective at this stage. Unless there is a major departure, Nvidia's GPU designs essentially scale up/down the stack each gen. Major raster performance gains are still going to be very important, and critical for AD107, AD106 and I'd say even AD104 in terms of how they impact the market.

    This is not to say that ray tracing performance won't have a strong focus, as going forward ray tracing is likely to be the main "console+" graphics we get on the PC.
     
  15. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,088
    Higher settings do make a difference, as do higher framerates and resolutions, with the help of DLSS/XeSS to make up the performance deficit. Ray tracing is a big one, but certainly not the only one, especially as the generation moves on. Raster performance is still important and is going to be for a while.
     
  16. TopSpoiler

    Newcomer

    Joined:
    Aug 18, 2020
    Messages:
    74
    Likes Received:
    176
  17. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,742
    Likes Received:
    152
    I'd just like to be able to buy one somewhere remotely close to MSRP.
     
  18. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    $3000 MSRP for 4090 should do the trick.

    And, while I'm here, I just don't get the power consumption and performance numbers being rumoured: Samsung 8nm to TSMC 5 (or 4) should be a massive boon for both performance and power consumption.

    It'll be interesting to see if the newer GDDR6X speeds reduce power consumption, too.
     
    DegustatoR likes this.
  19. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,109
    Location:
    New York
    I was looking at H100 numbers and wondering the same thing. Using H100 as a proxy for AD102 the density increase is ~2.2x going from Samsung 8nm to TSMC 4nm.

    AD102 rumors point to a 1.7x increase in SMs and a 16x increase in L2 cache over GA102. Add a bit more for architectural improvements and higher clocks and it sorta makes sense. 600W for 60 billion transistors at high clocks plus 24GB of GDDR6X isn't that surprising given the flattening power-scaling curve. H100 is 700W for 80 billion transistors, probably at much lower clocks, and with HBM.
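    A quick transistors-per-watt check on those figures (the AD102 numbers are rumours; H100 is the announced spec, and memory type differences are ignored):

```python
# Announced H100 spec vs rumoured AD102 figures from the post above.
h100_transistors_b, h100_watts = 80, 700   # billions of transistors, TDP
ad102_transistors_b, ad102_watts = 60, 600 # rumoured, unconfirmed

h100_bt_per_w = h100_transistors_b / h100_watts    # ~0.114 B transistors/W
ad102_bt_per_w = ad102_transistors_b / ad102_watts # ~0.100 B transistors/W

print(f"H100:           {h100_bt_per_w:.3f} B transistors per watt")
print(f"AD102 (rumour): {ad102_bt_per_w:.3f} B transistors per watt")
```

    AD102 spending more power per transistor than H100 is consistent with the higher-clocks reading: same node family, but pushed harder.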

    I would expect at least 2x performance for all of that though.
     
    Lightman, xpea and PSman1700 like this.
  20. psurge

    Regular

    Joined:
    Feb 6, 2002
    Messages:
    955
    Likes Received:
    52
    Location:
    LA, California
    Also, don't the AMD rumors say 500W for even more transistors, cache and perf (and higher clocks, IIRC 2.5GHz vs 2.2) spread across multiple dies? Maybe the higher power numbers on the NV side come from having to push clocks far into the non-linear part of the perf/power curve so that they can compete with a multi-die solution on performance? Or AMD knocked it out of the park on physical design for this upcoming gen?
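    A toy model of that non-linear region: dynamic power scales roughly with f·V², and voltage has to rise with clocks. The V-f slope below is made up purely for illustration, not real silicon data.

```python
# Toy dynamic-power model: P ~ f * V^2, with a hypothetical linear V-f curve.
def relative_power(f_ghz, f_base=2.2, v_base=1.0, v_per_ghz=0.1):
    # v_per_ghz is an illustrative assumption, not a measured slope
    v = v_base + v_per_ghz * (f_ghz - f_base)
    return f_ghz * v**2

ratio = relative_power(2.5) / relative_power(2.2)
print(f"2.2 -> 2.5 GHz: ~{ratio:.2f}x power for ~{2.5 / 2.2:.2f}x clocks")
```

    Even with a gentle made-up V-f slope, power grows faster than frequency, which is the basic reason chasing clocks at the top of the curve is so expensive.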
     
    xpea likes this.
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.