NVidia Ada Speculation, Rumours and Discussion

Discussion in 'Architecture and Products' started by Jawed, Jul 10, 2021.

  1. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    One aspect could be pricing. As of now, A100 is pretty much at the reticle limit (fact), on a more expensive process (educated guess), and uses CoWoS and HBM2. The BOM should be much higher, and you'd have to sell at a lot less when also targeting gamers. $3k max? And of course, now Hopper has to have RT cores. If their current variant does not have those, you cannot even go generally for content creators, because even there marketing has done its work, apart from some real use cases.
     
  2. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,397
    Pricing needs to be competitive also. The only way I can see this turning out to be true is if the top end Hopper based card would occupy a niche which would be unreachable to Lovelace even at the maximum die size possible.

    But then again, why would Lovelace even exist? If GH202 will in fact be an MCM, then why not have a single-chip GH202 lower in the lineup? Or why even use Hopper for an MCM if Lovelace is presumably better for gaming in the first place?

    Colour me dubious about this one. Looks a lot like your typical smoke and mirrors if you ask me.
     
  3. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    No, the GPU has none of that.
    They need a 5th non-A100 discrete GPU in Station A100 to output display.
     
  4. Samwell

    Newcomer

    Joined:
    Dec 23, 2011
    Messages:
    149
    Likes Received:
    183
    I'm also sceptical for a Gaming GH202, but it's not impossible.

    GH202 would of course be higher in the lineup than GL102; it would be more like 2x 500mm². A single chiplet of GH202 might make no sense if the interconnect/cache needed for MCM takes too much die space. Better to have a 450mm² single chip than a 500mm² GH202 chiplet, looking at the wafer prices.
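    The wafer-price point can be made concrete with a quick dies-per-wafer sketch. This is only illustrative: the $17,000 wafer price is an assumed figure, not a disclosed TSMC number, and the formula is the standard rough approximation.

    ```python
    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        """Standard dies-per-wafer approximation: gross area term minus an edge-loss term."""
        radius = wafer_diameter_mm / 2
        return int(math.pi * radius ** 2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    WAFER_PRICE_USD = 17000  # assumed 5nm-class wafer price, purely illustrative

    for area in (450, 500):
        dies = dies_per_wafer(area)
        print(f"{area} mm2: {dies} candidate dies/wafer, ${WAFER_PRICE_USD / dies:.0f} each")
    ```

    In this sketch the 500mm² die already costs roughly 13% more per candidate die than the 450mm² one, before yield is even taken into account.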

    Why should Lovelace be better for gaming? Because of RT cores? Have a look at Ampere for compute and Ampere for gaming: the differences are already pretty big, but Nvidia is calling it one architecture. What if the real difference between Hopper and Lovelace is single chip vs MCM, while the architectural differences are no bigger than within Ampere?
    I'm expecting the new GPC->CPC->TPC->SM design not only in Hopper, but also in Lovelace.
     
  5. IIRC at some point in B3D we discussed whether GPU means the chip or the card, or the aggregate of chip + VRAM. It came up when the "X2" dual-chip cards started to appear.
    It's not a clear-cut definition out there on specialty websites, and the rise of chiplet GPUs is bound to make things harder to discern.
     
  6. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    I am talking about the graphics processing unit GA100, which is a fully functioning graphics chip that just lacks RT cores (and maybe an NVENC, or it's just disabled - can't find the notes atm).
     
    #126 CarstenS, Jul 25, 2021
    Last edited: Jul 25, 2021
  7. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
    Nvidia next generation GPU returns to TSMC - Business Times (ctee.com.tw)
    July 26, 2021
     
  8. Are they.. using Internet Explorer?
     
  9. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
    You know what... thinking it through: if "Lovelace" is a different arch from "Hopper", and the first is monolithic while the second is some multi-die (MD) thing (insert marketing term), then my guess is this: Hopper is on TSMC 5nm, which has TSMC's new chiplet bridge packaging, while Lovelace is on Samsung 4nm.

    Doing the calculations for potential yields on AMD's chiplet GPUs showed just how much cost can drop by using multiple smaller chiplets instead of relatively large monolithic dies. If Nvidia has the ability to do multi-die gaming GPUs, I suspect they'd strongly prefer that; profit margins shoot up quite a bit there, at least potentially. So the reason for going MD on one arch but not both is, I imagine, that you can only do it on one instead of both. If Nvidia expects to sell out its entire TSMC wafer allotment just making AI chips, then of course it's going to just make AI chips - the margins there are way, way better than for consumer GPUs, MD or not. So then they have a different node, and foundry, for their lower-margin business: one that doesn't currently have the packaging tech for the multi-die stuff they'd want to do. But if you're out of options, and everyone around the world is for the near future, you take what you can get.

    It's not like Samsung 4nm isn't a good leap from their 8nm. Going by Samsung's numbers, density should be over double and power might nearly be cut in half. Even with the caveat that those are headline numbers, just a straight port of Ampere would see solid benefits across the board, let alone any other improvements. Anyway, that's a possible scenario that makes sense to me.
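    The yield argument above can be sketched with a toy Poisson defect model. Everything here is illustrative: the defect density and die areas are made-up numbers, not anything NVidia or any foundry has disclosed.

    ```python
    import math

    def poisson_yield(die_area_mm2, d0_per_cm2=0.1):
        """Fraction of dies with zero defects under a simple Poisson model."""
        return math.exp(-d0_per_cm2 * die_area_mm2 / 100.0)

    MONO = 800     # hypothetical large monolithic die, mm2
    CHIPLET = 450  # hypothetical chiplet, incl. interconnect overhead, mm2

    # Wafer area consumed per good product (lower is cheaper):
    mono_cost = MONO / poisson_yield(MONO)
    chiplet_cost = 2 * CHIPLET / poisson_yield(CHIPLET)

    print(f"monolithic yield: {poisson_yield(MONO):.1%}")    # ~44.9%
    print(f"chiplet yield:    {poisson_yield(CHIPLET):.1%}")  # ~63.8%
    print(f"good-silicon cost ratio: {chiplet_cost / mono_cost:.2f}")  # ~0.79, < 1 favours chiplets
    ```

    With these toy numbers, two 450mm² chiplets burn roughly 20% less wafer area per good product than one 800mm² die, even though the chiplets carry extra interconnect area. Real defect densities and die harvesting change the numbers, but not the direction.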
     
    #129 Frenetic Pony, Aug 1, 2021
    Last edited: Aug 1, 2021
  10. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,112
    Location:
    New York
    If Nvidia wanted to make chiplets but they needed TSMC to do it then they would be making chiplets at TSMC. It’s a strange leap of logic to assume they would base their architecture strategy on “take what they can get” from a foundry perspective. Lovelace being monolithic isn’t sufficient evidence of which foundry it will be made on.

    It would however indicate that Nvidia’s chiplet architecture for gaming isn’t ready yet or they (mis?)calculated that it wasn’t the right time to bring it to market.
     
  11. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Or it could be a two-way safety measure, betting on supply constraints for the competition, which gets produced at TSMC. They'd have an independent chip source to cover them if, for whatever reason, they cannot get enough chips out of TSMC themselves, and they diversify their second big source of income.

    Remember, Nvidia only has its GPU business. If something had backfired at AMD with consumer chiplet technology, they could at least cover with their CPU and semi-custom businesses. If Nvidia's first try at "real" MCM (not talking HBM here) had problems of whatever kind, they'd still have the other option to cover basic operations. Contingencies. Jen-Hsun really internalized what they learned during the first Fermi debacle.
     
  12. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    Not how their product roadmaps work.
    Short cycles > optionalities.
    No they'd just move on with cutting into the next product cycle.
    Remember Vega11?
     
  13. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
    I mean, it's not impossible for Nvidia's consumer GPU chiplets to "not be ready"; that's why I mentioned it as only one hypothesis. But chiplets just make too much business sense: there's no "not the right time to bring it to market", there's only "bring it to market as fast as we can". If their engineers are doing AI chiplets first and foremost, then of course that's what they'll do, and the consumer GPUs can wait however long. It'll take AMD 9-12 months (ish?) to go from AI to consumer GPU chiplets, but... maybe it'll take Nvidia longer, long enough that a monolithic arch makes sense, I don't know. Or heck, even if it takes them an equivalent time - Hopper isn't due out till the end of next year, and maybe having up-to-date products is worth the giant investment in a quickly outdated arch, just to keep pressure on the competition and hold on to market share. A new arch in 3 years instead of 2 sounds like a lot, these days.

    But assuming TSMC "just has more wafers to throw around" is silly. We know they don't; we know they're booked solid for years and keep announcing yet more new fabs and more investments. "They'll just make chiplets at TSMC" isn't an assumption that can be made at all. Hell, imagine if AMD doesn't have enough wafers to go around either. If that's the case, and Nvidia has their arch on Samsung, not TSMC, they'd have AMD soundly beaten in the supply category for consumer GPUs, chiplets or not: AMD would prioritize CDNA2 over RDNA3, while Nvidia wouldn't even have to make that call.
     
    #133 Frenetic Pony, Aug 1, 2021
    Last edited: Aug 1, 2021
  14. troyan

    Regular

    Joined:
    Sep 1, 2015
    Messages:
    603
    Likes Received:
    1,123
    Chiplets don't make sense for GPUs, especially for rendering. The scheduling overhead will decrease efficiency, which makes it even harder to benefit from shrinking. You would have to program specifically for chiplets to avoid synchronisation conflicts, and nobody will do that in the gaming business.
     
  15. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Nah, I never knew Nvidia - which is the topic in this thread - did a Vega 11.
     
  16. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    Nice quips (time to apply for a capeshit writer position!) but irrelevant shit altogether.
    Short cycles are king.
    Oh man...
    Note under "shit that will age catastrophically".
     
  17. troyan

    Regular

    Joined:
    Sep 1, 2015
    Messages:
    603
    Likes Received:
    1,123
    You mean like your baseless predictions about RDNA2? Sure.
    But maybe you can explain how AMD will schedule a raytracing workload over multiple chips that are only connected through an L3 cache, without a programmer specifically designing the BVH for it.
     
  18. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    Idk chief everything's on target.
    Miracles and magic.
    Please cease doing an ownage on yourself.
    Will age really foul.
     
  19. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,112
    Location:
    New York
    The same way they schedule work over multiple shader engines that are only connected through L3 today. The only difference is off-chip vs on-chip comms latency.
     
  20. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    And there's only two more of them.

    From your POV it's just a gigahuge GPU with moar h/w innit.
     