NVIDIA Maxwell Speculation Thread

Discussion in 'Architecture and Products' started by Arun, Feb 9, 2011.

  1. Blazkowicz

    Legend

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    It looks like just someone doing a "what if?", like you'd see on B3D or anywhere else. There are small typos (GTX 520, GTX 510), an arrow leading to GTX 880 that makes no sense (inconsistent), and a made-up respin. No GK208, and hell, they still sell GF108.

    There's no real information (the drawing itself says "is it a GM100 or a GM104? we don't know, maybe a GM104"). The author bets on GDDR5, which seems safe, but I remember speculation about GDDR6.
     
  2. DSC

    DSC
    Banned

    Joined:
    Jul 12, 2003
    Messages:
    689
    Likes Received:
    3
    http://www-03.ibm.com/press/us/en/pressrelease/41684.wss

    Wonder if there will be Nvidia POWER chips with Nvidia GPUs in them.
     
  3. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    This is interesting when thinking about IBM, which, outside of licensing the ISA to some designers, hasn't always transferred its core IP to outside partners--the rather unimpressive cores in the current-gen consoles versus their contemporary IBM cores being an example.

    Perhaps it was a business model consideration, and IBM felt that IP was too valuable an asset to transfer. I suppose the question now is what changed, the returns from sharing, or the value of the IP and IBM's microelectronics division.
    IBM might also be buying into the heterogeneous HPC wave, or hedging its bets for cloud server tech.

    Nvidia could use a partner that isn't in the business of providing GPU silicon or throughput chips. That leaves PowerPC as far as notable architectures go in this space, I think. MIPS has been acquired, x86 is arrayed heavily against Nvidia now, and ARM makes its own graphics IP and is being muscled in on by multiple GPU providers.
    What slice of the original space does this leave for Nvidia's custom ARM core, though?

    The inclusion of an interconnect partner shows how important that part is. Intel's bought interconnect tech and is about as aggressive on that scale as it is with the silicon it wants to push. AMD bought into something along those lines, for the dense microserver environment at least.
     
  4. boxleitnerb

    Regular

    Joined:
    Aug 27, 2004
    Messages:
    407
    Likes Received:
    0
    http://semiaccurate.com/2013/08/07/amd-to-launch-hawaii-in-hawaii/

    Now we know what was cancelled (if SA is right). What's the big deal about canceling the big Maxwell (presumably @28nm since Nvidia would likely not start with the big GPUs on a new node again)? If 20nm runs well, they could have pulled in the 20nm Maxwell GM104 for instance. Why should GM104 be a "minor blip"?
     
  5. jaredpace

    Newcomer

    Joined:
    Sep 28, 2009
    Messages:
    157
    Likes Received:
    0
    Tahiti vs GK104 & Tahiti vs GK110. Nvidia made big progress. They now dominate AMD in performance/watt and overall performance. Can Hawaii catch up? Nvidia just copied you guys and split their big cores into little ones, something AMD did like 4 generations ago. But they are faster & more efficient now, and you guys have to deal with it. :twisted: They stole a page out of your book. Any big surprises for us this time? Moar ROPs, big die? Moar pixel pushin powa?
     
  6. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    NV's role could be as simple as porting GPU drivers to the POWER ISA.

    The real question is what Google is doing there. They seem to have no obvious role as a supplier.
     
  7. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,742
    Likes Received:
    152
    Why would anyone believe that such a chip existed in the first place?

    Difficulty level: "because Charlie said" not a valid answer.
     
  8. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    :?:

    AMD's architecture is not split, it's GCN top to bottom. Unless you were talking about CPUs, but that doesn't really apply.

    And GK208 suggests NVIDIA might not intend for the current split to remain.
     
  9. boxleitnerb

    Regular

    Joined:
    Aug 27, 2004
    Messages:
    407
    Likes Received:
    0
    Because 20nm might be too broken in the beginning, or too expensive, so a big Maxwell at 28nm would make sense?
     
  10. UniversalTruth

    Veteran

    Joined:
    Sep 5, 2010
    Messages:
    1,747
    Likes Received:
    22
    I don't think it makes much sense unless the new architecture is so much more efficient that they could squeeze out tens of percent more performance at the same transistor count...
     
  11. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
    You cannot really go bigger than GK110 on 28nm.
     
  12. Frontino

    Newcomer

    Joined:
    Feb 21, 2008
    Messages:
    84
    Likes Received:
    0
    You really think there wouldn't be anyone willing to buy a video card with a 700 mm² chip and a 4-slot cooling system at $2,000 for gamers and $10,000 for pros?
     
  13. boxleitnerb

    Regular

    Joined:
    Aug 27, 2004
    Messages:
    407
    Likes Received:
    0
    More than 600 mm² isn't possible; the current reticle limit is about 600 mm².

    You cannot go bigger than GK110 on 28nm, but you could go faster if you increase perf/W and perf/mm² by architectural means.
     
  14. Frontino

    Newcomer

    Joined:
    Feb 21, 2008
    Messages:
    84
    Likes Received:
    0
    Staying on 28nm and improving efficiency would not give enough of a gain to justify debuting Maxwell on that node.
    If anyone could tell me how much faster Kepler would theoretically be than Fermi on the same 40nm process, I'd appreciate it.
     
  15. boxleitnerb

    Regular

    Joined:
    Aug 27, 2004
    Messages:
    407
    Likes Received:
    0
    So you don't know and you nevertheless say "will"? Interesting.
     
  16. Gipsel

    Veteran

    Joined:
    Jan 4, 2010
    Messages:
    1,620
    Likes Received:
    264
    Location:
    Hamburg, Germany
    If you really want to do it, you can go larger than that. You can check the technical specs here. Often only the maximum square die that fits (~25×25 mm²) is quoted, but the absolute limit with a rectangular die is actually higher.
     
  17. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
    26×33 gives 858 mm². There's still a little bit of headroom!

    (Refreshing to see somebody linking to hard data. Thanks!)
     
  18. Gipsel

    Veteran

    Joined:
    Jan 4, 2010
    Messages:
    1,620
    Likes Received:
    264
    Location:
    Hamburg, Germany
    I would assume leaving a little bit of space at the edges can't hurt. But even 25×32 = 800 mm² is still significantly larger than GT200 or GK110.
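    The reticle arithmetic in the last couple of posts can be sketched in a few lines. This is purely illustrative: the 26×33 mm exposure field is the figure quoted above, and the `margin_mm` parameter is my own assumption for "leaving a little bit of space at the edges", not a number from any scanner spec.

    ```python
    # Toy sketch of the reticle-limit arithmetic discussed above.
    # Field dimensions are the 26 mm x 33 mm figure quoted in the thread;
    # real limits depend on the actual lithography tool.
    MAX_FIELD_MM = (26.0, 33.0)  # (x, y) exposure field, assumption

    def max_die_area(margin_mm=0.0):
        """Largest rectangular die area fitting the exposure field,
        leaving margin_mm on each edge (scribe lines etc.)."""
        x, y = MAX_FIELD_MM
        return (x - 2 * margin_mm) * (y - 2 * margin_mm)

    print(max_die_area())     # 858.0 mm^2, the absolute rectangular limit
    print(max_die_area(0.5))  # 0.5 mm margin per edge: 25 x 32 = 800.0 mm^2
    ```

    Either number dwarfs GK110 (~550 mm²), which is the point being made: the often-quoted ~600 mm² "limit" is really the square-die figure, not a hard wall.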
     
  19. Frontino

    Newcomer

    Joined:
    Feb 21, 2008
    Messages:
    84
    Likes Received:
    0
    800!
    What kind of power consumption and thermal dissipation would it have?
     
  20. Gipsel

    Veteran

    Joined:
    Jan 4, 2010
    Messages:
    1,620
    Likes Received:
    264
    Location:
    Hamburg, Germany
    Run it at a lower voltage and less aggressive clock speeds and you could get away with less than an HD 7970 GE while still being a lot faster. That's how GK110 ends up with relatively low power consumption, significantly less than you would expect from the die size difference to GK104. Assuming power consumption scales linearly with die size is evidently not a very good method. ;)

    Or the short version: It depends.
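    The "it depends" above follows from dynamic power scaling roughly as C·V²·f: a wider die running at lower voltage and clock can beat a smaller die on throughput without a proportional power increase. A toy model, with all the scaling factors below being illustrative assumptions rather than GK104/GK110 specs:

    ```python
    # Toy model of why power does not scale linearly with die size:
    # dynamic power ~ C * V^2 * f, so trading clock/voltage for width pays off.
    # All numbers are made-up illustrations, not real chip specs.

    def dyn_power(cap_rel, volt_rel, freq_rel):
        """Relative dynamic power: switched capacitance (~active area) * V^2 * f."""
        return cap_rel * volt_rel**2 * freq_rel

    baseline = dyn_power(1.0, 1.0, 1.0)    # smaller chip at full voltage/clock
    big_die  = dyn_power(1.8, 0.85, 0.8)   # 1.8x area, -15% voltage, -20% clock

    throughput_big = 1.8 * 0.8             # execution units x clock
    print(f"power  x{big_die / baseline:.2f}")  # roughly the same power...
    print(f"perf   x{throughput_big:.2f}")      # ...for much more throughput
    ```

    With these made-up factors, the 1.8×-larger die draws only ~4% more power while delivering ~44% more raw throughput, which is the shape of the GK104-vs-GK110 argument.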
     
