NVIDIA Maxwell Speculation Thread

Discussion in 'Architecture and Products' started by Arun, Feb 9, 2011.

    Veteran

    Joined:
    Sep 5, 2010
    Messages:
    1,747
    Likes Received:
    22
    :shock: Wow, we are already in the speculation zone for what will come in 2015. :lol:

    Given that we know almost nothing about what's coming in a quarter... that's a nice try. ;)
     
  2. A1xLLcqAgt0qc2RyMz0y

    Regular

    Joined:
    Feb 6, 2010
    Messages:
    985
    Likes Received:
    277
    I don't know what the V I from S|A is smoking, but TSMC will have 20nm in full production in 2013, so there's no way Maxwell is going to be either on 28nm or released in 2015.

    TSMC Lays Process Technology Roadmap Out.
    http://www.xbitlabs.com/news/other/...SMC_Laids_Process_Technology_Roadmap_Out.html

    TSMC says it's ready for 20-nm designs
    http://www.eetimes.com/design/eda-design/4398190/20-nm-open-for-design-says-TSMC

    20nm Technology
    http://www.tsmc.com/english/dedicatedFoundry/technology/20nm.htm

    TSMC sketches finFET roadmap
    http://www.techdesignforums.com/blog/2012/10/16/tsmc-finfet-roadmap/
     
    #102 A1xLLcqAgt0qc2RyMz0y, Dec 7, 2012
    Last edited by a moderator: Dec 7, 2012
  3. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,748
    Likes Received:
    470
    I think performance at 20/22 nm without FDSOI or FinFETs will be a dud (i.e. not competitive with just sticking to 28 nm). TSMC seems to be the only one trying it.
     
  4. A1xLLcqAgt0qc2RyMz0y

    Regular

    Joined:
    Feb 6, 2010
    Messages:
    985
    Likes Received:
    277
    http://www.tsmc.com/english/dedicatedFoundry/technology/20nm.htm
     
  5. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
  6. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    Not to parse PR text too finely, but the statement about TSMC's process is an OR list, not an AND.

    It's 30% faster, and 1.9x as dense, and 25% less power.
    It's 30% faster, or 1.9x as dense, or 25% less power.

    For GPU chips that tend to push things on performance, size, and power, the former list sounds more promising than the latter.
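    To put toy numbers on the difference between the two readings, here is a quick back-of-envelope sketch (the 28nm baseline clock, area, and power are invented; only the 30%/1.9x/25% figures come from the claim):

        // Toy comparison of the "AND" vs "OR" reading of TSMC's 20nm claims.
        // The 28nm baseline numbers are invented for illustration only.
        #include <cstdio>

        int main() {
            const double base_clock_ghz = 1.0;   // hypothetical 28nm GPU clock
            const double base_area_mm2  = 550.0; // hypothetical 28nm die area
            const double base_power_w   = 250.0; // hypothetical 28nm board power

            // "AND" reading: every benefit at once.
            printf("AND: %.2f GHz, %.0f mm^2, %.0f W\n",
                   base_clock_ghz * 1.30, base_area_mm2 / 1.9, base_power_w * 0.75);

            // "OR" reading: pick one benefit; the others stay at baseline.
            printf("OR speed:   %.2f GHz, %.0f mm^2, %.0f W\n",
                   base_clock_ghz * 1.30, base_area_mm2, base_power_w);
            printf("OR density: %.2f GHz, %.0f mm^2, %.0f W\n",
                   base_clock_ghz, base_area_mm2 / 1.9, base_power_w);
            printf("OR power:   %.2f GHz, %.0f mm^2, %.0f W\n",
                   base_clock_ghz, base_area_mm2, base_power_w * 0.75);
            return 0;
        }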
     
  7. Dr Evil

    Dr Evil Anas platyrhynchos
    Legend Veteran

    Joined:
    Jul 9, 2004
    Messages:
    5,767
    Likes Received:
    775
    Location:
    Finland
    Doesn't the density increase come in all situations, with the only trade-off being between speed and power consumption?
     
  8. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    SRAM can use larger cells than the process minimum for performance or reliability reasons.
    Logic has tended to scale less than SRAM with each node, and more complex or higher speed logic has shrunk more slowly.
    Less regular structures may need extra measures to manufacture, and wires in more complex logic can restrict density improvement.
    When increasing drive strength with FinFETs, extra fins and thus extra area can be used to provide more current.

    For control of leakage or manufacturability, parts of the chip can use physically larger transistors to reduce leakage and resist variation. Power gates are physically large relative to the rest.
    Even with ideal scaling, the leakage and power efficiency issues will require allocating more of the larger transistor budget to more aggressive power control measures.

    The smaller gates are physically less able to control leakage. Without a materials change or a fundamental change in the structure of the gates, such as the HKMG or FinFETs that Intel regularly pushes out, shrinking today's tech by a node means headaches.
    Intel has been pretty good about getting these difficult transitions done before the problems they solve smack it in the face.
    The foundries tend to delay these more fundamental changes by a node, so they get smacked in the face every other node.

    One thing I'm curious about, given the foundry plans to hop to FinFETs with 14nm transistors on a 20nm metal layer, is how this will compare with Intel, which has historically been somewhat less aggressive in scaling wire density.
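    As a rough sketch of why the chip-level shrink ends up below the best block-level number, here is a weighted-mix calculation (the floorplan fractions and per-block scaling factors are invented for illustration):

        // Chip-level area scaling as a weighted mix of block types.
        // All fractions and scaling factors below are invented.
        #include <cstdio>

        int main() {
            // Hypothetical GPU floorplan: fraction of die area per block type.
            const double f_sram = 0.30, f_logic = 0.60, f_analog_io = 0.10;
            // Hypothetical area scaling per block type at the new node.
            const double s_sram = 0.55, s_logic = 0.65, s_analog_io = 0.90;

            const double chip_scale = f_sram * s_sram + f_logic * s_logic
                                    + f_analog_io * s_analog_io;
            printf("effective chip area scale: %.2fx (vs %.2fx for SRAM alone)\n",
                   chip_scale, s_sram);
            return 0;
        }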
     
  9. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,416
    Likes Received:
    178
    Location:
    Chania
    The design isn't even done yet, and that site is so self-assured that it even created a picture with a supposed SKU name. As a sidenote, it is true afaik that NVIDIA contracted TSMC for their 20nm. I just have the weird gut feeling that the supposed "news", which has been plagiarized from one website to the next and originates from a Korean site, isn't really mentioned per se in the original source.
     
  10. iMacmatician

    Regular

    Joined:
    Jul 24, 2010
    Messages:
    771
    Likes Received:
    200
    They've had names and "specs" for ten 700 series cards on their site for at least half a year now. So I'm not surprised. :lol:
     
  11. UniversalTruth

    Veteran

    Joined:
    Sep 5, 2010
    Messages:
    1,747
    Likes Received:
    22
    Why do I have the feeling that this upcoming generation was ready a loooong time ago, but NV and AMD, with other intentions, are artificially delaying the launch till Q1-Q2 2013... What is so special about this generation that they need so much time to cook it?

    And following this line of thought, I wouldn't be surprised if some people have known the specs for quite a while.
     
  12. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    4 or 5 quarters between updates is artificially long?
     
  13. keldor314

    Newcomer

    Joined:
    Feb 23, 2010
    Messages:
    132
    Likes Received:
    13
    Given that the next generation is on the same process, it's a distinct possibility.
     
  14. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    I think AMD and Nvidia are very close to each other this generation, with no major obvious flaws. New silicon on the same process can only give incremental changes, IMO. I wonder if it wouldn't be better to wait for 20nm for the next big push? Maybe just do a clock speed bump and call it a day.
     
  15. tviceman

    Newcomer

    Joined:
    Mar 6, 2012
    Messages:
    191
    Likes Received:
    0
    Nvidia probably wants to lead the 700 series with GK110, but they are probably allocating all their current production to special orders until the HPC supply chains are adequately saturated. But you are probably also partially right: there isn't a pressing need on Nvidia's end to refresh their current GPUs, given that they are getting more $$$/mm^2 than they did off Fermi dies.
     
  16. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    A possible guess, predicated on Maxwell turning out to be the chip that puts a CPU on-die:

    Wanting to get the hybrid architecture out in time for an HPC deal?
    Trying to move to self-hosting or mostly self-hosting Teslas?
    I would figure that Nvidia has a decent idea on how their design would work, but it might help to put the design through its paces sooner rather than later.
    There is doubt about the timeliness of 20nm.

    This wouldn't require that the consumer boards get Maxwell.
     
  17. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    I believe that if a coprocessor ARM CPU is used in the GPU, the benefit could only come from the software and instructions that use it. To me this looks like a really short-term solution when I consider the ambitions of the other players in this area.
     
    #118 lanek, Dec 15, 2012
    Last edited by a moderator: Dec 15, 2012
  18. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    If Maxwell has a timeline that is not served by the possible timing of 20nm, an initial wave on 28nm could be a way to reduce the risk of further schedule slips.
    Having ARM cores implemented could remove one of the notable shortcomings Tesla has against Larrabee, which has CPU cores that are at least capable of running CPU-side code on the board.
    Possibly, at some point, the accelerator board in some eventual future product would just be the motherboard, and Nvidia could remove a competitor's silicon from being a required element of all its computing products.
    CUDA software running on top of the driver layer shouldn't care what CPU is running the driver.

    It's a possible direction Nvidia could be taking, at least.
    If the host were moved on-die, the Tesla board would--with some work akin to a server version of its Tegra SoC design--look a bit like an ARM shared-nothing dense server board.
    What doesn't seem to exist yet is a solution for a cache-coherent interconnect, and the RAM pool a GPU carries is too small without the much larger capacity DIMM pool the host x86 chip currently provides.
    A memory standard like HMC might solve the capacity problem without sacrificing the bandwidth Tesla currently relies on.
    The other thing is the need for an interconnect, since Nvidia has thus far been reliant for that on the platform hosted by the inconvenient neighbor it has in the compute node.
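    To illustrate the driver-layer point: a minimal CUDA runtime program like the sketch below contains nothing host-CPU-specific, so in principle the same source builds whether the driver is hosted by an x86 or an ARM CPU. (A sketch under that assumption; toolchain support, not the source code, is the open question.)

        // Minimal CUDA host program: nothing here depends on the host ISA.
        #include <cuda_runtime.h>
        #include <cstdio>

        __global__ void scale(float *x, float a, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) x[i] *= a;
        }

        int main() {
            const int n = 1 << 20;
            float *d_x;
            cudaMalloc(&d_x, n * sizeof(float));
            cudaMemset(d_x, 0, n * sizeof(float));
            scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);
            cudaDeviceSynchronize();
            printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
            cudaFree(d_x);
            return 0;
        }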
     
  19. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    With the instructions included in ARM right now, a complete rewrite of the software would be needed... you will not be able to use it as the primary "self-host" processor without losing a lot of efficiency versus using an x86 processor to do the work. So the impact is really limited compared to what it costs you in terms of development and of resources used on the GPU. Of course Nvidia has the possibility of rewriting the software and libraries to use it for certain purposes, but is that really efficient outside of marketing?

    The main problem of CUDA is efficiency: most of the time you win by using it for computing is then lost, because you need hardware to re-encode the data so the language and libraries can use it, and then to decode and re-encode the results again in order to check them (CUDA is really not safe in terms of reliability, especially when you need a lot of precision in the results; you basically need a lot of hardware and CPU time to check the results). And I'm afraid this will become worse with an ARM processor in the loop: basically you can't use the base of software/instructions/code you normally use, it would need to be re-encoded, which pushes the problem of the CUDA API/libraries even further by forcing yet another manipulation of the data. Next to this direction, the approach of AMD and HSA looks more... safe (maybe not the word I want; maybe I should say more thoughtful).
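    A minimal sketch of the CPU-side result checking described above, with GPU compute followed by a host re-check pass (sizes and tolerance are invented):

        // GPU computes, then the host recomputes and compares every result:
        // the verification overhead the post is worried about.
        #include <cuda_runtime.h>
        #include <cstdio>
        #include <cmath>

        __global__ void square(const float *in, float *out, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) out[i] = in[i] * in[i];
        }

        int main() {
            const int n = 4096;
            static float h_in[n], h_out[n];
            for (int i = 0; i < n; ++i) h_in[i] = i * 0.001f;

            float *d_in, *d_out;
            cudaMalloc(&d_in, n * sizeof(float));
            cudaMalloc(&d_out, n * sizeof(float));
            cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);
            square<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
            cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

            // The CPU-time cost: redo the math on the host and compare.
            int mismatches = 0;
            for (int i = 0; i < n; ++i)
                if (std::fabs(h_out[i] - h_in[i] * h_in[i]) > 1e-6f) ++mismatches;
            printf("mismatches: %d of %d\n", mismatches, n);

            cudaFree(d_in); cudaFree(d_out);
            return 0;
        }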
     
    #120 lanek, Dec 15, 2012
    Last edited by a moderator: Dec 15, 2012