NVIDIA Kepler speculation thread

Discussion in 'Architecture and Products' started by Kaotik, Sep 21, 2010.

  1. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
  2. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    11,829
    Likes Received:
    2,794
    Location:
    New York
    Ha, wow, that "strike force" article really was a waste of Internet space. In all fairness, though, Fermi is a much more complex arch than Evergreen/NI. The smaller variants (GF104/114) seem to be doing great on 40nm. Besides, JHH already copped to the fact that the engineering teams fucked up on Fermi's interconnect fabric.
     
  3. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,086
    Likes Received:
    4,307
    Location:
    Finland
    http://www.digitimes.com/print/a20111026PD214.html

    Something about that smells - first we have nVidia, who have officially announced that Keplers are coming sometime next year, with production starting next year too, and who are "expected to announce it in December" - then we have AMD, who already have 28nm chips in mass production, yet are "expected to release sometime next year" :D
     
  4. CarstenS

    Legend Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,704
    Likes Received:
    3,755
    Location:
    Germany
    Speaking of selective memories…

    Both AMD and Nvidia had to find their respective ways around the problems of TSMC's 40nm process.

    You don't need to take my word for it though, take Anand's!
    http://www.anandtech.com/show/2937/9
    "The problem with vias was easy (but costly) to get around. David Wang decided to double up on vias with the RV740. At any point in the design where there was a via that connected two metal layers, the RV740 called for two. It made the chip bigger, but it’s better than having chips that wouldn’t work. The issue of channel length variation however, had no immediate solution - it was a worry of theirs, but perhaps an irrational fear.

    TSMC went off to fab the initial RV740s. When the chips came back, they were running hotter than ATI expected them to run. They were also leaking more current than ATI expected."

    Now, if you're already near the reticle limit, you cannot simply make the chip bigger, so obviously Nvidia couldn't go the RV740 route.
     
  5. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,488
    Likes Received:
    205
    Location:
    Chania
    Not that I disagree with the reasoning behind your post, but it's irritating how many obvious mistakes golem.de makes in many of its write-ups. Tape out silicon for NV is A1 and not A0.
     
  6. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,380
    It's disturbing to see how many people still believe 40nm was an Nvidia-specific problem. The fact that you don't hear about others doesn't mean the problem wasn't there.

    (Hint: 99% of TSMC's customers are not in the fanboy public eye the way AMD and Nvidia are. They also don't have BS journalists writing about them.)
     
  7. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    But it's also telling that right next to Nvidia, there was a company that was way ahead of them and had far fewer problems with the process. Which is perhaps a good reason to think nvidia botched something, even if they were standing on the shoulders of TSMC.
     
  8. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,380
    When you're dealing with issues that are clearly process related, the standard way is to flag them to the fab, make them fix it, and hope it happens before major production starts. It's unusual to change your design for it: the fab will normally already tweak some polygons for you before mask making. Making your die larger by doubling up on vias is pretty much unheard of. In this case, AMD did so anyway and reaped the benefits initially. Nvidia paid for it, but probably mostly in low-yielding GT21x products. (Does anybody here really care about those?)

    But let's not forget the other side of the case: there is no question that TSMC completely fixed the issues in early 2010, and 40nm quickly became a very stable, high-yielding process without any via-doubling monkey business. This means AMD paid an unnecessary price for this in 2010. And since GF100 was released to market only in late Q1 2010, there is no reason to believe it was a victim of the yield issue. (Something they have stated clearly in their financial disclosures.)
     
  9. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    AMD scored a home run by launching next to Win7, winning lots of market/mind/dev share. I think that was worth it, especially if you consider that AMD was coming from a position of area-efficiency advantage.

    The real point is, fanboism aside, in a competitive market you will be held to the standard of your best competitor, and not to the standard of things-as-usual. That's why I think it's reasonable (but it's not a slam dunk) to say that NV screwed the 40 nm pooch.
     
  10. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,380
    It was definitely worth it. They took a very unusual step by gambling on TSMC not getting their stuff fixed on time, and won.

    However, I think a perfect 40nm would have resulted in the same outcome, with GF100 still as late as it was. So I don't get the focus on yield... unless you believe that the GF100 tape-out date was postponed specifically due to low yields.
     
  11. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    A large die is going to yield poorly. So that factors in obviously. But beyond that, AMD bet on TSMC screwing up and won so everybody expects nv to have made the same bet.

    And GF100 had poor clocks and heat as well.
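    The "a large die is going to yield poorly" point can be made concrete with the classic Poisson defect-density yield model, Y = exp(-D0·A). A minimal sketch, assuming an illustrative defect density (the D0 value below is purely hypothetical, and the die areas are the commonly quoted approximate figures, not official numbers):

```python
import math

def poisson_yield(area_mm2: float, d0_per_cm2: float) -> float:
    """Poisson yield model: Y = exp(-D0 * A), with A in cm^2."""
    area_cm2 = area_mm2 / 100.0
    return math.exp(-d0_per_cm2 * area_cm2)

# Hypothetical defect density for an immature process (illustrative only)
d0 = 0.5  # defects per cm^2

cypress = poisson_yield(334, d0)  # AMD Cypress, ~334 mm^2 (approx.)
gf100 = poisson_yield(529, d0)    # NVIDIA GF100, ~529 mm^2 (approx.)

print(f"Cypress yield: {cypress:.1%}")  # roughly 19%
print(f"GF100 yield:   {gf100:.1%}")    # roughly 7%
```

    Under the same (made-up) defect density, the ~60% larger die loses well over half its good dice, which is why the yield disparity alone doesn't prove anything about either company's engineering.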
     
  12. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,380
    That factors in what?

    If Nvidia didn't delay GF100 tape out to work around yield issues, one could say they made the best possible decision: after all, by the time they went to production, those issues were history.

    So making the same bet would have been even worse (for GF100). So do we agree that 'everybody' was wrong?

    Uhm, yes, sure.
     
  13. Jaaanosik

    Newcomer

    Joined:
    May 18, 2008
    Messages:
    146
    Likes Received:
    0
    Fermi is using double vias as well AFAIK.
     
  14. CarstenS

    Legend Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,704
    Likes Received:
    3,755
    Location:
    Germany
    Haven't heard of that before, and neither, it seems, has Google - at least from a credible source rather than some random forum posts, which indeed show up when googling "Fermi GF100 "double vias"".
     
  15. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,380
    Me neither. And how can GF100 be considered reticle size limited?
     
  16. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    People's focus on yields. They see considerable disparity and latch onto it.
    If they hadn't spun the Bx series, I would have agreed. They needed a silicon spin to really fix Fermi, which came almost a year behind AMD. If TSMC had fixed the process by March 2010 and NV's engineering was all right, then why was a Bx spin necessary?

    An indicator of inadequate impedance matching between process and architecture, right?
     
  17. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    Are you sure about that? I don't remember hearing about it.
     
  18. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,380
    The 40nm process is of exceptional quality right now, without any monkey business. Yes, I am sure about that. So are the tons of companies that have rolled out their 40nm chips.

    As for the exact timing of when that happened: I don't know first hand, but various press releases and financial filings put it at the end of Q4 2009/Q1 2010. It was certainly not much later.

    There has never been an initially broken process that wasn't fixed eventually. 40nm was no different, it just took a little longer.
     
  19. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,380
    The B part was faster and consumed less power. Nvidia said they used new lower-leakage cells. Sounds like reason enough to me, especially because the A version's power requirements likely prevented them from productizing a full-SM part.

    The via related fallout probably had no influence over speed and power consumption.
     
  20. CarstenS

    Legend Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,704
    Likes Received:
    3,755
    Location:
    Germany
    Isn't it at about 550 mm²?
     