NVIDIA Kepler speculation thread

Discussion in 'Architecture and Products' started by Kaotik, Sep 21, 2010.

  1. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    'Tis but a scratch!
     
  2. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,112
    Location:
    New York
    Would appreciate a source for that 300W target, other than your ass of course :p

    And you're not following the discussion. We're talking about their goals, not the contingency plans. Dave was suggesting that the current state of affairs is all according to plan. Anyway, my point is that Nvidia's claims about Kepler's increased perf/W over Fermi don't say much, since they obviously missed their perf/W target this time around. Simply not screwing up as badly will get them halfway there.
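
    As a back-of-the-envelope sketch of that last point (every number below is a made-up assumption, purely to illustrate the arithmetic, not actual GF100 or Kepler data):

# Hypothetical numbers only, to illustrate the "halfway there" arithmetic.
fermi_target_perf_per_watt = 1.00   # assumed perf/W GF100 was designed to hit (normalized)
fermi_actual_perf_per_watt = 0.70   # assumed perf/W the shipping GTX 480 delivered
kepler_claimed_gain        = 2.00   # assumed marketing claim vs. shipping Fermi

# Merely hitting the original Fermi efficiency target already recovers a big
# chunk of the claimed Kepler improvement:
recovered = fermi_target_perf_per_watt / fermi_actual_perf_per_watt   # ~1.43x
share = (recovered - 1.0) / (kepler_claimed_gain - 1.0)               # ~43% of the claimed gain
print(f"not screwing up again buys {recovered:.2f}x, i.e. {share:.0%} of the claimed 2x")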
     
  3. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    Well, that depends. Where was the screw-up, exactly? Is it a process thing, or a deeper problem with the architecture? GF104 would suggest it's a little bit of both: they made changes to the architecture, sacrificed a few things (including DP and geometry performance), probably improved the physical implementation, and leaned on a more mature process. Overall, GF104 does a lot better than GF100 in perf/W (in games anyway; I'm not so sure about GPU computing with DP) but still falls short of Cypress.

    So yes, the process part is just a matter of not screwing up as bad as they did with Fermi. But the architecture part? If the architecture is intrinsically not power-efficient, then they have a much bigger problem.

    Of course, it's all relative. In games, Fermi is clearly not power-efficient. In GPU computing, it's probably a bit more efficient than Cypress (it will be interesting to see how it fares against Cayman). In HPC as a whole, it's more efficient than CPUs in suitable workloads.
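
    To put rough numbers on the games comparison (the TDPs are the official board figures; the relative performance indices are my own ballpark guesses, for illustration only):

# Board TDPs are official figures; the relative game performance is an assumed ballpark.
cards = {
    # name                  (TDP in W, assumed game perf, GTX 480 = 1.00)
    "GTX 480 (GF100)":      (250, 1.00),
    "GTX 460 1GB (GF104)":  (160, 0.70),
    "HD 5870 (Cypress)":    (188, 0.90),
}
for name, (tdp, perf) in cards.items():
    print(f"{name:22s} perf/W = {perf / tdp * 1000:.1f} (arbitrary units)")
# With these guesses GF104 lands clearly above GF100 but still below Cypress.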
     
  4. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    No, I didn't say that.
     
  5. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    Hmm, I hope the stated ~300W TDP above isn't being taken at face value. They themselves claim, for their own reasons, only a 250W TDP, and the question here isn't whether that figure is realistic but whether the real power consumption the GTX 480 ended up with was within their real intentions.

    I can't help but think that they ran into problems and that power consumption ended up quite a bit outside any of their early expectations. Yes, they obviously went from the get-go for a 6-pin + 8-pin board, but on the other hand so did the GTX 280, and it had only a "real" 238W TDP.

    If they really didn't care whether a GPU burns close to 300W, then I'm honestly wondering why they claim only 250W for the GTX 480 in the first place.
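
    For reference, the connector arithmetic behind that (these are the PCI Express spec ceilings, not measured draw):

# Maximum board power allowed by the PCI Express spec, per power source.
PCIE_SLOT = 75    # W from the x16 slot
PEG_6PIN  = 75    # W per 6-pin connector
PEG_8PIN  = 150   # W per 8-pin connector

ceiling_6_plus_8 = PCIE_SLOT + PEG_6PIN + PEG_8PIN   # 300 W
print(ceiling_6_plus_8)
# Both GTX 280 and GTX 480 use a 6-pin + 8-pin layout, so both get the same
# 300 W ceiling, yet GT200 shipped with a ~238 W TDP and GF100 claims 250 W.
# The connector choice alone doesn't tell you what power target they designed for.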
     
  6. Jaaanosik

    Newcomer

    Joined:
    May 18, 2008
    Messages:
    146
    Likes Received:
    0
    I am surprised that you don't give Cypress credit for pushing Fermi.
    There is no way Fermi is within its planned TDP, and you've got to take the blame for that. :)
     
  7. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    LOL (for the obvious joke). You're missing one significant point, though: if a GTX 480 had consumed somewhere around 230W real TDP under all the constraints it was shipped with, it would have offered a pretty unimpressive step up for a new generation against GT200. Ignore the competition for a second: how do you sell that to a GT200 owner?
     
  8. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
    I'm sure I'm not the average case of a GT200 owner, but I'm looking very hard at both GF100 and GF104, if for no other reason than to run Folding@Home and get a solid points boost (double or better).
     
  9. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    Well, if the GTX 480 were to draw ~230W, it would be the GTX 470. Not a hugely attractive upgrade over the GTX 285, but not horribly slow compared to the GT200 generation as a whole.

    I always considered the GTX 470 to be the "real" GF100, so to speak: it has a more reasonable (if still excessive) TDP and almost decent yields, making it an actual product versus the somewhat vaporous GTX 480.
     
  10. Jaaanosik

    Newcomer

    Joined:
    May 18, 2008
    Messages:
    146
    Likes Received:
    0
    It still would have been faster than GT200, plus DX11 features, better GPGPU, ...
    We all know how NV marketing can work miracles.
     
  11. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    We all know exactly how the halo around a flagship product works on the average consumer. Imagine what kind of reception the Fermi line would have gotten if their top dog had been roughly a GTX 470 equivalent and the second-best runner another tad below that.

    Well, the average consumer is hardly running Folding@Home, so you've basically placed yourself outside the average consumer base already. Since my original comment was about power consumption, GF104 is a totally different chapter in that regard. Against GT200 (depending also on whether it's the 65nm or 55nm chip), the perf/W advantage of GF104 should be quite significant.
     
  12. Luminair

    Newcomer

    Joined:
    May 29, 2002
    Messages:
    11
    Likes Received:
    0
    You're not following Dave. The GF100 Fermi power requirement is fine. It may seem high to you, but it is within the PCIe spec, a reasonable goal that AMD shares.

    Nvidia just expected more performance to make the power worth it.
     
  13. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,112
    Location:
    New York
    Tell me the power requirement for a fully enabled Fermi at target clocks and then tell me if it falls within Nvidia's expectations. You're missing the point. It isn't that they were able to scramble and get power consumption down, the point is that they had to scramble - i.e. they missed their targets.

    I think what you mean is: "The power requirement of the hobbled, downclocked GF100 Fermi with a monster cooler is fine."
     
  14. Luminair

    Newcomer

    Joined:
    May 29, 2002
    Messages:
    11
    Likes Received:
    0
    What the shit. You don't follow at all. I suggest you study what Baumann is saying more carefully. Resuming radio silence.
     
  15. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,112
    Location:
    New York
    Considering it was my comment that started this particular train of discussion that's not really possible. But yes, please do resume and maintain radio silence.
     
  16. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,418
    Likes Received:
    10,311
    And so you keep missing Dave's original point. Nvidia quite possibly had design goals with a max TDP in the neighborhood of where the GTX 480/470 ended up. Those same design goals also included fully enabled chips, probably running at higher clocks than what we currently have.

    When the chips came back, which part of the original design goals would most likely have been kept if that were the case? Certainly not the fully enabled chip or the higher clock speeds, as that would have put the TDP way over not only the original design goals but also the PCIe specs for certification.

    Now, do we know without a doubt that Nvidia was planning on such a high TDP? No. However, as Dave was speculating, the large die size indicates that this was most likely the case, as I'm sure Nvidia's engineers aren't so naive as to think a large, densely packed chip running at high clocks was somehow going to sip power on the order of Cypress.

    In other words, they were almost certainly aiming for a higher TDP than Cypress, but were also probably aiming for similar or better perf/W.
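
    A crude first-order sketch of that trade-off, using the usual dynamic-power scaling (power roughly proportional to active units x frequency x voltage squared). The shipped GTX 480 figures (480 of 512 ALUs, 1401MHz hot clock) are real; the "planned" clock and both voltages are pure guesses for illustration:

# Very rough dynamic-power model: P ~ k * active_units * f * V^2.
# GTX 480 shipped with 480 of 512 ALUs at a 1401 MHz hot clock; the planned
# clock and both voltages below are illustrative guesses, not design data.
def dyn_power(units, freq_mhz, volts, k=1.0):
    return k * units * (freq_mhz / 1000.0) * volts ** 2

planned = dyn_power(units=512, freq_mhz=1500, volts=1.05)  # fully enabled, guessed target clocks
shipped = dyn_power(units=480, freq_mhz=1401, volts=1.00)  # what the GTX 480 actually shipped as

print(f"shipped / planned = {shipped / planned:.2f}")  # ~0.79 with these guesses
# Disabling one SM and backing off clocks/voltage claws back on the order of
# 20% of dynamic power, the kind of pull-back needed to land near a TDP goal.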

    Regards,
    SB
     
  17. larrabee

    Newcomer

    Joined:
    Dec 21, 2009
    Messages:
    29
    Likes Received:
    0
    It's plausible at best that 250 watts was their target TDP. ASICs are designed with power consumption as a parameter, but that doesn't account for manufacturing: parametric yield is likely to be very poor, which is why the GTX 480 overclocks well with good cooling, e.g. H2O/LN2. 2.8GHz ain't bad at all. :grin:
     
  18. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,112
    Location:
    New York
    I know English is a difficult language, but man, you guys are taking it to another level. Fermi does not refer to "where GTX 480 ended up". Fermi = fully enabled GF100 at target clocks, which is obviously not a GTX 480.

     
  19. caveman-jim

    Regular

    Joined:
    Sep 19, 2005
    Messages:
    305
    Likes Received:
    0
    Location:
    Austin, TX
    IMO, the GTX 480 and 470 had target performance and TDP goals, both of which involved the sacrifices we see in GF100's implementation of the Fermi architecture.

    To say the 480 was targeted at the TDP it ended up with is to say that was the trade-off for the performance desired; it represents the design goals.
     
  20. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,418
    Likes Received:
    10,311
    Really, I'm not sure why this is so difficult for you to understand. When designing the chip there were performance goals as well as a thermal envelope (TDP goals).

    When they got Fermi back, which of those two goals would have been the most realistic to stick to?

    Or do you seriously think that Nvidia's engineers were so naive and incompetent as to think they were going to get, say, a 150-watt TDP from a chip as large and densely packed as Fermi at the performance goals they were targeting?

    So when they got it back, they had a choice: meet the performance goals, which would have meant a TDP over 300 watts, or meet their TDP goals, which meant lower performance than they were shooting for.

    It really is that simple.

    Regards,
    SB
     