NVIDIA GF100 & Friends speculation

Discussion in 'Architecture and Products' started by Arty, Oct 1, 2009.

  1. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    read, did you?

     
  2. RecessionCone

    Regular Subscriber

    Joined:
    Feb 27, 2010
    Messages:
    505
    Likes Received:
    189
    There's nothing debatable about it. Remember the GTX580 has 16/15 * 772/700 = 17.6% more throughput. By all rights, it should consume at least 17.6% more power. It doesn't, even with the GTX 480 cooler.
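    The arithmetic above can be double-checked directly; a quick sketch, using the unit counts and clocks as quoted in the post:

```python
# GTX 580 vs GTX 480 theoretical throughput: 16 SMs vs 15 SMs,
# 772 MHz vs 700 MHz core clock, as quoted in the post above.
sm_ratio = 16 / 15
clock_ratio = 772 / 700
throughput_gain = sm_ratio * clock_ratio - 1
print(f"{throughput_gain:.1%}")  # 17.6%
```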
     
  3. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    Quick facts for the brainwashed: the 580 with OCP disabled consumes 350W
    [image: FurMark power-consumption graph]

    The 480 in TPU's review topped out at what, 320W? So it's only a ~10% increase there.
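    Taking the two power figures in this post at face value (an assumption; the measurement setups differ), the implied perf/W change works out as:

```python
# Hypothetical comparison using the figures quoted in the thread: GTX 580 at
# ~350W (OCP disabled) vs GTX 480 at ~320W, against the ~17.6% theoretical
# throughput advantage. The numbers are the thread's, not independently measured.
power_ratio = 350 / 320                       # ~ +9.4% power
throughput_ratio = (16 / 15) * (772 / 700)    # ~ +17.6% throughput
perf_per_watt_gain = throughput_ratio / power_ratio - 1
print(f"+{perf_per_watt_gain:.1%} perf/W")    # ~ +7.6%
```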
     
    #7523 neliz, Nov 23, 2010
    Last edited by a moderator: Nov 23, 2010
  4. PSU-failure

    Newcomer

    Joined:
    May 3, 2007
    Messages:
    249
    Likes Received:
    0
    To be fair, you have to use a fixed temp, assuming temp derating is similar (~2W/°C).

    In fact, even at max power draw the GTX580 throttles in FurMark sooner than the GTX480 (97°C vs 105°C iirc).

    Summing it up, the ~20% efficiency improvement shrinks to roughly 0% (under 10% at the very most).
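    The fixed-temperature normalization this post argues for can be sketched as follows; the ~2 W/°C figure is the post's own linear approximation, and the example temperatures are purely illustrative, not measured values:

```python
# Normalize a measured power draw to a common die temperature using the
# post's linear ~2 W/degC derating assumption (real leakage is nonlinear,
# as pointed out later in the thread).
def power_at_temp(measured_w, measured_temp_c, target_temp_c, derate_w_per_c=2.0):
    """Estimated power at target_temp_c, given a measurement at measured_temp_c."""
    return measured_w + derate_w_per_c * (target_temp_c - measured_temp_c)

# Illustrative (not measured) temperatures: the 580's better cooler runs cooler,
# so normalizing both cards to a common 90 degC narrows the apparent gap.
gtx580_w = power_at_temp(350, 82, 90)  # 366.0 W at 90 degC
gtx480_w = power_at_temp(320, 94, 90)  # 312.0 W at 90 degC
```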


    I don't know if the resulting higher efficiency is due to some process work, "fully enabled" die, power delivery redesign or a combination of these, but it's clearly not sufficient to imply a redesign.
     
  5. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    Anyone tried the GTX580 cooler on GTX480?
     
  6. RecessionCone

    Regular Subscriber

    Joined:
    Feb 27, 2010
    Messages:
    505
    Likes Received:
    189
    Temp derating is a) nonlinear, so using a linear approximation is shaky, b) design dependent, so assuming it's the same to prove the design is still the same is circular reasoning.

    I think we can all agree that GF110 is a minor reworking of GF100. Personally, I would be more comfortable calling it GF100b. But clearly, GF110 is a different chip from GF100 - it has a different die size and a few feature tweaks. Nvidia explicitly claimed they reworked the transistor voltage thresholds, and I see no reason to invent conspiracy theories to prove they didn't.
     
  7. RecessionCone

    Regular Subscriber

    Joined:
    Feb 27, 2010
    Messages:
    505
    Likes Received:
    189

    Using Furmark to evaluate power consumption is misleading. All chip designers optimize their circuits for the common case, and then ensure that corner cases don't break the chip. Take a look at power consumption in real life games, and you'll see that GF110 is more power efficient than GF100. This isn't surprising - all Nvidia's efforts to improve GF100 have rightly been focused on games, not synthetic benchmarks like Furmark.
     
  8. AlexV

    AlexV Heteroscedasticitate
    Moderator Veteran

    Joined:
    Mar 15, 2005
    Messages:
    2,535
    Likes Received:
    144
    Any reasonably recent CPU is quite thoroughly fine with violently running Linpack or all the other "burn" variations of it, without violating spec, and without relying on lame mechanisms like app-detection - is Furmark significantly different as a concept?
     
  9. caveman-jim

    Regular

    Joined:
    Sep 19, 2005
    Messages:
    305
    Likes Received:
    0
    Location:
    Austin, TX
    I recall it wasn't too long ago that it was possible to voltage- or heat-death a processor under F@H/Linpack/IBT. AMD and Intel introduced throttling and thermal protection (hard off, clock throttle, etc.) to prevent this, then started putting better coolers and higher TDPs on chips. I think we're just seeing the start of a similar ramp-up of the same (or better) technologies.
     
  10. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,455
    Likes Received:
    471
    Power consumption depends (among other things) on GPU temperature, so it's related to cooling. Even this FurMark graph shows that the longer the application runs, the higher the power consumption is. So it's really hard to judge the power efficiency of two GPUs equipped with significantly different coolers. Until somebody takes the GTX480 cooler, puts it on a GTX580 and tests it in e.g. Crysis or Medal of Honor, I won't be convinced that the GPU is significantly more power-efficient.
     
  11. Florin

    Florin Merrily dodgy
    Veteran Subscriber

    Joined:
    Aug 27, 2003
    Messages:
    1,707
    Likes Received:
    345
    Location:
    The colonies
    I think it's ridiculous that this mechanism is relying on app detection, however. It needs to be general purpose - Nvidia is fighting the symptoms, not the actual issue.
     
  12. RecessionCone

    Regular Subscriber

    Joined:
    Feb 27, 2010
    Messages:
    505
    Likes Received:
    189
    Exactly. To be honest, Nvidia's choice of using OCCT/Furmark app detection strikes me as a complete kludge. The right thing to do is just power limit the chip dynamically: monitor how much power you're burning and then throttle softly and dynamically to make sure you stay at power budget. App detection is the wrong solution to this problem, I think Intel CPUs have done the right thing here and I expect GPUs to follow suit.
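    The dynamic power limiting described here is essentially a feedback loop; a minimal sketch of one control-loop iteration, where the step size and clock limits are made-up illustrative values, not Nvidia's:

```python
# One iteration of a soft dynamic power limiter: compare measured board power
# against the budget and nudge the clock target accordingly. Step size and
# clock limits are illustrative, not actual GPU values.
def power_limit_step(current_mhz, board_power_w, budget_w,
                     step_mhz=13, min_mhz=405, max_mhz=772):
    if board_power_w > budget_w:
        return max(min_mhz, current_mhz - step_mhz)    # over budget: throttle softly
    if board_power_w < 0.95 * budget_w:
        return min(max_mhz, current_mhz + step_mhz)    # headroom: clock back up
    return current_mhz                                 # within deadband: hold
```

    Run at a fixed interval against a real power sensor, a loop like this converges on the highest clock that stays within budget, regardless of which application is running - no app detection needed.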
     
  13. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    Doing app detection is a lot simpler, because it doesn't require any kind of monitoring hardware, which would be necessary for anything more sophisticated. I don't think they expected to run into such power issues when they started designing Fermi, so they didn't include it; plus, they had enough on their plate already, and were rather area-constrained. Perhaps that will change with Kepler or Maxwell.

    That said, app detection seems to work fine, so far.
     
  14. Florin

    Florin Merrily dodgy
    Veteran Subscriber

    Joined:
    Aug 27, 2003
    Messages:
    1,707
    Likes Received:
    345
    Location:
    The colonies
    I appreciate that it is easier, Alex, but the current solution actually does seem to rely on monitoring hardware - which just stays inactive unless (particular versions of) Furmark or OCCT are detected. Various reviews have demonstrated/measured this, and here's an image from Hexus which highlights the hardware side of things:

    [image: Hexus photo of the card's current-monitoring hardware]

    What disadvantages would there be to simply having this on all the time, like in CPUs? (Back in the day, CPUs' thermal protections were external to the die rather than internal, too.)
     
  15. caveman-jim

    Regular

    Joined:
    Sep 19, 2005
    Messages:
    305
    Likes Received:
    0
    Location:
    Austin, TX
    As far as I can tell your argument appears to be 'cooler transistors use less power so the improved cooling on the GTX 580 makes the GF110 appear to have better perf/w than the 480'. Is that correct?
     
  16. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    I don't know what is actually happening, but a reason that you may app-detect for something like this could be that the driver may need to poll the current monitoring / regulator devices for activity, which can chew up CPU cycles; you certainly don't want to do this for all apps. The solution we implemented on Evergreen feeds directly into the microcontroller we have on the GPU, taking away any potential driver overhead and making the solution truly generic.
     
  17. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83
    What happens when someone writes a new app? People set fire to their cards until Nvidia brings out a driver update?
     
  18. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,245
    Likes Received:
    4,465
    Location:
    Finland
    Or uses an older version of said apps; the protection doesn't work on older versions of FurMark, for example.
     
  19. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    20,516
    Likes Received:
    24,424
    It is possible Nvidia's new drivers will cause their cards to melt even then. It has happened in the past. :wink:
     
  20. dkanter

    Regular

    Joined:
    Jan 19, 2008
    Messages:
    360
    Likes Received:
    20
    They really are doing the detection in SW and looking for a specific application that causes overheating. That's like virus scanning. If you haven't seen the problem application before, you aren't protected!

    CPUs have been using dynamic systems for a long time now that protect against arbitrary programs overheating the CPU. Intel designed such a system in Montecito (and had to disable it), Tukwila, Nehalem, Sandy Bridge, etc., and AMD has designed one in Llano, Bobcat and Bulldozer.

    David


    PS: Yes, cooler transistors leak less. So all things being equal, if you improve cooling you will lower the power consumption of the chip. However, more cooling may use more system power (i.e. a bigger fan).
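    The "cooler transistors leak less" point is usually modeled as roughly exponential growth of leakage with temperature. A back-of-the-envelope sketch, where the 10 °C doubling constant is a generic rule of thumb, not a measured GF100/GF110 figure:

```python
# Rule-of-thumb leakage model: leakage power roughly doubles every ~10 degC.
# The doubling constant is a generic textbook figure, not chip-specific data.
def leakage_scale(temp_c, ref_temp_c=90.0, doubling_c=10.0):
    """Leakage power relative to its value at ref_temp_c."""
    return 2.0 ** ((temp_c - ref_temp_c) / doubling_c)

print(leakage_scale(80))  # 0.5: running 10 degC cooler roughly halves leakage
```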
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.