NVIDIA GF100 & Friends speculation

However, more cooling may use more system power (i.e. a bigger fan).

The vapor chamber cooler doesn't cost them a single watt. You don't need a bigger fan when you can increase the area of the cooling fins or change materials for better heat conduction.
 
I appreciate that it is easier, Alex, but the current solution actually does seem to rely on monitoring hardware, which just stays inactive unless (particular versions of) Furmark or OCCT are detected. Various reviews have demonstrated/measured this, and here's an image from Hexus which highlights the hardware side of things:

[image]

What disadvantages would there be to simply having this on all the time, like in CPUs? (Back in the day, their thermal protection was external to the CPU die rather than on-die as well.)

My bad, I thought it was purely software but it's clearly not. I think Dave's explanation makes sense, though.

What happens when someone writes a new app? People set fire to their cards until Nvidia brings out a driver update?

In theory, that could be a problem. But realistically, power viruses don't just pop up every week. With Furmark and OCCT occupying that space, I doubt we'll see anything else appearing any time soon. Even if we did, NVIDIA is usually pretty quick to react.

Plus, people who run power viruses usually know what they're doing.

PS: Yes, cooler transistors leak less. So all things being equal, if you improve cooling you will lower the power consumption of the chip. However, more cooling may use more system power (i.e. a bigger fan).


In this case the fan is slightly bigger (if I'm not mistaken), but I think the cooler's higher efficiency is mostly due to its vapor chamber.
 

I thought the rated power of the 580 fan was 8 to 10 W lower, but I couldn't actually be bothered to check.
 
I think rated power is fairly meaningless since they pretty much never reach their max. If they did, people would go deaf :).
Maybe that's the reason for the power limiter after all; otherwise the fan noise wouldn't be within safety regulations :).
 
Thanks for the link. Figure 2 nicely illustrates the non-linearity of leakage power as a function of temperature. Assuming 2 °C/W, as a previous poster did, imposes a linear model, which the curve in figure 2 suggests would lead to a large overestimate of the power saved by lower temperatures.

That figure also goes from 0°C to 120°C. From 70°C to 90°C, which is the range we're interested in here, it's not exactly linear, but assuming that it is wouldn't throw the results off significantly.
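To put a number on that, here's a minimal sketch (purely illustrative coefficients, not measured GF110 data) comparing a simple exponential leakage model against a straight line fitted over the 70-90 °C window; the linear approximation stays within about a watt of the exponential curve in that range:

```python
import math

# Illustrative exponential leakage model: P_leak(T) = P0 * exp(k * (T - T0)).
# P0 and k are made-up values for the sake of the comparison, not GF110 data.
P0, T0, k = 30.0, 70.0, 0.02   # 30 W of leakage at 70 C, growing ~2% per degree

def leak_exp(t):
    return P0 * math.exp(k * (t - T0))

# Linear model fitted to the endpoints of the 70-90 C window
slope = (leak_exp(90.0) - leak_exp(70.0)) / 20.0

def leak_lin(t):
    return leak_exp(70.0) + slope * (t - 70.0)

for t in (70, 75, 80, 85, 90):
    e, l = leak_exp(t), leak_lin(t)
    print(f"{t} C: exp {e:5.1f} W  linear {l:5.1f} W  error {l - e:+.1f} W")
```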
 
Maybe it isn't related only to GPU temperature, but also to the temperature of the VRMs. I remember a test showing that different coolers tested on an HD 2900 XT 512MB GDDR3 (TDP 215W) resulted in up to 20W difference in power consumption under load (the higher the RPM, the lower the power consumption).

I think it's safe to say that different coolers on the GTX 480/580 can also result in ~10% different power consumption.
 
I already pointed that out.

A redesigned PCB around the power delivery area, refined VRM circuitry, and better cooling together could be the only improvements as far as power efficiency goes.

When you have 20A flowing between the PEG aux plugs and the VRM, and something like 200A between the VRM and the GPU, Ohm's law isn't that meaningless. Hell, 1 milliohm translates to 40 watts of power dissipation in the PCB alone.
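For reference, the arithmetic behind that figure is just P = I²R, using the round-number current quoted above and an assumed 1 milliohm of plane resistance:

```python
# Resistive loss in the PCB: P = I^2 * R.
# The 200 A figure is the rough VRM-to-GPU current quoted above; 1 milliohm is
# an assumed example resistance for the power plane, not a measured value.
current_a = 200.0       # amps between the VRM and the GPU
resistance_ohm = 0.001  # 1 milliohm of plane/trace resistance

power_loss_w = current_a ** 2 * resistance_ohm
print(f"{power_loss_w:.0f} W dissipated in the PCB")   # 200^2 * 0.001 = 40 W
```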
 
Looks like the 570 is still 1280MB

That's not too good, but I guess they have to differentiate the 570 from the 580 somewhat, other than killing one or two SMs. I just hope it ends up close to the 480 performance level, with better thermals and power draw. Keeping the same HSF would be very kind of them as well.

I reckon that 1280MB of video RAM will be more than enough for up to 1920X resolutions for another year at least.
 
Any bets on clocks? I'm thinking somewhere right around 480 levels. 580 would still be a good deal faster and 570 would be a decent replacement for 470.
 

The 480 is 25% faster than the 470, and my best guess is that Nvidia may want to keep the same gap between the 580 and 570. That would put the 570 below the 480 while, as you said, still being a decent replacement card without enraging 480 owners.

TBH I'm not sure whether Nvidia cares more about enraging previous-series owners or about countering AMD effectively, so they may want to let AMD launch their cards before launching theirs. This could be a possible reason why AMD delayed their 69XX cards: it doesn't give Nvidia enough time to react and decide whether the 570 will be a -1 SM or a -2 SM part in time to catch the Christmas shopping spree. If that were the case, I would launch a 570 and a 575 at the same time! Too much segmentation? Maybe!

In any case, returning to the original 25% performance difference, I think the 570 will be a 580 minus 2 SMs at 580 clocks, with 470 specs otherwise (ROPs, bus, framebuffer).
 
Somewhat higher.

That would put it really, really close to the GTX 580… I'm expecting something like 650~675/1300~1350, which would put it very slightly below the 480, accounting for the fixed TMUs. Basically, a GTX 480 with a power draw similar to that of the 470.
 
GeForce GTX 570 Specifications, Release Date Leaked:

http://techpowerup.com/135450/GeForce-GTX-570-Specifications-Release-Date-Leaked.html

On a day when a lot of other geopolitical things are being leaked, our friends from Sweden found the specifications sheet of NVIDIA's new upcoming high-end graphics accelerator, the GeForce GTX 570. The GTX 570 will be a deputy to the company's recently released GeForce GTX 580. It is based on the GF110 graphics processor, with 480 CUDA cores enabled and a 320-bit wide GDDR5 memory interface holding 1280 MB of memory. At this point it looks like a cross between the GTX 480 and the GTX 470, but the equations take a turn when clock speeds step in: 732 MHz core, 1464 MHz CUDA cores, and 950 MHz (3800 MHz effective) memory, churning out 152 GB/s of memory bandwidth. Power consumption is rated at 225W. NVIDIA's upcoming accelerator is slated for release on 7th December, just five days ahead of AMD's Radeon HD 6900 series launch.
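As a quick sanity check on those leaked numbers, the quoted bandwidth follows directly from the memory clock and bus width:

```python
# Memory bandwidth = effective data rate * bus width in bytes.
# 950 MHz GDDR5 is quad-pumped, so 3800 MT/s effective.
data_rate_mtps = 3800
bus_width_bits = 320

bandwidth_gb_s = data_rate_mtps * (bus_width_bits / 8) / 1000
print(f"{bandwidth_gb_s:.0f} GB/s")   # 3800 * 40 / 1000 = 152 GB/s
```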
 
Those clocks seem quite high; the last rumours indicated clocks just shy of 700 MHz.

And what about yields? Surely they need another lower-binned part to harvest more dies. Or maybe those will be used for Quadro/Tesla.
 
Hmm, with these clocks it looks like it should perform really close to the GTX 480: it loses roughly 13-14% memory bandwidth (and ROP throughput), but the core/shader clocks are about 4-5% higher.
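A rough back-of-the-envelope comparison, assuming reference GTX 480 clocks (700/1401 MHz core/shader, 924 MHz memory on a 384-bit bus, 48 ROPs) against the leaked 570 figures, lands in that ballpark:

```python
# Leaked GTX 570 figures vs. reference GTX 480 figures (the 480 numbers are the
# usual published reference specs, quoted here from memory).
gtx570 = {"core": 732, "shader": 1464, "mem_mtps": 3800, "bus_bits": 320, "rops": 40}
gtx480 = {"core": 700, "shader": 1401, "mem_mtps": 3696, "bus_bits": 384, "rops": 48}

def bandwidth_gb_s(card):
    return card["mem_mtps"] * card["bus_bits"] / 8 / 1000

bw    = bandwidth_gb_s(gtx570) / bandwidth_gb_s(gtx480) - 1
rops  = (gtx570["rops"] * gtx570["core"]) / (gtx480["rops"] * gtx480["core"]) - 1
clock = gtx570["shader"] / gtx480["shader"] - 1

print(f"bandwidth {bw:+.1%}, ROP throughput {rops:+.1%}, shader clock {clock:+.1%}")
# roughly -14%, -13%, +4.5%
```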
 