However, more cooling may use more system power (i.e. a bigger fan).
The vapor chamber cooler doesn't cost them a single watt. You don't need a bigger fan when you can increase the area of the cooling fins or change materials for better heat conduction.
I appreciate that it is easier Alex, but the current solution actually does seem to rely on monitoring hardware - which just stays inactive unless (particular versions of) Furmark or OCCT are detected. Various reviews have demonstrated/measured this, and here's an image from Hexus which highlights the hardware side of things:
[image]
What disadvantages would there be to simply having this on all the time, like in CPUs? (Back in the day their thermal protections were external to the CPU die instead of internal as well.)
What happens when someone writes a new app? People set fire to their cards until Nvidia brings out a driver update?
PS: Yes, cooler transistors leak less. So all things being equal, if you improve cooling you will lower the power consumption of the chip. However, more cooling may use more system power (i.e. a bigger fan).
In this case the fan is slightly bigger (if I'm not mistaken), but I think the cooler's higher efficiency is mostly due to its vapor chamber.
I think rated power is fairly meaningless since they pretty much never reach their max. If they did, people would go deaf. I thought the rated power of the 580's fan was 8 to 10 W lower, but I couldn't actually be bothered to check.
Razor1 said:
Thanks for the link. Figure 2 nicely illustrates the non-linearity of leakage power as a function of temperature. Assuming 2 C/W as a previous poster did imposes a linear model, which the curve in figure 2 suggests would lead to a large overestimation of power saved by lower temperature.
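Since the figure isn't reproduced here, a toy sketch of the point (all constants invented for illustration, not taken from the linked paper): leakage power is roughly exponential in temperature, so a linear fit taken at the hot end overstates the power saved by a large temperature drop.

```python
import math

# Toy leakage model: leakage power grows roughly exponentially with
# temperature (a common first-order approximation). The constants are
# invented for illustration; they are NOT from the linked paper.
def leakage_exp(temp_c, p0=30.0, t0=90.0, k=0.02):
    """p0 watts of leakage at t0 degrees C, exponential factor k per degree."""
    return p0 * math.exp(k * (temp_c - t0))

# A linear model fit at the hot end has slope dP/dT = k * p0 at t0.
slope_w_per_c = 0.02 * 30.0  # 0.6 W per degree C

for temp_c in (90, 80, 70):
    exp_w = leakage_exp(temp_c)
    lin_w = 30.0 - slope_w_per_c * (90 - temp_c)
    print(f"{temp_c} C: exponential {exp_w:.1f} W, linear {lin_w:.1f} W")
```

With these made-up numbers, a 20 C drop saves about 10 W on the exponential curve but 12 W on the linear fit, and the gap widens with bigger drops, which is exactly the overestimation being described.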
I already pointed that out. I think it's possible that different coolers on the GTX 480/580 can also result in ~10% different power consumption.
Hell, 1 milli-Ohm translates to 40 watt power dissipation in the PCB alone.
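That 40 W figure follows from P = I²R assuming roughly 200 A of core current, which is plausible for a 200+ W card at ~1 V core voltage (the current value is my assumption, not stated in the post):

```python
# Rough I^2 * R check for PCB/power-plane resistance. The 200 A core
# current is an assumed round number (~200 W at ~1 V), not a measurement.
current_a = 200.0          # assumed core current, amps
resistance_ohm = 1e-3      # 1 milli-ohm of trace/plane resistance
power_w = current_a**2 * resistance_ohm
print(power_w)             # -> 40.0 watts dissipated in the PCB alone
```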
Looks like the 570 is still 1280MB
Any bets on clocks? I'm thinking somewhere right around 480 levels. 580 would still be a good deal faster and 570 would be a decent replacement for 470.
Any bets on clocks? I'm thinking somewhere right around 480 levels.
Somewhat higher.
That would put it really really close to the GTX 580
Really close to the GTX 480, to be exact.
On a day when a lot of other geopolitical things are being leaked, our friends from Sweden found the specifications sheet of NVIDIA's upcoming high-end graphics accelerator, the GeForce GTX 570. The GTX 570 will be a deputy to the company's recently-released GeForce GTX 580. It is based on the GF110 graphics processor, with 480 CUDA cores enabled and a 320-bit wide GDDR5 memory interface holding 1280 MB of memory. At this point it looks like a cross between the GTX 480 and the GTX 470, but the equation takes a turn when clock speeds step in: 732 MHz core, 1464 MHz CUDA cores, and 950 MHz (3800 MHz effective) memory, churning out 152 GB/s of memory bandwidth. Power consumption is rated at 225 W. NVIDIA's upcoming accelerator is slated for release on 7th December, just five days ahead of AMD's Radeon HD 6900 series launch.
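For what it's worth, the leaked 152 GB/s bandwidth figure is internally consistent with the stated bus width and effective memory clock:

```python
# Sanity-check the leaked memory-bandwidth number:
# 320-bit bus * 3800 MHz effective data rate / 8 bits per byte.
bus_width_bits = 320
effective_clock_mhz = 3800
bandwidth_gb_s = bus_width_bits * effective_clock_mhz / 8 / 1000
print(bandwidth_gb_s)  # -> 152.0 GB/s, matching the spec sheet
```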