NVIDIA GF100 & Friends speculation

Considering Nvidia endorses overclocking of their cards, they should at the very least have warned reviewers that GTX 590 cannot even remotely handle the same voltages that GTX 580 cards can.
There's no doubt the GPU itself can handle it - but the VRM is definitely different from the GTX 580's, so I wouldn't just expect it to do so as well. Some sites (like Hexus) upped the voltage by 0.05V, power went up 60W, and they took the hint and stopped right there :).
Though you're right, the VRM does seem to be designed without much headroom - hopefully that doesn't affect lifetime in "normal" situations. Actually it seems Nvidia blames the driver now, since OCP should protect the card even in this case (though if it were my card I wouldn't want to try it out...).
Interestingly, TPU actually got the best OC (by far, AFAICT) at default voltage (before they blew the card) - almost too good to be true (did they really hit GTX 580 clocks at that voltage?).
OlegSH said:
Even at default clocks it's still impressive: 1.44x the heat and power on the stock cooler, over an already hot card with huge power consumption.
It's worse: their stock voltage was 0.938V - hence 1.2V results in 1.64x the power (assuming the square scaling holds). Hexus got a 60W power increase for just a 50mV bump.
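A quick sanity check on that square-law estimate (a minimal sketch, assuming dynamic power scales roughly with V² at fixed clocks - an approximation that ignores leakage, which grows even faster; the 0.938V and 1.2V figures come from the posts above):

```python
# Rough dynamic-power scaling: P ~ V^2 at fixed frequency.
# Voltages are those quoted in the thread.
def power_scale(v_new, v_stock):
    """Relative power after a voltage change, assuming P proportional to V^2."""
    return (v_new / v_stock) ** 2

print(round(power_scale(1.200, 0.938), 2))  # ~1.64x at 1.2V
print(round(power_scale(0.988, 0.938), 2))  # ~1.11x for Hexus's +50mV bump
```

So even the modest Hexus bump implies roughly a 10% power increase from voltage alone, before any clock increase on top.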
 
Regardless of how high it was, according to Nvidia it shouldn't have happened - the card should have just shut down - and apparently with newer drivers it would shut down under the same conditions.

At tbreak, their card burned with a 0.125V increase.
 
I scratch my head over why drivers are needed for this. I would have thought there would be hardware CTF and regulator shutdowns in place.
 
Could it be a difference in how the VRM monitoring works?

Does Nvidia's solution check VRM temps and not current draw?
This may be a case of instantaneous draw spiking high enough to fry the VRMs before a temp sensor can catch it.

Could the driver be made to add a slight ramp-up delay on the supply voltage, so that the sensors have enough time to trip the limiter or shut down?
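A sketch of what such a driver-side soft-start could look like (all names, step sizes, and thresholds here are hypothetical illustrations - real hardware does this in the VR controller or firmware, not in Python):

```python
import time

# Hypothetical soft-start: step the rail up gradually so the monitoring
# loop has a chance to trip overcurrent protection before the full
# target voltage is ever applied.
STEP_V = 0.0125    # one voltage step (made-up VID granularity)
SETTLE_S = 0.001   # dwell time per step so sensors can react

def ramp_voltage(set_vcore, read_current, target_v, start_v, i_limit_a):
    """Raise vcore in small steps; bail out to start_v if current trips the limit."""
    v = start_v
    while v < target_v:
        v = min(v + STEP_V, target_v)
        set_vcore(v)
        time.sleep(SETTLE_S)
        if read_current() > i_limit_a:  # trip before anything cooks
            set_vcore(start_v)          # fall back to a safe voltage
            return False
    return True
```

The point is only that a ramp converts an instantaneous spike into a series of small, observable steps - each one a window in which the limiter can fire.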
 
Does Nvidia's solution check VRM temps and not current draw?
This may be a case of instantaneous draw spiking high enough to fry the VRMs before a temp sensor can catch it.
This is the change that we made from RV770 to Cypress; however, even without that, RV770's regulators shut down to prevent themselves from dying.
 
A driver bug frying $700+ GPUs? Are we really back to the days when a virus could burn out a monitor? This is rather incompetent hardware design for 2011.
 
(image: Nvidia_midt_dk.jpg)


I just wonder... we know it will spark once, but TWICE!? Does it have a small hidden dual-PWM switch, or is it a special feature of the Zotac card? :runaway:
 
(image: Nvidia_midt_dk.jpg)


I just wonder... we know it will spark once, but TWICE!? Does it have a small hidden dual-PWM switch, or is it a special feature of the Zotac card? :runaway:

Lol, I was wondering whether we made that slogan up, and why it would be the same as for some cards launching later today...

Which actually has something to do with lightning instead of ***-on-a-stick.
 
This is the change that we made from RV770 to Cypress; however, even without that, RV770's regulators shut down to prevent themselves from dying.
Hmm, on second thought you're totally right - I don't understand why the VRM circuitry can't shut down on its own. Even 15-year-old CPU VRMs had overcurrent protection built in, if I'm not mistaken. Maybe it's a programmable limit?
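The "programmable limit" idea would look roughly like this - the trip current lives in a driver-writable controller register instead of a hard-wired comparator (a minimal sketch; the class, register names, and values are all made up for illustration):

```python
# Sketch of a programmable overcurrent-protection (OCP) comparator.
# A fixed analog OCP trips at a hard-wired current; a programmable one
# compares against a limit written into a controller register - which
# is exactly the knob that software (a driver) could then get wrong.
class VrmController:
    def __init__(self, default_limit_a):
        self.ocp_limit_a = default_limit_a  # "register", driver-writable
        self.enabled = True

    def set_ocp_limit(self, amps):          # the risky knob
        self.ocp_limit_a = amps

    def sample(self, current_a):
        """Latch the phase off if the measured current exceeds the limit."""
        if self.enabled and current_a > self.ocp_limit_a:
            self.enabled = False
        return self.enabled

vrm = VrmController(default_limit_a=300)
print(vrm.sample(250))  # True: below the limit, still running
vrm.set_ocp_limit(500)  # driver raises the limit...
print(vrm.sample(420))  # True: 420A now passes - protection defeated
```

Which would explain both why a driver is involved at all and how a driver bug could let the regulators cook themselves.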
 
Well, it was by design, no (referring to the OCP controller)?

So they could play with the limits as driver development progresses. But given that the overvolt death came with the driver in the box, I don't think anyone's playing with fire for now :p
 
Uhm... (from http://www.techpowerup.com/reviews/ASUS/GeForce_GTX_590/3.html )
(images: card2.jpg, front.jpg)

(Seems like I can't hotlink to hardware.fr, but here's the one showing 112C for the uncovered part: http://www.hardware.fr/articles/825-4/dossier-nvidia-repond-amd-avec-geforce-gtx-590.html )
Can anyone identify who's who there, i.e. which are the ones really producing heat?

Have they really placed half of a 400W VRM circuit on the backside with no cooling whatsoever?? Considering the amount of heatsinking motherboard makers place on the VRMs for their puny 125W CPUs...
So is it a pure current problem, or is it heat-related (i.e. a major design fault)?
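For context on why unheatsinked VRM phases feeding a ~400W load are worrying, a rough loss estimate (the 90% conversion efficiency is an assumption, typical for multiphase buck converters, not a measured figure for this card):

```python
# Rough VRM self-heating estimate: at ~90% conversion efficiency
# (assumed), delivering a 400W load means the regulator stage itself
# dissipates tens of watts, spread across its phases.
def vrm_loss_w(load_w, efficiency=0.90):
    """Power burned in the VRM while delivering load_w to the GPU."""
    input_w = load_w / efficiency
    return input_w - load_w

print(round(vrm_loss_w(400), 1))  # ~44.4W across the phases
```

Tens of watts with no heatsink is exactly the regime where motherboard makers bolt metal onto much smaller CPU VRMs, so bare backside components do look questionable.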
 
So have we reached the limits of what can be done with the current Fermi design on TSMC's 40nm process, now that the full raft of mid-to-high-end cards in the GF5xx family has been released? Or will there be real performance gains (beyond maybe minor single-digit increases on overclocked specials) via respins or minor redesigns on the current process tech?

I'm sure we will see new designs on the 28nm process, but until then, are we done with Fermi releases (not counting any new low-end releases)?
 
Can anyone identify who's who there, i.e. which are the ones really producing heat?
The VRMs are the little squares next to the spots where capacitors would be soldered in, in the middle of the card. And they are all on the inside. The back reads 111C, so it must be a bit hotter on the inside. They're rated up to 175C, btw.
 
But what are those on the backside then? They're getting pretty hot too - or else the VRMs are not connected to the cooler plate on the front.
 