AMD RV770 refresh -> RV790

That takes care of thermal concerns.

The wattage numbers would be a separate consideration.
A chip can draw power in excess of its specification and not necessarily trigger a thermal threshold, if the cooling apparatus can draw the heat away fast enough.

Monitoring that takes more than a thermal diode, but some designs have done it.
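As a rough illustration of that point - a back-of-the-envelope sketch in which the thermal resistance, ambient and trip-point numbers are my own assumptions, not measured figures - steady-state die temperature is roughly

T_die ≈ T_ambient + P × R_θ(die→ambient)

so with an assumed 0.25 °C/W cooler and a 40 °C ambient, even a 200 W draw settles around 40 + 200 × 0.25 = 90 °C, comfortably below a typical ~100 °C trip point, despite the power being well over spec.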
 
AMD tells me their TDP describes only the TDP of the graphics chip, not the complete graphics card.
Perhaps it is an excuse - I don't know. ;)
Then they should refer to it as such and not tout 110 watts as "Max Board Power" for an HD 4850. From where I'm sitting, a board and a GPU are two different beasts altogether. :)

Weren't there reports of Nvidia's stuff getting quite hot as well? I know I was scared to let the new version run unattended. It started racing towards 100 °C within seconds of starting on my GTX 285.

Yep, they get quite hot as well, with some easily reaching a hundred degrees centigrade. Others, even OC versions, can be kept at ~85 °C with stock cooling. Dunno the reason for this large variation, though.


I guess people aren't familiar with the term "Power Virus".

So, JegX from Ozone3D wrote "machine code" specific to RV770 chips, as you imply with your reference to Wikipedia's definition of a power virus?

I was under the impression that he was using basic OpenGL programming.


It's unlikely to remain constant, with no other activity, for very long.
So, how do you prevent such code from appearing in games? I mean, fur rendering is far from unusual - remember the chimp demo? Or Ruby's fur collar in Whiteout?

After all, other code may show up some day which loads the GPU more than average - maybe even at a point in a game that is on screen almost constantly.
 
It seems the bit that most runs the risk of melting on AMD's cards is the voltage regulation circuitry. Is that circuitry monitored for thermals?

Jawed
 
Well, certainly FurMark is nothing more than an OpenGL app -- it just happens that the rendering algorithm engages a load pattern that unleashes high currents. The graphics processors and their drivers are already quite good at squeezing out every last pipeline bubble, at least for the popular graphics loads, so low-level "hacking" wouldn't deliver any miracle.
 
Well, this is interesting. This TDP thing is something I've been arguing about for the last five years, albeit for CPUs. If this standard were applied to GPUs, the same standard would be suicide for Intel on the CPU front. Not only was their P4 generation power hungry, but as LostCircuits discovered, max TDP is more in the neighborhood of 200-220 watts. However, Intel says that occurs only under extremely rare conditions.
 
It seems the bit that most runs the risk of melting on AMD's cards is the voltage regulation circuitry. Is that circuitry monitored for thermals?

Jawed

It is - GPU-Z and RivaTuner can read those values, at least for the reference designs of the HD 4670 and higher.
 
So voltage regulator temperatures are monitorable but the card doesn't defend itself actively? :oops:

Instead driver hacks are required?

When a performance or enthusiast graphics card is the single most costly component in a PC, shouldn't it protect itself? Fan control based solely on GPU die temperature seems rather naive.

I noticed an update on the article (which I hadn't seen before) indicating that the regulators' maximum specified temperature is not known - so 125 °C may not be dangerous...

Jawed
 
We recently found out that FireStream products are rated with a higher TDP - have a look at the FireStream 9270. That's an HD 4870 with 2 GB of VRAM, and it's rated at 160 W typical and 220 W peak. We measured about ~190 W for the HD 4870 1GB. Everybody can draw their own conclusion from that ;).
http://ati.amd.com/technology/streamcomputing/product_firestream_9270.html

This is not a like-for-like comparison. Unlike the move from 512 MB to 1 GB on the 4870, which didn't change the number of memory devices but just used higher-density ones, the FireStream board is a completely new design that uses twice as many devices to reach 2 GB; because of that, its power characteristics are very different (and higher).
 
I guess people aren't familiar with the term "Power Virus".
LOL
AMD is using the same excuses Intel did back in the days of NetBurst.
Was there EVER, ANY "power virus", or should we take it for granted because it's written on Wikipedia!?
And what about when AMD says that their CPU TDP states the maximum possible power the CPU can draw?
 
Pathological code snippets that happen to hit high enough utilization to max out a processor's power draw have existed for a long time. It's not something made up; part of verifying CPU functionality involves stress-testing with code that will actually push the CPU to the point of exceeding its specifications.

There are likely tight loops in actual applications that would have a similar effect if they were taken out of their applications and run for a long period of time.
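To make that concrete, here's a minimal CPU-side sketch of the kind of pathological snippet meant above - the loop body, operand drift and iteration count are my own picks for illustration, not anyone's actual validation workload:

Code:
#include <cstdio>
#include <thread>
#include <vector>

// Toy "power virus"-style loop: keep every hardware thread doing
// back-to-back floating-point work so the FP units rarely idle.
static void burn(volatile double* out) {
    double a = 1.0001, b = 0.9999, c = 0.0;
    for (long long i = 0; i < 2000000000LL; ++i) {
        c += a * b;        // multiply-accumulate keeps the FPU busy
        a *= 1.0000001;    // slowly drifting operands stop the compiler
        b *= 0.9999999;    // from folding the loop to a constant
    }
    *out = c;              // volatile store defeats dead-code elimination
}

int main() {
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;     // hardware_concurrency() may report 0
    std::vector<double> sinks(n);
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(burn, (volatile double*)&sinks[i]);
    for (auto& t : pool) t.join();
    std::printf("stressed %u threads\n", n);
}

A real validation stress test would also mix in vector math, cache thrashing and memory traffic to light up more of the chip at once; this scalar loop only shows the shape of the idea.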

Why the wattage numbers for the RV770 are consistently high under FurMark, and why the hardware does not seem to catch it, is something different. Either the measurements have a sizeable margin of error, or the hardware's method of self-regulation is missing something.
 
Why the wattage numbers for the RV770 are consistently high under FurMark, and why the hardware does not seem to catch it, is something different.
The wattages for NVidia's higher-performance GPUs are also way over.

Graphics cards from both IHVs exceed the supposed PCI Express specifications. NVidia is slightly less guilty, that's all.

Jawed
 
I'm willing to allow some margin of error that permits a product to read at 5% over PCI-E spec. It's borderline even then, but still plausibly in the noise.

A double-digit percentage over spec is something harder to account for.

The lack of clarity around listed TDPs that fall below the actual measurements is baffling to me.
 
The 8-pin power supply for the GeForce GTX 295 is admittedly within the specifications, but the power consumption on the 6-pin plug (104 W) is obviously too high. Similarly, the GTX 285, loading its 6-pin plug with 107 W, is over the maximum.
75W is the specification for 6-pin connectors. 39% over is not excusable.
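For reference, the percentages fall straight out of the article's figures: 104 W / 75 W ≈ 1.39, i.e. about 39% over the 6-pin spec for the GTX 295, and 107 W / 75 W ≈ 1.43, about 43% over, for the GTX 285.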

Jawed
 
The old fill-rate (ST) test from 3DMark01 on a GTX 280 gives results similar to FurMark on an HD 4870. I didn't check whether the latest drivers changed anything, but after running the test in a loop for about 15 minutes, the card became very hot and the cooler intolerably noisy. I haven't achieved similar results in any current game.

The reason G9x and GT2xx have quite acceptable typical power draw is their low ALU:TEX ratio. The texture units, which have very high power requirements, idle in most of today's applications.
 
75W is the specification for 6-pin connectors. 39% over is not excusable.

Jawed

I missed the analysis on the 8-pin and 6-pin connectors.

I believe the article pointed out that temperatures were not at critical levels on the GPUs, which hints that the regulation scheme is based on temperature thresholds, with an assumed relationship between power draw and measured die temperature.

In that case, we could argue that the GPU cooling setups are, in this corner case, too effective and the thermal bands too lax.

Guard-banding based on electrical power draw is apparently more complex, and not implemented on GPUs.
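For illustration, here's a minimal sketch of the kind of temperature-threshold loop suggested above - the sensor and fan functions plus the threshold numbers are hypothetical stand-ins, not any real driver API - and note that nothing in it ever looks at current or wattage:

Code:
#include <chrono>
#include <cstdio>
#include <thread>

// Hypothetical temperature-threshold control loop. The three functions
// below are stubs standing in for driver calls; nothing here senses
// electrical power, which is the point.
static double readDieTempC() { return 80.0; }  // pretend the diode reads 80 °C
static void setFanDuty(int pct) { std::printf("fan -> %d%%\n", pct); }
static void throttleClocks()    { std::printf("throttling clocks\n"); }

int main() {
    for (int tick = 0; tick < 5; ++tick) {      // a few polls for the demo
        double t = readDieTempC();
        if      (t > 105.0) throttleClocks();   // hard thermal trip point
        else if (t >  90.0) setFanDuty(100);
        else if (t >  75.0) setFanDuty(60);     // 80 °C lands here
        else                setFanDuty(30);
        // No current sensing: 104 W through a 75 W connector never
        // registers as long as the cooler keeps the diode reading low.
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

With a sufficiently good cooler, the diode reading stays in the comfortable band no matter what the VRMs are asked to deliver, which matches the "too effective cooling, too lax thermal bands" reading above.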

Even for CPUs, this monitoring tends to be a motherboard alert system, not an automatic throttle on the chip.

I'm aware only of Montecito and possibly Nehalem as exceptions, and the former had that feature turned off. Both have on-chip microcontrollers for the job.
 
Even for CPUs, this monitoring tends to be a motherboard alert system, not an automatic throttle on the chip.

I'm aware only of Montecito and possibly Nehalem as exceptions, and the former had that feature turned off. Both have on-chip microcontrollers for the job.

Yeah, I noticed that when the cooler on my E8200 was loose: the system monitored the heat rising (up to about 125 °C), but at no point did it try anything besides the default SpeedStep/fan settings.
 
The old fill-rate (ST) test from 3DMark01 on a GTX 280 gives results similar to FurMark on an HD 4870. I didn't check whether the latest drivers changed anything, but after running the test in a loop for about 15 minutes, the card became very hot and the cooler intolerably noisy. I haven't achieved similar results in any current game.

The batch tests from 3DMark06 (IIRC) are about the same.
 
The old fill-rate (ST) test from 3DMark01 on a GTX 280 gives results similar to FurMark on an HD 4870. [...] The reason G9x and GT2xx have quite acceptable typical power draw is their low ALU:TEX ratio. The texture units, which have very high power requirements, idle in most of today's applications.
Back in time:

http://forum.beyond3d.com/showpost.php?p=1100079&postcount=17

it seemed to me that texturing could have been the reason that significantly higher power consumption was being seen on G92 in a particular synthetic test.

I wonder how GT200 and RV770 fare on this particular test.

Jawed
 