xmas said:
The goal of temperature-triggered clock-throttling is to ensure stable operation in extreme cases. It's a safety feature.
It is entirely possible to pursue both goals at the same time.
OK, then point me at the nVidia literature which discusses this "power saving" aspect and relates how much "power saving" actually occurs between 2D operation at 400MHz and 2D operation at 235MHz. I haven't been able to find any info from nVidia on this subject--so if you can't either, let's agree to stop calling it "power saving." I just don't see any evidence that this is what it is.
And actually, I rather think the goal of clock throttling is to clock down the chip when it overheats. The definition of "extreme" will obviously vary from system to system, and from chip to chip. nVidia obviously thinks it has reason to clock down the GPU, doesn't it, since it *always* clocks it down in 2D and *always* clocks it down when it overheats (and why should it overheat at all with adequate cooling?)
I mean if you're implying that there's something lesser in quality about a 3D chip reference design which excludes clock throttling, I'd have to tell you that's never been my experience. The great majority of 3D reference designs sold since the V1 have been completely successful with no need for clock throttling.
Regarding the cost:
The amount of additional software needed is close to zero, because they need it for their mobile parts anyway. Manufacturing is a bit more expensive because of the increased transistor count.
But design could even be cheaper. Why? NVidia advertises the NV3x as a "scalable architecture". They use similar building blocks across the whole family. Since they have to put clock throttling into their mobile parts anyway, why not design all the modules with it in mind, instead of designing two versions of some blocks, one with and one without clock throttling? That would save design and verification work.
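(To sketch the idea in code--and this is purely my own illustration, not anything from NVidia's actual design--one block, designed and verified once, could simply take per-product configuration:)

[code]
/* One clock-control design reused across the whole family: the
 * throttling logic is designed and verified once, and per-product
 * configuration decides how it engages. All values are invented. */
#include <stdio.h>

struct clock_block_cfg {
    int trip_temp_c;    /* thermal trip point for this SKU */
    int throttle_2d;    /* also clock down in 2D? (1 = yes) */
};

/* Hypothetical per-SKU settings for the same verified block. */
static const struct clock_block_cfg mobile_cfg  = { 105, 1 };
static const struct clock_block_cfg desktop_cfg = { 140, 1 };

int main(void)
{
    printf("mobile trip: %dC, desktop trip: %dC\n",
           mobile_cfg.trip_temp_c, desktop_cfg.trip_temp_c);
    return 0;
}
[/code]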
This is really reaching...
Of course it costs more. Please quit confusing this chip with something relating to the "mobile market" and your idea of "power saving." This is not a mobile chip, nor is it designed for the mobile market. First of all I want to see some numbers from nVidia on "power saving"--like how much power I'm saving when I use a 5600. A hallmark of "power-saving" technology is that companies which employ it can express the power being saved in concrete numbers. Where "power-saving" techniques are employed in the mobile market, the goal is to preserve battery power, and the various companies employing such power-saving schemes have numbers to back up the premise they push. I've seen nVidia pushing no such premise about these products. Could it be that's because these products aren't meant for the "mobile market" and as such aren't designed with power-saving in mind?
Of course they knew they had to clock their high-end part as high as possible, and they knew they were going to use an extreme cooling solution. So clock throttling provides the benefit that they can safely clock it higher and still have enough "safety margin" to guarantee operation in hot environments. And that the loud fan only has to run when absolutely necessary.
Right--they knew that overclocking it was pushing the heat envelope, so they designed in a clock throttle to knock down the clock speed and cool the chip, as they *expected* it to overheat from time to time.
Yes, of course. Isn't it a fine thing that it prevents your GPU from going up in smoke?
IIRC the default limit is 140°C. Kinda high, isn't it? Do you really think it will reach that limit in anything but extreme circumstances, like a blocked exhaust or a fan failure?
Right--as opposed to "power saving" and other assorted nonsense...
Good description of a heat-triggered clock throttle mechanism.
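In rough code form, that mechanism might look something like this (the polling approach, the names, and the resume threshold are my own guesses--only the 140°C limit and the 400/235MHz clocks come from this discussion):

[code]
/* Rough sketch of a heat-triggered clock throttle as described above.
 * The polling scheme, names, and the resume threshold are assumptions;
 * the 140C trip point and 400/235MHz clocks are the thread's figures. */
#include <stdio.h>

#define TRIP_TEMP_C    140   /* default limit mentioned above */
#define RESUME_TEMP_C  120   /* assumed hysteresis point, not a known figure */
#define FULL_CLOCK_MHZ 400
#define SAFE_CLOCK_MHZ 235

static int core_clock_mhz = FULL_CLOCK_MHZ;

/* Imagine the driver calling this periodically with the die temperature. */
static void thermal_poll(int die_temp_c)
{
    if (die_temp_c >= TRIP_TEMP_C && core_clock_mhz != SAFE_CLOCK_MHZ) {
        core_clock_mhz = SAFE_CLOCK_MHZ;  /* overheating: knock the clock down */
        printf("throttle: %dC, dropping to %dMHz\n", die_temp_c, core_clock_mhz);
    } else if (die_temp_c <= RESUME_TEMP_C && core_clock_mhz != FULL_CLOCK_MHZ) {
        core_clock_mhz = FULL_CLOCK_MHZ;  /* cooled off: restore the full clock */
        printf("resume: %dC, back to %dMHz\n", die_temp_c, core_clock_mhz);
    }
}

int main(void)
{
    int readings[] = { 95, 141, 132, 118 };  /* simulated temperature samples */
    for (int i = 0; i < 4; i++)
        thermal_poll(readings[i]);
    return 0;
}
[/code]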
If it clocked down under "normal" operating circumstances, I'd consider the card overclocked, which is a bad thing.
(Well, there might be reasons for such behavior, not specific to GPUs, but then it would have to be advertised as such.)
Yes, the 5800U, before it was dropped from mass production, was reported on more than one site to arbitrarily clock itself down in the middle of a 3D game (and in some cases, during screen savers). This would indicate to me that the clock throttle was doing its job. However, that's far from saying it's a desirable outcome...
(When contrasted with chips that run all day long at their advertised MHz speeds without a need for clock throttling "protection," chips that do so for years without complaint.)
Like I wrote above, it's possible to target both issues with one design. And you can't really say which one is the intended effect and which one is only a side effect. I'm pretty sure NVidia had both in mind.
However, there is ample proof that the 5600 design employs a heat-triggered clock throttle, while there is no proof that any kind of mobile-market power saving is going on at all. How much power am I saving? What's the power savings nVidia advertises?
Do you have any proof that it can't run indefinitely at 400MHz?
No. Do you have any proof that it can?...
I know these are different goals. I certainly never said they were the same thing. But they can be accomplished by almost the same means.
I disagree. Power-saving technology as found in the mobile market clocks down *only* to save power. In every mobile implementation I've seen there are multiple steps of power saving employed--multiple discrete clock levels. Heat is not a consideration; it doesn't enter the picture.
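To illustrate the contrast, a mobile-style scheme looks more like a table of discrete clock/voltage levels selected purely by power policy--battery versus AC, user preference. Here's a rough sketch; every name and number in it is my own invention, not any vendor's actual table:

[code]
/* Illustrative mobile-market power-saving scheme: several discrete
 * clock/voltage levels chosen by policy alone. All names and numbers
 * are invented for the sake of the example. */
#include <stdio.h>

struct power_level {
    int clock_mhz;
    int voltage_mv;
};

static const struct power_level levels[] = {
    { 100,  900 },   /* maximum battery life */
    { 200, 1000 },
    { 300, 1100 },
    { 400, 1200 },   /* maximum performance */
};

enum power_policy { POLICY_MAX_BATTERY, POLICY_BALANCED, POLICY_MAX_PERF };

/* Level selection depends only on the user's/OS's policy. */
static const struct power_level *select_level(enum power_policy p)
{
    switch (p) {
    case POLICY_MAX_BATTERY: return &levels[0];
    case POLICY_BALANCED:    return &levels[2];
    default:                 return &levels[3];
    }
}

int main(void)
{
    const struct power_level *l = select_level(POLICY_BALANCED);
    printf("balanced: %dMHz @ %dmV\n", l->clock_mhz, l->voltage_mv);
    return 0;
}
[/code]

Note that temperature is never an input to the level selection--that's the whole distinction I'm drawing.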
What we see in the 5600 is decidedly not that. We see a chip clocked to 400MHz to run 3D, with cooling presumably adequate to allow indefinite 3D operation at 400MHz. Switching from 3D to 2D operation at 400MHz automatically cools the chip, since 2D consumes less power than 3D at the same clock. Further, although I could be mistaken about this, I have not read that the fan in the 5600 shuts down in 2D operation. What I've read is that the fan runs all the time but its noise level is not obtrusive (as it was with the 5800U).
Go back to the 5800U. Why did nVidia end up shutting the fan off when it clocked down for 2D operation? It had *nothing to do with saving power*--it was noise reduction, plain and simple. The noise of the fan was *so bad* in 3D operation that nVidia clocked down the GPU and turned off the fan to give people's ears a breather...
No power-saving there...
So, if the fan noise in the 5600 is not objectionable when running 3D at 400MHz, how could it possibly be objectionable when running 2D at 400MHz?
The simple truth is that if we eliminate power saving from the equation (which I think is entirely justified), then there's no reason other than heat for nVidia to clock the 5600 down to 235MHz in *2D* operation. I would stipulate that noise reduction was a worthwhile reason if nVidia turned off the fan at 235MHz--but certainly not power saving. And even noise reduction seems a questionable motive if it's true that at 400MHz in 3D the fan is no louder than the one on a GF4 or a Radeon.
If there were any kind of "mobile-market" power saving going on, I'd expect to see many user-selectable or automatic clock levels--that's what you see in that kind of power-saving scenario. And, as I said before, I'd expect to see some numbers from nVidia to back up any claim of power saving.
Uhm, no, I don't want it to run silent and power-saving in 3D operation. I want it to run totally silent (with power saving as a side effect) when I want it to be silent, and to run fast when I want it to be fast. Exactly like mobile parts do (in fact, this is more about CPUs than GPUs).
Like I said, then you'll have some waiting to do for a product like that...
I think we've been talking about three distinct issues here that have become confused:
1) Power saving as we see it in the mobile market
2) Clock throttling for thermal reasons in nVidia's nv3x line of GPUs
3) Noise pollution as with the 5800U
*chuckle* From what I've read number 1 isn't applicable to the 5600, #2 definitely is, and #3 I simply can't answer...
It's fine if you want all three--but it doesn't appear to me they are evident in the 5600.
Totally agree. Intel does that too, and I think it's a good idea to add that kind of safety measure.
That's fine if you like it or see it as a desirable feature. But that doesn't change the fact that there's nothing wrong with a chip that doesn't need thermal clock throttling for protection in a normal operating environment.
WaltC said:
Again, the two are unrelated. Very simply, if the 5600 could run indefinitely at 400MHz+ with nominal heat and voltage signatures, the card would have been designed to run that way from the start.
Well, understanding that nVidia first incorporated this type of thermal clock-throttling with its nv3x chips--and understanding what kind of heat the nv30 puts out--and understanding nVidia didn't see a need for this with the GF4--and understanding that none of ATi's current .15 micron chips appears to need it--I think it's a fair bet that nVidia's slapped it into its reference designs because it thinks thermal clock-throttling is needed there specifically to alleviate overheating problems.
As such, I don't have a quarrel with the thermal clock throttling, because it seems to be needed on an active basis in these products. But I do have a quarrel with "power saving" definitions--I can't see any justification for them. Noise reduction and thermal considerations make sense--"power saving" of the type found in the mobile market simply does not appear to be present.