Query: Does the 5900U shift its 2D-3D clock speed?

WaltC

Veteran
I haven't seen any info on this that I recall, and have assumed it does not shift them like the 5800U does. I just read Brent's review of the new revision of the 5600U at [H] and was surprised to see that it shifts from 235MHz in 2D to 400MHz in 3D, and comes standard with the same clock throttling that afflicted the nv30. So I thought I'd ask the question here, as it now occurs to me that it might need to be asked...?

I had thought that in dropping the clock to 450MHz with nv35, nVidia had done a much better job and could maintain a consistent 450MHz without the need for clock shifting and throttling. Was I right? Thanks.

Edit: this question is not rhetorical...;)
 
I don't see how clock throttling can be considered an "affliction." It saves on power, heat, and unnecessary wear and tear on the card. You make it sound like it's a disease rather than a benefit.
 
bdmosky said:
I don't see how clock throttling can be considered an "affliction." It saves on power, heat, and unnecessary wear and tear on the card. You make it sound like it's a disease rather than a benefit.

What if '3D mode' doesn't get detected correctly, as with the famous OpenGL Screensaver bug?
 
Then it runs slower until the bug is fixed... note I did say "clock throttling" and not fan throttling. Laptops have used it for a long time now and you don't see them having meltdowns... well, I'm sure some may have suffered from them, though. :oops:

*edit* grammar
 
clock throttling is nice
fan throttling is not nice (actually makes you take more notice of the noise IMHO)
 
ATi chips can power down certain parts that are not in use, and I would not be surprised in the least if they disable the 3D parts when not needed.

Same core clock all the time, with less heat in 2D mode.
 
Well, they don't seem to do it right now, which means Nvidia is the only one offering this kind of feature on current desktop hardware, and I like it.
The cards are as silent as you can get in 2D mode. That is awesome.

That whole throttling thing is one of the best non-3D features I have seen so far on desktop graphics hardware since dual head.
 
The original question was DOES the NV35 clock throttle....not if it's a good or bad thing. Like most everything said lately, it seems to boil down to which HIV you support........ ;)
 
martrox said:
The original question was DOES the NV35 clock throttle....not if it's a good or bad thing. Like most everything said lately, it seems to boil down to which HIV you support........ ;)
You should use an antivirus tool for your posts... :D
 
The problem with clock throttling occurs when the activation of throttling has more to do with operating temperature than with 2D/3D operation.

As was pretty clearly stated in the 5800 Ultra previews, early samples were commonly down-clocking to 2D speeds while trying to benchmark. This gave illusory/fictional benchmark results, as the testers may have had to run a benchmark over and over again (with adequate cooling time between tries) to get a complete run at 100% clock speed.

This gives the illusion of a product that can maintain a higher clock speed in benchmarks but that, in practice, downclocks to a lesser-performing card when playing games... and in the MIDDLE of playing a game.

If there is no thermostatic control of the clock frequency, and the card does not down-clock from overheating, nor does this occur during extended periods of "normal" gaming, then this whole thing is a moot point. BUT... if it behaves much like the 5800 Ultra review samples, it's just more fictional, unobtainable performance figures being propagated.
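
To make the failure mode concrete, here is a minimal sketch of a temperature-triggered downclock of the kind described above. The trip point, clock values, and the crude heating/cooling model are invented for illustration, not taken from NVIDIA's drivers:

Code:
/* Hypothetical temperature-triggered throttle; all numbers invented. */
#include <stdio.h>

#define TRIP_TEMP_C 140.0   /* assumed trip point */
#define CLOCK_3D    400     /* MHz */
#define CLOCK_2D    235     /* MHz */

int main(void)
{
    double temp = 60.0;     /* made-up starting die temperature */
    int clock = CLOCK_3D;

    for (int tick = 0; tick < 20; tick++) {
        /* Crude model: heats while at full clock, cools once throttled. */
        temp += (clock == CLOCK_3D) ? 9.0 : -12.0;

        /* The trigger is temperature alone, so the downclock can fire
         * in the middle of a game or a benchmark run. */
        clock = (temp >= TRIP_TEMP_C) ? CLOCK_2D : CLOCK_3D;

        printf("t=%2d  temp=%5.1fC  clock=%dMHz\n", tick, temp, clock);
    }
    return 0;
}

Note how the simulated clock oscillates once the trip point is reached: exactly the mid-game behavior being complained about.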
 
Sharkfood said:
The problem with clock throttling occurs when the activation of throttling has more to do with operating temperature than with 2D/3D operation.

As was pretty clearly stated in the 5800 Ultra previews, early samples were commonly down-clocking to 2D speeds while trying to benchmark. This gave illusory/fictional benchmark results, as the testers may have had to run a benchmark over and over again (with adequate cooling time between tries) to get a complete run at 100% clock speed.

This gives the illusion of a product that can maintain a higher clock speed in benchmarks but that, in practice, downclocks to a lesser-performing card when playing games... and in the MIDDLE of playing a game.

If there is no thermostatic control of the clock frequency, and the card does not down-clock from overheating, nor does this occur during extended periods of "normal" gaming, then this whole thing is a moot point. BUT... if it behaves much like the 5800 Ultra review samples, it's just more fictional, unobtainable performance figures being propagated.

I have two 5800 Ultra boards; neither of them has any problems running at stock speed. I suspect it was just poor pre-mass-production samples.

The 5800 has thermal throttling as well as some odd "check for errors" throttling (which probably works via the same mechanism as their auto-overclocking does).
 
Xmas said:
martrox said:
The original question was DOES the NV35 clock throttle....not if it's a good or bad thing. Like most everything said lately, it seems to boil down to which HIV you support........ ;)
You should use an antivirus tool for your posts... :D

OOPS... :oops:
 
I guess nobody knows whether the 5900U does clock shifting and throttling like the 5800U....? Hmmm....I was under the impression that some folks had had some hands-on time with the reference boards...no?

So I guess the question is still open.

Meanwhile, I'll give you my take on why I don't like it as featured in the 5600.

Ever since the V3/TNT and the rise of 2D/3D chips (as opposed to 3D-only chips like the V1 and V2), I've noticed that when these chips are doing 2D work they run *much cooler* than when doing 3D processing, even at the same MHz. Presumably that is because the majority of the circuitry in all current 2D/3D chips is dedicated to 3D functionality rather than 2D--so when you run 2D (Windows desktop, browsing, etc.) you are exercising a much smaller portion of the chip's onboard circuitry. Hence it uses less power and runs much cooler accordingly.
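
This squares with the standard first-order model of CMOS dynamic power, P = a * C * V^2 * f, where the activity factor a (the fraction of the chip actually switching) is what changes between 2D and 3D at a fixed clock. A tiny sketch in C--the capacitance, voltage, and activity numbers below are all made up for illustration, not measured values for any chip:

Code:
/* First-order CMOS dynamic power: P = a * C * V^2 * f.
 * Every constant here is invented for illustration. */
#include <stdio.h>

static double dyn_power(double activity, double cap_f,
                        double volts, double freq_hz)
{
    return activity * cap_f * volts * volts * freq_hz;
}

int main(void)
{
    const double cap = 200e-9;  /* assumed switched capacitance */
    const double v   = 1.4;     /* assumed core voltage */
    const double f   = 400e6;   /* 400 MHz */

    /* 2D exercises far less of the die, so its activity factor
     * is assumed much lower than 3D's at the same clock. */
    printf("2D @ 400MHz: %.1f W\n", dyn_power(0.05, cap, v, f));
    printf("3D @ 400MHz: %.1f W\n", dyn_power(0.25, cap, v, f));
    return 0;
}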

I recall from my overclocking experiments with the V3 how I could set the clock to a certain level, boot into Windows without a problem, browse the Internet or run 2D benchmarking software, and everything would run without difficulty. However, at the same MHz speed, invoking a 3D application or game would lock the machine up after a certain period. Checking further, I found that while the graphics processor remained cool enough to touch by hand as long as I stayed on the 2D desktop, it heated up rapidly when shifting to 3D processing at the same MHz speed--to the finger-burning point. From there I concluded that the V3 in 3D operation was exercising much more of its onboard circuitry, using more power in the process, and therefore dissipating more heat--which called for better cooling (which did help, up to a point beyond which the chip simply wouldn't run at a given MHz speed in 3D, although 2D operation at that speed was not a problem).

This is why I am concerned about what nVidia has started doing here with its nv3x chips. When restricted to 2D operation, the chip should run much cooler at 400MHz than it does when processing 3D at 400MHz. So if one assumes that the onboard cooling solution is indeed sufficient for indefinite 3D operation at 400MHz, then it ought also to be sufficient to cool the chip at 400MHz in 2D operation (since the chip would be using less power and circuitry, and dissipating less heat, than in 3D operation).

Since, however, there is an active clock throttle in place, and clock shifting is SOP for the reference-design card, it makes me question whether the reference design is truly able to support indefinite 3D operation of the chip at 400MHz without overheating and throttling back the clock.

I contrast this behavior with my 9800P, which runs solidly for most of each day at 445MHz without a qualm, or any need to clock shift or clock throttle (and this is a standard 9800P ROOB). Unless I am mistaken, I have also read that ATI's .13 micron 9600P can run at speeds in excess of 400MHz (maybe 500MHz+) without the need for clock shifting or clock throttling.

As regards the "efficiency" of what nVidia's doing--I can't agree at all that this is efficient in terms of power saving and the like. First of all, SpeedStep and its cousins are designed primarily to save battery power in laptops, and are not accompanied by clock-throttling software that uses heat as a trigger (like what is apparently going on in the 5600U). So those kinds of comparisons are, I think, moot for workstation/desktop-class 3D card technology of this type. In fact, what would be "efficient" in terms of power consumption and heat is exactly the opposite of what nVidia's doing, it seems to me. If power savings were my goal, I would have the chip either running at 235MHz all the time, or else running at 400MHz only in 2D and clocking back to 235MHz for 3D operation (because 3D processing consumes a lot more power). So I think we can conclude that "power saving" efficiency is not what's going on here. When processing 3D at 400MHz+ the chip is drawing the maximum power it can draw, certainly much more than when running 2D-only operation at 400MHz+.

I think it is illogical (if not absurd) to assume the chip uses the same amount of circuitry for both 2D and 3D operation, so there *must be* some other reason for this rather odd clock-shifting/throttling setup (which we do not see in competing products).

My *guess* is that this arrangement exists because 400MHz+ 3D operation of the chip is not sustainable indefinitely because of thermal reasons and therefore at some point the clock throttle will shift the clock down to a lower clock speed.

These are the reasons why the clock throttling/shifting nVidia began with nv30 does not impress me, or strike me as desirable--if a chip can run at x MHz indefinitely while doing 3D processing, it *ought* to be able to run at x MHz doing 2D processing indefinitely without any problem at all.

Enlightenment? Argument?
 
Why "ought" it run indefinitely at xMHz in 2D mode again? Because it can? That doesn't seem like reason enough for me. Yes the GPU draws less power when not using all its transistors for 3D operations, but again, it will use even less power running at a slower speed in 2D mode regardless of the number of active transistors. But what I see of more importance is the fact that it will generate less HEAT, which in turn will add to the longevity of the GPU.

I just don't see why everything must have a conspiracy theory behind it.
 
I just don't see why everything must have a conspiracy theory behind it.

Nvidia started it.

Just because you are paranoid it doesn't mean that they aren't out to get you.
 
bdmosky said:
Why "ought" it run indefinitely at xMHz in 2D mode again? Because it can? That doesn't seem like reason enough for me. Yes the GPU draws less power when not using all its transistors for 3D operations, but again, it will use even less power running at a slower speed in 2D mode regardless of the number of active transistors. But what I see of more importance is the fact that it will generate less HEAT, which in turn will add to the longevity of the GPU.

I just don't see why everything must have a conspiracy theory behind it.

These "conspiracy theory" remarks are becoming a bit tedious.... I'll announce it when I develop such a theory, believe me...:D

When operating in 2D mode the chip is using *significantly* less power and putting off significantly less heat than when doing 3D processing. The point here is that the power savings between 2D operation at 400MHz and 2D operation at 235MHz are *scant and minimal.* So is the heat differential.

The fact is that this kind of approach simply doesn't save enough power--between running 2D at 400MHz and running 2D at 235MHz--to *justify the cost* of adding the clock-shifting, clock-throttling circuitry and software to the card's reference design.

Ergo, the clock throttle and clock shifting are there to downclock the 3D operation of the chip when it exceeds certain thermal parameters--since it is during 3D operation that the chip consumes the most power and dissipates the most heat.

What I'm saying is that even at 400MHz the chip uses considerably less power and dissipates considerably less heat running 2D than it does running 3D, so you automatically get lower power and heat profiles as soon as you start doing 2D processing. Hence the downclock to 235MHz in 2D is completely unnecessary from the standpoint of "power savings" and "heat," since both are already largely reduced when the chip goes into 2D mode.
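
For what it's worth, here's the back-of-envelope version of that argument in C. The wattage figures are pure assumptions chosen to show the shape of the claim (2D power scaling roughly linearly with clock at fixed voltage); with different numbers the extra saving could matter more:

Code:
/* Back-of-envelope for the argument above; all watt figures assumed. */
#include <stdio.h>

int main(void)
{
    const double p3d_400 = 40.0;                    /* assumed 3D @ 400MHz */
    const double p2d_400 = 8.0;                     /* assumed 2D @ 400MHz */
    const double p2d_235 = p2d_400 * 235.0 / 400.0; /* linear-in-f estimate */

    printf("2D @ 400MHz: %.1f W\n", p2d_400);
    printf("2D @ 235MHz: %.1f W (downclock saves only %.1f W)\n",
           p2d_235, p2d_400 - p2d_235);
    printf("3D-to-2D drop at 400MHz: %.1f W\n", p3d_400 - p2d_400);
    return 0;
}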

I don't know... it's possible, I suppose, that nVidia's chip circuitry runs wide open all the time whether doing 2D or 3D--if so, that might explain why they'd have to do something like this--but that would be a very *inefficient* chip design, IMO. That's why I'm wondering if the 5900U clock shifts and throttles like the 5800U and now the 5600...
 
WaltC said:
The fact is that this kind of approach simply doesn't save enough power--between running 2D at 400MHz and running 2D at 235MHz--to *justify the cost* of adding the clock-shifting, clock-throttling circuitry and software to the card's reference design.
In mobile devices clock throttling obviously saves a significant amount of power. How do you know the cost of adding all this? Especially in light of NVidia having to design such circuitry and software for their mobile parts anyway?

The only way this clock throttling could be a bad thing is if it were triggered at rather lowish temperatures. But if it only happens in extreme cases like fan failure, then it's entirely a good thing.

I would really like to see more dynamically-clocked GPUs and CPUs (clocked high when performance is required, clocked low when doing simple tasks) in the desktop market. I'd like to have a PC that is totally silent when browsing the net or doing office work, but fast enough for highest quality 3D graphics.
 
Xmas said:
In mobile devices clock throttling obviously saves a significant amount of power. How do you know the cost of adding all this? Especially in light of NVidia having to design such circuitry and software for their mobile parts anyway?

This is not a mobile part, is the first answer. (Was the nv30 in the 5800U a mobile part? People are grossly confused about this.) This product is *not* designed for the mobile market--what's unclear about that? It consumes far more power at 400MHz doing 3D operations than it would at 400MHz doing 2D operations, or at 235MHz doing 2D operations.

As far as cost goes, if you can't figure out why it would cost more to include clock shifting and throttling in a 3D card than it would to exclude it, I can't help you.

Again, if the object here were power saving, then the card would clock at 235MHz for both 2D and 3D operations. That would provide the maximum power savings of any of the modes discussed.

Additionally, the trigger for the clock throttle and downclock is GPU heat--not power consumption or battery life! There's a tab in the driver config that plainly allows the end user to set the GPU heat limit which, when reached, will throttle back the MHz speed of the chip while it is doing 3D processing. (It's never going to overheat in 2D, whether running at 235MHz or 400MHz.)

The only way this clock throttling could be a bad thing is if it were triggered at rather lowish temperatures. But if it only happens in extreme cases like fan failure, then it's entirely a good thing.

I think you need to add another condition as well--if it throttles back while running a 3D game, that would be a bad thing. I trust you'll agree with that idea, as that's the point I've made from the beginning.

If you think this has anything at all to do with "power saving," you are deluded. I've not seen a single instance of nVidia PR advertising this as a "power saving" function--rather, this is an imaginative spin put on the issue by people who want to apologize for it.

Look, what happens in a mobile power-saving scenario? First of all, it *does not* depend on CPU temperature as a trigger--because the goal is not clock throttling but power saving (the two are quite separate categories, not related to each other). Whereas in a mobile power-saving scenario the goal is to reduce the MHz speed when computational demand is light, without regard to heat, for the express purpose of conserving battery life, in the 5600 the goal is to reduce MHz to manage heat. Any power-saving spin you want to put on that is purely coincidental to the goal of heat management.
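
The distinction fits in a couple of lines of C. Both policies below are hypothetical illustrations (the load and temperature inputs, the 0.5 cutoff, and the 140C trip point are all invented), but note which input each one consults:

Code:
/* Two hypothetical clock policies; all inputs and numbers invented. */
#include <stdio.h>

/* Mobile-style power saving: clock follows demand; heat not consulted. */
static int mobile_governor_mhz(double gpu_load)  /* 0.0 .. 1.0 */
{
    return (gpu_load > 0.5) ? 400 : 235;
}

/* Thermal throttle: clock follows temperature; demand not consulted. */
static int thermal_throttle_mhz(double temp_c)
{
    return (temp_c >= 140.0) ? 235 : 400;
}

int main(void)
{
    printf("idle desktop, cool: governor=%d  throttle=%d\n",
           mobile_governor_mhz(0.1), thermal_throttle_mhz(70.0));
    printf("heavy 3D, too hot:  governor=%d  throttle=%d\n",
           mobile_governor_mhz(0.9), thermal_throttle_mhz(142.0));
    return 0;
}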

So why is the heat management necessary to begin with? Simple: because the chip was not designed to run indefinitely at 400MHz+, and won't, so it is essential to the viability of the chip that it *not* be run continuously at 400MHz. Therefore nVidia has designed the reference card to *always* throttle back to 235MHz in 2D, and to *always* throttle back to a lower MHz when temperature thresholds are exceeded *during* 3D operation. This leads to the inevitable implication that at 400MHz+ the chip is being *overvolted* in order to be overclocked enough to process 3D at those speeds. That requires a somewhat sophisticated approach to managing the heat that results from the increased power consumed at 400MHz+ while doing 3D processing.

I hope you'll think about it and see that there's a big difference between deliberately managing a clock to conserve power and clock throttling done specifically to manage heat.

I would really like to see more dynamically-clocked GPUs and CPUs (clocked high when performance is required, clocked low when doing simple tasks) in the desktop market. I'd like to have a PC that is totally silent when browsing the net or doing office work, but fast enough for highest quality 3D graphics.

Look, if you want "dynamically clocked" GPUs you'll have to wait, because the 5600 simply doesn't fit your definition. If it were "dynamically clocked" for the purpose of conserving power and reducing noise, then why doesn't the card give you the option of running all of your 3D at 235MHz? *That* is something I could agree belongs in the "power-saving, noise-management" category (well, only if the fan changes speeds or shuts down).

But a *heat trigger* designed to throttle 3D processing back from 400MHz to a lower speed? That has nothing to do with either power or noise management, and everything to do with controlling destructive temperatures. Again, the two are unrelated. Very simply, if the 5600 could run indefinitely at 400MHz+ with nominal heat and voltage signatures, the card would have been designed to run that way from the start.

That's what's been wrong with the entire idea of using mobile power-saving tech as an analogy here. BTW, my 9800P makes exactly as much "noise" at 445MHz as it does at 380MHz...;) (Which is not much...;))

Edit: I guess, still, nobody knows whether the 5900U clock shifts and throttles...?
 
WaltC said:
This is not a mobile part, is the first answer. (Was the nv30 in the 5800U a mobile part? People are grossly confused about this.) This product is *not* designed for the mobile market--what's unclear about that? It consumes far more power at 400MHz doing 3D operations than it would at 400MHz doing 2D operations, or at 235MHz doing 2D operations.

As far as cost goes, if you can't figure out why it would cost more to include clock shifting and throttling in a 3D card than it would to exclude it, I can't help you.

Again, if the object here were power saving, then the card would clock at 235MHz for both 2D and 3D operations. That would provide the maximum power savings of any of the modes discussed.
The goal differs depending on the circumstances. The goal of utilization-based clock throttling is to save power, minimize noise, and maximize chip lifetime.
The goal of temperature-triggered clock throttling is to ensure stable operation in extreme cases. It's a safety feature.

It is entirely possible to head for both goals at the same time.
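
A combined policy is easy to picture: take the lower of a demand-based cap and a thermal cap. Again, this is purely an illustration with invented numbers, not anyone's actual driver logic:

Code:
/* Hypothetical combined policy: min of demand cap and thermal cap. */
#include <stdio.h>

static int min_int(int a, int b) { return (a < b) ? a : b; }

static int combined_clock_mhz(double gpu_load, double temp_c)
{
    int demand_cap  = (gpu_load > 0.5) ? 400 : 235;  /* power/noise goal */
    int thermal_cap = (temp_c >= 140.0) ? 235 : 400; /* safety goal */
    return min_int(demand_cap, thermal_cap);
}

int main(void)
{
    printf("idle, cool:  %d MHz\n", combined_clock_mhz(0.1, 60.0));
    printf("3D, cool:    %d MHz\n", combined_clock_mhz(0.9, 80.0));
    printf("3D, too hot: %d MHz\n", combined_clock_mhz(0.9, 142.0));
    return 0;
}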

Regarding the cost:
The amount of additional software needed is close to zero, because they need it for their mobile parts anyway.

Manufacturing is a bit more expensive because of the increased transistor count.

But design could even be cheaper. Why? NVidia advertises NV3x as a "scalable architecture". They use similar building blocks for the whole family. Since they have to put clock throttling into their mobile parts anyway, why not design all the modules with it in mind, instead of designing two versions of some blocks, one with and one without clock throttling? That would save design and verification work.

Of course they knew they had to clock their high-end part as high as possible, and they knew they were going to use an extreme cooling solution.
So clock throttling provides the benefit that they can safely clock it higher and still have enough "safety margin" to guarantee operation in hot environments. And the loud fan only has to run when absolutely necessary.

Additionally, the trigger for the clock throttle and downclock is GPU heat--not power consumption or battery life! There's a tab in the driver config that plainly allows the end user to set the GPU heat limit which, when reached, will throttle back the MHz speed of the chip while it is doing 3D processing. (It's never going to overheat in 2D, whether running at 235MHz or 400MHz.)
Yes, of course. Isn't it a fine thing that it prevents your GPU from turning into smoke?
IIRC the default limit is 140°C. Kinda high, isn't it? Do you really think it will reach that limit in any but extreme circumstances, like a covered exhaust hole or a fan failure?

The only way this clock throttling could be a bad thing is if it were triggered at rather lowish temperatures. But if it only happens in extreme cases like fan failure, then it's entirely a good thing.

I think you need to add another condition as well--if it throttles back while running a 3D game, that would be a bad thing. I trust you'll agree with that idea, as that's the point I've made from the beginning.
If it clocked down under "normal" operating circumstances, I'd consider the card over-clocked, which is a bad thing.
(Well, there might be reasons for such behavior, not specific to GPUs, but then it would have to be advertised as such.)

If you think this has anything at all to do with "power saving," you are deluded. I've not seen a single instance of nVidia PR advertising this as a "power saving" function--rather, this is an imaginative spin put on the issue by people who want to apologize for it.

Look, what happens in a mobile power-saving scenario? First of all, it *does not* depend on CPU temperature as a trigger--because the goal is not clock throttling but power saving (the two are quite separate categories, not related to each other). Whereas in a mobile power-saving scenario the goal is to reduce the MHz speed when computational demand is light, without regard to heat, for the express purpose of conserving battery life, in the 5600 the goal is to reduce MHz to manage heat. Any power-saving spin you want to put on that is purely coincidental to the goal of heat management.
Like I wrote above, it's possible to target both issues with one design. And you can't really say which one is the intended effect and which is only a side effect. I'm pretty sure NVidia had both in mind.

So why is the heat management necessary to begin with? Simple: because the chip was not designed to run indefinitely at 400MHz+, and won't, so it is essential to the viability of the chip that it *not* be run continuously at 400MHz. Therefore nVidia has designed the reference card to *always* throttle back to 235MHz in 2D, and to *always* throttle back to a lower MHz when temperature thresholds are exceeded *during* 3D operation. This leads to the inevitable implication that at 400MHz+ the chip is being *overvolted* in order to be overclocked enough to process 3D at those speeds. That requires a somewhat sophisticated approach to managing the heat that results from the increased power consumed at 400MHz+ while doing 3D processing.
Do you have any proof that it can't run indefinitely at 400MHz?

I hope you'll think about it and see that there's a big difference between deliberately managing a clock to conserve power and clock throttling done specifically to manage heat.
I know these are different goals. I surely never said they were the same thing. But they can be accomplished with almost the same means.

Look, if you want "dynamically clocked" GPUs you'll have to wait, because the 5600 simply doesn't fit your definition. If it were "dynamically clocked" for the purpose of conserving power and reducing noise, then why doesn't the card give you the option of running all of your 3D at 235MHz? *That* is something I could agree belongs in the "power-saving, noise-management" category (well, only if the fan changes speeds or shuts down).
Uhm, no, I don't want it to run silent and power-saving in 3D operation.
I want it to run totally silent (with power saving as a side effect) when I want it to be silent, and to run fast when I want it to be fast. Exactly like mobile parts do (in fact, this is more about CPUs than GPUs).

But a *heat trigger* designed to throttle 3D processing back from 400MHz to a lower speed? That has nothing to do with either power or noise management, and everything to do with controlling destructive temperatures.
Totally agree. Intel does that too, and I think it is a good idea to add that kind of safeguard.

Again, the two are unrelated. Very simply, if the 5600 could run indefinitely at 400MHz+ with nominal heat and voltage signatures, the card would have been designed to run that way from the start.
Here I don't agree.
 