Actively using PowerTune like you seemingly did on that "no-additional-power-plug-so-let's-use-every-watt-we-have" HD7750 would be a very interesting option for a dual-Tahiti card, though. Just like the "hard" 75W wall for the HD7750, there is a "hard" 300W limit for the HD7990 - so the situation seems comparable (though at diametrically opposed ends of the spectrum, of course).

Why? Nothing I have seen changes the decision to be deterministic or not on the current products.
I won't argue against being deterministic @stock settings. I really see your point there.
But given that you said PowerTune is highly programmable, I think a lot of people would really appreciate a kind of software switch in CCC that allows them to easily put their card into a "PowerBoost" mode that dynamically raises clocks/voltages to make full usage of the given power budget at all times.
That way, you could basically keep the deterministic approach @stock settings - but provide your customers with all the advantages of easily and automatically pushing their specific card to its individual limits within a hard 300W power envelope.
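The "PowerBoost" mode proposed above could be sketched as a simple feedback loop. This is purely hypothetical - real PowerTune works from digital activity estimates rather than measured board power, and every function and constant name below is invented for illustration:

```python
# Hypothetical sketch of the proposed opt-in "PowerBoost" mode: raise
# clocks while board power stays under the hard 300 W envelope, back
# off when it is exceeded. Names and step sizes are made up.

POWER_LIMIT_W = 300.0
CLOCK_STEP_MHZ = 25
CLOCK_MIN_MHZ = 925      # stock-clock floor, for illustration only
CLOCK_MAX_MHZ = 1200     # arbitrary boost ceiling

def powerboost_step(clock_mhz, board_power_w):
    """Return the next core clock given current clock and power draw."""
    if board_power_w > POWER_LIMIT_W and clock_mhz > CLOCK_MIN_MHZ:
        return clock_mhz - CLOCK_STEP_MHZ   # over budget: back off
    if board_power_w < POWER_LIMIT_W * 0.95 and clock_mhz < CLOCK_MAX_MHZ:
        return clock_mhz + CLOCK_STEP_MHZ   # headroom left: boost
    return clock_mhz                        # hold steady near the limit
```

Each individual card would converge to its own highest clock under the shared 300W cap, which is exactly the "push every specific card to its individual limits" behaviour described above.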
Dave Baumann said:
Why? Nothing I have seen changes the decision to be deterministic or not on the current products. On the contrary in fact.

Overclocking has always been a gamble. I like the fact that overclocking is automatic and hassle free. I understand that it complicates things for reviewers, but that's a fair trade-off in my very personal opinion.
I wonder how people will feel about it being a fair trade-off when the reviewed products perform better than what they get from their own off-the-shelf part?
I see what you did there
The most probable candidate for performance schemes based on more aggressive power thresholds would be the HD7750 - and look what I found (I should read way more midrange reviews):
A stock PowerTune setting that's actually limiting gaming performance:
And what's the result of PowerTune actually kicking in at stock settings?
Nearly 50% better perf/watt than GTX680
As I said earlier: a detailed perf/power review of an OCed HD7970 @stock PowerTune limits would be really interesting. It should behave rather similarly to what Nvidia did on the GTX680 - just "self-made".
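For illustration, perf/watt is just average frame rate divided by board power. The numbers below are invented placeholders chosen to mirror the roughly 50% gap quoted above, not the review's actual figures:

```python
# Back-of-the-envelope perf/watt comparison with made-up numbers.
def perf_per_watt(avg_fps, board_power_w):
    return avg_fps / board_power_w

hd7750 = perf_per_watt(40.0, 55.0)    # hypothetical: 40 fps at 55 W
gtx680 = perf_per_watt(95.0, 195.0)   # hypothetical: 95 fps at 195 W

# Relative perf/watt advantage of the power-limited card.
advantage = hd7750 / gtx680 - 1.0     # ~0.49, i.e. "nearly 50% better"
```

The same arithmetic applied to an OCed HD7970 held at stock PowerTune limits would show whether the "self-made" boost lands in similar perf/watt territory.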
Overclocking has always been a gamble. I like the fact that overclocking is automatic and hassle free. I understand that it complicates things for reviewers, but that's a fair trade-off in my very personal opinion.
Yes, but it is also the "bonus" for those that are interested in it. Why turn the default product performance into a "gamble"?
I think the problem with Nvidia's approach really isn't the approach itself, but the fact that it's a stock setting and can't be easily turned on/off.
Yes, but it is also the "bonus" for those that are interested in it. Why turn the default product performance into a "gamble"?
Because it is already a "gamble"? Performance may vary with OS version, 3rd-party software, monthly driver releases, etc.
For the market it is targeted at, this variance is acceptable.
Yeah, you're probably right. Better keep it simple.

Nice mockup, Mianca!
I would like to add a level to your sketch.
In my opinion, as an end user I would prefer a single "smartboost" (automatic, power-deterministic) checkbox, and that's it. I'd like it to be super easy to use for everyone. Smartboost would then overclock the card based on the specific chip at hand.
I did a quick mockup of how AMD could do it:
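The mockup image itself isn't reproduced here, but the opt-in behaviour it describes could be sketched roughly like this. All names and the scaling rule are invented; a real driver would read fused per-chip binning data rather than a made-up quality score:

```python
# Sketch of the "smartboost" checkbox idea: deterministic stock clocks
# by default, and an opt-in per-chip boost for those who tick the box.
# Every field and coefficient here is hypothetical.
from dataclasses import dataclass

@dataclass
class GpuAsic:
    stock_clock_mhz: int
    asic_quality: float   # 0.0-1.0; better silicon sustains higher clocks

def effective_clock(asic, smartboost_enabled):
    """Deterministic stock clock unless the user opts in to smartboost."""
    if not smartboost_enabled:
        return asic.stock_clock_mhz            # deterministic @stock
    headroom = int(200 * asic.asic_quality)    # hypothetical scaling
    return asic.stock_clock_mhz + headroom     # per-chip boost clock
```

The key property is that the default path is unchanged: reviewers and buyers see identical stock behaviour, and only the checkbox exposes per-ASIC variation.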
Since Dave is actively participating in this thread I'd like to take the chance and ask about how GPU-Boost is different from what Intel and AMD are employing in their CPUs. Based on TDP headroom, clocks are increased - shouldn't this also be subject to individual ASIC variation?
Llano and Bulldozer use activity counters, like Cayman & S.I., and therefore produce deterministic results. Intel relies on analog measurements detailed by David Kanter here: http://www.realworldtech.com/page.cfm?ArticleID=RWT042011004622&p=2
So Intel and NVIDIA use a similar system, although Intel's is much more sophisticated.
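To make the determinism distinction concrete, here is a toy contrast between the two schemes discussed above. Every coefficient is invented; the point is only the structural difference:

```python
# Activity-counter estimation (PowerTune, Llano, Bulldozer): power is
# computed from digital event counts with fixed per-event costs, so
# every chip of a model gives the same answer for the same workload.
# Analog measurement (Intel's turbo, and per this thread GPU Boost):
# real sensors are read, so per-chip leakage variation changes the
# result. All numbers below are made up for illustration.

EVENT_COST_W = {"alu_op": 0.004, "mem_access": 0.010, "idle_cycle": 0.001}

def estimated_power_w(activity_counts, base_power_w=20.0):
    """Deterministic: same counters -> same estimate on every chip."""
    return base_power_w + sum(EVENT_COST_W[e] * n
                              for e, n in activity_counts.items())

def measured_power_w(dynamic_power_w, chip_leakage_w):
    """Non-deterministic across chips: leakage varies per ASIC."""
    return dynamic_power_w + chip_leakage_w
```

Two chips running the identical workload always agree under the first scheme, while under the second a low-leakage chip reports less power and therefore earns more boost headroom, which is exactly the reviewer-sample concern raised earlier in the thread.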