AMD: Speculation, Rumors, and Discussion (Archive)

And yes, I'm absolutely certain AMD cards get less hot than Nvidia ones, unless AMD has coolers that magically work better under full load while being quieter than Nvidia's at the same time. An air-cooled Fury, which is solidly faster than a 980 (non-Ti), is still several degrees cooler as well: http://www.anandtech.com/show/9421/the-amd-radeon-r9-fury-review-feat-sapphire-asus/17

As Dr Evil noted, you are comparing apples to oranges. The Asus Fury Strix would be better compared to a card using the same, or very similar, cooler. Unfortunately the closest you'll get is the Asus GTX 980 Ti Strix: identical cooler, but the 980 Ti board sports a considerable overclock (~20-25% actual boost under load and a modest memory bump) while the Fury Strix is clocked at AMD reference frequencies. Note that while the Fury has a lower card temperature - due in part, as HC noted, to the centralized nature of the Fiji+HBM componentry - it also uses a lot less power (by around 50W, thanks to the 980 Ti's OC) while generating more noise (44.9 dBA vs 41.2 dBA), indicating a more aggressive fan profile than the GTX 980 Ti card in the benchmark being used.
 
He's the one who wanted to know about heat, so, whatever. Per performance, AMD cards put out less heat than Nvidia cards, I mean...
First a pet peeve: can we just use the scientific terms, drop the term 'heat', and use 'power' instead? When you use the term 'heat' it makes me believe that you don't understand that heat and power are exactly the same thing. Stupid, I know, but you'd be surprised how common this is.

With that out of the way: I simply don't see it. If I look at hardware.fr, or really any other publication that measures power for the GPU only, not only does a Fury X consume considerably more than a GTX 980 Ti or Titan X, but its performance is equal or lower as well. So the perf/W is even worse. Maybe there are some special conditions where the Fury X performs much better relative to a Titan X, but I haven't seen those cases, and even then they're outliers.

... that's probably not that important for most people unless you're running a huge datacenter with expensive cooling. But then performance per watt isn't that important unless you're doing the same thing, and people go nuts over it for no reason.
Perf/W is much more interesting than absolute perf when you're talking about architecture. More perf can be achieved with brute force. You can't do that with perf/W or perf/mm2.
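To put rough numbers on those two metrics, here's a minimal sketch; the fps values are hypothetical placeholders (not benchmark results), while the power and die-size figures are the commonly quoted ones for Fiji and GM200:

```python
# Minimal sketch of perf/W and perf/mm^2 comparisons.
# The fps numbers are hypothetical placeholders; the power and
# die-size figures are the commonly cited ones for these chips.

cards = {
    "Fury X":  {"fps": 60.0, "watts": 300, "mm2": 596},
    "Titan X": {"fps": 62.0, "watts": 250, "mm2": 601},
}

for name, c in cards.items():
    perf_per_watt = c["fps"] / c["watts"]
    perf_per_mm2 = c["fps"] / c["mm2"]
    print(f"{name:8s} perf/W = {perf_per_watt:.3f} fps/W"
          f"   perf/mm^2 = {perf_per_mm2:.3f} fps/mm^2")

# With equal-or-lower fps at higher power, perf/W is necessarily worse:
# 60/300 = 0.200 fps/W vs 62/250 = 0.248 fps/W in this sketch.
```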

You also know perfectly well what voltage limited means, or rather power-draw limited if you really want. Power draw doesn't scale linearly with performance: a Fury X draws about 300 watts, roughly the highest a GPU can reasonably go, while a Titan X draws 250 watts.
That's the problem when you're not careful in formulating an argument: you're confusing those who do care about these things. And, TBH, I still don't understand what your point is. In fact, I'm now even more confused: on one hand, you claim that "per performance, AMD cards put out less heat than Nvidia cards" (IOW: perf/W). Yet at the same time, you claim it draws 50W more than the Titan X. And we know that the Fury X is not 20% faster than the Titan X, so there's a contradiction right there.
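The arithmetic behind that contradiction is one line: a 300W card must be at least 20% faster than a 250W card just to break even on perf/W. A quick sketch:

```python
# How much faster must a 300 W card be than a 250 W card to match perf/W?
fury_x_watts, titan_x_watts = 300, 250

required_speedup = fury_x_watts / titan_x_watts  # 1.2, i.e. +20%
print(f"Break-even speedup: {(required_speedup - 1) * 100:.0f}%")

# The Fury X is not 20% faster than the Titan X in the reviews cited
# above, so "less heat per performance" cannot hold for this pairing.
```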

And yes, I'm absolutely certain AMD cards get less hot than Nvidia ones, unless AMD has coolers that magically work better under full load while being quieter than Nvidia's at the same time. An air-cooled Fury, which is solidly faster than a 980 (non-Ti), is still several degrees cooler as well: http://www.anandtech.com/show/9421/the-amd-radeon-r9-fury-review-feat-sapphire-asus/17
And now, you're confusing 'heat' with 'temperature'?

Do you have a clue at all?
 
The biggest problem for AMD, to reiterate, is their lack of clockspeed headroom, even with added voltage. The best result for a Fury under LN2 is 1.4GHz, while Kepler cards could do that with added voltage and Maxwell does it without breaking a sweat. Efficiency then becomes a casualty, as AMD has to push clocks to the max to compete at stock; once the die-size limit has been reached there is of course no comparison when overclocked. Even if AMD doesn't improve its clocks but Nvidia falters with Pascal's, AMD would have more breathing room.
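For the record, the reason clock chasing costs efficiency so much: dynamic power scales roughly with C·V²·f, and higher clocks generally need more voltage. A toy sketch (the voltage/frequency pairs are invented for illustration):

```python
# Toy illustration of why clock chasing wrecks efficiency: dynamic power
# scales roughly as C * V^2 * f, and higher f usually needs higher V.
# The voltage/frequency pairs below are invented for the example.

points = [
    # (core clock in MHz, core voltage in V) -- hypothetical
    (1000, 1.10),
    (1100, 1.20),
    (1200, 1.30),
]

base_clock, base_volt = points[0]
for clock, volt in points:
    perf_gain = clock / base_clock                        # ~linear in f
    power_gain = (clock / base_clock) * (volt / base_volt) ** 2
    print(f"{clock} MHz @ {volt:.2f} V: "
          f"+{(perf_gain - 1) * 100:3.0f}% perf, "
          f"+{(power_gain - 1) * 100:3.0f}% power")

# 1200 MHz here buys +20% performance for roughly +68% dynamic power,
# which is why a chip pushed to its clock ceiling looks so inefficient.
```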

Secondly, AMD having a hardware scheduler makes them appear worse in GPU-only power consumption tests; system power consumption works out to be closer. It also depends on the manufacturer and the game used.

http://www.techspot.com/articles-info/1024/bench/Power_03.png

http://www.techspot.com/articles-info/1024/bench/Power_02.png
 
First a pet peeve: can we just use the scientific terms, drop the term 'heat', and use 'power' instead? When you use the term 'heat' it makes me believe that you don't understand that heat and power are exactly the same thing. Stupid, I know, but you'd be surprised how common this is.
Maybe AMD's engineers have added circumventing the principle of conservation of energy to their list of accomplishments :oops:
 
The biggest problem for AMD, to reiterate, is their lack of clockspeed headroom, even with added voltage. The best result for a Fury under LN2 is 1.4GHz, while Kepler cards could do that with added voltage and Maxwell does it without breaking a sweat. Efficiency then becomes a casualty, as AMD has to push clocks to the max to compete at stock; once the die-size limit has been reached there is of course no comparison when overclocked. Even if AMD doesn't improve its clocks but Nvidia falters with Pascal's, AMD would have more breathing room.

Secondly, AMD having a hardware scheduler makes them appear worse in GPU-only power consumption tests; system power consumption works out to be closer. It also depends on the manufacturer and the game used.

http://www.techspot.com/articles-info/1024/bench/Power_03.png

http://www.techspot.com/articles-info/1024/bench/Power_02.png


You can't say that is due to the "hardware scheduler" at all, nor have we seen any lick of evidence that the hardware scheduler helps AMD's Fiji over its competition. It seems to work well for GCN 1.0 and 1.1 cards, though, so that looks to be more down to code than hardware. And Hawaii uses a similar amount of power to Fiji but is nowhere near its performance, so no, the power usage differential isn't due to the inclusion of the hardware scheduler; it's due to design choices made for many parts of the chip by both IHVs, which make up the differences in power use.
 
Heat comes from work not done by transistor switching, as well as other efficiency losses. Now, while the same...
Maybe AMD's engineers have added circumventing the principle of conservation of energy to their list of accomplishments :oops:

Ok, yeah, got stupid there. My fault. But however they do it, their coolers are more efficient than Nvidia's, so temperature-limited vs power-draw-limited is still very much a thing, and therein AMD still wins because they have better coolers. If being power-draw limited were the exact same thing as being heat limited, we wouldn't need coolers, now would we? Or rather, we wouldn't be using them at all, and all dies would be the same area and have the same heat output density...
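To spell out the distinction being argued over: at steady state the die dissipates essentially all the power it draws as heat, and the cooler only sets the temperature at which that happens, roughly T_die ≈ T_ambient + P·θ, where θ is the cooling stack's thermal resistance. A minimal sketch with made-up θ values:

```python
# Steady-state die temperature: T_die ~= T_ambient + P * theta, where
# theta (degC per watt) is the thermal resistance of the cooling stack.
# A better cooler lowers theta (and thus temperature); it does not
# change the amount of heat produced, which equals the power drawn.
# The theta values below are made up for illustration.

def die_temp(power_w: float, theta_c_per_w: float, ambient_c: float = 25.0) -> float:
    """Approximate steady-state die temperature in degrees Celsius."""
    return ambient_c + power_w * theta_c_per_w

power = 275.0  # watts drawn (and therefore dissipated as heat)
print(f"Mediocre cooler (0.25 C/W): {die_temp(power, 0.25):.0f} C")
print(f"Better cooler   (0.18 C/W): {die_temp(power, 0.18):.0f} C")
# Same 275 W of heat in both cases; only the temperature differs.
```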
 
Heat comes from work not done by transistor switching, as well as other efficiency losses. Now, while the same...

Ok, yeah, got stupid there. My fault. But however they do it, their coolers are more efficient than Nvidia's, so temperature-limited vs power-draw-limited is still very much a thing, and therein AMD still wins because they have better coolers. If being power-draw limited were the exact same thing as being heat limited, we wouldn't need coolers, now would we? Or rather, we wouldn't be using them at all, and all dies would be the same area and have the same heat output density...

Admittedly, I'm a bit tired, but I have no idea what you're saying here.
 
Ok, yeah, got stupid there. My fault. But however they do it, their coolers are more efficient than Nvidia's
Yeah? When did that become a thing? I'm guessing that, for you, it was as soon as AMD ditched a reference-design air cooler. I presume you're going to conveniently exclude AMD's actual reference coolers for the HD 6900/7900/8900 and R9 290/290X cards.
Since there are very few AIBs that sell both AMD and Nvidia cards using the same cooling, and none that apply the same clocks (as a percentage of reference), the available pool is quite limited (i.e. zero). But, as an example, can you tell me why the MSI R9 390X Gaming (4.8% core OC / 1.7% mem OC) runs at 76°C, while the same cooler on the MSI GTX 980 Ti Gaming (18.9% nominal core OC / 1.2% mem OC) keeps the card to 72°C while being 2 dBA quieter, all at roughly equal power draw? The latter isn't a supreme metric in any case without knowing the GPU utilization for both cards in the benchmark being used and the power-draw estimation method employed.

I'm going to suggest that if you make some sweeping statement that AMD have better coolers than Nvidia, you might provide some examples. A reference-to-reference comparison might be a good place to start.
Admittedly, I'm a bit tired, but I have no idea what you're saying here.
Phew! I thought I was the only one!
 
The only thing I can add to this discussion is that comparing the exact same coolers on different GPUs is not ideal. MSI and ASUS are two vendors that sell for both camps, and when a situation comes up like GK110 vs Hawaii, where both have a similar TDP but a fair disparity in die sizes, a cooler designed to be shared between such chips will underperform by quite a great deal on the chip it was not primarily designed for, simply due to heatpipe contact with the die, or lack thereof. Evidence is out there for ASUS 290/X parts that use the same cooler as a 780/Ti. I'm not sure if this is still the case with the 390/X, as I only looked into it when buying my 290X. I have not researched any MSI parts, but as some anecdotal evidence, my MSI 290X Gaming has pretty horrendous cooling performance; whether this is due to the cooler design or one or more of many other factors I am unsure. I bought it simply because of the price/performance, and somewhat regret that other brands that sell only AMD cards were not offered at such prices in my region (around $50 off).

With the Fury die size being very much the same as GM200's, I wouldn't think this is a problem with Fury cards, but HBM is a wildcard that may mix things up.
 
The only thing I can add to this discussion is that comparing the exact same coolers on different GPUs is not ideal. MSI and ASUS are two vendors that sell for both camps, and when a situation comes up like GK110 vs Hawaii, where both have a similar TDP but a fair disparity in die sizes, a cooler designed to be shared between such chips will underperform by quite a great deal on the chip it was not primarily designed for, simply due to heatpipe contact with the die, or lack thereof.
Not really an issue with the example I provided. The MSI Gaming cards don't use a direct-touch heatpipe arrangement - the GPU contacts a contact plate to which the heatpipes are affixed.
[Image: cooler2.jpg]
 
I just refreshed my memory on the situation and found that the MSI 290X Gaming does perform quite well in reviews temperature- and noise-wise. After seeing the design, I can see why. Thanks for that.

I'm still unsure why mine hits 95°C most of the year while gaming. It's likely simply because of the often 30°C+ environments I game in. Either way it has made me wary for my future choices, whether it be performance per watt or more efficient cooling (closed-loop water etc.).
 
I'm still unsure why mine hits 95°C most of the year while gaming. It's likely simply because of the often 30°C+ environments I game in.
High ambient temps would be a dead giveaway, as would high humidity (thermal conductivity of air seems to decrease as humidity rises) - something I notice across the ditch in New Zealand with my system. If you haven't already set a custom fan profile for the card, I would suggest you do - the stock fan profile is geared more towards low noise than cooling. If the card is throttling (and at 95°C I guess it is) with a custom profile, you could always disassemble it and apply a better grade of TIM.
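For anyone wondering what a custom fan profile actually is: just a temperature-to-duty curve with interpolation between points. A minimal sketch (the curve points are invented, and deliberately biased toward cooling over noise, as suggested above):

```python
# Minimal sketch of a custom fan curve: map GPU temperature to fan duty
# cycle, interpolating linearly between points. The curve points below
# are invented and deliberately biased toward cooling over noise.

CURVE = [
    # (temperature in degC, fan duty in %)
    (40, 30),
    (60, 45),
    (75, 70),
    (85, 100),
]

def fan_duty(temp_c: float) -> float:
    """Return the fan duty (%) for a given GPU temperature."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            # Linear interpolation between the surrounding points.
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

print(f"{fan_duty(70):.1f}%")  # 61.7% - ramping well before 95 degC
```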
 
Is that the temp where it starts to throttle back the clocks?
RedVi, I take it you don't have AC?
 
95°C is the operating temp for Hawaii.
That is the throttling temp for the GPU, as we are all well aware (and so desirable that AMD haven't repeated the exercise). 95°C for the MSI Gaming card being discussed is hardly its intended operating temperature.
[Image: voltagetuning.jpg]
Is that the temp where it starts to throttle back the clocks?
RedVi, I take it you don't have AC?
In a nutshell, yes, although the monitoring is somewhat more complex (temp/board power limit/fan speed) than just temp throttling.
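A purely illustrative sketch of that multi-input monitoring, with invented limit values; the real firmware logic is more involved:

```python
# Purely illustrative: the card backs clocks off when any monitored
# limit is reached, not just on temperature. Limit values are invented.

TEMP_LIMIT_C = 95.0     # hard GPU temperature limit
POWER_LIMIT_W = 300.0   # board power limit
FAN_CEILING_PCT = 55.0  # stock profile caps fan speed for noise

def should_throttle(temp_c: float, board_power_w: float, fan_pct: float) -> bool:
    """True once a limit is hit and the fan can no longer ramp further."""
    temp_limited = temp_c >= TEMP_LIMIT_C and fan_pct >= FAN_CEILING_PCT
    power_limited = board_power_w >= POWER_LIMIT_W
    return temp_limited or power_limited

print(should_throttle(95.0, 280.0, 55.0))  # True: at temp limit, fan pegged
print(should_throttle(88.0, 250.0, 45.0))  # False: headroom remains
```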
 
Yes, I know it's more complex, I was just recalling the driver fix AMD did to fix throttling on reference 290/290X boards by upping the fan limit to lower temps.
 
Yes, I know it's more complex, I was just recalling the driver fix AMD did to fix throttling on reference 290/290X boards by upping the fan limit to lower temps.
I think the driver fix just allowed a more aggressive fan profile (especially for the 290) to stop it reaching the throttling point under less arduous usage scenarios. The 95°C is still a hard limit for the GPU, which, depending on which product the company is selling, is either "an optimal temperature" or an object of ridicule.
 
It's the optimal temperature for the company who doesn't wish to spend more on a better reference cooler ;)
 