AMD: R7xx Speculation

Status
Not open for further replies.
What I don't get is why the drivers out there, including the hotfix for the HD4850, are apparently not making the most of the fan speed and PowerPlay. Indeed, if RV770 is supposed to be super-intelligent about power consumption, why is it idling so hot and not clocking down very far?
Yes, the HD4870's idle consumption in particular really seems high. Apparently it downclocks the core to the same level as the HD4850, yet idle power consumption is still way higher. I doubt the difference is due only to the RAM, but from the numbers seen the HD4850 doesn't really seem to lower its voltage at all, so I'd suspect the HD4870 doesn't either, and since it (presumably) has a higher voltage to begin with, it has a higher idle power draw. You can only hope it's fixable by drivers and there isn't something broken in hardware preventing this...
 

There are PowerPlay discrepancies between the 4850 and the 4870, AFAIK. The 4850 supports PowerPlay in its entirety (core downclocking, downvolting and RAM downclocking), whilst the 4870 supports only core downclocking. Take it with a grain of salt, as I haven't recently inquired whether that is still the case.
 
Looks to be even or better with the 260 aside from lost planet.
I'm wondering about Lost Planet - what does it do differently than other games? Maybe it's got lots of shaders with serial dependencies? It's worth noting that it gets slower at higher resolutions as expected (thus no CPU/setup etc. bottleneck), but going from NoAA/NoAF to 4xAA/16xAF results in only a 2% drop or so on RV770...
 
I don't get why the Radeons are running so hot though...smaller process and fewer transistors.
Still using stupid backwards fan blades on the 4870 = need to run the fan slow to make it quiet...

The 4850 is using the same heatsink/fan as the 3850 with a whole heap more transistors, so it has a pretty good excuse, as long as the fan actually spins up enough when needed.
 
Actually, I think the HD4850's fan has a slightly larger diameter than the one found on the reference HD3850's cooling solution.
 
Something seems to be very wrong with the GT200 series; it could be drivers. It's not offering that much of an increase in performance for what it has under the hood. It is using a new code base for its drivers, so that might be the cause, but I'm skeptical.
What's the mystery? It's performing exactly as it should.

The GTX 260 has 22% lower texturing and setup speed and only 2% higher math speed than G92b, which we know is similar in speed to the 4850. The only advantage it has is bandwidth and efficiency improvements, so considering that it's actually doing quite well. Theoretically, the GTX 280 is 5-39% faster than the GTX 260, and the benchmarks show that. The 39% only happens when shader limited (including texturing and math).

The problem is that GT200 has lower perf/mm2/clock than G92 by design, and it didn't hit nearly the same clock speed either. The design choice was probably made because NV didn't expect ATI to improve perf/mm2 (after all, they already picked the low hanging fruit with RV670), so they thought they could gamble a bit on making their architecture better for GPGPU with DP, more registers, more math, etc.

I expect GT200b to be 20% smaller and 20% faster (by correcting mistakes, not just the process), but that's going to be some time from now -- well after R700 completes ATI's leap to the front at all price points above $150.
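Sanity-checking those ratios with the commonly quoted launch specs (the unit counts and clocks below are my assumptions, not from the post itself, so check them against a spec sheet):

```python
# Per-unit throughput ratios, GTX 280 vs GTX 260, using commonly quoted
# launch specs (assumed here, double-check against a spec sheet).
gtx280 = {"sps": 240, "shader_mhz": 1296, "tmus": 80, "core_mhz": 602, "mem_gbs": 141.7}
gtx260 = {"sps": 192, "shader_mhz": 1242, "tmus": 64, "core_mhz": 576, "mem_gbs": 111.9}

math_ratio  = (gtx280["sps"] * gtx280["shader_mhz"]) / (gtx260["sps"] * gtx260["shader_mhz"])
tex_ratio   = (gtx280["tmus"] * gtx280["core_mhz"]) / (gtx260["tmus"] * gtx260["core_mhz"])
setup_ratio = gtx280["core_mhz"] / gtx260["core_mhz"]   # setup runs at core clock
bw_ratio    = gtx280["mem_gbs"] / gtx260["mem_gbs"]

for name, r in [("math", math_ratio), ("tex", tex_ratio),
                ("setup", setup_ratio), ("bandwidth", bw_ratio)]:
    print(f"{name:9s} +{(r - 1) * 100:.0f}%")
```

With these numbers the spread comes out roughly +5% (setup) to +30% (math), so the exact upper end depends on which specs you plug in.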
 
Yeah the GT200 cards are now officially in no man's land price/performance wise.
You gotta be crazy and/or a fanboy to want to spend over twice as much for a card that's maybe 25% faster on average, and the GTX 260 is DOA.

This is somewhat similar to R580 vs. G71, but the build cost difference was much smaller there and NVidia didn't feel like maiming ATI as much as increasing margins.

I don't get why the Radeons are running so hot though...smaller process and fewer transistors.
It's doing the same work as GT200 in half the area, so its power density is roughly doubled. Better perf/watt can only make up for that a bit.

What I don't get is why the drivers out there, including the hotfix for the HD4850, are apparently not making the most of the fan speed and PowerPlay. Indeed, if RV770 is supposed to be super-intelligent about power consumption, why is it idling so hot and not clocking down very far?
Very odd indeed. Idle power shouldn't be much more than that of RV670.
 
I expect GT200b to be 20% smaller and 20% faster (by correcting mistakes, not just the process), but that's going to be some time from now -- well after R700 completes ATI's leap to the front at all price points above $150.
And possibly below; aren't there supposed to be masses of RV770 derivatives coming at different price points? At least RV740, RV730 and RV710 or something like that. :D
Is there any news on when those will be out?
 

I don't see any need to rush those out. AMD already has a cheap 55nm chip in RV670. I expect the R7xx variants to debut on 40nm some time in 1H 2009.
 
I expect GT200b to be 20% smaller and 20% faster (by correcting mistakes, not just the process), but that's going to be some time from now -- well after R700 completes ATI's leap to the front at all price points above $150.


That's possible.

They better be shooting for more than a 20% increase in performance for GT200b.
 
Read what I said just 3 pages back: what happened to CJ's 25% over the GTX?
Care to point out WTF you're talking about?

You were talking about Crysis without AA, right? Take a look at that review: these cards are neck and neck without AA, and with AA, guess what, as I said, the Radeon pulls ahead.
The only review out so far shows the 4870 ahead of the GTX 260 by a few percent with and without AA. The margin of victory doesn't change. The slides you said were fake also showed a similarly small margin of victory.
 
That review shows the power draw of the HD4870 to be higher than the GTX 280's at idle, and very slightly lower under load.
Ouch.
That's one and a half process generations without much improvement in performance/watt for DX9 workloads, and perhaps more significantly, increased power draw in absolute terms. Will this continue at TSMC's 40nm process as well, I wonder? Will the improvements in lithography be spent extending the feature set, with performance bought with increased power draw?
 

It was said in this thread that apparently the current drivers (Beta 8.6, Catalyst 8.6 and Hotfix 8.6) don't support the new PowerPlay features of the HD4 series.
 
Also, the 4870 is 28% faster than the 4850 at 1920*1200 (no AA) with the 20% core increase, so the extra memory bandwidth is clearly making a difference.
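To make the arithmetic explicit (the 625 and 750 MHz reference core clocks are my assumption): if performance scaled purely with the core clock, the gain would cap at 20%, so anything beyond that has to come from the faster memory.

```python
# If the 4870 only benefited from its core clock, the gain would cap at ~20%.
clock_4850, clock_4870 = 625, 750         # assumed reference core clocks (MHz)
clock_gain = clock_4870 / clock_4850 - 1  # 0.20
observed_gain = 0.28                      # from the review discussed above

# Residual speedup not explained by the core clock (points at GDDR5 bandwidth):
excess = (1 + observed_gain) / (1 + clock_gain) - 1
print(f"clock alone +{clock_gain:.0%}, observed +{observed_gain:.0%}, "
      f"residual +{excess:.1%}")
```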
 
Care to point out WTF you're talking about?

The only review out so far shows the 4870 ahead of the GTX 260 by a few percent with and without AA. The margin of victory doesn't change. The slides you said were fake also showed a similarly small margin of victory.


A few? That's pretty much equal, within 3%. The graph shows more than a few: up to 10-15% faster, depending on the renderer used.

And the settings are different anyway: very high vs. high. So the closest one we can look at is the DX9 result from the graph.
 
Didn't I read somewhere that people have been able to reduce the chip temperature a considerable amount by replacing the TIM under the cooler and re-attaching firmly? I'm pretty sure I read this in a forum somewhere unless I'm just imagining it?

Nothing to do with the power consumption of the cards but the high temperature of the chip is obviously of concern to some.
 
That's possible.

They better be shooting for more than a 20% increase in performance for GT200b.
Can they really do much more than that? 20% higher clock would surely increase power consumption, and it would start to approach the clock speed of all their other designs. Maybe 30% at most, but you gotta think power/heat will become serious problems at that point. Reaching G92b clocks will be tough.

Looking at how much time has elapsed since G80's launch and how minimal the changes are in GT200 besides the numbers of various functional units, I'm sure they've spent plenty of time optimizing with a fine-toothed comb already.

Of course, I thought the same logic would apply to RV770 and we all know how that turned out...
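On the clock-vs-power point above: dynamic power scales roughly as f x V^2, and clock bumps usually drag voltage up with them. A toy estimate (the 5% voltage step is purely illustrative, not a GT200b figure):

```python
# Toy dynamic-power model: P_dyn ~ C * V^2 * f, with capacitance C held constant.
def relative_power(freq_scale: float, volt_scale: float) -> float:
    return freq_scale * volt_scale ** 2

# 20% clock bump at the same voltage:
print(f"{relative_power(1.20, 1.00):.2f}x power")
# 30% clock bump that needs, say, 5% more voltage (illustrative):
print(f"{relative_power(1.30, 1.05):.2f}x power")
```

So even a modest voltage bump on top of a 30% clock increase pushes power up by well over 40%, which is why reaching G92b clocks looks tough.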

A few? That's pretty much equal, within 3%. The graph shows more than a few: up to 10-15% faster, depending on the renderer used.
That 10-15% is often a 2 fps difference between integer-rounded numbers. Let it go. A slightly different timedemo is all you need to explain that. It isn't fake.

I was also wondering about the 25% comment you made, but whatever. It's not important.
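The rounding point is easy to demonstrate: at low frame rates, a fixed 2 fps gap between integer-rounded numbers already reads as a double-digit percentage.

```python
# A fixed 2 fps gap looks very different depending on the base frame rate.
for a, b in [(15, 17), (20, 22), (40, 42)]:
    print(f"{a} vs {b} fps -> +{(b - a) / a:.0%} apparent lead")
```

At 15 fps that 2 fps reads as a 13% lead; at 40 fps the same gap is only 5%.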
 
Didn't I read somewhere that people have been able to reduce the chip temperature a considerable amount by replacing the TIM under the cooler and re-attaching firmly? I'm pretty sure I read this in a forum somewhere unless I'm just imagining it?

Nothing to do with the power consumption of the cards but the high temperature of the chip is obviously of concern to some.
Actually, it's not 100% true that it's got nothing to do with power draw. If it runs cooler, it should also use less power, since power draw depends on temperature (leakage current increases with temperature, among other effects). Not sure how much of a difference it will make though; probably not that much, but it might be measurable :).
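One way to picture the temperature-power link: leakage current grows roughly exponentially with die temperature, so a cooler chip genuinely draws less at the wall. The constants below are made up for illustration, not measured RV770 numbers.

```python
import math

# Toy leakage model: P_leak(T) = P0 * exp(k * (T - T0)).
# P0 (watts), k (1/degC) and T0 (degC) are illustrative, not measured values.
def leakage_w(temp_c: float, p0: float = 20.0, k: float = 0.02, t0: float = 50.0) -> float:
    return p0 * math.exp(k * (temp_c - t0))

hot, cool = leakage_w(90.0), leakage_w(70.0)
print(f"90C: {hot:.1f} W, 70C: {cool:.1f} W, saving {hot - cool:.1f} W")
```

With these (made-up) constants, dropping the die from 90C to 70C saves around 15 W of leakage, which would indeed be measurable.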
 
Sure, power draw has some effect, but chip temperature isn't entirely tied to it; it depends a LOT on the cooler and how good the contact with the cooler is. This is chip temp we're talking about, not the heat the card dumps into the case as a whole.
 