AMD: R7xx Speculation

In Call of Duty 4 the 512MB 4870 is faster than the GTX 280 until 4xAA is applied at high resolutions. Memory or fillrate bound? What makes RV770 so superior in this game?

On the question of temps, you can get an Accelero S1 for like $20 and enjoy silence while getting much lower temps (and probably a better overclock), and still spend less in total than a GTX 260.
 
Didn't I read somewhere that people have been able to reduce the chip temperature by a considerable amount by replacing the TIM under the cooler and re-attaching it firmly? I'm pretty sure I read this in a forum somewhere, unless I'm just imagining it.
I don't know about the high-end cards but my el cheapo passively cooled AH3650 used a thermal pad of abysmal quality. Replacing it with some Arctic Silver Céramique yielded a reduction of 10C at idle and 15C (!) under load. With active cooling it should have made an even larger difference.
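For a sense of scale: the gain from swapping TIM follows directly from the thermal resistance of the interface layer (ΔT = P × θ). A rough back-of-the-envelope sketch, with all figures assumed for illustration rather than measured:

```python
# Back-of-the-envelope: die temperature drop from a better TIM layer.
# All figures below are illustrative assumptions, not measurements.

heat_w = 30.0    # assumed heat flowing through the cooler under load, watts
r_pad = 0.9      # assumed thermal resistance of a poor thermal pad, degC/W
r_paste = 0.4    # assumed resistance of a decent paste layer, degC/W

# Temperature drop across the interface is P * theta, so the improvement is:
delta_t = heat_w * (r_pad - r_paste)
print(f"Expected die temperature drop: ~{delta_t:.0f} degC")  # ~15 degC
```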
 
It was said in this thread that apparently the current drivers (Beta 8.6, Catalyst 8.6 and Hotfix 8.6) don't support the new PowerPlay features of the HD 4000 series.
The review also indicated much higher than necessary clocks at idle, so this is probably true. However, the 33W (!!) difference between the 4850 and the 4870 at idle indicates that there is more to it; my guess is that they have (as usual) pumped up the voltages for the 4870.
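For a rough sense of how much a voltage bump alone could account for: dynamic power scales roughly with f·V². A sketch with assumed figures, where only the 625/750 MHz core clocks are published specs and both cards are assumed to idle near their 3D clocks, as the review suggests:

```python
# Rough sketch: how clock and voltage differences could explain the idle gap.
# P_dynamic ~ f * V^2. Only the 625/750 MHz core clocks are published specs;
# the 30 W baseline and the 25% voltage bump are illustrative assumptions.

def scale_dynamic_power(base_w, f_ratio, v_ratio):
    """Scale a baseline dynamic power by clock and voltage ratios."""
    return base_w * f_ratio * v_ratio ** 2

p_4850_idle = 30.0  # assumed 4850 idle draw, watts
p_4870_idle = scale_dynamic_power(p_4850_idle, 750 / 625, 1.25)

print(f"Estimated 4870 idle draw: ~{p_4870_idle:.0f} W "
      f"(+{p_4870_idle - p_4850_idle:.0f} W over the 4850)")  # ~56 W (+26 W)
```

The remainder of the 33W gap could plausibly come from GDDR5 versus GDDR3 and from higher leakage at the raised voltage.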

PowerPlay does not affect power draw under load though, which is what ultimately determines how you need to dimension the cooling of the card and the rest of the system, and of course what the PSU needs to be able to deliver.
Personally, I'm just not buying a 12 mpg car, no matter how large its trunk. :)
 
Anyway, I'm sold on the HD4870 (two of them, in fact). After seeing how well they perform against G200, I'm utterly gobsmacked at how well DAAMiT have turned the ship around since R600. I mean, we're talking about a $299 card sometimes going head-to-head with a $649 card!


It kind of makes you wonder what the hell AMD's CPU division is doing... and why they are so utterly abysmal right now.
 
Pardon if someone already posted it: a fairly large HD4870 review with benchmarks, translated via Google!

http://66.102.9.104/translate_c?hl=...iew.com/topic/2008-06-24/1214282167d9273.html

This is actually an excellent review, shaming many western sites: clear, concise, copious benchmarks with clearly labeled game settings (it's amazing how many reviews fail to even mention what DX version or which settings each game was benchmarked under), and very handy summary +/- comparisons against all its competitors.

Very few western sites will deliver all that in one review, if any.


It also tells me that the 4870 is only around 10% slower than the GTX 280. If that holds up, I don't see how $649 can stand...
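Quick sanity check on what that means for price/performance (assuming the review's ~10% deficit and the $299/$649 launch prices):

```python
# Price/performance check, assuming the 4870 lands at ~90% of GTX 280
# performance (per the review above) at $299 vs. $649 launch pricing.

perf_4870, price_4870 = 0.90, 299
perf_gtx280, price_gtx280 = 1.00, 649

ratio = (perf_4870 / price_4870) / (perf_gtx280 / price_gtx280)
print(f"4870 delivers ~{ratio:.1f}x the performance per dollar")  # ~2.0x
```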
 
The power draw of the 4870 seems rather high, since ATI claimed to have designed for power rather than performance.

Idle is almost certainly because PowerPlay isn't working right, but the load draw seems quite high.

Temperature is entirely due to the cooler (and the fan running slowly), so it's not really too concerning.
 

ATI claims to have designed for perf/watt, perf/area and overall efficiency. I've never heard any of their representatives say they did NOT design for performance, and the way the cards perform seems to indicate that performance was very much on their minds.
 
My 280s drop to 301MHz on the core at idle. According to EVGA's Precision, they are sitting at 41C and 43C ATM.

Once PowerPlay is operating as intended, the power draw and temp at idle on the 4870s should be much more to everyone's liking. :smile:
 
Performance on its own is just OK. It's the context of price this needs to be taken in, and then it's nothing short of stellar!

It's unlikely NV will bring GTX 200-series prices down to within $50 of the corresponding 4800, but if they did they could claim victory again.

My point is that this isn't an all-conquering victory like R300 was. ATI haven't made a GPU that's so fast that NV just can't keep up with it. They have made a slower GPU, but they are selling it for a much, much lower price, hence the huge victory.

Where the real achievement on ATI's part comes in is that they can actually afford to sell it at this much lower price, because it's so damned efficient in comparison.

It does have a process advantage though, so GT200b may still upset things in the ATI camp. It's still faster, after all, so if they can bring its size and hence cost down into regions approaching the 4800 through process parity, then we may have a good fight on our hands.
 
http://www.expreview.com/img/topic/hd4800/rv770ar-4.jpg RV770 arch
http://www.expreview.com/img/topic/hd4800/r600-new.jpg R600 arch
Jawed, please start to study the differences between the two chips, for us noobs.

I'll answer instead :). Well, these diagrams were leaked before, so there's nothing really new to see here.
I don't even know where to start describing all the differences - I don't think I'm alone in not having expected that many changes.
1) TMU organization. No longer shared across arrays; each array has its own quad TMU. I'll bet the sampler thread arbiter had to change along with that.
2) The TMUs themselves changed. While they always had separate L1 caches (I think - the picture is misleading), the separate 4 TA and point-sampling fetch units are now gone (thus one TMU is 4 TA, 16 fetch, 4 TF instead of 8 TA, 20 fetch, 4 TF). Also, early tests indicate they are no longer capable of sampling fp16 at full speed (dropping to half rate, and to one quarter at fp32, IIRC) - see the sketch below.
3) ROPs. They now have 4xZ fill capability (at least in some situations) instead of just 2xZ. The R600 picture shows a fog/alpha unit which is now gone, though I doubt it was really there in the first place (it doesn't make sense; that should be handled in the shader ALUs). The picture also indicates a shared color cache for R600; I don't know if that was true, however. Could be, though (see next item).
4) No more ring bus. Clearly, with RV770 the ROPs are tied to memory channels (just like NVIDIA's G80 and up), and there are per-memory-channel L2 texture caches. Instead of one ring bus, there now seem to be different buses or crossbars for different kinds of data (it's got a "hub", it's got some path for texture data, etc.).
5) Other stuff like the local data store, the read/write cache, etc.

That seems like the most important stuff to me, architecture-wise (and of course it got 10 shader arrays instead of 4, too...).
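To put item 2 in numbers, here's a quick sketch of the filtering rates those early tests would imply, assuming 10 arrays x 4 TMUs = 40 TMUs at the HD4870's 750 MHz core clock (the half/quarter rates are the early-test figures mentioned above, not confirmed specs):

```python
# Rough filtering-rate implications of item 2, assuming 10 arrays x 4 TMUs
# = 40 TMUs at the HD 4870's 750 MHz core clock, and the half/quarter-rate
# behaviour from the early tests (not confirmed specs).

tmus = 10 * 4
core_mhz = 750

int8_gtexels = tmus * core_mhz / 1000     # full rate for int8 bilinear
fp16_gtexels = int8_gtexels / 2           # half rate per early tests
fp32_gtexels = int8_gtexels / 4           # quarter rate per early tests

print(f"INT8 bilinear: {int8_gtexels:.1f} GTexels/s")   # 30.0
print(f"FP16 bilinear: {fp16_gtexels:.1f} GTexels/s")   # 15.0
print(f"FP32 bilinear: {fp32_gtexels:.1f} GTexels/s")   # 7.5
```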
 
ATI claims to have designed for perf/watt, perf/area and overall efficiency. I've never heard any of their representatives say they did NOT design for performance, and the way the cards perform seems to indicate that performance was very much on their minds.

Semantics. Perhaps I should have worded it as efficiency _over_ performance, which translates to performance/watt.

I don't think anyone would have said ATI didn't improve performance over R600/RV670, even before the cards came out - just that it wasn't the absolute priority (i.e., getting the maximum speed, period).
 
Heh, this back-and-forth between the two competitors is kinda fun to watch. TG Daily is reporting that AMD will allow partners to overclock their 4850s to combat the GeForce 9800 GTX+, and these are set to start rolling out in the second week of July.
"AMD is preparing an answer to Nvidia's recently released GeForce 9800GTX+ card. Overclocked Radeon 4850 cards are set for an introduction in the second week of July."
 
It does have a process advantage though, so GT200b may still upset things in the ATI camp. It's still faster, after all, so if they can bring its size and hence cost down into regions approaching the 4800 through process parity, then we may have a good fight on our hands.
I don't think it will really come into regions approaching the 4870 in price (well, the current price maybe, but I'd bet AMD could go lower if G200b came too close). The die will still be almost twice as large, and the board still more complicated (due to the 512-bit bus); the only thing cheaper is the RAM (though I've got no idea how much of a premium GDDR5 carries) - assuming it's still using GDDR3 only.
Nvidia basically got 10% higher clocks at the same power draw for G92 with the die shrink; maybe they can get slightly more than that with GT200. Also, I'd expect they could sell versions with 9 multiprocessors enabled instead of 8 without really losing many dies (that would be similar to the G80-G92 transition, where the "crippled" G80 had 6 and the "crippled" G92 had 7). To save some cost (not much, though...) and to differentiate it a bit more from the full part, maybe it would make sense to scrap one more memory channel instead. Such a part should overall be faster than the HD4870, I reckon, hence a price premium would be OK.
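A rough sketch of what such a salvage part might look like on paper. Everything here is speculative: 9 of 10 clusters enabled, 7 of 8 memory channels, GTX 280-class GDDR3, and the ~10% shrink headroom mentioned above applied to the shader clock:

```python
# Speculative spec sheet for the salvage G200b part described above.
# Only the GTX 280 baseline figures (1296 MHz shader clock, 2214 MHz
# effective GDDR3, 24 SPs per cluster) are published specs; the rest
# is the guesswork from the post.

tpcs, sps_per_tpc = 9, 24              # 9 of 10 clusters enabled
shader_mhz = 1296 * 1.10               # assumed ~10% uplift from the shrink
mem_channels, channel_bits = 7, 64     # one of eight channels scrapped
gddr3_mhz_effective = 2214             # assumed GTX 280-class GDDR3

sps = tpcs * sps_per_tpc                              # 216 shaders
gflops = sps * 3 * shader_mhz / 1000                  # MAD+MUL dual issue
bandwidth_gbs = mem_channels * channel_bits / 8 * gddr3_mhz_effective / 1000

print(f"{sps} SPs, ~{gflops:.0f} GFLOPS, {mem_channels * channel_bits}-bit "
      f"bus, ~{bandwidth_gbs:.0f} GB/s")  # 216 SPs, ~924 GFLOPS, 448-bit, ~124 GB/s
```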
But G200b isn't here yet...
 
Well, with AMD calling G200 "power hungry", you could argue perf/watt isn't really any better with RV770 (at least if you compare the HD4870 vs. the GTX 280). No doubt about perf/area, though...

The ATI slide was FLOPS/watt. There's a definite advantage there, but ATI does need to get the idle consumption down to where it should be.
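For reference, the FLOPS/watt gap that slide points at, using peak launch-spec rates and commonly cited board TDPs (peak FLOPS rather than delivered performance, so treat it as indicative only):

```python
# FLOPS/watt comparison from launch specs. TDPs are the commonly cited
# board figures (160 W HD 4870, 236 W GTX 280); FLOPS are theoretical
# peaks, not delivered performance.

hd4870_gflops = 800 * 2 * 750 / 1000    # 800 SPs, MAD, 750 MHz -> 1200
gtx280_gflops = 240 * 3 * 1296 / 1000   # 240 SPs, MAD+MUL, 1296 MHz -> ~933

print(f"HD 4870: {hd4870_gflops / 160:.1f} GFLOPS/W")   # ~7.5
print(f"GTX 280: {gtx280_gflops / 236:.1f} GFLOPS/W")   # ~4.0
```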
 