AMD RV770 refresh -> RV790

Conspiracy theory: this was a quick fix after their attempts to make a 40nm version failed or proved not worth the bother.
(an optimistic view)
Or perhaps the RV800 design was coming along so well on the 40nm node that reworking the current generation wasn't worth the bother either way. ;)
 

I wouldn't call that perspective optimistic, but rather realistic, and that goes for both IHVs.
 
http://www.xbitlabs.com/articles/video/display/radeon-hd4890.html
"This is a very good result for the card that wasn't initially supposed to compete against the single-processor flagship product from the enemy camp."
Note though that xbitlabs is testing with "high quality" driver options. Not wanting to start a fight here over whether that's fair or not, but some other reviews will use default options, which tends to help Nvidia's cards more than ATI's (since they can't do "perfect" aniso filtering even with high quality). It might make just enough of a difference that even the OC cards can only compete with the GTX 285.
 
Just a quick update. RV790 does *not* support burst memory reads, we had an error in our documentation that will be fixed and updated shortly, sorry about that. Y'all are just too quick scanning through things and finding nuggets like that. ;-) There are some tweaks in RV790, but burst reads is not one of them.
Oh, that's interesting. Reviews now mention it supports memory bursts because of those docs.
I guess there's a good reason it doesn't support this (does it have to do with the 256-bit memory bus, or what?).
 
Looks like it is available only for the new 40nm parts (RV740 and its derivatives), so we should expect this feature in RV800 series for the high-end SKUs.
 
http://www.xbitlabs.com/articles/video/display/radeon-hd4890.html

In this review the HD4890OC (900/3900) beats the GTX 285 in 11 out of 16 tests, with 1 draw (2560x1600). Pretty impressive! :)

The OC is actually 1000/4800. It's their own overclock on the regular card, not the standard ATI OC of 900/3900.

So the OC scores shouldn't be paid too much attention, since it's not apples to apples; for a real comparison you would want to overclock the 285 as well.

Comparing to the standard 4890 at my playing resolution of 1920x1200 and ignoring synthetics, the scores stand at 8 wins for the 285, 5 wins for the 4890 and 2 draws.

So overall the 285 is still the fastest single GPU, but not by much, and the 4890 is extremely impressive given its price advantage and definitely a worthy alternative.

The real decider will be what kind of factory overclocks we see on the 4890 and what they sell for compared to the stock 285. It does seem that ATI could have a major winner on their hands here, though. Kinda wish I was looking to buy at the moment, but I'm holding out for the DX11 generation.
 
I guess there's a good reason it doesn't support this (does it have to do with the 256-bit memory bus, or what?).
There are no logic design changes to HD 4890, the changes are done from a physical design point of view.
 
Decap ring (image: decap_c_800x600.jpg)
 
There are no logic design changes to HD 4890, the changes are done from a physical design point of view.
Ok, thanks for the clarification. Still, considering the timeframe and the fact that you need a new chip anyway, it seems slightly odd to me that you wouldn't also include some minor logic changes (especially those already implemented in other chips, so there shouldn't be much risk). I guess that feature won't really help performance that much, then.
 
I'm seeing conflicting numbers; some say it's (much) better than 4870, some say worse. For example:


http://www.xbitlabs.com/articles/video/display/radeon-hd4890_6.html#sect0

vs


http://www.pcper.com/article.php?aid=684&type=expert&pid=11
Xbitlabs tested a PowerColor reference board. I'm not sure, but it seems that Asus, HIS, and other OC boards have a different BIOS, and PowerPlay doesn't work at all. Even OC versions of the HD4850 don't fully underclock in 2D, only to 500MHz, so power consumption is much worse.

I hope we'll get some hint from ATI as to why the majority of boards don't have the promised lowered power consumption.
 
Hmmmm, power numbers seem to be all over the place. Anandtech shows idle power to be slightly lower than the 4870 and load power to be almost the same as the 4870 1GB.

I wonder if Powercolor is doing something different to get such low power consumption numbers in the Xbit article. Anandtech used a HIS model.

I can't imagine that's the case, though, as all launch cards should be using reference boards.

Strange. It's certainly tempting to get a 4890 1 gig to replace my 4870 512 if these power numbers are true. Although at 249 USD, I'm wondering if I'm just better off waiting for DX11 cards.

And compared to its direct competitor, it seems to generally be similar to or faster than the GTX 275. At least in the Anandtech article.

And ouch. Nvidia asked, quite profusely, for Anandtech to mention PhysX and to dock ATI points for not having it. Ouch. Totally opposite effect, I guess. Be careful what you ask for... you might just get it.

Regards,
SB
 
And ouch. Nvidia asked, quite profusely, for Anandtech to mention PhysX and to dock ATI points for not having it. Ouch. Totally opposite effect, I guess. Be careful what you ask for... you might just get it.
I've just read the Sacred 2 chapter of the Anandtech review. If nVidia asked for PhysX tests (and quite probably they did), that's an epic fail :mrgreen:
 
And ouch. Nvidia asked, quite profusely, for Anandtech to mention PhysX and to dock ATI points for not having it. Ouch. Totally opposite effect, I guess. Be careful what you ask for... you might just get it.

Yeah, I don't get that. For the most part PhysX is still just a promise, and while it's fine to talk about its potential, it's way too early to be forcing it down reviewers' throats. If it were out there in a lot of games and people were ignoring it, that would be another story. But Mirror's Edge and Sacred aren't enough for Nvidia to be so belligerent about including it in reviews.
 
Yeah, I don't get that. For the most part PhysX is still just a promise, and while it's fine to talk about its potential, it's way too early to be forcing it down reviewers' throats. If it were out there in a lot of games and people were ignoring it, that would be another story. But Mirror's Edge and Sacred aren't enough for Nvidia to be so belligerent about including it in reviews.

What I got from Anand is that PhysX is useless/doomed and Havok is the future. PhysX is only supported by one vendor (thus won't be widely supported in games), while Havok will be supported by all three.
 