ATI RV740 review/preview

xbitlabs (which is the only site measuring just card power?) got some quite low numbers: http://www.xbitlabs.com/articles/video/display/radeon-hd4770_5.html#sect0
(their 3D load figure is too low, and the 3DMark06 SM3.0 test isn't peak power, but that applies to all cards, of course).

I also smell a 4790 (higher voltage and clocks) and a 4750 (cheap/low-power board, DDR3) once yields improve..

Komplett has a lot of them here at ~100 EUR (incl. 25% VAT..); seems pretty sweet too (a bit more than 4830 prices, less than the 9800 GT).
 

I'm sure that's already been mentioned hasn't it?

Yeah, definitely potential here for a 4790, but I'm sure AMD want to shift all their RV770 stock before introducing such an SKU, as it'd instantly make them all but obsolete.
 
xbitlabs (which is the only site measuring just card power?) got some quite low numbers: http://www.xbitlabs.com/articles/video/display/radeon-hd4770_5.html#sect0
(their 3D load figure is too low, and the 3DMark06 SM3.0 test isn't peak power, but that applies to all cards, of course).
They are not the only ones (though they got the lowest numbers for some reason - OK, the reason for the lower load figure is that they're not using FurMark, but their number is a fair bit lower at idle too).
http://www.pcgameshardware.com/aid,...s-HD-4850-und-Geforce-9800-GT/Reviews/?page=2
http://ht4u.net/reviews/2009/amd_radeon_hd_4770/index11.php

Both of these are actually close to each other (a little more than 30 W at idle, whereas xbitlabs got below 18 W). I wonder, though, if xbitlabs got a different BIOS - the ht4u article mentions that the memory (which is running 20% below spec) is actually overvolted by 10%.
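As a rough sanity check: dynamic power scales roughly linearly with frequency and quadratically with voltage (P ∝ f·V²). A minimal sketch, using the clock/voltage deltas from the reviews above with everything normalized to spec (no absolute wattages are implied):

```python
# Rough sketch: relative GDDR5 dynamic power under the clock/voltage
# combinations discussed above, using the P ~ f * V^2 rule of thumb.
# All figures are normalized to spec clock at spec voltage.

def relative_dynamic_power(freq_ratio, volt_ratio):
    """Dynamic power scales ~ linearly with frequency, quadratically with voltage."""
    return freq_ratio * volt_ratio ** 2

# ht4u's card: memory 20% below spec, but overvolted 10%
overvolted = relative_dynamic_power(0.8, 1.1)   # ~0.97 -> barely any saving
# hypothetical xbitlabs card: same underclock, stock voltage
stock_volt = relative_dynamic_power(0.8, 1.0)   # 0.80
```

By that rule of thumb, a 20% memory underclock buys almost nothing if the memory is also overvolted 10%, while the same underclock at stock voltage saves a real chunk of memory power - which would at least point in the same direction as the idle-power gap between the reviews.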
 
I'm sure that's already been mentioned hasn't it?

Yeah, definitely potential here for a 4790, but I'm sure AMD want to shift all their RV770 stock before introducing such an SKU, as it'd instantly make them all but obsolete.
It wouldn't make the 4870 obsolete. Though yes, a HD 4790 with something like 800/950 clocks would be all that's needed to make the HD 4850 completely obsolete. Then again, you could argue the 4870 is quite obsolete next to the 4890 already, in which case RV770 would indeed no longer be needed.
 
I guess we'll see a new 4870 SKU as the RV790 salvage part by then, which would also allow a cheaper board due to the reduced TDP. It wouldn't surprise me if RV770 is EOL already..
And yes, the 4750 is known already; my point was that it will have to wait for the chip's unit cost to go down (as its performance would be hampered by DDR3).
 
Depends on your definition of "SKU" :) But yes, I meant a revised board (chip pinout, power-management cost reduction), with the same specs (except TDP) and the same name. They'd better keep the specs (instead of doing a "4860") to keep it straight against the GTX 260.
 
Expreview updated their article with CF test results:

[Image: Expreview HD 4770 CrossFire benchmark chart - 4770cf51.png]


Complete domination throughout the spectrum, and the pair of 4770s actually has less usable memory than the 4890 (CrossFire mirrors data, so 2 x 512 MB behaves like 512 MB)!

Edit:

ASUS EAH4770 - up to 971MHz w/ voltage tweak!
An update to the article:
Update: ATI not thrilled
Update: ATI has officially confirmed that it does not allow partners to overclock the HD 4770, at least not for the time being.
LoOoL -- ATI got scared of its own ownage! :LOL:

p.s.: Now, where is Dave Almighty to save this one too?
 
I would still rather have the single 4890, though, to avoid all the standard dual-GPU issues. It's not like a single 4890 isn't more than quick enough to handle anything out there anyway.. :D Damn, I would like one of those!
 
Complete domination throughout the spectrum, and the pair of 4770s actually has less usable memory than the 4890 (CrossFire mirrors data, so 2 x 512 MB behaves like 512 MB)!

Hmmmm, I wouldn't call that domination. After all, it's not like the 4890 is unplayable in any of those scenarios so you're actually getting nothing in exchange for multi-GPU woes. It would be much more interesting to see whether 4770 CF excels where the 4890 falters.
 
So, AMD still isn't controlling memory clocks to minimise idle power usage:

http://www.pcgameshardware.com/aid,...s-HD-4850-und-Geforce-9800-GT/Reviews/?page=2


Jawed

Yeah, I actually have a profile to lower the memory on the 4890 down to 490 MHz in order to save power, however...

While you can change the GPU clock up, down, or whatever, even while running a game with no one the wiser...

You can't seem to do that with the memory speed. Any time the memory speed is adjusted, the display blanks out for a split second and then re-renders everything.

I'm not sure if this is an issue with the chip (and thus we'd have to wait for a complete rework) or an issue with GDDR5.

So that would explain why power-saving mode is able to drop the GPU clock to 240 MHz (yay), and I believe it also adjusts GPU voltage, but it isn't allowed to touch the memory speed.

So my workaround for now is to just manually activate the profile when I'm done gaming. Still, even if I don't, at stock settings this card uses less power at idle than my launch 4870/512 does.
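To put rough numbers on that asymmetry: with the usual P ∝ f·V² rule of thumb, the core-clock drop described above dominates the savings. The 850 → 240 MHz clocks match the post; the voltages below are assumed placeholders, not the card's actual rails:

```python
# Rough sketch: relative dynamic power when clocks/voltages drop,
# using the P ~ f * V^2 rule of thumb. Normalized to the full-3D state.

def dyn_power_ratio(f_low, f_high, v_low, v_high):
    """Dynamic power scales ~ linearly with clock, quadratically with voltage."""
    return (f_low / f_high) * (v_low / v_high) ** 2

# Core: 850 -> 240 MHz, plus an ASSUMED 1.3 V -> 1.0 V voltage drop
core = dyn_power_ratio(240, 850, 1.0, 1.3)   # ~0.17 of full-load dynamic power
# Memory stuck at full clock and voltage: no dynamic saving at all
mem = dyn_power_ratio(975, 975, 1.5, 1.5)    # 1.0
```

Which is why an untouched 975 MHz GDDR5 interface can end up dominating the idle budget even while the core is parked at 240 MHz.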

Regards,
SB
 
What kind of results are you seeing just overclocking memory?
We didn't try that, due to time constraints. Sorry.

They are not the only ones (though they got the lowest numbers for some reason - OK, the reason for the lower load figure is that they're not using FurMark, but their number is a fair bit lower at idle too).
http://www.pcgameshardware.com/aid,...s-HD-4850-und-Geforce-9800-GT/Reviews/?page=2
http://ht4u.net/reviews/2009/amd_radeon_hd_4770/index11.php

Both of these are actually close to each other (a little more than 30 W at idle, whereas xbitlabs got below 18 W). I wonder, though, if xbitlabs got a different BIOS - the ht4u article mentions that the memory (which is running 20% below spec) is actually overvolted by 10%.

Plus, there's the German c't magazine, which also got 30-ish watts. I've already hinted at it in another forum, but maybe, just maybe, Xbit's use of Windows XP plays a role.
 
Plus, there's the German c't magazine, which also got 30-ish watts. I've already hinted at it in another forum, but maybe, just maybe, Xbit's use of Windows XP plays a role.
I dunno, it looks more like a BIOS issue to me (like with the different 4870 boards).
That said, the 18 W figure xbitlabs is getting is really low, though having seen the idle power results with underclocked memory at PCGH and ht4u, I guess that's about what I'd expect with underclocked AND non-overvolted memory... Do you need to flash the BIOS to adjust memory voltage?
 
But this surely works with the mobile version, right?

Does the mobile version also adjust memory speed? And if so, is it using GDDR5?

That could help determine whether it's something in the chip architecture or possibly something about how GDDR5 works. Or just the RV770 <-> GDDR5 interface.

Regards,
SB
 
I believe ATI might actually take the risk with the Mobility version and tolerate the flicker. On desktop versions it might be quite a decisive factor against choosing the card (it has to redraw? AWW!), but if it's done right on the laptop versions, people won't bicker as much.


More interested in how the G/DDR3 (600-800 MHz) version performs.
 
I dunno, it looks more like a BIOS issue to me (like with the different 4870 boards).
That said, the 18 W figure xbitlabs is getting is really low, though having seen the idle power results with underclocked memory at PCGH and ht4u, I guess that's about what I'd expect with underclocked AND non-overvolted memory... Do you need to flash the BIOS to adjust memory voltage?

I think so, yes.


But WRT the blanking screen at different memory clocks, it just occurred to me: could this be related to GDDR5's training algorithm needing to be redone for a different speed? Surely the training offsets aren't the same at 400 MHz as they are at 800 MHz, are they?
 
I think so, yes.


But WRT the blanking screen at different memory clocks, it just occurred to me: could this be related to GDDR5's training algorithm needing to be redone for a different speed? Surely the training offsets aren't the same at 400 MHz as they are at 800 MHz, are they?
I guess retraining could be an issue. Does the card actually "redraw", as some mention, or just flicker? I'd guess that if retraining took too long you'd get a display buffer underrun (hence the screen blanking), since memory isn't available during that time, but I can't see the need for re-rendering. Also, a display buffer underrun could be avoided with a larger on-chip buffer; I'm not sure why AMD wouldn't simply have done that (unless retraining takes a long time - I have no idea how long it actually takes; of course, if we're talking tenths of a second, a larger display buffer would be impractical).
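For a feel of the numbers involved, here's a back-of-envelope sketch of how much on-chip display FIFO it would take to ride out a retraining gap without blanking. The display mode and the gap lengths are assumptions, purely illustrative:

```python
# Sketch: display FIFO needed to cover a memory-retraining gap.
# Assumed mode: 1920x1200 @ 60 Hz, 32-bit pixels.

def scanout_bandwidth(width, height, refresh_hz, bytes_per_px=4):
    """Bytes per second the display controller consumes for scanout."""
    return width * height * bytes_per_px * refresh_hz

def fifo_bytes_needed(gap_s, width, height, refresh_hz):
    # The FIFO must hold all the scanout data consumed during the gap.
    return scanout_bandwidth(width, height, refresh_hz) * gap_s

bw = scanout_bandwidth(1920, 1200, 60)                  # ~553 MB/s
short_gap = fifo_bytes_needed(100e-6, 1920, 1200, 60)   # 100 us gap -> ~55 KB
long_gap = fifo_bytes_needed(0.1, 1920, 1200, 60)       # 100 ms gap -> ~55 MB
```

So a retraining pause in the microsecond range could plausibly be hidden with a modest on-chip buffer, while anything approaching tenths of a second clearly can't be.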
 
Good thinking!

Here:

http://www.qimonda.com/graphics-ram/gddr5/gddr5_features.html

Dynamic data re-training is listed as a feature. But that might be a small-scale, continuous type of process, rather than what seems more like a "reboot" when making a radical change in clock speed.

Jawed

According to the whitepaper, some training steps are done once during memory initialization, while others can be done continuously (this of course assumes the MC supports it too). I'm not sure, though, whether retraining is really needed for lower frequencies. The whitepaper's section on "scalable clock frequency and data rate" would seem to suggest not (at least as far as the training steps done at memory initialization go).
 