ATI RV740 review/preview

To put that into perspective, according to xbitlabs (haven't seen the transcript anywhere yet), even the whole of advanced mfg. (that is, 130nm and below) accounts for not even two thirds of wafer revenue...

"Advanced process technologies – 130nm and below – accounted for 65% of wafer revenues with 90nm process technology accounting for 25%, 65nm – 23%, and 45/40nm reaching 1% of total wafer sales."

Would not have placed any bets on that kind of contract distribution.
 
Not entirely surprised. How small a process do chips that go into automobiles actually need, for example? Or microwaves? Or most appliances?

Regards,
SB
 

It's old news that they had a problem with the 40nm process (at least in this thread :)), but it's interesting that they keep saying it over and over.


Looking at the reviews, most have the same two cards placed next to each other but are unwilling to say which one was actually tested. Xbitlabs, on the other hand, had the actual Sapphire reference design in their test and actually measured the power drawn from every rail used, as can be seen here
(http://www.xbitlabs.com/misc/pictur...radeon-hd4770/other/rhd4770power_full.png&1=1), concluding with 50W total power draw in 3D mode and 17W at idle (http://xbitlabs.com/images/video/radeon-hd4770/other/rhd4770_power.png), or 59W less power draw than a stock HD4850 in 3D mode.

HeXus, on the other hand, did their tests with a retail HIS card and saw overall system consumption of 88W at idle and 121W under load. Pretty heavy, considering the same system drew only 164W with an HD4850 under load: a gap of only 43W versus the HD4850 would mean this so-called retail card draws a surprising amount of juice, especially at idle, compared to the 50W vs. 109W card-only figures xbit got. Well, I don't know how they did their measurements, but if it's true, the retail card is a power-hungry beast at idle compared to the 'media edition' reference card.
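
For what it's worth, here's the rough arithmetic behind that; the PSU efficiency figure is purely an assumption, used only to show that the at-the-wall delta and xbit's card-only delta genuinely disagree rather than just being measured at different points:

```python
# xbitlabs: card-only measurements (W)
xbit_hd4770_load = 50
xbit_hd4850_load = 109
print("xbit card-only load delta:", xbit_hd4850_load - xbit_hd4770_load, "W")   # 59 W

# heXus: whole-system measurements at the wall (W)
hexus_hd4770_load = 121
hexus_hd4850_load = 164
wall_delta = hexus_hd4850_load - hexus_hd4770_load
print("heXus at-the-wall load delta:", wall_delta, "W")                          # 43 W

# Assumed PSU efficiency: a 43 W saving at the wall is only ~35 W on the DC side,
# so the two sites' load deltas really don't line up.
PSU_EFF = 0.82
print("approx. DC-side delta implied by heXus:", round(wall_delta * PSU_EFF), "W")
```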

Both cards run at 750MHz core/shader and 3200MHz GDDR5, but xbit used RivaTuner to overclock theirs to 860/3900 ("And we did find a way of overcoming the unfounded limitation by slightly modifying the latest RivaTuner version. In order to teach this popular utility to work with RV740, you need to open Rivatuner.cfg file, get to [GPU_1002] section and locate the “RV770 = 9440h-9443h,944Ch” line. Then you have to add “94B3h” descriptor to it.")
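
For anyone who'd rather not hand-edit the file, a tiny script along these lines should apply that tweak (the install path is an assumption, and it's worth backing up Rivatuner.cfg first; all it does is append the 94B3h device ID to the RV770 line exactly as xbit describe):

```python
from pathlib import Path

# Assumed default install location; point this at wherever RivaTuner lives on your box.
cfg = Path(r"C:\Program Files\RivaTuner\Rivatuner.cfg")

old = "RV770 = 9440h-9443h,944Ch"
new = old + ",94B3h"  # append the RV740 device ID so RivaTuner recognises the card

text = cfg.read_text(errors="ignore")
if new not in text:                      # skip if the tweak is already applied
    if old not in text:
        raise SystemExit("RV770 line not found; the cfg layout differs from xbit's description")
    cfg.write_text(text.replace(old, new))
```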

On the naked reference card used by xbit there are Qimonda IDGV51-05A1F1C-40X chips ("According to the manufacturer specification, these 512Mbit chips can work at up to 1000 (4000) MHz frequency with 1.5V voltage as 16Mx32 or 32Mx16."), while heXus & techreport didn't bother to shoot the naked card or to properly OC it, as LordEC911 stated. Well, TR tends to have poor reviews, so nothing new there.

There's a list of reviews here:
http://forums.vr-zone.com/news-around-the-web/423107-hd-4770-reviews.html#post6664221

And even enthusiast.hardocp.com didn't bother with a proper OC, while guru3d stated a 73W power draw in 3D on the same reference card. So I wonder whether half of the testers actually measured it at all.

The best naked shots: http://www.techpowerup.com/printreview.php?id=/ATI/HD_4770
These guys always have the most comprehensive reviews, and best of all they do them on the same rigs. So it's pretty amazing to me that in Call of Juarez and CoH a stock HD4770 beats an older sibling like the HD4850 by as much as 10%. Yep, I know that's against the sub-700MHz HD4850 mentioned somewhere above, but it's still pretty impressive for a card at 50% of the HD4850's price only 9 months later.


So if xbitlabs are not faking their power consumption figures, my wild guess would be that AMD, since they are pushing the Mobility Radeon HD4830/HD4860 at 600MHz on the same RV740 chips, almost certainly use better chips with lower TDP in their promo HD4770 'media edition' series, with a different BIOS of course. And then the PCIe 6-pin power connector isn't so ridiculous: it lets them make virtually the same PCB for all HD4770 cards and place the not-so-perfect chips they baked there, to improve the poor yield of TSMC's crappy 40nm process. I still hope they'll test next-gen cards on their own 32nm bulk later this year. It's shady and really an issue for some lawsuit, but after all they've been doing to us since 2002, nobody sues Intel for placing the same but better-TDP chips in their notebooks, where they can sell them at double the price.
 
"have to be" is always a questions of margins, isn't it?

Yes, but it also depends on transistor density. A chip with a million or fewer transistors doesn't really need to be on a more expensive node. With something like RV770 or GT200 there really isn't the option of manufacturing it on a 130nm node, for example, even if it would theoretically be a cheaper chip.
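
A quick back-of-the-envelope illustration of why: the ~256mm² RV770 die size is the commonly quoted figure, and the assumption that area scales roughly with the square of the linear feature size is crude, but it makes the point.

```python
# Rough scaling: if the same design were simply ported to a larger node,
# die area grows roughly with the square of the feature-size ratio.
rv770_area_55nm = 256            # mm^2, commonly quoted figure (assumption)
scale = (130 / 55) ** 2          # ~5.6x
print(f"RV770 ported to 130nm: ~{rv770_area_55nm * scale:.0f} mm^2")   # ~1430 mm^2

# A single lithography reticle is only about 26mm x 33mm, so such a chip
# couldn't even be exposed in one shot, cost aside.
print(f"reticle limit: ~{26 * 33} mm^2")   # 858 mm^2
```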

I'd imagine that those larger nodes are quite mature at the moment, with very low defect rates. Also, the cost of the fab has been amortized over many years.

While with something like 40/55/65nm you are still paying back much of the initial investment cost of implementing the node.

I believe Intel still runs certain chips on 90nm and larger nodes due to the cost savings of already having the fab costs amortized over multiple years, and transitioning the fab to produce those same chips on a smaller node wouldn't be worth it.

Regards,
SB
 
So if xbitlabs are not faking their power consumption figures, my wild guess would be that AMD, since they are pushing the Mobility Radeon HD4830/HD4860 at 600MHz on the same RV740 chips, almost certainly use better chips with lower TDP in their promo HD4770 'media edition' series, with a different BIOS of course.

http://www.pcgameshardware.com/aid,...s-HD-4850-und-Geforce-9800-GT/Reviews/?page=2

They also do graphics-card-only power numbers, and in this case they see 30W at idle on the "media" board.
 
Well, even if they only test the premium card, that one is on sale next week, and so far there's no performance discrepancy between the cheap and the premium card. Maybe some longevity issues, but that's what the warranty is for.
 
I recently found another potential cause of discrepancies as far as power draw and performance are concerned.

I sidegraded from an HD4870 to the 4770 and got some slightly odd results. With some effort, I traced it to the PCI-Express x16 slot being set to x1. This affected performance significantly, and power draw to a surprising extent. It would seem that the card tries to negotiate an x16 connection, but almost always fails and then falls back to an x1 connection. At that setting, the card and bus draw some 17W (!) less at peak than otherwise. Only once have I been able to actually benchmark the system at the x16 setting before the card/system crashed.

It was not trivial to figure out what was going on, and I still haven't found a solution. (My card is a Sapphire 4770, and the motherboard is an ASUS P5B-Deluxe, which has had no problems whatsoever with PCI-E x16 with either my 7900GTX or my HD4870. The HD 4770 seems to be the culprit in this case.)
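
In case it helps anyone chasing the same thing: on Linux you can read the negotiated link width straight out of lspci (GPU-Z shows the same capability/current width pair on Windows). A minimal sketch, assuming lspci is installed and run with enough privileges to show the link status lines:

```python
import re
import subprocess

# Dump verbose PCI info and pick out the VGA controller's link capability
# (what the slot/card can do) versus link status (what was actually negotiated).
out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

for block in out.split("\n\n"):
    if "VGA compatible controller" not in block:
        continue
    cap = re.search(r"LnkCap:.*?Width (x\d+)", block)
    sta = re.search(r"LnkSta:.*?Width (x\d+)", block)
    if cap and sta:
        print(f"capable of {cap.group(1)}, currently negotiated {sta.group(1)}")
```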
 
http://www.pcgameshardware.com/aid,...s-HD-4850-und-Geforce-9800-GT/Reviews/?page=2

They also do graphics-card-only power numbers, and in this case they see 30W at idle on the "media" board.

That's really FU, because even 17W is astonishing leakage for that small a chip, considering the 300mm2 RV790, with all those decoupling caps inside, only leaks about the same 30W.

Anyway, this settles my doubts. It seems the crafty xbitlabs did do their FurMark run (judging by the screenshot in that review), but they only published what they considered "real-life 3D mode power consumption", i.e. a game run, not a real stress test like FurMark at 750MHz. That's why I always run oZone3D's OGL test to see whether the card is really stable, without sprites and sparkles where they shouldn't be. Thanks for that once again.
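
Rough numbers to put that in perspective; RV740's die size isn't stated anywhere in this thread, so the ~137mm² figure below is the commonly reported one and should be treated as an assumption:

```python
# Idle power per unit of die area, just to show why 17W looks high for RV740.
rv740_idle, rv740_area = 17, 137    # W, mm^2 (area figure assumed, see above)
rv790_idle, rv790_area = 30, 300    # W, mm^2 (figures from the post above)

print(f"RV740: {rv740_idle / rv740_area:.3f} W/mm^2")   # ~0.124
print(f"RV790: {rv790_idle / rv790_area:.3f} W/mm^2")   # ~0.100
```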
 
That's really FU, because even 17W is astonishing leakage for that small a chip, considering the 300mm2 RV790, with all those decoupling caps inside, only leaks about the same 30W.

Anyway, this settles my doubts. It seems the crafty xbitlabs did do their FurMark run (judging by the screenshot in that review), but they only published what they considered "real-life 3D mode power consumption", i.e. a game run, not a real stress test like FurMark at 750MHz. That's why I always run oZone3D's OGL test to see whether the card is really stable, without sprites and sparkles where they shouldn't be. Thanks for that once again.

What Furmark does is not what the chips are designed for; no 'real life' program stresses the chip like it does, no matter whether it's a game, GPGPU or whatever.
It's a bit of a dodgy comparison, but while you can rev your car's engine in first gear 24/7, the engine isn't designed for that. Neither are GPUs designed for continuous 100% stress on every part of the GPU.
 
What Furmark does is not what the chips are designed for; no 'real life' program stresses the chip like it does, no matter whether it's a game, GPGPU or whatever.
It's a bit of a dodgy comparison, but while you can rev your car's engine in first gear 24/7, the engine isn't designed for that. Neither are GPUs designed for continuous 100% stress on every part of the GPU.

That really is a bit dodgy. Engines are built to a certain spec; that's where the rev counter starts being marked red. A better analogy would be: you can drive your car on a perfectly flat and straight road at full throttle, as in "full throttle" or "maximum speed", and it won't go any faster than that. The manufacturer still has to provide the legally required warranty service if your car breaks, no matter whether it's only driven from your front yard to the post office or around a race track.

GPUs apparently have two "full throttles": one "maximum power draw" and one "even more power drawn than the maximum". Sounds strange, doesn't it?

And I've yet to see a warning message on any retail boxes, driver installations etc. saying "You can't run your GPU at full speed, please slow down".
 
Have you never seen the red line on the rev meter?!
Furmark = running the engine past the red line.
 
I agree with Carsten regarding the Furmark topic. No graphics card should be sold in a condition where running any standard OpenGL (or DirectX for that matter) program can cause it to break (in normal environmental conditions). Or, if you really want/need to build something like that, at least be open about it.

What would people say if Intel sold a new (boxed) 8-core chip that overheats within a few minutes if you actually run Prime on all 8 cores? (I just realized that all of this is more or less OT in this thread.)
 
Have you never seen the red line on the rev meter?!
Furmark = running the engine past the red line.
That's not how I see it. Furmark is just a normal 3D app, using only the normal 3D API, not some "driver backdoor", to create its load. And in fact there's no guarantee that Furmark really produces the maximum possible draw.
That said, some interesting numbers here (German):
http://ht4u.net/reviews/2009/leistungsaufnahme_grafikkarten_games/
In short, the difference in power consumption between games and Furmark is WAY higher on AMD cards than on NVIDIA cards. There must be quite a few units idling around on AMD cards in (current) games...
 