ATI RV740 review/preview

It was already being discussed in the AMD RV770 -> RV790 thread, although I admit it wasn't exactly easy to spot :)

Not saying it doesn't warrant a thread of its own, just mentioning it in case you were wondering why no one had "noticed" ;)
Yeah... actually I was kind of wondering. LOL. So when I went to that thread I was like, "oh shoot, it was already being discussed over there." :oops:

Yeah, it's really too bad it has a 6-pin connector. I hope it's just for pre-production boards. We will see.

I know what you mean! I was really hoping the RV740 wouldn't need a power connector because I was planning on upgrading an older system without having to buy a new power supply (it has a 300W, 80 Plus unit) LOL. I guess I can either wait for Nvidia's Green Edition cards or just get a 4670. The reason I waited was that it's only a couple of months away (hopefully!!!) and I wanted a newer card for around 100 bucks. I extrapolated that since it's replacing the RV730, they'd keep it at a similar TDP.
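For context, the PCIe slot alone is specified for up to 75W, and each 6-pin connector adds up to another 75W, so a connector-less card has to stay under 75W. A quick sketch of the budget math for that 300W PSU, with the rest-of-system draw being a guess on my part:

```python
# Rough power-budget check for a 300W PSU and a PEG-only (<=75W) card.
PEG_SLOT_W = 75          # PCIe x16 slot limit; a 6-pin plug would add 75W more
psu_w = 300
rest_of_system_w = 180   # assumed CPU + board + drives under load

headroom_w = psu_w - rest_of_system_w
print(f"GPU headroom: {headroom_w} W -> "
      f"{'fine' if headroom_w >= PEG_SLOT_W else 'tight'} for a PEG-only card")
```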

But then again, this looks like it would make quite a killing in CrossFire, so I might upgrade my brother's rig, which has two 3870s at the moment, unless the 4850s turn out cheaper by the time these babies come out.
 
People say the 8800GT can counter this... since when is an 8800GT on par with a 4850? The RV740 is about 30 to 50% faster at the same price point. At low resolutions, yes, the 8800GT can win some, but at high resolutions it's gone.
 
8800GT is MUCH more expensive to manufacture than RV740, so it really cannot compete.
The same way GT200b "cannot compete" with RV770?
The reality is that you DON'T KNOW what's more expensive to manufacture: a card with a new RV740 chip on the new 40G process with GDDR5 memory, or a card with an old G92b or G94b chip on the mature 55GP process with GDDR3 memory.
I repeat: die size alone means nothing.

since when is an 8800GT on par with a 4850? The RV740 is about 30 to 50% faster at the same price point.
The 4750 isn't on par with the 4850 either. And the 8800GT/9800GT is an old card based on the old G92 chip; NV shouldn't have any problem making a "new" 9800GT faster than the old one.
From the G3D results it's pretty simple to estimate where the 8800GT/9800GT would land, since it's faster than the 9600GT but slower than the 4850, and they have results for both of those cards on their graphs.
 
And the 8800GT/9800GT is an old card based on the old G92 chip.

This...

is exactly why I hope the NV milking machine falls flat on its face at some point... this is not good for technological advancement. Like G3D said:

So while others are re-labeling and re-inserting their products at new price points, ATI is moving forward with more interesting steps.
 
This...

is exactly why I hope the NV milking machine falls flat on its face at some point... this is not good for technological advancement. Like G3D said:
What advancement is there in RV740 aside from the 40nm process, which an end user shouldn't care about anyway?
I think it's more a problem of AMD not being able to provide a chip for which NV won't have a more-than-a-year-old answer than of NV not making technological advancements.
RV740 is a nice chip, but what it's essentially doing is filling the hole in AMD's line-up between RV770 and RV730. NV never had that hole, so it's quite normal for them to slightly adjust their current offerings and be done with it. No technological advancements are needed from them here.
 
RV740 is a nice chip, but what it's essentially doing is filling the hole in AMD's line-up between RV770 and RV730. NV never had that hole, so it's quite normal for them to slightly adjust their current offerings and be done with it. No technological advancements are needed from them here.

So, instead of offering the same capabilities throughout their line-up, you're okay with another rehash of the GF4MX/FX5200 situation, where they insert and relabel chips that are incapable of providing the same features as the rest of the line-up, resulting in yet another load of "why can't my XXX run XXX?" (referring to CUDA now, DX in the past).
 
What advancement is there in RV740 aside from the 40nm process, which an end user shouldn't care about anyway?
I think it's more a problem of AMD not being able to provide a chip for which NV won't have a more-than-a-year-old answer than of NV not making technological advancements.
RV740 is a nice chip, but what it's essentially doing is filling the hole in AMD's line-up between RV770 and RV730. NV never had that hole, so it's quite normal for them to slightly adjust their current offerings and be done with it. No technological advancements are needed from them here.

I don't follow your logic.

You make it sound like this chip has no merit. Even in the best case for nVidia, they are forced to lower prices. And I doubt they are happy selling larger dies than ATI at the same price. Even taking into account the different costs of the manufacturing nodes, smaller dies should in general be cheaper.
 
So, instead of offering the same capabilities throughout their line-up, you're okay with another rehash of the GF4MX/FX5200 situation, where they insert and relabel chips that are incapable of providing the same features as the rest of the line-up, resulting in yet another load of "why can't my XXX run XXX?" (referring to CUDA now, DX in the past).
What CUDA capabilities are you (as in you personally) missing in G94b and G92b?

You make it sound like this chip has no merit. Even in the best case for nVidia, they are forced to lower prices. And I doubt they are happy selling larger dies than ATI at the same price. Even taking into account the different costs of the manufacturing nodes, smaller dies should in general be cheaper.
Should be =/= are.
It's true that 40G is the way forward for both of them, but it doesn't really matter who gets to the new process first.
When NV releases GT218 on 40G and it competes with RV710 on 55GP, will you say that AMD selling RV710 against GT218 isn't good for "technological advancement"?
NV will release a 40nm competitor to RV740 (same old DX10, BTW), but while they only have 55nm chips, what's wrong with using them to counter RV740?
I don't understand this logic: "ooooh, they don't have a 40nm chip against RV740 -- THEY LOSE!" In practice it's much more complex than that. And RV740, being the first relatively large GPU on 40G, may initially cost more than G94b or even G92b, which are produced on the old, well-known 55GP process in much larger quantities. Die size isn't the only cost-defining parameter.
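To put some (made-up) numbers on that, here's a minimal sketch of a per-die cost model; the wafer prices and defect densities are pure assumptions for illustration, not real TSMC figures, though the die areas are roughly right:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough gross dies per wafer (ignores scribe lines and partial dies)."""
    r = wafer_diameter_mm / 2
    # Classic approximation: wafer area over die area, minus an edge-loss term.
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2, wafer_cost_usd, defects_per_cm2):
    """Per-good-die cost using a simple Poisson yield model."""
    yield_frac = math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)
    return wafer_cost_usd / (dies_per_wafer(die_area_mm2) * yield_frac)

# Assumed inputs: a mature 55GP wafer is cheaper and yields better than
# an early 40G wafer. Die areas: ~276 mm^2 (G92b), ~137 mm^2 (RV740).
print(f"G92b  @55nm: ~${cost_per_good_die(276, 3000, 0.3):.0f} per good die")
print(f"RV740 @40nm: ~${cost_per_good_die(137, 5000, 0.9):.0f} per good die")
```

With those assumed inputs the smaller 40nm die actually comes out more expensive per good chip, which is exactly why die size alone doesn't settle the argument.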
 
And why should we, as customers, be concerned about manufacturing costs?

With the GTS 240 (675/1688/1100MHz, 112 SPs, G92b), NV will offer performance equal to the RV740XT at the same price point of ~$99 (considering the GTS 250 512MB is already priced at $129).
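As a quick sanity check on those numbers (assuming G92b's usual marketing rate of 3 FLOPs per SP per clock and a 256-bit memory bus, both assumptions on my part):

```python
# Back-of-the-envelope throughput for the quoted GTS 240 specs.
# Assumptions: 3 FLOPs/SP/clock (MADD+MUL dual issue), 256-bit GDDR3 bus.
sps, shader_clock_ghz = 112, 1.688
mem_clock_mhz, bus_bits = 1100, 256

gflops = sps * shader_clock_ghz * 3                       # ~567 GFLOPS
bandwidth_gb_s = mem_clock_mhz * 2 * bus_bits / 8 / 1000  # GDDR3 is DDR: ~70.4 GB/s

print(f"Shader throughput: {gflops:.0f} GFLOPS")
print(f"Memory bandwidth:  {bandwidth_gb_s:.1f} GB/s")
```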

We have to accept that G92 has been able to compete with four different ATi GPUs since 2007, and it is still competitive, since it delivers all the features the market demands.

Of course full D3D10.1 would be nice to have, but on the other hand the competition is not able to deliver freely selectable high-quality AF on its 32-80 TMU cards, and it looks like the cards before the HD4000 series are not able to support OpenCL.

And coming back to RV740:

While it looks like AMD is not able to make it "PEG-only", Nvidia is prepping an interesting solution:
http://vr-zone.com/articles/geforce-9800-gt-green--no-external-power-needed/6643.html?doc=6643
- PEG-only (<75W TDP)
- shorter PCB than the RV740XT ES
- ~ 90% performance of RV740XT ES
 
Even though it's not much of a factor for most end users, for OEMs I reckon it's a big plus not to have a PCIe power plug to supply, mount, and eventually RMA.
 
Of course full D3D10.1 would be nice to have,
We now know that D3D10 cannot provide all the eye candy of Windows 7. It'll be interesting to watch NVidia marketing trying to explain away the lower memory consumption and better eye candy of ATI and S3 D3D10.1 GPUs when running the Windows 7 desktop.

http://www.istartedsomething.com/20081029/windows-7-dwm-cuts-memory-consumption-by-50/

As long as NVidia continues to market D3D10 GPUs once W7 arrives, which could be quite a long time, they'll be fighting this. Unless the newest GT2xx GPUs have 10.1 support, of course.
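For a sense of scale, here's a minimal sketch of the per-window arithmetic behind that link, assuming 32-bit BGRA window surfaces (the exact formats are my assumption): under the Vista-style D3D10 DWM each GDI window surface exists in both system and video memory, while the Windows 7 DWM on D3D10.1 can keep a single copy, which is where the ~50% saving comes from.

```python
# Rough per-window surface memory, assuming 32-bit (4-byte) BGRA pixels.
def window_surface_mb(width, height, copies):
    return width * height * 4 * copies / 2**20

w, h = 1680, 1050  # a maximized window on a common desktop of the era
print(f"Two copies (Vista-style DWM):  {window_surface_mb(w, h, 2):.1f} MB")
print(f"One copy (W7 DWM on D3D10.1): {window_surface_mb(w, h, 1):.1f} MB")
```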

Jawed
 
We now know that D3D10 cannot provide all the eye candy of Windows 7
Do we? Can you provide a link? I wasn't aware of that...
Edit: Or is it the link you've provided above? Using shaders for blurs doesn't mean that the blurs will be more beautiful. As for more/richer icon animations, I think that's a _potential_ result of the lower memory consumption.
I doubt that W7 will look any different on DX10 and DX10.1 GPUs unless you push the DWM to its limits.
 
I read the link. The PDC presentation tells us that the D3D10.1 API is the foundation; now I wonder what tech level might be used for the desktop effects.
 
According to DegustatoR, the G94 doesn't exist at all, then.


Neither did the shrinks. Oh wait.

G92, G94, G92b, G94b vs RV670, RV770 and RV740.
Let's not even count SKUs.
 
And coming back to RV740:

While it looks like AMD is not able to make it "PEG-only", Nvidia is prepping an interesting solution:
http://vr-zone.com/articles/geforce-9800-gt-green--no-external-power-needed/6643.html?doc=6643
- PEG-only (<75W TDP)
- shorter PCB than the RV740XT ES
- ~ 90% performance of RV740XT ES

90% of the performance? I remember the clocks going down on the "Green Edition", so one has to wonder why they put all the extra megahertz on the regular 9800GT then. Clocks should be 550/900 instead of 600/900, and it's actually more expensive than the GTS 240. Save the planet, give us more money!
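For what it's worth, the ~90% figure is roughly what you'd expect from clock scaling alone, assuming the standard 9800GT is about on par with the RV740XT (as argued earlier in the thread) and that performance tracks core clock linearly, both simplifications on my part:

```python
# Does ~90% of RV740XT performance line up with the lower core clock?
# Simplification: performance scales linearly with core clock; the
# memory clock (900 MHz) is unchanged between the two cards.
green_core_mhz, standard_core_mhz = 550, 600
print(f"Clock ratio: {green_core_mhz / standard_core_mhz:.1%}")  # ~91.7%
```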
 
The same way GT200b "cannot compete" with RV770?
The reality is that you DON'T KNOW what's more expensive to manufacture: a card with a new RV740 chip on the new 40G process with GDDR5 memory, or a card with an old G92b or G94b chip on the mature 55GP process with GDDR3 memory.
I repeat: die size alone means nothing.

To both AMD and Nvidia, die size means quite a bit more than nothing.

And I think we can compare the overall losses of AMD's graphics division (ATI) and Nvidia year over year (Q4 08 vs Q4 07) for anecdotal evidence: AMD's die size advantage in every market segment where they compete with Nvidia points to all current Nvidia chips being significantly more expensive (and lower margin) than their competing AMD counterparts.

Especially when you consider that market share was virtually the same year over year (Q4 08 vs Q4 07) for the two. And that's even considering Nvidia had to move significantly more units than ATI (yay for price cuts that reduce margins even further to keep products attractive versus the competition) to regain the market share lost in Q3 08. In other words, Nvidia sold way more graphics hardware than ATI in Q4 08 and still ended up losing more money than ATI.

Die size means a LOT, especially if you want either company to stay in business in the face of competition, much less have the cash flow necessary to invest in large chunks of R&D.

ATI took steps to fix this when going from R600 to RV670, a relatively quick turnaround. We're still waiting for Nvidia to do something; 65nm -> 55nm was a start, but it's still not good enough.

Regards,
SB
 
How likely is it that Q4 was - with a disastrous quarter being unavoidable anyway because of the global economy - strategically engineered, at least with respect to the margin of disastrousness?

All in light of the future financial prospects unfolding right now, of course.
 
How likely is it that Q4 was - with a disastrous quarter being unavoidable anyway because of the global economy - strategically engineered, at least with respect to the margin of disastrousness?

All in light of the future financial prospects unfolding right now, of course.

That's why I compared Q4 08 to Q4 07. Market share was similar, so the drops in profits for both companies should be "roughly" similar, within 10-20% of each other. Nvidia has more employees, but that's offset a bit by the fact that they also sold considerably more GPUs. Yet their losses were staggering compared to those suffered by AMD's graphics division.

Revenue was down 8% year over year for AMD's graphics division, while it was down 47% for Nvidia's.

Regards,
SB
 