RV730 - where are the 32 TMUs?

Yeah, they priced them properly at least. But the 2400/2600/3450/3650 really didn't offer performance that was worth paying for, IMO. Their primary benefits were HD video playback and low power consumption, but a gamer could easily pick a better card from the previous generation.

8600GT was a better card for gaming, even if it too was somewhat of a disappointment as a new mid-range card. Once the 8600GT hit $100 it became a pretty good deal in the market of 12 months ago or so. I had one that would do 700/940 - a dramatic overclock that really made it perform very nicely. Swapped it out for a 3850 later on, though.

2600XT and HD3650 (at least the GDDR3 model) beat 8600GT, though
 
Not really.... especially not if AA + AF enters the picture.
http://www.techpowerup.com/reviews/VVIKOO/8600_GT_Turbo/11.html
http://www.techpowerup.com/reviews/VVIKOO/8600_GT_Turbo/7.html

But as I said, the 2600XT was always cheaper than the 8600GT, so it was priced appropriately....

Those are quite old reviews. In ixbt's (Digit-Life) i3DSpeed, or whatever it's called nowadays, the 2600XT (and 3650 GDDR3) were indeed below the 8600GT in the early days of the 3650, but later they rose above it - in other words, they gained more performance from new drivers than the 8600GT did.
 
Not really.... especially not if AA + AF enters the picture.
Note that they are using an overclocked version - but yes, that's what I remember too: the HD2600/3650 were somewhat competitive, but only without AA/AF.
But of course, G84->G96(a/b) is just a shrink with some additional tweaks, nothing like rv635->rv730. And it seems nvidia already knew any G96-based product would lose badly to an HD4670, as they didn't even try to push up the clock speeds (there's certainly plenty of clocking headroom left in the 9500GT, and the gddr3 ram isn't exactly very fast either); instead they used faster chips to compete with it and focused on the low cost of the 9500GT.
I think it's interesting to note, though, that nvidia's and amd's gpu lineups can't be directly compared, since nvidia has 5 chips (if you only count the current generation) whereas amd only has 3. In terms of performance (with the respective chips used in a full configuration) the rating looks like this:
G98 < rv710 < G96(b) < rv730 < G94(b) < G92(b) < rv770 < GT200. If you rated them according to die size the order would be the same, except for G92b and rv770, which are roughly the same size. Too bad for nvidia that the performance differences tend to be much larger than the die size differences in these ratings...
 
The HD3650 and GF8600GT (reference models) are included in the HD4670 review at computerbase.de. The HD3650 is not slower than the GF8600GT according to this review, and the older HD2600XT was clocked even a bit higher.
 
The HD3650 and GF8600GT (reference models) are included in the HD4670 review at computerbase.de. The HD3650 is not slower than the GF8600GT according to this review, and the older HD2600XT was clocked even a bit higher.

Yeah, it seems that the drivers have improved for the 3650/2600. Or NV's drivers have gotten worse. Either way, back when the 8600GT was worth buying, a year ago or so, it was faster than the 2600XT. Nowadays it would be kind of dumb to buy any of those cards, with the $100-and-down segment loaded with stuff like the 3850, 4670, and 9600GT. Unless you are looking for HD video playback, I suppose - then the 3450 is a good call IMO at $25 ;)

There seems to be some ambiguity with the cards, that's for sure.
http://techreport.com/articles.x/12843
 
Yeah, it seems that the drivers have improved for the 3650/2600. Or NV's drivers have gotten worse. Either way, back when the 8600GT was worth buying, a year ago or so, it was faster than the 2600XT. Nowadays it would be kind of dumb to buy any of those cards, with the $100-and-down segment loaded with stuff like the 3850, 4670, and 9600GT. Unless you are looking for HD video playback, I suppose - then the 3450 is a good call IMO at $25 ;)

Of course, but the 3650 GDDR3 / 2600XT have been faster since around the end of last year / beginning of this year or so (IIRC - I didn't bother checking the exact month they got past the 8600GT in the i3DSpeed thingy).
 
...and still very competitive with nVidia's products (in price/performance, performance/watt and performance/square mm, too)
Not at all, really. NVidia just used a larger process, and it was probably cheaper per transistor at launch. Moreover, G84 had fewer transistors than RV630 and was quite a bit faster in most games. Add it all up and ATI wasn't close to NVidia in performance per manufacturing dollar.

Price/perf is something else, and the only reason ATI was somewhat competitive there is that they had lower margins (they had no choice).
 
Not at all, really. NVidia just used a larger process, and it was probably cheaper per transistor at launch.
G84 is 170mm², RV630 is 150mm². That means ~13% more chips per wafer - are you sure ONE RV630 was more expensive than ONE G84? I always heard quite the opposite...
 
G84 is 170mm², RV630 is 150mm². That means ~13% more chips per wafer - are you sure ONE RV630 was more expensive than ONE G84? I always heard quite the opposite...
Hard to say. Do we have any idea what the wafer costs were for the 65nm and 80nm processes back then? Or what the yields were, considering 80nm had been on the market for some time while RV630/RV610 were the first GPUs manufactured at 65nm? I would assume that G84 was more expensive anyway, but who knows.
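For a rough sense of those numbers, here's a minimal back-of-the-envelope sketch in Python (assuming 300mm wafers, the standard area-minus-edge-loss approximation, and the ~170/150mm² figures from above; yields ignored entirely):

[code]
import math

WAFER_DIAMETER_MM = 300.0  # assumption: 300mm wafers

def dies_per_wafer(die_area_mm2):
    # Standard approximation: gross dies = wafer area / die area,
    # minus an edge-loss term proportional to circumference / die diagonal.
    r = WAFER_DIAMETER_MM / 2
    return (math.pi * r * r) / die_area_mm2 \
           - (math.pi * WAFER_DIAMETER_MM) / math.sqrt(2 * die_area_mm2)

g84   = dies_per_wafer(170.0)  # G84, 80nm, ~170mm^2
rv630 = dies_per_wafer(150.0)  # RV630, 65nm, ~150mm^2

print("G84:   ~%.0f dies/wafer" % g84)
print("RV630: ~%.0f dies/wafer (%+.0f%% vs G84)" % (rv630, (rv630 / g84 - 1) * 100))
# The naive area ratio gives ~13%; edge loss nudges it to ~14%. Either
# way, RV630's per-die cost advantage disappears as soon as a 65nm
# wafer costs more than ~14% over an 80nm one.
[/code]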
 
Mczak was referring to margins I believe.

Cheers
Indeed, I was just pointing out that nvidia has the fastest gpu out, same as last time, only now ati has higher margins since they didn't build such a goddamn big chip :)
 
According to expreview, and this coincides with other data, G84 is ~169mm² and RV630 is ~149mm². The cost difference between an 80nm and a 65nm wafer in that timeframe is *certainly* higher than 13.5%, so RV630 was already more expensive before we even consider yields. However, that cost and performance advantage evaporated at 65/55nm, as G96a is roughly the same size as RV630 and G96b is roughly the same size as RV635 (~12x12mm and ~11x11mm for 65nm and 55nm respectively).

Furthermore, while there is a clear wafer price premium between 80nm and 65nm, AFAIK there is no similar premium between the half-nodes, i.e. 65 and 55nm; the only catch is that yields are obviously lower at first and there's not as much capacity.
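To put a number on that per-die cost claim, here's a continuation of the sketch above with a purely illustrative wafer-cost premium (the 40% is a made-up placeholder; the thread only claims the premium was "certainly higher than 13.5%"):

[code]
# Illustrative per-die cost, reusing the dies/wafer figures from the
# sketch above. ASSUMPTION: a made-up 40% price premium for a 65nm
# wafer over an 80nm one.
premium   = 0.40
cost_80nm = 1.0            # normalize the 80nm wafer cost to 1
cost_65nm = 1.0 + premium

g84_dies, rv630_dies = 365, 417  # approximate values from above

cost_per_g84   = cost_80nm / g84_dies
cost_per_rv630 = cost_65nm / rv630_dies
print("RV630 per-die cost: %.2fx that of G84" % (cost_per_rv630 / cost_per_g84))
# Any premium above ~14% (the dies-per-wafer gain) already makes RV630
# the dearer chip, before early-65nm yields are even considered.
[/code]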

NVIDIA's Michael Hara said explicitly in a recent conference call that TSMC suggested they not switch their entire line-up to 55nm [at the same time ATI did] because, at that point in time, they shipped about 3 times more silicon than ATI (remember, typical share figures are in unit sales, and RV610/RV620 have been very, very successful while G86/G98 are nothing to be proud of...) and there wouldn't be enough capacity.

He also said very explicitly that this was a cost disadvantage for them, implicitly even in that timeframe. And he said that while *EVERY* chip they order from TSMC today is 55nm, their transition is slowed down by the fact that they and their partners still have plenty of 65nm inventory... So yeah, I don't think you can describe the 80->65nm transition for them as anything but a failure, and I don't think you can spin 65 vs 55nm in their advantage either, heh.

P.S.: Regarding RV730's 32 TMUs, I wish I knew for sure!
 
According to expreview, and this coincides with other data, G84 is ~169mm² and RV630 is ~149mm². The cost difference between an 80nm and a 65nm wafer in that timeframe is *certainly* higher than 13.5%, so RV630 was already more expensive before we even consider yields.
Before RV630 and RV610 were released, there were some rumours about OEMs ordering 80nm versions of those chips while ATI was still waiting on 65nm for the official release. Do we know anything about those? Is there some truth to it, or was it just noise? I wonder if making the chips at 80nm would have been cheaper (I guess not, since RV630/RV610 have more transistors than G84/G86, but anyway).
 
However, that cost and performance advantage evaporated on 65/55nm as G96a is roughly the same size as RV630 and G96b is roughly the same size as RV635 (~12x12 and ~11x11 for 65 & 55nm respectively).
Why are you even comparing those? They didn't exist in the same timeframe.

P.S.: Regarding RV730's 32 TMUs, I wish I knew for sure!
The 32 texture units are there; the easiest place to see it would be the AF performance drop (or lack thereof) in games, but most common theoretical test apps won't show it just yet (although you can make them show it with small textures).
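For reference, a quick sketch of what the two possible TMU counts would mean for theoretical texel fillrate (assuming the stock 750MHz HD4670 core clock; real measured numbers will of course land below the theoretical peak):

[code]
# Theoretical bilinear texel fillrate = TMU count x core clock.
CORE_CLOCK_MHZ = 750  # stock HD4670 (RV730) core clock

for tmus in (16, 32):
    gtexels = tmus * CORE_CLOCK_MHZ / 1000.0  # GTexels/s
    print("%d TMUs -> %.1f GTexels/s theoretical" % (tmus, gtexels))

# 16 TMUs -> 12.0, 32 TMUs -> 24.0 GTexels/s: any measured fillrate
# clearly above 12 GTexels/s would point at the full 32 TMUs.
[/code]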
 