So, do we know anything about RV670 yet?

If the core truly has 666M transistors I wonder why on Devil's black earth the internal codename is "Gladiator" and not Antichrist :D
 
I remember the 2900XT having over 700 million transistors and RV670 having 666 million (from the presentation linked a couple of days ago).

Indeed, halving the internal ring bus from 1024-bit to 512-bit saves a lot of die space, some of which they used to include the UVD (which the Radeon HD 2900 series lacks).
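The external bus was halved the same way (512-bit on the 2900XT down to 256-bit on RV670), with faster memory making up part of the difference. A minimal sketch of the bandwidth arithmetic, assuming commonly cited launch specs (GDDR3 at ~1656 MHz effective vs. GDDR4 at ~2250 MHz effective):

```python
# Peak memory bandwidth = bus width (bits) x effective clock / 8 bytes.
# Clocks below are approximate launch specs, used only for illustration.

def bandwidth_gbs(bus_width_bits: int, effective_mhz: float) -> float:
    """Peak bandwidth in GB/s."""
    return bus_width_bits * effective_mhz * 1e6 / 8 / 1e9

print(f"HD 2900 XT (512-bit GDDR3): {bandwidth_gbs(512, 1656):.0f} GB/s")  # ~106
print(f"RV670      (256-bit GDDR4): {bandwidth_gbs(256, 2250):.0f} GB/s")  # ~72
```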
 
New day, new Fruitzilla info/FUD/BS:

RV670XT to end up 20+ percent slower
Our sources have confirmed that RV670XT, the card that might be called Radeon HD 3870, should end up more than twenty percent slower than the 8800GT 512MB at its default clock.

ATI's chip simply cannot compete with 112 shader units at such a high clock, but it will sell for some $50 - $60 less to begin with.

RV670PRO will end up even cheaper, as it has the hard task of fighting the soon-to-be-launched 8800GT 256MB version.

The performance analysis is based on more than ten games and the current drivers, so if ATI comes up with miraculous drivers, maybe they are going to be even more competitive.

Two days ago Fruitzilla wrote that the 3870 was 8-10% slower than a default 8800GT; now it's 20+%.
 
My first reaction is that these bits are for covering all the bases.

On second thought, if we compare the 8800GT and 2900XT in Crysis (on page 29 of this thread), then we can see that the 2900XT is up to 16% slower (1600x1200 without AA) or 57% faster (1920x1440 4xAA). Take your pick and make the news. :cool:
 
That 57% faster is 8 vs. 12.5 frames? ;)
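That's the catch with percentages: "X% faster" depends entirely on the baseline. A quick sketch with hypothetical frame rates, picked only to reproduce the ~57% figure above:

```python
# "A is X% faster than B" changes with the baseline; the absolute gap doesn't.
# Frame rates are hypothetical, chosen to match the ~57% example above.

def pct_diff(a: float, b: float) -> float:
    """Percentage by which a beats (positive) or trails (negative) b."""
    return (a / b - 1) * 100

fps_2900xt, fps_8800gt = 12.5, 8.0
print(f"{pct_diff(fps_2900xt, fps_8800gt):+.0f}%")  # +56% -- "57% faster", both unplayable
print(f"{pct_diff(fps_8800gt, fps_2900xt):+.0f}%")  # -36% -- the same gap, other baseline
```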
 
And what is the problem? Is it hard to accept that the special function unit can do MADs as well?
Sorry, I thought you were implying 4 ALUs per shader unit. Instead, when correcting Shtal from 5:1 to 4:1, you were talking about ALU:TEX.

My bad. It's pretty clear what you were saying. Anyway, in light of the move to scalar ops, shouldn't we be saying R600 has a 20:1 ALU:TEX ratio? G92 has a 5:1 ALU:TEX ratio.
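For what it's worth, here's the arithmetic behind both figures, assuming the usual unit counts (64 vec5 units and 16 texture units on R600; 112 scalar ALUs and 56 TMUs on G92). The G92 reading is a guess: 5:1 only comes out if you weight the 2:1 unit ratio by the ~2.5x shader clock domain.

```python
# R600: 64 vec5 shader units = 320 scalar ALUs, against 16 texture units.
r600_ratio = (64 * 5) / 16
print(f"R600: {r600_ratio:.0f}:1")   # 20:1 -- the old 4:1 vec5 ratio times 5

# G92 (assumed reading): 112 ALUs vs. 56 TMUs is 2:1 per clock, but the
# shader domain runs ~2.5x the core clock (1500 vs. 600 MHz on the 8800GT).
g92_ratio = (112 / 56) * (1500 / 600)
print(f"G92:  {g92_ratio:.0f}:1")    # 5:1, clock-adjusted
```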
 
There are benefits to each. G80 may be easier to program, but it also has lower peak ALU power. Hence this is debatable.

How much area do you suppose it takes to run your shaders at 1150 MHz vs. 575 MHz? Doubling the clock isn't free.
Indeed. R600 does have more peak power. However, I assume NVidia is going to push the ALU clocks even higher later on, to the levels CPUs run at. I'm hoping ATI does the same, as AMD's know-how must be useful here, right?

(Sorry, I still like to distinguish between ATI and AMD.)
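For a rough sense of the wide-and-slow vs. narrow-and-fast trade, a sketch of peak MAD throughput, assuming launch clocks (1350 MHz shader domain on the 8800 GTX, 742 MHz on the 2900 XT) and ignoring G80's co-issued MUL:

```python
def peak_gflops(alus: int, clock_mhz: float, flops_per_clock: int = 2) -> float:
    """Peak throughput counting a MAD as 2 flops per ALU per clock."""
    return alus * flops_per_clock * clock_mhz / 1000

print(f"G80:  {peak_gflops(128, 1350):.0f} GFLOPS")  # ~346 -- fewer ALUs at ~2x clock
print(f"R600: {peak_gflops(320,  742):.0f} GFLOPS")  # ~475 -- many ALUs at core clock
```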
And as I said earlier, G80 is larger than R600. Hence, G80 is more "bad".
Considering that R600 was 80nm, you gotta admit that R600 was substantially behind G80 in performance per mm².

Thankfully, RV670 seems to have rectified that, as G92 is much bigger even after accounting for 55nm vs. 65nm.
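To put a number on that, a back-of-envelope normalisation, assuming the commonly cited die areas (~324 mm² for G92, ~192 mm² for RV670) and ideal scaling of area with feature size squared, which flatters G92:

```python
# Ideal full shrink: area scales with (feature size)^2 -- optimistic for G92.
g92_mm2, rv670_mm2 = 324, 192            # commonly cited, approximate
g92_at_55nm = g92_mm2 * (55 / 65) ** 2
print(f"G92 scaled to 55nm: ~{g92_at_55nm:.0f} mm^2")        # ~232 mm^2
print(f"Still {g92_at_55nm / rv670_mm2:.2f}x RV670's area")  # ~1.21x
```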
 
And where is the data that shows "rather bad utilization in comparison"?

Just look at the results in any common real-life game benchmark. If your assumptions were right, G80 shouldn't be able to win a single benchmark except for texture-limited cases. So obviously the performance speaks for itself, no need to argue about theoretical values.
 