So, do we know anything about RV670 yet?

One would expect the story of 3xxx vs 2xxx to somewhat mimic that of the GT vs G80: faster in shader-limited cases, slower in bandwidth-limited ones.

The real question for most folk will be how well the 3870 beats the 2900. Even if it's a bit slower than the GT, it would slide by as long as it's faster than the 2900, especially in the AA department, without the need for deferred rendering.
 
I also have to ask: why is there any delay in the 3870 benchmarks? What is there to be discreet about at this point? Nvidia has released the GT and it's available at most stores (and appears to be doing well). The GTS 128 SP will be released within weeks and benchmarks are forthcoming. What I don't seem to comprehend is why ATI is still withholding 3870 benchmark results. Someone help me understand this enigma. If Nvidia is the only current competitor, what other factor am I missing here? Nvidia is moving forward with two very nice cards, yet we don't have a clue whether the 3870 truly exists other than the fact that there is information about DX10.1!
 

How about because you plan a launch and then you follow through... you don't undercut your event by leaking numbers to sate the appetites of a few rabid forum goers who act like impatient children.
 

Name calling isn't really an answer, just an incitement to argue. But on the same note, being flexible about job duties and able to handle situations as they occur is something that should be part of most performance reviews. So, having said that, maybe you have a better understanding of my question? :D
 
Reviewing media need time with the card; so far no one has claimed to have even received it yet.
 
So now you're saying it's 64*5, which is better. R600 has 320 ALUs working per engine clock, G80 has 128 ALUs per shader clock. Would that be a reasonable assessment? If you are able to take advantage of 320 ALUs per clock, wouldn't that be something to advertise? In general, we see very good utilization of our ALUs. I don't know why the term "stream processor" was chosen over "ALU" or something else; I am not a marketing guy.


I am confused by your statements. You insist on comparing 128 to 320, but then you admit that they aren't comparable due to differences in clocking, etc. So why bother to compare them at all?

What is the point you are trying to make about R600 v. G80?

Except that G80 is larger than R600 so you can make even fewer G80's per wafer than R600s. In other words, I don't understand the point of this paragraph.
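For what it's worth, the raw-throughput comparison can be made concrete. A quick sketch (my own numbers, not from the thread), assuming each ALU issues one MAD (two flops) per clock, with R600's shaders at the 742MHz core clock and G80's at its separate 1350MHz shader clock:

```python
# Peak programmable-shader throughput sketch. Assumptions (mine, not the
# thread's): one MAD = 2 flops per ALU per clock; R600 shaders run at the
# 742 MHz core clock, G80 shaders at their 1350 MHz shader clock, and the
# G80 figure ignores the co-issued MUL.

def peak_gflops(alus, clock_ghz, flops_per_alu_per_clock=2):
    """Peak shader throughput in GFLOPS."""
    return alus * clock_ghz * flops_per_alu_per_clock

r600 = peak_gflops(320, 0.742)   # R600: 320 ALUs at core clock
g80 = peak_gflops(128, 1.350)    # G80: 128 ALUs at shader clock

print(f"R600 peak: {r600:.0f} GFLOPS, G80 peak: {g80:.0f} GFLOPS")
```

Of course, peak numbers say nothing about utilization, which is exactly the point being argued: R600's VLIW peak only materializes if the compiler can fill all five slots.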

Nvidia’s is a more efficient way to get maximum performance out of 128 stream processors; ATI’s, on the other hand, is more complex in the way it squeezes maximum performance out of 320 streams.

But the real concern, in my opinion, has to do with texture units: the R600 has 16 running at approximately 740MHz, where the G80 has 64 running at 575MHz. Simply put, ATI does not have enough texture units; this is where I see the main problem for the Radeon HD 2900 XT. In my opinion, the 320 streaming processors using a VLIW architecture are too complex for a graphics card. They eat up too many transistors, which is why it doesn’t have enough texture units or AA resolve units. To compensate for this, they boosted its clock speed, which makes it run hotter and use more power, and thus necessitates a louder fan.

I like Nvidia’s idea of having the shaders run at a higher clock than the rest of the chip, because then you get extra performance without eating into your transistor count and die size. Using lots of transistors is very bad, because it increases the size and complexity of the chip. The wafers on which chips are made are fixed in size, and if a chip has lots of transistors it takes up lots of space, so you can’t make as many of them from one wafer. Big, complex chips can really hurt how much it costs to make them.

But RV670, built on 55nm tech, changes the picture dramatically: it will not run as hot, and it will be cheaper to produce, as will the shrink from a 512-bit to a 256-bit memory bus. As for tweaking the core itself, it will be interesting if it ever happens.
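To put the texture-unit argument in numbers, here is a rough sketch using the figures quoted in the post (16 units at ~740MHz vs 64 at 575MHz), assuming one bilinear texel per texture unit per clock:

```python
# Bilinear texel fillrate sketch, using the unit counts and clocks from
# the post above (not independently verified); assumes one bilinear
# texel per texture unit per clock.

def gtexels_per_s(units, clock_mhz):
    """Texel fillrate in GTexels/s."""
    return units * clock_mhz / 1000.0

r600_tex = gtexels_per_s(16, 740)   # R600: 16 units at ~740 MHz
g80_tex = gtexels_per_s(64, 575)    # G80: 64 units at 575 MHz

print(f"R600: {r600_tex:.1f} GTexels/s, G80: {g80_tex:.1f} GTexels/s")
```

On these figures the gap is roughly 3x in G80's favor, which is why texture-bound workloads are where the 2900 XT tends to fall behind.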
 
Nvidia’s is a more efficient way to get maximum performance out of 128 stream processors; ATI’s, on the other hand, is more complex in the way it squeezes maximum performance out of 320 streams.
There are benefits to each. G80 may be easier to program, but it also has lower peak ALU power. Hence this is debatable.
But the real concern, in my opinion, has to do with texture units: the R600 has 16 running at approximately 740MHz, where the G80 has 64 running at 575MHz. Simply put, ATI does not have enough texture units; this is where I see the main problem for the Radeon HD 2900 XT. In my opinion, the 320 streaming processors using a VLIW architecture are too complex for a graphics card. They eat up too many transistors, which is why it doesn’t have enough texture units or AA resolve units. To compensate for this, they boosted its clock speed, which makes it run hotter and use more power, and thus necessitates a louder fan. I like Nvidia’s idea of having the shaders run at a higher clock than the rest of the chip, because then you get extra performance without eating into your transistor count and die size.
How much area do you suppose it takes to run your shaders at 1150MHz vs. 575MHz? Doubling the clock isn't free.
Using lots of transistors is very bad, because it increases the size and complexity of the chip. The wafers on which chips are made are fixed in size, and if a chip has lots of transistors it takes up lots of space, so you can’t make as many of them from one wafer.
And as I said earlier, G80 is larger than R600. Hence, G80 is more "bad".

P.S. Please use paragraphs. Replying to a wall of text is not enjoyable.
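The dies-per-wafer point both posts are circling can be sketched with the standard edge-loss approximation. The die areas here are rough public estimates (G80 ~484 mm², R600 ~420 mm²), not figures from the thread:

```python
import math

# Dies-per-wafer sketch: bigger die = fewer candidate dies per wafer.
# Die areas are rough public estimates (G80 ~484 mm^2, R600 ~420 mm^2);
# 300 mm wafer; uses the common wafer-area-minus-edge-loss approximation
# and ignores yield.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Approximate gross dies per wafer."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

g80_dies = dies_per_wafer(484)
r600_dies = dies_per_wafer(420)

print(f"G80: ~{g80_dies} dies/wafer, R600: ~{r600_dies} dies/wafer")
```

Which is the reply's point: on these estimates G80 yields fewer candidate dies per wafer than R600, so "lots of transistors is bad" cuts against G80 at least as hard.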
 

Not worth debating. :) Let's get back to RV670!


Hint: NV30 had too many transistors on 130nm tech!

Replying to a wall of text is not enjoyable.
Edit: Sorry :(
 
There is no holdup.:rolleyes: The launch has been scheduled, and samples will be sent out accordingly.
Funny how people forget that RV670 has supposedly been brought forward a couple of months, and then another week or two...

Mmm, that ixbt data looks tasty.

Jawed
 
Penstarsys is saying the RV670 has about 100 million more transistors than the R600. This is rumor (or at least unconfirmed), but why would the RV670 have more transistors than the R600 if it were "just" a die shrink?

Penstarsys said:
The RV670 has a much higher transistor count than the R600, and I am speculating that it might be as high as 800 million transistors vs. the R600’s 700 million.

Penstarsys: State of 3D: Q4 2007
 