> ...the possibility of 100 million more transistors.

I remember the 2900XT having over 700 million transistors and RV670 having 666 million (from the presentation linked a couple of days ago).
> Cool, is it possible to just install NVPerfKit and see those numbers for the commercial build of any third-party application/game, or do you need a special version of the application as well?

AFAIK, no need for special versions.
> I wonder why on Devil's black earth the internal codename is "Gladiator" and not Antichrist.

Because that would make G92 automatically "the second coming of Christ"!
RV670XT to end up 20+ percent slower
Our sources have confirmed that RV670XT, the card that might be called Radeon HD 3870, should end up more than twenty percent slower than the 8800GT 512MB at its default clock.
ATI's chip simply cannot compete with 112 shader units at such a high clock, but it will sell for some $50-$60 less to begin with.
RV670PRO will end up even cheaper, as it has the hard task of fighting the soon-to-be-launched 8800GT 256MB version.
The performance analysis is based on more than ten games and the current drivers, so if ATI comes up with miraculous drivers, maybe they will be even more competitive.
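Taking the article's two claims at face value (20+ percent slower, $50-60 cheaper), here is a quick perf-per-dollar sketch. The $249 baseline price is purely an assumption for illustration; the article only gives the gap, not actual prices.

```python
# Back-of-the-envelope perf-per-dollar check for the figures quoted above.
# ASSUMPTION: an 8800GT 512MB baseline price of $249 (illustrative only);
# the article itself only claims a $50-60 gap and a >20% performance deficit.

def perf_per_dollar(relative_perf, price):
    """Relative performance divided by price, scaled x100 for readability."""
    return relative_perf / price * 100

gt_price = 249.0                  # assumed baseline price
rv670xt_price = gt_price - 55.0   # mid-point of the quoted $50-60 gap
rv670xt_perf = 0.80               # "more than twenty percent slower"

print(f"8800GT 512MB : {perf_per_dollar(1.00, gt_price):.3f}")
print(f"RV670XT      : {perf_per_dollar(rv670xt_perf, rv670xt_price):.3f}")
# With these assumptions the two cards land at nearly identical
# performance per dollar, which is why the price gap matters so much.
```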
> 2 days ago Fruitzilla wrote the 3870 is 8-10% slower than the default 8800GT, now 20+%.

My first reaction is that these bits are for covering all the bases. On second thought, if we compare the 8800GT and 2900XT in Crysis (on page 29 of this thread), we can see that the 2900XT is up to 16% slower (1600x1200 without AA) or 57% faster (1920x1440 4xAA). Take your pick and make the news.
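For anyone wondering how such percentages are derived: a "faster/slower" figure is just one card's frame rate relative to the other's, and it flips around depending on which setting you pick as the headline. The frame rates below are placeholders chosen to reproduce the quoted -16%/+57%, not the actual page-29 numbers.

```python
# How "X% slower" / "Y% faster" figures are derived from frame rates.
# The frame rates below are PLACEHOLDERS chosen to reproduce the quoted
# -16% / +57%, not the actual page-29 numbers.

def relative_delta(card_fps, baseline_fps):
    """Percentage difference of card_fps versus baseline_fps."""
    return (card_fps / baseline_fps - 1.0) * 100

no_aa   = {"8800GT": 50.0, "2900XT": 42.0}   # hypothetical 1600x1200, no AA
with_aa = {"8800GT": 14.0, "2900XT": 22.0}   # hypothetical 1920x1440, 4xAA

print(f"no AA: 2900XT is {relative_delta(no_aa['2900XT'], no_aa['8800GT']):+.0f}% vs 8800GT")
print(f"4xAA : 2900XT is {relative_delta(with_aa['2900XT'], with_aa['8800GT']):+.0f}% vs 8800GT")
# The same pair of cards reads as "slower" or "much faster" depending on
# which setting you pick as the headline - hence "take your pick".
```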
A girl presents the Yeston HD3850 retail card:
http://we.pcinlife.com/thread-840859-1-1.html#zoom
...
I won't comment on RV670 (whatever that is), but it sure is interesting that G80's "64" texture units don't perform 4x as fast as R600's 16.
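A rough way to see why the "64 vs 16" label does not mean a 4x gap: peak bilinear rate is texture-address throughput times core clock, and G80 only addresses 32 bilinear texels per clock despite its 64 filtering units. The clocks and unit counts below are the commonly quoted launch figures for the 8800 GTX and HD 2900 XT, so treat them as assumptions.

```python
# Why "64 vs 16" texture units does not translate into a 4x gap.
# ASSUMPTIONS: commonly quoted launch clocks for 8800 GTX (575 MHz core)
# and HD 2900 XT (742 MHz), and G80 addressing 32 bilinear texels/clock
# (its 64 filtering units mainly help fp16 and anisotropic filtering).

def bilinear_fillrate(texels_per_clock, core_mhz):
    """Peak bilinear texel rate in GTexels/s."""
    return texels_per_clock * core_mhz / 1000.0

g80_rate  = bilinear_fillrate(32, 575)   # ~18.4 GTexels/s
r600_rate = bilinear_fillrate(16, 742)   # ~11.9 GTexels/s

print(f"G80  peak bilinear: {g80_rate:.1f} GTexels/s")
print(f"R600 peak bilinear: {r600_rate:.1f} GTexels/s")
print(f"ratio: {g80_rate / r600_rate:.2f}x, not 4x")
```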
> But do R600's 16 TUs provide the same image quality?

Better, in my view.

Jawed
> And what is the problem? Is it hard to accept that the special function unit can do MADs as well?

Sorry, I thought you were implying 4 ALUs per shader unit. Instead, when correcting Shtal from 5:1 to 4:1, you were talking about ALU:TEX.
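For anyone lost in the 5:1 vs 4:1 back-and-forth, the arithmetic, using the commonly quoted R600 figures (treat them as assumptions), looks like this:

```python
# The 5:1 vs 4:1 confusion, spelled out with the commonly quoted R600
# figures (treat them as assumptions, not official numbers).

vliw_width    = 5    # ALUs per shader unit; the 5th is the SFU slot,
                     # which per the post above can also issue MADs
shader_units  = 64   # 5-wide units, i.e. 320 scalar ALUs in total
texture_units = 16

print(f"scalar ALUs          : {vliw_width * shader_units}")
print(f"shader unit : TEX    : {shader_units // texture_units}:1")  # the 4:1
print(f"ALUs per shader unit : {vliw_width}  (where the '5' comes from)")
```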
> Booor?

Not even the first time I've said it.
I think you're the first person I've seen make that comment in this generation. Not counting SirPauly and his EATM obsession, of course.
> There are benefits to each. G80 may be easier to program, but it also has lower peak ALU power. Hence this is debatable.

Indeed, R600 does have more peak power. However, I assume NVidia is going to push the ALU clocks even higher later on, to levels that CPUs run at. I'm hoping ATI does the same, as AMD's know-how must be useful here, right?
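Rough peak numbers behind the "R600 has more peak power" point, counting only MADs and ignoring G80's hard-to-use extra MUL; the clocks are the usual launch figures, so this is only a back-of-the-envelope sketch:

```python
# Peak MAD throughput behind "R600 does have more peak power".
# ASSUMPTIONS: launch shader clocks (8800 GTX at 1350 MHz, 2900 XT at
# 742 MHz) and counting only the MAD, ignoring G80's hard-to-use extra MUL.

def peak_gflops(alus, shader_mhz, flops_per_alu_per_clock=2):
    """Peak GFLOPS counting a MAD as 2 flops per ALU per clock."""
    return alus * shader_mhz * flops_per_alu_per_clock / 1000.0

g80_gtx = peak_gflops(128, 1350)   # ~346 GFLOPS (MAD only)
r600    = peak_gflops(320, 742)    # ~475 GFLOPS

print(f"G80 (8800 GTX): {g80_gtx:.0f} GFLOPS")
print(f"R600 (2900 XT): {r600:.0f} GFLOPS")
# R600 wins on paper, but only if the compiler keeps all five slots of
# each VLIW unit busy - which is exactly the utilization debate here.
```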
How much area do you suppose it takes to run your shaders at 1150 MHz vs. 575 MHz? Doubling the clocks isn't free.
> And as I said earlier, G80 is larger than R600. Hence, G80 is more "bad".

Considering that R600 was 80nm, you gotta admit that R600 was substantially behind G80 in performance per mm2.
And where is the data that shows "rather bad utilization in comparison"?
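On the performance-per-mm2 point a couple of posts up, here is one crude way to compare dies built on different nodes. The die sizes are the commonly quoted figures and the (90/80)^2 rescaling is a deliberately naive assumption:

```python
# One crude way to frame "performance per mm2" across different nodes.
# ASSUMPTIONS: commonly quoted die sizes (G80 ~484 mm^2 at 90nm,
# R600 ~420 mm^2 at 80nm) and a deliberately naive (90/80)^2 rescaling.

g80_area_mm2  = 484.0   # 90nm
r600_area_mm2 = 420.0   # 80nm

# Rescale R600's area to a 90nm-equivalent footprint so both dies are
# compared on the same node.
r600_at_90nm = r600_area_mm2 * (90.0 / 80.0) ** 2

print(f"G80 die           : {g80_area_mm2:.0f} mm^2 @ 90nm")
print(f"R600 die          : {r600_area_mm2:.0f} mm^2 @ 80nm")
print(f"R600, 90nm-equiv. : {r600_at_90nm:.0f} mm^2")
# Node-normalized, R600 is no longer the smaller chip, which is the point
# being made about performance per mm2.
```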