> Sure, if you count marketing numbers.

How so?
> Sure, if you count marketing numbers.

What's your point? NVidia gets less than a 50% boost from the additional 235 flops of the second MUL, and ATI gets less than a 400% boost from 160x(5x1D) instead of 160x1D. Both claim 100% utilization under the right workload.
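Those boost ceilings fall straight out of the per-clock issue rates. A quick sketch, assuming the usual flop counting (MAD = 2 flops, MUL = 1 flop):

```python
# Peak-rate ceilings implied by the ALU configurations discussed above.
# Flop accounting is the conventional assumption: MAD = 2 flops, MUL = 1 flop.

def boost_pct(base_flops, extra_flops):
    """Percentage gain from enabling the extra issue slot."""
    return 100 * extra_flops / base_flops

# NVidia: each SP issues a MAD (2 flops); the co-issued MUL adds 1 more,
# so the theoretical ceiling is a 50% boost -- real gains are below that.
nv_ceiling = boost_pct(2, 1)

# ATI: 160 shaders at 5x1D vs. a hypothetical 160x1D layout is a 5x ceiling,
# i.e. at most a 400% boost -- again, real gains are below that.
ati_ceiling = boost_pct(1, 4)

print(nv_ceiling, ati_ceiling)  # 50.0 400.0
```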
> What's your point? NVidia gets less than a 50% boost from the additional 235 flops of the second MUL, and ATI gets less than a 400% boost from 160x(5x1D) instead of 160x1D. Both claim 100% utilization under the right workload.

There's no theoretical way to achieve more than 50% utilization of the MUL on either G80 or G92, though (and it's not exposed at all on G80 outside of CUDA, except in an ancient and obscure driver revision).
> I got 264 mm², 16mm x 16.5mm.

For G200? Or are you talking G92b?
Thanks. In Tridam, we trust.
> For G200? Or are you talking G92b?

Not this year. (He's referring to G92b.)
> There's no theoretical way to achieve more than 50% utilization of the MUL on either G80 or G92, though (and it's not exposed at all on G80 outside of CUDA, except in an ancient and obscure driver revision).

Then what's up with NVidia's claim of it improving performance of the Perlin Noise shader in Vantage?
> ...and RV770 keeping up with something that has 18% more fillrate, 89% more texture rate, 136% more Z rate, and 10% more BW is also fairly impressive.

Yeah, what was NVidia thinking with all that TEX and Z rate?
BTW, RV770 has only 42% more flops.
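For the curious, all of those percentages can be reproduced from the board specs. A sketch, assuming (my assumption, the thread never names the boards) the comparison is a 9800 GTX+ (G92b) vs. an HD 4850 (RV770), with commonly listed clocks and unit counts:

```python
# Reproducing the quoted deltas from assumed specs: 9800 GTX+ (G92b)
# vs. HD 4850 (RV770). All numbers below are my assumptions, not from the thread.

g92b = dict(
    core_mhz=738, shader_mhz=1836, rops=16, tmus=64,
    z_per_rop=4,              # assuming G92 writes 4 Z samples per ROP per clock
    sps=128, flops_per_sp=3,  # MAD + MUL
    bw_gbs=70.4,              # 1100 MHz GDDR3, 256-bit
)
rv770 = dict(
    core_mhz=625, shader_mhz=625, rops=16, tmus=40,
    z_per_rop=2,              # assuming 2 Z samples per ROP per clock
    sps=800, flops_per_sp=2,  # MAD
    bw_gbs=64.0,              # assuming 1 GHz GDDR3, 256-bit
)

def pct_more(a, b):
    return round(100 * (a / b - 1))

fill  = lambda c: c["rops"] * c["core_mhz"]                 # Mpix/s
tex   = lambda c: c["tmus"] * c["core_mhz"]                 # Mtex/s
zrate = lambda c: c["rops"] * c["z_per_rop"] * c["core_mhz"]
flops = lambda c: c["sps"] * c["flops_per_sp"] * c["shader_mhz"]

print(pct_more(fill(g92b), fill(rv770)))          # 18  (% more fillrate)
print(pct_more(tex(g92b), tex(rv770)))            # 89  (% more texture rate)
print(pct_more(zrate(g92b), zrate(rv770)))        # 136 (% more Z rate)
print(pct_more(g92b["bw_gbs"], rv770["bw_gbs"]))  # 10  (% more BW)
print(pct_more(flops(rv770), flops(g92b)))        # 42  (RV770's flop advantage)
```

Under those assumed specs, every figure in the two posts above (18/89/136/10% for G92b, 42% more flops for RV770) falls out of the same simple unit-count-times-clock arithmetic.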
> Thanks. In Tridam, we trust.

From that review, it seems power draw is basically the same as the old 9800 GTX's. So essentially NVidia got a 9% clock increase along with the die shrink for free (I'm sure they could have opted for a small power-draw decrease instead if they'd kept the clock the same). Though I must say idle power draw is still disappointing compared to the competition, even though the HD 4850 with current drivers, or even the GTX 2xx, wouldn't have been hard to beat. It gets dangerously close to the GTX 260 in some situations (though it has the same load power draw too...), and is quite competitive overall with the HD 4850 (which is cheaper, draws less power, and has a better feature set, however).
CarstenS said:
> Having said that - I have no idea whether it's the true number of the beast.

The true number is something between 585 and 600, but definitely less than 600. My bet would be about 595... *shrugs*
Mintmaster said:
> Then what's up with NVidia's claim of it improving performance of the Perlin Noise shader in Vantage?

For GT200, not G80... Plus, the MUL is actually exposed for G92 now; it's just more limited than on GT200 for architectural reasons. On G80, it's still not exposed and likely never will be, even though the SM arch is 100% identical to G92's afaik. Guess they had to figure out a way to make the 9800GTX gain a few percentage points against G80!
> Yeah, what was NVidia thinking with all that TEX and Z rate?

I don't know if you can fault NVidia in the ROP/Z department. It's amazing how close G94 is to G92 in games:
Jawed
> G92b. Looks like the direct link to the picture doesn't work. You can get it there: http://www.hardware.fr/articles/724-2/b-maj-b-preview-radeon-hd-4850-geforce-9800-gtx-v2.html

Thanks