55nm G92b die-shot @ PCPer

Sure, if you count marketing numbers.
What's your point? NVidia gets less than a 50% boost from the additional 235 GFLOPS of the second MUL, and ATI gets less than a 400% boost from 160x(5x1D) instead of 160x1D. Both claim 100% utilization under the right workload.
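Quick back-of-the-envelope for where those figures come from (my own sketch, assuming the commonly quoted specs - 128 SPs at 1836 MHz for the 9800GTX+ and 160x5 ALUs at 625 MHz for the HD4850 - none of which is stated above):

    # Assumed specs: 9800GTX+ = 128 SPs @ 1.836 GHz, HD4850 = 160 units x 5 ALUs @ 0.625 GHz.
    # MAD counted as 2 flops/clock, the extra MUL as 1 flop/clock.
    g92b_mad     = 128 * 2 * 1.836        # ~470 GFLOPS from the MAD alone
    g92b_mad_mul = 128 * 3 * 1.836        # ~705 GFLOPS with the second MUL counted
    print(g92b_mad_mul - g92b_mad)        # ~235 GFLOPS added, i.e. at best a 50% boost

    rv770_1d = 160 * 1 * 2 * 0.625        # 200 GFLOPS if each unit were a single 1D ALU
    rv770_5d = 160 * 5 * 2 * 0.625        # 1000 GFLOPS as 5-wide units, i.e. at best +400%
    print(rv770_5d / rv770_1d)            # 5.0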
 
What's your point? NVidia gets less than a 50% boost from the additional 235 GFLOPS of the second MUL, and ATI gets less than a 400% boost from 160x(5x1D) instead of 160x1D. Both claim 100% utilization under the right workload.
There's no theoretical way to achieve more than 50% utilization of the MUL on either G80 or G92 though (and it's not exposed at all on G80 outside of CUDA except in an ancient and obscure driver revision)
And cho, that's definitely interesting but... what's your source? :) Or is that your own insider data?
 
It's the square root of 576, for starters - a number floating around quite frequently. Plus, I remember having seen it written somewhere under a die in bright yellow letters...

Having said that - I have no idea whether it's the true number of the beast.
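(In case the riddle isn't obvious - a trivial check, assuming a square die; the 576 mm² area is just the oft-quoted figure referred to above:)

    import math
    die_area_mm2 = 576.0                  # the commonly quoted area
    edge_mm = math.sqrt(die_area_mm2)     # 24.0 -> a 24 mm x 24 mm square die
    print(edge_mm)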
 
There's no theoretical way to achieve more than 50% utilization of the MUL on either G80 or G92 though (and it's not exposed at all on G80 outside of CUDA except in an ancient and obscure driver revision)
Then what's up with NVidia's claim that it improves performance of the Perlin Noise shader in Vantage?
 
...and RV770 keeping up with something that has 18% more fillrate, 89% more texture rate, 136% more Z rate, and 10% more BW is also fairly impressive. ;)

BTW, RV770 has only 42% more flops.
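(That 42% follows from the same assumed peaks as the sketch above - roughly 1000 GFLOPS for the HD4850 and ~705 GFLOPS for the 9800GTX+ with the MUL counted:)

    rv770_peak = 160 * 5 * 2 * 0.625            # ~1000 GFLOPS (assumed HD4850 peak)
    g92b_peak  = 128 * 3 * 1.836                # ~705 GFLOPS (assumed 9800GTX+ peak, MAD+MUL)
    print((rv770_peak / g92b_peak - 1) * 100)   # ~42% more flops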
Yeah, what was NVidia thinking with all that TEX and Z rate?

Jawed
 
Thanks. In Tridam, we trust. :yep2:
From that review, it seems power draw is basically the same as the old 9800GTX's, so essentially nvidia got a 9% clock increase along with the die shrink for free (I'm sure they could have opted for a small power draw decrease instead if they'd kept the clock the same). I must say idle power draw is still disappointing compared to the competition, even though the HD4850 with current drivers - or even the GTX2xx - wouldn't have been hard to beat there. It gets dangerously close to the GTX260 in some situations (though it has the same load power draw too...), and overall it's quite competitive with the HD4850 (which, however, is cheaper, draws less power, and has a better feature set).
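(For the clock figure: a quick check, assuming the usual listed core clocks of 675 MHz for the 9800GTX and 738 MHz for the 9800GTX+ - figures from memory, not from the review:)

    gtx_core  = 675.0                         # MHz, 9800GTX core clock (assumed)
    plus_core = 738.0                         # MHz, 9800GTX+ core clock (assumed)
    print((plus_core / gtx_core - 1) * 100)   # ~9.3% clock increase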
I'm wondering, though: will nvidia replace G92 chips with G92b in existing cards, or announce entirely new products (apart from the 9800GTX+)? It seems to me it wouldn't really make sense to produce both chips.
 
CarstenS said:
Having said that - I have no idea whether it's the true number of the beast.
The true number is something between 585 and 600, but definitely less than 600. My bet would be about 595... *shrugs*
Mintmaster said:
Then what's up with NVidia's claim that it improves performance of the Perlin Noise shader in Vantage?
For GT200, not G80... ;) Plus, the MUL is actually exposed for G92 now, it's just more limited than on GT200 for architectural reasons. On G80, it's still not exposed and likely never will be, even though the SM arch is 100% identical to G92's afaik. Guess they had to figure out a way to make the 9800GTX gain a few percentage points against G80!
 
Yeah, what was NVidia thinking with all that TEX and Z rate?

Jawed
I don't know if you can fault NVidia in the ROP/Z department. It's amazing how close G94 is to G92 in games:
http://www.techreport.com/articles.x/14168/5
Some of that is setup, but I'm sure ROPs are a big part of it.

As for TEX, I still like their balance more than ATI's. :p

Given ATI's new ALU IP, though, I can't blame them. Half a teraflop in 30 mm²? Sure, slap it on! :LOL:
 
Interesting. So when can we expect NV's PR to start pumping out the "AA at high resolutions isn't necessary" meme? :p

Just kidding. But, as ATI once again have a good competitor out, I expect teh evil NVidia PR to start working overtime once again. ;)
 
Yeah, what was NVidia thinking with all that TEX and Z rate?

Jawed

I for one can definitely think of some things to do with abundant amounts of TEX fill: building an aniso filter that does not produce artifacts like the well-known shimmering from GF7, which has returned in the (texture-wise) less-than-well-equipped R6xx.

If you've got enough bandwidth and enough TEX, you do not have to... errr... optimize texture sampling quite so aggressively.
 
I've only seen static shots of scenes where R600 is supposed to shimmer, never a video. Would be good to see a G71 and R600 shoot-out.

Jawed
 
The only time I've seen reference to R6xx-class hardware shimmering is with CatAI set to high... with G7x the consumer didn't really have a choice, short of taking a rather hefty performance hit.


More than 4x AA isn't really that important at very high resolutions IMHO, but at the midrange, 8x could be very nice indeed.

Random question: is SSAA still available on nvidia hardware?
 