NVIDIA G92 : Pre-review bits and pieces

He thinks RV670 has more than 800M transistors, while we now know it has even fewer than R600 and G80, at 666M. And there are some more flaws in there.

Well, Josh has to fit the transistor count into his theory that RV670 = R600 with 512bit bus + internal "fixes" + integrated Xillion (UVD) + DX10.1 support.

Trying to reconcile 800M+ transistors with the known die size is pretty hard, though.
 
Just finished reading Josh's latest on penstarsys. A summary:
  • G92 is 384bit
  • Nvidia shaved off 80-100 million transistors (à la G71), with some undisclosed/unknown feature taking up the space
  • Not sure if G92 has DP support
  • AMD partners happy with the way they handled R600 EOL (2900 Pro)
  • RV670 has fixed AA resolve that plagued R600
  • Nvidia will refresh their line up in November
    8800GTX (new) - 700MHz Core, 128SPs @ 1800MHz, 384-bit
    8800GTS (new) - 650MHz Core, 112SPs @ 1600MHz, 320-bit
  • GX2 in Q1 2008 - 600MHz Core(s), 2 x 128 SPs, 1.5GB
  • Possibility of dual-slot RV670 at 1GHz to combat some of these refreshes (GTS), also possible that RV670 is 512-bit
  • 2 x RV670 card in Q1 2008

So the new G92 GTS and GTX will be called GTS/GTX V2? I think I saw that somewhere. What happened to 8900? Still a spring product?

My head hurts. :LOL:

Nice to see something new in the high end in November, I guess. The new GTX should be pretty damn fast with the nice G92 improvements.
 
I wouldn't believe everything that's written in that article. For instance:

He thinks RV670 has more than 800M transistors, while we now know it has even fewer than R600 and G80, at 666M. And there are some more flaws in there.
Well, Josh's AMD sources were never that good, so that's not surprising.

Arun, I agree mostly with your SKUs. Remember, I made some guesses and you told me you'd been preaching the same thing for the past two months.
 
It's not like EATM, Pauly. It doesn't adjust LoD. It's simply a more accurate form of alpha-to-coverage, with a lot fewer blending artifacts.
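For anyone unfamiliar with the mechanism being discussed, here's a minimal sketch of plain alpha-to-coverage. The helper is hypothetical, written for illustration only; it ignores the dithering and accuracy improvements the post is talking about, and real hardware spreads the enabled samples, it doesn't just fill from the bottom bit up:

```python
def alpha_to_coverage(alpha: float, num_samples: int = 4) -> int:
    """Convert a fragment's alpha into an MSAA coverage bitmask.

    Instead of blending, the fragment is written to round(alpha * N)
    of the pixel's N subsamples; the normal multisample resolve then
    produces the smoothed edge on alpha-tested textures like foliage.
    """
    covered = round(max(0.0, min(1.0, alpha)) * num_samples)
    return (1 << covered) - 1  # enable 'covered' samples

print(bin(alpha_to_coverage(0.5)))   # 2 of 4 samples -> 0b11
print(bin(alpha_to_coverage(1.0)))   # all 4 samples -> 0b1111
```

The coarseness of this mapping (only N+1 possible alpha levels) is exactly why the blending artifacts the post mentions show up, and why a more accurate variant is an improvement.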

It's too bad ATI couldn't offer that feature officially......sigh!

However, these improved features that offer quality with performance on alpha textures are very welcome from either ATI or nVidia. Super-sampling is still nice to have, but the ability to raise your resolution and still get good quality on alphas rocks. That's why these faster alpha-handling features are so welcome.
 
That little tidbit keeps popping up, but R600's lead designer (and Dave as well) stated AA resolve through the shader processors was a design decision, not a bug.

A design decision indeed, and a horrible one, too (making it shader only, that is). I think it's still not too late for RV670 to revert it. We'll find out soon.
 
A design decision indeed, and a horrible one, too (making it shader only, that is).
Feel free to post some evidence or at least a theory as to why...

The best I can come up with is that games that heavily use shadow buffers with AA may be seeing a slow-down due to shader AA resolve, but that's about as close as I can get, and I've no idea if it's meaningful.

Jawed
 
Sure it is a bug. And you won't really expect someone from ATI to say "yeah, we fwcked that up" :LOL:

As for Josh, he has a vivid imagination for sure.
 
That little tidbit keeps popping up, but R600's lead designer (and Dave as well) stated AA resolve through the shader processors was a design decision, not a bug.
It probably was, but it should not be that slow. The bandwidth, texture ability, and math ops needed for shader resolve suggest that it should take a lot less time than it does. Compare the AA performance drop between R580 and R600, and there had to be a bug somewhere.

I don't think the design decision was that bad.
 
Compare the AA performance drop between R580 and R600, and there had to be a bug somewhere.

We have, which leads to the conclusion that dedicated hardware-based AA resolve is faster than shader-based AA resolve.
 
We have, which leads to the conclusion that dedicated hardware-based AA resolve is faster than shader-based AA resolve.
That conclusion rests on too many assumptions. The point I was making is that even if we assume the resolve is infinitely fast on R580, the difference between the two is far larger than the time the resolve should take in the shader.

R600 can feed its shaders 80 FP32 values per clock. It only needs 64 RGBA8 samples per clock to output 4xAA resolved pixels at 16 per clock. The resolve time should be tenths of a millisecond even for high resolutions.
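As a sanity check on that claim, here's a back-of-the-envelope estimate using the throughput figure quoted above. The core clock is an assumption (roughly stock HD 2900 XT), not something stated in the thread:

```python
# Rough estimate of shader-based 4xAA resolve time on R600.
# Assumptions: ~742 MHz core clock (stock HD 2900 XT), and the
# 16 resolved pixels per clock quoted above (64 RGBA8 samples in).
core_clock_hz = 742e6        # assumed core clock
pixels_per_clock = 16        # 64 samples/clock -> 16 resolved 4xAA pixels
width, height = 2560, 1600   # a high resolution

resolve_time_ms = (width * height) / (pixels_per_clock * core_clock_hz) * 1e3
print(f"4xAA resolve of {width}x{height}: {resolve_time_ms:.2f} ms")
# roughly a third of a millisecond, i.e. "tenths of a millisecond"
```

If the theoretical cost is this small, a multi-millisecond AA hit in practice would indeed point at something other than the resolve math itself.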
 
How insane is it that NV is actually adding texture capability in G92? So I guess, in an (American) football analogy, they go from beating AMD in texturing 45-7 with G80 to 52-7 with G92.

Talk about adding insult to injury regarding AMD's completely wrong, insanely texture-limited path.

It's maddening how obvious AMD's screwups are. I can see them, and I don't know a damn thing about graphics cards. Fix your texturing. Fix your AA.

Interestingly, G92 beats R600 by a larger margin the newer the game is, as far as I can tell. R600 is definitely a backward-looking architecture.
 
It's too bad ATI couldn't offer that feature officially......sigh!

However, these improved features that offer quality with performance on alpha textures are very welcome from either ATI or nVidia. Super-sampling is still nice to have, but the ability to raise your resolution and still get good quality on alphas rocks. That's why these faster alpha-handling features are so welcome.

Definitely agreed on the second paragraph; as for the first sentence, I'd prefer to see blur only on very fast-moving objects in a scene, and while I also believe we need better filters for AA in general, I still object to anything that blurs the entire scene.

Custom filters can be applied selectively within a scene with a clever algorithm, and from what I've seen, research is still active on cost-effective forms of stochastic motion blur, which shouldn't necessarily cost as much performance as spatio-temporal AA.
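The "custom filters applied selectively" idea boils down to a programmable resolve: each pixel can weight its subsamples differently instead of using one fixed box filter. A toy sketch, with purely illustrative weights (not ATI's actual CFAA kernels):

```python
def resolve(samples, weights):
    """Weighted average of one pixel's MSAA subsamples."""
    return sum(s * w for s, w in zip(samples, weights)) / sum(weights)

# One pixel's 4xAA subsamples: only the last sample is covered.
subsamples = [0.0, 0.0, 0.0, 1.0]

box  = resolve(subsamples, [1, 1, 1, 1])  # plain box filter -> 0.25
tent = resolve(subsamples, [1, 3, 3, 1])  # center-weighted  -> 0.125
print(box, tent)
```

A shader-based resolve can pick the weights per pixel (e.g. sharper filtering on edges, something else on fast-moving geometry), which is the upside of the design decision being debated above.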
 
I guess most of us think G92 performs better than expected, so instead of a gap-filler between the 8600GTS and 8800GTS we got 8800GTS+ performance for a lower price. But where does this leave the 8600 series? There is still a gap that needs to be filled. Will we get a new 8600 or 8700 with 64 stream processors and a 128-bit memory bus?

I'm looking to upgrade my 7300GT, but like my old card, the new graphics card needs to be very quiet (passive heatsink, no fan).

Per
 
Well, maybe the 8800GT is a counter to the RV670, so they're not forced to do the same for the lower range? If they did pull off the same trick, though, then a 64-shader part at $100-$150 would be very nice indeed. That would be a good upgrade for a 7300GT.
 