The G92 Architecture Rumours & Speculation Thread

It doesn't support DX10.1 (and I don't see any reason why it should, actually). But this may change in the future with a simple driver update for the whole G8x/G9x family (if those rumours are true).

That's not possible afaik. nVidia needs to change their TMUs in order to support Gather4 (Fetch4 in DX10.1) in their hardware, and that's something a driver update won't fix...
 
That's not possible afaik. nVidia needs to change their TMUs in order to support Gather4 (Fetch4 in DX10.1) in their hardware, and that's something a driver update won't fix...


What exactly has to be changed in order to support it? Why can't a driver be updated to support the feature if it has always been there, just never activated? NV has done this sort of thing in the past, you know, albeit with non-DX features.
 
That's not possible afaik. nVidia needs to change their TMUs in order to support Gather4 (Fetch4 in DX10.1) in their hardware, and that's something a driver update won't fix...

"Fetch4". Isn't that what ATI calls it ? What's the official Microsoft name for it ?
And what prevents Nvidia from being DX10.1-compliant by giving it another designation altogether ?
 
Hey, I think Turtle meant Arun's guesses...

42 6C 75 73 68 21 20 48 65 78 20 69 73 20 6E 69 63 72 20 3A 44

Jawed


I did. Sorry about that, Jawed. Went back and fixed it yesterday, all stealth-like. :oops:

Maybe NVidia's saving the 8 cluster dies for the GX2, for a repeat of the 7800GTX-512 effect.

Jawed

Except G92_200/8800GT supposedly uses 110W for its 512MB part, even with those modest clocks. Turning on another TCP cluster (x2), plus doubling the RAM, would surely put it over the 225W marker, which I believe to be the current artificial wall. I suppose it could be done with lower clocks (perhaps?), but why, when yields would probably be higher on a part with fewer TCPs and just higher clocks to compensate.
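(Putting very rough numbers on that, purely as a sanity check: the 110W and 225W figures are the ones quoted above, and the extra allowance for the added cluster and doubled RAM is just a guess.)

```cpp
// Back-of-the-envelope power check for a hypothetical dual-G92 (GX2) board.
// The 110 W board power and the 225 W ceiling are the figures from the post
// above; the per-GPU allowance for the extra cluster + doubled RAM is a guess.
#include <cstdio>

int main() {
    const double single_gpu_board_w = 110.0; // rumoured 8800 GT 512 MB board power
    const double extra_per_gpu_w    = 15.0;  // guess: extra cluster + doubled RAM
    const double power_ceiling_w    = 225.0; // presumably 75 W slot + 2x 75 W 6-pin

    const double gx2_estimate = 2.0 * (single_gpu_board_w + extra_per_gpu_w);
    std::printf("Estimated GX2 board power: %.0f W (%s the %.0f W ceiling)\n",
                gx2_estimate, gx2_estimate > power_ceiling_w ? "over" : "under",
                power_ceiling_w);
    return 0;
}
```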

You'll also remember that the 7950GX2 was essentially a 7900GT (x2), not a 7900GTX (x2).

I think it's very possible they may be saving the 8C dies for a 7800GTX-512-style part indeed. It could be a crushing single-chip part, and perhaps negate the need for a dual-chip part if clocked decently enough (SP/BW-wise), or perhaps priced well. Not saying it's not possible, that's for sure. Something just rubs me the wrong way about a G92 with something like 8C / 800MHz core / 2400MHz SP / 3200MHz memory. If that came about, I would be impressed by them pushing that much out of a shrink.
 
That's not possible afaik. nVidia needs to change their TMUs in order to support Gather4 (Fetch4 in DX10.1) in their hardware, and that's something a driver update won't fix...
They could just emulate it by patching shader code.
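(For illustration, here's a rough CPU-side C++ sketch of what such a patch boils down to. On DX10.1-capable TMUs, Gather4/Fetch4 is a single instruction that returns one channel from the 2x2 bilinear footprint; an emulating driver would have to rewrite that into four point-sampled fetches, i.e. four texture requests instead of one. Addressing, wrap modes and component ordering are simplified assumptions here, not real shader or driver code.)

```cpp
// Rough sketch (not real driver code) of emulating Gather4/Fetch4 with four
// point samples. On hardware with TMU support this is one texture instruction;
// the patched shader has to issue four, which is exactly the cost problem.
#include <algorithm>
#include <array>
#include <cmath>

struct Texture2D {
    int w, h;
    const float* data;                     // single-channel texels, e.g. a shadow map
    float load(int x, int y) const {       // point sample, clamp-to-edge assumed
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return data[y * w + x];
    }
};

// Emulated Gather4: fetch the 2x2 footprint around (u, v) with four loads.
// Component ordering here is illustrative only: (0,1), (1,1), (1,0), (0,0).
std::array<float, 4> gather4_emulated(const Texture2D& tex, float u, float v) {
    const int x0 = static_cast<int>(std::floor(u * tex.w - 0.5f));
    const int y0 = static_cast<int>(std::floor(v * tex.h - 0.5f));
    return { tex.load(x0,     y0 + 1),
             tex.load(x0 + 1, y0 + 1),
             tex.load(x0 + 1, y0),
             tex.load(x0,     y0) };
}

// Typical consumer: 2x2 percentage-closer shadow filtering, which is exactly
// the kind of workload Fetch4 was introduced to speed up.
float pcf2x2(const Texture2D& shadowMap, float u, float v, float refDepth) {
    const auto d = gather4_emulated(shadowMap, u, v);
    float lit = 0.0f;
    for (float depth : d) lit += (refDepth <= depth) ? 1.0f : 0.0f;
    return lit * 0.25f;
}
```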
 
DegustatoR said:
As for the chip size -- how about the added complexity of DP? What if RV670 doesn't support DP and that's why it's relatively small compared to G92, which does?
I might buy the DP theory, but it still seems odd. If G92 is a midrange or second-tier chip aimed at mainstream gamers, then why would Nvidia care about DP for it? If it is a high-end chip, then when has Nvidia used two different chips to round out their high-end offerings?

I suppose it is possible that NVIO + Purevideo3 + DP = 130M+ transistors, but it seems inefficient from a financial perspective... besides, I didn't think Nvidia was too keen on supporting DP for gaming consumers.
 
EDIT: Anyway, looking at the pic of the supposed G92 core, it looks like it easily has more than 800M transistors...
Easily more than 800M?

Look more closely at it and you'll see it's about the same size as a good old R520...

The G92 chip has package dimensions of 37.5 x 37.5 mm, whereas the G80 chip on GeForce 8800 GTS cards has package dimensions of 42.5 x 42.5 mm, so the package size and core area decrease by more than 20%.
http://xtreview.com/addcomment-id-3515-view-GeForce-8800-GT-G92-UMC.html

The G92 die is ~290mm² at 65nm, compared to G80's ~480mm² at 90nm; that gives more or less the same transistor count, given the inability to shrink some parts perfectly.
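(Putting rough numbers on that, using only the figures quoted above: the package area drops by about 22%, and an ideal optical shrink of G80's ~480mm² from 90nm to 65nm would land around 250mm², so a ~290mm² G92 is consistent with roughly the same transistor count plus parts that don't shrink well, or a bit of extra logic.)

```cpp
// Quick sanity check on the die/package numbers quoted above. All inputs are
// the rumoured figures from this thread, not confirmed specs.
#include <cstdio>

int main() {
    // Package sizes from the xtreview link.
    const double g92_pkg_mm2 = 37.5 * 37.5;
    const double g80_pkg_mm2 = 42.5 * 42.5;
    std::printf("Package area shrink: %.1f%%\n",
                100.0 * (1.0 - g92_pkg_mm2 / g80_pkg_mm2));              // ~22%

    // Rumoured die areas and an ideal 90nm -> 65nm linear-shrink estimate.
    const double g80_die_mm2 = 480.0;
    const double g92_die_mm2 = 290.0;
    const double ideal_mm2   = g80_die_mm2 * (65.0 / 90.0) * (65.0 / 90.0);
    std::printf("Ideal shrink of G80 to 65nm: ~%.0f mm^2\n", ideal_mm2);  // ~250
    std::printf("Rumoured G92 die: %.0f mm^2 (~%.0f%% above the ideal shrink)\n",
                g92_die_mm2, 100.0 * (g92_die_mm2 / ideal_mm2 - 1.0));    // ~16%
    return 0;
}
```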
 
"Fetch4". Isn't that what ATI calls it ? What's the official Microsoft name for it ?

Gather4 is the DX10.1 name for Fetch4, which AMD GPUs have had since R580, RV515 and RV530.

And believe me, if G92 were a DX10.1-compliant card... NV would have marketed it as such right from the start, just like AMD is doing with their RV670. The mass market still blindly goes for "the higher number is better, so DX10.1 must be better than DX10", and nV knows that, AMD knows it and we all know it.
 
The G92 die is ~290mm² at 65nm, compared to G80's ~480mm² at 90nm; that gives more or less the same transistor count, given the inability to shrink some parts perfectly.

The G92 core includes I/O functionality. G80 needs the NVIO chip's die area on top of those 480mm² in order to output video signals.

CJ, I haven't seen any mention of DX10 support (let alone DX10.1) on any 8800 GT box art known so far (Asus, MSI, Gigabyte).
Why?
 
Gather4 is the DX10.1 name for Fetch4, which AMD GPUs have had since R580, RV515 and RV530.

And believe me, if G92 were a DX10.1-compliant card... NV would have marketed it as such right from the start, just like AMD is doing with their RV670. The mass market still blindly goes for "the higher number is better, so DX10.1 must be better than DX10", and nV knows that, AMD knows it and we all know it.

By your own reasoning, ATI/AMD should have been trumpeting from the highest mountains that R600 was DX10.1 and not DX10, if all cards since the R5xx line have had Fetch4. The fact that they haven't suggests it is nothing more than a driver update/activation, which you have said can't be done. It wouldn't be the first time NV kept something from the masses until it was activated through a driver update, having been there since the chip was made.
 
@ PSU-failure:
Well, I was just using a photo editor to estimate the area (got about 310mm²)... could have been off a little, but I suppose a little adds up in a hurry at 65nm.
 
By your own reasoning, ATI/AMD should have been trumpeting from the highest mountains that R600 was DX10.1 and not DX10, if all cards since the R5xx line have had Fetch4.

Please don't put words in my mouth. I know exactly what my reasoning is. ;)
 
Remember when thinking of the core sizes - G92 seems to be G80 + VP2 (from G84/86) + NVIO
 
By your own reasoning, ATI/AMD should have been trumpeting from the highest mountains that R600 was DX10.1 and not DX10, if all cards since the R5xx line have had Fetch4. The fact that they haven't suggests it is nothing more than a driver update/activation, which you have said can't be done. It wouldn't be the first time NV kept something from the masses until it was activated through a driver update, having been there since the chip was made.
You're assuming that Gather4 is the only feature of 10.1. As CJ says, chips that weren't marketed that way don't support it in hardware.
 
This post is redundant considering Arun's and Simon's and XMAN's posts, but nothing like repetition to get a point across, nothing like repetition to get a point across.

why is that, pls explain.
Maybe he's not thrilled by your spamming this thread with the same question. Not everyone is as technically informed as Arun and others, but we try not to pollute the thread with insistent repetition and off-topic comments. Next time, why not give someone who might know a chance to answer your question before you repeat it twice?

Sorry Arun. I suggest you start your own forum and Nazi it all you want and only let people who are interested in technical aspects post. I saw no such requirement when I registered, I have no desire in the slightest to learn technical aspects, and I definitely don't care about feeling at home here. I just want information; I don't care where from.
Common sense would suggest you not shout your disdain of learning technology in the 3D Technology subforum. Common courtesy would suggest you not antagonize a worthwhile contributor to these forums (and, oh, an editorial contributor and mod). Lots of us want info, but act more maturely in getting it. Please respect the other forum members and not add noise to an already huge thread.

lopri, what "N***" wouldn't wear that post as a badge of pride, not to mention a shining example of understated, level-headed rhetoric? :rolleyes: If you'd like to comment on the state of the forums, I suppose Site Feedback would be the place, but I'd hope you present your argument in a way more conducive to constructive debate. Really, I'm a little surprised that you'd post something like that--seems out of character.

I'm posting this publicly not to antagonize you two (as clearly that wasn't either of your intentions), but b/c I'm too lazy to PM tips on manners. This post will self-destruct, along with the other recent noise, in a day or two. If you think this post is rude or inappropriate, please use the "Report Post" button (the little triangle with an ! under my name) to call me names privately rather than publicly.
 
CJ, I haven't seen any mention of DX10 support (let alone DX10.1) on any 8800 GT box art known so far (Asus, MSI, Gigabyte).
Why?

Why put DX10 marketing on your boxes when your competitor is marketing a more up-to-date feature set? The solution is to not put it on at all and hope consumers assume it's a DX10.1 chip.

Of course they are going to have it in small print on the boxes, but I would not be surprised to see something more like "Windows Vista ready" rather than DX10 in bold.
 
Remember when thinking of the core sizes - G92 seems to be G80 + VP2 (from G84/86) + NVIO
- 2 MCs - 8 ROPs - 2 L2 caches

Apparently...

I'm presuming that double-precision is in there too, and will be done at only a few % overhead.

Jawed
 
Jawed, do you think we will see a card based on a fully functional (8TCP) G92?

If yes, what would you expect for core and memory clocks?
 
Why put DX10 marketing on your boxes when your competitor is marketing a more up-to-date feature set? The solution is to not put it on at all and hope consumers assume it's a DX10.1 chip.

Of course they are going to have it in small print on the boxes, but I would not be surprised to see something more like "Windows Vista ready" rather than DX10 in bold.

The difference between DX10 and DX10.1 is comparatively much smaller than it was between DX9 SM2.0 and SM3.0.
Yet, that didn't seem to deter ATI from advertising it for close to a year while the Nvidia solution had (on paper) a more advanced feature set.
 
Jawed, do you think we will see a card based on a fully functional (8TCP) G92?
I've presumed we'll see such a thing for a while now, fingers-crossed. And I would like to think it'd be ~8800GTX performance.

The one area G92 appears to be most limited, in comparison with 8800GTX, is ROPs: 16 instead of 24. That deficit is hard to make up purely by core clock (~860MHz). But I've long argued that 8800GTX is over-endowed in the ROP department (for the bandwidth it has), so 700-750MHz-ish might be all that's needed to stop people complaining.

Memory clock is obviously also in need of a 50%-ish boost because of the cut from 384 to 256 bits. This is where things come undone as I don't know if there's anything out there in the region of 2700MHz. Can G92 work with GDDR4? GDDR4 looks like it'll go to about 2400MHz right now. So that would be quite a deficit. Is ~90% of GTX's performance good enough?
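(For reference, the arithmetic those numbers come from; the GTX side uses its stock specs, the G92 side the rumoured 16-ROP / 256-bit configuration.)

```cpp
// The arithmetic behind the ROP and bandwidth comparison above.
// 8800 GTX reference: 575 MHz core, 24 ROPs, 384-bit bus @ 1800 MHz effective.
// G92 side: the rumoured 16 ROPs and 256-bit bus.
#include <cstdio>

int main() {
    // Pixel fillrate = ROP count x core clock.
    const double gtx_fill_mpix = 24 * 575.0;                    // 13800 Mpix/s
    const double g92_clock_mhz = gtx_fill_mpix / 16.0;          // ~862 MHz to match
    std::printf("Core clock for 16 ROPs to match 24 @ 575 MHz: %.0f MHz\n",
                g92_clock_mhz);

    // Bandwidth = bus width (bytes) x effective memory clock.
    const double gtx_bw_gbs     = (384.0 / 8.0) * 1.8;                  // 86.4 GB/s
    const double g92_clk_needed = gtx_bw_gbs / (256.0 / 8.0) * 1000.0;  // ~2700 MHz
    const double g92_bw_gddr4   = (256.0 / 8.0) * 2.4;                  // 76.8 GB/s
    std::printf("Effective memory clock needed on 256-bit: %.0f MHz\n", g92_clk_needed);
    std::printf("256-bit @ 2400 MHz GDDR4: %.1f GB/s (%.0f%% of GTX's %.1f GB/s)\n",
                g92_bw_gddr4, 100.0 * g92_bw_gddr4 / gtx_bw_gbs, gtx_bw_gbs);
    return 0;
}
```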

Obviously, if I'm wrong and there's really 6 MCs (and 24 ROPs) in there, then things are different - 8800GTX performance would be "easy" to achieve - but then why isn't 8800GT 320-bit? Perhaps there's 5 in there? Teehee.

Why would NVidia drop the "fine-grained" redundancy of any 1 from 6 MCs/quad-ROPs/L2s that is seen in G80? Perhaps because G92 was planned not to be so dependent on redundancy (being a considerably smaller die).

Yields may be more of a factor in G92's positioning than we realise - in which case a fully-functional (8 cluster) die might not exist (unless in 7800GTX-512 limited-supply fashion). It's the clusters that consume most of the die, after all (theoretically...).

My flatmate is currently crying himself to sleep: his 3-week-old 8800GTX appears to have lost all its TMDSes (one DVI port failed 7 days ago, but could still drive his 20" LCD) - so his Dell 30" is relegated to 1280x800, as that's seemingly all it will accept on each DVI link and his old 7800GT isn't dual-link. He'd really like to see a G92 that's as fast as 8800GTX on Monday, but I think he's out of luck...

Jawed
 