The G92 Architecture Rumours & Speculation Thread

Sooo, is this Foxconn leak true and is the 8800GT using G96 after all? Because if it is, then it's all finally starting to make some sense :)
 
[attached image: g92enrb0.png]


Can't wait for the G92 vs RV670 fights. :)
 
http://forums.vr-zone.com/showthread.php?t=196581

Nvidia cooks up fresh names stew - Get ready for further confusion

NVIDIA HAS a bunch of things in the pipe for its next generation of parts, and as usual, we bring them to you a bit ahead of schedule. Here are their plans for the next six months or so. Let us hope they get working drivers for the last-gen parts out before they are replaced.

First up is the high end G92, a replacement for the 8800 lines. This is coming out in short order, we have heard two dates for it, November 12th and November 5th, the former being the more likely candidate. We have heard internal names of G92_200 and G92_300 for low and high end enthusiast parts respectively.

There is also a value part, the G98, launching in November as well. Seeing as how Nvidia works PR, it will probably not be on the 12th - two separate launch days means twice the headlines. The mid-range G96 is taping out really soon, and production is scheduled for next spring, April if all goes well.

The weird thing is that the above codes are for ASICs, that is fab talk. What you guys will see is a new naming convention. For the high-end, 'enthusiast' parts, it will be called D8E, mainstream will be D8P and value will have D8M. These correspond to G92, G96 and G98 respectively.

Notebooks will also have a new naming scheme, and they go one number up. They will be NB9E, NB9P and NB9M, the last letter signifies the same market segment as the desktop parts. Unless they can keep power under control, unlike the last few revs, their fate in this segment looks increasingly grim.

That brings us back to SLI, and what these beasts are capable of. A long time ago, we told you about three-way SLI, and it sank under the waves a while ago. Several people strongly hinted that this was due to broken drivers, an NV hallmark of late. Well, six months later, we guess they have it less broken, and we are told three-way will be pimped at the launch with quad following early next year.

With Nvidia's track record of late, we would not hold our breath for anything actually working. But, who knows? It might have got it right. In any event, at least you know what the plans are. µ
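
Nothing official here, obviously - just to keep the claimed codename soup straight, a throwaway lookup-table sketch with the desktop/notebook mappings from the article above:

```python
# Rumoured NVIDIA naming map as claimed in the article above (unconfirmed).
# Keys are the ASIC (fab) codes; values are the marketing codes per segment.
NAMING = {
    "G92": {"segment": "enthusiast", "desktop": "D8E", "notebook": "NB9E"},
    "G96": {"segment": "mainstream", "desktop": "D8P", "notebook": "NB9P"},
    "G98": {"segment": "value",      "desktop": "D8M", "notebook": "NB9M"},
}

for asic, names in NAMING.items():
    print(f"{asic}: {names['segment']:<10} "
          f"desktop={names['desktop']} notebook={names['notebook']}")
```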

This gets more confusing as the rumours conflict with each other (too much conflicting information). So now we're going to see the actual high-end refresh of the 8800 line by November?
 
[attached image: 8800gtmv7.jpg]


Weird numbers - the slide doesn't mention whether AF is used, and here at the higher resolution the HD2900XT is almost at 8800GTX level.

Hmm, I somehow fail to see why the 8800GT should be faster than the 8800GTX in anything, if it's really pretty much a die-shrunk chip. At the rumoured configuration of 112SP/1500MHz, 600MHz core (16 ROP/28 TMU), and 256bit/900MHz memory, it has (all relative to reference GTX/GTS clocks):
97% of the ALU performance of a GTX (145% of a GTS)
91% of the texturing performance (140%)
70% of the "ROP power" (pixel fill, z test etc.) (96%)
67% of the memory bandwidth (90%)

So given these numbers, it's easy to see why it should be faster than a GTS - significantly more ALU/texturing performance, and only very slightly less memory bandwidth/ROP power. However, it's behind a GTX in every aspect (though it almost matches it in shader power), so - especially since the 8800 series seems to scale quite well with memory bandwidth - how could it be faster?
So, is there more to it than a die shrink? Larger caches to make up for the lower memory bandwidth? Different compression schemes? Or just other internal tweaks?
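
For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope script. The reference figures I assumed are GTX = 128SP@1350MHz, 32 TMU / 24 ROP @ 575MHz, 384bit/900MHz and GTS 640 = 96SP@1200MHz, 24 TMU / 20 ROP @ 500MHz, 320bit/800MHz - if your reference numbers differ, the ratios shift accordingly:

```python
# Rough theoretical-throughput comparison of the rumoured 8800GT against
# reference 8800GTX / 8800GTS-640 clocks. "alu" ~ SPs x shader clock,
# "texturing" ~ TMUs x core clock, "rop" ~ ROPs x core clock,
# "bandwidth" ~ bus width x memory clock x 2 (GDDR3 transfers twice per clock).

def card(sp, shader_mhz, tmu, rop, core_mhz, bus_bits, mem_mhz):
    return {
        "alu":       sp * shader_mhz,
        "texturing": tmu * core_mhz,
        "rop":       rop * core_mhz,
        "bandwidth": bus_bits // 8 * mem_mhz * 2,
    }

gt  = card(112, 1500, 28, 16, 600, 256, 900)   # rumoured 8800GT
gtx = card(128, 1350, 32, 24, 575, 384, 900)   # reference 8800GTX
gts = card( 96, 1200, 24, 20, 500, 320, 800)   # reference 8800GTS 640MB

for metric in gt:
    print(f"{metric:>9}: {gt[metric] / gtx[metric]:.0%} of GTX, "
          f"{gt[metric] / gts[metric]:.0%} of GTS")
```

That reproduces the percentages above to within a point of rounding (the ALU figure against the GTS comes out at 146% rather than 145%).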

(edit: ok, I failed to notice it's not actually faster than the GTX in this test - since the GTX/GT/Ultra are so close, it looks like it's just noise and CPU-limited.)
 
It seems UT3 laps up ALU capability.

Perhaps the ALU architecture is improved enough to make this difference. There's a general expectation that some G84/6 goodness will be inherited by G92.

Also, additionally or alternatively (who knows, eh?) the expected double-precision capability of G92 may have a positive effect on single-precision performance.

Some clever clogs should, perhaps, muse on the idea that G84/6's "enhanced" ALU design is a half-way step towards the double-precision capability of G92.

Jawed
 
Ummm, why are you stressing over a vr-zone post that is itself just a copy-and-paste of a three-week-old article straight out of The Inquirer, accompanied by a link to HardwareZone that has a copy-and-paste of an article straight out of Fudzilla?

Since when was I stressing? :LOL:

There are two distinct camps in the rumour mill: GX2 on one side and a single-GPU high-end solution on the other.

I'd like to think that nVIDIA has chosen the latter.
 
Since when was I stressing? :LOL:

There are two distinct camps in the rumour mill: GX2 on one side and a single-GPU high-end solution on the other.

I'd like to think that nVIDIA has chosen the latter.

Me too. What I'm really curious about is how R600's delay and relatively poor performance over the past few months have affected Nvidia's strategy. I still can't believe all we're getting is a new $250 card a year after G80 first landed.
 
Why is that so hard to believe? It was 14 or 15 months between 6800 Ultra and 7800 GTX, and that was without having the single-card performance crown.
 
Me too. What I'm really curious about is how R600's delay and relatively poor performance over the past few months have affected Nvidia's strategy. I still can't believe all we're getting is a new $250 card a year after G80 first landed.

Think of it like this:
For ~$500 you can now get much higher performance than you could have a year ago.
Plus, there's an extra incentive for the consumer to purchase SLI motherboards, since the GPUs worthy of it are now much more powerful and cheaper (and you thought Nvidia's marketing department was sleeping, hey? ;)).

Why have the consumer buy just a high-end GPU, when they can try to make him buy two of them, plus their own mainboard chipset?
 
Why have the consumer buy just a high-end GPU, when they can try to make him buy two of them, plus their own mainboard chipset?

Because SLI does not work with multi-monitor setups.
 