The G92 Architecture Rumours & Speculation Thread

ShaidarHaran, they would have had to cancel the next high-end project (G90+) before ATI's R600 was even unveiled.

Also, do you think Nvidia is doing so badly, or hurting that much, with their 44%+ margins? I don't think so. If it's not broken, don't fix it. What got them to that point is having high-end parts.

Why do you people keep missing the fact that I'm not suggesting NV should not introduce a high-end SKU this Fall? How many times do I need to say "65nm GX2-style G80 derivative" for it to sink in? Nothing says "high margin" to me like what G92 sounds like, and 2x G92 is still a high-margin part if it takes over the Ultra's price position (as it no doubt would).

You can always sit on a completed project... you cannot, however, just pull one out of thin air.

/step 1: Can R&D on G90.
/step 2: Rehash G80.
/step 3: Profit...
/step 4: Release competitor to R700...
/step 5: Oh #@%& !!!
/step 6: Fire everyone who thought step 1 was a good idea.

lol, you say this as though G100's development has anything to do with G80's refresh.
 
And how many times do we need to say "They already spent all the money on creating the new high-end GPU. Why should they not recoup money from it? Why should they waste money on creating yet another SKU?" for it to sink in? To us, it's a case of either a new high-end single GPU or a high-end dual-GPU part. How could they pull off both?
 
And how many times do we need to say "They already spent all the money on creating the new high-end GPU. Why should they not recoup money from it? Why should they waste money on creating yet another SKU?" for it to sink in? To us, it's a case of either a new high-end single GPU or a high-end dual-GPU part. How could they pull off both?

Why don't you stop and think about this for a second? G80 is obviously oversized and somewhat difficult to produce on 90nm. NV clearly knew this before it came to market, or there wouldn't be an NVIO chip in existence. Wouldn't it make sense to maximize margins by shrinking the existing chip (perhaps targeting higher clocks and/or a GX2-style implementation for the enthusiast segment) as opposed to spending your transistor budget on more *needless* transistors? (needless for this generation, of course) Remember G70->G71? Obviously this was a *very* successful strategy for NV; why not use it again?
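A rough sketch of why a straight shrink is attractive from a cost standpoint (illustrative Python; the ~484 mm² G80 die size and ideal area scaling are assumptions, and real yields and wafer costs are far messier):

# Back-of-the-envelope die-shrink economics (illustrative assumptions only).
# G80 is commonly cited at roughly 484 mm^2 on 90nm; an ideal optical shrink
# to 65nm scales linear dimensions by 65/90, so area by (65/90)^2.

g80_area_mm2 = 484.0                       # assumed G80 die area on 90nm
shrink_factor = (65.0 / 90.0) ** 2         # ideal area scaling, ~0.52
g92_area_mm2 = g80_area_mm2 * shrink_factor

wafer_area_mm2 = 3.14159 * (300 / 2) ** 2  # 300mm wafer, ignoring edge loss

dies_90nm = wafer_area_mm2 / g80_area_mm2
dies_65nm = wafer_area_mm2 / g92_area_mm2

print(f"~{g92_area_mm2:.0f} mm^2 per die after an ideal shrink")
print(f"~{dies_90nm:.0f} vs ~{dies_65nm:.0f} candidate dies per 300mm wafer")
# Roughly 2x the candidate dies per wafer before yield effects -- which is
# the whole margin argument for a G71-style shrink of an existing design.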
 
Remember G70->G71? Obviously this was a *very* successful strategy for NV; why not use it again?
Not a bad example, I'll admit: if the rumour mill is to be believed, then the 32PS refresh of G70 was canned very late in development. You could easily imagine a similar scenario, with G92 being a G80 shrink and the 'monolithic' GPU having been canned.

However, your NVIO argument is flawed, as Bob has pointed out to Jawed in the past: the decision to have a separate IO/analog chip was made well before any reliable die size estimate was available.
 
I did stop to think about it. They already spent money on it. They should recoup that money. Correct?

As for maximizing money from a GX2-like setup, until they can get SLI working on dual-monitor setups*, they will be missing out on a large portion of their potential market.





*As I understand it, you need to disable SLI to run dual monitors.
 
Why do you people keep missing the fact that I'm not suggesting NV should not introduce a high-end SKU this Fall?

No, what you're saying is that they designed G90, realized that AMD can't compete (which is highly debatable given more recent benchmark results), and made a tactical decision to can that SKU and do a GX2 with dual G92s. The G71 analogy makes no sense, since their motivation then obviously wasn't that they could afford to pussyfoot around while AMD catches up. Also, G71 was the high end for a while. Current G92 rumours put it at slower than the 8800 GTS, which means any GX2 SKU would have to be a strategic decision - i.e. G90 never existed.
 
No, what you're saying is that they designed G90, realized that AMD can't compete (which is highly debatable given more recent benchmark results), and made a tactical decision to can that SKU and do a GX2 with dual G92s. The G71 analogy makes no sense, since their motivation then obviously wasn't that they could afford to pussyfoot around while AMD catches up. Also, G71 was the high end for a while. Current G92 rumours put it at slower than the 8800 GTS, which means any GX2 SKU would have to be a strategic decision - i.e. G90 never existed.

No, what I'm saying is that NV had a smaller, margin-maximized part in mind already, and perhaps G90 never existed in the first place. If it did, it was likely canned in favor of a more profitable, scalable G92 solution.

lol, you say this as if I don't know what I'm talking about....

Probably because your hypothetical was based on a misunderstanding of my argument (at best) or a strawman (at worst).

However, your NVIO argument is flawed, as Bob has pointed out to Jawed in the past: the decision to have a separate IO/analog chip was made well before any reliable die size estimate was available.

TBH I don't know that I believe that, no matter how reliable the source. To me it sounds like FUD to justify the extra expenditure. I'm not saying NVIO was a bad idea, especially given G80's already massive size, which would no doubt have increased with the inclusion of display hardware. I don't know how much that would have affected yields, margins, transistor budget, or clockspeed targets, but I don't think the effect can be ignored completely.

I did stop to think about it. They already spent money on it. They should recoup that money. Correct?

This assumes there was a G90 in the first place. I'm arguing that a scalable G92 was the plan all along. I may be wrong here, but it makes a lot of sense to me.

As for maximizing money from a GX2-like setup, until they can get SLI working on dual-monitor setups*, they will be missing out on a large portion of their potential market.





*As I understand it, you need to disable SLI to run dual monitors.

Agreed. I used to run a CF X1900 XTX multi-mon setup and had all kinds of issues because of it. Really soured my opinion of multi-GPU in general.
 
Just floating an idea here. If I'm being obtuse about this and anyone has actual evidence of G90's existence please feel free to beat me over the head with it.
 
Just looking at the numbers, I'm not sure a dual-G92 could fulfill Nvidia's promise of close to 1 TFLOP in the high end. If we assume the 64-shader rumour is correct, we're looking at north of a 2.4 GHz shader clock to even come close to that.
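For reference, here's the arithmetic behind that figure (a sketch assuming 64 SPs per chip, two chips on a GX2 board, and the G80-style 3 FLOPs per SP per clock that the marketing numbers count):

# Shader clock needed for a dual-G92 board to reach ~1 TFLOP,
# assuming 64 SPs per chip and 3 FLOPs per SP per clock.

sps_per_chip = 64          # rumoured G92 shader count (assumption)
chips = 2                  # GX2-style dual-GPU board
flops_per_sp_per_clk = 3   # how NV counts peak FLOPs on G80

target_flops = 1e12        # "close to 1 TFLOP"
required_clock_hz = target_flops / (sps_per_chip * chips * flops_per_sp_per_clk)

print(f"required shader clock: {required_clock_hz / 1e9:.2f} GHz")  # ~2.60 GHz
# At a 2.4 GHz shader clock the same board tops out around 0.92 TFLOP,
# hence "north of 2.4 GHz to even come close".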

Also, I'm not clear on where G92 is targeted. Is it a G84 replacement, or is it going up against RV670? If it's the latter, is Nvidia going to keep G84 around? Because it sure looks like ATi is going to have a 2600 XT-class part in Q1.
 
ShaidarHaran said:
Probably because your hypothetical was based on a misunderstanding of my argument (at best) or a strawman (at worst).
/lol

ShaidarHaran said:
I'm arguing that a scalable G92 was the plan all along. I may be wrong here, but it makes a lot of sense to me.
Sure, if lower performance per transistor is your goal...

/anyway That is enough for me... think what you want.
 
Just looking at the numbers, I'm not sure a dual-G92 could fulfill Nvidia's promise of close to 1 TFLOP in the high end. If we assume the 64-shader rumour is correct, we're looking at north of a 2.4 GHz shader clock to even come close to that.

I don't think a ~50% SP clock increase for a ~50% process shrink is unreasonable.
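For context, here's what a ~50% bump over today's G80 shader clocks would land at (a sketch; the 8800 GTX/Ultra shader clocks are the only hard numbers here, the rest carries over the assumptions above):

# What a ~50% shader-clock increase over current G80 parts would give.
gtx_shader_ghz = 1.35     # 8800 GTX shader clock
ultra_shader_ghz = 1.512  # 8800 Ultra shader clock

for name, base in [("GTX", gtx_shader_ghz), ("Ultra", ultra_shader_ghz)]:
    bumped = base * 1.5
    # peak FLOPs for a dual-G92 board at that clock: 64 SPs/chip, 3 FLOPs/clock
    tflops = bumped * 1e9 * 64 * 2 * 3 / 1e12
    print(f"{name}: {base:.3f} -> {bumped:.2f} GHz, ~{tflops:.2f} TFLOP dual-G92")
# Prints roughly 2.03 GHz / 0.78 TFLOP and 2.27 GHz / 0.87 TFLOP.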

Also, I'm not clear on where G92 is targeted. Is it a G84 replacement, or is it going up against RV670? If it's the latter, is Nvidia going to keep G84 around? Because it sure looks like ATi is going to have a 2600 XT-class part in Q1.

I'm not clear either, but it sounds like the replacement for G80 GTS in single-GPU SKUs. It's far too beefy to be a simple G84 replacement, IMHO, unless of course clockspeed targets are no higher than current 90/80nm parts.
 
...and shifted towards a margin-maximization strategy.
NV's commitment to always trying to improve their margins started years ago, and you won't see them just stop and sit on their asses, not in the lifetime of Jen-Hsun Huang, as he doesn't even speak that language.

So, what is there to shift? Exactly, nothing.
 
Considering that part was merely the G80 diagram cropped, I wouldn't exclude a 'bad copy-paste'... But still, strange! 6 Quad ROPs would certainly go against everything said so far...
It would, however, yield your dearly expected increase in blending. Plus, maybe they're switching towards 32-bit from ROP to memory?
 
NV's commitment to always trying to improve their margins started years ago, and you won't see them just stop and sit on their asses, not in the lifetime of Jen-Hsun Huang, as he doesn't even speak that language.

So, what is there to shift? Exactly, nothing.

So the wording was incorrect. You're only lending support to my argument if they're already pursuing such a strategy.
 
So the wording was incorrect. You're only lending support to my argument if they're already pursuing such a strategy.
Not necessarily incorrect, since you can always try to maximize margins even further with each new generation. Jen-Hsun Huang loves to ride his margin horse whenever possible; that's all I wanted to point out. I also have absolutely no problem supporting someone's arguments, especially if they're well thought out.
 
http://www.hardspell.com/english/doc/showcont.asp?news_id=1428

The complex architecture of the graphics chip and its capabilities are responsible for the fact that the G92 has over one billion transistors within...

Talking about parallel computing capabilities, the G92 GPU will hit the one teraflop mark with its shader processing units, which come in a MADD+ADD configuration that translates into 2+1 FLOPS = 3 FLOPS per ALU (the shorthand for arithmetic logic unit). The fully scalar design of the G92 series of GPUs is combined with a 512-bit wide memory interface and extended support for as much as 1GB of GDDR4 graphics memory. Graphics APIs supported are represented by the latest (in fact not yet released) DirectX 10.1 and its open-source equivalent, OpenGL 3.0. Other new features of the G92 series are support for "FREE 4xAA", an HDMI-compliant audio chip, a tessellation unit built directly into the graphics core, and improved performance and quality output of the AA and AF units.
While pricing for the GeForce 9800 series will vary wildly across the different manufacturers, two price ranges are being shuffled: 549-649 USD for the GeForce 9800 GTX and 399-449 USD for the GeForce 9800 GTS.
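Taking the article's 3-FLOPS-per-ALU and 1 TFLOP claims at face value, a quick sanity check of the shader clocks that implies for a few plausible SP counts, plus the bandwidth of the claimed 512-bit bus (the GDDR4 data rate below is my assumption, not something the article states):

# Sanity check of the hardspell figures (their claims, assumed data rate).
flops_per_alu = 3          # MADD + ADD, per the article
target_tflops = 1.0

# Shader clock needed for a given SP count to hit 1 TFLOP at 3 FLOPs/ALU/clock
for sps in (64, 96, 128):
    clock_ghz = target_tflops * 1e12 / (sps * flops_per_alu) / 1e9
    print(f"{sps} SPs -> {clock_ghz:.2f} GHz shader clock for 1 TFLOP")

# Bandwidth of a 512-bit bus; 2.0 Gbps effective per pin is an assumed
# GDDR4 speed.
bus_width_bits = 512
data_rate_gbps = 2.0
bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
print(f"512-bit @ {data_rate_gbps} Gbps -> {bandwidth_gbs:.0f} GB/s")  # 128 GB/s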
 