The G92 Architecture Rumours & Speculation Thread

Also, if G92 is really as underwhelming as CJ says then Nvidia deserves a kick in the nuts.
I don't think so. From what everyone is saying, we're going to get a chip with near-GTS performance that's barely bigger than RV630. It'll completely obliterate the midrange.

Who knows what pricing will be, but it'll either give us a huge performance boost over the 8600/2600 for a small premium in price, or NVIDIA will earn a really fat margin at a higher price. Either way, it's a very smart decision.
 
Well yeah, of course its market placement will determine how it's perceived. As an 8800 GTS replacement it will be a dud IMO, assuming CJ's sources are correct. As an 8600 GTS replacement it will be a phenomenon. But if it's the latter, then there's no way (only) two of them are going to represent the high-end... I just don't see it.
 
Right now I see two possibilities:
1. All we know about G92 is wrong, and this is a high-end chip aimed at beating G80 performance levels by a good margin (a dual-chip board configuration (GX2) is possible if this chip is just a shrink of G80 to 65nm plus higher clockspeeds). This essentially puts G92 in RV670 territory (RV670 is assumed to be a shrink of R600 to 55nm with a 256-bit bus instead of a 512-bit one).
2. G92 is GF8700 (3-4 TCPs, 256-bit bus, 8600GTS-to-8800GTS performance levels). But this means that there should be another G9x -- the one that would be high-end and would compete with RV670 either in CF or as a single chip (as a GX2 as I described above, or as G80U vs R600 CF if this high-end G9x contains more than 128 SPs, a 384-512-bit bus, etc.).
For now I'm leaning towards possibility 2.1 :) The one where G92 is GF8700, but we're missing a shrunk G80 (G90? G91? who knows...) right now, and this high-end G9x either will pop out of nowhere in November (doubtful, yeah) or will come out in Q1 2008 (more likely).
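To put the bus widths in those scenarios into rough perspective, here is a quick Python sketch of peak theoretical memory bandwidth. The 8600 GTS, 8800 GTX and R600 figures use their known bus widths and effective memory clocks; the 256-bit "8700" memory clock is just a placeholder guess.

[code]
# Peak theoretical bandwidth = (bus width in bytes) x (effective memory clock).
# The hypothetical 256-bit "8700" clock below is a guess, not a leaked spec.

def bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
    """Peak bandwidth in GB/s for a given bus width and effective data rate."""
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

configs = {
    "8600 GTS (128-bit @ 2.0 GHz)":          (128, 2000),  # known spec
    "hypothetical 8700 (256-bit @ 2.0 GHz)": (256, 2000),  # placeholder
    "8800 GTX (384-bit @ 1.8 GHz)":          (384, 1800),  # known spec
    "R600 / 2900 XT (512-bit @ 1.65 GHz)":   (512, 1650),  # known spec
}

for name, (width, clock) in configs.items():
    print(f"{name}: {bandwidth_gb_s(width, clock):.1f} GB/s")
[/code]

That works out to roughly 32, 64, 86 and 106 GB/s respectively, which is why a 256-bit bus alone already doubles the 8600 GTS while still sitting well below the GTX.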

I'd like to add the possibility that a GX2 configuration could have a chip that is more than an 8800 die shrink. The move to 65 nm is substantial enough to allow plenty more transistors and a decent die reduction. The 7950 was the first GX2 design, but that does not mean that a chip has to be that small for a dual-PCB configuration. Especially considering that NVIDIA is presumably improving the way it designs.
 
Well, technically a GX2 is possible even with G80, but economically and market-wise there is no reason to produce such a card.
A G9x with 192 SPs and a 384-bit bus on 65nm, with higher-than-G80U clocks, should be enough to counter RV670 CF even as a single-chip solution. So why try harder? (c) =)
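For the sake of argument, a minimal Python sketch of the shader-throughput arithmetic behind that claim, counting G8x SPs at the usual MADD+MUL = 3 flops per clock; the 192 SP part and its 1.6 GHz shader clock are pure assumptions.

[code]
# Theoretical programmable-shader GFLOPS = SPs x shader clock x flops per clock.
# G8x parts are conventionally counted at 3 flops/clock (MADD + MUL).

def gflops(sp_count, shader_clock_ghz, flops_per_clock=3):
    return sp_count * shader_clock_ghz * flops_per_clock

print(f"8800 GTX  : {gflops(128, 1.35):.0f} GFLOPS")  # known: 128 SPs @ 1.35 GHz
print(f"8800 Ultra: {gflops(128, 1.50):.0f} GFLOPS")  # known: 128 SPs @ 1.50 GHz
print(f"192 SP G9x: {gflops(192, 1.60):.0f} GFLOPS")  # hypothetical clock and SP count
[/code]

That would put such a hypothetical part at roughly 920 GFLOPS versus the Ultra's ~576, i.e. around 1.6x, before any bandwidth considerations.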
 
As I've said before, I find it _very_ hard to believe that it's not. It just wouldn't make any sense.
+ And obviously the next high-end could be just G80 shrunk to 65nm -- then they'll use a GX2 to counter RV670 Gemini, yes.

I don't see how it fails to make sense... I know the desire from an enthusiast perspective is to keep pushing monolithic GPU technology, but multi-core is where the future (and seemingly soon to be present) is at.
 
I'm not quite sure about this... It seems that AMD certainly thinks so, but that alone doesn't mean it's true.
 
but multi-core is where the future (and seemingly soon to be present) is at.
Unless you are talking about distinct logic (NVIO and Xenos, for example), I would have to disagree. Multi-core a la VSA-100 is the past, not the future. It is simply less efficient. The only reason I can see to follow such a path would be manufacturing issues.
 

Tell it to R700.
 
I'd argue that it's harder to find a package in an apartment building of 100 apartments when compared to one of only 50, even though the apartments are identical copies of each other.

I'd argue that in your example, you'd be looking for 1 out of 2 packages in 100 apartments, which is just as hard as finding 1 in 50, except maybe for some extra hide-outs in the interconnecting hallways.

Seriously, I have worked on chips with very similar arrangements: a large basic building block duplicated 12 times with a central crossbar going to a memory controller. Adding building blocks can increase the chance of unexpected performance loss here or there due to some freak interactions or due to bugs in the interconnect, but the vast majority of debugging work happens on the individual cores. It's the same divide-and-conquer approach as initial pre-silicon verification: debug individual cores first, then move on to interaction cases. You'll see most problems when going from 1 to 2 cores, but past that, going from 2 to 12 is painless.
I don't see why it would be different for a GPU.
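As a toy illustration of that divide-and-conquer ordering, here is a Python sketch; the harness and all names in it are made up for illustration, not anyone's real verification flow.

[code]
# Sketch of the test ordering described above: single cores first (where most
# of the debug effort goes), then pairs through the crossbar, then the full chip.
from itertools import combinations

NUM_CORES = 12

def run_directed_tests(enabled_cores):
    # Placeholder for launching a test list against a model with only
    # the listed cores enabled.
    print("testing cores", enabled_cores)

# Phase 1: each core in isolation.
for core in range(NUM_CORES):
    run_directed_tests([core])

# Phase 2: pairwise interactions via the crossbar / memory controller.
for pair in combinations(range(NUM_CORES), 2):
    run_directed_tests(list(pair))

# Phase 3: the full configuration, mostly hunting rare interaction bugs.
run_directed_tests(list(range(NUM_CORES)))
[/code]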

Ever-increasing margins on which segment? The prices have also been ever increasing... I guess it depends on whether $1000 cards are economically viable or not... the Ultra hasn't yet answered that question, I think.
I wouldn't be surprised if the GTS vs GTX ratio of a wafer is close to 10 to 1, if not higher, so the GTS will have the most influence on overall margins. $280 isn't that unreasonable, right? Yet given the high expected yields (due to massive redundancy), margins on that should be really good.
Why couldn't you use the same story for even larger monolithic chips?
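A quick sketch of that blended-margin argument; every number here (the 10:1 mix, the prices, the per-card costs) is a made-up placeholder just to show how the high-volume GTS dominates the average.

[code]
# Blended margin for an assumed 10:1 GTS:GTX sales mix.
# All prices and costs below are guesses for illustration only.
parts = {
    #             share,  price,  assumed cost
    "8800 GTS": (10 / 11, 280.0, 120.0),
    "8800 GTX": ( 1 / 11, 550.0, 160.0),
}

blended_price = sum(share * price for share, price, _ in parts.values())
blended_cost  = sum(share * cost for share, _, cost in parts.values())
margin = (blended_price - blended_cost) / blended_price

print(f"blended ASP:    ${blended_price:.0f}")
print(f"blended margin: {margin:.0%}")
[/code]

With those placeholder numbers the GTX barely moves the average at all, which is the point: whatever the halo part costs to make, the volume part sets the margin story.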
 
I think G92 being the 8700 is almost certainly right; there is just too big of a gap between 32 and 96 not to be filled by something which has 64! My aesthetic number sense demands it.

As for G90, or whatever the top-end part will be, I can easily imagine a faster, shrunk version of G80, perhaps with an increase up to 160 stream processors. A high-end GX2-type chip I also find unappealing.

Who knows though......
 
I don't agree that G92 will be the GeForce 8700, for various reasons.

1. It's time for them to release their new GPU already (one-year anniversary of the GeForce 8).
2. The code name is G92, so it should be GeForce 9, shouldn't it? :p
3. NVIDIA said it's a high-end GPU (http://www.theinquirer.net/en/inquirer/news/2007/05/24/nvidia-claims-g92-will-be-a-1-teraflop-beast); some quick arithmetic on that 1-teraflop claim follows this list.
4. The rumours spreading right now may be coming from NVIDIA; maybe they don't want people to wait for GeForce 9, so they can get more money out of the 8800 series.
5. Because I want a GeForce 9800 more than a GeForce 8700, lol.
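On point 3, a quick bit of arithmetic on what a "1 teraflop" G8x-style part would actually require; the SP counts and the 3 flops/clock convention are assumptions, not leaked specs.

[code]
# Shader clock required to hit 1 TFLOP at various SP counts,
# counting G8x-style SPs at MADD + MUL = 3 flops per clock.
TARGET_GFLOPS = 1000.0
FLOPS_PER_CLOCK = 3

for sp_count in (128, 160, 192, 256):
    required_clock_ghz = TARGET_GFLOPS / (sp_count * FLOPS_PER_CLOCK)
    print(f"{sp_count:3d} SPs -> {required_clock_ghz:.2f} GHz shader clock")
[/code]

So if the Inquirer figure is real, either the SP count goes well past G80's 128 or the shader clock has to climb to implausible levels, which argues against a small midrange die.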
 

NV44 (launched as "GeForce 6200 TurboCache") ended up having its name changed to "GeForce 7100 GS" later on, so anything is possible. ;)
 
I think G92 being the 8700 is almost certainly right; there is just too big of a gap between 32 and 96 not to be filled by something which has 64! My aesthetic number sense demands it.

As for G90, or whatever the top-end part will be, I can easily imagine a faster, shrunk version of G80, perhaps with an increase up to 160 stream processors. A high-end GX2-type chip I also find unappealing.

Who knows though......

I agree. A GX2-style part sure does seem unappealing, but I think it may be true. Remember the GX2-style waterblocks at CeBIT back in March?

But something is fishy. We are obviously not seeing the "missing link". November is coming soon, and not a single G92 die/PCB or anything concrete has been leaked yet.

Could we be in for another big surprise, as in G100 hitting us earlier than we thought (ahead of schedule)? Or a mistake on nVIDIA's part, which I doubt, because why would they want to loosen their grip on ATI/AMD?

If they are still sticking with the 8-series moniker (8700 = G92), could it be possible that the next-gen architecture is just around the corner for nVIDIA, i.e. that they're scrapping plans for G9x refreshes in favor of G100, all things considered?
 