The G92 Architecture Rumours & Speculation Thread

I know they've been pushing AFR for quite some time now, so it is the expected outcome at some point; I just didn't expect it until next generation. It makes sense to do so given the recent process shrinks, though.

But no matter how much you push AFR, it still requires some support/configuring at the driver level; otherwise it would work out of the box more often. Very rarely is there an AFR profile that doesn't need some configuring.
 
But no matter how much you push AFR, it still requires some support/configuring at the driver level; otherwise it would work out of the box more often. Very rarely is there an AFR profile that doesn't need some configuring.

I'm not saying it's a good thing, just stating the fact that both IHVs are pushing it. Believe me, I know AFR isn't the end-all-be-all solution for multi-GPU performance scaling; I owned an X1900 XTX CF rig last year.
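To make the profile point concrete, here's a minimal Python sketch of what AFR boils down to; the Gpu class is a hypothetical stand-in, not any real driver interface:

```python
class Gpu:
    """Hypothetical stand-in for one GPU; not a real driver API."""

    def __init__(self, index):
        self.index = index

    def render(self, frame):
        # In a real driver this would submit the frame's command buffers.
        print(f"GPU {self.index}: rendering frame {frame}")


def afr_dispatch(gpus, num_frames):
    # AFR in a nutshell: whole frames are dealt out round-robin, so frame N
    # lands on GPU N % len(gpus).
    for frame in range(num_frames):
        gpus[frame % len(gpus)].render(frame)
        # The profile pain point: any resource written this frame and read
        # next frame (render-to-texture, etc.) lives on the "wrong" GPU and
        # must be copied across before the next frame can start.


afr_dispatch([Gpu(0), Gpu(1)], num_frames=4)
```

Those inter-frame dependencies are exactly what the per-game profiles have to configure, which is why AFR rarely just works out of the box.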
 
I'd imagine the cost savings accrued by producing dual-GPU cards instead of huge monolithic chips would (much) more than pay for each IHV to employ a couple of specialist employees to produce and tweak CF/SLI profiles for new games. Are we going to see numerous profile downloads available throughout each month as games are released?
 
For me, I think SLI must remain an option for extreme performance, not the standard one.

We're reaching a point where huge, monolithic chips are no longer a feasible alternative, even with the advances in process technology, so irrespective of how we feel about multiple GPUs, they're probably the next step along the evolutionary pathway, at least for the high-end, IMHO.
 
We're reaching a point where huge, monolithic chips are no longer a feasible alternative, even with the advances in process technology, so irrespective of how we feel about multiple GPUs, they're probably the next step along the evolutionary pathway, at least for the high-end, IMHO.

I can see how multiple dies might be a good solution at the high end for several reasons, but I would really prefer it if such a solution looked like a monolithic GPU to the driver/API, etc.

Do we know whether R700, for instance, will rely on multi-GPU software or whether it will be wired up to look like a single chip?
 
We're reaching a point where huge, monolithic chips are no longer a feasible alternative, even with the advances in process technology, so irrespective of how we feel about multiple GPUs, they're probably the next step along the evolutionary pathway, at least for the high-end, IMHO.

Why are they no longer feasible? Obviously the 8800 is pretty big, but it is the first of a new generation on a relatively old process.

Why can't there still be a very high-end chip that has the power envelope for a dual-PCB card? That's the best of both worlds from NVIDIA's perspective, isn't it? I don't understand what happened to suddenly make this unfeasible.
 
We're reaching a point where huge, monolithic chips are no longer a feasible alternative, even with the advances in process technology, so irrespective of how we feel about multiple GPUs, they're probably the next step along the evolutionary pathway, at least for the high-end, IMHO.

That might be true but I've always assumed that the G80 architecture would scale pretty well at 65nm. Coupled with the high clock potential of the shader core I don't see why they couldn't make a small-yet-fast single chip solution at the high end.
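For a rough sense of what a shrink could buy, here's the back-of-the-envelope area scaling; G80's ~484 mm^2 at 90nm is a commonly cited figure, and the ideal number below is a best case:

```python
# Ideal optical-shrink estimate: die area scales with the square of the
# feature-size ratio. Real layouts rarely hit the ideal number, so treat
# this as a lower bound rather than a prediction.

g80_area_mm2 = 484.0
scale = (65 / 90) ** 2  # ~0.52

print(f"Ideal 65nm G80-class die: ~{g80_area_mm2 * scale:.0f} mm^2")
```

Roughly 250 mm^2 in the ideal case, which is why a small-yet-fast single-chip part at 65nm doesn't sound far-fetched.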
 
On the future: the future is definitely a single-chip computer in some form.

Multi-chip is just a stopgap thanks to the lack of viable alternatives, but as always the ultimate goal in chip design is to make things smaller and more efficient, not bigger and more power-hungry.
 
Single chip computers are certainly the way we're heading - for the budget/embedded sector.

It seems to me that demand for high-end computing isn't likely to disappear, however, and I'd guess multi-chip may still be the preferred way forward here - at least, that's what the indications are at present.
 
Why are they no longer feasible? Obviously the 8800 is pretty big, but it is the first of a new generation on a relatively old process.

Why can't there still be a very high-end chip that has the power envelope for a dual-PCB card? That's the best of both worlds from NVIDIA's perspective, isn't it? I don't understand what happened to suddenly make this unfeasible.

Debugging a huge-ish chip is non-trivial. Yields aren't really that great if you're forced to always skim the limits of the process you're using. Scaling a huge chip for all market segments is also non-trivial. See the gap in both IHVs' line-ups and the significant delta from the lower-end parts to the higher-end ones (think 8600 to 8800, for example).

Economically, the best for nV would be to have a multi-chip/multi-die thing, comprised of two simpler dies that have great yields and don't require significant foundry prowess/aren't a pain in the ass to debug.

I also think that we'll probably move beyond SLi and CF as they are today, and most likely a more "intimate" manner of inter-GPU communication will be employed, and probably software will see those future architectures as a single monolithic design. I simply doubt that we'll keep on scaling single-chip, single-die, billion+ transistor designs.
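To put rough numbers on the yield argument above, here's a back-of-the-envelope comparison using the simple Poisson yield model; the defect density and die areas are illustrative assumptions, not figures from any source:

```python
import math


def poisson_yield(area_mm2, d0):
    # First-order Poisson yield model: yield = exp(-area * defect_density).
    return math.exp(-area_mm2 * d0)


d0 = 0.004  # assumed defects per mm^2; purely illustrative
big, small = 480.0, 240.0  # one large die vs. one of two half-size dies

print(f"{big:.0f} mm^2 die: {poisson_yield(big, d0):.0%} yield")
print(f"{small:.0f} mm^2 die: {poisson_yield(small, d0):.0%} yield")
# Roughly 15% vs. 38% with these inputs, and the half-size die also gives
# far more salvageable candidates per wafer, which is the economic case
# for two simpler dies over one big one.
```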
 
Just heard from a fairly reliable source that the G92, aka GF8700, performs in between a GF8600GTS and a GF8800GTS. It looks like AMD really succeeded in crashing nV's party by releasing the HD2900Pro, which offers better performance for the same price, one and a half months sooner. And G92 will go up against RV670 in November... and that fight should be in favor of the RV670...

There will be two versions of the GF8700, a GTS and a GX2, which was sort of confirmed by Kinc yesterday who said that there will be a GX2 version of the "die shrink".
 
I also think that we'll probably move beyond SLi and CF as they are today, and most likely a more "intimate" manner of inter-GPU communication will be employed, and probably software will see those future architectures as a single monolithic design.

That's what I was expecting from a multi-chip approach. But with the latest news of the same multi-PCB crap I'm not so hopeful about the future prospects in the high-end.

Also, if G92 is really as underwhelming as CJ says then Nvidia deserves a kick in the nuts.
 
Just heard from a fairly reliable source that the G92, aka GF8700, performs in between a GF8600GTS and a GF8800GTS. It looks like AMD really succeeded in crashing nV's party by releasing the HD2900Pro, which offers better performance for the same price, one and a half months sooner. And G92 will go up against RV670 in November... and that fight should be in favor of the RV670...

There will be two versions of the GF8700, a GTS and a GX2, which was sort of confirmed by Kinc yesterday who said that there will be a GX2 version of the "die shrink".

I wonder, does GX2 mean dual chip? :???:
And if it's true, does that mean performance better than the 8800GTS?
 
Also, if G92 is really as underwhelming as CJ says then Nvidia deserves a kick in the nuts.

Well if the information I just got is true, then nV has a little less than 2 months to bump up clockspeeds to make it a bit faster. And they are still undecided about the price, so for a decent price it might still sell well.

The information I got from my most reliable source in the last few months has always indicated that the G92 was a 64 stream processor part on a 256-bit memory interface, scoring around 9K in 3DM06, to replace the GF8800GTS320.

But who knows... maybe they've got a surprise in store for us and fooled all of my sources. :p
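For what it's worth, the rumoured 256-bit interface is easy to sanity-check; the memory clock in this sketch is an assumption, since the rumour only specifies bus width:

```python
# What a 256-bit bus buys in raw bandwidth. The 1000 MHz GDDR3 clock below
# is an assumed, period-typical figure, not part of the rumour.

bus_bits = 256
mem_clock_mhz = 1000     # assumed; DDR gives 2000 MT/s effective
transfers_per_clock = 2

bandwidth_gb_s = (bus_bits / 8) * mem_clock_mhz * transfers_per_clock / 1000
print(f"{bandwidth_gb_s:.0f} GB/s")
# 64 GB/s -- roughly the same figure the GF8800GTS320 gets from its wider
# 320-bit bus at lower memory clocks, which fits the "replacement" rumour.
```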
 
Debugging a huge-ish chip is non-trivial.
Why would there be a big difference between debugging an 8800 and an 8600? If the majority of the chip consists of replicated parts, it comes down to (almost) the same thing.

Yields aren't really that great if you're forced to always skim the limits of the process you're using.
As shown by ever-increasing margins, I don't see indications that yields are a big problem right now. I'm sure there will be some point in the future where it may be, but it's not common for companies to stop pushing the envelope before they get burned. ;)

Economically, the best for nV would be to have a multi-chip/multi-die thing, comprised of two simpler dies that have great yields and ...
... that are still facing the problem of not having a very high bandwidth data exchange interface.

High-end customers better hope that we're quite a bit away from only multi-chip solutions.
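To make that bandwidth objection concrete, here's the rough scale of the gap between a GPU's local memory and any inter-GPU link of the day; all three figures are ballpark assumptions, not measurements:

```python
# Rough scale of the data-exchange problem for multi-chip parts.

local_vram = 64.0   # GB/s, each GPU to its own 256-bit GDDR3 (assumed)
pcie_x16 = 4.0      # GB/s, PCIe 1.x x16, one direction
sli_bridge = 1.0    # GB/s, order-of-magnitude guess for the bridge link

for name, bw in [("local VRAM", local_vram),
                 ("PCIe x16", pcie_x16),
                 ("SLI bridge", sli_bridge)]:
    print(f"{name:<11} {bw:5.1f} GB/s  ({local_vram / bw:.0f}x vs. local VRAM)")
```

With a one-to-two-orders-of-magnitude gap, any workload that needs frequent inter-GPU data exchange is going to hurt, which is the core of the objection.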
 
Well if the information I just got is true, then nV has a little less than 2 months to bump up clockspeeds to make it a bit faster. And they are still undecided about the price, so for a decent price it might still sell well.

The information I got from my most reliable source in the last few months has always indicated that the G92 was a 64 stream processor part on a 256-bit memory interface, scoring around 9K in 3DM06, to replace the GF8800GTS320.

But who knows... maybe they've got a surprise in store for us and fooled all of my sources. :p

G92 at least has more than 96 SP.
 