The G92 Architecture Rumours & Speculation Thread

Nvidia has never been a big supporter of GDDR4, and with a 512-bit bus the motivation to start doing so would be even smaller.
That report sounds too good to be true, and is somewhat similar to an earlier rumor anyway.
Let's wait for something a bit more consistent before we jump to conclusions.
 
I don't think a ~50% SP clock increase for a ~50% process shrink is unreasonable.

FWIW, as a general observation: if you're talking about the same basic architecture, a 50% clock increase just from shrinking to a smaller process is very unrealistic. Not saying that it won't run 50% faster, but in this day and age, these kinds of increases require significant architecture work.

Wiring accounts for a major part of the delay and that doesn't get any faster with smaller processes.
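To put rough numbers on that, here's a toy model (the 60/40 delay split is a made-up illustration, not a measured figure), assuming gate delay scales linearly with feature size while wire delay doesn't shrink at all:

```python
# Toy model: why a process shrink alone doesn't buy a proportional clock bump.
# The 60/40 split below is an assumed illustration, not a measured figure.
gate_frac = 0.6    # assumed fraction of the critical path that is gate delay
wire_frac = 0.4    # assumed fraction that is wire (RC) delay and doesn't scale
scale = 65 / 90    # ~0.72x feature size going from 90nm to 65nm

new_path = gate_frac * scale + wire_frac  # relative critical-path delay after shrink
gain = 1 / new_path - 1                   # resulting clock gain
print(f"clock gain from the shrink alone: {gain:.0%}")  # -> 20%, nowhere near 50%
```

Even if you assume a more generous gate-delay fraction, the non-scaling wire delay caps the gain well short of 50%.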
 
90nm to 65nm is two process steps (90 → 80 → 65nm), and G92 is supposed to be less complex than G80 (64 SPs), so a shader clock well above 2GHz (which G84 can reach on air at 1.65V) should be a piece of cake.
And even with 128 SPs it should be no problem: 1.8GHz was reached on an 8800 Ultra on air without a Vmod.
 

The number of shader processors has no real impact on the clock speed that can be reached. At the layout level, more or fewer shader processors is a matter of cut-and-paste. Overall chip complexity (if you define that as the number of transistors) has no impact on local critical paths.

I haven't seen any G84 @2GHz or G80 @1.8GHz in the stores (overclocking doesn't count: we're talking about reliable volume production). I don't really follow the latest and greatest XXX Superclocked mega editions, so I could be wrong here, but a quick look on evga.com shows the 8800Ultra Black Pearl at 1.66GHz.

Anyway, I'd love to see 2.4GHz too. I'm just not holding my breath.
 
50% more shouldn't be too hard considering that it is a step from 90nm to 65nm, so it is a full step and not just one half-node.
 
Not that I think it's at all likely, but a 50% clock increase has happened in the past.
A few examples with similar basic architecture:
GeForce 1 (125MHz) → GeForce 2 (200MHz, 250MHz Ultra), .25µm? → .18µm: 60%-100% increase
Voodoo 2 (~90MHz) → Voodoo 3 (166MHz), .35µm? → .25µm: ~80% increase
G70 → G71 was even 40-45%, from 110nm to 90nm (not counting their nonexistent make-believe 512 chip)

feel free to correct any errors
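For what it's worth, a quick arithmetic check of the pairs above that list explicit clocks (the Voodoo 2 value is the ~90MHz approximation from the post):

```python
# Percentage clock increases for the examples quoted above.
pairs = {
    "GeForce 1 -> GeForce 2":       (125, 200),
    "GeForce 1 -> GeForce 2 Ultra": (125, 250),
    "Voodoo 2  -> Voodoo 3":        (90, 166),  # Voodoo 2 clock is approximate
}
for name, (old, new) in pairs.items():
    print(f"{name}: +{100 * (new - old) / old:.0f}%")
# GeForce 1 -> GeForce 2:       +60%
# GeForce 1 -> GeForce 2 Ultra: +100%
# Voodoo 2  -> Voodoo 3:        +84%
```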
 
osirisxs said:

GeForce 256 was 220 nm @ 120 MHz (67%-108% increase for 180 nm GF2 GTS - GF2 Ultra).
Voodoo 2 was 350 nm @ 90 MHz (103% increase for 250 nm Voodoo 3 3500 [183 MHz]).
 
Just because it's happened in the past, doesn't mean it'll be as easy to do in the future.... it's like folding a piece of paper in half multiple times...
 
Why even produce G90 when it's clear ATI won't have an enthusiast response to G80 until R700 comes out perhaps a year from now? Save the R&D budget and switch the engineers over to other projects. It's the smart thing to do, IMHO.

Late post, but what about the INTEL graphics project next year?!
 
Yet nobody says they will increase the clock speed of the whole chip by 50%.

Nor did I mention anything about that. :???:

 
Depends on whether INTEL uses exclusively in-house developed architectures for anything 3D in the future, or also 3rd party IP, and how high exactly they intend to scale those and for which markets. If the latter planned projects are limited to IGP and/or UMPC shiznit, then it's hardly worth debating. Should it go a couple of steps higher than those, though, I wouldn't laugh at what INTEL might release at this stage one bit.

INTEL already has the largest portion of the market in terms of (graphics) units sold worldwide (not necessarily in terms of revenue, though). If INTEL truly intends to widen the markets it's targeting (be it low-end professional markets or anything else), that makes it a larger player in the graphics market than it is today, possibly with much higher revenue out of it too. Of course such an option can have both advantages and disadvantages, since so far INTEL hasn't shown any signs that it's truly pushing technological progress for graphics at all, neither at the hardware nor at the software/driver level.

Significant changes in policy for the latter might also mean a totally new philosophy from INTEL for graphics in the future (if they're wise that is...LOL).
 
Could this be a G92 sample?

[Image: g92kg8.jpg]


Fudo: Nvidia Display port card pictured

edit:
The board seems very short, so G98 would be more likely. Or do current G8x chips already support DisplayPort?
 
You can support DP via external transmitters. The ports seem to be connecting to two things that are partially covered by the HSF.
 

At a quick glance the two don't appear to be the same size, though...
Also: Where's the SLI connector ? ;)

That must be a low end card, probably the replacement for 8400 GS/8500 GT (G86).
 