NVIDIA: Beyond G80...

Pretty much as you'd expect considering the G80's shader setup. The short, simple geometry shaders look to be very fast, while the longer ones must bog down the cache.
 
But then, who needs a low-power DX10 setup? Surely not to play games, since the requirements are simply too steep given the demanding outlook of FS-X, Crysis, etc., and there's currently nothing requiring DX10 that could be powered by a notebook chip.

A low-power G80 will certainly be faster than the G71 part it replaces. Sure, it won't be great for new stuff, but the efficiency gains should prove useful in DX9 games, where the low-end variants of conventional architectures tend to struggle.
 
Pretty much as you'd expect considering the G80's shader setup. The short, simple geometry shaders look to be very fast, while the longer ones must bog down the cache.
It's not really the length of the shader that is the concern, it's the amplification of vertices.
 
It could be memory or the number of vertices; it really depends on the shader, the assets used, and how many vertices are being generated. If it's just pure generation of vertices, then the chart would look like that for any card (with unified shaders) IMO, other than the baseline change in the speed of generating additional vertices.

I can't get the forum translation to work right; anyone got a link?

And the demo isn't working for me. It doesn't exactly crash; it loads up, but nothing shows up. No window, no nothing.
 
The indication (without looking at the GS program) is that it's not terribly good when asked to emit a large primitive count out of the GS. On-chip memory and the nature of the GS stage in shading are likely the reasons why (again, without looking at the program).
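To make the amplification concern a bit more concrete, here's a small back-of-the-envelope sketch in Python. Every number in it (the per-vertex attribute count and the on-chip buffer budget) is a made-up assumption purely for illustration, not anything from NVIDIA documentation; the point is only that the worst-case data a GS can emit grows linearly with its declared maximum vertex count, which squeezes whatever on-chip buffering the hardware has.

```python
# Rough illustration of how geometry shader output amplification scales.
# All numbers below are assumptions made up for this example, not G80 specs.

def gs_output_bytes(max_vertex_count, floats_per_vertex=16):
    """Worst-case bytes a GS may emit per input primitive (4 bytes per float)."""
    return max_vertex_count * floats_per_vertex * 4

# Hypothetical per-primitive on-chip buffering budget (pure assumption).
ON_CHIP_BUDGET_BYTES = 16 * 1024

for max_verts in (3, 16, 64, 256, 1024):
    out_bytes = gs_output_bytes(max_verts)
    verdict = "fits" if out_bytes <= ON_CHIP_BUDGET_BYTES else "exceeds"
    print(f"max vertex count {max_verts:>4}: {out_bytes:>6} bytes out "
          f"({verdict} the assumed {ON_CHIP_BUDGET_BYTES} B budget)")
```

If the hardware keeps many primitives in flight to hide latency, a shader declared to emit lots of vertices leaves room for far fewer of them in flight, which would show up as exactly the kind of slowdown the chart suggests.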
 
The following is speculative, and should not be considered based on any non-public information, as it quite simply is not so!
The codenames used are most certainly incorrect, and some might not even have their own codenames. As such, they are used exclusively to permit further discussion, and nothing else!

February->March 2007
G81: Optical shrink of G80 on 80GT (G80 currently is on 90GT - R600 is either on 80GT or 80HS). Introduces the 7/8 clusters SKU. Exact specifications (identical to G80 on 90GT or not; GDDR4 or not; etc.) depend on the R600's specifications. 1.1GHz GDDR4 and 600-625MHz is possible, if considered necessary. If such a step is taken, the parts with redundancy would likely also use GDDR4, which would define the 8900 line-up. There will NOT be a notebook version.
G82: G8x on 80GT with 6 native clusters and 5 native ROP partitions. Can act as an 8800GTS (which will then cost $399) or an 8700-series GPU, which would likely have 5 active clusters, 4 active ROP partitions and a 256-bit memory bus. This is needed for the 8800(/8900?)GTS, as G81 will mostly be 8/7 cluster parts due to improved yields with the smaller 80nm die. There will be a notebook version. Maybe pin-compatible with G81(?)
G83: G8x on 80GT(?) with 4 native clusters and 2 native ROP partitions. 192-bit memory bus and roughly $199 target introduction price. 8600/8500-Series, possible version with 3 active clusters.

March->May 2007
G84: G8x on 80GT(?) with 3 native clusters and 2 native ROP partitions. Version with 2 active clusters and 2 native ROP partitions. 128-bit memory bus. 8400/8300-Series. Maybe pin-compatible with G83(?)
G8i: G8x-based Intel IGP on 80GT(?) with 1 native cluster and 1 ROP partition, with even fewer blending units, maybe no extra double-Z, etc.(?)
G8a: Same as G8i, but for AMD's socket AM2.

June->August 2007
G85: G8x on 65nm, 2 native clusters and 1 native ROP partition. Sold at ~$99. Finally replaces G73's 80nm shrink. G72's 65nm shrink will already have been released for some time, and will remain in heavy production for the ultra-low-end SKUs.

September->December 2007
G90 (rough bandwidth/FLOPS arithmetic sketched after this list):
65nm, Q4 2007; 400mm2+
1.4-1.6GHz GDDR4, 384-bit Bus
1.45-1.75GHz Shader Core Clock
6 ROP partitions, but beefier
625-675MHz Core Clock
FP64 Support (Slow!)
32 MADDs/Cluster
24 Interps/Cluster
10 Clusters
Q1->Q2 2008
G91: 7(?) native clusters, 65nm
G92: 4(?) native clusters, 65nm
Q2->Q3 2008
G9I/A: 2(?) native clusters IGPs, 55nm(?) (competes with Fusion?)
G93: 3(?) native clusters, 55nm(?); replaces G85
Q4 2008
G94: 7(?) native clusters, 55nm(?)
G95: 12(?) native clusters, 55nm(?); 9900-Series
G96: 4(?) native clusters, 45nm(?); G93 moves into the ultra-low-end
Q2 2009
G100: ...
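
Purely as a sanity check on the (again, entirely speculative) G90 figures above, here's the arithmetic they imply, assuming GDDR4 transfers two bits per pin per memory clock and counting a MADD as two flops:

```python
# Back-of-the-envelope numbers implied by the speculative G90 figures above.
# Assumptions: GDDR4 is double data rate (2 transfers per pin per clock),
# and one MADD counts as 2 floating-point operations.

def bandwidth_gb_s(bus_bits, mem_clock_ghz, transfers_per_clock=2):
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * mem_clock_ghz * transfers_per_clock

def madd_gflops(clusters, madds_per_cluster, shader_clock_ghz):
    """Peak MADD throughput in GFLOPS (2 flops per MADD)."""
    return clusters * madds_per_cluster * 2 * shader_clock_ghz

for mem_clock in (1.4, 1.6):
    print(f"384-bit bus @ {mem_clock} GHz GDDR4: "
          f"{bandwidth_gb_s(384, mem_clock):.1f} GB/s")

for shader_clock in (1.45, 1.75):
    print(f"10 clusters x 32 MADDs @ {shader_clock} GHz: "
          f"{madd_gflops(10, 32, shader_clock):.0f} GFLOPS")
```

That works out to roughly 134-154 GB/s and 0.93-1.12 TFLOPS of MADD throughput; by the same arithmetic, G80's 900MHz GDDR3 on a 384-bit bus gives 86.4 GB/s.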

BTW, in terms of Quad-SLI, I think you're much more likely to see those with G82 and G91 than with G81 or G90. That's still a hefty performance gain, and would be much more reasonable in terms of power consumption.


Uttar


A few years down the line (2011-2012): a custom PlayStation 4 GPU based on the latest or near-latest Nvidia GPU (G110-G120), but more suited to a console environment, with massive, massive bandwidth (~3 TB/sec, as Nvidia's Tony Tamasi said for future games) far beyond what PC GPUs have, thanks to EDRAM, and far more impressive for the time than RSX was in 2006.
 
Nice catch, left some green love. ;)

What strikes me as amazing is that they are showing them in plain sight, actually working and running Vista, and on two ordinary barebone laptops, not even own-brand.
How come the Inquirer hasn't jumped on this yet? :D
 
What strikes me as amazing is that they are showing them in plain sight, actually working and running Vista, and on two ordinary barebone laptops, not even own-brand. :D
Yeah.

If these chips are pretty much done, then what's holding them up till April/May? I want to taste some G8x goodness. :love:
 
Yeah.

If these chips are pretty much done, then what's holding them up till April/May? I want to taste some G8x goodness. :love:

Maybe driver support?

According to the info on the next page of the article, the current GeForce driver has as many lines of code as the entire Windows NT 4.0 operating system (!), and GeForce 8 will require separate executables, not only for single-card but for SLI too.
Performance/Stability may be an issue too.
 
That would put it a healthy 1.5 years after the GeForce 7300 and 7600, which is sane.
But then, who needs a low-power DX10 setup? Surely not to play games, since the requirements are simply too steep given the demanding outlook of FS-X, Crysis, etc., and there's currently nothing requiring DX10 that could be powered by a notebook chip.
Lots of people here claimed that unified shaders would give the biggest price/performance improvement in low- and mid-range cards. So why not a DX10 card?
I'd buy one. I can't afford an expensive card, but any DX10 hardware would be very interesting to me.
 
Lots of people here claimed that unified shaders would give the biggest price/performance improvement in low- and mid-range cards. So why not a DX10 card?

Well... we'd have to see. I can imagine mid-range cards being so bottlenecked, one way or another, by the way the first generation of DX10 games is coded that the benefit wouldn't be all that clear when the first benchmarks arrive.
 