NVIDIA: Beyond G80...

Me, I'm rather waiting for a G90 @ 65nm with 192 improved SPs running at 2.5GHz to replace my GTX at the end of the year :p
So, I got the right specs, but the wrong clocks - thanks! ;) Although you got the wrong codename, I guess that doesn't really matter. So... full custom?

I'm mildly surprised it got delayed to December - I guess that confirms it missed the tape-out date. I wonder what the timeframe for G98 is now - probably Q1?
 
Me, I'm rather waiting for a G90 @ 65nm with 192 improved SPs running at 2.5GHz to replace my GTX at the end of the year :p

You mean 2008? ;)

This is not the way NV acts; more probable is a G80 @ 65nm with some improvements to increase margins. And not your (192 SPs x 4 FLOPs (MADD+MADD) x 2.5GHz =) 1.9 TFLOPs chip.

But 192 SPs could be possible on one card, though not on one chip -> 2x G92 (supposedly 6 clusters / 4 ROP partitions).
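
For what it's worth, the napkin math behind those numbers is easy to check. A minimal sketch in C - every figure below is either a rumor from this thread or the known G80 GTX specs, nothing confirmed:

```c
#include <stdio.h>

/* Peak shader throughput = SPs x FLOPs per SP per clock x shader clock (GHz).
   All figures are rumored or known-at-the-time numbers, not confirmed specs. */
static double peak_gflops(int sps, int flops_per_sp_per_clock, double clock_ghz)
{
    return sps * flops_per_sp_per_clock * clock_ghz;
}

int main(void)
{
    /* Rumored "G90": 192 SPs, dual MADD = 4 FLOPs/clock, 2.5GHz. */
    printf("192 SPs, MADD+MADD @ 2.50GHz: %6.0f GFLOPs\n",
           peak_gflops(192, 4, 2.5));   /* ~1.9 TFLOPs */

    /* Shipping G80 GTX for scale: 128 SPs, MADD+MUL = 3 FLOPs/clock, 1.35GHz. */
    printf("128 SPs, MADD+MUL  @ 1.35GHz: %6.0f GFLOPs\n",
           peak_gflops(128, 3, 1.35));  /* ~518 GFLOPs */
    return 0;
}
```

So the rumored chip would be nearly 4x the GTX's peak - which is exactly why it smells optimistic for a single chip.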
 
You mean 2008? ;)

This is not the way NV acts; more probable is a G80 @ 65nm with some improvements to increase margins. And not your (192 SPs x 4 FLOPs (MADD+MADD) x 2.5GHz =) 1.9 TFLOPs chip.

But 192 SPs could be possible on one card, though not on one chip -> 2x G92 (supposedly 6 clusters / 4 ROP partitions).
Nope. According to the release notes of one of the more recent versions of CUDA, nVidia claims to be planning on releasing hardware by the end of this year that is capable of double-precision floating point.
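
For reference, here's roughly how one would check that from software once such hardware ships - a minimal sketch against the CUDA runtime API (the compute-capability threshold of 1.3 is where FP64 support actually lands; treat the exact number as an assumption here, since the release notes don't say):

```c
#include <stdio.h>
#include <cuda_runtime.h>

/* List CUDA devices and report whether each can do double precision.
   FP64 support is exposed through the device's compute capability;
   devices at 1.3 or above have it. */
int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("no CUDA devices found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        int has_fp64 = (prop.major > 1) ||
                       (prop.major == 1 && prop.minor >= 3);
        printf("%s (sm_%d%d): double precision %s\n",
               prop.name, prop.major, prop.minor, has_fp64 ? "yes" : "no");
    }
    return 0;
}
```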
 
So, I got the right specs, but the wrong clocks - thanks! ;) Although you got the wrong codename, I guess that doesn't really matter. So... full custom?

I'm mildly surprised it got delayed to December - I guess that confirms it missed the tape-out date. I wonder what the timeframe for G98 is now - probably Q1?


Yeah, G90 or G9x, call it whatever you want - it's the same chip. I don't have details of the architecture yet, only summary numbers.
 
Nope. According to the release notes of one of the more recent versions of CUDA, nVidia claims to be planning on releasing hardware by the end of this year that is capable of double-precision floating point.

And why is this not combinable with my post? I said "with some improvements". ;)


Yeah, G90 or G98, call it whatever you want - it's the same chip.

The same chip? :unsure: G98* should be an absolute low-end part to compete with RV610.

* Nvidia's new codename scheme: Gx0 > Gx2 > Gx4 > Gx6 > Gx8
 
You mean 2008? ;)

This is not the way NV acts; more probable is a G80 @ 65nm with some improvements to increase margins. And not your (192 SPs x 4 FLOPs (MADD+MADD) x 2.5GHz =) 1.9 TFLOPs chip.
If my source is right, no more G80 @ 65nm is planned...
 
This is not the way NV acts; more probable is a G80 @ 65nm with some improvements to increase margins. And not your (192 SPs x 4 FLOPs (MADD+MADD) x 2.5GHz =) 1.9 TFLOPs chip.
NV acts the way that makes sense for them given their target markets... Don't forget that this chip will be their GPGPU flagship for 1+ year. Draw your own conclusions based on that, I think... :)

And I'm not sure why you are presuming MADD+MADD again. The reason behind the second MADD on G7x was to ease the compiler's job; it couldn't actually run two MADDs per clock due to register file restrictions. In G8x's case, adding a second MADD would complicate things, not simplify them, imo.

For what it's worth, my guess for G92 for a long time has been 6 clusters with 32 SPs per cluster and perhaps only 24 interpolators, so 24 free-trilinear TMUs and 16 beefed-up ROPs (compared to G8x). Sounds like it might not be so wrong in the end...

Aeryon: G92 is the codename I'm familiar with; G98 would presumably be the ultra-low-end derivative. I hope you're not presuming a 75-85mm² chip has 192 SPs! ;)
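
And spelling out the cluster guess above as napkin math - every figure here is speculation from this post, and the core clock is a pure placeholder, not a rumor:

```c
#include <stdio.h>

/* Speculated G92 layout: nothing here is confirmed. */
int main(void)
{
    const int    clusters        = 6;
    const int    sps_per_cluster = 32;
    const int    tmus            = 24;   /* "free-trilinear" TMUs  */
    const int    rops            = 16;   /* beefed up vs. G8x      */
    const double core_ghz        = 0.6;  /* placeholder core clock */

    printf("SPs:        %d\n", clusters * sps_per_cluster);  /* 192 */
    printf("texel rate: %.1f GTexels/s\n", tmus * core_ghz);
    printf("pixel fill: %.1f GPixels/s\n", rops * core_ghz);
    return 0;
}
```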
 
Nope. According to the release notes of one of the more recent versions of CUDA, nVidia claims to be planning on releasing hardware by the end of this year that is capable of double-precision floating point.
Right, fully IEEE compliant :cool:
 
Well, no, the SIMD nature of the processors will mean that it won't be able to be fully IEEE compliant. I actually expect it to have pretty much the same compliance as the G8x.

Well, this is not what I've heard (but it could be wrong). Because the GPGPU market is so important, Nvidia is moving more and more towards "number-crunching machines" like the Quadro Plex, where the high margins are.

Edit: now I remember what my source said - the fully IEEE-compliant chip is planned for the next big architecture change, aka G100 (with Erik Lindholm as the lead shader/computing architect; that's why I remember now, because his name was associated with IEEE). G9x is "only" an improved G80. Of course, take everything with a grain of salt...
 
Edit: now I remember what my source said - the fully IEEE-compliant chip is planned for the next big architecture change, aka G100 (with Erik Lindholm as the lead shader/computing architect; that's why I remember now, because his name was associated with IEEE). G9x is "only" an improved G80. Of course, take everything with a grain of salt...
I guess it's important to be sure we are talking about the same level of IEEE compliance... In terms of precision, G8x is already there for FP32. You might expect G9x to be compliant for FP64, but only in terms of precision.

If G100 was fully compliant (denorms etc.) that would be a first for a SIMD machine as Chalnoth pointed out I think... Assuming, of course, that it is a traditional SIMD machine. For all we know, it could be a hybrid clockless throughput processor with variable branching coherence that's also pretty good at making coffee and toast. I'd certainly buy a couple of those, although probably mostly for the coffee. In the end, we really don't know what NVIDIA is working on beyond G8x architectural derivatives... (but if someone knows, PM ftw! ;))
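
To make the denorm point concrete, a minimal host-side sketch of one thing "fully compliant" means - gradual underflow instead of flush-to-zero (this runs on the CPU, which handles denormals; an FTZ machine like G8x's FP32 path would give 0 instead):

```c
#include <stdio.h>
#include <float.h>

/* One concrete piece of full IEEE compliance: gradual underflow.
   Halving the smallest normal float must yield a nonzero denormal;
   a flush-to-zero machine produces 0 here instead. */
int main(void)
{
    float smallest_normal = FLT_MIN;            /* 2^-126             */
    volatile float half = smallest_normal / 2;  /* 2^-127, a denormal */

    printf("FLT_MIN     = %g\n", smallest_normal);
    printf("FLT_MIN / 2 = %g (%s)\n", half,
           half > 0.0f ? "gradual underflow" : "flushed to zero");
    return 0;
}
```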

P.S.: Erik Lindholm was, afaik, already the lead for the G80 Shader Core. Some of his more recent patents aren't applied in G8x though, so that might or might not give us some hints...
 
P.S.: Erik Lindholm was, afaik, already the lead for the G80 Shader Core. Some of his more recent patents aren't applied in G8x though, so that might or might not give us some hints...
Yes, I've known him for a long time; he was also the lead for the GF256 T&L engine. Very smart and kind guy. Maybe one of the best, if not the best, 3D/computing architect. IMHO, with Erik on board, Nvidia will have a bright future, even against Intel (I'm hoping that the manufacturing processes at TSMC/IBM/Chartered will not fall much behind Intel's...)
 
Yes, I've known him for a long time; he was also the lead for the GF256 T&L engine. Very smart and kind guy. Maybe one of the best, if not the best, 3D/computing architect. IMHO, with Erik on board, Nvidia will have a bright future, even against Intel (I'm hoping that the manufacturing processes at TSMC/IBM/Chartered will not fall much behind Intel's...)
Yeah, Erik is also behind the NV20 VS engine, arguably the first highly programmable graphics engine in the consumer space (sorry, the NV1x register combiners and their natural evolution in NV2x don't count!) - and how many high-ranked NVIDIA employees did you meet, dammit?! hehe.

There are a number of very bright guys at NVIDIA; that's not very surprising - they wouldn't be where they are today otherwise. Unsurprisingly, they aren't the only ones out there with some really great talent; the R300 didn't just come out of the blue, for example. I also wouldn't underestimate Intel... not least because I wouldn't underestimate PowerVR! ;)
 
If my source is right, no more G80 @ 65nm is planned...

Well, as a source-less observer, that would be how I read things. Odd "mid-range" cards and the reappearance of an Ultra both smell like "backup" plans to me. While there was a small chance that nVidia was winding up for a body blow, it seemed far more likely that something slipped.

While we'll probably never know, one wonders whether it was h/w (65nm problems?), s/w (drivers? design? gpgpu?), or w/w (Google is sucking the Valley dry recently).

Fortunately, R600 will tide us over with a whole 'nother set of engineering tradeoffs to debate, before we get to the inevitable "nvidia is doomed, late, runs hot, attached to leaf blower" posts :|

-Dave
 