Next G80 product in February?

nicolasb

Regular
My apologies if this has already been discussed to death in other threads....

According to DailyTech (who are usually fairly reliable) there will be a new G80-based product coming out in February. But no one wants to say what it will be.

http://www.dailytech.com/article.aspx?newsid=4867

An obvious possibility would be an 8800GT (as this has been referenced in driver files). That would presumably sit somewhere between the GTX and GTS products in performance terms.

A less plausible (but more enjoyable :smile: ) possibility is that it might be a new, even-higher-end product to compete directly with a hopefully high-performing R600. I vaguely recall reading that the G80 is capable of supporting GDDR4 memory without any hardware modifications. I also heard a rumour (I wish I could remember where I heard it! - probably somewhere dodgy like The Inquirer :oops: ) that it is actually able to support a 512-bit memory bus too. I don't like to think about how expensive a G80 with 1GB of GDDR4 memory on a 512-bit bus would be, but it's fun to daydream about its performance. :cool: Even if it were simply a jump from GDDR3 to GDDR4, that might still be interesting.
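The bandwidth numbers behind that daydream are easy to sketch. A minimal back-of-envelope calculation — the 2200MHz-effective GDDR4 clock here is purely an assumption for illustration, not a known spec:

```python
# Peak memory bandwidth = bus width (bytes) x effective data rate.
def bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    """Peak bandwidth in GB/s."""
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

# 8800 GTX: 384-bit bus, 900 MHz GDDR3 (1800 MHz effective)
gtx = bandwidth_gbs(384, 1800)

# Hypothetical 512-bit part with 2200 MHz-effective GDDR4 (speculation)
dream = bandwidth_gbs(512, 2200)

print(f"8800 GTX:      {gtx:.1f} GB/s")   # ~86.4 GB/s
print(f"512-bit GDDR4: {dream:.1f} GB/s") # ~140.8 GB/s
```

So even with modestly clocked GDDR4, a 512-bit bus would be roughly a 60% bandwidth jump over the GTX.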

Anyway, does anyone have more definite news about the likelihood of another G80 product in Feb, or, if there is one, about what it might be like?
 
You'd expect there to be a 384-bit GTO or whatever eventually, with 7 clusters enabled instead of 6 like on the GTS, but also with slower memory than on the GTX. The GTS basically has nearly 1/4th of the die disabled for redundancy, plus lower clock speeds; I'm not sure they could salvage more functional dies by disabling more than that!

Now, another thing they could do for yields this generation is disabling triangle setup, rasterization, part of the ROPs etc. and selling it as a Physics/CUDA-only card. Clearly, they're aiming at that sooner rather than later, and the third slot on 680i (and fourth on the 680a!) is hinting at exactly that. I think the fourth slot on 680a might be very interesting in fact; imagine CUDA guys making clusters with four $299 or $399 GPUs on a single motherboard. There'd be clear benefits for everyone there.

Anyway, I think the dynamic here is that larger die sizes aren't going to help much in segments where you're bandwidth-limited. G72 and G73 are there to stay for a while, although I'm sure they'll get replaced by H2 2007 anyway. But G71 is bound to go away faster imo, and it'll get replaced by a midrange G8x rather fast. Good evidence of that strategy can be seen in NVIDIA's upcoming shrinks. They'll shrink the 77mm² low-end part to 65nm first, for example. I wonder whether that chip is going to have a 64-bit or a 32-bit memory bus, on that note... It looks like its only goal is Vista Premium compliance for OEMs with a discrete card and 512MiB of memory, instead of 1GiB dual-channel. Same for the 32-bit memory bus RV505 SKU, imo.
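For a sense of how little local bandwidth those narrow buses buy, here's a rough sketch; the DDR2 clock is an assumed figure for illustration, not a known spec:

```python
# Peak local memory bandwidth = bus width (bytes) x effective data rate.
def bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    """Peak local memory bandwidth in GB/s."""
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

ddr2_effective_mhz = 1000  # assumed 500 MHz DDR2, double data rate

for bus in (64, 32):
    gbs = bandwidth_gbs(bus, ddr2_effective_mhz)
    print(f"{bus:>2}-bit bus: {gbs:.1f} GB/s")  # 64-bit ~8, 32-bit ~4

# PCIe x16 already gives ~4 GB/s per direction, so a 32-bit local bus
# adds little raw bandwidth; its real job would just be hanging 512MiB
# of framebuffer off the card for Vista Premium compliance.
```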


Uttar
 
I am not an expert, but if I am not wrong, GDDR3 is now the "cheap and common" option and GDDR4 the "expensive and rare" one. Could Nvidia assemble an 8800GT or 8800GS from "defective" GPUs scaled down to 96 stream processors and a 256-bit bus, paired with "defective" slower GDDR4? Or do I just drink too much? :rolleyes:
 
I would think that nVidia's plan is to go aggressively at the "mid-range." Given that ATI knows it's losing sales there, AND there's a reason to have a DX10 answer around that price point, an early attack by ATI could cut NV off. And if NV isn't worried about that, they're loopy :) As much as y'all would like to see double-clocked, 65nm, 256 pipelines with a 1G data route to external memory by Feb ( :D ), I don't think we're looking at high-end refreshes until at least the mid-range is fleshed out.

imho, anyway
 
and sell it as a Physics/CUDA-only card.

Silly aside:
Of all names, why did they use an acronym that can easily be pronounced as "coulda"? As in "that CUDA been a great physics card"?

If I were them, I'd wait for a shrink before doing a physics card. Or even go dedicated. The chip is pretty hot and big, with extreme memory bandwidth, which seems a bit unbalanced for that role to me.... If they're already getting 70% yields, it might not make sense. But if the newer 80nm yields start off a bit rough....
 
Of all names, why did they use an acronym that can easily be pronounced as "coulda"?
I think the "u" is pronounced long rather than short. (I.e. the first syllable rhymes with "mood" or (if you're British) with "queued" rather than rhyming with "hood"). It's supposed to make people think of "barracuda".
 
Even more confusing follow-up from DailyTech:

http://www.dailytech.com/article.aspx?newsid=4891

Additionally, during the Q&A section of the investor call, Huang also alluded to the fact that the company would announce no less than nine DirectX 10 graphics cards based on the G80 family of GPUs. Two G80 products have already been announced, the GeForce 8800 GTX and GTS. Last week manufacturers filled DailyTech in on the possibility that another G80-derivative product is on the way in February of this year. In a private briefing, DailyTech was then updated to the fact that there are three distinct products launching in February, each of which will be divided into three sub-products. NVIDIA's public and private roadmaps appear to coincide.
 
mid-range + low + physics?!

That would be an astonishing move, if true....

[And yes, it is supposed to be cooda -- they pronounce it in the earnings call]
 
Desktop, Workstation & IGP?

or Desktop/Workstation, Mobile & IGP
 
It would make the most sense for the next G80 version to be a mid-range part on 80nm. This would be pretty much "by the book" and I don't really see any reason for Nvidia to abandon this formula during this generation. I think it would be hard to shoe-horn a SKU in between the GTS and the GTX.

Any reason why they could not or would not want to do a mid-range DX10-class product at that time?
 
According to this piece, there are 5 new cards expected using the G80 chip:

At least five more G80-based graphics chips are on their way from Nvidia, if entries in the latest driver package to leak out of the GPU maker's labs are anything to go by.
According to an NGOHQ report, one of the files inside the ForceWare 96.94 package refers directly to the G80-200, G80-400, G80-600, G80-850 and G80-875, each with a unique ID code.

http://www.reghardware.co.uk/2006/11/13/nvidia_preps_five_g80_gpus/
 
A decent mobile variant will be critical for the IHVs for 1H07. Mid-range will be the next focus. Given we have a good idea of G80 tech now, I'd expect extremely compelling solutions from Nvidia. AMD had better have made wise scalability choices...
 
Oh please let me be able to swap in a G80 based mobile part in my XPS. Need some more powa to drive this 1920x1200 display.

The Geforce Go 7900 GS had a 20W TDP (lower than a Core Duo CPU).
If, as Nvidia is saying, the performance per watt of the G8x architecture is almost 2x higher than the previous generation's, isn't a 250/300MHz "GeForce Go 8800 GS" plausible?
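A quick back-of-envelope check on that idea — every number here is an assumption for illustration, not a known spec:

```python
# Rough plausibility check for a mobile G8x part.
go_7900gs_tdp_w = 20.0     # TDP quoted above
perf_per_watt_gain = 2.0   # Nvidia's claimed ~2x G8x improvement

# If perf/W really doubles, a G8x part merely matching the Go 7900 GS
# would need roughly half the power:
watts_to_match = go_7900gs_tdp_w / perf_per_watt_gain
print(f"~{watts_to_match:.0f} W to match Go 7900 GS performance")

# Dynamic power scales roughly with clock at fixed voltage, so a
# desktop-derived die downclocked from ~500 MHz to ~275 MHz might
# draw very roughly half its desktop dynamic power -- ignoring
# leakage and voltage scaling, which matter a lot on a big 90nm die.
desktop_clock_mhz, mobile_clock_mhz = 500.0, 275.0
print(f"clock-only power scale: {mobile_clock_mhz / desktop_clock_mhz:.2f}x")
```

So the perf/watt math works on paper, but leakage on such a large die is the part this sketch deliberately ignores.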
 
Well I honestly am not expecting any mobile parts until they move to 65nm. They have no good reason to refresh their mobile line up right now especially given the relatively longer adoption process for mobile parts. So hopefully we will see something next summer.
 