NVIDIA: Beyond G80...

Perhaps it's for the mobile sector, which also uses eDRAM?

eDRAM in the mobile sector (I assume you meant the handheld sector, which is not exactly the same thing)?
I don't find it likely they'd use a half-node for that, but anything is possible.
It's still close to a year away from going on sale, according to the news bit.
 
Back on topic.

NV is jumping directly from 90/80nm to 55nm??
This means we'll probably have 55nm half-node GPUs in 2007/early 2008, which I find very surprising, to say the least (considering the 80nm delays from both IHVs).

http://www.digitimes.com/bits_chips/a20070328PB202.html

That story didn't say anything about Nvidia skipping 65nm, just that it's started work on 55nm. This would make sense, since it would be prudent to overlap pilot work on 55nm as the verification work goes through with 65nm.

If the 55nm process is related to 65nm as 80nm is to 90nm (in that they present similar transistor performance), then skipping 65nm would just make the transition to 55nm harder.
 
I was obviously talking about high-end parts.
We already know that Nvidia had a G72 shrunk from 90nm to 65nm, and more SKUs are likely to hit before then.

But, as it has done before, Nvidia may be "testing the waters" for yet another half-node high-end launch, something not seen from them since the 7800 GTX debut at 110nm (a process previously used on the midrange/low-end 6600 and 6200 series).
The 90nm full-node also premiered on a low-end IGP (the 6150), remember?

After all, R600 is debuting on a half-node too.
 
I would bet that the next high-end part from Nvidia will be on 65nm rather than 55nm (skipping 80nm altogether for the high end).
 
That could be inferred right from the Digitimes article.
If Nvidia sticks to the late summer-to-fall refresh timetable in the high end, then they wouldn't be using a process scheduled for early 2008. ;)

Of course, if the "8800 Ultra" is indeed not just an overclocked or GX2'ed version of the current 90nm GPU, then this timetable would have to be adjusted.
But I still have my doubts about this "secret die shrink" theory.
 
Wasn't there a rumor earlier about a higher-end 8800 GPU with 160 stream processors?

It's not out of the question. The R420 also had a surprise 16 pipelines IIRC, even though it was initially claimed to be 12 pipelines. Some spare shader units probably do exist.
 
That sounds to me like a reasonable number for a hypothetical 80nm high-end G8x part.

I think 10 clusters has, indeed, been discussed. If the high clocks of the midrange chips carry over to a hypothetical 10-cluster monster, that would certainly make things interesting: 575 -> 675 MHz, plus a 25% boost in the number of clusters...
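
As a rough sketch of what that scaling would imply (treating every figure here as a rumor, and ignoring memory bandwidth and any other bottlenecks):

# Hypothetical scaling for a 10-cluster G8x vs. the 8-cluster 8800 GTX.
# All numbers are speculation from this thread, not confirmed specs.
base_clusters, base_core_mhz = 8, 575     # shipping 8800 GTX
hypo_clusters, hypo_core_mhz = 10, 675    # hypothetical 10-cluster part

scale = (hypo_clusters * hypo_core_mhz) / (base_clusters * base_core_mhz)
print("Theoretical throughput scaling: %.2fx" % scale)   # -> 1.47x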
 
It's not out of the question. The R420 also had a surprise 16 pipelines IIRC, even though it was initially claimed to be 12 pipelines. Some spare shader units probably do exist.
A surprise from a rumors POV, that is; it wasn't something that came after launch.
 
It could be redundancy as well, but 160 vs. 128 is quite a bit of redundancy. They could have been cherry-picking the good cores for use on a possible Ultra part if they were having problems with R600, I suppose.
 
8950 GX2 and 8800 Ultra specs?
http://xtreview.com/addcomment-id-2066-view-GeForce-8950-GX-2-and-GeForce-8800-Ultra.html

* GeForce 8950 GX2 -> two G80 chips on one card, 2 x 96 stream processors, 2 x 320-bit memory bus, 2 x 640 MB of GDDR4 memory, 575/2000 MHz core/memory clocks, 1700 MHz shader domain, recommended retail price $649;

* GeForce 8800 Ultra -> 128 stream processors, 384-bit memory bus, 768 MB of GDDR4 memory, 650/2200 MHz core/memory clocks, 1800 MHz shader domain, recommended retail price $549.
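
If those numbers are real (a big if), the implied memory bandwidth works out like this, assuming the quoted memory clocks are effective GDDR4 data rates:

# Memory bandwidth implied by the rumored specs above.
# Assumes the listed memory clocks are effective (post-DDR) data rates.
def bandwidth_gb_s(effective_mhz, bus_bits):
    return effective_mhz * 1e6 * bus_bits / 8 / 1e9

print(bandwidth_gb_s(2000, 320))       # 8950 GX2, per GPU:   80.0 GB/s
print(2 * bandwidth_gb_s(2000, 320))   # 8950 GX2, combined: 160.0 GB/s
print(bandwidth_gb_s(2200, 384))       # 8800 Ultra:         105.6 GB/s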
 
It could be redundancy as well, but 160 vs. 128 is quite a bit of redundancy. They could have been cherry-picking the good cores for use on a possible Ultra part if they were having problems with R600, I suppose.

Not necessarily too much redundancy. If the 128 scalar shaders are grouped in blocks of 16, then it's going from 8 shader blocks to 10 shader blocks. For a chip the size of the G80, 2 redundant blocks is very reasonable.
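
The block math, spelled out (the 16-SP block size is an assumption from this discussion, not a confirmed figure):

# Redundancy arithmetic for a rumored 160-SP die with 128 SPs enabled.
sps_physical, sps_enabled, block_size = 160, 128, 16
blocks_physical = sps_physical // block_size   # 10 blocks on the die
blocks_enabled = sps_enabled // block_size     # 8 blocks enabled
print("%d redundant blocks of %d SPs each" % (blocks_physical - blocks_enabled, block_size))  # -> 2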
 
Well, if that 8800 Ultra spec is true, it puts it somewhat close to where I thought Nvidia would go. The shader clock gets a big increase, but I don't expect it to do much in any current games. I think it would be faster at 700/2200 with a 1500 MHz shader clock.
 
Not that I put stock in rumours of 1800 MHz shader domains, but I very much doubt that a 7% increase in core clock would outshine a 20% increase in shader clock when it counts. Fillrate and texturing are things G80 certainly has no shortage of at the moment. I'd be curious to know why you made that statement; you're developing quite a reputation for just tossing opinions out without any supporting points or evidence.
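
For reference, the relative clock deltas between the two configurations being argued about (both of which are pure speculation at this point):

# Comparing the rumored 8800 Ultra clocks against the alternative
# suggested above. Both configurations are speculation, not real specs.
ultra_core, ultra_shader = 650, 1800   # rumored 8800 Ultra
alt_core, alt_shader = 700, 1500       # alternative proposed above

print("Ultra core clock vs. alternative:   %+.1f%%" % (100 * (ultra_core / alt_core - 1)))      # -7.1%
print("Ultra shader clock vs. alternative: %+.1f%%" % (100 * (ultra_shader / alt_shader - 1)))  # +20.0%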
 