The G92 Architecture Rumours & Speculation Thread

TSMC capacity constrained on 65nm? Didn't we get signs of this earlier in the year? Did AMD get the lion's share?

Jawed

I'd guess this is fallout from the earlier comments about being capacity constrained in 2H. I can't recall UMC ever getting anything this far up the foodchain from NV or ATI before.
 
Also, I presume that 55nm wafers count as 65nm wafers when it comes to capacity, so it seems possible that AMD's first foot in the door with both has given it some priority.

Though you have to wonder whether RV670 in particular, rumoured to be pulled in from January to November, would have given AMD enough lead time to "bring forward" 55nm orders. Unless AMD "converted" existing 65nm wafer orders into 55nm orders. But that kind of conversion would come at the expense of RV630/610 dies, since there's no rumour of those lower-end RV parts transitioning to 55nm before January.

Jawed
 
No. The PCB would be much more complicated and more expensive (probably more layers as well), and not that nice for the memory layout either.
Do you really think the PCB adds that much to the cost? The HD2900 Pro is 512-bit and costs $250, right? I'm talking about a $500 part.

Granted, they need 16 memory chips instead of 12, but each is half the capacity, so I'd expect at least a 25% savings in memory cost. No way does the increase in PCB cost from a 384-bit board to a 512-bit board outweigh that.
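Quick back-of-envelope on that, with made-up per-chip prices (the 0.56 ratio is just an assumption to show where the break-even sits, not a real BOM figure):

Code:
# Back-of-envelope memory cost comparison; prices are illustrative only.
# 384-bit bus: 12 x 32-bit chips at full capacity.
# 512-bit bus: 16 x 32-bit chips at half capacity.
full_chip_price = 1.0   # normalised price of a full-capacity chip
half_chip_price = 0.56  # assumed: a half-capacity chip costs ~56% of a full one

cost_384 = 12 * full_chip_price  # 12.0
cost_512 = 16 * half_chip_price  # 8.96
savings = 1 - cost_512 / cost_384
print(f"memory cost savings: {savings:.0%}")  # ~25%

So the 25% saving holds as long as half-capacity chips sell for no more than ~56% of full-capacity ones.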
 
How soon will a 55nm wafer be as cheap as a 65nm wafer?
That's a good question, and I'm not 100% sure - but with 90/80nm, when the first 80nm chips came out (or even before that I think), TSMC issued a PR saying that it was a "20% cost reduction" (it might not have been exactly that, but you get the point).

That corresponded perfectly to the wafer being the same cost (at least for the general-purpose processes, not 80HS). So my guess is the same is roughly true for 55nm; the only extra 'cost' is probably that yields are lower initially, which matters less for RV670-sized chips than for G80/R600-sized ones, of course...
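For what it's worth, the ideal-scaling maths lines up with that PR number (this ignores yield and pad-limited dies, so treat it as an upper bound):

Code:
# Per-die cost at a constant wafer price scales with die area, which
# shrinks with the square of the linear feature size (ideal scaling;
# ignores yield and pad-limited designs).
def die_cost_reduction(old_nm, new_nm):
    area_ratio = (new_nm / old_nm) ** 2
    return 1 - area_ratio

print(f"90nm -> 80nm: {die_cost_reduction(90, 80):.0%}")  # ~21%, close to TSMC's "20%" PR
print(f"65nm -> 55nm: {die_cost_reduction(65, 55):.0%}")  # ~28% in the ideal case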
 
I'd imagine the 65nm migration was already underway by then (one which may well have sparked the rethink) and, if so, they would want to get the ROI on that.
Yup, totally agreed. Plus, CELL can also be manufactured at IBM rather than Sony/Toshiba's own fabs, and if they didn't want to do it there, it'd be easier to port the design to Chartered than TSMC.

On 45nm, *maybe* they'll want to make a single-chip SoC out of all this. Although my guess is it'd still be too big (>150mm2), but we'll see.
 

So, you're attributing a ~100W power consumption drop to RSX@65nm alone, or to RSX@65nm + CELL@65nm out of Sony's Nagasaki and IBM's East Fishkill plants? ;)
Surely it's not just due to the smaller hard drive, the absence of a memory card reader, or having only two USB ports... :smile:

Anyway, this console chip demand on the likes of TSMC and Chartered, targeting this holiday season, could have pushed AMD to order more RV670s from TSMC, thereby forcing Nvidia to add a second source for G92.
 
I have always been very confident the 40GB model would have a 65nm CELL, even before its release. However, I am not so sure the 65nm RSX will be in it, as the ~100W figure doesn't mean much when the PSU was AFAIK already overspecced!

Anyway, it is possible that it has a 65nm RSX, and if so this would indeed take capacity at TSMC, and NVIDIA certainly couldn't take priority over that since Sony would be very pissed off there. We'll see...

EDIT: Oops! I just realized that was a really stupid thing to say, since RSX is done in Sony's own fabs right now, and I see no reason for them to switch it to TSMC. So ignore me here! :)
 
Note that with "console chip demand" on TSMC and Chartered I was referring to the PS3, X360 and Wii together, not the PS3 alone. :smile:
 
Well...
CELL: IBM/Sony/Toshiba
Xenon: IBM/Chartered
RSX: Sony/Toshiba
Xenos: TSMC (90nm)
Broadway: IBM (~20mm2)
Hollywood: TSMC (90nm or above)

Based on all this, I have some difficulty figuring out why consoles would affect 65nm capacity at all; even at 90nm, only Xenos and Hollywood sit at TSMC, so the impact could hardly be that big.
 
What exactly is UMC's track record in supplying high-transistor-count GPUs in large orders? Did moving the release two weeks ahead of schedule force Nvidia to move production to UMC for G92?
 
You think a decision like adding another fab to your production for a given chip has that kind of timing? Wheeeee. I highly doubt it. It would have been in train for several months (at least), and the more complex the chip, the longer the lead time. As I said, to my memory (anyone know different?) G92 would be the most complex GPU anyone's given UMC. Or maybe that's not a fair way to put it, as they're always more complex "than ever"... "highest up the foodchain" is how I usually think of it (i.e. a relative point is more accurate for what we're pointing at than an absolute one).
 
And, btw, what I'd suggest you pay attention to, regarding how they "moved the release up two weeks", is how good or poor initial availability turns out to be.
 
Nvidia's getting really good at spreading FUD about their GPUs and nailing down leaks. G92 went from 64 to 96 and now 112 SPs. I guess most of it depended on the RV670 vibe, which went from ok to good to really good. I suppose G92 is nothing but a full G80 on 65nm with some tweaks. Rumours around G92 are quite reminiscent of R420, if anyone's forgotten that. :D

My guess about Nvidia's line-up would be (rough SP/FLOPS maths after the list):

GX2 G92 (6 or 7 TCPs) Q1 '08 $599
G92 (8 TCPs) Q4 '07 $399
G92 (7 TCPs) Q4 '07 $299
G92 (6 TCPs) Q1 '08 $199
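Rough maths behind those TCP counts, assuming a G80-style 16 SPs per TCP and the usual MADD+MUL FLOP counting; the 1.5GHz shader clock is purely illustrative, not a known spec:

Code:
# Sketch of the SP counts implied by the TCP rumours.
SPS_PER_TCP = 16            # assumed G80-style cluster: 2 SMs x 8 SPs
SHADER_CLOCK_GHZ = 1.5      # illustrative, not a confirmed clock
FLOPS_PER_SP_PER_CLOCK = 3  # MADD + MUL, the usual marketing count

for tcps in (8, 7, 6):
    sps = tcps * SPS_PER_TCP
    gflops = sps * FLOPS_PER_SP_PER_CLOCK * SHADER_CLOCK_GHZ
    print(f"{tcps} TCPs -> {sps} SPs, ~{gflops:.0f} GFLOPS")
# 8 TCPs -> 128 SPs (full G80), 7 -> 112 (the latest rumour), 6 -> 96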
 
But I highly doubt UMC will give Nvidia much of a discount on price at volume.
 
What do you think about this? Lol

http://www.nordichardware.com/news,6911.html

Do you think it can be true? And when is the GF9800 supposed to be released?

yourlink said:
Warning: mysql_connect(): Too many connections in /home/nh/public_html/new.www.nordichardware.com/index.php on line 4
FAIL

Yes, I think it is true.

Ok, working now

nordichardware said:
GeForce 9800 will be a monster. Not that I think that this statement surprises you the least bit, but we were told by a friend that the next high-end GPU will not only sport double the performance of the ATi R680, but actually triple the performance. The card "R680" is supposed to have two RV670 chips side by side on a single PCB and be able to output around 1 TFLOPS. This is the next high-end card from AMD/ATI, but we should not forget about R700 which is due this Spring. G92, or whatever we should call it, is also not slated for launch until Spring. When we first heard about the 3 TFLOPS it was too early for us to actually believe them, but it seems that NVIDIA is still telling its partners that 3 TFLOPS is what they will be getting.

No way. No way in hell. Not unless they're talking about Quad SLI performance. Single GPU? Hell no.
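The same marketing maths as above shows why (both the 1.5GHz clock and the 3 FLOPs/SP/clock count are assumptions carried over from the earlier sketch):

Code:
# What a single 3 TFLOPS GPU would require, using MADD+MUL counting.
TARGET_GFLOPS = 3000
FLOPS_PER_SP_PER_CLOCK = 3

sps_needed = TARGET_GFLOPS / (FLOPS_PER_SP_PER_CLOCK * 1.5)    # at a 1.5GHz shader clock
clock_needed = TARGET_GFLOPS / (FLOPS_PER_SP_PER_CLOCK * 128)  # with G80's 128 SPs

print(f"SPs needed at 1.5GHz: ~{sps_needed:.0f}")              # ~667, over 5x G80
print(f"clock needed with 128 SPs: ~{clock_needed:.2f} GHz")   # ~7.81 GHz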
 