Nvidia GT200b rumours and speculation thread

From a newer XFX XXX:
... so it seems there is a better version of GT200A out, since the usual GTX 280 and an older XXX used G200-300-A2?
Here is what's hidden under the hood of the latest G200-302 batch:

[Image: g200a2mg3.jpg]


Still no 55nm in there.
 
They likely have excess inventory of older GPUs to burn through...
 
Well, the visual difference between a 578 sq.mm die and a 484 sq.mm shrink isn't that perceptible at first glance. ;)

-Edit-

A side-by-side comparison says enough (perspective corrected):

[Image: g200a2mg31fb6.jpg]
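For scale, a quick back-of-envelope check of those two figures (a sketch in Python, using only the 578 and 484 sq.mm numbers quoted above):

```python
import math

old_area = 578.0  # mm^2, the 65nm GT200 figure quoted above
new_area = 484.0  # mm^2, the rumored 55nm shrink

linear_scale = math.sqrt(new_area / old_area)
print(f"linear scale: {linear_scale:.3f}")             # ~0.915, i.e. ~8.5% shorter edges
print(f"area saved:   {1 - new_area / old_area:.1%}")  # ~16.3% less silicon
```

An 8.5% reduction per edge really is hard to eyeball, which fits the remark above. Note that a full 65nm-to-55nm linear shrink (55/65 ≈ 0.846 per edge) would land near 414 sq.mm; 484 suggests not everything scales, which is normal, since pads and analog structures shrink poorly.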
 
I think I read here that Tesla cards would be the first to use the 55nm GPU (perf/watt is more vital in supercomputing than in the high-end gamer market).
That Quadro FX 5800 has Tesla specs (the clocks and the amount of RAM) and is probably intended for some GPGPU work too.
 
In fact, the die count on the 300mm GT200 wafer suggests an area larger than the officially announced 576 sq.mm anyway. ;)
 
Ooh, nice picture :D

Though it doesn't answer the question as to the amount of wastage there is between dies on the wafer. How much area does the sawing of a wafer into individual dies consume?

Jawed
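One way to put rough numbers on both questions (how the die count constrains the area, and how much the saw lanes eat) is the standard dies-per-wafer approximation, with a scribe allowance folded into each die's footprint. A sketch only: the square-die assumption, 0.2 mm scribe width, and 3 mm edge exclusion are illustrative guesses, not known GT200 parameters.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0,
                   scribe_mm=0.2, edge_exclusion_mm=3.0):
    """Gross dies per wafer, assuming square dies and folding the
    scribe (saw) lanes into each die's footprint."""
    edge = math.sqrt(die_area_mm2) + scribe_mm      # effective die edge
    area = edge * edge                              # footprint incl. scribe
    d = wafer_diameter_mm - 2 * edge_exclusion_mm   # usable wafer diameter
    # Standard approximation: usable area / die footprint, minus a term
    # for partial dies lost along the circular wafer edge.
    return int(math.pi * (d / 2) ** 2 / area - math.pi * d / math.sqrt(2 * area))

for a in (576.0, 484.0):
    print(f"{a:.0f} mm^2 -> ~{dies_per_wafer(a)} gross dies")   # ~88 and ~108
```

With these assumptions the scribe lanes cost under 2% of the wafer for dies this large (they matter far more for small dies), and the 576-to-484 sq.mm step is worth roughly 20% more gross candidates per wafer. Conversely, if a real wafer shot shows fewer dies than such an estimate, the actual die is bigger than the quoted area, which is the point made above.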
 
Me, I need to see it for myself before I believe it. So far only the TDP hints at a lower-power chip, and in turn at GT200b.


If I were Nvidia, I'd strip GT200b of its DP units to improve margins on consumer-level products, and keep the enormously high-margin Quadro and Tesla parts at 65nm (lower-volume) production until the next chip.
 
Thus designing an entirely new chip that's about to be replaced by GT206 and GT212. There's a reason half-node re-releases are also called an optical shrink, and it's the same reason they save money.
Power usage is also critical for the markets where Quadro and Tesla sell, whereas most high-end gamers don't care.

The cheaper variant, the rumored GT206, will be here soon; that one probably won't have the DP units.
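On the "shrinks save money" point, here is a sketch of the cost mechanics: cost per good die is wafer cost divided by gross dies times yield, and yield improves as the die shrinks. The wafer price, defect density, and Poisson yield model below are illustrative assumptions, not actual TSMC figures; the gross-die counts come from the estimate above.

```python
import math

def cost_per_good_die(die_area_mm2, gross_dies, wafer_cost_usd=5000.0,
                      defect_density_per_cm2=0.2):
    """Poisson yield model: Y = exp(-A * D0). All inputs are
    illustrative guesses, not real foundry figures."""
    area_cm2 = die_area_mm2 / 100.0
    yield_frac = math.exp(-area_cm2 * defect_density_per_cm2)
    return wafer_cost_usd / (gross_dies * yield_frac), yield_frac

for area, dies in ((576.0, 88), (484.0, 108)):
    cost, y = cost_per_good_die(area, dies)
    print(f"{area:.0f} mm^2: yield ~{y:.0%}, ~${cost:.0f} per good die")
```

Under these made-up inputs the shrink cuts cost per good die by roughly a third, from both more candidates per wafer and better yield on the smaller die, which is the money-saving argument in the quoted post.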
 
Thus designing an entirely new chip that's about to be replaced by GT206 and GT212. There's a reason half-node re-releases are also called an optical shrink, and it's the same reason they save money.
Power usage is also critical for the markets where Quadro and Tesla sell, whereas most high-end gamers don't care.

I am aware of that, but given Nvidia's current situation, I think a reassessment of the old wisdom might be in order, no?
 