NVIDIA GT200 Rumours & Speculation Thread

Just out of curiosity, why would a buggy, non-working (it was never shown working), watercooled AIB prototype of a 3870x3 lead you to believe there will be a production 4870x3, when it never even led to a production version of itself?

Derek Wilson mentioned getting a working one

And I didn't see this posted yet:

http://www.tgdaily.com/content/view/37554/135/

It's a bit of a rehash, but it carries today's date. This part did interest me:

Our sources state that the manufacturing cost of the GT200 die is somewhere between $100 to $110 per piece. It is pricey and you will be getting a lot more processing logic inside this core than with any other semiconductor part in the short history of the IT industry.
 
I heard the transistor count is 1.4 billion and that GT200 is not a teraflop GPU, but close. :LOL:
Well, even if that was the case, I'm sure they couldn't resist binning the Tesla variant so that they can clock it high enough to achieve a teraflop... ;)
 
Sure, but I still have no idea whether this ~930 GFLOPS refers to MADD+MUL or MADD only; in the latter case it could be a bit more difficult to break 1 TFLOPS.
 
I heard the transistor count is 1.4 billion and that GT200 is not a teraflop GPU, but close. :LOL:

Hmm, GT200 is a high-end GPU and doesn't quite reach 1 TFLOPS, only close (over 900 GFLOPS), while RV770 is a performance/mainstream GPU and there is a possibility it reaches about 1 TFLOPS. How could that be? The HD 4870X2 will be about 2 TFLOPS, so that would be a huge advantage over GT200.
Moreover, there are some rumours that GT200 scores over 7k in 3DMark Vantage at Extreme settings and runs Crysis at 1920x1200 - AA/AF with playable framerates. If this is true then we will see about a 2 - 2.5x performance bump over G80/G92 :)
 
The answer is magic FLOPs invented by Oompa-Loompas, or in other words:
In this generation, performance is even less comparable via theoretical peak FLOPs than in the previous one.
 
The answer is magic FLOPs invented by Oompa-Loompas, or in other words:
In this generation, performance is even less comparable via theoretical peak FLOPs than in the previous one.

Do you think it's possible that they count only MADD performance and clock the SPs at nearly 2 GHz?
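To put rough numbers on that, here is a minimal sketch of the peak-FLOPS arithmetic, assuming the rumoured 240 SPs and the usual convention of counting a MADD as 2 FLOPs and the co-issued MUL as 1 FLOP per clock:

```python
# Back-of-envelope peak-FLOPS check.  The 240 SP count is the rumoured
# figure; a MADD is counted as 2 FLOPs, the co-issued MUL as 1 FLOP per clock.

SP_COUNT = 240
MADD_FLOPS = 2
MUL_FLOPS = 1

def peak_gflops(shader_clock_ghz, flops_per_sp_per_clock):
    """Theoretical peak = SPs x FLOPs-per-SP-per-clock x shader clock (GHz)."""
    return SP_COUNT * flops_per_sp_per_clock * shader_clock_ghz

# If ~930 GFLOPS already counts MADD+MUL (3 FLOPs/clock), the shader clock
# works out to roughly 930 / (240 * 3) ≈ 1.29 GHz:
print(peak_gflops(1.29, MADD_FLOPS + MUL_FLOPS))   # ≈ 929 GFLOPS

# If it is MADD only (2 FLOPs/clock), the SPs would need ~1.94 GHz:
print(peak_gflops(1.94, MADD_FLOPS))               # ≈ 931 GFLOPS
```

Under those assumptions, ~930 GFLOPS for MADD+MUL implies a shader clock around 1.3 GHz, while a MADD-only figure would indeed require nearly 2 GHz.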
 
According to TG Daily, the GT200 die is 576 mm^2.

http://www.tgdaily.com/html_tmp/content-view-37554-135.html

From the report (quoting the important bits)
-Not many chips will actually fit on a 300 mm wafer
-512bit for 280 and 448bit for 260
-15 processing units (240 shader processors) available on the GTX 280, while the GTX 260 will come with 12 units for a total of 192 shader processors. GT200 has 16 clusters in total. They mention G80 having 9 clusters in total, which is probably a typo.
-manufacturing cost of the GT200 die is somewhere between $100 to $110 per piece
-240 shader units (240FP+240MADD)
-~1 billion transistors
-120 G80 dies per wafer, and <100 GT200 dies per wafer (a rough dies-per-wafer sanity check follows below)
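As a sanity check on the dies-per-wafer numbers, a rough sketch using the common edge-loss approximation; the 576 mm^2 figure is from the report above, while the ~484 mm^2 G80 die size is an assumption added here for comparison:

```python
import math

# Rough dies-per-wafer estimate with the common edge-loss approximation:
#   dies ≈ pi * (d/2)^2 / A  -  pi * d / sqrt(2 * A)
# Ignores scribe lines, exclusion zones and yield, so it is an upper bound.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2
    gross = math.pi * r ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

print(dies_per_wafer(576))   # ≈ 94 GT200 candidates -- consistent with "<100"
print(dies_per_wafer(484))   # ≈ 115, taking G80 at ~484 mm^2 (assumed size)
```

That lands at roughly 94 GT200 candidates per 300 mm wafer before any yield loss, which lines up with the "<100" figure in the report.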

And er sorry about the leeching. Should've linked the actual site. :smile:

edit - ah, so it's a repost after all. I'm still wondering about the cluster count. Wouldn't there be a massive overhead because of this?
 
If there really are 16 clusters, then activating them all for an Ultra should give 1 TFLOP :D Perhaps that'll be the Tesla CUDA beastie.

That report says the 8800 GTX really had 8 of 9 clusters enabled. If true, it's interesting that there was never an Ultra with all 9 enabled.

Jawed
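For what it's worth, a quick scaling check, assuming the ~930 GFLOPS rumour quoted earlier corresponds to the 15-cluster configuration and that peak throughput scales linearly with enabled SPs:

```python
# Linear scaling from 15 enabled clusters (240 SPs) to all 16 (256 SPs),
# assuming the ~930 GFLOPS rumour applies to the 15-cluster configuration.

GFLOPS_15_CLUSTERS = 930
print(GFLOPS_15_CLUSTERS * 16 / 15)   # ≈ 992 GFLOPS with every cluster enabled
```

At the same clocks, all 16 clusters would land just short of 1 TFLOP, so a fully enabled bin would still need a small clock bump to cross it.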
 
I'm still wondering about the cluster count. Wouldn't there be a massive overhead because of this?

Yep, it also means a massive amount of texture filtering ability (again), somewhere on the order of 65 - 75MT/s. I'm not giving up on 24 SPs per cluster just yet, though.
 
I calculate that the initial yield for GT200 is between 10% and 15%.

Interesting.
That sounds low to me, but it is a huge chip and I really have no idea how high yields generally are.
How did you do the calculation? Could you please post it with all the input parameters, e.g. die size and defect density? How have you accounted for redundancy?

Do we have any recent wafer prices? It would be interesting to see how much each chip would cost in wafer price alone, even if that is just a small part of the total price.
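Since the question of inputs came up, here is one way such a range could fall out of a plain Poisson defect model with no redundancy, using the 576 mm^2 die size from the report; the defect densities and the wafer price below are illustrative assumptions, not known TSMC figures:

```python
import math

# Poisson defect model with no redundancy: Y = exp(-D0 * A).
# Die area comes from the reported 576 mm^2; the defect densities and the
# wafer price are illustrative guesses, not known TSMC numbers.

DIE_AREA_CM2 = 5.76   # 576 mm^2

def poisson_yield(defect_density_per_cm2, die_area_cm2=DIE_AREA_CM2):
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

for d0 in (0.33, 0.40):
    print(f"D0 = {d0}/cm^2 -> yield ≈ {poisson_yield(d0):.1%}")
# D0 = 0.33 -> ~15%, D0 = 0.40 -> ~10%

def cost_per_good_die(wafer_cost_usd, gross_dies_per_wafer, yield_fraction):
    """Spread the wafer cost over the dies that actually work."""
    return wafer_cost_usd / (gross_dies_per_wafer * yield_fraction)

# Hypothetical $5000 wafer, ~94 candidates per wafer:
print(cost_per_good_die(5000, 94, 0.12))   # ≈ $443 at a raw 12% yield
print(cost_per_good_die(5000, 94, 0.50))   # ≈ $106 at ~50% effective yield
```

Redundancy and salvage (spare clusters, GTX 260 bins) would push the effective yield well above the raw figure, which the ~$100-110 per-die cost quoted earlier would also seem to require unless wafers are much cheaper than the hypothetical price used here.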
 
Not possible; they wouldn't be able to sell the card for $600, or even $1000, if the yields were that low.

Wouldn't the use of cheap GDDR3 compensate for that deficit somewhat?
Top-speed GDDR4/GDDR5 is certainly more expensive than GDDR3, and overall memory cost can play a crucial role in price positioning (even more so in a market where prices fluctuate as fast as they do in the DRAM IC business).

I'm recalling the example of the 8800 GTS 640MB vs the 320MB version, or the GDDR4 version of R600 against the GDDR3 original, etc.

Also, how mature is the TSMC 65nm node right now compared to its 55nm half-node derivative? It's an unknown variable.
 