NVIDIA GT200 Rumours & Speculation Thread

Status
Not open for further replies.
Well, that's good, because it never seemed valid in the first place :)


The problem is: how do you theoretically double performance with the G80 architecture?

In the transition from 128 SPs to 240 SPs, GT200 may suffer some loss of branch performance under extreme conditions. G70 was the most cost-effective refined GPU compared to R580 in terms of performance per cost.


Dream on...........:D
 
2008042753a048d725f2f92xh6.jpg


Also see:
http://forums.techpowerup.com/showthread.php?p=742435
;)

And 240 SPs × 2 FLOPs × 2.25 GHz would give roughly 1 TFLOP, which is what GT200 should deliver in the French cluster.
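The arithmetic behind that figure can be checked in a couple of lines (assuming 2 FLOPs per shader per clock, i.e. one multiply-add per cycle, as on G80-class hardware; the 2.25 GHz shader clock is the rumoured value, not a confirmed spec):

```python
# Back-of-the-envelope peak-FLOPs estimate for the rumoured GT200 config.
shaders = 240            # rumoured stream processor count
flops_per_clock = 2      # one MAD per SP per clock counts as 2 FLOPs
shader_clock_ghz = 2.25  # rumoured shader clock

tflops = shaders * flops_per_clock * shader_clock_ghz / 1000
print(f"{tflops:.2f} TFLOPs")  # prints "1.08 TFLOPs"
```

So the quoted "1 TFLOP" is really about 1.08 TFLOPs at those clocks; a lower shader clock (say 1.5 GHz, closer to G92) would only give 0.72 TFLOPs.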
 
That just seems way too idealized... It can't be real.

I agree... high shader count, high shader clock, wide memory bus, and a relatively small die? And it supposedly has DP support too. No way.

Plus, what's up with the 64 ROPs and 2500 MHz GDDR3?

I'm always wary to use GPU-Z readouts as evidence of anything. They are easily faked and often inaccurate when not.
 
The same was thought when the first G80 data was revealed. ;)

True... It's been over a year and a half since G80, and all that's really come out in the high-end sector is the 8800 GTS 512, 9800 GTX, and GX2. It's possible they've been hiding this one as well as they hid G80.
 
They should use Impact instead of Tahoma. :LOL:
 
The problem for Nvidia during Q2/Q3 is the possibility of advanced memory support for G92b; closing the gap between GT200 and G92 is not an easy task for Nvidia.


Lots of friends are talking about the shift in AMD's business strategy.

I still cannot understand why such huge computational resources cannot be leveraged into gaming performance. More or less, shader power has become a matter of the overall design, not just raw shader computational power by itself.

Why is G80 so much better than R600? Is it simply the more effective architecture in the contemporary gaming context?
 
Agreed - surely GPU-Z doesn't actually detect any features; it simply detects the chip and then matches it to the stats in its database. No?

Though it was recently reported that GPU-Z's latest build added GT200 recognition.
 
64 ROPs? That kind of gives it away, doesn't it? Unless the number 64 refers to the TMUs. Even then, 240 to 64 is a strange combination.

I don't think GT200 will be on 55 nm but rather 65 nm, because ever since the NV30 debacle they've always been cautious with new processes. The same goes for the memory being used (GDDR3 instead of GDDR5).

It's a 50/50 imo. Some things don't sound right, but you can't blame W1zzard since it's a test build. (GPU-Z once read my 6800 GT as having 6 ROPs!)
 
Does the conclusion of a supposed "dual chip" config originate from the supposedly high TDP?
 