Hello All!
It's my first post here at Beyond3D, although I've been a lurker for a long time.
I'm going out on a limb here and guessing that GF100 is NOT the same chip as the Fermi compute part.
Why?
Because of these posts at Xtreme Systems:
http://www.xtremesystems.org/Forums/showpost.php?p=4243892&postcount=458
http://www.xtremesystems.org/Forums/showpost.php?p=4244268&postcount=491
http://www.xtremesystems.org/Forums/showpost.php?p=4244316&postcount=498
Gemini is always viewed as dual Fermi. But Gemini is also the word for twin, and there are false twins. So my guess is that GF100 is a GT300 twin, but not at 100%. That's why they were presented at two different times. And internally at nVIDIA, Gemini might be the word for GF100.
With this in mind:
- GF100 might really be a smaller chip. Remember, the guy who claimed to be an ex-nVIDIA employee said GF100 doesn't like to be called fatty.
- Maybe software is really what is delaying the GF100 launch. Parallelizing geometry is probably not easy.
Cheers
There would be limits to what could be expected for TMU and ROP throughput gains when memory bandwidth grew modestly over GT200.
If one were to expect gains anywhere, it would either be in places where bandwidth is not as great a problem, or from the TMUs and ROPs being made more efficient within the peak numbers provided, which Nvidia claims to have done.
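For a rough sense of scale, here's a back-of-the-envelope sketch in Python. The 32 ROPs, 602 MHz and 141.7 GB/s are GTX 280's public specs; the 4 bytes/pixel is just an assumed figure for plain 32-bit colour writes, ignoring blending, Z and MSAA traffic:

rops = 32                     # GTX 280
core_clock_hz = 602e6
bytes_per_pixel = 4           # assumed: plain 32-bit colour writes only
mem_bw = 141.7e9              # GTX 280 memory bandwidth, in bytes/s

peak_fill = rops * core_clock_hz            # pixels/s at theoretical peak
write_bw = peak_fill * bytes_per_pixel      # bytes/s for colour writes alone

print(f"Peak fill: {peak_fill / 1e9:.1f} Gpix/s")
print(f"Colour writes at peak: {write_bw / 1e9:.1f} GB/s "
      f"({write_bw / mem_bw:.0%} of the card's bandwidth)")

Colour writes alone at peak fill already want over half of GT200's bandwidth, so doubling the ROP count without a comparable bandwidth increase would mostly leave the extra units waiting on memory.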
In games like COD 4, FEAR 2 and GRID, the 4870 was almost as fast as the GTX 280, which had double the ROPs and TMUs, and the clocks were quite close (750 vs 600 MHz).
So it seems that GT200 wasn't too efficient with those ROPs and TMUs in real games and was limited elsewhere.
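To put rough numbers on that (unit counts and clocks are the public specs of both cards; these are only theoretical peaks, with no real bottlenecks like setup, shaders or bandwidth modelled):

gtx280 = {"rops": 32, "tmus": 80, "clock_mhz": 602}
hd4870 = {"rops": 16, "tmus": 40, "clock_mhz": 750}

def peaks(gpu):
    # theoretical fill rates: units * clock
    return (gpu["rops"] * gpu["clock_mhz"] / 1e3,   # Gpixels/s
            gpu["tmus"] * gpu["clock_mhz"] / 1e3)   # Gtexels/s

nv_pix, nv_tex = peaks(gtx280)
amd_pix, amd_tex = peaks(hd4870)
print(f"Pixel fill: {nv_pix:.1f} vs {amd_pix:.1f} Gpix/s ({nv_pix / amd_pix:.2f}x)")
print(f"Texel fill: {nv_tex:.1f} vs {amd_tex:.1f} Gtex/s ({nv_tex / amd_tex:.2f}x)")

That's roughly a 1.6x paper advantage in both pixel and texel rate that didn't show up in those games, which points at the limit being somewhere else.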
Although Picao raises one interesting question: is there much software programming involved in NVidia's solution to tessellation? It's a remarkable hardware solution, but in all the fuss I forgot about drivers. How will their solution translate to drivers?
From my understanding, yes, quite a bit.
Unfortunately, I don't think that's something we'll fully understand until the card comes out. But given what we know so far, it would seem exceedingly odd if there were much of any processing on the driver side with regard to tessellation, given nVidia's obviously very strong focus on making the GF100 a very fast GPU in this regard.
And they will have to optimize for the most common benchmarks, as they will fail otherwise. With the cut-down 448SP version being released and immature drivers, the 5870 would walk all over them.
But maybe they will have something like 30 working 512SP cards as a press edition.
Well, such work tends to be slow and not terribly productive when actual hardware isn't available.
Which would suggest that their driver team was sitting around only killing flies for several months until March.
And 2009 A2 samples were used for key-chains only?
Yes, and then when someone asks you what that oversized keychain in your jacket is, you will say it's "Tha Bomb".
(ref: http://www.fudzilla.com/content/view/17690/1/ )
Well, there you have the issue that a fraction of the work is likely to be specific to the A2 samples, and it may not be terribly likely that they can get their full test suite running properly if the chips aren't production-ready (from what I understand, they use a very large number of computers for driver validation).
If the 5870 will walk all over the 14SM variant, how high are the chances of a 16SM variant making any worthwhile difference?
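As a rough yardstick, assuming GF100's 32 SPs per SM and equal clocks (and ignoring that disabled SMs also take their TMUs with them):

sps_per_sm = 32                 # GF100: 16 SMs x 32 SPs = 512 SPs total
full, cut = 16 * sps_per_sm, 14 * sps_per_sm
print(f"{full} SPs vs {cut} SPs -> {full / cut - 1:.1%} more ALU throughput")

So about 14% more shader throughput at best, which seems unlikely to turn things around if the 448SP part really does get walked over.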