NVIDIA GF100 & Friends speculation

Picao, I approved your post, but I'll go out on a limb and suggest you reconsider your theory, as it's not quite realistic or aligned with the realities of this business.
 
Hello All!

It's my first post here at Beyond3D, though I've been a lurker for a long time.
I'm going out on a limb here and guessing that GF100 is NOT the same chip as the Fermi compute part.
Why?

Because of these posts at Xtreme Systems:

http://www.xtremesystems.org/Forums/showpost.php?p=4243892&postcount=458

http://www.xtremesystems.org/Forums/showpost.php?p=4244268&postcount=491

http://www.xtremesystems.org/Forums/showpost.php?p=4244316&postcount=498

Gemini is always viewed as dual Fermi. But Gemini is also the word for "twin", and there are fraternal (non-identical) twins. So my guess is that GF100 is GT300's twin, but not a 100% identical one. That's why they were presented at two different times. And internally at nVIDIA, Gemini might be the word for GF100.

With this in mind:
- GF100 might really be a smaller chip. Remember the guy who claimed to be an ex-nvidia employee said GF100 doesn't like to be called fatty ;)
- Maybe software is really what is delaying the GF100 launch. Parallelizing geometry probably isn't easy.

Cheers

Seeing as how most of those posters haven't substantiated very much in their postings (and, for that matter, neither has that "ex-employee"), I wouldn't put too much thought into it.
 
There would be limits to what could be expected for TMU and ROP throughput gains when memory bandwidth grew modestly over GT200.
If one were to expect gains anywhere, it would either be in places where this is not as great a problem, or the TMUs and ROPs are made more efficient within the scope of the peak numbers provided, which Nvidia claims to have done.

In games like COD 4, FEAR 2, and GRID, the 4870 was almost as fast as the GTX 280, which had double the ROPs and TMUs, and the clocks were quite close (750 vs 600).
So it seems that GT200 wasn't too efficient with those ROPs and TMUs in real games and was limited elsewhere.
 
In games like COD 4, FEAR 2, and GRID, the 4870 was almost as fast as the GTX 280, which had double the ROPs and TMUs, and the clocks were quite close (750 vs 600).
So it seems that GT200 wasn't too efficient with those ROPs and TMUs in real games and was limited elsewhere.

More likely the specific games you've cited are more math-bound than fillrate-bound.
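To put some rough numbers on that argument, here is a back-of-the-envelope sketch using the public specs of both cards (HD 4870: 16 ROPs, 40 TMUs, 800 SPs at 750 MHz; GTX 280: 32 ROPs, 80 TMUs, 240 SPs at a 1296 MHz shader clock, counting 3 flops/clock with the dual-issue MUL). The function names are just illustrative:

```python
# Hedged back-of-the-envelope comparison of theoretical peak throughput.
# Specs are the commonly published ones; real-world rates differ.

def pixel_fill(rops, core_mhz):
    """Theoretical pixel fill rate in Gpixels/s."""
    return rops * core_mhz / 1000.0

def texel_fill(tmus, core_mhz):
    """Theoretical texel fill rate in Gtexels/s."""
    return tmus * core_mhz / 1000.0

def gflops(sps, shader_mhz, flops_per_clock):
    """Theoretical single-precision GFLOPS."""
    return sps * shader_mhz * flops_per_clock / 1000.0

hd4870 = (pixel_fill(16, 750), texel_fill(40, 750), gflops(800, 750, 2))
gtx280 = (pixel_fill(32, 602), texel_fill(80, 602), gflops(240, 1296, 3))

print(f"HD 4870: {hd4870[0]:.1f} Gpix/s, {hd4870[1]:.1f} Gtex/s, {hd4870[2]:.0f} GFLOPS")
print(f"GTX 280: {gtx280[0]:.1f} Gpix/s, {gtx280[1]:.1f} Gtex/s, {gtx280[2]:.0f} GFLOPS")
# On paper the GTX 280 leads by ~60% in pixel/texel fill, yet the HD 4870
# leads by ~29% in peak ALU throughput -- so near-parity in those games
# points at math, not fillrate, as the limiter.
```

In other words, the on-paper fillrate gap never shows up in those titles, which is what you'd expect if the shaders, not the ROPs/TMUs, are the bottleneck.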
 
Although Picao raises one interesting question: is there much software programming involved in NVidia's solution to tessellation? It's a remarkable hardware solution, but in all the fuss I forgot about drivers. How will their solution translate to drivers?
 
Although Picao raises one interesting question: is there much software programming involved in NVidia's solution to tessellation? It's a remarkable hardware solution, but in all the fuss I forgot about drivers. How will their solution translate to drivers?

Do you mean software work on the part of the company coding games, or the people coding drivers at NV, or both?

-Charlie
 
Although Picao raises one interesting question: is there much software programming involved in NVidia's solution to tessellation? It's a remarkable hardware solution, but in all the fuss I forgot about drivers. How will their solution translate to drivers?
Unfortunately, I don't think that's something that we'll fully understand until the card comes out. But given what we know so far, it would seem exceedingly odd if there were much of any processing on the driver side with regards to tessellation, given nVidia's obviously very strong focus on making the GF100 a very fast GPU in this regard.
 
Given the time they have and a March(?) release, the first drivers surely won't work perfectly in some games and applications. They have CUDA applications, PhysX games, and the new stereo gaming is pure software. That's a lot of work and testing :rolleyes:.
 
And they will have to optimize for the most common benchmarks, as they would fail otherwise. With the cut-down 448SP version being released and immature drivers, the 5870 would walk all over them.

But maybe they will have something like 30 working 512SP cards as a press edition.
 
And they will have to optimize for the most common benchmarks, as they would fail otherwise. With the cut-down 448SP version being released and immature drivers, the 5870 would walk all over them.

Which would suggest that their driver team was sitting around only killing flies for several months until March.

But maybe they will have something like 30 working 512SP cards as a press edition.

If the 5870 will walk all over the 14SM variant, how high are the chances of a 16SM variant making any worthwhile difference?
 
Yes, and then when someone asks you what that oversized keychain in your jacket is, you will say it's "Tha Bomb" :mrgreen:
(ref: http://www.fudzilla.com/content/view/17690/1/ )

I laughed at that one yesterday; the bloke could have said his sperm was nuclear and he still would have been arrested LOL. Jokes aside, it's all but impossible that the PCs they showed at CES contained anything other than A2 samples.
 
And 2009 A2 samples were used for key-chains only? :LOL:
Well, there you have the issue that a fraction of the work is likely to be specific to the A2 samples, and it may not be terribly likely that they can get their full test suite running properly if the chips aren't production-ready (from what I understand, they use a very large number of computers for driver validation).

Anyway, that said, nVidia has done quite a good job of having some pretty high-performance drivers out for their products at launch since the NV40, so I'm expecting that driver performance won't be much of an issue here. I'm just not sure that the launch delay will have helped them further in improving driver performance.
 
The primary target (even more so for a new architecture) is stability. When they had their first operational chip in their hands, that must have been the point where they could set more specific performance landing zones for games. Performance tuning beyond those landing zones typically comes from all IHVs some reasonable time after release.
 
Which would suggest that their driver team was sitting around only killing flies for several months until March.
If the 5870 will walk all over the 14SM variant, how high are the chances of a 16SM variant making any worthwhile difference?

It was an ironic remark about the speculation that NV's drivers could be exceptionally immature at launch, which is highly unlikely considering that they even had working A1 samples. Obviously drivers for a new generation will have room to grow, but I see no reason why the launch drivers should be worse than usual.

The "everything done at NV sucks" way of thinking is something I can only take with some irony.*

(* not saying that they do not need to execute better again)
 