NVIDIA GF100 & Friends speculation

The real problem is that no AIBs/OEMs have silicon yet. That is generally where the leaks come from, and we are told that they won't get silicon until (likely late) February. Even now, the number of chips in the wild is vanishingly small, and that is not on purpose: NV lacks good chips.
I'm not knocking or disagreeing with you, Charlie, but a number of nVidiots have been telling me that some AIBs have working silicon... so can I feel safe in calling them liars? (Personally I think they are, but I'd like at least some level of verification first. ;) )

 
Huh? Your abject disappointment was explicitly laid out in several of your posts. There wasn't much room for misinterpretation :LOL: If you're looking for an agenda I think you'll have more luck elsewhere.

And you translated disappointment over a video of a benchmark into "drawing far-reaching conclusions". Instead of going after the ball, you go after the man. Instead of arguing against my opinions, you try to discredit and patronize me. What's your agenda?

I mean, at first you complain that the leaked video I am referring to wasn't released or shown publicly by Nvidia, and then you say that the benchmark is (as in is, not could be) run on a stripped-down Fermi part because you read that on a Chinese forum. It's like you're washing the posts through an Nvidia PR agency to put a spin on them. I'm just disappointed with what I have seen so far. I expected more. Deal with it.
 
I expected more.

60% over GT200 from a 448-core Fermi in an older title seems ok to me. What were your expectations?

With respect to the rest of your post, there's no need to get emotional. Not a single one of my statements was directed at you personally. As I said before, I was simply pointing out that your deep disappointment is based on incomplete information and lots of guesswork. I'm sorry if that offends you in some way.

The Chinese post comes into play because it correlates with the GTX 285 and HD 5870 numbers from PCGH. Pointing that out doesn't equate to taking it as gospel truth, though.
 
I don't see how that game performs better on GeForce hardware?

Far Cry 2:
HD 4870X2 is better than GTX 295
HD 4890 is equal to GTX 285

http://images.anandtech.com/graphs/rv870_092209122344/20103.png

I sense an unbelievable amount of prejudice toward Nvidia and Fermi. I don't know why, but people are trying to strip the card of any expected advantage, and I think that should stop.

An interesting thing to note is that Far Cry 2 is a game that supports up to DX10.1. It seems DX10.1 is used for improvements there, though an Nvidia GPU might use this too, no?

Also, please state the ATI CCC settings (the AI setting) and the Nvidia CP quality setting. These can make a drastic difference.
 
Older titles won't see the same benefits as newer titles. Estimates based on what people who saw demos/games running on Fermi are saying: ~1.8X faster than a GTX 285 in older titles, and up to 2.2X faster in newer titles for a full Fermi chip.
 
Older titles won't see the same benefits as newer titles. Estimates based on what people who saw demos/games running on Fermi are saying: ~1.8X faster than a GTX 285 in older titles, and up to 2.2X faster in newer titles for a full Fermi chip.
So can we say that the GTX 380 is the 512-core chip and the GTX 360 is the 448-core one?
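If so, a quick back-of-the-envelope ties these rumours to the earlier "60% over GT200" figure. This is my own arithmetic, assuming performance scales linearly with shader count at equal clocks, which ignores bandwidth and setup limits:

```python
# Back-of-the-envelope scaling, assuming performance scales linearly
# with shader count at equal clocks (a big assumption: memory
# bandwidth and setup rate don't scale with SP count).

gtx285 = 1.0                        # baseline
fermi_448 = 1.6 * gtx285            # rumoured ~60% lead of the 448-SP part

fermi_512 = fermi_448 * 512 / 448   # hypothetical full 512-SP chip
print(f"Full chip vs GTX 285: ~{fermi_512:.2f}x")  # ~1.83x
```

Which happens to land right on the ~1.8X-in-older-titles figure quoted above, for whatever these rumours are worth.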
 
Is there a way to put AIBs/IHVs under an NDA that keeps them from saying they do NOT have any chips yet? I'd think they could only be barred from talking about it once they actually had chips. :???:
 
CrisRay (with all respect) has been using the expression "impressive" about Fermi in various forums. Nvidia has called it impressive.

Specifically, I said Fermi's tessellation engine is impressive. I think it's the biggest investment Nvidia has put into a new API to accelerate new API features in a very long time. And what I mean by that is that Nvidia's tessellation engine is certainly not implemented in a half-assed way. And I stand by that statement :) It won't be long before everyone has all their information about it.
 
There's an interesting thing happening in forums as these revelations come out. Months ago, there was much optimism and props given to AMD for their focus on tessellation in DX11, and from that came the assumption that NVidia put no work into it, and that if they supported it at all, it would be some late addition: half-assed, bolted-on, or emulated tessellation that would not perform as well as AMD's. I'll note for the record that much the same story was repeated with Geometry Shaders (speculation that NVidia would suck at it, and that the R600 would be the only 'true' DX10 chip). AMD has had some form of tessellation for several generations, all the way back to N-patches, so there's some logic to these beliefs. Also, the Fermi announcement mentioned nothing about improvements to graphics (only compute), so there has been a tacit assumption that the rest of the chip is basically a G8x with Fermi CUDA tacked on.

But as more and more leaks seem to indicate that NVidia has invested significant design work into making tessellation run very fast, it seems like some are in disbelief, while others are now starting to downplay the importance of tessellation performance and benchmarks (whereas once it was taken for granted that this was AMD's strong point). If NVidia has indeed significantly boosted triangle setup, culling, and tessellation, this could be G80 all over again, where the total lack of information caused people to assume the worst and the final chip came as a big surprise. I think they deserve much props if they did increase the setup rate.
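To see why setup rate suddenly matters so much with DX11, consider how quickly tessellation amplifies geometry. A rough sketch of mine, using the approximation that a triangle-domain patch with integer partitioning and all factors equal to n emits on the order of n² triangles:

```python
# Rough geometry amplification under DX11 tessellation: a triangle
# patch tessellated with factor n yields roughly n^2 triangles
# (approximation for integer partitioning with equal factors).

base_patches = 100_000              # hypothetical input mesh
for n in (1, 4, 8, 16, 32, 64):     # 64 is the DX11 maximum factor
    print(f"factor {n:2d}: ~{base_patches * n * n:,} triangles")
```

At factor 16, a 100k-patch mesh already wants ~25M triangles per frame; a chip that can only set up one triangle per clock would choke on that long before the shaders break a sweat.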

As Mint said, it's been far too long to leave this part of the chips unchanged. Setup seems exactly where it was 10 years ago.

Yup.. from NV3x to G80, geometry setup has improved what, by a factor of two? Improving geometry setup is something that needs to be done... so did Fermi do that? ;)
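Putting rough numbers on that, assuming the long-standing ~1 triangle per clock setup rate and ballpark core clocks (both my assumptions, so treat this as illustrative):

```python
# Peak setup throughput if each chip sets up ~1 triangle per clock
# (the long-standing assumption); core clocks are ballpark figures.

chips = {
    "NV30 (GeForce FX 5800 Ultra, 2003)": 500e6,
    "G80 (GeForce 8800 GTX, 2006)":       575e6,
    "GT200 (GeForce GTX 280, 2008)":      602e6,
}
for name, clock in chips.items():
    print(f"{name}: ~{clock / 1e6:.0f}M triangles/s peak")
```

Going from ~500M to ~600M triangles/s in five years is barely a 1.2x gain, which is why a reworked, parallel setup/tessellation path in Fermi would be such a departure.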
 
Specifically, I said Fermi's tessellation engine is impressive. I think it's the biggest investment Nvidia has put into a new API to accelerate new API features in a very long time.
New API? That is a strange choice of words ... we are talking about DirectX 11 tessellation right?
 
It won't be long before everyone has all their information about it.

I'm still amazed that the NDA expiration is so close and there have been no big leaks. How did Nvidia contain things this time?

Now I'm sure a lot of people already have info behind the scenes, but I expected slides or something to have been posted by now. The link DavidGraham just posted says something about info being released tomorrow, but it's going to be Monday for a global release? Does anyone have solid info on when the NDA actually expires?

There's an interesting thing happening in forums as these revelations come out.

It's really fascinating to watch. The anti-Fermi campaign is in full swing. :)

If the rumours are true, then colour me surprised as well. I didn't expect piss-poor geometry performance, but I also didn't expect big things. Knowing Nvidia, they will try to push any perceived advantage, so it'll be interesting to see how they try to influence developers if they do in fact have a big advantage in geometry processing.
 