NVIDIA GF100 & Friends speculation

"tacked" on?
Either code is DX10...or it's not.

"Tacked on" meaning it was not included in the original codebase, not designed from the ground-up with this code in mind.

For a perfect example of DX10 as an afterthought, see Microsoft Flight Simulator X.
 
Unigine 2.0 is not the same benchmark as Unigine 1, there is a zeppelin and stuff.

The dragon remains intact, though, and at the dragon scene there is definitely a lot of FPS improvement. However, Unigine 1 and 2 bench results cannot be compared because the scenes are nearly completely different.
 
From the looks of it, most people would choose eyefinity over physX if they had to pick just one.

No, most people clicked an option in an internet poll. The only real vote is ponying up money for a new video card and three monitors. How about we start a poll of people who have actually "chosen" with their wallet? PhysX has a few million head start, how many people are running three monitors?

Feel free to produce another poll proving your point.

What point would that be? The point is that it's a freebie when you buy an Nvidia card and hence is an added value. Popularity polls on [H] don't change that.
 
The Eyefinity vs PhysX is the most retarded comparison I have ever seen in my life. They are completely unrelated to each other. PhysX gives you extra eye candy in supported games. Eyefinity gives you the ability to play games in multiple monitor setups. How can they be compared?

I'd totally get it if it was PhysX vs. Bullet or Havok. Or if it was Eyefinity vs 3dvision (even these can't easily be compared) but physics simulation vs multiple monitors??
 
Thanks for the tip, but I was more asking a question than making a comment there.
As processes mature over time, defect density goes down and yields shoot up.

So rather than it being unclear how yields will improve over time for the same chip, it's hard to see how they would not go up.
 
As processes mature over time, defect density goes down and yields shoot up.

So rather than it being unclear how yields will improve over time for the same chip, it's hard to see how they would not go up.

See my original post: I said I didn't get how they would go up "a lot" more by themselves. If the power draw (130 W more than the HD 5870 at load) and the temperature situation of this card (92 °C at load) are as bad as reported, and keeping in mind that Nvidia was unable to release the 512 SP part at all, I would think they would give B1 a go as soon as they could, rather than waiting for yields to go up. I would also think the yields would have to go up as soon as possible, seeing as an HD 5890 is probably going to be just as fast as the $500+ GTX 480.
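The "defect density goes down, yields shoot up" point can be sketched with the simple Poisson yield model. This is just an illustration: the die area (~5.3 cm² for GF100) and the D0 values below are rough assumptions, not actual TSMC numbers.

```python
import math

def poisson_yield(defect_density, die_area):
    """Fraction of defect-free dies under the simple Poisson yield model.

    defect_density: defects per cm^2 (D0)
    die_area: die area in cm^2
    """
    return math.exp(-defect_density * die_area)

# Illustrative D0 values for a maturing process; GF100 assumed ~5.3 cm^2.
for d0 in (1.0, 0.5, 0.25):
    print(f"D0 = {d0:.2f}/cm^2 -> yield ~ {poisson_yield(d0, 5.3):.1%}")
```

The exponential dependence on die area is why a small drop in defect density helps a huge chip like GF100 far more than it helps a mid-size die.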
 
The Eyefinity vs PhysX is the most retarded comparison I have ever seen in my life. They are completely unrelated to each other. PhysX gives you extra eye candy in supported games. Eyefinity gives you the ability to play games in multiple monitor setups. How can they be compared?

I'd totally get it if it was PhysX vs. Bullet or Havok. Or if it was Eyefinity vs 3dvision (even these can't easily be compared) but physics simulation vs multiple monitors??

True. The question is who buys a video card based on PhysX support. I tried to find a poll for this and could only find that one at [H].
 
True. The question is who buys a video card based on PhysX support. I tried to find a poll for this and could only find that one at [H].
http://www.forum-3dcenter.org/vbulletin/showthread.php?t=465369&highlight=physx+wichtig
Here is a poll:

Option 1: No, I don't care about PhysX (151 votes)
Option 2: I prefer Radeons but I'm interested in PhysX via Ageia or GeForce (26 votes)
Option 3: I prefer GeForce and take the advantage of PhysX along with me (96 votes)
Option 4: I only buy GeForce because of PhysX (16 votes)
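For reference, the raw counts in that poll work out to the following shares (a quick tally; the option labels here are paraphrased):

```python
# Vote counts from the 3DCenter poll quoted above.
results = {
    "Don't care about PhysX": 151,
    "Prefer Radeon, interested in PhysX": 26,
    "Prefer GeForce, PhysX is a bonus": 96,
    "Buy GeForce only because of PhysX": 16,
}

total = sum(results.values())  # 289 respondents
for option, votes in results.items():
    print(f"{option}: {votes} ({votes / total:.1%})")
```

So only about 5–6% of respondents say PhysX alone drives their purchase, while just over half don't care about it at all.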
 
Option 5: I only post in the GT300 thread about GT300-related topics (100%)

Come on guys three days left, I know you can do it. Yes we can! :p
 