NVIDIA GF100 & Friends speculation

Doubtful. Highly doubtful. If something could be done in sw with no penalty, it would be done in sw. But for an operation as simple as bit manipulation, doing it in hw makes much more sense.

AFAIK, both the G80 and GT200 ALUs have it. Stuff like this is explained in the hardware docs.

If they don't have hw support for it, then of course they'll need sw emulation for it.
Thanks for the detailed answer.

By the way, someone ran version 2 of the Heaven benchmark and found a huge performance gain in the dragon scene (nearly 40%), using an HD5870 of course:

http://translate.google.com/transla...rabhardware.net/forum/showthread.php?t=164761
 
Fermi 1 die-shrink ala G92 and G92x2.

They won't do it unless they really have no choice at all. In recent history, nv has done dual cards only when it lost the halo with its biggest GPU. Making a dual-GPU card first up would go against that track record.

But why do you think their hand will be/has been forced into making a GF92?
 
It now has 3 tessellation options: Disabled, Enabled and Extreme. The problem is I don't have a DX11 card :LOL:.

So is there a performance increase, or a lowering of detail? Is the original one "extreme"? That is my assumption.
 
So Fermi2 must have been pulled ahead. That would mean Fermi2 will be another monster die.

How do you figure that?

I really don't get the assumption some have that if a strategy somehow fails for a certain company (which here is always NVIDIA), that company will keep insisting on it.

That's not even true for a single person, much less for a money-making company...
 
How do you figure that?
If I may appeal to the RV770 story, then in the words of Carell Killebrew, "They aren't going to lose".

As far as changing strategies is concerned, maybe you should read (and try to grasp) Anand's pieces on how difficult it is to change company-wide strategies.

Also note that Fermi2 would have been conceived around the G92/RV670 launch time. Why would they want to change strategies then?
 
So is B1 canned, or does it not exist at all??

Or is Kyle's info bogus?

Kyle didn't even say that there won't be any B1. He said he HEARD there was not going to be any B1. Two entirely different things.

IMO, it's true. If A3 is going to ship, what is so fundamentally wrong that it would need to be fixed in B1? And if there is something fundamentally wrong, why is it shipping as A3?

Never really made much sense to me...
 
If I may appeal to the RV770 story, then in the words of Carell Killebrew, "They aren't going to lose".

As far as changing strategies is concerned, maybe you should read (and try to grasp) Anand's pieces on how difficult it is to change company-wide strategies.

Also note that Fermi2 would have been conceived around the G92/RV670 launch time. Why would they want to change strategies then?

It seems you give too much credit to those types of "articles" :)

One and only one thing is relevant to a money-making company and that is... well... making money. If their current strategy is not yielding enough profits, they will change it. It's that simple. Obviously the process of changing that strategy may take a while, but it will be changed sooner or later.

As for Fermi 2 being "conceived" around the time of G92 and RV670, well, there you go. A focus on smaller chips to increase profits and decrease production costs was certainly the key point during that time, for both ATI and NVIDIA. At that time, they also knew that Fermi 1 was going to be big @ 40 nm, so Fermi 2 is probably an effort to reduce the transistor count or tweak parts of the architecture that could be improved with time and enough testing.
 
Hey, swap "PhysX" for "Tessellation" and you have the same situation with Evergreen. :LOL:
I don't own an Evergreen card, but just try running PhysX on a midrange NV part at the same time you're gaming on it and you can laugh yourself blue. Pretty much anything short of a GTX285 is going to choke itself to death playing at 1920x1200 with all the eye candy on and PhysX enabled. It would be worse than a slideshow on a GT220.

In your example, tessellation is at least usable and playable at epic resolutions on any of the 58xx series from ATI. And probably the same with the upcoming Fermi. I'm suspicious (at best) that the same can be said about PhysX running smoothly on Fermi with the rest of the eye candy on, especially given its current performance expectations.
 
In your example, tessellation is at least usable and playable at epic resolutions on any of the 58xx series from ATI. And probably the same with the upcoming Fermi. I'm suspicious (at best) that the same can be said about PhysX running smoothly on Fermi with the rest of the eye candy on, especially given its current performance expectations.

Depends on what you call tessellation. Games with laughable tessellation, such as Dirt 2 and Stalker: CoP, would run just fine - but if the Unigine benchmark were a game, try running it at 1920 with tessellation and AA enabled on a 5850. The framerate would drop into the teens.

Also the game Metro 2033 - while bringing nearly no visual benefit with tessellation - kills even 5800-series cards when tessellation is turned on.
 
Depends on what you call tessellation. Games with laughable tessellation, such as Dirt 2 and Stalker: CoP, would run just fine - but if the Unigine benchmark were a game, try running it at 1920 with tessellation and AA enabled on a 5850. The framerate would drop into the teens.
Oh, so games that perform admirably are dismissed, but synthetic benchmarks prove the point? Err...

Also the game Metro 2033 - while bringing nearly no visual benefit with tessellation - kills even 5800-series cards when tessellation is turned on.
And as has been discussed, is that due to tessellation, or due to shadowing that monumentally changes load with the same settings change? In fact, last I recall, tessellation wasn't a specific feature you could enable/disable; instead it was an entire switch to DX11 or !DX11. What else truly changes?
 
Albuquerque said:
Oh, so games that perform admirably are dismissed, but synthetic benchmarks prove the point? Err...

No, things which actually use tessellation for more than drawing the button on a certain guard's left shirt pocket should be counted. Dirt 2 and Stalker: CoP (which BTW is far from performing admirably, with or without tessellation) bring next to nothing with tessellation. On the other hand, the Unigine benchmark delivers clearly great-looking graphics THANKS to tessellation, such as the cobblestones and such.

Albuquerque said:
In fact, last I recall, tessellation wasn't a specific feature you could enable / disable, but instead it was entire "Switch" to DX11 or !DX11. What else truly changes?

No, it can be specifically enabled or disabled, along with Advanced DOF. Again, I'm not using Metro 2033 as proof of anything, since I have yet to see any benefit from its tessellation, but it sure brings the cards to their knees.
 
Yep, you might as well make the poll question "ATi vs Nvidia?", since that's going to produce the same result. Besides, there are already lots of polls about PhysX on bigger sites. But the question isn't whether you would buy something for PhysX; that is silly. The question is whether you would take it for free. That's essentially what's being offered to Nvidia's customers.

How do you capture the "normal people" who google for Batman reviews and come across this?

Interesting:

http://hardforum.com/showthread.php?p=1035300182

[H] readers say:

PhysX: 37 (12.89%)
Eyefinity: 210 (73.17%)
Don't Care: 40 (13.94%)
 