For The Last Time: SM2.0 vs SM3.0

Which is why they're supporting 3Dc, of course.

And Half-Life 2 has mixed-mode code paths; where are Far Cry's fallbacks for older cards that can do similar effects? Are they exempt because they arbitrarily set a minimum spec for no reason?
 
How about the fact that they have a multipass lighting algorithm for PS 2.x video cards? Put simply, their HDR is incompatible with the lighting they do unless FP framebuffer blending is supported. The alpha-blended grass doesn't help things either.
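For what it's worth, the FP framebuffer blending requirement is something an engine can query at startup rather than guess at. Here's a minimal Direct3D 9 sketch; the helper name and the choice of FP16 as the HDR format are my assumptions, not anything from Crytek:

```cpp
#include <d3d9.h>

// Minimal sketch: ask the runtime whether the adapter can alpha-blend into an
// FP16 render target, which is the capability HDR-with-blending depends on.
// The FP16 format choice here is an assumption for illustration.
bool SupportsFP16Blending(IDirect3D9* d3d, UINT adapter, D3DFORMAT displayFormat)
{
    HRESULT hr = d3d->CheckDeviceFormat(
        adapter,
        D3DDEVTYPE_HAL,
        displayFormat,
        D3DUSAGE_RENDERTARGET | D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING,
        D3DRTYPE_TEXTURE,
        D3DFMT_A16B16G16R16F);   // 64-bit floating-point render-target format
    return SUCCEEDED(hr);
}
```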
 
Correct me if I am wrong, as I certainly could be,

Doesn't HDR exclude the eye candy of AA? As I understand it, AA cannot be used with the HDR Crytek may patch in.
 
DaveBaumann said:
Who's talking about emulation, and who's said that this is something that has already occurred? After all, Vertex Instancing isn't available anywhere yet. [Edit: Whoops, as Dean points out, with the exception of OGL!]
What do you mean not available yet? Isn't the Nalu demo about instancing?
 
The thing about this subject and others like it is that it's completely pointless.

First it was PS 1.1 vs. 1.4, then 2.0's FP16/FX12/FP24/FP32... Now it's SM3.0 vs. SM2.0 vs. SM2.0b.

Each generation or sub-generation, a mere 6-8 months apart, changes all the arguments that people fought over, argued, and insisted on, and makes them totally moot. None of this matters at all as far as I can see. Worst-case scenario for anyone: wait till October and see the next round. When that round comes out, the debate will just get taken to the next level, the next technology, the next implementation, and so on.

The only thing that matters is whether the card you can afford can play all the games you want to play, with the options that matter to you, at an FPS you can live with.

Honestly, if I were buying a new card today I would get an NV40, simply because overall it's a better piece of hardware. If I had just gotten a 9700- or 9800-level card, I would keep that until the next round and not get anything.

The real acid test of whether the X800s are going to cut it will be Doom III, HL2, and STALKER. The Far Cry 1.3 patch is already dagger in the back number one; if quality and performance are better on another one or two of the games mentioned above, I think it's a no-brainer which architecture you should put your money into. IMO, to determine the real impact of SM3 vs. SM2, don't take any action on a new card for now until a few more games with at least rudimentary SM3 support are released.

(And don't be so sure about that Valve "friends with ATI" thing. You may be surprised by the end results you see.)
 
poly-gone said:
Geometry Instancing is supported on VS 3.0 cards for all shader versions from 1.1 to 3.0
The DX docs seem to suggest otherwise.

It's changed in the latest version; not sure if they have updated the docs yet.

If the card supports VS 3.0, you can use vertex instancing in any/all code regardless of the vertex shader version currently set (including fixed function). It makes sense when you think about it: geometry instancing has nothing to do with shaders, since it's really programming the DMA subsystem.

It's really a bug in the specification that links the two together at all (we should have a cap bit for vertex instancing separate from the vertex shader version). The same thing almost happened with VS 2.0 and the newer vertex types when DX9 came out (if you read the DX9 headers they still claim the two are linked); luckily it was caught, and even the ATI 8500 can use some of the new vertex types.
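For anyone who hasn't poked at it yet, the whole thing is driven through the stream-frequency interface rather than through the shader itself. A rough Direct3D 9 sketch of an instanced draw follows; the buffers, vertex layouts, and the DrawInstancedMesh wrapper are placeholders for illustration, not code from any shipping title:

```cpp
#include <d3d9.h>

// Hypothetical per-stream layouts; only their sizes matter for this sketch.
struct MeshVertex   { float pos[3]; float normal[3]; float uv[2]; };
struct InstanceData { float world[4][4]; };

void DrawInstancedMesh(IDirect3DDevice9* device,
                       IDirect3DVertexBuffer9* meshVB,
                       IDirect3DIndexBuffer9* meshIB,
                       IDirect3DVertexBuffer9* instanceVB,
                       UINT numMeshVertices, UINT numMeshTriangles,
                       UINT numInstances)
{
    // Stream 0: the shared mesh, replayed numInstances times.
    device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | numInstances);
    device->SetStreamSource(0, meshVB, 0, sizeof(MeshVertex));

    // Stream 1: per-instance data, stepped once per instance instead of per vertex.
    device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);
    device->SetStreamSource(1, instanceVB, 0, sizeof(InstanceData));

    // One draw call submits every instance.
    device->SetIndices(meshIB);
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                 numMeshVertices, 0, numMeshTriangles);

    // Restore default frequencies so later, non-instanced draws behave normally.
    device->SetStreamSourceFreq(0, 1);
    device->SetStreamSourceFreq(1, 1);
}
```

Note there's no shader call anywhere in that path, which is exactly the point: the per-instance stream just shows up as extra vertex components.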
 
DeanoC said:
I wonder if bones in a vertex texture and vertex frequency for other instance data might be a win in PC land (if you can hide the latency by doing lots of vertex ops), but I haven't tried it.
If constant storage is a problem, could you pack some of the animation data into a few different vertex components and run those at a much lower frequency, i.e. only once per instanced character?
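On the vertex-texture half of that idea, the binding side in Direct3D 9 would look roughly like the sketch below; BindBoneTexture, the bone texture itself, and the three-texels-per-bone packing are assumptions for illustration, not anyone's actual engine code:

```cpp
#include <d3d9.h>

// Sketch: bone matrices stored in a floating-point texture (e.g.
// D3DFMT_A32B32G32R32F, three RGBA texels per 3x4 matrix) and exposed to a
// vs_3_0 shader through a vertex texture sampler.
void BindBoneTexture(IDirect3DDevice9* device, IDirect3DTexture9* boneTexture)
{
    // D3DVERTEXTEXTURESAMPLER0 is the first of the four vs_3_0 texture samplers.
    device->SetTexture(D3DVERTEXTEXTURESAMPLER0, boneTexture);

    // Vertex texture fetch is unfiltered on this generation of hardware, so
    // sample with point filtering and have the shader address exact texels.
    device->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
    device->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
    device->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_MIPFILTER, D3DTEXF_NONE);
}
```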
 
Evildeus said:
What do you mean not available yet? Isn't the Nalu demo about instancing?
No, I don't think that demo used any geometry instancing. Their asteroid demo showed that feature off. The Nalu demo showed off, among other things, dynamic branching to write an "Uber shader."
 
Hellbinder said:
The Far Cry 1.3 patch is already dagger in the back number one,

I look at this a different way. It seems to me that the NV hardware needed an SM3.0 patch just to be competitive with ATI's new offerings in a shader-heavy game like Far Cry.

Since SM2.0 is pretty well going to be the baseline programming model for a long time, the X800 is looking pretty good if Far Cry is any indication. Are the 6800s going to require an SM3.0 patch in new games (with a fair number of shaders) to be competitive with the ATI cards? Like in TRON 2.0, for instance. If so, how many patches is NV going to fork $ out for?
 
It's also worth noting that NV3x hardware seems likely to go downhill from here on. TWIMTBP developers will be used to push NV4x with PS 3.0, and thus the FX cards will get the treatment that Valve suggested: DX8.1 codepath by default, PS2.0 at your own risk 'cause we had no time left to optimize it...
 
Simon F said:
DeanoC said:
I wonder if bones in a vertex texture and vertex frequency for other instance data might be a win in PC land (if you can hide the latency by doing lots of vertex ops), but I haven't tried it.
If constant storage is a problem, could you pack some of the animation data into a few different vertex components and run those at a much lower frequency, i.e. only once per instanced character?

Too many bones. We currently have a blended animation system, and even with caching between soldiers we still get several hundred different bone matrices in an army. BTW, by armies I'm talking hundreds (tending toward thousands) of skinned characters.

To fully batch we would need a very large constant store; arbitrary memory reads from the shader or vertex texturing are the techniques we are looking at. The other technique would be some kind of multi-pass render-to-vertex-buffer approach, using the pixel shader hardware to do the skinning.
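To put a rough number on the constant-store side of that: a 3x4 bone matrix eats three float4 registers, so even a full VS 3.0 constant file tops out somewhere around 70-80 bones per draw call. A back-of-the-envelope check against the device caps (the helper and the reserved-register figure are illustrative assumptions, not numbers from any particular engine):

```cpp
#include <d3d9.h>

// Rough estimate of how many bones fit in the vertex shader constant file.
// Assumes three float4 registers per 3x4 bone matrix and reserves a handful
// of registers for transforms, lights, etc. (the reserve count is arbitrary).
UINT MaxBonesInConstants(IDirect3DDevice9* device, UINT reservedRegisters = 32)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);

    // MaxVertexShaderConst is typically 256 on VS 2.0/3.0 parts, giving
    // (256 - 32) / 3 = ~74 bones -- far short of the several hundred matrices
    // an on-screen army needs, hence vertex texturing or render-to-vertex-buffer.
    return (caps.MaxVertexShaderConst - reservedRegisters) / 3;
}
```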
 
Blastman said:
Hellbinder said:
The Far Cry 1.3 patch is already dagger in the back number one,

I look at this a different way. It seems to me that the NV hardware needed an SM3.0 patch just to be competitive with ATI's new offerings in a shader-heavy game like Far Cry.

Since SM2.0 is pretty well going to be the baseline programming model for a long time, the X800 is looking pretty good if Far Cry is any indication. Are the 6800s going to require an SM3.0 patch in new games (with a fair number of shaders) to be competitive with the ATI cards? Like in TRON 2.0, for instance. If so, how many patches is NV going to fork $ out for?
Not once the 1.3 Far Cry patch is released with noticeably increased IQ at playable FPS.
 