For The Last Time: SM2.0 vs SM3.0

Blacklash

Newcomer
I keep getting conflicting answers to a question I'd like settled. This is purely from a gamer/user perspective, NOT a coder or developer one. I am not interested in things happening at the driver level.

So here's the question: is there any visible effect in a game that SM 3.0 can produce that SM 2.0 cannot with a little more work? If I do go with the X800XT over the 6800GT, will I be missing 'neat effects', as some have put it?

Thanks in advance.
 
Yes, but not in the pixel shader. The biggest difference you'd notice would be the FP16 texture filtering and blending. Without these features, to get high dynamic range rendering, you are either going to have to utterly kill performance (i.e. it will no longer be realtime) or restrict yourself in the types of algorithms you can execute.

Blending is commonly used in transparent objects as well as explosions/fire/smoke. Without floating-point blending, you just can't render such things with high dynamic range properly. So, one thing I would expect that could visually differentiate SM3 from SM2 would be more realistic explosions and fire effects.
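To make that concrete, here's a minimal sketch (my own illustration with placeholder names, not code from any shipping engine) of how a Direct3D 9 application can ask whether the card supports FP16 filtering and blending:

    #include <d3d9.h>

    // Sketch: query whether D3DFMT_A16B16G16R16F (FP16) surfaces can be
    // filtered and blended. pD3D and displayFormat come from the app's
    // normal initialization code.
    bool SupportsFP16HDR(IDirect3D9* pD3D, D3DFORMAT displayFormat)
    {
        // FP16 texture filtering (bilinear/trilinear on float textures).
        HRESULT hrFilter = pD3D->CheckDeviceFormat(
            D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, displayFormat,
            D3DUSAGE_QUERY_FILTER, D3DRTYPE_TEXTURE,
            D3DFMT_A16B16G16R16F);

        // FP16 frame-buffer blending into a float render target.
        HRESULT hrBlend = pD3D->CheckDeviceFormat(
            D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, displayFormat,
            D3DUSAGE_RENDERTARGET | D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING,
            D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F);

        return SUCCEEDED(hrFilter) && SUCCEEDED(hrBlend);
    }

A 6800 should pass both checks; an X800 won't, which is exactly the limitation I'm describing.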

Now, in the meantime, I expect the primary difference we'll see over the next 12 months will be in performance. This performance difference could potentially be very significant in some games, and will grow steadily over time.
 
Chalnoth said:
Yes, but not in the pixel shader. The biggest difference you'd notice would be the FP16 texture filtering and blending. Without these features, to get high dynamic range rendering, you are either going to have to utterly kill performance (i.e. it will no longer be realtime) or restrict yourself in the types of algorithms you can execute.
http://www.firingsquad.com/print_article.asp?current_section=Features&fs_article_id=1506
FiringSquad: So basically your requirement for HDR is FP32?
Cevat Yerli: Yes FP32 blending.
OMG, that's insane.
FiringSquad: Well, the 5900 series also supports FP32, so will those cards support HDR as well or will it be a feature unique to the 6800 cards?
Cevat Yerli: Yes it’s unique to the 6800 series because of the blending capabilities.
I think this proves that Cevat isn't talking about FP32 shader precision.
 
No, the 6800 doesn't support FP32 blending. I just looked it up after reading that interview. There's definitely something wrong with his statement; maybe he just didn't say what he meant to say. I don't know.
 
I'll also just mention any situation where the developer wants to use a texture map in the vertex shader. The most commonly advertised use is definitely displacement mapping, but vertex textures can also be used for things like streamlines (not that I can imagine high demand for that in games), or as lookup tables to reduce the necessary math and thereby speed up the VS.

Edit: On streamlines, one could of course do that on the CPU. In fact, you can do displacement mapping manually without the VS too, and that seems the better idea if the displacement isn't dynamic (bullets damaging walls and such): it's better to calculate the vertex positions in advance than to change them on the fly, unless you want to be able to change them dynamically with ease (and if it's only minor changes every now and then, it might still be better to do it manually). Mainly, displacement mapping in the VS is, I'd say, about convenience. Now, if you could handle automatic tessellation and LOD in the VS, that would be nice (does the displacement mapping supported by the Parhelia do that?).

Edit 2: Looks like the Parhelia does support depth-adaptive tessellation.
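For the curious, here is a rough sketch of what VS displacement looks like: an HLSL vs_3_0 shader kept as a C string (the names and height scale are made up, and note that NV40-class hardware only point-samples the FP32 formats it allows in the vertex shader):

    // Sketch: displacement mapping via a vertex texture fetch. The height
    // texture is bound with
    //     pDevice->SetTexture(D3DVERTEXTEXTURESAMPLER0, pHeightTex);
    // tex2Dlod is required because the vertex pipe has no automatic LOD.
    const char* g_displaceVS =
        "float4x4 g_WorldViewProj;                               \n"
        "sampler2D heightMap : register(s0);                     \n"
        "void main(float4 pos : POSITION, float3 nrm : NORMAL,   \n"
        "          float2 uv  : TEXCOORD0,                       \n"
        "          out float4 oPos : POSITION,                   \n"
        "          out float2 oUV  : TEXCOORD0)                  \n"
        "{                                                       \n"
        "    float h = tex2Dlod(heightMap, float4(uv, 0, 0)).r;  \n"
        "    pos.xyz += nrm * h * 0.1f;  // made-up height scale \n"
        "    oPos = mul(pos, g_WorldViewProj);                   \n"
        "    oUV  = uv;                                          \n"
        "}                                                       \n";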
 
Currently, no available game uses Shader Model 3.0. There is an upcoming (and currently unavailable) 1.2 patch for Far Cry that, when combined with a currently unavailable DirectX 9.0c and a currently unavailable/unofficial driver set for the Nvidia 6800 series, will make use of Shader Model 3.0. However, it offers no unique effects for SM3.0 cards that are not also provided for SM2.0 cards.

There may be other games in development but as of yet unavailable to the end user that will make use of SM3.0 for unique features in the way of eye-candy/effects.

The long and the short of it is this: upcoming games within the next year will largely be based on PS1.1, far fewer will make use of PS2.0/VS2.0, and even fewer will make token use of PS3.0/VS3.0.
 
One of the issues with doing vertex perturbation on the CPU is that you have to pass more data over the AGP/PCI Express bus (since with static geometry you can simply use vertex buffers, which the driver can move to the GPU). If this, or CPU cycles, becomes a limiting factor, then it will definitely be a win to do the perturbation on the GPU, which could be done with vertex textures. You can do it in two passes without vertex textures, but that will clearly be slower.
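To illustrate the trade-off, a hedged sketch (D3D9, placeholder names and sizes) of the two approaches: a write-once static buffer the driver can park in video memory, versus a dynamic buffer the CPU must re-fill and re-send every frame:

    #include <d3d9.h>

    struct Vertex { float x, y, z, nx, ny, nz, u, v; };
    const UINT NUM_VERTS = 4096; // placeholder count

    // Static geometry: uploaded once, then no further bus traffic.
    IDirect3DVertexBuffer9* CreateStaticVB(IDirect3DDevice9* dev)
    {
        IDirect3DVertexBuffer9* vb = NULL;
        dev->CreateVertexBuffer(NUM_VERTS * sizeof(Vertex),
            D3DUSAGE_WRITEONLY, 0, D3DPOOL_MANAGED, &vb, NULL);
        return vb;
    }

    // CPU perturbation: a dynamic buffer that must be locked, rewritten,
    // and re-sent over AGP/PCI Express every frame.
    IDirect3DVertexBuffer9* CreateDynamicVB(IDirect3DDevice9* dev)
    {
        IDirect3DVertexBuffer9* vb = NULL;
        dev->CreateVertexBuffer(NUM_VERTS * sizeof(Vertex),
            D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY, 0,
            D3DPOOL_DEFAULT, &vb, NULL);
        return vb;
    }

    void UpdatePerturbedVB(IDirect3DVertexBuffer9* vb)
    {
        void* p = NULL;
        if (SUCCEEDED(vb->Lock(0, 0, &p, D3DLOCK_DISCARD)))
        {
            // ... write the CPU-perturbed vertices into p here ...
            vb->Unlock();
        }
    }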
 
If you're going to buy a card today and then another in a year or two, it's not going to matter either way.

If you plan on keeping the card for 4 years, then maybe you would want to go with the 6800 Ultra.

As capable SM 2.0 parts become more abundant, the programming will shift from 1.1/1.4 to 2.0a/b.

SM 3.0 will start to pick up, but I highly doubt any game will have SM 3.0 features that won't be emulated for SM 2.0 cards. There are just not enough SM 3.0 cards out there for that.

Perhaps this time next year we will see that start to happen.
 
Four years? I don't think so. The differences are already starting to appear with Far Cry, and the next patch will introduce some HDR rendering, so you'll see some differences within 3-6 months.

Of course, saturation (support in most new games) won't occur for a while after that, but there will definitely be some difference in the near-term.

In the meantime, there are a number of reasons to buy a GeForce 6800 that are independent of SM2 vs. SM3 and of image quality and performance differences. For me, these are:

1. Drivers. The drivers for nVidia cards are much, much better. That includes the interface and stability (for me, SW: KoTOR doesn't crash any more).
2. Application profiles. If you've ever played a game that was too slow with 4x AA, or one that doesn't work properly with anisotropic filtering or FSAA enabled (Diablo II, Baldur's Gate 2, for example), then you know what a pain it can be to continually switch these settings around.
3. Better OpenGL drivers. Try playing UT2k4 in OpenGL mode with high details on a Radeon and you'll see what I mean.
4. Linux support. nVidia is vastly ahead of ATI once again here.
 
Chalnoth said:
No, the 6800 doesn't support FP32 blending. I just looked it up after reading that interview. There's definitely something wrong with his statement; maybe he just didn't say what he meant to say. I don't know.

Yes, I was under the impression HDR was limited to FP16, but I wasn't sure whether my memory had failed me at the time. I'm trying to see if I can get some questions answered in a follow-up (at the time of my interview I was dealing with the benchmark fiasco, so I didn't have time to prepare questions beforehand); I'll add that one to the list.
 
Chalnoth said:
One of the issues with doing vertex perturbation on the CPU is that you have to pass more data over the AGP/PCI Express bus (since with static geometry you can simply use vertex buffers, which the driver can move to the GPU). If this, or CPU cycles, becomes a limiting factor, then it will definitely be a win to do the perturbation on the GPU, which could be done with vertex textures. You can do it in two passes without vertex textures, but that will clearly be slower.

I assume by two passes you mean render-to-vertex-array (which is not really available yet), but that could easily wind up being faster. Vertex texturing is not too efficient yet due to the latency involved in texture fetches, and the inability of the vertex pipeline to completely absorb them.

So I wouldn't say 'clearly'. It is much easier, though, and as I just mentioned, it's available now, as opposed to the alternative.

We need someone to do some vertex texturing tests for us. Since you have a 6800 now, may I assume you wouldn't mind? :D
 
I would guess it was a misunderstanding, such as something along the lines of doing most shader calculations at FP32 within the shader, which would be completely different from blending.

If FP32 blending were supported, I could make great use of it for much more efficient sparse matrix multiplication: if you think of the multiplication as updating one matrix, then multiplying that matrix by a sparse one won't change many values, so it would be inefficient to create a new texture and copy all the values across, quite a bit more so than just blending with the existing texture.
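For what it's worth, the render-state side of that blend-instead-of-copy idea is trivial; this sketch works today at the precisions current hardware can blend, it's only the FP32 target that's missing:

    // Sketch: accumulate into the existing render target rather than
    // copying everything into a new texture (dest = dest + src).
    void EnableAdditiveAccumulation(IDirect3DDevice9* dev)
    {
        dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
        dev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
        dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
        // Then draw small quads covering only the entries the sparse
        // multiply actually changes; every other texel stays untouched.
    }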
 
Chalnoth said:
Four years? I don't think so. The differences are already starting to appear with Far Cry, and the next patch will introduce some HDR rendering, so you'll see some differences within 3-6 months.

I'm sorry, did you even read what I wrote?

I said if you are buying a card and plan to update it in the next year or two, it won't matter either way.

HDR is coming, but there are ways to do it on SM 2.0 hardware, as seen in Half-Life 2.

The X800 XT is faster than the 6800 Ultra with high FSAA and aniso.

With Far Cry getting 3Dc support, the gap may widen again.

If you plan on keeping the card for a long time, like a span of 4 years, then you might want to move to the SM 3.0 cards.

Not because that's how long it will take for games to use SM 3.0 features, but because that's how long it will take for the next shader model to completely take over and force SM 3.0 out, if what Dave says is correct and DX10 isn't coming out till the end of 2006/2007.

Please read correctly before you jump the gun.
 
Mintmaster said:
I assume by two passes you mean render-to-vertex-array (which is not really available yet), but that could easily wind up being faster. Vertex texturing is not too efficient yet due to the latency involved in texture fetches, and the inability of the vertex pipeline to completely absorb them.
Well, you're trading the latency of performing the extra pass for the latency of the texture fetch at the beginning of the single pass. Using vertex textures is clearly going to take fewer CPU cycles, so that's one benefit. If you can also manage to manually hide the latency, then performance could potentially be quite high.

We need someone to do some vertex texturing tests for us. Since you have a 6800 now, may I assume you wouldn't mind? :D
Sorry, I've got just too much on my plate right now. If I write something for the 6800 soon, I'll let you know.
 
I just think it's great to see them supporting so many upcoming technologies. You don't see id or Epic going back and adding this many features to their engines once the initial work is done. For that matter, neither DOOM 3 nor HL2 will support all the features that will be found in Far Cry 1.3, at least initially. Of course, we're still waiting on 1.2...
 
At the same time, it's only recently that adding technology features to an engine after production has actually been useful. Way back in the annals of history, when, say, DOT3 bump mapping was first introduced, using it would have required the creation of lots of new content. Today, it's simply a matter of changing some shaders and rendering algorithms, then seeing if it works. Not that that work is trivial, but it's much less than drawing a bump map for every texture in a game.

And it actually makes good business sense for Crytek to do this, considering that it's a great way to advertise the engine to developers.

I really wouldn't be too surprised if this became a trend, actually. But there's still a limit to exactly how much can be added, and at some point it's better to go back to the drawing board, start from scratch, and show off technology using completely different artwork.
 
pat777 said:
With Far Cry getting 3Dc support, the gap may widen again.
It's not like the 6800U can't do 3DC.

Technically, it can't, but it can do close to it using DXT5. That is, if they code it that way. I should hope that they will.
 
BRiT said:
It's not like the 6800U can't do 3DC.

Technically, it can't, but it can do close to it using DXT5. That is, if they code it that way. I should hope that they will.
I think the main part of 3Dc is its pixel shading. It uses some pixel shader work to help compress normal maps with less loss.
 
pat777 said:
I think the main part of 3Dc is its pixel shading. It uses some pixel shader work to help compress normal maps with less loss.
That's the case with the DXT5 method as well. The only difference is that because 3Dc encodes two colour channels instead of four, the bits are used more efficiently (i.e. fewer are wasted).
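The shader work both methods share looks something like this (a sketch, HLSL kept as a C string; with the DXT5 trick X lives in alpha and Y in green, while 3Dc stores them in its two native channels):

    // Sketch: rebuild Z from a two-channel normal map in the pixel shader.
    const char* g_normalReconstructPS =
        "sampler2D normalMap : register(s0);                     \n"
        "float4 main(float2 uv : TEXCOORD0) : COLOR              \n"
        "{                                                       \n"
        "    // DXT5 swizzle: X in alpha, Y in green.            \n"
        "    float2 xy = tex2D(normalMap, uv).ag * 2 - 1;        \n"
        "    // Unit normal: z = sqrt(1 - x^2 - y^2).            \n"
        "    float z = sqrt(saturate(1 - dot(xy, xy)));          \n"
        "    return float4(xy, z, 1);                            \n"
        "}                                                       \n";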
 