For The Last Time: SM2.0 vs SM3.0

Oh, yeah, I'd completely forgotten about that. But since Crytek says they're using 96-instruction shaders, adding an instruction or two for the decompression really shouldn't do much to performance. The real question is whether or not the memory bandwidth savings will help performance.
 
3Dc vs DXT5 isn't a performance issue, it's a quality issue. DXT5 is a fine fallback, but 3Dc looks better. The performance difference may well be zero in many cases.
 
I know of at least one technique that can't be emulated (efficiently) using PS2.0. However, it doesn't even run efficiently on PS3.0 hardware due to API and hardware restrictions.
 
Well, I was thinking more about 3Dc vs. no compression. You save on memory bandwidth, but lose on pixel shader instructions.
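To put that in concrete terms, here is a minimal HLSL sketch (the sampler name is my own) of the reconstruction a two-channel format forces on the pixel shader; the same code works for the DXT5 fallback if you swizzle .ag instead of .rg:

    sampler2D NormalMap;  // hypothetical 3Dc (two-channel) normal map

    float3 FetchNormal(float2 uv)
    {
        // 3Dc stores only X and Y; unpack them from [0,1] to [-1,1]
        float2 xy = tex2D(NormalMap, uv).rg * 2.0 - 1.0;
        // Rebuild Z from the unit-length constraint: z = sqrt(1 - x^2 - y^2)
        float z = sqrt(saturate(1.0 - dot(xy, xy)));
        return float3(xy, z);
    }

That's only a handful of ALU instructions on top of the fetch, which is why the bandwidth side of the trade is the interesting one.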
 
Xmas said:
I know of at least one technique that can't be emulated (efficiently) using PS2.0. However, it doesn't even run efficiently on PS3.0 hardware due to API and hardware restrictions.
Well, don't just throw that out there! What is it? :)
 
Chalnoth said:
Well, I was thinking more about 3Dc vs. no compression. You save on memory bandwidth, but lose on pixel shader instructions.
Even then, it's not just a performance question (or not necessarily a pure one).

If, with 32-bit normal maps, you need 200MB of textures to render one scene, then compression is going to be important.
 
Texture tiling, i.e. texturing a surface with a set of tiles. Useful to avoid the regularity of the repeat wrap mode.

Actually, this is possible with PS2.x as implemented in NV3x. It simply requires gradient instructions.
 
Hmmm, I fail to see the issue with texture tiling. Seems to me you just need a large texture with all the tiles (say, 2048x2048 with a 4x4 grid of 512x512 tiles) and either another texture or some passed-in value that selects which of the tiles to use. Or am I missing something?
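For what it's worth, a naive HLSL version of that scheme might look like the sketch below (all names and the index-texture layout are invented for illustration):

    static const float CELLS = 16.0;      // cells across the surface (arbitrary)
    static const float ATLAS_GRID = 4.0;  // 4x4 grid of 512x512 tiles in the 2048x2048 atlas

    sampler2D Atlas;      // the tile atlas
    sampler2D TileIndex;  // point-sampled; .rg = chosen tile's top-left corner in atlas UV

    float4 SampleTiled(float2 virtualUV)
    {
        float2 corner = tex2D(TileIndex, virtualUV).rg;     // which tile this cell uses
        float2 inTile = frac(virtualUV * CELLS);            // 0..1 position within the cell
        return tex2D(Atlas, corner + inTile / ATLAS_GRID);  // remap into the atlas
    }

Note that frac() makes the atlas coordinate jump at every cell border, which is exactly the problem the next replies get into.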
 
Cryect said:
Hmmm, I fail to see the issue with texture tiling. Seems to me you just need a large texture with all the tiles (say, 2048x2048 with a 4x4 grid of 512x512 tiles) and either another texture or some passed-in value that selects which of the tiles to use. Or am I missing something?
You'd need to do the filtering yourself, which is a bit of a pain.
 
Cryect said:
Hmmm, I fail to see the issue with texture tiling. Seems to me you just need a large texture with all the tiles (say, 2048x2048 with a 4x4 grid of 512x512 tiles) and either another texture or some passed-in value that selects which of the tiles to use. Or am I missing something?
Apart from the known problems with "texture atlases" (which aren't usually that bad when using seamless tiles, if correctly arranged), there is the problem of LOD calculation. At the edge from one tile to another, you might have a huge jump in "real" texture coordinates, but in "virtual" texture coordinates it should be very small. This discrepancy means that using a simple texld gives you artifacts along the edges of tiles where a much lower-res LOD than required is used.

texldd can be used to work around this. However, it seems to me that texldd is flawed by design, or at least there should be another instruction that works a bit differently, by taking virtual texture coordinates instead of gradients.
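As a hedged illustration of that workaround (reusing the invented names from the atlas sketch above): derive the gradients from the smooth virtual coordinates and hand them to tex2Dgrad, which compiles to texldd:

    float4 SampleTiledGrad(float2 virtualUV)
    {
        float2 corner  = tex2D(TileIndex, virtualUV).rg;
        float2 atlasUV = corner + frac(virtualUV * CELLS) / ATLAS_GRID;
        // Away from the seams, d(atlasUV) = d(virtualUV) * CELLS / ATLAS_GRID,
        // and unlike ddx/ddy of atlasUV these gradients stay smooth across cell borders.
        float scale = CELLS / ATLAS_GRID;
        float2 dx = ddx(virtualUV) * scale;
        float2 dy = ddy(virtualUV) * scale;
        return tex2Dgrad(Atlas, atlasUV, dx, dy);
    }

The annoyance is having to compute and scale the gradients yourself; an instruction that took the virtual coordinates directly would hide all of this.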
 
Chalnoth said:
In the meantime, there are a number of reasons to buy a GeForce 6800 that are independent of SM2 vs. SM3 and of image quality and performance differences. For me, these are:

1. Drivers. The drivers for nVidia cards are much, much better. That includes the interface and stability (for me, SW: KoTOR doesn't crash any more).

That's a bit of a sweeping statement. If I compare my 9700 Pro and 5900XT, then Nvidia's drivers have just as many issues as ATi's do. The ForceWare interface is better, though.

2. Application profiles. If you've ever played a game that was too slow with 4x AA, or one that doesn't work properly with anisotropic filtering or FSAA enabled (Diablo II, Baldur's Gate 2, for example), then you know what a pain it can be to continually switch these settings around.

You can have application profiles on Radeons using Radlinker, though it's about time this was incorporated into the drivers rather than being a separate app.

3. Better OpenGL drivers. Try playing UT2k4 in OpenGL mode with high details on a Radeon and you'll see what I mean.

True, though aren't ATi meant to be re-writing their OpenGL drivers for Catalyst 4.7 or 4.8?
 
Aside from the technical merits of each company's new parts, there's the consideration that PC development 'appears' to be heading in a direction somewhat like that of consoles, with publishers/developers offering special content as part of marketing deals. If this trend grows more common in the PC market, all bets are off for buying one graphics card that covers all bases.
 
Xmas said:
Apart from the known problems with "texture atlases" (which aren't usually that bad when using seamless tiles, if correctly arranged), there is the problem of LOD calculation. At the edge from one tile to another, you might have a huge jump in "real" texture coordinates, but in "virtual" texture coordinates it should be very small. This discrepancy means that using a simple texld gives you artifacts along the edges of tiles where a much lower-res LOD than required is used.

texldd can be used to work around this. However, it seems to me that texldd is flawed by design, or at least there should be another instruction that works a bit differently, by taking virtual texture coordinates instead of gradients.

Damn, hehe, I completely forgot about LOD issues, which seem fairly serious unless you use a lot of memory for each of the possible border cases.

Edit: I'm guessing you've tried to implement a tiling scheme with a pixel shader, then.
 
Far Cry will have exclusive HDR lighting for the 6800 series. The X800 will not get HDR, according to Crytek... So there's one exclusive eye-candy feature for you.
 
That's an implementation-specific detail - the thread is about the technology. HDR effects can be, and have been/will be, supported down to DX8.1 capabilities, so there can be workarounds. If you wanted to talk about FP blending and filtering specifically, that would be a different issue (but, as has been pointed out, not one that specifically pertains to shader model differences).
 
DaveBaumann said:
That's an implementation-specific detail - the thread is about the technology. HDR effects can be, and have been/will be, supported down to DX8.1 capabilities, so there can be workarounds. If you wanted to talk about FP blending and filtering specifically, that would be a different issue (but, as has been pointed out, not one that specifically pertains to shader model differences).

That's true, Dave, but the original poster was asking if he would be missing any eye candy with an X800 over a 6800, not just for a technology comparison. In the case of Far Cry, he would be missing HDR lighting, which is a very significant piece of eye candy. Yes, as you said, it is possible to opt to support a lower-quality HDR, but in the case of Crytek they are not doing this and are only supporting HDR with FP blending, which only Nvidia's 6800 supports.
 
Is there any visible effect in a game that SM 3.0 can produce that SM 2.0 cannot with a little more work?

That's talking about the technical differences. Technically SM2.0 can support HDR effects; it's a point of detail that a developer is choosing not to support it.
 
Cryect said:
Damn, hehe, I completely forgot about LOD issues, which seem fairly serious unless you use a lot of memory for each of the possible border cases.

Edit: I'm guessing you've tried to implement a tiling scheme with a pixel shader, then.
Yes I have, and it works quite nicely (though using refrast can be a real pain). There are some issues "in the distance" where the lowest mip-levels are used, but I think I can work around this.
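(Speculating, since the post doesn't say what the workaround is: one plausible patch, continuing the earlier hypothetical sketch, is to clamp the gradients fed to tex2Dgrad so the sampler never reaches the smallest atlas mips, where neighbouring tiles have already been averaged together.

    static const float MAX_FOOTPRINT = 1.0 / 64.0;  // made-up tuning constant

    float2 ClampGrad(float2 g)
    {
        float len = max(length(g), 1e-6);
        return g * min(1.0, MAX_FOOTPRINT / len);  // caps the LOD the sampler selects
    }

    // usage: tex2Dgrad(Atlas, atlasUV, ClampGrad(dx), ClampGrad(dy));

This trades a bit of aliasing in the far distance for tiles that stay distinct.)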
 
DaveBaumann said:
That's talking about the technical differences.

This is purely from a gamer/user perspective, NOT a coder or developer... So if I do go with the X800XT over the 6800GT, will I be missing 'neat effects', as some have put it?

That's not. Based on the statements in his original posts, telling him that he will get all the same eye candy as an end user on the X800 XT in Far Cry is misleading, IMO, when we now know that Crytek is opting to use a 6800 feature for HDR lighting that the X800 does not support; that is a (huge) effect the original poster would miss out on as a gamer playing Far Cry on an X800.
 
Nobody is saying that he will necessarily get all the eye candy in specific implementations; however, the context of the question is still about whether there are effects that can be programmed in SM3.0 that can't be in SM2.0.
 