HardOCP and Doom 3 benchmarks

How does ATi support shadow buffers under OpenGL? If they have the hardware, why can't they support it under D3D? Is it the automatic compare for the depth texture vs the projection?

They do work. At least for me.
 
Re: Doom3 benches

chavvdarrr said:
Come on, why should ATi know about every single benchmark?!
Imagine the situation:
AMD releases Barton 3200+, lots of sites make reviews... and here comes Intel saying:
"oh, but its not fair, we didn't know, we could prepare making more optimised CC2003SE benchmark suite...."
AMD replies:
"But iP4 has SSE2 instructions... for apples2apples comparison we demand that no-SSE2 optimised apps be used"
Intel:
"And no 3dnow! too"
AMD:
"P4 has wider bus - 200x4 vs 200x2, NO bandwidth limited tests!"
Intel:
"Barton has more cache (640 vs 512) - no cache limited tests!"
......
......

These are the current results in the Doom3 engine, like it or not.

The problem is: nVidia _had_ time to optimize for the benchmarked D3 build. ATi didn't. So your example is not really applicable here.
 
BenSkywalker said:
For the particular implementation utilized in SC there are no color values, nor are there any soft shadows used, which differs considerably from the particular example that nVidia has in that doc. For instance-

Just where does colour come into any of this discussion about generating shadows?

The implementation described in SC will be essentially exactly the same as that in nVidia's white paper - that is how you do shadow buffers on nVidia hardware in D3D. That is why they have a white paper on how to do it, and it follows on directly from the XBox implementation where they didn't have to worry about non-conformance with the spec.
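
For reference, here is a minimal sketch of that white-paper technique in D3D9-style C++. All the names (g_pDevice, the 1024x1024 size) are illustrative assumptions, and error checking is omitted; this is not code from SC or the white paper:

Code:
#include <d3d9.h>

extern IDirect3DDevice9* g_pDevice;   // assumed already-initialised device

void RenderWithShadowBuffer()
{
    // Create a texture with a depth format - the first vendor-specific step.
    IDirect3DTexture9* pShadowTex = NULL;
    g_pDevice->CreateTexture(1024, 1024, 1, D3DUSAGE_DEPTHSTENCIL,
                             D3DFMT_D24S8, D3DPOOL_DEFAULT, &pShadowTex, NULL);

    // Pass 1: render occluders from the light's point of view; only depth
    // is written, filling the shadow buffer.
    IDirect3DSurface9* pShadowSurf = NULL;
    pShadowTex->GetSurfaceLevel(0, &pShadowSurf);
    g_pDevice->SetDepthStencilSurface(pShadowSurf);
    // ... draw shadow casters ...

    // Pass 2: bind the depth texture as an ordinary texture. Per the spec a
    // 'tex' read should simply return texture data; on NV2x/3x hardware it
    // instead returns the result of comparing the projected texture
    // coordinate's depth against the stored depth - the out-of-spec
    // behaviour discussed below.
    g_pDevice->SetTexture(0, pShadowTex);
    // ... draw the lit scene with projective texture coordinates ...

    pShadowSurf->Release();
    pShadowTex->Release();
}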

Oh, and if you look at the edges of the shadows in SC they are not hard-edged. There is definitely filtering going on, although it is fairly subtle in most places as it appears that the shadow buffers are of a reasonable resolution. In some places it can be clearly seen. The percentage closer filtering is an inherent part of the GF3 hardware as well, AFAIK, so it hasn't been removed, although possibly there is some way to switch to point sampling.
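
To illustrate why those edges come out filtered rather than hard, here is a minimal C++ sketch of percentage closer filtering. The 2x2 footprint and function are illustrative only, not the actual GF3 implementation:

Code:
#include <vector>

// Depth compares are done at several neighbouring shadow-map texels and the
// boolean results are averaged - the compare happens before the filter,
// which is what produces fractional (soft) values at shadow edges.
// Assumes 0 <= x, y < mapSize - 1 for brevity.
float Pcf2x2(const std::vector<float>& shadowMap, int mapSize,
             int x, int y, float fragmentDepth)
{
    float lit = 0.0f;
    for (int dy = 0; dy <= 1; ++dy)
        for (int dx = 0; dx <= 1; ++dx)
        {
            float stored = shadowMap[(y + dy) * mapSize + (x + dx)];
            lit += (fragmentDepth <= stored) ? 1.0f : 0.0f;
        }
    return lit * 0.25f;   // 0, 0.25, 0.5, 0.75 or 1 - a soft transition
}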

That doesn't appear to be happening in SC, nor are there color values being utilized for the shadow maps generated, which changes D3DFMT_D24S8 to D3DFMT_D16.

Whether the format of the shadow buffer is D24_S8 or D16 is irrelevant. All that does is alter the precision of the generated shadow texture (D16 quantises depth to 2^16 = 65,536 levels versus 2^24 for a 24-bit format; D16 is often sufficient). Neither of these formats will work on refrast, and the change from D24_S8 to D16 has nothing to do with colour.

What are you talking about when you mention colour values being utilised for the shadow maps? Shadow maps have no colour - only depth.

There is no code in this shader that actually does any depth comparison to decide if something is in shadow!
The depth comparison is done beforehand, although I see what you are saying in terms of it not being the same as in stencil buffer implementations.

[sarcasm]Really? They're actually doing it before the shader? Oh, silly me - it all makes sense now.[/sarcasm]

There is no specification anywhere in the API for this behaviour.

Nada, nothing, zip, zero.

According to the D3D specification this code:
Code:
tex t2

Does the following -
tex

Loads the destination register with color data (RGBA) sampled from a texture

It should load data directly from a texture map. That's it. It should not load the results of any comparison operation from anywhere. It doesn't matter what texture format I attach to this texture stage, I should get data from that texture. That behaviour should not change.

That is the specification.
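
To make the contrast concrete, here is a small illustrative C++ sketch (stand-in types, not any real API) of what the spec mandates versus the depth-compare behaviour being objected to:

Code:
#include <algorithm>
#include <vector>

struct Float4 { float r, g, b, a; };

// Minimal stand-in for a bound texture (not a real API).
struct Texture {
    int width, height;
    std::vector<Float4> texels;
    Float4 Sample(float u, float v) const {          // point sample, clamped
        int x = std::min(std::max(int(u * width), 0), width - 1);
        int y = std::min(std::max(int(v * height), 0), height - 1);
        return texels[y * width + x];
    }
};

// Per the spec: 'tex tN' loads data from the bound texture, whatever it holds.
Float4 TexPerSpec(const Texture& t, float u, float v) {
    return t.Sample(u, v);
}

// The NV2x/3x depth-texture behaviour at issue: with a depth format bound and
// projective coordinates, the same instruction returns a compare result.
Float4 TexNvDepthCompare(const Texture& t, float u, float v, float projDepth) {
    float stored = t.Sample(u, v).r;                 // depth from light pass
    float lit = (projDepth <= stored) ? 1.0f : 0.0f;
    return Float4{lit, lit, lit, lit};               // not texel data
}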

In terms of what SC in particular is doing, it appears to be considerably simplified from what is possible using shadow buffers under D3D on nV hardware, and removes some of the out-of-spec functionality versus the sample code. I see what you are saying now in terms of the differences, although the example you are using seems to be quite a bit beyond what is happening in SC in particular.

If it had removed the out of spec functionality it would work on ATI cards, as they would then not be doing anything we don't support. What they describe and what they appear to have implemented is exactly the technique from the white paper. To my knowledge there is no effective way, other than that defined in nVidia's white paper, to perform shadow buffering on a Geforce3 under D3D. If someone has come up with an alternative, usable method then they can correct me.

How does ATi support shadow buffers under OpenGL? If they have the hardware, why can't they support it under D3D? Is it the automatic compare for the depth texture vs the projection?

We support them under OpenGL using fragment programs, which is also how we support them in D3D. In D3D that is the legal way.

I guess we could also hack our driver under D3D to support nVidia's non-compliant behaviour, but why would we support something as woefully out-of-spec as that? That would seem to make the spec something you just throw out the window when you feel like it. There are other companies around who do that quite well without our assistance.
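
As a rough sketch of the fragment-program approach mentioned above (illustrative names and bias value, expressed in C++ for clarity; not ATI's actual shader or driver code), the in-spec path amounts to doing the compare explicitly per pixel:

Code:
struct Float3 { float x, y, z; };

// storedDepth comes from a plain texture read, exactly as the spec says a
// texture fetch should behave; lightSpacePos is the fragment's position in
// light space. The compare is explicit in the shader, not a side effect of
// the fetch - which is what keeps it within the API specification.
float ShadowTerm(float storedDepth, Float3 lightSpacePos)
{
    const float bias = 0.001f;   // offset against self-shadowing acne
    return (lightSpacePos.z - bias <= storedDepth) ? 1.0f : 0.0f;   // 1 = lit
}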
 
Question -

If Splinter Cell has a specific path for Nvidia hardware, why did it do so poorly in the recent NV35 benchmark? Poor shader performance?

There appears to be as large a gap between the 9800 Pro and the NV35 in Splinter Cell as there is between the NV35 and the 9800 Pro in Doom 3.

Just wondering.
 
saf1 said:
Question -

If Splinter Cell has a specific path for Nvidia hardware, why did it do so poorly in the recent NV35 benchmark? Poor shader performance?

There appears to be as large a gap between the 9800 Pro and the NV35 in Splinter Cell as there is between the NV35 and the 9800 Pro in Doom 3.

Just wondering.
I haven't really looked with particular interest at all the SC benchmarks in the various NV35-vs-R9800PRO256 shootouts, but did any of the reviewers mention that shadow projection (available to both cards) was used instead of shadow buffers (available only to NV cards)?

WRT Andy's and Ben's discussion about SC and shadows in the game -- there are some pertinent points raised about shadow implementations per se, but there appears to be a lack of understanding about how SC is coded to use both implementations (texture projection vs buffer). I'd like to talk about this, but Ubisoft's programmer told me quite a number of days ago that he wanted to participate (for the first time) at B3D to explain SC's shadow paths. I'll leave it to him while reminding him of this specific discussion in this thread at the same time. I don't have the time to write a lengthy post about this, and a lengthy post is definitely what it would have to be!!

Just think XBOX and what it allows, and the picture should be clearer: there are only so many things Ubisoft can change for the PC port of the game without delaying its release too long.
 
Reverend said:
WRT Andy's and Ben's discussion about SC and shadows in the game -- there are some pertinent points raised about shadow implementations per se, but there appears to be a lack of understanding about how SC is coded to use both implementations (texture projection vs buffer). I'd like to talk about this, but Ubisoft's programmer told me quite a number of days ago that he wanted to participate (for the first time) at B3D to explain SC's shadow paths. I'll leave it to him while reminding him of this specific discussion in this thread at the same time.

Aha - that should be interesting. :D
 
Rev-

Anand stated he forced all boards to class 2; I don't recall if the others did or not.

Andy-

I see what you are saying, but I want to clarify a few points on my end. First, the color values relating to the shadow: check out the smiley shadow in the doc you linked to. To me, it appears that they can pull off colored shadows if they want to spend the time doing it. Soft shadow edges: where can these be seen? Could you point me to a good example of soft shadow edges when using shadow buffers?

I guess we could also hack our driver under D3D to support nVidia's non-compliant behaviour, but why would we support something as woefully out-of-spec as that?

Because SplinterCell does :)
 
Because SplinterCell does

Not a valid argument; it is the developer's fault for again not sticking to an agreed standard... "The way it's meant to be played" should not take priority over standards all the other IHVs follow.
 
Not a valid argument; it is the developer's fault for again not sticking to an agreed standard

The developer must have had MS's blessing for this one; it came out on the XBox first ;) From what I have read, the only reason this option is in the game is that it was there for the XBox. Why bother taking it out if it works on PCs?
 
Doomtrooper said:
Because SplinterCell does

Not a valid argument; it is the developer's fault for again not sticking to an agreed standard... "The way it's meant to be played" should not take priority over standards all the other IHVs follow.
SC is a special case really, being an XBOX port, and Ubisoft is (personally and subjectively, I think) thrilled by the XBOX's "allocation" of the shadow buffer implementation Ubisoft used.

I do not really agree with the "standard" reference because I feel that it is down to the developer to consider the risks (and personally, I think Ubi did a great balancing job about looking-after-all-sorts-of-hardware-owners). IMO your comment is a sulking reflection of what hardware you prefer, but you have every right to feel that way.

Ubisoft tried to give everyone a good-looking game across a variety of PC hardware and its limitations, according to what each piece of hardware's full capabilities afford... and we really shouldn't complain. I know we all want a perfect world, but as the 3D industry progresses we may not see it, whether the "advantage" goes to NVIDIA, or ATi, or some other IHV. Your (=Doomtrooper) past posting history indicates you're more concerned about games than hardware... your above post contradicts that.
 
BenSkywalker said:
I see what you are saying, but I want to clarify a few points on my end. First, the color values relating to the shadow: check out the smiley shadow in the doc you linked to. To me, it appears that they can pull off colored shadows if they want to spend the time doing it. Soft shadow edges: where can these be seen? Could you point me to a good example of soft shadow edges when using shadow buffers?

The smiley face is a function of the shader used to render the light source - the shadow map just decides whether each pixel is in shadow or not. The smiley face is simply another texture applied during the lighting pass, in this case projected so that it appears to be cast from the light.
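
As an illustration of how such a projected light texture can be set up, here is a hedged D3D9-style sketch with placeholder names (g_pDevice, g_pGoboTex); it is not the game's actual code:

Code:
#include <d3d9.h>

extern IDirect3DDevice9*  g_pDevice;
extern IDirect3DTexture9* g_pGoboTex;   // e.g. the smiley-face texture

// lightViewProjTex: light view * projection * scale/bias into [0,1] UV space.
void ApplyProjectedLightTexture(const D3DMATRIX& lightViewProjTex)
{
    g_pDevice->SetTexture(0, g_pGoboTex);
    // Generate texture coordinates from the vertex position...
    g_pDevice->SetTextureStageState(0, D3DTSS_TEXCOORDINDEX,
                                    D3DTSS_TCI_CAMERASPACEPOSITION);
    // ...transform them into the light's space and divide by the last
    // coordinate, so the texture appears to be cast from the light.
    g_pDevice->SetTransform(D3DTS_TEXTURE0, &lightViewProjTex);
    g_pDevice->SetTextureStageState(0, D3DTSS_TEXTURETRANSFORMFLAGS,
                                    D3DTTFF_COUNT3 | D3DTTFF_PROJECTED);
}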

Try to get into a position in the game where your shadow is cast a reasonable distance onto an object. You should be able to see slightly fuzzy edges around the shadow (it'll probably be easier if you set the shadow maps to their lowest resolution in the options menu).
 
X-box does not = PC... don't get me started on Metal Gear: Substance.
We could try chiseling out the X-box GPU and putting it in a PC, though.

My position has always been that all games should look the same on all hardware, if the hardware is capable. In cases like these X-box ports, that appears not to be the case.
 
Reverend said:
I do not really agree with the "standard" reference because I feel that it is down to the developer to consider the risks (and personally, I think Ubi did a great balancing job about looking-after-all-sorts-of-hardware-owners). IMO your comment is a sulking reflection of what hardware you prefer, but you have every right to feel that way.

My argument is not with the developer using a feature once it's there - my argument would be with an IHV extending the spec in non-standard ways. As you can imagine it doesn't necessarily look good for another IHV when suddenly there's some feature in a game that you don't support even if you support every standard path through the API.

What if we came along and decided we wanted to make a special, but different, use of depth textures (also outside the spec)? Our mickey-mouse extension might then interact with nVidia's, and you would end up with a lot of broken software. That's why standards exist and should be adhered to.

Ubisoft tried to give everyone a good-looking game across a variety of PC hardware and its limitations, according to what each piece of hardware's full capabilities afford... and we really shouldn't complain.

I agree - no blame attaches to Ubisoft for taking advantage of resources at their disposal. They want to give everyone the best-looking game possible.

I do believe, however, that people should be concerned by attempts to circumvent agreed standards.
 
The question that comes to my mind here is: why didn't this kind of shadow buffer make it into the DX spec?
 
I do believe, however, that people should be concerned by attempts to circumvent agreed standards

I totally agree; the idea from my point of view was to make programming easier. I got into a heated argument with Kyle at [H] about standards... nobody seems to care... not even developers.
 
Splinter Cell - Shadows

Hi all,

The Shadow Buffer version of Splinter Cell is using the D3DFMT_D24 format (Render to Texture). However, the implementation has several tradeoffs that don't really make sense on the PC.
1) Even when nothing is moving in the shadow cone (static light, static objects), the computation is done again and again every frame. This represents a huge waste of fillrate, but that was the right tradeoff to save memory on xbox. There wasn't any need to implement a more PC-efficient technique because GPUs that were able to run SC with shadow maps were fast enough anyway.
2) In projector mode, some of you may have noticed that 32 MB cards are only able to render the game at 640x480 (800x600 with the 1.2 patch). This is because of the huge cache buffer needed to store the "static shadows".
3) The performance signature of both modes is very close. In shadow buffer mode, you render from the light's point of view into a depth texture (depth only); in projector mode, you render from the light's point of view into a texture (color only). In general, ATI has the upper hand for this operation (shadow projectors) because they render 8 pixels/clk single-textured. In shadow buffer mode, it seems, for now, that NV35 is only able to render 4 pixels/clk. This may happen because we are always rendering textured triangles (for alpha transparency tests, like the gates) into textures even if we output Z only. In a way, it's very similar to single texturing (see the sketch after this list).
4) I refer you to the patch 1.2 readme file (SC) for more details about the rendering modes.
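
As a sketch of what point 3 describes (D3D9-style C++ with illustrative names, not Ubisoft's actual code), the shadow-buffer pass disables colour writes but keeps a texture bound for the alpha test:

Code:
#include <d3d9.h>

extern IDirect3DDevice9*  g_pDevice;
extern IDirect3DTexture9* g_pGateTexture;   // alpha-tested caster texture

void RenderDepthOnlyCasters()
{
    // Z-only output: no colour is written into the shadow buffer...
    g_pDevice->SetRenderState(D3DRS_COLORWRITEENABLE, 0);
    // ...but the alpha test stays on, so cut-out geometry like the gates
    // can punch holes in the shadow. Every caster is therefore still drawn
    // with a texture bound - effectively single texturing, as noted above.
    g_pDevice->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
    g_pDevice->SetRenderState(D3DRS_ALPHAREF, 0x80);
    g_pDevice->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);
    g_pDevice->SetTexture(0, g_pGateTexture);
    // ... draw shadow casters into the shadow buffer ...
}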

Thanks.

Dany Lepage
Lead Programmer
Splinter Cell 2
 
Shadow Projector implementation

Extract from the readme 1.2 patch file I wrote a while ago.

On all other current hardware, the game is using another technique called projected shadows (shadow projectors). The technique is somewhat similar: we render the scene from the light's point of view, but instead of storing the depth, we store the color intensity in a texture. That texture is then mapped per vertex onto each object that is going to receive the shadow.

To be able to have objects casting shadows on other objects that are themselves casting shadows, Splinter Cell uses a 3-depth-level shadow casting algorithm. In general, the first level is used to compute the shadow to be used on dynamic actors like Sam. The second level is used to compute the shadow used by static meshes like a table or boxes. The final level is used for the projection on the BSP. This system allows Sam to receive the shadow of a gate on him, then Sam and the gate can cast on a box, and finally all three objects can cast on the BSP (ground). This system also has a distance check algorithm to determine whether Sam's shadow should be projected on a static mesh (like a box) or whether it shouldn't, based on their relative positions.

Both systems have their own strengths/weaknesses. The main advantage of the Shadow Buffer algorithm is how easy it is to work with. Shadow Projectors are tricky and difficult to use.
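
A rough sketch of how that 3-level selection might look in C++ (the enum, fields, and caster sets are guesses for illustration only, not Ubisoft's implementation):

Code:
#include <d3d9.h>

// One colour-intensity texture per level, each rendered from the light's
// point of view with a different set of casters.
enum ReceiverLevel { kDynamicActor = 0, kStaticMesh = 1, kBsp = 2 };

struct ShadowProjectorLevels
{
    // [0]: casters that shadow actors (e.g. the gate).
    // [1]: those casters plus the actors (gate + Sam).
    // [2]: everything above plus the static meshes (gate + Sam + box).
    IDirect3DTexture9* projector[3];
};

// Each receiver projects the texture built for its own level, so a gate can
// cast on Sam, Sam and the gate on a box, and all three on the BSP ground.
IDirect3DTexture9* SelectProjector(const ShadowProjectorLevels& levels,
                                   ReceiverLevel receiver)
{
    return levels.projector[receiver];
}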

Dany Lepage
Lead Programmer
Splinter Cell 2
 
Splinter Cell - Fairness of using Projector mode for benchs

Splinter Cell, running in projector mode, runs the same code on NV2x/3x and R2xx/3xx chips. This means they are all using the same PS 1.1 code.

It is certainly not fair to compare ATI's cards running in projector mode against NV cards running in shadow map mode. The results are, visually, different enough. In that respect, the only acceptable way of using SC as a benchmark is to make sure ATI's and Nvidia's cards are rendering ~the same thing.

I noticed that some people were arguing about DirectX standards and such. Here is my take on this: with DX8 hardware, NVidia was the standard; they got the xbox contract and had their first DX8 part ready a long time before ATI did. NV20 was the standard for DX8, whatever was in it. Whatever features ATI came up with (like PS 1.4) didn't matter much from my perspective. With DX9, it is the exact opposite: R300 is the standard. My hope is that Microsoft will stop supporting "vendor extensions" like PS 1.4 and 2.0+ and just make the PC a console that gets a new API every 2 years (it evolves much faster than a console anyway). As for D3DFMT_D24S8, well, yes, it's a vendor extension, but like I said, NV20 is the standard for DX8.

Anyway...

Dany Lepage
Lead Programmer
Splinter Cell 2
 