HardOCP and Doom 3 benchmarks

I agree somewhat, but being 1st doesn't mean it should be the standard :?: The standard should be the superior version/extension that delivers the most ease for a programmer and the best visuals for the consumer.

There were two years of DX8, including DX 8.1...plenty of time to at least support the DX8.1 feature set IMO.
 
Then there might as well be no standard. Superior versions are always going to come out, only to be surpassed by another version tomorrow.
 
True...normally there isn't such a long time between DX revisions, and that would make sense, but DX8 and DX 8.1 lasted a good two years, so supporting the newer standard doesn't seem impossible.

DX 8.1 came out approx. 8 months after 8, did it not :?:
 
Dany Lepage
Lead Programmer
Splinter Cell 2
^^^^^^^^^


Woot Woot - Splinter Cell 2? Very nice. Any Screen Shots? I know, I'm late in catching it :(
 
Re: Splinter Cell - Fairness of using Projector mode for ben

First of all, thanks for the posts. Very informative.

Dany Lepage said:
I notice that some people were arguing about DirectX standards and such. Here is my take on this: With DX8 hardware, NVidia was the standard, they got the xbox contract and had their first DX8 part ready a long time before ATI did. NV20 was the standard for DX8, whatever was in it. Whatever features ATI was coming up with (like PS 1.4) didn't matter much from my perspective. With DX9, this is the exact opposite, R300 is the standard. My hope is that Microsoft will stop supporting "vendor extensions" like PS 1.4 and 2.0+ and just make the PC a console that gets a new API every 2 years (evolve much faster than a console anyway). As for the D3DFMT_D24S8, well, yes, it's a vendor extension but like I said, NV20 is standard for DX8.

I think that's an attitude that can be quite common. nVidia to some extent were the standard for DX8, and as a result many things in it were tailored closely to their specifications. Surely that should not, however, give them the right to then alter that specification as they choose at a later time? That makes the job of anyone else trying to create hardware and compete in the market very difficult.

Shadow buffers didn't make it into DX8 for whatever reason, and now you see the result - when they are later implemented through some special extension, people get the idea that other vendors somehow failed to implement all of the spec, rather than seeing the true situation. It can make life more difficult for us in the market.

In DX9, as you say, R300 was the first, and therefore the standard. There are, perhaps, things on R300 that are not fully exposed in DX9, but that being said, to the best of my knowledge, we have no plans to go around adding non-standard texture formats or circumventing the defined standard in any way. It would cease to be a standard then - Microsoft define the platform for ISVs and IHVs to aim at, not us.

I understand that the shadow buffer scheme was available on XBox, and like I said I think that it's fine that when you were presented with the opportunity to use that code on the PC you took it. Makes perfect sense, and makes your job in porting simpler.

However, on the subject of an IHV redefining the standard I think we'll just have to disagree. I believe that simply being the first (as with NV20 or R300) should surely be advantage enough.

Cheers
- Andy.
 
Hi Andy,

I'll give you that one.
For my position to be fully consistent, I should forget about vendor-specific extensions. As you said, Nvidia got the shadow map algorithm as a bonus because they were in the Xbox; that is something in itself, but it doesn't have anything to do with the core question. And to make your point even more true, if the game had been developed for the PC first, the shadow map algorithm would never have made it in.

It seems that developers can fall, more or less, into 2 categories:

"The platformers" like myself, who would like to have Microsoft provide a single API for a single DX, you are compliant or you don't and this is it. We would get something like 1 API every 2 years. IMHO, I believe it would make the PC a better gaming platform and certainly easier to understand for the consumer. In a perfect world, they would also require a minimum performance level so game developpers could, for exemple, target that API with that minimum level of performance and balance their game for 640x480 (How do you define/measure that level is another matter).

"The PCers" like Epic/ID. It sounds like they are trying to squeeze every bit of juice from every platform. That may make sense for them because that could actually be a selling point for their engine. They are ending up with a potentially better result at the expense of development time. I believe a lot of PC enthousiasts are falling in this category because they would feel like the business would become boring if MS would force IHVs to target a single API. The only thing they would compete on would be speed.

That probably means you have to use a mix of benchmarks from the "platformers" (like SC, which runs ~the same code on different hardware) and the "PCers" (like Doom3, which runs different code on different hardware) to get a good feel for overall performance.

Dany
 
I understand Shadow Buffers fairly well, but I don't clearly see how Shadow Projectors work. From what I've read in previous posts, there's no depth information stored in the texture, just colors, so how does it work? Sorry, it may sound stupid, but I think I missed the point here :? So can someone explain the principle behind this technique to me, or point me to some resources to understand it better? Just curious ;)
 
How "Shadow projector" works.

First of all, let's look at a classic scene of SC-PC. Either in the retail game or in demo #2(a/b), open the console (F2) and type "open 1_1_2tbilisi".
If you are running on a GF3/4Ti/FX card, be sure to uncomment the line ForceShadowMode=0 in splintercell.ini (SYSTEM).

Notice that Sam can receive the shadow of the butterflies and the tree.
Notice the 2 benches near the wall: they receive the shadow of the butterflies/tree and Sam.
Notice that the ground (BSP) receives the shadow of Sam/tree/butterflies/benches.

Projective shadow algorithm:

Step 1:
---------
Decide how many shadow textures you are going to support per light.
SC-PC supports 3 of those per light. This means that in SC-PC, it's impossible to have a horizontal light doing:

a) A tree casting a shadow on Sam (texture on Sam is texture #1)
b) Sam casting his shadow on a box (texture on first box is texture #2)
c) box casting its shadow on another box (texture on second box is texture #3)
d) finally the ground receiving the shadow of everything (texture on the ground is texture #4)

This would require 4 shadow textures per light. Fortunately, this doesn't happen very often, and 3 textures are enough 95% of the time. In Splinter Cell, even if the system is fairly flexible, the first texture is called the actor texture and is generally used to render the shadows that will be displayed on Sam. The second texture is called the static mesh texture because it is generally used for shadow patterns cast on static meshes. The last one is the BSP texture (self-explanatory).

Several objects in the scene can receive the same shadow texture if they don't cast a shadow on each other.
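The texture-count arithmetic above can be sketched as a toy function (the function and scene names are illustrative, not from the game): each receiver in a caster chain needs its own shadow texture, because a receiver's texture must contain everything between it and the light but not the receiver itself.

```python
# Illustrative only: count shadow textures needed for one light, given a
# caster/receiver chain ordered from the light outward (first entry casts only).
def shadow_textures_needed(chain):
    # Every element after the first receives the accumulated shadows of
    # everything before it, so each receiver needs its own texture.
    return len(chain) - 1

# The a)-d) example above: tree -> Sam -> box -> box -> ground needs
# 4 textures, one more than the 3 that SC-PC budgets per light.
print(shadow_textures_needed(["tree", "Sam", "box1", "box2", "ground"]))  # 4
```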

Step 2:
---------
You need to decide what will be rendered into which texture.
For the actor texture, to optimize performance, we tag objects whose shadows are always going to be displayed on Sam because Sam can't get between the light and the shadow-casting object. This works well for gates and such (butterflies, trees). The actor texture has severe limitations (like no distance check), and if you flag a table to cast a shadow, Sam will end up with the shadow of the table even if he is standing between the light source and the table.
The BSP texture is the easiest: it basically receives every object tagged to cast shadows. No distance check needed. BSP can't cast shadows in the SC-PC "projector version" (quite restrictive), so you need to make sure your lights match the BSP's "contour" (~shape).
The SM texture is more complicated because there is a system that decides whether Sam is going to be displayed in the SM texture based on a distance check. This is tricky because artists rarely place their objects' centers in a consistent manner, and an offset often has to be manually tuned.
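A minimal sketch of that Step 2 distance check, assuming a simple closer-to-the-light test; the function name, positions, and tunable offset are hypothetical, not the game's actual code:

```python
import math

def draw_sam_into_sm_texture(light_pos, sam_pos, mesh_center, offset=0.0):
    """Render Sam into a static mesh's shadow texture only when he stands
    between the light and the mesh, approximated as: Sam is closer to the
    light than the mesh's center. The manually tuned offset compensates
    for artists placing object centers inconsistently."""
    return math.dist(light_pos, sam_pos) < math.dist(light_pos, mesh_center) + offset
```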

Step 3:
---------
Do post-filtering on the shadows:
a) brightness/contrast adjustment
b) based on a per-light parameter, blur the shadow. The blur is constant across the whole shadow texture, so it only creates "fake soft shadows", but I think they look quite nice. The brightness/contrast tuning also puts a lot of control in the artists' hands to match the lightmap shadows with the dynamic shadows.
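The post-filter pass can be sketched like this (a minimal CPU version for clarity; the parameter names are mine, and the real game would do this on the GPU):

```python
def post_filter(tex, brightness=0.0, contrast=1.0, blur_radius=0):
    """Step 3 sketch: constant box blur ("fake soft shadows") followed by
    a brightness/contrast adjustment. tex is a 2-D list of floats in [0,1]."""
    h, w = len(tex), len(tex[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Blur radius is the same across the whole texture (per-light),
            # which is what makes the soft shadows "fake" but cheap.
            acc, n = 0.0, 0
            for dy in range(-blur_radius, blur_radius + 1):
                for dx in range(-blur_radius, blur_radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += tex[yy][xx]
                        n += 1
            v = acc / n
            # Contrast pivots around mid-grey; artists use these two knobs
            # to match dynamic shadows against the baked lightmap shadows.
            v = (v - 0.5) * contrast + 0.5 + brightness
            out[y][x] = min(1.0, max(0.0, v))
    return out
```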

Step 4:
---------
Project the shadow texture (the right one, depending on which one they should receive) onto each actor/SM/BSP that is meant to receive any.
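Step 4 boils down to standard projective texturing: transform each receiver vertex into the light's space, perspective-divide, and remap to texture coordinates. A bare-bones sketch (the names and the fov_scale parameter are illustrative):

```python
def shadow_uv(point_light_space, fov_scale=1.0):
    """point_light_space is a receiver vertex already transformed into the
    light's view space, with z pointing along the light direction.
    Returns (u, v) into the shadow texture, or None if the point is
    behind the light and so receives no projected shadow."""
    x, y, z = point_light_space
    if z <= 0.0:
        return None
    # Perspective divide, then remap [-1, 1] -> [0, 1].
    u = 0.5 + 0.5 * fov_scale * x / z
    v = 0.5 + 0.5 * fov_scale * y / z
    return (u, v)
```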

This is it !

I hope this helps. Half-Life 2 seems to implement a simpler version of this algorithm (only one level of projection, onto the ground).

Dany Lepage
3D Programmer
Splinter Cell 2
 
Hi Dany Lepage,

I've got a question related to Splinter Cell benchmark result.

Do you know why the Radeon 9600 Pro score is better than the 9500 Pro one, and why the GeForce FX 5800 Ultra score is better than the 5900 Ultra one?
 
I want to know where the hell the Wildcat VP was when they ran the benchmarks? I can't believe they left it out after all of the enthusiasm JC has for OGL2.0.
 
indio said:
Very sad that HardOCP would do this. They benchmark a game that isn't out, with cards from two manufacturers. Manufacturer A sets up the tests. Manufacturer B PROBABLY isn't even aware that a test of the game (which isn't available yet) is going to happen. One driver set is clearly broken; the other cripples the hardware. This clearly indicates ATI wasn't prepared for this (is that any surprise?) and Nvidia was.
Where are HARDOCP's journalistic standards?


That's the most rigged POS benchmark I've ever seen...
 