Doomtrooper said: Again, there is only one company in the OpenGL ARB that has proprietary extensions.
Chalnoth said: Oh, you mean like:
GL_ATI_separate_stencil
GL_ATI_texture_float
GL_ATI_vertex_array_object
GL_ATI_fragment_shader
Proprietary = vendor-specific.
BenSkywalker said: Would you consider this a decent description of what they are doing in SC?
In this sample, the 3-D object that casts shadows is a bi-plane. The silhouette of the plane is computed in each frame using an edge-detection algorithm: a silhouette edge is one shared by two polygons whose normals face in opposite directions with respect to the light vector (one polygon faces toward the light, the other away). The resulting edge list (the silhouette) is extruded away from the light source into a 3-D object known as the shadow volume, since every point inside the volume is in shadow.
Next, the shadow volume is rendered into the stencil buffer twice. First, only forward-facing polygons are rendered, and the stencil-buffer values are incremented each time. Then the back-facing polygons of the shadow volume are drawn, decrementing values in the stencil buffer. Normally, all increments and decrements cancel each other out. However, because the scene was already rendered with normal geometry (in this case, the plane and the terrain), some pixels fail the z-buffer test as the shadow volume is rendered, leaving the counts unbalanced. Any non-zero values left in the stencil buffer correspond to pixels that are in shadow.
Finally, the remaining stencil-buffer contents are used as a mask while a large, all-encompassing black quad is alpha-blended into the scene. With the stencil buffer as a mask, only pixels in shadow are darkened.
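For comparison with the shadow-buffer discussion later in the thread, here is a minimal sketch of those two stencil passes expressed as Direct3D 9 render states. This is my own illustration rather than the SDK sample's actual code; dev, DrawShadowVolume() and DrawFullScreenBlackQuad() are assumed stand-ins for the app's device and draw calls.

#include <d3d9.h>

void DrawShadowVolume(IDirect3DDevice9* dev);        // app-provided
void DrawFullScreenBlackQuad(IDirect3DDevice9* dev); // app-provided

void RenderStencilShadow(IDirect3DDevice9* dev)
{
    // The normal scene (plane + terrain) has already been rendered, so the
    // z-buffer is populated. Depth writes are disabled so the volume
    // geometry only tests against it.
    dev->SetRenderState(D3DRS_STENCILENABLE, TRUE);
    dev->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
    dev->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_ALWAYS);
    dev->SetRenderState(D3DRS_STENCILFAIL, D3DSTENCILOP_KEEP);
    dev->SetRenderState(D3DRS_STENCILZFAIL, D3DSTENCILOP_KEEP);

    // Pass 1: forward-facing volume polygons increment the stencil count.
    dev->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
    dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_INCR);
    DrawShadowVolume(dev);

    // Pass 2: back-facing polygons decrement it. Where the back face is
    // hidden behind scene geometry it fails the z-test and the increment
    // is never cancelled, leaving a non-zero stencil value.
    dev->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
    dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_DECR);
    DrawShadowVolume(dev);

    // Final pass: use the stencil as a mask and alpha-blend a black quad
    // over only those pixels whose stencil value is non-zero.
    dev->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_NOTEQUAL);
    dev->SetRenderState(D3DRS_STENCILREF, 0);
    dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    dev->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
    dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
    DrawFullScreenBlackQuad(dev);
}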
BenSkywalker said: Perhaps if you substituted S3 or Matrox, that would be a good point. Throwing the demo up on a website likely wouldn't be the best way to avoid leaks, if that is the reason they are doing it this way.
Already addressed. I'm sure ATI (assuming it was them) has taken measures to avoid further leaks and communicated those measures to id. Like any company would.
In any case, if id thought it was "important at this time" to have some benchmark comparisons, having an IHV sponsor those benchmarks without the knowledge of the other competing IHVs is simply bad form, don't you agree?
Natoma said: For instance, R300 runs the ARB2 path, and from what I understand, ARB2 extensions are standard OGL2 extensions, meaning that any OGL2 card should be able to run ARB2 extensions just fine.
Indeed! All one has to do is look at the OpenGL extension registry. It says in the NV extensions: "IP Status: NVidia Proprietary". No other company appears to do the same...
Humus said: Proprietary != vendor specific.
"Splinter Cell's shadow system is a major part of the game. On NV2x/NV3x hardware, it runs using a technique called shadow buffers. This technique renders the scene from every shadow-casting light and stores a depth buffer that represents each pixel viewed by that light source. Each pixel has X, Y, Z coordinates in the light's coordinate system, and these coordinates can be transformed, per pixel, into the viewer's coordinate system. It is then easy to compare against the depth stored in the Z buffer to figure out whether the pixel viewed by the camera is the same one viewed by the light or is occluded by it. If they are the same, the pixel is lit; if the light's pixel is in front of the viewer's pixel, the pixel is in shadow.
On all other current hardware, the game uses another technique called projected shadows (shadow projectors). The technique is somewhat similar: we render the scene from the light's point of view, but instead of storing the depth, we store the color intensity in a texture. That texture is then mapped, per vertex, onto each object that is going to receive the shadow. To allow objects to cast shadows onto other objects that are themselves casting shadows, Splinter Cell uses a three-level shadow-casting algorithm. In general, the first level is used to compute the shadow used on dynamic actors like Sam, the second level computes the shadow used by static meshes like a table or boxes, and the final level is used for the projection onto the BSP. This system allows Sam to receive the shadow of a gate on him; then Sam and the gate can cast onto a box; and finally all three objects can cast onto the BSP (the ground). The system also has a distance-check algorithm to determine whether or not Sam's shadow should be projected onto a static mesh (like a box), based on their relative positions. Both systems have their own strengths and weaknesses. The main advantage of the shadow buffer algorithm is how easy it is to work with; shadow projectors are tricky and difficult to use."
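To make the projector half of that concrete, here is an illustration of the core math only (this is not Ubi Soft's code): each receiver vertex is transformed into the light's clip space and remapped to texture coordinates in the projected intensity texture. Mat4, Vec4, Transform and lightViewProj are assumed helper names.

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

// 4x4 matrix times homogeneous column vector.
Vec4 Transform(const Mat4& m, const Vec4& v)
{
    Vec4 r;
    r.x = m.m[0][0]*v.x + m.m[0][1]*v.y + m.m[0][2]*v.z + m.m[0][3]*v.w;
    r.y = m.m[1][0]*v.x + m.m[1][1]*v.y + m.m[1][2]*v.z + m.m[1][3]*v.w;
    r.z = m.m[2][0]*v.x + m.m[2][1]*v.y + m.m[2][2]*v.z + m.m[2][3]*v.w;
    r.w = m.m[3][0]*v.x + m.m[3][1]*v.y + m.m[3][2]*v.z + m.m[3][3]*v.w;
    return r;
}

// Maps a receiver vertex to (u,v) in the light's projected shadow texture.
void ShadowTexCoord(const Mat4& lightViewProj, const Vec4& worldPos,
                    float& u, float& v)
{
    Vec4 clip = Transform(lightViewProj, worldPos);
    // Perspective divide, then remap clip space [-1,1] to texture [0,1].
    u = 0.5f + 0.5f * (clip.x / clip.w);
    v = 0.5f - 0.5f * (clip.y / clip.w);
}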
In the first mode they use the shadow buffer technique, where the scene is rendered to a depth texture from the light source's POV, and that stored depth is then compared, after transforming between light space and view space, with the rasterized depth to decide whether a given pixel is in shadow.
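Reusing the illustrative Mat4/Vec4/Transform helpers from the sketch above, that per-pixel shadow-buffer test might look roughly like this; the depth-map lookup and the bias constant are my assumptions, not Splinter Cell's actual values.

// Returns true if worldPos is occluded from the light, i.e. in shadow.
bool InShadow(const Mat4& lightViewProj, const Vec4& worldPos,
              const float* lightDepthMap, int mapSize)
{
    Vec4 clip = Transform(lightViewProj, worldPos);
    if (clip.w <= 0.0f)
        return false;                        // behind the light
    float u = 0.5f + 0.5f * (clip.x / clip.w);
    float v = 0.5f - 0.5f * (clip.y / clip.w);
    float depth = clip.z / clip.w;           // this point's depth as the light sees it
    int tx = (int)(u * (mapSize - 1));
    int ty = (int)(v * (mapSize - 1));
    if (tx < 0 || ty < 0 || tx >= mapSize || ty >= mapSize)
        return false;                        // outside the light's view
    float stored = lightDepthMap[ty * mapSize + tx]; // depth the light actually saw
    const float bias = 0.0015f;              // fudge factor against self-shadow acne
    return depth > stored + bias;            // something nearer occludes this point
}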
In the second mode they use normal stencil-based shadow projection, similar to what you describe above. This would be used by default on other vendors' cards, such as ATI, Matrox, SiS, etc.
Stencil buffers are a depth-buffer technique that can be updated as geometry is rendered, and used again as a mask for drawing more geometry. Common effects include mirrors, shadows (an advanced technique), dissolves, and so on.
Humus said: Natoma said: For instance, R300 runs the ARB2 path, and from what I understand, ARB2 extensions are standard OGL2 extensions, meaning that any OGL2 card should be able to run ARB2 extensions just fine.
Gah! All these '2's. There are ARB extensions, but no "ARB2 extensions". OpenGL2 extensions are not yet finalized. DoomIII does not use OpenGL2, except for some experiments Carmack mentioned he did when he got his 3DLabs cards. Since then, the GL2 specs have changed a lot.
BenSkywalker said: That isn't supported under DirectX?
Initially the conversation about shadows in Splinter Cell was sparked by your assertion that nVidia cards produce higher quality shadows than ATI cards and are thus being penalised in the benchmarks.
LeStoffer said: The problem, however, is that ATI's products were benchmarked as well, when we don't have a clue whether ATI was informed of, or agreed to, this taking place.
Maybe the fault really lies with the reviews this time around? (Sorry to spoil the NV vs. ATI war.)
BenSkywalker said: Andy,
I don't see that we agree.
http://msdn.microsoft.com/library/d...mplesAndToolsAndTips/Samples/shadowvolume.asp
The conversation went the way it did as I had already dug up the relevant doc from Microsoft and the quotes from Ubi Soft. What is it that they are supposed to be doing that is not supported under DX?
Edit-
Forgot to mention-
Initially the conversation about shadows in Splinter Cell was sparked by your assertion that nVidia cards produce higher quality shadows than ATI cards and are thus being penalised in the benchmarks.
Using projected shadows, the NV3X boards sit with pixel pipes idle that would be utilized if they ran shadow buffers. Will that allow them to run faster? Can't be sure, but it wouldn't shock me. If they can both run faster and look better, then they are being penalized twice.
Shadow buffers implemented the way they do them are not supported in the DX spec. If you read Ubi Soft's explanation of shadow buffers, you can see that it is very different from the stencil shadows in the document you point to above.
BenSkywalker said: Shadow buffers implemented the way they do them are not supported in the DX spec. If you read Ubi Soft's explanation of shadow buffers, you can see that it is very different from the stencil shadows in the document you point to above.
Looking for more detail on how they (nV and Ubi Soft) are handling it. The only big difference I can see is that they are rendering out to a depth texture instead of to the stencil buffer (which is odd, as SC doesn't have soft shadows from what I have seen). Everything they are doing in the samples given by nV is supported by DirectX, although it is an odd approach (utilizing resources in a fashion they weren't designed for).
HRESULT hr = pD3D->CheckDeviceFormat(
    D3DADAPTER_DEFAULT,    // default adapter
    D3DDEVTYPE_HAL,        // HAL device
    D3DFMT_X8R8G8B8,       // display mode
    D3DUSAGE_DEPTHSTENCIL, // shadow map is a depth/stencil surface
    D3DRTYPE_TEXTURE,      // shadow map is a texture
    D3DFMT_D24S8           // format of shadow map
    );
Note that since shadow mapping in Direct3D relies on "overloading" the meaning of an existing texture format, the above check does not guarantee hardware shadow map support, since it's feasible that a particular hardware / driver combo could one day exist that supports depth texture formats for another purpose. For this reason, it's a good idea to supplement the above check with a check that the hardware is GeForce3 or greater.
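As a hedged follow-up, a success path might look like the sketch below (D3D9-style; pDevice, the 512x512 size and the vendor-ID check are my assumptions, not part of the paper).

IDirect3DTexture9* pShadowMap = NULL;
if (SUCCEEDED(hr))
{
    // Optionally confirm the adapter really is NVIDIA hardware, per the
    // caveat above (0x10DE is NVIDIA's PCI vendor ID).
    D3DADAPTER_IDENTIFIER9 ident;
    pD3D->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &ident);
    if (ident.VendorId == 0x10DE)
    {
        // Create the depth texture that will be rendered from the light's POV.
        hr = pDevice->CreateTexture(512, 512, 1,
                                    D3DUSAGE_DEPTHSTENCIL,
                                    D3DFMT_D24S8,
                                    D3DPOOL_DEFAULT,
                                    &pShadowMap,
                                    NULL);
    }
}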
tex t0 // normal map
tex t1 // decal texture
tex t2 // shadow map
dp3_sat r0, t0_bx2, v0_bx2 // light vector is in v0
mul r0, r0, t2             // modulate lighting contribution by shadow result
mul r0, r0, t1             // modulate lighting contribution by decal
tex t2 // shadow map
This allows you to determine, for other objects, where they pass in and out of that object's cast shadow, by using the resulting stencil value as a mask and rejecting pixels based on the stencil test.
There is no commonality between these techniques at all, except that they are both used to generate shadows.
nVidia's implementation also apparently detects the format of this depth texture when it is attached to the shader, and uses a type of texture filtering not specified anywhere in D3D.
tex t0 // normal map
tex t1 // decal texture
tex t2 // shadow map
dp3_sat r0, t0_bx2, v0_bx2 // light vector is in v0
mul r0, r0, t2             // modulate lighting contribution by shadow result
mul r0, r0, t1             // modulate lighting contribution by decal
There is no code in this shader that actually does any depth comparison to decide if something is in shadow!
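In other words, the depth comparison happens in the texture unit at sampling time: with a depth-format texture bound, the "tex t2" fetch returns the comparison result rather than a depth value. A rough C++ sketch of the per-texel behaviour (names are illustrative only, and the proprietary filtering mentioned above is ignored):

// Roughly what the hardware computes per texel when sampling the bound
// depth texture: 1 = lit, 0 = in shadow.
float ShadowSampleResult(float storedDepth, float fragmentDepthInLightSpace)
{
    return (fragmentDepthInLightSpace <= storedDepth) ? 1.0f : 0.0f;
}

That result is what the "mul r0, r0, t2" instruction then uses to modulate the lighting.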