GeForce FX and displacement mapping?

Well, will DM ever really be used?
(don't take this as trashing the tech, I see its uses)
It seems to me that PC graphics are headed towards a better lighting model (a la Doom 3), and it is my understanding that DM/Truform screws up shadow algorithms. (Especially if the DM'ed object has a shadow cast upon it - otherwise, the silhouette just doesn't quite match, no biggie.)
 
Althornin said:
Well, will DM ever really be used?
(don't take this as trashing the tech, I see its uses)
It seems to me that PC graphics are headed towards a better lighting model (a la Doom 3), and it is my understanding that DM/Truform screws up shadow algorithms. (Especially if the DM'ed object has a shadow cast upon it - otherwise, the silhouette just doesn't quite match, no biggie.)

The only thing we do know is that Carmack won't use it. At least not for Doom 3.

I need to apologize to Matrox -- their implementation of hardware displacement mapping is NOT quad based. I was thinking about a certain other company's proposed approach. Matrox's implementation actually looks quite good, so even if we don't use it because of the geometry amplification issues, I think it will serve the noble purpose of killing dead any proposal to implement a quad based solution.
 
Althornin said:
Well, will DM ever really be used?
(don't take this as trashing the tech, I see its uses)
It seems to me that PC graphics are headed towards a better lighting model (a la Doom 3), and it is my understanding that DM/Truform screws up shadow algorithms. (Especially if the DM'ed object has a shadow cast upon it - otherwise, the silhouette just doesn't quite match, no biggie.)

Well, JC's current issues with HOS should be resolved with a fully-hardware shadow technique, one that takes into account the HOS.

Personally, I see other issues being more important in deciding whether or not to use DM, such as collision detection.

As an example, if the developer wants accurate collision detection, DM can't very well change the silhouette of the model very much, or else the collision detection just won't be accurate anymore. This can be even more problematic if DM is used for, say, terrain.

In other words, HOS like DM can really screw up essentially anything calculated on the CPU that is dependent upon geometry information.

I'm hoping that simpler HOS techniques, where the surface can be calculated as a relatively simple mathematical construct, won't have this problem. That is, hopefully developers will find ways to do the geometry-dependent calculations on the curved surfaces themselves (without tessellation... that is, directly on the curved surface). This would definitely cost more CPU power for a patch than for a single triangle, but if the mathematical construct that makes up the patch is simple enough, then it should be a lot cheaper than tessellation. (As a side note, since 3D display needs a lot more than just geometry position information, I see tessellation as the only realistic way to do 3D display.)
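
For what I mean by working directly on the curved surface, here's a rough sketch in plain C++ (all names are made up for illustration, and a quadratic Bezier triangle is used for brevity while real N-patches are cubic): a CPU-side query can evaluate the analytic patch at a parametric coordinate instead of walking tessellated triangles.

// Sketch only: evaluating a quadratic Bezier triangle directly at a
// barycentric coordinate (u, v, w), with w = 1 - u - v. A CPU query
// can use this analytic form instead of walking tessellated triangles.
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

struct QuadraticPatch {
    Vec3 p200, p020, p002;   // corner control points
    Vec3 p110, p011, p101;   // edge control points

    Vec3 eval(float u, float v) const {
        float w = 1.0f - u - v;
        Vec3 r = mul(p200, u * u);
        r = add(r, mul(p020, v * v));
        r = add(r, mul(p002, w * w));
        r = add(r, mul(p110, 2.0f * u * v));
        r = add(r, mul(p011, 2.0f * v * w));
        r = add(r, mul(p101, 2.0f * w * u));
        return r;
    }
};

int main() {
    QuadraticPatch patch = {
        {0, 0, 0}, {1, 0, 0}, {0, 1, 0},                      // corners
        {0.5f, 0, 0.2f}, {0.5f, 0.5f, 0.2f}, {0, 0.5f, 0.2f}  // bulged edges
    };
    Vec3 p = patch.eval(0.3f, 0.3f);   // a point on the curved surface itself
    std::printf("%f %f %f\n", p.x, p.y, p.z);
    return 0;
}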
 
Ray, I know what types of DM (and their limits) DX9 supports - I was asking whether adaptive tessellation is a required part of the DX9 compliance spec for hardware. The way I see it, the R300 and the NV30 do not fail to meet DX9 compliance in this regard, but this issue appears to be hyped up by the fanbois and various sites. So, again, do pre-sampled maps mean failure for DX9 compliance? Note that I'm not for the tit-for-tat features of the R300 or NV30.
 
Chalnoth said:
Personally, I see other issues being more important in deciding whether or not to use DM, such as collision detection.
As an example, if the developer wants accurate collision detection, DM can't very well change the silhouette of the model very much, or else the collision detection just won't be accurate anymore. This can be even more problematic if DM is used for, say, terrain.

In other words, HOS like DM can really screw up essentially anything calculated on the CPU that is dependent upon geometry information.
Like I said, for any intelligent collision detection system that uses hierarchical bounding volumes and exploits temporal coherence, HOS WON'T be a problem. Bounding volumes for HO surfaces need to be precalculated, and that's it.
Currently, most physics systems in games don't use per-polygon collision detection anyway. They use simple "proxy objects" like spheres, cylinders and boxes to approximate the real geometry for physics calcs.
Doing collision detection against the tessellated tris of HOS in games in this day and age would be _far beyond_ pushing the envelope, even though we do have practical algorithms and hardware that would permit it.
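
To make the "proxy object" point concrete, here's a minimal sketch in plain C++ (hypothetical names, not from any particular engine): the physics side only ever sees a sphere fitted around the base mesh once, so however finely the visual mesh is tessellated or displaced afterwards makes no difference to the collision test.

// Sketch: physics sees only a precomputed proxy sphere, never the render
// mesh, so tessellation/displacement of the visual geometry never enters
// into the collision test.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct ProxySphere {
    Vec3  center;
    float radius;
};

// Fit a sphere around the base mesh once, offline or at load time.
ProxySphere fitProxySphere(const std::vector<Vec3>& verts) {
    Vec3 c = {0, 0, 0};
    for (const Vec3& v : verts) { c.x += v.x; c.y += v.y; c.z += v.z; }
    float n = static_cast<float>(verts.size());
    c.x /= n; c.y /= n; c.z /= n;
    float r = 0.0f;
    for (const Vec3& v : verts) {
        float dx = v.x - c.x, dy = v.y - c.y, dz = v.z - c.z;
        r = std::max(r, std::sqrt(dx * dx + dy * dy + dz * dz));
    }
    return { c, r };
}

// The runtime collision test only ever touches the proxies.
bool spheresOverlap(const ProxySphere& a, const ProxySphere& b) {
    float dx = a.center.x - b.center.x;
    float dy = a.center.y - b.center.y;
    float dz = a.center.z - b.center.z;
    float rsum = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz <= rsum * rsum;
}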
 
The questions are:
How difficult is it to implement an "intelligent collision detection system"?
Is the extra work worth implementing DM, considering the other possibilities?
 
Evildeus said:
The questions are:
How difficult is it to implement an "intelligent collision detection system"?
Is the extra work worth implementing DM, considering the other possibilities?

I haven't even started a "stupid collision detection system" in our program, and my head already aches from thinking about what a PITA it will be. But then, I suppose that's why the guys figuring this stuff out are driving Ferraris, and I'm driving a '77 Lincoln.

Truthfully, though, I don't really think you need to modify your collision detection to accommodate displacement maps. Generally a DM would be used to add small details, or just to tessellate a mesh for smoothness of appearance and lighting purposes. The basic geometrical shape of the mesh won't be modified that much, and the pre-displaced geometry should still be accurate enough for collision detection. Usually, the bounding boxes/cylinders/spheres used in collision detection would offer enough padding that the difference between the displacement map and the original mesh wouldn't be noticeable anyway (in terms of models sticking into walls, and things of that nature).
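
A minimal sketch of that padding idea in plain C++ (hypothetical names and an assumed 8-bit map format): if you know the largest value in the displacement map and the scale/bias applied to it, you can inflate the base mesh's bounding sphere by that worst case once at load time, and the un-displaced geometry stays conservative for collision.

// Sketch: inflate a precomputed bounding sphere by the worst-case
// displacement, so collision can keep using the base (un-displaced) mesh.
#include <algorithm>
#include <cstdint>
#include <vector>

struct BoundingSphere {
    float cx, cy, cz;
    float radius;
};

// Assumed 8-bit displacement map plus the scale/bias the renderer applies.
struct DisplacementMap {
    std::vector<std::uint8_t> texels;
    float scale;   // world units per normalized map value
    float bias;    // constant offset added to every displacement
};

// Worst-case displacement anywhere on the map.
float maxDisplacement(const DisplacementMap& dm) {
    std::uint8_t maxTexel = 0;
    for (std::uint8_t t : dm.texels) maxTexel = std::max(maxTexel, t);
    return (maxTexel / 255.0f) * dm.scale + dm.bias;
}

// Done once at load time; collision then ignores the displacement map.
BoundingSphere padForDisplacement(BoundingSphere base, const DisplacementMap& dm) {
    base.radius += std::max(0.0f, maxDisplacement(dm));
    return base;
}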
 
Currently, most physics systems in games don't use per-polygon collision detection anyway. They use simple "proxy objects" like spheres, cylinders and boxes to approximate the real geometry for physics calcs.


In UT2k3 you can, if you want, select per-polygon collision for your objects, but be warned: doing this slows down the game with just one such object with a decent amount of polys. Per-polygon collision detection is still a very expensive operation, and we were told by Epic to stay away from it if we could.
 
Crusher said:
Truthfully, though, I don't really think you need to modify your collision detection to accommodate displacement maps. Generally a DM would be used to add small details, or just to tessellate a mesh for smoothness of appearance and lighting purposes. The basic geometrical shape of the mesh won't be modified that much, and the pre-displaced geometry should still be accurate enough for collision detection. Usually, the bounding boxes/cylinders/spheres used in collision detection would offer enough padding that the difference between the displacement map and the original mesh wouldn't be noticeable anyway (in terms of models sticking into walls, and things of that nature).

But if the displacement maps are only used for small details, then bump maps are 99% as good, with much less of a performance hit for the image quality (DM would need sub-pixel polygons for it to look as good as BM with a high-res map).

This was what I was trying to get at. If you use DM to control major features, then it screws up your collision detection (yes, I suppose you could recalculate the bounding volumes...). If you only use it for very small features, its benefits are reduced dramatically.

As a side note, the Cg demos include some neat little self-shadowing bump maps, eliminating another possible benefit of DM. They render okay on a GeForce4, but under NV30 emulation the shadows look very good.
 
The problem with bump maps is that when you look at them obliquely, it becomes obvious they're trickery.

Sure, they make great eye candy from a distance, but if you get down to inspecting them, it simply ruins the effect.
 
This was what I was trying to get at. If you use DM to control major features, then it screws up your collision detection (yes, I suppose you could recalculate the bounding volumes...). If you only use it for very small features, its benefits are reduced dramatically.

This has been mentioned a number of times in this thread - what developer uses anything but very 'approximated' geometry to calculate collision detection anyway?

Collision detection is done on the CPU, so you wouldn't be calculating high-detail geometry on the CPU; very low-detail approximations are used for collision detection.
 
Reverend said:
A manager at ATI told me. There is no way for adaptive tessellation on the R300. As mentioned, it is a hardware limitation. And yes, VS2.0 can do this.

OK, Rev, so this information:

This is from the educational demo, under the Trueform 2.0 section:

http://mirror.ati.com/vortal/r300/educational/main.html

"An adaptive tesselation option is also supported, which dynamically adjusts the tesselation level of a surface depending on the distance of the viewer. Thus nearby surfaces will have more detail and more polygons, while distant surfaces will have less."


...is completely incorrect, then? Just trying to keep things straight for my own sake, here...

(I know this is from the "TrueForm" section, but it's clear from the above-referenced demo that N-patches are used for both TrueForm and continuous-tessellation displacement mapping. Since adaptive tessellation is possible with N-patches for TrueForm, it seems to make sense that it's also possible for N-patch displacement mapping. Right? Wrong?)

So you're saying the manager is saying that no adaptive tessellation using N-patches, for any reason, is possible...?
 
GeForce FX can do DM, even N-patches.

I saw one NVIDIA driver that supported N-patches in D3D's HAL; it was listed in 3DMark2001 and the DX8 caps viewer with a GF4 Ti / GF3 Ti.
 
@cho

AFAIK, newer DX versions use RT-patches for N-patch emulation. Some older driver versions had RT-patches enabled, but the emulation was extremely slow, so NVIDIA removed the DirectX HOS support from the drivers.
 
Walt,
Adaptive tessellation is possible and displacement mapping is possible, but the two cannot be enabled at the same time, because the R300 doesn't support "true" displacement map sampling. It only supports a kind of displacement map sampling that doesn't work unless tessellation is turned off.

Pre-sampled displacement maps use the index of each vertex to find the sample. The developer has to line up the mesh and the map ahead of time, statically. Adaptive tessellation changes the number and index (position) of vertices in the stream, so they would no longer match up with the pre-sampled map; hence, tessellation has to be disabled. ATI's Flash presentation isn't developer documentation; it is a bullet-pointed feature list that does not list the limitations of those bullet points.

To do "real DM", the R300 would need a texture unit in the vertex shader. This texture unit would be able to convert vertex positions into interpolated texture coordinates and fetch displacement map values (possibly with atleast bilinear filtering). I don't believe this unit exists.
 
WaltC said:
Reverend said:
A manager at ATI told me. There is no way for adaptive tessellation on the R300. As mentioned, it is a hardware limitation. And yes, VS2.0 can do this.

OK, Rev, so this information:

This is from the educational demo, under the Trueform 2.0 section:

http://mirror.ati.com/vortal/r300/educational/main.html

"An adaptive tesselation option is also supported, which dynamically adjusts the tesselation level of a surface depending on the distance of the viewer. Thus nearby surfaces will have more detail and more polygons, while distant surfaces will have less."


...is completely incorrect, then? Just trying to keep things straight for my own sake, here...

(I know this is from the "TrueForm" section, but it's clear from the above-referenced demo that N-patches are used for both TrueForm and continuous-tessellation displacement mapping. Since adaptive tessellation is possible with N-patches for TrueForm, it seems to make sense that it's also possible for N-patch displacement mapping. Right? Wrong?)

So you're saying the manager is saying that no adaptive tessellation using N-patches, for any reason, is possible...?

Maybe ATi means what they wrote:

TruForm 2.0 supports continuous tessellation, which allows fractional tessellation levels... An adaptive tessellation option is also supported....

That's only for N-patches!

TruForm 2.0 also supports displacement mapping....

No word of adaptive tessellation for DM!

So

ATi: supports "precomputed DM" and N-patches (+ DX8/DX9 VS can do "sampled DM")
NVIDIA: no N-patches, RT-patches or DM (DX8/DX9 VS can do "sampled DM")
Matrox: no N-patches or RT-patches, but "sampled DM"

If DX8 VS could do "sampled DM" - where is the advantage of the Matrox solution?

Am I wrong?

Thomas
 
To do "real DM", the R300 would need a texture unit in the vertex shader. This texture unit would be able to convert vertex positions into interpolated texture coordinates and fetch displacement map values (possibly with atleast bilinear filtering). I don't believe this unit exists.

But you can do this in two passes with render to vertex buffer IF you can guarantee exactly the same tessellation on both passes.
I am assuming that you can specify a fixed input stream along with those created by the tessellator, and I'm not positive that this is possible (I haven't looked closely enough at the DX9 spec).
Even if this were possible, it's questionable whether or not it would be desirable from a performance standpoint.


[Edit] -- After thinking about this, you also need to know in advance how many final verts you would end up with; otherwise there is no way to guarantee writing the texture samples to sequential memory locations. [/Edit]
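
Here's a CPU-side analogy in plain C++ of why the final vertex count matters (hypothetical names; this only illustrates the indexing constraint, it isn't actual DX9 render-to-vertex-buffer code): pass one writes one sample per post-tessellation vertex into sequentially indexed slots, which requires both a known vertex count and an identical tessellation when pass two reads the slots back.

// CPU-side sketch of the two-pass idea; illustrates indexing only.
#include <cstddef>
#include <vector>

struct TexCoord { float u, v; };

// Stand-in for "sample the displacement map" in pass one.
float fetchDisplacement(const TexCoord& tc) {
    return 0.5f * tc.u + 0.5f * tc.v;   // placeholder value
}

// Pass one: for a *known* post-tessellation vertex count, write one sample
// per vertex to sequential slots. If the count or vertex order differed
// between passes, pass two would read the wrong slots.
std::vector<float> passOne(const std::vector<TexCoord>& tessellatedUVs) {
    std::vector<float> samples(tessellatedUVs.size());   // sized up front
    for (std::size_t i = 0; i < tessellatedUVs.size(); ++i)
        samples[i] = fetchDisplacement(tessellatedUVs[i]);
    return samples;
}

// Pass two: the identical tessellation replays, and vertex i reads slot i.
float displacementForVertex(const std::vector<float>& samples, std::size_t i) {
    return samples[i];
}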
 
If there's still any doubt, I sent an email to ATi asking about this and got it confirmed that the 9700 does NOT support any other form of DM than presampled.
 