GeForceFX and displacement mapping?

WaltC said:
a lot of 3D games don't have a lot of collisions going on (CRPG's, adventures, etc.)

Space and flight sims would be about the only games I can think of with a minimal amount of collision detection. I can't remember the last time I played an RPG and was able to walk through walls. In fact, I think a lot of games need more collision detection than they have. I recall falling through the ground multiple times in Hidden and Dangerous, for example, and I believe it's due to a poor collision detection algorithm, as there don't appear to be visible gaps in the terrain itself.
 
Humus said:
Collision detection will probably not be much of a problem. On a terrain you can use the very same texture to compute the height at the character's current position.
Is this as trivial as you make it out to be?
On a character you'll probably approximate the guy with a cylinder or set of simple shapes, like most developers do already.
This is adequate for some things, but if you're talking about increasing detail by using displacement maps, then you'd want more detail for the collisions as well or else you end up with characters colliding with thin air.
If you use it as a replacement for bump mapping you don't really need to change your collision detection at all.
I don't understand this: Bump maps don't affect position.
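Humus' terrain idea is easy to sketch: the same height texture that displaces the terrain can be sampled on the CPU to find the ground height under the character. A minimal sketch, assuming a simple 2D height grid with bilinear filtering (a real terrain would interpolate across its actual triangles, so this is only an approximation; all names are illustrative):

```python
def terrain_height(heightmap, x, z, scale=1.0):
    """Bilinearly sample a 2D height grid at continuous coords (x, z)."""
    rows, cols = len(heightmap), len(heightmap[0])
    # Clamp to the valid interpolation range.
    x = min(max(x, 0.0), cols - 1.001)
    z = min(max(z, 0.0), rows - 1.001)
    x0, z0 = int(x), int(z)
    fx, fz = x - x0, z - z0
    h00 = heightmap[z0][x0]
    h10 = heightmap[z0][x0 + 1]
    h01 = heightmap[z0 + 1][x0]
    h11 = heightmap[z0 + 1][x0 + 1]
    # Interpolate along x, then along z.
    top = h00 * (1 - fx) + h10 * fx
    bot = h01 * (1 - fx) + h11 * fx
    return (top * (1 - fz) + bot * fz) * scale
```

The character's feet are then simply kept at or above `terrain_height(...)` each frame, which is why displaced terrain costs so little extra collision work.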
 
OpenGL guy said:
Is this as trivial as you make it out to be?

Yes. So long as you use a decent tessellation level it should work without problems.

This is adequate for some things, but if you're talking about increasing detail by using displacement maps, then you'd want more detail for the collisions as well or else you end up with characters colliding with thin air.

Doing anything beyond simple approximations for characters, like bones as cylinders, is really overkill. It's hardly noticeable in most situations, at least if there's some kind of action going on with the characters.

I don't understand this: Bump maps don't affect position.

I was thinking about using displacement mapping to add fine detail instead of bump mapping, on a wall for instance. In that case no change needs to be made to the collision detection; using the plane equation suffices.
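The plane-equation point can be made concrete: bump maps only perturb shading normals and never move geometry, so a collision test against the wall's plane is unchanged. A minimal sketch (the function names and the sphere-vs-plane test are illustrative, not from any particular engine):

```python
def signed_distance(plane_n, plane_d, point):
    """Signed distance from a point to the plane n.p + d = 0 (n unit length)."""
    nx, ny, nz = plane_n
    px, py, pz = point
    return nx * px + ny * py + nz * pz + plane_d

def sphere_hits_wall(plane_n, plane_d, center, radius):
    """A bounding sphere touches the wall when its center is within
    one radius of the plane -- bump mapping never changes this test."""
    return signed_distance(plane_n, plane_d, center) <= radius
```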
 
Sabastian said:
I don't think those images really present a completely accurate comparison. The thing that displacement mapping gives you is obviously 'rough' silhouette edges. For 'flat-on' areas, ordinary bump mapping should give nearly the same effect.

In those images, it looks to me that the bump map (in the first picture) is not really an accurate representation of the surface.

I'm not anti displacement mapping; it's just that unless you subdivide into tiny polygons, it really only helps the silhouette/circumference of a model, whereas bump mapping helps the whole area (which for an N^2-pixel model is going to be N times as many pixels as the silhouette).

All IMHO of course.
PS: Glad you only posted links to those pics!!!!
 
Sage said:
I'll bet DM is possible via a vertex shader program. It's quite possible that, with the advancements in the vertex shader (such as dynamic flow control), DM can be executed without dedicated hardware.
Well, I think you are right. I don't know where this "no DM" comes from :-? , but in the official paper NVIDIA says the GF FX supports it, more precisely:
- Vertex displacement mapping
- Geometry displacement mapping
(P. 7)
http://www.nvidia.com/docs/lo/2416/SUPP/TB-00653-001_v01_Overview_110402.pdf
 
Precise collision detection with DM and HOS is quite possible and generally would not require a lot of extra work.
You see, any intelligent collision detection system is hierarchical; the best ones go down to per-triangle collision. So the detection goes from general area checks down to general bounding volume checks (in order of computational cost, i.e. AABBs first, then OOBBs and hierarchical OOBBs), and only when all of those pass is exact per-triangle collision checked between the two colliding volumes (both containing perhaps 20-50 polys max).
With DM and HOS one extra step is required. One needs to extrude the bounding volume around the triangles (based on a precalculated max displacement value in the displacement map) or create a bounding volume around the HOS control points.
This way one only needs to tessellate the polys that are really colliding in the scene, and this info can be temporally cached in most cases.
Also, one could figure out all potentially colliding polys/surfaces first, and then send them to the GPU for tessellation as one batch, and read the vertices back.

So some extra work, but probably only a small performance hit. Also, if the displacement values are quite small relative to the polygon area (say, max displacement less than 5% of the longest edge of the polygon) and really only used for cosmetic purposes, collision can safely be approximated per polygon (it's not exact in any system anyway; all physics/collision systems have some tolerance for how deeply two bodies can penetrate).
For example, in the scene that ATI demoed at the R9700's launch, where the F50's tires were displacement mapped so you could see the individual grooves, one could still safely do collision detection with a bounding cylinder.
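The extra step described above (extruding bounding volumes by the precalculated max displacement) can be sketched as follows, assuming axis-aligned boxes and a conservative uniform extrusion; all names are illustrative:

```python
def triangle_aabb(verts, max_displacement):
    """AABB of a triangle, grown by the patch's max displacement so that
    displaced geometry cannot poke outside its bounding volume."""
    xs = [v[0] for v in verts]
    ys = [v[1] for v in verts]
    zs = [v[2] for v in verts]
    d = max_displacement
    lo = (min(xs) - d, min(ys) - d, min(zs) - d)
    hi = (max(xs) + d, max(ys) + d, max(zs) + d)
    return lo, hi

def aabbs_overlap(a, b):
    """Only overlapping (extruded) boxes proceed to tessellation and the
    exact per-triangle tests."""
    (alo, ahi), (blo, bhi) = a, b
    return all(alo[i] <= bhi[i] and blo[i] <= ahi[i] for i in range(3))
```

Only triangle pairs whose extruded boxes overlap ever need to be tessellated, which is what keeps the cost down.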

Also, shadow volumes aren't so impossible to deal with in a DMed scene. The only thing that gets more complicated is the silhouette calculation, so you probably need to batch up the polys that potentially contribute to the silhouette, send them to the tessellator, and then redo the silhouette calculation on the resulting vertices.

All that isn't as simple as it seems on paper and will have some pathological cases where you have to implement stupid workarounds or just live with artifacts, but hey, that's what developers do, right? I mean, lightmaps had their downsides as well, but Quake rocked nevertheless.
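For the silhouette calculation mentioned above, the standard shadow-volume test is that an edge lies on the silhouette when exactly one of its two adjacent triangles faces the light; after displacement, this test would simply be redone on the tessellated triangles. A minimal sketch of that test (names illustrative, shared-edge bookkeeping omitted):

```python
def faces_light(tri, light):
    """True if the triangle's geometric normal points toward the light."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    # Face normal = (b - a) x (c - a).
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    lx, ly, lz = light[0] - ax, light[1] - ay, light[2] - az
    return nx * lx + ny * ly + nz * lz > 0.0

def is_silhouette_edge(tri_a, tri_b, light):
    """The shared edge is a silhouette edge iff exactly one of the two
    adjacent triangles faces the light."""
    return faces_light(tri_a, light) != faces_light(tri_b, light)
```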
 
DemoCoder said:
The F50 car is not displacement mapped, it is normal mapped. You can toggle it off in the demo.
OK, I haven't seen the demo, only launch materials and presentations.
http://mirror.ati.com/vortal/r300/educational/main.html
Look at Featured technologies/Truform2.0/slide 3 , it outlines normal mapping vs. displacement mapping
I can't seem to find it at the moment, but I remember one flick with a red Ferrari on a highway, shot from behind, where the effects of DM were clearly shown off.
 
Joe DeFuria said:
I thought the F50 car toggled between normal and displacement maps...precisely to show the difference.

It toggles between no normal maps and FP normal maps.
 
Like I said, as far as I know there isn't a single demo demonstrating DM on either the DX9 refrast or on the Radeon 9700 using any API.

The Matrox DM demo is the only one I know that exists.
 
This whole topic cries out for good ole' investigative journalism :hint: :hint: ;)

Drag them into daylight. ATI has DM up on a Flash slide at the moment; when can we expect API or SDK docs and sample code? Or will the Flash file just quietly disappear from the site?
What's NV's stance on it?
Does Matrox have OGL extensions or DX9 beta (RC0) drivers supporting the feature?
 
The car/normal map demo can be downloaded as part of ATI's free Normalmapper tool on the developer site. No displacement mapping is used.
 
DemoCoder said:
Can't any of the ATI employees here comment on the real deal? Is DM switched off in the ATI DX9 drivers because it isn't implemented yet, or because the hardware only supports pre-sampled displacement maps?

The R300 is a publicly released chip and shipping to end users now; there should be some documentation about the actual level of support available in the HW for these features, beyond the whitepapers, which don't really tell us what the HW actually does.
Seems no one wants to respond :rolleyes:
 
I think OpenGL guy's somewhat disparaging remarks about displacement mapping may be a clue (a very unofficial clue) that displacement mapping, even if it is in fact a hardware feature of the R300, is not on their short list of things to include in the drivers.

I imagine that if it is well-supported in hardware on the R300, and not on the GeForce FX, that marketing concerns might change the priority of implementing that feature.
 
Couldn't you use the two-pass method suggested for the P10, regardless of whether there is direct support? In the first pass you do tessellation as usual and do the lookup in the displacement map in the pixel shader, which writes the result to the appropriate location in the vertex buffer, to be used in the second pass for displacement mapping.

The drivers could do this transparently (although probably not very efficiently).

Marco

BTW, I doubt that even software displacement mapping would be a problem for hierarchical collision detection.
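The displacement pass itself is also simple to do in software: sample the displacement map per tessellated vertex and push the vertex along its normal, which is exactly the data the pixel shader would write into the vertex buffer in the P10-style two-pass scheme. A minimal CPU sketch, assuming nearest-texel sampling and illustrative names:

```python
def displace(vertices, normals, uvs, disp_map, scale=1.0):
    """Return vertices moved along their normals by the sampled height.

    disp_map is a 2D grid sampled at the nearest texel for simplicity;
    real hardware would filter the map.
    """
    rows, cols = len(disp_map), len(disp_map[0])
    out = []
    for (vx, vy, vz), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
        # Nearest-texel sample of the displacement map.
        tx = min(int(u * cols), cols - 1)
        ty = min(int(v * rows), rows - 1)
        h = disp_map[ty][tx] * scale
        out.append((vx + nx * h, vy + ny * h, vz + nz * h))
    return out
```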
 
antlers4 said:
I think OpenGL guy's somewhat disparaging remarks about displacement mapping may be a clue (a very unofficial clue) that displacement mapping, even if it is in fact a hardware feature of the R300, is not on their short list of things to include in the drivers.
I think so ;)
 
They could try a multipass technique consisting of a pre-processing step followed by the normal rendering step, which uses the sampled data.

Assuming VS2.0 sampling, this involves simple sampling based on a tessellated mesh and a texture. You'd need to create a VS which generates the tessellated texture coordinates so you know where to sample your displacement map.

The first problem is how to pass that to the PS in a sensible way, since effectively you want the PS to sample per vertex, not per pixel. So either you need a special mode in the hardware, or you need to build a special mesh which effectively consists of single-pixel-sized triangles (each pixel being a sample taken from the map). The PS then simply renders each of these single-pixel triangles by sampling the displacement map with the texture coordinate provided.

And that's issue two: rendering polygons with a size of one pixel is very inefficient. Remember that both the R300 and NV30 have 8 pipelines; if you go and render one-pixel triangles they are potentially running at 1/8th of their efficiency, and if their pipelines handle two blocks of 2x2 pixels individually they might get away with 1/4th efficiency.

Next, they need to write out in a format that is supported as a write target by the PS and as a read source by the VS... which might be tricky. Yes, it's all just data, but the data order might be completely different, there might have to be headers, the memory layout might be different, etc.

The way I see it, this kind of technique can run into all kinds of little problems. There might be workarounds, but there is a huge risk that the hardware will end up on an incredibly inefficient path (e.g. the pixel shader running at 1/8th efficiency, or not even one displacement sample per clock due to the multipass nature). This kind of hack costs pre-processing vertex shader clocks and pre-processing pixel shader clocks; it uses output bandwidth and, after that, input bandwidth again. All in all, NV and ATI might have decided that it is possible, but so slow and tricky to get right that the software version might simply be faster on most CPUs.

In the end you want a true hardware implementation that has a cost only in the vertex shader, since it is a vertex shader effect; there's no need to spend pixel shader resources (clock cycles usable for something more valuable) on a vertex shader effect.

K-
 
Evildeus said:
Sage said:
I'll bet DM is possible via a vertex shader program. It's quite possible that, with the advancements in the vertex shader (such as dynamic flow control), DM can be executed without dedicated hardware.
Well, I think you are right. I don't know where this "no DM" comes from :-? , but in the official paper NVIDIA says the GF FX supports it, more precisely:
- Vertex displacement mapping
- Geometry displacement mapping
(P. 7)
http://www.nvidia.com/docs/lo/2416/SUPP/TB-00653-001_v01_Overview_110402.pdf

It came from Doug Rogers, a guy from NVIDIA. Some more mails:

my question:

Hi!

NVIDIA say in its docs:

HOS (gf4/gffx)
continuous tessellation (gf4/gffx)
vertex dm (gffx)
geometry dm (gffx)

So I'm a little bit confused.
Why did they drop it? Any other HOS support?

Regards,
Thomas

Doug's answer/question:

Which document is this? The GeForce3 and GeForce4 support a bezier/bspline
patch implementation that can be continuously tessellated. This is not
exposed in Direct3D, only OpenGL. There is no HOS support in GeForce FX.

-Doug

my answer:

Hi!

On this page:

http://www.nvidia.com/view.asp?PAGE=channel_techbriefs

Section:
Technical Brief: NVIDIA CineFX Shaders

Thats the doc:
http://www.nvidia.com/docs/lo/2413/SUPP/TB-00626-001_v01_Shaders_110402.pdf

Regards,
Thomas

So, now I'm waiting for info from Doug.

Thomas
 