First practical realtime GI engine aimed at next-gen consoles/PC...

NB: This is not "parallax mapping" or "offset bump-mapping" - the Divide 2004 engine performs a true computation of displacement.

Divide is very nice, but I personally doubt that it performs a "true computation of displacement."

Just look at the case from before he fixed the border effect:
http://www.divideconcept.net/d2k4/render/bordereffect.jpg
Clearly there was a lot of texture coordinate "pushing around" going on there. Even after he fixed it you can still see white pixels from the text showing up where they shouldn't be; if you put a ruler on your screen, the text doesn't follow a straight line.

Also, it would have been nice to see a test case with a checkerboard displacement texture.
 
That ran great and easily. It's still a pretty simple example, but it certainly has the look of full-featured offline displacement mapping.

From the little I know and understand, I'm not sure it would work on skinned geometry.
This guy is something of a genius; he stopped working on it to do some 3D camera stuff (extracting a z-buffer from a movie shot with a camera).
He has a blog, too (in French):
http://blogs.nofrag.com/divide/
 
Divide is very nice, but I personally doubt that it performs a "true computation of displacement."

It computes subpixel displacement. The displacement map is first converted into his own specific (and secret) format; a big part of the trick is in this encoding.
 
Oh, now I really see what he means by per-pixel. This isn't a pixel shader; he renders every pixel individually.

I'm currently writing such an engine (http://divide.3dvf.net/d2k4/index.htm); however it cannot use OpenGL since it is per-pixel computation of a non-polygonal rendering.
That somewhat puts a damper on the algorithm.

Edit: Never mind, I've worked out what he is doing and it should be fine using just pixel shaders. He is approximating raytracing via impostors. Here are distance impostors used for environment mapping: http://www.iit.bme.hu/~szirmay/ibl_link.htm
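For what it's worth, here's a rough CPU sketch in Python of the distance-impostor refinement idea from that link; the analytic "environment" below is a made-up ellipsoid standing in for a cube map with stored distances, so treat it as an illustration of the general technique, not of Divide's actual method.

[code]
import numpy as np

def env_distance(direction):
    # Hypothetical environment: an ellipsoid around the reference point
    # (the origin). Returns the distance to the surface along 'direction';
    # in the real technique this would come from a cube map texel.
    a, b, c = 4.0, 3.0, 5.0
    d = direction / np.linalg.norm(direction)
    return 1.0 / np.sqrt((d[0] / a) ** 2 + (d[1] / b) ** 2 + (d[2] / c) ** 2)

def distance_impostor_hit(x, ray_dir, iterations=4):
    """Approximate where the ray x + t*ray_dir hits the environment,
    using only distance lookups taken from the reference point."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    guess_dir = ray_dir          # initial guess: environment infinitely far
    hit = x
    for _ in range(iterations):
        dist = env_distance(guess_dir)            # one dependent lookup
        # Move along the ray to the point that is 'dist' away from the
        # reference point: solve |x + t*ray_dir| = dist (positive root).
        b = np.dot(x, ray_dir)
        c = np.dot(x, x) - dist * dist
        t = -b + np.sqrt(max(b * b - c, 0.0))
        hit = x + t * ray_dir
        guess_dir = hit / np.linalg.norm(hit)     # refine lookup direction
    return hit

# Reflection-style ray shot from a point inside the environment:
print(distance_impostor_hit(np.array([1.0, 0.5, -0.5]), np.array([0.3, 0.9, 0.1])))
[/code]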
 
bloodbob said:
Oh, now I really see what he means by per-pixel. This isn't a pixel shader; he renders every pixel individually.


That somewhat puts a damper on the algorithm.

Edit: Never mind, I've worked out what he is doing and it should be fine using just pixel shaders. He is approximating raytracing via impostors. Here are distance impostors used for environment mapping: http://www.iit.bme.hu/~szirmay/ibl_link.htm

How well will this method perform in today's gaming environment? Can you hit a sustainable framerate around 30 fps with the amount of geometry and complex effects of today's games?
 
ROG27 said:
How well will this method perform in today's gaming environment? Can you hit a sustainable framerate around 30 fps with the amount of geometry and complex effects of today's games?
When I say I've worked it out, I mean I've taken an educated guess. It would depend on the number of refinement steps (each one is a dependent texture lookup; that's how he fixed the border effect), and of course it has its failures if you look at it from the wrong angles, depending on the height map. In many of his videos you'll notice silhouetting around the edges of shapes when he looks at them at low angles; that's the method incorrectly looking up the texture coordinate, so it's actually viewing the nearby valley, which is shaded dark. Also, in a few cases you'll see rubber-banding, which can't really be fixed.

He was talking about getting 20 fps full screen on his 5200, so it's probably going to be usable.
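To make those refinement steps concrete, here's a rough CPU sketch in Python of the generic relief-mapping style search: a linear march followed by a binary refinement, each step being the analogue of one dependent texture lookup. This is only the general technique; the height function is made up for the example and has nothing to do with his secret encoding.

[code]
import numpy as np

def height(u, v):
    # Hypothetical height field in [0, 1]; 1 = surface top, 0 = deepest point.
    return 0.5 + 0.5 * np.sin(8.0 * u) * np.cos(8.0 * v)

def relief_march(uv, view_dir, depth_scale=0.1, linear_steps=16, refine_steps=5):
    """March the view ray through the height field under the surface plane and
    return the displaced texture coordinate to sample the colour map with."""
    view_dir = view_dir / np.linalg.norm(view_dir)   # tangent space, z < 0
    # Step so the ray descends the full depth range over 'linear_steps' steps.
    step = view_dir * (depth_scale / -view_dir[2]) / linear_steps
    pos = np.array([uv[0], uv[1], 0.0])

    # Linear search: advance until the ray dips below the height field.
    prev = pos.copy()
    for _ in range(linear_steps):
        prev = pos.copy()
        pos = pos + step
        surface_z = -depth_scale * (1.0 - height(pos[0], pos[1]))
        if pos[2] <= surface_z:
            break

    # Binary refinement between the last point above and the first point below;
    # too few of these steps is what produces border/stair-stepping artifacts.
    above, below = prev, pos
    for _ in range(refine_steps):
        mid = 0.5 * (above + below)
        surface_z = -depth_scale * (1.0 - height(mid[0], mid[1]))
        if mid[2] <= surface_z:
            below = mid
        else:
            above = mid
    return below[:2]

print(relief_march(np.array([0.25, 0.25]), np.array([0.4, 0.2, -0.9])))
[/code]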
 
Guden Oden said:
Do you really HAVE to use that many vertices though? 2k texture maps - there aren't even that many pixels on the screen. If there were 2k-squared vertices per texture map per character, there would be far, far more geometry than could ever be seen, even at 1080p, and that's as high as video screens will go for the foreseeable future. Seems to me there's plenty of headroom for some rendering cheatin' there! :LOL:

Keep in mind that the back faces of a character would get culled, which should cut the number of visible polygons in half for a start. But you'd still have to process those vertices as well...

As for visible detail, check out UE3 screenshots and I'm sure that on close-ups you'd see pixelation in the textures. That means that one texel is bigger than a pixel on the final image, so there you'd actually get better results from a 4:1 vertex to texel ratio, as the 1:1 version could start to show artifacts. Which brings me to the other point... geometry can produce horrible aliasing, especially if it's full of tiny bumps with shadow/light transitions. I'm not sure how well MSAA would help this, but PRMan is using very high levels of AA to get artifact-free results with displacements. This way, a lot more detail will get into the final image than what would be visible with only 1 sample per pixel, and so you would see the difference with a less detailed model.

Also, if you want to completely replace bump/normal mapping with displacement, then you really need to have similar resolution to get the same shading detail. Otherwise what would give the shaders the necessary information? But displacement can only work through geometry.
Selectively detailing the model isn't really possible. First it'd take as much work as actually modeling everything by hand, and it'd stress the geometry pipeline as well, with a lot more data, ruining the 'compression' effect of displacement. And besides, you usually don't have totally smooth parts on a model as that would make it look artificial, and you want wear and tear on artificial surfaces as well, to make them more interesting and lifelike. So there shouldn't be any parts on a model where you'd not want to have displacement.
Unless, of course, you choose to combine displacements and normal maps as I've mentioned in my post.

But the point that you make is correct: in most cases you'd not see all that geometry. This is why the preferred approach is to use a higher-order surface, preferably subdivision surfaces, and combine them with adaptive tessellation. And this is what needs hardware support to work properly.
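Just to illustrate the "adaptive" part, here's a minimal Python sketch that picks a subdivision level from an edge's projected screen size; the numbers and the heuristic are arbitrary and aren't taken from any real hardware tessellator.

[code]
import math

def tessellation_level(edge_length, edge_distance, fov_y, screen_height,
                       target_pixels=8.0, max_level=6):
    """Pick a subdivision level so each tessellated edge covers roughly
    'target_pixels' pixels on screen. Purely illustrative heuristic."""
    # Approximate projected size: pixels per world unit at this distance.
    px_per_unit = screen_height / (2.0 * edge_distance * math.tan(fov_y / 2.0))
    edge_pixels = edge_length * px_per_unit
    # Each subdivision level halves the edge length (doubles the edge count).
    level = math.log2(max(edge_pixels / target_pixels, 1.0))
    return min(int(math.ceil(level)), max_level)

# A 0.5-unit edge seen from 2 units away, 60-degree vertical FOV, 1080p:
print(tessellation_level(0.5, 2.0, math.radians(60), 1080))
[/code]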

Would it really? You mentioned large-scale displacement would look weird for various reasons, so if only small scale is used instead, would you really notice self-shadows might look a bit off?
I assume the shadows would cover the displaced geometry but not reflect its changed shape, only the original shape of the base geometry. Am I right or wrong? :)

In close-ups, yes, because for example vertices moved towards the inside of the model should get shadowed in places but wouldn't be because their original position was not covered from the light source; and vice versa, some geometry raised up from the original surface would remain in shadow. Haven't tested it yet, but I might try to get some results.

You see shadows are soooo sensitive that even without displacement you usually have to tweak a lot of settings in offline rendering to avoid artifacting. I'm still not sure how game devs avoid such problems... but AFAIK, many engines still use stencils for character self-shadows? Heavenly Sword should be a different case though, shadow maps for everything.

Could you please explain why this is the case? I don't know enough about stuff like this to figure it out on my own...

You would render the character itself with displacement mapping into one set of shadow maps, and use them in the rendering of the characters. This way the self-shadowing would be correct.
Then you'd render the entire scene to another set of shadow maps, but skip the displacement for the characters. So their shadows cast on the environment wouldn't have displacement detail, but especially if the shadows are smooth-edged, no one would notice the difference.
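A rough sketch of that pass layout, in Python pseudocode; the function names are illustrative stubs, not any real engine's API.

[code]
def render_shadow_map(casters, light, displacement):
    """Stub: render depth from the light's point of view."""
    return {"light": light, "casters": casters, "displaced": displacement}

def shade_character(character, self_shadow_map, scene_shadow_map):
    """Stub: self-shadow tests use the detailed (displaced) map,
    shadows received from the rest of the scene use the coarse map."""
    pass

def render_frame(characters, environment, light):
    # Pass 1: per-character shadow maps rendered WITH displacement,
    # so self-shadowing matches the displaced surface.
    char_maps = {id(c): render_shadow_map([c], light, displacement=True)
                 for c in characters}

    # Pass 2: one scene-wide shadow map WITHOUT character displacement;
    # shadows cast onto the environment lose the fine detail, but with
    # soft edges the difference is hard to spot.
    scene_map = render_shadow_map(characters + environment, light,
                                  displacement=False)

    for c in characters:
        shade_character(c, char_maps[id(c)], scene_map)
[/code]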
 
_phil_ said:
You don't have to render displaced polygons.

I already pointed to this guy (more than twice, actually):
http://www.divideconcept.net/index.php?page=news.php
His technique rasterises subpixel displacement without physically adding any polygons. The pixel shader does all the work. It works on silhouettes, like true displacement mapping.

I remember this stuff and it looks impressive - but I'd like to see it working in a full game environment, with lighting and shadows and skinned characters...
 
Laa Yosh said:
And this is what needs hardware support to work properly.
Or we might do like we have for the last 5 years and use programmable resources to support it.

Mind you I'm not saying anything about 1:1 vertex/poly ratio - that'd be a stupid way to go with current hw anyway.

but AFAIK, many engines still use stencils for character self-shadows?
Well technically you could use displacement with stencil shadows - but performance characteristics wouldn't be pretty.

I'm still not sure how game devs avoid such problems...
I'd say that they mostly just don't :p
Even over the last half a year, every game that used shadow maps has been full of shadow aliasing and other artifacts (including those that aren't out yet). Some may hide it better than others, though.
 
Laa-Yosh said:
AFAIK, many engines still use stencils for character self-shadows?

Shadow maps more often, I would say. Bias tweaking is a pain. MGS4's shadows are among the cleanest I've seen.
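The bias trade-off boils down to a single comparison per shadow-map sample; here's a tiny illustrative Python sketch, where the constant and slope-scaled values are arbitrary and not taken from any particular engine.

[code]
def in_shadow(receiver_depth, stored_depth, slope, const_bias=0.002, slope_bias=0.01):
    # receiver_depth: depth of the shaded point as seen from the light.
    # stored_depth:   depth read back from the shadow map.
    # slope:          how steeply the surface faces away from the light.
    # Too small a bias -> "shadow acne" (the surface shadows itself);
    # too large -> shadows visibly detach from their casters.
    bias = const_bias + slope_bias * slope
    return receiver_depth - bias > stored_depth

# A surface almost facing the light (low slope) needs only a small bias:
print(in_shadow(receiver_depth=0.531, stored_depth=0.530, slope=0.1))
[/code]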
 
Fafalada said:
Or we might do like we have for the last 5 years and use programmable resources to support it.
Mind you I'm not saying anything about 1:1 vertex/poly ratio - that'd be a stupid way to go with current hw anyway.

To be more precise, I meant that a GPU-only solution would need changes in the hardware. Theoretically even the PS2 might be able to do adaptive tessellation on its VUs - but not at acceptable speeds. And I think that current-gen hardware wouldn't be enough to display characters with displacement mapping only, either. But a displacement and normal map combo should work; perhaps the X360 version of King Kong has already used it on the T-rex? There it makes sense too; for human characters it's not worth the effort yet, IMHO.
 