what exactly is polybump?

991060

I know it's a technique to reduce the polygon count of a model while maintaining the detailed look. My question is: what's the principle behind this technique? Is it a kind of bump-mapping or displacement-mapping?

Crytek has a PolyBump™ Previewer which can be downloaded at http://www.crytekfiles.de/PolyBumpPreviewer.exe

Some screenshots:
http://bbs.gzeasy.com/uploads/post-1-1070717809.jpg

http://bbs.gzeasy.com/uploads/post-1-1070718887.png

http://bbs.gzeasy.com/uploads/post-1-1070718928.png


Edit: Not everyone has broadband net connections or high resolution monitors - images changed to links. Moderation process terminated
 
It seems my question got answered by searching previous topics :D It sounds like an automatic normal map generation technique, am I right?
 
991060 said:
It seems my question got answered by searching previous topics :D It sounds like an automatic normal map generation technique, am I right?

Pretty much. It takes a high-res model and a low-res model, and makes a bump map for the low-res model that derives its normals from the high-res model. That gives the low-res model almost the same appearance as the high-res one, but with a significant performance increase.
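To illustrate the runtime side (a minimal sketch of my own, not the code behind any particular tool or demo), the low-res mesh is then shaded with normals fetched from the map instead of its own coarse interpolated normals, assuming the usual convention of storing components remapped from [-1,1] into [0,1]:

import numpy as np

def shade_texel(map_rgb, light_dir_ts, albedo):
    # decode the stored normal back from [0,1] to [-1,1] and renormalise
    n = map_rgb * 2.0 - 1.0
    n /= np.linalg.norm(n)
    # simple Lambertian term, with the light direction given in tangent space
    ndotl = max(np.dot(n, light_dir_ts), 0.0)
    return albedo * ndotl

That's what makes the low-res model pick up the high-res model's shading detail at a fraction of the vertex cost.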

I did a demo doing something like that earlier:
http://esprit.campus.luth.se/~humus/?page=3D&id=34
 
This is the technology used in Doom 3, and Deus Ex 2 I believe. Apparently the next Unreal engine as well.
 
nobie said:
This is the technology used in Doom 3, and Deus Ex 2 I believe. Apparently the next Unreal engine as well.

CryTek sure must have been busy lately.... :oops:
 
It isn't PolyBump itself that's used in all those games, just the technique, which has been around since IIRC 1988 (the same basic approach has been published in many SIGGRAPH papers, the first of them in either 1988 or 1992).

It's an incredibly simple algorithm, and it only takes about a day to write an app that does it if you know what you're doing. All you do is take a UV-mapped low-res mesh and a high-res mesh, shoot rays from each texel on the low-res mesh in both the +normal and -normal directions, find the closest intersection for each ray, and take the closer of the two. Then grab the interpolated normal from the high-res mesh at that intersection and, if you're doing a tangent-space normal map, convert it to tangent space. Store the result in a 2D texture at the appropriate UV coordinate.
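Here's a rough sketch of that loop in Python/numpy (my own illustration of the general technique, not CryTek's code); it assumes the low-res mesh has already been rasterised into per-texel positions, normals and tangents over its UV layout, and uses a standard Moller-Trumbore ray/triangle test:

import numpy as np

def ray_triangle(orig, direction, v0, v1, v2, eps=1e-8):
    # Moller-Trumbore ray/triangle intersection; returns (t, u, v) or None
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return (t, u, v) if t > eps else None

def bake_normal_map(coverage, lo_pos, lo_nrm, lo_tan, hi_tris, hi_nrms, size=256):
    # coverage/lo_pos/lo_nrm/lo_tan: per-texel data from rasterising the low-res
    # mesh into its UV layout; hi_tris/hi_nrms: (T,3,3) high-res vertex data
    nmap = np.zeros((size, size, 3), dtype=np.float32)
    for y in range(size):
        for x in range(size):
            if not coverage[y, x]:             # texel not covered by a low-res face
                continue
            p, n, tan = lo_pos[y, x], lo_nrm[y, x], lo_tan[y, x]
            best_dist, best_n = None, None
            for d in (n, -n):                  # rays along +normal and -normal
                for tri, tri_n in zip(hi_tris, hi_nrms):
                    hit = ray_triangle(p, d, tri[0], tri[1], tri[2])
                    if hit and (best_dist is None or hit[0] < best_dist):
                        u, v = hit[1], hit[2]
                        # interpolate the high-res vertex normals at the hit point
                        best_n = (1 - u - v) * tri_n[0] + u * tri_n[1] + v * tri_n[2]
                        best_dist = hit[0]
            if best_n is None:
                continue
            bit = np.cross(n, tan)             # build the tangent-space basis
            ts = np.array([np.dot(best_n, tan), np.dot(best_n, bit), np.dot(best_n, n)])
            ts /= np.linalg.norm(ts)
            nmap[y, x] = ts * 0.5 + 0.5        # remap [-1,1] to [0,1] for storage
    return nmap

In practice you'd put the high-res triangles in some spatial structure instead of brute-forcing every one per texel, but that's just speed; the structure above is the whole idea.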

Additional features like perturbation, super sampling and such are all equally simple.
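For example, supersampling is just averaging several jittered samples per texel; a hypothetical sketch, where sample_fn stands in for one run of whatever per-texel ray-cast you already have:

import numpy as np

def supersampled_normal(sample_fn, texel_uv, texel_size, samples=4, seed=0):
    # average the normals returned by several jittered sub-texel samples
    rng = np.random.default_rng(seed)
    acc = np.zeros(3)
    for _ in range(samples):
        jitter = (rng.random(2) - 0.5) * texel_size   # random offset inside the texel
        acc += sample_fn(texel_uv + jitter)
    return acc / np.linalg.norm(acc)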

I wrote one a couple of years ago that's now being used at Digital Extremes' new London office, and a few other places (unofficially - they all have their own apps as well, but I have a few friends there who prefer mine, so I gave it to them). Been meaning to write v2.0 actually, thanks for reminding me =]
 
Ilfirin said:
It's an incredibly simple algorithm, and it only takes about a day to write an app that does it if you know what you're doing.
Alternatively, grab a free NormalMapper tool from ATI's developer site :)

PolyBump is essentially bump-mapping (and that was invented around 30 years ago). It's the process of creating bump maps (normal maps, strictly speaking) that's relatively new (that is, just over 10 years old :)).
 
nobie said:
This is the technology used in Doom 3, and Deus Ex 2 I believe. Apparently the next Unreal engine as well.

Isn't DE2 using a fairly recent build of UE?
Or did they use only the "base" of UE with their own renderer, as some rumors suggested?
 