RSX and 1080p games.

From the Unreal Tech website talking about UE3:
Most of our characters are built from two meshes: a realtime mesh with thousands of triangles, and a detail mesh with millions of triangles. We provide a distributed-computing application which raytraces the detail mesh and, from its high-polygon geometry, generates a normal map that is applied to the realtime mesh when rendering. The result is in-game objects with all of the lighting detail of the high poly mesh, but that are still easily rendered in real time.
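The baking step described in that quote can be sketched in miniature. This is a hypothetical toy, not Epic's tool: a smooth analytic "detail surface" stands in for the millions-of-triangles mesh, its normals are sampled per texel (as a ray trace against the high-poly mesh would recover them), and each unit normal is packed into the standard 8-bit RGB tangent-space encoding:

```python
import math

def detail_height(x, y):
    # Hypothetical high-frequency detail surface (stands in for the
    # millions-of-triangles mesh): a small sinusoidal bump field.
    return 0.05 * math.sin(8 * x) * math.sin(8 * y)

def bake_normal(x, y, eps=1e-4):
    # Approximate the detail surface's normal via central differences,
    # playing the role of the ray trace against the high-poly mesh.
    dhdx = (detail_height(x + eps, y) - detail_height(x - eps, y)) / (2 * eps)
    dhdy = (detail_height(x, y + eps) - detail_height(x, y - eps)) / (2 * eps)
    nx, ny, nz = -dhdx, -dhdy, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return nx / length, ny / length, nz / length

def encode_rgb(n):
    # Map a unit normal from [-1, 1] into 8-bit [0, 255] channels,
    # the usual tangent-space normal-map texel encoding.
    return tuple(int(round((c * 0.5 + 0.5) * 255)) for c in n)

def bake_normal_map(size=4):
    # Sample the detail surface across the low-poly quad's UV space,
    # storing one encoded normal per texel of the output map.
    return [[encode_rgb(bake_normal(u / size, v / size))
             for u in range(size)]
            for v in range(size)]

normal_map = bake_normal_map()
# A flat region of the detail surface bakes to the "straight up" normal.
print(encode_rgb((0.0, 0.0, 1.0)))  # (128, 128, 255)
```

At render time the GPU decodes each texel back to a normal and uses it for per-pixel lighting on the low-poly mesh, which is why the result lights like the high-poly original.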

This is what I was thinking about in my last post. Would it be correct to say that John Carmack's MegaTexture is related to this technique in principle?
 
Tars Tarkus said:
This is what I was thinking about in my last post. Would it be correct to say that John Carmack's MegaTexture is related to this technique in principle?

No, this is the same kind of normal mapping that was used in Doom 3: a high-poly mesh baked down to a normal map applied to a low-poly mesh. MegaTexture is a different idea, aimed at streaming a single, very large, uniquely painted texture (e.g. across terrain) rather than at capturing geometric detail.

Cheers
 