Shrike_Priest
Newcomer
We're hearing a lot of exciting news about the next-gen consoles and their hardware: sub-surface scattering, high-res normal mapping, motion blur, etc.
But one field that I've heard very little about is LOD. What kind of new concepts are being put into action there?
When looking at the Cell slides just posted, we see a lot of stuff about NURBS and subdivision, and I'm just wondering how this will actually be put to use.
Will we see an engine that can determine the distance/detail for a character model and adapt the mesh on the fly? Or are we going to be "stuck" with three or four pre-defined models that pop in and out? Could this be applied efficiently to larger static models like houses, etc.?
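To be clear about what I mean by the two approaches, here's a rough sketch as I understand them. The names, the numbers, and the screen-coverage heuristic are all just made up for illustration, not how any real engine does it:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// --- The "three or four pre-defined models" scheme -------------------------
// One pre-built mesh per detail level, e.g. 10k / 4k / 1k / 300 triangles.
struct LodLevel {
    float maxDistance;  // use this mesh while the object is closer than this
    int   meshId;       // handle to the pre-built mesh for this level
};

// Pick which pre-defined model to draw from camera distance alone.
// Nothing is re-tessellated; the engine just swaps meshes, hence the "pop".
int SelectDiscreteLod(const std::vector<LodLevel>& levels, float distance)
{
    for (const LodLevel& level : levels)
        if (distance < level.maxDistance)
            return level.meshId;
    return levels.back().meshId;  // farther than all thresholds: coarsest mesh
}

// --- The "adapt the mesh on the fly" idea ----------------------------------
// With a subdivision-surface control cage the engine could instead derive a
// tessellation level each frame, so detail changes smoothly with distance.
int SelectSubdivisionLevel(float distance, float objectRadius, int maxLevel)
{
    // Rough screen-coverage heuristic: bigger on screen -> more subdivision.
    float coverage = objectRadius / std::max(distance, 0.001f);
    int level = static_cast<int>(std::log2(coverage * 64.0f + 1.0f));
    return std::clamp(level, 0, maxLevel);
}

The first function is the pop-in/pop-out scheme I'm asking whether we're stuck with; the second is the sort of thing I imagine the NURBS/subdivision talk on Cell might make practical, if I've understood it at all.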
I apologise in advance for my numerous factual errors