Procedural textures: is Microsoft making a mistake?

Shifty Geezer said:
london-boy said:
So, procedural textures would save you space on the DVD, but not on RAM. That is, unless EVERYTHING is really done on the fly and dumped as soon as the frame is rendered. Seems fishy to me.
I imagine there would be a degree of on-the-fly creation, e.g. in the wall example you have a shader that applies a degree of noise based on the UV coords, or on a reptile you create cellular bumps based on UV coords. This'll save RAM and provide infinitely scalable detail (no pixelation or blurring of bumps, for example) but is a number-hog. You wouldn't procedurally render to a texture, or you may as well just use a precalculated texture.


I can see procedural textures reducing loading times and the memory required for levels or zones, but they still have to take up RAM space like any other texture while they're in view, don't they?

I'm no expert, but if the textures are created on the fly for each frame on a static object (wall, floor), then they would shimmer like hell, wouldn't they? The only time I can see devs doing it is with animated textures such as skies, fire, water, etc. Would someone enlighten me?
 
Alejux said:
I can see procedural textures reducing loading times and the memory required for levels or zones, but they still have to take up RAM space like any other texture while they're in view, don't they?
I'm no expert, but if the textures are created on the fly for each frame on a static object (wall, floor), then they would shimmer like hell, wouldn't they? The only time I can see devs doing it is with animated textures such as skies, fire, water, etc. Would someone enlighten me?

Procedural textures can potentially be purely 'virtual' - you can create each texel on the fly in the shader as it is used. You might or might not choose to do this, because it typically requires quite a few instructions to generate each texel.

There's also no reason for procedural textures to shimmer. Generally speaking they are created using some form of noise function, designed so that it gives repeatable results given the same input conditions and seed. So if you are generating the texture on the fly, for whatever reason, you can always regenerate exactly the same texture data you used in the last frame if you wish. Alternatively, by altering the input conditions you can alter the texture and use this to generate animations of various kinds.
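To make the determinism point concrete, here's a minimal sketch of a hash-based value noise function in C++ (my own illustration, not anything from Microsoft): because the output depends only on the texel coordinates and the seed, regenerating a texel next frame gives bit-identical results, and animating just means feeding in a different seed or time value.

```cpp
#include <cstdint>
#include <cmath>

// Integer hash: the same (x, y, seed) always yields the same value,
// so a texel rebuilt next frame is bit-identical -- no shimmer.
static uint32_t hash(uint32_t x, uint32_t y, uint32_t seed)
{
    uint32_t h = seed;
    h ^= x * 0x9E3779B1u;
    h ^= y * 0x85EBCA77u;
    h ^= h >> 16; h *= 0x7FEB352Du;
    h ^= h >> 15; h *= 0x846CA68Bu;
    return h ^ (h >> 16);
}

// Value noise in [0, 1): hash the four surrounding lattice points and
// bilinearly interpolate for fractional (u, v).
float valueNoise(float u, float v, uint32_t seed)
{
    int x0 = (int)std::floor(u), y0 = (int)std::floor(v);
    float fx = u - x0, fy = v - y0;
    auto lattice = [&](int x, int y) {
        return (hash((uint32_t)x, (uint32_t)y, seed) & 0xFFFFFFu) / 16777216.0f;
    };
    float a = lattice(x0, y0),     b = lattice(x0 + 1, y0);
    float c = lattice(x0, y0 + 1), d = lattice(x0 + 1, y0 + 1);
    float top = a + (b - a) * fx;
    float bot = c + (d - c) * fx;
    return top + (bot - top) * fy; // vary 'seed' over time to animate
}
```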
 
Most procedural effects are grown out of a predefined array of "seed points". It's only if you generate a new random seed array every frame that you will get shimmering textures.
 
What Microsoft is developing is not procedural textures (that has already been done); what they are developing is something they call "Procedural Synthesis". Quoting from an interview with Allard last year: "I've seen demos of terrain and worlds, with no textures in them whatsoever and no geometry - it's just a program that's creating a scene for you."

Procedural Synthesis comprises several things. On a smaller scale, you could create an infinite number of unique, complex models from one simple base model in real time. On a larger scale, you could apply that to the environment and create an extremely large world with no loading or transitions. One such demonstration, shown by Sakaguchi of Mistwalker to Famitsu, was a scene starting outside a planet's atmosphere and flying to any point on the surface, and to any object on the surface, with no loading or transition. Basically you have something called "Procedural Geometry", which I surmise is the main reason why the XBox2 will have more than one CPU: not to equal the purported power of the Cell CPU, but to actually create geometry in real time, on the fly, to be rendered by the GPU based on a program.

Imagine a Star Wars game, for example. Currently, in a game such as Star Wars Galaxies, you have discrete space and planet scenes: you can fly between planets, and when you approach one a load occurs and you appear on the surface at a fixed point you have no control over. With a procedurally created world, you could not only fly between planets with no loading or transition, but actually fly into the atmosphere all the way to any point on the surface, walk around and do whatever, board your ship and fly off the planet to wherever else, all with no loading or scene transitions and with full control over what you do in between. That is an example of how procedural synthesis could benefit gaming as a whole, and frankly I feel that, if this is done right, it could be evolutionary for video gaming.

Procedural Geometry is something that I have not seen done before, and it could have radical implications for gaming if they indeed manage to do something like that. Procedural texturing has already been done and is far simpler to accomplish; you have seen examples of it in some games already (Unreal, Halo, etc.). The only problem with procedural geometry is that it needs a processor separate from the GPU and CPU to produce the geometry to be rendered, and that processor needs direct access to the GPU so the data can be transferred at high speed and low latency. The XBox2 architecture seems to be based around this. Time will tell, though, and I still do not know if Microsoft has managed to pull something this complex off...
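To illustrate the "no loading" part of the idea (purely a sketch of the general technique, not Microsoft's actual implementation): if terrain height is a pure function of position and a world seed, any point on the planet can be evaluated on demand at any detail level, so there is nothing to stream in. This reuses the valueNoise() function from the earlier sketch.

```cpp
// Fractal terrain height as a pure function of (lat, lon) and a seed.
// Nothing is stored: any point on the planet, at any zoom level, is
// computed on demand, which is why no load or transition is needed.
float terrainHeight(float lat, float lon, uint32_t worldSeed)
{
    float height = 0.0f, amplitude = 1.0f, frequency = 1.0f;
    for (int octave = 0; octave < 8; ++octave) {
        height += amplitude * valueNoise(lat * frequency, lon * frequency,
                                         worldSeed + octave);
        amplitude *= 0.5f; // each octave adds finer, fainter detail
        frequency *= 2.0f;
    }
    return height;
}
```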

The GameMaster...
 
The GameMaster said:
Basically you have something called "Procedural Geometry", which I surmise is the main reason why the XBox2 will have more than one CPU: not to equal the purported power of the Cell CPU, but to actually create geometry in real time, on the fly, to be rendered by the GPU based on a program.

Procedural Geometry is something that I have not seen done before, and it could have radical implications for gaming if they indeed manage to do something like that.

So what is procedural geometry and what is the benefit?
 
blakjedi said:
So what is procedural geometry and what is the benefit?
The actual creation of an infinite number of unique, complex models from a simple base model (on a smaller scale) in real time, on the fly, based on a program, instead of creating X variations of models by hand (which takes both time and memory). On a larger scale you could create entire worlds instead of models, or a combination of both. You could create truly dynamic environments with this concept, instead of static environments, where the player can impact the world directly and physically. It's more than that actually, but I am just providing some simple examples...
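A toy illustration of the smaller-scale case (my own sketch, reusing valueNoise() from above): a thousand "unique" rocks can be one base mesh plus a thousand 32-bit seeds, since each variant is fully determined by the pair (base mesh, seed).

```cpp
#include <vector>
#include <cstdint>

struct Vertex { float x, y, z; };

// Derive endless unique variants of one base mesh by displacing its
// vertices with seeded noise. Only the base mesh and a seed per
// variant ever need to be stored.
std::vector<Vertex> makeVariant(const std::vector<Vertex>& base, uint32_t seed)
{
    std::vector<Vertex> out = base;
    for (Vertex& v : out) {
        float n = valueNoise(v.x * 4.0f, v.y * 4.0f, seed) - 0.5f;
        // Push each vertex slightly in or out along its position.
        v.x += n * 0.1f * v.x;
        v.y += n * 0.1f * v.y;
        v.z += n * 0.1f * v.z;
    }
    return out;
}
```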

The GameMaster...
 
Elite/Frontier is an example of procedural synthesis, as was Captive on the Amiga. So are Diablo and Champions of Norrath with their 'random' dungeons: you use an algorithm to place objects.
I guess XB2 (and all next gen) will be taking this a step further to create character models etc., piecing together building blocks (eyes type 3, nose type 5, mouth type 2, armour type 4) and then applying some sizing (stretch up a bit). Presumably these models will be stored in RAM and not recreated every frame.
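As a sketch of that identikit idea (hypothetical names, reusing the hash() function from the noise example): a whole character can be described by one seed, which deterministically picks the artist-made parts and a size tweak.

```cpp
// Identikit character recipe: component indices plus a scale, all
// derived from one seed. Artists author the part libraries; the
// program just picks and sizes. The same seed always rebuilds the
// same character, so only the seed needs storing.
struct CharacterRecipe {
    int   eyes, nose, mouth, armour; // indices into artist-made part sets
    float height;                    // "stretch up a bit"
};

CharacterRecipe makeCharacter(uint32_t seed, int numEyes, int numNoses,
                              int numMouths, int numArmours)
{
    CharacterRecipe r;
    r.eyes   = (int)(hash(seed, 0, 0) % (uint32_t)numEyes);
    r.nose   = (int)(hash(seed, 1, 0) % (uint32_t)numNoses);
    r.mouth  = (int)(hash(seed, 2, 0) % (uint32_t)numMouths);
    r.armour = (int)(hash(seed, 3, 0) % (uint32_t)numArmours);
    r.height = 0.9f + (hash(seed, 4, 0) % 1000u) / 1000.0f * 0.2f; // [0.9, 1.1]
    return r;
}
```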
 
Procedurally generated content has been used for decades, and it would be nice to see it being taken to the next level. It's a very cheap (memory-wise) way to visualise lots of things on screen. Now that computational power is increasing every year, procedurally generated content will certainly become more widely used, for a wider range of features in games.
 
london-boy said:
Procedurally generated content has been used for decades, and it would be nice to see it being taken to the next level. It's a very cheap (memory-wise) way to visualise lots of things on screen. Now that computational power is increasing every year, procedurally generated content will certainly become more widely used, for a wider range of features in games.

With each console having only 256MB, they don't have much of a choice, do they?

My main doubt with this approach is how development will maintain its artistic quality when programmers have to do what was supposed to be a graphics designer's job.
 
Alejux said:
london-boy said:
Procedurally generated content has been used for decades, and it would be nice to see it being taken to the next level. It's a very cheap (memory-wise) way to visualise lots of things on screen. Now that computational power is increasing every year, procedurally generated content will certainly become more widely used, for a wider range of features in games.

With each console having only 256MB, they don't have much of a choice, do they?

My main doubt with this approach is how development will maintain its artistic quality when programmers have to do what was supposed to be a graphics designer's job.

Well I think devs will have some nice tools to show many things on screen at once without using a lot of memory: instancing, procedurally generated geometry and textures. It remains to be seen how computing-intensive they want to go, and how well next-gen processors can handle all these processing power hogs.
 
Shifty Geezer said:
Elite/Frontier is an example of procedural synthesis, as was Captive on the Amiga. So are Diablo and Champions of Norrath with their 'random' dungeons: you use an algorithm to place objects.
I guess XB2 (and all next gen) will be taking this a step further to create character models etc., piecing together building blocks (eyes type 3, nose type 5, mouth type 2, armour type 4) and then applying some sizing (stretch up a bit). Presumably these models will be stored in RAM and not recreated every frame.

That seems highly complex. I can't imagine a human model, with tens of bones, rigging elements, parallax-mapped textures... being chopped up into separate blocks and rearranged dynamically. I don't think it's impossible, it just seems very, very hard.
 
A human skeleton is the same no matter the face or clothes. You'd have a skeleton template applied to a basic body mesh. You'd make the mesh a bit fatter/skinnier/taller/shorter, just like you can in some footy games now. You'd piece different facial textures on. The graphics artists would need to provide all this content, and the programmers would need to provide the means of piecing these identikit components together, which in theory isn't too hard. Several RPGs already show the clothes your character wears, and just as you change the clothes on a character, you can change the face too.
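The body-shape part of that is cheap to sketch (again just an illustration, using the Vertex type from the earlier sketch): one shared template mesh, scaled per character.

```cpp
// Apply a build tweak to a copy of the shared base mesh, the way
// footy games let you fatten or stretch a player (Y-up assumed).
void applyBuild(std::vector<Vertex>& mesh, float girth, float height)
{
    for (Vertex& v : mesh) {
        v.x *= girth;  // fatter/skinnier
        v.z *= girth;
        v.y *= height; // taller/shorter
    }
}
```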
 
Shifty Geezer said:
A human skeleton is the same no matter the face or clothes. You'd have a skeleton template applied to a basic body mesh. You'd make the mesh a bit fatter/skinnier/taller/shorter, just like you can in some footy games now. You'd piece different facial textures on. The graphics artists would need to provide all this content, and the programmers would need to provide the means of piecing these identikit components together, which in theory isn't too hard. Several RPGs already show the clothes your character wears, and just as you change the clothes on a character, you can change the face too.

That's been done for years.
Edit: oh, just saw u said the same.
 
But I'm not talking about stupid armors or hair. That's already been done for ages. I'm talking about smaller things, like facial setup with full expressions and lip sync, which I imagine must be very hard to do, considering that all these little elements such as mouth, eyes and nose have to work together. It's much easier just customizing the whole head at design time.
 
I wouldn't have thought so. You have points you manipulate for lip syncing, and when you load a different mouth you load in its collection of control points. After all, every human being has the same collection of facial muscles that all work in the same way. You just load your skeleton with the same framework, scaled to fit.

More complex than we have so far (though Poser allows quick figure construction similar to this, I think) but not a challenge to compute. It just needs some sensible design at the content-creation phase. Or a different approach, with a couple of dozen designed and constructed heads loaded in.
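A minimal sketch of the control-point idea (hypothetical structures, assuming <vector> as before): each swappable mouth carries its own lip-sync points, authored around a canonical face and then scaled and offset to fit whichever head it is loaded onto.

```cpp
struct ControlPoint { float x, y, z; };

// A swappable mouth part: same muscles, same framework, different fit.
struct MouthPart {
    std::vector<ControlPoint> lipSyncPoints;
};

void fitToHead(MouthPart& mouth, float headScale,
               float offX, float offY, float offZ)
{
    for (ControlPoint& p : mouth.lipSyncPoints) {
        p.x = p.x * headScale + offX;
        p.y = p.y * headScale + offY;
        p.z = p.z * headScale + offZ;
    }
}
```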
 
Alejux said:
But I'm not talking about stupid armors or hair. That's already been done for ages. I'm talking about smaller things, like facial setup with full expressions and lip sync, which I imagine must be very hard to do, considering that all these little elements such as mouth, eyes and nose have to work together. It's much easier just customizing the whole head at design time.

As long as everything fits together, it should work fine. Keeping everything separate and mixing them up gives you lots of final combinations, which would take AGES to design one by one.
 
What are the downsides to procedural synthesis and geometry? Would a game that is heavily scripted be less of a candidate for it compared to a game that is a lot more free roaming?

I apologise if these questions are silly, but I'm trying to get a better understanding of it, and its pros and cons etc.
 
I'd say the downsides are processor intensiveness and the chance of more buggy animations, where a mapping goes skew-whiff and the eyes pop out of the head. If you've got the horsepower, procedural synthesis can only really be a good thing.
 