Procedural content creation *spawn

Dregun

Newcomer
Shifty,

Is this what you are referring to for just-in-time asset creation?

Example:
Here is a small detailed texture of standard concrete
Here is a small detailed texture of grout type A
Here is a small detailed texture of moss type A & B
Here is a small detailed texture of brick type A
Here is a small detailed texture of dirt type A, B & C

The developer creates a wall and lays out the brick pattern he/she wants, then basically starts layering the textures onto the wall to create the image he/she wants to see. More moss in the grout, but heavy amounts in the large crevices. Varying amounts of dirt on the wall, but larger amounts near the bottom with some very dirty areas near the top. The concrete and brick textures vary in strength throughout the wall... etc. etc.

Now, instead of applying multiple layers of textures during rendering to get this effect, the system generates an entire texture from these source textures and spits it to the GPU for rendering instead. All of that can be done on the CPU and allows for an immense amount of asset creation.

Basically, instead of the developer spitting out a texture at that very moment and saving it in storage, he/she just creates the instructions for the system to generate the texture before presenting a complete texture to the GPU? Normally the GPU would keep applying layers of textures on top of one another to get what the developer wanted, but this way the CPU creates a single texture instead?

This would mean a plethora of small textures would be loaded into RAM, and the system would just pull them and use them to create larger (or smaller) textures that have already been layered before rendering?
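For illustration, here's a minimal sketch of that kind of CPU-side compositing (assuming numpy, with random arrays standing in for the small source tiles and hand-painted masks; all names and numbers are made up):

```python
import numpy as np

def tile_to(texture, height, width):
    """Repeat a small tiling texture (h, w, 3) to cover a larger surface."""
    reps_y = -(-height // texture.shape[0])   # ceiling division
    reps_x = -(-width // texture.shape[1])
    return np.tile(texture, (reps_y, reps_x, 1))[:height, :width]

def composite(base, layers, height, width):
    """
    Bake one large texture from small source textures plus per-layer masks.
    `layers` is a list of (texture, mask) pairs; each mask is a (height, width)
    float array in [0, 1] saying how strongly that layer shows through
    (heavy moss in the crevices, dirt near the bottom of the wall, etc.).
    """
    result = tile_to(base, height, width).astype(np.float32)
    for texture, mask in layers:
        layer = tile_to(texture, height, width).astype(np.float32)
        result = result * (1.0 - mask[..., None]) + layer * mask[..., None]
    return result.astype(np.uint8)

# Random 64x64 tiles standing in for the real brick/moss/dirt textures.
rng = np.random.default_rng(0)
brick = rng.integers(100, 140, (64, 64, 3), dtype=np.uint8)
moss  = rng.integers(20, 80,  (64, 64, 3), dtype=np.uint8)
dirt  = rng.integers(60, 90,  (64, 64, 3), dtype=np.uint8)

H, W = 1024, 1024
yy = np.linspace(0.0, 1.0, H)[:, None] * np.ones((1, W))   # 0 at the top, 1 at the bottom
wall = composite(brick, [(dirt, 0.6 * yy),                  # more dirt towards the bottom
                         (moss, 0.3 * (1.0 - yy))], H, W)   # a little moss higher up
# `wall` is the single baked texture that would be handed to the GPU.
```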

This makes sense to me, but a part of me is saying this is already being done in one way or another.

I'm looking at it almost like how Naughty Dog did Drake's animations: layering animations on top of one another and having the CPU fill in the transitions between them.
 
That's one way, but there are many possibilities, from just piecing together building-block pieces for variety (think LBP sackboys being arrangements of arms, legs, clothes and faces, and apply that concept to GTA having a library of 10 each of face, torso, clothes, and it mixes and matches them); to procedurally adjusting models (think EA's sports games where you have a base model and you can adjust the mesh parameters to move and resize eyes, chin, cheeks, body etc. For a large population you'd apply randomised settings to lots of people models); to procedurally creating new content (think .kkrieger). Or a mix of these, like Speedtree randomising models and cut+pasting leaf textures.
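Purely as an illustration, a toy sketch of the mix-and-match idea (the part names, counts and parameter ranges here are invented):

```python
import random

# Hypothetical part libraries -- in a real game these would be meshes/textures.
FACES   = [f"face_{i:02d}"   for i in range(10)]
TORSOS  = [f"torso_{i:02d}"  for i in range(10)]
CLOTHES = [f"outfit_{i:02d}" for i in range(10)]

def make_pedestrian(rng):
    """Assemble one character by picking parts and randomising a few mesh parameters."""
    return {
        "face":    rng.choice(FACES),
        "torso":   rng.choice(TORSOS),
        "clothes": rng.choice(CLOTHES),
        # EA-sports style parameter tweaks applied on top of the base mesh:
        "eye_scale":  rng.uniform(0.9, 1.1),
        "chin_depth": rng.uniform(-0.05, 0.05),
        "height_m":   rng.uniform(1.55, 1.95),
    }

rng = random.Random(42)
crowd = [make_pedestrian(rng) for _ in range(1000)]
# 10*10*10 part combinations plus the continuous parameters give a crowd with
# effectively no two identical pedestrians, from a tiny asset library.
```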
 
Shifty,

Ok, so I wasn't dreaming!!

With a system that generates its own assets, what kind of limitations would that bring? At some point a developer is going to have to ask him/herself whether creating that texture by combining multiple assets is worth the trade-off compared to creating the asset outright and loading it into memory. Would the system be storing more assets than needed with this method?

Another question: if a developer were to develop an algorithm, couldn't they also create buildings in the same way? Give the system parameters to designate a basic size (length + width) for individual buildings. Maybe even provide a designator, chosen by the developer, telling the system what type of building (house, garage, office, mall, etc.) to build? Maybe the developer could create zones (commercial, residential, military) on a basic map and the system would just go to work creating that world.

I would imagine a developer could basically allow the system to build the entire world itself. If it can be controlled with some basic rules and guidance, what is stopping the system from creating multiple buildings with completely different layouts, using different textures and populating them with different objects as designated by a room modifier (bathroom, kitchen, hallway, bedroom, clothing store, office, manufacturing, etc.)?
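Something along these lines, conceptually (a minimal sketch; all the rule tables, room types and texture tags are invented for illustration):

```python
import random

# Hypothetical rule tables: which rooms a building type may contain,
# and which floor-texture tags each room prefers.
BUILDING_ROOMS = {
    "house":  ["kitchen", "bathroom", "bedroom", "hallway"],
    "office": ["office", "hallway", "bathroom"],
    "mall":   ["clothing store", "food court", "hallway", "bathroom"],
}
ZONE_BUILDINGS = {
    "residential": ["house"],
    "commercial":  ["office", "mall"],
}
ROOM_TEXTURES = {
    "kitchen": ["tile_a", "tile_b"], "bathroom": ["tile_b"],
    "bedroom": ["carpet_a", "wood_a"], "hallway": ["wood_a", "concrete_a"],
    "office": ["carpet_b"], "clothing store": ["wood_b"], "food court": ["tile_a"],
}

def generate_building(zone, rng):
    """Pick a building type allowed in this zone, then a size, room list and textures."""
    kind = rng.choice(ZONE_BUILDINGS[zone])
    width, length = rng.randint(8, 20), rng.randint(8, 30)
    room_pool = BUILDING_ROOMS[kind]
    rooms = [{"type": r, "floor": rng.choice(ROOM_TEXTURES[r])}
             for r in rng.sample(room_pool, k=rng.randint(2, len(room_pool)))]
    return {"kind": kind, "size": (width, length), "rooms": rooms}

rng = random.Random(7)
city = {zone: [generate_building(zone, rng) for _ in range(5)]
        for zone in ("residential", "commercial")}
# The developer only authors the rule tables and the zone map; the layouts,
# sizes and texture picks come out different for every seed.
```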

In that case, would it be possible for the system to build the textures themselves as well? Again, the system could be given guidelines on how certain textures can be combined and applied by tagging them with modifiers (wall texture, foliage texture, wood texture, etc.). The system would work with the assets given to it, so the better the assets, the better the world would be. This would also make a good case for reusing assets, as they would only increase the amount of diversity available for the system to choose from.
 
With a system that generates its own assets, what kind of limitations would that bring? At some point a developer is going to have to ask him/herself whether creating that texture by combining multiple assets is worth the trade-off compared to creating the asset outright and loading it into memory. Would the system be storing more assets than needed with this method?
It'd be bad design if the developers were creating more assets than they need! Basically, as you say, it's a trade-off. You'd lose artistry with procedural content, and it'd cost more processing power, but you'd potentially save RAM if you can create the assets just-in-time for rendering, and certainly save IO bandwidth.
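As a rough back-of-the-envelope comparison (the figures are purely illustrative):

```python
# Storing one unique, pre-baked 2048x2048 RGBA texture per wall:
baked = 2048 * 2048 * 4                 # ~16.8 MB uncompressed
# Versus keeping five 256x256 RGBA source tiles plus a small layering recipe,
# and baking the wall texture just-in-time:
tiles  = 5 * 256 * 256 * 4              # ~1.3 MB, shared by every wall
recipe = 2 * 1024                       # ~2 KB of layer/mask instructions per wall
print(baked, tiles + recipe)            # 16777216 vs 1312768
```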

Another question: if a developer were to develop an algorithm, couldn't they also create buildings in the same way? Give the system parameters to designate a basic size (length + width) for individual buildings. Maybe even provide a designator, chosen by the developer, telling the system what type of building (house, garage, office, mall, etc.) to build? Maybe the developer could create zones (commercial, residential, military) on a basic map and the system would just go to work creating that world.

I would imagine a developer could basically allow the system to build the entire world itself. If it can be controlled with some basic rules and guidance, what is stopping the system from creating multiple buildings with completely different layouts, using different textures and populating them with different objects as designated by a room modifier (bathroom, kitchen, hallway, bedroom, clothing store, office, manufacturing, etc.)?

In that case, would it be possible for the system to build the textures themselves as well? Again, the system could be given guidelines on how certain textures can be combined and applied by tagging them with modifiers (wall texture, foliage texture, wood texture, etc.). The system would work with the assets given to it, so the better the assets, the better the world would be. This would also make a good case for reusing assets, as they would only increase the amount of diversity available for the system to choose from.
Absolutely. And bear in mind this isn't anything new as a concept. Algorithmic games have existed since at least Elite, which created thousands of worlds. I'm sure some MUD would have done similar before. And Captive on the Amiga had procedural level creation that worked very well. Champions of Norrath on PS2 had procedural dungeon creation that pieced segments together, very well, but that used the full DVD capacity and hit loading issues as a result, so wasn't quite in the same vein.

Algorithmic texture creation is limited. There's only so much you can do that doesn't look rubbish, and the processing cost is often prohibitive. So a bit of randomised noise will work, but for something like a cellular reptilian skin texture, say, you'd be better off using a texture. It's very much a balancing act, but notably it's one that hasn't seen as much use this gen as we were led to believe it would, so next gen it would be going in rather blind to actually design a system on the hope of procedural content being commonplace and saving the need for storage. I can't see any console company saying, "we don't need disc capacity because half the assets will be created on the fly, so we can use flash cards."
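For the "bit of randomised noise" case, here's roughly the level of procedural texture that is cheap enough to consider at runtime (a minimal value-noise sketch, assuming numpy):

```python
import numpy as np

def value_noise(size, cells, rng):
    """Cheap value noise: random values on a coarse grid, bilinearly upsampled."""
    grid = rng.random((cells + 1, cells + 1))
    xs = np.linspace(0, cells, size, endpoint=False)
    x0 = xs.astype(int)
    t = xs - x0
    # interpolate along one axis, then the other
    rows = grid[:, x0] * (1 - t) + grid[:, x0 + 1] * t
    return rows[x0, :] * (1 - t[:, None]) + rows[x0 + 1, :] * t[:, None]

rng = np.random.default_rng(3)
# A few octaves of noise make a usable dirt/plaster breakup mask...
mask = sum(value_noise(256, 4 * 2**o, rng) / 2**o for o in range(4))
mask /= mask.max()
# ...but something structured like reptile scales needs far more work than this,
# which is where a hand-made (or photographed) texture wins.
```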
 
Thanks for the insight Shifty!!

Another oddball question for you :devilish:

With the use of image recognition software, have there been any developments in using photographs of objects and having the system create a 3D version of them? I thought I saw something at one point, but I don't remember where, nor how to search for such a thing.

Would the ability for a system to perform image recognition allow it to create objects by point of reference? Take that lizard you were talking about as an example:

The system gets a JPG of a lizard's profile.
It knows that it has 2 limbs in its profile view
It is longer than it is tall
It has jointed limbs due to the angles between them
Its eyes are large for its head
Its eyes are more out to the side than in front
Its skin is textured in a pattern typical of reptiles
The head is narrower than it is tall.
It has a tail that is curved
etc etc

Without being told what it was looking at, the system could take that information and already determine that the image it is looking at is a lizard. Next it could take measurements of the image (distance between legs, body length and height, eye size, etc.) and, combined with the knowledge it has of lizards, construct that lizard. It could rig the lizard up automatically because it knows how many joints the lizard has.
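Purely as a thought experiment, a toy sketch of pulling a couple of such measurements from a 2D silhouette (assuming a binary numpy mask as input; real image recognition is vastly harder than this):

```python
import numpy as np

def profile_measurements(mask):
    """Very crude measurements from a binary silhouette (True = animal pixels)."""
    ys, xs = np.nonzero(mask)
    length = xs.max() - xs.min() + 1
    height = ys.max() - ys.min() + 1
    return {
        "length_px": int(length),
        "height_px": int(height),
        "longer_than_tall": bool(length > height),
        "fill_ratio": float(mask.sum() / (length * height)),  # a thin tail lowers this
    }

# Hypothetical elongated blob standing in for a lizard's profile.
mask = np.zeros((100, 300), dtype=bool)
mask[40:70, 20:280] = True           # body
mask[50:58, 280:299] = True          # tail stub
print(profile_measurements(mask))
```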

Could there be, or is there already, software that just requires the end user to define objects down to their basic structure (a house typically has 4 sides and a roof, etc.) and then feed the system 2D images of objects for it to construct around that basic defined structure?
 
Algorithmic texture creation is limited. There's only so much you can do that doesn't look rubbish, and the processing cost is often prohibitive. So a bit of randomised noise will work, but for something like a cellular reptilian skin texture, say, you'd be better off using a texture. It's very much a balancing act, but notably it's one that hasn't seen as much use this gen as we were led to believe it would, so next gen it would be going in rather blind to actually design a system on the hope of procedural content being commonplace and saving the need for storage. I can't see any console company saying, "we don't need disc capacity because half the assets will be created on the fly, so we can use flash cards."

Quick question about this.

The power requirements seem (and agreed, would be) very large as the depth and complexity of the textures increase. However, my question is: would we as end users be able to sacrifice some quality in exchange for an extensive amount of unique textures to make up for it?

What I'm asking is: do you think the plethora of unique textures would provide visuals that offset the quality lost due to such extensive work creating them?
 
Thanks for the insight Shifty!!

Another oddball question for you :devilish:

With the use of image recognition software, have there been any developments in using photographs of objects and having the system create a 3D version of them?...
Nope. I've chatted with Laa-Yosh about this. I'd have hoped 3D scanning and asset creation would have developed along the same lines as other creative processes, but it doesn't work like that. 3D scanning at the moment is limited to creating reference models. Deriving 3D models from 2D scans is going to be even more difficult.

The power requirements seem (and agreed, would be) very large as the depth and complexity of the textures increase. However, my question is: would we as end users be able to sacrifice some quality in exchange for an extensive amount of unique textures to make up for it?

What I'm asking is: do you think the plethora of unique textures would provide visuals that offset the quality lost due to such extensive work creating them?
Not really, as the cost is an order of magnitude or ten above just looking up a premade texture. Have a look at what's freely available in something like CGTextures. Finding a way to model those procedurally is hard enough; the time taken to create them is way beyond what you can do in realtime. It's like rendering a fractal versus slapping a fractal texture on a quad. Creating a fractal tree is going to take ages versus plonking down a premade tree, or, in the case of something like Speedtree, taking some premade parts and assembling them with randomised parameter variation.
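To illustrate the gap, a toy benchmark (assuming numpy; the absolute numbers will vary wildly by hardware, the point is the ratio between computing a texture and just reading one):

```python
import time
import numpy as np

def mandelbrot(size, iters=50):
    """Compute a small fractal texture from scratch."""
    y, x = np.mgrid[-1.2:1.2:size * 1j, -2.0:0.8:size * 1j]
    c = x + 1j * y
    z = np.zeros_like(c)
    out = np.zeros(c.shape, dtype=np.uint16)
    for _ in range(iters):
        live = np.abs(z) <= 2.0
        z[live] = z[live] ** 2 + c[live]
        out[live] += 1
    return out

t0 = time.perf_counter()
generated = mandelbrot(512)            # "rendering the fractal"
t1 = time.perf_counter()
premade = generated.copy()             # pretend this was authored offline
sampled = premade[::2, ::2]            # "slapping the texture on a quad": just read it
t2 = time.perf_counter()
print(f"generate: {t1 - t0:.4f}s   lookup: {t2 - t1:.6f}s")
```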
 
Shifty,

I guess I'm just baffled that something like this isn't in existence or currently being used. Google Earth, for all intents and purposes, does something like this, right? The only thing I could find online is an example that mimics Google Street View, but it doesn't seem like they took it any further.

I guess I just don't understand the complexity required. A 2D image of a house to me has TONS of information on how that object would be drawn in 3D. Using shadows and other key elements of a picture to determine scale, you would imagine that a computer could simply start drawing the polygons needed to re-create the structure. All it seems Google Street View and the like do is manipulate a 2D image to make it look 3D, but it's still a 2D image, not a 3D object.

But alas, my expertise (and, quite frankly, intelligence level) is not high enough to offer a solution or fathom the actual complexities of such a system. It's a shame, however, that with all this processing power we still rely on a human hand to give the system even the basic shapes for creating objects.
 
I guess I just don't understand the complexity required. A 2D image of a house to me has TONS of information on how that object would be drawn in 3D.
You mean something like this? There are loads of companies working on similar tech, but this stuff (automated 2D->3D) is still a far cry from what would be efficiently usable in a game or other high-detail interactive environment.
 
That tech looks pretty interesting..

I would wonder, though, if we could at least start out creating assets this way offline, then progressively add more of these methods in real time once the tech improves and the power increases.

Wish I could have gotten some of the City demos to work for the companies who are currently using their tech. Would have loved to see it in action.
 
That tech looks pretty interesting..

I would wonder, though, if we could at least start out creating assets this way offline, then progressively add more of these methods in real time once the tech improves and the power increases.

Wish I could have gotten some of the City demos to work for the companies who are currently using their tech. Would have loved to see it in action.

Games can do much better if they want to do 3D in post-processing. Game engines have things like a per-pixel z-buffer available when doing a post-process 3D conversion. This enables the engine to shift pixels to the right places during the 2D-to-3D conversion more accurately than if you only had the framebuffer available. If the engine also has per-pixel motion vectors available and renders at slightly higher resolution than screen res, perhaps the extra resolution and movement information can be used to extrapolate what the other eye would have seen.

Still, it's impossible to get perfect detail where it was never rendered, so a post-processing solution will cause artifacting. Just try placing your coffee mug in front of your eyes: close one eye, then open it and close the other. Both eyes see different things. It's impossible to accurately fill in all the missing details in post-processing without actually rendering the visible stuff twice.

I'm sure there are plenty more tricks post-process 2D-to-3D conversion can use (perhaps only rendering selected objects twice and post-processing the rest of the scene into 3D), but in the end it all comes down to the amount of artifacting players will accept when a developer chooses between proper 3D and a post-processing solution. I'm sure some types of games lend themselves perfectly to post-processed 3D, and some will fail spectacularly without proper 3D.
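A rough sketch of that kind of depth-based pixel shifting (assuming numpy arrays for the colour and depth buffers; the disparity formula and constants are simplified stand-ins):

```python
import numpy as np

def reproject_eye(colour, depth, max_disparity=16):
    """
    Shift pixels horizontally by a depth-dependent disparity to fake the second
    eye's view. Pixels nothing maps to are left as holes (-1), which is exactly
    the artifacting a post-process approach has to hide.
    """
    h, w, _ = colour.shape
    out = np.full_like(colour, -1)
    # Nearer pixels (small depth) get a larger horizontal shift.
    disparity = (max_disparity * (1.0 - depth)).astype(int)   # depth assumed in [0, 1]
    for y in range(h):
        xs = np.arange(w) + disparity[y]
        valid = (xs >= 0) & (xs < w)
        # Where several source pixels land on the same spot, one simply
        # overwrites the other (no proper depth ordering in this sketch).
        out[y, xs[valid]] = colour[y, valid]
    return out

# Tiny synthetic frame: a near quad (depth 0.2) over a far background (depth 0.9).
colour = np.full((120, 160, 3), 40, dtype=np.int16)
colour[40:80, 60:100] = 200
depth = np.full((120, 160), 0.9)
depth[40:80, 60:100] = 0.2
right_eye = reproject_eye(colour, depth)
holes = np.count_nonzero((right_eye == -1).all(axis=2))   # uncovered pixels to fill in
```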
 
Shifty,

I guess I'm just baffled that something like this isn't in existence or currently being used. Google Earth, for all intents and purposes, does something like this, right? The only thing I could find online is an example that mimics Google Street View, but it doesn't seem like they took it any further.

I guess I just don't understand the complexity required. A 2D image of a house to me has TONS of information on how that object would be drawn in 3D.
Only if you parse it right. Is that dark patch a shadow, or part of the wall? An example I remember: on a moonlit night in my room I saw a patch on the floor and had to wonder if it was a patch of moonlight or a piece of paper. It turned out to be a piece of paper, but you can't know that just by looking. Something like a house provides an obvious basis for analysis; it's a basic shape, a collection of boxes. Extending that to other, more organic shapes becomes exponentially more complicated.

Software does exist for taking a series of 2D photos and turning them into 3D scenes, but it needs the user to mark out edges; your ideal would need to automate that. The end result also isn't an ideal, optimised game model crafted to best fit the game's rendering pipeline.

I was actually thinking of scanning using 3D cameras, and it's that which still surprises me isn't suitable for creating game assets. Where we can create realistic textures just by photographing a real surface (much quicker, more realistic and available to anyone, compared to having to hand-draw the texture on a computer), we can't similarly take 3D 'photos' to capture a model. That technology is still backward, but then that's because 3D is an exponentially more complex problem than 2D.

It's a shame, however, that with all this processing power we still rely on a human hand to give the system even the basic shapes for creating objects.
All that processing power doesn't count for much in terms of comprehension. Computers can turn images into numbers and crunch trillions of numbers a second, but they don't know what the hell it is they are 'looking' at. Humans have an intrinsic understanding of objects based on real experience and a huge database of 2D and 3D information. They're two very different fields. E.g., just looking out the window, the building opposite has glass windows, and I know glass reflects, so I know the colours I'm seeing in the windows aren't part of that surface but a reflection. A whole glass office is a complex mix of reflections and light coming from inside the building. In understanding the building I can ignore reflections, whereas a computer trying to understand a 2D image only has the colour values of each pixel to work with. The strong lines of a house or office make that much easier to do with a building, especially when an algorithm has been programmed by a human to handle certain characteristics, which is why something like Google Earth can be produced now. But buildings are amongst the simplest of objects to convert to 3D (the simplest being basic geometric shapes).
 