(Follow up to the original discussion from the PS3 strategy thread)
I'd like to start with a general overview of the principles of game content creation. Then we can discuss any points in further depth, check out examples/comparisons, get into other topics, certain games etc. I know there are some other artists and of course coders here; any contributions/corrections are most welcome as well.
Game artwork usually has to fulfill a few specific functional requirements.
Just so we know the reasons for the rest of this post... The first has always been to communicate information to the player, ever since the first 8-bit games: this is a door, this is an enemy character, this is a powerup, that gun is probably stronger than the one I already have.
It has to evoke emotions: this is an enemy to fear, this is a safe and happy place, there is an interesting contrast between the environment and the characters in it.
It has to be both efficient, making the best use of the allocated resources; and free of any technical problems that would cause rendering artifacts and slowdowns.
It also has to look good; as in, appeal to the general tastes of the intended audience, and fit into the overall visual style of the game.
Innovation happens, and the industry makes full use of it.
I'd like to stress this a lot, right at the start, to put things into perspective. The past few years have introduced many new software tools, techniques and more powerful hardware that have enabled artists to create more detailed assets in a lot less time. ZBrush and Mudbox in particular have brought on a small breakthrough in the creation of high resolution models for normal mapping, and their ease of use has enabled traditional artists without technical backgrounds to easily get into digital content creation, usually without knowing what's going on behind the scenes. 64-bit systems have provided enough memory to handle and process these assets. But even though development hasn't stopped, it isn't reasonable to expect another breakthrough of similar scale in the upcoming years.
Most studios have already embraced these new technologies; in fact, without these innovations we would not have seen games like Gears of War or Mass Effect. Without them, artists could not have produced such quality within reasonable time.
No computer can replace human intelligence and artistic sense.
We cannot write programs that make many of the important decisions for us. Intelligence is required to create proper polygon structures for animation and normal map baking, efficient UV texture layouts, blendable animations from mocap, and so on. There's a lot of academic research into various algorithms, but they are still far below the capabilities of a reasonably talented and experienced artist, not to mention the lack of implementation in any off-the-shelf software.
For example, we do have the ability to automatically unwrap 3D objects into 2D texture space - but it still requires a human's intelligence to decide where and how to cut up the object into individual parts that can be unfolded. And, more generally, what compromises to make between the number of seams separating the pieces and texture distortion, or how to avoid obvious mirroring while making the most of the available texture space.
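To make that a bit more concrete, here's a rough Python sketch of the kind of metric an automatic unwrapper can actually optimize: how much each triangle gets stretched or squashed when flattened into UV space. The function names and the simple area-ratio metric are my own simplification, not taken from any particular tool.

```python
def triangle_area_3d(a, b, c):
    """Area of a triangle given three (x, y, z) points."""
    ux, uy, uz = b[0]-a[0], b[1]-a[1], b[2]-a[2]
    vx, vy, vz = c[0]-a[0], c[1]-a[1], c[2]-a[2]
    # Half the length of the cross product.
    cx, cy, cz = uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx
    return 0.5 * (cx*cx + cy*cy + cz*cz) ** 0.5

def triangle_area_uv(a, b, c):
    """Area of a triangle given three (u, v) points."""
    return 0.5 * abs((b[0]-a[0]) * (c[1]-a[1]) - (c[0]-a[0]) * (b[1]-a[1]))

def stretch_ratio(tri_3d, tri_uv, texel_density):
    """How far a triangle deviates from the target texel density.

    tri_3d: three (x, y, z) points; tri_uv: their three (u, v) coordinates.
    texel_density: target UV area per unit of 3D surface area.
    1.0 means the flattened triangle matches the desired density;
    values far from 1.0 mean visible stretching or wasted texture space.
    """
    a3d = triangle_area_3d(*tri_3d)
    auv = triangle_area_uv(*tri_uv)
    if a3d == 0.0:
        return 1.0  # degenerate triangle, nothing to measure
    return (auv / a3d) / texel_density
```

An optimizer can push this number toward 1.0 for every triangle, but it has no idea that a seam running across a character's face is far worse than one hidden under the arm - that call still belongs to the artist.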
Artistic sense is very, very important. In fact, it requires a certain amount of talent, and many years of learning and practice, to be able to quickly determine what makes something look "good". Add some more mass here, make the dirt darker, remove detail from there, use this shade of blue, increase the strength of the ambient light, and so on, and so on. Composition, color theory, anatomy, ideals, styles, tastes - computers don't know anything about this. It's the main reason why an art director usually gets paid twice as much as a regular artist: his advice helps push everyone's work further, and he also makes sure that the entire team produces consistent visuals.
This is also true for most procedural approaches, particularly for textures - most automated dirt mapping algorithms create results that are boring, unconvincing, sometimes even ugly. There may be detail, but it doesn't look "good" or "right", and it may not fit the style that your game is aiming for.
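For anyone curious what "automated dirt" usually boils down to, here's a minimal sketch: darken the texture where an ambient occlusion map says the surface is cavitied, break it up with some noise, done. The parameter names and blend weights are arbitrary placeholders of mine - and that arbitrariness is exactly the problem, because nothing in here knows what grime should look like for your particular game.

```python
import numpy as np

def procedural_dirt(base_color, ambient_occlusion, dirt_color=(0.25, 0.2, 0.15),
                    strength=0.6, noise_scale=0.35, seed=0):
    """base_color: (H, W, 3) float array in 0..1.
    ambient_occlusion: (H, W) float array, 1 = fully open, 0 = deep cavity."""
    rng = np.random.default_rng(seed)
    h, w, _ = base_color.shape

    # Dirt accumulates where occlusion is high (crevices, corners).
    cavity_mask = 1.0 - ambient_occlusion

    # Cheap noise so the result isn't a perfectly even coat of grime.
    noise = rng.random((h, w)) * noise_scale

    mask = np.clip(cavity_mask * strength + noise * cavity_mask, 0.0, 1.0)

    dirt = np.asarray(dirt_color, dtype=np.float32).reshape(1, 1, 3)
    return base_color * (1.0 - mask[..., None]) + dirt * mask[..., None]
```

The output is technically "detailed", but it's the same even, directionless grime on every asset - a texture artist would instead decide where water runs, where hands touch, where paint chips.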
Most "automated" content creation tools are based on modifying existing assets.
Poser, FaceGen, SpeedTree and the rest all work by taking a set of pre-built regular assets. The same goes for in-house tools like BioWare's customisable player characters in Mass Effect, or Weta's Orc Builder used for the LOTR movies' crowd scenes. So you're constrained by the quality level and technical properties of these assets (UV layout, polygon count and structure etc.) and have to modify them extensively to fit your needs. Usually though, these applications are not meant to be used for high quality game assets; they're more for enthusiasts (ever seen Poser pron images...) or different fields like architectural visualisation. Also, by modifying things like facial features, you'll probably introduce texture stretching, or mess up bone weights and morph targets. And in practice, characters created from a generic model will end up looking generic, lacking characteristic features.
And in the case of an in-house tool, you'll also have to build a set of your own assets to use, like armor, clothing and face variations. And you'll still end up with these models looking inferior to ones that have been built from the ground up, by hand. Yes, you can get away with a lower level of quality for background stuff, but you can't go too low, because the camera is interactive, the player can walk around, and you'll usually be judged by your worst work, not your best. The player is less likely to inspect every little corner of an NPC's texture, but he'll surely notice if it's half the resolution, or doesn't respond as well to the dynamic lighting.
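Underneath, most of these customization systems come down to one line of math: every face is the same base mesh plus a weighted sum of pre-sculpted offsets. The array and parameter names below are my own illustration, not any tool's actual API, but the structure is the point - nothing outside that fixed set of offsets can ever appear.

```python
import numpy as np

def apply_morphs(base_vertices, morph_targets, weights):
    """base_vertices: (N, 3) array of the neutral head/body mesh.
    morph_targets: dict of name -> (N, 3) per-vertex offset arrays.
    weights: dict of name -> slider value, usually 0..1."""
    result = np.array(base_vertices, dtype=np.float32)
    for name, weight in weights.items():
        result += weight * morph_targets[name]
    return result
```

Note what stays fixed: the vertex count, topology, UV layout and skinning weights were all authored for the neutral base. Push a "wide jaw" slider far enough and the texture stretches and the bone weights stop matching - exactly the artifacts mentioned above - while staying within safe slider ranges is what makes every generated character look like a sibling of the others.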
Scanning, photography, motion capture and such will only provide raw data that has to be processed.
Scanning a famous actor or a dragon maquette will only get you a hyper-dense mesh, consisting of up to 40 million polygons, that you can only use to extract normal maps and perhaps some textures from; you'll still have to build the in-game model. A photo will not fit your UV layout, will have some residual lighting that you'll have to paint out, and it won't give you specular maps either. Motion capture data has to be cleaned up, fixed where it goes wrong, retargeted to characters with different proportions than the actor, and edited into individual animations.
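The dense scan data is only ever a source to derive compact maps from; it never gets rendered itself. As a toy illustration of that direction of the pipeline, here's a sketch that treats the "scan" as a simple height field and converts it to a tangent-space normal map with finite differences - a heavily simplified stand-in for a real high-to-low-poly bake, with function and parameter names of my own.

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """height: (H, W) float array sampled from the dense source geometry.
    Returns an (H, W, 3) uint8 array ready to be saved as a normal map."""
    # Surface slope along the two texture axes.
    dy, dx = np.gradient(height.astype(np.float32))

    # Steeper slope -> more tilted per-texel normal.
    nx = -dx * strength
    ny = -dy * strength
    nz = np.ones_like(height, dtype=np.float32)

    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    n = np.stack([nx / length, ny / length, nz / length], axis=-1)

    # Remap from [-1, 1] to the usual [0, 255] normal-map encoding.
    return ((n * 0.5 + 0.5) * 255.0).astype(np.uint8)
```

Even in this toy form, someone still has to build the clean low-poly model, lay out its UVs and decide where the detail should go - which is where the artists, time and money come in.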
All this requires artists, time and money, and usually some specialized and expensive software too (for example, Cyslice, used to process scanned objects, costs $4000 per seat - on top of the standard Max/Maya/XSI + Photoshop + ZBrush + whatever). In most cases, starting from scratch is actually cheaper, faster and less complicated.
The majority of games have original content that has no real-world counterpart.
Sure, there are cities like New York, celebrities like Tiger Woods, industrial CAD models of cars, cool-looking dirty rock surfaces, and so on, that you can buy or process into your game somehow.
But you can't buy a tool that builds alien plants, fantasy monsters, spaceships or superhero physiques. You can't buy such assets from an object database, can't ask them to go for a scanning, photo shooting or motion capture session, you really have to make them all on your own, from scratch.
And even if you can buy a BMW M3 mesh from Viewpoint Data Labs, its polygon count and distribution, UV mapping, and construction are probably not going to fit your engine's requirements.
Well, so much for a start, I'm also pretty tired for now and will probably go to sleep soon... Anyway, I hope you'll find this interesting and maybe even enlightening to an extent.