Next generation asset creation

Laa-Yosh

(Follow-up to the original discussion from the PS3 strategy thread)

I'd like to start with a general overview of the principles of game content creation. Then we can discuss any points in further depth, check out examples/comparisons, get into other topics, certain games etc. I know there are some other artists and of course coders here; any contributions/corrections are most welcome as well.


Game artwork usually has to fulfill a few specific functional requirements.

Just so we know the reasons for the rest of this post... The first has always been to communicate information to the player, ever since the first 8-bit games: this is a door, this is an enemy character, this is a powerup, that gun is probably stronger than the one I already have.
It has to evoke emotions: this is an enemy to fear, this is a safe and happy place, there is an interesting contrast between the environment and the characters in it.
It has to be both efficient, making the best use of the allocated resources; and free of any technical problems that would cause rendering artifacts and slowdowns.
It also has to look good; as in, appeal to the general tastes of the intended audience, and fit into the overall visual style of the game.


Innovation happens, and the industry makes full use of it.

I'd like to stress this a lot, right at the start, to put things into perspective. The past few years have introduced many new software tools, techniques and more powerful hardware that have enabled artists to create more detailed assets in a lot less time. Zbrush and Mudbox in particular have brought on a small breakthrough in the creation of high-resolution models for normal mapping, and their ease of use has enabled traditional artists without technical backgrounds to easily get into digital content creation, usually without knowing what's going on behind the scenes. 64-bit systems have provided enough memory to handle and process these assets. But even though development hasn't stopped, it's still not reasonable to expect another similar breakthrough in the coming years.

Most studios have already embraced these new technologies; in fact, without these innovations we would not have seen games like Gears of War or Mass Effect. Without them, artists could not have produced such quality within reasonable time.


No computer can replace human intelligence and artistic sense.

We cannot create programs that could make many important decisions. Intelligence is required to create proper polygon structures for animation and normal map baking, efficient UV texture layouts, blendable animations from mocap, and so on. There's a lot of academic research into various algorithms, but they are still far below the capabilities of a reasonably talented and experienced artist, not to mention the lack of implementation in any off-the-shelf software.
For example, we do have the ability to automatically unwrap 3D objects into a 2D texture space - but it still requires a human's intelligence to decide where and how to cut up the object into individual parts that can be unfolded. And in general, what compromises to make between the number of seams separating different pieces and texture distortion, or how to avoid obvious mirroring while making the most use of texture space.
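
To give a flavour of what the automated side actually does, here's a toy sketch (Python/numpy, purely illustrative - not how any particular unwrapper is implemented) of the classic hard-edge heuristic: propose a cut wherever neighbouring faces meet at a sharp angle. It can find candidate seams, but it has no opinion on whether a seam will be visible, cross an important texture area, or waste space - that's the human part.

```python
import numpy as np
from collections import defaultdict

def seam_candidates(verts, tris, angle_deg=45.0):
    """Propose UV seams along 'hard' edges: any edge whose two adjacent
    faces meet at more than angle_deg. verts: (V,3), tris: (T,3)."""
    # Unit normal per triangle
    a, b, c = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    n = np.cross(b - a, c - a)
    n /= np.linalg.norm(n, axis=1, keepdims=True)

    # Map every undirected edge to the faces sharing it
    edge_faces = defaultdict(list)
    for f, tri in enumerate(tris):
        i, j, k = (int(v) for v in tri)
        for e in ((i, j), (j, k), (k, i)):
            edge_faces[tuple(sorted(e))].append(f)

    cos_limit = np.cos(np.radians(angle_deg))
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and np.dot(n[fs[0]], n[fs[1]]) < cos_limit]

# Toy example: two triangles folded along the shared edge (1, 2)
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 0, 1]], float)
tris = np.array([[0, 1, 2], [1, 3, 2]])
print(seam_candidates(verts, tris))  # [(1, 2)]
```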

Artistic sense is very, very important. In fact, it requires a certain amount of talent, and many years of learning and practice, to be able to quickly determine what makes something look "good". Add some more mass here, make the dirt darker, remove detail from there, use this shade of blue, increase the strength of the ambient light, and so on, and so on. Composition, color theory, anatomy, ideals, styles, tastes - computers don't know anything about this. It's the main reason why an art director usually gets paid twice as much as a simple artist; his advice will help push everyone's work further, and he'll also make sure that the entire team produces consistent visuals.

This is also true for most procedural approaches, particularly for textures - most automated dirt-mapping algorithms create results that are boring, unconvincing, sometimes even ugly. There may be detail, but it doesn't look "good" or "right", and it may not fit the style that your game is aiming for.
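
As a concrete (and deliberately dumb) example, here's the kind of rule a basic automated dirt pass boils down to - a hypothetical Python/numpy sketch of "grime collects in recesses, plus some noise". It produces detail everywhere, uniformly, which is exactly why it tends to look boring without an artist directing it:

```python
import numpy as np

def dirt_mask(height, strength=0.7, noise_amount=0.3, seed=0):
    """Crude procedural dirt: weight the inverse of a normalized
    height/cavity map ('grime collects in recesses') and break it up
    with uniform noise. Detail everywhere, direction nowhere."""
    h = (height - height.min()) / (height.max() - height.min() + 1e-8)
    noise = np.random.default_rng(seed).random(height.shape)
    return np.clip(strength * (1.0 - h) + noise_amount * noise, 0.0, 1.0)

# Toy 4x4 cavity map; in practice this would be baked from the mesh
height = np.array([[0.1, 0.2, 0.3, 0.2],
                   [0.2, 0.8, 0.9, 0.3],
                   [0.2, 0.9, 1.0, 0.3],
                   [0.1, 0.3, 0.3, 0.1]])
print(dirt_mask(height).round(2))
```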


Most "automated" content creation tools are based on modifying existing assets.

Poser, Facegen, SpeedTree and the rest all work by taking a set of pre-built regular assets. The same goes for in-house tools like Bioware's customisable player characters in Mass Effect, or Weta's Orc Builder used for the LOTR movies' crowd scenes. So you're constrained by the quality level and technical properties of these assets (UV layout, polygon count and structure etc.) and have to modify them extensively to fit your needs. Usually though, these applications are not meant to be used for high-quality game assets; they're more for enthusiasts (ever seen Poser pron images...) or different fields like architectural visualisation. Also, by modifying things like facial features, you'll probably introduce texture stretching, or mess up bone weights or morph targets. And in practice, characters created from a generic model will end up looking generic, lacking characteristic features.
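
For the curious, the mechanic underneath most of these customisation systems is plain morph-target blending over a fixed base mesh - something like this toy numpy sketch (my illustration, not any product's actual code). Since every slider is just a weighted offset of the same vertices, the topology, UVs and weights are locked in, and every result stays a variation of the same generic model:

```python
import numpy as np

def blend_character(base, morph_targets, weights):
    """Morph-target customisation: the result is the base mesh plus a
    weighted sum of per-vertex offsets. Topology, UVs and bone weights
    never change -- only vertex positions move."""
    result = base.copy()
    for name, w in weights.items():
        result += w * (morph_targets[name] - base)
    return result

# Toy 3-vertex "face"; real tools ship hundreds of sculpted targets
base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0]])
targets = {
    "wide_jaw":  base + [[-0.2, 0, 0], [0.2, 0, 0], [0, 0, 0]],
    "long_chin": base + [[0, -0.3, 0], [0, -0.3, 0], [0, 0, 0]],
}
print(blend_character(base, targets, {"wide_jaw": 0.5, "long_chin": 0.8}))
```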

And in the case of an in-house tool, you'll also have to build a set of your own assets to use, like armor, cloth and face variations. And you'll still end up with these models looking inferior to stuff that's been built from the ground up, by hand. Yes, you can get away with a lower level of quality for background stuff, but you can't go too low, because the camera is interactive, the player can go around, and you'll usually be judged by your worst stuff, not your best. The player's less likely to inspect every little corner of an NPC's texture, but he'll surely notice if it's half the resolution, or doesn't respond as well to the dynamic lighting.


Scanning, photography, motion capture and such will only provide raw data that has to be processed.

Scanning a famous actor or a dragon maquette will only get you a hyper-dense mesh of up to 40 million polygons that you can only use to extract normal maps and perhaps some textures from; you'll still have to build the ingame model. A photo will not fit your UV layout, will have some residual lighting that you'll have to paint out, and it won't give you specular maps either. Motion capture data has to be cleaned up, fixed where inappropriate, retargeted to characters with different proportions than the actor, and edited into individual animations.
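
Just to illustrate one small piece of that processing, here's a toy Python sketch of the simplest possible mocap clean-up step - filtering the jitter out of one joint-angle channel. It's purely illustrative; real clean-up involves far more careful filtering plus lots of hand editing:

```python
import numpy as np

def smooth_channel(samples, window=7):
    """Moving-average filter over one joint-angle channel to suppress
    marker jitter; 'same' mode keeps the curve length (endpoints are
    slightly damped)."""
    kernel = np.ones(window) / window
    return np.convolve(samples, kernel, mode="same")

# Toy channel: 120 frames of a clean swing plus capture noise
t = np.linspace(0.0, 1.0, 120)
raw = 30.0 * np.sin(2 * np.pi * t)
raw += np.random.default_rng(1).normal(0.0, 2.0, t.size)
print(smooth_channel(raw)[:5].round(2))
```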

All this requires artists, time and money, and usually some specialized and expensive software too (for example Cyslice can process scanned objects for $4000 per seat - in addition to the standard Max/Maya/XSI + Photoshop + Zbrush + whatever). In most cases, starting from scratch is actually cheaper, faster and less complicated.


The majority of games have original content that has no real-world counterpart.

Sure, there are cities like New York, celebrities like Tiger Woods, industrial CAD models of cars, cool-looking dirty rock surfaces, and so on, that you can buy or somehow process into your game.

But you can't buy a tool that builds alien plants, fantasy monsters, spaceships or superhero physiques. You can't buy such assets from an object database, and you can't send them off to a scanning, photo or motion capture session - you really have to make them all on your own, from scratch.

And even if you can buy a BMW M3 mesh from Viewpoint Data Labs, its polygon count and distribution, UV mapping, and construction is probably not going to fit your engine's requirements.


Well, so much for a start, I'm also pretty tired for now and will probably go to sleep soon... Anyway, I hope you'll find this interesting and maybe even enlightening to an extent.
 
I have a question or two.

How much effort and time is put into making sure that everything created by different artists in a team is consistent and fits the predefined art style? I mean, how much relative to actual asset creation? What if said predefined art style evolves a bit during development? Does that complicate things much?
Is it common for assets to evolve and be refined throughout the entire dev cycle, or are assets done at the beginning of development usually left as-is and never revisited? For example, in Uncharted's case many things changed often, while in Lost Planet's case I played in the exact same locations shown over a year earlier in the first trailer, even though the engine evolved quite a bit.
 
You've kinda answered both questions for yourself.

Assets may change constantly during development, especially if a lot of new technologies are involved. Gears of War went through a lot of changes as well; it's particularly evident in the main character. Generally, the more important the asset is, the more likely it is to get polished, as artists learn the techniques better.
Note that this also requires a production pipeline that allows for easy changes to assets, and this means addressing a lot of dependencies. Re-painting a texture is OK, but changing the polygon structure can mess up UV mapping and skinning as well. There are tools and methods to handle this, but it requires good technical artists.
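
To make the dependency point concrete, here's a toy Python sketch (my own illustration, not anyone's actual pipeline code) of an asset modelled as a dependency graph - note how a texture repaint dirties almost nothing, while a topology change cascades:

```python
# Toy dependency graph for one character asset: edit a stage and
# everything downstream of it must be rebuilt. The stage names are
# invented for illustration.
DEPENDS_ON = {
    "uv_layout":    ["mesh_topology"],
    "skin_weights": ["mesh_topology"],
    "normal_bake":  ["mesh_topology", "uv_layout", "highres_sculpt"],
    "diffuse_tex":  ["uv_layout"],
}

def dirty_set(changed):
    """Return every stage invalidated by editing `changed`."""
    dirty = {changed}
    grew = True
    while grew:
        grew = False
        for stage, deps in DEPENDS_ON.items():
            if stage not in dirty and any(d in dirty for d in deps):
                dirty.add(stage)
                grew = True
    return dirty

print(sorted(dirty_set("diffuse_tex")))    # just the texture itself
print(sorted(dirty_set("mesh_topology")))  # nearly everything
```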

As for how much time is spent on this, it's usually a question of how much time is left for such things.


An art style usually has a few guides developed early on to help artists; also, most assets are checked every few days by the art director and/or the appropriate lead artist, who critiques them and sometimes asks for stuff to be re-done. Larger productions may have staff meetings where everything is reviewed and discussed with the leads; this means artists have to provide regular work-in-progress images or animations, which also takes a bit of time.

Learning to handle style changes is a process; sometimes you can solve it by working with reversed priorities - complete the defining assets late in development and apply everything you've learnt so far. Or, as mentioned, perform one or more additional passes on existing assets, and even plan for this, as usually you'll have to have the lead characters and first levels done on time for trade shows and marketing screenshots.
 
Every once in a while we all get reminded of what most of us find special about this place.
 
When it comes to alien assets, I'd say they are mostly modifications of existing forms. Here's a plant, but it's blue. Here's a vehicle with alterations. Standard libraries of objects probably could help; they'd be like props in films. I figure they'd just have to be high enough quality that they can then be scaled down to the engine's limitations. I'm not sure how easy that is to do though, as you did talk about problems with scanned objects.
 
I have a question...

Coming from a pretty scientific background, I was pretty taken aback by this statement that you made, Laa-Yosh...

We cannot create programs that could make many important decisions.
Which I think should have read...
We cannot practically create programs that could make many important decisions today.

In the context of the discussion I would have to agree, based on the practicalities of developing such software systems, which would require significant investments of time, money, resources, R&D and extensive amounts of code (encompassing technologies incorporating a vast degree of AI and mathematical knowledge, among other things) to accomplish even the most basic task of procedurally building a polygonal mesh of a real-world object (let alone an imaginary one...).
However, I'm actually a firm believer in the idea that anything that can be broken down, quantified and specified using natural language and cognitive deduction can be translated into some formal specification from which a software system could be derived. In that respect I believe that, given the same set of inputs, predicates, arguments and parameters (e.g. tastes, styling and other subjective drivers, which would have to be formalised in some way), the software system could fare pretty well in terms of creating something of near-equal quality to a human who inherently (and probably subconsciously) utilised the same inputs, predicates, arguments and parameters to solve the problem.

It's all theory anyway, but I would like to know whether you, Laa-Yosh, agree or disagree that if a computer system could utilise some high-order "intelligence" engine (probably sometime in the far future), it could ever acceptably produce "art" in this way, appropriating the same/similar perceptual and cognitive processes to achieve a given result?

(I guess this question goes pretty far off into the philosophy of "could a machine ever think like a man?", but I still feel it's relevant to this discussion...)

I'd like to hear what you think about it...
 
AI breakthroughs are far beyond the horizon...

Also, we do use script languages to automate as much of the workload as possible. Most of the asset processing is single-click stuff that runs a MEL or MAXScript command list to compile textures, models, shaders etc. and move files around the network. This does not require human intelligence, or you can move the decision-making outside the system (naming conventions, custom attributes etc.).
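
For a rough flavour of what that single-click processing looks like, here's a hypothetical sketch in Python - every path, tool name and naming convention below is invented for illustration; the real thing would be MEL/MAXScript driving studio-specific tools:

```python
# Hypothetical one-click asset publish. Every path, tool name and
# naming convention here is invented for illustration.
import shutil
from pathlib import Path

WORK = Path("work/characters")
PUBLISH = Path("publish/assets")  # stand-in for the network share

def compile_texture(path, out_dir):
    # Stand-in for a real texture compiler (mips, compression, etc.)
    shutil.copy2(path, out_dir / path.name)

def publish_asset(asset_name):
    src, dst = WORK / asset_name, PUBLISH / asset_name
    dst.mkdir(parents=True, exist_ok=True)
    # The naming convention carries the decision-making, not the script
    for tex in src.glob("*_diff.tga"):
        compile_texture(tex, dst)
    for mesh in src.glob("*_lod?.mesh"):
        shutil.copy2(mesh, dst)

publish_asset("hero_male")  # one click from the artist's point of view
```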


Art is so far outside the horizon that I don't even try to make guesses. Again, of all the people working in the industry, only about 10-20% are truly sensitive and talented enough to set visual styles and guide other artists. So how could we even think about replacing them with a computer?
But I'm sure it's irrelevant to the current discussion; it's so far out in the future...
 
(I guess this question goes pretty far off into the philosophy of "could a machine ever think like a man?", but I still feel it's relevant to this discussion...)
I'd say that the theoretical discussion is beyond the scope of this thread (thanks for the input, Laa-Yosh!), which is implied as relating to this gen and the next, seeing as its birthplace was talk on BRD. Whether or not computers manage to create passable art through AI in the distant future, I think it's safe to assume that for the foreseeable future it's not going to happen, and we should accept that as true unless someone can actually link to something that proves otherwise; otherwise we risk turning this thread into a theoretical 'what might be possible' debate rather than a more practical 'what is possible now or in the near future' one.

On the point of 3D scanners being limited, have you seen the Sony movie about its use in scanning military gear? I'm sure it was on Game Trailers and for SOCOM, but I haven't been able to find it. They had a scanner capturing a bag, and it caught geometry and textures. Use of different lighting setups should also yield specular maps. The vid didn't cover how the geometry was then turned into a game asset though, or how much human input was involved. I'd have thought clever scanning tech could produce some fairly optimized meshes at the scanning phase too, though I don't know if such tech exists yet.
 
What impact, if any, do you see Luxology and their software tool Modo having on the marketplace? They recently licensed patents from Pixar, so how do you think that will pan out?
 
Modo isn't revolutionary in any way, as far as I know.
It tries to integrate the traditional poly modeling method with the high-poly brush-based 3D sculpting that Zbrush pioneered and Mudbox followed. From what I've heard the speed isn't that good, and it's not as polished as MB or as full of strange features as ZB. It's another tool in the box; it won't make anything go faster on its own.
 
Scanning can't really do the lowpoly modeling for you, for the reasons mentioned above (lack of intelligence and artistic sense).

Scanners are usually quite expensive and you have to take your stuff to a certain location to get it digitized. That's for the really high quality stuff where you can take actors, maquettes and such - especially for live subjects, it has to be lightning fast because no human can sit still for more than a second and even very small movements will distort the results.
Portable versions are cheaper but less accurate and can't handle large objects like a full body scan.

Capturing diffuse textures and reflectance-related stuff requires a more complex lighting rig that can rotate around quickly. A lot of research went into this on the Matrix sequels, though I have to say that the results in the movies weren't really convincing.
Sony's VFX house Imageworks does have a similar machine, and EA has built their own stuff for their Sports games as well, having hired the guy who developed it; but it's very expensive, and only good if you can build all your characters this way. Having digitized humans against hand-made monsters will not go well together, and I don't really know if anyone rents such devices at all. But it's a great method to get celebrities like soccer players or fashion models into your game, when full likeness is important.
 
On the subject of reducing high-poly assets again.

I'm imagining some sort of system where vertices are freely movable on the surface of a high-poly model; the high-poly model works as a guide for the placement of low-poly model vertices. Also, many modelling programs do allow not-so-intelligent poly reduction, but if you tie that to a brush, an artist can just move around thinking "there shouldn't be as many polygons there" and clicking the brush in that particular area. Anyway, I think such a system would make the poly reduction process easier for artists.
 
On the subject of reducing high-poly assets again.

I'm imagining some sort of system where vertices are freely movable on the surface of a high-poly model; the high-poly model works as a guide for the placement of low-poly model vertices.

Google Topogun; we've been beta testing it for several months now. Of course you still have to do all the low-res modeling; it's just somewhat faster to get it to conform to the high-res mesh. It doesn't speed up the modeling itself, but it's quite handy nevertheless.
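
If you're curious what the automated part of "conforming" amounts to, here's a crude toy sketch (Python with scipy; my illustration, not how Topogun works internally) that snaps each low-poly vertex to the nearest point of the dense mesh - simplified here to the nearest vertex. Everything else, i.e. the actual modeling decisions, stays manual:

```python
import numpy as np
from scipy.spatial import cKDTree

def conform(low_verts, high_verts):
    """Snap each low-poly vertex to the nearest vertex of the dense
    scan/sculpt (real tools project onto the surface itself)."""
    _, idx = cKDTree(high_verts).query(low_verts)
    return high_verts[idx]

# Toy data: a dense "sculpt" sampled on a unit sphere, plus a coarse
# cage floating slightly off the surface
rng = np.random.default_rng(0)
high = rng.normal(size=(20000, 3))
high /= np.linalg.norm(high, axis=1, keepdims=True)
low = np.array([[1.1, 0.0, 0.0], [0.0, 0.9, 0.0], [0.0, 0.0, 1.2]])
print(conform(low, high).round(3))  # pulled onto the sphere
```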

Also, many modelling programs do allow not-so-intelligent poly reduction, but if you tie that to a brush, an artist can just move around thinking "there shouldn't be as many polygons there" and clicking the brush in that particular area. Anyway, I think such a system would make the poly reduction process easier for artists.

These systems all lack the intelligence to decide how to maintain silhouette detail, how to create proper edges and transitions for normal mapping, and especially how to create a model suitable for animation.
Think about it: source geometry is tens of millions of polygons, and the result has to be 10-20 thousand at most. We're talking about getting rid of 99.95% of the vertices and keeping only the right ones. No software can be that clever. And even then, it's most likely that the results wouldn't work well as an ingame asset. It's far, far better to build it by hand.

Automatic optimization can be handy though, to take a 30-million-polygon reference mesh and chop it down to 10-25% of its original complexity while maintaining detail, so that processing normal maps in some applications can be a lot faster. Epic's modelers do that all the time.
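
For reference, the crudest form of such automatic reduction is vertex clustering - here's a toy numpy sketch (illustrative only; real decimators use smarter edge-collapse schemes with error metrics). Note that it has no idea which vertices matter for silhouettes, seams or deformation, which is exactly the problem above:

```python
import numpy as np

def cluster_decimate(verts, tris, cell=0.25):
    """Crude vertex-clustering decimation: snap vertices to a grid,
    merge everything in a cell, drop collapsed triangles. Fast and
    dumb -- no notion of silhouettes, seams or animation."""
    keys = np.floor(verts / cell).astype(np.int64)
    _, remap, inverse = np.unique(keys, axis=0, return_index=True,
                                  return_inverse=True)
    new_verts = verts[remap]           # one representative per cell
    new_tris = inverse[tris]           # re-index triangles
    ok = (new_tris[:, 0] != new_tris[:, 1]) & \
         (new_tris[:, 1] != new_tris[:, 2]) & \
         (new_tris[:, 2] != new_tris[:, 0])
    return new_verts, new_tris[ok]     # degenerate triangles removed

# Toy mesh: two nearly coincident vertices merge, collapsing a triangle
verts = np.array([[0, 0, 0], [0.01, 0, 0], [1, 0, 0], [1, 1, 0]], float)
tris = np.array([[0, 1, 2], [1, 2, 3]])
v, t = cluster_decimate(verts, tris)
print(len(verts), "->", len(v), "verts;", len(tris), "->", len(t), "tris")
```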
 
Ah, I see some bright sparks are ahead of me by a year again. *adds retopologizing to vocabulary* (I'll probably just forget it.)
 
Please, there's no good in trying to guess what will happen decades down the road. I'll be retired by then ;)
 