Mega Meshes - Lionhead

All they say is they use Hoppe's polygon reduction technique, so presumably there is no runtime geometry LOD selection.
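For reference, Hoppe's technique builds a progressive mesh out of a sequence of edge collapses ranked by an energy function, storing the reverse vertex splits for LOD. A minimal sketch of just the collapse mechanics, using greedy shortest-edge collapse instead of Hoppe's actual metric and skipping re-ranking after each collapse, purely to illustrate:

```python
# Minimal sketch of edge-collapse mesh simplification. Hoppe's progressive
# meshes rank collapses by an energy function and store vertex splits for
# runtime LOD; here we just collapse the shortest edge each time and don't
# re-rank edges after a collapse, so this is illustration only.
import heapq
import numpy as np

def simplify(verts, tris, target_tris):
    """verts: list of 3D points; tris: list of (i, j, k) index tuples."""
    verts = [np.asarray(v, dtype=float) for v in verts]
    tris = [tuple(t) for t in tris]
    heap = []
    for t in tris:
        for a, b in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
            e = (min(a, b), max(a, b))
            length = float(np.linalg.norm(verts[e[0]] - verts[e[1]]))
            heapq.heappush(heap, (length, e))

    remap = list(range(len(verts)))  # collapsed vertices point at survivors

    def resolve(i):
        while remap[i] != i:
            i = remap[i]
        return i

    while heap and len(tris) > target_tris:
        _, (a, b) = heapq.heappop(heap)
        a, b = resolve(a), resolve(b)
        if a == b:
            continue  # this edge was already collapsed away
        verts[a] = (verts[a] + verts[b]) / 2.0  # midpoint vertex placement
        remap[b] = a
        # Re-index triangles and drop the ones that degenerated.
        tris = [t for t in (tuple(resolve(i) for i in t) for t in tris)
                if len(set(t)) == 3]
    return verts, tris
```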

Sorry, I probably should have used a different term. In that instance they're talking about texture resolution, not geometry.
 
My impressions when watching that was "Toy Story". The same level of cartoonish art style with an absolutely amazing realtime implementation. And happening on 5 year old hardware. Very cool indeed.
 
Yeah, Toy Story, that's what I meant. Add nice depth of field in there and they've got themselves a cartoon-game :smile:
 
I don't suppose anyone's seen the Toy Story 3 presentation from Siggraph last year. :p It was quite impressive.
 
I did and I liked it a lot, too. Shadows and such; impressive work, especially considering how most hardcore gamers treat such licensed stuff as mediocre by default.
 
I guess this will shut up those people who say nobody pushes the XB360 :devilish:

Or does it prove them right? It's not like there's anything out there like this now, is there? :devilish: (Though the lighting for some of the Rare stuff does look a little bit like this, I think, albeit with less GI-ness.)
 
It's still not suitable for everything we do, as the majority of that is still characters and props, but I already have our developers looking into this stuff, mostly for terrains of various kinds.
And there's still a lot of stuff that's easier to do in 2D unwraps ;)
 
I've never said things weren't easier in 2D unwraps. It has just always been my contention that those unwraps should be temporary projections independent of the native storage ... baked back into whatever the underlying native representation is when you're done (whether that's vertex colours or a texture atlas created with automatic unwrapping, which will not put seams in nice places for native editing). Trying to work directly in the native megatexture with Lionhead's workflow, for instance, would not only be difficult because of the automatic unwrapping, it would also mean any change to the megamesh artwork would trash your texture work.

Using Projection Master in ZBrush would be the superior way to get your 2D unwrap (although it's still a bit limited, since it only has an orthographic projection ... they should really add box, cylinder and sphere projections).
 
Oh no, Projection Master is terrible. I want my UVs for a gazillion reasons; I'm even against Ptex personally, which is really the most advanced approach IMHO but still too limiting for anything but painting on stuff in 3D. And even getting a texture can involve a lot of 2D steps...

For example, lining up cloth texture patterns to the cuts on the surface with 100% precision is only possible with clever UV editing, and we have very advanced tools to do that pretty quickly. I'll try to take a screenshot tomorrow to show what I mean; no projection or Ptex trickery can do that, and it's a pretty common case for us, doing mostly contemporary or historical characters nowadays.
 
And yet, it will have to happen sooner or later ...

I understand what you mean though: with Ptex vs Projection Master, you need an unwrap which maintains geometric lengths from surface space in texture space ... but even that can be done with temporary unwraps; there is never a need for the editing representation to be the storage representation.
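
To make that bake-back step concrete, here's a minimal sketch, assuming per-vertex UVs for the temporary unwrap and vertex colours as a stand-in native representation; the names here are mine, not any particular tool's API:

```python
# Sketch of the "temporary unwrap" idea: paint in a throwaway 2D projection,
# then bake the result back into a native representation (vertex colours
# here, purely for illustration). temp_uv and bilinear() are hypothetical
# names, not any real tool's API.
import numpy as np

def bilinear(img, u, v):
    """Bilinearly sample img (H, W, C) at normalized UV coordinates."""
    h, w = img.shape[:2]
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def bake_to_vertex_colours(temp_uv, painted):
    """temp_uv: (N, 2) per-vertex UVs of the *temporary* unwrap.
    painted: the texture the artist edited in that unwrap.
    Returns per-vertex colours; the unwrap itself can then be discarded
    (though you'd store it with the model to make the edit repeatable)."""
    return np.array([bilinear(painted, u, v) for u, v in temp_uv])
```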
 
I'll show you what I mean tomorrow, but a short summary:
Imagine a plaid shirt: it needs a consistent pattern on it, with the vertical and horizontal lines aligned to the edges of the fabric pieces, like here:
[Image: blk-lavish-plaid1.jpg (plaid shirt with the pattern aligned to the seams)]


Now the thing is that we find our cloth sims work best if we actually sculpt all the starting wrinkles and folds and rebuild the simulated mesh on top of this unorganized ZBrush sculpt. The trouble is that it'll be totally wrinkled and folded all around, like this:
[Image: s2vs-plaid-shirt-front.jpg (the same kind of shirt, heavily wrinkled and folded)]


It is nearly impossible to project a pattern onto such a mesh without serious distortions. It's also going to drive the texture artist crazy and at least double or triple the time for the task.

But we can cut the UVs up at the natural seams, straighten those in 2D, then relax the remaining parts while maintaining the straight seams. The new Headus UV tools let you do this in like 15 minutes; then you put a tiled plaid pattern on top of it in Photoshop and you have a nearly distortion-free texture. Add dirt, wear, and so on in 3D with BodyPaint or whatever you like and you're done.
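
Roughly what that relax step does, as a sketch: pin the straightened seam vertices and iteratively pull every other UV toward the average of its neighbours. Headus and the other UV tools use far better solvers (LSCM/ABF-style); this only illustrates the principle, and the names are mine:

```python
# Sketch of "straighten the seams, relax the rest": simple Laplacian
# relaxation of UV coordinates with the seam vertices pinned. Real UV
# tools use much better solvers; this is just the principle.
import numpy as np

def relax_uvs(uv, edges, pinned, iterations=200):
    """uv: (N, 2) UV coords; edges: list of (i, j) index pairs;
    pinned: set of vertex indices already straightened in 2D and held fixed."""
    uv = uv.astype(float).copy()
    neighbours = [[] for _ in range(len(uv))]
    for i, j in edges:
        neighbours[i].append(j)
        neighbours[j].append(i)
    for _ in range(iterations):
        new = uv.copy()
        for v, ns in enumerate(neighbours):
            if v in pinned or not ns:
                continue  # seam verts keep their straightened positions
            new[v] = uv[ns].mean(axis=0)  # move to neighbour average
        uv = new
    return uv
```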

I hope it's easier to understand now... will still try to get you screenshots tomorrow. And this is just one case where a 2D unwrap is a must. Then there's the poly-to-texel ratio if you want to use polypaint in ZBrush, the lack of flexibility, and so on.


Lionhead's stuff as a whole is great for what they do, but I bet the characters use more traditional methods because it's still the practical approach there.
 
I understood the problem, but just because you did an unwrap for editing doesn't mean that the coordinates in that unwrap need to have anything to do with the native representation ... even those UV coordinates you want can be temporary (although you would want to store the exact unwrap with the model so you can repeat it).
 
Yeah, sure, but... why the hell would we want to drop UVs if they work perfectly well for us? :)
 
Cloth is a bit of an exception in that an unwrap without distortion is even possible; for just about everything else, what is convenient for editing doesn't necessarily correspond to what is optimal for storage or rendering (or further editing, for that matter).
 
What's optimal is quite different for realtime hardware rendering, where a few percent this way or that can be the deciding factor in stalling the GPU. In offline, however, those few percent are just not worth the investment in artist time and the sacrifices in flexibility, IMHO.
 
Yeah, most of the presentation is about asset creation tools and new paradigms, to allow a uniquely modeled and textured but relatively small game world.
They also automate most of the process of converting high-res source assets into low-res game meshes and UV-mapped textures, and even optimize texel density across the game world.
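
For what it's worth, the quantity being equalized there is texel density: texture resolution per unit of world-space surface area. Something like this per triangle (my own sketch, not their code):

```python
# Sketch of per-triangle texel density, the quantity you'd equalize when
# optimizing texel density across a game world. Density here is linear
# texels per unit of world-space length.
import numpy as np

def texel_density(p0, p1, p2, uv0, uv1, uv2, tex_size):
    """p*: 3D vertex positions (numpy arrays); uv*: their UVs in [0,1];
    tex_size: texture width in texels (square texture assumed)."""
    world_area = 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))
    e1, e2 = uv1 - uv0, uv2 - uv0
    uv_area = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0]) * tex_size ** 2
    if world_area == 0:
        return 0.0
    return float(np.sqrt(uv_area / world_area))  # texels per world unit
```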

The realtime GI is also impressive, although it differs from most other implementations in that it does not allow destruction; the world geometry has to remain static, as far as I can tell. Considering the type of game Milo seems to be, this makes perfect sense.
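
The reason for that limitation: with static geometry, the light transport from a set of basis lights to every receiver point can be precomputed offline, and runtime relighting collapses to a matrix product; destroying or moving geometry would invalidate the whole precomputation. A generic sketch of the precomputed-transfer idea, not a claim about Lionhead's actual implementation:

```python
# Generic precomputed-transfer sketch: if the world never moves, the
# transport from basis lights to receivers (including bounced light) can
# be baked offline. Any geometry change invalidates the transfer matrix,
# which is why this style of GI forbids destruction.
import numpy as np

num_receivers, num_basis_lights = 10_000, 16
# Offline bake: transfer[i, j] = how much of basis light j reaches
# receiver i. Random stand-in data here, purely for illustration.
transfer = np.random.rand(num_receivers, num_basis_lights)

def relight(light_intensities):
    """Runtime: per-frame GI for all receivers is one matrix product."""
    return transfer @ np.asarray(light_intensities, dtype=float)

radiance = relight(np.ones(num_basis_lights))
```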


(It's also interesting how easily a lot of people still misunderstand technology-related presentations. Back with Doom 3 everyone thought that Carmack was rendering 2-million-polygon characters; now a lot of people think Lionhead has somehow managed to render billions of polygons. Goes to show just how superficial the general audience's understanding of current hardware and rendering technology really is...)
 
So what should we conclude from the fact that a fully megatextured, uniquely modeled world was shown nearly two years ago and nobody noticed anything different: that the demo has only been seen in limited video, that it's evidence of diminishing graphical returns in general, or that this tech direction is a poor end-product differentiator?
 
If you mean the first demo of Milo, I don't remember seeing any direct-feed footage, only the game playing on a TV screen with all the stuff covering up the details; and most people's focus was on the character and the interactions anyway. It was like a 600×400 window, skewed and blurred; I've checked it out on YouTube and it really is hard to see much.
 