Mega Meshes - Lionhead

So is it that the demo has only been seen in limited video... that a fully megatextured, uniquely modeled world was shown nearly two years ago and nobody noticed anything different?

Limited direct feed footage and well... no details about the work that went into it.
also it wasn't grey enough :3
 
Is there any direct feed footage at all? As far as I can tell they've always demoed it with a human in front of a TV to show off the interaction.
 
There's some 480p direct-feed from Molyneux's TED demonstration. I've thought that smaller environments like this might be a more eye-popping use for virtual texturing than the sprawling but sometimes texel-sparse wastelands of Rage, but don't remember this video making any impression on me. Maybe it's something that really needs to be seen in person to appreciate the detail. Or maybe it's like a reverse magic trick, where you have to be told what's going on under the hood to be impressed by the results.
 
It's a mixture of the two IMHO. The seashore with those monochromatic rocks and grass in the new video is pretty much a letdown, for example, but the little lake is amazing, and the detail on some of the stuff at the nearby garden is also cool (where the snail is).

Also, the virtual texturing part isn't the most impressive to me, I'm personally more interested in the general asset workflow and toolset...
 
Wow those videos look incredible.

Can't find much about this at all though. Is this truly some sort of push for next-gen techniques on 360 by MS, finally using the tessellation unit? Or just some trivial thing that will never amount to anything?

For example, this is what Lens of Truth wrote. Hyperbole?

Microsoft and Lionhead Studios have released 2 videos from GDC 2011. In them we see two types of new graphics technology made specifically for the Xbox 360. The first, “Mega Meshes”, will increase tessellation units and the overall polygon count and level of detail by a massive margin for the up-and-coming Xbox 360 exclusives. The second video, titled “Mighty Light”, shows off the new lighting system. Check out the videos below. Let us know what you think about them.
 
Pretty sure that's wrong. I don't remember seeing the tessellator mentioned in the slides, and like Laa-Yosh keeps saying, that huge number of polygons is in the source materials for the map, not in the actual game, where they will be reduced. The big deal about the polygons is that you can sculpt an entire map like you would a character model in Zbrush, rather than having to switch between a number of different tools because of polygon limits. Since I have no familiarity with Zbrush or map making, someone else might want to clarify. This slide was mostly about tools for artists.
 
The realtime GI is also impressive, although it is different from most other implementations in that it does not allow destruction; the world geometry has to remain static, as far as I understand it. Considering the type of game Milo seems to be, this makes perfect sense.

Their algorithm is very similar to Enlighten, and thus it has a lot of the same advantages and disadvantages. Personally I think it's the right trade-off to make for the target hardware.
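
Just to illustrate why the static-geometry restriction buys so much (a toy sketch of my own, not Lionhead's or Geomerics' actual algorithm, and the names and numbers are made up): if the surfaces never move, the expensive patch-to-patch visibility/transfer term can be precomputed offline, and the runtime only redoes the cheap propagation step whenever the lights change.
Code:
import numpy as np

n_patches = 256
# Offline, expensive part: how much each static patch "sees" of every other patch.
# Random placeholder values here; a real solver derives these from the level geometry.
transfer = np.random.rand(n_patches, n_patches) * 0.002

def bounce(direct, bounces=2):
    # Runtime part: push freshly re-evaluated direct lighting through the
    # precomputed transfer matrix a couple of times to accumulate indirect light.
    radiance, indirect = direct, np.zeros_like(direct)
    for _ in range(bounces):
        radiance = transfer @ radiance
        indirect += radiance
    return direct + indirect

direct_light = np.random.rand(n_patches)   # changes whenever lights or time of day move
print(bounce(direct_light)[:4])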
 
Pretty sure that's wrong.

Yeah... just to clarify again, here are some facts.

- Zbrush and Mudbox are called sculpting apps, where you can use a Wacom tablet to work on a 3D model with brush-based tools. Instead of manually cutting new polygons into the model to create shapes and forms, you just subdivide a simple model until its poly count is in the millions and use that density for the detail.
http://www.youtube.com/watch?v=7gcUAV3DFtQ

- The general workflow is to build a very simple model, then sculpt a detailed highres mesh in Zbrush. You export that final sculpt and build a simple, in-game lowpoly mesh on top of it in Max/Maya. You create UVs for the lowpoly and use a tool to generate normal maps by comparing it with the highres Zbrush sculpt (there's a rough sketch of that baking step after this list). This way you get all the shading detail with minimal actual modeling work, which would be a lot more time consuming than sculpting.

- For characters, this workflow is usually applied in pieces, you can build a head, a hand, various armor pieces, and sculpt them separately in zbrush to maximize the detail. The sculpting apps can work with 8-12 million polygons nowadays on a 64-bit system, so simple characters can even fit into memory all at once.

- The trouble with environments is that they're very large and if you want a poly density similar to characters, you'll need to break everything down into a LOT of pieces. You can maybe speed things up by re-using a lot of the pieces, just like Epic does in their Gears stuff, see here.
http://www.zbrushcentral.com/showpost.php?p=554081&postcount=62
But this isn't intuitive enough, and it isn't good for completely unique environments, which is what you'd want if you have virtual texturing support. You can't sculpt a landscape with enough detail because it's too large to fit into memory; you can't even fit a building.
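
And here is the rough sketch of the normal-baking step mentioned above. It's my own simplification, not any particular baker's code; encode_normal, bake_texel and trace_to_highres are invented names, and the ray cast against the sculpt is stubbed out. Per texel of the low-poly UV layout, you look up the sculpt's surface along the low-poly normal and re-express its normal in the low-poly tangent frame.
Code:
import numpy as np

def encode_normal(n):
    # Map a unit normal from [-1, 1] into an 8-bit RGB texel.
    return np.round((n * 0.5 + 0.5) * 255).astype(np.uint8)

def bake_texel(lowpoly_point, trace_to_highres):
    # One texel of the bake: find the matching sculpt surface point and express
    # its normal in the low-poly tangent frame.
    pos, normal, tangent = lowpoly_point          # from rasterising the low-poly UV layout
    bitangent = np.cross(normal, tangent)
    hi_normal = trace_to_highres(pos, normal)     # nearest sculpt surface along the normal
    tbn = np.stack([tangent, bitangent, normal])  # world space -> tangent space
    return encode_normal(tbn @ hi_normal)

# Toy usage: a flat low-poly face under a sculpt surface tilted 45 degrees.
point = (np.zeros(3), np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))
tilted = lambda p, n: np.array([0.7071, 0.0, 0.7071])
print(bake_texel(point, tilted))                  # -> roughly [218, 128, 218]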


Lionhead's tech is basically adding a layer above Zbrush, and a very big one. They start with simple, very rough geometry for the entire level, which they store on disk. It is subdivided until it's a very, very high detail version of the game world - which is impossible to display all at once, but they don't need to do that.
They just export chunks of it into Zbrush one at a time to add detail, sculpt it and even paint it (it has a vertex-color-style feature to replace UV textures), then bring the results back into the MegaMesh tool. They have support for multiple artists doing this at the same time on the same level, even with neighboring chunks.
Then to create the actual ingame version, their toolset will automatically generate lowpoly level geometry, unwrap UVs for it, and extract color, normal, specular and occlusion textures too.
This method is more intuitive, has far fewer technical issues for the artist to deal with, and can manage very, very high complexity for them almost completely automatically. So far better results with less work and headache.
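
In pseudo-code terms, the round-trip looks something like the sketch below. Every name and operation in it is made up for illustration (string stand-ins play the role of meshes); it's not Lionhead's actual toolset, just the shape of the workflow as I understand it from the slides.
Code:
# Toy stand-ins for the heavy geometry operations.
def subdivide(mesh, levels):   return f"subdivided({mesh}, {levels})"
def merge(meshes):             return "merged(" + ", ".join(meshes) + ")"
def decimate(mesh):            return f"lowpoly({mesh})"
def auto_unwrap(mesh):         return f"uvs({mesh})"
def bake(hi, lo, uvs, channels):
    return {c: f"{c}_map baked from {hi}" for c in channels}

class MegaMeshStore:
    """Disk-backed store that keeps the whole level as independent chunks,
    so only the chunk being sculpted ever has to fit in memory."""
    def __init__(self, chunk_ids):
        self.sculpts = {cid: f"rough_chunk_{cid}" for cid in chunk_ids}

    def check_out(self, cid, levels=4):
        # Exported to the sculpting app at full density.
        return subdivide(self.sculpts[cid], levels)

    def check_in(self, cid, sculpted):
        # Re-imported result; border stitching against neighbours would go here.
        self.sculpts[cid] = sculpted

def build_game_assets(store):
    """The automated back end: low-poly level mesh, UVs, and baked maps."""
    full = merge(store.sculpts.values())
    low = decimate(full)
    uvs = auto_unwrap(low)
    maps = bake(full, low, uvs, ("color", "normal", "specular", "occlusion"))
    return low, uvs, maps

# Two artists working on neighbouring chunks at the same time:
store = MegaMeshStore(chunk_ids=range(4))
store.check_in(0, "sculpted_" + store.check_out(0))
store.check_in(1, "sculpted_" + store.check_out(1))
print(build_game_assets(store)[0])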
 
Laa-Yosh said:
This method is more intuitive, has far fewer technical issues for the artist to deal with, and can manage very, very high complexity for them almost completely automatically. So far better results with less work and headache.
On workflows where level design is largely static/doesn't iterate much (or isn't gameplay-affecting), so you can go wild with the art.
This thing looks counter-productive to me if that isn't the case though (and game productions are far less conducive to the former than the latter, in my experience).
 
They only start after playtesting has been completed on a primitive version of the level; it's even mentioned in the document. With today's production values, if you have to rework a level done the traditional way it'll be almost as expensive.

They also do have a few mechanisms in place to leave some room for editing (thanks to the hierarchical subdiv and integrated re-projection features), and the whole turnaround thing is highly automated so they lose fewer artist hours even if something has to be re-done.
But yeah it can still be a bitch if you have to throw out the entire level... or if you have a very short schedule.
 
Laa-Yosh said:
With today's production values, if you have to rework a level done the traditional way it'll be almost as expensive.
Not if you're working in a modular fashion - and with today's production values and deadlines it's virtually impossible to wait until design iterations have completed before you start arting a level. AAA quality in games is by and large the result of iterations, not one-shot deals.
It does vary with genre of course - e.g. open-world adventuring is not nearly as sensitive to level design issues as action shooters.

Of course these are early iterations of such a workflow - I don't think it's inherently impossible to make something that uses sculpting AND allows people to work modularly to minimize the waste.
 
Yeah... to understand my enthusiasm, know that I'm the guy responsible here for finding new and better ways to model all the stuff for CG cinematics. This thing is a solution to some of the most severe problems that we've had to deal with for a long time now and I'm very excited to get something like this for our painters.

And yeah, it certainly isn't something for the 2-year production schedules of the next COD episode, or the 1-year schedules of an Assassin game, but it's perfect for the kind of game Lionhead's making.
 
One other question I have, going off of my fuzzy memory of that presentation, is about the custom compression they implemented. Do a lot of games do this? I would have thought implementing a custom compression scheme for texture information would have been very costly compared to using the built-in format support in the hardware. It seems like they picked a good algorithm, judging from the example pics they showed. Does virtual texturing go hand-in-hand with some type of custom compression format?
 
And virtual texturing needs more effective compression because the megatextures are just too big for DVDs, both in terms of disc space and in reading the actual data. Most games only have a few gigabytes' worth of textures, so it's usually enough to just use the fastest compression available, but a single 32Kx32K megatexture with 4 channels (color, normal, spec, occlusion for example) requires about 15GB in raw form.
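
That raw figure is easy to sanity check with back-of-the-envelope maths; the exact bytes per texel depend on how the four maps are packed (the layouts below are my guesses, not Lionhead's), but you land in the same ballpark either way.
Code:
texels = 32768 * 32768           # one 32K x 32K page, roughly 1.07 billion texels
packed = 4 + 4 + 1 + 1           # e.g. RGBA colour + normal, 1-byte spec + occlusion
print(texels * packed / 1e9)     # ~10.7 GB
print(texels * 4 * 4 / 1e9)      # ~17.2 GB if all four maps are stored as RGBA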

When the tiles are unpacked into memory they're converted to DXT formats though, because that works better with the hardware and still conserves runtime memory. This is also a significant processing overhead, uncompressing and recompressing data constantly.
 
This is also a significant processing overhead, uncompressing and recompressing data constantly.
What recompression?

Decompression isn't necessarily slow either; almost all of the compression gains do not come from using algorithms which use more instructions per decoded pixel per se... but simply from the fact that it's multi-rate.
 
You don't want to compress the DXT-compressed texture tiles, right? You'd have artifacts stacking on top of each other and all kinds of nastiness. Also, DXTC's image format isn't a good fit for Lionhead's custom algorithm anyway, as I understand it.

You need to start from the original image data and use your custom compression to save it to disc. You decompress that when you load it, but then you'd have an uncompressed texture sitting in memory and hogging bandwidth to the GPU. So you perform the decompression and a DXTC compression at the same time. id's paper on their method goes deeper into this, as far as I remember.
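
To make that "decompress then recompress" step concrete, here's a minimal toy range-fit of one raw 4x4 block into DXT1. Real transcoders (including whatever id's paper describes) are far more careful about quality and speed; this only shows the shape of the per-block work, and the function names are mine.
Code:
import numpy as np

def to_565(c):
    r, g, b = int(c[0]) >> 3, int(c[1]) >> 2, int(c[2]) >> 3
    return (r << 11) | (g << 5) | b

def from_565(v):
    return np.array([(v >> 11) << 3, ((v >> 5) & 63) << 2, (v & 31) << 3], float)

def encode_dxt1_block(block):                              # block: 4x4x3 uint8 pixels
    pixels = block.reshape(-1, 3).astype(float)
    lo, hi = pixels.min(axis=0), pixels.max(axis=0)        # crude "range fit" endpoints
    c0, c1 = to_565(hi), to_565(lo)
    if c0 < c1:                                            # keep the 4-colour mode ordering
        c0, c1 = c1, c0
    e0, e1 = from_565(c0), from_565(c1)
    palette = np.stack([e0, e1, (2 * e0 + e1) / 3, (e0 + 2 * e1) / 3])
    indices = ((pixels[:, None, :] - palette[None]) ** 2).sum(-1).argmin(axis=1)
    bits = 0
    for i, idx in enumerate(indices):                      # 2 bits per texel
        bits |= int(idx) << (2 * i)
    # 8 bytes per block: two 565 endpoints plus 32 index bits (48 raw bytes -> 8)
    return c0.to_bytes(2, "little") + c1.to_bytes(2, "little") + bits.to_bytes(4, "little")

print(len(encode_dxt1_block(np.full((4, 4, 3), 128, np.uint8))))   # -> 8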
 