Carmack demos new iD engine at WWDC keynote

MrWibble,

I was thinking that a 3d texture (very sparsely populated, e.g. only has data for grid cells intersected by level geometry) would be one way to avoid the mapping problem. So, is there a way to efficiently approximate that using standard 2d textures?

What if you look at all geometry using an orthographic projection onto the xy, xz, and yz planes - consider that the initial mapping onto 3 different megatextures. They're not immediately usable, since overhangs and such will result in triangles overlapping each other in these projections, so: for each of the 3 textures, translate sections of the projected geometry around until no section overlaps another (I didn't think very hard about how to do this step, but it seems pretty doable). That gives you the megatextures, but you'd still need a way to figure out which blocks of them are visible from a given viewpoint, since the translations employed earlier mean you can't map a view pyramid into a simple polygon in texture space... Also, I'm not sure whether each of the textures would have equal weight when it comes to rendering - I guess you could use the surface normal of a triangle to determine which of the three textures to "prefer".
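The "prefer one of the three projections by normal" idea at the end could be sketched like this (a toy illustration, not anyone's actual scheme; the function name and plane labels are mine):

```python
# Sketch: pick which of the three planar "megatextures" (xy, xz, yz)
# a triangle should prefer, based on its surface normal.  The choice is
# simply the plane whose axis the normal is most aligned with.

def dominant_plane(normal):
    """Return the projection plane best suited to this surface normal."""
    nx, ny, nz = (abs(c) for c in normal)
    if nz >= nx and nz >= ny:
        return "xy"   # normal mostly along z -> project onto the xy plane
    if ny >= nx:
        return "xz"   # normal mostly along y -> project onto the xz plane
    return "yz"       # normal mostly along x -> project onto the yz plane

print(dominant_plane((0.1, 0.9, 0.2)))   # a mostly-upward-facing triangle
```

Triangles near 45 degrees would flip between planes, so a real version would presumably blend rather than hard-select.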
 
MrWibble,

I was thinking that a 3d texture (very sparsely populated, e.g. only has data for grid cells intersected by level geometry) would be one way to avoid the mapping problem. So, is there a way to efficiently approximate that using standard 2d textures?

Your planar projection idea doesn't seem to have a lot of mileage to me (depending on the model you could easily end up with rather a lot of overlaps to deal with, and not a whole lot of continuity in the resulting textures).

However a sparsely populated 3D texture can be efficiently encoded into a 2D texture using a hashing function.

What I'd suggest doing on a large level would be to slice it up into a grid and have a 3D texture per grid-cell; the cells could then be stored in a streamed texture, allowing higher-detail maps for the nearby geometry just like mega-texture does. So you use a massive texture to store individually hashed 3D textures.

This is partly the concept I'm toying with at the moment.
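The grid-of-cells part of this could be sketched very loosely as follows (a hypothetical layout of my own, not MrWibble's actual scheme; the hashing per cell is omitted):

```python
# Sketch: pack only the occupied cells of a sparse 3D grid into tile slots
# of a single large 2D texture, so nearby cells can be streamed in at
# higher detail while empty space costs nothing.

def pack_sparse_cells(occupied_cells, tiles_per_row):
    """Assign each occupied (x, y, z) grid cell a (u, v) tile slot."""
    table = {}
    for i, cell in enumerate(sorted(occupied_cells)):
        table[cell] = (i % tiles_per_row, i // tiles_per_row)
    return table

# Three occupied cells out of a notionally huge grid:
atlas = pack_sparse_cells({(0, 0, 0), (3, 1, 2), (7, 7, 7)}, tiles_per_row=2)
```

The indirection table itself is tiny, so it could live permanently in memory while the tile contents stream.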
 
If it were me, I'd start with dividing into a grid, and using a texture atlas per grid cell. Possibly unwrapping the genus 0 pieces to allow as much vertex reuse as possible. There is no reason that something that appears as a 32Kx32K texture need be stored as such.

Any sort of Atlas will waste texture space, but you'd be compressing them anyway, and runs of 0 tend to compress well.

A bigger question is how you deal with an artist making geometry changes once you have the map. Assuming you allocated space at consistent texel density, dragging vertices around would change that; I assume you would reallocate the texture coordinates and scale the existing texel data to fit?

Using a grid, you either have to actually split the geometry on the edges of the grid cell, or deal with the triangle spanning the edge case. I'd probably just pick a cell for any tri that spanned edges, and deal with the ragged edges, because my experience with splitting geometry on edges is that it significantly increases vertex/tri count.

You're still looking at one batch per cell though, but with large enough cells that would be manageable.
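The "just pick a cell for any tri that spans edges" option could be sketched as choosing by centroid (an illustrative choice of mine; any consistent rule would do, and the cell size is an assumed parameter):

```python
# Sketch: rather than split a triangle that crosses grid-cell borders,
# assign the whole triangle to the cell containing its centroid, and
# live with the ragged edges.

def cell_for_triangle(verts, cell_size):
    """Pick a single grid cell for a triangle from its centroid."""
    cx = sum(v[0] for v in verts) / 3.0
    cy = sum(v[1] for v in verts) / 3.0
    cz = sum(v[2] for v in verts) / 3.0
    return (int(cx // cell_size), int(cy // cell_size), int(cz // cell_size))

# A triangle straddling the x = 8 boundary still lands in exactly one cell:
print(cell_for_triangle([(0, 0, 0), (9, 0, 0), (9, 9, 0)], cell_size=8))
```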
 
A bigger question is how you deal with an artist making geometry changes once you have the map. Assuming you allocated space at consistent texel density, dragging vertices around would change that; I assume you would reallocate the texture coordinates and scale the existing texel data to fit?

Well one alternative is to store the vertex XYZ when the initial hash/compression is done, and use that as an STQ to index the texture. If an edit is minor, and doesn't change the topology, the co-ordinates don't have to change and the texture remains static. If topology is edited, you'd have to resample the texture.

Using a grid, you either have to actually split the geometry on the edges of the grid cell, or deal with the triangle spanning the edge case. I'd probably just pick a cell for any tri that spanned edges, and deal with the ragged edges, because my experience with splitting geometry on edges is that it significantly increases vertex/tri count.

Yeah, that seems like both a valid point and a valid solution. Overlapping the grid cells to avoid splits shouldn't be a significant problem.
 
MrWibble - some kind of hashing function had occurred to me, but I couldn't think of a nice way to handle the case where the initial hash bucket doesn't contain the texel you want. Also, it seems to me this requires hand coded filtering... Or am I misunderstanding what you are saying?

Regarding the projection thing I am inclined to believe you (not having any graphics implementation experience myself :) ).
 
MrWibble - some kind of hashing function had occurred to me, but I couldn't think of a nice way to handle the case where the initial hash bucket doesn't contain the texel you want. Also, it seems to me this requires hand coded filtering... Or am I misunderstanding what you are saying?

If you use a standard hash function then yes, you will have a problem here. The option is basically to use shader branching, and given that pixels are processed in parallel you'll get a worst-case hit on many pixels.

However if you use a perfect hash function, you'll get the texel you want in one operation with no conditionals.

So the trick is in generating a perfect hash, and not in complicating the lookup.

Regarding filtering, one option is to do the filtering in the shader with multiple texel lookups (urg), but the other is to encode the texels in overlapping blocks, such that you get a bit of bloat on the data but can use GPU native filtering.

There was a good paper at last year's Siggraph which covers most of this stuff (look up Hugues Hoppe's page and it'll be near the top).
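The perfect-hash lookup described above, one offset fetch plus one table fetch with no conditionals, can be illustrated with a toy construction (the hash functions and the brute-force offset search here are purely illustrative; the real Siggraph paper builds the offset table far more cleverly):

```python
# Toy perfect spatial hash: map sparse 3D points into a small table so
# that lookup is h0(p) plus a fetched offset, with no branching at all -
# exactly the property that makes it shader-friendly.

from itertools import product

def build_perfect_hash(points, m, r):
    """Brute-force search for per-slot offsets giving zero collisions."""
    h0 = lambda p: (p[0] + 3 * p[1] + 7 * p[2]) % m   # primary hash
    h1 = lambda p: (p[0] + 5 * p[1] + 11 * p[2]) % r  # offset-table index
    for offsets in product(range(m), repeat=r):
        slots = {(h0(p) + offsets[h1(p)]) % m for p in points}
        if len(slots) == len(points):                  # perfect: no clashes
            return offsets, h0, h1
    raise ValueError("no perfect hash for these parameters")

def lookup(p, offsets, h0, h1, m):
    """Branch-free lookup: one offset fetch, one table fetch."""
    return (h0(p) + offsets[h1(p)]) % m

points = [(0, 0, 0), (1, 2, 0), (2, 1, 3)]
offsets, h0, h1 = build_perfect_hash(points, m=4, r=2)
slots = [lookup(p, offsets, h0, h1, 4) for p in points]
assert len(set(slots)) == len(points)   # every texel got a unique slot
```

On a GPU the offset table and the data table would just be two small textures, and `lookup` becomes two dependent texture reads.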
 

I think it's a given that if you're working with a large texture, clipmaps are one of the current accepted solutions.

So it's possible that some of the underlying texture streaming is based on clipmaps, but that's a pretty old and well understood technology, and it doesn't solve (or even touch upon) many of the issues that *appear* to be dealt with in the tech Carmack is working on.

The real question to be answered first, is what does Carmack's new engine *do* - until we know that, it's hard to speculate as to how it does it. I've seen lots of people fixate on the stuff he's said about having a big texture on the terrain in Enemy Territory, but I think that's a sideline. For me, he seems more interested in removing the issues of mapping a texture onto a complex mesh, as solving that frees up a lot more interesting techniques (such as anything that needs per-texel precalc'd data) and gives the artists more flexibility.

And I'm guessing if Carmack's stuff was purely about streaming a texture in, he'd have called it Clipmaps in the first place. He didn't feel the need to rename BSP trees when he used those for Doom, nor any other tech he's implemented. So I doubt he'd start doing it now.

So the implication to me is that MT is distinct from clipmaps and is possibly more related to how it is mapped and edited. Clipmaps might be part of that, but I think they're a bit of a red herring here. It's kind of like saying he's probably using polygons, or shaders. Sure, he probably is, but it's not really the main feature.

As I said originally, if you're going to clipmap something, you need a nice ordered parameterisation to allow it to work - if your geometry is sampling from all over a map then you can't just hold a limited area in memory, and the job of working out which bits to stream in as you move becomes tricky.

So the parameterisation is key, and I like the idea that it might be a hashed 3D texture, because that would fit nicely into a streaming system like clipmaps (though to be honest, you don't need clipmaps at all, as you might as well have discrete textures once they're already in cubes and you don't sample across them).

It's all fairly pointless speculation of course, but even if Carmack is doing something completely different and hasn't solved these issues, I think there's mileage in playing with the ideas anyway.
 
It's nice to see a thread finally discussing the interesting bits about the possible implementations of ID Tech 5 - even if for a moment it looked like it was going to veer into 'it's just a clipmap' discussion.

The way the thing is mapped and edited certainly is the most important feature that everyone really wants to know about from the technical standpoint. Another important point that I haven't seen mentioned: the concept seems to lend itself well to having (more or less) one shader used throughout your game world, but what happens if you want multiple different shaders that each require different kinds of texture maps for their effects? I wonder if this is a restriction of ID Tech 5 to some degree, or if it has support for overcoming it (a few different ways come to mind). Also, is the compression scheme he has mentioned a best fit for the kinds of attributes you would like to store per pixel for shaders? The implications of streaming a megatexture onto a skinned character are another curiosity, although I assume it probably deals with a character as one streaming texture block for simplicity (but imagine if it could cope with something like a Colossus from Shadow of the Colossus).

From what he said about artists authoring the geometry in their favourite editing packages and then importing it into the game world, it seems to hint at a one-way process: once the geometry is final they can go in and paint the megatexture in the game editor, but if the geometry is modified they would probably have to repaint that area. Of course, support could be added for resampling the existing texture, but this could be very tricky depending on how the geometry was modified. My guess is it's a one-way process for geometry segments, although projection-painting information could be reapplied to modified geometry (I do something like that in a project I'm working on at the moment, though geometry modification there is more restricted), but in certain cases it just wouldn't make sense.

I've looked into the research on things such as hashed octree textures and the like, and it's certainly interesting (and quite possibly the future), but given its current performance characteristics I don't think it's a likely candidate here, since he mentioned the technology is scalable back to certain ranges of hardware. I think it is more likely a system that simply uses pages of conventionally UV-unwrapped geometry charts, or even something that uses individually packed triangle charts with some care taken at border cases, as you say. Of course I could be totally off here!

Getting towards a fully sculptable, paintable environment is certainly where we want to be, and I think coupling that with emerging technologies such as Pixelux's DMM is very interesting.

On a side note, I think Carmack should wait to divulge such information until his game ships - as much as I want to know the answers too! I think that was the 'commercial mistake' he made with the Doom 3 tech: it's not that other engines got ahead, it's more that he told them how to! I appreciate how open he is about the stuff he develops, and I think it does the game development world a lot of good, but that is not always the same as doing what is good for ID as a business.
 
On a side note, I think Carmack should wait to divulge such information until his game ships - as much as I want to know the answers too! I think that was the 'commercial mistake' he made with the Doom 3 tech: it's not that other engines got ahead, it's more that he told them how to! I appreciate how open he is about the stuff he develops, and I think it does the game development world a lot of good, but that is not always the same as doing what is good for ID as a business.

I think he should even wait until he has licensed ID Tech 5 to death before revealing the inner workings of the technology, or maybe even until the time he makes the engine open source...
Of course, being curious, I would like him to tell us about the technology as soon as possible ^^
 