If Rage had 4x the texel density

kyleb

Veteran
Obviously, if Rage had been made with 4x the texel density, surfaces up close like this wouldn't look so crappy, but more along the lines of what other games which use repeated textures, like BF3, do. Of course that would make the already large game folder size absurdly large, but would it require notably more VRAM or a notably more powerful system in any other regard to run well?
 
it would depend on how the virtual texturing was set up, but storage space requirements would jump up a lot. there are a few papers on virtual texturing online, with the most recent being AMD's partially resident textures. but i think that shader generated fine detail might be a better way to go (imo of course!).
 
Of course that would make the already large game folder size absurdly large, but would it require notably more VRAM or a notably more powerful system in any other regard to run well?
It wouldn't require anything other than more physical storage space. Virtual texturing has constant RAM usage (both system memory and graphics memory) independent of the amount of texture data. If the mapped (virtual) area is increased a lot, they would need to increase the size of their indirection texture (but changing a single 1k texture to 2k isn't a big deal). And in the future, we might want to change the indirection texture to something more memory efficient (for example a cuckoo hash is pretty easy to implement on the GPU).

Going from 20 GB -> 80 GB (4x texture data on disc/HDD) isn't practical with the current discs and current internet download speeds.
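
For what it's worth, here is a minimal CPU-side sketch of the indirection lookup being described (all names and sizes are invented, not id's or sebbbi's actual code): the page table grows with the virtual area, but the tile cache the renderer samples from stays a fixed size.

// Illustrative only. A real implementation does this translation in the
// pixel shader with a fetch from the indirection texture; the point is that
// VRAM holds a fixed-size tile cache plus this table, regardless of how much
// texture data sits on disc.
#include <algorithm>
#include <cstdint>
#include <vector>

struct PageEntry {
    uint16_t cacheX, cacheY; // tile slot in the physical cache texture
    uint8_t  mip;            // mip level the resident tile was loaded at
    bool     resident;
};

struct VirtualTexture {
    int pagesWide, pagesHigh;         // virtual size in 128x128 pages
    std::vector<PageEntry> pageTable; // the "indirection texture"

    // Translate a virtual UV into a normalized physical-cache UV.
    void translate(float u, float v, int cacheTilesWide,
                   float& outU, float& outV) const {
        int px = std::min(static_cast<int>(u * pagesWide), pagesWide - 1);
        int py = std::min(static_cast<int>(v * pagesHigh), pagesHigh - 1);
        const PageEntry& e = pageTable[py * pagesWide + px];
        // Fractional position inside the page, remapped into the cache tile.
        float fu = u * pagesWide - px;
        float fv = v * pagesHigh - py;
        outU = (e.cacheX + fu) / cacheTilesWide;
        outV = (e.cacheY + fv) / cacheTilesWide;
    }
};

Doubling the virtual resolution only doubles the page table dimensions (the 1k to 2k indirection texture change mentioned above); the cache itself doesn't grow.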
 
I'd rather have Rage have CONSTANT texel density to begin with; some textures look as low resolution as Half-Life textures. (Yes, HL, not HL².)
 
but i think that shader generated fine detail might be a better way to go (imo of course!).
It seems Carmack might have been thinking along the same lines:

We have a bicubic-upsample+detail texture option for the next PC patch that will help alleviate the blurry textures in Rage.

I was puzzled by the detail texture comment, as having the artists go back and draw a detail layer for everything would be an absurd amount of work, but your suggestion of shader generated detail seems like a more reasonable route. Granted, that tweet was months ago and there has been no patch, so I've lost hope of ever seeing what he was actually talking about, at least until the new Doom.

Virtual texturing has constant RAM usage (both system memory and graphics memory) independent of the amount of texture data.
Huh, I was thinking along the lines of my admittedly limited understanding of traditional texturing, figuring that mipmaps are generated from the megatexture for distant surfaces while full resolution sections of the megatexture are loaded in for nearby surfaces, and hence upping the texel density would result in a bit more VRAM usage. Is there any chance you could elaborate on where I went wrong, while keeping it simple enough for a guy who's done mapping, modeling, and a bit of game code, but no actual graphics programming?
 
Huh, I was thinking along the lines of my admittedly limited understanding of traditional texturing, figuring that mipmaps are generated from the megatexture for distant surfaces while full resolution sections of the megatexture are loaded in for nearby surfaces, and hence upping the texel density would result in a bit more VRAM usage. Is there any chance you could elaborate on where I went wrong, while keeping it simple enough for a guy who's done mapping, modeling, and a bit of game code, but no actual graphics programming?

We did an article on ETQW's MegaTexture where the constant texture memory load is explained. ETQW's MT is simpler than Rage's arbitrary geometry approach, but the concepts are similar.

sebbbi also wrote great posts about virtual texturing here.
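
As a rough back-of-the-envelope illustration of that constant memory load (the numbers below are made up, not id's), the resident tile cache has a fixed footprint no matter how much texture data exists on disc:

#include <cstdio>

int main() {
    const int tileSize      = 128;     // pixels per tile side (plus a border in practice)
    const int cacheTiles    = 64 * 64; // fixed number of resident tiles
    const int bytesPerTexel = 1;       // e.g. a DXT-compressed layer, ~0.5-1 byte
    const long long cacheBytes =
        static_cast<long long>(cacheTiles) * tileSize * tileSize * bytesPerTexel;
    // 64 MB here. Quadrupling the texel density on disc doesn't change this;
    // it only changes which tiles get streamed in and at what mip level.
    printf("resident cache: %lld MB\n", cacheBytes / (1024 * 1024));
    return 0;
}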
 
I'd rather have Rage have CONSTANT texel density to begin with; some textures look as low resolution as Half-Life textures. (Yes, HL, not HL².)

Do you mean the release assets or during production? I agree with the latter, but it wouldn't necessarily fix those textures if used in the release assets, because that would mean all surfaces would have lower texel density (to fit on DVD/whatever).

Now, with a VT system, artists could (and probably should) work at a texel-density-agnostic scale, because they no longer have to make decisions based on a texture budget. But when creating the release assets you'd need a tool, very similar to id's visibility analyser used to determine compression strength, to check which surfaces (or rather, their corresponding VT coordinates) could be saved at a lower texel density. This would allow you to spend more storage space on the really important surfaces while leaving the rest at a reasonable texel density. Even if, for argument's sake, you were deploying on download-only platforms and had no physical storage limit, you'd still want to keep sizes small for bandwidth and loading times' sake.
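
Purely as a hypothetical sketch of what such a tool's core decision could look like (nothing here reflects id's actual pipeline; the metrics and thresholds are invented):

#include <algorithm>
#include <vector>

struct TileStats {
    float minViewDistance; // closest observed distance to this tile's surface
    float screenCoverage;  // largest fraction of the screen it ever covered
};

// How many mip levels to drop when baking this tile to disc:
// 0 = full resolution, 1 = half, 2 = quarter, ...
int storageMipDrop(const TileStats& s) {
    if (s.minViewDistance < 2.0f || s.screenCoverage > 0.10f) return 0;
    if (s.minViewDistance < 8.0f) return 1;
    return 2;
}

std::vector<int> planStorage(const std::vector<TileStats>& stats) {
    std::vector<int> drops(stats.size());
    std::transform(stats.begin(), stats.end(), drops.begin(), storageMipDrop);
    return drops;
}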

Of course, for Rage, because they had to fit the single-player campaign onto two DVDs AND they decided to automate compression strength rather than texel density, there's not a whole lot they could have cut to improve texture resolution.

What's more disappointing is that they took a hit to texture resolution to support a more open world and by people's accounts the world seems empty and more like a huge corridor between missions anyway. :|
 
Thanks Richard. I'd read the ETQW article back in the day, but hadn't seen Sebbbi's post, and that really helped further my understanding of the subject.
 
it would depend on how the virtual texturing was set up, but storage space requirements would jump up a lot. there are a few papers on virtual texturing online, with the most recent being AMD's partially resident textures. but i think that shader generated fine detail might be a better way to go (imo of course!).

I can understand the sentiment, but as someone who's also been an artist I will hyperbolically say "I will stab you with the hatred of a thousand suns!"

Seriously though, that's not how artists work, and they'd hate you for it. RAGE is, at the right times, the most stunning-looking game ever made, partially because the artists were able to do whatever they wanted, so long as they accepted it being squashed down into ugly compression artifacts and low resolutions. Still, it worked.

But the tradeoffs with disc space are enormous. I rather like the idea of, as Bungie put it, sparse textures. Not "mega", no single gigantic texture atlas that you have to make all those tradeoffs with compression and disc space and what have you. But by using a similar scheme to stream the visible texture chunks, as with current virtual texturing, you'd still take a load of pressure and constraints off artists.
 
Thinking about it a bit more, it really could be a very flexible system. You'd stream in your tiles from whatever normal-ish texture atlases as needed, material blend and stamp on whatever the artists wanted, totally unique characters, as much totally unique texture stuff as you'd feel comfortable with versus disc space, really.

Of course this would blow up the cache size, as a single area might need multiple tiles. But on next-gen consoles that's not such a concern. 200 MB+ of cache would be perfectly acceptable for an incredibly high texel density and an incredible amount of texture variety. Same as used today but with much better quality, and you'd have plenty of room for other things.
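
As a quick sanity check of that 200 MB figure, under some assumed tile sizes and compression (128x128 tiles, ~3 bytes/texel across all layers; these numbers are guesses, not anyone's shipping values):

#include <cstdio>

int main() {
    const double budgetMB      = 200.0;
    const int    tileTexels    = 128 * 128;
    const double bytesPerTexel = 3.0;  // several compressed layers combined
    const double tileBytes     = tileTexels * bytesPerTexel;
    const int    tiles         = static_cast<int>(budgetMB * 1024 * 1024 / tileBytes);
    // Roughly 4200 resident tiles, i.e. about an 8k x 8k working set of unique
    // texels at once, which is plenty for a 1080p frame.
    printf("tiles resident in %.0f MB: %d\n", budgetMB, tiles);
    return 0;
}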

Of course there'd be no totally unique per-pixel pre-calculated lighting. But I'm sure high-res spherical harmonic or spherical needlet lightmaps could serve well enough for unique lighting.
 
I rather like the idea of, as Bungie put it, sparse textures. Not "mega", no single gigantic texture atlas that you have to make all those tradeoffs with compression and disc space and what have you....
I dislike storing uniquely mapped pixels to physical media as well. However, virtual texturing can be made really flexible. Mixing various techniques is easy. When the renderer needs a certain texture tile, the virtual texture system can generate the tile (by various means) instead of loading the baked data from HDD. For example, several tiles can point to identical baked pixel data on physical media but have a separate set of decals blended on top of the same base texture tile. You can also store material definition data (that is fed to an algorithm) in the tiles instead of storing pixel data (to get artist-controlled procedural texturing for areas that do not require real stored pixels).

Virtual texturing makes mixing different techniques very easy, since the rendering pipeline only sees the (128x128) pixel tiles from the cache; it doesn't matter how the cache tiles are filled (pixels from physical media, generated procedurally, decaled by various methods, etc.). To the renderer, the cache is just a simple standard texture (all decals, procedural stuff, etc. have been burned in). And this also makes virtual texturing really efficient (no need to blend decals over rendered geometry every frame, over and over again).
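
To make that concrete, here is an illustrative shape for tile sources (invented for this post, not sebbbi's engine API) where the cache never knows how a tile was produced:

#include <cstdint>
#include <memory>
#include <vector>

struct Tile {
    static const int kSize = 128;
    std::vector<uint8_t> pixels; // kSize * kSize * 4, already "burned in"
};

class TileSource {
public:
    virtual ~TileSource() = default;
    virtual Tile produce(int virtualX, int virtualY, int mip) = 0;
};

// Reads baked pixel data from the packed file on disc/HDD.
class BakedTileSource : public TileSource {
public:
    Tile produce(int x, int y, int mip) override { /* read + transcode */ return Tile{}; }
};

// Evaluates an artist-authored material definition instead of stored pixels.
class ProceduralTileSource : public TileSource {
public:
    Tile produce(int x, int y, int mip) override { /* run material function */ return Tile{}; }
};

// Wraps another source and composites decals over the produced tile, so
// decals are blended once at tile-fill time, not every frame.
class DecaledTileSource : public TileSource {
public:
    explicit DecaledTileSource(std::unique_ptr<TileSource> base) : base_(std::move(base)) {}
    Tile produce(int x, int y, int mip) override {
        Tile t = base_->produce(x, y, mip);
        // ... blend the decals that overlap this tile into t.pixels ...
        return t;
    }
private:
    std::unique_ptr<TileSource> base_;
};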

... But by using a similar scheme to stream the visible texture chunks, as with current virtual texturing, you'd still take a load of pressure and constraints off artists.
This was one of the main reasons why we started using virtual texturing in the first place. Our artists do not have to worry about meeting texture memory goals anymore. Fewer technical limits usually result in better graphics quality.
 
I meant that the *final* texel density shouldn't change too much between different elements of the world; the way it currently is in Rage hurts the game a lot.
You're looking at a splendid landscape, then at your feet lies a can with a texture so blurry you can count the texels, or at least the pixel blocks (S3TC compression blocks).

Olick talks about virtual texturing vs unique texturing here:
http://olickspulpit.blogspot.com/2011/02/high-resolution-textures-field-guide.html

Ideally games would stream all data; unfortunately, current APIs get in the way.

I thought about enhancing Total War with virtual texturing; however, the code base was too messy to make it practical within a one-year timeframe :(
 
I'm hoping that if Doom 4 continues with this megatexture approach, they make the jump to a vastly larger game size. 50 GB would be fine with me.

What's more disappointing is that they took a hit to texture resolution to support a more open world and by people's accounts the world seems empty and more like a huge corridor between missions anyway.
Overall, the game felt like a sort of superficial exploration of RPG and open world elements. It is very limited compared to say Fallout 3 or even Borderlands. Since this is a company which has never been one for complex gameplay, I thought it was interesting to see them push into a new area. I think it was all impressively polished. I'd call it a calculated reach into new territory. Although with how the game ended abruptly, maybe they still overreached.
 
What's needed in the API for better streaming?


Lazy devs!

The problems I found trying to do that with D3D11 are that you can't map a default buffer, you need a staging one, and you can't map and read from the same staging buffer, which means you need (worst case, most likely never met) as many staging buffers as default buffers.
(Since you have multi-threaded data streaming, you need to keep your buffers mapped for a while before you can use their data, and you want to "upload" as soon as possible.)
(Whatever happens you get an extra copy; you can't just read from disk straight into the destination area.)
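
A minimal sketch of that staging-buffer workaround (error handling, synchronization with the I/O thread, and pool management omitted; the structure and names are mine, only the D3D11 calls are real):

#include <d3d11.h>

struct StagingSlot {
    ID3D11Buffer*            staging  = nullptr;
    D3D11_MAPPED_SUBRESOURCE mapped   = {};
    bool                     inFlight = false; // I/O thread is writing into it
};

void CreateSlot(ID3D11Device* dev, UINT bytes, StagingSlot& slot) {
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = bytes;
    desc.Usage          = D3D11_USAGE_STAGING;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    dev->CreateBuffer(&desc, nullptr, &slot.staging);
}

// Main thread: map, then hand the pointer to the streaming thread.
void BeginFill(ID3D11DeviceContext* ctx, StagingSlot& slot) {
    ctx->Map(slot.staging, 0, D3D11_MAP_WRITE, 0, &slot.mapped);
    slot.inFlight = true; // the I/O thread reads from disk into slot.mapped.pData
}

// Main thread, once the I/O thread signals completion: unmap and copy into
// the GPU-local default buffer. This is the unavoidable extra copy; reading
// from disk straight into the destination isn't possible through this API.
void EndFill(ID3D11DeviceContext* ctx, StagingSlot& slot, ID3D11Buffer* defaultBuf) {
    ctx->Unmap(slot.staging, 0);
    ctx->CopyResource(defaultBuf, slot.staging);
    slot.inFlight = false;
}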

You always must go through the API for memory management; it can be justified for textures (due to their layout being GPU-specific), but for a raw buffer it's a bit lame...
[An alternative would be to explicitly call an API transcoding function.]
So you can't manage memory yourself, which greatly reduces the clever stuff you can do, including using memory as a giant cache with a cache algorithm.
In theory all the API cares about is the set of states for a draw command and the source/format of the data.
If you could just create headers/descriptors and modify their data pointers, that would be nice; having access to the command queue would allow copying/pasting parts of it for reuse.
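
Just to illustrate the wish, such a hypothetical descriptor might look something like this (nothing like it exists in D3D11; every name here is invented):

#include <cstddef>
#include <cstdint>

struct BufferDescriptor {
    void*    data;      // application-managed memory the GPU reads from
    size_t   sizeBytes;
    uint32_t format;    // how to interpret the data, e.g. a vertex layout id
};

// The application streams into its own giant cache and simply retargets the
// descriptor; no Map/Unmap, no staging copy.
inline void Retarget(BufferDescriptor& desc, void* newData, size_t newSize) {
    desc.data      = newData;
    desc.sizeBytes = newSize;
}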


After that, there's the option to control the MMU, specifically deciding the virtual-to-physical memory mapping. At that point all you have to do is divide memory into pages and manage the pages with a cache algorithm.
That's it; with that, you can stream any kind of data and always get rid of the most irrelevant pages. (So it's only as good as the cache algorithm you use.)
(I don't know how expensive switching pages would be. Also, you could have software page faults for resources; that'd be reactive instead of proactive loading... but much easier to handle.)
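
For illustration, the pages-plus-cache-algorithm idea boils down to something like a plain LRU over fixed-size pages. A real version would back this with GPU virtual-memory mapping (exactly the control the API doesn't expose); here eviction just hands back a reusable page index, and all names are my own:

#include <cstdint>
#include <list>
#include <unordered_map>
#include <vector>

class PageCache {
public:
    explicit PageCache(uint32_t pageCount) {
        for (uint32_t i = 0; i < pageCount; ++i) freePages_.push_back(i);
    }

    // Returns the physical page for a resource page key, evicting the least
    // recently used page if necessary. 'loaded' tells the caller whether the
    // data still has to be streamed in.
    uint32_t acquire(uint64_t key, bool& loaded) {
        auto it = map_.find(key);
        if (it != map_.end()) {               // hit: move to front
            lru_.splice(lru_.begin(), lru_, it->second);
            loaded = true;
            return it->second->physicalPage;
        }
        loaded = false;
        uint32_t page;
        if (!freePages_.empty()) {
            page = freePages_.back();
            freePages_.pop_back();
        } else {                              // miss: evict the LRU entry
            page = lru_.back().physicalPage;
            map_.erase(lru_.back().key);
            lru_.pop_back();
        }
        lru_.push_front({key, page});
        map_[key] = lru_.begin();
        return page;
    }

private:
    struct Entry { uint64_t key; uint32_t physicalPage; };
    std::list<Entry> lru_;
    std::unordered_map<uint64_t, std::list<Entry>::iterator> map_;
    std::vector<uint32_t> freePages_;
};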



We aren't lazy, we just have a limited amount of time, and most often code bases inflate dangerously because many people don't seem to understand that more code is bad.
(longer compile times, way more reading before modifying...)


-*EDIT*-
Forgot to mention that I wouldn't mind GPUs getting an ISA, just like CPUs, with a set and known common texture memory layout, command stream and things like that.
(And I don't care if you need a massive decoder somewhere, x86 does it :p)
 
I've never been one who has understood the benefits of megatexture - still don't,
and looking at the link Rodéric posted it seems to be severely limited,
e.g.:

"Use as high of a resolution texture as your heart desires."
If that's true, why does Rage have some really low quality textures?

"Infinitely complex lighting algorithms."
Again, if that's true, why does it say "Lighting is mostly static" later on?

"Bi-linear filtering. (Tri-linear is possible, but requires more work.)"
No AF?

" Can bake in decals to allow an infinite number of them"
Can I have realtime decals?

PS: If id suddenly realised someone had secretly slipped a porn pic into the megatexture and they needed to patch it out, would the patch have to include the whole megatexture?
 
PS: If id suddenly realised someone had secretly slipped a porn pic into the megatexture and they needed to patch it out, would the patch have to include the whole megatexture?

Why in the world would they have to do such a silly thing when they could replace just a chunk of it? You can replace parts of a JPEG, PNG, BMP, etc., so why would you think you cannot replace just a part of the megatexture?
 