Doom3 -- High and Ultra settings -- compression

Reverend

How important is the difference between the two settings to you? We're talking about normal map compression artifacting, of course.

Given that this is Doom3, where the environment is mostly black at default gamma and brightness settings, how important do you think normals compression is, given the currently available compression techniques, and also considering the more brightly lit environments of other games?

Additionally, how important are compression techniques to the end results of various shader effects (let's just average the expectations of a huge bunch of developers wrt shaders, quality, pickiness)?
 
Reverend said:
Additionally, how important are compression techniques to the end results of various shader effects (let's just average the expectations of a huge bunch of developers wrt shaders, quality, pickiness)?
Of course, given that many of you aren't developers or programmers but are the sort who seem to analyze things to death once you know what certain settings mean, I suppose I should add: if you didn't know the difference between the "High" and "Ultra" settings, would you be able to tell whether normals compression is on or off? And if you do know the difference between the two settings wrt normals compression, would you even care when casually (i.e. not analyzing) playing the game?
 
I think you should look at other angles as well (especially for the 'in other games' part of the question)...
Compression is useful for increasing performance at a slight quality loss (often not disturbing, barely even noticeable, especially during gameplay), or for allowing more detailed textures to be used within the same memory footprint. In the case of Doom3 the only uncompressed option has the same detail as the best compressed option, so you can't compare quality at the same footprint. Uncompressed textures always take more memory here, and therefore always decrease performance. So Doom3 may not be representative of other games.

Also, Doom3 seems to brute-force compression on everything. Other games may choose to use compression only on surfaces where there is little or no chance of artifacts.

To me, the Ultra setting is not an option, because I don't have enough memory for it. I suppose everyone would use the Ultra setting if it meant little or no performance loss, even if there is little or no improvement in quality. But in that sense normal map compression is very important: it allows me to play the game at a high level of detail without having a card with 256 MB or more. When I play the game, I don't have time to notice any artifacts anyway (it's not like they're very obvious; the low-quality shading due to the limited material support in Doom3 is far more obvious anyway, everything is shiny plastic). I think the Ultra setting is mostly for purists who have the hardware that can pull it off too.
In short, I would say that in general extra detail with a few artifacts (at the same memory footprint and performance) is better than no extra detail at all.
 
I think the "2" setting for Normal Compression is sufficient quality for Doom 3 at the resolutions I've used (up to 1280/960).

I think it unfortunate that higher detail on some surfaces was not offered, because I found some surfaces lacking in detail, and I wonder if it was content creation limitations or a specific decision to meet performance goals. Given what I understand as being the methodology and focus for normal mapping content creation for Doom 3, it seems like it might be the latter, which I find unfortunate.

Basically, I think compression would offer a more important benefit as a tool to allow a higher detail level target, especially for automated normal map creation tools. I think the current role, where it is used as a scaling tool to trade some minor image quality loss for usability on slower hardware, is an unnecessary limitation (one that can still be served without dictating that it is the only use), which dilutes the impact it could have on improving the game experience on the latest hardware.
 
The other question about leaving normal compression off is whether the quality gain (if any) is worth the speed hit of the extra bandwidth.

It's often forgotten that it's not only 4x+ more memory but also 4x+ more memory bandwidth...

To push this slightly OT...
There is still a lot that can be done for normal compression if the artifacts really are noticeable. Developers should be exploring these options before giving up and just wasting RAM.

With cheap ALU ops (now and even more in the future), I've always thought about trying principal component analysis to pick an optimal space before compression, to minimize quantization errors (the stuff I keep talking about for vertex data in ShaderX, though I didn't know it had the fancy name of PCA back then...). The only runtime cost would be 3 dp3s (and 3 constants for the matrix) when you look up the normal out of the map.
I suppose you can see DXT5 normal compression as a fixed version, where you promote one principal axis into the 'high' precision alpha channel and then decompress with a swizzle...

But using a PCA matrix you might get the alpha channel back to increase spatial resolution in some cunning way, hmmm this needs some more thought...
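
For what it's worth, here's a rough offline sketch of the idea in numpy (the function names, the per-channel 8-bit quantization and the renormalize step are just my own assumptions for illustration, not anything that actually ships):

```python
import numpy as np

def pca_basis(normals):
    """normals: (N, 3) array of unit normals sampled from a normal map.
    Returns the mean and the principal axes (rows of an orthonormal 3x3)."""
    mean = normals.mean(axis=0)
    # Right singular vectors of the centred data = principal axes (PCA).
    _, _, vt = np.linalg.svd(normals - mean, full_matrices=False)
    return mean, vt

def compress(normals, mean, basis):
    """Rotate into PCA space, then quantize each channel to 8 bits over
    that channel's actual range (a tighter range means less error)."""
    coeffs = (normals - mean) @ basis.T
    lo, hi = coeffs.min(axis=0), coeffs.max(axis=0)
    q = np.round((coeffs - lo) / (hi - lo) * 255.0).astype(np.uint8)
    return q, lo, hi

def decompress(q, lo, hi, mean, basis):
    """Per-texel decode: scale/bias, a 3x3 transform, renormalize."""
    coeffs = q.astype(np.float32) / 255.0 * (hi - lo) + lo
    n = coeffs @ basis + mean
    return n / np.linalg.norm(n, axis=1, keepdims=True)
```

The coeffs @ basis step is the part that would become the 3 dp3s in the shader; the scale/bias folds into the matrix rows and a constant offset.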
 
Reverend said:
How important is the difference between the two settings to you? We're talking about normal map compression artifacting, of course.

Not sure what you mean since normal maps are not compressed in either High or Ultra quality settings.

Given that this is Doom3, where the environment is mostly black at default gamma and brightness settings, how important do you think normals compression is, given the currently available compression techniques, and also considering the more brightly lit environments of other games?

Compression is always desired (assuming current bandwidth requirements) and I wish id had implemented 3Dc as a flag, probably allowing Ultra quality on the X800s to be a much more viable choice.
 
Well, Ultra doesn't compress anything from what Carmack said pre-release, and High only compresses the textures but not the effects. But to be honest I personally can't tell the difference like 99% of the time. I ran a few levels in both modes and the only difference I saw was in the fps category, not really so much in the visuals. Occasionally, I saw a magazine or desk edge that looked sharper.
 
BloodyCape said:
High only compresses the textures but not the effects.

other way around :)

edit:

I just tried medium versus high.... it's a noticeable difference to me. Medium just looks weird.
 
Personally, I'm pretty disturbed with color map compression already, so normal map compression is truly unacceptable :)
 
Laa-Yosh said:
Personally, I'm pretty disturbed with color map compression already
I'm highly surprised by this. DXTC with mipmapped trilinear filtering is effectively undetectable on over 90% of textures.

I find there to be no difference at all between High and Ultra quality during gameplay in Doom3 (and I was deliberately looking for compression artifacts).

Note that if you are allowed to approach the texture so closely that it goes into high magnification, eliminating the trilinear filter, then it can be clearly visible. This isn't typically an issue, as if you approach something so close that it undersamples then you will obviously see blurring anyway - therefore applications already need to be designed to prevent this occurring.
 
Scali and DeanoC hit the nail on the head, I think. Going from compressed to higher-resolution compressed probably has more potential than going to uncompressed, since the latter gives only a small improvement compared to the increase in space/bandwidth requirements.

Like so many people I have played around with High Quality, Ultra and DoomConfig.cfg tweaking but I could not perceive a difference in-game. In particular the pixelly appearance of textures on most geometry remained - most things look perfect if you are at least 4 metres away but if you get closer then features that are not horizontal or vertical appear pixellated. One example that is easy to check is the floor tiles on the lower level of the hangar (where shot_demo001 was taken) - the tiles a bit away from the edge. The grates close to the edge are perfect for tuning aniso filtering quality, BTW - I get a bit of shimmer/aliasing even at 16AF and it only goes away with image_lodbias 0.3 or thereabouts but I am playing with image_lodbias 1 because of the performance penalty for non-integer bias.

Setting image_usePrecompressedTextures to 1 did not result in any perceivable image quality degradation but it increased performance by 10% (56.4 fps to 61.2 fps in the timedemo). Thorough comparison of screenshots might turn up a few differences, but in-game one setting appears just as good as the other, except for the smoother framerates of course. I guess that underlines Scali's and DeanoC's point - compressed textures can give the same apparent quality at higher performance, or better quality if the space/bandwidth are used for higher-res compressed textures.

Be that as it may, Doom³ is perhaps not a very good example for judging the merits of compression for normal maps, textures etc. For one thing, many of the things that are normal-mapped are moving around quickly and trying to kill you. For another, other compromises like low-poly models (heads) and very low-res textures are probably more apparent/distracting in-game than the artifacts from normal compression. Last but not least, many people will have to make a compromise between playable framerates and image quality in this game, since smooth framerates are just as much a part of the perceived quality when playing a game as what might be visible in a still image.

Me personally, I hate it when framerate drops below 55 fps or so because then I see multiple copies of things when I move or turn, which I find very distracting especially during fast combat. Same for the shimmer caused by texture/geometry aliasing, although geometry aliasing is less of a factor for me in Doom³ than texture aliasing. That is why I am playing at 1280x960 2AA 16AF with image_lodbias 1, even though the bias causes slight blurring even at 16AF (if you know where to look). Some people may find this horrible - especially those that recommend setting image_lodbias to -1 or -2 :devilish: - but for me it is important that the scene remains smooth and noise-free even if I turn/move. I mean, I have done things like playing Unreal in software mode and Quake2 on a Voodoo², but the fact that I was willing to suffer in the past does not mean that I am willing to suffer now.

I guess that was just a rather long-winded way of saying that artifacts which are static with regard to the geometry - like compression artifacts - may be much less noticeable for some people than artifacts resulting from things like poor filtering, because the latter are very apparent when moving/turning while the former are only visible upon close inspection. And even then one might have trouble telling whether things were designed that way or are the result of artifacting, unless one compares screenshots.
 
Absolutely. Compared to a 32-bit texture, you can have an extra mipmap level on a DXT1 texture and still use half as much memory.

In the majority of cases it also reduces the load on the VPU (at lower mip levels there is less fetch, plus other efficiency gains).

Not using DXT1 is crazy. It's almost always a no-lose situation (for visible maps).
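
A quick back-of-the-envelope check of that claim (just a sketch, assuming RGBA8 at 4 bytes/texel, DXT1 at 8 bytes per 4x4 block = 0.5 bytes/texel, and a full mip chain adding roughly a third on top of the base level):

```python
def tex_bytes(width, height, bytes_per_texel):
    # Full mip chain is about 4/3 of the base level.
    return width * height * bytes_per_texel * 4 / 3

rgba8_512 = tex_bytes(512, 512, 4.0)      # 32-bit texture
dxt1_1024 = tex_bytes(1024, 1024, 0.5)    # DXT1, one extra mip level on top
print(dxt1_1024 / rgba8_512)              # ~0.5: double the resolution, half the memory
```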
 
One "problem" of DXTC is whether to use precompressed textures. I think it's a natural way to precompress at least color textures with DXT1. However, some developers seem to like use JPEG to compress their color textures, since JPEG can compress "better."

IMHO since many people have DVD-ROM and large HDDs, games should be distributed with precompressed textures. Higher resolution, of course. :)
 
Interesting. DXT1 could probably be compressed somewhat further for disk - there are usually reasonable correlations in the 16-bit colours, so a delta/entropy coding method might show some significant further improvements over just a zip file.
 
As long as JPG is used with high quality... At low quality, JPG can have some very visible artifacts... And recompressing JPG artifacts with DXT could be a problem. In some cases, DXT can't handle JPG-artifacts very well at all, so you could get much worse quality than you'd get from DXT compression alone.
 
Dio said:
Interesting. DXT1 could probably be compressed somewhat further for disk - there's usually reasonable correlations in the 16-bit colours, so a delta/entropy coding method might show some significant further improvements over just a zip file.
You'd also possibly want to re-arrange the data so that the colours (after your delta encoding) and the 2bpp indices were separated as that'd probably help something like the bog-standard Zip.

You'd probably also get a small additional compression by doing prediction/differences on the indices as well.
 
Yep, that's what I was thinking. Split 'em up, delta the colour values to signed ints, maybe delta the indices as well (that's the kind of thing I'd want empirical data on though).

A (very hand-waving) guess is that it would typically come in between 40% and 70% of the size of a DXT1 file - I'd expect the colours to compress pretty well and the indices pretty equally badly.
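
Something along these lines, perhaps (a rough numpy sketch; the function name and the exact packing are illustrative guesses, and I haven't measured what the delta step is actually worth):

```python
import numpy as np, zlib

def repack_dxt1_for_disk(dxt1_bytes):
    """Rearrange a raw DXT1 stream for better general-purpose compression:
    delta-code the 16-bit endpoint colours into one stream, put the 2bpp
    index words in another, then let zlib do the rest."""
    blocks = np.frombuffer(dxt1_bytes, dtype=np.uint16).reshape(-1, 4)
    colours = blocks[:, :2]   # two 5:6:5 endpoints per 4x4 block
    indices = blocks[:, 2:]   # 32 bits of 2bpp selectors per block
    # Difference between consecutive blocks; wraps harmlessly in uint16.
    colour_deltas = np.diff(colours, axis=0, prepend=colours[:1])
    return zlib.compress(colour_deltas.tobytes() + indices.tobytes(), 9)
```

Reinterleaving at load time is just the inverse reshuffle, so it shouldn't cost anything noticeable.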
 
I wrote a quick test app that simply split the colors and indices, so all the color data comes first in the file and the index data at the end. Results:

512x512 DXT1

Normal file: 131,200 bytes
Rar compressed: 95,878 bytes (73%)
Zip compressed: 102,250 bytes (78%)

Split file: 131,200 bytes
Rar compressed: 89,737 bytes (68%)
Zip compressed: 92,512 bytes (71%)

Not a huge gain, but clearly worth the ten lines of code needed if you've got a lot of data. :)
 
Humus said:
Sorry for the OT but is this an actual word?!

Anyway, sorry for the confusion regarding normals compression and the High and Ultra settings (as Mordenkainen pointed out, there isn't any normals compression in either mode).
 