NVIDIA G80: Image Quality Analysis

Discussion in 'Beyond3D Articles' started by Geo, Dec 12, 2006.

  1. AlexV

    AlexV Heteroscedasticitate
    Moderator Veteran

    Joined:
    Mar 15, 2005
    Messages:
    2,528
    Likes Received:
    107
I think you need to read up some more. There's quite a bit that's wrong in your post, starting with its premise. ;)
     
  2. Xmas

    Xmas Porous
    Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,299
    Likes Received:
    137
    Location:
    On the path to wisdom
    With this attitude you're probably in the wrong place here.
     
  3. ballsweat

    Banned

    Joined:
    Jul 8, 2007
    Messages:
    71
    Likes Received:
    0
    I'm terribly sorry about that.

Anyway, so others will understand me a little better: I only care about IQ, compatibility, and temps, so I don't like transistors wasted (or "spent", as most would put it) on compression, because I'd rather have more image quality features than performance-enhancing features.

Another thing I didn't like NVIDIA spending transistors on with the G80 is any of the old AF modes, especially when their new HQ AF isn't absolutely perfect.

Beyond3D pointed that out in its G80 IQ analysis.

I am very unhappy with NVIDIA and ATI for forcing mediocre image quality and drivers on us.

I realise everyone says how much better the G80 looks than anything before it, but my point is that it could look a lot better, and it's very frustrating to see these extremely high benchmark scores when they can't offer better IQ, even though the speed is there to do so.

I think it would be a lie to deny that the majority of benchmark numbers are so high that they prove there is more than enough performance to quit using so much compression, or at least to offer the choice.

Video games are my life, and it literally makes me want to die sometimes, because I don't have the option to play games with image quality (and audio) that's acceptable to me. It doesn't matter how much money I have; it could never buy me acceptable image quality, or at least that's how it currently looks.
     
  4. AlBran

    AlBran Ferro-Fibrous
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,734
    Likes Received:
    5,825
    Location:
    ಠ_ಠ
Whoa boy... completely forgot about this thread. Belated thanks for the reply. :)
     
  5. AlexV

    AlexV Heteroscedasticitate
    Moderator Veteran

    Joined:
    Mar 15, 2005
    Messages:
    2,528
    Likes Received:
    107
Trolling is cool, ain't it? Sigh... the final part about video games being your life gave you away. Good effort, so-so execution.
     
  6. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,436
    Likes Received:
    264
You can't have good image quality without performance. Compression improves performance, which allows for higher-resolution textures and frame buffers. So in a sense, compression is an image quality feature.
     
  7. ballsweat

    Banned

    Joined:
    Jul 8, 2007
    Messages:
    71
    Likes Received:
    0
That's definitely true; you know more about this than I ever will.

I would personally prefer smaller uncompressed textures to large compressed textures, but that's just me, I guess. Another solution could be to load video cards with so much memory that no compromises would have to be made, but that would cost too much for the masses.

The problem with DXTC is that it's fairly lossy (some compression is lossless), and the difference between S3TC and no S3TC in Quake 3 has made compression a turn-off for me ever since, 7 or 8 years ago.

By the way, I haven't tried it on my 8800 GTX, but is S3TC forced in Quake 3, or will the console command still disable it?
     
  8. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
First of all, it might have been a good idea to mention you were thinking of *texture* compression only when you said 'compression'. Framebuffer and Z compression are both obviously lossless, and they are features centered around maximizing performance per transistor and per dollar.
I'd be interested to know if you believe this is the case even with the newer, more advanced DXTC compression algorithms, such as the iterative cluster one in Squish: http://sjbrown.co.uk/?article=dxt

Note that I'm obviously not thinking of that image specifically (since it's not very representative of real textures), and that there isn't even any illustration of the iterative results there. However, it is my experience that those most annoyed with S3TC/DXTC are actually comparing low-quality compression algorithms against uncompressed textures, rather than high-quality algorithms.

DXTC is very far from perfect, of course. Since you seem to be so against it, I'm curious whether that is because of bad experiences (such as your Q3 one) or because the IQ actually feels that bad to your eyes and/or brain. And in the latter case, what disturbs you the most about it...
     
  9. Blazkowicz

    Legend Veteran

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
There was a bug on NVIDIA cards (NV1x, and maybe NV2x to a lesser extent) that made the Quake 3 sky look awful. I didn't have that problem on the Voodoo5, and I didn't notice much of an IQ difference there, if any. The extra fps were appreciable, so it's a no-brainer: no matter the card or the Quake 3-powered game, I still leave compression turned on. (r_ext_compressed_textures controls it in the console or .cfg file.)

Mafia is a game where I disable compression, though. (I got some weird artifacts in places and, though it might be psychological, I get the impression it gives me more banding.)
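For reference, the cvar mentioned above can be set from the console or an autoexec.cfg in Quake 3 engine games; something like the following (the exact invocation below is from memory, so treat it as a sketch):

```
// autoexec.cfg — toggle S3TC/DXTC texture compression
seta r_ext_compressed_textures "0"   // 0 = uncompressed, 1 = compressed
vid_restart                          // reload the renderer so it takes effect
```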
     
  10. ballsweat

    Banned

    Joined:
    Jul 8, 2007
    Messages:
    71
    Likes Received:
    0
Thanks. =)

Anyway, to answer #48's question: it can hurt my eyes. If there's banding, blotchiness, or tiling, then it hurts both my brain and my eyes.

But as you say, with a higher-quality TC algorithm the artifacts are gone. For example, if you use DXT1 in Tomb Raider: AOD, the sky in the first level looks horrible, but if you switch to DXT5 it's no longer blotchy.

I personally wish the option for either higher-quality compression (or none) was always there.

Also, #49, relevant to #48: the Voodoo5 used 3dfx's proprietary FXT1, and as #48 says, the format makes a huge difference, so the Voodoo5 was clean while the GeForce 256 and GeForce 2 GTS had the problem.

So anyway, I'm just saying I want the option to always be available to disable compression or choose a better format, so that everyone can be satisfied.
     
  11. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
Well, that's understandable. I'm just questioning whether these problems are actually there in practice, when the programmers and the hardware are doing things right.

And that's on a GF4 or later, right? The DXT1 bug existed up to and including NV20, and was only fixed in NV25 and NV17 and everything that came after. I guess since it's TR:AOD, it probably is, but I'd like to be certain about this.

Also, I can't really find a good explanation for that, unless they used 1-bit alpha everywhere and a really awful algorithm. Personally, I am completely opposed to using DXT1 for 1-bit alpha; I'd rather use DXT5 even if it costs twice the bandwidth and memory footprint, heh.

High-quality compression is a preprocess. It's really quite slow, even on a G80 with NVIDIA's new CUDA-based texture tools. So it doesn't make sense to make it an "option": it's no more expensive at runtime than really shoddy DXT compression. It just takes more time to create, so you need to ship it with the game, obviously.

As for having an option for uncompressed textures: even if you ship compressed textures with the game, that might still make sense, because the lower mipmaps are not lossless either. By regenerating them from the data in the top mipmap, you can get higher quality in all of those, though obviously not in the top mipmap itself. That IS very expensive in terms of memory footprint and bandwidth, though, so it doesn't make much sense unless you play the game several years after release on a card with significantly more bandwidth and memory.
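The mipmap point above can be sketched numerically. This is a minimal, assumed illustration (a plain 2x2 box filter on a greyscale image, with every name invented for the example), not how any particular game or driver generates its mip chain:

```python
# Regenerating a full mip chain from the top-level image, instead of using
# separately stored (and separately lossy-compressed) mip data.

def next_mip(img):
    """2x2 box-filter downsample of a greyscale image (list of rows)."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def mip_chain(top):
    """Build every mip level down to 1x1 from the top level alone."""
    chain = [top]
    while len(chain[-1]) > 1:
        chain.append(next_mip(chain[-1]))
    return chain

top = [[(x * 16 + y) % 256 for x in range(8)] for y in range(8)]
print([len(m) for m in mip_chain(top)])  # → [8, 4, 2, 1]
```

The point is simply that each level here is derived from the best available data, whereas shipping precompressed lower mips bakes a second round of loss into every level.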

NV10 and NV20 suffered from doing DXT1 filtering at 16-bit instead of 32-bit precision internally. It's not that 3dfx had an incredibly better compression algorithm (hint: it wasn't really better at all, IMO); it's that NVIDIA botched their DXT1 implementation completely until NV17/NV25.

As I said above, better DXT compression is not more expensive. It just takes more preprocessing time, and even that isn't so bad nowadays. So if I understood your points correctly: next time you see that kind of problem, complain to the game developers, not to the IHVs.
     
  12. ballsweat

    Banned

    Joined:
    Jul 8, 2007
    Messages:
    71
    Likes Received:
    0
I definitely agree with you on everything, especially about DXT5.

I had been complaining about the IHVs because I thought they could just force DXT5 if the game requested DXT1 or DXT3, but I'll definitely take your word for it that it's the game's fault.
     
  13. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
Well, if the game provides DXT1 data with 1-bit alpha and 3 possible colours per block, they can't magically turn that into DXT5 with 4 possible colours per block. The data just isn't there, unless you derived the lower mipmaps from the top mipmap rather than taking the app's provided data for those lower mipmaps. That works, but I'm not sure how much it really helps, or even matters.

Back in the NV10/NV20 days, there definitely were options in tools such as RivaTuner to force DXT5 instead of DXT1/DXT3, IIRC (or did it just pretend DXT1 wasn't supported? Hmm, I can't really remember). That's a completely different problem from simply having lower quality because 1-bit alpha steals one possible opaque colour per block, though. I'm not sure how that affected perceived IQ compared to cards with decent DXT1 support. For opaque textures it was presumably identical; for transparent textures, I really don't know at all.

I thought a bit more about that sky problem in TR:AOD you mentioned. My guess is that the game ships a preprocessed DXT5 version of the sky, done with a good (and slow) compressor. But to save video memory, it gives users the option to use 1-bit alpha DXT1 instead, based on the DXT5 data (which has already been through lossy compression once!) and recompressed at runtime with a fast, low-quality compressor.

The end result? The same as if you took a raw image, compressed it to JPEG with a good compressor at >90% quality, then opened the compressed file and resaved it with a bad compressor at <70% quality. That's my guess, and it might well be wrong, but at least it makes some sense, heh. And it is, obviously, a pretty bad design choice from an IQ perspective. I'm not sure they really had a choice, but labelling it as a "really low-end" option might have helped you and others a fair bit, heh.
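The recompression analogy above can be mimicked with a toy quantiser. This is a hedged sketch (plain scalar quantisation standing in for real DXT/JPEG compressors, all function names invented for the example) of how a second, coarser lossy pass over already-lossy data can end up worse than the coarse pass applied directly to the original:

```python
# Toy illustration (not real DXT/JPEG): re-quantising already-quantised
# data compounds the error, which is the effect described above.

def quantise(values, step):
    """Round each value to the nearest multiple of `step` (lossy)."""
    return [round(v / step) * step for v in values]

def max_error(a, b):
    """Worst-case absolute difference between two equal-length sequences."""
    return max(abs(x - y) for x, y in zip(a, b))

original = [float(v) for v in range(0, 256, 7)]  # a smooth ramp of values

good = quantise(original, 2)          # "high-quality compressor": fine steps
bad_direct = quantise(original, 16)   # "low-quality compressor" on raw data
bad_chained = quantise(good, 16)      # low-quality pass on top of the good pass

print(max_error(original, good))        # small
print(max_error(original, bad_direct))  # larger
print(max_error(original, bad_chained)) # can exceed even the direct coarse pass
```

On this ramp the chained result is strictly worse than compressing the raw data coarsely once, which is the JPEG-resave scenario in miniature.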
     
  14. ballsweat

    Banned

    Joined:
    Jul 8, 2007
    Messages:
    71
    Likes Received:
    0
RivaTuner now lets you turn off DXT completely, which is great, but the problem is that I believe it comes at the cost of losing the SLI-AA modes.
     
  15. andypski

    Regular

    Joined:
    May 20, 2002
    Messages:
    584
    Likes Received:
    28
    Location:
    Santa Clara
Unless you play funny tricks with the encoding, DXT1 and DXT5 are capable of equivalent quality of colour encoding. In fact, as originally specified, DXT1 is capable of higher-quality pure-colour encoding than DXT5, because a block can take either 4 or 3 colours, with interpolants at 1/3 and 2/3 or at 1/2, whereas in DXT5 only the 4-colour mode is permitted; this can make a difference in some cases.

If DXT5 encodes colour data at higher quality than DXT1, it is probably because there is something incorrect about the DXT1 decoding.

If a developer needed more than 1 bit of alpha, I very much doubt they would use DXT1; that would be a very elementary mistake to make.

I know that DXT1 decoding on some earlier cards sometimes looked as if the interpolation was only done at 16 bits of precision. A naive interpretation of the encoding specification might suggest that this is sufficient, since the endpoints are only specified at 16 bits of accuracy (RGB 5.6.5). In fact, it can easily be seen that a good encoder can extract significantly more than 16 bits of colour precision from the format (equivalent to 6.7.6 or even better for simple colour gradients), but to do this the interpolated colours must be decoded at better than 5.6.5 precision. The result of only using 16-bit interpolated colours shows up as blocky, poor-quality compression, even on relatively simple images.
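The precision argument above can be illustrated numerically. The following is a rough sketch, not a real DXT1 decoder (block layout, index bits, and the 3-colour mode are all omitted); it just compares full-precision interpolation of two RGB 5.6.5 endpoints against naively truncating each interpolated colour back to 5.6.5:

```python
# Interpolating DXT1-style 5.6.5 endpoints: full 8-bit-per-channel results
# vs. the 'naive' path that keeps intermediates at only 16-bit precision.

def expand565(r5, g6, b5):
    """Expand RGB 5.6.5 channel values to 8 bits by bit replication."""
    return ((r5 << 3) | (r5 >> 2),
            (g6 << 2) | (g6 >> 4),
            (b5 << 3) | (b5 >> 2))

def lerp(c0, c1, t):
    """Per-channel linear interpolation, rounded to the nearest integer."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

def truncate565(rgb):
    """Drop each 8-bit channel back to 5.6.5 and re-expand (the naive path)."""
    r, g, b = rgb
    return expand565(r >> 3, g >> 2, b >> 3)

c0 = expand565(10, 20, 10)   # two nearby endpoint colours, as stored in a block
c1 = expand565(11, 22, 11)

for t in (1 / 3, 2 / 3):     # the DXT1 4-colour-mode interpolants
    full = lerp(c0, c1, t)
    naive = truncate565(lerp(c0, c1, t))
    print(t, full, naive)    # the naive path collapses the in-between shades
```

For close endpoints like these, the full-precision interpolants land between the 5.6.5 representable colours, which is exactly the extra precision a good encoder exploits and a 16-bit decoder throws away.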
     
  16. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
Ah, interesting point; I forgot that DXT5 didn't allow for that trick... :???:

So I guess that makes DXT1 superior for mostly opaque textures, and DXT5 only superior in the corner cases where a high number of blocks contain a little transparent data. Thus, in practice, the reverse of my previous argument regarding DXT1 vs DXT5 is more correct...
     
  17. andypski

    Regular

    Joined:
    May 20, 2002
    Messages:
    584
    Likes Received:
    28
    Location:
    Santa Clara
Correct: in theory DXT1 gives better colour-only compression than DXT5, but it is really only appropriate for transparency if the alpha is punch-through; if there is any requirement for smooth alpha edges, then DXT1 is pretty much out of the picture.

In practice, now that we have very flexible shaders, there are tricks you can play at the point when you originally compress the data to get improved quality out of DXT5 for colour-only textures: basically, you store some of the colour information in the alpha channel, and use shader instructions to do the final steps of the decoding (after the normal DXT decode process).

Of course, this doubles the storage requirements over DXT1, and although it improves overall quality, it's doubtful that you typically get better quality per bit of storage than DXT1 does.
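One published flavour of the trick described above stores luma in the DXT5 alpha block and chroma in the colour block (YCoCg-DXT5). The snippet below sketches only the colour-transform half of that idea, using an integer-reversible YCoCg variant; the DXT encode/decode itself and the shader-side scale factors are omitted, and the function names are invented for the example:

```python
# The alpha block of DXT5 is compressed independently of (and with higher
# effective precision than) the colour block, so moving the perceptually
# important luma (Y) into alpha can raise overall quality for colour textures.

def rgb_to_ycocg(r, g, b):
    """Forward transform, done once at compression time (lifting form)."""
    co = r - b
    tmp = b + co // 2
    cg = g - tmp
    y = tmp + cg // 2
    return y, co, cg

def ycocg_to_rgb(y, co, cg):
    """Inverse transform: the 'final decoding step' run in the pixel shader."""
    tmp = y - cg // 2
    g = cg + tmp
    b = tmp - co // 2
    r = b + co
    return r, g, b

# Y would go in the DXT5 alpha channel, Co/Cg in the colour channels.
for rgb in [(0, 0, 0), (255, 255, 255), (10, 200, 30)]:
    assert ycocg_to_rgb(*rgb_to_ycocg(*rgb)) == rgb  # exactly reversible
```

Because the lifting-style integer transform round-trips exactly, all of the loss comes from the DXT quantisation itself, now spread across channels in a way that favours luma.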
     