The lack of high-res S3TC textures in games

Althornin said:
A couple of points:
No need to develop two separate sets of textures - you can just dither the high-res set down for regular users.

What? You can't just 'dither down' compressed textures for regular users.

You have two choices when dealing with texture compression: use the hardware to compress textures on the fly (and potentially risk some artifacting), or precompress and ship an additional set of textures to be used by S3TC-capable hardware.
 
Althornin said:
I happen to think that it IS nVidia's fault for ALLOWING developer laziness.
Features need to be pushed. nVidia could have done this (as S3 did) - but they failed to. This is why it is their fault.
That's ridiculous.

S3 pushed S3TC because it was the signature feature of their card at the time. It was a marketing decision. nVIDIA, 3dfx, ATI, etc. etc. have all thrown their weight behind various features at various times, in each case because they hoped to increase sales of their products.

nVIDIA deserves some criticism for having DXT1 work in a suboptimal fashion, but DXT3 works well enough. If developers had any interest whatsoever in spending the extra time and money to develop large, detailed textures they would have done so already.

Blaming a GPU manufacturer is as irrational as blaming... you. The lack of DXT compressed textures in new games is all your fault, Althornin. Why haven't you written to game developers demanding this feature? Why haven't you started your own game company, developing products with extensive use of high res textures?
 
Johnny Rotten said:
Althornin said:
A couple of points:
No need to develop two separate sets of textures - you can just dither the high-res set down for regular users.

What? You can't just 'dither down' compressed textures for regular users.

You have two choices when dealing with texture compression: use the hardware to compress textures on the fly (and potentially risk some artifacting), or precompress and ship an additional set of textures to be used by S3TC-capable hardware.

Or a third option: ship only S3TC-compressed textures and decode them in software to RGBA prior to uploading them on cards that don't support S3TC. No need for additional texture sets. This is what I do in all my reasonably recent demos.
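For illustration, the decode step that option implies can run entirely on the CPU. Below is a minimal Python sketch of decoding a single 8-byte DXT1 block to RGBA (function names are my own; a real loader would run this over every 4x4 block of every mip level):

```python
import struct

def rgb565_to_rgb888(c):
    """Expand a packed 5:6:5 colour to 8-bit-per-channel RGB."""
    r = (c >> 11) & 0x1F
    g = (c >> 5) & 0x3F
    b = c & 0x1F
    # Replicate high bits into the low bits to fill the 8-bit range.
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

def decode_dxt1_block(block):
    """Decode one 8-byte DXT1 block into 16 RGBA texels (4x4, row-major)."""
    c0_raw, c1_raw, indices = struct.unpack('<HHI', block)
    c0 = rgb565_to_rgb888(c0_raw)
    c1 = rgb565_to_rgb888(c1_raw)
    if c0_raw > c1_raw:  # four-colour mode: two interpolated colours
        palette = [c0 + (255,), c1 + (255,),
                   tuple((2*a + b) // 3 for a, b in zip(c0, c1)) + (255,),
                   tuple((a + 2*b) // 3 for a, b in zip(c0, c1)) + (255,)]
    else:                # three-colour mode + 1-bit transparency
        palette = [c0 + (255,), c1 + (255,),
                   tuple((a + b) // 2 for a, b in zip(c0, c1)) + (255,),
                   (0, 0, 0, 0)]
    # Each texel is selected by a 2-bit index, packed LSB-first.
    return [palette[(indices >> (2*i)) & 0x3] for i in range(16)]
```

DXT3/DXT5 blocks add an explicit or interpolated alpha half, but the colour half decodes the same way.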
 
I blame God because he allowed NVidia to take the lead from 3Dfx (who also weren't pushing higher quality pixels all the time), who in turn took the lead from some other companies (who also weren't pushing higher quality pixels all the time), etc. etc. ... but in the end I blame Althornin for pulling me into a moronic argument in the first place.
 
In our engine this is a completely transparent feature to artists and users alike.

The artist creates textures in uncompressed .tga files.
When he checks it in the editor/game, the engine automatically compresses the texture on loading from disk if DXTC is available.

Periodically we run a utility over our dataset to compress the textures and store them alongside the .tga files with the same timestamp.
When the texture is loaded next time, the engine loads the compressed file if it has the same timestamp, optionally decompressing it if the card doesn't support it.

When a release is made only compressed files are included.

There is an option in the program to reduce texture resolution, which is useful when someone's card doesn't support DXTC. In this case the program simply discards the highest mip-level. But this is not mandatory. You don't get worse quality automatically just because your card doesn't support it.
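As a rough illustration of what discarding the top mip level buys (assuming 32-bit texels and a full mip chain), the sketch below totals the chain with and without the highest level; dropping it saves roughly 75% of the texture memory:

```python
def mip_chain_bytes(width, height, bytes_per_texel):
    """Total bytes for a texture plus its full mip chain down to 1x1."""
    total = 0
    w, h = width, height
    while True:
        total += w * h * bytes_per_texel
        if w == 1 and h == 1:
            break
        w, h = max(w // 2, 1), max(h // 2, 1)
    return total

# Full chain for a 256x256 32-bit texture vs. the same chain with the
# top (256x256) level discarded, i.e. starting from 128x128:
full = mip_chain_bytes(256, 256, 4)      # 349,524 bytes
reduced = mip_chain_bytes(128, 128, 4)   # 87,380 bytes - roughly a 75% saving
```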

The feature is transparent because:
- no one has to do anything special because it exists
- there is no way/need to enable/disable it
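The timestamp-based decision described above can be sketched as a single function (all names here are made up for illustration):

```python
def plan_texture_load(tga_mtime, dxt_mtime, hw_supports_dxtc):
    """Decide how to load one texture, mirroring the pipeline described above.

    Returns one of:
      'load_dxt'            - upload the precompressed file as-is
      'load_dxt_decompress' - load the precompressed file, decompress in software
      'load_tga_compress'   - load the .tga and compress it on the fly
      'load_tga'            - load the .tga uncompressed
    """
    # A cached compressed copy is valid only if its timestamp matches the .tga.
    if dxt_mtime is not None and dxt_mtime == tga_mtime:
        return 'load_dxt' if hw_supports_dxtc else 'load_dxt_decompress'
    # No valid compressed copy: fall back to the uncompressed source art.
    return 'load_tga_compress' if hw_supports_dxtc else 'load_tga'
```

In a release build only the compressed files ship, so the first branch always wins.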
 
I think the major reason S3TC isn't more widely supported is the same as with many other features: developer incompetence. I think lots of developers hardly know anything beyond simple texturing. When I was at my interview with nVidia, many of them expressed concern about developers not using their features; they hear all kinds of bad excuses for not aiming higher than TNT level. The real problem is that developers simply lack basic knowledge of today's 3d features. nVidia creates all kinds of papers, tutorials and tools, and still developers can't seem to get up to GF1 level. While they would have preferred developers to implement support for these features themselves, they are now working on various ways to do it for them - almost to the level of sitting by their side telling them each line of code to write, or providing large chunks of code to more or less cut'n'paste into their projects.
 
Humus said:
Or a third option: ship only S3TC-compressed textures and decode them in software to RGBA prior to uploading them on cards that don't support S3TC. No need for additional texture sets. This is what I do in all my reasonably recent demos.

That must be expensive though (either in terms of texture storage space or performance) otherwise there'd be little point in having onboard hardware support for S3TC.
 
Even before DX7/8, there were only a couple of titles with really good 3d engines. I don't think hardware is making 3d concepts or coding any simpler (if anything it's conceptually more difficult now that Gouraud shading is being replaced with more realistic shading), so why is this a surprise?

Just looking at what a game must include these days is kind of scary. You need the graphics engine, physics engine, sound engine, pathfinding/AI, animation... And all of these "different" modules must interact. (oh yes, and network code). All of these are now much more complicated than they used to be.

Then, once you have all that, you can start on the actual game portion of the code. IMO this is why engine licensing is a good thing...

my 2c,
Serge
 
Johnny Rotten said:
Humus said:
Or a third option: ship only S3TC-compressed textures and decode them in software to RGBA prior to uploading them on cards that don't support S3TC. No need for additional texture sets. This is what I do in all my reasonably recent demos.

That must be expensive though (either in terms of texture storage space or performance) otherwise there'd be little point in having onboard hardware support for S3TC.

If you mean expensive as in requiring more time to load, no, it's not.
Actually, loading an S3TC texture and decompressing it is faster than loading an uncompressed texture!
The extra CPU work is more than paid back by the much smaller hard disk access time!

As for your onboard S3TC support comment - I guess you have a basic misconception about that feature. The texture is NOT decompressed when it's uploaded to the card (neither in hardware nor in software); it is read in compressed form while texturing and decompressed inside the GPU upon usage.
So one of the main benefits is a lower memory bandwidth requirement.
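That bandwidth saving is easy to quantify: DXT1 packs each 4x4 texel block into 8 bytes (4 bits per texel) and DXT3/DXT5 into 16 bytes, versus 32 bits per texel for uncompressed RGBA. A quick Python check of the footprints (helper name is mine):

```python
def dxt_footprint(width, height, block_bytes):
    """Bytes needed for a DXT-compressed mip level (4x4 texel blocks)."""
    blocks_x = (width + 3) // 4
    blocks_y = (height + 3) // 4
    return blocks_x * blocks_y * block_bytes

w, h = 512, 512
uncompressed = w * h * 4           # 32-bit RGBA: 1,048,576 bytes
dxt1 = dxt_footprint(w, h, 8)      # 8 bytes per 4x4 block: 131,072 bytes
dxt3 = dxt_footprint(w, h, 16)     # 16 bytes per 4x4 block: 262,144 bytes
print(uncompressed // dxt1, uncompressed // dxt3)  # prints: 8 4
```

So the chip reads an eighth (DXT1) or a quarter (DXT3/5) of the data per texture fetch, on top of fitting far more texture into local memory.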
 
So then what advantage does a chip that is able to decode compressed textures in hardware enjoy over a chip without (and which must rely on software decompression) if performance issues are nonexistent?
 
That must be expensive though (either in terms of texture storage space or performance) otherwise there'd be little point in having onboard hardware support for S3TC.

If it's done at load time it wouldn't be that costly - remember, all Q3-engined games do this in reverse when TC is enabled: they take the uncompressed textures and compress them in software at level-load time.
 
DaveBaumann said:
If it's done at load time it wouldn't be that costly - remember, all Q3-engined games do this in reverse when TC is enabled: they take the uncompressed textures and compress them in software at level-load time.

It was my understanding that by enabling compressed textures in Quake3, a card that supports S3TC would compress the level textures when the level loads. You're saying that the card isn't compressing the textures but your CPU is? You can enable texture compression in Quake3 (for example) on a V3/TNT-class card?
 
psurge said:
Even before DX7/8, there were only a couple of titles with really good 3d engines. I don't think hardware is making 3d concepts or coding any simpler (if anything it's conceptually more difficult now that Gouraud shading is being replaced with more realistic shading), so why is this a surprise?

Well, yes and no.

The concepts might get more complicated, but the coding doesn't necessarily.

I had my time coding software 3D rasterizers, texture mappers, rotation and the like, optimizing by counting CPU cycles and thinking hard about where I could - with a trick - save one (or half of one) cycle per pixel.

Then came hardware rasterization, and suddenly life got a lot easier; you had to care about a lot less.

And by the time I got into game programming, T&L came by and again it took a whole lot away from things I had to do.

So I don't think the amount of code increases that greatly; it's just that today's engines have their focus in different places than the engines of the past.
 
Johnny Rotten said:
It was my understanding that by enabling compressed textures in Quake3, a card that supports S3TC would compress the level textures when the level loads. You're saying that the card isn't compressing the textures but your CPU is? You can enable texture compression in Quake3 (for example) on a V3/TNT-class card?

The compression part isn't what the hardware gives you. The hardware gives you texturing directly from the compressed texture.
 
Johnny Rotten said:
So then what advantage does a chip that is able to decode compressed textures in hardware enjoy over a chip without (and which must rely on software decompression) if performance issues are nonexistent?

Ok. We seem to have misunderstood each other.

The original conversation was about whether you need two versions of the textures.

Of course there is a performance difference between using a compressed texture and an uncompressed one.

But there is no point in storing the uncompressed one on disk; you can load the compressed one and decompress it in memory. It's the same as if you had uncompressed textures in the first place, but you need less storage space.
It just won't give you any speed advantage on cards that don't support compression.
 
It was my understanding that by enabling compressed textures in Quake3, a card that supports S3TC would compress the level textures when the level loads. You're saying that the card isn't compressing the textures but your CPU is? You can enable texture compression in Quake3 (for example) on a V3/TNT-class card?

AFAIK no card has hardware to compress textures – the intention, again AFAIK, was always to supply the textures in one of three ways:

1.) Compressed textures supplied on the game CD and installed.

2.) Textures compressed as the game is installed and an S3TC-capable board is detected

3.) Uncompressed textures are installed but are compressed in software as the level loads so that compressed textures are stored in the card's local memory.

All 3 options have their plus points and negative points; option 3's biggest drawback is the increased level load time, although that isn't significant, and generally speaking only one of the 5 compression schemes will be utilised under this method (unless a map of texture to compression scheme is given, which is unlikely since it makes ongoing support / patches etc. a little more tiresome).

If you enabled texture compression on a V3/TNT then what would there be to decompress the textures at render time? Nothing.

In fact you couldn’t enable it on a V3/TNT because they would neither have the OpenGL extension that OpenGL games look for, nor the relevant support for the DX compressed texture formats.
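The OpenGL side of that check is a lookup in the string returned by glGetString(GL_EXTENSIONS). A minimal Python sketch (the string is passed in here so the check is self-contained; GL_S3_s3tc is S3's original vendor extension, GL_EXT_texture_compression_s3tc the generic one):

```python
def supports_s3tc(gl_extensions_string):
    """Check an OpenGL extension string for S3TC texture compression support.

    In a real engine the string would come from glGetString(GL_EXTENSIONS);
    here it is a plain argument so the function can run anywhere.
    """
    extensions = set(gl_extensions_string.split())
    # Either the generic EXT extension or S3's vendor extension will do.
    return ('GL_EXT_texture_compression_s3tc' in extensions or
            'GL_S3_s3tc' in extensions)
```

A GeForce-class driver would advertise one of those names; a V3/TNT driver would not, so the game never offers the option.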
 
Johnny Rotten said:
It was my understanding that by enabling compressed textures in Quake3, a card that supports S3TC would compress the level textures when the level loads. You're saying that the card isn't compressing the textures but your CPU is? You can enable texture compression in Quake3 (for example) on a V3/TNT-class card?

No, it's software that compresses the texture and uploads that to the video card.
What the video card adds is that it can use a compressed texture - that is, it can decompress it inside the GPU on the fly while texturing.

You cannot enable TC on a V3/TNT because (while the software could compress the textures all the same) the graphics chip cannot use the compressed texture. (Just like the V3 cannot use - say - 32-bit textures.)
 
Alright, I'm on the same page again, but then Humus' original post confuses me.

"Or a third option: ship only S3TC-compressed textures and decode them in software to RGBA prior to uploading them on cards that don't support S3TC."

If this has no performance impact, or negligible performance impact, and you can just decode the compressed textures through software, it seems to me that it would just make sense to use compressed textures as your standard, base foundation. I guess I'm saying, 'Where's the drawback?'
 
I guess I'm saying, 'Where's the drawback?'

The drawback there is that when the game is actually playing and the frames are being rendered, they are using uncompressed textures (as that is what will be stored in the card's local memory) and hence there will be no bandwidth benefits; the rendering will be slower and the texture space required will be higher (giving rise to the possibility of the textures spilling into system RAM and further impacting performance by needing to pass over the AGP bus).
 