Texture Resolution

mkillio

Is there any visual benefit to having a texture at a higher resolution than the screen resolution, i.e. having an 800x600 texture in a Wii game that only displays at 640x480? If not, is it safe to say that all textures will be at 640x480 on Wii?
 
From my understanding, it shouldn't matter much. Regardless of the output resolution, higher-resolution textures will look better. If you take a PC game and put it on the lowest display resolution possible, there is still a clear difference between the lowest and highest texture settings. Textures are wrapped around a 3D model, so it's not as if you'll ever see the entire texture covering the screen. So any console could benefit from higher-resolution textures, especially the Wii. The only problem would be memory.


Of course, I could be wrong. That's just my understanding.
Someone feel free to correct me. :LOL:
 
mkillio said:
Is there any visual benefit to having a texture at a higher resolution than the screen resolution, i.e. having an 800x600 texture in a Wii game that only displays at 640x480? If not, is it safe to say that all textures will be at 640x480 on Wii?
Absolutely. It can be a hard thing to appreciate until you actually see it, but there are several reasons why higher-resolution textures look better.

One is that if the textures are high enough in resolution, they won't look blurry when you plant your face into a wall. This was the effect that really stood out to me when I first checked out the DXT1-compressed textures in the original UT.

Another benefit is that you don't have to repeat high-resolution textures as much if you design your content around them. This is the impetus behind the Megatexture tech that Carmack has been working on, which allows for effective texture resolutions as high as 32k x 32k, the idea being that with Megatexture, you won't have to repeat textures at all. With a setup like this, for instance, artists could paint blood on the walls and it would actually be part of the texture, not some effect painted on that looks the same on 20 other walls in the game.
 
If you arrange the MIP levels so that when the camera is pressed up against a wall or other object you see the lowest MIP level (the base, highest-resolution one), then screen resolution is the highest desirable texture resolution. Any higher and you will begin to see pixel fighting (shimmer).
But for some reason hardly any games are made like that. :???:
In truth, we really should be talking about the texel-to-pixel ratio. If the textures are LODed right, then even pretty low-res textures should be able to get roughly 1:1 texel-to-pixel coverage (rough sketch at the end of this post).
The real issue is tiling.
I wonder why "light" procedural techniques like splatting and Wang tiles aren't more popular?
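
To put a rough number on the texel-to-pixel point, here's a back-of-the-envelope sketch in C. The function, the pinhole-projection approximation, and the example numbers are all mine, not any real engine's LOD math:

```c
/* Rough sketch: how many texels land on one screen pixel for a wall
 * facing the camera, under a simplified pinhole projection. A ratio
 * near 1.0 means ~1:1 texel:pixel; below 1.0 the texture is being
 * magnified and starts to look blurry. */
#include <math.h>
#include <stdio.h>

double texels_per_pixel(double tex_size,     /* texture width in texels   */
                        double wall_size,    /* wall width in world units */
                        double distance,     /* camera-to-wall distance   */
                        double screen_width, /* framebuffer width, pixels */
                        double fov)          /* horizontal FOV, radians   */
{
    /* Pixels the wall covers on screen under a pinhole projection. */
    double pixels = wall_size * screen_width / (2.0 * distance * tan(fov / 2.0));
    return tex_size / pixels;
}

int main(void)
{
    /* A 2m wall with a 256-texel texture, face pressed up at 0.5m, 640 wide: */
    double r = texels_per_pixel(256.0, 2.0, 0.5, 640.0, 1.2);
    /* log2 of the ratio approximates the MIP level the hardware picks. */
    printf("texels per pixel: %.2f, approx MIP: %.2f\n", r, fmax(0.0, log2(r)));
    return 0;
}
```

With the camera that close the ratio drops well below 1:1, which is exactly the face-in-the-wall magnification blur described above.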
 
Squeak said:
The real issue is tiling.
Right. So, particularly on consoles, artists have had to strike a compromise between tiling and textures that were too blurry. Large textures alleviate much of the need for that compromise (as you can see in many recent PC games... though they still have some way to go).
 
I don't think megatextures, clip-mapping, virtual texturing, or whatever you want to call it, are going to be the be-all, end-all solution to blurry textures.
First of all, 32k^2 is not nearly enough for a large landscape or room. The ~500MB of HD space it takes up is not insignificant either, when you consider that you're probably going to need twenty or so for a whole game (see the quick math below).
Virtual textures will certainly be an integral part of the texturing process on next-gen consoles and PCs, but I doubt it will be the end of tiling (in some form or another).
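
For reference, here's where a ~500MB figure comes from if you assume the texture sits on disk as DXT1; the choice of formats is my assumption, not something id has confirmed:

```c
/* Back-of-the-envelope storage math for a 32k x 32k megatexture. */
#include <stdio.h>

int main(void)
{
    long long texels = 32768LL * 32768LL;   /* 32k x 32k             */
    long long raw    = texels * 4;          /* RGBA8: 4 bytes/texel  */
    long long dxt1   = texels / 2;          /* DXT1: 0.5 byte/texel  */
    printf("raw:  %lld MB\n", raw  >> 20);  /* 4096 MB uncompressed  */
    printf("dxt1: %lld MB\n", dxt1 >> 20);  /* 512 MB, the ~500MB figure */
    return 0;
}
```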
 
I bet you could do quite a bit better than 500MB with some decent JPEG compression. It's not as if the texture needs to be loaded into video memory that often, after all.
 
Shaders and bump/normal/displacement mapping accentuate JPEG compression artifacts. So I don't think the industry should move in that direction.
 
Laa-Yosh said:
Shaders and bump/normal/displacement mapping accentuate JPEG compression artifacts. So I don't think the industry should move in that direction.
You can do lossless compression on normal maps, too.
 
What if, instead of encoding a normal as a point in Cartesian coordinates in 32 bits, you simply encoded it as a pair of angular deflections? Unless I misunderstand how they're encoded, most of the space is just wasted. I'm basically assuming that the RGB data corresponds to a point on the unit sphere in the upper half-space, i.e. [255, 0, 0] is (1,0,0), but [255, 255, 255] isn't a usable value since it's a point on the cube circumscribing the sphere. Is this correct? Why not simply store it as a pair of angles, each effectively varying between 0 and pi? I guess it depends on how fast a GPU can compute sines and cosines from angles.

I'm kinda talking out of my ass here. Is that approach even feasible, or is it already being done?
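
Something like this is what I have in mind; a minimal sketch, assuming a tangent-space normal quantized to one byte per angle (the struct name and quantization scheme are made up for illustration):

```c
/* Minimal sketch of the two-angle idea: encode a unit normal as
 * spherical angles (theta measured from +Z, phi around Z), one byte
 * each. Everything here is illustrative, not any shipped format. */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979f

typedef struct { unsigned char theta, phi; } AngleNormal;

AngleNormal encode(float x, float y, float z)
{
    AngleNormal n;
    float theta = acosf(z);       /* 0..pi (0..pi/2 for tangent space) */
    float phi   = atan2f(y, x);   /* -pi..pi */
    n.theta = (unsigned char)(theta / PI * 255.0f + 0.5f);
    n.phi   = (unsigned char)((phi + PI) / (2.0f * PI) * 255.0f + 0.5f);
    return n;
}

void decode(AngleNormal n, float *x, float *y, float *z)
{
    float theta = n.theta / 255.0f * PI;
    float phi   = n.phi / 255.0f * 2.0f * PI - PI;
    *x = sinf(theta) * cosf(phi);
    *y = sinf(theta) * sinf(phi);
    *z = cosf(theta);
}

int main(void)
{
    float x, y, z;
    decode(encode(0.3f, 0.4f, 0.866f), &x, &y, &z);
    printf("%.3f %.3f %.3f\n", x, y, z);  /* close to the input normal */
    return 0;
}
```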
 
One problem with that approach is that the distribution of points is very uneven: there are a lot of points near the poles, and not nearly as many around the equator. The way it's usually done is that just two of the three Cartesian components are stored, and the third is calculated.
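
A minimal sketch of that reconstruction, assuming tangent-space normals where z is always non-negative (which is why two components suffice):

```c
/* Store just x and y, rebuild z on the fly. */
#include <math.h>

float reconstruct_z(float x, float y)
{
    /* The clamp guards against quantization pushing x*x + y*y past 1. */
    float zz = 1.0f - x * x - y * y;
    return sqrtf(zz > 0.0f ? zz : 0.0f);
}
```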
 
fearsomepirate said:
What if instead of encoding a normal map as a point on the Cartesian plane in 32-bits, you simply encoded it as a pair of angular deflections?
That's actually pretty close to the way the Dreamcast encoded its normal maps. I wish, however, that I'd specified Cartesian coordinates, but it was too late to change the design by the time I'd realised my error.
 