Optimizing texture memory usage further?

Ether_Snake

Newcomer
Hello

I'm an artist, not a coder, and usually every time I think of something clever, some programmer has already beaten me to it anyway ;)

But how about this? I know back in the PS1 days (and probably still today) a lot of grayscale textures were used, which were then tinted either per-vertex or by some other means.

[image: textureoptom0.jpg]


But would it be possible to load only a certain range of colors from a texture and dynamically load a greater range of the texture's colors as we get closer to it (not like with mipmaps, since those add more data in memory)? Basically, load just the information needed to get the equivalent of the grayscale image above, plus a pre-specified color, then add more and more of the texture's colors as we get closer to it? Or would that be too much data fetching?


Heck, even across a whole image to reduce fillrate issues, maybe based on depth if possible?

[image: imageoptvr4.jpg]


Or am I just being stupid?

/goes back to vertex pushing
 
Combining the color + grayscale texture is not gonna give that final result.
It sounds like you're suggesting a palette, which has its own problems. That said, the eye is more sensitive to light than to color, which is why some HDR methods store separate luminance + hue info; e.g. LogLuv32 uses 16 bits for luminance + 16 bits for the hue.
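That luminance/chroma split is easy to play with in a few lines. Here's a toy sketch using the classic BT.601 YCbCr weights (not actual LogLuv): keep luminance at full precision, crush the chroma to a few bits, and the damage is far less visible than doing the reverse.

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 luma/chroma split; RGB in 0..1, Cb/Cr in -0.5..0.5."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b
    cr =  0.500 * r - 0.419 * g - 0.081 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * cr
    g = y - 0.344 * cb - 0.714 * cr
    b = y + 1.772 * cb
    return r, g, b

def quantize(x, bits):
    """Snap a chroma value in -0.5..0.5 to the given bit depth."""
    levels = (1 << bits) - 1
    return round((x + 0.5) * levels) / levels - 0.5

# Luminance stays at full precision; chroma gets only 3 bits each:
y, cb, cr = rgb_to_ycbcr(0.8, 0.4, 0.2)
r2, g2, b2 = ycbcr_to_rgb(y, quantize(cb, 3), quantize(cr, 3))
```

The reconstructed pixel keeps its exact brightness and only shifts slightly in hue, which is the whole point of formats that spend their bits on luminance.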
 
Actually, the resulting images shown above ARE what you get when you combine the grayscale + compressed images. It's not a maybe; it's really what you get when you combine both. You can't tell the difference because it works so well :)

The problems it causes are more apparent on high-definition images, but with more samples it is not that noticeable if you can't compare the result with the original, and if there were a trick to implement this based on depth it would be even less noticeable.

What I'm basically wondering is whether this would put less stress on the fillrate?
 
How are they combined? It's not a modulation or an addition. It's not that I don't believe it, but I can't see it working without banding (I just tried it in Photoshop and couldn't see how it works).

Also, WRT speed, you need to sample two textures, which is not nice.
 
To get the "compressed + grayscale" combination image he is presenting (or at least something very close to it):
  • Divide the "compressed image" by a grayscaled version of itself (take care that this division doesn't overflow)
  • Multiply the resulting image by the "grayscale equivalent" image
Effectively, he seems to be working in an HSV-like color space where H is taken from the grayscale image and SV are taken from the highly quantized data in the "compressed image".
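In per-pixel terms, the two steps above can be sketched like this (a rough illustration, per channel, with values in 0..1 and an epsilon guarding the division overflow mentioned):

```python
def luma(r, g, b):
    """Simple perceptual grayscale weight (BT.601)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def recombine(compressed_rgb, gray_value, eps=1e-6):
    """Divide the compressed pixel by its own grayscale, then
    multiply by the full-precision grayscale image's value."""
    r, g, b = compressed_rgb
    y = max(luma(r, g, b), eps)      # guard the division
    scale = gray_value / y
    return (min(r * scale, 1.0),
            min(g * scale, 1.0),
            min(b * scale, 1.0))

# A heavily quantized orange pixel, relit by the true grayscale value:
out = recombine((0.75, 0.5, 0.25), 0.6)
```

As long as no channel clamps, the output pixel has exactly the luminance of the grayscale image while keeping the hue of the quantized one, which is why the banding mostly disappears.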
 
arjan de lumens, you mean that V is taken from the grayscale image and HS from the colored version.

Anyway, I don't think this will work well in practice. There are only a limited number of texture formats to choose from when rendering, and having to store four components for this makes that even worse. Although you could just work in HSV directly, and use an RGBA texture with 4 bits per component (I know some cards support this format). Take two to use as values for H and S, and the other two can be combined to make an 8-bit value for V. Converting it back to RGB would be pretty expensive though, and may require another 2D texture as a lookup table just to make it feasible.
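A rough sketch of that packing idea, with a hypothetical nibble layout (H and S get one 4-bit channel each, V is split across the remaining two); Python's `colorsys` stands in here for the shader-side HSV-to-RGB conversion that would be the expensive part:

```python
import colorsys

def pack_hsv4444(h, s, v):
    """Pack HSV (each 0..1) into the 4 nibbles of an RGBA4444 texel:
    H -> R, S -> G, V's high nibble -> B, V's low nibble -> A."""
    h4 = round(h * 15)
    s4 = round(s * 15)
    v8 = round(v * 255)
    return (h4, s4, v8 >> 4, v8 & 0xF)

def unpack_hsv4444(texel):
    """Reassemble 8-bit V from its two nibbles, then convert to RGB."""
    h4, s4, v_hi, v_lo = texel
    h = h4 / 15
    s = s4 / 15
    v = ((v_hi << 4) | v_lo) / 255
    return colorsys.hsv_to_rgb(h, s, v)

rgb = unpack_hsv4444(pack_hsv4444(0.1, 0.8, 0.9))
```

So V survives at full 8-bit precision while H and S are crushed to 4 bits each, matching the "eye cares most about brightness" argument above.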
 
Yeah, the HS is from the compressed colored image. BTW, I'm really talking about two different things; I forgot to change the topic of the post before posting.

The first was about how textures not displayed up close would have only their values loaded, plus an additional pre-specified color value (a color that matches more or less what the final texture would look like). As the player gets closer, the H and S are loaded. But I doubt this is possible, as far as I know, because formats like DDS use palettes and not RGB, which is lighter anyway.

The second image was something else, based on a similar principle, but I'm not sure it makes much sense to you guys (because of my own lack of understanding of the subject). This is not about textures:

Instead of displaying thousands/millions of colors for each rendered frame, only a limited number of colors would be sampled. The value of the image is sampled at 100% (in other words, each frame is rendered fully in black and white). The two are combined and you obtain an image that is almost indistinguishable in quality from one that doesn't use such a process (I tested it with high-resolution images and it's very difficult to notice anything other than slight alterations to the brightness of the image).

So in the end, since fewer colors are sampled per rendered frame, would this be positive in any way? Or is this not even possible on a technical level? And I guess that, yes, it would mean each frame would be rendered twice. Just curious :)
 
A single texture fetch costs basically nothing. Any sort of compression really only helps when you're using more memory than the video card has available, because the compression reduces how much it has to move things back and forth from the card. Also, rendering a scene twice for any reason is going to be a big performance hit.
 
Isn't this quite close to the JPEG compression method?

The only thing that seems different is the image subdivision, which could be done using vectors instead of square blocks...
 
You could just use DXT5 compression with YCoCg or a similar colorspace. The decode cost is cheap, the compression ratio is 4:1, and the quality is superb. You even get a 5-bit channel for free ;)
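For reference, the colorspace transform behind that scheme is cheap; here's a sketch of the lossless integer YCoCg-R variant (the DXT5 block compression itself would be done by the texture encoder, not this code):

```python
def rgb_to_ycocg_r(r, g, b):
    """Lossless integer YCoCg-R forward transform.
    8-bit RGB inputs; Co/Cg come out in -255..255."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """Exact inverse: undo the lifting steps in reverse order."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

# Round-trips exactly, since each step is a reversible lifting step:
assert ycocg_r_to_rgb(*rgb_to_ycocg_r(200, 100, 50)) == (200, 100, 50)
```

Because it's just adds and shifts, the decode side maps cleanly onto a few shader instructions.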
 
Thanks for the info all. I knew if it was clever someone else would have done it already!

Zengar: Yeah. The whole idea would have worked with RGB/HSL-based textures, not palettized ones, which in the end would take more memory, I would guess :)
 