Colour model for computer graphics which is "good enough"

K.I.L.E.R

I've been dwelling in this area for a while now and I really want to know the perfect colour model for computer graphics.
By computer graphics I mean lit scenery varying from indoors to nature, in other words the stuff you see in games.

Anyway, the basic colour models are very bad.
Image processing in RGB changes colour.
YIQ/HSV have bad representations of intensity.
CIE XYZ is a non-linear colour model suited for eyes but not for processing.

I need a colour model which is best suited for processing computer graphics.
After processing I can then display the results as CIE XYZ (if I can). Due to the nature of video cards and monitors this is probably not possible.

I've been doing some reading about this lately and I cannot find an answer, and my image processing teacher hasn't delved into the computer graphics side of image processing.
 
I guess what you mean by RGB processing changing colors is stuff like: if you try to change intensity in a non-linear way by applying it to all color channels, you'll get altered hues too?
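Something like this, I mean (a toy example with made-up numbers):

Code:
// Darkening by squaring each channel also shifts the hue, because the
// ratios between R, G and B change.
float3 DarkenBySquaring(float3 col)
{
    // e.g. (0.8, 0.4, 0.2) becomes (0.64, 0.16, 0.04): R:G goes from
    // 2:1 to 4:1, so the colour gets "redder", not just darker.
    // Multiplying all channels by one scalar factor keeps the hue.
    return col * col;
}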

I don't know what's wrong with YIQ, but if you say so... (I guess YUV is no better then?)

I can see what's bad about HSV's interconnected S & V. Have you tried HSL? Much better way to represent intensity, IMHO.

(BTW, stating the obvious, but don't forget that H should wrap around)
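Roughly, the difference between the two is this (a sketch of just the intensity part):

Code:
// HSV's V is just the largest channel, so a fully saturated pure colour
// always reports "full brightness". HSL's L also takes the smallest
// channel into account, which separates lightness from saturation better.
float HsvValue(float3 col)
{
    return max(col.r, max(col.g, col.b));
}

float HslLightness(float3 col)
{
    float maxc = max(col.r, max(col.g, col.b));
    float minc = min(col.r, min(col.g, col.b));
    return 0.5 * (maxc + minc);
}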
 
According to the Poynton colour FAQ HSL should be abandoned, so it's worse than the current colour models I'm using.

Thowllly said:
Have you tried HSL?

" I don't know whats wrong with YIQ, but if you say so... (I guess YUV is no better then?)"
So is YIQ better than HSV?
 
K.I.L.E.R said:
According to the Poynton colour FAQ HSL should be abandoned, so it's worse than the current colour models I'm using.
I wouldn't say that it's automatically worse just because somebody says that it should be abandoned, it depends on what you're trying to do.
" I don't know whats wrong with YIQ, but if you say so... (I guess YUV is no better then?)"
So is YIQ better than HSV?
I didn't mean to imply that you were wrong, I really don't know! (But reading it again, it's really, really easy to see that's the way most would read it... please note my disclaimer :p)

It would help if you said what you're trying to achieve... :p
 
General image processing techniques such as convolution, FFT (high-pass filters), edge detection, skeletonisation and so forth.

It's doable, but according to my teacher I need to separate the value component (intensity) from the colours.
None of the colour models I've used do that perfectly.
 
K.I.L.E.R said:
General image processing techniques such as convolution, FFT (high-pass filters), edge detection, skeletonisation and so forth.

It's doable, but according to my teacher I need to separate the value component (intensity) from the colours.
None of the colour models I've used do that perfectly.
Hmm, I think it would be difficult to separate color and intensity completely, given the way our visual system works. Saturation and intensity affect each other. If intensity is very low, then saturation will drop too (everything starts to look gray when it's really dark). And highly saturated colors look like they are brighter than low saturation colors, even if they're really equally bright. But I don't know if that really matters anyway :shrugs:
 
K.I.L.E.R said:
General image processing techniques such as convolution, FFT (high-pass filters), edge detection, skeletonisation and so forth.
What exactly do you want to do with these things? FFT, edge detection, skeletonization only really make sense on scalar data. You have to be more explicit about what you want to do with colour.

For edge detection, do you want to find edges between two equally luminous but differently coloured regions? Just do each channel in RGB and your edges are the union of all individual edges.
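Something like this sketch (untested; the sampler and texel-size parameters are placeholder names):

Code:
// Sobel operator applied to each RGB channel independently; the final
// edge strength is the union (max) of the three per-channel responses.
float SobelEdgeRGB(sampler2D tex, float2 uv, float2 texel)
{
    float3 tl = tex2D(tex, uv + texel * float2(-1, -1)).rgb;
    float3 t  = tex2D(tex, uv + texel * float2( 0, -1)).rgb;
    float3 tr = tex2D(tex, uv + texel * float2( 1, -1)).rgb;
    float3 l  = tex2D(tex, uv + texel * float2(-1,  0)).rgb;
    float3 r  = tex2D(tex, uv + texel * float2( 1,  0)).rgb;
    float3 bl = tex2D(tex, uv + texel * float2(-1,  1)).rgb;
    float3 b  = tex2D(tex, uv + texel * float2( 0,  1)).rgb;
    float3 br = tex2D(tex, uv + texel * float2( 1,  1)).rgb;

    float3 gx = (tr + 2*r + br) - (tl + 2*l + bl); // horizontal gradient
    float3 gy = (bl + 2*b + br) - (tl + 2*t + tr); // vertical gradient
    float3 mag = sqrt(gx*gx + gy*gy);              // per-channel edge strength
    return max(mag.r, max(mag.g, mag.b));          // union of the three edges
}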

I think you're making things a lot more difficult than they have to be. The shader you were trying to make in the other thread also suggests this.
 
K.I.L.E.R said:
Image processing in RGB changes colour
I assume you're referring to clamping the components after a transform that gave you an off-gamut color, right? Because that's just a dynamic range clamping problem: take some more margin to account for it when you're choosing the precision of your internal representation for the components, and make sure you limit the excursion of the components at the input of your transform (you know, like the Rec. 601 video standard, which encodes white with a luma of 235 so that there's some room left for display-time transforms).
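Something along these lines, say (the 16/235 range is the Rec. 601 one; the helper name is made up):

Code:
// Compress [0,1] input into [16/255, 235/255] before processing, so that
// a transform can over/undershoot a little without the result being
// destroyed by the final clamp to the displayable range.
float3 AddHeadroom(float3 col)
{
    return col * ((235.0 - 16.0) / 255.0) + (16.0 / 255.0);
}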

K.I.L.E.R said:
General image processing techniques such as convolution, FFT (high-pass filters), edge detection, skeletonisation and so forth.

It's doable, but according to my teacher I need to separate the value component (intensity) from the colours.
None of the colour models I've used do that perfectly.
So you need a luminance component to perform some processing on it, but you don't need to convert its result back into a color picture, right? (For instance, the result of a contour detection is a contour.) If that's the case, all you need is to transform your RGB into a luminance value, which is really easy.
Code:
// 	LUMINANCE CIE709
//	----------------
//
// This computes the luminance of an RGB color according to the Rec. 709 standard.
// (alpha is ignored)
float Luminance709(float4 col)
{
	float4 Y709 = { 0.2125, 0.7154, 0.0721, 0 };
	return dot(col, Y709);
}
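A contour detector, for instance, would then just work on the scalar result (tex and uv being whatever your shader already has):

Code:
float y = Luminance709(tex2D(tex, uv)); // scalar input for the contour pass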
Edit: Oops! I hadn't seen Mintmaster's post. I believe we're basically saying the same thing: the problems you're seeing have practical solutions. Of course you might try to solve it the hard way (which would be, according to Poynton: "If you really need to specify hue and saturation by numerical values, rather than HSB and HSL you should use the polar coordinate version of u* and v*: h*uv for hue angle and c*uv for chroma."), but I believe you should not need anything as "extreme" as defining your own color space; plain RGB should do. :)
 
So in effect everyone is telling me that I should either use RGB or use a simple model to separate intensity from colour?
 
Personally, I'm saying figure out what you want to do before trying to do it.

All these things you're talking about do not have an unambiguous application to colour images. Tell us what your aims and goals are and we can help.
 
I've noticed. :LOL:

Mintmaster said:
Personally, I'm saying figure out what you want to do before trying to do it.

All these things you're talking about do not have an unambiguous application to colour images. Tell us what your aims and goals are and we can help.
What am I trying to do?
I have an original texture 2048x1024.
I have a render target of 512x512.

Obviously the final result will look like crap.
The point is to come up with a filter which could offset the crappiness as much as possible.

In effect I need to apply different filtering to different areas depending on the interpolated eye-to-object distance. I've already taken care of this.
What I do need is a better filter for those far out results.

I've been told that I should use a bi-cubic filter but I feel that it is a very poor filter and so I'm looking at more complex alternatives.
Also anisotropic filtering gives very poor results so please don't tell me to just enable that. :)
 
Kruno,
If you are doing filtering of an image, then you need to be working in linear (i.e. proportional to the amount of energy) RGB space (or, if you're really keen, a scheme with more colour channels, e.g. ROYGBIV). You also need > 8bpc.

Just remember that a computer display usually works in something like sRGB space, i.e. the channels are non-linear. The same probably also applies to your original input image. This means you'll need to gamma correct on input and then back again on output.
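A rough sketch of the conversion (a plain 2.2 power curve; real sRGB actually has a small linear segment near black, so this is only an approximation):

Code:
// Approximate sRGB <-> linear conversion with a pure power curve.
float3 ToLinear(float3 srgb) // decode the input image before filtering
{
    return pow(srgb, 2.2);
}

float3 ToDisplay(float3 lin) // re-encode the filtered result for output
{
    return pow(lin, 1.0 / 2.2);
}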

As for the filter, some sort of approximation of a Gaussian (even a cubic approximation) would be a good start but you have to make sure you get your footprint correct.
 
For texture filtering the right colour space is linear RGB.

If you want the optimum texture filtering use footprint filtering in linear RGB space (basically you project the pixel footprint, a truncated Gaussian for instance, into texture space to determine the weights for the individual texels).
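Roughly like this for one output pixel (a deliberately simplified sketch: it uses a circular Gaussian footprint instead of the properly projected, anisotropic one, and the parameter names are made up):

Code:
// Weight the texels around the projected pixel centre with a truncated
// Gaussian and normalise. The texture is assumed to hold linear RGB.
float3 FootprintFilter(sampler2D tex, float2 uv, float2 texel,
                       float sigma, int radius)
{
    float3 sum  = 0;
    float  wsum = 0;
    for (int j = -radius; j <= radius; j++)
    {
        for (int i = -radius; i <= radius; i++)
        {
            float2 d = float2(i, j);
            float  w = exp(-dot(d, d) / (2 * sigma * sigma)); // Gaussian weight
            sum  += w * tex2D(tex, uv + d * texel).rgb;
            wsum += w;
        }
    }
    return sum / wsum; // normalise so the weights sum to 1
}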
 
If you do it in software then splatting (forward projection) is probably easier to implement than normal texture mapping.
 
The part where I map my function to a colour model to get a gamut sounds complicated (yes, I've done research so I know what you're talking about :D ). Can you give me some pointers on where I should start with that?


Simon F said:
Kruno,
If you are doing filtering of an image, then you need to be working in linear (i.e. proportional to the amount of energy) RGB space (or, if you're really keen, a scheme with more colour channels, e.g. ROYGBIV). You also need > 8bpc.

Just remember that a computer display usually works in something like sRGB space, i.e. the channels are non-linear. The same probably also applies to your original input image. This means you'll need to gamma correct on input and then back again on output.

As for the filter, some sort of approximation of a Gaussian (even a cubic approximation) would be a good start but you have to make sure you get your footprint correct.
 
The gamut stuff is only relevant when you are prepping an HDR image for display.

All you have to do to get linear RGB is to convert to floating point and apply the gamma function.
 
Gamma function?
I thought monitors already did that, and that the video card has a lookup table for it.


MfA said:
The gamut stuff is only relevant when you are prepping an HDR image for display.

All you have to do to get linear RGB is to convert to floating point and apply the gamma function.
 