Colour model for computer graphics which is "good enough"

According to the Poynton colour FAQ, HSL should be abandoned, so it's worse than the current colour models I'm using.
The main reason he gives is that HSL's value is largely lost at this stage, now that we have visual feedback about color, reference points against other color spaces, and color spaces based on perceptual models (e.g. Luv, Lab).

YIQ/HSV have bad representations of intensity.
I can see what's wrong with HSL/HSV's representation, as they're both based on selection schemes from RGB (e.g. max(RGB) and min(RGB)).
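To make "selection scheme" concrete, this is roughly all V and L are -- a quick Python sketch, nothing library-specific:

Code:
def hsv_value(r, g, b):
    # HSV's V is just the largest channel -- a pure selection.
    return max(r, g, b)

def hsl_lightness(r, g, b):
    # HSL's L averages the largest and smallest channels and
    # ignores the middle one entirely.
    return (max(r, g, b) + min(r, g, b)) / 2.0

Neither weights the channels, so e.g. pure blue (0,0,1) and pure yellow (1,1,0) get exactly the same V and L, even though yellow looks far brighter on screen.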

I'm not sure what's THAT bad about YIQ/YUV/YCbCr, as they all use a pretty standard greyscale weighting for their intensity. It's not the best thing out there, but it's fair to say that the main value of YIQ-type color spaces is for image/video compression (i.e. you can afford to lose more in the chrominance channels) and for NPR-type work (i.e. there are certain things you want to do exclusively to chrominance).
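By "pretty standard greyscale weighting" I mean the usual luma, e.g. the Rec. 601 weights applied to the gamma-corrected channels:

Code:
def luma_601(r, g, b):
    # Rec. 601 luma: weighted sum of the (gamma-corrected) R'G'B'
    # channels, used as the Y in YIQ/YUV/YCbCr-style spaces.
    return 0.299 * r + 0.587 * g + 0.114 * b

A weighted sum like that tracks perceived brightness far better than a max/min pick, even if strictly speaking it's luma rather than true luminance.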

I've been told that I should use a bicubic filter, but I feel that it is a very poor filter, so I'm looking at more complex alternatives.
Sure you can mess with Lanczos or sinc^2 or something like that. If you really need some edge detail or something, maybe you can apply some sort of isophote smoothing after a normal interpolation pass. NEDI is also an option, but rather complicated as you probably have to check the determinant of your matrix every time to see if it's ill-conditioned for inversion -- fortunately when it IS ill-conditioned, bilinear/bicubic is good enough.
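In case it's useful, Lanczos is just sinc windowed by a stretched sinc lobe -- a minimal sketch (a is the usual lobe count, typically 2 or 3):

Code:
import math

def lanczos(x, a=3):
    # Windowed sinc: sinc(x) * sinc(x / a) for |x| < a, zero outside.
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

To resample, you evaluate it at the distances to the neighbouring source samples and normalise the weights so they sum to one.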
 
You guys are amazing. Where did you all learn this stuff?
I know there are a lot of PhD graduates around here but I didn't think there were so many.

Thanks for the help, I'll work on understanding what you've all given me and then come back if necessary.
 
K.I.L.E.R said:
You guys are amazing. Where did you all learn this stuff?
Speaking for myself: uni, books, journals (e.g. SIGGRAPH) and papers, the web, and hard experience.
 
Since Poynton and gamma have been mentioned in this thread:
Have you noticed the bug in his Gamma FAQ?

In FAQ 14, he says that "Computer Graphics" has an intensity-linear frame buffer. That might be true in some special cases; for the computer graphics we usually talk about here, it isn't. But it is a common mistake to think that it is.
 
Basic said:
Since Poynton and gamma have been mentioned in this thread:
Have you noticed the bug in his Gamma FAQ?

In FAQ 14, he says that "Computer Graphics" has an intensity-linear frame buffer. That might be true in some special cases; for the computer graphics we usually talk about here, it isn't. But it is a common mistake to think that it is.
Well... in a sense he is right... most of the shading algorithms (e.g. Gouraud) assume linear blending, so the framebuffer is linear... it's the display hardware that's wrong ;)
 
I guess most of the time it's an unfortunate mix of linear lighting with sRGB (or similar) textures filtered linearly, linear blending, and ~sRGB AA downsampling.
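For reference, the sRGB transfer curves being mixed in and out of there look roughly like this (the standard piecewise sRGB functions, sketched in Python):

Code:
def srgb_to_linear(c):
    # Decode an sRGB-encoded value in [0, 1] to linear light.
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Encode linear light back to the sRGB curve.
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1.0 / 2.4) - 0.055

The trouble is doing the filtering and blending maths directly on the encoded values instead of round-tripping through linear.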
 
Xmas said:
I guess most of the time it's an unfortunate mix of linear lighting with sRGB (or similar) textures filtered linearly, linear blending, and ~sRGB AA downsampling.
Hence probably why some SGI machines had 12bpc.
 
Simon F said:
Well... in a sense he is right... most of the shading algorithms (e.g. Gouraud) assume linear blending, so the framebuffer is linear... it's the display hardware that's wrong ;)
Chicken and egg?

His gamma FAQ is well known and taken as the truth by many who bother about gamma. And when an authority like that says that the framebuffer is linear, why bother doing gamma correction on the frame buffer accesses?
 
He is minifying, not magnifying.
That's what I thought originally, but when he said that bicubic wasn't good enough, I was wondering if he was trying to take a 512x512 render target and render it to a much larger texture. If he's minifying, I don't know why he's complaining so much about the simplicity of the bicubic as opposed to something more extensive -- you don't really need anything major unless you're looking to achieve something specific. Sinc may be valid as it's considered to be the "ideal" lowpass filter.

Dare I ask how long it had taken?
I don't know that I would consider it all PhD material. Maybe prior to the existence of the Internet it would have been. A whole lot of it can be absorbed in a day or so, as long as you have the interest. I find that the main reason I never learned things like SQL or CSS or PHP is that I never gave a damn about any of them. Conversely, I learned stuff about color arithmetic because it interested me.
 
ShootMyMonkey said:
If he's minifying, I don't know why he's complaining so much about the simplicity of the bicubic as opposed to something more extensive -- you don't really need anything major unless you're looking to achieve something specific.
Well, texture filtering in non-linear space is usually taken to be good enough ... but it will tend to pull down the intensity of the image. For instance, if you minify the good old checkerboard into an even gray, that gray will be too dark (quick numbers at the end of this post).
Sinc may be valid as it's considered to be the "ideal" lowpass filter.
Sinc looks like crap most of the time; windowed sinc can look good ... but it is important to realise why it looks good. Ringing can sometimes make an image look sharper; of course, sometimes it can just look like ringing too.

IMO texture filters shouldn't ring; there are better ways to sharpen an image than unsharp masking.
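To put rough numbers on the checkerboard case (using a plain 2.2 gamma as a stand-in for the actual display curve):

Code:
GAMMA = 2.2  # stand-in for the display transfer curve

# Minifying a black/white checkerboard to a single pixel is just
# averaging equal amounts of 0.0 and 1.0.

# Filtering in the non-linear (encoded) space:
wrong = (0.0 + 1.0) / 2.0          # encoded 0.5
print(wrong ** GAMMA)              # ~0.22 of full intensity on screen

# Filtering in linear space, then re-encoding for the display:
right = ((0.0 + 1.0) / 2.0) ** (1.0 / GAMMA)   # encoded ~0.73
print(right ** GAMMA)                          # 0.5 of full intensity

The naive version lands at roughly 22% of full intensity on screen, where the eye's own spatial averaging of the checkerboard says it should be 50% -- hence "too dark".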
 
Which is why I put "ideal" in quotes. It's simply considered to be so because it's mathematically the ideal (brick-wall) lowpass filter. The problem with sinc on an image is the whole range-bounded issue, which causes some "negative colors" to come up. Sinc^2 fixes most of those problems, but you do tend to weaken the response of higher-frequency components (as it's essentially a triangle filter in frequency space).

If edges are a concern, there are post-processes like isophote smoothing that should work fine. If energy preservation is a concern, well, okay, there are different filters you can use (e.g. plenty of wavelet filter banks; just grab the lowpass image from one of those).
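To show the range-bounded issue concretely: resampling a step edge with a windowed sinc (Lanczos-3 here; the little resample() helper is just illustrative) already pushes values outside [0, 1]:

Code:
import math

def lanczos(x, a=3):
    # Windowed sinc kernel, zero outside |x| < a.
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample(signal, pos, a=3):
    # Weighted sum of the neighbouring samples, weights normalised to 1.
    lo = max(0, int(math.floor(pos)) - a + 1)
    hi = min(len(signal) - 1, int(math.floor(pos)) + a)
    weights = [lanczos(pos - i, a) for i in range(lo, hi + 1)]
    samples = [signal[i] for i in range(lo, hi + 1)]
    return sum(w * s for w, s in zip(weights, samples)) / sum(weights)

# A 1D step edge: black then white.
signal = [0.0] * 8 + [1.0] * 8

# Sample halfway between the original pixels, around the edge.
values = [resample(signal, i + 0.5) for i in range(3, 12)]
print(min(values), max(values))   # roughly -0.11 and 1.11

Those out-of-range values are the over/undershoot around the edge -- clamp them and you've messed with the energy near the edge; keep them and you get the "negative colors".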
 
ShootMyMonkey said:
The problem with sinc on an image is the whole range-bounded issue, which causes some "negative colors" to come up.
Nah, the problem is that it rings.
 
MfA said:
Nah, the problem is that it rings.
And it's a damned nuisance when you're in the bath.


To SMM: Seriously though, since the reconstruction on the monitor doesn't use a sinc function, it probably doesn't make sense to use one for the filtering.
 