I have a question:
Would it be beneficial for 3d hardware to implement jittered pixel locations? I mean this separately from jittered AA sampling, where the pixel location is still on a fixed grid. The point of my bringing this up is that stochastic sampling doesn't produce aliasing; it produces noise instead. In most situations I'm aware of, we are much less sensitive to noise than to a regular, patterned error like aliasing - but again, I don't have much experience with graphics.
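To make the aliasing-vs-noise claim concrete, here's a small sketch (my own illustration, not from any hardware implementation) that samples a cosine above the Nyquist rate on a regular grid versus a jittered grid, then looks at the spectrum. The frequency `f` and grid size `N` are arbitrary choices. On the regular grid the error shows up as a single coherent alias tone; with one uniform random offset per cell, that tone's energy is largely smeared into a broadband noise floor:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096        # number of samples (pixels along one axis)
f = 0.875       # signal frequency in cycles/sample, above Nyquist (0.5)

def signal(x):
    return np.cos(2 * np.pi * f * x)

# Regular grid: one sample at the center of each cell.
x_reg = np.arange(N)
# Jittered grid: each sample offset uniformly within its own cell.
x_jit = np.arange(N) + rng.uniform(-0.5, 0.5, N)

# Treat both sample sets as the displayed pixel values and inspect
# their spectra.
spec_reg = np.abs(np.fft.rfft(signal(x_reg)))
spec_jit = np.abs(np.fft.rfft(signal(x_jit)))

# Peak-to-mean ratio: how concentrated the error energy is.
# Regular sampling folds f to a single coherent alias frequency
# (a huge, visible peak); jittering spreads most of that energy
# across the whole spectrum as noise.
peak_reg = spec_reg.max() / spec_reg.mean()
peak_jit = spec_jit.max() / spec_jit.mean()
```

The jittered spectrum still contains a small residual tone (the coherent component is attenuated, not eliminated), which matches the intuition that jittering trades a structured artifact for a less objectionable noisy one rather than removing the error outright.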
I actually remember John Carmack mentioning this in a speech or a blog update somewhere, but I haven't been able to find it. It popped into my head, and I'd like to discuss the costs and benefits of implementing something like this - i.e., why hasn't it been done?