3dcgi said:
The rest is my extrapolation and hasn't been given much thought. Tricks could be played to make it adaptive, where some objects would only be written to certain sample buffers. This would mean not every object receives the same level of AA. Of course the pixel shader filtering program would need to know this, which might mean re-rendering the objects or figuring out some way to have the stencil buffer tell it what to do. The end result is that the entire scene is super-sampled and the number of samples per object/pixel is determined by the programmer: 4x, 8x, 16x, super-slow x, etc.
Sort of like 3dfx's jittered super-sampling, but with filtering done in the pixel shader. Render the scene once (first sub-sample) and save the frame buffer for later use. Then render again (second sub-sample). Repeat as many times as necessary. Then read all of these surfaces into a pixel shader program and filter them as you desire.
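The render-accumulate-filter loop described above can be sketched in a few lines. This is a minimal pure-Python mock-up, not real GPU code: the `render` stand-in (which just shades a diagonal edge) and all function names are mine, and the "pixel shader" filtering pass is modeled as a plain loop over the saved buffers.

```python
import random

def render(width, height, jitter):
    """Stand-in for one scene render at a sub-pixel offset (jitter).
    A real renderer would rasterize the scene with the viewport
    nudged by `jitter`; here we just shade a diagonal edge."""
    jx, jy = jitter
    return [[1.0 if (x + jx) > (y + jy) else 0.0
             for x in range(width)]
            for y in range(height)]

def supersample(width, height, num_samples):
    # Render the scene once per jittered sample position,
    # saving each frame buffer for the later filtering pass.
    random.seed(0)
    buffers = []
    for _ in range(num_samples):
        jitter = (random.random() - 0.5, random.random() - 0.5)
        buffers.append(render(width, height, jitter))
    # "Pixel shader" pass: read all saved surfaces and filter.
    # A box filter (plain average) is used here, but any weighting
    # the programmer desires could be substituted.
    return [[sum(b[y][x] for b in buffers) / num_samples
             for x in range(width)]
            for y in range(height)]
```

Pixels well inside an object average to a solid color, while pixels straddling the edge land somewhere in between, which is the anti-aliasing.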
The wishes I heard were from a private conversation a few years ago, and I wasn't talking to Carmack, so between second-hand information and my possibly bad memory, who knows if I'm thinking of what Reverend has heard.

Chalnoth said:
Well, I know I've suggested programmable filtering (more specifically, shader access to specific pieces of the filtering pipeline, such as the fetching of 4 texels, the blending of 4 color samples, the calculation of bilinear and trilinear blend weights, LOD calculation, etc.), but I don't remember JC asking for it. He may have, I don't know.
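The pieces in Chalnoth's parenthetical list are all simple math that a shader could in principle expose or replace. A rough pure-Python sketch of the standard formulas (function names are mine; the LOD formula is the usual log2-of-gradient-magnitude approximation, not any particular vendor's exact pipeline):

```python
import math

def bilinear_weights(u, v):
    """Blend weights for the four texels around (u, v),
    where u, v are the fractional coordinates inside the quad."""
    return [(1 - u) * (1 - v),  # top-left
            u * (1 - v),        # top-right
            (1 - u) * v,        # bottom-left
            u * v]              # bottom-right

def bilinear(texels, u, v):
    # texels: [tl, tr, bl, br] color samples
    return sum(t * w for t, w in zip(texels, bilinear_weights(u, v)))

def lod(dudx, dvdx, dudy, dvdy):
    """Mip LOD from screen-space texture-coordinate derivatives:
    log2 of the longer of the two gradient vectors."""
    rho = max(math.hypot(dudx, dvdx), math.hypot(dudy, dvdy))
    return math.log2(max(rho, 1e-8))

def trilinear(texels_fine, texels_coarse, u, v, lod_value):
    # Blend the bilinear results of two adjacent mip levels
    # by the fractional part of the LOD.
    f = lod_value - math.floor(lod_value)
    return ((1 - f) * bilinear(texels_fine, u, v)
            + f * bilinear(texels_coarse, u, v))
```

"Shader access" would mean letting a program substitute its own version of any of these stages, rather than taking the fixed-function result.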
Except the methods I described were leaving the complexity to the software and using brute-force shader and fill rate.

Ailuros said:
Sounds nearly as complicated as stochastic supersampling to me.
I don't see the connection. The patent seems to be talking about generating mip-maps.

How would this alternative sound?
http://v3.espacenet.com/textdes?DB=EPODOC&IDX=WO2004114222&F=0&QPN=WO2004114222
What I was referring to likely wouldn't require much hardware, as it would use the shader pipeline for the fetching and filtering. It would, however, likely be slower than pure hardware methods, and thus I am suggesting the possibility, not advocating it.

Stop gap here:
In an alternative embodiment, the simple box filter could be replaced with a more complicated filter, and a buffer larger than a tile size could be provided. This would allow more complex filtering and image/data-processing techniques to be performed, such as automatic edge detection, up-scaling and noise reduction.
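The "buffer larger than a tile" idea in the excerpt above is what lets a neighbourhood filter run per tile: the working buffer carries a one-pixel apron around the tile so a 3x3 kernel never reads outside it. A toy sketch of this (the tiling scheme, function names, and the Laplacian-style edge detector are my illustration, not the patent's actual design):

```python
def filter_tile(image, tx, ty, tile, kernel):
    """Apply a 3x3 `kernel` function to one tile of `image`.
    The gather reaches one pixel beyond the tile on each side
    (the apron), clamping at the image borders."""
    h, w = len(image), len(image[0])
    out = [[0.0] * tile for _ in range(tile)]
    for y in range(tile):
        for x in range(tile):
            # gather the 3x3 neighbourhood, clamped to the image
            nb = [[image[min(max(ty * tile + y + dy, 0), h - 1)]
                        [min(max(tx * tile + x + dx, 0), w - 1)]
                   for dx in (-1, 0, 1)]
                  for dy in (-1, 0, 1)]
            out[y][x] = kernel(nb)
    return out

def laplacian(nb):
    # Simple edge detector: centre minus the average of its
    # 4 direct neighbours; zero on flat regions.
    return nb[1][1] - (nb[0][1] + nb[2][1] + nb[1][0] + nb[1][2]) / 4.0
```

A box filter would just be a different `kernel` function averaging the same neighbourhood, which is the sense in which the patent's filter is swappable.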
Sounds equally complicated and costly in terms of HW to me, but wouldn't it be theoretically applicable to most tiled back buffers? (don't shoot the layman if he's asking dumb questions *cough*).
It's in parentheses. Between the commas would be separate instructions.

3dcgi said:
...

Chalnoth said:
Well, I know I've suggested programmable filtering (more specifically, shader access to specific pieces of the filtering pipeline, such as the fetching of 4 texels, the blending of 4 color samples, the calculation of bilinear and trilinear blend weights, LOD calculation, etc.), but I don't remember JC asking for it. He may have, I don't know.
I'm not sure what you mean by shader access to the filtering pipeline.