Laa-Yosh said: Aaaargh, please don't bring up this topic of the CPU performing AA again...
Nemo80 said: It is already done in the Cell ray-casting demo (@ full speed).
Laa-Yosh said: I haven't heard that the landscape demo had AA. Still, it's a fully software renderer, so obviously the CPU can calculate AA, because it does the rest of the calculations as well. But it cannot do AA for the GPU...
Nemo80 said: Well, what's the difference? If the SPEs can do multisampling, texture filtering, blending, lighting etc. all on their own at a 1280x720 resolution @ 30 fps (in that demo), then I guess it's not too slow for just doing AA alone for the GPU (since they can exchange any data anyway, 35 GB/s should also not be too slow for AA).
Shifty Geezer said: It doesn't work that way. Antialiasing involves rendering multiple samples for a pixel, basically subdividing it and colouring the pixel with an average of the samples. In the raytracer demo Cell was casting up to 16 samples per pixel (adaptive multisampling was the term used in the writeup on the demo).
If the GPU is rendering the graphics, there's nothing the Cell can do to sample subpixel resolutions for AA. For Cell to contribute to AA it'll be through magical jiggerypokery and not conventional multisampling techniques.
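For concreteness, here is a minimal Python sketch of the kind of adaptive multisampling being described: take a few samples per pixel, and only refine towards 16 samples where those samples disagree. The scene function, resolution and variance threshold are all invented for illustration; this is in no way the actual Cell demo's code.

```python
# Toy adaptive multisampling: sample each pixel coarsely and only
# subdivide further (up to 16 samples) where the samples disagree.

def scene(x, y):
    """Hypothetical scene in normalized coordinates: white disc on black."""
    return 1.0 if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.16 else 0.0

def shade_pixel(px, py, width, height, threshold=0.01):
    # 4 initial samples on a 2x2 grid inside the pixel
    coarse = [scene((px + (i + 0.5) / 2) / width,
                    (py + (j + 0.5) / 2) / height)
              for i in range(2) for j in range(2)]
    mean = sum(coarse) / 4.0
    variance = sum((s - mean) ** 2 for s in coarse) / 4.0
    if variance <= threshold:        # samples agree: flat region, stop here
        return mean
    # samples disagree (likely an edge): refine to a 4x4 grid, 16 samples
    fine = [scene((px + (i + 0.5) / 4) / width,
                  (py + (j + 0.5) / 4) / height)
            for i in range(4) for j in range(4)]
    return sum(fine) / 16.0

image = [[shade_pixel(x, y, 64, 64) for x in range(64)] for y in range(64)]
```

The key point this illustrates is that the extra samples exist only inside the renderer; once the averaged value is written to the framebuffer, the sub-pixel information is gone, which is why a separate chip can't reconstruct it afterwards.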
Shifty Geezer said: Edit: I'll add that there are some very expensive Photoshop plugins for upscaling images and none does a fantastic job. I don't know of any that can upscale an image such that, when you downscale it again, you have a better-looking image, certainly not without 'muddying' the clarity somewhat. But then maybe no-one's implemented your technique and there's a market for you!
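That round trip is easy to test yourself. A quick Pillow sketch, where "photo.png" is a placeholder filename and bicubic stands in for whatever filter a given plugin uses; with any such conventional filter the round trip can only blur, never add detail:

```python
# Upscale an image, downscale it back, and compare with the original.
from PIL import Image, ImageChops

img = Image.open("photo.png").convert("RGB")
w, h = img.size

up = img.resize((w * 2, h * 2), Image.BICUBIC)   # 2x upscale
down = up.resize((w, h), Image.BICUBIC)          # back to original size

diff = ImageChops.difference(img, down)
print("per-channel (min, max) difference after round trip:", diff.getextrema())
```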
Laa-Yosh said: I haven't been into signal processing since I graduated, so I'd rather not try to get into details. Nevertheless, doing AA in post-processing is not possible because you've already lost data about the signal, i.e. you need to have several samples for each pixel.
Titanio said: That's what I was wondering about. Is there a difference between a digitised photo and a framebuffer, for instance, from this perspective? I guess the answer is yes. Would a digital photo be different? I'm having a hard time remembering the specifics of all this myself, but I guess there is a distinction between all these.
Laa-Yosh said: A digital photo should already have AA incorporated, if I'm correct.
If it's from a camera, then mother nature took care of it; if it's been scanned, then the scanner.
ShootMyMonkey said: Well, from the signal processing perspective, I think it would be simpler to confine this to SSAA (which, as far as I'm concerned, is the only true AA there is) -- what you've got is a source which contains information up to some frequency range, and a destination framebuffer which is essentially a specific samplerate (or resolution). Now with 4x SSAA, you've got information covering twice the frequency range (twice in each dimension makes 4x), so the Nyquist limit of the source sample data (internal rendering resolution) is equal to the destination samplerate. Obviously, you run into the same issues you get with sound as the frequency of certain waves gets closer to the Nyquist limit of the samplerate without actually hitting it, which is why things like 16x SSAA exist.
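As a concrete sketch of the 4x case in numpy terms: render at twice the target resolution in each dimension, then box-filter down. The renderer here is a made-up stand-in; the only point is the 2x-per-axis rendering and the 2x2 averaging, which is what puts the source's Nyquist limit at the destination sample rate.

```python
import numpy as np

def render(width, height):
    """Placeholder renderer: slanted stripes, prone to stair-stepping."""
    y, x = np.mgrid[0:height, 0:width]
    return ((x * 0.37 + y * 0.61) % 16 < 8).astype(np.float32)

W, H = 640, 360
hires = render(W * 2, H * 2)                        # 2x per axis = 4 samples/pixel
ssaa = hires.reshape(H, 2, W, 2).mean(axis=(1, 3))  # average each 2x2 block
```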
ShootMyMonkey said: With a photo, you can consider the original sample source to be photons bouncing around and eventually striking the CCD/film during the exposure time, and since you can basically say that the number of photons striking any single "pixel" in the array is pretty arbitrarily large -- well, you can say that photographs are basically infinitely antialiased (getting into the specific number of photons in a scene would be too exhaustive a discussion). Same thing happens with a scanner, in that there's the light emitted from the scanner bouncing off the object being scanned and hitting a photosensitive element containing some number of pixels.
The best you can do as far as AA in a post-process goes is to come up with some sort of decent heuristic for what a super-resolution image would look like -- some sort of edge-preserving thing, such as covariance-derived solutions like NEDI or partial-differential-equation-derived solutions like isophote smoothing. As long as you have a halfway decent guess as to what frequency information was lost in creating a low-res render, you can get okay results.
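To make the "edge-preserving heuristic" idea concrete, here is a toy 2x upscaler in that spirit. It is not NEDI proper -- real NEDI derives its interpolation weights from the local covariance via least squares -- just the simplest directional version of the same idea, with all details invented for illustration: when filling in new pixels, interpolate along the direction where the neighbours agree, not across it, so edges stay sharp.

```python
import numpy as np

def edge_directed_2x(img):
    """Toy edge-directed 2x upscale of a grayscale float image."""
    h, w = img.shape
    out = np.zeros((h * 2, w * 2), dtype=np.float32)
    out[0::2, 0::2] = img                      # original samples on the even grid

    # Fill the "diagonal" sites (odd row, odd column) from their four
    # diagonal neighbours, averaging along the diagonal pair that
    # differs least, i.e. along the likely edge direction.
    # Border sites are left unfilled to keep the sketch short.
    for y in range(1, h * 2 - 1, 2):
        for x in range(1, w * 2 - 1, 2):
            nw, ne = out[y - 1, x - 1], out[y - 1, x + 1]
            sw, se = out[y + 1, x - 1], out[y + 1, x + 1]
            if abs(nw - se) < abs(ne - sw):    # edge runs NE-SW
                out[y, x] = (nw + se) / 2
            else:                              # edge runs NW-SE
                out[y, x] = (ne + sw) / 2

    # Remaining sites: plain average of the two adjacent original
    # samples. A real implementation would repeat the directional
    # test here as well, using the diagonal results from above.
    for y in range(h * 2):
        for x in range(w * 2):
            if (x + y) % 2 == 1 and 0 < y < h * 2 - 1 and 0 < x < w * 2 - 1:
                if y % 2 == 0:   # even row, odd column: left/right are originals
                    out[y, x] = (out[y, x - 1] + out[y, x + 1]) / 2
                else:            # odd row, even column: up/down are originals
                    out[y, x] = (out[y - 1, x] + out[y + 1, x]) / 2
    return out
```

Even a crude heuristic like this keeps hard edges from turning into the blurry staircase a plain bilinear upscale produces, which is exactly the "decent guess at the lost frequency information" being described; what it cannot do is recover detail that was never sampled.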