A tiny Heavenly Sword update

Nemo80 said:
it is already done in the Cell ray-casting demo (@ full speed).

I haven't heard that the landscape demo had AA. Still, it's a fully software renderer, so obviously the CPU can calculate AA because it does the rest of the calculations as well. But it cannot do AA for the GPU...
 
I know it was previously discussed and I don't want to derail this topic here, but when it was previously talked about, was the possibility of doing some DSP on the very final image brought up (wrt AA)? It's probably way too intensive for realtime use, more of a theoretical thing, but it would be wholly independent from what the GPU was doing.
 
Laa-Yosh said:
I haven't heard that the landscape demo had AA. Still, it's a fully software renderer, so obviously the CPU can calculate AA because it does the rest of the calculations as well. But it cannot do AA for the GPU...

Well, what's the difference? If the SPEs can do multisampling, texture filtering, blending, lighting etc. all on their own at a 1280x720 resolution @ 30 fps (in that demo), then I guess it's not too slow for just doing AA alone for the GPU (since they can exchange any data anyway, 35GB/s should also not be too slow for AA).
 
Nemo80 said:
Well, what's the difference? If the SPEs can do multisampling, texture filtering, blending, lighting etc. all on their own at a 1280x720 resolution @ 30 fps (in that demo), then I guess it's not too slow for just doing AA alone for the GPU (since they can exchange any data anyway, 35GB/s should also not be too slow for AA).

The issue here is not that "it can't be done". The issue is that "it probably won't be done", because the devs won't bother unless there is an easy and cheap way to do it. In the end, the exchange of data between Cell and RSX in such a situation could slow everything down.

We don't know as of now, and in our ignorance, our stance is to assume it won't be done, until we see it.

Fairly simple if you ask me.
 
Nemo80 said:
Well, what's the difference? If the SPEs can do multisampling, texture filtering, blending, lighting etc. all on their own at a 1280x720 resolution @ 30 fps (in that demo), then I guess it's not too slow for just doing AA alone for the GPU (since they can exchange any data anyway, 35GB/s should also not be too slow for AA).
It doesn't work that way. Antialiasing involves rendering multiple samples for a pixel, basically subdividing it and colouring the pixel with an average of the samples. In the raytracer demo, Cell was casting up to 16 samples per pixel ("adaptive multisampling" was the term used in the writeup on the demo).

If the GPU is rendering the graphics, there's nothing the Cell can do to sample subpixel resolutions for AA. For Cell to contribute to AA it'll be through magical jiggerypokery and not conventional multisampling techniques.
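To make the "average of the samples" idea concrete, here's a minimal NumPy sketch of an ordered-grid supersampling resolve (purely illustrative; not code from the demo):

```python
import numpy as np

def downsample_2x2(supersampled):
    """Average each 2x2 block of subpixel samples into one output pixel
    (an ordered-grid supersampling resolve)."""
    h, w = supersampled.shape
    return supersampled.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A hard vertical edge rendered at twice the target resolution:
# left side black (0.0), right side white (1.0), with the edge
# falling mid-pixel in the low-res grid.
hi_res = np.zeros((4, 8))
hi_res[:, 5:] = 1.0          # edge at subpixel column 5

lo_res = downsample_2x2(hi_res)
print(lo_res[0])             # the pixel straddling the edge gets a grey value
```

The point of the example is that the grey value at the edge only exists because subpixel samples were rendered in the first place; with just the final 1x-resolution buffer, that information is gone.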
 
Shifty Geezer said:
It doesn't work that way. Antialiasing involves rendering multiple samples for a pixel, basically subdividing it and colouring the pixel with an average of the samples. In the raytracer demo, Cell was casting up to 16 samples per pixel ("adaptive multisampling" was the term used in the writeup on the demo).

If the GPU is rendering the graphics, there's nothing the Cell can do to sample subpixel resolutions for AA. For Cell to contribute to AA it'll be through magical jiggerypokery and not conventional multisampling techniques.

Personally, I can see Cell doing the HDR and RSX doing the FSAA. That method just seems more believable.
 
The components aren't separable. HDR is a way of representing illumination values with a higher range than 256 discrete intensities. This is the data passing through the GPU when it renders the image. You can't take a 32-bit (8 bits per channel) render pipeline and add HDR afterwards. That's like expecting a photorealistic picture to be produced in oil on canvas with one artist adding the blue and yellow paint and another adding the red paint once the first has finished and the paint's dried. If you aren't mixing the paints all the way through, you can't get all the colours. If you aren't using HDR all the way through, you can't add it later on. You CAN fudge some HDR-like effects, but if you use HDR you're going to need something like FP16 all through the render pipeline and its associated performance hit (which as I understand it is mainly bandwidth; the shaders can support FP16 at no performance hit. I think higher bit resolutions might have a performance hit, but I'm not sure).
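The clipping argument can be made concrete: once bright values have been squeezed into 8 bits, no later processing can tell them apart. A minimal NumPy sketch (illustrative only; the Reinhard-style curve is just one example tone-mapping operator):

```python
import numpy as np

# Scene luminances spanning a wide dynamic range (linear units).
hdr = np.array([0.05, 0.5, 1.0, 4.0, 16.0], dtype=np.float32)

# An 8-bit-per-channel framebuffer can only hold [0, 1] (0..255):
# everything brighter than 1.0 clips to the same stored value.
ldr = np.clip(hdr, 0.0, 1.0)
stored = np.round(ldr * 255).astype(np.uint8)

print(stored)   # the 1.0, 4.0 and 16.0 inputs are now indistinguishable

# A simple Reinhard-style tone map applied to the true FP data keeps
# them distinct -- but only because the pipeline carried HDR throughout.
tonemapped = hdr / (1.0 + hdr)
print(tonemapped)
```

Once `stored` is all that survives, the three brightest inputs map to the same 255, which is why HDR can't be bolted on after an 8-bit pipeline.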

Short of some crazy edge filtering technology, some weird-ass tile-based rendering in the SPEs (wasn't it Fafalada talking about this?), or some as-yet uninvented technology, I don't know how Cell can contribute to AA. The graphics areas Cell can obviously contribute to are post-processing effects like blurs and warps, colour balancing, and general Photoshop-like effects, plus a degree of raytracing.
 
AA doesn't have to be multisampling though..

This probably wouldn't be feasible in realtime, or there are other issues - otherwise I'm sure someone would have mentioned it before - but from an old DSP class, I remember we had to take an image, embed it in an array of zeroes (e.g. 0, colour value, 0, colour value, and so on), and then we did something (here's where my memory gets hazy) which basically filled in the zero values, sort of like interpolation. I can't remember if it was Fourier transforming the image and then doing something before inverse Fourier transforming it, or if it was convolution, or what. I don't think it was a simple resize, but the upshot of it was that you took an image, say 256x256, and were able to retrieve an image at twice the resolution using the image's frequency info (you could retrieve "lost" info "in between" the existing colour values using the frequency domain, IIRC). It was pretty cool - we did it with the "Lenna" image, and it definitely improved the IQ noticeably. I remember my lecturer mentioning it in the context of anti-aliasing, that this process itself reduced jaggies, but there could also be the further step of then resizing the image back down to its original size. It's basically super-sampling, using some DSP tricks.
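What's described here sounds like frequency-domain (sinc) interpolation: zero-padding the spectrum fills in the "in between" samples from the existing frequency content. A minimal 1D NumPy sketch of the general technique (a reconstruction, not the exact class exercise, and it glosses over Nyquist-bin subtleties):

```python
import numpy as np

def fft_upsample(signal, factor=2):
    """Upsample by zero-padding the spectrum: the 'new' samples are
    interpolated from the existing frequency content (sinc interpolation)."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    padded = np.zeros(factor * n // 2 + 1, dtype=complex)
    padded[:len(spectrum)] = spectrum
    # rescale so sample amplitudes are preserved after the longer inverse FFT
    return np.fft.irfft(padded, n=factor * n) * factor

# A band-limited signal sampled at 16 points...
t = np.arange(16)
x = np.sin(2 * np.pi * 2 * t / 16)

# ...upsampled to 32 points: the original samples reappear at the even
# indices, with smoothly interpolated values in between.
y = fft_upsample(x, 2)
```

The catch, as discussed below, is that this can only reconstruct frequencies that were captured in the original samples; it cannot recover detail above the source's Nyquist limit.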

Anyone know what I'm talking about here? I really should remember it better. It's probably more of a theoretical thing - I'm not sure if the performance would be there to do it every 1/30th of a second. But it would be totally separate from the GPU; there would be no interdependency there - all that would be required would be the final image.
 
Never heard of it, but it does come under the 'magical jiggerypokery' umbrella of AA techniques :p

Edit: I'll add that there are some very expensive Photoshop plugins for upscaling images, and none does a fantastic job. I don't know any that can upscale an image such that, when you downscale it again, you have a better-looking image - certainly not without 'muddying' the clarity somewhat. But then maybe no-one's implemented your technique and there's a market for you!
 
Shifty Geezer said:
Edit: I'll add that there are some very expensive Photoshop plugins for upscaling images, and none does a fantastic job. I don't know any that can upscale an image such that, when you downscale it again, you have a better-looking image - certainly not without 'muddying' the clarity somewhat. But then maybe no-one's implemented your technique and there's a market for you!

Oh, it's not mine! It's a fairly standard DSP thing - I mean, I learnt it in college ;) Maybe it's just a fancy resize/upconversion... not sure what would happen once you downsampled it again. Maybe it could be combined with something else for better results.

Image processing is something Cell is good at though. There might be something that could be pulled out of the bag that'd be applicable and worthwhile.
 
I haven't been into signal processing since I graduated, so I'd rather not try to get into details. Nevertheless, doing AA in post-processing is not possible because you've already lost data about the signal, i.e. you need to have several samples for each pixel.
You could theoretically move that data onto the SPEs, but 1. it'd take a lot of bandwidth, and 2. the GPU has hardwired circuits to do the calculations, so why bother?

As for some magical stuff, well, there's a slim chance that Nvidia and Sony have managed to cook up something but they're keeping it a secret until the console's launch. But I'd be very surprised about that :) You know, there's nothing for free in 3D, as we B3D readers should all remember...
 
Laa-Yosh said:
I haven't been into signal processing since I graduated, so I'd rather not try to get into details. Nevertheless, doing AA in post-processing is not possible because you've already lost data about the signal, i.e. you need to have several samples for each pixel.

That's what I was wondering about. Is there a difference between a digitised photo and a framebuffer from this perspective? I guess the answer is yes. Would a digital photo be different? I'm having a hard time remembering the specifics of all this myself, but I guess there is a distinction between all of these.
 
Titanio said:
That's what I was wondering about. Is there a difference between a digitised photo and a framebuffer from this perspective? I guess the answer is yes. Would a digital photo be different? I'm having a hard time remembering the specifics of all this myself, but I guess there is a distinction between all of these.

A digital photo should already have AA incorporated, if I'm correct.
If it's from a camera, then mother nature took care of it; if it's been scanned, then the scanner did.
 
Laa-Yosh said:
A digital photo should already have AA incorporated, if I'm correct.
If it's from a camera, then mother nature took care of it; if it's been scanned, then the scanner did.

Yeah, I'm not thinking directly about "AA" but rather preservation of the original signal. I'm guessing that despite the fact that a scanned or digital photo is just a bunch of pixels, like a framebuffer, there is more information there in the frequency domain? That makes reasonable sense, at least in the case of the scanned photo, perhaps.
 
If you can antialias a 24-bit image captured from a scanner or camera, you can antialias an output buffer. Both are 2D colour data. If it were possible to extrapolate data in between pixels and upscale, you could then downscale and produce AA. Where Laa-Yosh talks about lost signal information, you could basically fabricate information. From the sounds of it, that's what you're suggesting, Titanio. An analogue source like a photo contains a lot of noise (high frequencies), but that shouldn't aid upscaling in any way.

Upscaling processes are all about trying to derive this extra information in all sorts of ways, but as I say, I've never seen any be particularly useful. Though I have seen great results clearing up blurred images, such as astronomical pics. But perhaps upscale + Gaussian + one of these processes + downscale could manage to smooth edges a bit?
 
Titanio said:
That's what I was wondering about. Is there a difference between a digitised photo and a framebuffer from this perspective? I guess the answer is yes. Would a digital photo be different?
Well, from the signal processing perspective, I think it would be simpler to confine this to SSAA (which as far as I'm concerned is the only true AA there is) -- what you've got is a source which contains information up to some frequency range, and a destination framebuffer which is essentially a specific sample rate (or resolution). Now with 4x SSAA, you've got information covering twice the frequency range (twice in each dimension makes 4x), so the Nyquist limit of the source sample data (internal rendering resolution) is equal to the destination sample rate. Obviously, you run into the same issues you get with sound as the frequency of certain waves gets closer to the Nyquist limit of the sample rate without actually hitting it, which is why things like 16x SSAA exist.
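The folding behaviour behind this Nyquist argument is easy to show in one dimension: a frequency above the Nyquist limit produces exactly the same samples as a lower "alias" frequency, which is what jaggies are in the spatial domain. A small illustrative NumPy sketch:

```python
import numpy as np

fs = 8                       # sample rate (think: pixels per unit length)
n = np.arange(32)
t = n / fs

# A 7 Hz sine sampled at 8 Hz is above the 4 Hz Nyquist limit...
above_nyquist = np.sin(2 * np.pi * 7 * t)

# ...and yields exactly the same samples as a -1 Hz sine: the high
# frequency "folds" back and masquerades as a false low frequency.
alias = np.sin(2 * np.pi * (7 - fs) * t)

print(np.allclose(above_nyquist, alias))
```

Once sampled, the two signals are indistinguishable, which is why no post-process on the samples alone can undo the aliasing.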

With a photo, you can consider the original sample source to be photons bouncing around and eventually striking the CCD/film during the exposure time, and since you can basically say that the number of photons striking any single "pixel" in the array is pretty arbitrarily large -- well, you can say that photographs are basically infinitely antialiased (getting into the specific number of photons in a scene would be too exhaustive a discussion). Same thing happens with a scanner in that there's the light emitted from the scanner bouncing off the object being scanned and hitting a photosensitive element containing some number of pixels.

The best you can do as far as AA in a post-process is to come up with some sort of decent heuristic for what a superresolution image would look like -- some sort of edge-preserving thing such as covariance-derived solutions like NEDI or partial differential-derived solutions like isophote smoothing. As long as you have a halfway decent guess as to what frequency information was lost in creating a low-res render, you can get okay results.
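As a crude illustration of the "decent guess" idea (nothing as sophisticated as NEDI or isophote smoothing): linearly upsampling a scanline and box-filtering it back down fabricates intermediate values at hard edges. A hypothetical NumPy sketch:

```python
import numpy as np

def smooth_edge_postprocess(row, factor=4):
    """Crude post-process 'AA': linearly upsample a scanline, then
    box-filter it back down. A toy stand-in for smarter edge-preserving
    heuristics -- it guesses, rather than recovers, the missing
    subpixel information."""
    n = len(row)
    fine_x = np.linspace(0, n - 1, n * factor)
    upscaled = np.interp(fine_x, np.arange(n), row)   # linear upsample
    return upscaled.reshape(n, factor).mean(axis=1)   # box downsample

# A hard black-to-white step, as rendered with no AA:
row = np.array([0.0, 0.0, 1.0, 1.0])
smoothed = smooth_edge_postprocess(row)
print(smoothed)   # the step picks up intermediate grey values
```

The grey values here are pure fabrication from neighbouring pixels, not recovered subpixel coverage, which is exactly the distinction drawn above between a heuristic guess and true supersampling.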
 
ShootMyMonkey said:
With a photo, you can consider the original sample source to be photons bouncing around and eventually striking the CCD/film during the exposure time, and since you can basically say that the number of photons striking any single "pixel" in the array is pretty arbitrarily large -- well, you can say that photographs are basically infinitely antialiased (getting into the specific number of photons in a scene would be too exhaustive a discussion). Same thing happens with a scanner in that there's the light emitted from the scanner bouncing off the object being scanned and hitting a photosensitive element containing some number of pixels.

The best you can do as far as AA in a post-process is to come up with some sort of decent heuristic for what a superresolution image would look like -- some sort of edge-preserving thing such as covariance-derived solutions like NEDI or partial differential-derived solutions like isophote smoothing. As long as you have a halfway decent guess as to what frequency information was lost in creating a low-res render, you can get okay results.

Cheers, and my thanks also to Shifty and Laa-Yosh. The thing about this scanned photo that we used was that it was fairly aliased or artifacted, for whatever reason - be it processed to be that way for demo purposes or whatever. I can remember my lecturer pointing out, for example, how much smoother Lenna's eyebrow was after this operation, etc. I'll have to try and look this stuff up again.

Shifty - I was also thinking about a Gaussian blur in combination with some of this stuff, but it'd have to be fairly tightly controlled, I'd think. I should really whip out MATLAB one of these days and fiddle with it a bit ;)
 