Could Cell, in theory, do FSAA?

Just a quick question... I was just playing back some of the Cell-rendered demos from E3 and they all had FSAA implemented.

Is it possible? And how much power would it tax?

And what level would it be? 2x, 4x, 6x?
 
What resolution were those vids you saw? The originals were rendered at 1080p, so downscaled for your monitor you'd get SSAA. I don't think the Cell was rendering them with AA (though I'd happily be wrong!!)
 
Can't you do anything anyway through software rendering on your main CPU (Cell, Intel or PowerPC)?
It may be slow as hell, but it can be done eventually.
 
You can't really separate rendering from antialiasing, so the only way CELL is going to do FSAA is if you render entirely in software.

Well, I've heard people mention that terrain demo, though I never actually did see it... When you get down to it, that was a voxel raycast/raytraced terrain renderer. We had that in software, in-game (namely Outcast), back in the era of the Pentium Pro. Naturally, back then, everything was lower resolution, but scaling that up to CELL, and bearing in mind that raycasting is almost infinitely parallelizable (as long as there are rays to trace) and the actual ray-hit tests and BRDF stuff are not that difficult to translate to SIMD... it wouldn't be that hard to picture it being rendered in realtime with SSAA.
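Just to illustrate what I mean by "almost infinitely parallelizable", here's a rough heightmap-raycaster sketch. This is nothing like Outcast's actual code, and the map size, FOV step and projection constants are made-up placeholders; the point is only that every screen column is an independent ray march, so columns can be farmed out to separate SPEs/threads:

Code:
// Rough sketch of a heightmap (voxel) raycaster: every screen column is
// independent work, so columns can be processed in parallel.
// Constants below are made-up placeholders, not from any real demo.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

const int kMapSize = 1024;
std::vector<uint8_t>  heightMap(kMapSize * kMapSize);  // terrain height
std::vector<uint32_t> colorMap(kMapSize * kMapSize);   // terrain color

// Trace one screen column: march a ray across the heightmap and draw
// vertical spans where the terrain rises above the current horizon line.
void traceColumn(int x, int screenW, int screenH,
                 float camX, float camY, float camZ, float angle,
                 uint32_t* frame)
{
    float rayAngle = angle + (x - screenW / 2) * 0.002f;  // made-up FOV step
    float dx = std::cos(rayAngle), dy = std::sin(rayAngle);
    int horizon = screenH;                                // lowest drawn pixel so far

    for (float dist = 1.0f; dist < 600.0f; dist += 1.0f) {
        int mx = int(camX + dx * dist) & (kMapSize - 1);
        int my = int(camY + dy * dist) & (kMapSize - 1);
        float h = heightMap[my * kMapSize + mx];
        int screenY = int((camZ - h) / dist * 240.0f + screenH / 2);
        if (screenY < horizon) {                          // terrain pokes above horizon
            for (int y = std::max(screenY, 0); y < horizon; ++y)
                frame[y * screenW + x] = colorMap[my * kMapSize + mx];
            horizon = screenY;
        }
    }
}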
 
It's usually not possible to add AA in post processing because you don't know where the edges are, and edge detection is highly unreliable.

However, if you have the Z-buffer, or the actual polygon data, to work with, then you can figure out exactly where the edges are. And once you know that, you can do post-processing AA. I'm not exactly sure how I would illustrate the 'finding edges' part, but here I've tried illustrating the 'blending edges' part:


[Image: postaa17lc.png]

(2x zoom) This is supposed to be a small cutout of a polygon edge. By looking at the Z-buffer or polygon data we have figured out that this is an edge that should be AA'd.


[Image: postaa27mm.png]

(8x zoom) We partition the edge into segments (the white box) and blend with the closest neighbor pixels (black boxes). Blending with neighbor pixels will introduce some blurring, but not too much, because only the edges get blurred (unlike Quincunx AA, where everything is blurred) and the neighbor will always contribute less than the centre pixel to the final color. The blending factor is simple linear interpolation from 100%/0% to 0%/100%. (There's a rough code sketch of this blending step at the end of this post.)


[Image: postaa33ea.png]

This is what the segment looks like afterwards.


[Image: postaa43zw.png]

(2x zoom) And here the entire edge is AA'd.

Now, while it is possible to do post-processing AA if you know exactly where the edges are, I don't expect anybody to actually do it. (Too much work to be worth the effort, for both the Cell and the programmer.)
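For what it's worth, here's a very rough C++ sketch of the "blending edges" step from the pictures above. This is my own interpretation (the function names and the 50% cap are made up); it assumes edge detection has already handed us a horizontal run of edge pixels, and it ramps the neighbor's weight linearly along the segment so the centre pixel stays dominant:

Code:
// Rough sketch of the "blending edges" step. Assumes the edge-finding pass
// (Z-buffer or polygon data) has already identified a horizontal segment.
#include <cstdint>

// Per-channel linear interpolation between two 32-bit ARGB colors.
static inline uint32_t lerpColor(uint32_t a, uint32_t b, float t)
{
    uint32_t result = 0;
    for (int shift = 0; shift < 32; shift += 8) {
        float ca = float((a >> shift) & 0xFF);
        float cb = float((b >> shift) & 0xFF);
        result |= uint32_t(ca + (cb - ca) * t + 0.5f) << shift;
    }
    return result;
}

// Blend the segment [x0, x1) on row y with the row across the edge
// ('neighborRow'). The neighbor's contribution ramps linearly from 0% at
// x0 up to maxBlend at x1; maxBlend <= 0.5 keeps the centre pixel dominant.
void blendEdgeSegment(uint32_t* frame, int pitch,
                      int x0, int x1, int y, int neighborRow,
                      float maxBlend = 0.5f)
{
    int len = x1 - x0;
    for (int i = 0; i < len; ++i) {
        float t = (len > 1) ? float(i) / float(len - 1) : 0.0f;
        uint32_t centre   = frame[y * pitch + x0 + i];
        uint32_t neighbor = frame[neighborRow * pitch + x0 + i];
        frame[y * pitch + x0 + i] = lerpColor(centre, neighbor, t * maxBlend);
    }
}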
 
Cell is not a GPU or a rasterizer. Why on earth would you want to do software-based FSAA on a CPU that does not have hardware specifically designed to do anti-aliasing?
 
Megadrive1988 said:
Cell is not a GPU or a rasterizer. Why on earth would you want to do software-based FSAA on a CPU that does not have hardware specifically designed to do anti-aliasing?

I think the motivation behind this thread is simple.

The thread starter knows that the G70 can't do HDR and MSAA at the same time, and he thinks there's a good chance that the same will hold true for the RSX (although hopefully Sony has fixed this limitation in the RSX).

He is looking for a way for the PS3 to be able to do both AA and HDR at the same time.

Having the Cell do HDR would be stupid, as the RSX would manage it FAR more easily, leaving only the Cell for AA.

But apparently there's no reasonable way for the Cell to do that... although I don't know why. I don't even know how AA is rendered.

Do you know?
 
In theory Cell can do AA as much as any other CPU on this planet, as long as it does the whole rendering itself (software rasterization). Obviously that's kinda pointless, as it would mean we'd pretty much have to bypass RSX entirely, reducing the PS3 into a kind of PS2.1 when it comes to rendering capabilities!

The way I understand hardware rasterization, it's impossible to have the CPU do the AA for something that's rendered on the GPU. It would be possible to engineer such a feature into the hardware, but both pieces would have to be developed from the beginning to work together in such a way, kinda like the mother and daughter die of the Xenos. FlexIO offers many possibilities for Cell and RSX to work together, but that would be WAY beyond its capabilities AFAIK. And even if it were possible, next-gen scene complexity and HD resolutions would cripple Cell's performance in this area! It took specialized hardware many years to finally deliver AA solutions that offer a decent tradeoff between quality and performance. How can anyone seriously expect a piece of general purpose hardware to offer anything even remotely comparable? Sony PR is a bitch, but it takes a fool to actually buy into it... ;)

The only other way would be to have Cell do some sort of post filtering, and I hope I don't have to explain how pointless that would be. It's been done in the past and the quality just sucks.
 
If HDR scenes are rendered by compositing separate HDR objects, those objects could be passed to Cell, have their outlines AA'd, and be returned prior to compositing. That doesn't sound outside the realms of possibility to me, but as I've said elsewhere, I don't understand the differences between HDR and normal rendering.
 
It's usually not possible to add AA in post processing because you don't know where the edges are, and edge detection is highly unreliable.

However, if you have the Z-buffer, or the actual polygon data, to work with, then you can figure out exactly where the edges are. And once you know that, you can do post-processing AA. I'm not exactly sure how I would illustrate the 'finding edges' part
The "finding edges" part can be fairly easy, but using it only in Z-buffer space will only enable you to conclusively find the edges between objects. There can still be "jaggies" within an object or within texture space on an object. Also the post-processing approach will not recover detail.

Take, for instance, the example of a thin wire in the distance that's barely a pixel wide. Using edge blending in post-process or MSAA, you'll only elongate a few twinkling pixels. Supersampling is the only solution for a problem like that, because it effectively raises the Nyquist limit of the sample space.
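To put that in concrete (and entirely hypothetical) code terms, here's a trivial box-filter SSAA resolve. With a factor x factor block of samples per output pixel, even a sub-pixel feature like that wire still shifts the average, instead of either hitting or missing the single sample. Plain scalar C++, nothing Cell-specific:

Code:
// Minimal box-filter SSAA resolve: average a factor x factor block of
// high-resolution samples down to one output pixel (32-bit ARGB).
#include <cstdint>
#include <vector>

std::vector<uint32_t> resolveSSAA(const std::vector<uint32_t>& hiRes,
                                  int outW, int outH, int factor)
{
    std::vector<uint32_t> out(outW * outH);
    int hiW = outW * factor;

    for (int y = 0; y < outH; ++y) {
        for (int x = 0; x < outW; ++x) {
            uint32_t sum[4] = {0, 0, 0, 0};
            // Accumulate each 8-bit channel over the sample block.
            for (int sy = 0; sy < factor; ++sy)
                for (int sx = 0; sx < factor; ++sx) {
                    uint32_t p = hiRes[(y * factor + sy) * hiW + (x * factor + sx)];
                    for (int c = 0; c < 4; ++c)
                        sum[c] += (p >> (c * 8)) & 0xFF;
                }
            uint32_t samples = factor * factor;
            uint32_t p = 0;
            for (int c = 0; c < 4; ++c)
                p |= ((sum[c] + samples / 2) / samples) << (c * 8);
            out[y * outW + x] = p;
        }
    }
    return out;
}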
 
Matrox had up to 16x Fragment AA (FAA) on the Parhelia, and while I don't remember all the details, I'm fairly sure it relied on some sort of hardware edge detection. It certainly didn't seem like it was "easy" to implement, though. It potentially offers better performance and higher bandwidth efficiency than SSAA or MSAA, but IIRC it didn't work flawlessly (not all edges were detected). If it were so easy and efficient, I'm sure others would have attempted the same by now. This kind of AA certainly isn't trivial in any way...

Theoretically many things are possible, but are they practical? Could some kind of software edge detection AA running on Cell offer acceptable performance for complex 3D scenes in HD resolutions at 60Hz? Sorry, to the best of my knowledge the answer HAS to be no. If it could do that, every engineer working in the 3D hardware industry over the past decade would have to be ashamed of himself and quit his job because he obviously sucks at what he's doing...
 
PS3 supports 1080p x2
Cell and RSX share mem access

Why not have RSX render two nearly identical frames with a jittered offset, and then have Cell access both of them and perform the actual FSAA blend? No need to worry about edge detection; just use post-rasterization supersampling. You should even be able to offload that to one of the SPEs fairly easily, I would assume... anyone see any problems with this?

You would chew up quite a bit of the RSX's time, and obviously it wouldn't work as well as 'native' AA (limited sampling, massive performance hit compared to MSAA, etc.), but it seems to me like it should be reasonable to get it done.
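The resolve step itself really is trivial; here's a rough sketch of averaging two jittered frames (names made up by me). It's plain scalar C++; on an SPE you'd stream tiles of both frames into local store via DMA and do the averaging with SIMD, which I'm hand-waving here:

Code:
// Rough sketch of the blend for the idea above: RSX renders the scene twice
// with a half-pixel jitter, and Cell averages the two 32-bit ARGB frames.
#include <cstddef>
#include <cstdint>

void blendJitteredFrames(const uint32_t* frameA, const uint32_t* frameB,
                         uint32_t* out, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i) {
        uint32_t a = frameA[i], b = frameB[i];
        // Per-channel average of packed 8-bit channels; the bit trick
        // avoids unpacking: avg = (a & b) + ((a ^ b) >> 1), with the mask
        // keeping shifted bits from leaking between channels.
        out[i] = (a & b) + (((a ^ b) & 0xFEFEFEFEu) >> 1);
    }
}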
 
As before, in theory that should be possible (although the technical details, especially on RSX, are still a bit too sparse to be sure), but would it be practical? HDR itself is known to incur a performance penalty (if G70 is anything to go by, that'd be 20-25% at HD resolutions). The method you describe would essentially be SSAA (kinda like how the Voodoo5 handled it, IIRC), which means you'd take a motherlode of a performance hit with every sample. To get 4xAA you'd burn 75% of your fillrate, with an equal increase in bandwidth requirements. Ouch! That'd cripple even the PS3.

So no matter how you twist it, something has to go. Either HDR, AA, acceptable performance or graphical detail, you choose...
 
The RSX is designed to handle two 1080p displays at the same time, and the max framerate at that resolution is only 30 anyway (unless someone comes up with a 1080p/60 standard). If the RSX ships with 32 pipes clocked at 500 MHz, it should have roughly 157% of the performance of the 7800GTX, which should be enough to handle at least 2x SSAA with decent performance at 1080p while maintaining next-gen visuals. At that resolution, 2x AA should compare decently with the XB360's 4x AA at 720p.
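Rough back-of-the-envelope math behind that, assuming the 7800GTX's 24 pixel pipes at 430 MHz: 32 x 500 = 16,000 versus 24 x 430 = 10,320, i.e. a factor of roughly 1.55 in theoretical pixel-shading throughput (ROP count and bandwidth aside).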
 
Ben, you're only taking into account fillrate and vertex/pixel power increases.

The fact of the matter is the RSX will have the same or less bandwidth than a 7800GTX.

The bandwidth hits from SSAA and HDR will be big, and doing both at the same time will most likely cripple any video card coming out for the next two years or so.

There was a reason why ATI and Nvidia went to MSAA after the first few SSAA designs.

As for two images at 1080p, that is yet to be confirmed. I don't think anyone here expects that without significant decreases in image quality.
 
jvd said:
...
The fact of the matter is the RSX will have the same or less bandwidth than a 7800GTX.
...

Please stop posting misinformation. We've been through this on many occasions. The current fact in this context is that the RSX has 22.4 + 35 ~ 57 GB/s of bandwidth available to it, although with varying latencies.
 
Jaws said:
jvd said:
...
The fact of the matter is the RSX will have the same or less bandwidth than a 7800GTX.
...

Please stop posting misinformation. We've been through this on many occasions. The current fact in this context is that the RSX has 22.4 + 35 ~ 57 GB/s of bandwidth available to it, although with varying latencies.

Please stop posting misinformation. We've been through this before. The Cell CPU will still need to access RAM also.


Aside from that, to actually store an HDR + SSAA buffer and other relevant data, the only bandwidth numbers that matter are the bandwidths to RAM, which would be
25.6 and 22.3, giving 47.9 GB/s, but that is only if the Cell chip never has to use any of the RAM bandwidth for itself.
 
jvd said:
Jaws said:
jvd said:
...
The fact of the matter is the RSX will have the same or less bandwidth than a 7800GTX.
...

Please stop posting misinformation. We've been through this on many occasions. The current fact in this context is that the RSX has 22.4 + 35 ~ 57 GB/s of bandwidth available to it, although with varying latencies.

Please stop posting misinformation. We've been through this before. The Cell CPU will still need to access RAM also.

Jvd, read carefully what I've posted. I'm only talking about the RSX. Either you do not understand this technology (we've been through this on many occasions), or you are being extremely stubborn, or I'm missing something. Not only have I gone through this several times, but so have actual PS2 devs on this board, yet you still ignore this number in this context.

:?
 