Can CELL do AA?

I remember reading that, since RSX can read/write main memory (and indeed talk directly to CELL), it was possible for CELL to perform post-render effects on the RSX's output frame buffer.

Would it be possible to perform some kind of AA on CELL?

I appreciate this is very simplistic >> If we allocated a single SPE to the task and our resolution was 1920x1080p @ 60Hz @ 32-bit colour – that’s potentially 100 cycles available per pixel in order to pull off some kind of "cleaning up".
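For what it's worth, a rough back-of-envelope check of that figure (the 3.2 GHz clock and the 4-wide 32-bit SIMD are my assumptions about the SPE, not numbers from the post):

```python
# Rough per-pixel cycle budget for one SPE at 1080p60.
# Assumptions: 3.2 GHz SPE clock, 4-wide 32-bit SIMD per cycle.
SPE_CLOCK_HZ = 3.2e9
WIDTH, HEIGHT, FPS = 1920, 1080, 60

pixels_per_second = WIDTH * HEIGHT * FPS             # 124,416,000
cycles_per_pixel = SPE_CLOCK_HZ / pixels_per_second  # ~25.7 scalar cycles

# Each SPE cycle can touch four 32-bit values at once via SIMD, which is
# roughly where a "~100 ops per pixel" figure could come from:
simd_ops_per_pixel = cycles_per_pixel * 4            # ~103

print(f"{cycles_per_pixel:.1f} cycles/pixel, ~{simd_ops_per_pixel:.0f} SIMD ops/pixel")
```

So about 26 scalar cycles per pixel, or around 100 SIMD-lane operations — in the same ballpark as the figure above.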

Is the 256KB of SPE local store enough to hold some kind of AA routine, and also a meaningful amount of streaming data from RSX?
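A rough sizing sketch (the 64KB code/stack figure and the four-buffer split are purely my assumptions, just to show the arithmetic):

```python
# How much pixel data fits in an SPE's 256 KB local store alongside code?
LOCAL_STORE = 256 * 1024
CODE_AND_STACK = 64 * 1024   # assumed budget for the AA routine + stack
BYTES_PER_PIXEL = 4          # 32-bit colour
BUFFERS = 4                  # input + output tiles, double-buffered for DMA

tile_bytes = (LOCAL_STORE - CODE_AND_STACK) // BUFFERS
tile_pixels = tile_bytes // BYTES_PER_PIXEL
scanlines = tile_pixels // 1920
print(tile_pixels, scanlines)  # 12288 pixels per tile, 6 scanlines of 1080p
```

So even with generous space reserved for code, you could stream several full 1080p scanlines at a time — the local store itself doesn't look like the bottleneck.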

Feel free to shoot this idea down if it’s a non-starter, as I really don't know how AA actually works (lol) - just doing the whole "thinking out loud" thing...

On another note, if this is a non-starter, what kinds of cool effects would be possible using an SPE to do some post-processing on the frame buffer?

Cheers.
 
Why would it do FSAA?

I would expect the RSX will be a fillrate beast if it's based on the G70 or G80. Even if it doesn't have the full ROP setup of the G70 or G80, a 6800GT card would be enough for 2-4x FSAA with minimal performance hit at 1080p. I would expect that in the PS3 the RSX will be bandwidth limited, not fillrate limited.
 
jvd said:
Why would it do FSAA?

I would expect the RSX will be a fillrate beast if it's based on the G70 or G80. Even if it doesn't have the full ROP setup of the G70 or G80, a 6800GT card would be enough for 2-4x FSAA with minimal performance hit at 1080p. I would expect that in the PS3 the RSX will be bandwidth limited, not fillrate limited.

You do know that 1080p televisions are limited to 30 fps, don't you? I was under the impression this was a tech-savvy site, not a fanboy jerkfest forum.....
 
jvd said:
Watch the attacks on members if you want to stay part of this forum

Edited it. Do you know something we don't regarding 1080p? The sets are limited to 30 fps. In fact, I was under the impression that the 1080p 'wow factor' that Sony is flaunting is really just in regard to their Blu-ray media capability, not games.
 
As others have pointed out before, they are pushing a new standard of 1080p @ 60Hz.

But be that as it may. I have only stated that it will have enough power to render that res with that amount of FSAA in terms of fillrate. I highly doubt it can in terms of bandwidth. But I don't see Cell having access to more bandwidth than the RSX, so I don't see it helping.
 
Think of it like the current situation...

NTSC TVs are approx 480i (525i including overscan/blanking), but many games render internally at 480p to give flicker fixing.

Although the basic TV standard was 1080p @ 30Hz and 1080i @ 60Hz in the initial broadcast specs, 1080p @ 60Hz is now an option, which is supported by at least some of the LCD HDTVs on the market. (And if you want to use a computer monitor, it's much more likely to support 1080p @ 60Hz than 30Hz or 1080i, because that res is close to the 1920x1200 monitor res.)
 
just_some_gamer said:
as I really don't know how AA actually works
Indeed. ;)

FSAA, on today's computers, is done by Super Sampling (SSAA) or by Multi Sampling (MSAA).
Neither technique relies on the CPU. In other words, Cell doesn't play a part in AA.
 
On another note, if this is a non starter, what kinds of cool effects would be possible using an SPE to do some post-processing on the frame buffer?

I'd like an answer on this, too. It would basically be convolutions, but up to what matrix size (3x3? 4x4?). I know most of Photoshop's filters are convolution based. I like the idea of treating the final frame buffer picture as a 2D space with Photoshop-like possibilities.
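For illustration, here's the plain 3x3 convolution behind most of those Photoshop-style filters (a toy sketch in Python, nothing to do with actual SPE code):

```python
# A minimal 3x3 convolution over a grayscale "framebuffer" - the operation
# behind most Photoshop-style filters (blur, sharpen, edge detect).
def convolve3x3(img, kernel):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):          # skip the border for simplicity
        for x in range(1, w - 1):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    acc += img[y + ky - 1][x + kx - 1] * kernel[ky][kx]
            out[y][x] = acc
    return out

box_blur = [[1/9] * 3 for _ in range(3)]   # each output pixel = neighbourhood average
img = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
blurred = convolve3x3(img, box_blur)
print(blurred[1][1])  # 4.0: the average of the 3x3 neighbourhood around (1,1)
```

Swap the kernel and you get sharpen, emboss, edge detection, and so on — which is why the "matrix size" question is the right one to ask.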
 
Just_some_gamer: Antialiasing isn't an effect you can apply to a 2D rendered image. You can't take a jaggy PS2 screen grab, pop it into Photoshop and apply an 'antialias' effect. Antialiasing needs to be done during the picture's creation, taking multiple samples for each pixel and spewing out some kind of average.
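Supersampling in miniature, to show why the extra samples have to exist before the average can (a toy sketch; a real MSAA resolve on the GPU is more involved):

```python
# Render at 2x resolution, then average each 2x2 block down to one pixel.
# AA can't be bolted onto a finished image: the extra samples come first.
def downsample2x(hi):
    h, w = len(hi) // 2, len(hi[0]) // 2
    return [[(hi[2*y][2*x] + hi[2*y][2*x+1] +
              hi[2*y+1][2*x] + hi[2*y+1][2*x+1]) / 4
             for x in range(w)] for y in range(h)]

# A hard diagonal edge at 2x resolution...
hi = [[255, 255, 255, 255],
      [  0, 255, 255, 255],
      [  0,   0, 255, 255],
      [  0,   0,   0, 255]]
lo = downsample2x(hi)
print(lo)  # [[191.25, 255.0], [0.0, 191.25]] - edge pixels land between 0 and 255
```

The output pixels on the edge take intermediate values instead of jumping straight from black to white, which is exactly the "smoothing" AA delivers.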

_phil_: I can't see what an SPE can do that the RSX couldn't. We're just starting to see GPUs used for image-processing effects, and they're a lot faster than your PC's CPU. The only advantage to Cell, I guess, is that it can take over this post-processing and leave the RSX to the rendering.

As for effects, I've seen warping, blurring and so forth on a lowly PS2. I'm hoping some advanced stuff will appear: non-photorealistic rendering, like Okami but more so. Even pencil-sketch renderings (check my own Sketch Rendering plugin here for what would be cool in realtime!).
 
I think Cell will have enough on its hands doing AI, physics, sound, procedural textures and whatnot, plus the general CPU tasks that get assigned to it.

I don't think we are going to see it doing much else, and the same goes for the X360 CPU.
 
Shifty Geezer said:
As for effects, I've seen warping, blurring and so forth on a lowly PS2. I'm hoping some advanced stuff will appear: non-photorealistic rendering, like Okami but more so. Even pencil-sketch renderings (check my own Sketch Rendering plugin here for what would be cool in realtime!).

The newest Edge talks about some effects The Getaway demo was using. They were trying to capture how London might look through a tourist's camcorder (and looking at the demo, they seem to have got that "look" quite convincingly). Some of the optical effects mentioned include automatic white balance, emulation of a camera's autofocusing, and depth of field. If we remember the Chatani interview, he was saying that Cell was handling the rendering all on its own.

(Also, on another interesting note, Edge says that the demo was reusing assets from the PS2 series, which may help give more credibility to the "only on Cell" claim. It is in fact just a tech demo apparently, not a game, so reusing existing assets makes sense.)

SPEs are supposed to be really good at image processing, no? Surely some things in that way could be done as well as or better on them than on the GPU? They are quite a bit more general, I presume? Some things that would be hard to do within the confines of a GPU could be passed to Cell.

edit - a dumb question, perhaps, but theoretically, for AA, could you not send the samples to the SPEs? The GPU does the sampling, the SPEs the rest. I'm not sure whether the external bandwidth you'd save doing it that way would outweigh the cost of eating into Cell's internal bandwidth, however.
 
Titanio said:
The newest Edge talks about some effects The Getaway demo was using. They were trying to capture how London might look from a tourists camcorder (and looking at the demo, they seem to have got that "look" quite convincingly). Some of the optical effects mentioned include automatic white balance, emulation of a camera's autofocussing and depth of field. If we remember the Chatani interview, he was saying that Cell was handling the rendering all on its own..
Is this officially confirmed? The E3 demo had them talking of 'mostly produced on Cell'. Mostly. Talk was of Cell creating a virtual city, with individuals going about their own lives. A living world. Unless something has since been said, I don't think those visuals were created on Cell (though at first I thought they were). Perhaps the geometry was, but not the shading? :?

SPEs are supposed to be really good at image processing, no? Surely some things in that way could be done as well as or better on them than on the GPU? They are quite a bit more general, I presume? Some things that would be hard to do within the confines of a GPU could be passed to Cell.
They're both as suitable, I think. Post-processing is taking loads of pixel data, performing a transformation or two on it, and chucking it back out. That's stream processing, which is what both SPEs and pixel pipelines do. Pixel shader code is ideally suited to this, and won't need the data to be rendered, passed to Cell, processed, then sent back. Though I can imagine some effects will want full-scene histograms and such, so there may be a case for creating the buffer and then processing it as a whole.
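As a toy example of the kind of whole-buffer pass meant here — a crude auto-contrast that needs a full-frame statistic (just min/max rather than a real histogram) before it can touch any pixel:

```python
# Crude auto-contrast: find the frame's min/max luminance, then stretch the
# whole range to 0-255. The first pass needs the entire buffer, which is why
# this can't be done pixel-by-pixel in a single streaming pass.
def auto_contrast(img):
    flat = [p for row in img for p in row]
    lo, hi = min(flat), max(flat)     # full-frame statistic
    if hi == lo:
        return img
    scale = 255 / (hi - lo)
    return [[int((p - lo) * scale) for p in row] for row in img]

frame = [[64, 96], [128, 192]]        # a dim, low-contrast "frame"
print(auto_contrast(frame))           # [[0, 63], [127, 255]]
```

Auto white balance and auto exposure, as mentioned for The Getaway demo, are the same shape of problem: gather a global statistic, then remap every pixel.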

I think Sony's idea of Cell working with RSX is more in synthesis, geometry creation, texture creation. Procedural 'dirt' and the like.
 
Shifty Geezer said:
Is this officially confirmed? The E3 demo had them talking of 'mostly produced on Cell'. Mostly. Talk was of Cell creating a virtual city, with individuals going about their own lives. A living world. Unless something has since been said, I don't think those visuals were created on Cell (though at first I thought they were). Perhaps the geometry was, but not the shading? :?

Sony's CTO said in an interview after E3:

"For example we showed the demo that renders London City, it's not rendered in the GPU but the CELL does lighting and texture processing then outputs it to the frame buffer. Even without GPU, only CELL can create good enough 3D graphics. "

Which seems pretty clear cut. I'd like it if someone could confirm it again in another interview, perhaps with a Harrison or the like, just to make sure there were no crossed wires. Some of the tech sites really need to get interviews with the Sony guys and get more detail out of them on the demos and on Cell/RSX collaboration, or Cell and graphics.

Shifty Geezer said:
They're both as suitable, I think. Post-processing is taking loads of pixel data, performing a transformation or two on it, and chucking it back out. That's stream processing, which is what both SPEs and pixel pipelines do. Pixel shader code is ideally suited to this, and won't need the data to be rendered, passed to Cell, processed, then sent back. Though I can imagine some effects will want full-scene histograms and such, so there may be a case for creating the buffer and then processing it as a whole.

I think Sony's idea of Cell working with RSX is more in synthesis, geometry creation, texture creation. Procedural 'dirt' and the like.

Cheers. I think if Cell can access the framebuffer reasonably (either through FlexIO, or if RSX is rendering it to XDR), arbitrary post-processing on Cell (within the bounds of performance, of course) could be attractive.
 
OpaOpa said:
You do know that 1080p televisions are limited to 30 fps, don't you? I was under the impression this was a tech-savvy site, not a <bleep> jerkfest forum.....

The 1080p broadcast standard is limited to 30Hz, but most 1080p sets are happy to do 60Hz. Besides, the standard will be added in the near future.
 
Sony's CTO said in an interview after E3:

"For example we showed the demo that renders London City, it's not rendered in the GPU but the CELL does lighting and texture processing then outputs it to the frame buffer. Even without GPU, only CELL can create good enough 3D graphics. "

This is great. I read that but didn't fully understand it. I guess I still don't, but why is this news not reported by the wider media?
 
From work done on movie-scale CGI development: the X360's AA will heavily depend on the results of tiling. The PS3 has 512MB of RAM available for video and does not need to tile. Because of the negative effect tiling causes on the X360, I can see how 2x AA on PS3 could look equal to 4x AA on X360. This is just from the noticeable effect tiling has on video's AA requirements in movie CGI. There is, of course, no way not to tile the video on a cluster box.
 
mckmas8808 said:
Sony's CTO said in an interview after E3:

"For example we showed the demo that renders London City, it's not rendered in the GPU but the CELL does lighting and texture processing then outputs it to the frame buffer. Even without GPU, only CELL can create good enough 3D graphics. "

This is great. I read that but didn't fully understand it. I guess I still don't but why is this news not reported by the full media?

It was in a Japanese interview that went pretty much unreported. Someone translated it here, but I didn't really see it get much coverage beyond that.

The western sites etc. really should be more on the ball, but I guess they have less access to the SCE Japan people. And it seems those that do have access to SCEE people (Harrison and the like) don't ask the right questions :p

edit - a bit more info on Cell<->RSX stuff... I knew I remembered reading about post-processing on Cell in one of these interviews:

David Kirk: SPE and RSX can work together. SPE can preprocess graphics data in the main memory or postprocess rendering results sent from RSX.

Nishikawa's speculation: for example, when you have to create a lake scene by multi-pass rendering with multiple render targets, an SPE can render a reflection map while RSX does other things. Since a reflection map requires less precision, it's not much overhead even though you have to load the related data into both main RAM and VRAM. It works like SLI between SPE and RSX.

David Kirk: Post-effects such as motion blur, depth-of-field simulation, and the bloom effect in HDR rendering can be done by SPEs processing RSX-rendered results.

Nishikawa's speculation: RSX renders a scene into main RAM, then SPEs add effects to the frames in it. Or you can composite SPE-created frames with an RSX-rendered frame.

David Kirk: Let SPEs do vertex-processing then let RSX render it.

Nishikawa's speculation: You could implement a collision-aware tessellator and dynamic LOD on the SPEs.

David Kirk: SPE and GPU work together, which allows physics simulation to interact with graphics.

Nishikawa's speculation: For the expression of water wavelets, a normal map can be generated by pulse physics simulation with a height-map texture. This job is done on the SPEs and RSX in parallel.
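The post-processing pattern Kirk describes could be sketched like this (hypothetical: the "DMA" is just list slicing here, and the tile size is an arbitrary assumption — the point is that an SPE only ever sees one local-store-sized tile at a time):

```python
# Tile-at-a-time post-processing, the shape an SPE pass would take: DMA a
# tile of the RSX-rendered frame in, process it in local store, DMA it out.
TILE_LINES = 6  # scanlines per tile, assumed to fit in the 256 KB local store

def process_frame(frame, effect):
    out = []
    for y in range(0, len(frame), TILE_LINES):
        tile = frame[y:y + TILE_LINES]   # "DMA" the tile into local store
        out.extend(effect(tile))         # run the post-effect on the tile
    return out                           # "DMA" the result back out

def invert(tile):                        # stand-in for any per-pixel effect
    return [[255 - p for p in row] for row in tile]

frame = [[0, 128, 255]] * 12             # a tiny 12-line "framebuffer"
print(process_frame(frame, invert)[0])   # [255, 127, 0]
```

On real hardware the next tile's DMA would be issued while the current one is being processed (double buffering), so the SPE never sits idle waiting on memory.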
 
Titanio said:
mckmas8808 said:
Sony's CTO said in an interview after E3:

"For example we showed the demo that renders London City, it's not rendered in the GPU but the CELL does lighting and texture processing then outputs it to the frame buffer. Even without GPU, only CELL can create good enough 3D graphics. "

This is great. I read that but didn't fully understand it. I guess I still don't but why is this news not reported by the full media?

It was in a Japanese interview that went pretty much unreported. Someone translated it here, but I didn't really see it get much coverage beyond that.

The western sites etc. really should be more on the ball, but I guess they have less access to the SCE Japan people. And it seems those that do have access to SCEE people (Harrison and the like) don't ask the right questions :p
That's true. The only interview I could find (although it was a quick search) with Phil Harrison talking about the demos was with Eurogamer, and that still didn't go into much detail.

http://www.eurogamer.net/article.php?article_id=59243
 