Let's talk FSAA on next-generation consoles, OK?

I am considering especially the PlayStation 3 and the possibility of having 32 MB of e-DRAM ( plus there are some Image Caches on each Processor Element of the Visualizer chip... they might be used for temporary texture storage or something else... to store VQ tables, maybe ? ;) )...

I am thinking about tiling the screen into 4 parts so that each PE can work on one of them... each PE processes separate polygons ( efficient with very tiny polygons... pixel- or sub-pixel-sized polygons ).

All in all, the result is basically as if we had one big front-buffer and one big back-buffer.

Let's consider 1280x720p ( which has 3x the pixels of 640x480p, so aliasing is already reduced ) and let's think about 4x AA.

Front-Buffer = full size at 24 bits, 8:8:8 R:G:B... no destination alpha.

Z-buffer = full size and 32 bits precision.

Back-buffer = extra-large size ( 4x AA... 2x Horizontal and 2x Vertical ) at FP24 per color component ( RGBA ).

Back-buffer = 1280 * 720 * 4 * 12 bytes ( FP24 RGBA ) = 44 MB

Front-buffer + Z-buffer = ( 1280 * 720 ) * ( 3 + 4 ) = 6 MB

Total memory needed = 50 MB

This would leave 14 MB for pure texture space if we had 64 MB of e-DRAM.

To fit the 32 MB constraint we would have to lower the FP precision of the back-buffer or drop to 2x AA ( which would cut the back-buffer in half ).

I'd rather keep the FP Back-buffer...

Back-buffer = 22 MB

Total memory needed = 28 MB

This would leave 4 MB of space for textures, and with good texture compression, plus procedural textures when available, we should be fine.
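As a sanity check, the budget arithmetic above can be sketched in a few lines of Python ( sizes in decimal MB, which is how the figures above round; the byte-per-pixel values are the ones assumed in this post ):

```python
def buffer_budget(width, height, aa_samples,
                  front_bpp=3,   # 8:8:8 RGB front buffer, no destination alpha
                  z_bpp=4,       # 32-bit Z
                  back_bpp=12):  # FP24 RGBA back buffer: 4 * 24 bits = 12 bytes
    """Return (back, front_plus_z, total) buffer sizes in decimal MB."""
    pixels = width * height
    back = pixels * aa_samples * back_bpp / 1e6
    front_z = pixels * (front_bpp + z_bpp) / 1e6
    return back, front_z, back + front_z

back, front_z, total = buffer_budget(1280, 720, 4)  # ~44.2, ~6.5, ~50.7 MB
```

Dropping to 2x AA halves only the back-buffer term, which is where the ~28 MB total comes from.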

Worst comes to worst, we can JPEG- or VQ-compress them and decompress them using the APUs.
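VQ decompression in particular is cheap enough for software: a VQ texture is just an index map plus a small codebook of texel blocks, so expanding it is one table lookup per block. A minimal sketch with 2x2 RGB blocks and a made-up two-entry codebook ( the block layout and codebook here are my own illustration, not any console's actual format ):

```python
# Vector quantization: each 2x2 block of texels is replaced by an index
# into a shared codebook, so a 256-entry codebook costs 1 byte per block.
def vq_decompress(indices, codebook, blocks_w, blocks_h):
    """Expand a VQ index map into a full texel grid of RGB tuples."""
    tex_w, tex_h = blocks_w * 2, blocks_h * 2
    out = [[None] * tex_w for _ in range(tex_h)]
    for by in range(blocks_h):
        for bx in range(blocks_w):
            block = codebook[indices[by * blocks_w + bx]]  # 4 texels: TL, TR, BL, BR
            out[by * 2][bx * 2],     out[by * 2][bx * 2 + 1]     = block[0], block[1]
            out[by * 2 + 1][bx * 2], out[by * 2 + 1][bx * 2 + 1] = block[2], block[3]
    return out

# Hypothetical 2-entry codebook: a flat red block and a flat blue block.
codebook = [((255, 0, 0),) * 4, ((0, 0, 255),) * 4]
texels = vq_decompress([0, 1, 1, 0], codebook, blocks_w=2, blocks_h=2)
```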

Do not forget that the Visualizer can still stream textures, and the bandwidth to the external memory is not insignificant: Rambus' Yellowstone can provide 25.6 GB/s ( about 21.3x the bandwidth between the GIF and the GS in the PlayStation 2 ). On PlayStation 3, vertex data should not come from the external memory; it should pass through a direct bus between the Broadband Engine and the Visualizer, and that would be fatter than the pipe between the Broadband Engine and the external Yellowstone DRAM.
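The 21.3x figure checks out if you take the PS2's GIF-to-GS bus as 1.2 GB/s ( a 64-bit bus at 150 MHz ):

```python
# Claimed Yellowstone external bandwidth vs. the PS2 GIF->GS bus.
yellowstone_gbps = 25.6
gif_to_gs_gbps = 64 / 8 * 150e6 / 1e9   # 64-bit bus at 150 MHz = 1.2 GB/s
ratio = yellowstone_gbps / gif_to_gs_gbps
print(f"{ratio:.1f}x")  # ~21.3x
```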

I do not see why you should not use Texture Streaming on PlayStation 3 if it can help you :D

On average, the dynamic texture budget ( for streaming textures in and out ) on a lot of PlayStation 2 titles is ~1 MB ( another ~1 MB or less is used for static texture storage )... we are talking about a 2x to ~4x increase in e-DRAM space for streaming textures, we now have much more bandwidth to the GPU, and we can do better than standard 8-bit CLUT compression.

I hope I have not made the math too fuzzy ( after this post some nice JFET and BJT transistors await me... exam coming up tomorrow ;) )...


If we were doing 4x AA at 640x480p...

Back-buffer = ~14 MB

Front-buffer + Z-buffer = ~2 MB

Total memory needed = ~16 MB

We have then ~16 MB for textures.
 
Hrm, looks like you're going for super sampling. Why not use aggressive MSAA and filtering? I figure there should be more than enough fillrate to spare to allow for this.
 
But MSAA uses the same space as FSAA, which was my initial concern...

I do not see why we could not use MSAA plus anisotropic and tri-linear filtering...
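That is the crux of the storage argument: without sample compression, an Nx MSAA buffer stores N color and Z samples per pixel, exactly like Nx supersampling; only the shading work differs. A small sketch ( generic per-sample sizes matching this thread's FP24 RGBA + 32-bit Z assumption, not any specific chip's layout ):

```python
def aa_costs(width, height, samples, color_bytes=12, z_bytes=4):
    """Return (buffer_bytes, msaa_shader_runs, ssaa_shader_runs).

    MSAA and SSAA need the same multisampled buffer; MSAA only saves
    shader executions (one per pixel instead of one per sample).
    """
    pixels = width * height
    buffer_bytes = pixels * samples * (color_bytes + z_bytes)
    return buffer_bytes, pixels, pixels * samples

buf, msaa_runs, ssaa_runs = aa_costs(1280, 720, 4)
```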
 
Panajev2001a said:
If we were doing 4x AA at 640x480p... Total memory needed = ~16 MB... We have then ~16 MB for textures.




1 teraflops and 16MB texture mem ?????
MEGALOL
you crazy:)
 
Using virtual texturing and the onboard DRAM as temporary storage, you'd still be able to apply >8 textures per pixel per frame (assuming 4-6 bits per texel and moderate waste)
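A back-of-the-envelope check of that claim, assuming only the texels actually sampled on screen are resident ( the bit rate matches the 4-6 bits stated above; the 1.5x waste factor is my own assumption ):

```python
def resident_texture_mb(width, height, layers, bits_per_texel, waste=1.5):
    """Rough virtual-texturing working set in decimal MB: only the texels
    actually sampled on screen need to be resident, times a waste factor."""
    texels = width * height * layers * waste
    return texels * bits_per_texel / 8 / 1e6

mb = resident_texture_mb(1280, 720, layers=8, bits_per_texel=5)  # ~6.9 MB
```

Roughly 7 MB of resident data for 8 layers at 720p, which fits comfortably in a 32 MB on-chip pool.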

Cheers
 
Of course, the PS3 should have at least 256-512 MB of Rambus Yellowstone main system memory. Plenty of room for textures there :)
 
Actually, all I really want is that smooth CGI look that you get even in the lowest-end CGI used in television commercials and TV series.
 
Panajev...

Why split the screen into four? Just because you have 4 PEs doesn't mean a split of four is the best way. The best tilers split a hell of a lot more, for various reasons...

Just thinking out loud, because I've heard you mention splitting into four a few times, and to me it doesn't seem the wise choice.
 
Well, that was to allow each Processor Element to basically work on its own sub-frame... the pixel pipelines would then be basically independent; they process different polygons.

I split the screen into four to show that we do not need 4 full-size separate front and back buffers for each Processor Element, as vers pointed out.

We can subdivide each sub-screen into small fixed-size tiles like 32x32, as my solution does not prevent a fully tiled approach from being used.
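One way the two levels could coexist, assuming a 2x2 screen split across 4 PEs with each quadrant further diced into 32x32 tiles ( the mapping is my own illustration, not a known PS3 detail ):

```python
TILE = 32

def tiles_for_screen(width, height):
    """Number of 32x32 tiles needed to cover the screen (rounding up)."""
    return (width + TILE - 1) // TILE, (height + TILE - 1) // TILE

def pe_for_tile(tile_x, tile_y, tiles_w, tiles_h):
    """Map a tile to one of 4 PEs via a 2x2 split of the tile grid."""
    px = 0 if tile_x < tiles_w // 2 else 1
    py = 0 if tile_y < tiles_h // 2 else 1
    return py * 2 + px

tw, th = tiles_for_screen(1280, 720)  # 40 x 23 tiles
```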

4 MB could host VQ or even JPEG textures ( the APUs should be able to do software JPEG decompression quite fast... )... at 8-12:1 compression, 4 * 8-12 = 32-48 MB of uncompressed textures.

You are also forgetting we can still stream textures ( ~Virtual Texturing ) from external RAM...

On top of that, we can use procedurally generated textures right on the Visualizer, taking very little memory.
 
1 TFLOPS is for the Broadband Engine ( maybe plus the FLOPS of the Visualizer )... they have separate e-DRAM and you still have external RAM...

1 TFLOPS == more polygons ( less detail that has to be faked with textures ), better lighting ( less use of lightmaps ), fewer pre-calculated textures ( why use cube maps for normalization if we can do that in real time ? ), more procedurally generated textures, etc...
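The normalization example is the easy one to show: a normalization cube map only exists to approximate a per-pixel reciprocal square root, which FP shader math computes directly. A sketch of the replaced lookup:

```python
import math

def normalize(v):
    """Per-pixel vector normalization -- the operation a normalization
    cube map used to approximate with a texture lookup."""
    inv_len = 1.0 / math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] * inv_len, v[1] * inv_len, v[2] * inv_len)

n = normalize((3.0, 0.0, 4.0))  # ~(0.6, 0.0, 0.8)
```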

I do not see, as my previous post also indicated, what your mega LOL is about ;)
 
Do you think, with so many polygons promised, it would still make sense to put the Z data in some sort of extremely fast cache for processing?
 
Do you think Sony and the rest of the console industry will move to Floating Point precision or stay with Integer internal calcs for another generation?
 
With that much Z-data you're going to have to get rid of a lot and compress the rest, but I'm getting off topic.

Panajev, I thought your tiling scheme -- like TBDRs -- would render into a small local buffer, so you'd get spatially free MSAA; the only cost would be computation -- fillrate.
 
I'm not even sure how useful or noticeable 4x AA at 1280x720 on an HDTV will really be (but I realize that is not the focus of the topic). Maybe 2005 technology could prove differently, but I tend to believe that diminishing returns will weigh in for what ends up being typical consumer equipment of the age. ...just IMO.
 
That type of AA might work for PS3, but I'm sure there are some new types of AA that can be thought of that will be of a higher advantage to PS3 hardware. Would Z3 work?
 
Sonic said:
That type of AA might work for PS3, but I'm sure there are some new types of AA that can be thought of that will be of a higher advantage to PS3 hardware. Would Z3 work?

I don't see how using a BMW would help matters, not to mention that the overall cost would go up through the roof. ;)

Being serious, what's Z3?
 
I THINK that at 720p there will not be a big need for AA. It would only make things look blurry; it would look like GC games compared to the sharpest PS2 games... I personally prefer the sharp look, not the blurry look, and at 720p the screen will be detailed and sharp enough not to need a lot of AA, which could end up blurring the image and losing detail. But maybe that's just me...
Personally, I'm convinced that, at least in Europe, there will be NO support for 720p AT ALL... so I'll have to make do with 480p with AA :rolleyes: Call me pessimistic...
 
I really wouldn't want to see aliasing, shimmering, or any sort of artifact in next-gen games. I expect nigh-perfect IQ for almost all titles, and if this is not achieved... it will be disappointing indeed...

I think the original rez of the cancelled GS3, 4xxx X 2xxx, is a good start; they should try to render at such a high rez, at least internally...

Well at least having a small 13" HDtv should make each pixel small enough to make aliasing unnoticeable for me...

EDIT changed ' for "
 
zidane1strife said:
Well at least having a small 13" HDtv should make each pixel small enough to make aliasing unnoticeable for me...

What on earth is a 13" HDTV? Might as well call it a PC monitor... and a small one at that...
My TV is a 17" LCD display that goes up to 1024x768 in VGA mode, enough to support 720p. I'll be happy with 720p and no AA, but I think they will push AA. I'm sure that at least 4x AA at 720p won't be a problem for next-gen hardware in terms of performance...

Sorry if I don't make much sense, but my brain is still in shock after watching X-Men 2 last night... :LOL:
 