Look at this Google-cached (pulled down) PlayStation 3 page

london-boy said:
megadrive0088 said:
128-bit color / rendering precision was one of the givens that I thought PS3 and all next-gen consoles would have.

ME TOO.... still, since i assume the trend of console-RAM-starvedness will still exist, i'm not raising my expectations too high....

you know, better to be pessimistic than disappointed

The difference between 64-bit and 128-bit rendering on a TV (even HDTV) will be very subtle (if noticeable at all); maybe some special occasions with extremely overbright or extremely dark scenes will show a little bit of difference.

I will value higher resolution, finer geometry, more usage of pixel shaders, better texture filtering (for bigger polygons) and anti-aliasing a lot more than colour depth above 64-bit floating-point ARGB.
 
maskrider said:
The difference between 64-bit and 128-bit rendering on a TV (even HDTV) will be very subtle (if noticeable at all); maybe some special occasions with extremely overbright or extremely dark scenes will show a little bit of difference.

Why use screen-sized render buffers, though? They could be considerably smaller with the kind of architecture PS3 is supposedly promising.
The only things that need to be screen-sized are the front buffers, which are in the format of the target display device anyhow (and unless something changes radically, that will stay at 16-32 bits).
 
Fafalada said:
The difference between 64-bit and 128-bit rendering on a TV (even HDTV) will be very subtle (if noticeable at all); maybe some special occasions with extremely overbright or extremely dark scenes will show a little bit of difference.
Why use screen-sized render buffers, though? They could be considerably smaller with the kind of architecture PS3 is supposedly promising.
The only things that need to be screen-sized are the front buffers, which are in the format of the target display device anyhow (and unless something changes radically, that will stay at 16-32 bits).

I guess for most displays 64-bit floating-point colours will be much more than enough; if it has no problem doing 128-bit then that will be great. Just a matter of priority to me.
 
Gubbi said:
Your Z-buffer has to be the same resolution as your backbuffer, otherwise you won't be able to determine which subpixels to cull.

That is also true... I forgot about that... lol :(

OK, let's recalculate... that would mean 37.5 MB for the back-buffer and Z-buffer and ~1.18 MB for the front-buffer.

I doubt that the Visualizer will have only 32 MB of e-DRAM ( I think 64 MB for the Visualizer would be fair enough to assume [I would expect only 32 MB, maximum, on the Broadband Engine] ).

I think 16 bits for the Z-buffer might be enough, though ( we might do W-buffering to compensate for the depth-distribution problem that the Z-buffer natively has, and the worsening of that problem when using only 16 bits ), especially since we resolved a lot of Z-fighting by doing deferred Shading ( models tagged with displacement mapping would be sent regardless, though )...

In the case of a 24-bit Z-buffer...

14.04 MB ( Z-buffer ) + 18.72 MB ( back-buffer ) = 32.76 MB, and the front-buffer is only 1.18 MB

In the case of a 16-bit W-buffer/Z-buffer...

9.375 MB ( W/Z-buffer ) + 18.72 MB ( back-buffer ) + 1.18 MB ( front-buffer ) = 29.275 MB
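
Spelled out as a quick Python check ( using this thread's assumption of 640x480 with 16x supersampling, not a confirmed spec; the totals land within a few hundred KB of the rounded figures above ):

```python
# Buffer sizes for 640x480 with 16x supersampling -- the thread's
# assumption, not a confirmed PS3 spec.
WIDTH, HEIGHT, SAMPLES = 640, 480, 16
MB = 1024 * 1024

subpixels = WIDTH * HEIGHT * SAMPLES       # 4,915,200 samples

back_buffer = subpixels * 4 / MB           # 32bpp colour:  ~18.75 MB
z_24bit     = subpixels * 3 / MB           # 24-bit Z:      ~14.06 MB
wz_16bit    = subpixels * 2 / MB           # 16-bit W/Z:    ~9.375 MB
front       = WIDTH * HEIGHT * 4 / MB      # 32bpp front:   ~1.17 MB

print(back_buffer + z_24bit)               # ~32.8 MB (24-bit Z case)
print(back_buffer + wz_16bit + front)      # ~29.3 MB (16-bit W/Z case)
```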

If the e-DRAM were only 32 MB, this would leave 2.725 MB for Texture Space, plus you would have the 2 MB of Local Storage in the APUs and the Image Caches to play with...

Not a problem, as the Visualizer is not the only one that samples textures: texture sampling is mainly done in the Shading phase ( except for displacement maps ), and that will be distributed across the Broadband Engine and the Visualizer...

Also, we can still stream ( compressed ) textures from the external XDR... 25.6 GB/s is nothing to laugh at...

Having 2.725 MB in both the Visualizer's and the Broadband Engine's e-DRAM would mean a total of 5.45 MB of compressed textures per frame ( not counting Texture Streaming [Virtual Texturing] from the external XDR )...

If we can decompress VQ/S3TC-compressed textures in real-time with the APUs ( achieving 6:1-8:1 compression ratios ), this would mean 32.7-43.6 MB of uncompressed textures/frame...

This would mean up to ~8-11 "full" 1,024x1,024 32 bpp textures, mip-mapping excluded.
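
As a quick check of those numbers in Python ( assuming, as above, 2 x 2.725 MB of leftover e-DRAM and 6:1-8:1 VQ/S3TC ratios ):

```python
MB = 1024 * 1024

compressed_mb = 2 * 2.725                  # 5.45 MB across both chips
texture_mb = 1024 * 1024 * 4 / MB          # one 1024x1024 32bpp texture = 4 MB

for ratio in (6, 8):
    uncompressed_mb = compressed_mb * ratio
    print(ratio, round(uncompressed_mb, 1), round(uncompressed_mb / texture_mb, 1))
# 6:1 -> ~32.7 MB, ~8.2 textures; 8:1 -> ~43.6 MB, ~10.9 textures
```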

Not much, it would seem ( still not too bad ), but we are forgetting that we will be using a few procedurally generated ones ( perfect for ground textures as well ) and that we will not need to store full textures if we upload only the visible texels ( again taking some ideas from a Virtual Texturing approach ).

Please correct the wrong assumptions you think I made...


Gubbi said:
Also, with 16x supersampling you get a 16x increase in the number of micropolys. The whole idea of micropolys is that you don't have to scan-convert them (so they have to be smaller than your subpixels).

That is fine, I said we would be Shading-limited more than fill-rate-limited...

As long as the Shading part can provide us with enough micro-polygons, we should be all set...

640 * 480 * 4 ( each micro-polygon is 1/4th of a pixel ) * 16 = ~19.6 M micro-polygons/frame, which at 60 fps would mean ~1.18 Billion micro-polygons/s...

I recognize that 19.6 M micro-polygons/frame might be a bit on the high side, and I would not advise using this approach for a racing game or a fighter...

With the 16x AA and high-quality motion blur that a REYES approach would provide, lots of games could take a stable 30 fps approach, and that would mean ~590 M micro-polygons/s, which is achievable by the Broadband Engine and the Visualizer... Or you could go at 60 fps with 8x AA... the decision is yours...
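
The micro-polygon arithmetic above, as a small Python sketch ( 1/4-pixel micro-polygons, i.e. 4 per pixel, times the AA sample count ):

```python
def micropolys_per_sec(width, height, aa_samples, fps):
    # 4 micro-polygons per pixel (each covers 1/4 of a pixel),
    # multiplied by the supersampling factor
    per_frame = width * height * 4 * aa_samples
    return per_frame * fps

print(micropolys_per_sec(640, 480, 16, 60) / 1e9)  # ~1.18 billion/s
print(micropolys_per_sec(640, 480, 16, 30) / 1e9)  # ~0.59 billion/s
print(micropolys_per_sec(640, 480, 8, 60) / 1e9)   # ~0.59 billion/s (8x AA, 60 fps)
```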
 
maskrider said:
I guess for most displays 64-bit floating-point colours will be much more than enough; if it has no problem doing 128-bit then that will be great. Just a matter of priority to me.
You misunderstood me.
Today's displays operate on 16-bit (analog TV - YUV) or 24-bit (digital devices - RGB) colour.
There's no such thing as a 64-bit colour consumer display, much less one that uses floating point.

The higher-precision framebuffer is downsampled to the TV colour space for display - which is why I argued that you don't need to keep screen-sized FP buffers - saving memory without compromising calculation precision OR picture quality.
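
A minimal sketch of that downsampling step, assuming an FP16-per-channel ("64-bit") working buffer and a 24-bit RGB output; the clamp-and-quantize here is illustrative, not any actual PS3 pipeline:

```python
import numpy as np

# Stand-in for a 640x480 FP16 RGBA render result.
fp_buffer = np.random.rand(480, 640, 4).astype(np.float16)

# Only at scanout is the image reduced to the display's colour space,
# so the high precision never needs a screen-sized resident buffer.
rgb = fp_buffer[..., :3].astype(np.float32)
display = (np.clip(rgb, 0.0, 1.0) * 255.0 + 0.5).astype(np.uint8)  # 24-bit RGB
```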
 
Fafalada said:
The difference between 64-bit and 128-bit rendering on a TV (even HDTV) will be very subtle (if noticeable at all); maybe some special occasions with extremely overbright or extremely dark scenes will show a little bit of difference.
Why use screen-sized render buffers, though? They could be considerably smaller with the kind of architecture PS3 is supposedly promising.

Can you expand on this, Fafalada, please?

And maybe also make an in-depth comment about the previous post I made... ( the longer one in answer to Gubbi )...
 
Paul said:
(non-octagonal bins... yeah...).

Hehehe or how about square tires? :)

Yeah, the geometry next gen will be there so everything is round, but there will always be that lazy developer, or the developer who doesn't take the time to care about square tires but rather wants to get their game on the market ASAP.

Well, Shiny is just thousands of years behind their fellow man. Give them time. They'll eventually discover the wheel.
 
Fafalada said:
I guess for most displays 64-bit floating-point colours will be much more than enough; if it has no problem doing 128-bit then that will be great. Just a matter of priority to me.
You misunderstood me.
Today's displays operate on 16-bit (analog TV - YUV) or 24-bit (digital devices - RGB) colour.
There's no such thing as a 64-bit colour consumer display, much less one that uses floating point.

The higher-precision framebuffer is downsampled to the TV colour space for display - which is why I argued that you don't need to keep screen-sized FP buffers - saving memory without compromising calculation precision OR picture quality.

Heh heh! You have also misunderstood me. I understand the limits of the displays (I work in video processing); I was simply talking about the rendering buffer. I understand that they still need to be downsampled/downfiltered to the target resolution and colour depth.

With anti-aliasing, memory usage will increase a lot. I mean, if a 128-bit FP buffer is not workable, I would take a 64-bit FP buffer just fine, and I think even with 64-bit FP colours the final result will not differ by much on the display.
 
Pana,
I think it's a little early to go so in-depth with numbers when we have none available about the actual product yet.
But for what it's worth, I think stuff like the amount of 'static' texture available is irrelevant information (kinda like debating how much texture the NV2a cache stores at one time).
Heck, even with PS2 the number had largely no meaning except maybe for the first couple of titles.
And I'd wait a bit before we start going on and on about micropolygons too... I mean, yes, it looks like an interesting possibility, but that's all it is at the moment, and it'd be a long argument even to see whether it's amongst the best possibilities available.

As for the other thing: if you can consider working everything else in small packets, why not render in sub-screen-sized buffers as well? While I oversimplified the frontbuffer requirements (the results should probably still be in full FP for render-to-texture ops), backbuffers need not be screen-sized.
Deferring the rendering is not a new thing - it's commonly used on consoles today, and splitting the render targets into smaller parts is only a small step from there if it doesn't incur a large performance overhead (and within a few years it no longer will, unlike today).
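
A minimal sketch of what that could look like, with only one small high-precision tile resident at a time ( the tile size, supersample factor, and the shading stub are illustrative assumptions ):

```python
import numpy as np

W, H, TILE, SS = 640, 480, 32, 4             # illustrative sizes

front = np.zeros((H, W, 3), dtype=np.uint8)  # display-format front buffer

for ty in range(0, H, TILE):
    for tx in range(0, W, TILE):
        # The FP buffer exists only at tile granularity: a 32x32 tile at
        # 4x4 supersampling in FP32 is ~192 KB, versus ~18 MB screen-sized.
        tile = np.zeros((TILE * SS, TILE * SS, 3), dtype=np.float32)
        # ... rasterize and shade the geometry overlapping this tile ...
        # Box-filter down to display resolution and quantize.
        box = tile.reshape(TILE, SS, TILE, SS, 3).mean(axis=(1, 3))
        front[ty:ty + TILE, tx:tx + TILE] = (np.clip(box, 0.0, 1.0) * 255).astype(np.uint8)
```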

maskrider said:
Heh heh! You have also misunderstood me. I understand the limits of the displays (I work in video processing); I was simply talking about the rendering buffer. I understand that they still need to be downsampled/downfiltered to the target resolution and colour depth.
Oops :oops: sorry about that. Not sleeping much lately and it's starting to show. -_-
 
Fafalada said:
maskrider said:
Heh heh! You have also misunderstood me. I understand the limits of the displays (I work in video processing); I was simply talking about the rendering buffer. I understand that they still need to be downsampled/downfiltered to the target resolution and colour depth.
Oops :oops: sorry about that. Not sleeping much lately and it's starting to show. -_-

I have a similar problem with sleeping; it is already 4:56 am here in Hong Kong and I am still up working. Ha ha! :D
 
Don't the new DX9 cards render at 64/128-bit color depth? If PS3 optimizes for 32bpp, or is limited to 480p with higher bpp, it is gonna be PS2 all over again....

you mean a runaway success, right?
 
Darn it all! Now chaphack has another buzzterm (for him at least) to perpetuate the mission to prove that PS3, 4, etc. is inevitably doomed. Now it's going to be about either "shoddy 64 bpp" or "not enough memory" for another 2 weeks. :rolleyes:
 
Panajev2001a said:
In the case of a 24 bits Z-buffer...

14.04 MB + 18.72 MB = 32.76 MB and the front-buffer is only 1.18 MB
In this case you probably need to figure on a 32-bit Z-buffer. As 32 bits is much easier to address than 24 bits, I doubt Sony will go through the trouble of packing data.

16x AA isn't too bad at 640x480 resolution, but if they try to support 720p or more, memory and processing requirements will increase dramatically.
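
A quick check of that scaling ( 16x AA with 32-bit colour plus the 32-bit Z suggested above; illustrative arithmetic only ):

```python
MB = 1024 * 1024

for w, h, label in ((640, 480, "480p"), (1280, 720, "720p")):
    samples = w * h * 16                    # 16x AA subpixels
    total = samples * (4 + 4) / MB          # 4 bytes colour + 4 bytes Z
    print(label, total)
# 480p -> 37.5 MB, 720p -> 112.5 MB: triple the footprint before
# even counting the front buffer
```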
 
...

Ouch, the developer's worst nightmare confirmed. Sony provides no OS/compiler-level parallelism abstraction, and IBM has no magic technology that will make the multithreading (or should we say micro-process piping) headache go away. It will be fun (depends on whom you ask) to create thousands of micro-processes, maintain pipes between them, and keep track of which processes are alive and which ones are dead to maintain pipe integrity, all without the help of the OS/compiler...

Typical Japanese hardware: the pursuit of absolute theoretical performance with zero regard for developer convenience. The legacy of the Saturn lives on with the PSX3... I feel sorry for Fafalada and all the developers who will be working on this beast.

Cramming more processors into a single die is not the solution to the performance problem. DirectX works because it makes parallel shaders largely invisible. Maybe MS will be getting its big break with Xbox2 after all.
 
I still wish Sega would make a new console, this time with ATI graphics, since ATI has all the ArtX and some of the Real3D technology & engineers.

*drools over a future twin R500 VPU chipset with 512 MB of memory each*

bah, just call it Xbox Next.... oh wait :D
 
hey there Deadmeat, it's been a while.

Sony provides no OS/compiler-level parallelism abstraction, and IBM has no magic technology that will make the multithreading (or should we say micro-process piping) headache go away.

I don't believe he explicitly states it, though it does seem to be the case at the mo.

Typical Japanese hardware: the pursuit of absolute theoretical performance with zero regard for developer convenience. The legacy of the Saturn lives on with the PSX3

heh, the Saturn was a rush job I think, so that comparison is a little unfair.

Cramming more processors into a single die is not the solution to the performance problem. DirectX works because it makes parallel shaders largely invisible. Maybe MS will be getting its big break with Xbox2 after all.

so parallelism works if MS is working on it? Is it unfeasible to write DX-style APIs for other parallel architectures, or is this severely limited by the way CELL appears to be heading?
 
Panajev, 60 frames per second is almost ALWAYS preferable over 30 FPS,
imvho :D

I'd take 60 FPS w/ 8x FSAA over 30 FPS w/ 16x FSAA any day!
 
Re: ...

DeadmeatGA said:
Ouch, the developer's worst nightmare confirmed. Sony provides no OS/compiler-level parallelism abstraction, and IBM has no magic technology that will make the multithreading (or should we say micro-process piping) headache go away. It will be fun (depends on whom you ask) to create thousands of micro-processes, maintain pipes between them, and keep track of which processes are alive and which ones are dead to maintain pipe integrity, all without the help of the OS/compiler...

Typical Japanese hardware: the pursuit of absolute theoretical performance with zero regard for developer convenience. The legacy of the Saturn lives on with the PSX3... I feel sorry for Fafalada and all the developers who will be working on this beast.

Cramming more processors into a single die is not the solution to the performance problem. DirectX works because it makes parallel shaders largely invisible. Maybe MS will be getting its big break with Xbox2 after all.

This post could have just been reduced to three letters: FUD. That would have saved both the writer and the reader a good deal of time.
 
...

heh, the Saturn was a rush job I think, so that comparison is a little unfair.
The problem is that the Japanese electronics industry is very hardware-centric, and the software guys have little say in how hardware is designed. This is why we have flawed designs like PSX2 and PSX3, all shooting for big marketing hype numbers at the expense of programmability. If PSX2 had failed, then PSX3 would have been a nicer machine (as was the case with DC and GC, both developer-friendly machines). But since developers did not protest PSX2's "unusual" architecture, Kutaragi struck back with an even worse design, the PSX3. This is why SOFTWARE GUYS, and not hardware guys, should be in charge of hardware development.

so parallelism works if MS is working on it? Is it unfeasible to write DX-style APIs for other parallel architectures
It is feasible, but Sony doesn't have the experience to do it.

is this severely limited by the way CELL appears to be heading?
CELL was not developed with programmability in mind; the guy who masterminded CELL is an electrical engineer, not a programmer. He simply doesn't know.
 