Look at this Google-cached (pulled down) PlayStation 3 page

Discussion in 'Console Technology' started by Panajev2001a, Jul 20, 2003.

  1. maskrider

    maskrider Henshin !
    Veteran

    Joined:
    Feb 12, 2002
    Messages:
    1,279
    Likes Received:
    0
    Location:
    Hong Kong
    The difference between 64- and 128-bit rendering on a TV (even HDTV) will be very subtle (if noticeable at all); maybe some special occasions with extremely over-bright or extremely dark scenes will show a little bit of difference.

    I will value higher resolution, finer geometry, more usage of pixel shaders, better texture filtering (for bigger polygons) and anti-aliasing a lot more than colour depth above 64bit floating point ARGB.
     
  2. Fafalada

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    2,773
    Likes Received:
    49
    Why use screen-sized render buffers, though? They could be considerably smaller with the kind of architecture PS3 is supposedly promising.
    The only things that need to be screen-sized are the front buffers, which are in the format of the target display device anyhow (and unless something changes radically, that will stay at 16-32 bit).
     
  3. maskrider

    maskrider Henshin !
    Veteran

    Joined:
    Feb 12, 2002
    Messages:
    1,279
    Likes Received:
    0
    Location:
    Hong Kong
    I guess for most displays 64-bit floating-point colour will be much more than enough; if it has no problem doing 128-bit then that will be great. Just a matter of priority to me.
     
  4. Panajev2001a

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,187
    Likes Received:
    8
    That is also true... I forgot about that... lol :(

    Ok let's recalculate... that would mean 37.5 MB for back-buffer and Z-buffer and ~1.18 MB for the front-buffer.

    I doubt that the Visualizer will only have 32 MB of e-DRAM ( I think 64 MB for the Visualizer would be fair enough to assume [I would expect only 32 MB, maximum, on the Broadband Engine] ).

    I think 16 bits for the Z-buffer might be enough though ( we might do W-buffering to compensate for the distribution-range problem that the Z-buffer natively has, and the worsening of that problem by using only 16 bits ), especially since we resolved lots of Z-fighting by doing deferred Shading ( models tagged with displacement mapping would be sent regardless though )...

    In the case of a 24 bits Z-buffer...

    14.04 MB + 18.72 MB = 32.76 MB and the front-buffer is only 1.18 MB

    In the case of a 16 bits W-buffer/Z-buffer...

    9.375 MB ( W/Z-buffer ) + 18.72 MB + 1.18 MB = 29.275 MB

    If the e-DRAM were only 32 MB this would leave 2.725 MB for Texture Space plus you would have 2 MB of Local Storage in the APU and the Image Caches to play with...
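For what it's worth, the buffer sizing above can be checked with a quick Python sketch. The 640x480 resolution, 16x sample count, and the 1,048,576-byte MB are inferred from the figures quoted rather than stated outright; the results agree with the post's numbers to within rounding.

```python
# Buffer-size arithmetic, assuming 640x480 with 16x supersampling,
# a 32-bit back-buffer, and MB = 1024*1024 bytes (all inferred from
# the figures quoted in the post above).

WIDTH, HEIGHT, SAMPLES = 640, 480, 16
MB = 1024 * 1024

def buffer_mb(bytes_per_sample, samples=SAMPLES):
    """Size in MB of one buffer covering every sample of the frame."""
    return WIDTH * HEIGHT * samples * bytes_per_sample / MB

back  = buffer_mb(4)              # 32-bit colour  -> ~18.75 MB
z24   = buffer_mb(3)              # 24-bit Z       -> ~14.06 MB
z16   = buffer_mb(2)              # 16-bit W/Z     -> ~9.375 MB
front = buffer_mb(4, samples=1)   # resolved 32-bit front-buffer -> ~1.17 MB

print(f"24-bit Z case: {back + z24:.2f} MB + {front:.2f} MB front-buffer")
print(f"16-bit W/Z case: {back + z16 + front:.2f} MB total")
```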

    Not a problem as the Visualizer is not the only one that samples textures: texture sampling is mainly done in the Shading phase ( except for displacement maps ) and that will be distributed across the Broadband Engine and the Visualizer...

    Also, we can still stream ( compressed ) textures from the external XDR... 25.6 GB/s is nothing to laugh at...

    Having 2.725 MB in each of the Visualizer's and Broadband Engine's e-DRAM would mean a total of 5.45 MB of compressed textures per frame ( not counting Texture Streaming [Virtual Texturing] from the external XDR )...

    If we can decompress VQ/S3TC compressed textures in real time ( achieving 6:1-8:1 compression ratios ) with the APUs, this would mean 32.7-43.6 MB of uncompressed textures/frame...

    This would mean up to ~8-11 "full" 1,024x1,024 32 bpp textures, mip-mapping excluded.
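The texture-budget numbers above follow from the same sort of back-of-the-envelope arithmetic; here is a small sketch of it (the 2.725 MB per chip and the 6:1-8:1 ratios are taken from the post; everything else is just arithmetic):

```python
# Texture budget check: 2.725 MB of e-DRAM texture space on each of
# two chips, expanded with a 6:1 to 8:1 VQ/S3TC-style compression
# ratio, then divided into "full" 1024x1024 32 bpp textures
# (4 MB each, mip-maps excluded).

MB = 1024 * 1024
compressed = 2.725 * 2                               # 5.45 MB across both chips
uncompressed = [compressed * r for r in (6, 8)]      # ~32.7 to ~43.6 MB
texture_mb = 1024 * 1024 * 4 / MB                    # 4 MB per full texture
textures = [u / texture_mb for u in uncompressed]    # ~8 to ~11 textures

print(f"{uncompressed[0]:.1f}-{uncompressed[1]:.1f} MB uncompressed")
print(f"{textures[0]:.1f}-{textures[1]:.1f} full 1024x1024 textures")
```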

    Not much, it would seem ( still not too bad ), but we are forgetting that we will be using a few procedurally generated textures ( perfect for ground textures as well ) and that we will not need to store full textures if we upload only the visible texels ( again taking some ideas from a Virtual Texturing approach ).

    Please correct the wrong assumptions you think I made...


    That is fine, I said we would be Shading limited more than fill-rate limited...

    As long as the Shading part can provide us with enough micro-polygons we should be all set...

    640 * 480 * 4 ( each micro-polygon is 1/4th of a pixel ) * 16 = ~19.6 M micro-polygons/frame, which at 60 fps would mean ~1.18 Billion micro-polygons/s...

    I recognize that maybe 19.6 M micro-polygons/frame might be a bit on the high side, but I would not advise using this approach for a racing game or a fighting game...

    With 16x AA and the high-quality motion blur that a REYES approach would provide, lots of games could target a stable 30 fps; that would mean ~590 M micro-polygons/s, which is achievable by the Broadband Engine and the Visualizer... Or you could go at 60 fps with 8x AA... the decision is yours...
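The micro-polygon throughput figures follow the post's own formula (quarter-pixel micro-polygons, 16 samples, 640x480); a quick sketch to verify:

```python
# Micro-polygon throughput, using the formula quoted in the post:
# width * height * 4 (quarter-pixel micro-polygons) * 16 (samples).

def upolys(width, height, per_pixel, samples, fps):
    """Return (micro-polygons per frame, micro-polygons per second)."""
    per_frame = width * height * per_pixel * samples
    return per_frame, per_frame * fps

per_frame, at60 = upolys(640, 480, 4, 16, 60)
_, at30 = upolys(640, 480, 4, 16, 30)

print(f"{per_frame / 1e6:.2f} M micro-polygons/frame")   # ~19.66 M
print(f"{at60 / 1e9:.2f} B/s at 60 fps")                 # ~1.18 B/s
print(f"{at30 / 1e6:.0f} M/s at 30 fps")                 # ~590 M/s
```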
     
  5. Fafalada

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    2,773
    Likes Received:
    49
    You misunderstood me.
    Today's displays operate on 16-bit (analog TV - YUV) or 24-bit (digital devices - RGB) colour.
    There's no such thing as a 64-bit-colour consumer display, much less one that uses floating point.

    The higher-precision framebuffer is downsampled to the TV colour space for display - which is why I argued that you don't need to keep screen-sized FP buffers: you save memory without compromising calculation precision OR picture quality.
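The resolve step being described, in which high-precision samples are averaged and then quantized down to the display's 8-bit-per-channel range, can be sketched roughly like this (a toy illustration; the function name and sample layout are made up, not any real PS3 mechanism):

```python
# Toy sketch of a downsample/resolve: FP colour samples are averaged
# (the AA resolve) and each channel is quantized and clamped to the
# 0-255 range of an 8-bit-per-channel display.

def resolve_pixel(fp_samples):
    """Average a pixel's FP RGB samples and quantize to 8-bit channels."""
    n = len(fp_samples)
    avg = [sum(s[c] for s in fp_samples) / n for c in range(3)]
    return tuple(min(255, max(0, round(c * 255))) for c in avg)

# Four FP samples for one pixel, one of them over-bright (> 1.0);
# the over-range energy is averaged in, then clamped at the display.
samples = [(0.5, 0.5, 0.5), (1.5, 0.2, 0.0),
           (0.25, 0.25, 0.25), (0.75, 0.05, 0.25)]
print(resolve_pixel(samples))
```

The point of Fafalada's argument is that only the sample buffers feeding this step need high precision, and they need not all exist at full screen size at once.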
     
  6. Panajev2001a

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,187
    Likes Received:
    8
    Can you expand on this Fafalada, please ?

    And maybe also make an in-depth comment on the previous post I made... ( the longer one in answer to Gubbi )...
     
  7. Legion

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    598
    Likes Received:
    3
    Well, shiney is just thousands of years behind their fellow man. Give them time. They'll eventually discover the wheel.
     
  8. maskrider

    maskrider Henshin !
    Veteran

    Joined:
    Feb 12, 2002
    Messages:
    1,279
    Likes Received:
    0
    Location:
    Hong Kong
    Heh heh ! You have also misunderstood me. I understand the limits of displays (I work with video processing); I was simply talking about the rendering buffer. I understand that it still needs to be downsampled/downfiltered to the target resolution and colour depth.

    With anti-aliasing, memory usage will increase a lot. I mean, if a 128-bit FP buffer is not workable, I would take a 64-bit FP buffer just fine, and I think even at 64-bit FP colour the final result will not differ by much on the display.
     
  9. Fafalada

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    2,773
    Likes Received:
    49
    Pana,
    I think it's a little early to go so in-depth with numbers when we have none available about the actual product yet.
    But for what it's worth, I think stuff like the amount of 'static' texture available is irrelevant information (kind of like debating how many textures the NV2a cache stores at one time).
    Heck, even with PS2 the number had largely no meaning except maybe for the first couple of titles.
    And I'd wait a bit before we start going on and on about micropolygons too... I mean, yes, it looks like an interesting possibility, but that's all it is at the moment, and it'd be a long argument to even see if it's amongst the best possibilities available.

    As for the other thing: if you can consider working everything else in small packets, why not render in sub-screen-sized buffers as well? While I oversimplified the front-buffer requirements (the results should probably still be in full FP for render-to-texture ops), back-buffers need not be screen-sized.
    Deferring the rendering is not a new thing - it's commonly used on consoles today - and splitting the render targets into smaller parts is only a small step from there, if it doesn't incur a large performance overhead (and within a few years it no longer will, unlike today).
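The memory argument for sub-screen-sized render targets is easy to make concrete. A rough sketch, assuming FP16 RGBA colour (8 bytes), 32-bit Z, and a 64x64 tile size (all illustrative choices, not anything stated in the thread):

```python
# Why sub-screen render targets save memory: with tiling, only one
# tile's worth of high-precision colour + Z has to be resident in
# fast memory at a time, whatever the final display resolution.

MB = 1024 * 1024

def target_bytes(w, h, samples, color_bytes, z_bytes):
    """Bytes needed for a render target of w x h pixels at the
    given sample count, colour depth, and depth-buffer depth."""
    return w * h * samples * (color_bytes + z_bytes)

# Full 640x480 target, 16x AA, FP16 RGBA (8 bytes) + 32-bit Z:
full = target_bytes(640, 480, 16, 8, 4)
# The same workload rendered as 64x64 tiles, one resident at a time:
tile = target_bytes(64, 64, 16, 8, 4)

print(f"whole screen: {full / MB:.2f} MB resident")   # 56.25 MB
print(f"one 64x64 tile: {tile / MB:.2f} MB resident") # 0.75 MB
```

The trade-off, as the post notes, is the per-tile overhead of binning and re-submitting geometry, which is why this was cheaper on paper than in practice in 2003.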

    Oops :oops: sorry about that. Not sleeping much lately and it's starting to show. -_-
     
  10. maskrider

    maskrider Henshin !
    Veteran

    Joined:
    Feb 12, 2002
    Messages:
    1,279
    Likes Received:
    0
    Location:
    Hong Kong
    I have a similar problem with sleeping; it is already 4:56 am here in Hong Kong and I am still up working. Ha ha ! :D
     
  11. notAFanB

    Veteran

    Joined:
    Jun 5, 2003
    Messages:
    1,165
    Likes Received:
    1
    You mean a runaway success, right?
     
  12. randycat99

    Veteran

    Joined:
    Jul 24, 2002
    Messages:
    1,772
    Likes Received:
    12
    Location:
    turn around...
    Darn it all! Now chaphack has another buzzterm (for him at least) to perpetuate the mission to prove that PS3, 4, etc. is inevitably doomed. Now it's going to be about either "shoddy 64 bpp" or "not enough memory" for another 2 weeks. :roll:
     
  13. notAFanB

    Veteran

    Joined:
    Jun 5, 2003
    Messages:
    1,165
    Likes Received:
    1
    Why render internally at such high precision in DX9+? I am asking since I have no idea where in the pipeline this happens, or why.
     
  14. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,493
    Likes Received:
    474
    In this case you probably need to figure on a 32-bit Z-buffer. As 32 bits is much easier to address than 24 bits, I doubt Sony will go through the trouble of packing data.

    16x AA isn't too bad at 640x480 resolution, but if they try to support 720p or more, memory and processing requirements will increase dramatically.
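The scaling 3dcgi is pointing at is straightforward to quantify. A quick sketch, assuming 32-bit colour plus the 32-bit Z he suggests (8 bytes per sample total):

```python
# How 16x AA buffer memory scales from 640x480 to 720p,
# assuming 32-bit colour + 32-bit Z (8 bytes per sample).

MB = 1024 * 1024

def aa_buffers_mb(w, h, samples=16, bytes_per_sample=8):
    """Colour + Z memory for a multisampled target, in MB."""
    return w * h * samples * bytes_per_sample / MB

sd = aa_buffers_mb(640, 480)    # 37.5 MB
hd = aa_buffers_mb(1280, 720)   # 112.5 MB, 3x the SD figure

print(f"640x480: {sd} MB, 1280x720: {hd} MB ({hd / sd:.0f}x)")
```

Pixel fill and shading cost scale by the same 3x factor, which is why 720p with heavy supersampling was such a stretch for a fixed e-DRAM budget.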
     
  15. DeadmeatGA

    Banned

    Joined:
    Jan 14, 2003
    Messages:
    391
    Likes Received:
    0
    ...

    Ouch, a developer's worst nightmare confirmed. Sony provides no OS/compiler-level parallelism abstraction, and IBM has no magic technology that will make the multithreading (or should we say micro-process piping) headache go away. It will be fun (depends on whom you are asking) to create thousands of micro-processes, maintain pipes between them, and keep track of which processes are alive and which are dead to maintain pipe integrity, all without the help of the OS/compiler...

    Typical Japanese hardware: the pursuit of absolute theoretical performance with zero regard for developer convenience. The legacy of the Saturn lives on with PSX3... I feel sorry for Fafalada and all the developers who will be working on this beast.

    Cramming more processors into a single die is not the solution to the performance problem. DirectX works because it makes parallel shaders largely invisible. Maybe MS will be getting its big break with Xbox2 after all.
     
  16. megadrive0088

    Regular

    Joined:
    Jul 23, 2002
    Messages:
    700
    Likes Received:
    0
    I still wish Sega would make a new console. this time with ATI graphics, since ATI has all the ArtX and some Real3D technology & engineers.

    *droools over future twin R500 VPU chipset with 512 MB memory each*

    bah, just call it XBox Next.... oh wait :D
     
  17. notAFanB

    Veteran

    Joined:
    Jun 5, 2003
    Messages:
    1,165
    Likes Received:
    1
    Hey there Deadmeat, it's been a while.

    I don't believe he explicitly states it, though it does seem to be the case at the mo.

    Heh, the Saturn was a rush job, I think, so that comparison is a little unfair.

    So parallelism works if MS is working on it? Is it unfeasible to write DX-style APIs for other parallel architectures, or is this severely limited by the way CELL appears to be heading?
     
  18. megadrive0088

    Regular

    Joined:
    Jul 23, 2002
    Messages:
    700
    Likes Received:
    0
    Panajev, 60 frames per second is almost ALWAYS preferable over 30 fps,
    imvho :D

    I'd take 60 fps w/ 8x FSAA over 30 fps w/ 16x FSAA any day!
     
  19. randycat99

    Veteran

    Joined:
    Jul 24, 2002
    Messages:
    1,772
    Likes Received:
    12
    Location:
    turn around...
    Re: ...

    This post could have just been reduced to three letters: FUD. That would have saved the typist and the reader a good deal of time.
     
  20. DeadmeatGA

    Banned

    Joined:
    Jan 14, 2003
    Messages:
    391
    Likes Received:
    0
    ...

    The problem is that the Japanese electronics industry is very hardware-centric, and software guys have little say in how hardware is designed. This is why we have flawed designs like PSX2 and PSX3, all shooting for big marketing-hype numbers at the expense of programmability. If PSX2 had failed, then PSX3 would have been a nicer machine (this was the case with DC and GC, both developer-friendly machines). But since developers did not protest PSX2's "unusual" architecture, Kutaragi struck back with an even worse design, the PSX3. This is why SOFTWARE GUYS, and not hardware guys, should be in charge of hardware development.

    It is feasible, but Sony doesn't have the experience to do it.

    CELL was not developed with programmability in mind; the guy who masterminded CELL is an electrical engineer, not a programmer. He simply doesn't know.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.