Questions about PS2

Discussion in 'Console Technology' started by Liandry, Apr 7, 2016.

  1. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    430
    Location:
    Cleveland, OH
I think you're overstating how successful GSCube was even in its intended application. From what I've read, less than 20 were ever made, and it wasn't long before they were sent back to Japan, where they were destroyed. It allegedly took several minutes to upload the results from the GSCube to the attached server, which defeated the purpose of having real-time rendering. Which, if true, shows how much of a hacked-together effort this was. And I wonder if it would have competed well against a hypothetical similar effort from a rival like nVidia, and whether that simply didn't happen due to lack of interest and demand rather than Sony holding an intrinsic advantage for this application.

    There's also a claim that they were sold at a tremendous loss. All of this information is taken from here: http://assemblergames.com/l/threads/gscube-information.18036/#post-271056

    Still, it's kind of moot; even if it was great for movie previews (which could be useful with very specific criteria, like high polygon count but low shading quality), that alone doesn't support the argument that something like it would have been well suited for PS3.

    GS had really serious limitations that I don't think you're really addressing. function mentioned a big one: lack of decent texture compression. Culling is a little more debatable; we could argue the suitability of keeping that largely in the vertex shading domain (the VUs), although I think there's a reason other GPUs haven't done this, and the problem (the percentage of polygons that need to be culled geometrically) would have gotten worse with resolution scaling.

    But I'd say that the entire shading model had real intrinsic limitations. Not just because of the lack of blend operations but because the precision/dynamic range of the values was limited to 8 bits per component. You could only do so many blend passes before quantization noise ruined the image quality. And the way the eDRAM-based framebuffer is used for intermediate value storage is inefficient compared to the register model other GPUs have followed. The bandwidth needed to perform RMWs on eDRAM almost certainly resulted in more power consumption than the bandwidth needed to update SRAM-based registers, especially when accounting for forwarding in the ALUs. With the SRAM-based register files in GPUs, the intermediate operations don't have to be coherent in screen space, and they can manage better than GS can at aggregating many small polygons over the same shader: GS has to update 4x2 or 8x2 blocks in eDRAM vs. other GPU shaders typically working in 2x2 quads.

    I'm sure there's also a very real bandwidth/power overhead in having to resubmit triangles for multi-pass, and some inefficiencies on the VU (Cell SPU?) side keeping the GS FIFOs full.

    The lack of implicit texture caching, i.e. the requirement to load textures into the eDRAM, almost certainly would have resulted in a lot of needless redundancy in textures loaded to different eDRAM tiles. The solution to this, making the tiles larger, would have needed more ALUs per tile and would have exacerbated the aforementioned screen-space coherency problem.

    Then there's the lack of many other features that improve efficiency, like early/hierarchical depth testing, which were already present in the PS3 console generation.

    No matter how you slice it 512MB of eDRAM is enormous for the 1080p targets GSCube was designed for. Even if we would have been looking at an 8xGS with 256MB of eDRAM instead for something targeting PS3 that still would have been a tremendous amount of eDRAM for the time and I doubt it would have been economical. 1GB of RDRAM would have also been fairly expensive.

    For its generation, you could say GS was a decent architecture. You could also say Flipper and NV20A were, each having different strengths and weaknesses and being very different from GS. But I can't agree that it would have been competitive moving forward by simply brute force scaling any of these upwards like GSCube did.
     
    #401 Exophase, Jan 5, 2017
    Last edited: Jan 5, 2017
  2. corysama

    Newcomer

    Joined:
    Jul 10, 2004
    Messages:
    184
    Likes Received:
    145
    Where's that info from? It doesn't match my understanding. Maybe it's referring to some other feature.
     
    Liandry likes this.
  3. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
  4. Nesh

    Nesh Double Agent
    Legend

    Joined:
    Oct 2, 2005
    Messages:
    12,474
    Likes Received:
    2,780
    Interesting. There was only one game that used bump mapping on the PS2, right (The Matrix: Path of Neo)?
    I am curious what results the PS2 would have gotten if there had been an effort to fully exploit its peculiarities.
     
    Liandry likes this.
  5. HTupolev

    Regular

    Joined:
    Dec 8, 2012
    Messages:
    936
    Likes Received:
    564
    It isn't.

    The *effective* fillrate, in a sense, goes lower as more passes are required. Having to fill several times is just like filling slower.
     
    Liandry likes this.
  6. corysama

    Newcomer

    Joined:
    Jul 10, 2004
    Messages:
    184
    Likes Received:
    145

    Ah, that's just saying "If you draw over the same pixels 4 times, it's gonna cost you 4 times as much as a single pass". 1 x 4 = 4. That's all.
     
    Liandry likes this.
  7. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    430
    Location:
    Cleveland, OH
    I was thinking more about the prospect of a GSCube-based PS3 and when you get down to it I just don't see it as technically feasible to come anywhere remotely close to its eDRAM storage.

    For comparison: XBox 360 had a mere 10MB of eDRAM on a 90nm process and it was on a daughter die with integrated ROPs. Not a particularly small die either at 80 mm^2. Wii U, years later, had 32MB of eDRAM which took up around 41mm^2 of its "Latte" chip built on a 40/45nm process (AFAIK).

    Looking at it another way, the CPU+GPU area shrank about 6x going from 250nm to 90nm for PS2, and some of that was likely due to consolidation of the two chips into one, but we'll use 6x transistors as a reasonable starting point. PS3's GPU was about the same size as PS2's was originally (around 260mm^2); let's say that using eDRAM would save some external costs by not needing two different pools of memory, so it could maybe afford to be a bit larger. So maybe you could fit 8x GS onto one die, but not while also increasing each individual GS's eDRAM pool by 8x. You'd have a (quite large) chip with 32MB of eDRAM, not 256MB of eDRAM. You'd still hit theoretical fillrate peaks of half those of the 1080p-targeting GSCube, so similar IQ at around 720p, but with much less eDRAM. And as was said earlier, it was the huge increase in eDRAM that made GSCube at all viable for its purposes. It would have been really hard to deal with this tiled setup where every tile needs different pieces of textures manually uploaded, dynamically competing for a small amount of available eDRAM space. You'd have had a lot of redundancy, meaning that the realistic texture-to-framebuffer ratios would likely be even worse than they were with PS2.
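Restating the back-of-envelope numbers above in code (every figure here is an assumption from the post, not a measurement): an 8x-GS die that keeps only the original 4MB of eDRAM per GS falls far short of GSCube's pool.

```python
# Assumed figures from the post: ~8x GS logic on one die, 4 MB eDRAM per GS.
GS_PER_DIE = 8
EDRAM_PER_GS_MB = 4

# Hypothetical eDRAM-based PS3 GPU: logic scales, eDRAM doesn't.
hypothetical_ps3_mb = GS_PER_DIE * EDRAM_PER_GS_MB

# GSCube for comparison: 16 boards, each with a GS-I32 carrying 32 MB.
gscube_mb = 16 * 32

print(hypothetical_ps3_mb)  # 32
print(gscube_mb)            # 512
```

The point of the arithmetic is just the gap: even granting the full 6-8x transistor budget to GS logic, the chip ends up with 32MB of eDRAM against the 512MB that made GSCube viable for its niche.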

    Those GS-I32 chips on GSCube must have been monsters, maybe 300-400 mm^2, and they had 16 motherboards with these. Starting to see how even at $50-80k these would have been sold at a major loss...
     
    Liandry and tuna like this.
  8. Squeak

    Veteran

    Joined:
    Jul 13, 2002
    Messages:
    1,262
    Likes Received:
    32
    Location:
    Denmark
    There is no such thing as a true GPU. That is a construct concocted by makers of graphics hardware and APIs.
    Of course the possible PS3 GS wouldn't just have been an upscaled GS. Good ideas don't often scale.
    The main thing that the on-die eDRAM solved was the incessant hammering of RAM with monotonous writes, which grows into even more of a problem with larger framebuffers.
    The graphics would have to be tiled, and the die would have APUs/SPEs on it to do shading, blends, transformation, etc.
    There was a patent drawing out at some point (which is of course impossible to find now) that showed exactly that setup, with a main Cell-like CPU and a mainly graphics-oriented die with eDRAM and SPEs.

    Sony would have owned their graphics architecture instead of having to shop around every generation, with all the associated problems. They would also have been able to scale it much more freely and make cost reductions. They did that with both PSX and PS2, even if it wasn't apparent to the end user.
    It would have meant an equally huge investment in software tools to make it frictionless for devs to work it into their existing procedures and habits. But that would still have been more than worth it.
    This is not magic, guys. Nvidia and AMD are much smaller companies than Sony. Sony would have had more than enough resources to do this had they started early enough.

    The GSCube was obviously never anything but a testing of the waters and a demonstration of technology.
    The main problem with using it was actually getting the data into the system. But that has always been a problem with any kind of rendering system.
     
    #408 Squeak, Jan 8, 2017
    Last edited: Jan 8, 2017
    Liandry and Pixel like this.
  9. tuna

    Veteran

    Joined:
    Mar 10, 2002
    Messages:
    3,271
    Likes Received:
    428
    Just because a company is big does not mean that they can produce good software. Nokia had more software developers than Apple* but still was not able to produce an OS in any reasonable time frame**.


    *According to a source I can't find anymore.

    **Probably because they made a lot of stupid decisions, like first forking GTK+ and then abandoning that for Qt.
     
  10. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,786
    Likes Received:
    3,744
    Location:
    Barcelona Spain
    I did not say that the RSX was not a true GPU, but I don't think two CELL machines would have been a good idea.

    They needed a GPU... a GS 2, or the RSX, or a GPU by AMD, or a PowerVR...

    After that, the problem was having good documentation, and I am not sure the third-party devs would have been happy to work with a machine with a GS 2 and a CELL.
     
    Liandry likes this.
  11. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,786
    Likes Received:
    3,744
    Location:
    Barcelona Spain
    From a technical point of view, my friend tells me he thinks the first idea of working with CELL and a GS 2 was a good one...

    But he understood that it was too risky, and this is the reason they decided to work with NVIDIA for the GPU of the PS3 and chose an x86 architecture for the PS4...
     
    #411 chris1515, Jan 8, 2017
    Last edited: Jan 9, 2017
    Liandry likes this.
  12. SedentaryJourney

    Regular

    Joined:
    Mar 13, 2003
    Messages:
    476
    Likes Received:
    27
    I get the desire for a new Krazy Ken architecture; I used to love that about PlayStation as well. But there's no way this is ever going to be cost effective unless they're able to sell their chips outside of the console industry.

    It made sense with earlier PlayStations when the graphics industry wasn't as mature, but now it's just not worth it for a device that gets updated every six years or so.
     
    vipa899 and bunge like this.
  13. bunge

    Regular Newcomer

    Joined:
    Nov 23, 2014
    Messages:
    725
    Likes Received:
    513
    I would think a six year cycle would make it more cost effective. I'm ignorant about the process, but why would custom be worse?

    I mean, if you target 60-80 million sold, that could be profitable. Where is the gap or fail point?
     
  14. SedentaryJourney

    Regular

    Joined:
    Mar 13, 2003
    Messages:
    476
    Likes Received:
    27
    It's a product sold at a loss or at minimal margins, and you have to spend large amounts on R&D, production, tools, and libraries.

    AMD and Nvidia sell new products every year at pretty good margins and actual IC production is handled by external companies that specialise in fabbing chips. Intel is vertically integrated, but they also sell their products for a lot more than what Sony could charge for a console and the PC market is larger.

    For something small margin or loss leading it's better to contract out to hardware vendors for something cheap but semi-custom.
     
  15. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
    I remember Shifty said that Snowblind Studios used 2xSSAA in their Snowblind Engine. I tried to find some info about SSAA, but there isn't much, so maybe we can also discuss it here.
    1) As I understand it, 4xSSAA requires a 4x backbuffer size, right?
    2) How does 2xAA work then? Does it use only a 2x backbuffer? Is it a horizontal scale or vertical?
    3) If I'm right that the backbuffer is two times larger when 2xSSAA is used, then how can it fit in eDRAM? Is the Z-buffer also twice as big?
    4) Did Snowblind Studios use some kind of tiling? I mean, first render one half of the frame, then downsample, then write half of the front buffer?
    Can anyone please explain this to me?
     
  16. steveOrino

    Regular

    Joined:
    Feb 11, 2010
    Messages:
    489
    Likes Received:
    156

    The fail point is getting people to make software for the architecture. It was far easier to go nuts on hardware when the dev teams were 1-20 people and you could strong-arm the publishers with lucrative licensing arrangements. That ship sailed long ago.

    It's too bad in some ways, because I loved the crazy hardware, but it's just a reminder that software is still king.
     
  17. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,453
    Likes Received:
    583
    Location:
    Finland
    1. Yes.
    2. Depends on which way the developer wants it; not sure which was used.
    3. Yes, SSAA requires the color and Z-buffers to be the same (scaled-up) size.
    The buffers are still quite small, so 4MB is enough.

    4. Pretty sure that no developer did multi-pass SSAA on PS2; it would have complicated things a lot (re-sending texture data and polygons for each pass).
     
  18. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,577
    Likes Received:
    16,029
    Location:
    Under my bridge
    It was uniform XY supersampling AFAIK - there wasn't aliasing in one direction only. So 900x680 ish buffer downsampled to 640x480 (root two larger in each dimension for two times total area).
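The root-two scaling described above can be checked with a few lines (my own arithmetic; the 640x480 output resolution is the one from the post):

```python
import math

def ssaa2x_dims(width, height):
    """Scale each axis by sqrt(2) so the supersampled buffer
    covers twice the area of the final output resolution."""
    s = math.sqrt(2.0)
    return round(width * s), round(height * s)

w, h = ssaa2x_dims(640, 480)
print(w, h)                   # 905 679 - matching the "900x680 ish" buffer
print((w * h) / (640 * 480))  # ~2.0x the output pixel count
```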
     
  19. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
    900x680 is 2.05 MB, so the backbuffer and Z-buffer won't fit in eDRAM, so how did they do it?
     
  20. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,453
    Likes Received:
    583
    Location:
    Finland
    Most likely 16bit Z-buffer and 24bit or less for color.
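A back-of-envelope check of that answer (my own arithmetic, using the suggested formats, and ignoring the front buffer and eDRAM page-alignment details):

```python
# Supersampled buffer roughly matching the "900x680 ish" figure.
W, H = 900, 680

color_bytes = W * H * 3  # 24-bit color, 3 bytes per pixel
z_bytes = W * H * 2      # 16-bit Z-buffer, 2 bytes per pixel

total_mb = (color_bytes + z_bytes) / (1024 * 1024)
print(round(total_mb, 2))  # ~2.92 MB - fits in GS's 4 MB of eDRAM
```

So with 24-bit color and 16-bit Z the supersampled buffers come to roughly 2.9 MB, leaving around 1 MB of the 4 MB eDRAM for textures and the display buffer, which supports the "4MB is enough" answer above.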
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.