MEME: Sony engineering better than everyone else *cleanup*

Discussion in 'Graphics and Semiconductor Industry' started by w0lfram, Jan 28, 2019.

  1. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,004
    Likes Received:
    2,509
    Location:
    Well within 3d
    Wouldn't that have been the original plan for the PS3? It was alleged that RSX was chosen over including a bespoke graphics companion to the host Cell processor.
    Cell and its SPEs had some philosophical similarities to the EE, with a number of DSP-like side processors, a scratchpad-enhanced memory hierarchy, and an aggressive on-die network.

    The PS2's graphics subsystem stretched across two chips; among other things, it had a 128-bit on-die bus, a direct 64-bit link to the GS, and a more complex memory mapping scheme, with the CPU and other units able to map memory in a heterogeneous way.
    If Cell's SPEs were predominantly the VPU portion of an evolved EE, then whatever would have been the other PS3 chip could have been the next version of the GS.
    I've only seen rumors about what the PS3 might have wanted to be, and I didn't start researching architectures significantly until the generation after the PS2, which limits my familiarity. The era of the Xbox 360, PS3, multicore x86, R600, G80, and POWER chips was a high-water mark in terms of ready accessibility and evangelism for hardware architectures.

    As for whether the T&L path taken by PC architectures was a mistake versus the PS2's VPU arrangement, for whatever reason the latter was supplanted by the former.
    I can see the conceptual appeal of the more flexible programmable path, although looking at the system diagram of the PS2 there are elements that would have likely been counterproductive in the realm of PC-industry graphics ASICs on PCI/AGP expansion buses.
    The VPUs themselves plug directly into the main memory bus, and I believe they had some level of interaction with an MMU and its page table contexts. For the more standardized PC industry, having that bespoke hardware on-die at the time would have been a general non-starter. The CPUs had more need of the precious die space, and the system architectures were not amenable to the more complicated view of memory that the EE had. Even now, generations hence, I'm not sure the level of interaction GPUs have in the x86 virtual memory system is as free, with the IOMMU imposing some hefty isolation.
    The bus between the EE and GS offered bandwidth that expansion bus protocols would take a number of generations to match, and the relationship with slave devices over PCI demands a stronger separation between host and graphics domains that has not been fully erased even with current APUs.
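    As a rough sketch of that bandwidth gap, here is a back-of-the-envelope comparison using the commonly cited figures (a 64-bit EE-to-GS link at the GS's 147.456 MHz clock, and the standard AGP transfer rates); treat the numbers as approximate:

        # Back-of-the-envelope: EE->GS link vs. PC expansion bus generations.
        # Figures are the commonly cited ones; treat them as approximate.
        buses_mb_per_s = {
            "EE->GS (64-bit @ 147.456 MHz)": 8 * 147.456e6 / 1e6,  # ~1180 MB/s
            "AGP 1x": 266,
            "AGP 2x": 533,
            "AGP 4x": 1066,
            "AGP 8x (~2002)": 2133,
        }
        for name, rate in buses_mb_per_s.items():
            print(f"{name:30s} {rate:6.0f} MB/s")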

    In that context, a more fixed-function front end could have made more sense. Interaction with a graphics slave chip would have been more costly in terms of latency and driver involvement, and bandwidth would lag given the needs of expansion slots and long board traces. Doing more with packaged geometry, processing it asynchronously on the other side of the divide, and abstracting more of it away for compatibility with the existing ecosystem likely solved problems that the PS2's architecture did not help with.
    By conforming to the needs of the multi-billion-dollar market still addressed by GPUs with that same architectural legacy, that approach sustained itself across multiple hardware generations in a way the PS2's implementation did not.
    Other elements, like eDRAM and the speed of inter-chip PCB connections, seemed to hit practical limits in how far they could be scaled, or in who was left to manufacture them. The more complex processor and memory hierarchy proved to have performance and developer problems that threatened the architecture more directly than the difficulties that arose with optional use of the later shader types, and in the end there was no vendor left to push it.

    While this approach seems to have had appeal for geometry, that piece alone was swamped by the silicon gains on the pixel-processing side, and by the rejection of the complexity and scaling costs specific to the Sony+Toshiba paradigm versus what the rest of the industry wanted.
    From what rumors state, that supposedly mistaken architecture is what the next generation will be using. Mesh and task shaders also do not remove the hardware paths so much as expose them, allowing software to leverage the on-die paths more effectively.
     
    AlBran, iroboto and vipa899 like this.
  2. vipa899

    Regular Newcomer

    Joined:
    Mar 31, 2017
    Messages:
    922
    Likes Received:
    348
    Location:
    Sweden
    @3dilettante

    Wow, ok, thanks for the detailed description. The PS2 was my favorite console (and still is); its hardware, while not the fastest or best of the 6th gen, was very interesting to say the least, and it produced some interesting results.



    Thought that was very impressive for hardware from the early 2000s. Of course, the system was pushed harder than any other system out there; even today it is still an impressive feat.
    Imagine if Transformers was a 30fps title!

    On another note, do you think the OG Xbox had the best-performing hardware (in general) of the three 6th-gen consoles?
     
  3. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,004
    Likes Received:
    2,509
    Location:
    Well within 3d
    I was less aware of the original Xbox at the time than the PS2. I had a PS2, but my memory is fuzzy about that far back.
    I don't recall seeing attempts at rigorous comparisons between platforms, and I think at the time the general trend was that the original Xbox could usually be counted on to give more stable performance in cross-platform titles.

    I'm running from fuzzy memory here, and from the wikis for both consoles' tech specs.
    The PS2's hardware, if well utilized by devs with the time/skill to massage it, could be pushed very far. Its peaks could be high, but there were corner cases in the architecture, and step functions based on the features used, that could bring it down to more modest levels pretty quickly.
    The Xbox's hardware had lower peaks in a number of spots, but it seemed to have more generous resources for handling non-ideal coding. It had some bottlenecks relative to the PS2, like the split between CPU and GPU vertex capability that the PS2's EE did not have, but on the other hand those bottlenecks were familiar to many devs and the tools to deal with them were more robust.

    In terms of the main CPU's general performance, not counting the VPUs or assuming they were heavily used for a graphics load, the Xbox's Pentium III appears to have been substantially more capable, and this may explain some of the PS2's performance inconsistencies.
    The VPUs would have done much better in terms of vector capability, and they contributed to some high peak vertex rates. The more complex arrangement of units and the reliance on optimal software tended to lead to significant underutilization.
    VU0, for example, made up somewhat less than half of the peak FP performance of the vector units, but in practice saw most of that peak go unused. (Unfortunately, as with many attempts to get details that far back, the source link in the following thread is dead: https://forum.beyond3d.com/threads/ps2-performance-analyzer-statistics-from-sony.7901/)
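    To put rough numbers on "somewhat less than half", a quick sketch using the commonly cited EE peak figures (~2.44 GFLOPS for VU0, ~3.08 GFLOPS for VU1); the utilization value below is purely hypothetical:

        # Commonly cited EE vector-unit peaks; the utilization is hypothetical.
        VU0_GFLOPS = 2.44
        VU1_GFLOPS = 3.08

        vu0_share = VU0_GFLOPS / (VU0_GFLOPS + VU1_GFLOPS)
        print(f"VU0 share of combined VU peak: {vu0_share:.1%}")  # ~44%

        # If a title kept VU0 only ~10% busy (hypothetical), the peak left unused:
        utilization = 0.10
        print(f"unused VU0 peak: {VU0_GFLOPS * (1 - utilization):.2f} GFLOPS")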


    The Xbox's GPU would have to take on much of the fight with the VPUs and the GS combined, which meant lower peak geometry and pixel rates. Setting aside the complexity of leveraging the PS2's eDRAM arrangement, there were some significant steps down in throughput based on how many features were employed at once. The pixel engine lost half its throughput if texturing was enabled, for example, and other features dropped the rates further as they were enabled. Geometry and pixel fillrate could be very high for the PS2 for simple output, although the single-texturing rate looks a bit modest compared to the other high peaks.
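    As a quick sketch of those step functions, using the commonly cited GS figures (16 pixel pipelines at 147.456 MHz, halved to 8 when texturing is enabled); illustrative, not measured:

        # GS fillrate step function with commonly cited figures (illustrative).
        GS_CLOCK_HZ = 147.456e6
        PIPES_UNTEXTURED = 16   # all pixel engines writing flat/Gouraud pixels
        PIPES_TEXTURED = 8      # throughput halves once texturing is enabled

        untextured = GS_CLOCK_HZ * PIPES_UNTEXTURED / 1e9
        textured = GS_CLOCK_HZ * PIPES_TEXTURED / 1e9
        print(f"untextured fill: {untextured:.2f} Gpixels/s")  # ~2.36
        print(f"textured fill:   {textured:.2f} Gpixels/s")    # ~1.18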

    The NV2A didn't have the same raw numbers, but it seems that it could sustain more performance with more pixel effects applied. The PS2's fast eDRAM was also more limited in size, and that could lead to reducing pixel/texture detail to avoid additional passes for tiling purposes.
    I'm even hazier on this, but in terms of the big gap between the CPU and GPU in the PC architecture I mentioned earlier: I thought this was bolstered by discussion a while ago about how the PS2 could more readily create its desired output by burning more of its peak geometry and pixel throughput through multiple low-overhead submissions of the same geometry, versus the PC method of reducing the number of passes while cramming in more effects of higher complexity per pass.
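    A toy model of the two strategies (the pass counts, overdraw, and resolution below are hypothetical placeholders; only the shape of the trade-off matters):

        # Toy model: multipass resubmission (PS2-style) vs. one multitexture
        # pass (PC-style). All inputs are hypothetical placeholders.
        PIXELS = 640 * 448   # a typical SD render target
        FPS = 60

        def fill_consumed(passes, overdraw=2.0):
            """Pixels written per second when the scene is resubmitted `passes` times."""
            return PIXELS * overdraw * passes * FPS

        ps2_style = fill_consumed(passes=4)  # 4 cheap blended passes burn raw fill
        pc_style = fill_consumed(passes=1)   # 1 pass, layering textures in-pipe

        print(f"PS2-style fill consumed: {ps2_style / 1e6:.0f} Mpixels/s")  # ~138
        print(f"PC-style fill consumed:  {pc_style / 1e6:.0f} Mpixels/s")   # ~34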

    https://forum.beyond3d.com/threads/ps2-vs-ps3-vs-ps4-fillrate.55200/
    https://forum.beyond3d.com/threads/questions-about-ps2.57768/

    As time progressed, some of the assumptions baked into the PS2 became less tenable. eDRAM figured highly in the GS architecture, but the manufacturing base and physical scaling of eDRAM processes did not keep pace. The 360 used it, but it seems by that point the needs of the GPU silicon proper did not allow for it to be on-die. Nintendo kept using eDRAM, although it became increasingly limited both in what it could achieve and in manufacturing (the last aging node for eDRAM at the last fab still offering it). The aggressive connectivity allowed by the on-die arrangement helped give the PS2 its high fillrate ceiling, but also bound future scaling to how well connectivity and capacity could scale.
    The PS2's image quality did suffer from the capacity limitations, and the 360's eDRAM capacity constraints could be felt too. The Xbox One's ESRAM wasn't eDRAM, but its capacity limits were noticed as well.
    The overall trend seems to be that demand for capacity outstripped what could be delivered on-die. Relying on the high bandwidth from multiple on-die connections also rode a weaker scaling curve, as interconnection proved more challenging to increase versus transistors.
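    A back-of-the-envelope on that capacity pressure, using the commonly cited on-die sizes (4 MB for the GS, 10 MB for the 360's eDRAM, 32 MB for the Xbox One's ESRAM) and assuming 4 bytes of color plus 4 bytes of depth/stencil per sample:

        # Rough framebuffer footprints vs. commonly cited on-die capacities.
        # Assumes 4 B color + 4 B depth/stencil per sample; purely illustrative.
        def framebuffer_mib(width, height, msaa=1, bytes_per_sample=8):
            return width * height * msaa * bytes_per_sample / 2**20

        print(f"{framebuffer_mib(640, 448):.1f} MiB")           # ~2.2, tight in the GS's 4 MB
        print(f"{framebuffer_mib(1280, 720, msaa=4):.1f} MiB")  # ~28, forces tiling in 10 MB
        print(f"{framebuffer_mib(1920, 1080):.1f} MiB")         # ~16, vs. the XB1's 32 MB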
    The more complex programming model, high-peak finicky hardware, and many pools of different memory also became less tolerated as time went on.

    (edit: fixed link)
     
    #63 3dilettante, Feb 13, 2019
    Last edited: Feb 13, 2019
    vipa899 likes this.
  4. vipa899

    Regular Newcomer

    Joined:
    Mar 31, 2017
    Messages:
    922
    Likes Received:
    348
    Location:
    Sweden
    @3dilettante

    Thanks for your, as usual, detailed explanations about the architectures; it's in line with what I've heard before here on B3D, I think.
    I understand from you too that, in general, the OG Xbox was quite a bit more capable, if pushed as much as the PS2, of course?

    Was testing this title today, Unreal Championship 2; I feel it pushed the Xbox quite far.

     
  5. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,004
    Likes Received:
    2,509
    Location:
    Well within 3d
    I had less exposure to the Xbox, although there would be games that had above-average optimization for the platform.
    A game that pushed one console very hard would be different from a game targeting the other, enough so that making a direct comparison might be debatable.
    The Xbox did have a fair number of extra features and storage that provided value-add beyond raw performance, and there was the benefit of hardware that made getting decent or consistent performance easier to achieve.
    The market started to shift towards pixel quality, which the Xbox favored.
    I think there were PS2 games that were standouts in terms of what they delivered, and the Xbox had its own standouts and was able to provide a higher average baseline even for titles that didn't push it.
     