@3dilettante
Wow, ok, thanks for the detailed description. The PS2 was my favorite console (and still is). Its hardware, while not the fastest or best of the 6th gen, was very interesting to say the least, and interesting results were shown.
I thought that was very impressive for hardware from the early 2000s. Of course the system was pushed more than any system out there, even today; it still is an impressive feat.
Imagine if transformers was a 30fps title!
On another note, do you think the OG Xbox had the best-performing hardware (in general) of the three 6th gen consoles?
I was less aware of the original Xbox at the time than the PS2. I had a PS2, but my memory is fuzzy about that far back.
I don't recall seeing attempts at rigorous comparisons between platforms, and I think at the time the general trend was that the original Xbox could usually be counted on to give more stable performance in cross-platform titles.
I'm running from fuzzy memory here, plus the wiki tech specs for both systems.
The PS2's hardware, if well utilized by devs with the time/skill to massage it, could be pushed very far. Its peaks could be high, but there were corners in the architecture, and step functions based on which features were used, that could bring it down to more modest levels pretty quickly.
The Xbox's hardware had lower peaks in a number of spots, but it seemed to have more generous resources for handling non-ideal coding. It had some bottlenecks relative to the PS2, like the split between CPU and GPU vertex capability that the PS2's EE did not have, but on the other hand those bottlenecks were familiar to many devs and the tools to deal with them were more robust.
In terms of general-purpose performance on the main CPU, not counting the VPUs (or assuming they were heavily occupied with a graphics load), the Xbox's Pentium III appears to have been substantially more capable, and this may explain some of the performance inconsistencies on the PS2.
The VPUs would have done much better in terms of vector capability, and they contributed to some high peak vertex rates. The more complex arrangement of units and the reliance on optimal software tended to lead to significant underutilization.
VU0, for example, made up somewhat less than half of the peak FP performance of the vector units, but in practice most of that peak went unused. (Unfortunately, as with many attempts to get details from that far back, the source link in the following thread is dead:
https://forum.beyond3d.com/threads/ps2-performance-analyzer-statistics-from-sony.7901/)
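Some quick arithmetic on that point. The GFLOPS figures below are the commonly cited Emotion Engine peaks for VU0 and VU1; the utilization fractions are hypothetical stand-ins for "most of that peak unused", not measured values:

```python
# Commonly cited peak vector throughput of the EE's two vector units.
VU0_GFLOPS = 2.44
VU1_GFLOPS = 3.08

def effective_vector_gflops(vu0_util, vu1_util):
    """Aggregate VU throughput at the given utilization fractions (0.0-1.0)."""
    return VU0_GFLOPS * vu0_util + VU1_GFLOPS * vu1_util

peak = effective_vector_gflops(1.0, 1.0)       # 5.52 GFLOPS combined peak
share = VU0_GFLOPS / peak                      # ~44%: "somewhat less than half"
# Hypothetical utilizations: VU0 mostly idle, VU1 busy with geometry.
realistic = effective_vector_gflops(0.1, 0.8)

print(f"VU0 share of combined peak: {share:.0%}")
print(f"effective vs peak: {realistic:.2f} / {peak:.2f} GFLOPS")
```

With numbers like these, leaving VU0 mostly idle gives up close to half of the combined vector peak, which is the kind of underutilization the performance-analyzer statistics pointed at.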
The Xbox's GPU had to take on much of the fight with the VPUs and the GS combined, which meant lower peak geometry and pixel rates. Complexity in leveraging the PS2's more involved eDRAM arrangement aside, there were some significant steps down in power depending on how many features were employed at once. The pixel engine lost half its throughput if texturing was enabled, for example, and other features dropped the rates further as they were turned on. Geometry and pixel fillrate could be very high on the PS2 for simple output, although the single-texturing rate looks a bit modest next to the other high peaks.
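To put rough numbers on that step function, here's a sketch using the commonly cited GS figures (16 pixel pipelines at 147.456 MHz); treat the texturing penalty as the one documented halving, not a full model of every feature's cost:

```python
GS_CLOCK_MHZ = 147.456  # GS core clock
PIXEL_PIPES = 16        # pixel pipelines in the Graphics Synthesizer

def gs_fillrate_mpixels(texturing=False):
    """Approximate GS pixel fillrate in megapixels/s.

    Enabling texturing halves throughput, since only half of the
    pipelines can process textured pixels per clock.
    """
    rate = GS_CLOCK_MHZ * PIXEL_PIPES  # ~2360 Mpixels/s flat-shaded peak
    if texturing:
        rate /= 2                      # ~1180 Mpixels/s single-textured
    return rate

print(f"flat shaded: {gs_fillrate_mpixels():.0f} Mpixels/s")
print(f"textured:    {gs_fillrate_mpixels(texturing=True):.0f} Mpixels/s")
```

The headline fillrate figure only holds for untextured output; each feature enabled steps the real number down from there.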
The NV2A didn't have the same raw numbers, but it seems that it could sustain more performance with more pixel effects applied. The PS2's fast eDRAM was also more limited in size, and that could lead to reducing pixel/texture detail to avoid additional passes for tiling purposes.
I'm even hazier on this, but regarding the big gap between the CPU and GPU in the PC architecture I mentioned earlier: I thought this was bolstered by discussion a while ago about how the PS2 could more readily produce its desired output by burning its high peak geometry and pixel throughput on multiple low-overhead submissions of the same geometry, versus the PC method of reducing the number of passes while cramming more, higher-complexity effects into each pass.
https://forum.beyond3d.com/threads/ps2-vs-ps3-vs-ps4-fillrate.55200/
https://forum.beyond3d.com/threads/questions-about-ps2.57768/
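A back-of-envelope way to see the tradeoff between those two strategies. All of the scene counts here are hypothetical illustration, and the two-layers-per-pass figure stands in for something like NV2A-style dual texturing:

```python
def multipass_cost(tris, pixels, layers):
    """Geometry submitted and pixels written when each effect layer
    is rendered as its own cheap pass over the same geometry (PS2 style)."""
    return {"tris_submitted": tris * layers,
            "pixels_written": pixels * layers}

def single_pass_cost(tris, pixels, layers, units_per_pass=2):
    """Same workload when the GPU combines layers into fewer,
    more complex passes (PC/NV2A style)."""
    passes = -(-layers // units_per_pass)  # ceiling division
    return {"tris_submitted": tris * passes,
            "pixels_written": pixels * passes}

# Hypothetical scene: 100k triangles, 2M covered pixels, 4 effect layers.
scene = dict(tris=100_000, pixels=2_000_000, layers=4)
print("multipass:  ", multipass_cost(**scene))
print("single-pass:", single_pass_cost(**scene))
```

The multipass route consumes several times the raw geometry and fill throughput, which is exactly the budget the PS2 had in abundance, while the single-pass route spends less raw throughput but needs more capable per-pass hardware.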
As time progressed, some of the assumptions baked into the PS2 became less tenable. eDRAM figured prominently in the GS architecture, but the manufacturing base and physical scaling of eDRAM processes did not keep pace. The 360 used it, but it seems by that point the needs of the GPU silicon proper did not allow it to be on-die. Nintendo kept using eDRAM, although it was increasingly limited both in what it could achieve and in manufacturing terms (the last aging node for eDRAM at the last fab offering it). The aggressive connectivity allowed by the on-die arrangement helped give the PS2 its high fillrate ceiling, but it also bound future scaling to connectivity and capacity scaling.
The PS2's image quality did suffer from the capacity limitations, and the 360's eDRAM capacity constraints could be felt as well. The Xbox One's ESRAM wasn't eDRAM, but its capacity limits were noticed as well.
The overall trend seems to be that demand for capacity outstripped what could be delivered on-die. Relying on high bandwidth from multiple on-die connections also rode a weaker scaling curve, as interconnect proved harder to scale than transistors.
The more complex programming model, high-peak finicky hardware, and many pools of different memory also became less tolerated as time went on.
(edit: fixed link)