What metric is being used to measure complexity? Amount of hardware in general, or what was exposed to the programmer?
Also who is being compared in each generation?
The original Playstation had a CPU and some dedicated processors on-die, with a separate graphics chip.
The PS2 had a CPU with dedicated processors, though it included a non-standard on-die bus between the CPU and vector units as well as scratchpad memory.
In both cases, a portion of the CPU die went to silicon that had to be programmed for geometry processing. The PS2's graphics chip had EDRAM, although the graphics elements were primarily concerned with pixel processing.
The PS2 also included a PS1 processing element that served in an IO capacity when it wasn't being tasked with backwards compatibility.
Much of this was exposed at a lower level and without the level of hardware management and protection common today.
The original Xbox had a variant of a commodity x86 processor, which was straightforward to program despite having a comparatively large amount of internal complexity. The GPU was a variant of a PC architecture GPU with hardware T&L.
The PS3 had a similar CPU+processing element concept, although the SPEs were tasked with more than geometry (they did rather well with the geometry tasks they were given). There was one general purpose core that could be programmed in a relatively straightforward manner, and the SPEs were architecturally distinct programming targets with an explicit and non-standard memory organization. This was paired with an unusually standard GPU, for Sony. The apparent story there is that Sony's original plan for a more exotic solution fell through.
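To illustrate what "explicit and non-standard memory organization" meant in practice, here is a minimal C sketch of the Cell-style pattern: the SPE works out of a small private local store and has to pull data in from main memory explicitly before computing on it, rather than simply dereferencing a pointer the way the general purpose core can. The `dma_get`/`dma_put` helpers and `local_store` buffer below are placeholders standing in for the real Cell SDK DMA intrinsics, not the actual API.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define LOCAL_STORE_SIZE (256 * 1024)   /* each SPE had a small private local store */
#define CHUNK 4096

/* Stand-ins for the real DMA intrinsics: on the actual hardware these would
 * queue asynchronous transfers between main memory and the local store. */
static void dma_get(void *local, const void *main_mem, size_t size) { memcpy(local, main_mem, size); }
static void dma_put(void *main_mem, const void *local, size_t size) { memcpy(main_mem, local, size); }

static uint8_t local_store[LOCAL_STORE_SIZE];  /* the only memory the SPE-style code touches directly */

/* General purpose core: just walks main memory through the normal cache hierarchy. */
static void ppe_style_scale(float *data, size_t count, float k)
{
    for (size_t i = 0; i < count; ++i)
        data[i] *= k;
}

/* SPE style: explicitly stage each chunk into the local store, compute, write back. */
static void spe_style_scale(float *main_data, size_t count, float k)
{
    float *ls = (float *)local_store;
    size_t per_chunk = CHUNK / sizeof(float);

    for (size_t base = 0; base < count; base += per_chunk) {
        size_t n = (count - base < per_chunk) ? (count - base) : per_chunk;
        dma_get(ls, main_data + base, n * sizeof(float));    /* pull a chunk in */
        for (size_t i = 0; i < n; ++i)
            ls[i] *= k;                                      /* work only on local data */
        dma_put(main_data + base, ls, n * sizeof(float));    /* push results back out */
    }
}

int main(void)
{
    static float a[10000], b[10000];
    for (int i = 0; i < 10000; ++i) a[i] = b[i] = (float)i;

    ppe_style_scale(a, 10000, 2.0f);
    spe_style_scale(b, 10000, 2.0f);
    printf("results match: %d\n", memcmp(a, b, sizeof(a)) == 0);
    return 0;
}
```

The point of the sketch is just the staging loop: the programmer, not the hardware, decides what lives in the fast local memory and when it moves, which is a big part of why the SPEs were a distinct programming target rather than three more conventional cores.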
The Xbox 360 had a custom CPU, but it was a uniform set of three general purpose cores. The GPU was a unified architecture with an EDRAM pool.
The PS4 design is an APU that is mostly standard. The Xbox One had the ESRAM, a memory pool that introduced complexity, although in terms of how it was integrated into the system it was intended to be even easier to use than what was considered acceptable with the Xbox 360's EDRAM.
The current gen consoles are APUs, and it comes down to secondary hardware blocks and ancillary elements like IO, variations in IP, or bus width to distinguish them.
Is this the claim that was corrected a few posts ago? This seems like a misstatement and a mislabelling. The OS's primary footprint is in the 6GB region, but it occupies only a fraction of that region.
Was this the choice of 320 bits versus wider? The differently handled address ranges wouldn't seem to matter electrically. The split is a matter of the capacities of the chips on the bus; the bus width itself is not affected by chip capacity.
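As a back-of-the-envelope illustration of that last point, take a hypothetical 320-bit bus built from ten 32-bit GDDR6 devices where six are 2GB parts and four are 1GB parts (figures chosen only to show the mechanism, not to assert the actual board layout). The first portion of the address space can interleave across all ten chips at the full bus width, while the remainder only exists on the six larger chips and so stripes across a narrower slice of the bus.

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical configuration, chosen only to illustrate the mechanism:
     * a fixed 320-bit bus made of ten 32-bit devices with mixed capacities. */
    const int chips             = 10;
    const int bits_per_chip     = 32;
    const int big_chips         = 6;   /* 2 GB parts */
    const int small_chips       = 4;   /* 1 GB parts */
    const int big_capacity_gb   = 2;
    const int small_capacity_gb = 1;

    int bus_bits = chips * bits_per_chip;                                        /* 320 bits, fixed */
    int total_gb = big_chips * big_capacity_gb + small_chips * small_capacity_gb; /* 16 GB */

    /* The region that interleaves across all ten chips is limited by the
     * smallest chip: 1 GB on every chip = 10 GB at the full bus width. */
    int wide_gb   = chips * small_capacity_gb;
    int wide_bits = bus_bits;

    /* Whatever is left only exists on the larger chips, so it can only
     * stripe across those six devices. */
    int narrow_gb   = total_gb - wide_gb;            /* 6 GB */
    int narrow_bits = big_chips * bits_per_chip;     /* 192 bits */

    printf("bus width: %d bits (unchanged by chip capacities)\n", bus_bits);
    printf("%d GB interleaved across %d bits\n", wide_gb, wide_bits);
    printf("%d GB interleaved across %d bits\n", narrow_gb, narrow_bits);
    return 0;
}
```

In other words, the asymmetric region falls out of which chips have extra capacity; fitting larger parts everywhere would remove the split without touching the bus at all.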