Yes, I would tend to agree. On the other hand, Sony's design contains hardware features (faster hardware decompressor, cache scrubbers, 12-channel SSD) that appear to be refinements beyond what Microsoft has done.
Microsoft's compression choice may give it better average compression ratios for game content: the sustained SSD bandwidth figures both vendors quoted seemed to close the gap more than the raw bandwidth numbers alone would indicate.
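For what it's worth, the vendors' publicly announced figures (Sony: 5.5 GB/s raw, ~8-9 GB/s typical with Kraken; Microsoft: 2.4 GB/s raw, 4.8 GB/s with BCPack) sketch out like this; the exact sustained numbers are the quoted "typical" values, not guaranteed rates:

```python
# Quick ratio check using the vendors' announced SSD bandwidth figures.
# Takeaway: the better compression ratio narrows the raw-bandwidth gap.

ps5_raw, ps5_sustained = 5.5, 9.0   # GB/s, Sony's raw and "typical" figures
xsx_raw, xsx_sustained = 2.4, 4.8   # GB/s, Microsoft's raw and compressed figures

ps5_ratio = ps5_sustained / ps5_raw        # ~1.64x effective compression
xsx_ratio = xsx_sustained / xsx_raw        # 2.0x effective compression

raw_gap = ps5_raw / xsx_raw                # ~2.29x advantage in raw bandwidth
sustained_gap = ps5_sustained / xsx_sustained  # ~1.88x after compression
print(f"raw gap {raw_gap:.2f}x vs sustained gap {sustained_gap:.2f}x")
```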
I'm curious what use cases the cache scrubbers are meant for, and whether the improvement is substantial.
Die area may be devoted elsewhere, such as to 4x-or-higher rates for packed operations aimed at inference. At a minimum, the PS5 should have double-rate in keeping with 2xFP16, but going further could be optional.
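As a rough sketch of what those packing factors mean for peak throughput, assuming the PS5's announced 36 CUs at up to 2.23 GHz (the 2x and 4x factors are the speculative rates discussed above, not confirmed hardware features):

```python
# Peak-rate sketch: CUs * 64 lanes * 2 ops per FMA * clock * packing factor.
# 36 CUs at 2.23 GHz are Sony's announced PS5 figures; the packing
# factors for FP16/INT8 are the speculative rates under discussion.

def peak_tops(cus, clock_ghz, packing_factor):
    """Peak throughput in T(FL)OPS for a given packed-math rate."""
    return cus * 64 * 2 * clock_ghz * packing_factor / 1000

fp32 = peak_tops(36, 2.23, 1)  # ~10.3 TFLOPS (matches the announced figure)
fp16 = peak_tops(36, 2.23, 2)  # ~20.6 TFLOPS if 2x packed FP16 is present
int8 = peak_tops(36, 2.23, 4)  # ~41.1 TOPS if 4x packed INT8 is present
```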
Agreed. What about power usage?
Micron's GDDR6X comparison puts GDDR6 at 7.5 pJ/bit versus 7.25 pJ/bit for GDDR6X.
At 7.5 pJ/bit, 560 GB/s of GDDR6 works out to roughly 34 W versus ~27 W for 448 GB/s.
That difference would be about 7 W for the interface alone, though I think the modules themselves are ~3 W or so each, so the wider bus might add 13 W+ in total?
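A quick back-of-envelope check of those numbers, assuming Micron's ~7.5 pJ/bit figure covers the whole interface and that power scales linearly with bandwidth:

```python
# Interface power = bandwidth (bits/s) * energy per bit (J/bit).
# 7.5 pJ/bit is Micron's quoted GDDR6 figure; linear scaling with
# bandwidth is an assumption.

def interface_power_watts(bandwidth_gb_s, pj_per_bit):
    """Power drawn by the memory interface at a given bandwidth."""
    bits_per_second = bandwidth_gb_s * 1e9 * 8
    return bits_per_second * pj_per_bit * 1e-12

xsx = interface_power_watts(560, 7.5)  # ~33.6 W
ps5 = interface_power_watts(448, 7.5)  # ~26.9 W
print(f"560 GB/s: {xsx:.1f} W, 448 GB/s: {ps5:.1f} W, delta: {xsx - ps5:.1f} W")
```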
128 MB of SRAM would have measurable power consumption both active and in standby, but I haven't found recent figures as to how much that could be.
I assume it would be lower power than the extra modules, though Sony would need to weigh whether ~10 W of power matters enough.
Both of the new top consoles are squarely targeting 4K.
I'm curious how dominant native 4K will be long-term, and I think there may be some compromises given how many systems will be paired with sub-4K TVs. If some kind of DLSS-like solution did take root, internal resolution could be even lower for much of the frame's processing.
The GDDR6 bus is also much closer in bandwidth to the competition than the Xbox One's was with its ESRAM setup, and as a cache the SRAM could rotate contents in and out more flexibly than a manually managed scratchpad.