So here is a diagram I super professionally made in MS Paint that should clarify how the memory in the Series X is split between two "pools" with different bandwidths:
The "two pools" aren't really
two physical pools, that division is virtual. There are 10 chips, but 6 of them have twice the capacity. To achieve maximum bandwidth whenever possible, all data is interleaved among the 10 chips (10*32bit = 320bit). This means a 10MB file will supposedly be split into 10*1MB partitions, one for each chip, so that the memory controller can write/read from all chips
in parallel, hence using a 320bit bus.
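To make the striping idea concrete, here's a toy sketch of round-robin interleaving. The stripe granularity is my own assumption (the real controller interleaves at a much finer, burst-sized granularity), but the end result is the same: a 10MB buffer ends up spread as ~1MB per chip.

```python
from collections import Counter

NUM_CHIPS = 10
STRIPE_BYTES = 256  # hypothetical interleave granularity; the real value is an implementation detail

def chip_for_address(addr: int) -> int:
    """Map a physical address to the chip that stores it (round-robin striping)."""
    return (addr // STRIPE_BYTES) % NUM_CHIPS

# A 10MB buffer ends up spread evenly across the chips.
counts = Counter(chip_for_address(a) for a in range(0, 10 * 1024 * 1024, STRIPE_BYTES))
print(counts)  # each of the 10 chips holds 4096 stripes -> 1MB per chip
```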
But the memory controller can only interleave data across all 10 chips while all 10 chips still have space available. Once the 1GB chips are full, the controller can only interleave among the 2GB chips that still have room. There are 6 of those chips, each with 1 extra GB beyond the first, so that leaves us with 6*32bit = 192bit.
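The bandwidth numbers follow directly from the bus width and the 14 Gbps per-pin rate of the Series X's GDDR6. A quick sanity check of the arithmetic:

```python
GBPS_PER_PIN = 14  # Series X GDDR6 signalling rate: 14 Gb/s per data pin

def bandwidth_gbs(bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a given bus width at 14 Gbps per pin."""
    return bus_width_bits * GBPS_PER_PIN / 8  # /8 converts Gb/s to GB/s

print(bandwidth_gbs(10 * 32))  # all 10 chips: 320bit -> 560.0 GB/s
print(bandwidth_gbs(6 * 32))   # only the 2GB chips: 192bit -> 336.0 GB/s
```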
Of course, the system knows about this, so it makes a virtual split from the get-go: the memory addresses pointing to the red squares become the "fast pool", and the ones pointing to the orange squares become the "slow pool". This way the devs can decide whether a given piece of data goes to the fast red pool or the slow orange pool, depending on how bandwidth-sensitive it is. They're not left wondering whether e.g. a shadow map is going to be accessed at 560GB/s or 336GB/s, as that could become a real problem.
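From the developer's side, the decision could look something like the purely hypothetical sketch below. The names and API are made up by me for illustration; I'm only showing the idea of tagging allocations by bandwidth sensitivity, not the actual SDK interface.

```python
from enum import Enum

class MemPool(Enum):
    FAST = "10GB pool @ 560GB/s"
    SLOW = "6GB pool @ 336GB/s"

def choose_pool(bandwidth_sensitive: bool) -> MemPool:
    """Bandwidth-hungry GPU resources go to the fast pool; less sensitive data can live in the slow one."""
    return MemPool.FAST if bandwidth_sensitive else MemPool.SLOW

print(choose_pool(True))   # e.g. render targets, shadow maps -> MemPool.FAST
print(choose_pool(False))  # less bandwidth-sensitive data    -> MemPool.SLOW
```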
BTW, I chose to make the distinction by memory chips and not PHYs because I don't know if AMD is using 32bit or 64bit wide units (IIRC it's usually the latter), but I do know each GDDR6 chip uses a 32bit / 2*16bit connection.
Cache scrubbers sound like they'd be sensible in the PC space too, unless the programming model on PC makes this impractical.
IIRC Cerny specifically mentioned the cache scrubbers as blocks that AMD chose not to adopt for their RDNA2 PC architecture, so they stayed a feature exclusive to the PS5.