That sentence doesn't make sense in my opinion. Reads from the eSRAM obviously don't saturate the DRAM bandwidth, as the DRAM isn't touched for them. Why shouldn't it be possible to write 68GB/s to DRAM while reading 68GB/s (or even more) from the eSRAM? The other way around should work too (reading 68GB/s from DRAM while writing 68GB/s or more to the eSRAM). It's just a question of whether the connections to the DRAM and the eSRAM share some part of the fabric that limits the bandwidth. But as indicated by MS, only the connections to the DRAM and the CPU share bandwidth, i.e. the 30GB/s of coherent bandwidth is included in the 68GB/s figure. The eSRAM bandwidth is independent of this (there is no such thing as CPU cache coherent eSRAM access).
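To put numbers on that sharing claim, here is a toy sketch (the function name and the 30GB/s worst case are my own illustration, not from MS): coherent CPU traffic eats into the 68GB/s DRAM bus, while the eSRAM bus is untouched by it.

```python
# Toy arithmetic for the bandwidth sharing described above (GB/s).
# Assumption: CPU coherent traffic is carved out of the 68GB/s DRAM
# bandwidth, while the eSRAM connection is a separate, unshared path.
DRAM_BW = 68        # total DRAM bandwidth, includes coherent traffic
COHERENT_MAX = 30   # peak CPU coherent bandwidth (part of the 68)
ESRAM_BW = 102      # eSRAM bandwidth (older 102GB/s figure)

def gpu_dram_bw(cpu_coherent_use):
    """DRAM bandwidth left for the GPU after CPU coherent traffic."""
    return DRAM_BW - cpu_coherent_use

# CPU pulling its full 30GB/s leaves 38GB/s of DRAM for the GPU...
print(gpu_dram_bw(COHERENT_MAX))             # 38
# ...but the eSRAM bandwidth is unaffected, so the aggregate GPU
# bandwidth in that case is still 38 + 102 = 140GB/s.
print(gpu_dram_bw(COHERENT_MAX) + ESRAM_BW)  # 140
```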
There may be an inconsistency with the 102GB/s write and 170GB/s read bandwidth figures in the older documentation leaked by vgleaks, as these would imply that the write bandwidth is shared between DRAM and eSRAM. This could indeed be true (usually more is read than written, so it shouldn't be too much of an issue), but it could also be an oversight, for example caused by counting only the write bandwidth available to the ROPs while the read bandwidth covers the complete memory hierarchy. Traditionally one could only read through the TMUs/L1/L2/Mem path, but it has become more symmetric since Xenos, and one now has read/write capability outside of the [MEM/ROP] exports.
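The two readings of the leaked figures can be sketched as simple arithmetic (this is just my own toy model of the two interpretations, not anything from the leak itself):

```python
# Toy model of the leaked Durango bandwidth figures (GB/s).
DRAM_BW = 68
ESRAM_BW = 102

# Independent read paths: peak read is the sum over both pools,
# which matches the leaked 170GB/s read figure.
peak_read = DRAM_BW + ESRAM_BW
print(peak_read)    # 170

# Shared write path (the possible inconsistency): peak write is
# capped rather than summed, matching the leaked 102GB/s figure.
peak_write_shared = ESRAM_BW
print(peak_write_shared)    # 102

# If writes were as independent as reads, one would instead expect:
peak_write_independent = DRAM_BW + ESRAM_BW
print(peak_write_independent)   # 170
```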
edit:
As said above, this thinking could be the reason for the asymmetry. But it doesn't apply to GCN GPUs anymore: one can also write through the TMU(AGU) => L1 => L2 => mem hierarchy, not just through the ROPs.