The reality is a bit more complicated than that. DRAM is heavily optimized for localized or linear accesses where reads and writes are not mixed together. Internally, DRAM is heavily subdivided into banks, runs slower than its interface, and can't keep every row open and at the ready at all times. It also incurs a penalty whenever the bus has to switch between reads and writes.
The memory subsystem tries very hard to schedule accesses so that they hit as few bank and turnaround penalties as possible, but this isn't simple to do while also juggling other constraints like latency and balancing service across multiple clients.
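To make the turnaround cost concrete, here is a toy model of a DRAM bus that charges a fixed penalty every time the access direction flips. The cycle counts are illustrative assumptions, not real GDDR timings, and the model ignores banking and row activation entirely:

```python
# Toy model of read/write turnaround cost on a DRAM bus.
# The cycle counts below are assumed for illustration only.

ACCESS_CYCLES = 4        # assumed cost of one burst access
TURNAROUND_CYCLES = 10   # assumed penalty for a read<->write switch

def bus_cycles(stream):
    """Total cycles for a stream of 'R'/'W' accesses, charging a
    turnaround penalty every time the direction flips."""
    cycles = 0
    prev = None
    for op in stream:
        if prev is not None and op != prev:
            cycles += TURNAROUND_CYCLES
        cycles += ACCESS_CYCLES
        prev = op
    return cycles

interleaved = ['R', 'W'] * 8         # worst case: flip on every access
batched     = ['R'] * 8 + ['W'] * 8  # same work, only one flip

print(bus_cycles(interleaved))  # -> 214
print(bus_cycles(batched))      # -> 74
```

Even with these made-up numbers, the same sixteen accesses cost nearly three times as much when reads and writes alternate, which is the kind of pattern a scheduler tries to batch away.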
Ideally, the eSRAM could dispense with all of this and gladly take any mix of accesses that fits within the bounds of its read and write ports.
However, the peak bandwidth numbers and articles on the subject suggest that, for various reasons, there are at least some banking and timing considerations that make this ideal unreachable. The physical speed of the SRAM and the absence of an external bus probably mean that the perceived latency hierarchy is "flatter" than it would be if you were spamming a GDDR bus with poorly localized reads and writes.
This is presumably where the hinted-at advantages of the eSRAM for certain operations come in: workloads whose access patterns start interspersing reads with writes, or that exhibit poor locality.