According to both AMD and Nvidia, future GPUs will use HBM for their main memory, delivering 512 GB/s across 4 stacks, and the memory is sampling right now. By the time we can put 256 MB on die, HBM will have been on its second generation for a while (1 TB/s and 64 GB of total RAM in 4 stacks in 2016 or 2017). How does 256 MB on die make any sense for next gen?
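For what it's worth, here's a quick sketch of the per-stack arithmetic behind those numbers (the 1 GB and 16 GB per-stack capacities are my own assumptions chosen to make the quoted totals work out, not vendor specs):

```python
# Back-of-envelope check of the totals quoted above.
def hbm_totals(stacks, gbps_per_stack, gb_per_stack):
    """Return (total bandwidth in GB/s, total capacity in GB)."""
    return stacks * gbps_per_stack, stacks * gb_per_stack

# HBM1: 4 stacks x ~128 GB/s, assumed 1 GB per stack
print(hbm_totals(4, 128, 1))    # -> (512, 4)

# HBM2 (assumed 2016/2017 parts): 4 stacks x ~256 GB/s, assumed 16 GB per stack
print(hbm_totals(4, 256, 16))   # -> (1024, 64)
```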
If every vendor selects HBM as its next main RAM, a large internal memory pool is no longer an intelligent design. GPU/APU technology will have reached a point where stacked memory naturally solves the problems the XB1 ESRAM was trying to solve: bandwidth, pad area, cost, and power.