Having a single unified pool of memory to manage was the most requested feature from developers according to Sony.
So my question is: which would be easier to program for?
Xbox One (32MB ESRAM + 8GB DDR3), or a "hypothetical Xbox One" with a separate GPU die and memory bus, so 4GB for the CPU + 4GB for the GPU.
Is juggling the small 32MB ESRAM alongside a larger unified pool (8GB) easier or harder than completely separate CPU and GPU memory pools of 4GB each?
My guess would be unified+ESRAM, based on the fact that the 360 was the "easy" system last gen, and the PS3 with its split pools the hard one.
The bar has moved in that respect. In theory One should be no more difficult than 360. It's just PS4 moved to the simplest model of all.
Now, I get the impression the XOne's memory model is still somehow more difficult than the 360's. It seems the 360's was more plug-and-play while the One's requires more handholding, which could be both good and bad. But my impression could be wrong; the One's ESRAM may not be any more difficult than the 360's EDRAM, and the difference could be entirely relative to the competition.
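To make the "juggling" concrete, here's a rough sketch (my own illustration, not anything from an actual devkit; the surface names and sizes are made up): the handholding amounts to deciding, per frame, which render targets get to live in the 32MB fast pool and which spill out to DDR3. A unified 8GB pool with no ESRAM never forces that decision at all.

```python
# Hypothetical sketch of ESRAM-style placement: greedily pack surfaces
# into a 32MB fast pool, spill the rest to the big DDR3 pool.
# All names/sizes below are illustrative assumptions, not real engine data.
ESRAM_BUDGET = 32 * 1024 * 1024  # 32 MB fast scratchpad

def place_surfaces(surfaces, budget=ESRAM_BUDGET):
    """Return (placement dict, bytes used in the fast pool)."""
    placement = {}
    used = 0
    for name, size in surfaces:
        if used + size <= budget:
            placement[name] = "ESRAM"
            used += size
        else:
            placement[name] = "DDR3"  # spilled: slower, but it always fits
    return placement, used

# Illustrative 1080p surfaces (~7.9 MB each) plus a 16 MB shadow map.
surfaces = [
    ("color 1080p RGBA8", 1920 * 1080 * 4),
    ("depth 1080p D32",   1920 * 1080 * 4),
    ("gbuffer normals",   1920 * 1080 * 4),
    ("gbuffer albedo",    1920 * 1080 * 4),
    ("shadow map 2k",     2048 * 2048 * 4),
]
placement, used = place_surfaces(surfaces)
for name, pool in placement.items():
    print(f"{name:>20} -> {pool}")
print(f"ESRAM used: {used / 2**20:.1f} MB of 32 MB")
```

In this toy case the four 1080p targets barely fit (about 31.6 MB) and the shadow map spills, which is the flavor of tradeoff being argued about: the pool is big enough to be useful but small enough that something always gets evicted.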
The fact of the matter is, BF4, when trying to hold 60fps, keeps a consistent 10fps lead in almost every scenario, growing to a 15fps lead in higher-stress scenes, at a 40% higher resolution. This we know
What does this really have to do with the discussion anyway? Other games' results vary, "this we know". The important thing here is that you need to demonstrate a pattern of the XOne struggling more to attain 60fps than to attain 1080p, relatively speaking, not "PS4 IS CLEARLY MORE POWERFUL RAH"