LightHeaven
I would think the more concerning thing right now for MS is the fact that the difference between 720p and 1080p is closer to 100%: much larger than the ALU deficit, and closer to the ROP deficit.
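For reference, here's a rough back-of-envelope on those ratios, assuming the commonly cited specs (XB1: 768 shaders at 853MHz, 16 ROPs; PS4: 1152 shaders at 800MHz, 32 ROPs):

```python
# Rough ratios, using the commonly cited GPU specs (not official figures).
xb1_alu = 768 * 0.853e9 * 2      # FLOPS: ~1.31 TF
ps4_alu = 1152 * 0.800e9 * 2     # FLOPS: ~1.84 TF

xb1_fill = 16 * 0.853e9          # pixels/s: ~13.6 Gpix/s
ps4_fill = 32 * 0.800e9          # pixels/s: ~25.6 Gpix/s

pixels_1080 = 1920 * 1080
pixels_720 = 1280 * 720

print(f"ALU deficit:        {ps4_alu / xb1_alu - 1:.0%}")         # ~41%
print(f"ROP/fill deficit:   {ps4_fill / xb1_fill - 1:.0%}")       # ~88%
print(f"720p -> 1080p gap:  {pixels_1080 / pixels_720 - 1:.0%}")  # 125%
```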
The engines in use for these early games aren't designed around the limited size of the ESRAM, and the prevalence of deferred renderers probably isn't helping, which likely explains some of the deficit. But I would be concerned that the virtualization of the GPU is introducing significant overhead, or that the limited ROPs are an issue.
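To put a rough number on that: a fairly typical deferred G-buffer at 1080p can overflow the 32MB of ESRAM on its own. The layout below is hypothetical, just to show the scale, not anything a shipped engine actually uses:

```python
# Hypothetical 1080p deferred-rendering footprint vs. the 32MB ESRAM.
# The target list is an illustrative layout, not any real engine's.
W, H = 1920, 1080
targets = {
    "albedo (RGBA8)":        4,   # bytes per pixel
    "normals (RGBA8)":       4,
    "material (RGBA8)":      4,
    "motion/misc (RGBA8)":   4,
    "depth/stencil (D24S8)": 4,
    "light accum (RGBA16F)": 8,
}

total_mb = sum(bpp * W * H for bpp in targets.values()) / (1024 ** 2)
esram_mb = 32

print(f"G-buffer + light buffer: {total_mb:.1f} MB")   # ~55.4 MB
print(f"ESRAM budget:            {esram_mb} MB")
```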
Having never worked on an XB1, I would assume that juggling the ~100MB of render targets most modern games use wouldn't be a huge problem, and that you should be able to get 80% of the way to optimal relatively easily, but maybe it's harder than I imagine, especially considering the launch timeframe.
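As a crude sketch of the kind of juggling that would involve (target sizes and bandwidth scores below are invented for illustration, not XB1-specific): rank the targets by how bandwidth-hungry they are per MB and greedily pack the worst offenders into the 32MB, spilling the rest to DDR3.

```python
# Greedy "which render targets go in ESRAM" sketch. Sizes and scores are
# made up for illustration; nothing here reflects how any real title was laid out.
ESRAM_MB = 32

# (name, size in MB, rough bandwidth intensity score per frame)
targets = [
    ("light accum (RGBA16F)", 16.6, 9.0),
    ("depth/stencil",          8.3, 7.0),
    ("albedo",                 8.3, 3.0),
    ("normals",                8.3, 3.0),
    ("shadow map",            16.0, 2.5),
    ("post-process scratch",  16.6, 2.0),
]

def place(targets, budget_mb):
    """Put the most bandwidth-intensive-per-MB targets into ESRAM first."""
    esram, ddr3, used = [], [], 0.0
    for name, size, bw in sorted(targets, key=lambda t: t[2] / t[1], reverse=True):
        if used + size <= budget_mb:
            esram.append(name)
            used += size
        else:
            ddr3.append(name)
    return esram, ddr3, used

esram, ddr3, used = place(targets, ESRAM_MB)
print(f"ESRAM ({used:.1f}/{ESRAM_MB} MB): {esram}")
print(f"DDR3: {ddr3}")
```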
It would be interesting to know why various tradeoffs were chosen and what buffers were put where, but I'd guess we'll never know.
Do you think there is any chance that, for whatever reason, they ended up putting their render targets in DDR3 and the ROPs are being bandwidth starved? Or do you think that even if they managed to fit the most ROP-intensive targets in ESRAM, they might still run into fillrate issues?
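For context on the first half of that question, the raw numbers do make DDR3-resident colour targets look tight, assuming the commonly quoted figures (16 ROPs at 853MHz, ~68GB/s of DDR3 bandwidth, 109GB/s minimum for the ESRAM):

```python
# Peak ROP write traffic vs. memory-pool bandwidth (commonly quoted figures).
rops, clock_hz = 16, 0.853e9
bytes_per_pixel = 4                 # 32-bit colour target

write_gbps = rops * clock_hz * bytes_per_pixel / 1e9    # ~54.6 GB/s
blend_gbps = write_gbps * 2                             # read + write when blending

ddr3_gbps = 68.3                    # shared with the CPU and everything else
esram_gbps = 109.0                  # quoted minimum; higher with mixed read/write

print(f"Peak write traffic:  {write_gbps:.1f} GB/s")
print(f"With alpha blending: {blend_gbps:.1f} GB/s")
print(f"DDR3 bandwidth:      {ddr3_gbps} GB/s (shared)")
print(f"ESRAM bandwidth:     {esram_gbps}+ GB/s")
```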