No, Baker cited a whole bunch of benefits, while Goossen later suggested that it was in place by virtue of the successes of the 360's eDRAM. There's a difference between Baker saying "it solves a whole bunch of our design goals" and Goossen saying "we view it as an extension of the eDRAM in 360". There's an implied timeline there that seems to run counter to the one you've described.
Keep reading. Goossen echoes the point I made about context. When you view the eSRAM's inclusion as the result of 8GB of DDR3, you do so incorrectly, and that improper perspective colors the interpretation of its inclusion. Read what Baker said again, this time in the context of the questioning...
Note the context. DF asked specifically about the eSRAM's role in deciding to rule out GDDR5 here. It was NOT a question premised on why the eSRAM was included to begin with. Baker is simply saying that there are lots of engineering benefits to eSRAM + DRAM. Goossen notes that their thinking on the eSRAM was very deliberately as an evolution of the 360's eDRAM.
The bolded is a strawman. Their design goals consisted of a variety of things, but the specific inclusion, according to them, was as much, if not more so, motivated by wanting to expand on the eDRAM's successes and make it more flexible.
You are confusing benefits with a priori design goals imho. We don't disagree that it solves tons of important challenges and helps achieve their design goals. What we disagree on is the motivating priority for its inclusion. My view is that they decided, immediately upon a post-mortem of the 360, that they wanted to amplify the eDRAM's successes and it happened to be a natural, perfect fit for accommodating other design goals drafted thereafter.
I disagree on the implied ordering of the decisions you've outlined here. This all would have been born out of a post-mortem on 360. Before they would even bother thinking seriously about design goals they would have looked very hard at what worked and what didn't on 360. My contention is that they likely knew they wanted eSRAM as a direct result of that before targeting bandwidth or power consumption metrics.
They went in only after doing a detailed post-mortem of the previous console and deciding they'd like the opportunity to expand on its architectural successes and erase some of its weaknesses. Do you really disagree with this?
As Shifty said, both would have influenced their decision to use esram. The fact that Microsoft already has a lot of experience with embedded memory, and no doubt had lots of ideas on how to expand on and improve that solution, would certainly have nudged them towards it. However, I still maintain that ultimately the deciding factor would have been the requirement to have a large main memory pool. If the XB1 was only ever going to target 2GB of main memory then I think they'd have been far more likely to just go with a high-speed GDDR5 interface and be done with it. After all, the improvements the esram brings in usability over the edram are only aimed at reducing the usability deficit of such a memory configuration compared to a single pool of high-speed memory.
The other two potential benefits are bandwidth (which according to the initial spec was only going to be on par with a fast pool of GDDR5 anyway) and latency, which the engineers practically went out of their way to avoid talking about.
Your argument hinges on the idea that they would have designed the XB1 by looking at the XB360 design and evolving it (which no doubt would have been part of the equation). But technology is different today than it was in 2005. In 2005, if you wanted high bandwidth from a single memory pool without embedded memory, you'd have had to use a prohibitively expensive 256-bit memory interface. Today, that's a viable option, and so the correct high-level choices for 2005 would not necessarily be the correct high-level choices for today; thus some element of 'going back to the drawing board' would certainly have been required.
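As a rough illustration of why the bandwidth picture has changed, here's a quick back-of-envelope calc. The figures are the commonly cited leaked/announced numbers, so treat them as assumptions rather than confirmed specs:

```python
# Back-of-envelope peak bandwidth figures (commonly cited leaked numbers,
# treated here as assumptions, not confirmed specs).

def bandwidth_gbs(transfer_rate_mts, bus_width_bits):
    """Peak bandwidth in GB/s for a given transfer rate (MT/s) and bus width."""
    return transfer_rate_mts * (bus_width_bits / 8) / 1000

ddr3  = bandwidth_gbs(2133, 256)   # ~68.3 GB/s main memory (DDR3-2133, 256-bit)
gddr5 = bandwidth_gbs(5500, 256)   # ~176 GB/s, a "fast pool of GDDR5" on 256-bit
esram_initial = 102.4              # GB/s, eSRAM figure from the early leaked spec

print(f"DDR3-2133, 256-bit:        {ddr3:.1f} GB/s")
print(f"GDDR5 5.5Gbps, 256-bit:    {gddr5:.1f} GB/s")
print(f"DDR3 + eSRAM (initial):    {ddr3 + esram_initial:.1f} GB/s")
```

Which is the point: a 256-bit GDDR5 bus alone now gets you roughly what the DDR3 + eSRAM combination was initially specced to deliver in aggregate, whereas in 2005 that kind of wide external interface simply wasn't a realistic option.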
You seem to be moving the goalposts, as I never claimed to have information that detailed. I said they mentioned the low latency in the dev docs. Specifically, it is cited as a boon to performance in the CBs/DBs.
http://www.vgleaks.com/durango-gpu-2/2/
My point in asking for that level of detail was to specifically prevent a vague reference being held up as proof of some kind of notable esram latency advantage.
The vgleaks article does reference lower latency. But do we know that's come directly from Microsoft and isn't just a VGleaks assumption about embedded memory? And even if it does come direct from Microsoft, without specific numbers and in light of the downplaying of any latency advantages in the recent interview, how do we know that the stated advantage is significant?
He doesn't actually say much of anything there. He doesn't deny it nor confirm it.
Which in an interview specifically aimed at extolling the advantages of the XB1 architecture is extremely telling IMO.
I seem to recall sebbbi detailing some reasons a while back about why it could be very useful. It's also cited in the Kinect GPGPU comment as being important there.
FWIW, I agree that it seems odd for Baker to not mention it at the ideal opportunity, but on the other hand we have the info from dev docs and ERP/sebbbi have both talked it up as a potential benefit iirc.
I think the key word there is potential. Those guys obviously know what they are talking about, so if they say there's a specific advantage of a low-latency embedded memory solution then you can take that to the bank. However, I don't think anyone has said there is an advantage so far, merely that if the memory is truly low latency enough then it would have these advantages. The key is whether that low latency is real, and it's now starting to look as though it may not be. At least not to a significant enough extent to be notable when questioned about its performance-adding potential.
Sure thing...here ya go, from the article we are discussing:
The highlighted part of that quote (below) has me a little confused. Are they talking specifically about low-latency memory, or are they talking generally about the latency-hiding abilities of GPUs (large number of threads etc.) being the key performance driver for this particular GPGPU application?
Andrew Goossen said: I will say that we do have quite a lot of experience in terms of GPGPU - the Xbox 360 Kinect, we're doing all the Exemplar processing on the GPU so GPGPU is very much a key part of our design for Xbox One. Building on that and knowing what titles want to do in the future. Something like Exemplar... Exemplar ironically doesn't need much ALU. It's much more about the latency you have in terms of memory fetch [latency hiding of the GPU], so this is kind of a natural evolution for us. It's like, OK, it's the memory system which is more important for some particular GPGPU workloads.
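For what it's worth, here's a rough Little's-law sketch of the distinction being asked about. The numbers are purely illustrative (not official Xbox One latencies); the point is that a GPU normally hides DRAM latency by keeping many requests in flight across thousands of threads, whereas a genuinely lower-latency memory pool would simply need less in flight to stay busy:

```python
# Rough Little's-law sketch of GPU latency hiding.
# All latency/bandwidth numbers are illustrative assumptions, not official specs.

def requests_in_flight(bandwidth_gbs, latency_ns, request_bytes=64):
    """Outstanding memory requests needed to sustain peak bandwidth:
    bytes in flight = bandwidth * latency, divided by bytes per request."""
    bytes_in_flight = bandwidth_gbs * latency_ns  # GB/s * ns works out to bytes
    return bytes_in_flight / request_bytes

# External DRAM with a few hundred ns of latency vs. a hypothetical
# lower-latency on-chip pool at similar bandwidth.
print(requests_in_flight(68, 300))   # ~319 outstanding requests needed
print(requests_in_flight(102, 50))   # ~80 outstanding requests needed
```

Reading the quote either way works grammatically, but the bracketed "[latency hiding of the GPU]" note suggests he's talking about the GPU's ability to cover memory fetch latency with threads, rather than claiming the eSRAM itself is especially low latency.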