No, I am not perpetuating a myth. I'm restating the evidence as it was presented to us. Did you read the article? They stated very clearly that the reasoning behind the inclusion of the eSRAM was to allow for a large amount of main memory while maintaining high bandwidth at low cost and low power draw. Let me re-post the relevant part of the article in case you missed it:
No, they cited a whole bunch of benefits (Baker did) while Goossen later on suggested that it was in place by virtue of the successes of the 360's eDRAM. There's a difference between Baker saying "it solves a whole bunch of our design goals" and Goossen saying "we view it as an extension of the eDRAM in 360". There's an implied timeline there that seems to run counter to the one you've described.
Can it really get any plainer than that?
Keep reading. Goossen echoes the point I made about context. When you view the eSRAM's inclusion as the result of 8GB of DDR3, you do so incorrectly, and that improper perspective colors the interpretation of its inclusion. Read what Baker said again, this time in the context of the questioning...
Digital Foundry: Perhaps the most misunderstood area of the processor is the ESRAM and what it means for game developers. Its inclusion sort of suggests that you ruled out GDDR5 pretty early on in favour of ESRAM in combination with DDR3. Is that a fair assumption?
Nick Baker: Yeah, I think that's right. In terms of getting the best possible combination of performance, memory size, power, the GDDR5 takes you into a little bit of an uncomfortable place.
Note the context. DF asked specifically about the eSRAM's role in deciding to rule out GDDR5 here. It was NOT a question premised on why the eSRAM was included to begin with. Baker is simply saying that there are lots of engineering benefits to eSRAM + DRAM. Goossen notes that their thinking on the eSRAM was very deliberately as an evolution of the 360's eDRAM.
Andrew Goossen: I just wanted to jump in from a software perspective. This controversy is rather surprising to me, especially when you view ESRAM as the evolution of eDRAM from the Xbox 360. No-one questions on the Xbox 360 whether we can get the eDRAM bandwidth concurrent with the bandwidth coming out of system memory. In fact, the system design required it. We had to pull over all of our vertex buffers and all of our textures out of system memory concurrent with going on with render targets, colour, depth, stencil buffers that were in eDRAM.
Of course with Xbox One we're going with a design where ESRAM has the same natural extension that we had with eDRAM on Xbox 360, to have both going concurrently. It's a nice evolution of the Xbox 360 in that we could clean up a lot of the limitations that we had with the eDRAM.
Are you suggesting that the main reason for including eSRAM in the XB1 had nothing to do with design goals and was purely driven by the fact that they had eDRAM in the XB360 and thus wanted to evolve that design into the XB1?
The bolded is a strawman. Their design goals consisted of a variety of things, but the specific inclusion, according to them, was as much if not more so motivated by wanting to expand on the eDRAM's successes and make it more flexible.
They had a set of design goals and challenges with the XB1's memory system; these are already stated in the quote above.
You are confusing benefits with a priori design goals imho. We don't disagree that it solves tons of important challenges and helps achieve their design goals. What we disagree on is the motivating priority for its inclusion. My view is that they decided, immediately upon a post-mortem of the 360, that they wanted to amplify the eDRAM's successes, and that it happened to be a natural, perfect fit for accommodating the other design goals drafted thereafter.
They decided that eSRAM was the answer to those challenges and obviously leveraged their experience with the XB360's eDRAM to implement the XB1's eSRAM in a more flexible and useful fashion.
I disagree on the implied ordering of the decisions you've outlined here. This all would have been born out of a post-mortem on 360. Before they would even bother thinking seriously about design goals they would have looked very hard at what worked and what didn't on 360. My contention is that they likely knew they wanted eSRAM as a direct result of that before targeting bandwidth or power consumption metrics.
Obviously they are going to leverage their previous experience with a similar memory setup, but that doesn't change the reasons why they went with a similar memory setup in the first place rather than something completely different. They didn't go into the XB1 design stage saying "we must have embedded memory and screw everything else"; they'll have made the decision to include embedded memory because it was what they saw as the best answer to their design goals.
They went in only after doing a detailed post-mortem of the previous console and deciding they'd like the opportunity to expand on its architectural successes and erase some of its weaknesses. Do you really disagree with this?
Links, quotes and numbers showing this to be a genuine advantage over alternative memory configurations?
You seem to be moving the goalposts, as I never claimed to have information that detailed. I said they mentioned the low latency in the dev docs. Specifically, it is cited as a boon to the performance of the CBs/DBs.
Durango dev docs said:
The advantages of ESRAM are lower latency and lack of contention from other memory clients—for instance the CPU, I/O, and display output. Low latency is particularly important for sustaining peak performance of the color blocks (CBs) and depth blocks (DBs).
http://www.vgleaks.com/durango-gpu-2/2/
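For a sense of why latency on its own can matter for sustaining throughput (beyond raw peak bandwidth), here's a quick back-of-envelope sketch using Little's Law: the bytes a client must keep in flight equal bandwidth times latency. Every number below is a made-up placeholder for illustration, not an actual Durango figure:

```python
# Back-of-the-envelope Little's Law sketch: to sustain a target bandwidth,
# a client (e.g. a colour block or depth block) must keep enough requests
# in flight to cover the round-trip latency of the memory it talks to.
# Every number here is a made-up placeholder, not a real Durango spec.

def requests_in_flight(bandwidth_gb_s, latency_ns, request_bytes=64):
    """Outstanding requests needed to sustain bandwidth_gb_s at latency_ns."""
    bytes_in_flight = (bandwidth_gb_s * 1e9) * (latency_ns * 1e-9)  # bandwidth x latency
    return bytes_in_flight / request_bytes

target_bw = 100.0  # GB/s -- hypothetical target for a CB/DB client

for label, latency in [("hypothetical DDR3 path", 300.0),
                       ("hypothetical eSRAM path", 100.0)]:
    print(f"{label}: ~{requests_in_flight(target_bw, latency):.0f} requests in flight")

# Lower latency means far fewer outstanding requests (less queue depth and
# buffering) are needed to hold the same throughput, which is one reading of
# the dev-doc claim that low latency helps the CBs/DBs sustain peak rates.
```

Obviously the real blocks hide latency with queues and caches; the point is just that latency and sustained bandwidth aren't independent of each other.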
Yes, you say it's a benefit. And yet when specifically asked if it will have an impact on GPU performance, Nick Baker says this:
He doesn't actually say much of anything there. He neither denies nor confirms it; he just says they haven't talked about it. The dev docs (above) evidently make note of it. How important is it? No idea. We have no real details on it at all, afaik.
Do you have an explanation as to why he wouldn't extol the benefits of the low-latency eSRAM at this seemingly perfect moment to do so, if they were as real as you claim?
Do you have an explanation for why the dev docs cited by VGLeaks specifically call out latency as a direct benefit of the eSRAM? I seem to recall sebbbi detailing, a while back, some reasons why it could be very useful. It's also cited as important in the Kinect GPGPU comment.
FWIW, I agree that it seems odd for Baker not to mention it at the ideal opportunity, but on the other hand we have the info from the dev docs, and ERP/sebbbi have both talked it up as a potential benefit, iirc. There's also a patent from back in March that seems to use the eSRAM's low latency very specifically, along with 'multiple image planes', for a different methodology of rendering tiled assets, but I don't have the patent link anymore and MS hasn't mentioned anything about it outside the patent itself.
I do recall seeing something along these lines, but again a link would be good so that we can be sure the statement isn't being taken out of context. I.e., are they talking about a specific performance advantage afforded by the eSRAM's comparatively low latency relative to a different form of graphics memory, and are they citing the Kinect example as something very specific or as one general example of a much wider range of benefits?
Sure thing...here ya go, from the article we are discussing:
Andrew Goossen: I will say that we do have quite a lot of experience in terms of GPGPU - the Xbox 360 Kinect, we're doing all the Exemplar processing on the GPU so GPGPU is very much a key part of our design for Xbox One. Building on that and knowing what titles want to do in the future. Something like Exemplar... Exemplar ironically doesn't need much ALU. It's much more about the latency you have in terms of memory fetch [latency hiding of the GPU], so this is kind of a natural evolution for us. It's like, OK, it's the memory system which is more important for some particular GPGPU workloads.
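To illustrate what "doesn't need much ALU, it's about memory fetch latency" can mean in practice, here's a toy model of a fetch-chain-bound pass; the function and all numbers are hypothetical illustrations on my part, not anything MS has published:

```python
# Toy model of an ALU-light, fetch-heavy GPGPU pass (in the spirit of the
# Exemplar comment above). When each fetch depends on the previous result
# (table/tree walks), the chain can't be overlapped, so latency dominates.
# All names and numbers are hypothetical illustrations, not Xbox One figures.

def pass_time_us(dependent_fetches, fetch_latency_ns, alu_ops, ns_per_alu_op=1.0):
    """Whichever is longer -- the serial fetch chain or the ALU work -- sets the time."""
    memory_ns = dependent_fetches * fetch_latency_ns  # serialised, latency-bound
    alu_ns = alu_ops * ns_per_alu_op
    return max(memory_ns, alu_ns) / 1000.0

# Few ALU ops per element, a long chain of dependent lookups:
print(pass_time_us(dependent_fetches=1000, fetch_latency_ns=300.0, alu_ops=2000))  # ~300 us
print(pass_time_us(dependent_fetches=1000, fetch_latency_ns=100.0, alu_ops=2000))  # ~100 us

# Cutting fetch latency 3x cuts the pass roughly 3x; tripling the ALU
# throughput would change nothing, which is the sense in which such a
# workload is "about the memory system" rather than about ALU.
```

That's the flavour of workload where a low-latency pool would plausibly matter more than extra compute, which is how I read Goossen's comment.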