Why would developers "demand" something like that? Developers demand flexible AND easy to use hardware, with the freedom to put its strengths into whatever aspects of visual quality they see fit.
I would personally value the ability to texture from EDRAM much more than any particular EDRAM size or target resolution.
To be fair, you did not quote me in full context. The whole sentence, which sums up my statement much more clearly, was:
eDRAM will be a dead end if developers demand FP16 @ 1080p @ 4xMSAA, unless a smarter implementation (caching [streaming out?] of buffers?) can be found.
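To put a number on that demand--a rough, back-of-the-envelope sketch only, assuming FP16 RGBA color (8 bytes) plus a 32-bit depth/stencil per sample and no compression:

```python
# Rough framebuffer size for FP16 @ 1080p @ 4xMSAA.
# Assumptions: 8-byte FP16 RGBA color + 4-byte depth/stencil per sample,
# no compression of any kind.
width, height = 1920, 1080
samples = 4                # 4xMSAA
bytes_per_sample = 8 + 4   # FP16 RGBA color + 32-bit depth/stencil
total = width * height * samples * bytes_per_sample
print(f"{total / 2**20:.1f} MB")  # ~94.9 MB
```

That is roughly ten times a 10MB eDRAM pool, hence "dead end" unless buffers can be cached or streamed out.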
You are abstracting the "end point" from the "technicals," which is fine. But when you say, "Developers demand flexible AND easy to use hardware, with the freedom to put its strengths into whatever aspects of visual quality they see fit," my argument stands on the fact that, regardless of the hardware's design implementation, developers have (a) voiced a desire for anti-aliasing and (b) complained about how Xenos goes about doing this. They want the flexibility AND ease of use--exactly what you said.
And what I said, had you quoted me properly, is that eDRAM is a dead end if developers demand things like MSAA and higher resolutions while the current implementation is carried forward. Why? Because, to quote you, "Developers demand flexible AND easy to use hardware."
eDRAM, as implemented, isn't as flexible as a standard memory pool, AND it isn't easy to use when attempting to get basic features developers want (e.g. MSAA at HD resolutions) if the eDRAM is too small.
I think your eagerness to jump on my posts has you leaping before looking, because there is no fundamental axe to grind if you actually read what I said in context.
Maybe your concerns are different from other developers', but among the handfuls I talk to (by no means a huge sample) the #1 gripe I hear about eDRAM from actual developers is kvetching that they do want MSAA at HD resolutions, but 10MB isn't sufficient for a 720p target resolution with MSAA without additional work and workload considerations (a quick tally is sketched below). When asked about future consoles, they have told me that if eDRAM is used again they don't want to fiddle with tiling unless it comes with significant changes (i.e. they demand more memory/a better implementation to avoid these issues). But that is just a small survey of the couple handfuls I have talked to. You probably talk to more developers, but I haven't met many who think anti-aliasing is a bad thing (unless they are not able to do it performantly and fall back on the 'consumers don't notice' excuse/position).
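For what it's worth, here is the back-of-the-envelope arithmetic behind that gripe--again a sketch only, assuming the common 32-bit color + 32-bit depth/stencil per sample and no compression:

```python
# Why 720p + MSAA doesn't fit in 10MB of eDRAM.
# Assumptions: 4-byte color + 4-byte depth/stencil per sample, no compression.
import math

EDRAM_MB = 10
width, height = 1280, 720
bytes_per_sample = 4 + 4  # 32-bit color + 32-bit depth/stencil

for samples in (1, 2, 4):
    mb = width * height * samples * bytes_per_sample / 2**20
    tiles = math.ceil(mb / EDRAM_MB)
    print(f"{samples}xMSAA: {mb:5.1f} MB -> {tiles} tile(s)")
```

Anything past 1x spills over 10MB (2x needs ~14.1 MB, 4x needs ~28.1 MB), which is exactly where the tiling headaches come from.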
The #2 gripe with eDRAM is the inability to do more robust operations that utilize the benefits of its bandwidth.
Which leads me to your specific desire to texture from eDRAM--surprising, to be quite honest. Are you referring more to MRTs or to straight texturing? I am not sure how valuable eDRAM would be given its size limitations. At a high level, one of the "strengths" of GPUs appears to be their high tolerance for latency--I could be mistaken, but texturing seems to fit that profile perfectly.
Considering how latency-tolerant GPUs are, and the size of textures in memory (100s of MBs), how is texturing from eDRAM (10s of MBs) a major benefit worth the silicon investment? Putting aside legacy design issues (difficulty) and cross-platform development (more difficulty + lower exploitation), I am not sure this is a "win" from a design perspective or a benefit onscreen.
Maybe you can elaborate on how using eDRAM for texturing fits your criteria of flexibility and ease of use? How is demanding texturing any different from "why would developers demand" an eDRAM pool that supports MSAA without the current tiling issues?
[If you mean an eDRAM pool that is a flexible scratchpad I am all for that if it can be designed within a reasonable budget, as I mentioned in the next-gen prediction thread just last week.]