These are the types of statements I don't understand.
It uses components from AMD in a configuration similar to an existing AMD chip, but it isn't just a 7770.
Integrating embedded memory with the GPU was almost certainly a significant engineering undertaking, and it probably impacted several of the functional blocks.
And how are you measuring the "little gain"?
I am not underestimating the engineering challenge that MS was faced with. I was just wondering aloud whether they really needed to take it on. So when I look at the specifications for each card:
AMD Radeon™ HD 7770 GHz Edition Feature Summary
1000MHz Engine Clock
Up to 2GB GDDR5 Memory
1125MHz Memory Clock (4.5 Gbps GDDR5)
72GB/s memory bandwidth (maximum)
1.28 TFLOPS Single Precision compute power
GCN Architecture
◦ 10 Compute Units (640 Stream Processors)
◦ 40 Texture Units
◦ 64 Z/Stencil ROP Units
◦ 16 Color ROP Units
◦ Dual Asynchronous Compute Engines (ACE)
versus
AMD Radeon™ HD 7790 Feature Summary
1000MHz Engine Clock
1GB GDDR5 Memory
1500MHz Memory Clock (6.0 Gbps GDDR5)
96GB/s memory bandwidth (maximum)
1.79 TFLOPS Single Precision compute power
GCN Architecture
◦ 14 Compute Units (896 Stream Processors)
◦ 56 Texture Units
◦ 64 Z/Stencil ROP Units
◦ 16 Color ROP Units
◦ Dual Geometry Engines
128-bit GDDR5 memory interface
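Just to sanity-check where those headline numbers come from, here is a rough back-of-the-envelope sketch (Python, purely illustrative; it assumes GCN's 2 FLOPs per stream processor per clock, and that the 7770 also sits on a 128-bit bus, which its 72 GB/s at 4.5 Gbps implies):

# Rough check of the quoted TFLOPS and bandwidth figures.
def tflops(stream_processors, engine_clock_ghz):
    # GCN issues one fused multiply-add (2 FLOPs) per SP per clock
    return stream_processors * 2 * engine_clock_ghz / 1000.0

def bandwidth_gbs(data_rate_gbps, bus_width_bits):
    # per-pin data rate times bus width, converted to bytes
    return data_rate_gbps * bus_width_bits / 8

print(tflops(640, 1.0), bandwidth_gbs(4.5, 128))  # 7770: ~1.28 TFLOPS, 72 GB/s
print(tflops(896, 1.0), bandwidth_gbs(6.0, 128))  # 7790: ~1.79 TFLOPS, 96 GB/s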
It feels like MS is aiming for 7790 performance but is really getting 7770 performance (1.28 TF, 72 GB/s bandwidth) plus ESRAM.
Would it have been simpler to just base the GPU part on the 7790?
Wouldn't the development costs of the ESRAM essentially match the costs of going straight with a 7790 in the first place?
Or is it that they had to engineer around DDR3 rather than GDDR5, which seems to be the memory normally paired with the discrete card?
These are just questions I have. I have no real judgment per se, except that most of the technical conversation surrounding the XB One seems aimed at extracting more performance in order to achieve console parity.
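For what it's worth, here is the rough bandwidth math behind that DDR3-versus-GDDR5 question (a sketch only; the DDR3-2133 on a 256-bit bus and the ~102 GB/s ESRAM figure are the commonly reported XB One numbers, which are my assumption here, not something taken from the spec sheets above):

# Hedged comparison of the two memory routes; the XB One figures are assumed
# from public reporting (2133 MT/s DDR3, 256-bit bus, ~102 GB/s ESRAM).
def bandwidth_gbs(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

ddr3_main  = bandwidth_gbs(2.133, 256)  # ~68 GB/s to the large DDR3 pool
gddr5_7790 = bandwidth_gbs(6.0, 128)    # 96 GB/s on the discrete 7790
esram_peak = 102.0                      # reported ESRAM bandwidth in GB/s

print(ddr3_main, gddr5_7790, ddr3_main + esram_peak)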