Part of the GDDR5 interface is a set of dedicated lines for error detection. Detected errors trigger a re-transmission attempt and can also be used as a signal to kick off re-training to adapt to varying voltages and temperatures.
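To make the mechanism concrete, here's a minimal sketch of the controller-side check. It assumes a CRC-8 over each burst (I believe GDDR5's EDC uses the ATM HEC polynomial x^8 + x^2 + x + 1, but treat the details as illustrative), and read_fn / read_burst_with_retry are hypothetical names, not anything from the spec:

```python
# Minimal sketch of a GDDR5-style EDC check on the controller side.
# Assumes a CRC-8 with the ATM HEC polynomial x^8 + x^2 + x + 1 (0x07);
# all names and details here are illustrative, not taken from the spec.

def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def read_burst_with_retry(read_fn, max_retries: int = 4):
    """read_fn() is a stand-in for a burst read returning (data, edc_byte)."""
    for attempt in range(max_retries):
        data, edc = read_fn()
        if crc8(data) == edc:
            return data          # clean transfer
        # Mismatch: re-issue the transfer. Repeated failures would
        # normally also trigger interface re-training.
    raise IOError("persistent EDC errors - link needs re-training")
```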
I'm aware of that, and I'm aware there is a CRC over the interconnect. My point is that the BER targeted by GDDR5 is probably worse (i.e. higher) than the target for DDR3 or FBD - the question is about the target BER relative to standard memory.
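For a sense of scale, a back-of-envelope calculation shows why a relaxed raw BER target forces you to lean on the CRC/retry mechanism. The data rate, bus width and BER below are made-up illustrative values, not spec numbers:

```python
# Back-of-envelope: raw bit errors per second on a wide GDDR5-class bus.
# All figures below are illustrative assumptions, not spec values.

per_pin_rate = 5e9        # 5 Gbit/s per data pin (assumed)
bus_width    = 256        # data pins (assumed)
ber          = 1e-12      # assumed raw bit error rate target

bits_per_second   = per_pin_rate * bus_width
errors_per_second = bits_per_second * ber
print(f"~{errors_per_second:.2f} raw bit errors/s -> "
      f"one error roughly every {1/errors_per_second:.1f} s on average")
```

At those (assumed) numbers you'd see an uncorrected error about every second, which is unacceptable without detection and retry, but perfectly manageable with it.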
Yeah, that's a serious problem.
In fact this is a motivator in the patent document. The hub chips insulate the GPU chips from the vagaries of memory technology and varying interface types. But the hub chips significantly increase the entry-level cost of the board as a whole, and incur power penalties on top of that.
Now you could argue that the rising tide of IGP softens that blow - i.e. that the entry-level cost for a board is rising anyway. But the penalties seem severe enough that it might only be workable with the biggest GPUs. In that case the DDR flexibility would de-emphasise the problems the chip team faces when building a new huge GPU over a multi-year timeline, enabling the existing strategy of delivering the halo chip as the first chip in a new architecture. The smaller chips would then be engineered for specific memory types. This would make NVidia "laggy" on memory technology adoption - but NVidia is already laggy, judging by GDDR5 - even though NVidia had the first GDDR3 GPU.
You keep on thinking up ways that this could work, but the reality is that it's a bad idea.
Integrated memory controllers are the future, and the only systems that need memory oriented discrete components are ultra-high capacity and ultra-high reliability ones. Neither of those describe a GPU.
You end up adding a lot of latency, adding power, adding cost, and you don't get a whole lot in return. That sort of thing is handy when you have a massive split in the industry (e.g. RDR vs. DDR), but it's pretty clear that the consensus is GDDRx for the high-end and DDRx for the low-end.
There are a huge number of downsides and very few upsides.
GDDR3 seems to be facing a rapid tail-off - it may be only 18 months before it disappears entirely. ATI may not use it in the upcoming generation, sticking with DDR and GDDR5. Dunno if that effectively means that GDDR5 would face a yet-more-rapid tail-off if it were replaced by GDDR6.
The rising tide of IGP also hampers the economies of scale that make an iteration of GDDR viable. Against that, discrete GPUs are still growing. But the notebook sector is putting a squeeze on everything.
I think that's a very insightful comment and I wonder what the data shows. I think you're right, and I'd be curious to see whether that means the low end of discrete moves upwards in the price stack.
@MFA:
Since you are looking at the difference between two pins, any changes (e.g. temperature, bad PCB design) that impact both pins equally will cancel each other out. That makes EMI easier to handle, for instance, and can reduce the amount of shielding needed.
Differential signaling is also more power efficient (smaller voltage swings on each line of the pair can produce the same overall differential swing).
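A toy calculation of the common-mode rejection point (the voltages are arbitrary numbers picked purely to show the arithmetic):

```python
# Toy illustration of why common-mode noise cancels on a differential pair.
# Voltages are arbitrary made-up numbers, just to show the arithmetic.

v_signal = 0.3            # intended half-swing on each leg (assumed)
v_noise  = 0.15           # common-mode noise hitting both pins equally

v_pos = +v_signal + v_noise
v_neg = -v_signal + v_noise

v_received = v_pos - v_neg     # the receiver looks only at the difference
print(v_received)              # 0.6: the noise term drops out entirely

# A single-ended receiver comparing v_pos against a fixed reference
# would see the full 0.15 V of noise stacked on top of the signal.
```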
If you look at all the new interfaces introduced in the last 10 years, they have all trended towards differential signaling: Rambus, CSI, HT, FBD, PCI-e, etc.
DK