Chalnoth said:Basically, I have said for a long time that a 256-bit bus was not a good idea, primarily due to cost concerns. Quite simply, it's very expensive to produce video cards on a 256-bit bus.
JohnH said:The other possibility is that they're treating the 128-bit physical bus as two logical buses with two independent clock phases, using the memory chips' output enables to get the data onto the bus at the correct time. This would obviously require you to be able to toggle the output enable at twice the DDR's burst data rate (so for 300MHz DDR that means 1200MHz). I have no idea if this is physically possible, and it would probably require some rather special memory pads in the graphics chip itself.
Note, if they do this it isn't a new idea, even in graphics chips: the Weitek 9100 did it back in 1993 (admittedly only with EDO memory running at something like 60MHz).
John.
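The timing arithmetic behind JohnH's suggestion can be sketched quickly. This is just the back-of-the-envelope math from the post, not anything from a datasheet; the function name is illustrative.

```python
# Sketch of the phase-interleaved bus idea: a 128-bit physical bus
# shared by two logical buses on opposite clock phases, steered by
# the memory chips' output enables.

def output_enable_rate_mhz(ddr_clock_mhz: float) -> float:
    """DDR moves data on both clock edges, so the burst data rate is
    2x the clock; interleaving two logical buses on that bus means the
    output enables must switch at 2x the burst rate, i.e. 4x the clock."""
    burst_rate = 2 * ddr_clock_mhz      # DDR: two transfers per clock cycle
    return 2 * burst_rate               # two phases sharing one physical bus

print(output_enable_rate_mhz(300))  # 1200.0 -- the figure JohnH quotes
```

This is why the scheme looks demanding: the output-enable signal has to switch four times faster than the memory clock itself.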
Chalnoth said:Basically, I have said for a long time that a 256-bit bus was not a good idea, primarily due to cost concerns. Quite simply, it's very expensive to produce video cards on a 256-bit bus.
At the same time, however, there are two other issues. Firstly, it is possible to significantly reduce the total bandwidth usage of today's video cards through better optimizations. Possible optimizations include partial deferred rendering (I don't support full deferred rendering, btw...), frame/z-buffer compression, occlusion detection, early z-rejection, hierarchical Z, and so on. In this way, it may be possible to drastically reduce the memory bandwidth hit of MSAA, reducing the need for insanely high memory bandwidth.
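One of the bandwidth savers listed above, early z-rejection, can be illustrated with a toy software rasterizer. This is a conceptual sketch only (real hardware does this per-tile with dedicated logic); all names here are illustrative.

```python
# Toy early z-rejection: the depth test runs before shading, so an
# occluded fragment costs neither shading work nor a framebuffer write,
# saving memory bandwidth.

def rasterize(fragments, depth_buffer, frame_buffer, shade):
    """fragments: iterable of (x, y, z, data); smaller z = closer.
    depth_buffer / frame_buffer: dicts keyed by (x, y)."""
    rejected = 0
    for x, y, z, data in fragments:
        if z >= depth_buffer.get((x, y), float("inf")):
            rejected += 1          # early z-test failed: skip shading
            continue               # and skip the framebuffer write
        depth_buffer[(x, y)] = z
        frame_buffer[(x, y)] = shade(data)  # only visible fragments shaded
    return rejected
```

Submitting geometry roughly front-to-back maximizes the rejection rate, which is why the technique pairs well with occlusion detection and hierarchical Z.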
Secondly, increasing the frequencies is inherently cheaper than increasing the number of pins, because the primary costs involved are inside the chips themselves. Still, it does require collaboration between graphics chip companies and memory manufacturers, which makes it the harder of the two memory-bandwidth improvements to pull off.
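The trade-off being argued here is easy to see numerically: doubling the clock and doubling the bus width deliver the same peak bandwidth, but with very different cost structures. The figures below are illustrative, not vendor data.

```python
# Peak memory bandwidth = bus width (bytes) x transfer rate.
# Two routes to the same number: faster clock vs. wider bus.

def bandwidth_gb_s(bus_bits: int, clock_mhz: float, ddr: bool = True) -> float:
    """Peak bandwidth in GB/s for a given bus width and memory clock."""
    transfers_per_sec = clock_mhz * 1e6 * (2 if ddr else 1)  # DDR: 2 per cycle
    return bus_bits / 8 * transfers_per_sec / 1e9

# Same peak bandwidth, different cost structure:
print(bandwidth_gb_s(128, 600))  # 19.2 GB/s: higher clock, same pin count
print(bandwidth_gb_s(256, 300))  # 19.2 GB/s: double the pins and board traces
```

The 256-bit route pushes cost into the package and PCB (more pins, more traces, more board layers), while the frequency route pushes it into chip design and memory-vendor coordination, which is the point being made above.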