GDDR Roadmap - GPUs @ 900 MHz next year!?!

g__day

Interesting reading on the future GDDR roadmap, especially their view that GPUs would hit 900MHz in 2004 (which was speculated with NVidia having a split chip design with T&L on a separate chip clocked 50% higher than the rest of the GPU — a 600 MHz chip one and a 900 MHz chip two with a fast interconnect?)


http://www.vr-zone.com/#2902

GDDR3-3-m.jpg



Samsung's GDDR roadmap revealed the availability of 800/1000Mbps GDDR-II (128MB) now, which the NVIDIA GeForce FX 5800/Ultra will be using, and in the second half of this year we can expect to see 1200 to 1400Mbps GDDR-II (256MB), which the NV35 will most probably be using. The VDD of GDDR-II will be reduced from 2.5V to 1.8V somewhere along the roadmap, so the memory will run much cooler as a result. The next-generation GDDR-III is also plotted on the roadmap, and we can expect to see it in the second half of 2004 with speeds of 1.5 to 2.x Gbps. GPU frequency is expected to reach a stunning 900MHz next year, so GDDR-III will be required to provide enough memory bandwidth.
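To put those per-pin data rates in perspective, here's a rough bandwidth sketch. The formula is just data rate times bus width; the 128-bit bus is the GeForce FX 5800's actual width, while the 256-bit GDDR-III bus is purely an assumption for illustration.

```python
# Peak theoretical bandwidth: per-pin data rate (Gbps) x bus width (bits) / 8.
def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

# 1.0 Gbps GDDR-II on the FX 5800 Ultra's 128-bit bus
print(bandwidth_gb_s(1.0, 128))   # 16.0 GB/s
# 2.0 Gbps GDDR-III on an assumed 256-bit bus
print(bandwidth_gb_s(2.0, 256))   # 64.0 GB/s
```

A 900MHz core would chew through fillrate fast, which is presumably why the roadmap ties it to GDDR-III's 1.5-2.x Gbps range.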
 
g__day said:
Interesting reading on the future GDDR roadmap, especially their view that GPUs would hit 900MHz in 2004 (which was speculated with NVidia having a split chip design with T&L on a separate chip clocked 50% higher than the rest of the GPU — a 600 MHz chip one and a 900 MHz chip two with a fast interconnect?)
This has been speculated again and again and again. I don't think it's ever going to happen, particularly not with fragment and vertex processing becoming so closely related.

As a side note, the next big leap in RAM will probably be MRAM. I just recently saw a rough description of how it can be made. The technology really is very intriguing. The biggest benefit will be power savings. MRAM requires no power to store data (DRAM needs to be continually refreshed), and it retains its data after power loss. From what I've seen, it looks like MRAM may well be capable of being about as dense as DRAM, though I'm not absolutely certain that's true. The underlying magnetoresistive technology is currently used in read heads for advanced hard disks.

Back on topic, however, interconnects between various system components simply cannot evolve fast enough to keep up with chip evolution. Therefore, as we move into the future, chips will continue to combine, not split apart.

Update:
Oh, one other thing. I also saw a presentation recently about some pure research being done on magnetic storage media. It looks like this new media, if a good way of manufacturing it can be found, could possibly hold up to around 1-10 Tbits/square inch.
 
I thought Samsung was pissed about even having to make GDDR3? I guess ATi must have thrown enough green their way (or they realized that DDR2 wasn't going anywhere fast).
 
Nagorak said:
I thought Samsung was pissed about even having to make GDDR3? I guess ATi must have thrown enough green their way (or they realized that DDR2 wasn't going anywhere fast).
Well, if you look at the roadmap you'll see that it isn't actually named GDDR3 — it could be called anything but GDDR3 ;)
 