As noted above, it states GDDR4; there are several aspects of that detail that raise an eyebrow. For one, I'm not so sure that 3x the bandwidth compared to G8x would be necessary; historically, transistor count (among other things) has scaled much more than memory bandwidth on GPUs in recent years. As a close second, if an IHV is going to pick anything beyond GDDR3, I wonder why they'd also go for a more complex and wider MC; if simulations have shown that bandwidth X is sufficient, then it sounds more reasonable to go for the cheapest all-around solution. Last, I'd believe that GDDR5 might be more readily available in late '08 than GDDR4 @ 2.0GHz / 4.0GHz effective DDR.
I think that quote was meant to mean GDDR4 running at 1000MHz, or 2000MHz DDR, à la what we saw with the X1950 XTX. There still isn't 1600 (3200) rated GDDR4, let alone 2000 (4000) rated GDDR4. This means the product would likely use 1100MHz (2200MHz) rated GDDR4, if true. That's going by Samsung's product list, which is composed of 1100, 1200, and 1400 at the moment. 1600 was sampled eons ago, but still isn't on the product list...
EDIT: Hynix does show 1600MHz GDDR4 in their catalog PDF.
GDDR3 or 4 makes the most sense if Nvidia is opting to go with the 512-bit bus. GDDR3 would be capped at 128GB/s, which is probably not enough to remove the bandwidth limitation from the architecture, especially with an upgrade in the rest of the spec, but it would sure be an easy way to refresh the lineup down the line with GDDR4. GDDR4 could (theoretically) take them up to ~180GB/s (1400MHz) or ~205GB/s (1600MHz), probably granting a substantial performance increase with or without a new chip (55nm dumb shrink?). Rough math below.
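For anyone who wants to check those figures, here's a minimal back-of-the-envelope sketch (peak bandwidth = bus width in bytes × effective data rate); the function name and the clock/bus numbers are just the speculative ones from this thread, not anything off a spec sheet:

```python
# Rough sketch: peak bandwidth = (bus width / 8 bytes) * effective data rate.
# All configs below are speculative, taken from this thread.

def peak_bandwidth_gbs(bus_width_bits: int, effective_mhz: int) -> float:
    """Theoretical peak memory bandwidth in GB/s (1 GB/s = 1000 MB/s here)."""
    return bus_width_bits / 8 * effective_mhz / 1000

print(peak_bandwidth_gbs(512, 2000))  # 2000MHz effective GDDR3/4 -> 128.0 GB/s
print(peak_bandwidth_gbs(512, 2800))  # 1400MHz (2800 effective) GDDR4 -> ~179.2 GB/s
print(peak_bandwidth_gbs(512, 3200))  # 1600MHz (3200 effective) GDDR4 -> ~204.8 GB/s
```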
I wouldn't be surprised to see the G100 GTX launch with 2000MHz (effective) GDDR3 and the 55nm refresh have a "GT/GTS" part that uses 2000MHz (effective) GDDR4, while the new king of the hill replacing G100 uses higher-rated stuff.
On the total flipside, it looks like ATi is shooting for 256-bit and GDDR5. IIRC that puts them somewhere between 160-192GB/s eventually, if they use 5000-6000 rated stuff, which Hynix (1Gbit, 5000 rated) and Samsung (512Mbit, 6000 rated) have both showcased. They'll probably release slower-specced stuff first and they'll use that, but even so, the low-ball guess would be 4000 effective, or roughly 128GB/s... the exact same number we've come to expect from Nvidia and their 512-bit bus using 2000 (effective) GDDR3 or GDDR4. Again, just like Nvidia and GDDR4, they too would have room to grow to 192GB/s (6000 effective) and beyond with GDDR5 on their 256-bit bus.
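Running the same rough formula on the rumoured 256-bit GDDR5 configs shows how both roads start from the same 128GB/s (again, the data rates are the speculative ones above, nothing confirmed):

```python
# Same back-of-the-envelope math for a speculative 256-bit GDDR5 bus.
for effective_mhz in (4000, 5000, 6000):
    print(f"{effective_mhz} effective -> {256 / 8 * effective_mhz / 1000:.1f} GB/s")
# 4000 effective -> 128.0 GB/s, 5000 -> 160.0 GB/s, 6000 -> 192.0 GB/s
```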
Like someone else said... we'll probably end up with similar numbers, both with room to grow, but we'll get there in dramatically different ways. Does it matter how we get there? I would venture yes. ATi's method sure looks better when you think about using multiple GPUs on a PCB, while Nvidia could be poised to do well with just a change of RAM and/or a switch to a smaller process, each fitting their respective choices for the future, each with their obvious pros and cons. 256-bit requires fewer transistors and doesn't need a large die to fit the pins, although you take on the disadvantages of multi-GPU (unless R700 truly fixes those issues). The reverse is true for a 512-bit large single chip.