DDR2 expected to be late for the party

Hmmm... it appears DDR2 is not expected in volume until 2005. Although the article only talks about system RAM, it may impact graphics card RAM pricing as well if the manufacturers aren't switching over their fabs until then. Supply and demand.

Mass production of DDR2 SDRAM for computers will develop in 2005, a year later than originally forecast, according to analysts and chipmakers at last week's Platform Conference held here.

http://www.ebns.com/showArticle.jhtml;jsessionid=CMRO0YF3FHAQKQSNDBGCKH0CJUMEKJVN?articleID=6500001
 
If we're lucky, nVidia will require low-speed DDR-II for their NV31 or NV34, so we can get those fabs warmed up.
 
NV31 and NV34 both use DDR-I (neither controller is even capable of using DDR-II).

MuFu.
 

Not really. At the speeds that the NV31 & NV34 are designed for, and then some, DDR-I is as good as or better than DDR-II. Also, both the NV31 & NV34 are cost-effective solutions, so why in heaven's name would you want to saddle them with expensive memory that requires a very complex PCB?
 
Martrox, you have to read the comment in the context of his previous comment. (If NV31 & NV34 don't use DDR-II, then those fabs won't get warmed up.)
 
What in God's name do NV31 & NV34 have to do with people making or supporting DDRII for system memory? The only place NVIDIA is getting DDRII from is Samsung, who is already mass-producing it *for graphics cards*. It's also over twice as fast as any system RAM based on DDRII would be at this time. If NV31 & NV34 used DDRII, it would still come from Samsung, it would still have nothing to do with system RAM, and it certainly wouldn't speed up the adoption of DDRII by other memory and mainboard chipset companies.

I don't even know what this article has to do with 3D graphics, but I've got a pretty good idea why he posted it in this forum, and titled this thread "DDR2 expected to be late for the party".
 
Yes, it may impact graphics card RAM prices: ATI is looking to GDDR3 and possibly DDR II for its next-gen parts, so the delay could affect pricing there as well.
Presently only one company is using DDR II on a graphics card, and only ONE company is making graphics card DDR II: Samsung.
 
I guess the point I'm trying to make here is that there are some problems with using DDR-II; it's not a bandwidth cure-all, as nVidia has found out. GDDR-III seems to deal with all of the problems of DDR-II, as it is designed from the ground up for video memory, and it looks like its availability will be in time for R400. The question I have is whether nVidia will swallow its pride & use it.

Just because it's a "2" rather than a "1" doesn't mean it's better. In fact, I believe it's proving to be ill-conceived, at least as video memory. Look at the hoops nVidia had to jump through just to use it... a 12-layer PCB, and I'm sure the crazy FX Flow is there for the memory, too. And look at everyone here who thought ATI was crazy to go to a 10-layer PCB. (Yes, I know it's an 8-layer now, but it was rumored to be 10 originally.)
 
martrox said:
I guess the point I'm trying to make here is that there are some problems with using DDR-II; it's not a bandwidth cure-all, as nVidia has found out. GDDR-III seems to deal with all of the problems of DDR-II, as it is designed from the ground up for video memory, and it looks like its availability will be in time for R400. The question I have is whether nVidia will swallow its pride & use it.

The "DDRII" memory on the Geforce FX boards is actualy GDDRII, it has most of the GPU optimisations GDDR3 will have.
 
It's my impression that in at least one respect, GDDR-III is much closer to DDR-II than the GDDR-II in the GFfx is: prefetch count. From what I've heard, GDDR-III shares the 4x prefetch of DDR-II, while the GDDR-II in GFfx has a 2x prefetch just like standard DDR-I. I don't really know the details, but it seems to me that the 4x prefetch is one of the two primary features of DDR-II which will allow max. transfer speeds to continue to ramp, because it allows double the transfer speed for the same DRAM core clock speed. (The other feature being differential signalling, and I don't know whether that's included in GFfx's GDDR-II either.)
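
If I've got that right, the arithmetic is simple enough to sketch in a few lines of Python. The helper and the clock numbers below are mine, purely for illustration, not vendor specs:

# Toy model: for DDR-family SDRAM the per-pin data rate is roughly
#   data_rate = core_clock * prefetch_depth
# because the core fetches prefetch_depth bits per column access and
# the I/O pins burst them out. Numbers are illustrative only.

def data_rate_mbps(core_clock_mhz, prefetch_depth):
    # Per-pin data rate in Mbit/s.
    return core_clock_mhz * prefetch_depth

print(data_rate_mbps(200, 2))  # DDR-I,  2n prefetch: 400 Mbit/s/pin
print(data_rate_mbps(200, 4))  # DDR-II, 4n prefetch: 800 Mbit/s/pin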

Point is, I'm not sure the GDDR-II used in GFfx resembles DDR-II in much other than the protocol used to control it. But I'm confused on this point. Can anyone clearly and specifically lay out the exact differences between (G)DDR-I, system DDR-II, GDDR-II and GDDR-III??

In any case, the manufacturing delay of system DDR-II has essentially nothing to do with GDDR-II, GDDR-III, or even their prices. Widespread rollout of system DDR-I was delayed for over a year, but that didn't appear to hinder the ramping or acceptance of DDR-I for graphics cards.
 
Dave H said:
It's my impression that in at least one respect, GDDR-III is much closer to DDR-II than the GDDR-II in the GFfx is: prefetch count. From what I've heard, GDDR-III shares the 4x prefetch of DDR-II, while the GDDR-II in GFfx has a 2x prefetch just like standard DDR-I. I don't really know the details, but it seems to me that the 4x prefetch is one of the two primary features of DDR-II which will allow max. transfer speeds to continue to ramp, because it allows double the transfer speed for the same DRAM core clock speed. (The other feature being differential signalling, and I don't know whether that's included in GFfx's GDDR-II either.)

Point is, I'm not sure the GDDR-II used in GFfx resembles DDR-II in much other than the protocol used to control it. But I'm confused on this point. Can anyone clearly and specifically lay out the exact differences between (G)DDR-I, system DDR-II, GDDR-II and GDDR-III??

In any case, the manufacturing delay of system DDR-II has essentially nothing to do with GDDR-II, GDDR-III, or even their prices. Widespread rollout of system DDR-I was delayed for over a year, but that didn't appear to hinder the ramping or acceptance of DDR-I for graphics cards.

Why don't you just ask martrox... apparently he knows the exact specifications for all of these RAM types, including amazing prognostications of how well they will perform on unannounced products ;)

edit: I didn't know nVidia was using a 2x prefetch. Where do they say that? It doesn't seem possible that they would get any advantage from a RAM like that. The 4x prefetch is what allows DDR-II to keep up with DDR, clock for clock, while operating at half the bit-array frequency. That was the sole reasoning behind DDR-II, if I understood correctly, since it allows lower-frequency bit arrays and thus higher clock speeds that stay synchronous with the bus.
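
To put that in toy form (the target rate and helper below are mine, purely illustrative, not Samsung's figures):

# The same relation read the other way: for a given per-pin data rate,
# a 4n-prefetch part can run its bit arrays at half the frequency a
# 2n-prefetch part needs.

def required_array_clock_mhz(target_rate_mbps, prefetch_depth):
    return target_rate_mbps / prefetch_depth

target = 800  # Mbit/s per pin, made up for the example
print(required_array_clock_mhz(target, 2))  # DDR-I:  400.0 MHz arrays
print(required_array_clock_mhz(target, 4))  # DDR-II: 200.0 MHz arrays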
 
martrox said:
I guess the point I'm trying to make here is that there are some problems with using DDR-II; it's not a bandwidth cure-all, as nVidia has found out. GDDR-III seems to deal with all of the problems of DDR-II, as it is designed from the ground up for video memory, and it looks like its availability will be in time for R400. The question I have is whether nVidia will swallow its pride & use it.

Just because it's a "2" rather than a "1" doesn't mean it's better. In fact, I believe it's proving to be ill-conceived, at least as video memory. Look at the hoops nVidia had to jump through just to use it... a 12-layer PCB, and I'm sure the crazy FX Flow is there for the memory, too. And look at everyone here who thought ATI was crazy to go to a 10-layer PCB. (Yes, I know it's an 8-layer now, but it was rumored to be 10 originally.)

Once again, since you weren't able to answer any of these questions in a related thread:

Do you know the actual difference in latency between 400MHz DDR and 400MHz DDR-II? Do you know how it will affect a deeply pipelined, parallel VPU? Do you know the thermal specifications of DDR vs. DDR-II at 400MHz, respectively?

And lastly, why do you feel that DDR-II at 400MHz would require more PCB layers than regular DDR at 400MHz? (Keep in mind, ATI has an 8-layer PCB using DDR @ ~300MHz and nVidia a 12-layer PCB @ 500MHz. Neither of these supports your argument on a 1:1 basis: in the event that ATI uses DDR-II it could very well be at 400MHz, which has much lower thermal characteristics than 500MHz DDR-II, and the converse would probably be true for 400MHz DDR, which would probably run slightly hotter than DDR @ 300MHz.) Not only that, but it took ATI around 4 or 5 months to get a quality 8-layer PCB that would not have signaling problems with 320MHz DDR, so it's very possible that in a few months 500MHz DDR-II will fit on a revamped 10-layer PCB for nVidia.
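
For reference, here's the back-of-the-envelope bandwidth math for those two boards, using the commonly reported clocks and bus widths (a rough Python sketch, not official specs):

# Peak bandwidth for double-data-rate memory: two transfers per clock
# across the full bus width.

def peak_bw_gb_s(mem_clock_mhz, bus_width_bits):
    return mem_clock_mhz * 2 * (bus_width_bits // 8) / 1000.0

print(peak_bw_gb_s(310, 256))  # ATI-style 256-bit DDR:       ~19.8 GB/s
print(peak_bw_gb_s(500, 128))  # nVidia-style 128-bit DDR-II:  16.0 GB/s

Which is part of the point: the 256-bit board gets more peak bandwidth out of slower, cooler memory, so the layer counts can't be compared 1:1.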

I'm guessing you won't be able to answer any of these, but I could very well be wrong. You certainly speak as if you know the answers, though.
 
I didn't know nVidia was using a 2x prefetch. Where do they say that?

I know I read that somewhere, but a quick google search is not finding anything to substantiate it at the moment. (Which in itself is an indication I'm probably wrong about this...) The thing is, I don't know, which is why I'm hoping someone here can set me straight...
 
Dave H said:
I didn't know nVidia was using a 2x prefetch. Where do they say that?

I know I read that somewhere, but a quick google search is not finding anything to substantiate it at the moment. (Which in itself is an indication I'm probably wrong about this...) The thing is, I don't know, which is why I'm hoping someone here can set me straight...

I'll be looking as well. Finding the right specs on non-standard memory has proven rough for a layman like myself. It took me forever to figure out Samsung's .pdf on DDR-II. If those aren't the same specs as their DDR-II for nVidia, I'd be back to square 1 :LOL:

I'm still wondering if anyone around here knows exactly how big an impact slightly higher latency would have on a VPU. Unless someone tells me otherwise, I just can't see the memory read patterns being at all the same between a VPU and a CPU. The CPU is the case where latency makes a huge difference, because it involves misses in the L1 and L2 caches, and getting the data back from system memory before more clock stalls is paramount.
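
Here's the intuition I have, in toy form; the helper and every number below are invented for illustration, since I don't know the real figures:

# Toy latency-hiding model: a deeply pipelined VPU keeps many pixels
# in flight, so a read latency of N cycles stays invisible as long as
# at least N requests overlap.

def requests_to_hide(latency_cycles, issue_interval_cycles=1):
    # In-flight requests needed so the pipeline never stalls on a read.
    return -(-latency_cycles // issue_interval_cycles)  # ceiling division

print(requests_to_hide(20))  # 20-cycle latency -> 20 requests in flight
print(requests_to_hide(30))  # higher latency just means a deeper queue

If that picture is roughly right, a VPU can trade queue depth for latency tolerance in a way a CPU stalled on a cache miss can't.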
 