*** Will NV30 use Elpida DDR II or SAMSUNG DDR I SDRAM?

g__day

This was rumoured a while ago - although I guessed that NVidia would use the latest ultra-fast DDR I RAM that only SAMSUNG makes today - could this be a new option for them?

http://www.vr-zone.com/#2454

Elpida DDR-II SDRAM

Elpida Memory announced today the development of new circuit technologies and a low-impedance hierarchical I/O architecture that enables 1 Gigabit per second (Gbps) per pin operation with a 1.8 Volt (V) power supply in a multi-Gb DRAM. The results were verified with a 0.13 micron 512 Megabit DDR-II SDRAM, and represent performance that is 7.5 times that of Single-Data-Rate PC133 SDRAM, and a 75% improvement over DDR 266. On June 14, Elpida Memory presented two papers on these technologies at the 2002 Symposium on VLSI Circuits in Honolulu, Hawaii.

To achieve 1 Gbps data rates at 1.8 V requires precisely synchronized regenerated clocking with less than 30 ps misalignment, and high-integrity output data signals for a maximized valid data window. Such performance also requires a read/write cycle time of less than 4 ns and an access time of less than 8 ns. Elpida's unique technology satisfies these requirements, providing more than enough bandwidth for high-end workstations and PCs that require the maximum 533 Mbps operation specified by DDR-II at 1.8 V. The technology will also accommodate the highest data rates of the next-generation DRAM specification, DDR-III.

The design of computer systems is constantly improving both in terms of performance and power consumption, and 1.8 V DDR-II SDRAM is a necessary solution for the main memory component of these systems. Elpida's new circuit technologies can be used to fabricate the world's first commercial DDR-II SDRAM from Elpida and will hasten the development of next-generation DDR-III specifications into the mainstream of DRAM applications.

***
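Side note on how the "less than 4 ns" cycle time and the 1 Gbps per-pin figure fit together - this is my own back-of-the-envelope reading, assuming the usual 4-bit prefetch of DDR-II, which the quote above doesn't spell out:

```python
# DDR-II moves 4 bits per pin per core clock (assumed 4n prefetch), so a 1 Gbps
# pin rate implies a 250 MHz core, i.e. a 4 ns core cycle -- matching the quote.
core_cycle_ns = 4.0
core_clock_mhz = 1000.0 / core_cycle_ns      # 250 MHz core clock
prefetch = 4                                 # assumed DDR-II 4n prefetch
pin_rate_mbps = core_clock_mhz * prefetch    # 1000 Mbit/s = 1 Gbps per pin
print(core_clock_mhz, pin_rate_mbps)         # 250.0 1000.0
```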

What do you folks reckon? Is this too late in the day for them to consider, or have they known about this all along and SAMSUNG was just a fallback plan? NVidia might have had engineering samples of DDR II for some time now. As long as DDR II production can scale up to meet demand for predicted quarter 4 sales, and if the chip is launched in August, things might now be all go!!!

I am sure this development is not a complete surprise to NVidia, who like others must be constantly praying for ever quicker RAM.

Just imagine a Gigabit/second of bandwidth per pin on a 256-bit wide bus (like all the next-generation video cards) == 32 GBytes/second throughput by my humble calculations == whoa!!!
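For the curious, that arithmetic checks out - a throwaway sketch, where the 1 Gbps per pin and the 256-bit width are simply the figures from the quote and the speculation above:

```python
# Back-of-the-envelope bandwidth check (assumes 1 Gbit/s per pin, 256-bit bus)
pin_rate_gbps = 1.0        # Gbit/s per data pin
bus_width_bits = 256       # width of the memory interface

total_gbps = pin_rate_gbps * bus_width_bits   # 256 Gbit/s across the bus
total_gbytes = total_gbps / 8                 # 8 bits per byte -> 32 GB/s

print(f"{total_gbytes:.0f} GB/s")             # prints: 32 GB/s
```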
 
This looks like a technology demonstration to me, not an actual product.

Elpida said:
...
5. Summary
We developed a 512-Mb DDRII SDRAM to test a memory interface with 1-Gb/s data rate. ...

Read more in the PDFs at www.elpida.com

That said, there's still a possibility that NVidia will use some kind of DDRII.
 
It's very unlikely that NV30 will use DDR-II RAM. Samsung announced first samples for November, and it will probably be a couple of months more until it's ready for market.

Anyway, isn't the biggest advantage of DDR-II the reduced latency? How important is low latency for video memory?
 
The three factors that give memory performance are:

1) clock speed - the higher the better
2) latency - the lower the better, CAS 2 is much better than CAS 3
3) bus width - the wider the better, 256 bit is better than 128 bit wide
 
I'm not saying that I think they will use it - I don't like to speculate. But remember that it should be easy to design a mem-interface that handles both DDR and DDRII. That's a smaller step than the one between SDR and DDR, and we know that the original GF could handle both of those, with SDR cards coming first and DDR cards following as memory became available.

Regarding latency, I've seen too many screwed up uses of that word regarding memories, so I won't touch it.

g__day:
4) Minimum burst size. And the bad side of DDRII is that it has increased from 2 to 4 relative to DDR.
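To illustrate why that burst-length point matters, here's a rough sketch; the 256-bit width is just the next-gen figure being speculated about, and real controllers usually split a wide bus into narrower independent channels, which softens the hit:

```python
# Minimum access granularity = minimum burst length x bus width, in bytes.
def min_access_bytes(burst_length, bus_width_bits):
    return burst_length * bus_width_bits // 8

print(min_access_bytes(2, 256))   # DDR:    2 beats x 256 bits = 64 bytes per access
print(min_access_bytes(4, 256))   # DDR-II: 4 beats x 256 bits = 128 bytes per access
```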
 
I remember a while back that 3D Chipset had a news piece about a seminar that was held for some developers, and I guess the NV30 was showcased to these developers.

Since some sites are putting out info regarding the NV30, I might as well spill what I know. You might think this is just rumors, but this is straight from Nvidia, who held a seminar for some top-end developers. So if these specs end up being wrong, blame Nvidia, as they are the ones who held the seminar. Here is the info:

I caught a couple of NV30 specs for you. First, the RAM will be running at 900 MHz. Secondly, they are claiming at this point 200 million polys per second.

Will Nvidia be using the DDR II memory that is on Samsung's product description pages? We'll have to wait for that info, but it's apparent that Nvidia is banking on mass-produced 900 MHz modules from Samsung. The 200 million polys per second is almost double that of the GeForce 4. I believe the GeForce 4 is what, 124? Or something around that.
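If that 900 MHz number is an effective (double-pumped) data rate, the peak bandwidth works out roughly as below - my own sketch; the 128-bit and 256-bit widths are just the two obvious candidates, not anything from the seminar:

```python
# Peak bandwidth = effective data rate x bus width / 8
def peak_gb_per_s(effective_mhz, bus_width_bits):
    return effective_mhz * 1e6 * bus_width_bits / 8 / 1e9

print(f"{peak_gb_per_s(900, 128):.1f} GB/s")  # 128-bit bus: 14.4 GB/s
print(f"{peak_gb_per_s(900, 256):.1f} GB/s")  # 256-bit bus: 28.8 GB/s
```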
 
AFAIK 0.9-1 Gb/s class memory won't be available in volume until next year from ALL of the major memory manufacturers, not just Samsung. IMO if NV30 is released this year it won't have that class of memory.
 
PC-Engine - but how sure can you be of that - no offense meant?

Ascended Saiyan - I have read all the guesses on NV30 - including that one - or that it will be a 2-chip solution (separate T&L and rendering chips) - or that it will have embedded DRAM (who can spell BitBoys here :) ). Guess we will just have to wait and see.

The timing looks too tight for me (for NVidia to be comfortable with it for a flagship production release). Maybe NV35 in Feb/March will be DDR II; again, time will tell.

I guess that by the time properly coded games for a GF4 or better are out we will have NV35 on the shelves and all be eagerly awaiting NV40!!!
 
Hey Basic, why don't the GPU makers just go to RDRAM after DDRI? I always figured burst length was the problem with RDRAM. Now that DDRII and RDRAM have the same burst length, it seems that it would be easier and cheaper to produce than using 4ns DDR SDRAM with a 256-bit bus interface.
 
elimc said:
Hey Basic, why don't the GPU makers just go to RDRAM after DDRI? I always figured burst length was the problem with RDRAM. Now that DDRII and RDRAM have the same burst length, it seems that it would be easier and cheaper to produce than using 4ns DDR SDRAM with a 256-bit bus interface.

That's a good question. Why not use Rambus? Is it because of a higher price, because it doesn't have the volume production that DDR SDRAM has?
 
I thought it was the latency that killed the consideration of RAMBUS since the Stealth 3D 2000 days :)

Given that the data traffic analysis between the video card's GPU and graphics memory is pretty well known, I am sure they have worked out what memory types best fit their pattern of usage. I don't know why RAMBUS nowadays is less desirable than DDR, QDR or DDR II - great point!!!
 
elimc said:
Hey Basic, why don't the GPU makers just go to RDRAM after DDRI? I always figured burst length was the problem with RDRAM. Now that DDRII and RDRAM have the same burst length, it seems that it would be easier and cheaper to produce than using 4ns DDR SDRAM with a 256-bit bus interface.

Samsung is currently the only memory manufacturer developing RDRAM. Having a single source of such a critical component as memory is uncomfortable. Furthermore, not only are manufacturers required to pay license fees for the memory itself, but they also have to pay license fees for RDRAM memory controllers (unless you're Intel), which adds critical cost in a cut-throat business. Ouch. Plus of course the somewhat worse latency of Rambus vs DDR(II).

Even so, using RDRAM is not a horrible idea at all. Due to its nice granularity properties, it was a pretty good alternative for consoles. But at this point it's just not palatable. :)

Entropy
 
As a matter of principle, I'm very glad that Rambus seems to be dropping out.

As far as the memory technology is concerned, I'm not sure it's that great, either. I'm pretty sure that the main problem with Rambus isn't as simple as just looking at the latency. In fact, if I remember correctly, at high bandwidth usage, Rambus tended to have lower latency than DDR SDRAM.

The problem is that only one chip in a Rambus setup is active at any one time, and it can take a significant amount of time to get the other chips powered up.

This is necessary because the Rambus chips put out a significant amount of heat. This is also why there is a heat spreader on Rambus RIMMs.

This setup would be very detrimental to a modern graphics card that is continually accessing multiple spaces in memory (front buffer, back buffer, z-buffer, textures, vertex buffer, and soon vertex/pixel shader programs).
 
Latency is a generic problem for DRDRAM. The part that has been discussed the most is the synchronization that has to take place for devices on the channel, effectively adding delay to the earlier devices in order to synchronize them with the last devices. Signal propagation times increase as well with the number of memory chips, as the signal has to pass through all devices connected on the channel. These are reasons DRDRAM is less suitable for very large memory systems. It shouldn't be too much of a problem for a gfx card though.
A less obvious problem is that Rambus doesn't support critical-word-first bursting; that is, when it sends a stream of data, the word that was actually requested can be anywhere within that burst, as opposed to coming first as with DDR SDRAM.
There's a shitload of other factors of course, but I'd imagine that the lack of critical-word-first bursting would be a problem for gfx as well.
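To make the critical-word-first point concrete, here's a toy sketch of the two return orders (purely illustrative, not a description of either memory controller's actual signalling):

```python
# Order in which an 8-word burst returns data, given the requested word index.
def critical_word_first(burst_len, requested):
    # DDR SDRAM style: the requested word comes back first, then the burst wraps.
    return [(requested + i) % burst_len for i in range(burst_len)]

def sequential_burst(burst_len, requested):
    # Rambus-like case described above: the burst starts at its aligned beginning,
    # so the requested word can land anywhere in the stream.
    return list(range(burst_len))

print(critical_word_first(8, 5))  # [5, 6, 7, 0, 1, 2, 3, 4] -> word 5 arrives first
print(sequential_burst(8, 5))     # [0, 1, 2, 3, 4, 5, 6, 7] -> word 5 arrives sixth
```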

In order to achieve good performance, Rambus memory has many banks open simultaneously on the active chip, improving latency but increasing power consumption. It is true that only one device per channel can be in the ATTN state at any one time, and that accesses to a different chip will have to suffer a latency hit as it changes from standby. Locality of reference is a greater factor than for DDR SDRAM.

Still, when I saw the Sony PS2 with its two DRDRAM chips soldered to the board I thought it was a pretty neat solution for that niche.

Entropy
 
If you only have one chip per channel (which would seem likely in a video card), neither chip-activity switch delays nor signal propagation delays would be relevant.

Much of the latency in an RDRAM system comes from the RIMM itself and the serial nature of it. With up to 32 memory devices on one channel (two double-sided RIMMs) things are quite a bit different than in a PS2 for example...


Anyway, one would need quite a few RDRAM channels to equal a 20+ GB/s 256-bit DDR setup, not sure how that would work out in the number of pins required, etc... DDR might end up being the simpler solution after all. :)
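Putting rough numbers on that - my own sketch, assuming PC800-class parts at 1.6 GB/s per 16-bit channel, counting data pins only and ignoring each channel's clock and control lines:

```python
# Channels of 1.6 GB/s RDRAM needed to match a 20 GB/s 256-bit DDR setup.
import math

rdram_channel_gb_s = 1.6              # PC800: 16 bits x 800 MHz effective / 8
target_gb_s = 20.0

channels = math.ceil(target_gb_s / rdram_channel_gb_s)
data_pins = channels * 16             # data pins only
print(channels, data_pins)            # 13 channels, 208 data pins
```

Data pins alone would favour RDRAM, but routing thirteen separate channels, each with its own clocking and termination, is its own headache, so the comparison isn't clear-cut.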


*G*
 
The problem with only one chip per channel would be that all memory chips would be active at once, which would likely require active memory cooling. Given how current video card manufacturers seem to be fairly inept at mass-producing good cooling solutions, I doubt they could handle even hotter memory than DDR SDRAM...and besides, more heat=more power consumption, another thing that wouldn't be easy for a video card to handle.
 
I thought it was the latency that killed the consideration of RAMBUS since the Stealth 3D 2000 days

I don't understand this. The GPU makers could just solder the RAM onto the card. This solution would have extremely low latency. Factor into the equation that RDRAM has no bus turn around latency and the trace lengths can be extremely short for a GPU and you have a very good setup.

Samsung is currently the only memory manufacturer developing RDRAM. Having a single source of such a critical component as memory is uncomfortable.

Kingston, Samsung, Elpida, and Toshiba are all currently manufacturing RDRAM.

Furthermore, not only are manufacturers required to pay license fees for the memory itself, but you also have to pay license fees for RDRAM memory controllers (unless you're Intel), which adds critical cost in a cut-throat business. Ouch.

Samsung pays a higher percentage of royalties to Rambus when they make DDR SDRAM than when they make RDRAM. Even then the royalties are extremely low, much lower than the cost of adding gold contacts, which the GPU makers seemingly have no trouble doing.

A less obvious problem is that Rambus doesn't support critical-word-first bursting; that is, when it sends a stream of data, the word that was actually requested can be anywhere within that burst, as opposed to coming first as with DDR SDRAM.

This is a good point, but the thing is, future DDR standards are eliminating critical word first bursting because it adds too many pins.

The problem with only one chip per channel would be that all memory chips would be active at once, which would likely require active memory cooling.

How does the PS2 keep the RAM cool?

and besides, more heat=more power consumption, another thing that wouldn't be easy for a video card to handle.

Does the LVDS offset this at all, or am I mistaken?

Personally, I can't imagine that the 4ns RAM that the GPU makers are using is all that cheap, especially in a 256-bit bus situation.
 
Doesn't a 256-bit bus on upcoming graphics cards require more layers in the PCB? One of the advantages of Rambus is the lower PCB density needed to accommodate the RAM, isn't it?
 