XDR sometime?

ShootMyMonkey: no

The extra wire in the differential pair is not "insurance": the swing there is so low that you need both wires to tell which bit is being transferred. The differential pair transfers the (same) data at 3.2Gbps on each wire, but the interesting parameter for needed pin count is bandwidth divided by the number of pins needed. And for 3.2GHz XDR that's 3.2Gbps / 2 data pins = 1.6Gbps per data pin.
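The per-pin arithmetic above can be sketched like this (Python, illustrative numbers only; the 1.8 Gbps GDDR figure is the effective rate discussed later in this post):

```python
# Bandwidth divided by the number of data pins used per bit lane.
# A differential pair carries one bit stream over two wires, so its
# per-data-pin rate is the link rate divided by two.

def rate_per_data_pin(link_gbps, pins_per_lane):
    """Effective link rate (Gbps) divided by pins used for one lane."""
    return link_gbps / pins_per_lane

xdr = rate_per_data_pin(3.2, 2)    # differential: two pins per lane -> 1.6
gddr = rate_per_data_pin(1.8, 1)   # single-ended: one pin per lane  -> 1.8
print(xdr, gddr)
```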

GDDR4 datasheets are freely available at samsung.com; they do not use differential signaling on the data pins.

On the second point you say "wrong", and then prove my point. ;)
XDR speeds are usually "effective" data rates, while GDDR speeds are usually actual clocks. So to make the 900MHz of a GF8800 compatible with the XDR naming convention, you'd have to call it 1.8GHz. Combine that with the fact that XDR is differential, and you'll see that GDDR transfers data faster on the same number of data pins.

As silent_guy says though, the built-in data deskewing makes XDR easier to route.
 
XDR at 4 GHz effective means the DRAMs themselves run at 500 MHz, and when you've got DRAMs clocked that high, latency has to suffer, which is why it's only then equal to DDR1 at 400 MHz effective (200 MHz DRAM). If you notice, DDR-2 at 600 is about where the latencies even out with DDR at 400 for the same reason (that's for average latency, not best or worst case). Nonetheless, the latencies are still better for XDR vs. say, GDDR-3 or 4, and putting it in the graphics arena was kind of what the OP was asking about.


I agree with your conclusions. I'm trying to understand the logic of others in this thread, but I don't get it.


XDR uses octal-rate signaling, i.e. 8 bits of data per clock per pin. So RAM with a 400 MHz clock is rated at 3.2 GHz.
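The rated-speed math is just the base clock times the bits transferred per clock:

```python
# Octal-rate signaling: 8 data bits per clock per pin, so the
# "effective" rate is the base clock multiplied by 8.
base_clock_mhz = 400
bits_per_clock = 8
effective_mhz = base_clock_mhz * bits_per_clock
print(effective_mhz)  # 3200, i.e. "3.2 GHz" XDR
```

The same formula with 16 bits per clock gives the doubled XDR2 figure mentioned below.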


XDR2 is going to move to hex-rate signaling, which equals 16 bits of data per clock, double what current XDR does.



Read this PDF. It clearly shows where XDR2 is headed, and it's poised to leave GDDR-4 in the dust. On the second page it mentions hex-rate signaling (16 bits per clock).

http://www.rambus.com/assets/documents/products/XDR2MemoryInterfaceProductBrief_091905.pdf
 

Basic,

Samsung is not so convinced of GDDR3's pin-count advantage over XDR:

[image: XDR_DRAM_application.gif]


With GDDR3 needing to handle twice the external clock rate of the XDR solution, while using more pins to transfer less data, things look good for XDR. Samsung manufactures and sells both kinds of chips, so it is not in their best interest to play dirty politics, IMHO.
 
GDDR4 uses substantially fewer pins for address/control than GDDR3, as well as having much higher per-pin data rates; as such, comparisons that show GDDR3 to be weaker than XDR do not translate nicely to GDDR4.

Also, if you are going to compare XDR2 (which isn't available yet), then the standard that it would be fair to compare it against is GDDR5 (since that is presumably what is going to be available in the XDR2 timeframe), not GDDR4.
 

Differential signalling is used because it makes your signal much more resistant to common mode noise.

Single-ended signalling, like GDDR uses, requires high-quality ground and power planes. This inflates the number of ground and power pins to a degree that makes the pin-count advantage of the single-ended data bus non-existent.

On Samsung's 512Mbit GDDR4 136-pin BGA memory chip, 33 of the 136 pins are ground and 31 are power. That's for a 32-bit bus.
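Using the pin counts quoted above, the power/ground overhead works out like this:

```python
# Share of the GDDR4 package spent on power and ground
# (pin counts taken from the Samsung 136-pin BGA figures above).
total_pins = 136
gnd_pins, pwr_pins = 33, 31
overhead = (gnd_pins + pwr_pins) / total_pins
print(f"{overhead:.0%}")  # prints "47%"
```

So nearly half the package isn't carrying data, address, or control at all.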

On top of that, you can't have multiple devices per channel the way you can with XDR.

Cheers
 
I was explicitly talking about just data pins. Both those setups have 64 data pins.
I'm sorry that I missed the 4Gbps version of XDR; GDDR3 doesn't reach all the way to that speed (it only beats the 3.2Gbps version). With GDDR4, though, Samsung has listed memories with 40% higher speed per data pin than 4Gbps XDR.

Rambus has an advantage in control pins though. GDDR* doesn't do DDR on the control pins, so it needs more of them. But I'm not sure it's fair to compare the pin count of one control bus for XDR against two control buses for GDDR3 (as they did in that figure). Wouldn't the two control buses be more flexible?

Another thing is that the pin count they give for GDDR is higher than what you get if you count the control pins on a GDDR4 memory (35 pins on one module, some of which most likely could be shared between the two modules).

I wouldn't be surprised if that page comes from a "Rambus division" within Samsung, and while it doesn't lie, it tries to make those memories look as good as possible, even compared to Samsung's other memories.

If you compared the full pin count between their fastest XDR and GDDR4 memories against the bandwidth you get from them, you'd probably get similar results. GDDR4 has more pins, but compensates by being faster.


I'm NOT saying that XDR is bad. I just thought there was a need for a counterweight to the "OMG 3.2GHz is sooo fast". As shown, the data bus isn't the fastest available per data pin.
And if you factor in having to deal with Rambus Inc., it loses yet more of its attraction.
 
I was explicitly talking about just data pins. Both those setups have 64 data pins.
I find it odd that you count the negative as actual "data" pins, since they're not really transferring the "data" per se. The way you worded that prior post, though, made it sound as if you were saying that each pin within the pairs transfers at half the quoted speed.

The second part of that was that you were referring to the GDDR variants by their bus clock speed while referring to XDR's speed by the effective transfer rate. At least quote the same things... "1.6 GHz GDDR-4" is still 1.6 GHz bus clock speed, but being DDR, the effective rate is 3.2 GHz in the same way that 3.2 GHz XDR is 3.2 GHz (and also has a 1.6 GHz bus clock).

If you compared the full pin count between their fastest XDR and GDDR4 memories against the bandwidth you get from them, you'd probably get similar results. GDDR4 has more pins, but compensates by being faster.
AFAIK, the fastest XDR that's been built in a lab is double the speed of the PS3's XDR (6.4 GHz effective), and that's just XDR, not XDR2... Certainly, that's not the fastest they have in production (which I think is the 4 GHz units), but then I don't know what the equivalent is for GDDR-4. I can only go on your claim that the 3.2 GHz effective GDDR-4 is the fastest they have. And I fail to see how that's faster than 3.2 GHz XDR or has a lower total pin count.

In broader terms of technique, I do think that GDDR-4 is doing the same thing as XDR. Of course, they both make little sacrifices relative to the other which are due to differences at a lower level. All the same, I don't see XDR happening on graphics cards unless Rambus does something about the way they do business and somehow or other Rambus tech is adopted into large-scale production lines.
 
Neither of the two pins in a differential pair is more of an "actual data pin" than the other. Both are equal (except for polarity), unless Rambus's idea of a differential bus doesn't follow normal conventions.

But sorry if it was unclear that I meant bandwidth divided by the number of data pins.


On the second part, I just followed the usual convention for the respective technology (just as you yourself said), and noted how those frequencies relate to actual data speed per data pin. The point being that the seemingly big difference between 3.2GHz for XDR and 900MHz for GDDR isn't what it looks like. But I'll use effective rates for everything below if that makes it easier.


A GDDR4 module at 3.2GHz effective has 32 data pins, and transfers 32 bits = 4 bytes per effective Hz => 12.8 GB/s.
An XDR module at 3.2GHz effective has 32 data pins, and transfers 16 bits = 2 bytes per effective Hz => 6.4 GB/s.
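The two bandwidth figures above follow from the same formula once you account for differential pairing halving the number of lanes:

```python
# Module bandwidth: lanes * effective rate (Gbps per lane) / 8 bits per byte.
# XDR's differential data bus uses two pins per lane, GDDR4 uses one.

def module_bw_gb_s(data_pins, pins_per_lane, eff_ghz):
    """Bandwidth in GB/s for one memory module."""
    lanes = data_pins // pins_per_lane
    return lanes * eff_ghz / 8

gddr4 = module_bw_gb_s(32, 1, 3.2)  # 32 lanes -> 12.8 GB/s
xdr   = module_bw_gb_s(32, 2, 3.2)  # 16 lanes ->  6.4 GB/s
print(gddr4, xdr)
```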

How many pins in total will those use?
It's hard to tell from Samsung's site, and I'm not registered at Rambus.
At one point they say 74 controller pins incl. pwr/gnd for one XDR channel, but don't say whether that includes data pins. Later they say 19 controller pins excl. pwr/gnd for two XDR channels, so it seems that "controller pins" doesn't include data pins. Add 32 data pins to the 74 pins and you get 106 pins.

With a very pessimistic guess for GDDR4, let's say the GPU needs as many pins for a GDDR4 channel as the memory itself has: that's 136. That's still a lot less than double the pin count for double the bandwidth. You could compare it to the 74 pins mentioned first, and it still has a better bandwidth/pin ratio.
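Putting the estimates above in numbers (bandwidth per total pin, using the guessed pin counts from this post, so illustrative only):

```python
# Bandwidth per total pin, using the pin-count guesses from the post:
# XDR: 6.4 GB/s over an estimated 106 pins.
# GDDR4: 12.8 GB/s over a pessimistic 136 pins.
xdr_ratio = 6.4 / 106     # ~0.060 GB/s per pin
gddr4_ratio = 12.8 / 136  # ~0.094 GB/s per pin
print(xdr_ratio, gddr4_ratio)
```

Even with the pessimistic GDDR4 pin count, GDDR4 comes out ahead per total pin.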

But it's a bit unfair to compare against 3.2GHz effective GDDR4, since they haven't listed that as a product. They have listed 2.8GHz effective GDDR4, and with that memory, the most pessimistic guess for GDDR4, and the most optimistic guess for XDR, they are on par in bandwidth / (total pin count).

So even there, XDR doesn't show the massive extra bandwidth that it appears to have at first glance.
 


GDDR is tweaked for GPU applications

XDR2 is tweaked for GPU applications


DDR is a general purpose design

XDR is a general purpose design


The starting clock speed for XDR2 is targeted at 1GHz.


Rambus has mentioned they think they can make inroads in the GPU market, so they're targeting this market now.
 
I was explicitly talking about just data pins. Both those setups have 64 data pins.

But you can't look at signal pins in isolation. Single-ended signalling requires more power and ground pins, period. You have to look at all the pins of a memory package.

Cheers
 
Some 3.33 ns latency at 3.2 GHz for XDR, while we have a minimum of 11.25 ns for DDR2-533. Next time check your sources a bit better before you post :)
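As a sanity check on the DDR2-533 figure: converting a CAS latency from clock cycles to nanoseconds is just cycles divided by the clock. Assuming CL3 at DDR2-533's ~266.7 MHz command clock (my assumption, consistent with the 11.25 ns quoted):

```python
# Latency in ns from a CAS latency in cycles: t = cycles / f.

def cas_ns(cycles, clock_mhz):
    """Convert a latency in clock cycles to nanoseconds."""
    return cycles / clock_mhz * 1000

ddr2_533 = cas_ns(3, 800 / 3)  # CL3 at 266.7 MHz -> 11.25 ns
print(ddr2_533)
```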

Back on topic, I said that over a year ago. The problem is (besides the politics issues) that it's still too costly in comparison and the availability isn't all that great either.

I said that it was compared with DDR, not DDR2. ;)
 