*** Will NV30 use Elpida DDR II or SAMSUNG DDR I SDRAM?

The way I understand it, DDR-II is generally preferable to RDRAM in graphics cards for the following reasons:

  • Lower power consumption at a given clock speed (AFAIK by a factor of about 2)
  • Shorter minimum burst length - 4 for DDR-II, 8 for RDRAM (neither supports critical word first bursts); see the granularity sketch at the end of this post.
  • Easier to make reasonably efficient controllers, especially as proven DDR-I designs can be modified to do DDR-II with moderate effort. (With RDRAM, it took even Intel 3 attempts to come up with a really efficient controller.)
  • Less costly. One of the main selling points of DDR-II is that it will be less expensive to manufacture (simple architecture, easy and quick to test) than previous memory types. Whereas for RDRAM, you get not only additional manufacturing costs, but licensing costs as well.
  • May require fewer pins as well - as the address bus of DDR-II runs at only half the datarate of the data bus, it can be shared nicely between multiple devices, unlike the full-speed address bus of RDRAM.

I would expect both clock speed and protocol efficiency to be rather similar for RDRAM and DDR-II. At a given data bus width/speed, there should be little difference in PCB complexity between RDRAM and DDR-II - possibly somewhat more complex for the RDRAM.
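
To make the burst-length point above concrete, here's a minimal sketch of access granularity (the smallest read a single channel can perform). The 16-bit channel width for both parts is purely an assumption to put them on equal footing; real controllers gang several channels together, which multiplies these numbers.

```python
# Rough access-granularity comparison (illustrative only; the 16-bit
# channel width is assumed just to compare like with like).
def granularity_bytes(channel_width_bits, min_burst):
    """Smallest transfer a single channel can perform, in bytes."""
    return (channel_width_bits // 8) * min_burst

print("DDR-II:", granularity_bytes(16, 4), "bytes")  # burst of 4 ->  8 bytes
print("RDRAM: ", granularity_bytes(16, 8), "bytes")  # burst of 8 -> 16 bytes
```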
 
Lower power consumption at a given clock speed (AFAIK by a factor of about 2)

Can we say that DDRII has lower power consumption than RDRAM? Does the LVDS make up for the power losses?

Shorter minimum burst length - 4 for DDR-II, 8 for RDRAM (neither supports critical word first bursts).

Are you sure it's not 4 for RDRAM?

Easier to make reasonably efficient controllers, especially as proven DDR-I designs can be modified to do DDR-II with moderate effort. (With RDRAM, it took even Intel 3 attempts to come up with a really efficient controller.)

I have to disagree here. The i850 has been around for over two years and has beaten out all DDR competition. There has not been a single DDR board that has come close to two years.

Less costly. One of the main selling points of DDR-II is that it will be less expensive to manufacture (simple architecture, easy and quick to test) than previous memory types. Whereas for RDRAM, you get not only additional manufacturing costs, but licensing costs as well.

A 256-bit bus can not be all that easy to design. The number of traces is insane and verification would cost quite a bit. RDRAM could actually have a cost advantage in GPUs. Plus, you do have to pay licensing fees for DDR SDRAM.

May require fewer pins as well - as the address bus of DDR-II runs at only half the datarate of the data bus, it can be shared nicely between multiple devices, unlike the full-speed address bus of RDRAM.

This is the exact same method that RDRAM employs. The only caveat is that it adds latency.

I would expect both clock speed and protocol efficiency to be rather similar for RDRAM and DDR-II. At a given data bus width/speed, there should be little difference in PCB complexity between RDRAM and DDR-II - possibly somewhat more complex for the RDRAM.

RDRAM has pretty clean signaling due to its signaling technology and the fact that it employs ground dams. Frankly, I wouldn't be at all surprised if Rambus increased the royalties to manufacture DDRII, since they are getting more and more similar.
 
elimc said:
Are you sure it's not 4 for RDRAM?

Before spreading FUD, please get some real info.

I have to disagree here. The i850 has been around for over two years and has beaten out all DDR competition. There has not been a single DDR board that has come close to two years.

Two things:

1. The i850 was Intel's third chipset that supported RDRAM.
2. "Beating out" the DDR competition likely has more to do with the asynchronous bus required on the DDR parts. I'd really like to see a comparison between 800MHz RDRAM and dual 200MHz or single 400MHz DDR SDRAM.

A 256-bit bus can not be all that easy to design. The number of traces is insane and verification would cost quite a bit. RDRAM could actually have a cost advantage in GPUs. Plus, you do have to pay licensing fees for DDR SDRAM.

There are no licensing fees for DDR SDRAM, except from those companies that caved to Rambus. And yes, while RDRAM does have a pincount advantage, that advantage never materialized in the desktop space as a price advantage over DDR. There may well be many other factors involved that make RDRAM more expensive.

RDRAM has pretty clean signaling due to its signaling technology and the fact that it employs ground dams. Frankly, I wouldn't be at all surprised if Rambus increased the royalties to manufacture DDRII, since they are getting more and more similar.

Rambus has zero legitimate claim over DDR SDRAM.
 
Before spreading FUD, please get some real info.

I was JUST asking a question, SO GET OFF MY CASE, OK!

1. The i850 was Intel's third chipset that supported RDRAM.

What was the first?

2. "Beating out" the DDR competition likely has more to do with the asynchronous bus required on the DDR parts. I'd really like to see a comparison between 800MHz RDRAM and dual 200MHz or single 400MHz DDR SDRAM.

So what? The i850 has been around for two years. There have been no DDR SDRAM mobos that have been out even close to that long. This is why I object to the claim that RDRAM's memory controller is harder to develop than DDR SDRAM's. It's also telling that the original DDR mobos were nine months late and still sucked.

There are no licensing fees for DDR SDRAM, except from those companies that caved to Rambus.

Well, they are all going to be forced to pay royalties to Rambus pretty soon.

And yes, while RDRAM does have a pincount advantage, you have to consider that that pincount advantage never materialized in the desktop space to make its price better than DDR.

We aren't talking about the desktop, we are talking about graphics cards. A 256-bit bus has a very large number of traces and there is no way around it. Anyway, I can't imagine that the high quality DDR SDRAM they are using in high end GPUs is all that cheap.

Rambus has zero legitimate claim over DDR SDRAM.

I could see it having superior latency and cost savings, plus higher BW in the graphics arena. The RDRAM wouldn't have to be double sampled like with the P4, and the clock generator could be placed on die. This would have very low latency. Plus the traces would be shorter. However, I am not entirely well versed in how RAM is utilized in graphics cards.

For instance, RDRAM has wrt delay. How useful would something like this be in a graphics card? How much does the RAM that nVidia uses cost? How important is initial latency in reality? There are some other questions I have, but I am tired and want to go to bed.
 
rambus royalties

I doubt any of the major memory manufacturers are going to be paying rambus royalties any time soon. The courts so far have smacked rambus around, and the other memory manufacturers have a pretty good case against them. I'd say they'll be lucky if they can get out of court unscathed and keep making their own ram, let alone charging other companies for theirs.
 
elimc said:
So what? The i850 has been around for two years.

It's not about how long it's been out. It's about how the bus speed, and quite possibly some P4 architectural decisions, favor Rambus. It doesn't matter, anyway. Intel is phasing out Rambus.

Well, they are all going to be forced to pay royalties to Rambus pretty soon.

No, they won't. Rambus' claims to those patents are invalid. Check out a couple of news stories:

"The FTC, which voted 5-0 to file the lawsuit, alleges that Rambus violated antitrust laws by deliberately not disclosing key patent applications, among other acts, while it was a member of the JEDEC Solid State Technology Association (formerly known as the Joint Electron Device Engineering Council). JEDEC's bylaws required members to disclose or license relevant intellectual property to other members."

http://news.com.com/2100-1001-937449.html?tag=fd_top

"In other news, a group of Rambus investors have filed a class-action lawsuit against the company alleging that Rambus misled led them about the validity of its patents regarding SDRAM and DDR SDRAM and the royalties it could obtain from those patents. This comes on the heels of a recent judge’s decision to overturn a jury’s verdict finding Rambus guilty two counts of fraud. However, the same court did order Rambus to pay Infineon’s legal fees - $7.1 million."

http://firingsquad.gamers.com/news/newsarticle.asp?searchid=3288

"A jury in the U.S. District Court for the Eastern District of Virginia ruled Wednesday that Rambus committed fraud against Infineon by failing to properly disclose patent information when required by an industry standards body. The jury awarded Infineon $3.5 million. But Judge Robert Payne is likely to reduce the award to $350,000 to conform with state law, according to reports from the trial. Infineon had sought $105 million."

http://firingsquad.gamers.com/news/newsarticle.asp?searchid=3035

It should be obvious why I can't stand Rambus as a company. Personally, I'd like to see the company stomped out of existence, and the administrative leaders of the company never find jobs again.

We aren't talking about the desktop, we are talking about graphics cards. A 256-bit bus has a very large number of traces and there is no way around it. Anyway, I can't imagine that the high quality DDR SDRAM they are using in high end GPUs is all that cheap.

I never said a 256-bit bus was a good idea. In fact, I never believed a 256-bit bus would come into widespread usage in consumer video cards. It is still my feeling that other technologies for saving external memory bandwidth will be used first, such as eDRAM and technologies like Hyper-Z.
 
The i850 and the i840 are basically the same. There is little to no difference, especially in terms of the memory controller. And the i820 and i840 are similarly close, except for some changes to the memory controller for the second channel.
 
elimc said:
Can we say that DDRII has lower power consumption than RDRAM? Does the LVDS make up for the power losses?
AFAIK, most of the lower power consumption comes from the lower number of banks used in DDR-II compared to RDRAM (4 for DDR-II, 32 for RDRAM). Besides, the signalling used for DDR-II is not LVDS, but a variant of SSTL - only the clock lines use differential signalling at all.

Shorter minimum burst length - 4 for DDR-II, 8 for RDRAM (neither supports critical word first bursts).
Are you sure it's not 4 for RDRAM?
According to documents at rdram.com, it's 8. Not 4.

Easier to make reasonably efficient controllers, especially as proven DDR-I designs can be modified to do DDR-II with moderate effort. (With RDRAM, it took even Intel 3 attempts to come up with a really efficient controller.)
I have to disagree here. The i850 has been around for over two years and has beaten out all DDR competition. There has not been a single DDR board that has come close to two years.
The i850 WAS Intel's 3rd attempt (the first two were the i820 and i840 for the Pentium-III platform). If you remember, the first of the chipsets, the i820, suffered from weak performance, severe delays, and bugs severe enough to force the retraction of about 1 million motherboards.

Also, according to Anandtech, the P4X333 chipset, using DDR-I memory, beats the i850/i850E (except when using expensive and unsupported PC1066 RDRAM).

A 256-bit bus can not be all that easy to design. The number of traces is insane and verification would cost quite a bit. RDRAM could actually have a cost advantage in GPUs. Plus, you do have to pay licensing fees for DDR SDRAM.
I find it hard to imagine that a 256-bit RDRAM bus (=16 channels!) is any easier to design than a 256-bit DDR-II bus. Also, while some of the smaller RAM makers agreed to pay Rambus for DDR SDRAM, the bigger ones, in particular Micron and Infineon, don't - hasn't Rambus lost just about every court case all over the world against Micron/Infineon?

May require fewer pins as well - as the address bus of DDR-II runs at only half the datarate of the data bus, it can be shared nicely between multiple devices, unlike the full-speed address bus of RDRAM.
This is the exact same method that RDRAM employs. The only caveat is that it adds latency.
Umm, no. The RDRAM address bus runs at the same data rate as the data bus - it just spends multiple cycles to pass each command. Which adds about 1 data cycle's worth of latency to the DDR-II and about 7 data cycles for RDRAM.
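
To put those cycle counts in time terms, a minimal sketch (the 1066 and 800 MT/s data rates below are assumptions for illustration, not actual NV30 figures):

```python
# Convert extra command-transfer cycles into nanoseconds (illustrative only;
# the data rates are assumed, not taken from any actual NV30 design).
def extra_ns(extra_data_cycles, data_rate_mts):
    return extra_data_cycles * 1000.0 / data_rate_mts

print(extra_ns(7, 1066))  # RDRAM at PC1066: ~6.6 ns of added command latency
print(extra_ns(1, 800))   # DDR-II at 800 MT/s: ~1.25 ns
```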

RDRAM has pretty clean signaling due to its signaling technology and the fact that it employs ground dams. Frankly, I wouldn't be at all surprised if Rambus increased the royalties to manufacture DDRII, since they are getting more and more similar.

RDRAM signalling isn't that good; i850 motherboards require a rather weird RIMM layout to be able to support even 2 channels without adding board layers beyond the 4 common for motherboards. (dual-channel DDR motherboards do not suffer similar problems, as nForce has shown). Also, signal integrity was one of the major problems with early i820 boards as well.

And given the level of success Rambus enjoys in courtrooms all over the world (last I heard, FTC was investigating them for their patenting practices), I doubt they are going to see much DDR-II royalty money.
 
Well, a 256-bit RDRAM bus would be 16 16-bit channels. Using PC1066, say, that would give you ~34 GB/s.

AFAICS the advantage is that you can get ~equivalent bandwidth to DDR with half the bus width.

I.e. a 128-bit (8-channel) Rambus setup with PC1066 RDRAM gives 17 GB/s. If you were to use really high-speed RDRAM (say PC1200), that goes up to 19.2 GB/s.

I don't know enough to say why it's not used, but I'm guessing there are very good reasons...

Serge
 
arjan de lumens said:
The i850 WAS Intel's 3rd attempt (the two first were i820 and i840 for the Pentium-III platform). If you remember, the first of the chipsets, the i820, suffered from weak performance, severe delays, and bugs severe enough to force the retraction of about 1 million motherboards.

Well, to be fair, the i820 was recalled due to its terrible support of SDRAM (through an MTH), and doesn't have a whole lot to do with this subject.

Regardless, given that we're (sort of) talking about Rambus with respect to graphics cards, DDR SDRAM used in graphics cards is rather close to the clock speed of RDRAM. I doubt that RDRAM could be made to go much faster (given the yield problems we've seen in the past).

Given the exact same clock speed, both RDRAM and DDR would have the same bandwidth if they each had the same width datapath. So, I don't see Rambus having more than roughly a 30%-50% memory bandwidth advantage in video cards. From what I can tell, this would not be enough to offset the latency and other issues (you'd probably need a rather beefy memory controller, as well as much more on-chip cache, to take care of a large multi-channel Rambus setup).
 
i820, followed by i840

I wasn't thinking when I wrote this. In any case, the memory controllers are fairly similar for all the RDRAM boards that Intel has made so far.

We'll see what the courts have to say about that. BTW, is that supposed to be a good thing?

Only if you don't live in China.

It doesn't matter, anyway. Intel is phasing out Rambus.

Link?

No, they won't. Rambus' claims to those patents are invalid. Check out a couple of news stories:

Rambus filed patents before they even joined JEDEC. They should at least be collecting royalties for those.

"The FTC, which voted 5-0 to file the lawsuit, alleges that Rambus violated antitrust laws by deliberately not disclosing key patent applications, among other acts, while it was a member of the JEDEC Solid State Technology Association (formerly known as the Joint Electron Device Engineering Council). JEDEC's bylaws required members to disclose or license relevant intellectual property to other members."

Basically, this investigation is worthless and will only piss off the court of appeals, which is already reviewing the same case. It is a special-interest-funded investigation. The whole thing is a joke, considering that there were a number of companies in JEDEC that got away with what Rambus is getting sued for, including IBM, which announced that they would not disclose any pending patent applications and yet were not punished.

"In other news, a group of Rambus investors have filed a class-action lawsuit against the company alleging that Rambus misled led them about the validity of its patents regarding SDRAM and DDR SDRAM and the royalties it could obtain from those patents. This comes on the heels of a recent judge’s decision to overturn a jury’s verdict finding Rambus guilty two counts of fraud. However, the same court did order Rambus to pay Infineon’s legal fees - $7.1 million."

All this means is that some lawyers thought they could make a quick buck. That is their job, after all.

"A jury in the U.S. District Court for the Eastern District of Virginia ruled Wednesday that Rambus committed fraud against Infineon by failing to properly disclose patent information when required by an industry standards body. The jury awarded Infineon $3.5 million. But Judge Robert Payne is likely to reduce the award to $350,000 to conform with state law, according to reports from the trial. Infineon had sought $105 million."

This is known as the Markman Ruling, which goes against the curriculum of every engineering college in the US. You don't have to believe me if you don't want to, but go to an unbiased engineering site and see what they think of the Markman Ruling.

It should be obvious why I can't stand Rambus as a company. Personally, I'd like to see the company stomped out of existence, and the administrative leaders of the company never find jobs again.

The memory makers are very wary of each other due to previous history. Do you really think that they would actually be stupid and innocent enough to let Rambus steal and patent their ideas without their knowledge?

AFAIK, most of the lower power consumption comes from the lower number of banks used in DDR-II compared to RDRAM (4 for DDR-II, 32 for RDRAM). Besides, the signalling used for DDR-II is not LVDS, but a variant of SSTL - only the clock lines use differential signalling at all.

I wasn't clear enough. I meant that the LVDS that RDRAM employs might make up for excess power dissipation. BTW, there is a version of RDRAM that has fewer banks and is cheaper to manufacture than the original RDRAM. It is called 4iRDRAM.

According to documents at rdram.com, it's 8. Not 4.

I had always thought that RDRAM sent 4 word packets . . .

Also, according to Anandtech, the P4X333 chipset, using DDR-I memory, beats the i850/i850E (except when using expensive and unsupported PC1066 RDRAM).

Link?

Also, while some of the smaller RAM makers agreed to pay Rambus for DDR SDRAM, the bigger ones, in particular Micron and Infineon, don't - hasn't Rambus lost just about every court case all over the world against Micron/Infineon?

Last time I checked, seven of the top ten memory manufacturers pay royalties to Rambus. Some decisions have gone to Rambus and some have gone to Infineon. Rambus has lost one ruling.

Umm, no. The RDRAM address bus runs at the same data rate as the data bus - it just spends multiple cycles to pass each command. Which adds about 1 data cycle's worth of latency to the DDR-II and about 7 data cycles for RDRAM.

The DDRII bus is multiplexed by four just like RDRAM. Right?

RDRAM signalling isn't that good; i850 motherboards require a rather weird RIMM layout to be able to support even 2 channels without adding board layers beyond the 4 common for motherboards. (dual-channel DDR motherboards do not suffer similar problems, as nForce has shown). Also, signal integrity was one of the major problems with early i820 boards as well.

Baloney. The DDR mobos have always had stability problems. How do you account for the huge delay the original mobos had? Why is dual-channel RDRAM being run on 4-layer PCBs, while on DDR SDRAM mobos the BIOS automatically adjusts the RAM timings when the boards are fully loaded, even on 6-layer PCBs? The tRAC and tCAC timings are automatically increased in order to keep the system stable. This is true for the nForce as well.

Pseudo-differential signaling, ground dams, high quality materials, and some other creative technology allow RDRAM to have superior channel-to-channel isolation.

And given the level of success Rambus enjoys in courtrooms all over the world (last I heard, FTC was investigating them for their patenting practices), I doubt they are going to see much DDR-II royalty money.

I guess we'll see, eh?

What is exactly the problem with moving to RDRAM? Is it a technical issue or a business one?
 
Regardless, given that we're (sort of) talking about Rambus with respect to graphics cards, DDR SDRAM used in graphics cards is rather close to the clock speed of RDRAM. I doubt that RDRAM could be made to go much faster (given the yield problems we've seen in the past).

By 2005 RDRAM will be at 9.6 GB/s per stick of memory. The DDR SDRAM camp hopes to be in the 5 GB/s range, but this is fairly optimistic. RDRAM will then move to 1330MHz and then 1600MHz. This is pretty fast. You could probably go faster than this on a graphics card since the memory is soldered.

From what I can tell, this would not be enough to offset the latency and other issues (You'd probably need a rather beefy memory controller, as well as much more on-chip cache, to take care of a large multi-channel Rambus setup).

I imagine the latency would be very good on a graphics card: short trace lengths, no double sampling, clock generator on die, and high clock speeds. The added overhead of the system could be offset by the fewer traces and pins.
 
elimc said:
It doesn't matter, anyway. Intel is phasing out Rambus.

Link?

"The first point of discussion was the future of RDRAM with the Pentium 4 platform to which Mr. Siu quickly reaffirmed what we had been hearing from motherboard manufacturers - after the 850E, there will be no more RDRAM based chipsets for the Pentium 4. Although Mr. Siu (as well as most people in the industry) believes that RDRAM is technically superior to DDR, he made it clear that Intel's roadmap was hurt severely by an overly strong commitment to the technology which today still isn't as economically viable as DDR."

From: http://www.anandtech.com/showdoc.html?i=1631&p=6

Rambus filed patents before they even joined JEDEC. They should at least be collecting royalties for those.

Patents which they were required to disclose before joining JEDEC. It was more than just a little unethical for Rambus to not disclose those patents, and then attempt to push their patented technologies into SDRAM and DDR SDRAM when the other companies believed all of the SDRAM/DDR tech was fully open. If you want court precedent, I do know that Dell got slammed for doing the same thing some time ago (though I'm not sure of the exact case...).

This is known as the Markman Ruling which goes against the curriculum of every engineering college in the US. You don't have to believe me if you don't want to, but go to a non biased engineering site and see what they think of the Markman Ruling.

Well, I read about a Markman ruling, and it has nothing to do with what happened here. The issues aren't whether or not the patents Rambus holds cover DDR/SDRAM, but whether or not those patents are valid. A Markman ruling only deals with patent definitions, and usually decides the outcome of a case.

Side note: Of course, I'm sure that those companies that are still fighting Rambus are fighting Rambus in every way possible...meaning they are also attempting to show that Rambus' patents don't cover their products. They'd be stupid not to. But, the heart of it is still that Rambus' patents are invalid due to fraud.

The DDRII bus is multiplexed by four just like RDRAM. Right?

So is AGP4x. Are you saying that AGP4x infringes on RDRAM patents? I don't think any claims have been made to state that yet...
 
elimc said:
AFAIK, most of the lower power consumption comes from the lower number of banks used in DDR-II compared to RDRAM (4 for DDR-II, 32 for RDRAM). Besides, the signalling used for DDR-II is not LVDS, but a variant of SSTL - only the clock lines use differential signalling at all.

I wasn't clear enough. I meant that the LVDS that RDRAM employs might make up for excess power dissipation. BTW, there is a version of RDRAM that has fewer banks and is cheaper to manufacture than the original RDRAM. It is called 4iRDRAM.

Fair enough - if we ever get to see it in use. 4 banks sounds a bit on the low side to keep up the protocol efficiency advantage of the RDRAM, though.

According to documents at rdram.com, it's 8. Not 4.

I had always thought that RDRAM sent 4 word packets . . .
Nah. It's 8.

Also, according to Anandtech, the P4X333 chipset, using DDR-I memory, beats the i850/i850E (except when using expensive and unsupported PC1066 RDRAM).

Link?

Here. Keep in mind when reading the article that the PC1066 platform is NOT officially supported by Intel.

Umm, no. The RDRAM address bus runs at the same data rate as the data bus - it just spends multiple cycles to pass each command. Which adds about 1 data cycle's worth of latency to the DDR-II and about 7 data cycles for RDRAM.

The DDRII bus is multiplexed by four just like RDRAM. Right?
No. The available documentation indicates that the RDRAM bus is multiplexed by 8. This goes for both address and data lines.

RDRAM signalling isn't that good; i850 motherboards require a rather weird RIMM layout to be able to support even 2 channels without adding board layers beyond the 4 common for motherboards. (dual-channel DDR motherboards do not suffer similar problems, as nForce has shown). Also, signal integrity was one of the major problems with early i820 boards as well.

Baloney. The DDR mobos have always had stability problems. How do you account for the huge delay the original mobos had? Why is dual-channel RDRAM being run on 4-layer PCBs, while on DDR SDRAM mobos the BIOS automatically adjusts the RAM timings when the boards are fully loaded, even on 6-layer PCBs? The tRAC and tCAC timings are automatically increased in order to keep the system stable. This is true for the nForce as well.
Hmmm - the first RDRAM motherboards back in the bad old i820 days were, as far as I remember, even more delayed. DDR stability was an issue with some of the first DDR motherboards, but this has long since been fixed (given the huge overclockability of some of the newer DDR boards, the noise margins would seem to be quite safe/large by now). I know that nForce sometimes slows down DRAM timings if you do not equip its two memory channels with identical memory configurations - are there any other examples around of DDR motherboards dropping timings when fully loaded?

Pseudo-differential signaling, ground dams, high quality materials, and some other creative technology allow RDRAM to have superior channel-to-channel isolation.
Channel-to-channel isolation would mostly be a motherboard design issue - the way I understand it, 4-layer boards means that you cannot route the RDRAM channels on top of each other, resulting in the two RDRAM channels - and their associated RIMM slots - being placed perpendicular to each other.

What is exactly the problem with moving to RDRAM? Is it a technical issue or a business one?

Technical: Latency, access granularity, possibly board layout issues with >4 channels. Business: Too high risk - for Nvidia/ATI/etc the technology is unproven and unfamiliar. And experience from Intel (the i820 delay) indicates that making a good controller takes a long time.

Chalnoth said:
Well, to be fair, the i820 was recalled due to its terrible support of SDRAM (through an MTH), and doesn't have a whole lot to do with this subject.
I know about the MTH - it was discontinued because it was buggy and Intel was unable to fix it. The i820 motherboard recall issue was something different - essentially, when 3 RIMM slots were connected in series, signal integrity could not be maintained, with the result being system instability - tons of motherboards that were ready to be shipped had to be scrapped over this issue.
 
"The first point of discussion was the future of RDRAM with the Pentium 4 platform to which Mr. Siu quickly reaffirmed what we had been hearing from motherboard manufacturers - after the 850E, there will be no more RDRAM based chipsets for the Pentium 4. Although Mr. Siu (as well as most people in the industry) believes that RDRAM is technically superior to DDR, he made it clear that Intel's roadmap was hurt severely by an overly strong commitment to the technology which today still isn't as economically viable as DDR."

"Our friends at Anandtech reported yesterday that Mr. Siu had stated that the current i850E would be the last chipset supporting RDRAM, although upon writing this it seems that the statement has been removed from their website. Mr. Siu explained to us that he had been misquoted. While he would not go into detail, he gave the impression that Intel would continue to support RDRAM in the future. So it seems as though we'll certainly see RDRAM and DDR RAM both stay strong in Intel's current marketing strategies."

From: http://www.aceshardware.com/read_news.jsp?id=55000505

Patents which they were required to disclose before joining JEDEC.

Why would they? There was no rule to disclose patents when Rambus was invited to join JEDEC. It was only after Rambus joined and the big companies wanted their IP that this rule was created. Since Rambus' sole method of making money is through IP and not products, you can see why they were a little skittish about releasing this patent information. There were other companies that blatantly broke this JEDEC rule, yet they are not being investigated by the FTC. This latest investigation is just more evidence of the big companies trying to put Rambus under. Besides, why would Rambus want to disclose the patents they already had when all you have to do is ask the Patent Office for them?

Here's how I see it:
A company (I forget who) from JEDEC invites Rambus to join to try to make their IP free despite the millions of dollars that Rambus engineers had spent on R&D. When Rambus balks, the patent disclosure rule is announced so that they can steal Rambus IP and patent it before Rambus does. Rambus follows the JEDEC rules, yet has its IP stolen by a few companies. Rambus then drops out of JEDEC, since JEDEC had nothing to offer and their IP was being stolen. If you think Rambus has stolen technology, then can you tell me exactly what they stole?

Memory makers constantly lie and steal from each other. Their only concern is money, not continuation of the capitalistic society. The internet news media was used as pawns and Infineon hired people to go through the internet chatrooms to spread biased information regarding Infineon so that the stock price would rise.

It was more than just a little unethical for Rambus to not disclose those patents, and then attempt to push their patented technologies into SDRAM and DDR SDRAM when the other companies believed all of the SDRAM/DDR tech was fully open.

Rambus had key patents involving SDRAM before they even joined JEDEC. Several of their key patents allowed RAM to run synchronously. Take a look at this patent and tell me if you think it is possible to make RAM without violating it:

http://164.195.100.11/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=/netahtml/srchnum.htm&r=1&f=G&l=50&s1='5,995,443'.WKU.&OS=PN/5,995,443&RS=PN/5,995,443

So is AGP4x. Are you saying that AG4x infringes on RDRAM patents? I don't think any claims have been made to state that yet...

I don't know what you are getting at. IIRC, Mostek was the first company to apply multiplexing.

Fair enough - if we ever get to see it in use. 4 banks sounds a bit on the low side to keep up the protocol efficiency advantage of the RDRAM, though.

Samsung is currently producing it in mass quantities.

http://www.samsungusa.com/cgi-bin/n...ejceefdfggdhgm.0&oid=23322&leftmenu=1

Keep in mind when reading the article that the PC1066 platform is NOT officially supported by Intel.

That is pretty impressive. Of course, it would be pretty nice to see the same benchmarks from a mobo bought from the store with all the banks loaded. They only used one stick of RAM in this benchmark and they still had trouble getting AGP 8x to work.

I know that nForce somethimes slows down DRAM timings if you do not equip its two memory channels with identical memory configurations - are there any other examples around of DDR motherboards dropping timings when fully loaded?

The SiS and Asus chipsets do the same thing.
 
elimc said:
"Our friends at Anandtech reported yesterday that Mr. Siu had stated that the current i850E would be the last chipset supporting RDRAM, although upon writing this it seems that the statement has been removed from their website. Mr. Siu explained to us that he had been misquoted. While he would not go into detail, he gave the impression that Intel would continue to support RDRAM in the future. So it seems as though we'll certainly see RDRAM and DDR RAM both stay strong in Intel's current marketing strategies."

From: http://www.aceshardware.com/read_news.jsp?id=55000505

Interesting. We'll see...but I have doubts that Rambus has much future with Intel.

Why would they? There was no rule to disclose patents when Rambus was invited to join JEDEC. It was only after Rambus joined and the big companies wanted their IP that this rule was created. Since Rambus' sole method of making money is through IP and not products, you can see why they were a little skittish about releasing this patent information. There were other companies that blatantly broke this JEDEC rule, yet they are not being investigated by the FTC. This latest investigation is just more evidence of the big companies trying to put Rambus under. Besides, why would Rambus want to disclose the patents they already had when all you have to do is ask the Patent Office for them?

That doesn't make any sense. As far as I know, Rambus was only required to *disclose* their patents, not required to make them available to the other manufacturers without a license. Thus, from what I can tell, JEDEC moved forward thinking all of the technology they were implementing was free, but in fact Rambus held patents on those technologies. Rambus almost certainly did its best to push those technologies into SDRAM and DDR SDRAM. I think JEDEC would have balked if Rambus had shown their patents.

Rambus lied, cheated, and stole. They lied by not disclosing patents. They cheated by later claiming those patents were valid. They stole by attempting to collect royalties (sometimes successfully) for SDRAM and DDR SDRAM. Rambus has used many underhanded tactics in order to gain a monopoly in the memory industry.

Regardless of how "evil" you may think the other memory manufacturers are, they still agreed to come together and generate one free and open standard. That is definitely a very good thing. Rambus has spit in the face of that standard.
 
Regardless of how "evil" you may think the other memory manufacturers are, they still agreed to come together and generate one free and open standard. That is definitely a very good thing. Rambus has spit in the face of that standard.

So, um, if all the other "graphics companies" get together and ratify a "standard" for a free and open high level shading language...what would that make nVidia with Cg? ;)
 
Joe DeFuria said:
So, um, if all the other "graphics companies" get together and ratify a "standard" for a free and open high level shading language...what would that make nVidia with Cg? ;)

We'll see what happens there. It does appear that nVidia is making at least some steps to make Cg usable by any manufacturer. Whether those steps are enough for the other manufacturers remains to be seen.

Anyway, that's a big what-if, and yes, if Cg turns out to be a lone HLSL with sole support from nVidia, and better open shader languages are developed, I will claim that Cg is bad.

What it doesn't appear to be (yet) is an attempt by nVidia to monopolize the 3D graphics market and charge all other manufacturers royalties for their graphics products.

Granted, I do believe nVidia thinks it'd be great if they became a monopoly. But, so far they've gained market share by putting out superior products. If they continue to gain market share through putting out superior products, I will not complain. Other methods I tend to dislike...
 
Didn't some of the top management for Rambus come from Sun Microsystems Java team?

I think the idea of an IP company trying to innovate memory technology was a good idea in principle. CPU speed has grown at a much faster rate than memory technology, so having a company that makes money off of innovation in RAM design was a good theory. The problem was the dirty tactics and crooked mentality of the company executives. Why Intel tied itself so closely to Rambus in the first place is a big mystery to me.
 