Dual/Multi Chip Consumer Graphics Boards

elroy

Hi everyone,

It seems to me that one of the biggest problems with making a graphics chip these days is the .13 um process. Firstly, there was/is the NV30 debacle, and now it looks like the R400 might be pushed back to next year. Both of these circumstances are a result of an immature .13 um process. Are we getting to a stage where it might be better to produce multi chip cards, where part of the functionality resides on one chip, and the rest on the other(s)? This way, you would end up with chips of lower transistor count, which seem relatively easy to produce on .13 um. Any thoughts?
 
Any problem with the .13 micron process appears to be the fault of TSMC (or possibly nVidia being too optimistic about what TSMC could put forward with .13 micron...). There's no fundamental flaw in the process.
 
elroy said:
Hi everyone,

It seems to me that one of the biggest problems with making a graphics chip these days is the .13 um process. Firstly, there was/is the NV30 debacle, and now it looks like the R400 might be pushed back to next year. Both of these circumstances are a result of an immature .13 um process. Are we getting to a stage where it might be better to produce multi chip cards, where part of the functionality resides on one chip, and the rest on the other(s)? This way, you would end up with chips of lower transistor count, which seem relatively easy to produce on .13 um. Any thoughts?

Using several chips increases chip yield (assuming the same functionality gets partitioned out -> smaller dies), but also increases the cost of packaging/testing. Overall, the cost should go up unless the yield on the bigger chip would be truly dismal.

Assembly and PCB design should get a bit more costly as well. So in all probability it wouldn't pay off vs. squeezing all the functionality into a single chip, even if yields for that chip were pretty crummy.
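
To make that concrete, here's a back-of-the-envelope sketch using the classic Poisson yield model (yield = exp(-area x defect density)). Every number in it (wafer cost, die areas, defect densities, the per-die package/test cost) is a made-up illustration, not a real TSMC figure:

```python
import math

def cost_per_good_die(area_cm2, d0, wafer_cost=3000.0, wafer_area=700.0,
                      package_test=10.0):
    dies = wafer_area / area_cm2        # gross dies per wafer (ignores edge loss)
    good = math.exp(-area_cm2 * d0)     # Poisson yield: fraction of dies that work
    return wafer_cost / (dies * good) + package_test

for d0 in (0.3, 1.0):                   # defects/cm^2: mature vs. immature process
    single = cost_per_good_die(2.0, d0)      # one big 2.0 cm^2 die
    split = 2 * cost_per_good_die(1.0, d0)   # same logic cut into two dies, two packages
    print(f"D0={d0}: single die ${single:.0f} vs. two dies ${split:.0f}")
```

With these made-up numbers the single chip wins on a mature process and only loses once yields get dismal, which is exactly the trade-off above.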

It would allow you to reach performance points inaccessible otherwise, which is the reason we have seen the approach used for consumer cards. Their decent price levels demonstrated that multi-chip wasn't all that difficult to pull off.

So it could definitely happen, but I would still be surprised to see it.

Entropy
 
Any problem with the .13 micron process appears to be the fault of TSMC (or possibly nVidia being too optimistic about what TSMC could put forward with .13 micron...). There's no fundamental flaw in the process.

??

One of the reasons to go multi-chip is precisely to reduce the dependency on bleeding-edge manufacturing, so that problems with TSMC or "overzealous nVidia engineers" are avoided. It's the same reason you build a 256-bit bus rather than rely on bleeding-edge RAM on a 128-bit bus.
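
To illustrate that analogy with rough, era-ballpark numbers (the clocks below are assumptions, not exact product specs): a wide bus on mature RAM can match or beat a narrow bus on bleeding-edge RAM.

```python
def bandwidth_gb_s(bus_bits, effective_mhz):
    # bytes per transfer * transfers per second
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

wide_mature = bandwidth_gb_s(256, 620)    # 256-bit bus, mature ~310 MHz DDR
narrow_edge = bandwidth_gb_s(128, 1000)   # 128-bit bus, bleeding-edge ~500 MHz DDR2
print(f"wide/mature: {wide_mature:.1f} GB/s, narrow/bleeding-edge: {narrow_edge:.1f} GB/s")
```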

There is no fundamental flaw with EITHER the multi-chip or the single-chip approach. Each one has pros and cons.

3dfx, for example, was very successful with the multi-chip approach with the original Voodoo and Voodoo Graphics. Not so successful with the VSA-100. The problem is, the decision to go "multi-chip" or "large single chip" has to be made 18-24 months ahead of when the actual product is ready to ship, and trying to forecast market conditions to pick the best "approach" is little more than a crap shoot.

So unless you already have some expertise in designing "inexpensive" multi-chip solutions it's hard to justify the extra R&D expense to create that tech, when you're "not sure" if it will be a benefit...
 
I wish TSMC and .13 micron could be done away with altogether.

Funny that .13 micron is giving everyone so much trouble. Unlucky number 13 :oops:

Cannot wait for .11 and .09!
 
Well, you could just hedge your bets and design a chip that is multi-chip capable Just In Case, and design PCBs for specialized multichip applications, so the only thing you have to test and develop for the consumer space is a refinement of what you've already done (significantly reduced lead time). That is, assuming you haven't actually done that ahead of time as well...

Hey, you could even have your drivers support the concept ahead of time with specific code paths for the product, and test things out in the less varied and demanding specialized multichip applications...

:p

Makes the 0.13 R350 rumors even more interesting (power reduction), and I sort of wonder how a dual RV350 chip design would perform.
 
Sabastian said:
That's funny, it doesn't seem like ATi is having any difficulties with it.

http://customer.ibeam.com/GOLD005/022403a_by/default.asp?entity=ati

Given the lower clock speeds and less complex (lower transistor count) chips that they're dealing with, that's nothing surprising. Also, NVidia's pushing for .13 to mature faster in order to get the NV30 on the market has probably improved the situation beyond where it would be had NVidia not taken the risk.
 

Which means when it comes to the R400 they'll have even more experience with it (.13u). Excellent use of Risk Mitigation.
 
Ostsol said:
Sabastian said:
That's funny, it doesn't seem like ATi is having any difficulties with it.

http://customer.ibeam.com/GOLD005/022403a_by/default.asp?entity=ati

Given the lower clock speeds and less complex (lower transistor count) chips that they're dealing with, that's nothing surprising. Also, NVidia's pushing for .13 to mature faster in order to get the NV30 on the market has probably improved the situation beyond where it would be had NVidia not taken the risk.

If you actually listen to the report, ATi went for the same process at the same time. nVidia pushing the process = hype.
 
3dfx, for example, was very successful with the multi-chip approach with the original Voodoo and Voodoo Graphics. Not so successful with the VSA-100.

Small difference: V2 = two VGAs in SLI, VSA-100 = multichip. I paid almost $600 back then for the two V2s, but then again there wasn't anything comparable around that could beat a setup like that back then.

In retrospect I have no intention of exceeding a $300-400 upper threshold in a best-case scenario nowadays. If IHVs don't solve, one way or another, the problem of each chip's RAM dependency, I don't think it'll be affordable. If you need 128MB of onboard RAM on a >mainstream card today, then you have to multiply that amount of RAM by the number of chips the multichip solution uses.
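
In other words (128MB per chip is the figure from the paragraph above; the chip counts are hypothetical):

```python
# Each chip needs its own full copy of the textures in an SLI-style split,
# so board RAM scales with chip count while the amount the application can
# actually use does not.
for chips in (1, 2, 4):
    print(f"{chips} chip(s): {chips * 128} MB soldered, still ~128 MB usable")
```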

An interesting idea I read about in another forum was a separate geometry processor, for example, yet still on the same die as the rasterizer. Something like an "integrated" co-processor. I can't explain it any better and I have no idea if it's feasible either.
 
Small difference: V2= two VGA's in SLI, VSA100= multichip.

No, V1 = 2 chips: basically one chip for the pixel pipe, and one chip for the TMU. (I think even the RAMDAC was a separate chip...)

A single V2 had 3 graphics chips: one for the pixel processor, and a pair of texture processors...

Unlike the nVidia TNT approach which integrated all three "units" in a single, larger chip.
 
Understood. When it comes to multichip and 3dfx, my mind usually hops to SLI, hence the confusion.
 
SLI only means that the two chips/boards share the framebuffer, so that one renders all the odd lines and the other one all the even lines. The V5 employs a similar scheme, but with stripes of multiple lines interleaved.
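
A tiny sketch of the two schemes (chip numbering is hypothetical, and the 32-line band height for the V5-style scheme is just an illustrative choice):

```python
def owner_v2_sli(line):
    # Voodoo2-style SLI: the two chips alternate on every scan line
    return line % 2

def owner_v5_bands(line, band=32):
    # V5-style: interleaved stripes ("bands") of consecutive lines
    return (line // band) % 2

for y in (0, 1, 31, 32, 63, 64):
    print(f"line {y}: scan-line SLI -> chip {owner_v2_sli(y)}, "
          f"band interleave -> chip {owner_v5_bands(y)}")
```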

It would have been possible to put multiple 'chuck'/'bruce' (PixelFX/TexelFX in Voodoo Graphics) combinations on one board, but no one did that.
 
Ailuros said:
3dfx, for example, was very successful with the multi-chip approach with the original Voodoo and Voodoo Graphics. Not so successful with the VSA-100.

Small difference: V2 = two VGAs in SLI, VSA-100 = multichip. I paid almost $600 back then for the two V2s, but then again there wasn't anything comparable around that could beat a setup like that back then.

In retrospect I have no intention of exceeding a $300-400 upper threshold in a best-case scenario nowadays. If IHVs don't solve, one way or another, the problem of each chip's RAM dependency, I don't think it'll be affordable. If you need 128MB of onboard RAM on a >mainstream card today, then you have to multiply that amount of RAM by the number of chips the multichip solution uses.

An interesting idea I read about in another forum was a separate geometry processor, for example, yet still on the same die as the rasterizer. Something like an "integrated" co-processor. I can't explain it any better and I have no idea if it's feasible either.

Ram was a lot more expensive then--and a lot slower. That was the cost barrier, IMO.
 
Xmas said:
SLI only means that the two chips/boards share the framebuffer, so that one renders all the odd lines and the other one all the even lines. The V5 employs a similar scheme, but with stripes of multiple lines interleaved.

It would have been possible to put multiple 'chuck'/'bruce' (PixelFX/TexelFX in Voodoo Graphics) combinations on one board, but no one did that.

Well, one company did, and that was Quantum3D. Of course those weren't consumer cards, and they were obviously hellishly expensive.

A single Voodoo 2 card was actually only a little bit slower than an SLI setup of 2-TMU Voodoo 1s.
 
elroy said:
It seems to me that one of the biggest problems with making a graphics chip these days is the .13 um process. Firstly, there was/is the NV30 debacle, and now it looks like the R400 might be pushed back to next year. Both of these circumstances are a result of an immature .13 um process.

You state that R400 is late because of .13. Don't you think you should state that's speculation? If ATi is late, it's more likely that the design is just taking longer than they thought. It happens with the majority of product development cycles.
 
3dcgi said:
elroy said:
It seems to me that one of the biggest problems with making a graphics chip these days is the .13 um process. Firstly, there was/is the NV30 debacle, and now it looks like the R400 might be pushed back to next year. Both of these circumstances are a result of an immature .13 um process.

You state that R400 is late because of .13. Don't you think you should state that's speculation? If ATi is late, it's more likely that the design is just taking longer than they thought. It happens with the majority of product development cycles.

3dcgi,

I thought I sort of alluded to it being speculation when I said "looks like" and "might". I was in no way confirming it.

Everyone,

When I started the topic, I wasn't really trying to target any specific products, but rather the fabrication processes themselves. In the case of .13 um, it looks like it is a bitch to produce a LARGE chip (greater than 100 million transistors). And I think there have been reports of problems with .09 um, haven't there? I think everyone thought that TSMC would have the problems with .13 um sorted out by now, but it looks like the low-k process is nowhere near ready to go (and the rumour is R400 won't come out 'til low-k is ready :( ). And don't the problems only get worse as you go even smaller? So to get around this, I believe the logical conclusion, in the short term at least, is to go multichip. Once the fabs can get new processes sorted out in a timely manner, this thinking will probably change again.
 