NVIDIA Bridging It (PCI-Express)

Oh my, let's all bow down to Chalnoth and his wisdom that nVidia can do no wrong... :rolleyes:


First they messed up with NV3x, and now they are messing up with a bridge chip.


ATI has demoed an RV360 with PCI Express built into the chip, so obviously modifying a current design was not as spendy as everyone speculates.

If Nvidia wants to remain competitive and profitable in the long run, they have to change their products accordingly. Driver cheats and bridge chips are only short-term solutions...
 
DaveBaumann said:
And the issues are rarely due to the silicon implementation (at least, on the VGA side), but board specific issues, which are generally resolved much quicker and would be invariant to whether you've used a bridge or not.
I would rather say that they can usually be corrected by modifying the boards. That doesn't mean that some of these fixes are not workarounds for some oversight or flaw in the chip design.
 
AlphaWolf said:
One bonus for nVidia here may be that they can get this bridge chip to motherboard manufacturers for testing (of the motherboards) much sooner than they could get, say, a PCI Express NV4x.
How is it a bonus? It certainly isn't an advantage over the competition, who have been demoing PCI Express cards for months.
It's a bonus over not doing it. If nVidia has no plans on producing an NV3x that works on the PCI Express bus natively, but instead plans to leave the move to native PCI Express to future chips, I'd say it's a heck of a lot better that they have their bridge chip out now for testing.

Remember also that nVidia is selling a much larger variety of chips than ATI right now. It would cost much more for nVidia to redo them all than to simply have this bridge chip.
 
YeuEmMaiMai said:
Oh my, let's all bow down to Chalnoth and his wisdom that nVidia can do no wrong... :rolleyes:
Shall we instead bow down to you and your wisdom that NVIDIA can do no right?
ATI has demoed an RV360 with PCI Express built into the chip, so obviously modifying a current design was not as spendy as everyone speculates.
Um, no. It obviously demonstrates that ATI felt it was worth the cost to respin an 'old' part to have PCI-E as its bus, or at least that they had some reasoning for doing it.

It doesn't reveal any cosmic truth about the rightness or wrongness of the decision.

I can see the benefits of taking either path and realise it all comes down to cost analysis based on factors that we, as outsiders, are not privy to.

I don't think anybody will ever know which was the "best decision". ATI will only really know how their decision affected them. NVIDIA will only know how their decision affected them.

And we'll know neither.
 
I think we will know. We just have to wait six months or so.
We can all sit here and sling mud back and forth, but in the end none of us know how it's going to pan out.

It's one thing for people to speculate about what will and won't happen, but it's another thing for people who don't have a clue to sit there and state their opinion as fact!
 
madmartyau said:
It's one thing for people to speculate about what will and won't happen, but it's another thing for people who don't have a clue to sit there and state their opinion as fact!

Umm, isn't that what the internet's for?!
 
John Reynolds said:
madmartyau said:
It's one thing for people to speculate about what will and won't happen, but it's another thing for people who don't have a clue to sit there and state their opinion as fact!

Umm, isn't that what the internet's for?!
that's what I always thought...
 
The Baron said:
John Reynolds said:
madmartyau said:
It's one thing for people to speculate about what will and won't happen, but it's another thing for people who don't have a clue to sit there and state their opinion as fact!
Umm, isn't that what the internet's for?!
that's what I always thought...
Oh, heck yeah!
 
Chalnoth said:
Remember also that nVidia is selling a much larger variety of chips than ATI right now. It would cost much more for nVidia to redo them all than to simply have this bridge chip.
Are they? At the moment, there's such a dizzying number of model numbers from both that it's hard to do anything but say "argh, too much!" :oops: :p ;) Break it down, baby! It would be an interesting comparison.
 
Let me start rattling off the NV chips I can think of...

5200, 5200 Ultra, 5600, 5600XT, 5600 Ultra, 5700, 5700 Ultra, 5900 non-Ultra, 5900XT/SE, 5900U, 5950U, MX4000...

and that's all I can think of at the moment.
 
That's not chips, that's bins of chips. GFX chips in production are probably:

NV17, NV34, NV36, NV38. Similarly, ATI has RV200 (used a lot in China and the M7), RV280, RV360, R360.
 
I think nVidia's also still producing the NV28.

Anyway, I guess you might be right, Dave... nVidia has discontinued many of its product lines, so they may not actually be producing a larger variety of chips right now.

At the same time, ATI has demoed PCI Express on the RV360, not the RV280 or RV200.
 
Let's take a moment to think about this. Assume that both ATI and Nvidia have access to the same resources when it comes to the PCI Express standard.

ATI takes the time to develop an interface for PCI Express and decides to integrate it into its current chip technology (R3x0), working out any issues that arise from integration in the process. They are ready to go when the time comes, since they have already dealt with all of the issues of integrating the interface.

Nvidia decides to use a bridge chip, and at some later date will also have to work out the issues of integrating the interface into their product line, plus any issues the bridge chip itself may cause with timings and whatnot. They also have to spend money on packaging and related PCB design that would not be needed with an integrated interface.

Some of you guys say "oh, the cost of integration is too much", but it most likely costs more to develop a separate chip and new PCB designs now, and then new PCB designs again when they finally integrate, than it does to design an integrated interface plus one PCB for the new design.

ATI designs 2 chips, one with AGP and one with PCI Express
ATI designs 2 PCBs, one for AGP and one for PCI Express


Nvidia designs 1 bridge chip and its packaging
Nvidia designs PCBs that have to accommodate the bridge chip and new core
Nvidia designs a PCB for AGP
Nvidia designs a new PCB when the chip switches over to PCI Express
Nvidia designs PCBs for current products to utilize the bridge chip

So I fail to see how having a separate chip makes it cheaper in the long run...

 
YeuEmMaiMai said:
So I fail to see how having a separate chip makes it cheaper in the long run...
It won't be cheaper if you are only porting one core over from AGP to PCI-E. If you are porting two or more cores, it looks different:

If you have N cores to port, the bridge-chip solution means you need to integrate, tape out, and qualify N+1 chips, whereas the integrated solution requires tape-out and qualification of 2N chips. That matters, as tapeouts cost IIRC millions of dollars, and the cost increases with every new process shrink.
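The N+1 vs. 2N arithmetic above can be sketched quickly; the per-tapeout cost below is a hypothetical placeholder standing in for "millions of dollars", not a real foundry quote:

```python
# Sketch of the N+1 vs. 2N tapeout comparison above.
# COST_PER_TAPEOUT is a placeholder figure, not foundry pricing.
COST_PER_TAPEOUT = 1_000_000

def bridge_tapeouts(n_cores):
    """Bridge route: each core taped out once (AGP-native), plus the bridge chip."""
    return n_cores + 1

def native_tapeouts(n_cores):
    """Native route: every core taped out twice, in AGP and PCI-E versions."""
    return 2 * n_cores

for n in (1, 2, 4):
    b = bridge_tapeouts(n) * COST_PER_TAPEOUT
    d = native_tapeouts(n) * COST_PER_TAPEOUT
    print(f"{n} core(s): bridge ${b:,} vs. native ${d:,}")
# The two routes break even at a single core (2 tapeouts each);
# the bridge pulls ahead for every core beyond the first.
```

As the break-even point shows, the bridge only pays off for a vendor porting more than one core, which is exactly the point being argued.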
 
I seem to remember the cost for a new tapeout being a couple hundred million dollars.

Of course, many foundries let you share a wafer with chips from other companies to reduce this cost. This is what many smaller companies end up doing (though you have to wait longer for the tapeout).
 
Do we know (or have a very good indication) that NV40 will be AGP only? And does the bridge chip work both ways, letting an AGP chip work with PCI-E and letting a PCI-E chip work with AGP? If it works both ways, going the bridge route would make more sense than if it only went AGP->PCI-E.
 
Chalnoth said:
I seem to remember the cost for a new tapeout being a couple hundred million dollars.
You misremember.

A full mask set for 0.13u is less than $1M, probably more like $400k. The CEO of NVIDIA claimed a modern GPU costs $100M+ to make, but I presume he's lumping a lot of capital and engineering expenses in there to come up with that fantastic number.
Of course, many foundries let you share a wafer with chips from other companies to reduce this cost. This is what many smaller companies end up doing (though you have to wait longer for the tapeout).
There is a program at most foundries called a shuttle run. You give them your design, and for about $50k you get 50 chips back in 3-4 months. You're right, they amortize the cost of the mask set amongst the participants.

However, you still need a full mask set for production wafers, because the mask set used for the shuttle wafers has 10-20 other people's designs on it, and you can't use it to mass-produce your own design.
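Taking the ballpark figures above (a roughly $400k full mask set, a ~$50k shuttle slot, 10-20 participants per run), the amortization works out something like this; all numbers are the post's rough estimates, not actual foundry pricing:

```python
# Ballpark amortization of a shared ("shuttle") mask set, using the
# rough figures quoted above -- not actual foundry pricing.
FULL_MASK_SET_COST = 400_000   # full 0.13u mask set, low-end estimate
SHUTTLE_SLOT_COST = 50_000     # one slot: ~50 prototype chips back

def mask_cost_per_participant(participants):
    """Each shuttle participant's share of one full mask set."""
    return FULL_MASK_SET_COST / participants

for p in (10, 20):
    print(f"{p} participants: ${mask_cost_per_participant(p):,.0f} each")
# Sharing among 10-20 designs brings the mask share down to $20k-$40k,
# which is why a $50k shuttle slot can cover masks plus wafer costs.
```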
 
Lezmaka said:
Do we know (or have a very good indication) that NV40 will be AGP only? And does the bridge chip work both ways, letting an AGP chip work with PCI-E and letting a PCI-E chip work with AGP? If it works both ways, going the bridge route would make more sense than if it only went AGP->PCI-E.

Originally, NV45 was going to be NVIDIA's first native PCI-E card; however, this may no longer be the case. The bridge chip does work both ways AFAIK.
 
YeuEmMaiMai said:
ATI designs 2 chips, one with AGP and one with PCI Express
ATI designs 2 PCBs, one for AGP and one for PCI Express


Nvidia designs 1 bridge chip and its packaging
Nvidia designs PCBs that have to accommodate the bridge chip and new core
Nvidia designs a PCB for AGP
Nvidia designs a new PCB when the chip switches over to PCI Express
Nvidia designs PCBs for current products to utilize the bridge chip

So I fail to see how having a separate chip makes it cheaper in the long run...
Uh, let's compare in more detail ...

From now on until PCI-E dominates the market:
NVidia designs 1 bridge chip and x new chips with either native AGP or PCI-E interface (later this year).
ATI designs no bridge chip and y new chips, two versions each.

For each new chip, NVidia designs one PCB for AGP and one PCB for PCI-E, one of them (at first the PCI-E one) equipped with the bridge chip.
For each new chip, ATI designs one PCB for the AGP-version of the chip, and one PCB for the PCI-E version.

NVidia designs PCBs for current products to utilize the bridge chip
ATI designs ... well, what does ATI do? Respin all the current chips for PCI-E, or "drop" a few? Those respins need a new PCB, too.

I fail to see where the big advantage for ATI is.
 
Xmas said:
I fail to see where the big advantage for ATI is.
Production cost. The Nvidia bridge chip will add a few dollars to the cost of every card it appears on (the 2.5 Gb/s per-lane data rate of PCI-E requires a lot of power/ground pins, as well as a package type with good signal integrity -> added packaging cost).
 