More SLI

NVIDIA SLI is a patent-pending technology, so it will not be easy to merely duplicate it. ATI will have to use different methods when they develop their multi-GPU solution. And make no mistake, ATI definitely has something in the works.
 
jimmyjames123 said:
And make no mistake, ATI has definitely got something in the works.
And of course once they reveal it jvd will be sure to remark on what a great idea it is. :cry:
 
About the two boards having to be identical: I can imagine the master board doing the coordination of the resource management for both, for the data that travels over the interconnect and/or to be able to send specific data over the ePCI bus only once. Especially because some resources (like render to texture) are shared anyway. In that case, it needs the exact specs of the other board.

While it wouldn't be strictly necessary to do SLI that way, it would make the implementation simpler and easier to optimize, while at the same time putting a nice restriction on buying a second board next year, thereby increasing profit.
 
lol, I wonder what ATI will call their method of joining two cards? And how will the newer motherboards with dual x16 slots handle ATI's approach?
 
Trawler said:
Is it multi-chip or multi-core single die?
It's nothing right now.

But, anyway, multi-core isn't something that's useful in the GPU market. Multi-core is only useful for CPUs because we are stuck with the x86 instruction set. The x86 instruction set does not handle parallelism well, but we do have layers on top of x86 that do. So, multicore CPUs are a way around the limited x86 instruction set.
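The "layers on top of x86" point can be made concrete: the instruction stream itself is serial, and parallelism is expressed one level up, through an OS threading library. A toy Python sketch (purely illustrative, not from the original post):

```python
# Parallelism expressed above the ISA: each thread runs a serial
# instruction stream, and a library-level layer farms out independent
# work across multiple cores/threads.
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares(chunk):
    # Independent per-chunk work; no shared state between threads.
    return sum(x * x for x in chunk)

data = list(range(100_000))
chunks = [data[i::4] for i in range(4)]  # interleave the work 4 ways

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(sum_of_squares, chunks))

total = sum(partials)
assert total == sum(x * x for x in data)
```

Nothing in the x86 ISA expresses the four-way split; the scheduler and the library do all the work, which is exactly the "layer on top" being described.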
 
Chalnoth said:
But, anyway, multi-core isn't something that's useful in the GPU market. Multi-core is only useful for CPUs because we are stuck with the x86 instruction set. The x86 instruction set does not handle parallelism well, but we do have layers on top of x86 that do. So, multicore CPUs are a way around the limited x86 instruction set.

Well, aren't there multi-core PowerPC chips?
 
PowerPC chips are also based upon a legacy architecture.

Anyway, maybe I overstepped my bounds a little bit on that last statement. It may be true that multicore makes sense for many different types of CPUs because the easiest way to parallelize most code is to simply have many parallel threads running at once.

But for GPUs, multicore still makes zero sense because they're already highly parallel: the data they process is inherently easy to parallelize.
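To make the "easy to parallelize" claim concrete: each pixel's shading depends only on its own inputs, so a frame can be carved up across any number of pipelines, chips, or cards and produce an identical result. A toy sketch (the `shade` function is hypothetical):

```python
# Per-pixel work is independent, so splitting the frame between two
# units (SLI-style split-frame rendering) yields exactly the same
# image as shading it all on one unit.
def shade(x, y):
    # Hypothetical toy "shader": output depends only on this pixel.
    return (x * 31 + y * 17) % 256

W, H = 8, 8

# One unit shades the whole frame.
whole = [shade(x, y) for y in range(H) for x in range(W)]

# Two units each shade half of the scanlines.
top = [shade(x, y) for y in range(H // 2) for x in range(W)]
bottom = [shade(x, y) for y in range(H // 2, H) for x in range(W)]

assert top + bottom == whole  # identical image, any partitioning
```

This is why a GPU already behaves like many cores: the parallelism is in the workload itself, not bolted on afterwards.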
 
Well, I wouldn't say multicore is useless for GPUs, but I would say that you only get half of what "multi-GPU" can offer when SLI-type schemes are used.

You can see SLI as an ultra-expensive dual-card solution, or you can see it as a stepped cost: an expensive fast card right now and a less expensive speed bump 6-12 months down the road.

With multicore you only have the first option. I mean, instead of SLI, nVidia could probably have put two NV40 chips on the same card (like the Volari Duo) and sold it for $999.

IMO the strength of any SLI-type scheme lies in the upgrade path, not in its capabilities as a full-blown dual-card solution from day one.

That's why I think that _in theory_ a general SLI-like system (like the one Alienware is (still?) developing) is even better, since you are even less constrained in your upgrade card. There's still a lot to be said about performance versus a "native SLI implementation" such as nVidia's, though.
 
Nick said:
PatrickL said:
I already don't think there is a significant market for dual PCIe cards...
Did everyone forget the success of Voodoo 2 SLI? :? In terms of performance, NVIDIA will be one generation ahead, and they already are ahead features-wise...

Important difference, though...at the advent of V2 SLI everybody had an extra PCI slot open, and so V2 SLI was a viable retail option from the start. With the advent of PCIe it will be years into the future, if ever, before "everybody" has an open, extra PCIe graphics slot.

On a percentage basis the slice of machines that could support V2 SLI at its introduction was far greater than the number capable of supporting NV4x SLI today. And even moving ahead a couple of years to when PCIe becomes mainstream, I would imagine that the number of single-slot PCIe mboards sold will substantially exceed the number of dual-slot PCIe x16 mboards. I.e., at no time would I expect dual-slot PCIe graphics to become as ubiquitous as PCI-slot mboards were at the time V2 SLI shipped (since PCI was a general system bus whereas PCIe x16 is a graphics-only bus).
 
WaltC said:
Important difference, though...at the advent of V2 SLI everybody had an extra PCI slot open, and so V2 SLI was a viable retail option from the start. With the advent of PCIe it will be years into the future, if ever, before "everybody" has an open, extra PCIe graphics slot.

Maybe. But that's based on the assumption that dual-PCIe boards will carry a significant premium in the future. OEMs will avoid it like the plague to cut costs, but what do we know of the true cost involved in providing a dual-PCIe solution? Look at how much we get for 'free' on today's motherboards.
 
trinibwoy said:
Maybe. But that's based on the assumption that dual-PCIe boards will carry a significant premium in the future. OEMs will avoid it like the plague to cut costs, but what do we know of the true cost involved in providing a dual-PCIe solution? Look at how much we get for 'free' on today's motherboards.

I would think that the deciding factor in how well dual-slot PCIe graphics buses permeate the market will ultimately be the number of useful multi-display adaptations developed for them, as opposed to anyone's SLI-type 3D-gaming hardware. Even back when V2 SLI was king, the major use for multiple PCI graphics cards was multiple display output as opposed to V2 SLI.

It's ironic to recall that a chief detractor of V2 SLI at the time was nVidia, which royally lambasted it as a "backwards-looking" technology. Heh...;) In the interests (obviously) of self-preservation I've never seen nVidia balk at turning back the clock yet...;) After all, I guess if you can't "leap ahead" why not "leap behind", right?....:D (I think nVidia missed the marketing boat, though--"Retro 3d, exclusively from nVidia" sounds so much cooler, I think...;))
 
Chalnoth said:
But for GPUs, multicore still makes zero sense because they're already highly parallel: the data they process is inherently easy to parallelize.

Yeah, so unless the nature of the data changes, like the ratio of polygon size to pixels, I don't think it'll make sense to create multicore GPUs, since you'll be bandwidth limited. Besides, the programmable parts of GPUs are already arrays of stream processors.
 
WaltC said:
It's ironic to recall that a chief detractor of V2 SLI at the time was nVidia, which royally lambasted it as a "backwards-looking" technology. Heh...;)
Well, I don't remember this. I do remember them saying this of the Voodoo5's multichip architecture, but then 3dfx (and, a few months previously, ATI) was attempting to build a high-end product out of what were essentially low-end chips.
 