nVIDIA's "SLI" solution

Oh shit.

Not only do you need a new CPU/motherboard, but you have to give up 4 slots!? Crap, man, that is a lot.

They should have done this with the GTs.
 
Haha, I know it's insane, but people who are willing to pay that much money for it are already insane, right? :D

BTW, it's not really an "SLI" configuration; the screen is split in the middle, and each card is responsible for one half.
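To picture what that split-frame setup looks like from the application side, here's a minimal sketch in OpenGL-flavoured C++; renderHalfFrame() and drawScene() are made-up names for illustration, not anything from Nvidia's actual driver.

#include <GL/gl.h>

// Placeholder for the application's normal rendering.
void drawScene() { /* draw calls would go here */ }

// Each card runs the same command stream, but a scissor rectangle limits its
// fill work to one half of the screen; the halves are composited afterwards.
void renderHalfFrame(int gpuIndex, int screenWidth, int screenHeight)
{
    const int halfHeight = screenHeight / 2;
    const int yOffset    = (gpuIndex == 0) ? 0 : halfHeight;

    glEnable(GL_SCISSOR_TEST);
    glScissor(0, yOffset, screenWidth, halfHeight);
    drawScene();
}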
 
991060 said:
Haha, I know it's insane, but people who are willing to pay that much money for it are already insane, right? :D

BTW, it's not really an "SLI" configuration; the screen is split in the middle, and each card is responsible for one half.

Never mind, haha, read it wrong :)

That's an expensive CPU setup too.
 
jvd said:
991060 said:
Haha, I know it's insane, but people who are willing to pay that much money for it are already insane, right? :D

BTW, it's not really an "SLI" configuration; the screen is split in the middle, and each card is responsible for one half.

Never mind, haha, read it wrong :)

That's an expensive CPU setup too.
According to xbitlabs, Tumwater will come in Q2 2004, priced at $100 8)
 
I have a feeling the Alienware CEO has locked himself in his room and is throwing a major tantrum.
 
Fodder said:
I have a feeling the Alienware CEO has locked himself in his room and is throwing a major tantrum.

I wonder if Alienware knew that Nvidia was coming out with something, so they "pre-announced"? I think they (Alienware) said 4th quarter for their computer, whereas it looks like Nvidia may be ready to go with what they have?
 
How will ATI respond to Nvidia's SLI - by bringing MAXX back?

If not MAXX, then some other equivalent to SLI?

As you already know, ATI has had the ability to scale up to 256 VPUs with R3xx since 2002. Both Evans & Sutherland (probably the oldest 3D graphics company around) and SGI have taken advantage of this.

E&S has had dual and quad R300 cards since 2003, and SGI is using 2-32 R3xx VPUs in their new UltimateVision line of ultra high-end visualization systems.

So now that Nvidia is *apparently* bringing SLI back for consumers, what will ATI's response be?

I have no doubt ATI will do something in response, because they simply CAN.
 
I would hope that they do, and if so, that it would put pressure on Matrox and/or 3dlabs & S3 to do the same. I want dual Parheliae with the recently refreshed cores to keep Surround Gaming viable for the future.
 
3Dlabs already has their own solution with their new P20 VPU, but so far they have not entered the consumer market via Creative...



Performance of the GeForce 6800 SLI configuration... interesting:

Exact performance figures are not yet available, but Nvidia's SLI concept has already been shown behind closed doors by one of the companies working with Nvidia on the SLI implementation. On early driver revisions, which only offered non-optimized dynamic load-balancing algorithms, their SLI configuration performed 77% faster than a single graphics card. However, Nvidia has told us that prospective performance numbers should show a performance increase closer to 90% over that of a single graphics card.

Now, if I recall correctly, 3Dfx's Voodoo2 SLI provided twice the fillrate of a single Voodoo2 card: 90 Mpixels/sec with a single card versus 180 Mpixels/sec with two Voodoo2s in SLI. But of course SLI did not boost polygon performance whatsoever (I think), since the CPU was responsible for the polygon calculations - the Voodoo2 (like all of 3Dfx's released chips) was just a 3D accelerator/renderer, not a full 3D graphics processor with geometry processing.

So I'm assuming and hoping that Nvidia's SLI solution nearly doubles not just the fillrate but the geometry as well: vertex shader performance, pixel shader performance, everything, by say 80 to 90 percent.
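As an aside, here's the simple arithmetic behind the 77% / ~90% figures from the quote, under the simplifying assumption that frame time is set by whichever card ends up with the bigger share of the pixel work; the split fractions below are my own guesses, and geometry and sync overhead are ignored.

#include <cstdio>

int main()
{
    // If the busier card gets fraction f of the per-frame work (f >= 0.5),
    // frame time scales with f, so the speedup over a single card is 1/f.
    const double splits[] = { 0.565, 0.526, 0.500 };
    for (double f : splits)
    {
        const double speedup = 1.0 / f;
        std::printf("busier half carries %.1f%% of the work -> %.0f%% faster than one card\n",
                    f * 100.0, (speedup - 1.0) * 100.0);
    }
    // ~0.565 lands on the quoted 77%, ~0.526 on roughly 90%, and a perfect
    // 50/50 split would be the ideal 100% (exactly double).
    return 0;
}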
 
Yippee... a single 6800 Ultra costs around ~600 euros here. That dual setup will cost ~1200 euros (which is half a year's rent for me). Plus, there's no way to find enough CPU power for that sucker. And how about the PSU? Does it need a 600-800 watt PSU?


A real gamer's solution? Yeah, sure.
 
Can you imagine if PowerVR got really bold by introducing a dual PowerVR Series 5 card? I think ImgTech has an opportunity to school Nvidia in this area. We've seen what the PowerVR people are capable of with the NAOMI2 board for Sega: they have twin PowerVR2DCs / CLX2s on that one board. Why can't Nvidia do this? ATI and E&S have already had dual 8500 cards, dual 9700 cards and quad 9700 cards. Just think if PowerVR could offer a dual Series 5 card for around $500. You wouldn't even need the 3-chip setup that NAOMI2 needs (ELAN plus two PowerVR CLX2s), since Series 5 will have modern vertex shaders...

Or if ATI could do a dual X800 XT for $700-800 (one card) and give Nvidia hell. :oops:

Why can't Nvidia put 2 GeForce 6800s on one card? Electrical power supply limitations?

Anyway... OMG, with GeForce 6800 Ultra SLI we should be getting around 1 billion vertices/sec peak performance if we get an 80-90% increase... since one 6800 is supposed to do over 600M verts, hehe!
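For what it's worth, the back-of-envelope math behind that 1-billion figure is just the quoted single-card rate times the hoped-for scaling (whether vertex rate really scales that way is another question):

#include <cstdio>

int main()
{
    const double singleCardVerts = 600e6;   // quoted peak for one 6800 Ultra
    const double lowScale  = 1.8;           // +80%
    const double highScale = 1.9;           // +90%

    std::printf("hoped-for SLI vertex rate: %.2f to %.2f billion verts/sec\n",
                singleCardVerts * lowScale / 1e9,
                singleCardVerts * highScale / 1e9);
    return 0;
}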
 
Megadrive1988 said:
Now, if I recall correctly, 3Dfx's Voodoo2 SLI provided twice the fillrate of a single Voodoo2 card: 90 Mpixels/sec with a single card versus 180 Mpixels/sec with two Voodoo2s in SLI. But of course SLI did not boost polygon performance whatsoever (I think), since the CPU was responsible for the polygon calculations - the Voodoo2 (like all of 3Dfx's released chips) was just a 3D accelerator/renderer, not a full 3D graphics processor with geometry processing.

So I'm assuming and hoping that Nvidia's SLI solution nearly doubles not just the fillrate but the geometry as well: vertex shader performance, pixel shader performance, everything, by say 80 to 90 percent.
No go. To determine which "half" of the system fills a given triangle, you need to know its viewport position; to get the position, you need to transform it, and that's what the vertex shader does.

You can do either
a) the master/slave approach: transform only on one chip, dispatch to the correct chip afterwards. This means no redundant work, but the other chip's vertex shaders will simply sit idle, all the time. Very unlikely to ever be implemented, because you need a very fast interconnect "into the middle" of the "slave" chip.
b) the "brute force" approach: transform on both chips, and selectively discard in the viewport "clipping" stage. You always end up doing redundant work, but there's much less communication between the two chips.

Both approaches have the same net effect on performance: you gain exactly nothing on the vertex side of things.

If you're clever, you'll at first only run the part of the vertex shader that is required to determine position, and defer the rest until after the target chip has been identified. But you'll never get twice the vertex performance, always less.
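To make option b) above a bit more concrete, here's a rough C++ sketch of the brute-force discard for a simple top/bottom screen split; the types and the transform() placeholder are invented for illustration, real clipping is glossed over, and this isn't any vendor's actual pipeline.

#include <vector>

struct Vertex   { float x, y, z, w; };   // object-space input
struct Clip     { float x, y, z, w; };   // post-transform (clip space)
struct Triangle { Clip v0, v1, v2; };

// Placeholder for the full vertex shader: both chips run it on every vertex.
Clip transform(const Vertex& v) { return { v.x, v.y, v.z, v.w }; }

// Each chip transforms every triangle (the redundant work), then keeps only
// the triangles that touch its half of the screen. screenSplitY is the
// boundary in normalized device coordinates (0.0 for an even split).
std::vector<Triangle> processOnChip(const std::vector<Vertex>& verts,
                                    bool isTopChip, float screenSplitY = 0.0f)
{
    std::vector<Triangle> kept;
    for (size_t i = 0; i + 2 < verts.size(); i += 3)
    {
        Triangle t{ transform(verts[i]), transform(verts[i + 1]), transform(verts[i + 2]) };

        bool touchesMyHalf = false;
        const Clip* corners[3] = { &t.v0, &t.v1, &t.v2 };
        for (const Clip* c : corners)
        {
            const float ndcY = c->y / c->w;   // perspective divide decides the half
            if (isTopChip ? (ndcY >= screenSplitY) : (ndcY <= screenSplitY))
                touchesMyHalf = true;
        }
        if (touchesMyHalf)
            kept.push_back(t);   // rasterize here; the other chip discards it
    }
    return kept;
}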


PS: "SLI" is more complicated to do today than it was in the 3dfx era. Keyword: render to texture.
 
I think you guys are missing the point of IT. IT is not just limited to the 2-slot Ultras. IT should be applicable to any NV40-based product... and that is just the beginning.

Imagine two non-Ultras in SLI mode. It immediately solves the problem of limited quantities of Ultra parts AND keeps the performance crown and bragging rights for the enthusiast market.

I'll bank that even 300 MHz SLI'd NV40s would soundly beat an X800 XT in GPU-limited benchmarks.
 
Hellbinder said:
I think you guys are missing the point of IT. IT is not just limited to the 2-slot Ultras. IT should be applicable to any NV40-based product... and that is just the beginning.

Imagine two non-Ultras in SLI mode. It immediately solves the problem of limited quantities of Ultra parts AND keeps the performance crown and bragging rights for the enthusiast market.

I'll bank that even 300 MHz SLI'd NV40s would soundly beat an X800 XT in GPU-limited benchmarks.

Well, two 300 MHz SLI'd NV40s would most likely be the best price-to-performance deal.

But only if you can use any chip with it.

I'm not a fan of being forced into buying only a Xeon chip.
 