nVIDIA's "SLI" solution

It is possible for SLI to accelerate render-to-texture, provided it has been set up to allow it. 3DFX used to merge the frames on scanout; however, it doesn't need to be done that way on a modern card. You could instead merge the two frames in the frame buffer on buffer swap.
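To illustrate what merging on buffer swap could look like, here's a rough sketch in C (entirely my own illustration, not anything from an actual driver): each chip renders one half of the screen, and at swap time the two halves are copied into a single back buffer before the flip.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical split-frame merge at buffer-swap time. Each GPU
       renders one horizontal band; before the flip, the two bands are
       combined into a single back buffer. */
    void merge_on_swap(const uint32_t *top_half,     /* from GPU 0 */
                       const uint32_t *bottom_half,  /* from GPU 1 */
                       uint32_t *back_buffer,
                       int width, int height)
    {
        size_t half_pixels = (size_t)width * (size_t)(height / 2);

        /* GPU 0's band occupies the top of the frame... */
        memcpy(back_buffer, top_half, half_pixels * sizeof(uint32_t));

        /* ...and GPU 1's band the bottom. The merged buffer is then
           flipped to the front as usual. */
        memcpy(back_buffer + half_pixels, bottom_half,
               half_pixels * sizeof(uint32_t));
    }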
 
Xeon only? Now, that's not very fun. I'm sure that, if only because nVidia has done this, some other chipset manufacturers and board makers will be planning dual PCI-Express x16 chips/boards... like, perhaps, nVidia ;)
 
A dual PCI-E x16 motherboard will mean a significant investment, however. Two PCI-E GeForce 6800 GT cards could still make more sense than a single PCI-E GeForce 6800 Ultra or Ultra Extreme, as the performance increase would be much larger. Also, workstation motherboards carry a hefty price premium over consumer products; fortunately they do not require dual Xeons, so a single Xeon will work just as well.

I'm assuming that no consumer boards will have dual PCI-E x16.

Maybe in a couple of years.

So what are the bets that ATI puts out a MAXX chip?
 
Megadrive1988 said:
how will ATI respond to Nvidia's SLI - by bringing MAXX back?

if not MAXX, then some other equivalent to SLI?

as you already know, ATI has had the ability to scale up to 256 VPUs with R3xx since 2002. both Evans & Sutherland (probably the oldest 3D graphics company around) and SGI have taken advantage of this.

E&S has had dual and quad R300 cards since 2003, and SGI is using 2-32 R3xx VPUs in their new UltimateVision line of ultra-high-end visualization systems.

so now that Nvidia is *apparently* bringing SLI back for consumers, what will ATI's response be?

I have no doubt ATI will do something in response, because they simply CAN.

Don't forget about Quantum3D too. They've been "SLI"ing multiple cards and chips together for quite some time now. When people were putting two separate Voodoo2 cards together, Q3D had them on one board (the Obsidians?). Not to mention that they've had SLI (the real SLI - Scanline Interleave) licensed to them for quite some time now.
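For anyone who never saw the original trick, here's a little illustrative C (my own sketch, not Q3D's or 3dfx's code) showing how scanline interleave differs from the split-frame merge sketched earlier in the thread: the chips alternate whole scanlines instead of each taking a contiguous half.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical scanline-interleave merge: GPU 0 renders the even
       scanlines, GPU 1 the odd ones, and the merge step (historically
       done at scanout) interleaves them line by line. */
    void interleave_scanlines(const uint32_t *even_lines, /* GPU 0 */
                              const uint32_t *odd_lines,  /* GPU 1 */
                              uint32_t *frame,
                              int width, int height)
    {
        for (int y = 0; y < height; y++) {
            /* Each GPU stores its lines densely, so line y of the
               final frame is line y/2 of the owning GPU's buffer. */
            const uint32_t *src = (y % 2 == 0)
                ? even_lines + (size_t)(y / 2) * width
                : odd_lines  + (size_t)(y / 2) * width;

            memcpy(frame + (size_t)y * width, src,
                   (size_t)width * sizeof(uint32_t));
        }
    }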
 
I don't expect ATi to respond to this at all.

They are simply going to cruise through the next cycle eating up cash left and right with their big OEM deals. Let Nvidia sell several thousand of these and look cool in a few reviews.

It's not going to change the financial side of what's going to happen over the next six months one iota.
 
Remember that NVDA also dabble in motherboard chipsets - I'd be very surprised if future nForce iterations didn't support dual PCI-E x16 slots. They could turn it into a tidy little earner by pulling a stunt similar to ATI's RS300 triple-screen-but-only-with-a-Radeon deal.
 
jvd said:
I'm assuming that no consumer boards will have dual PCI-E x16.

Maybe in a couple of years.

It depends on your definition of consumer. Dual PEG16X Tumwater boards (btw nice one Sander, way to go with following Tumwater NDA expiry) will definitely exist this year, from more than one manufacturer.

Rys
 
This will be nice for the workstation market, but given the requirements for a new motherboard/CPU/memory, I think the cost will be prohibitive for all but the nuttiest gamers, at least until motherboards start arriving with dual PEG slots as standard in the consumer market. It may catch people waiting for PCI-E before doing a big-bang upgrade, but it will significantly raise the upgrade cost.

While it's a nice idea, it once again moves Nvidia into the rarefied, exotic marketplace, allowing them to make a lot of noise about a small number of sales, while allowing ATI to keep making loads of profit on their better-yielding parts. Nvidia cut off their noses to spite their faces because they are so desperate to beat ATI in the PR war, but seem to be forgetting that ATI are getting all the OEM deals.
 
Rys said:
jvd said:
I'm assuming that no consumer boards will have dual PCI-E x16.

Maybe in a couple of years.

It depends on your definition of consumer. Dual PEG16X Tumwater boards (btw nice one Sander, way to go with following Tumwater NDA expiry) will definitely exist this year, from more than one manufacturer.

Rys

consumer to me is Athlon XP, Athlon 64, perhaps Athlon 64 FX (those are extremely expensive), P4, P4 EE.

Xeon and Opteron boards aren't really consumer boards.

a fast Xeon is going to cost close to $1k, $200 for a board (that's cheap for a good Xeon board), $200 for a good power supply to run everything, a good chunk of change for registered memory, then $1k on the cards themselves.

You're looking at close to $3k, not to mention the other things you need inside the PC.
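Roughly tallying those figures (the memory number is my own ballpark):

    Xeon CPU             ~$1000
    motherboard           ~$200
    power supply          ~$200
    registered memory     ~$300  (my guess)
    two high-end cards   ~$1000
    ----------------------------
    total                ~$2700, before the case, drives, and the rest

So "close to $3k" is about right.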
 
One small number that might interest you guys: AFAIK, this solution is approximately 35% faster than Alienware's :)

Uttar
 
Colourless said:
It is possible for SLI to accelerate render-to-texture, provided it has been set up to allow it. 3DFX used to merge the frames on scanout; however, it doesn't need to be done that way on a modern card. You could instead merge the two frames in the frame buffer on buffer swap.
But that requires a reasonably fast connection between the two chips, something that might cause trouble if the chips are on separate boards.
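As a rough back-of-the-envelope (the figures are my own assumptions: 1600x1200, 32-bit colour, 60 fps), the traffic for shipping one chip's half-frame across on every swap would be something like:

    half-frame size = 1600 x 1200 x 4 bytes / 2 = ~3.8 MB
    per second      = 3.8 MB x 60 fps           = ~230 MB/s

That's trivial for a PCI Express x16 link (~4 GB/s each way), but it's enough that a dedicated connection between the boards looks more attractive than bouncing it all through the chipset.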

I think this setup is a reaction to the Wildcat Realizm 800. And it should be able to come out as a clear winner.
 
Uttar said:
One small number that might interest you guys: AFAIK, this solution is approximately 35% faster than Alienware's :)

Uttar

What is their target group with this expensive beast anyway? Showing off, or actual demand?
 
LeStoffer said:
Uttar said:
One small number that might interest you guys: AFAIK, this solution is approximately 35% faster than Alienware's :)

Uttar

What is their target group with this expensive beast anyway? Showing off, or actual demand?
Shouldn't that be the pro market? And perhaps the high end of the consumer market, once motherboards are available and 6800s are affordable?
 
This seems remarkably ridiculous for the 6800 Ultra, given the cost factors... at the moment.

The promise for the idea, IMO, is
1) future: motherboards capable of supporting it becoming cheaper, and leveraging the work done before into products at that time
2) over time: allowing incremental upgrades to utilize it
3) as soon as available: avoiding the ridiculous factors (at least the video card related ones) while still getting significant benefit...i.e., the 6800GT seems to go very well with this idea.

This is the type of aggressive mindset that nVidia started to return to with the 5900 SE/"XT" pricing, though it seems, unfortunately, it isn't simply replacing their more problematic "aggressiveness".

The problem (for nVidia... I think this is good for consumers) is that this doesn't seem a breadwinner, so they aren't getting a bottom-line return on this type of aggressiveness yet. The problem is with motherboard availability... something they might be able to directly address, while making extra sales.

nVidia, first with a dual PCI-E Athlon motherboard?
 
Although I take my hat off to it, I do find myself scratching my head right after doing that.

Can you overclock such a contraption? If you cannot, then that 75% increase over a single card will be nibbled away at, making it even less cost-efficient.

I'd like to see the extreme overclockers try and fit the heads of their cascade coolers onto that thing....
 
Why wouldn't you be able to overclock it? As long as you can get the same stable overclock on both cards it should be fine.
 
Wow

I can't afford one new video card, let alone two. How about making an affordable card before suggesting we buy two? I guess the pros and fanatics will eat this up, but it's pretty much useless to 99.9% of the PC market.
 