nVIDIA's "SLI" solution

I'm not sure this is what you were referring to earlier, but NVIDIA does seem to claim that their SLi implementation can scale geometry performance.
 
I note that when I first raised the NVidia / Alienware dual GPU / CPU solution last week http://www.beyond3d.com/forum/viewtopic.php?t=13402 it quickly disappeared from here (in my mind absolutely the correct forum to discuss it and its implications) and was moved to the 3D graphics board section (I initially thought, WTF? why put it there?), done, I might add, without any reason or explanation as to why a moderator felt that was a more appropriate forum.

This, coupled with 20 pages about 2 lines of 3 dots, just says to me that things seem a bit surreal around here lately, guys...

But well done NVidia btw!
 
DaveBaumann said:
Another potential problem with this solution, and I'm sure you'll all be horrified to hear this: It might break Quincunx :!: :!: :!: (I can hear the collective gasps of horror now)
Actually, I don't think so (not that I like it). Remember that one video card still has to do all of the displaying. If the video cards are run in the mode where the downsampling is done at scanout, Quincunx could still very well be used.
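
To make the "downsampling at scanout" point concrete, here's a toy sketch of a quincunx-style resolve run as the frame is read out for display. It's purely illustrative: it treats the sample buffer as one sample per pixel rather than the real 2x grid, and the 1/2 + 4x1/8 tap weights are just the commonly quoted Quincunx pattern, not anything from nVidia's documentation.

[code]
// Toy model of a quincunx-style resolve done at scanout: each output pixel
// blends its own sample (weight 1/2) with four diagonal neighbours (1/8 each).
// Simplification: one sample per pixel instead of the real 2x multisample grid.
#include <vector>
#include <algorithm>

struct Color { float r, g, b; };

Color quincunxResolve(const std::vector<Color>& samples, int w, int h, int x, int y)
{
    auto tap = [&](int sx, int sy) -> const Color& {
        sx = std::clamp(sx, 0, w - 1);   // clamp taps at the screen edge
        sy = std::clamp(sy, 0, h - 1);
        return samples[sy * w + sx];
    };
    const Color& c  = tap(x, y);
    const Color& d0 = tap(x - 1, y - 1);
    const Color& d1 = tap(x + 1, y - 1);
    const Color& d2 = tap(x - 1, y + 1);
    const Color& d3 = tap(x + 1, y + 1);
    return { 0.5f * c.r + 0.125f * (d0.r + d1.r + d2.r + d3.r),
             0.5f * c.g + 0.125f * (d0.g + d1.g + d2.g + d3.g),
             0.5f * c.b + 0.125f * (d0.b + d1.b + d2.b + d3.b) };
}
[/code]

The point is simply that the filter only needs the displaying card's view of the final sample buffer, so it doesn't care how the rendering was split between the two GPUs earlier in the frame.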
 
ninelven said:
I'm not sure this is what you were referring to earlier, but NVIDIA does seem to claim that their SLi implementation can scale geometry performance.
That's possible, and may be related to their load-balancing technology. Here's an algorithm that nVidia might be using:

1. Cache one whole frame on the system side.
2. Examine the transform matrix in order to compare the screen-space "y" values for each vertex to be rendered (which wouldn't require a full transform).
3. Execute an algorithm that finds out where to place a clipping plane that would place half of the vertices above and half below. This would be fairly challenging to do efficiently, but is certainly not impossible.
4. Send all triangles that have vertices in the lower portion of the screen to one graphics card, and all triangles with vertices in the upper portion of the screen to the other.
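
Here's a minimal sketch of steps 2-4 in code, under some heavy assumptions of my own (a single transform matrix for the whole cached frame, and triangles that straddle the split line simply going to both cards). It's only meant to show the kind of per-vertex work involved, not nVidia's actual scheme:

[code]
// Hypothetical CPU-side split of a cached frame across two GPUs by screen-space y.
// Only the y and w rows of the model-view-projection matrix are evaluated per
// vertex, so this is cheaper than a full transform but still far from free.
#include <vector>
#include <algorithm>

struct Vec3     { float x, y, z; };
struct Mat4     { float m[4][4]; };          // row-major MVP matrix (assumed constant per frame)
struct Triangle { Vec3 v[3]; };

// Normalised-device-coordinate y of a vertex, using only two rows of the matrix.
static float ndcY(const Mat4& mvp, const Vec3& v)
{
    float y = mvp.m[1][0] * v.x + mvp.m[1][1] * v.y + mvp.m[1][2] * v.z + mvp.m[1][3];
    float w = mvp.m[3][0] * v.x + mvp.m[3][1] * v.y + mvp.m[3][2] * v.z + mvp.m[3][3];
    return (w != 0.0f) ? y / w : 0.0f;
}

// Step 3: pick a split height with roughly half of the vertices on each side.
static float findSplit(std::vector<float>& ys)
{
    std::nth_element(ys.begin(), ys.begin() + ys.size() / 2, ys.end());
    return ys[ys.size() / 2];                // approximate median of screen-space y
}

// Steps 2 and 4: route every triangle of the cached frame to one of two batches.
void splitFrame(const std::vector<Triangle>& frame, const Mat4& mvp,
                std::vector<Triangle>& upperGpu, std::vector<Triangle>& lowerGpu)
{
    std::vector<float> ys;
    ys.reserve(frame.size() * 3);
    for (const Triangle& t : frame)
        for (const Vec3& v : t.v)
            ys.push_back(ndcY(mvp, v));

    const float split = findSplit(ys);

    for (const Triangle& t : frame) {
        bool above = false, below = false;
        for (const Vec3& v : t.v) {
            if (ndcY(mvp, v) >= split) above = true;
            else                       below = true;
        }
        // A triangle straddling the split line has to go to both cards
        // (or be clipped), which adds to the overhead.
        if (above) upperGpu.push_back(t);
        if (below) lowerGpu.push_back(t);
    }
}
[/code]

Even this stripped-down version touches every vertex in the frame on the CPU.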

Now, there are many performance issues associated with the above algorithm, particularly in the amount of CPU processing that would need to be done. There are also potential state change issues on the GPU side. These will definitely limit possible performance gains, and may be one reason why 3DMark2003 was used for benchmarking (since it's not very CPU-limited).

It would therefore be beneficial if nVidia allowed users to disable the load-balancing: that would eliminate the geometry acceleration and limit fillrate efficiency, but it would also remove the CPU cost described above. It will be interesting to see exactly what the CPU impact of the load balancing is, though there is always the potential that nVidia has found a way of doing this efficiently on the GPU (though doing so may require sending the entire scene once, reading back a "coverage mask," and then sending it again).
 
zeckensack said:
There are no cooling benefits from BTX.
False, BTX presents a variety of cooling benefits.

It's a more complicated assembly because of the built-in "thermal module" with specific size and position requirements.
Huh? It should be cheaper and easier to build than what DELL currently does with their Thermal Module.

BTX doesn't work with CPU integrated memory controllers because of layout constraints.

False again. The BTX form factor will work fine with an IMC. You just need to shift the DIMMs slightly forward from the preferred embodiment.

BTX has piss poor airflow characteristics for drives and add-in cards.

Hmm, can't think of an add-in card that needs airflow besides the video card.

Drives are just fine. Most drives are cooled primarily via thermal coupling to the case anyways.

BTX thermally couples CPU and graphics chips, which isn't such a brilliant idea.

Why not? A single 120mm fan can easily provide enough airflow for both while making almost no noise. In addition, the thermal difference between the air provided to the video card this way and the dead air it would get in an ATX case is at worst minimal.
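
As a rough back-of-the-envelope check on the single-fan claim (the 70 CFM airflow and 10 K air temperature rise below are assumed numbers for illustration, not figures from the BTX spec):

[code]
// Rough sanity check: how much heat a single 120mm fan can carry away.
// Q = rho * cp * Vdot * dT, with air at roughly 1.2 kg/m^3 and 1005 J/(kg*K).
#include <cstdio>

int main()
{
    const double rho  = 1.2;                 // kg/m^3, density of air
    const double cp   = 1005.0;              // J/(kg*K), specific heat of air
    const double cfm  = 70.0;                // assumed airflow of a quiet 120mm fan
    const double vdot = cfm * 0.000471947;   // CFM converted to m^3/s
    const double dT   = 10.0;                // assumed air temperature rise, kelvin

    const double watts = rho * cp * vdot * dT;   // heat carried by the airflow
    std::printf("~%.0f W removed at %.0f CFM with a %.0f K air temp rise\n",
                watts, cfm, dT);                 // prints roughly 398 W
    return 0;
}
[/code]

Under those assumptions that's comfortably more than a 100W+ CPU and a high-end graphics card put out between them.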

BTX is just a stupid idea that tries to make 100W+ processor-in-a-box things cheaper (to produce ...) at the expense of ... well, a lot.

BTX is designed to make the system thermal solution cheaper overall, by reducing the number of fans needed and simplifying attachment. When BTX takes off, even things like an X800 Pro will be sold fanless with a simple alu heatsink that will end up with much better thermal performance than the current designs.

Aaron Spink
speaking for myself inc.
 
BTX will probably not take off if the sounds coming from Taiwan are to be believed. AMD don't like it either. Even Intel only like it because it was a last-ditch solution for Prescott and Tejas, and now the last ditch has arrived, with BTX in it....

It's a smart move by nVidia because even if you never ever buy that second card, you buy the first one because it gives you the potential to buy the second. Another tick for the GT over the X800 PRO. Also, nVidia get to sell a chipset as well. Those crafty beggars!

I don't see the problem with the money. There are a lot of young car enthusiasts spending an awful lot more buying dump valves, carbon fibre air boxes, extra shiny injectors and other ridiculous things that I have no interest in, so I don't see why we should be sounding like our own parents about what is "reasonable".

I have an AMD Socket 754 system at present and have ordered a 6800U, but if there is something that will give me a noticeable increase in performance (i.e. 20-30%) come Christmas, with an AMD64 San Diego and maybe this option, then I am prepared to pay.
 
aaronspink said:
BTX is designed to make the system thermal solution cheaper overall, by reducing the number of fans needed and simplifying attachment. When BTX takes off, even things like an X800 Pro will be sold fanless with a simple alu heatsink that will end up with much better thermal performance than the current designs.

BTX makes only Intel's life easier because of the large heat output of upcoming Intel chips. Far Eastern PC makers hate it. It means new cases, PSUs and motherboards for Intel chips, and Intel keep changing the spec on them. How long have we been talking about BTX, and it's still not been finalised?

If Intel really wanted to make it work, they should have had BTX ready for PCIe, so that those doing a major upgrade to PCIe could also make the jump to BTX.
 
aaronspink said:
BTX is designed to make the system thermal solution cheaper overall, by reducing the number of fans needed and simplifying attachment. When BTX takes off, even things like an X800 Pro will be sold fanless with a simple alu heatsink that will end up with much better thermal performance than the current designs.

That's completely false. BTX will make life worse for graphics cards. BTX is a solution to Intel's problem, and Intel don't give a flying fudge about the rest of the industry.
 
Nick Spolec said:
Quitch said:
I think the method noted earlier, where you "upgrade" by buying a second card, is the most likely course for most users, and quite a tempting one at that.

And you also upgrade your PSU and/or your motherboard too.

That depends on whether you planned ahead or not, doesn't it? Which works out as more cost-effective? Actually, it's probably going to depend on the final benches.

It is quite likely this has more of a future as a very niche, top-end product. After all, no matter how good your PC, you're limited by what's available at the time. Being able to link two cards together compensates somewhat.

I wonder if cards could end up competing with themselves as people opt to buy a second old one rather than move on to the new one. What happened in the era of Voodoo 2/Voodoo 3?
 
I have to say, the way the cards connect together seems all fine and dandy... But what about the inevitable heat issue?

Most video cards emit heat from behind the core (on the opposite side of the HSF). The heat generated from the bottom card on Nvidia's SLI method will undoubtedly affect the top card, because usually you want COOL air to be sucked into a HSF, not HOT.

I am guessing people who actually have this setup would be well advised to have a fan blowing in between the cards.
 
Nick Spolec said:
I have to say, the way the cards connect together seems all fine and dandy... But what about the inevitable heat issue?

Most video cards emit heat from behind the core (on the opposite side of the HSF). The heat generated from the bottom card on Nvidia's SLI method will undoubtedly affect the top card.
Nahh you take advantage of it. The card on top could have a thermocouple to extract electricity from the waste heat of the bottom card and so save power. :p
 
Nick Spolec said:
I am guessing people who actually have this setup would be well advised to have a fan blowing in between the cards.
This is probably why they elected for the blower design that they did.
 
Chalnoth said:
Huh? There's no way a normal fan design would work with two closely-packed video cards.
Neither would a blower. Air intake is severely reduced by the back of the other card in both designs.
 
anaqer said:
Chalnoth said:
Huh? There's no way a normal fan design would work with two closely-packed video cards.
Neither would a blower. Air intake is severely reduced by the back of the other card in both designs.

Depends on your case too. My current project is to drop a 120mm right on top of my gpu so the gpu/cpu share the air. In this config two blowers won't be starved for cool air.
 
trinibwoy said:
Depends on your case too. My current project is to drop a 120mm right on top of my gpu so the gpu/cpu share the air. In this config two blowers won't be starved for cool air.
I was under the impression we were talking about the situation in generic BTX cases, not custom built rigs. :?:
 
anaqer said:
trinibwoy said:
Depends on your case too. My current project is to drop a 120mm right on top of my gpu so the gpu/cpu share the air. In this config two blowers won't be starved for cool air.
I was under the impression we were talking about the situation in generic BTX cases, not custom built rigs. :?:

I don't own a DELL :p
 
Nick Spolec said:
Quitch said:
I think the method noted earlier, where you "upgrade" by buying a second card, is the most likely course for most users, and quite a tempting one at that.

And you also upgrade your PSU and/or your motherboard too.

Remember how, a couple of years ago, with hard drives in RAID configurations, even if you had the same size and model of hard drive, or with dual-CPU systems running the same speed of CPU, sometimes the system did not work? Or how, when you install dual-channel RAM, it works best to get a dual-channel kit or two DIMMs at the same time? The problem in many cases is that there has been a minor revision change and something has changed, making the two items incompatible. Does anyone seriously expect to buy one 6800 from one AIB manufacturer and another 6800 from a different AIB manufacturer six months later and have SLI work? For that matter, even if you wait six months and buy another 6800 from the first AIB manufacturer, the odds are that it still won't work.

[conspiracy mode=on]
I am imagining various board makers changing the pinout of the SLI connector on their cards to break compatibility with other manufacturers' cards. Anyway, how will you get the SLI bridge board if you don't buy the two 6800s at once? Retailers are not going to stock something they may only sell 30 of that costs maybe $30. For that matter, I think nVidia only announced SLI as a marketing gimmick to sell PCIe 6800s with the SLI connector (cards which AIB manufacturers will certainly charge a premium for) to people who think that they will just get another 6800 in six to twelve months and expect them to work. And in six to twelve months the 6800 will be old news, probably replaced by an even faster card.
[conspiracy mode=off]
 