nVIDIA's "SLI" solution

Quitch said:
I think the method noted earlier, where you "upgrade" by buying a second card, is the most likely course for most users, and quite a tempting one at that.

And you also upgrade your PSU and/or your motherboard.
 
Ever since the introduction of the PCIe standard, most of my geek friends have immediately thought, "Aha! Maybe we can have dual video cards!" It seems like the majority of people have already considered that PCIe implies the possibility of dual PEG16 slots.

There will certainly be a high-end market for this. One only needs to look at the .sigs of people who brag about their enormously expensive custom boxes. It's just another option for performance geeks who want to drop a lot of money on an uber-box. I don't really see the need to talk down additional user options.
 
A dual-16x board will be at least $100 US more than a "regular" motherboard. I doubt you will see any dual-16x motherboard for less than $300 for the next year and a half. And I won't be buying one for SLI, though it is cool; I'll be buying one to have both ATI and NVDA in the same box. This will never be for the regular Joe who games more than he emails Mom.
I can just see the Asus motherboard at $380....
 
trinibwoy said:
What are the advantages of alternate frame rendering over an interlaced/split-screen approach? Guess you wouldn't need the 'merge buffer' anymore.

Fewer resource-sharing issues. This can matter a lot in render-to-texture situations. In most cases the render target is updated every frame, and if that's the case there's no need to copy data between the chips with AFR. I expect the benefit of nVidia's SLI to go down quite a bit whenever render-to-texture is used.
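To make the copy argument concrete, here is a toy sketch (my own illustration with made-up handling strategies, not anything taken from NVIDIA's or ATI's drivers): assuming the render target is rebuilt every frame, each AFR chip always holds a complete, current copy, while a split-frame setup must either merge the two halves or duplicate the whole render-to-texture pass before the main pass can sample from it.

# Toy Python model: cross-chip traffic forced by one render-to-texture pass per frame.
# Assumption (mine): the texture is regenerated every frame and sampled later in the
# same frame by the main pass, with texture coordinates that don't respect the screen split.

def copies_split_frame(num_frames: int) -> int:
    # Each GPU renders half of every frame, so the two halves of the render target
    # must be merged (one cross-chip transfer per frame) before either GPU can sample
    # arbitrary texels from it. (The alternative is to run the whole render-to-texture
    # pass on both chips, trading bandwidth for duplicated work.)
    return num_frames

def copies_afr(num_frames: int) -> int:
    # Each GPU renders whole alternating frames; since the render target is rebuilt
    # every frame, nothing ever needs to cross between the chips.
    return 0

if __name__ == "__main__":
    frames = 1000
    print("split-frame copies:", copies_split_frame(frames))
    print("AFR copies:        ", copies_afr(frames))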
 
Another potential problem with this solution, and I'm sure you'll all be horrified to hear this: It might break Quincunx :!: :!: :!: (I can hear the collective gasps of horror now)
 
DaveBaumann said:
Another potential problem with this solution, and I'm sure you'll all be horrified to hear this: It might break Quincunx :!: :!: :!: (I can hear the collective gasps of horror now)


Nice shot of sarcasm :LOL:

I couldn't give a flying F about Quincunx or any other blur-filter-incorporating AA mode, pffffff.......
 
DaveBaumann said:
Another potential problem with this solution, and I'm sure you'll all be horrified to hear this: It might break Quincunx :!: :!: :!: (I can hear the collective gasps of horror now)

Problem? That's a check-box feature.
 
Quick question.

Why are you all jumping to PCIe when BTX should be out in about 10 months to a year?

I would think that the cooling benefits of BTX are more pronounced than those of PCIe.
 
jvd said:
Quick question.

Why are you all jumping to PCIe when BTX should be out in about 10 months to a year?

I would think that the cooling benefits of BTX are more pronounced than those of PCIe.

Maybe I'm missing something here, but what is the relevance of BTX to PCIe?
 
jvd said:
Quick question.

Why are you all jumping to PCIe when BTX should be out in about 10 months to a year?
Because no one cares about BTX.
jvd said:
I would think that the cooling benefits of BTX are more pronounced than those of PCIe.
There are no cooling benefits from BTX.
It's a more complicated assembly because of the built-in "thermal module" with specific size and position requirements.
BTX doesn't work with CPU-integrated memory controllers because of layout constraints.
BTX has piss-poor airflow characteristics for drives and add-in cards.
BTX thermally couples CPU and graphics chips, which isn't such a brilliant idea.

BTX is just a stupid idea that tries to make 100W+ processor-in-a-box things cheaper (to produce ...) at the expense of ... well, a lot.

I hereby pronounce BTX dead.
 
ninelven said:
Sage, what gave you that impression? Link/quote?

What impression are you referring to? That it wouldn't increase vertex throughput? Well, because they are rendering the same frame... vertices have to be transformed in order to know which parts of the frame the resulting triangles fall in, so each GPU has to transform the entire scene and then render only the part that has been designated for it.
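To put a number on the duplication, here is a minimal sketch (my own, with made-up figures) of per-GPU work under split-frame rendering: vertex transformation is repeated on every chip, while pixel work actually divides.

# Rough Python sketch: per-GPU workload when two GPUs split one frame by screen region.
# The vertex count and resolution below are arbitrary example numbers.

def per_gpu_work(total_vertices: int, total_pixels: int, num_gpus: int):
    # Every GPU must transform every vertex before it can tell whether the resulting
    # triangle lands in its assigned region, so vertex work is duplicated on each chip.
    vertices_per_gpu = total_vertices
    # Pixel shading, by contrast, really does split across the chips.
    pixels_per_gpu = total_pixels // num_gpus
    return vertices_per_gpu, pixels_per_gpu

if __name__ == "__main__":
    v, p = per_gpu_work(total_vertices=1_000_000, total_pixels=1920 * 1080, num_gpus=2)
    print(f"Each GPU transforms {v:,} vertices but shades only {p:,} pixels.")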
 
Sage said:
ninelven said:
Sage, what gave you that impression? Link/quote?

What impression are you referring to? That it wouldn't increase vertex throughput? Well, because they are rendering the same frame... vertices have to be transformed in order to know which parts of the frame the resulting triangles fall in, so each GPU has to transform the entire scene and then render only the part that has been designated for it.

It could very well be the case, yet it doesn't necessarily have to be that way. I'm too tired for any speculation right now, and I'd most likely end up shooting myself in the foot.
 
Ailuros said:
Sage said:
ninelven said:
Sage, what gave you that impression? Link/quote?

What impression are you referring to? That it wouldn't increase vertex throughput? Well, because they are rendering the same frame... vertices have to be transformed in order to know which parts of the frame the resulting triangles fall in, so each GPU has to transform the entire scene and then render only the part that has been designated for it.

It could very well be the case, yet it doesn't necessarily have to be that way. I'm too tired for any speculation right now, and I'd most likely end up shooting myself in the foot.

I would certainly like to see some ideas on how to get around this little problem.
 
ninelven said:
Sage said:
Funny, the pro users are saying "this is for the gaming market, it's of no use to us."
Sage, this is what I was referring to.
ahh...

- The GPU nowadays handles the bulk of the CTE transformations. In this case, however, both GPUs cooperating on the same image would imply redundant operations. I am guessing that this setup is aimed at the gaming market, where increasingly the rendering load is spent in pixel shaders rather than geometry (i.e., don't push more polygons, make them look prettier).
http://cgtalk.com/showthread.php?t=151848
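Taking that load split at face value, the scaling follows an Amdahl-style curve; a quick back-of-envelope calculation (assumed percentages, not measurements) shows how the duplicated geometry work caps the speedup well below 2x.

# Back-of-envelope Python sketch: split-frame speedup when only the pixel-shading
# portion of the frame time divides across GPUs and the geometry portion is repeated.

def sfr_speedup(geometry_fraction: float, num_gpus: int = 2) -> float:
    pixel_fraction = 1.0 - geometry_fraction
    return 1.0 / (geometry_fraction + pixel_fraction / num_gpus)

if __name__ == "__main__":
    for geo in (0.1, 0.2, 0.4):
        print(f"geometry {geo:.0%} of frame time -> {sfr_speedup(geo):.2f}x with 2 GPUs")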
 