Synchronization issues with SLI and CrossFire

So now I want to know if there's some twiddling I can do with the ATI crossfire profiles...

Damn, treed by Digi :(
 

Not quite... not beyond trying different exe names and seeing which you like best. But, in general, if your game scales and has a profile made by ATi, you probably won't find a better match through this method (exceptions exist).

There might be more flexibility, a la nV, coming.
 
Frankly, I don't care if you like my tone. I don't like his, and I think he's made a joke out of the whole situation. But I haven't used any hostile language. I've been very candid and direct. He asked me if I could clarify, and the answer to that was no. The drivers provide basic functionality for AFR support in the control panel. But if you want to access the advanced functionality, you're going to have to get your hands dirty and play with the settings. There's nothing obscure about these profile settings. They've been the heart of how SLI has been configured since its introduction.

Yep, I'm not really trying to be rude. To be fair, the harsh language was more in other posts, not in your posts.

This is ridiculous. My very first post stated that this was nothing new. Why would I post to the contrary? You will never get rid of it completely if you use AFR.

Chris

OK, so to be precise, what you mean is that it is not a big problem (not trying to put words in your mouth).
 
Compres, I think the words "mountain" and "ant hill" come to mind. Yes, it's a problem, and there are certainly improvements that could be made to AFR rendering which I would like to see. But as far as the quality and flexibility of SLI go, this is very low on the totem pole of things I am currently concerned with in regards to the feedback I am giving Nvidia about improving SLI as a whole.

Now I'm sure people will disagree with me; they are certainly within their rights to do so. But most of the things I have been talking to Nvidia about are things that people are screaming much louder about in regards to SLI functionality.

Chris
 
Just curious - did anyone ever use a 60Hz+ camera with a timer running on the screen in AFR at 30Hz? Clearly that'd be the best way to prove this is a serious problem, since what really matters in the end is the fluidity of what is shown on the screen. From my POV, it is not impossible to argue against the claim that frames being displayed in the following way is massively undesirable: AABCDDEFFGHII.

Now, if anyone dares to mention that this is what you get on a balanced workload without vsync, be prepared to suffer my wrath... :p (there's a reason you are supposed to enable vsync, damnit!)
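To make the kind of evaluation I have in mind concrete, here is a rough sketch with completely made-up frametimes (not measurements from any real card), assuming the newest completed frame is what gets scanned out at each refresh:

```python
# Rough sketch (not a measurement): map simulated frame-completion times onto
# 60 Hz refresh ticks and print which frame would be visible at each tick.
# Assumes "newest completed frame wins" at every refresh and ignores tearing.
# All frametimes below are invented purely to illustrate the idea.

import string

REFRESH_MS = 1000.0 / 60.0  # one 60 Hz refresh interval


def on_screen_pattern(completion_times_ms, n_refreshes):
    """Return one letter per refresh: the newest frame completed by that tick."""
    letters = string.ascii_uppercase
    pattern = []
    for r in range(n_refreshes):
        tick = r * REFRESH_MS
        visible = 0
        for i, t in enumerate(completion_times_ms):
            if t <= tick:
                visible = i
        pattern.append(letters[visible % 26])
    return "".join(pattern)


# ~30 fps on average in both cases, but AFR-style bunching in the second one:
evenly_paced = [i * 33.3 for i in range(12)]
afr_bunched = []
for pair in range(6):
    afr_bunched.append(pair * 66.7)        # frame from GPU 0
    afr_bunched.append(pair * 66.7 + 8.0)  # frame from GPU 1, only 8 ms later

print("even pacing :", on_screen_pattern(evenly_paced, 20))
print("AFR bunching:", on_screen_pattern(afr_bunched, 20))
```

With even pacing every frame stays on screen for two refreshes; with the bunched pacing some frames linger for several refreshes while their AFR partners barely (or never) appear, which is exactly the sort of thing a 60Hz+ camera would show.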
 


My HDTV displays 1920x1080 in interlaced mode, which is 30 Hz, and I honestly do not see a difference between it and 60 Hz (in regards to this issue, anyway; 30 Hz is an eyesore at times for different reasons).
 
The problem can be captured on camera and there is a video on this here: http://www.pcgameshardware.de/?article_id=631668
That's not what I want; that's a subjective (but effective) way to see there is a problem. What I want is an objective measurement that tells me how this affects what happens on the screen's refresh cycles, since in the end that's the only thing that matters.
VSync does not solve the problem.
My point is that the problem I am complaining about also exists without AFR, whenever VSync is off (and the frametime isn't an *exact* multiple of the refresh time). Consider what happens at ~45FPS without AFR and without VSync (but with triple buffering)... Movement won't seem perfectly fluid either. So yes, I consider everyone not playing with VSync on and triple buffering off to be a heretic.
ChrisRay said:
My HDTV displays 1920x1080 in interlaced mode which is 30 HZ and I honestly do not see a difference between it and 60 Hz
My point isn't 30Hz vs 60Hz; it's more along the lines of 30Hz vs 40Hz. What I'm saying is that 30Hz (on a 60Hz monitor) is better than 40Hz, and that 60Hz is *much* better than 50Hz.
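And to make the 30Hz-vs-40Hz point concrete, a companion sketch along the same lines (again purely synthetic numbers, same scan-out assumption):

```python
# Companion sketch with synthetic numbers: even without AFR, an evenly paced
# 40 fps stream cannot map cleanly onto a 60 Hz display. Same "newest
# completed frame is scanned out" assumption as above, tearing ignored.

REFRESH_MS = 1000.0 / 60.0  # 60 Hz display
FRAME_MS = 1000.0 / 40.0    # evenly paced 40 fps, single GPU, no vsync

durations = []
for frame in range(8):
    start = frame * FRAME_MS
    end = start + FRAME_MS
    # Count the refresh ticks during which this frame is the newest one.
    shown = sum(1 for r in range(60) if start <= r * REFRESH_MS < end)
    durations.append(shown)

# Prints [2, 1, 2, 1, ...]: frames alternate between being held for two
# refreshes and for one, which is why an even 40 fps can look less smooth
# than a locked 30 fps (every frame held for exactly two refreshes).
print(durations)
```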
 
Interesting thought, Arun. Might be something I can look into.

Thanks.

Chris
 
I think that everything that happens after the frame buffer swap is constant and thus is of no interest in the context of the problem.

The problem can be measured with the timings of the frame buffer swaps.
 
To make it clearer: the refresh cycles (front buffer -> RAMDAC) are constant, but the problem is about the uneven updates of the frame buffers.

So measuring the frame buffer swap timings is enough for a deterministic evaluation.
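As a rough sketch of what that evaluation could look like (illustrative Python with invented swap timestamps, not output from any real logging tool), one could log the swap times and compute the deltas plus a simple unevenness figure:

```python
# Sketch of the evaluation suggested above: given the timestamps of frame
# buffer swaps (however they were logged), the swap-to-swap deltas are enough
# to quantify how uneven AFR pacing is. The timestamps below are invented.

def swap_deltas(swap_times_ms):
    """Time between consecutive buffer swaps, in milliseconds."""
    return [b - a for a, b in zip(swap_times_ms, swap_times_ms[1:])]

def afr_unevenness(deltas):
    """Crude 2-way AFR metric: average ratio of the longer delta to the
    shorter delta in each consecutive pair. 1.0 = perfectly even pacing."""
    ratios = []
    for a, b in zip(deltas[::2], deltas[1::2]):
        ratios.append(max(a, b) / max(min(a, b), 1e-6))
    return sum(ratios) / len(ratios)

# Invented example: ~30 fps on average, but swaps bunched in pairs.
swaps = [0.0, 8.0, 66.7, 74.7, 133.4, 141.4, 200.1, 208.1]
deltas = swap_deltas(swaps)
print("deltas (ms):", [round(d, 1) for d in deltas])
print("unevenness :", round(afr_unevenness(deltas), 2))
```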
 
Yes, but it is possible in theory to delay the refreshes via buffering to equalize the time given to both cards on the screen. This would slightly increase latency but would reduce stuttering. There are some other fun things you could do, actually. And the exact relationship between frametime and 'ready to refresh' time *might* not be as obvious as it appears at first, although I wouldn't expect it to be too exotic either. Either way, the problem might be slightly less or slightly more than the frametime numbers would imply, I think.
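As a purely illustrative sketch of that buffering idea (made-up numbers, nothing driver-level; the 33.3 ms target is just an assumed 30 fps pacing interval):

```python
# Sketch of the "delay a frame to even things out" idea mentioned above, as a
# pure simulation (no driver or API involved). Given bunched completion times,
# hold each frame back so that presents land at (roughly) even intervals, at
# the cost of a little extra latency. All numbers are invented.

def paced_presents(completion_times_ms, target_interval_ms):
    """Present each frame no earlier than target_interval after the previous
    present, and never before the frame has actually finished rendering."""
    presents = []
    last = None
    for done in completion_times_ms:
        t = done if last is None else max(done, last + target_interval_ms)
        presents.append(t)
        last = t
    return presents

# Bunched 2-way AFR pattern: pairs of frames ~8 ms apart, pairs ~66.7 ms apart.
bunched = [0.0, 8.0, 66.7, 74.7, 133.4, 141.4, 200.1, 208.1]
paced = paced_presents(bunched, target_interval_ms=33.3)

print("raw deltas  :", [round(b - a, 1) for a, b in zip(bunched, bunched[1:])])
print("paced deltas:", [round(b - a, 1) for a, b in zip(paced, paced[1:])])
print("added latency per frame:",
      [round(p - c, 1) for c, p in zip(bunched, paced)])
```

The paced deltas come out nearly constant while the added latency shows the price paid for it, which matches the trade-off described above.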

Now, one thing I really want to know is how this all ties in with single-GPU HybridPower. I certainly hope there isn't any microstuttering there, or I might just cry... :( Reducing the benefits of AFR for those who want more than the maximum single-GPU performance is one thing. But reducing effective fluidity with a feature that shouldn't affect performance at all is something else completely. And it'd make HybridPower extremely undesirable except for those with Multi-GPU configurations...
 
^^ If the assumption that HybridPower is what's confusing you is correct: it's NVIDIA's way of offsetting the power consumption of their 3- and 4-way SLI.

The discrete GPU (dGPU) only powers on when you play games; the (GeForce) motherboard GPU (mGPU) runs your monitor the rest of the time.

http://www.nvidia.com/object/hybrid_sli.html - NVIDIA's page with the odd HybridPower logo telling you what you need for HybridPower.

This might reassure Arun about HybridPower: http://www.bit-tech.net/hardware/2008/01/22/nvidia_hybrid_sli_preview_part_2/2
Bit-tech said:
The second part of my question was in relation to how the frame is rendered to the screen in performance mode – I wanted to know whether it would be possible to bypass system memory by taking a direct path from the discrete GPU to the northbridge and then straight out to the display without the need to write the frame to the mGPU’s front buffer located in system memory.

“The mGPUs require the display surface to be stored in memory for refresh,” said Nick. “Remember that we have to refresh the display at 60Hz or more. If the display surface passed directly from dGPU to mGPU to display, then the dGPU would have to serve up 60+ fps. If the discrete GPU could only render 40 frames per second, it would look really bad.

“It is also more efficient and guarantees no tearing if the display refresh is handled by the mGPU, independent of the rate that rendered frames are served by the dGPU,” he added.

Unfortunately there are problems with it: the max resolution is 1920x1200 (assuming DVI), manual switching (via a control panel) is required to move from the dGPU to the mGPU and vice versa, and you need a 9-series graphics card to use it.
 
Here's an idea for an X2 (single card, two GPUs) card:
why have two full GPUs? Why not just have one, and have the other chip contain only what in the old days would have been called pipelines (shaders, ROPs, etc.)?
 

How would that help? It'll just be slower than a single chip. Sort of like anti-SLI :)

Also, what exactly would be left for your "GPU" to do if the second chip has "shaders, ROPs, etc."?
 

He's thinking Xenos.
 
Well, the GPU would manage everything plus have the memory controller, etc.
I'm thinking that way, if you had 1 GB of memory it would be 1 GB, not 2 x 512 MB.
 