So now I want to know if there's some twiddling I can do with the ATI crossfire profiles...
Damn, treed by Digi
Frankly, I don't care if you like my tone. I don't like his, and I think he's made a joke out of the whole situation. But I haven't used any hostile language; I've been candid and direct. He asked me if I could clarify, and the answer to that was no. The drivers provide basic functionality for AFR support in the control panel, but if you want to access the advanced functionality you're going to have to get your hands dirty and play with the settings. There's nothing obscure about these profile settings; they've been the heart of how SLI has been configured since its introduction.
This is ridiculous. My very first post stated that this was nothing new. Why would I post to the contrary? You will never get rid of it completely if you use AFR.
Chris
Just curious - did anyone ever use a 60Hz+ camera with a timer running on the screen in AFR at 30Hz? Clearly that'd be the best way to prove this is a serious problem, since what really matters in the end is the fluidity of what is shown on the screen. From my POV, it's hard to argue against the fact that frames being displayed in the following way is massively undesirable: AABCDDEFFGHII.
Now, if anyone dares to mention that this is what you get on a balanced workload without vsync, be prepared to suffer my wrath... (there's a reason you are supposed to enable vsync, damnit!)
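For what it's worth, here's a minimal sketch of the thought experiment (assuming a vsync'd, double-buffered 60Hz display and made-up frame-completion times, nothing measured): it maps AFR-style uneven frame completion onto refresh ticks and prints which frame each refresh ends up showing.

```cpp
// Sketch: map hypothetical frame-completion times onto 60Hz refresh ticks to
// see which rendered frame each refresh displays. The intervals below are
// invented to mimic AFR micro-stutter: pairs of frames finish close together,
// then there is a long gap, for a ~30 fps average.
#include <cstdio>
#include <string>
#include <vector>

int main() {
    const double refresh = 1000.0 / 60.0;                     // ms per scanout
    std::vector<double> frame_intervals = { 8, 58, 8, 58, 8, 58, 8, 58, 8, 58 };

    std::vector<double> done_at;                              // completion times
    double t = 0.0;
    for (double dt : frame_intervals) { t += dt; done_at.push_back(t); }

    // Each refresh shows the newest frame that completed before scanout started
    // (simple vsync'd, double-buffered model).
    int shown = -1;
    std::string pattern;
    for (double scan = refresh; scan < t; scan += refresh) {
        while (shown + 1 < (int)done_at.size() && done_at[shown + 1] <= scan)
            ++shown;
        if (shown >= 0) pattern += char('A' + shown);
    }
    // Prints AAABCCCDEEEFGGGHIII: even frames linger for three refreshes and odd
    // ones flash by in one, instead of a steady two refreshes each at 30 fps.
    std::printf("per-refresh pattern: %s\n", pattern.c_str());
    return 0;
}
```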
The problem can be captured on camera, and there is a video on this here: http://www.pcgameshardware.de/?article_id=631668

That's not what I want; that's a subjective (but effective) way to see there is a problem. What I want is an objective measurement that tells me how this affects what happens on the screen's refresh cycles, since in the end that's the only thing that matters.
VSync does not solve the problem.

My point is that the problem I am complaining about also exists without AFR, whenever VSync is off (and the frametime isn't an *exact* multiple of the refresh time). Consider what happens at ~45FPS without AFR and without VSync (but with triple buffering)... Movement won't seem perfectly fluid either. So yes, I consider everyone not playing with VSync on and triple buffering off to be a heretic.
ChrisRay said:
My HDTV displays 1920x1080 in interlaced mode which is 30Hz and I honestly do not see a difference between it and 60Hz.

My point isn't 30Hz vs 60Hz; it's more along the lines of 30Hz vs 40Hz. What I'm saying is that 30Hz (on a 60Hz monitor) is better than 40Hz, and that 60Hz is *much* better than 50Hz.
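To put numbers on that, here's a small sketch under the same assumptions (perfectly steady frame times, vsync'd double buffering on a 60Hz display; the fps list is just for illustration) that counts how many refreshes each frame stays on screen:

```cpp
// Sketch: for a constant frame rate on a 60Hz display, count how many refreshes
// each frame is held on screen. 30 and 60 fps hold every frame for the same
// time; 40, 45 and 50 fps alternate between 1- and 2-refresh holds, which is
// exactly the unevenness being argued about.
#include <cstdio>

int main() {
    const int fps_list[] = { 30, 40, 45, 50, 60 };

    for (int fps : fps_list) {
        std::printf("%2d fps: holds =", fps);
        for (int i = 0; i < 10; ++i) {
            // Frame i finishes at (i+1)/fps seconds and becomes visible on the
            // next 60Hz refresh; integer ceiling division keeps this exact.
            int visible_i   = ((i + 1) * 60 + fps - 1) / fps;
            int visible_ip1 = ((i + 2) * 60 + fps - 1) / fps;
            std::printf(" %d", visible_ip1 - visible_i);
        }
        std::printf(" (refreshes per frame)\n");
    }
    return 0;
}
```

40fps comes out as 1 2 1 2 ..., 45fps as 1 1 2 1 1 2 ..., 50fps as 1 1 1 1 2 ..., while 30 and 60 are constant, which is the whole point.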
I consider everyone not playing with VSync on and triple buffering off to be a heretic.

Amen brother!
To make it clearer: the refresh cycles (front buffer -> RAMDAC) are constant, but the problem is about inhomogeneous refreshes of the frame buffers. I think that everything that happens after the frame buffer swap is constant and thus is of no interest in the context of the problem.
The problem can be measured with the timings on the frame buffer swaps.
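As a rough illustration of what that could look like, here's a minimal, self-contained sketch that timestamps each swap and reports the frame-to-frame deltas. The render loop is faked with sleeps, and note_swap() is a hypothetical helper you would call right after your real SwapBuffers()/Present() call, whichever API you use:

```cpp
// Sketch of the "time the buffer swaps" idea: record a timestamp at every swap
// and look at consecutive deltas. With AFR micro-stutter the average fps can
// look fine while the deltas alternate short/long.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

using clock_type = std::chrono::steady_clock;

static std::vector<double> g_deltas_ms;
static clock_type::time_point g_last;
static bool g_have_last = false;

// Call once per frame, immediately after the buffer swap.
void note_swap() {
    const auto now = clock_type::now();
    if (g_have_last)
        g_deltas_ms.push_back(
            std::chrono::duration<double, std::milli>(now - g_last).count());
    g_last = now;
    g_have_last = true;
}

int main() {
    // Fake frame loop: alternate ~8ms and ~25ms of "work" per frame, which is
    // the short/long pattern a micro-stuttering AFR setup tends to produce.
    for (int frame = 0; frame < 20; ++frame) {
        std::this_thread::sleep_for(std::chrono::milliseconds(frame % 2 ? 25 : 8));
        note_swap();
    }

    double sum = 0.0, worst_ratio = 1.0;
    for (std::size_t i = 0; i < g_deltas_ms.size(); ++i) {
        sum += g_deltas_ms[i];
        if (i > 0) {
            const double a = g_deltas_ms[i - 1], b = g_deltas_ms[i];
            const double r = a > b ? a / b : b / a;
            if (r > worst_ratio) worst_ratio = r;
        }
    }
    std::printf("avg swap-to-swap delta: %.1f ms, worst consecutive ratio: %.1fx\n",
                sum / g_deltas_ms.size(), worst_ratio);
    return 0;
}
```

Note that this only times when the swaps are issued, not when the flips actually reach the screen, which is the distinction being made a few posts up.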
Bit-tech said:
The second part of my question was in relation to how the frame is rendered to the screen in performance mode – I wanted to know whether it would be possible to bypass system memory by taking a direct path from the discrete GPU to the northbridge and then straight out to the display without the need to write the frame to the mGPU’s front buffer located in system memory.
“The mGPUs require the display surface to be stored in memory for refresh,” said Nick. “Remember that we have to refresh the display at 60Hz or more. If the display surface passed directly from dGPU to mGPU to display, then the dGPU would have to serve up 60+ fps. If the discrete GPU could only render 40 frames per second, it would look really bad.”
“It is also more efficient and guarantees no tearing if the display refresh is handled by the mGPU, independent of the rate that rendered frames are served by the dGPU,” he added.
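As a toy model of that decoupling (the rates, the threads and the atomic "surface" below are stand-ins for illustration, not how the driver actually works), something like this shows a ~40fps producer feeding a steady 60Hz scanout from a shared surface, with slow frames simply being shown twice rather than tearing:

```cpp
// Toy model: the dGPU "renders" into a surface held in memory at ~40 fps while
// the mGPU independently "scans out" whatever complete frame is in that surface
// at a steady 60Hz.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    std::atomic<int>  front_surface{0};   // last complete frame published by the dGPU
    std::atomic<bool> running{true};

    // dGPU: finishes a new frame every 25ms (~40 fps) and publishes it.
    std::thread dgpu([&] {
        for (int frame = 1; frame <= 40 && running; ++frame) {
            std::this_thread::sleep_for(std::chrono::milliseconds(25));
            front_surface.store(frame, std::memory_order_release);
        }
        running = false;
    });

    // mGPU: refreshes every ~16.7ms regardless of the render rate, always
    // showing the most recent complete frame -- some frames appear twice when
    // the dGPU falls behind 60 fps, but the refresh itself never stalls.
    int refresh = 0;
    while (running) {
        std::this_thread::sleep_for(std::chrono::microseconds(16667));
        std::printf("refresh %3d shows frame %d\n",
                    ++refresh, front_surface.load(std::memory_order_acquire));
    }

    dgpu.join();
    return 0;
}
```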
Here's an idea for an X2 (single card, two GPUs) card:
why have two full GPUs? Why not have just one, and have the other chip contain what in the old days would have been called pipelines (shaders, ROPs, etc.)?
How would that help? It'll just be slower than a single chip. Sort of like anti-SLI.
Also, what exactly would be left for your "GPU" to do if the second chip has the "shaders, ROPs, etc."?