AMD: Southern Islands (7*** series) Speculation/ Rumour Thread

It's nothing new that SLI is superior to CF in terms of user experience. With AFR, fps are not telling the whole story. You can read this over and over again in multiple reviews, from multiple user reports. AMD should finally work this out with CF, but I wouldn't hold my breath.
 
Every bit of "extra smoothness" nVidia has increases input lag in the process
 
I was under the impression that you could choose to "smooth" AFR in Nvidia's control panel but at the expense of a framerate hit?
 
Every bit of "extra smoothness" nVidia has increases input lag in the process

That may be, but realistically it is not noticeable. I've been running SLI for a good 18 months now and the only thing that really adds noticeable input lag is vsync.

@jimbo75:

It is not known what this option does, as it is fairly new. Afaik the frame metering is done automatically in software (Fermi and earlier) and in software+hardware (Kepler).
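The point that fps alone hide microstutter can be put in numbers: two frame-time sequences can have exactly the same average fps but very different frame-to-frame variance. A minimal Python sketch (hypothetical frame times, not taken from any review):

```python
import statistics

def fps_and_jitter(frame_times_ms):
    """Return (average fps, frame-time standard deviation in ms)."""
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    return 1000.0 / avg_ms, statistics.pstdev(frame_times_ms)

smooth = [25.0] * 8         # evenly paced frames
stutter = [10.0, 40.0] * 4  # AFR-style alternating short/long frames

print(fps_and_jitter(smooth))   # (40.0, 0.0)  - 40 fps, no jitter
print(fps_and_jitter(stutter))  # (40.0, 15.0) - same 40 fps, 15 ms of jitter
```

Both sequences report "40 fps", but only the second one microstutters - which is exactly why an fps counter cannot tell the whole story.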
 
Although the review did show the 7950 winning at higher frame rates, nothing in the review showed AMD in a light where it needed to be at a certain frame rate. At least no specific game was pointed out as causing issues. Also, the 7950 is similarly priced to, or in some cases lower than, what it was benched against. So you have 7950s offering higher frame rates and sold at a lower price point.
 

Yes but [H] is sponsored by Galaxy and that supersedes everything else including reviewer impartiality.

@ Boxleitnerb - this is a year-old but good article on both, and it touches on Nvidia's SLI optimisations - http://www.tomshardware.com/reviews/radeon-geforce-stutter-crossfire,2995.html

We've run a number of other benchmarks and need to add, for the sake of fairness, that Nvidia's advantage is only evident if its driver is optimized for smooth frame rates rather than raw performance. The company takes certain apps, like synthetic benchmarks and commonly-tested games, and tweaks them to yield higher numbers at the expense of consistency.

I'm actually surprised that I didn't hear a lot more about that.

Overall, SLI had a slight advantage with two cards, but 3-way Crossfire appeared to almost eliminate microstutter completely.
 

This article was poorly done and is very misleading. The 3-way Crossfire is clearly scaling badly which indicates a CPU bottleneck. When the cards are not fully loaded, it is completely normal that frametime consistency improves because then the CPU is the "clock generator" when it comes to releasing the frames at certain intervals, so to speak. Tom's completely failed to see this and drew the wrong conclusion. They also didn't present any more data, which I find very suspicious.

Example:
In Skyrim I had microstuttering with SGSSAA when my fps dropped to about 40-45. At that time the uGridsToLoad tweak, which is very heavy on the CPU, became popular and I tried it out. fps dropped below that previous threshold, as did GPU usage; CPU load increased, and the microstutter disappeared completely. I could have had the same result by adding a third (and fourth) card.
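The mechanism described here - a CPU bottleneck evening out frame delivery - can be illustrated with a toy AFR simulation (hypothetical timings in Python; a deliberately simplified model, not how any real driver works):

```python
def afr_present_deltas(submit_interval_ms, gpu_render_ms, frames=12, gpus=2):
    """Toy AFR model: frame i goes to GPU i % gpus; a GPU starts a frame
    once the CPU has submitted it AND the GPU has finished its last frame.
    Returns the gaps between consecutive presented frames."""
    gpu_free = [0.0] * gpus
    presents = []
    for i in range(frames):
        submit = i * submit_interval_ms
        start = max(submit, gpu_free[i % gpus])
        done = start + gpu_render_ms
        gpu_free[i % gpus] = done
        presents.append(done)
    return [b - a for a, b in zip(presents, presents[1:])]

# GPU-bound: CPU submits every 5 ms, each GPU needs 30 ms per frame
print(afr_present_deltas(submit_interval_ms=5, gpu_render_ms=30))
# -> alternating 5/25 ms gaps: classic microstutter

# CPU-bound: CPU submits only every 20 ms, slower than GPU throughput
print(afr_present_deltas(submit_interval_ms=20, gpu_render_ms=30))
# -> even 20 ms gaps: the CPU acts as the "clock generator"
```

In the GPU-bound case the two GPUs finish their queued frames in bursts; once the CPU becomes the limiter, presents simply follow the CPU's even submission cadence - the same effect as downclocking the CPU or enabling a CPU-heavy tweak.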
 
I'm not really seeing this bad scaling in most of those benchmarks - nothing more than what I'd expect from AMD's drivers on Nvidia titles anyway (Metro and Mafia being the worst). Even then, the author points out that microstutter is lowest on the 3-way CrossFire.

Three-way CrossFire is very compelling in most games and benchmarks. However, driver issues reduce its performance to the level of a dual-GPU setup in a few applications. In those cases, the only remaining benefit of three-way CrossFire is better frame rate consistency in each of the tests we ran (admittedly, no small gain)
 
I did not talk about the review as a whole but about the section where the Tri-Fire is discussed in regards to microstutter:
http://www.tomshardware.com/reviews/radeon-geforce-stutter-crossfire,2995-6.html

They only show Call of Juarez where 2 GPUs provide about 140fps and 3 GPUs about 160fps. This is clearly very bad scaling, most likely due to a CPU bottleneck. No other diagrams or data are presented on this matter except this one game.
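How bad that scaling is can be quantified with a quick calculation on the figures quoted above (a rough sketch; the fps numbers are the approximate ones from the post):

```python
def scaling_efficiency(fps_before, fps_after, gpus_before, gpus_after):
    """Fraction of the ideal speedup actually achieved when adding GPUs."""
    actual = fps_after / fps_before - 1.0
    ideal = gpus_after / gpus_before - 1.0
    return actual / ideal

# 2 GPUs ~140 fps -> 3 GPUs ~160 fps (Call of Juarez numbers above)
eff = scaling_efficiency(140, 160, 2, 3)
print(f"{eff:.0%} of the ideal 50% speedup")  # ~29%
```

Roughly 29% of the ideal third-card speedup is realized - consistent with a bottleneck elsewhere (e.g. the CPU) rather than with the GPUs doing useful extra work.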

The author confuses cause and effect. The reduced microstutter is not the cause of the reduced performance/scaling; rather, reduced scaling due to a CPU bottleneck is the cause of the reduced microstutter. I have done some experiments on this by downclocking my CPU significantly, which led to massively reduced microstutter in several games.
 
They didn't show the stutter graphs for each game, however they have commented on them below the fps graphs.

Metro -
Neither Nvidia nor AMD manage to avoid micro-stuttering in multi-GPU mode. Only the three-way CrossFire setup is more or less OK.
AvP -
It takes a three-way or four-way CrossFire setup to approach the quality of Nvidia's SLI output.
Call of Juarez -
Apart from the three-way and four-way CrossFire setups, which do provide solid performance and low levels of micro-stuttering, AMD's dual-GPU CrossFire implementation isn't impressing us. Nvidia emerges victorious here.
Mafia II -
The three-way CrossFire setup is the overall winner if you want to keep stuttering to a minimum.

The poor scaling on the 3rd card is more likely explained as... poor scaling on the 3rd card. We've been seeing this from both companies for years. In the Tom's article the 3-way CrossFire 6870s consistently score lower fps than the 580 SLI (as you would expect), and consistently get noted as having lower microstutter - or it's at least 2 wins and 2 draws.
 
Every bit of "extra smoothness" nVidia has increases input lag in the process

Which lag?

If you are talking about frame metering and purposely introduced averaging, then 20ms-20ms-20ms will feel smoother compared to 10ms-40ms-10ms.

If you are talking about input lag - input has nothing to do with rendering, and perceived input lag will again be more consistent in the first case; the second scenario will feel jerkier, not more responsive. Unless you magically choose to move your mouse inside that 10ms period.
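The trade-off under discussion - evening out frame presentation at the cost of holding some frames back - can be sketched with a toy pacing model (hypothetical numbers in Python; a simplified illustration, not NVIDIA's actual frame-metering algorithm):

```python
def meter_frames(ready_ms, target_interval_ms):
    """Delay frames so consecutive presents are at least target_interval apart.
    Returns (present_times, per-frame added latency)."""
    presents = []
    for t in ready_ms:
        if not presents:
            presents.append(t)
        else:
            presents.append(max(t, presents[-1] + target_interval_ms))
    return presents, [p - t for p, t in zip(presents, ready_ms)]

# frames finish at uneven 10/40 ms gaps; pace them to the 25 ms average
ready = [0, 10, 50, 60, 100]
presents, lag = meter_frames(ready, target_interval_ms=25)
print([b - a for a, b in zip(presents, presents[1:])])  # [25, 25, 25, 25]
print(lag)                                              # [0, 15, 0, 15, 0]
```

The presented intervals become perfectly even, but every "early" frame is held back up to 15 ms - smoothness is bought with exactly the kind of added latency both sides of this argument are describing.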
 
I don't really know if this is the place to post this, but it looks like HSA has gained another valuable member, and it will surely touch future GPU acceleration.

http://www.pgroup.com/about/news.htm#54

The Portland Group® (PGI), a wholly-owned subsidiary of STMicroelectronics and the leading independent supplier of compilers and tools for high-performance computing, today announced that PGI Accelerator™ Fortran, C and C++ compilers will soon target the AMD line of accelerated processing units (APUs) as well as the AMD line of discrete GPU (Graphics Processing Unit) accelerators. PGI will work closely with AMD to extend its PGI Accelerator directive-based compilers to generate code directly for AMD GPU accelerators, and to generate heterogeneous x64+GPU executable files that automatically use both the CPU and GPU compute capabilities of AMD APUs.
"The PGI Accelerator compilers will open up programming of AMD APUs and GPUs to the growing number of HPC developers using directives to accelerate science and engineering applications," said Douglas Miles, director, The Portland Group. "Together with AMD, we are working to make heterogeneous programming easily accessible to mainstream C and Fortran developers, and to unleash the power of these devices."
"We look forward to working with PGI to ensure that through the use of standard compiler directives the full computational power of AMD platforms with integrated APUs can be easily tapped," said Terri Hall, Corporate VP, Business Alliances, AMD. "Engagements like this are key to expanding the developer ecosystem and the opportunities for AMD platforms."

(Indeed, probably not the right place to put this... move or delete it if needed.)
 
The author confuses cause and effect. The reduced microstutter is not the cause of the reduced performance/scaling; rather, reduced scaling due to a CPU bottleneck is the cause of the reduced microstutter. I have done some experiments on this by downclocking my CPU significantly, which led to massively reduced microstutter in several games.
I've noticed that it can be induced by changing the tRAS (Active to Precharge Delay) from its predefined value, or the Command Rate from 2T to 1T. So it's not just the CPU, from what I've experienced.
 
They didn't show the stutter graphs for each game, however they have commented on them below the fps graphs.

Without tangible data, these observations are not really insightful - I don't take their word for it. They should have posted detailed frametime graphs to back up their claims.

At least SLI is scaling very well with 3 GPUs - IF you apply properly demanding settings. If you only test with MSAA or, worse, FXAA, don't expect miracles, due to a CPU bottleneck in at least part of the benchmark scene. This is something that many if not all reviewers are doing wrong when benching SLI/CF.
 
You're right that showing the graphs would be nice, but they'd probably be a real mess or take up far too much space. If you look at the Call of Juarez microstutter graph and then imagine having another 16 cards in there, it would be unreadable.

I'm sure they do actually have the graphs for them all but they just couldn't show it in the article.
 
The author confuses cause and effect. The reduced microstutter is not the cause of the reduced performance/scaling; rather, reduced scaling due to a CPU bottleneck is the cause of the reduced microstutter.
Exactly what I thought. If something is CPU-limited, the microstutter necessarily disappears (as the GPUs are waiting for the game engine, the updates are evenly distributed over time), or one has a badly programmed engine that also produces stuttering on a single GPU.
 
True, but [H] has always made it their task to actually play the games they benchmark (their playable setting approach). They also have a long history of testing this way and of testing SLI and CF.

I don't trust someone who makes such basic mistakes as THG did, as Gipsel correctly noted.
 
The game is not CPU-limited though.

[Image: 06b_coj_enthusiast.png - Call of Juarez "Enthusiast" benchmark chart from the Tom's Hardware article]
 