AMD: Southern Islands (7*** series) Speculation/ Rumour Thread

psolord is doing vsync'd gameplay.

Frankly, that's how it should be done if you aren't measuring your epenis.
What is this nonsense? Vsync causes HUGE input lag problems. I can't play any fast-paced games with vsync; the experience is just horrible, it takes all the joy away.
 
psolord is doing vsync'd gameplay.

Frankly, that's how it should be done if you aren't measuring your epenis.

The very reason I have used Crossfire for years is to get a consistent frame rate using vsync. It's so nice to have the framerate match the refresh rate, nice and smooth.

On a side note: all this outrage, just when AMD is bringing out the 7990. Coincidence?
 
It could be related to the 7990, slightly, although I think it'd be happening regardless.
Nvidia made a concerted push to generate critical mass in the tech media by seeding the formerly internal tools, rather specific and pricey hardware, and the time and help needed to set things up and interpret results at multiple sites.
 
The cards are doing all the work. It's just that for runt frames the user doesn't see all of it.
They're still real frames that the system has simulated and submitted, but the final IO and display mechanisms operate on their own timing.

If there were a mechanism for the graphics subsystem to know that X% of a frame has become effectively pointless to the outside world, it could save itself a bit of effort and potentially increase performance even further.

They are hardly real frames. They don't even take a millisecond to render. They're just a blip on the screen and look like an artifact. What they do is double the frame rate when inserted between every other frame. For example, two normal frames take 15 milliseconds each, so 30ms. If one of the frames is regular and one is a runt that takes 0.3ms and is still counted by Fraps, you're basically getting two frames counted in a 15.3ms window, thus doubling the frame rate. The truth of the matter is you're actually waiting 15.3ms between real frames, so it takes 30.6ms to render two frames.
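To make the arithmetic above concrete, here's a rough sketch (mine, not from any of the reviews) of how counting a 0.3ms runt next to every 15ms frame doubles the reported average, and what folding runts back into the preceding frame does to it. The 20% cutoff is an arbitrary illustrative threshold, not any reviewer's exact rule.

```python
frame_times_ms = [15.0, 0.3] * 30   # alternating full frame / runt

def avg_fps(times_ms):
    return 1000.0 * len(times_ms) / sum(times_ms)

def fold_runts(times_ms, refresh_ms=16.7, cutoff=0.2):
    # Fold each runt's sliver of time into the preceding full frame
    # instead of counting it as a frame of its own.
    kept = []
    for t in times_ms:
        if t < cutoff * refresh_ms and kept:
            kept[-1] += t
        else:
            kept.append(t)
    return kept

print(round(avg_fps(frame_times_ms), 1))              # ~130.7 fps, what Fraps reports
print(round(avg_fps(fold_runts(frame_times_ms)), 1))  # ~65.4 fps of frames you can actually see
```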
 
Sinistar said:
all this outrage
What outrage? All I have seen is a few very trustworthy websites reporting on the matter in a reasonable manner.

Unless you are referring to misguided "fans." But there will always be such people...

A performance issue was uncovered with CF. AMD tells Anandtech they are now aware of the issue and are working on a solution. There isn't anything to be outraged about...
 
They are hardly real frames. They don't even take a millisecond to render.
A runt is a frame that has been fully rendered, but it doesn't have enough time to scan out before it is replaced by the next one in the output.

Timing the output of the pipeline does not give an accurate depiction because it overlaps the render times of multiple frames.
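In the spirit of the capture-based testing being discussed, here's a hypothetical sketch of how a runt gets classified: count how many scanlines of the captured output each frame actually occupied before the next flip landed, and flag anything under a threshold. The 21-scanline threshold and the numbers below are made up for illustration, not the exact values any site uses.

```python
def classify_frames(scanlines_per_frame, runt_threshold=21):
    # A frame that owned only a handful of scanlines before being replaced is a runt.
    return ["runt" if n < runt_threshold else "full" for n in scanlines_per_frame]

print(classify_frames([972, 18, 950, 20, 1080]))
# ['full', 'runt', 'full', 'runt', 'full']
```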
 
Not for benchmark testing.
Hence my statement ;)

There's nothing wrong with epenis measurement. I'm just saying that if you have the horsepower for 60+fps, it's silly to turn off v-sync.

What is this nonsense? Vsync causes HUGE input lag problems.
An average of 8ms is huge? I don't think so. Tearing is what really takes all the joy away.

For example, two normal frames take 15 milliseconds each, so 30ms. If one of the frames is regular and one is a runt that takes 0.3ms and is still counted by Fraps, you're basically getting two frames counted in a 15.3ms window, thus doubling the frame rate. The truth of the matter is you're actually waiting 15.3ms between real frames, so it takes 30.6ms to render two frames.
I'm so glad that psolord mentioned vsync, because you and probably others are woefully misinformed. On a 60Hz monitor, not a single pixel on the screen is updated more often than every 16.7ms.

Doesn't matter if you have 15ms/0.3ms ping-ponging or a steady 7.65ms. What you will get with the former is a single frame from the game updating 90% of the pixels on the screen (the other frame just a few pixels), and with the latter you get one frame updating 46% of the pixels and the next a different 46%. In fact, since the tears come in pairs with the former, the object you're looking at will be interrupted by tearing half as frequently, so there's even a case to be made that it's a good thing.

I'm really curious how these results change with v-sync on for 120Hz monitors...
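A back-of-the-envelope check of those percentages, as a simple model: with vsync off, a frame stays on the output until the next flip, so in steady state it covers roughly (its frame time / 16.7ms) of the pixels in a refresh. This ignores blanking and assumes a plain top-to-bottom scanout.

```python
REFRESH_MS = 16.7

def screen_share(frame_times_ms):
    # Fraction of a refresh's pixels each frame ends up owning.
    return [round(min(t / REFRESH_MS, 1.0), 2) for t in frame_times_ms]

print(screen_share([15.0, 0.3]))   # [0.9, 0.02]  -> one big slice, one sliver
print(screen_share([7.65, 7.65]))  # [0.46, 0.46] -> two comparable slices
```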

-----------------------

So I looked at TR's Skyrim graphs again, and if you look at time spent beyond 16.7 ms, it shows the supposedly stuttering AMD output spends the least amount of time leaving a pixel unchanged for two screen cycles.

Looking at PCPer's Dirt3 results:
http://www.pcper.com/reviews/Graphi...ils-Capture-based-Graphics-Performance-Test-6

I wonder if there is some sort of decision making going on where if the framerate is fast enough, AMD just decides to basically skip outputting a frame because the next one is ready. Maybe that is misbehaving in some situations.
 

I am not talking about vsync, never was. I am talking about crossfire performance being used as a marketing tool in reviews.

When have you ever seen a review use vsync results? They never do because it makes the cards all perform the same if they have more than enough performance.

Vsync is turned off in reviews to gauge which card is the fastest. And turning on vsync is only a temporary fix.

I can't believe you're trying to turn runts into a good thing. A consistent 7.65ms frame time would be awesome; a 15ms frame time followed by a 0.3ms runt isn't the same thing. The 0.3ms frame is just a blip on the screen and doesn't add anything positive to the experience, because it's only a very slight partial render that does nothing to make the game smoother or actually increase the fps.

8ms is huge when you're talking about the difference between an average 15.3ms frame time and a 7.65ms frame time. Split evenly, that's the difference between 65fps (1000/15.3) and 130fps (1000/7.65). When you're thinking of running a dual-card setup and you're looking at reviews, that 130fps is going to make you want to upgrade, not the 65fps, especially when the single-card frame rate is 60fps.

If the actual number of full frames delivered is only 10 percent more, rather than the 100 percent more that some CF reviews show, don't you think you would be likely to skip that second card?

That is the big issue here. A lot of second cards have likely been sold because these runt frames are inflating CF's average.

Answer this question: do you think Crossfire performance in reviews has made a lot of 7970 owners buy another card? If the answer is yes, this is what the issue is. People are not getting double the frame rate, only something slightly better, with runt frames making it look like the cards are getting 100 percent scaling.

[attached chart: skyrimfpsfiltered.gif]


[attached chart: bf3fps.gif]


Don't you think the difference between 45.7% and 98.6% Crossfire scaling, or between 10.5% and 87.5%, might have swung a few more people into buying another card for Crossfire? That's what removing the runt frames does to the average FPS. This is why it is such a big issue.

Think of the people who bought something like a 7990 or an Ares II. They paid the price because they thought they were getting the fastest card there is. Once you remove the runt frames, that's no longer nearly the case, and they simply got ripped off.
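Putting the scaling argument into numbers: "scaling" here is just how much faster the dual-card average FPS is than the single card's. The FPS figures below are placeholders chosen to land near the percentages quoted above, not the actual review data.

```python
def scaling_pct(single_fps, dual_fps):
    return round(100.0 * (dual_fps - single_fps) / single_fps, 1)

single = 60.0
dual_all_frames   = 119.2   # every submitted frame counted, runts included
dual_runts_folded = 87.4    # runts filtered out before averaging

print(scaling_pct(single, dual_all_frames))    # ~98.7% -> looks like near-perfect scaling
print(scaling_pct(single, dual_runts_folded))  # ~45.7% -> the scaling you actually perceive
```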
 
I can't believe you're trying to turn runts into a good thing. A consistent 7.65ms frame time would be awesome; a 15ms frame time followed by a 0.3ms runt isn't the same thing. The 0.3ms frame is just a blip on the screen and doesn't add anything positive to the experience, because it's only a very slight partial render that does nothing to make the game smoother or actually increase the fps.
Except what is replacing it is something closer to the input you asked for. Is it a better alternative to display all or most of that frame and delay the display of the next frame?
 
Dave Baumann said:
Is it a better alternative to display all or most of that frame and delay the display of the next frame?
If it wasn't, then this conversation wouldn't exist, because no one would ever have noticed anything wrong.
 
Yes, I play competitive FPS... or I once did anyway. But I don't use CF or SLI, so it really doesn't affect me.

Dave Baumann said:
So you're saying you purposely want to induce delays?
If the choice is an 8ms delay or stuttering, absolutely I would choose the 8ms delay.
 
Personally I always game with v-sync on, even if it means 30fps. I hate tearing and choppy performance, but I understand that for those who game competitively it's just not an option. For me, if Crossfire got me a constant 60FPS instead of 30-60FPS, then it would be worth it. That's just how I like to game; tearing is unacceptable.

AMD does need to come up with a frame metering system for CF somewhat like Nvidia's, however. It is wrong to think that the runt frames are a way of cheating, as they are fully rendered frames, so the CF scaling is not a lie; what you see is the lie. This is an issue with traditional benchmarking methods, not CF scaling. Regardless, these frames are worse than useless right now, so AMD needs to do something about that. For me, once again, I'd just be happy to get 60FPS v-synced, like I was with my 4870X2 in everything except Crysis, which was 30-15fps :(

I really can't wait until a standard that does away with refresh rates is a reality in the mainstream.
 
In a perfect world, the rendering time should be roughly the same on both cards. In practice it's not, for whatever reasons, probably good ones.
We know that it didn't take 0.3ms to render the second frame, so this seems to be a case where one GPU gets an unfair weight in scanning out the final render. That's what needs to be fixed.
If you want to even things out, then the introduced additional latency should be no higher than the difference between the max and the min render (not scan-out) time on either GPU. That should be way, way less than 8ms.
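As a minimal sketch of the mechanics of that bound (not AMD's or Nvidia's actual algorithm), the idea would be: push presents toward an even cadence, but cap the added delay at the spread between the slowest and fastest render times. All timestamps and the helper name are invented for illustration; it only shows how small the cap stays when render times are well matched.

```python
def meter_presents(done_ms, render_ms):
    """done_ms: when each AFR frame finished rendering; render_ms: how long each took."""
    cap = max(render_ms) - min(render_ms)                      # max extra latency allowed
    cadence = (done_ms[-1] - done_ms[0]) / (len(done_ms) - 1)  # target spacing between presents
    presents = [done_ms[0]]
    for t in done_ms[1:]:
        ideal = presents[-1] + cadence
        presents.append(min(max(t, ideal), t + cap))  # never early, never more than `cap` late
    return presents

# Frames from the two GPUs finish in bunched pairs even though each took ~15 ms:
done   = [0.0, 15.0, 15.3, 30.3, 30.6, 45.6]
render = [15.0, 14.7, 15.2, 14.9, 15.1, 14.8]
print(meter_presents(done, render))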
 
I wonder what we would see if reviewers used monitors with refresh rates greater than 60Hz. I suspect Nvidia's exploit of delaying frames to fake smoothness would backfire in the form of noticeable input lag. Maybe it's time reviewers dusted off those old CRTs or upgraded to 120Hz monitors.
 
I wonder what we would see if reviewers used monitors with refresh rates greater than 60Hz. I suspect Nvidia's exploit of delaying frames to fake smoothness would backfire in the form of noticeable input lag.
Why would it? If implemented right it would certainly take the refresh rate into account.
 
Why would it? If implemented right it would certainly take the refresh rate into account.
So your solution is to further artificially manipulate what the user actually sees? No thanks. I'd rather be given the tools to choose and leave it up to me to decide how I want my hardware to run my games.

Nvidia is getting rather good (and stupid) at forcibly taking features away: overclocking, voltage tweaking, frame delay manipulation, etc. It worries me that, from the looks of it, the only angle they have left is to instigate a witch hunt based on looking at things below 30 milliseconds, far beyond what a normal human can notice. I guess it's typical Nvidia behavior, so I shouldn't be surprised.
 
So your solution is to further artificially manipulate what the user actually sees? No thanks. I'd rather be given the tools to choose and leave it up to me to decide how I want my hardware to run my games.

Nvidia is getting rather good (and stupid) at forcibly taking features away: ..., frame delay manipulation, etc. It worries me that, from the looks of it, the only angle they have left is to instigate a witch hunt based on looking at things below 30 milliseconds, far beyond what a normal human can notice. I guess it's typical Nvidia behavior, so I shouldn't be surprised.
What are you talking about? The goal should be to make sure there is as good a correlation as possible between FRAPS and the actual scan-out, because games use the FRAPS timestamp in their simulation engines. For AMD CF there is no such correlation; for Nvidia it's close. Is it at the expense of a few ms of extra latency? Probably. Is it on the order of 16ms (one frame)? Very unlikely.

Of course, if you don't think people can notice anything below 30ms, then it really doesn't matter what they do. But then I'm wondering why you have a problem with smoothing out the frames in the first place.

I don't see a witch hunt. I see data that records reality. If you interpret this as no big deal, then great. Good for you! The thing is: AMD themselves admit that their current solution is far from perfect.
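A rough illustration of the correlation point, with invented timestamps: the game advances its simulation by the interval between presents (what Fraps sees), but what you watch is the interval between scanouts. Where the two sequences diverge, on-screen motion doesn't match the simulated motion.

```python
present_ms = [0.0, 7.6, 15.3, 22.9, 30.6]    # evenly spaced, looks great in Fraps
scanout_ms = [0.0, 15.0, 15.3, 30.3, 30.6]   # bunched pairs, what the display shows

present_dt = [b - a for a, b in zip(present_ms, present_ms[1:])]
scanout_dt = [b - a for a, b in zip(scanout_ms, scanout_ms[1:])]

# Per-frame gap between the motion step the game simulated and the step you see.
mismatch = [round(abs(p - s), 1) for p, s in zip(present_dt, scanout_dt)]
print(mismatch)   # [7.4, 7.4, 7.4, 7.4] -> large, recurring divergence
```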
 
I am just wondering: is there any way to explain the beyond-100-percent Crossfire scaling we sometimes see in reviews? Runt frames being counted as frames would actually explain this, and it reflects badly on AMD. There are a couple of reviews coming out of the woodwork from TechReport and PCPer that highlight the flaws in Crossfire technology, and Anandtech is also starting an article.

So Malta is unreleasable until they fix this frame-time/runt issue.

It's almost certain that any Crossfire product will undergo this type of testing in the future. If Malta were released tomorrow and Anandtech did a review, for example, it would attract more bad publicity by highlighting the flaw of Crossfire: that in some games (not because of Crossfire scaling directly), Crossfire isn't particularly faster than a single card in terms of gameplay experience.


Yes, well that's nice and informative and all, but still a far cry from what I've seen with my very own eyes and brain.

Let me give an example of what I see, so maybe you and others can see it.

I made two videos of the scene in the Only Human level, where Prophet rides the VTOL. I chose this scene because it's automated up to the point I recorded, so it would be the same for both recordings. Also, there's a lot of camera panning, which is the worst kind of motion for the viewer if there's stuttering.


Here's the video of the 7950 Crossfire setup.
This is how I play and this is what I see: 60fps, vsynced. Actually it's better in real life, because Crysis 3 is very CPU heavy and Fraps needs a lot of CPU as well, but it's still good enough.

This is the video of the single 7950 at the same settings. I only disabled vsync for this one, otherwise I would be getting a steady 30fps, thus artificially making my point even stronger on a difference that is already huge.

So after you've watched these two very real life examples, can you honestly tell me, that a single card is better than two cards?

I don't know about you, but what I see here is two cards running the game at a perfectly smooth 60fps with some spare cycles left over, while the single card is struggling to hit 40fps at full load (which means higher fan speed, thus more noise; staying below full load is another benefit of dual GPU).

I don't see AMD cheating me in any way here. I'm facing a situation where a single card can only provide 40fps at best, while two cards can provide a rock-solid 60fps and still have some GPU power to spare.

Call me crazy, but this is what dual GPU is all about: effectively almost doubling your pool of processing power, so you can use it for a smoother framerate and an overall better gaming experience. I'd suggest that people who are quick to dismiss dual-GPU systems actually sit in front of one and do some real-life testing. I have, for more than 5 years now (Voodoo2 SLI doesn't count, I guess), and I have only faced situations like the one described above.

I'm not defending AMD here, but dual-GPU solutions in general. Both my secondary system with 570 SLI and my tertiary system with 5850 CFX still exhibit the same behavior, albeit at lower settings.

Actually my older 5850 CFX system would be far more suitable for comparisons, since it's much more starved for GPU power and the second card can do wonders, but it lacks the CPU power to provide good Fraps recordings.
 